A QUESTION OF AI ETHICS


07 October 2019 Herpreet Kaur Grewal


Are companies considering ethics enough when it comes to implementing artificial intelligence (AI) and Internet of Things (IoT) technology in the workplace?


At a workplace conference last year, Neil Steele of Asure Software stressed that sensor technology would only become more prevalent. But he added that while “sensor-based data collection” was “in and accepted” – the worker’s “perception [of it] is crucial to its successful adoption in the workplace”.

A recent report claimed that while companies are preparing for greater use of AI and IoT, discussion of the ethics of using such technologies is lagging behind.

The report, from call centre tech firm Genesys, found that 64 per cent of the firms it surveyed expect to be using AI or advanced automation by 2022 to support efficiency in operations, staffing, budgeting or performance. Yet 54 per cent of those employers said they were not troubled by the prospect of AI being used unethically by their companies.

Against this backdrop comes growing unease about the potential for errors in data to produce algorithms that “generate sexist or racist outcomes” (as described in a recent Forbes article) and introduce biases. AI use, we are warned, needs to tie in with a company’s corporate social responsibilities and with how it cultivates trust and transparency. So we asked: has your organisation considered ethics when looking to enact AI and IoT policies?


Removing human error

AI is increasingly being adopted by the FM industry as it can significantly improve efficiencies, save businesses money and free up employee time for more high-value work rather than dull, repetitive tasks. We’re seeing some job roles change with the introduction of AI – for the better. Human error is the weakest link in many business processes, but AI removes this risk and provides businesses with the up-to-date, relevant information they need, when they need it. One of the key benefits of AI is that it removes any subjective bias in decision-making – it’s purely data-driven rather than being swayed by opinion and experiences. 

There needs to be a level of human oversight. AI should not be used to make the final decision, but it can highlight the relevant information quickly. Our platform Broadstone uses AI to identify the best candidates for a role quickly, whittling down a list of, say, 3,000 to 10, which is a much more manageable number for the recruitment manager to review. This also helps the workers registered with us by highlighting the roles where they are most likely to be successful.
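
As an illustration of the human-in-the-loop shortlisting described above, the sketch below scores a large candidate pool and hands back only a short list for a recruitment manager to review. It is a minimal sketch, not Broadstone’s actual system: the fields, weights and cut-off of 10 are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    skill_match: float    # 0-1 overlap with the role's required skills
    availability: float   # 0-1 fraction of the shift pattern they can cover
    distance_km: float    # travel distance to the site

def score(c: Candidate) -> float:
    """Combine objective signals into a single ranking score.
    The weights are illustrative assumptions, not a recommended model."""
    return 0.6 * c.skill_match + 0.3 * c.availability - 0.1 * min(c.distance_km / 50, 1.0)

def shortlist(pool: list[Candidate], n: int = 10) -> list[Candidate]:
    """Whittle a large pool down to n candidates; the hiring decision
    itself stays with the recruitment manager."""
    return sorted(pool, key=score, reverse=True)[:n]

# Usage: rank, say, 3,000 applicants and review only the top 10 by hand.
# top_ten = shortlist(all_applicants, n=10)
```

The point of the cut-off is that the model only filters; a person still reviews every candidate it puts forward.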


Tom Pickersgill, co-founder and CEO of Broadstone


Consult your employees

Discussions about artificial intelligence and automation taking over workers’ jobs are highly topical, but despite huge technological advancements, full process automation is not a realistic goal for the immediate future. The objective should be to apply technology in collaboration with the workforce to augment their experience and deliver valuable efficiencies – not replace them entirely.

New technology can provoke feelings of uncertainty for employees, but instead of starting with the technology and working backwards, prioritising the workforce and consulting them on how innovation can augment their roles means they can be brought into the process from the start. This is particularly important for peak periods, where AI and automation can help to streamline picking and packing within the warehouse environment, ensuring that workers perform their roles safely, efficiently and accurately.

As the warehouse environment changes and companies look to automate simple, repetitive tasks, the role of the human workforce will evolve. Real-time information delivered via an ecosystem of wearable devices empowers workers to make decisions and apply essential human intelligence in responding to the situation. The result is not only better human-to-human collaboration, but also human-to-machine collaboration – and a workforce that is fulfilling its potential.


Axel Schmidt, senior communications manager at ProGlove


Potential for misuse is high

Ethical application of AI is even more important in the workplace than in the consumer space. An employer has immense power over its employees, meaning that practices such as opt-in and informed consent are much harder to implement and follow. Analysing sensor data in these settings, while powerful, requires even more thought to do well.

Companies need to clearly articulate why they’re collecting data, what data they’re collecting and who has access to it. If these questions can’t be easily answered, it’s probably premature to start with a large-scale deployment.
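
One lightweight way to force those answers before deployment is to write them down in a machine-readable manifest; if a field cannot be filled in, the programme is not ready. The structure below is a hypothetical illustration, not an established standard.

```python
# Hypothetical data-collection manifest: if any field is hard to complete,
# that is a signal a large-scale deployment is premature.
SENSOR_PROGRAMME = {
    "purpose": "measure meeting-room utilisation to plan refurbishment",
    "data_collected": ["room occupancy count", "ambient temperature"],
    "data_not_collected": ["identity of occupants", "audio", "video"],
    "access": ["facilities team", "landlord (aggregates only)"],
    "retention": "aggregates kept 12 months; raw counts deleted after 30 days",
    "employee_communication": "notice in each room plus an intranet FAQ",
}
```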

Typically, data does not need to be stored long term at the individual level. Trends and distributions are what matter for predicting what behaviours and workplace changes lead to better outcomes. While individual data can be used to give personalised feedback, there are only a small number of legitimate business applications. The potential for misuse is high.
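
A minimal sketch of that aggregate-first approach, assuming a simple reading format: individual records are rolled up into team-level statistics and then discarded, so only trends and distributions persist. The field names are assumptions for illustration.

```python
from collections import defaultdict
from statistics import mean, median

def aggregate_daily(readings: list[dict]) -> dict:
    """Roll raw sensor readings up to team-level statistics.

    Each reading is assumed to look like:
        {"person_id": "...", "team": "sales", "desk_minutes": 312}
    Only the aggregate leaves this function; person_id is never stored.
    """
    by_team = defaultdict(list)
    for r in readings:
        by_team[r["team"]].append(r["desk_minutes"])
    return {
        team: {"n": len(vals), "mean": mean(vals), "median": median(vals)}
        for team, vals in by_team.items()
    }

# The raw readings can then be deleted; only the distributions are kept
# for comparing workplace changes against outcomes over time.
```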

Companies need to consider not just what is legal, but what is right. Any benefit derived from AI or IoT technologies will be dwarfed by the negative reactions employees will legitimately have if they’re used for suspect ends. If we hope to continue to use this powerful technology to its utmost potential, ethics will be integral to its evolution.


Ben Waber, CEO and co-founder of Humanyze


Benefiting the user

Many of the concerns around AI are based on instances of misuse and poor communication regarding its purpose. There have been cases where sensors were installed in workplaces without employees’ knowledge, creating media firestorms. The issue is not the AI itself, but data privacy, purpose and consent.

While AI data is almost always anonymised, organisations need to be careful how they communicate with staff so that people are aware that they may be monitored, even in an anonymised way. Organisations should be completely transparent about why the technology is being introduced and what it will – and won’t – be used for. 

The key to successful acceptance of AI is demonstrating the user benefit. The phrase ‘workplace occupancy sensors’ does not do justice to what the technology is actually capable of. Traditionally, the technology has been focused on letting real estate managers understand occupancy, hence its name. That focus needs to be realigned towards the office user: reframed as workplace availability sensors, the same devices deliver their primary benefit to the end user. The purpose, then, is to reduce the frustration of finding an empty desk or meeting space, a common complaint in non-allocated desk environments. At the same time, sensors allow organisations to see which spaces are the most – and least – popular, analyse why (typically poor temperature, light, noise or air quality) and improve the space accordingly.
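
To make the availability-sensor framing concrete, the sketch below answers both questions raised above: which desks are free right now for the office user, and which spaces are consistently under-used so the facilities team can investigate the environment. The reading format and thresholds are assumptions for illustration, not any particular vendor’s API.

```python
from datetime import datetime, timedelta

# Assumed reading format from an occupancy/availability sensor feed:
# {"space_id": "desk-4F-12", "occupied": False, "seen_at": datetime(...)}

def free_desks(readings: list[dict], max_age_min: int = 5) -> list[str]:
    """The user-facing benefit: desks reported empty within the last few minutes."""
    cutoff = datetime.now() - timedelta(minutes=max_age_min)
    return [r["space_id"] for r in readings
            if not r["occupied"] and r["seen_at"] >= cutoff]

def underused_spaces(daily_occupancy: dict[str, float], threshold: float = 0.2) -> list[str]:
    """The FM-facing benefit: flag spaces occupied less than 20 per cent of the day
    so the team can check temperature, light, noise or air quality."""
    return [space for space, rate in daily_occupancy.items() if rate < threshold]
```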

By demonstrating how AI technologies can benefit individuals and communicating honestly about why the technology is being used, organisations can overcome concerns about ethics and lay privacy worries to rest.


Raj Krishnamurthy, technology expert and CEO of Workplace Fabric