Global judgements and ideas.
Talk prepared for the Alan Turing Institute, 01/11/18, reflecting the report I am preparing for EU-OSHA*, which looks at emerging risks and benefits for safety and health as AI is integrated into workplaces.
Dr Phoebe V. Moore, Associate Professor of Political Economy and Technology, University of Leicester School of Business
Now, computers can see, hear and speak, via the application of AI processes and systems, and the contribution of AI in many spheres, from disaster relief to medical advancements, is being touted with breathless anticipation in various prestigious camps.
The commissioned report I am now writing for the EU agency on safety and health at work, EU-OSHA, outlines where and how AI is being applied in the workplace and the benefits and risks to occupational safety and health.
AI can be a force for good. Some predictions indicate that, by boosting economic growth, AI should create as many jobs as it eliminates (PwC 2018).
However, many ethical questions are arising, along with significant risks in digitalised workplaces, including psychosocial and physical violence and harassment, which I wrote about earlier this year for the ILO ACTRAV in preparation for a new Labour Standard that looks in part at these issues.
The IOSH Magazine emphasises that AI risks ‘outpacing our safeguards’ (Wustemann 2017).
Stephen Hawking warned that AI could ‘take off on its own and redesign itself at an ever-increasing rate’ (Ibid.).
But to begin my short talk today, I want to emphasise that it is not technology in isolation that creates benefits or risks for workers.
It is instead the implementation of technology which creates negative or positive conditions.
In that light, the report analyses the technology and how it works, but emphasises how AI technologies are being used for workplace design, decision-making and production processes, asking a range of ethical questions that reflect a significantly different future for work.
Today I am going to touch on some of the highlights from the report, indicating where it is happening and what the possible risks and benefits to workers’ safety and health are.
Use of AI in Human Resources and Work Design in Gig Work
‘People analytics’ is defined broadly as the use of individualised data about workers or prospective workers, to make decisions about them.
As companies have collected more and more data about people from various sources in the workplace and outside the workplace over time, people analytics tools can increasingly involve AI processes via machine learning, to identify patterns in data about people and to compare the patterns identified across data silos.
Therefore, computer programmes can effectively make decisions that aid employers to ‘measure, report and understand employee performance, aspects of workforce planning, talent management and operational management’ (Collins, Fineman and Tsuchida 2017) and to take care of what is called the ‘people problem’. People problems are also called ‘people risks’ (Houghton and Green 2018), which a recent CIPD report outlines in seven dimensions as:
Another form of people analytics involves micro-filmed biotracking, where employers interview candidates on camera and AI is used to judge both verbal and nonverbal cues. One such product is made by a company called HireVue and is used by over 600 companies. The aim is to reduce bias that can arise if, for example, an interviewee’s energy levels are low, or the hiring manager has more affinity with the interviewee based on shared demographics such as age or race.
Gig workers. Gig work in taxis, where drivers use the Uber app, and in food delivery, where riders use the Deliveroo app; as well as work on online platforms like Amazon Mechanical Turk, where workers carry out e.g. editing, translation and programming tasks online; is subject to a work design model that involves a high degree of algorithmic decision-making and often no social protections.
If processes of algorithmic decision-making in people analytics do not involve human intervention and ethical consideration, AI-driven human resource tools could expose workers to heightened structural, physical and psychosocial risks and stress, particularly if workers feel that decisions are being made based on numbers over which they have no control. Indeed, there is a growing arena of research demonstrating that discrimination is not eliminated by AI in decision-making; instead, the codification of data perpetuates the problem (Noble 2018).
In a nutshell, AI can only learn from data that is already there. In other words, if bias and discrimination have already happened in a workplace over time (e.g. men are paid more than women, white men have held more management positions over time, disabled applicants are not invited for interview, etc.), data will reveal imbalance and even perpetuate discrimination.
So, training AI on existing data sets will not eliminate the problem of discriminatory decision-making without some precautionary measures (note that IBM has developed a tool recently that may deal with this).
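To make this concrete, here is a minimal, invented illustration (not from the report, and deliberately simplistic): a naive model ‘trained’ on a biased hiring history simply learns the historical disparity and reproduces it in its recommendations.

```python
# Illustrative sketch only: invented historical hiring data in which one
# group was favoured. A naive model trained on these outcomes learns and
# reproduces the past disparity rather than correcting it.
from collections import defaultdict

# (group, hired) pairs reflecting a biased hiring history
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 30 + [("B", 0)] * 70

# "Training": estimate hire probability per group from past outcomes
counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
for group, hired in history:
    counts[group][0] += hired
    counts[group][1] += 1

def predict_hire_probability(group):
    hires, total = counts[group]
    return hires / total

# The model favours group A, not because of merit, but because the
# training data encoded a biased history.
print(predict_hire_probability("A"))  # 0.8
print(predict_hire_probability("B"))  # 0.3
```

Without precautionary measures such as auditing or rebalancing, a system like this simply automates yesterday’s discrimination.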
The World Economic Forum (WEF) Global Future Council on Human Rights reported in 2018 that, even when good data sets are used to set up algorithms for machine learning, there are considerable risks of discrimination if the following occur:
Huws has outlined the risks posed to workers in gig work, and the Oxford Internet Institute is working on a Fairwork Foundation for platform work. In the work design of gig work and in human resources practices, AI accelerates the employment relationship in several ways that merit a close look at the risks, as well as the possible benefits, for workers’ safety and health.
But now I will look at where AI is being used in factories, warehouses and call centres.
Thinking Robots: AI in Industry and Call Centres
AI systems now allow robots in factories and warehouses to do more than they ever have. AI enhanced machines can even detect counterfeit goods. GPS and RFID systems allow wearable devices on a warehouse worker’s arm to direct workers to the right places to stock items on warehouse shelves, collect products and shift them to the right places in supply chains. Now, new production methods and the introduction of a range of new computers and machines introduce opportunities for work enhancement and of course, a range of possible risks.
Lot size manufacturing. One new feature of Industrie 4.0 processes, where AI-enhanced automation is underway, is ‘lot size’ manufacturing.**
The assembly line model, where a worker carries out one specific, repeated task for several hours at a time, has not disappeared, but the lot size method in manufacturing is different.
Within a lot size production line, workers are provided with a visual on-the-spot guide, facilitated by a HoloLens headset and virtual reality frameworks, or by a tablet at the workstation. Workers carry out new tasks learned through this immediate training, performed only for the period of time required to manufacture the specific order the factory receives.
Cobots. Amazon has 100,000 AI-augmented robots, which has shortened training to less than two days. Walmart is using virtual reality technology to improve training, with simulations of customer environments. Airbus and Nissan are using collaborative robots, or cobots, which work beside factory workers. The cobot industry is predicted to grow by USD 1 billion by 2020, and shipments should increase by 4,800 per cent by 2025. These new types of robots are working alongside humans, and sometimes autonomously, in a range of new environments with many possibilities for enhanced production.
Call centres. Chatbots in call centres can deal with a high percentage of basic customer service queries, freeing up humans to deal with more complex questions. Dixons Carphone uses a conversational chatbot named Cami, which can respond to consumer questions on the Currys website and through Facebook Messenger. The software company Nuance launched a chatbot named Nina in 2017 to respond to questions and access documentation. Morgan Stanley has provided 16,000 financial advisers with machine learning algorithms to automate routine tasks. AI and robots are now being used in health care to speed up patient service, monitor employees’ wellbeing and re-organise medical records.
So AI is being used to significantly change the world of work, introducing questions about how workers’ health and safety will be protected as well as enhanced.
If these applications result in workplace restructuring involving job replacement, changed job descriptions and the like, then overwork on the one hand, and reduced paid working time and significant job losses on the other, could follow.
These practices are likely to increase workers’ stress even further if they are used for appraisals and performance management without due diligence in process and implementation.
Lot size methods have led to the deskilling of workers, where skilled labour is only needed to design the on-the-spot training programmes used by workers who no longer need to specialise. OSH risks can also emerge from a lack of communication, where workers cannot comprehend the complexity of the new technology quickly enough. This is exacerbated if workers are not trained to prepare for any arising hazards.
As a recent TNO report states, the risks in human–cobot environment interactions are:
These are just a few of the new innovations that are being experimented with as well as fully implemented in many industry locations today.
Policy developments, debates and discussion
So what can be done? In China, the government will soon give each person a Citizen Score, an economic and personal reputation score that will look at people’s rent payments, credit rankings, phone usage and so on, used to determine conditions for obtaining loans, jobs and travel visas. Perhaps people analytics could be used to give people Worker Scores used for decision-making in appraisals, which would introduce all sorts of questions about privacy and surveillance.
The European Commission has recently indicated that the emergence of AI, and in particular the ecosystem and features of autonomous decision-making, requires a ‘reflection about the suitability of some established rules on safety and civil law questions on liability’ (2018). So, horizontal and sectoral rules may need to be reviewed. The Machinery Directive, the Radio Equipment Directive, the General Product Safety Directive and other specific safety rules provide some guidance, but more will be needed for workplace safety and health.
A committee within the International Organization for Standardization (ISO) is currently working on a standard that will apply to uses of dashboards and metrics in workplaces. The standard will include regulations on how dashboards can be set up and on the gathering and use of worker data.
The EU has rolled out the General Data Protection Regulation (GDPR), which requires worker consent for data collection and usage and prohibits workplace decision-making based on algorithms alone. It is unclear how much preparation in the area of human resources is being undertaken, given the extensive changes this new policy requires. I have written about the significance of GDPR requirements here, where the entire work design model in gig work is under review. Trade unionist Colclough of the UNI Global Union federation has composed a set of principles for data rights, and other unions are pursuing projects to protect workers against the worst impacts of AI at work, including IG Metall, where industry training is being rewritten with the spotlight on the risks of AI technologies.
Several discussions are now underway considering the worst-case scenarios as well as the possible benefits of AI at work, which is putting workers and companies on a path to a new world of work that requires consideration now; that, of course, is why we are here today.
AI’s seeming invisibility, operations and potential power are perpetuated because it resides within a ‘black box’ (Pasquale 2015) of obfuscation, where its workings are considered beyond comprehension and are thus accepted by the majority of people.
Indeed, most people are not engineers and so do not understand how computers and AI systems work, and only .004 per cent of the world’s population are designing the technologies that are predicted to impact the majority of people. Nonetheless, human experts are often surprised by AI actions, such as the champion Go player who was recently beaten by a computer programme.
Prediction machines are often better than humans at handling complex interactions among different indicators, in particular in settings with rich and extensive data.
So even though they reside in a black box, people are usually happy for computer programmes to, e.g. make ‘prediction[s] by exception’ (Agarwal et al 2018).
Prediction by exception refers to processes whereby computers handle large data sets and make reliable predictions based on routine, regular data, but also spot outliers, at which point they can notify people, ‘telling’ them that checks should be done and that human assistance is needed.
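As a rough illustration of the idea (the data, threshold and function names here are invented for the sketch), a prediction-by-exception routine might predict automatically on routine inputs and escalate outliers to a human:

```python
# Minimal sketch of 'prediction by exception': predict automatically on
# routine inputs, but flag unusual ones for human review.
# The history, threshold and names are illustrative assumptions.
import statistics

routine_history = [98, 101, 99, 100, 102, 97, 100, 103, 99, 101]
mean = statistics.mean(routine_history)
stdev = statistics.stdev(routine_history)

def predict_or_escalate(value, z_threshold=3.0):
    """Return an automatic prediction for routine values,
    or escalate unusual ones to a human."""
    z = abs(value - mean) / stdev
    if z > z_threshold:
        return ("escalate_to_human", value)  # outlier: ask a person to check
    return ("auto_prediction", mean)         # routine: predict as usual

print(predict_or_escalate(100))  # routine input: handled automatically
print(predict_or_escalate(250))  # outlier: flagged for human assistance
```

The design choice is the threshold: everything inside it is handled without a person ever seeing it, which is exactly what raises the accountability questions discussed above.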
In this case and in so many that are now being revealed, humans could become like a resource for machines.
But again, it is not the technology itself that creates risks for workers’ safety and health; it is the way it is implemented. It is up to all of us to ensure mitigation, due diligence and debate, so that the next phase of the future of work is approached with the human still in focus.
Please email me at pm358 at leicester.ac.uk for more information about the timing of this report and if you are interested in citing these points.
*EU-OSHA is the European Union Agency for Safety and Health at Work.
**Michael Bretschneider-Hagemes discussed this in an interview with me 18/09/18.