I spoke at NESTA FutureFest Forward: AI and The Future of Work, curated by Ghislaine Boddington, on 17th May 2018 at 5:30 p.m. This was the second event in the FutureFest Forward series, which explored future visions of the workplace.
See the video of my talk here and read the text of my talk below:
The history and future of AI in work
John McCarthy, an Assistant Professor at Dartmouth College, almost accidentally coined the term Artificial Intelligence in 1955, describing it as a basket of processes that could make ‘a machine behave in ways that would be called intelligent if humans were so behaving’.
So, the definition, at its origins, aligned humans with machines.
The earliest phase of AI research, nicknamed good old-fashioned artificial intelligence or ‘GOFAI’, held that human-readable representations of problems, in the form of symbols, should inform how AI research was conducted. This form of AI involved expert systems built from production rules, which humans revised by deduction as errors emerged; the process relied on an ‘if this, then that’ type of formula, basically a flow chart. According to the symbolic AI approach of this preliminary phase, intelligence required ‘making the appropriate inferences’ from machines’ seeming internal representations, whereby ‘a physical symbol system’ was held to possess the ‘necessary and sufficient means for general intelligent action’ (Dreyfus, 2007). Critics quickly emerged who held that machines should be autonomous and the role of humans reduced.
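To make the ‘if this, then that’ idea concrete, here is a minimal sketch of a GOFAI-style production system in Python. The rules and facts are invented for illustration; real expert systems of the era chained hundreds of such rules.

```python
# A minimal sketch of a GOFAI-style production system: hypothetical
# 'if this, then that' rules applied to symbolic facts until no new
# rule fires. All rules and facts here are invented for illustration.

RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),  # if both facts hold, infer flu
    ({"possible_flu"}, "recommend_rest"),          # chained inference
]

def forward_chain(facts, rules):
    """Repeatedly apply any rule whose conditions are satisfied by known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_fever", "has_cough"}, RULES))
```

The system derives ‘recommend_rest’ in two steps of symbolic deduction, which is exactly the kind of flow-chart reasoning, and exactly the brittleness, that symbolic AI’s critics pointed to: the machine knows nothing beyond the symbols it is given.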
Hubert L. Dreyfus, a critic of symbolic AI, noticed when reading original texts in AI research in the 1960s, that the ontology and epistemologies underpinning early AI researchers’ thoughts were derived from a range of rationalist philosophical tenets.
Researchers seemed to anthropomorphise the machine, projecting intelligence onto it: thinking that a machine could comprehend a symbol in the same way that humans do, and that robots’ sensors would mimic humans’ ability to derive meaning from their surroundings. Dreyfus’s work indicates that researchers had begun to come across problems of significance and relevance, issues that philosophy deals with in the existentialist tradition. Dreyfus argued that humans do not experience objects in the world as models or symbols of the world, but experience the world itself.[i] That is the problem with the analogy between machines and humans: robots must continuously adapt and find ways to learn about the world in order to create symbols and internal representations, a problem that is ‘avoided by human beings because their model of the world is the world itself’ (Dreyfus, 2007: 1140).

More recently, research has focused far more on making robots autonomous and on ‘machine learning’. The machine’s capacity to store data in the form of ‘memory’ was an early attraction. Now, however, it can process and extract data and look for patterns, so it can make predictions, and perhaps even judgements, that do not rely as explicitly on human intervention. These pattern-seeking and pattern-matching algorithms have led to significant breakthroughs in machine translation and machine vision, for example. ‘Training’ all these new systems requires data. A lot of data! Fortunately, we now have that too. Human profiling techniques that draw on big data are being used for credit scoring, for predicting re-offending in criminal trials, and even for recruitment and other areas of human resources, as I will discuss in a minute.
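As a toy illustration of what ‘training’ on data means, the sketch below learns a pattern from a handful of labelled examples and then classifies a new case. The data points, labels and the nearest-centroid method are all my own simplifications; production systems use vastly more data and far richer models, but the principle is the same.

```python
# A toy sketch of learning from data: a nearest-centroid classifier.
# All example points and labels are invented for illustration.
import math

def train(examples):
    """Compute the mean point (centroid) of each labelled group."""
    sums, counts = {}, {}
    for (x, y), label in examples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Label a new point by its nearest learned centroid."""
    return min(centroids,
               key=lambda label: math.dist(point, centroids[label]))

examples = [((1, 1), "low_risk"), ((2, 1), "low_risk"),
            ((8, 9), "high_risk"), ((9, 8), "high_risk")]
model = train(examples)
print(predict(model, (2, 2)))  # a new point near the 'low_risk' cluster
```

Nothing in the ‘model’ is a human-readable rule; it is simply a pattern extracted from data, which is why such systems scale with data volume, and why their judgements can be hard to interrogate.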
Data can now be accumulated from a range of sources: CCTV camera footage, records of hospital visits, Facebook likes, recordings of speeches, online ad clicks, birth records, sonar recordings, credit card transactions, phone calls and tweets, all of which can be digitalised.
My own research has looked at the implications of the integration of machines into workplaces since the era of scientific management. In my recent publications I have both outlined the history of the use of technologies to measure human labour from that era to the current era of ‘agility’ and looked at the ways new technologies are being used to attempt to measure new areas of human labour such as affective and emotional labour, leading to a reconsideration of what is considered ‘work’ at all.
I have just completed a large report commissioned by the International Labour Organization (of the United Nations) that looks at the risks that are created when technology is used to make human decisions in workplaces.
My report spans the arenas where this is happening, from factories and warehouses to offices and home-working spaces; from the use of algorithms in gig work to the integration of robots into factories and the use of people analytics in offices.
The argument is that automation and mechanization, in factories and offices alike, are putting workers at risk of heightened stress and overwork, as well as reductions in paid working time and significant job losses.
AI is the fuel for a range of digitalized management methods. These include:
“People analytics” is defined broadly as the use of big data and digital tools to “measure, report and understand employee performance, aspects of workforce planning, talent management and operational management” (Collins, Fineman and Tsuchida, 2017). People analytics is said to allow organizations to conduct “real-time analytics at the point of need in the business process … [and] allows for a deeper understanding of issues and actionable insights for the business” (ibid.). Seventy-one per cent of international companies consider people analytics a high priority for their organizations (ibid.). It is not difficult to see what kinds of risks workers face as people analytics is introduced.
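To make concrete the kind of quantification such tools perform, here is a deliberately simplistic sketch of an algorithmic performance score. Every field, weight and value is hypothetical; the point is that a worker is reduced to a weighted sum of whatever happens to be measurable.

```python
# A toy illustration of algorithmic employee scoring. All metrics and
# weights are hypothetical. Inputs are assumed pre-normalised to 0-1.

WEIGHTS = {"tickets_closed": 0.5, "hours_logged": 0.3, "peer_rating": 0.2}

def performance_score(employee):
    """Weighted sum of quantified metrics: the worker as a single number."""
    return sum(WEIGHTS[k] * employee[k] for k in WEIGHTS)

alice = {"tickets_closed": 0.9, "hours_logged": 0.4, "peer_rating": 0.8}
# Note what the score cannot see: caring responsibilities, illness,
# or any qualitative context behind the numbers.
print(round(performance_score(alice), 2))
```

A score like this carries an air of objectivity, yet the choice of metrics and weights is itself a judgement, which is precisely where the risks to workers discussed below arise.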
“People problems” could, of course, mean deciding whom to fire or whom not to promote, and the like. In any case, without the human intervention that will be required under the new GDPR, these HR judgements become potentially very dubious: the qualitative dimensions of the workplace are eliminated, and workers’ stress could increase.
A 2017 PricewaterhouseCoopers (PwC) report shows that 30 per cent of UK jobs and 38 per cent of US jobs are at risk of automation. In the past, routine jobs were seen as the most susceptible; now, a range of non-routine and non-repetitive jobs are also at risk. Automated working environments increase stress and reduce worker autonomy, according to an IG Metall trade unionist. An IMF report by Berg, Buffie and Zanna (2016) explored what will happen to workers’ wages, particularly those of low-skilled workers, if widespread automation proves inevitable. The report forecasts that wages will be driven down as investors put their money into robots rather than human employees, buildings or machinery.
Human resource and management practices involving AI have introduced the use of big data to make judgements intended to eliminate the supposed “people problem”. However, the ethical and moral questions this raises must be addressed, because the possibilities for discrimination and labour market exclusion are real. People’s autonomy must not be forgotten. People have specific needs and want to make choices in the context of irregular career patterns, time out of work for reproductive and domestic labour, maternity leave, physical illness and mental health issues. These must be addressed when AI is used for workplace decision-making.
[i] In 1972 he wrote in What Computers Can’t Do: ‘[T]he meaningful objects … among which we live are not a model of the world stored in our mind or brain; they are the world itself’.