
Which is the mirror? (Artificial) intelligence, a historical materialist reading

This text is based on the presentation I gave at ILPC 2019 in Vienna. It reflects research I am currently carrying out for the Special Issue ‘Automation, AI and Labour Protection’ edited by Valerio De Stefano for Comparative Labor Law and Policy, and for the Special Issue I am editing for Capital and Class with Frank Engster and Kendra Briken, called ‘Machines and Measure’ (forthcoming 2019).

The ways that technologies and machines are incorporated into societies, and humans’ relationships with machines, reveal ideas about what types or features of ‘intelligence’ are considered valid for humans and, increasingly today, for machines. As ‘artificial intelligence’ (AI) becomes increasingly debated and is now being introduced into workplaces as a way to make human resource and work design decisions, it is important to discuss what is meant by intelligence, and in what context, and to think about what, overall, is at stake.

The history of the ideational manufacturing of intelligence demonstrates a pattern of humans’ interest in calculation and computation. The parallel series of machine and technological inventions and experiments shows how machines not only facilitate the normalisation of what is considered intelligent behaviour, via both human and machinic intelligence, but also enable the integration of a specific model of political economy and specific social relations into everyday work and life.

By construing human and machine capabilities for intelligence within work design ideal types using AI, such as in human resources, gig work and robotics, experimentation may appear to reveal work’s true nature by exposing its seemingly absolute value. However, the process of abstraction by numbers, via specific machine/human relations in which machinic intelligence is at points supposed to overtake the human, works overall to detract from the qualitative and material experience of labour and to invisibilise suffering and the non-denumerable within material conditions.

The Intelligence of Humans, by Humans

Cybernetics researcher Norbert Wiener cogently noted in Cybernetics, or, Control and Communication in the Animal and the Machine that ‘the thought of every age is its technique’ (1948), in other words, that there are continuous entanglements between human thought and machinic invention, which is certainly the case in the history of ideas about intelligence. It would be nigh impossible to cover all the ontological and epistemological debates leading up to, and during, the phases of AI research, which started, in name, in 1955; but some important moments in the development of thinking about human reason stand out, some of which were later contemplated by the school of cybernetics, and then by AI.

Philosophers, work design gurus and social theorists’ expositions around ‘intelligence’ pre-date any explicit consideration of the machine/human relationship, but several identified how this supposedly human feature is portrayed with specific characteristics. In discussions in cybernetics and later in the AI research communities, the intelligence of both machines and humans has been explicitly depicted, but it becomes increasingly unclear which of the players in the discussion is, in fact, the mirror: the machine or ourselves? I argue that the characteristics of intelligence that humans (who have, within this account of the history of ideas, usually been white men) have ascribed to ourselves as humans, and then the intelligence ascribed to machines, tend to fit an overarching set of norms, where quantification is the dominant modus operandi.

Intelligence as calculation parcels

In British empirical philosophy, the mind was seen to be composed of ideas. In the chapter ‘Of Reason and Science’ of Leviathan, Hobbes muses: ‘When a man reasons, he does nothing else but conceive a sum total from addition of parcels… for reason is nothing but reckoning’ (1651). So, humans’ capacity for reason, which animals were not expected to have, is a process whereby we simply carve the world into symbolic units and use sums to make decisions, informing intention. Man can consider the consequences of his actions and make theories and aphorisms, reasoning and reckoning ‘not only in number, but in all other things whereof one may be added unto or subtracted from one another’. So, this form of human intelligence is portrayed in terms of calculative processes.

Another major British empiricist, John Locke, held that the ideas that make up the human mind exist in a wholly passive way, where, on the basis of sensory contact with the outside world, they pull themselves into bundles of similarity, borders, or cause and effect. This is, interestingly, a process similar to the way that neural networks were much later expected to behave. Later, Ivan Pavlov’s studies of dog responses to desired objects such as food were based on patterns of action. Locke had looked at ideas but not patterns. Bodies, Pavlov noted, respond to stimulus not only by generating idea clusters but through actual empirical responses to repetitive stimulus. So, intelligent responses occur when the mind and body are faced with repeated exposure to an object or stimulus, an interesting predecessor to thinking about machine learning, where machines are expected to take note of and respond to patterns across data, the data providing the relevant stimulus.
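
As a loose illustration of this stimulus-and-pattern logic, the following is a minimal sketch in Python (my own illustration, not drawn from any of the texts discussed here) of a ‘learner’ that strengthens an association each time a stimulus and outcome co-occur, and then responds with the most strongly associated outcome:

from collections import defaultdict

class StimulusLearner:
    """Toy learner: repeated exposure strengthens an association, echoing conditioning."""
    def __init__(self):
        self.weights = defaultdict(float)  # association strength per (stimulus, outcome)

    def observe(self, stimulus, outcome):
        # Each repetition of the pairing strengthens the learned association.
        self.weights[(stimulus, outcome)] += 1.0

    def respond(self, stimulus):
        # Respond with the outcome most strongly associated with this stimulus, if any.
        candidates = {o: w for (s, o), w in self.weights.items() if s == stimulus}
        return max(candidates, key=candidates.get) if candidates else None

learner = StimulusLearner()
for _ in range(10):
    learner.observe('bell', 'food')   # repeated stimulus, in the spirit of Pavlov
learner.observe('bell', 'nothing')
print(learner.respond('bell'))        # prints 'food'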

Intelligence as computation and machinic

Charles Babbage’s Analytical Engine of 1837 marks the emergence of the supposedly thinking machine. Though it was never built, this ‘engine’ was the first imagined digital computer, where punched cards allowed for the operation of logical and mathematical computations. These differed from analogue computers, which measured magnitudes and physical quantities like voltage, duration or angle of rotation in proportion to the quantity to be manipulated. The Countess of Lovelace, Ada Byron King, wrote the first algorithm as part of the design and invention of the Analytical Engine. The only legitimate child of Lord Byron and Lady Wentworth, who lived only 36 years, is seen as the first to have perceived the full possibilities of computer programmes.

An important predecessor to the invention of the digital computer, and a key figure for ideas about human thought as they relate to machinic capacities, is George Boole, initiator of the logic of mathematical computation and forms of statistics, whose work gave us what is now labelled Boolean algebra. In The Laws of Thought (1854/2009), Boole indicated that ‘all reasoning is calculating’. According to this very early logic, all thought can be reduced to numbers, symbols and, ultimately, quantification. Binary code, upon which all programming languages are now built, is derived from his postulates: variables are based on truth values, and the choices are two, 1 (true) and 0 (false). These distinctions form all the logic of computation today. It is fairly easy to see the weaknesses of assuming that human thought and reasoning (and thus ‘intelligence’) can be associated with such a black and white calculation, but these processes are notable for their early significance in discussions about capacities for reasoning and intelligence.
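
To make that reduction concrete, here is a minimal sketch in Python (my own illustration, not taken from Boole) of ‘reasoning’ treated as nothing more than operations over the two truth values 1 and 0:

# Boolean 'reasoning' reduced to operations over the two values 1 (true) and 0 (false).
def AND(p, q): return p & q
def OR(p, q):  return p | q
def NOT(p):    return 1 - p

# Truth table for a compound judgement: 'p and not q'
for p in (0, 1):
    for q in (0, 1):
        print(p, q, AND(p, NOT(q)))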

Intelligence looks for the One Best Way

Fast-forwarding through history, the era of scientific management is perhaps the best-known modern era during which conceptions of intelligence became significantly and perhaps irreversibly wrapped up with workplace design. This period of work design experimentation began in the late 1800s and early 1900s, before the two world wars. Specific technologies were used to calculate workers’ physical movements and provide information, or what today we would call data, as evidence about which worker movements are best for productivity. The technologies Frederick W. Taylor and Lillian and Frank Gilbreth were using were fairly simple: early forms of cameras, stopwatches and stethoscopes, along with the microchronometer. The latter tool was used to measure very small time intervals, a form of timekeeping echoed in the armbands used in factories today to track workers’ movements around the floor. The division of possibilities for intelligence is clear during this period, and in Taylor’s Principles of Scientific Management he states that handling pig-iron is ‘so crude and elementary in its nature that the writer firmly believes that it would be possible to train an intelligent gorilla so as to become a more efficient pig-iron handler than any man can be’.

However, Taylor stressed, ‘the science of handling pig iron is so great and amounts to so much that it is impossible for the man who is best suited to this type of work to understand the principles of this science’ (1911/1998: 18). Scientific management required the separation of unskilled and skilled labour, categorised in terms of manual and mental labour, and Taylor was quite scathing in his accounts of the less-able human who, he argued, would be best suited for manual work. So, quite clearly, only the boss and administrators were permitted to be intelligent, and the intelligence of machinic capacities was assumed. (Read more about this in my 2018 book The Quantified Self in Precarity: Work, Technology and What Counts.) For now, this period of technical social experimentation is worth mentioning because Taylor’s ideas were very influential internationally and were incorporated not just in the workplace but also in banking, education and agriculture (Moore, 2018), where intelligence is once again related to abilities for calculation and, especially, efficiency for idealised productivity.

The Intelligence of Machines, by Humans

A couple of decades after scientific management fell out of fashion, prominent scientists sought, at the origins of the field of research called AI by McCarthy at Dartmouth College in 1955, ways to make machines behave directly like humans, intelligently. The early phase of AI research was committed to seeing ongoing human input into the processes of machinic intelligence formation and manifestation. Today, AI researchers have begun to focus on creating autonomous machines: machines that think for themselves, make decisions and choices, and are even capable of affective responses.

So, the history of what was actually termed ‘artificial intelligence’ begins at an academic conference in 1956, which was really a series of workshops led by an Assistant Professor named John McCarthy, who worked at these workshops with Marvin Minsky of Harvard, Nathan Rochester of IBM and Claude Shannon of Bell Telephone Laboratories. The funding application to the Rockefeller Foundation for the summer events, designed to create intelligent machines, states that:

…the artificial intelligence problem is taken to be that of making a machine behave in ways that would be called intelligent if a human were so behaving

This funding application also indicates that:

In a single sentence the problem is: how can I make a machine which will exhibit originality in its solution of problems?

Text within the application already demonstrates the emerging decisions about what intelligence is. Humans behave in a myriad of ways, so it would be important to define not only machinic behaviours but human behaviours simultaneously, with some recognition of previous research projects, during these summer workshops on the American East Coast. Interestingly, psychologists were not consulted in these early phases on the ideas around intelligence and being. Intelligence as an idea was often just assumed by the small group of male intellectuals who are responsible for the origins of what they labelled ‘AI’ research and execution. It is not that psychologists are the sole owners of intelligence designation, but given that human thought and behaviour inform most, if not all, research questions within that discipline, it is a curious oversight. There was also a dearth of consideration of the philosophical questions that such explicit technological investigations introduced, aspects that were visited somewhat later by such figures as Hubert L. Dreyfus.

Indeed, it was Dreyfus, a now well-known critic of symbolic AI, who noticed, when reading original texts in AI research in the 1960s and examining the work of Newell and Simon, that the ontology and epistemologies underpinning early AI researchers’ thoughts were derived from a range of rationalist tenets. Dreyfus’s work indicates that researchers had come across problems of significance and relevance, issues that are dealt with philosophically in the existentialist tradition. Dreyfus argued that humans do not experience objects in the world as models of the world, or symbols of the world, but experience the world itself.

About a decade later, in 1966, Joseph Weizenbaum, a German American MIT computer scientist who is also considered one of the forefathers of AI, invented the predecessor of today’s chatbots, naming this computer programme ‘Eliza’ after the ingenue in George Bernard Shaw’s Pygmalion. Pygmalion is, in turn, a character in Greek mythology who develops a love interest in his own sculpture, which comes to life. This seemed an appropriate name for the chatbot given ‘her’ quickly observed capacity to induce emotion in those speaking to her. Weizenbaum took this response from humans seriously and was genuinely surprised when people responded so earnestly to Eliza. Alan Turing would probably have found this quite interesting, given the experiments he pursued, in which humans project their own assumptions onto machines. Overall, Weizenbaum, who was himself a socialist, became very sceptical about the integration of computers into society and saw the dark sides that it introduced.

This section has not covered the range of robotic experiments conducted at this time which attributed intelligence to machines (see the publications mentioned below and my next book), but suffice it to say here that it is important to revisit these debates, not least because there is a notable return to discussions internationally, regionally and locally about the hoped-for benefits of AI in, seemingly, most aspects of day-to-day life.

The next section asks: who is being asked to behave like whom, or like what, as we move into the era of cognitive, autonomous AI?

Who/What is the Mirror? (Or: The Intelligence of Machines, by Machines?)

These days, very little AI research actually relates to the workings of the human mind; instead it projects direct forms of (hoped-for) autonomous intelligence onto machines themselves. Indeed, AI research has reached a stage where it is expected to somehow transform societies forever, with contrasting visions ranging from the late Stephen Hawking, who declared that AI was the ‘worst event in the history of our civilization’, and Elon Musk, who noted that ‘AI is a fundamental risk to the existence of human civilization’, to enthusiastic predictions about its ability to facilitate superpower status for countries.

These final sections now ask: who is mirroring whom in a human/machine configuration, with all the complications that build and surround this relationship? Today, technology supplements management control over workers’ movements through wearable calculation devices which capture seemingly objective data; this data can now be used for algorithmically derived machine learning, which management uses to make decisions not only about the ‘one best way’ to move around a factory, as was sought during the period of Taylorism, but also about workplace rationalisation and laying off workers who are not fit enough to keep up. Technology also helps management to reduce accountability for workers’ livelihoods and material living conditions, because the intelligent design of the workplace, whether these days around agile norms or within a digital Taylorist framework, furthers the core principles of productivity and efficiency through digital decision-making, with less and less human involvement and intervention. Where are the safety nets in AI-augmented workplaces?

Who controls the workplace?

Software designers and engineers are creating calculation machines for workplace decision-making and design which are inextricably linked to the work design frameworks into which they are integrated, putting a cadre of digital experts in the driving seat for what might once have been the role of a management guru with little or no technical expertise. Chatbots are now ready to answer basic questions typed into a chat box on bank account websites. Cobots are being integrated into factories to help with picking and moving boxes across consoles. Wearable technologies with AI augmentation are facilitating on-the-spot factory training for workers. Food delivery riders are being directed to specific orders via algorithmic decision-making and judged by aggregate scores in performance metrics. Human resource groups are using people analytics software to select candidates for jobs on the basis of AI-driven machine learning that locates specific terms and patterns of expression in CVs. Also in the hiring and firing phases, filmed interviews allow management to look for personality cues in speech and facial movements. This form of AI allows for ‘affect recognition’ in humans, which is also called emotion recognition and facial coding.

All of these practices are forms of automation and AI-augmented decision-making, which has implications for all stages of HR development and for industry and service work. In HR practices, that includes decisions about hiring, rationalising the workforce (firing people), appraisals, talent spotting, scheduling work programmes, and selecting for promotions. These practices can lead, and have already been shown to lead, to unfair discrimination. The factory and warehouse usages of AI in robotics could lead to complete automation, where few or no training programmes are made available for workers whose jobs are replaced. The use of AI for performance analytics in call centres and in gig work such as delivery and taxi driving can lead to stress and anxiety as well as unfair discrimination.

In an article written for the forthcoming Special Issue ‘Automation, AI and Labour Protection’ edited by Valerio De Stefano for Comparative Labor Law and Policy, and in the Special Issue I am editing for Capital and Class with Frank Engster and Kendra Briken, called ‘Machines and Measure’ (forthcoming 2019), I argue that the practices outlined here for the most recent uses of AI-augmented tools and applications in workplaces each ascribe specific forms of autonomous intelligence to machines, including:

  • assistive,
  • prescriptive,
  • descriptive,
  • collaborative,
  • predictive, and
  • affective.

Ultimately, these forms of intelligence are each oriented around capitalist expectations in the employment relationship. AI-augmented ‘assistive’ and ‘collaborative’ robots in the warehouse and call centre are ultimately a way to reduce labour costs; ‘prescriptive’ performance analytics allows a reduction in management accountability and can thus reduce the duty of care; ‘descriptive’ intelligence leads to interpretations of work and performance that can be used in ways that are not revealed to workers; ‘predictive’ intelligence is a technique also used in decisions about criminal recidivism, where, in workplaces, both talented and troublemaking workers are to be spotted with calculative precision, with, paradoxically, risks of unfair discrimination; and ‘affective’ intelligence is seen where chatbots may in the future be able to respond to people in the way Eliza was seen to do, or where emotion is detected in filmed interviews, as is being used in recruitment techniques today.

As I have said elsewhere, the strengths of AI in making seemingly reliable and accurate decisions are also its weaknesses, that is, if the data used to train machine learning via algorithmic processes demonstrates that human intelligence is itself discriminatory. The material conditions that this creates, i.e. the failure to hire people, the elimination of jobs, the reduction of wages, and so on, are not highlighted in mainstream discussions of the AI arms race, where competition and the hope for prosperity are dominant themes.
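
To make this mechanism concrete, here is a deliberately simplified sketch in Python (my own hypothetical illustration, not any actual people analytics product) of how discriminatory patterns in historical hiring data carry straight through into a seemingly objective, machine-generated score:

# Toy 'people analytics' model: score candidates by the historical hire rate of
# their group. The hypothetical past decisions encode a human prejudice against
# 'group_b', and the learned rule simply replays it as a neutral-looking number.
past_hires = [
    {'group': 'group_a', 'hired': 1},
    {'group': 'group_a', 'hired': 1},
    {'group': 'group_a', 'hired': 0},
    {'group': 'group_b', 'hired': 0},
    {'group': 'group_b', 'hired': 0},
]

def hire_rate(group):
    outcomes = [row['hired'] for row in past_hires if row['group'] == group]
    return sum(outcomes) / len(outcomes)

def score(candidate):
    # The 'trained' model: nothing more than the prejudice already in the data.
    return hire_rate(candidate['group'])

print(score({'group': 'group_a'}))  # ~0.67: favoured, because the past favoured them
print(score({'group': 'group_b'}))  # 0.0: excluded, because the past excluded them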

A conclusion: AI ‘ethics’?

The difference between AI and other forms of technological development and invention for workplace usage is that, because of the intelligence projected onto machines and their seemingly superior capacity to calculate, they are increasingly treated as decision-makers and management tools themselves. Where many articles on AI try to deal with the questions of ‘what can be done’ or ‘how can AI be implemented ethically’, I want to argue that the move to a reliance on machinic calculation for intelligent workplace decision-making actually introduces extensive problems for any discussion of ‘ethics’ in AI implementation and use at all.

In An Essay Concerning Human Understanding, the empiricist philosopher Locke wrote that ethics can be defined as ‘the seeking out [of] those Rules, and Measures of humane Actions, which lead to Happiness, and the Means to practice them’ (Essay, IV.xxi.3, 1689). This is of course just one quote, by one ethics philosopher, but it is worth noting that the seeking out and setting of such rules, which are the parameters for ethics, has so far only been carried out by humans. When we introduce the machine as an agent for rule setting, as AI does, the entire concept of ethics falls under scrutiny. Rather than talking about how to implement AI without the risk of death, business collapse or legal battles, which are effectively the underlying concerns that drive ethics in AI discussions today, it would make sense to rewind the discussions and focus on the question: why AI? Will the introduction of AI lead to prosperous, thriving societies? Or will it deplete material conditions for workers and promote a kind of intelligence that is not oriented toward, for example, a thriving welfare state, good working conditions or qualitative experiences of work and life?

This short piece has begun to look at how, historically, human intelligence was considered to be possible and relevant, and then, by looking at the most recent uses of AI in workplaces, at the kinds of intelligence that are now ascribed and attributed to machines. The lines of force between the subject which is humanity and the subject which is the machine are not a priori designated, so who or what, after all, is the mirror for the portrayal of intelligence? A discussion of the role of technology and machines in the labour process is, more than ever, necessary. While the human/machine mirror is subject to constant interpretation, ultimately, the social and material conditions that could be facilitated by the introduction of new forms of AI-augmented applications and tools so far only allow for so much material interpretation, and this is what is missing in the AI debate today.
