
AI @ Work. Data subjects, AI trainers, quantified workers.

This text is based on three of my presentations in November 2020. The first is for the Unboxing AI conference organised by Antonio Casilli and colleagues for the INDL European Network on Digital Labour, the NEXA Centre for Internet and Society and the ISRF Independent Social Research Foundation. The second is a keynote talk for the Nürnberg Digital Festival, and the third a keynote for the Berliner Gazette's Silent Works conference.
Paola Tubaro introduces this event. My talk starts at 3:06.

There have been several iterations of the digitalisation and semi-automation of management practices over time. While tracking and monitoring of work occurred in analogue form even before scientific management, contemporary discussions really ignited in the 1980s, when computers became normalised in Western societies in office, clerical, bookkeeping, accounting and call centre work, and 'telephone call accounting' became possible.

While simply having a computer in the office does not automatically lead to worker surveillance, by 1987 estimates suggested that between 6 and 40 million US workers were subject to electronic monitoring, and as many as 75 per cent of large companies electronically monitored their workers (Alder 2001: 323). During this period, women were the most impacted by this transformation. The new forms of electronic performance monitoring (EPM) in the 1980s were seen to be different, as they were 'constant, pervasive, and unblinking' (Alder 2001: 323), where 'Minute-by-minute records' (US Congress 1987: 5) were kept. These 'constant and unblinking' practices have now crystallised in the AI era, but their seeming complexity has rendered them a supposed 'black box'.

There are thus several lines of force in the integration of AI into the workplace which are rendered unidentifiable. To release a number of questions and paradoxes from their black boxes, we need to look at what the integration of AI technologies into workplaces explicitly means for workers in many industries, and at the new forms of EPM via algorithmic, machine learning and AI-augmented practices. Importantly, we also need to look at how the backbone for AI is developed: who produces the datasets, and what conditions do they endure? Here, content moderators and data workers come into view.

The questions I have been asking in my recent research projects (for the International Labour Organization, the European Agency for Safety and Health at Work, the RLS, in academic work, and right now for the European Parliament Panel for the Future of Science and Technology) that have relevance for the terms of reference for this conference are:

What is at stake when workplace decision making is fully automated via AI, and rendered beyond human comprehension?

In these contexts: What changes are emerging for the employment relationship?  e.g. how can worker ‘consent’ be achieved? How can ‘inference’ be fair?

What policies surround this shift toward automated decision-making in the workplace, and will they protect workers’ rights to privacy and dignity? 

And to take this to a new level entirely: Who and what sits behind the datasets used to train machine learning for AI, datasets without which, AI cannot exist?

To deal with these questions, today I am going to talk about workers, portraying a series of 'ideal types' that I have written about in my research, at varying levels of abstraction, to highlight trends in social relations of production as technology is ramped up in workplaces to monitor and surveil work. Research on Quantified Work, as I have called it, is an important body of literature, where body studies meets research on quantification and the digitalised labour process.

The data subject is in fact the key actor in the quantified workplace, and I will discuss some of the surrounding issues she faces in a minute. The concept of the AI Trainer that I put forward for my talk today is a kind of fruition of ideas that I have been developing over some years, where I see technological augmentation and interventions into work's measure, workers' bodies and the workplace as increasingly becoming a site of transformation, subjectification and, potentially, struggle.

The data subject is the key actor in the quantified workplace.

Where some of my critique starts is probably in a paper I published in Body & Society, the sister journal of Theory, Culture & Society. Building on the feminist research of Dewart-McEwen (2017), Jarrett (2016), Crain et al. (2016), Cherry (2016) and others, I analyse a trend in workplace quantification for agility, identifying the ways that work is both quantified and invisibilised through technologies, where workers are expected to self-manage and self-track, e.g. stress, via the use of wellness-focussed devices. In this article, I focus on worker self-management and self-discipline for 'agility' in digitalised working environments.

I claim that this demonstrates management recognition of the qualified aspects of labour that operate at the unseen and 'affective' level. As seen in the case study presented, through wearable and tracking devices, management set out to gather information about aspects of workers' lives beyond traditionally understood work practices, where other activities and behaviours are themselves linked to productivity: e.g. sleep, steps and heart rate are transformed into something measurable, tangible and otherwise identifiable.
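To picture how this transformation of bodily signals into 'something measurable' tends to work in such initiatives, here is a minimal, hypothetical sketch (Python, with invented field names and weightings that are not taken from the case study or from any real wellness platform) of how sleep, steps and heart rate might be collapsed into a single 'wellness score' that management can rank and compare:

```python
from dataclasses import dataclass

@dataclass
class DailyReading:
    # Hypothetical signals harvested from a workplace wellness wearable.
    sleep_hours: float
    steps: int
    resting_heart_rate: int   # beats per minute

def wellness_score(r: DailyReading) -> float:
    """Collapse heterogeneous bodily signals into one comparable number.
    The weightings are invented; the point is that once a score exists,
    it can be ranked, targeted and tied to productivity conversations."""
    sleep_component = min(r.sleep_hours / 8.0, 1.0)          # 8 hours treated as 'ideal'
    steps_component = min(r.steps / 10_000, 1.0)              # 10,000 steps treated as 'ideal'
    heart_component = max(0.0, 1.0 - (r.resting_heart_rate - 60) / 60)
    return round(100 * (sleep_component + steps_component + heart_component) / 3, 1)

# Prints a single comparable score (roughly 60 for this reading).
print(wellness_score(DailyReading(sleep_hours=5.5, steps=4200, resting_heart_rate=78)))
```

The arithmetic itself is trivial; what matters analytically is that sleep, movement and autonomic activity, none of which are 'work' in the traditional sense, become a single figure that can sit next to output metrics.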

While this was not precisely an AI-driven project, the findings in my case study demonstrate that the drive toward the contemporary work design model of agility, paired with the quantification of workers' affective labour as linked to autonomic and physiological signifiers, does not reflect a recognition of, nor an intention to remunerate, these unseen aspects of labour. Rather, the actions reflect quantified-self movement ideals, where tracking and monitoring should help management to identify how much stress, or which other autonomic drivers, are experienced, via forms such as sentiment analysis.

The difference between the 'quantified self' and the 'quantified worker', of course, is that the 'worker' is not a 'free agent' in the way the notions of 'self' coming out of the Silicon Valley Quantified Self movement assume. The worker is subject to an employment relationship, which introduces significant complexity.

The difference between the ‘quantified self’ and the ‘quantified worker’ of course is that the worker is not a free agent (except the freedom to sell her labour).

So, the extent to which the 'black box' of technologically augmented HR decision-making is impenetrable has to be weighed against the absence of understandability that is often assumed to lie within the materiality of the machine. It is also the case that algorithms' outcomes often surprise their designers, so it makes sense that the 'understandability' and 'trustworthiness' of AI are questioned.

But I argue that what we are dealing with is not so much a black box as a set of contradictions which lie in what is exposed and what is obscured via metrics and measure, and which have explicit implications for the materiality of workers' everyday experiences. This is what I ask in the title of my last book, 'what counts?', where what is not counted is as important as, or even more important than, what is.

What is not counted is as important as, or even more important than, what is.

Any time data is produced about real-time workplace activities, I argue, there is an abstraction of human experience. People's 'data doubles' are 'born' by virtue of data collection, processing and storage. And there is a possible situation whereby the data subject loses an element of autonomy, and possibly privacy and dignity, that can lend itself to shifting accountabilities.

This creates the potential for analysis under specific, human-selected criteria, which are presented as (seemingly) objective measures based on:

  • Profiling
  • Inference
  • Surveillance

These measures can lead to discrimination, structural violence, work intensification and threats to safety and health, particularly via the obfuscation of particular aspects of work, such as affective labour, preparatory labour and autonomic activities, which are not usually measured in digitalised employment relationships.

Why do we not talk about that mysterious new data double 'self', or the ideal types of workers as I am describing them here, as existing within a black box? And perhaps just as importantly: who decides what can be known and quantified, and what cannot be known and should not be quantified? In the new materialist frame, the unconscious is more influential on human behaviour than the conscious. In my last book, I talk about the 'autonomic self', which exists at the sub-level of the physiological, where this nervous system is oriented around body processes occurring without people's conscious intention. In the book, I focus on the corporeal aspects of tracking, from the wearable technologies used in warehouses, e.g. the haptic bands dictating where Amazon workers' arms should go for picking and stacking, to the smart devices used in office wellness initiatives such as the one I have referred to just now. The data accumulation from such devices quantifies work in an ever-closer way, and the physiological, autonomic 'self' is now tracked at unprecedented levels. The latest trend in digitalised workplaces is the integration of AI-augmented tools and applications, which are seen in devices, cobots, chatbots, platforms and human resources interfaces.

The European Commission announced in 2018 that it would increase its annual investment in AI by 70 per cent under the research and innovation programme Horizon 2020, reaching EUR 1.5 billion for the period 2018-2020. The investment was expected to:

…connect and strengthen AI research centres across Europe; support the development of an AI-on-demand platform that will provide access to relevant AI resources in the EU for all users; support the development of AI applications in key sectors. (European Commission 2018)

Most recently, in February 2020, the Commission published the 'White Paper on Artificial Intelligence: A European approach to excellence and trust'.

This White Paper defines AI as follows:

AI is a collection of technologies that combine data, algorithms and computing power. Advances in computing and the increasing availability of data are therefore key drivers of the current upsurge of AI. (European Commission 2020)

The Paper's introduction emphasises the positive angles for AI: healthcare improvements, disease prevention, farming efficiency, climate change relief and security improvements for Europeans. Later in the Commission's White Paper, business development and productivity and efficiency gains are emphasised, where AI is seen as key to countries' seeming competitiveness. Indeed, in most cases, high-level governmental and organisational reports predict that AI will improve productivity, enhance economic growth and lead to prosperity for all, in a similar way that scientific management was positioned as a promise of prosperity for all people.

However, as we also saw with scientific management at the turn of the century, these high-level discussions do not always link productivity and prosperity at the abstract level directly with the materiality of everyday (and everynight) human work impacted by AI, nor with the work that is ultimately fuelling national growth metrics or global scorecards.

Governmental discussions do not always link productivity and prosperity at the abstract level directly with the materiality of everyday (and everynight) human work.

Nonetheless, workplaces are the site for AI projects, via the image and text recognition work of data workers, and the place where AI-augmented tools and applications are being introduced in order to improve productivity, e.g. in factories, in gig work and in office contexts, at the back end in Human Resources and at the front end by way of platforms. So, to 'unbox' AI, I suggest looking behind the smoke screens and the obfuscation and putting some emphasis on workers' experiences and relations of production, by looking at the employment relationship and surrounding policies on data protection, and the tensions that emerge as AI becomes increasingly prevalent.

Now I am going to talk about what is at stake for workers when they are subjectified as 'data subjects'. This section is mostly policy-oriented, as a way to 'unbox AI'.

In fact, in May 2019, the Council of Europe Commissioner for Human Rights Dunja Mijatović made a recommendation, Unboxing artificial intelligence: 10 steps to protect human rights. She indicated that:

Ensuring that human rights are strengthened and not undermined by artificial intelligence is one of the key factors that will define the world we live in.

Alongside this call for reflection, and as part of the EU's drive to invest in AI, several projects have been funded and groups formed across the region, including my own project commissioned by the European Parliament Panel for the Future of Science and Technology (STOA), 'Surveillance and monitoring: The future of work in the digital era'. In the report, I look specifically at the collection, processing, usage and storage of worker data.

In my commissioned STOA report, I interrogate the concept of 'surveillance' and digitalised applications in workplaces and workspaces. I identify changes to the employment relationship due to increased monitoring and surveillance, and where precisely this is happening. I then outline the surrounding legal instruments and policy parameters, with some emphasis on the GDPR and a range of relevant legal cases on data privacy and protection for workers; here, AI and the workplace is covered. I discuss some of the tensions in legal principles (inference, consent, wellbeing), and then bring these issues to light in specific country legal case studies and best practices. I then look at workers' experiences, based on a series of in-person semi-structured interviews, the 'Worker Cameos'. Finally, based on these research activities and findings, I provide a list of First Principles and Policy Options for MEPs (see the end of this post for the Policy Options).

The GDPR, which replaced the 1995 Data Protection Directive and which is enforceable today across the EU and for all companies aiming to work with EU companies where data processing occurs, is written with the individual as a focal 'data subject'.

The GDPR is written with the individual as a focal ‘data subject’. But data is not only an individual object. Indeed, it is identified, or has to be identified, in its collective form, to keep within the regulations themselves.

This is similar to the way that the 1978 French Law on Freedom and Information Technology portrays the individual citizen and her right to privacy. While this is not wrong, data collection operates at more levels than the discrete, and its use will impact people individually as well as groups of all kinds, qualities and quantities.

There are special categories of personal data that are explicitly safeguarded by the GDPR e.g. race and ethnic origin, religious beliefs, political opinions, trade union memberships, biometric, genetic and health data, and data about sexual preferences and sexual orientation.

However, there is a range of inferences about these categories that can be made based on large groupings of data about other characteristics, a possibility which should be protected against just as much as the special categories of data for individuals are. Even where data about individuals is anonymised, machine learning and ultimately AI-augmented workplace activities allow a researcher, scientist or boss to make judgements about patterns as that data is parcelled out.

Even where data about individuals is anonymised, AI techniques allow a researcher, scientist or boss to make judgements about patterns as that data is parcelled out.

The bigger the dataset, the more powerful it is.
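To make this concrete, here is a minimal sketch (plain Python, with invented records and field names, not drawn from the report or from any real system) of how fully anonymised records, containing no names or identifiers at all, can still be parcelled out by group and turned into judgements about those groups:

```python
from collections import defaultdict
from statistics import mean

# Fully anonymised records: no names, no IDs, only a group attribute and a metric.
records = [
    {"team": "warehouse_night", "idle_minutes": 34},
    {"team": "warehouse_night", "idle_minutes": 41},
    {"team": "warehouse_day",   "idle_minutes": 12},
    {"team": "warehouse_day",   "idle_minutes": 15},
]

by_team = defaultdict(list)
for record in records:
    by_team[record["team"]].append(record["idle_minutes"])

# No individual is identified, yet the employer can rank and judge whole groups,
# and the larger the dataset, the more confident (and consequential) the judgement.
for team, values in sorted(by_team.items(), key=lambda item: -mean(item[1])):
    print(team, round(mean(values), 1))
```

Anonymisation protects the individual record, but it does not protect the night shift from the conclusion that it 'idles' more than the day shift.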

So, one of the arguments I am making in this report is that the approaches and responses to data and its collection should not only be individualised, as is expected in a consent framework, but should also be collective.

Responses to data collection should not only be individualised; governance should also be collective.

Inference. Inferred or derived data is data whose exact categories are not explicitly made transparent to the data subject, but on the basis of which decisions are made and/or conclusions about individuals are established. Thus, inferences and decisions can be made about a data subject which are not entirely accurate, nor agreed to by the subject, and can therefore be classified as being both non-transparent and, potentially, highly discriminatory. This can have direct, and unfair, implications for people's reputations. There is no good protection against inference in EU law. The GDPR gives a data subject the right to query inferences made about them through rights such as rectification (Art. 16), erasure (Art. 17), objection to processing (Art. 21) and the right to contest decision-making and profiling based on automated processes (Art. 22(3)). But there remain significant barriers to a data subject's rights with regard to derived and inferred data, nonetheless (Wachter and Mittelstadt 2019: 37).
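To show what 'derived data' can look like mechanically, here is a minimal, hypothetical sketch (Python; the field names, thresholds and labels are invented, not taken from any real people-analytics product) of how a system might combine innocuous signals into a derived category that the data subject never declared, cannot easily see, and therefore cannot easily contest:

```python
from dataclasses import dataclass, field

@dataclass
class WorkerRecord:
    # Signals the worker knowingly generates during work (hypothetical fields).
    badge_entries_after_19h: int
    sick_days_last_quarter: int
    wellness_app_avg_sleep_hours: float
    derived: dict = field(default_factory=dict)   # inferred labels live here, unseen by the worker

def derive_labels(rec: WorkerRecord) -> WorkerRecord:
    """Hypothetical 'inference engine': none of these thresholds come from a real
    system; they illustrate how a derived label can drift toward special-category
    territory (health, for instance) without any health data being collected directly."""
    if rec.sick_days_last_quarter >= 5 and rec.wellness_app_avg_sleep_hours < 6:
        rec.derived["possible_health_risk"] = True
    if rec.badge_entries_after_19h > 10:
        rec.derived["flagged_overwork"] = True
    return rec

worker = derive_labels(WorkerRecord(badge_entries_after_19h=12,
                                    sick_days_last_quarter=6,
                                    wellness_app_avg_sleep_hours=5.4))
print(worker.derived)   # {'possible_health_risk': True, 'flagged_overwork': True}
```

The derived dictionary is exactly the kind of data the rectification, erasure and objection rights above struggle to reach: the worker would first have to know that it exists.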

Consent. Questions about how informed consent can be gained have been consistently part of the debates, as the GDPR is rolled out. The concept of consent does not sit easily with the nature of the relationships between workers and management, for perhaps self-evident reasons. Workers in all kinds of sectors within the capitalist wage labour framework rely on salaries for basic survival and may feel compelled to consent to things they would not consent to otherwise.

‘Consent’ was defined within the 1995 DPD (EU 95/46/EC) as ‘any freely given specific and informed indication of his[/her] wishes by which the data subject signifies his agreement to personal data relating to him[/her] being processed’ (EU 1995).

The GDPR definition goes further, where the way that consent is sought, and given, is now also under scrutiny. The GDPR’s Art.4(11) makes it clear that consent is: ‘any freely given, specific, informed and unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her’. Art. 7 and recitals 32, 33, 42 and 43 of the GDPR provide guidance for the ways the Data Controller (usually the organisation or institution which employs workers and collects data about them) must behave in order to attempt to meet the main elements of the consent option for lawfulness.

Recital 32 of the GDPR provides particularly good clarification building on Art. 4 as quoted above:

Consent should be given by a clear affirmative act establishing a freely given, specific, informed and unambiguous indication of the data subject’s agreement to the processing of personal data relating to him or her, such as by a written statement, including by electronic means, or an oral statement. This could include ticking a box when visiting an internet website, choosing technical settings for information society services or another statement or conduct which clearly indicates in this context the data subject’s acceptance of the proposed processing of his or her personal data. Silence, pre-ticked boxes or inactivity should not therefore constitute consent. Consent should cover all processing activities carried out for the same purpose or purposes. When the processing has multiple purposes, consent should be given for all of them. If the data subject’s consent is to be given following a request by electronic means, the request must be clear, concise and not unnecessarily disruptive to the use of the service for which it is provided.

Consent to data collection by a consumer, however, is not equivalent to consent to data collection from a worker. Indeed, 'freely given' can only exist in a situation where a data subject has an authentic say and a real choice. Furthermore, if consent is 'bundled up' and 'non-negotiable', or if a subject cannot refuse or withdraw consent without detriment, then consent has not been freely offered (EDPB 2020: 7).

The legal basis for the concept of consent does not always appear to be compatible with the basis of a company's 'legitimate interest'. If an employer can convincingly prove that a commercial interest is salient, it can process personal data even when this might be considered an invasion of privacy under normal circumstances.

The Art. 29 WP Opinion 2/2017 on data processing at work (Ch. 6.2) includes this extremely insightful phrase:

Employees are almost never in a position to freely give, refuse or revoke consent, given the dependency that results from the employer/employee relationship. Given the imbalance of power, employees can only give free consent in exceptional circumstances. (Art. 29 WP 2017: 21)

There are, therefore, tough issues to consider when thinking about the possibility for workers to consent to AI-augmented monitoring and tracking of their work, given their position in the employment relationship. In practice, it is incredibly difficult to gain informed consent from workers as well as from consumers, such as where there are too many requests for consent, consent is complex and lengthy, or there is the feeling that there is no real choice – ‘consent is often framed as a take-it-or-leave-it offer’ (Custers 2016).

One way to rectify this is to meaningfully provide the right to change one's mind: that is, to deal with the question of worker consent to data collection and processing by giving data subjects the right to change their minds and withdraw consent.

Give data subjects the right to change their minds and withdraw consent.

Providing specific expiry dates for consent would reduce the risk of function creep, raise awareness and pave the way for a greater range of preferences over data use. Further to this, workers should be provided not only with the means to withdraw consent, but also with control over any surveillance software or hardware tracking their work, and with the inviolable right to opt out altogether.
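As a way of picturing what expiring, withdrawable consent could look like in practice, here is a minimal sketch (Python; the record structure, field names and six-month duration are invented for illustration, not taken from the report or from any real HR system) of a consent record that lapses automatically, can be withdrawn at any time, and defaults to 'no processing' when either condition fails:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class ConsentRecord:
    purpose: str                      # one record per processing purpose (GDPR-style granularity)
    given_on: date
    expires_on: date                  # consent lapses automatically, forcing renewal
    withdrawn_on: Optional[date] = None

    def withdraw(self, when: Optional[date] = None) -> None:
        """Withdrawal is always available and takes effect immediately."""
        self.withdrawn_on = when or date.today()

    def permits_processing(self, on: Optional[date] = None) -> bool:
        """Processing is allowed only while consent is current and not withdrawn."""
        on = on or date.today()
        if self.withdrawn_on is not None and on >= self.withdrawn_on:
            return False
        return self.given_on <= on < self.expires_on

# Hypothetical usage: consent for wellness-wearable data, valid for roughly six months.
consent = ConsentRecord(purpose="wellness wearable data",
                        given_on=date(2020, 11, 1),
                        expires_on=date(2020, 11, 1) + timedelta(days=182))
print(consent.permits_processing(date(2020, 12, 1)))   # True: consent is current
consent.withdraw(date(2021, 1, 15))
print(consent.permits_processing(date(2021, 2, 1)))    # False: consent has been withdrawn
```

The design choice worth noting is the default: when consent has expired or been withdrawn, the answer is 'no', rather than processing continuing until someone remembers to object.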

Consent is rarely updated or renewed, even though users' preferences may change.

Where consent to data collection is impossible, it is also sometimes argued that transparency can serve as a justifiable alternative. Yet no real discussion has been held on how much knowledge is necessary for transparency to be fully and meaningfully achieved, nor on whether transparency, when attempted, has any inherent virtue or reliably successful outcome for workers.

Meaningful dialogue and transparency are certainly important for consent to be achieved, but are they a real substitute for consent?

Transparency is not a substitute for consent. Better collective governance and updated versions of lawful consent, rather than reliance on transparency, are necessary to protect data subjects' human rights.

Recognising this ambivalence and tension, on 4 May 2020 the EDPB published 'Guidelines 05/2020 on consent under Regulation 2016/679, Version 1.1', an update of the Art. 29 WP guidelines on consent under Regulation 2016/679 (WP259.01) that had been endorsed by the EDPB at its first plenary meeting.

The 2020 update published by the EDPB indicates that 'if the data subject has no real choice, feels compelled to consent or will endure negative consequences if they do not consent, then consent will not be valid'. The update emphasises the point that consent is 'one of the six lawful bases to process personal data, as listed in Article 6 of the GDPR' (EDPB 2020: 5). The Guidelines 05/2020 go on to emphasise, however, that the requirements for consent under the GDPR are 'not considered to be an "additional obligation", but rather as preconditions for lawful processing' (EDPB 2020: 6).

The prescient update, and the emphasis upon consent as something that should be considered a precondition for lawful processing, is encouraging. The Guidelines also place responsibility on the Controller to ensure that data collection and processing practices are within these legal parameters. Consent can only be considered a lawful basis, this update reads, if the 'data subject is offered control and is offered a genuine choice with regard to accepting or declining the terms offered or declining them without detriment' (EDPB 2020: 5). In my policy guidance, I advocate for a continued debate on whether such consent can be achieved, for example through better collective governance over data processes, reconsidered for any data collection and usage, in explicit communication with trade unions, works councils and other worker representative groups. Better collective governance, rather than reliance on transparency, is necessary to protect data subjects' human rights.

We should be thinking through the possibility of an expanded understanding of 'consent', i.e. something more resembling collective consent, based on meaningful dialogue with social partners. This kind of discussion must be pursued before data collection at work, at the individually consenting level, can plausibly be considered. Valerio De Stefano talks about collective rights as a 'fundamental tool to rationalise and limit the exercise of managerial prerogatives' over individual workers as well as over groups of workers (De Stefano 2019: 41).

Ultimately, the idea of consent must itself be rewritten to allow for workers' data consent: because workers cannot automatically provide meaningful consent individually, the idea of a union-based, or a kind of collective, consent should be considered.

Ultimately, the idea of consent must itself be rewritten to allow for workers’ data consent at a more collective level.

So there are two further ideal types of worker I have advanced in recent studies that have implications not only for how AI is now being used in workplaces and the extent to which its usage in EPM and HR systems can be lawful and consented to, but also for the very formation of AI, where workers' production itself contributes to and creates the datasets that are used, in fact, to train machines for AI.

In conclusion, I want to put forward a challenge for a mobilisation and a rethinking of some of the core concepts and practices surrounding the employment relationship, as AI becomes increasingly prevalent in the various workplace practices and initiatives that I have talked about in my work. Furthermore, I am devising another ideal type of worker, who I am calling the AI Trainer: a worker on the other side of the screen from, for example, social media consumers, where content moderators' activities lead them directly to experience trauma, where workers carry out affective labour on behalf of the consumer, and where their activities are also used for image recognition purposes, to formulate datasets which can be used to train machines for AI.

On the other side of the screen, content moderators who I am also discussing as ‘AI Trainers’, experience severe trauma to protect social media consumers from the same.

This is one of the cases that reveal the prescient points I am making with regard to the materiality of AI and social relations of production, adding to the discussions of stress and discrimination emphasised in my last two commissioned projects, for EU-OSHA and earlier for ILO ACTRAV.

So, in total, the way to 'unbox AI' is to look at how workers are impacted by AI, at the employment relationships and policy frameworks which surround these, where data collection and processing is all-important, and at the working conditions endured by those whom Mary Gray and Siddharth Suri have called the 'ghost workers', who create the datasets which themselves train AI. We should be looking at who we are as humans within the AI at work era; asking to what extent we can be 'smart workers', where we define what 'smart' means, as well as reclaiming 'intelligence' from its assumed position in the very concept of 'artificial intelligence'; identifying what processes of subjectification workers are subject to, and to what extent we can consent to these processes; and, perhaps most importantly, focusing on what we can do to further social protections as this technology advances and is advanced, today.

Policy Options.

First Principles

  • Data protection and First Principles by design and default
  • Proportionality, necessity, transparency
  • Co-determination
  • Prevention over detection
  • Collective data governance

Policy Options

  • Require union/worker involvement at all stages
  • Introduce and enforce co-determination into labour law in all EU member states
  • Businesses to compile certification and codes of conduct
  • Prioritise collective governance

This Policy Options Brief was prepared alongside the European Parliament STOA commissioned report entitled ‘Surveillance and Monitoring: The future of work in the digital era’ by Phoebe V. Moore.
