phoebevmoore

Global judgements and ideas.

Data subjects, digital surveillance, AI and the future of work

Watch & read my keynote to the European Parliament Panel for the Future of Science and Technology.

The full Report (approximately 200 pages) and the Policy Brief are available here

Presentation text (also please see slides within the video presentation linked above):

This Project was commissioned by, and written for, the Panel for the Future of Science and Technology (STOA), and managed by the Scientific Foresight Unit of the Directorate for Impact Assessment and European Added Value, within the Directorate-General for Parliamentary Research Services (EPRS), of the Secretariat of the European Parliament.

I am deeply interested in these subjects because they potentially impact the day-to-day lives of most people. I have previously written about artificial-intelligence-augmented tools and applications and other digitalised transformations at work, and their implications for health and safety and workers’ rights, for the UN’s International Labour Organisation (on psychosocial risks and violence in digitalised work, see here) and for the European Agency for Safety and Health at Work (EU-OSHA) (AI-augmented workplace technologies and OSH risks report), with which I continue to work. Indeed, I am now beginning a project with the German Federal Institute for Occupational Safety and Health on ‘Advanced Robotics and AI-based Systems for Automation of Tasks and Occupational Safety and Health’ for EU-OSHA.

We are currently seeing a rise in the use of algorithms, machine learning, AI and biometrics for HR-based people analytics with predictive and prescriptive intelligence; for health and safety pursuits oriented toward prevention before detection and toward ergonomics; and in a range of new technologies where machines are expected to take autonomous decisions as well as to provide collaborative and assistive interventions.

My aims in this project have been to trace how we arrived at this historical moment and to identify the current and imminent challenges we face.

AIMS.

  1. Interrogate the concept ‘surveillance’ & digitalised applications in workplaces/spaces
  2. Identify changes to the employment relationship due to increased monitoring and surveillance
  3. Outline surrounding legal instruments, policy parameters with some emphasis on GDPR and a range of relevant legal cases and breaches in data privacy and protection for workers
  4. Discuss some of the tensions in legal principles e.g. inference, consent, wellbeing, inviolability, command
  5. Bring these issues to light in specific country case studies – best practices
  6. Look at workers’ experiences based on a series of in-person semi-structured interviews
  7. Finally, based on these research activities and findings, provide a list of First Principles and Policy Options for MEPs.

Methodology. I selected a mixed methods approach, using primary and secondary materials addressing macro-, meso- and micro-levels of analysis.

Macro: I prepared a literature review of the mostly sociological material about ‘surveillance’ identifying how the ‘work surveillance’ field has evolved synchronously to the adoption of computation machines into workspaces;

Meso: I sought out country case studies that identify how a series of EU countries approach data protection and privacy rights;

Micro: I collected fieldwork findings involving semi-structured interviews with several workers in a variety of industries and professions about their experiences of work monitoring.

Findings I: Employers.

The roots of worker surveillance are often attributed to the interwar popularisation of scientific management. Later experimentation with more invasive technologies accelerated in the 1980s and, at first, mostly impacted women workers, who held the majority of jobs in routine and clerical tasks and in call and contact centre work. Today there is a significant increase in usage, now augmented by machine learning and other potentially more invasive and seemingly accurate tools and applications. There are no precise numbers yet, but companies’ procurement of ‘bossware’ has surged in recent months as home and mobile working have spread in the Covid-19 environment.

Findings II: EU/local governance.

The GDPR is a highly significant advancement in the history of data protection and privacy and has been widely perceived as a game-changer across the EU and the world, because it strengthens the rights of individuals and increases the obligations placed on organisations.

The GDPR’s revisions to the Data Protection Directive 95/46/EC benefit workers in a number of ways. The Regulation:

  1.   Permits data transfers to non-party states, but only when personal data is protected;
  2.   Requires data minimisation and proportionality;
  3.   Provides rights in the area of automated decision-making, where algorithmic transparency is recommended;
  4.   Requires prompt notification of data breaches.

The GDPR carries stricter enforcement mechanisms than the Directive: each country must have a national supervisory authority to check compliance. In fact, significant fines are associated with non-compliance, where fines for infringements run up to a maximum of €20 million (about £18 million) or 4% of annual global turnover, whichever is greater.
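As a rough illustration (not legal advice), the ‘whichever is greater’ rule for the upper bound of these fines can be sketched as follows; the function name and figures for the two example companies are hypothetical:

```python
# Sketch of the GDPR's upper fine bound for the most serious infringements,
# as described above: EUR 20 million or 4% of annual global turnover,
# whichever is greater.
GDPR_FINE_CAP_EUR = 20_000_000  # fixed ceiling: EUR 20 million
TURNOVER_RATE = 0.04            # or 4% of annual global turnover

def max_gdpr_fine(annual_global_turnover_eur: float) -> float:
    """Return the maximum possible fine: whichever bound is greater."""
    return max(GDPR_FINE_CAP_EUR, TURNOVER_RATE * annual_global_turnover_eur)

# A company with EUR 1 billion turnover: 4% = EUR 40 million exceeds the cap.
print(max_gdpr_fine(1_000_000_000))  # → 40000000.0
# A small firm with EUR 10 million turnover: the EUR 20 million floor applies.
print(max_gdpr_fine(10_000_000))     # → 20000000
```

The point of the dual bound is that the fixed €20 million ceiling bites for smaller organisations, while the 4% turnover rule scales the penalty for large multinationals.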

A recent example, from May 2020, is the Dutch Data Protection Authority’s fine of €725,000 against a company for scanning employees’ biometrics via a fingerprint time-and-attendance system. The Autoriteit Persoonsgegevens ruled that the company had not established the exceptional grounds that would have provided a legal basis for the system’s use.

Findings III: Workers.

There is evidence of worker surveillance in most industries and most professions and jobs.

Worker Cameos:

  • Content moderators and data trainers, whom I call artificial intelligence trainers (AITs), appear to hold the most surveilled and psychosocially damaging jobs in digitalised workplaces/spaces at the moment.
  • No worker interviewed believed that they had had meaningful communications with DPOs, nor that they had been provided with adequate, transparent information regarding data collection.
  • Where workers gave consent, they did not realise there was any option to refuse. Nor were they aware of, for example, withdrawal rights.

Without going into great detail, some of the GDPR provisions that benefit workers are:

  • Article 22 ‘Automated individual decision-making, including profiling’

The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.[1]  

Recital 71, which relates to Art. 22, also advises against elements of e-recruiting, and against unreasonable profiling and digitalised prediction of performance, reliability, behaviour, location and movements.

  • Article 12 ‘Transparent information, communication and modalities for the exercise of the rights of the data subject’. The Controller is responsible for making transparent the data subject’s exercise of their rights (Art. 12: 1-2) and should respond to data subjects’ requests for information within one month. Information must be given free of charge.
  • Article 5 ‘Principles relating to processing of personal data’: lawfulness, fairness, transparency, data minimisation, purpose limitation, accuracy, integrity, confidentiality. This article also indicates the Controller’s accountability for compliance and the importance of putting the right technical and organisational systems in place to ensure these principles are met.
  • Articles 16, 17, 21, 22(3): The GDPR gives a data subject rights to query inferences made about them, linked to rectification (Art. 16), erasure (Art. 17), objection to processing (Art. 21) and the right to contest decision-making and profiling based on automated processes (Art. 22(3)).[2]

Slide 4: FIRST PRINCIPLES AND POLICY OPTIONS

I present now my derived First Principles and suggestions for Policy Options based on my findings.

My First Principle is an overarching one: it requires the direct involvement of union representatives in co-decision-making; the continuous involvement of representative organisations at all stages of the data life cycle, in co-creation and co-design; and meaningful involvement in the execution phases of any worker data collection and processing intervention.

My First Principles and Policy Options advocate ensuring data transparency; meaningful assessment and discernment of the proportionality and necessity of practices before a Data Protection Impact Assessment is carried out, which Ian Brown has also emphasised; data accuracy; and worker protection facilitated by checks on practice, time-boundedness and appropriate storage, all monitored and enforceable.

All of the activities in support of these Principles must be collectively decided and consented to by unions, and designed and implemented in partnership with worker representatives. Worker representatives must be appropriately trained, with expertise in data privacy and protection alongside DPOs, to ensure these Principles are met. Unions and works councils can work with the DPO to formulate technical solutions that ensure compliance with the GDPR and surrounding rights; for example, workers should be able to explicitly control any device or software that monitors their work and associated behaviour, as well as have an authentic means to opt out.

A technical shop steward should be appointed to work in parallel and alongside a DPO in all organisations. I also suggest the reconsideration of the very ontology of the idea of consent.

The GDPR is written with the individual as its focal subject. This is similar to the way the 1978 French Law on Freedom and Information Technology portrays the individual citizen and her right to privacy. Consent is usually perceived as a unidirectional arrangement and considered intrinsically impossible in the employment relationship. While this is not wrong, data collection operates at more levels than the discrete individual, and its use will impact people individually as well as groups of all kinds, qualities and quantities. Even where data about individuals is anonymised, machine learning allows a researcher, scientist or boss to make judgements about patterns as that data is parcelled out.

The bigger the dataset, the more powerful it is. Approaches and responses to data and its collection should therefore not only be individualised, as a consent framework expects, but should also be collective.

Collective rights are a ‘fundamental tool to rationalise and limit the exercise of managerial prerogatives’ over individual workers as well as over groups of workers (De Stefano 2019: 41). Ultimately, then, the idea of consent must itself be rewritten to allow for workers’ data consent: because workers cannot automatically provide meaningful consent individually, the idea of a union-based, or collective, consent should be considered.

In conclusion:

The aim of this project Report is to build on the discussions that are already emerging, identifying a series of First Principles and Policy Options, based on my findings, as a guide for workers’ rights in the imminent future of work, where surveillance, tracking and monitoring are becoming increasingly familiar experiences for workers across industries in the EU, and across the world.


[1] Art. 22(1). Recital 71 also advises against e-recruiting practices without any human intervention. Such processing includes profiling that consists of any form of automated processing of personal data evaluating the personal aspects of a natural person, in particular to analyse or predict aspects concerning the data subject’s performance at work… reliability or behaviour, location or movements, where it produces legal effects concerning him or her or similarly significantly affects him or her.

[2] The right to reasonable inference sits in tension with the range of decision-making capabilities made possible by new technologies of investigation and measurement. Inferred or derived data is data whose exact categories are not explicitly made transparent to the data subject but upon which decisions are made or conclusions about reputation are established. Thus, inferences and decisions can be made about a data subject which are neither entirely accurate nor agreed by the subject, and which can therefore be classified as discriminatory.


This entry was posted on September 19, 2020.