The Challenge of Emerging Technologies and Data Protection

By Melissa Stock, barrister, Normanton Chambers

The past two decades have seen technology develop at an extraordinary pace. The COVID-19 pandemic has accelerated the adoption of technology in the UK to tackle some of the problems the pandemic itself has created. From the NHS's contact-tracing mobile phone app to virtual medical consultations, there can be no doubt that technology has come to the fore. But with its use comes a plethora of potential problems.

There are concerns about how the personal information gathered is used, both now and in the future. For example, how securely is the data stored, and is the organisation collecting it likely to become the target of a data breach? Is the information being used in a way the person would object to, even where it has no direct impact on them as an individual but could affect them as a member of society? Is the data being used ethically? What processes do organisations have in place to ensure that the way data is used is not ultimately at odds with the purposes people expect? The Cambridge Analytica scandal perfectly illustrates how seemingly innocuous information can be used with profound consequences.

And it is not just the growing use of our data that causes unease. A number of novel technologies currently in development may significantly impact society. The recent growth in computers' processing capabilities and the massive increase in the amount of available data have enabled the progression of machine learning and artificial intelligence ('AI'). With the application of machine learning and AI, technologies such as facial recognition, voice recognition and empathic technology are all emerging. But these new technologies bring important questions about how we are going to tackle the issues their use will entail and, in particular, how the law will keep up with them.

The majority of these new technologies use biometric data: identifiers related to measurable human characteristics, both physiological and behavioural, such as fingerprints, facial measurements, eye blink rate, gait and keystrokes. Cameras, sensors and microphones are now ubiquitous. They are found in all our devices: virtual assistants such as Amazon's Echo ('Alexa') or Apple's Siri, fitness trackers such as Google's Fitbit, and Amazon's Ring doorbell, which captures images and video.

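To make the idea of behavioural biometric data concrete, the sketch below shows how something as mundane as typing rhythm can be reduced to measurable identifiers. It is purely illustrative: the key events, feature names and figures are invented, not drawn from any particular product.

```python
# Illustrative only: reducing typing rhythm (a behavioural biometric)
# to measurable features. All data and names here are hypothetical.
from statistics import mean

# Each event: (key, press_time_ms, release_time_ms)
events = [("p", 0, 95), ("a", 140, 230), ("s", 300, 380), ("s", 450, 560)]

# Dwell time: how long each key is held down.
dwell = [release - press for _, press, release in events]

# Flight time: gap between releasing one key and pressing the next.
flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]

# Aggregates like these can be distinctive enough to identify or
# re-identify a person, which is what makes them biometric data.
profile = {"mean_dwell_ms": mean(dwell), "mean_flight_ms": mean(flight)}
print(profile)
```
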
For lawyers, the use of these technologies raises interesting legal questions. When algorithms are making decisions, who is ultimately responsible when they give rise to unexpected harmful consequences? Decisions made by humans are governed by law, but what of decisions made by machines? There are different types of algorithms: for some, the reasoning between the inputs and outputs is determined by the people developing them; for others, the developers provide the inputs, but the algorithm determines the outputs by applying its own reasoning (often referred to as the 'AI black box'). Who is responsible when an algorithm causes harm or financial loss? Discrimination may also arise; there is a not insubstantial risk that, without oversight, some algorithms will be 'taught' to replicate or even amplify human biases. And for those algorithms that make 'black box' decisions, who is liable when it is not possible to determine how a decision was made?

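The distinction between the two types of algorithm can be made concrete in a few lines of code. The sketch below is hypothetical (the lending scenario, the figures and the choice of scikit-learn are all my own assumptions, not anything from the article): the first decision procedure is fully specified by its developer and can be audited line by line, while the second is induced from historical data, so its internal reasoning cannot simply be read off.

```python
# Hypothetical contrast between a developer-specified rule and a
# learned 'black box' model. Data, names and thresholds are invented.
from sklearn.ensemble import RandomForestClassifier

# Type 1: the reasoning from inputs to outputs is fixed by the
# developer and is explicit and auditable.
def rule_based_decision(income: float, debts: float) -> bool:
    return income - debts > 10_000  # a legible, explainable threshold

# Type 2: the developers supply inputs and historical outcomes; the
# algorithm derives its own mapping. If the history encodes human
# bias, the model can learn to replicate or amplify it.
X = [[40_000, 5_000], [20_000, 18_000], [60_000, 2_000], [15_000, 14_000]]
y = [1, 0, 1, 0]  # past approve/refuse decisions, possibly biased
model = RandomForestClassifier(random_state=0).fit(X, y)

print(rule_based_decision(30_000, 12_000))   # True, and we can say exactly why
print(model.predict([[30_000, 12_000]])[0])  # an answer, but no single legible rule
```
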
While it may seem that we are still some way off from tackling these questions, technologies are being developed and deployed so rapidly that they may need answering sooner than we think. Empathic technology – the tracking of so-called biometric signatures such as pupil size, gait and heart rate to draw conclusions about a person's emotional, mental or physical condition – is already being applied in industries such as gaming, automotive, education and advertising. Voice recognition is used in various everyday applications and in verification mechanisms, despite concerns about 'deepfake' software that can synthesise a natural-sounding fake voice to mimic the real voice of a specific person.

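A toy example of the kind of inference empathic technology draws is sketched below. The thresholds and labels are entirely invented, and real systems use far more sophisticated models, but the legal point stands either way: physiological signals go in, and a claim about a person's inner state comes out.

```python
# Invented thresholds and labels: a caricature of how empathic
# technology maps biometric signatures onto an inferred inner state.
def infer_state(pupil_mm: float, heart_rate_bpm: int, gait_speed_mps: float) -> str:
    if heart_rate_bpm > 100 and pupil_mm > 5.0:
        return "stressed"
    if gait_speed_mps < 0.8:
        return "fatigued"
    return "neutral"

# The subject never states how they feel; the system asserts it for them.
print(infer_state(pupil_mm=5.6, heart_rate_bpm=112, gait_speed_mps=1.3))
```
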
So far, it is data protection law that has been relied upon to challenge the use of new technologies. The recent case of R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058 in the Court of Appeal concerned the use of automated facial recognition technology: the capturing of images of people's faces in real time. South Wales Police were testing software called 'NeoFace Watch' in public places and at events.

The Court of Appeal found that the use of automated facial recognition technology did not, in principle, violate Article 8 of the European Convention on Human Rights (the right to respect for private and family life). For the purposes of data protection, it found that the police's data protection impact assessment had failed to properly consider who could be placed on the software's watchlist and where the technology could be deployed, but the court did not find that the use of the technology was itself an unlawful processing of personal data.

Instead, it found that there were fundamental deficiencies in the way the technology was being used. In particular, the police had not considered the scientific evidence that facial recognition software can often exhibit significant bias against women and ethnic minorities. For this reason, the force was found to be in breach of its public sector equality duty under the Equality Act 2010.

How far data protection can be relied upon to challenge new technologies is yet to be determined. While the UK GDPR places certain obligations on organisations using personal data, the burden is on the individual 'data subject' to challenge the organisation, and the individual must first be aware that their data is being used unlawfully. However, most of these technologies are deployed on a large scale: the impact on any one individual may be imperceptible, or, as in the Cambridge Analytica scandal, the harm may be societal rather than falling on one particular person. There are also constraints on bringing group litigation in data protection. Unlike in the United States, in England and Wales it is not possible to bring 'class action' lawsuits except in competition law. There has been a recent attempt to use the representative action procedure to bring a group claim in data protection.

However, the Supreme Court ruled in that case (Lloyd v Google [2021] UKSC 50) that 'loss of control' of personal data, without proof of pecuniary or non-pecuniary damage, cannot be treated as a uniform loss that would permit a representative action. It will therefore be challenging to bring group litigation where a large group of individuals has been affected in different ways by the unlawful use of their personal data. Without an 'opt-out' class action mechanism, group litigation can only proceed where individuals 'opt in'. And because the damages awarded in data protection claims are typically low, such claims may fail to attract the numbers needed to make group litigation feasible, or attractive to litigation funders.

There is currently no UK legislation that provides specific rules governing liability for AI. Some argue that the current legal framework suffices and that the common law can accommodate new commercial creations. However, where exactly AI and the use of new technologies fit into the law of negligence, product liability, intellectual property, contract and data protection (among others), and how flexible the common law will prove to be, is not yet clear.

The UK government has launched a 'National AI Strategy', which aims to accelerate AI innovation over the next decade through investment and by establishing governance that promotes AI technologies. While the strategy does not specify the intended regulatory framework, a White Paper anticipated in 2022 is expected to provide more detail. Whatever the future may hold, it will certainly be interesting for those lawyers working in this field.

Melissa Stock is a barrister who specialises in privacy and data protection. She has written a number of articles on this area of law and a book on the 'Right to be Forgotten'. Melissa also runs the blog www.privacylawbarrister.com. Her forthcoming book on biometric data will be available from Law Brief Publishing at the beginning of 2022.
