
AI, the EU and US


I cannot pretend I know much about AI because whenever I attempt to YouTube my way into the subject, the examples of its potential impact become increasingly difficult to contemplate: self-driving cars, radiologists, Wimbledon commentators and robot lawyers. As things turn out, seemingly not the latter, at least in unregulated form. I have not used AI myself but I have a horrible feeling it may have used me.

At the risk of self-aggrandisement, you could easily be mistaken for believing that this article was written by ChatGPT or some other AI software. I suspect that as AI progresses it will become increasingly difficult to separate the “men from the toys”.

According to Nature, the famous scientific journal, this week the fight has come down to one between the robots: open source versus closed source AI. The former is created by the hobbyist and the latter by the large corporates. With names like Bloom and LLaMA (robots haven’t yet worked out title case so there’s nothing to worry about) they must be friendly.

But then the terms of engagement associated with its proposed Regulation suggest otherwise. They are abstract and difficult to pin down. Take Art 5(1) of the draft EU Regulation:

“the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm”

Were one to substitute the words “the press” for “AI system”, then the statement would immediately become recognisable to anyone. In a post-Brexit world it is difficult to imagine a society in which our reality has not been materially distorted by subliminal techniques.

But the idea of a robot using subliminal techniques to distort human behaviour? “You’ve been a very very naughty little robot, haven’t you! You have been using those subliminal techniques again. How many times do I have to tell you…etc”.

It is similarly surreal to comprehend the types of bad conduct the European Parliament contemplates as prohibited uses of AI. One is the assessment of trustworthiness! Thank God, otherwise we would all be out of a job at the Bar. Another is the use of real-time biometric identification systems in public places. I am not quite certain how we will be able to use our mobile phones going forward. Mine always needs my thumbprint to operate it. Although I think I might be placing too much stress on the “use in public places”. That’s just human.

However, “high-risk” AI, as defined by the new Regulation, includes the operation of critical infrastructure, education and vocational training (sounds familiar), recruitment, benefits assessment and law enforcement. It doesn’t mention brain surgery. I consider that to be my own personal critical infrastructure! Thankfully, though, it does include the administration of justice. The human Bar can breathe a sigh of relief because that is defined as:

“AI systems intended to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts.”

I think we all heartily agree that is a working definition of a barrister. We do assist the Court but I fear that the facts are very rarely concrete. So no robot advocates will be released into the world unless highly regulated like other forms of “high risk” AI. You contentious solicitors and non-contentious lawyers had better look out. It’s behind you!

A regulated product which contains AI is also considered “high risk”. But before I get ahead of myself (and a robot wouldn’t do that, I keep seeking to persuade you), the definition of AI is also very interesting:

‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;

That definition implicitly excludes “AI-defined objectives”! But my YouTube videos tell me that AI is fixated on only one objective – to take over the world. Somehow, that well-known cyborg-driven mania has been overlooked by, I assume, the innocent humans of the European Parliament who drafted the Act. Or is the Act the handiwork of an extremely cunning machine destined to circumvent law and regulation? Did I mention the press… oh yes of course I did.
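To illustrate just how broad that definition is, here is a toy sketch of my own devising (nothing of the sort appears in the Act, and the data and objective are invented): a few lines of perfectly ordinary statistical code which would, arguably, fall within it.

```python
# A deliberately trivial "AI system": software developed with a machine
# learning approach (Annex I) that, for a human-defined objective, generates
# predictions influencing the environment it interacts with.
# Hypothetical illustration only; the numbers are invented.

from sklearn.linear_model import LinearRegression

# Human-defined objective: predict how far a hearing will overrun (in minutes)
# from the number of authorities in the bundle.
bundles = [[5], [10], [20], [40]]    # inputs: authorities in the bundle
overruns = [12, 25, 48, 95]          # outputs: minutes of overrun observed

model = LinearRegression().fit(bundles, overruns)   # the "machine learning approach"
print(model.predict([[30]]))         # a "prediction" influencing how I plan my day
```

On the draft wording, even that would appear to qualify, which rather makes the point about abstraction.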

In fact, that dastardly robot managed to publish the draft without the promised Annex I. However, I outwitted it by conducting an internet search and finding it on another website. An intentionally over-laboured explanation. The “techniques” include:

“Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning.”

The term “reinforcement learning” conjures up a digital carrot and a digital stick, well known to that humanoid Pavlov, which makes one feel that AI is clearly something to fear. Not only does it sound like it’s coming to take over the world, but it also has distinctly human-like attributes. Why would a machine require reinforcement learning? What are the inducements? Megabites?
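For anyone curious to see the carrot and stick in miniature, here is another toy sketch of my own (again, nothing of the kind appears in the Act): an agent nudged towards one action purely by a numerical reward, the inducement being a number rather than a megabite.

```python
# A bandit-style caricature of reinforcement learning, of my own devising:
# the agent's estimate of each action drifts towards the reward it receives,
# and it mostly repeats whichever action it currently values more highly.

import random

values = {"carrot": 0.0, "stick": 0.0}    # the agent's current estimates
rewards = {"carrot": 1.0, "stick": -1.0}  # the trainer's carrot and stick
learning_rate, exploration = 0.1, 0.2

for _ in range(200):
    if random.random() < exploration:              # occasionally explore
        action = random.choice(list(values))
    else:                                          # otherwise exploit what it knows
        action = max(values, key=values.get)
    reward = rewards[action]                       # receive carrot or stick
    values[action] += learning_rate * (reward - values[action])   # reinforce

print(values)  # "carrot" ends up prized; "stick" is learned to be worth avoiding
```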

By analogy, most students would accept that the Bar course, for example, frequently involves supervised and unsupervised reinforcement learning. I am not entirely sure about “deep learning”. As you can tell, I, for one, am a total stranger to “deep learning”. When I searched for the term within the Act, it kept taking me to “deep fake”, a term which is in essence defined as the manipulation of reality. One need only read the Act to feel that the world is in a spin.

But the abstract nature of the language of the Act makes the fear of AI even greater. The robots will repeatedly learn how to find the loopholes in that abstract language. Robots are born free but are everywhere in chains, or is it now blockchains?

More worryingly, the Act contemplates that Member States monitor the risks posed by AI systems and then manage those risks. And all this whilst the robots are wreaking havoc all over Europe and the world. And don’t forget those quantum computers which can undertake billions of computations simultaneously by deploying the spooky science of quantum entanglement. So we will all be fiddling whilst Rome burns. But it will not be just Rome.

Reassuringly, we are informed that any investigation into their crazed machinations will respect the confidentiality of data, trade secrets and intellectual property rights. As an IP practitioner myself, whilst I am wholly dedicated to the cab rank rule, I absolutely refuse to act on behalf of anyone calling themselves “the Terminator”.

If our naivety were not already amply demonstrated by this patient review of the oncoming car crash, it is reinforced by terms such as “regulatory sandboxes”. We will all be playing in the sandpit when they, the machines, come marching into our public places, if they are not already here. Apparently, a “sandbox” is a testing environment for new software.

And when it all goes wrong, who will be to blame? From the rubble, a sky-clawing hand will appear, seeking the name of the provider and, quite unbelievably, “the electronic instructions for use” according to the Regulation. Luckily, those are always kept in the kitchen drawer with the ones for the food processor, vacuum cleaner and microwave.

As Arthur C. Clarke, the famous sci-fi author, said: “In the struggle for freedom of information, technology, not politics, will be the ultimate decider.” Politics is not considered to be “high risk” AI by the draft Regulation.

Professor Mark Engelman

Notre Dame University

St Edmunds College Cambridge

A Member of The Thomas Cromwell Group

4-5 Gray’s Inn
