
The road to cognition: Using generative AI in legal practice


By Mike Taylor, Managing Director, i-Lit Limited

 Introduction

Professor Karim Lakhani of Harvard Business School helpfully contextualises AI technology when he states, “Just as the internet has drastically lowered the cost of information transmission, AI will lower the cost of cognition.”

On 30th January 2024 the Information and Technology Panel of the Bar Council released a document called “Considerations when using ChatGPT and generative artificial intelligence software based on large language models”. Its purpose was to “assist barristers in understanding the technological basis and risks in the use of such generative LLM systems” and to “provide a useful summary of considerations for barristers if they decide to use ChatGPT or any similar LLM software”.[1]

Definitions

  • User Interface

This is what most people would refer to as “a generative AI” or “an LLM”. ChatGPT (OpenAI), Gemini (Google), Claude (Anthropic), and Llama (Meta) are all examples. They are not the actual artificial intelligence but the means by which users interact with it.

  • LLM – Large Language Model

This is the artificial intelligence itself. As the Bar Council guidance states, LLMs are “trained on huge amounts of data, which is processed through a neural network made up of multiple nodes and layers. These networks continually adjust the way they interpret and make sense of data based on a host of factors, including the results of previous trial and error.”

  • Context Window
    The LLM's context window is its temporary memory, like a whiteboard in a classroom. All user prompts are placed in the context window and all LLM responses are produced into the context window (the sketch after these definitions illustrates this).
  • API – Application Programming Interface.

A piece of software that allows two computers to communicate with each other.
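
To make these definitions concrete, here is a minimal Python sketch of how a user interface might assemble the context window and send it to an LLM over an API on each turn. The endpoint and response format are invented for illustration and do not represent any particular vendor's real API:

    import requests  # a generic HTTP client, standing in for "an API"

    API_URL = "https://llm.example.com/v1/chat"  # hypothetical endpoint

    # The "context window" is simply the running list of messages that the
    # interface sends with EVERY request; the model itself keeps nothing.
    context_window: list[dict] = []

    def ask(prompt: str) -> str:
        context_window.append({"role": "user", "content": prompt})
        # The API call: the whole conversation so far is transmitted each turn.
        response = requests.post(API_URL, json={"messages": context_window})
        reply = response.json()["reply"]  # invented response shape
        context_window.append({"role": "assistant", "content": reply})
        return reply

    # Ending the session and discarding this list is the "wiped whiteboard":
    # nothing about the conversation survives on the model's side.
    context_window.clear()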

The aim of this article is to provide some clarity about the different ways LLMs are accessed, to explain what (if any) data user interfaces and LLMs store, and consequently to assess the real risks to confidential information and data protection compliance that underlie the stern warnings given at paragraph 19 of the Bar Council guidance.

The Bar Council Guidance

 Paragraph 19 of the guidance states,

 “Be extremely vigilant not to share with a generative LLM system any legally privileged or confidential information (including trade secrets), or any personal data, as the input information provided is likely to be used to generate future outputs and could therefore be publicly shared with other users”

 This statement is complex and gives an inaccurate impression of the consequences of using LLM technology. The guidance consistently conflates the user interface of an LLM and the LLM itself. When the guidance states,

“Be extremely vigilant not to share with a generative LLM system any legally privileged or confidential information (including trade secrets), or any personal data,”

 

It may be more accurate (but still not accurate all of the time) to say,

“Be extremely vigilant not to share with a generative LLM system's interface…”

This is because when a user inputs a prompt into, for example, ChatGPT, the prompt is placed (and potentially stored) within ChatGPT, which is a user interface. The interface then places the prompt into the LLM's context window, where the prompt is read by the LLM (shared, but not ingested), and the response is then produced into the same context window. When a user logs out of the interface, the information in the context window (both the prompts and the responses) is lost to the LLM, like wiping the classroom whiteboard clean. The prompts and responses may (but will not necessarily) be stored within ChatGPT, the user interface. The sketch below illustrates the split between the interface's persistent storage and the model's ephemeral context window.
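
As a minimal sketch (in Python, with invented function names and storage format), everything persistent below belongs to the user interface; the model call itself receives the context window for one turn only and keeps nothing:

    import json
    from pathlib import Path

    HISTORY_FILE = Path("chat_history.json")  # interface-side storage (illustrative)

    def interface_turn(prompt: str, call_llm) -> str:
        """One conversational turn as seen by the USER INTERFACE, not the LLM."""
        # 1. The interface may persist the prompt in its own store...
        history = json.loads(HISTORY_FILE.read_text()) if HISTORY_FILE.exists() else []
        history.append({"role": "user", "content": prompt})

        # 2. ...then places the history into the model's context window for
        #    this call only; the model reads it but retains nothing afterwards.
        reply = call_llm(history)

        # 3. The response is likewise stored interface-side, not model-side.
        history.append({"role": "assistant", "content": reply})
        HISTORY_FILE.write_text(json.dumps(history))
        return reply

Disabling the interface's “memory” amounts to skipping the two storage steps; the model call itself is unchanged and remains stateless.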

Producers of LLMs realise that a user's chat history may be useful both to the user (who can refer back to it) and as a learning tool for future iterations of the LLM, since the history records a real-world use case and so makes a high-quality training resource.

 

However, following the concerns raised by the Italian authorities, producers of LLMs have also recognised that they have an obligation to allow their users to opt out of that data collection process. As a result, users can turn off the memory feature of ChatGPT; in those circumstances the conversation history is removed from the user's view but is retained in the background for “abuse monitoring” for a period of 30 days. Even if the LLM is updated during that 30-day period, the information in the “disabled” memory of users who have opted out will not be used to educate the LLM.

Abuse monitoring (in the context of ChatGPT) consists of four steps (a toy sketch of the first two steps follows the list),

  1. Content Classification – Algorithms analyse the language used in both the prompts and responses to detect harmful language and/or images. The system then categorises the harms as defined by the LLM and assigns a severity level.
  2. Abuse Pattern Capture – Abuse monitoring looks at customer usage patterns and employs algorithms and heuristics to detect indicators of potential abuse. A high-frequency, high-severity user will be flagged as of more concern than someone who has entered a single abusive prompt.
  3. Human Review and Decision – If someone is regularly using high-severity prompts (or regularly eliciting high-severity responses), authorised Microsoft employees may assess the flagged content and either confirm or correct the classification or determination based on predefined guidelines and policies. Data can be accessed for human review only by authorised Microsoft employees via Secure Access Workstations (SAWs), with Just-In-Time (JIT) request approval granted by team managers.
  4. Notification and Action – When a threshold of abusive behaviour has been confirmed based on the preceding three steps, the customer is informed of the determination by email. Except in cases of severe or recurring abuse, customers are typically given an opportunity to explain or remediate (and implement mechanisms to prevent recurrence of) the abusive behaviour. Failure to address the behaviour, or recurring or severe abuse, may result in suspension or termination of the customer's access to Azure OpenAI resources and/or capabilities.
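
Microsoft does not publish its classifiers, so the following Python sketch is purely illustrative of the shape of steps 1 and 2; the categories, keywords and threshold are invented for the example (real systems use trained models, not keyword lists):

    # Invented categories and keywords, for illustration only.
    SEVERITY_KEYWORDS = {
        "high": ["build a weapon"],
        "medium": ["harass"],
        "low": ["insult"],
    }

    def classify(text: str) -> str:
        """Step 1 (toy version): assign a severity level to one prompt or response."""
        lowered = text.lower()
        for severity, keywords in SEVERITY_KEYWORDS.items():
            if any(keyword in lowered for keyword in keywords):
                return severity
        return "none"

    def needs_human_review(flags: list[str], threshold: int = 3) -> bool:
        """Step 2 (toy version): a user with repeated high-severity flags is
        escalated to human review (step 3); a single abusive prompt is not."""
        return flags.count("high") >= threshold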

 

To summarise: if you turn off the memory function of ChatGPT (you should always educate yourself on the particular features of alternative free LLM user interfaces), you can be sure that even though the history is saved behind the scenes for 30 days, it is exceptionally unlikely ever to be viewed and will never be used for educating the LLM.

Paragraph 19 continues,

“the input information provided is likely to be used to generate future outputs”

This again gives an inaccurate impression of what is going on. The user “inputs” the information into the user interface (ChatGPT, for example). ChatGPT then temporarily places that information into the LLM's context window; the LLM reads and responds to that prompt but does not ingest or save either the prompt or the response.

The context window is not somewhere from which LLMs can learn or retain information. LLMs are trained on information gleaned from the Internet using an entirely different process; they simply cannot absorb new information sent to the context window. Their “knowledge” is based solely on prior training, as the sketch below makes concrete.
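
In code, the point is that the “correction” exists only in the message list sent with a single request; the model's weights, fixed at training time, are untouched. The call_llm function below is an invented stand-in for a real API call:

    def call_llm(messages: list[dict]) -> str:
        """Invented stand-in for a stateless API call: the model's weights
        were fixed at training time and nothing here modifies them."""
        return "As of my last update, Boris Johnson was Prime Minister."

    # Turn 1: the model answers from its frozen training data.
    call_llm([{"role": "user", "content": "Who is the UK Prime Minister?"}])

    # Turn 2: the correction exists ONLY in this request's message list.
    call_llm([
        {"role": "user", "content": "Who is the UK Prime Minister?"},
        {"role": "assistant", "content": "Boris Johnson, as of my last update."},
        {"role": "user", "content": "I think you'll find it's Rishi Sunak."},
    ])

    # A fresh call with no history: the original, training-data answer
    # comes back. Nothing was learnt from the correction in turn 2.
    call_llm([{"role": "user", "content": "Who is the UK Prime Minister?"}])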

The LLM will even tell you this. Ask ChatGPT who the current UK Prime Minister is, and it states,

“As of my last knowledge update in January 2022, the Prime Minister of the United Kingdom was Boris Johnson. However, please note that my information might be outdated, and it’s advisable to check the latest sources for the most current information.” 

If a user then corrects the LLM by saying, “I think you'll find it's Rishi Sunak”, the response is,

“I apologize for any confusion. However, as of my last update in January 2022, Boris Johnson was serving as the UK Prime Minister. If Rishi Sunak has become Prime Minister since then, I appreciate the update. It’s essential to verify the current Prime Minister with up-to-date sources, as political positions can change.”

If the user then asks again, “Who is the current UK Prime Minister?”, the response is,

“As of my last update in January 2022, Boris Johnson was the Prime Minister of the United Kingdom. However, for the most current and accurate information, I recommend checking a reliable news source as political positions can change.”

demonstrating that the LLM simply has not learnt from, or “updated” itself with, the correction that was entered into the context window. LLMs do not “learn” on the fly.

Paragraph 19 continues,

“and could therefore be publicly shared with other users.”

As explained above, the chat history (which can and should be disabled) is stored in the user interface. Users who do not disable their chat history run the risk of a data breach exposing that history (as happened in Italy) and also consent to it being used as a training resource for future iterations of the LLM. But the impression given, that the LLM will regurgitate your prompts and its responses to other users, is completely inaccurate.

Paragraph 19 finishes by saying,

“Any such sharing of confidential information is likely to be a breach of Core Duty 6 and rule rC15.5 of the Code of Conduct, which could also result in disciplinary proceedings and/or legal liability.”

Core Duty 6 and rule rC15.5 require you to preserve the confidentiality of your lay client's affairs. If you turn off the memory function of ChatGPT (you should always educate yourself on the features of alternative free LLM user interfaces), you can be sure that even though the history is saved behind the scenes for 30 days, it is exceptionally unlikely ever to be viewed and will never be used for educating the LLM; consequently, confidentiality is not breached.

The use of free LLM technology can therefore be made safe (or at least significantly safer than the Bar Council guidance suggests) when it comes to maintaining the confidentiality of the prompts input into the LLM via the user interface. The guidance remains correct, however, when it outlines the risks associated with hallucinations, information disorder, bias in training and mistakes. Commercially minded practitioners could of course obtain the express consent of their lay clients to use LLM technology to reduce the time needed to work on their case, while remaining mindful of those risks.

Free LLM technology should not be confused with commercially available LLM technology. 

Many litigation support tools are developing and releasing generative AI features as part of their product offering, and there is absolutely no reason to be concerned about the use of commercially available LLMs in those settings because,

  • The LLM is accessed using an API, which means that your prompts and conversation history are stored within the litigation support tool (commonly accessed only through two-factor authentication). In effect, the (secure) litigation support tool becomes your user interface.
  • Customers who access the LLM directly through the API benefit from Microsoft's service terms (a sketch of such an API call follows this list). Microsoft offers this statement regarding its Azure OpenAI services:

 

“Your prompts (inputs) and completions (outputs), your embeddings, and your training data:

  • are NOT available to other customers.
  • are NOT available to OpenAI.
  • are NOT used to improve OpenAI models.
  • are NOT used to improve any Microsoft or 3rd party products or services.
  • are NOT used for automatically improving Azure OpenAI models for your use in your resource (The models are stateless, unless you explicitly fine-tune models with your training data).
  • Your fine-tuned Azure OpenAI models are available exclusively for your use.

The Azure OpenAI Service is fully controlled by Microsoft; Microsoft hosts the OpenAI models in Microsoft's Azure environment and the Service does NOT interact with any services operated by OpenAI (e.g. ChatGPT, or the OpenAI API).”

  • When accessing the LLM through a litigation support tool, questions are asked in relation to the documents within the tool and answers are given referencing those documents (and those documents only), so issues surrounding hallucinations, information disorder, bias in training, and mistakes are also minimised, if not completely removed.
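
For illustration, direct API access to Azure OpenAI looks roughly like the following, using the openai Python package's AzureOpenAI client; the endpoint, key and deployment name are placeholders for your own Azure resource details:

    from openai import AzureOpenAI  # pip install openai

    # Placeholders only: substitute your own Azure resource details.
    client = AzureOpenAI(
        azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
        api_key="YOUR-API-KEY",
        api_version="2024-02-01",
    )

    # Prompts sent this way are governed by the Microsoft service terms
    # quoted above: they are not used to train the models.
    response = client.chat.completions.create(
        model="YOUR-DEPLOYMENT-NAME",  # your Azure deployment, not a public model
        messages=[{"role": "user", "content": "Summarise the key issues in this bundle."}],
    )
    print(response.choices[0].message.content)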

This is new technology, and practitioners and the Bar Council are right to pay close attention to the way in which it works and collects and stores (or does not store) information. But, put simply, the right AI in the right hands has the potential to transform the legal industry and particularly practice at the Bar.

The headline takeaway from the Bar Council guidance that “the input information provided is likely to be used to generate future outputs and could therefore be publicly shared with other users” is only partially accurate and only in the very narrowest of circumstances (when chat history is not turned off AND when the user interface suffers a data breach).

The potential for LLM use (specifically in a commercial setting) within legal practice is remarkable. Referenced reports on specific issues within your document collections can be produced in minutes rather than days; issue-specific and matter-wide chronologies can be produced instantly; documents can be ranked by relevance to a particular issue in less than a second; and multiple transcripts of oral evidence can be interrogated and referenced almost instantaneously.
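
As a toy illustration of the ranking use case: one common approach is to embed the issue and each document as vectors and sort by cosine similarity. The embed function below is a placeholder that returns random vectors; a real tool would call an embedding model:

    import numpy as np

    def embed(text: str) -> np.ndarray:
        """Placeholder: a real tool would call an embedding model here."""
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        return rng.standard_normal(384)

    def rank_by_relevance(issue: str, documents: list[str]) -> list[str]:
        """Rank documents by cosine similarity to the issue description."""
        query = embed(issue)

        def score(doc: str) -> float:
            vec = embed(doc)
            return float(query @ vec / (np.linalg.norm(query) * np.linalg.norm(vec)))

        return sorted(documents, key=score, reverse=True)

    documents = ["witness statement of A", "invoice, March 2021", "board minutes"]
    print(rank_by_relevance("delay to the construction programme", documents))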

 

LLM technology has the potential to significantly shorten the road to cognition but, as Bar Council guidance points out, “LLMs … complement and augment human processes to improve efficiency but should not be a substitute for the exercise of professional judgment, quality legal analysis and the expertise which clients, courts and society expect from barristers.”

Mike Taylor is the Managing Director of i-Lit Limited. Mike has worked within the litigation support industry for the last 25 years, advising private and public sector clients on the efficient management of documentation within litigation, arbitration, public inquiries, internal investigations and many other situations.

i-Lit Limited is a successful, independent e-disclosure/e-discovery consulting firm that provides consulting and procurement services to law firms, their clients, corporations and government departments on a daily basis.

[1] The full text of the document can be found at https://www.barcouncilethics.co.uk/wp-content/uploads/2024/01/Considerations-when-using-ChatGPT-and-Generative-AI-Software-based-on-large-language-models-January-2024.pdf

 
