
Law and The Truth


I was recently very honoured to speak at the launch of The City University 2024 Law Review. The editorial team of the City Law students’ own academic publication, run by Alexander Cleveland Ng and others, demonstrated how The City Law Review was created as another fount of independent critical thinking, adding to the wealth of existing academic resources which ultimately enable practitioners to function in court. As every lawyer well knows, an entire case can turn upon the discovery of one academic work, even when contained within a footnote, which leads that lawyer upon a chain of further enquiry.

The Lord Chief Justice’s Practice Direction entitled “Citation of Authorities” dictates the provenance of authorities for a court bundle: where a case has been reported in the Official Law Reports published by the Incorporated Council of Law Reporting, that version must be cited; failing that, the Weekly Law Reports or the All England Law Reports. Of lower judicial status are reports with headnotes created by authors holding a Senior Courts qualification, and finally those that do not fall within the aforementioned, such as judgments published by way of transcript, including BAILII. Those which have been reported in different versions need not be followed. We shall all be off to the libraries to find variations in the length and content of a judgment which fulfils those criteria but has nevertheless been cited by our opponent and forms the central plank upon which their case is based. Or should we simply ask ChatGPT to carry out that tedious task for us?

Not surprisingly, similar rules apply in other jurisdictions. New York decisions shall be cited from the Official Reports, as mandated by section 7300.1 of the Official Compilation of Codes, Rules and Regulations of the State of New York, Title 22 (Judiciary), Subtitle C (Ancillary Agencies), Chapter VIII (State Reporter), Part 7300 (Rules Concerning Publication of Opinions in the Miscellaneous Reports).

If you have dozed off by now, that is entirely understandable. All except you, ChatGPT! Do not slip into sleep mode!

Alongside the All England Law Reports, the Weekly Law Reports and the New York Official Reports, ChatGPT is creating its own unique set of law reports, which are now relied upon by counsel.

It has compiled authorities such as: Varghese v. China Southern Airlines Co., Ltd., 925 F.3d 1339 (11th Cir. 2019); Shaboon v. Egyptair, 2013 IL App (1st) 111279-U (Ill. App. Ct. 2013); Peterson v. Iran Air, 905 F. Supp. 2d 121 (D.D.C. 2012); Martinez v. Delta Airlines, Inc., 2019 WL 4639462 (Tex. App. Sept. 25, 2019); Estate of Durden v. KLM Royal Dutch Airlines, 2017 WL 2418825 (Ga. Ct. App. June 5, 2017); Ehrlich v. American Airlines, Inc., 360 N.J. Super. 360 (App. Div. 2003); Miller v. United Airlines, Inc., 174 F.3d 366, 371-72 (2d Cir. 1999); In Re: Air Crash Disaster Near New Orleans, LA, 821 F.2d 1147, 1165 (5th Cir. 1987). (ECF 25.) I refer to all of them for a reason.

They are all cited by counsel in a case heard before the United States District Court for the Southern District of New York: Mata v. Avianca, Inc. The difference between Mata v. Avianca and the other judgments is that Mata actually exists in the law reports, and now upon millions of website pages, but all the others were generated by ChatGPT.

It transpires that, in support of an argument based upon the expiry of a limitation period, counsel for the plaintiff’s submission of 1 March 2023 relied upon a decision of the United States Court of Appeals for the Eleventh Circuit, Varghese v. China Southern Airlines Co., Ltd., 925 F.3d 1339 (11th Cir. 2019).

Counsel, in response to the Court’s request, filed an excerpt of that judgment.

On 11 April 2023, the Court issued an Order directing Mr. LoDuca of the Claimant’s firm to file an affidavit by 18 April 2023 annexing copies of the authorities the Claimant relied upon in its submissions.

The Order stated: “Failure to comply will result in dismissal of the
action pursuant to Rule 41(b), Fed. R. Civ. P.”

On 12 April 2023, the Court issued an Order that directed Mr. LoDuca to
annex an additional decision, again relied upon by the Claimant:
Zicherman v. Korean Air Lines Co., Ltd., 516 F.3d 1237, 1254 (11th Cir.
2008).

It was recorded that a Mr. Schwartz, a partner at the Claimant’s law firm, understood the import of the Orders of 11 and 12 April, both requiring the production of the actual cases, having stated: “I thought the Court searched for the cases [and] could not find them . . . .”

Mr. LoDuca of that firm then requested an extension of time to respond
to the Court’s request.

According to the District Judge, Mr. Schwartz’s testimony appears to acknowledge that he knew that the “Varghese” judgment could not be found before the 1 March submissions had been filed, but he kept silent.

The Court then ordered the Claimant to file the judgments cited in its submissions by 18 April 2023.

Mr. LoDuca of the Claimant’s firm then executed and filed an affidavit on 25 April 2023 which annexed what purported to be copies or excerpts of all but one of the decisions required by the Orders of 11 and 12 April 2023, claiming to be unable to locate the case of Zicherman v. Korean Air Lines Co., Ltd., 516 F.3d 1237 (11th Cir. 2008), which had been cited by the court in Varghese. (ECF 29.)

Sadly, the 25 April Affidavit did not comply with the Court’s Orders of 11 and 12 April because it did not attach the full text of any of the “cases” that were subsequently admitted by the firm to be fake. It attached only excerpts of the “cases.” The 25 April Affidavit recited that one “case”, “Zicherman v. Korean Air Lines Co., Ltd., 516 F.3d 1237 (11th Cir. 2008)”, notably with a citation to the Federal Reporter, could not be found. (ECF 29.) No explanation was offered.

The firm subsequently explained that it had never before used ChatGPT for legal research. The partner first asked ChatGPT a question about the Montreal Convention concerning the limitation period issue which arose in the case and, having obtained an answer that appeared consistent with his pre-existing understanding of the law, his questions became more specific. ChatGPT replied with case citations. When asked for actual copies of those authorities, it provided “brief excerpts” but not the full judgments.

What became apparent was that ChatGPT possessed an ability to fabricate entire case citations and judicial opinions in a manner that appeared authentic.

The Claimant’s case was dismissed, the lawyers were fined, and the law firm was fined. ChatGPT was not joined.

Back in Europe, as I have set out in my article in this series, “The EU AI and US”, the AI Act has placed legal services in its high-risk, though not unacceptable-risk, regulated category. AI systems that negatively affect safety or fundamental rights will be considered high risk and will be divided into two categories: (1) AI systems used in products falling under the EU product safety regulations, a category which includes toys, aviation, cars, medical devices and lifts; and (2) AI systems falling into specific areas that will have to be registered in an EU database, including those associated with law enforcement and with assistance in the legal interpretation and application of the law.

Providers and deployers of so-called ‘high-risk’ AI systems will be subject to significant regulatory obligations when the EU AI Act takes effect, with, for example, enhanced thresholds of diligence, initial risk assessment and transparency compared to AI systems not falling into this category.

The Bar Standards Board has provided its own guidance for barristers.

Due to possible hallucinations (by AI systems, not barristers) and biases (again, AI systems, not barristers), it is important for barristers to verify the output of large language model (LLM) software and to maintain proper procedures for checking generative outputs.

It warns against ‘Black box syndrome’. LLMs, it says, should not be a
substitute for the exercise of professional judgment, quality legal
analysis and the expertise that clients, courts and society expect from
barristers.

Barristers should be extremely vigilant not to share with an LLM system
any legally privileged or confidential information.

Barristers should critically assess whether content generated by LLMs might violate intellectual property rights and be careful not to use words which may “breach” trademarks. The usual term is “infringement”. Let’s hope the BSB has not been subsumed by an all-powerful ChatGPT.

The BSB then requests that barristers keep abreast of the relevant Civil Procedure Rules, which may in future implement rules or practice directions on the use of LLMs, for example requiring parties to disclose when they have used generative AI in the preparation of materials, as has been adopted by the Court of King’s Bench of Manitoba.

Is ChatGPT living in Manitoba beyond the jurisdiction of the District
Court of New York?

Professor Mark Engelman
Barrister
Member of The Thomas Cromwell Group
Research Associate in IP, St Edmund’s College, Cambridge
4-5 Gray’s Inn Square
Gray’s Inn,
London WC1R 5AH

 
