BARRISTER MAGAZINE

The Quiet Retreat from a Pupillage Application AI Ban

Portraits for Mountford Chambers by Brian Lloyd Duckett, London portrait photographer

Over the past year, the proposition has circulated online and in the media that the use of generative artificial intelligence in pupillage applications submitted through the Pupillage Gateway is prohibited.

By James Lloyd, Barrister, Mountford Chambers

The Bar Council has confirmed that no such prohibition exists.

The previous AI declaration has been removed from the Gateway application process altogether. Applicants are no longer required to certify that their submissions are their “sole and original work,” nor to confirm that generative AI tools have not been used. Instead, any prohibition depends upon the policy of individual chambers, as articulated in each set’s advertisement.

The move away from centralised rule-imposition to chambers-by-chambers discretion is of real significance for the profession’s principal point of entry. There is no sector-wide technological boundary; every chambers must make its own decision.

Moving away from a bright-line rule

A Gateway-wide prohibition might have offered comfort: a single ethical baseline, ensuring comparability across candidates and signalling that the profession had taken heed of the judiciary’s increasingly stern warnings about AI misuse.

What we have instead is something far more complicated. Individual chambers must conduct their own debates, and identify precisely what they wish to discern from the application process. Some will prohibit AI outright, some will allow its use without restriction, some will attempt to regulate its use, and others may say nothing at all, either by design or oversight. The same candidate, therefore, applying to five sets, may find themselves navigating five different environments governing how their application is prepared and presented. Whether that landscape is compatible with a centralised application system at all remains to be seen.

The inescapable context

The judiciary’s recent encounters with AI misuse have been sharp and unsparing. In Ayinde v London Borough of Haringey and Al-Haroun v Qatar National Bank [2025] EWHC 1383 (Admin), fictitious authorities made their way into pleadings. The Divisional Court was clear that responsibility for legal work is non-delegable. The suggestion that generative tools might have been involved did not dilute the failure; it underscored it.

That decision, and others like it, explain the profession’s caution, but it is important to be precise about what has provoked judicial ire. The courts have not expressed hostility to AI or its use per se, but have deprecated reckless and unverified reproduction of material generated by it. The objection is to abdication of responsibility and judgement. The Bar Council’s guidance to the profession reflects the same stance.

The benefits of prohibition

Recruitment at the Bar has always involved the uneasy assessment and comparison of merit across different backgrounds, experience and levels of support. Introducing technological disparities adds another moving part to an already delicate exercise. However imperfect the process may be, chambers are trying to assess how a candidate thinks, structures an argument and expresses themselves under constraint. A blanket ban preserves the integrity of that exercise: it is not technophobic, but protective.

The equality argument supports prohibition more strongly than critics sometimes acknowledge. Access to higher-quality AI tools, and the skill to prompt them effectively, is not evenly distributed. A permissive regime may advantage those already equipped with the technological literacy, budget and time to experiment. A blanket ban, by contrast, removes one inequality in an already uneven playing field. For those who prioritise analysis of raw potential absent privilege of circumstance, prohibition remains a coherent position.

However, when considering ‘unfair advantage’, it is apt to remember that pupillage applications have never been prepared in true isolation. The Inns provide mentorship to some. Mentors often review drafts. Many chambers offer pupillage insight seminars, or provide feedback. Access to experienced feedback has long been unevenly distributed, and difficult if not impossible to detect.

If such extensive third-party human input is regarded as legitimate support, it is not immediately obvious why (more widely-available) technological assistance is any different. A senior barrister can transform an application far more profoundly than any language model. A blanket ban therefore risks targeting a potentially levelling form of assistance while leaving older, less visible advantages intact.

Any prohibition would also only be meaningful if capable of being policed. There prevails still a comforting belief that AI-generated writing is identifiable. It is not, at least not reliably.

Not so long ago, you could usually tell when something had been machine written. It read a little too cleanly and with a faintly mechanical competence that few applicants managed under pupillage application pressure. That comfort has largely gone. The current generation of tools can shift register without effort, echo an individual voice, and drop in technical language with unnerving confidence. They can even be told to roughen the edges, sound less polished and make the sort of small silps a tired applicant might make at midnight before a deadline. In other words, the old assumption that AI writing announces itself is becoming harder to defend, and selectors who think they will simply “spot it” are likely kidding themselves.[1]

Beyond spotting obvious Americanisms and the presence of case law ‘hallucinations’, there is very little a chambers can do to conclusively establish at the paper-sift stage whether content is AI-generated. Detection software has already proved unreliable. No recruitment decision could responsibly rely on current versions.

If AI use cannot be detected, we must abandon the illusion that a ban can be enforced.

The benefits of unrestricted use

Whether we like it or not, generative AI tools are an established part of legal practice. The courts and the Bar Council have not prohibited their use by qualified barristers, but have instead emphasised the importance of verification, the maintenance of confidentiality, and the exercise of professional judgement.

That makes the recruitment debate more awkward than some would prefer to admit. Why should we hold applicants to a higher standard than exists in practice? If an application is legally coherent, analytically sound, and entirely accurate, and if there are no perceptible signatures of generative AI assistance, what is the mischief? Pupillage candidates have never been required at the paper-sift stage to exhibit their notes, previous drafts, or to explain the intellectual juggling behind a finished answer. They have always been judged on the final product.

Technological literacy is increasingly part of professional competence. A future barrister who understands how to deploy AI critically and cautiously may be better prepared than one who has avoided it altogether. A rigid prohibition risks requiring technological celibacy at the application stage, and technological sophistication a few months later upon entry to the profession. There is a certain naïvety in that position.

An ambitious solution

If we are serious about intellectual honesty, and assessing potential rather than fortune of circumstance, there is another solution: permitting the use of generative AI in pupillage applications, but requiring disclosure of inputs, prompts and outputs.

At first blush this sounds administratively burdensome and potentially intrusive. However, in pupillage, we ask our pupils to articulate their reasoning, show their working, and justify their positions. The same intellectual transparency could strengthen the recruitment stage.

Disclosure of prompts and responses may be more revealing than the finished product. A disclosed prompt history exposes the candidate’s method. Did they ask broad, unfocused questions or did they identify the issues with precision? Did they interrogate the output critically, refine their instructions, challenge weaknesses, and iterate toward clarity? Did they accept the first answer uncritically or did they test it, reshape it, and reject parts of it?

That disclosure is a window into analytical temperament. It reveals a candidate’s problem-solving style, intellectual approach, and their instinct for structure. It may even expose over-reliance more clearly than the final submission ever could. A candidate who simply pastes a question into a model and reproduces the output will have little to disclose. One who uses the tool as a sounding board (as one might another member of chambers), refines arguments and discards weak suggestions, demonstrates judgement in action.

Requiring disclosure would also promote, from the outset, a professional habit that the courts increasingly expect: accountable and responsible use. A disclosure regime says to applicants, in effect, you may use powerful tools, but you will be judged on your capacity to control them.

Comparing AI-assisted and original work

To allow AI use is not to compel it. Many candidates may simply prefer not to.

If AI-supplemented applications are permitted, however, markers must reduce the weight given to written fluency in such applications and focus on rewarding thinking. The test at the paper-sift stage should not be which candidates write most smoothly, but which identify the real issue, prioritise it correctly, and defend a position well.

A marking scheme that weights issue selection, analytical depth, and defensible reasoning over stylistic finish will go some way to neutralise any superficial advantage conferred by AI.

Conclusion

Many practitioners continue to speak as if a profession-wide prohibition on using generative AI exists. It does not. Responsibility for setting restrictions for applicants rests squarely with individual chambers. The question for the Bar is therefore whether we are prepared to articulate, with candour, what we expect of applicants.

James Lloyd, Barrister, Mountford Chambers

[1] This paragraph was drafted by ChatGPT v5.2. Prompt: “Redraft this in the voice of a human barrister, varying sentence structure and register. Use slightly casual language. Don’t use em-dashes or lists. Insert one deliberate typo: ‘Early models produced text that was conspicuously generic. Contemporary systems can vary tone and simulate individual style. They can even be prompted to introduce imperfections’.”
