When AI Lies: Liability For Incorrect Or Unintended Legal Advice In The Age Of Legora And Harvey AI


By Nicholas Blomfield

A corporate legal team uses an AI platform such as Legora to analyse contractual risk ahead of a transaction. The output is structured, confident, and commercially framed. It concludes that a particular regulatory approval is not required.

Completion proceeds. The conclusion is incorrect: the approval was required and, as a result, client losses follow.

No external counsel was instructed on the point. No individual lawyer expressly advised or reviewed it. The output was generated purely by an AI system and relied upon in the ordinary course of business.

The question unavoidably becomes: who is legally responsible for that loss? More specifically, can a firm be held liable (directly or vicariously) for AI-generated outputs, whether monitored or not?

The inadequacy of existing doctrine

The law of negligent misstatement and professional negligence provides the natural starting point. Under Hedley Byrne & Co Ltd v Heller & Partners Ltd, liability depends on an assumption of responsibility coupled with reasonable reliance. Caparo Industries plc v Dickman adds foreseeability, proximity, and whether it is fair, just and reasonable to impose a duty.

Applied to AI-generated outputs, each of these established elements begins to strain. Who made the statement? The output is generated by an autonomous system. There is no human author or reviewer in the conventional sense. The law assumes a human speaker, but here, there is none.

Where is the assumption of responsibility?

No individual has assumed responsibility for the content. The platform provider characterises the system as a tool, likely protected by sharply drafted disclaimers, while the deploying business treats it as an internal resource. Neither position fits comfortably within the doctrine.

What is the nature of reliance?

Reliance clearly exists in fact: the output informed a real-world decision. But that reliance is not on personal expertise; it is on an autonomous process. This creates a serious discrepancy, because the framework was designed for human statements, not for machine-generated outputs that mimic advice without a human speaker.

Potential defendants

In practice, liability will be argued to fall in one of three places: on the deploying entity, on the platform provider, or on no one at all, with the risk borne by the client.

(1) The deploying entity

The most direct route is to attribute responsibility to the business that chose to rely (presumably without checks or verification) on the AI output.

The logic is straightforward where the system is considered to form part of the company’s internal decision-making process. Errors within that process are attributable to, and (rightly or wrongly) the responsibility of, the company. The analogy is with reliance on junior staff or flawed internal systems whose work product is sent out unchecked and unverified.

This approach aligns with commercial reality. However, it risks overstating the degree of control the deploying entity has over complex AI systems, particularly where the underlying technology sits well beyond the deploying entity’s or user’s expertise.

(2) The platform provider

An alternative is to look to the provider of the AI system, particularly where the platform is designed and presented in a way that closely resembles professional analysis.

Where outputs are structured like legal advice, expressed with authority, and deployed in contexts where reliance is clearly foreseeable, it can be argued that the provider has created a system that invites reliance akin to advisory services.

The counter-argument is predictable. Providers will point to tightly worded contractual disclaimers and to the characterisation of the system as a ‘general-purpose’ tool.

There is force in the argument that imposing liability here risks opening the door to indeterminate exposure, with questions of remoteness quickly arising. Whether those protections survive scrutiny where the system functionally operates as an advisory engine remains unresolved.

(3) No liability (risk allocated to the client)

The most commercially convenient position is that no duty arises at all. On this view, AI systems are tools, and the risk lies entirely with the customer. This position is unlikely to go untested. It sits uneasily with the reality that such systems are designed to produce outputs that appear reliable, complete, and professionally usable. Where reliance is not merely foreseeable but actively encouraged by design, and the output stands in for the work product of a legal advisor, a blanket “tool only” defence may prove difficult to sustain.

The structural gap

The deeper issue is structural. The law is attempting to apply human-centred doctrines to rapidly advancing non-human systems.

The current framework assumes:

  • a person (professional) makes a statement;
  • that person (professional) generally assumes responsibility; and
  • another person relies on that statement as fact.

AI disrupts each of these steps. There is no intention, no consciousness, and no conventional assumption of responsibility. However, the output performs the same function as advice and is relied upon in materially the same way. The law recognises the harm or potential harm caused, but it lacks a clean mechanism for attributing responsibility.

A potential judicial direction

Courts are unlikely to leave this situation unresolved. One credible and pragmatic direction is that AI outputs used in commercial decision-making will be treated as attributable to the deploying entity, absent clear and substantive human verification: in effect, an evolution of vicarious liability.

On that approach:

  • deploying the system in a decision-critical context carries inherent responsibility;
  • internal reliance on unverified outputs is treated as a failure of internal process; and
  • liability should follow such internal failures accordingly.

Recent decisions are challenging the current position and beginning to recognise a deemed assumption of responsibility where AI systems are deployed in contexts that replicate professional advisory functions. At present, there is little evidence that firms are fully pricing in these risks. Adoption appears to be driven by efficiency, cost, and competitive fear of missing out, often without equivalent consideration of consequences or liability exposure. The technology is being embraced at speed; the legal consequences remain underdeveloped and under-considered.

The limits of “human in the loop”

It is often suggested that human oversight mitigates risk. In many respects, particularly legally, that assumption is fragile.

If the human:

  • does not interrogate the reasoning or material,
  • cannot meaningfully challenge the output, or
  • simply approves it under time pressure,

then their involvement adds little substance.

Courts are likely to look beyond form to function. Superficial oversight will not break the chain of causation. For these purposes, human involvement only matters where it is real, informed, and capable of intervention; yet oversight of that depth cuts against the very purpose of deploying AI systems to increase efficiency, reduce costs and produce work product faster.

Recent cases suggest that the courts will not tolerate a lacklustre approach to AI-produced materials. In Ayinde v London Borough of Haringey, submissions before the High Court included fabricated, AI-generated authorities, with serious consequences for the legal representatives involved. The Court reinforced that professional duties are not displaced when using AI tools and warned of serious consequences, including disciplinary action, wasted costs orders and contempt proceedings, where practitioners fail to verify AI-generated material properly. It must be noted, though, that the High Court did not prohibit the use of AI or AI-generated materials; it mandated strict, independent human verification of all citations and legal authorities.

Conclusion

The first wave of AI litigation is beginning, and it will not turn on abstract questions of ethics. It will turn on the familiar concepts of causation, attribution, reliance, and loss. AI systems do not need to be conscious to cause harm; they simply need to be trusted.

The law therefore faces a growing choice: force AI into doctrines built for human actors, or adapt those doctrines to reflect how advice is being delivered, materials are being produced and decisions are now being made. The question is no longer whether AI can be wrong, but who the law will hold responsible when it is.

 

About Nicholas Blomfield

A forward-thinking, commercially minded senior lawyer with expertise covering funds, start-ups (predominantly fintechs and crypto), VC and banking, working across the GCC, EMEA, LatAm and APAC. Solicitor of the Senior Courts of England and Wales (practising); Solicitor, Trinity Sitting, Ireland (non-practising).

 
