
The UKJT’s Legal Statement on AI Liability: Clarification, Conservatism, and the Coming Litigation Landscape

  • Feb 13


The UK Jurisdiction Taskforce (UKJT) has issued a draft Legal Statement addressing a question of increasing concern to those working in the legal and commercial spheres, including lawyers, insurers, businesses deploying AI, and technology developers: how does English private law respond to harms caused by artificial intelligence? The consultation document does not propose legislative reform. Rather, it advances a more restrained but consequential claim: that the common law is already equal to the task.


The Statement’s central thesis is straightforward. AI does not require new doctrines of liability. English private law, particularly contract and negligence, can accommodate AI harms through incremental reasoning from established principles. In short, perceived uncertainty exceeds actual doctrinal difficulty. The Statement thus frames AI as a problem of legal application rather than doctrinal transformation.


Whether that reassurance is convincing depends on how one evaluates three aspects of the Statement: (1) its definition of AI, (2) its treatment of negligence (particularly professional liability), and (3) its approach to strict liability. An equally important and more practical question is how this framework will operate in real litigation. It is there, in disclosure, expert evidence, causation arguments and multi‑party claims, that the common law will truly be tested. Procedural realities may ultimately prove decisive.


The UKJT’s Statement also sits within a wider policy landscape. The Law Commission has recently published a discussion paper examining the broader interaction between AI and the law, including questions of autonomy, causation, legal personality and potential liability gaps. While the Commission does not propose immediate reform, its analysis highlights areas where increasingly autonomous systems may place pressure on established doctrines. Read alongside that wider review, the UKJT’s Statement can be seen as a more focused clarification of how existing private law principles are expected to operate in practice.


In his introduction to the consultation, Sir Geoffrey Vos emphasises the importance of clarifying how existing legal principles apply in an era of increasingly capable AI.


(a) Defining AI: Autonomy as the Legally Salient Feature

The Statement adopts a definition based on the features of AI, describing it as “a technology that is autonomous.” Autonomy is described as involving (1) an unpredictable relationship between input and output, (2) opacity in reasoning processes, and (3) limited user control over outputs.


This approach is important. It moves away from definitions based on comparing AI to human intelligence and instead focuses on what makes AI different in legal terms: the unclear and less predictable link between input and output. The concern is that AI can produce outcomes not specifically programmed by any identifiable actor, creating distance between human intention and harmful consequence. The challenge becomes evidential rather than conceptual.


Yet the Statement keeps the focus on people rather than machines. An AI system cannot be sued, cannot owe duties, and cannot bear responsibility. Liability must always attach to legal persons such as developers, deployers, users or suppliers. As the Statement notes, "in many cases the addition of AI will simply be considered a tool of those who exercise relevant control over it and can be said to have been 'responsible for its actions'". Autonomy may complicate the analysis, but it does not displace foundational principles.


(b) Contract as Primary Risk Allocation

One of the most practically important conclusions of the Statement is that most AI liability disputes will be contractual. AI supply chains are layered and increasingly complex: data providers, compute providers, foundation model developers, hosting suppliers, application developers and users. Between these actors, risk is typically allocated through warranties, indemnities, limitations and exclusions of liability.


In commercial settings, liability is usually shaped first and foremost by the contract between the parties. The courts are more likely to become involved under tort law where no contractual claim is available, especially where someone outside the contractual chain has suffered harm.


For litigators, this suggests that early AI disputes are likely to centre on contractual interpretation: what was warranted, what was excluded, whether liability caps apply, and whether losses fall within indemnity clauses. Insurance coverage disputes are also likely to feature prominently, especially where AI‑related losses are novel and policy wording has not yet adapted. In many disputes, liability may turn less on technology than on drafting.


(c) Negligence and Physical Harm

In cases of physical harm, the Statement applies orthodox negligence analysis: (1) duty of care, (2) breach, (3) causation and (4) remoteness. Established relational duties (doctor and patient, manufacturer and consumer, driver and pedestrian) remain intact even where AI tools are used. AI is characterised as a tool deployed by responsible actors.


Liability further up the supply chain is less straightforward. Foundation model developers may owe a duty where harm and the way the system is used are reasonably foreseeable, but they are unlikely to be responsible for completely new or unexpected uses further down the chain. Courts are likely to apply existing product and component cases step by step rather than creating new rules.


In practice, courts will look closely at how the system was built and tested. This may include the choice of training data, the testing carried out, how the system was checked before release, how it was monitored, and whether it followed recognized standards. Records showing proper oversight and governance are likely to be important evidence in any dispute.


The Statement also recognises that the complexity of AI systems does not remove the need to establish the traditional elements of negligence. Claimants will still need to show that a specific act or omission fell below the required standard and that it caused the harm complained of. In practice, this may involve careful examination of who exercised control at each stage of the system’s development and deployment. Where multiple actors are involved (for example, a foundation model developer, an application developer and an end user), the court may need to analyse their respective roles separately. Questions of contribution and allocation of responsibility may therefore arise alongside the primary negligence claim. None of this represents a departure from existing doctrine, but it does illustrate that AI-related disputes, while unlikely to reshape legal doctrine, are likely to be factually complex.



(d) Professional Liability: Use and Non‑Use of AI

The Statement’s treatment of professional negligence is particularly significant. Professionals remain bound by the ordinary standard of reasonable skill and care. Blind reliance on AI output is plainly unacceptable. A barrister submitting hallucinated authorities or a surveyor failing to review AI‑generated analysis would likely be in breach.


More interesting is the possibility that professionals may be negligent for failing to use AI where competent practitioners would do so. This represents one of the Statement’s most consequential observations.


As AI tools become widely adopted and demonstrably reduce error rates, failure to deploy them may fall below the evolving standard of care. The draft gives concrete illustrations of how this might arise in practice. It refers, for example, to a radiologist who fails to use an AI system that is highly effective at identifying cancerous tumours and could reasonably have been obtained. It also mentions an auditor who does not use AI tools to detect anomalies or potential fraud in very large datasets where individual human review would be impractical. Similarly, it points to a solicitor involved in large-scale litigation who fails to advise on or consider AI-assisted document review. In each case, the suggestion is that the omission, rather than the use of AI, may fall below the standard of reasonable skill and care. Expert evidence and professional guidance will be central in determining what constitutes reasonable practice.


(e) False Statements by AI

The draft also considers who may be liable where harm is caused by false statements generated by an AI system, such as a chatbot. The starting point is attribution. Because an AI system is not a legal person, any claim must be directed at a natural or corporate person. The key question is therefore whether the statement can properly be treated as having been made by, or on behalf of, a particular legal person.


This will depend heavily on the context in which the AI system is deployed. Where an organisation presents an AI tool as part of its own service, and invites users to rely on its output, it may be easier to argue that the statements are attributable to that organisation. By contrast, where a system is made available more generally without any specific undertaking, establishing that the output was made "on behalf of" a particular defendant may be more difficult.


Even where attribution is established, different causes of action raise different hurdles. For claims in negligent misstatement, a claimant will still need to show an assumption of responsibility or a sufficiently close relationship giving rise to a duty of care. For defamation, ordinary principles concerning publication and responsibility for content will apply. In other words, the draft does not suggest any new category of liability for AI-generated speech; existing doctrines are expected to do the work.


At the same time, the Statement indicates that in many cases the more realistic claim may be framed in negligence rather than as a standalone claim in misstatement, defamation or deceit. If the harm flows from careless design, inadequate testing or poor deployment of the system, the focus may shift to the human decisions that permitted the output. In that sense, false statements by AI are often analysed not as independent acts of a machine, but as consequences of earlier human acts or omissions.


(f) Strict Liability and the Limits of the Consumer Protection Act 1987

Absent negligence, English private law generally leaves loss where it falls. The principal statutory exception is the Consumer Protection Act 1987 (CPA), which imposes strict liability for defective products causing death, personal injury or certain property damage.


The Statement takes a narrow view of the CPA’s scope. Pure software is unlikely to qualify as “goods” unless embedded in tangible products. Standalone AI services, including large language models and cloud‑based systems, will generally fall outside the regime. Even where AI is embedded in physical products, claimants must establish that the product was “defective” in the sense of being unsafe.


This limits no‑fault exposure. Unless legislative reform intervenes, fault‑based liability will remain the primary framework for AI litigation.


(g) The Coming Litigation Landscape

One of the most significant implications of the Statement is procedural rather than doctrinal. The Statement recognises that AI cases are likely to raise difficult practical issues in court, particularly around procedure and evidence.


First, causation will be heavily contested. Establishing that a particular design flaw, training deficiency or deployment decision caused harm may be difficult where systems are complex and multi‑layered. Defendants may argue that harm would have occurred regardless, or that intervening human decisions broke the chain of causation.


Secondly, disclosure may be contentious. Source code, training data, internal validation reports and model documentation may be sought. Commercial confidentiality and trade secrets will clash with claimants’ need to understand how a system functioned. Courts may increasingly rely on confidentiality rings and expert inspection regimes.


Thirdly, multi‑party proceedings are likely. Claims may involve users, application developers and upstream model providers. Contribution proceedings under the Civil Liability (Contribution) Act 1978 may become common. The allocation of responsibility across a distributed supply chain will be strategically complex.


Fourthly, group litigation is plausible. AI systems often operate at scale. A defective decision‑making model used by a financial institution or public authority could generate numerous small but cumulative losses. The economics of collective redress, through Group Litigation Orders or representative actions, may shape the practical enforcement of AI accountability.


Fifthly, expert evidence will dominate. Courts will depend heavily on technical experts to explain system architecture, training methodology and model behaviour. The risk of “battle of the experts” litigation is substantial, particularly where explainability is limited.


Finally, insurers will play a central role. Professional indemnity policies, product liability policies and cyber policies will all be tested. Disputes may arise as to whether AI‑related losses fall within existing coverage definitions.


There is also a broader concern about the potential impact on the administration of justice. If AI-related disputes routinely require highly specialised and expensive expert evidence, the cost of litigation may rise, making it more difficult for some claimants or defendants to pursue or defend claims effectively. In extreme cases, the practical difficulty of proving fault in complex AI systems could risk creating gaps in accountability.


While the Statement emphasises that existing legal principles are sufficient, questions remain as to whether procedural rules or substantive doctrines will need to evolve to ensure that access to justice is not undermined as technology becomes more complex.


(h) Clarification or Conservatism?

The Statement’s message is reassuring: English private law remains fit for purpose. The authors conclude that AI does not require major changes to existing legal principles and that courts can extend established principles step by step.


However, the real test will come in actual cases. The features that make AI distinctive (its complexity, scale, and lack of transparency) may not change the law itself, but they may make claims harder to prove and defend. Much will depend on how judges handle technical evidence and multi‑party disputes.


For now, the UKJT’s position is clear. AI is a new technology, but it is not treated as creating a new kind of legal responsibility. Whether that approach remains convincing will be decided in the courtroom. The consultation remains open for responses until 13 February 2026.


This article is intended as commentary only and does not constitute legal advice.

 
 
