Possible Solutions

The attribution of liability in a whole socio-technical system becomes complex when AI is involved. As well as the humans directly present at the event, there were humans involved in the design and commissioning of the AI system, humans who signed off on its safety, and humans overseeing its running or working in tandem with it. Complexity is further increased with AI because human oversight may be more influenced by automation bias - where humans attribute more intelligence to the machine than is warranted - and because the AI’s decision-making cannot be clearly understood by its operators. Given that automation bias and AI inscrutability are problems across many settings where AI is used, it is no surprise that efforts are already being made to solve them.10,11
Whilst it will be some time before it is possible, or even appropriate, to hold an AI system itself liable,12 any of the humans involved in an AI’s design, building, provisioning, and operation might be held liable to a degree. Smith and Fotheringham argue that using clinicians as the sole focus for liability is not “fair, just and reasonable”.13 Without a clear understanding of how an AI came to a decision, a clinician is faced with either treating it as a knowledgeable colleague,14–16 or coming to their own judgement and largely ignoring the AI - or even turning it off. Even if they resolve to make their own decision and then check it against the AI’s recommendations, this only avoids the problem when there is agreement. If the AI disagrees, the clinician faces the same dilemma.
Although the AI system is a product, product liability does not provide claimants with an attractive alternative to a claim against the clinician and their employer via vicarious liability. Product liability claims are notoriously difficult, and forensically expensive, compared with ordinary professional negligence claims. The claimant needs to identify and prove the defect in the product. Within the context of opaque and interconnected AI, this may be an extremely difficult and costly exercise requiring significant expertise.17,18 Moreover, the current product liability regime in England and Wales, found in the Consumer Protection Act 1987 and based on the EU Product Liability Directive (PLD),19 predates the digital age and has significant problems in an AI context. Software, when not delivered alongside hardware, appears not to be a “product” for the purposes of this regime, and the assessment of defectiveness occurs at the time of supply of the hardware, so over-the-air updates and post-delivery learning are not relevant. The regime also contains a state-of-the-art defence, strengthening the position of producers. Although the fault-based tort of negligence (based on the manufacturer’s/producer’s duty of care) may also be deployed in a product liability context, such claims too face significant problems in an AI context.18 Unfortunately, the clinician, and their employer via vicarious liability for the clinician’s negligence, remain the most attractive defendants to sue.20
‘Vicarious liability’ arises when an employer is held strictly liable for the negligence or other wrongdoing of an employee, provided that wrongdoing is closely connected to their employment. The wrongdoing of the employee must be established first. In a medical negligence context, negligence still traditionally focuses on the individual, with the hospital being vicariously liable for that individual’s tort – although a system-based model would perhaps be better, both for patient safety and for the impact on individual clinicians.21 Even if the clinician’s employer ultimately pays, this vicarious liability is based on a finding that the clinician themselves is at fault.
There may be alternative claims against the hospital (which also owes a duty of care to the patient) for systemic negligence, for instance regarding staff training on such technologies. At a stretch, a claimant could attempt to target regulators, such as the Care Quality Commission (CQC), although establishing that they owe a duty of care to the relevant claimant will be a substantial hurdle to clear. The negligence claim against the clinician is easier to establish, requires less evidence, and in particular does not require the extensive proof of causation that claims against actors further up the causal chain would demand. The doctor at the chain’s end is thus a softer target. Adding their employer via vicarious liability ensures a solvent defendant.
Meanwhile, AI systems are currently treated as products, so the software development company (SDC) would only be liable to the patient through product liability, which, as we have seen above, has considerable problems in this context. In the future, it may be that the AI system is treated as part of the clinical team – and not as a product – so that its ‘conduct’ could be attributed to those who ‘employ’ the AI system, which might for instance be the SDC or the clinician’s trust.18 But that is not the current legal context. It is also unclear what ‘standard of care’ would apply to an AI that is treated as part of the clinical team: that of the reasonable AI system, or that of the reasonable clinician?22 The SDC might argue that the higher standard - that of the reasonable clinician - is unreasonable. But this implies that their system is simply not good enough - that its recommendations are inferior to the decisions of a clinician - and few organisations would be willing to deploy an AI system on that basis.
Smith and Fotheringham argue that the risk of AI-related harm should be pooled between clinicians and SDCs, with actuarially based insurance schemes providing cover for the resulting damage.13 However, these are at present merely proposals. Currently, a clinician (using an AI system) who is held liable in negligence to the patient may seek contribution from the SDC via the Civil Liability (Contribution) Act 1978, although, as with the patient’s claim against the SDC, there are significant difficulties in doing so, since, as noted above, establishing that the SDC is itself liable for the damage suffered is problematic. The SDC may also have sought to contractually exclude any right of clinicians to seek such contribution. Thus, in practical terms, with systems of this type the clinician remains liable for acting on the recommendations or decisions of an AI they do not and cannot fully understand. Facing the stress and worry of the consequences of using it, many clinicians may refuse to accept the risk and simply turn off the machine.