Possible Solutions
The attribution of liability across a whole socio-technical system becomes complex when AI is involved. As well as the humans directly present at the event, there are humans involved in the design and commissioning of the AI system, humans who signed off on its safety, and humans overseeing its running or working in tandem with it. AI increases the complexity further, both because human oversight may be influenced by automation bias - where humans attribute greater-than-warranted intelligence to the machine - and because the AI’s decision-making cannot be clearly understood by those operating it. Given that automation bias and AI inscrutability are problems across many settings where AI is used, it is no surprise that efforts are already being made to solve them.9,10
Whilst we are some way from it being possible, or even appropriate, to hold an AI system itself liable,11 any of the humans involved in an AI’s design, building, provisioning, and operation might be held liable to some degree. Smith and Fotheringham argue that treating clinicians as the sole focus for liability is not “fair, just and reasonable”.12 Without a clear understanding of how an AI came to a decision, a clinician is faced with either treating it as a knowledgeable colleague,13–15 or coming to their own judgement and largely ignoring the AI - or even turning it off. Even if they resolve to make their own decision first and then check it against the AI’s recommendation, this only avoids the problem when the two agree. If the AI disagrees, the clinician faces the same dilemma.
Unfortunately, the clinician, and their employer via vicarious liability for the clinician’s negligence, remain the most attractive defendants to sue.16 ‘Vicarious liability’ is when an employer is held liable for the negligence or wrongdoing of an employee. In a medical negligence context, negligence still traditionally focuses on the individual – although moving to a model focused on the system as a whole would be more useful, both for patient safety and for the impact on individual clinicians.17 Meanwhile, AI systems are currently treated as products, so the software development company (SDC) would only be liable to the patient through product liability. In the future, the AI system may come to be treated as part of the clinical team – and not as a product – so that its ‘conduct’ could be attributed to those who ‘employ’ it, which might, for instance, be the SDC or the clinician’s trust.18 But that is not the current legal context. It is also unclear what ‘standard of care’ would apply to an AI treated as part of the clinical team: that of the reasonable AI system, or that of the reasonable clinician?19 If the system were held to the higher of these standards, the SDC might argue that this is unreasonable. But such an argument implies that their system is simply not good enough – that its recommendations are inferior to the decisions of a clinician – and few organisations would be willing to deploy an AI system on that basis.
Given the SDC’s involvement, Smith and Fotheringham argue that there should be risk pooling between clinicians and SDCs for the harm, with actuarially based insurance schemes providing cover for AI-related damage.12 However, these are at present merely proposals. Currently, a clinician (using an AI system) who is held liable in negligence to the patient may seek contribution from the SDC via the Civil Liability (Contribution) Act 1978, although, as with the patient’s claim against the SDC, there are significant difficulties in doing so, since, as noted above, establishing that the SDC is itself liable for the damage suffered is problematic. The SDC may also have sought to contractually exclude any right of clinicians to seek such contribution. Thus, in practical terms, with systems of this type the clinician remains liable for acting on the recommendations or decisions of an AI they do not and cannot fully understand. Facing the stress and worry of the consequences of using such a system, many clinicians may refuse to accept the risk, and simply turn off the machine.