Artificial Intelligence (AI) is often touted as healthcare’s saviour, but its potential will only be realised if developers and providers consider the whole clinical context and AI’s place within it. One of many aspects of that clinical context is the question of liability.
Analysis of responsibility attributions in complex, partly automated socio-technical systems has identified the risk that the nearest human operator may bear the brunt of responsibility for overall system malfunctions.1 As we move towards integrating AI into healthcare systems, it is important to ensure that this does not translate into clinicians unfairly absorbing legal liability for errors and adverse outcomes over which they have limited control.
In the current, standard model of AI-supported decision-making in healthcare, electronic data are fed into an algorithm, typically a machine-learnt model, which combines these inputs to produce a recommendation that is output to a human clinician. The clinician then acts as a final check on the system’s recommendation, and can either accept it as-is or replace it with a decision of their own (see Figure 1 below). We are aware of this model already being assumed by AI radiology companies, which label their systems as “assistance” and state that responsibility lies fully with the user, largely to allay safety concerns. Given recent guidance from the National Health Service in England, which clarifies that the final decision must be taken by a healthcare professional,2 this model looks set to become the norm across the UK healthcare system.
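As a minimal sketch of this workflow (the function and type names below are illustrative, not drawn from any particular product or from the NHS guidance), the pattern can be expressed as a loop in which the model’s output is only ever a suggestion and the clinician’s response determines the final decision:

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Recommendation:
    label: str          # e.g. "no finding" / "refer for biopsy"
    confidence: float   # model's confidence score in [0, 1]


def decision_support_loop(
    patient_record: dict,
    model_predict: Callable[[dict], Recommendation],
    clinician_review: Callable[[Recommendation, dict], Optional[str]],
) -> str:
    """The 'human as final check' pattern described in the text.

    1. Electronic patient data are passed to the model.
    2. The model outputs a recommendation.
    3. The clinician either accepts it (returns None) or overrides it
       with a decision of their own.
    """
    recommendation = model_predict(patient_record)
    override = clinician_review(recommendation, patient_record)
    # The final decision is always attributed to the clinician: either
    # the accepted recommendation or their own replacement for it.
    return override if override is not None else recommendation.label


if __name__ == "__main__":
    # Toy example: the clinician overrides a low-confidence suggestion.
    toy_model = lambda record: Recommendation(label="no finding", confidence=0.55)
    reviewer = lambda rec, record: "refer for biopsy" if rec.confidence < 0.7 else None
    print(decision_support_loop({"scan_id": 123}, toy_model, reviewer))
```

The sketch makes the liability point concrete: whatever the model recommends, the value returned, and hence the decision recorded, is the clinician’s, which is precisely why this architecture channels responsibility towards the human operator.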