Artificial Intelligence (AI) is often touted as healthcare’s saviour, but its potential will only be realised if developers and providers consider the whole clinical context and AI’s place within it. One of many aspects of that clinical context is the question of liability.
In the current, standard model of AI-supported decision-making in healthcare, electronic data is fed into an algorithm, typically a machine-learnt model, which combines these inputs to produce a recommendation that is presented to a human clinician. The clinician then acts as a final check on the system’s recommendation and can either accept it as-is or replace it with a decision of their own (see Figure 1 below). This model is already assumed by AI radiology companies, which label their systems as “assistance” and state that responsibility lies fully with the user, largely to allay safety concerns. Given recent guidance from the National Health Service in England, which clarifies that the final decision must be taken by a healthcare professional,1 this model looks set to become the norm across the UK healthcare system.