Introduction
Artificial intelligence (AI) holds great promise for health care,
according to developers, policy makers and medical professionals. It is
expected to improve health care by alleviating the workload of care
workers, improving the quality of decision-making, or increasing the
efficiency of health care. Hence, it is often presented as a solution to
the challenges that health care will face in the (near) future.1

1 See, for example, the white paper issued by the European Commission in
February 2020, which states in its first sentence that AI "will change
our lives by improving healthcare (e.g. making diagnosis more precise,
enabling better prevention of diseases)" and repeatedly mentions
healthcare as a sector that will benefit greatly from AI.

The introduction of AI systems to medical practice is one aspect of the
increasing digitization of society. Responsible digitization of society
and the medical domain requires that the consequences for specific
practices and people are carefully considered and taken into account at
an early stage of development. Public values such as equity and
equality, privacy, autonomy and human dignity must be safeguarded. In
addition, citizens and practitioners must be enabled to develop the
skills needed to deal with the new tasks and responsibilities associated
with digital technologies.1,2

Our paper focuses on this last point, namely the epistemological issues
arising from the development and implementation of AI technologies
(particularly clinical decision support systems, CDSS) in clinical
diagnostic practices, and their implications for the epistemic tasks and
responsibilities of health-care professionals.2

2 Because the introduction of AI poses many ethical, regulatory,
technological, medical, legal and organizational challenges for medical
practice, the Dutch Rathenau Institute has asked (through a series of
blog posts) several relevant players in the field of Dutch health care
and innovation (i.e. government, developers, entrepreneurs, lawyers and
scientists) to share their views on the responsible innovation of AI for
health care:
https://www.rathenau.nl/nl/maakbare-levens/kunstmatige-intelligentie-de-zorg-samen-beslissen-blijkt-de-crux.
In addition to challenges related to the safe collection, sharing,
storage and use of medical data (i.e. taking into account the privacy
and other fundamental rights of patients), they identify opportunities
and challenges concerning the implementation of AI systems in
health-care practices, such as fitting AI into specific clinical
situations and training (future) medical professionals to critically
reflect on their use of such technologies.
Although research in CDSS is developing rapidly, the uptake of such
technologies into medical practice is slow.3,4 Kelly et al. (2019) show
that this is partly because clinical evaluation of machine-learning
systems through randomized controlled trials (the gold standard for
evidence generation) is not always appropriate or feasible. Furthermore,
the metrics for technical accuracy used in machine-learning studies
often do not reflect the metrics used in robust clinical evaluation,
which essentially include quality of care and patient
outcomes.3 Greenes et al. (2018) provide an overview
of the factors that need to be considered to overcome challenges related
to the implementation of computer-based CDSS, namely: how systems are
integrated into the clinical workflow; how the output of a CDSS is
represented to the user and (intended to be) used for cognitive support;
how the systems can be implemented legally and institutionally; how the
quality and the effectiveness of a system can be evaluated; and how the
cognitive tasks of medical professionals can be
supported.4 In this paper, we focus on one of these
factors: what cognitive tasks can be supported by CDSS, and how?
More specifically, our question is
how CDSS impacts the epistemic activities of (a team of) medical
professionals, who have the task of determining a diagnosis and a
strategy for cure or care based on heterogeneous information (from
different sources) about a patient. To answer this question, we will
first provide an overview of the epistemic tasks of medical
professionals in performing these clinical tasks. Then, we analyse which
of the epistemic tasks can be supported by computer-based systems, while
also explaining why some of these tasks should remain the territory of
human experts.
Applications of CDSS
CDSS is a class of computer- and AI-based systems designed as tools to
support clinical decision-making by medical professionals or patients.
More technically, CDSSs are 'active knowledge systems which
use two or more items of patient data to generate case-specific
advice’.5 There are many different types of CDSS which
provide different types of support to different kinds of decision-making
processes in a variety of clinical situations, ranging from providing
alerts or reminders for example while monitoring patients, emphasizing
clinical guidelines during care, identify drug-drug interactions, or,
advise on possible diagnosis or treatment plans.6Regarding diagnosis and treatment, CDSS can have many functions, such as
predicting the outcome of a specific treatment, image interpretation
(i.e. contouring, segmentation or pathology detection), prescribing (the
dosage of) medication, and screening and prevention.7

In performing these kinds of epistemic tasks, a CDSS uses artificial
intelligence to ‘reason’ according to its algorithms about a specific
patient by comparing that patient’s data with the data in its system.
CDSSs are primarily designed to mimic the reasoning of medical
professionals, but to do so faster, more cheaply, or with less
susceptibility to human error.6 The rules that the CDSS follows to reason
about a specific patient are either programmed by the developers (i.e.
‘knowledge’ or ‘rule-based’ expert systems), or inferred from a large
amount of data about a group of patients, using statistical AI methods,
such as machine learning or deep learning algorithms (i.e.
‘data-driven’).8,9,10
Preventing risks of CDSS by better understanding cognitive tasks
There are, however, several potential risks associated with the
introduction of CDSS in clinical practice, which were reviewed in a
recent report.6 Because the clinical decisions made by health-care
professionals have consequences for the wellbeing of patients, the risks
associated with the use of CDSS are substantial and undesirable. These
risks can be classified into: 1) risks related to the 'datafication' of
medical information; 2) control that is transferred from humans to
machines; 3) the lack of a human element; and 4) the changing division
of labour.6 An important aspect of
each of these risks is that cognitive tasks, which are usually performed
by medical professionals who bear the responsibility to perform these
tasks to the best of their knowledge and ability,11 are now delegated to
machines. Therefore, to deal with the risks
associated with the implementation of CDSS, it is crucial to understand
how the use of a CDSS will impact the daily practice of medical
professionals (i.e. clinicians) – more specifically, to understand the
cognitive tasks involved in decision-making on diagnosis and treatment.
Overview
In this paper, we will argue that CDSS can potentially support clinical
decision-making, but that this poses specific requirements on the CDSS
as well as on the (training of) cognitive abilities of the professionals
using the CDSS.
In Section 2, we will analyse the epistemic tasks in clinical
decision-making and suggest that human and artificial intelligence each
have different capacities to fulfil specific kinds of epistemic tasks.
In order to achieve a high-quality decision-making process for the
diagnosis and treatment of patients, human and artificial intelligence
should complement each other in performing these epistemic tasks. For
example, knowledge-based CDSSs, on the one hand, can function as an
automated 'handbook' that efficiently supports searches by clinicians.
Data-driven CDSSs, on the other hand, may identify patterns in data that
are inaccessible to humans or detect similarities in data patterns among
patients, thus providing a diagnosis and suggesting a possible
treatment.3,8,10,12,13 Clinicians, in turn,
deal with individual patients, and will diagnose based on existing data
and their experience. They will find the most suitable treatment, taking
into account the diagnosis, the personal situation of the patient, and
the local situation of the hospital. In arriving at a suitable
treatment, they may also consult colleagues and deliberate with them. In
other words, the CDSS makes a proposal for treatment based on the
diagnosis only, i.e., without taking into account the specific context
of the patient. We will conclude that, when using a computer-based CDSS,
clinicians have an epistemological responsibility to collect,
contextualize and integrate all kinds of clinical data and medical
information about an individual patient, similar to when using
evidence-based medicine.11,14
Section 3 elaborates on what is needed for good use of computer-based
CDSSs in clinical practice. We suggest that, since clinical
decision-making involves a complex and demanding cognitive process for
which clinicians bear ultimate responsibility, it is more appropriate to
think of a CDSS as a clinical reasoning support system (CRSS) rather
than as a decision support system. Based on this analysis, some
suggestions can be made on what this implies for the collaboration of
clinicians and CRSSs. We will conclude that for CRSSs this means that:
1) CRSSs are developed on the basis of relevant and well-processed data,
the preparation of which requires human expertise; 2) the system
facilitates an interaction with the clinician, allowing the clinician to
ask questions that a CRSS answers and thereby also providing some
insight into how the answer is created; and 3) there is a clear
empirical relationship between the data generated by the CRSS and the
information of the individual patient, providing empirical
justification for the use of the CRSS in reasoning about that patient.
Conversely, clinicians must have the cognitive skills to perform
epistemic tasks that cannot be performed by the CRSS (such as collecting,
contextualizing and integrating data on individual patients) and to
understand the (CRSS supported) clinical reasoning for each specific
patient to the extent that they can still take responsibility for the
outcome.
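As a rough illustration of requirements 2) and 3), consider the
following sketch (ours, not a design proposed in the paper; the
interface, names and rationale text are hypothetical): a CRSS that
returns an answer together with the patient data it used and some
insight into how the answer was created.

```python
# Hypothetical sketch of requirements 2) and 3): the CRSS answers a
# clinician's question and also reports which patient data it used and
# how the answer was created. All names and texts are invented.

from dataclasses import dataclass


@dataclass
class CRSSAnswer:
    answer: str             # the system's suggestion
    inputs_used: list[str]  # patient data the answer is based on (req. 3)
    rationale: str          # insight into how the answer was created (req. 2)


def ask_crss(question: str, record: dict) -> CRSSAnswer:
    # Stand-in for the underlying model: a real CRSS would query its
    # knowledge base or trained model here.
    used = [key for key in ("labs", "imaging") if key in record]
    return CRSSAnswer(
        answer="Candidate diagnosis: D (placeholder)",
        inputs_used=used,
        rationale="Pattern matches 124 similar reference cases.",
    )


result = ask_crss("Which diagnoses fit this profile?",
                  {"labs": [1.2], "imaging": []})
print(result.answer)
print("Based on:", result.inputs_used, "-", result.rationale)
```

The point of the sketch is the shape of the exchange: the clinician asks
a question, and the answer arrives together with its empirical grounding
in the individual patient's data, so that the clinician can still take
responsibility for the outcome.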
In Section 4, finally, we will defend that proper implementation of CRSS
allows clinicians to combine their (human) intelligence with the
artificial intelligence of the CRSS into hybrid intelligence, in
which both have clearly delineated and complementary tasks. We will
sketch out how the epistemic tasks can be divided between the clinician
and the system, based on their respective capacities. A CRSS, for
example, can assist in cognitive tasks that humans are notoriously bad
at, such as statistical reasoning or finding patterns in complex data. The
task of clinicians is to incorporate the outcomes of CRSS into medical
reasoning, by asking questions that the machine (CRSS) can answer, and
by interpreting, integrating and contextualizing the outcome of the
system. We conclude that the configuration of such a hybrid intelligence
poses requirements on the side of the CRSS as well as the clinician.
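A classic illustration of the statistical reasoning meant here is
base-rate neglect (our example; the numbers are invented): after a
positive screening test, humans tend to grossly overestimate the
probability of disease, whereas the computation is trivial for a
machine.

```python
# Illustrative base-rate computation (invented numbers). Humans often
# overestimate the probability of disease after a positive test because
# they neglect the base rate; this is exactly the kind of statistical
# bookkeeping a CRSS can do reliably.

prevalence = 0.01    # 1% of the screened population has the disease
sensitivity = 0.90   # P(test positive | disease)
specificity = 0.95   # P(test negative | no disease)

true_pos = prevalence * sensitivity
false_pos = (1 - prevalence) * (1 - specificity)

# Bayes' theorem: P(disease | positive test)
ppv = true_pos / (true_pos + false_pos)
print(f"P(disease | positive) = {ppv:.2f}")  # ~0.15, far below intuition
```

Even with a fairly accurate test, the low prevalence keeps the post-test
probability around 15%; interpreting and contextualizing such a result
for the individual patient remains the clinician's task.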