Love it or hate it (or just harbour mild curiosity), AI in healthcare is a popular topic. It represents the future, and also a lot of unknowns. Several applications already in use or in development around the world are answering a lot of questions and raising new points to address. The Nuffield Council on Bioethics examines much of this in their new briefing note, Artificial Intelligence (AI) in healthcare and research. For those of you not familiar with them, the Nuffield Council on Bioethics is an independent body that has been advising policy makers in the UK for over 25 years. The note is part of a new series of bioethics briefing notes they have been publishing, a series worth keeping an eye on.

The briefing note looks specifically at the ethical issues raised by the use of AI in healthcare, such as:

  • the potential for AI to make erroneous decisions;
  • who is responsible when AI is used to support decision-making;
  • difficulties in validating the outputs of AI systems;
  • the risk of inherent bias in the data used to train AI systems;
  • ensuring the security and privacy of potentially sensitive data;
  • securing public trust in the development and use of AI technology;
  • effects on people’s sense of dignity and social isolation in care situations;
  • effects on the roles and skill-requirements of healthcare professionals; and
  • the potential for AI to be used for malicious purposes.
Hugh Whittall, Director of the Nuffield Council on Bioethics, says:

“The potential applications of AI in healthcare are being explored through a number of promising initiatives across different sectors – by industry, health sector organisations and through government investment. While their aims and interests may vary, there are some common ethical issues that arise from their work.

“Our briefing note outlines some of the key ethical issues that need to be considered if the benefits of AI technology are to be realised, and public trust maintained. These are live questions that set out an agenda for newly-established bodies like the UK Government Centre for Data Ethics and Innovation, and the Ada Lovelace Institute. The challenge will be to ensure that innovation in AI is developed and used in ways that are transparent, that address societal needs, and that are consistent with public values.”