Artificial Intelligence: Friend or Foe?


If you regularly attend precision medicine meetings, you will more than likely have noticed a growing number of examples of artificial intelligence (AI) in action. This is largely due to the increasing volume of genomic and phenotypic data being produced, which is enabling an exciting cluster of technologies. But it is also down to the fact that AI applications are already having an impact in the clinic itself, enabling better management and treatment of disease.

Early successes with AI across multiple sectors culminated in the technology receiving backing from the UK government, which pledged to invest almost £1 billion over four years. Despite such strong support, there is no shying away from the growing unease that surrounds the capabilities of AI. Could there be truth to the claim that machines could take 4 million UK private sector jobs within ten years? It is with this in mind that we are seeking to explore the true story of AI: where are we now, what do we need to overcome, and how should we integrate this technology within the genomics sphere?

Before we jump in, it’s important to understand what AI actually is. Put simply, it is the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

The ability to use this technique to accelerate diagnosis and treatment is a real breakthrough in science. “AI technologies can be used to spot patterns and find subtle trends that can improve the detection, diagnosis and treatment of diseases,” explains Geraldine McGinty, Chief Strategy and Contracting Officer, Weill Cornell Medicine Physician Organisation. “The evaluation of patient information will not be limited to imaging data but structured and unstructured data from medical records, pathology, genomic and even wearable devices.”

 

AI in the Clinic

AI is getting many of us excited because of the impact it has already had in the clinic; the same can’t be said for many other genomic technologies. FDNA, IBM and Verily (formerly Google Life Sciences), to name just a few, are among the big players that have developed their own AI platforms after recognising the technology’s potential.

However, it is SOPHiA GENETICS that has made some of the most noticeable advances. Although only founded in 2011, the company has already rolled out its own AI, aptly named Sophia AI, into several hundred hospitals across Europe. One of them, Aarhus University Hospital in Denmark, is using the platform to assess the relevance of mutations in some leukaemias.

“We carry out the mutation analyses and I report back to doctors which mutations I find,” says molecular biologist Anni Aggerholm. “They then use it in their prognosis, diagnosis and treatment of their patient. Artificial intelligence helps us to see similarities between different leukaemias; it gives a broad perspective of how doctors can diagnose and help patients in the long run.”

What’s more, “it’s possible to do more individualised treatment of the patient, where you can gain knowledge and are capable of collecting data from different fields and mutations,” she added.

From this it’s clear that AI can contribute to the wider precision medicine effort, so it’s no surprise that researchers are actively searching for other diseases to which the technology can be applied. One hot area of interest right now is eye disease.

Deep Learning is Outperforming Humans in Diagnostics

The Food and Drug Administration (FDA) made headlines after approving an AI diagnostic device that can make clinical decisions by itself, a clear nod of approval from the top. The software program, known as IDx-DR, is able to detect a specific form of eye disease by looking at photos of the retina, with no “specialist looking over the shoulder of this algorithm,” explains founder Michael Abramoff.

This is made possible by autonomous AI, which “performs specialty level diagnostics and makes clinical decisions where the patients are, in the frontlines of care, such as GPs,” he adds. “The improved diagnostic accuracy, patient friendliness, accessibility and cost savings will be huge.”
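To make concrete what an “autonomous” decision means here, the sketch below shows the general shape of such a system: a model produces a disease score for an image, and a fixed operating threshold turns that score into a standalone refer-or-rescreen decision. The threshold, messages and function name are illustrative assumptions, not IDx-DR’s actual implementation.

```python
def autonomous_triage(disease_score, threshold=0.5):
    """Turn a model's disease score into a standalone clinical decision.

    The threshold and wording here are illustrative; a real device's
    operating point is fixed during regulatory validation, not chosen
    ad hoc at the point of care.
    """
    if disease_score >= threshold:
        return "disease detected: refer to a specialist"
    return "negative: rescreen at the next routine interval"

# A high-score retinal image triggers a referral with no specialist review.
print(autonomous_triage(0.92))
print(autonomous_triage(0.08))
```

The point of the sketch is that no human sits between the score and the decision, which is exactly what distinguishes an autonomous device from a decision-support tool.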

However, it is this very news that has caused a stir among healthcare professionals, largely because of the impact such developments will have on employment, with some even fearing redundancy.

“The impact of technology on employment is part of a much bigger problem,” highlights Lee Cooper of the Emory University School of Medicine. “Technology will change employment, and we need to be talking about what economic systems could look like when AI technology reduces demand for labour.”

These concerns have been intensified by the public sharing of views from some highly prominent figures, with Stephen Hawking claiming that “the development of full artificial intelligence could spell the end of the human race.”

The issue stems from a fear that the technology could evolve to a point where it goes beyond human control. But such claims have been just as publicly quashed, notably by Microsoft research chief Eric Horvitz, who, despite believing that AI could achieve consciousness, dismissed any threat to human life.

“There have been concerns about the long-term prospect that we lost control of certain kinds of intelligence,” he emphasised. “I fundamentally don’t think that’s going to happen. I think that we will be very proactive in terms of how we field AI systems, and that in the end we’ll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life.”

Yes, these views stray away from genomic applications, but they force us to recognise some important considerations when dealing with AI in the medical space. It’s reassuring to see evidence that this is already happening, following calls for public discussion, especially since in the future a diagnosis might be provided by a computer rather than a doctor. What will patients think about that?

Nevertheless, it’s important that we don’t become consumed by these what-ifs, and instead explore the positive impact we may well see from adoption.

“Technology may improve patient outcomes,” explains Cooper. “It may liberate clinicians from routine work, enabling them to focus on more interesting or challenging cases. There is also potential for reducing healthcare costs and providing quality care in underserved populations where expertise is scarce.”

 

Entering Its Next Phase of Development

Although it’s essential to consider these future implications, we are a long way from them becoming a reality. Instead, one of the biggest tasks right now is actually integrating AI into the clinical workflow, something at the top of the list in radiology.

Deep neural networks have already led to noticeable breakthroughs in recognising objects in photographic images, and “the same types of networks are beginning to reliably detect disease processes in medical images,” notes McGinty. “In addition to supporting the work that radiologists perform at the time of interpretation, such as detecting lesions, AI could also improve workflow by prioritising critical cases or improving the quality of images through optimising equipment settings and acquisition parameters.”
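McGinty’s point about prioritising critical cases can be sketched simply: if a model assigns each incoming study a probability of containing a critical finding, the reading worklist can be reordered so that the most suspicious studies are read first. The study IDs and scores below are made up purely for illustration.

```python
def prioritise_worklist(studies):
    """Reorder a radiology worklist by model-estimated urgency.

    `studies` is a list of (study_id, critical_finding_probability) pairs;
    the most suspicious studies move to the front of the queue instead of
    waiting behind routine cases in first-come-first-served order.
    """
    return sorted(studies, key=lambda study: study[1], reverse=True)

queue = [("chest-001", 0.02), ("head-002", 0.91), ("abdo-003", 0.40)]
print(prioritise_worklist(queue))  # head-002 is read first
```

Even this trivial reordering illustrates why integration matters: the sorted list is only useful if it actually feeds into the picture archiving and reporting systems radiologists already use.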

But at the same time, for this to be effective, integration must be matched with care protocols, electronic medical records, and reimbursement systems. Specific challenges range from defining and prioritising the use cases that will be most important for improving radiological care across the healthcare continuum; ensuring algorithms are developed in ways that recognise the diverse nature of patients, imaging data and clinical workflows; and developing pathways for validating algorithms and integrating them into clinical workflows; to monitoring algorithms’ effectiveness after widespread deployment across clinical practices, explains McGinty.

Such stumbling blocks led to the creation of the American College of Radiology Data Science Institute (ACR DSI). The DSI has developed a foundational framework for AI to improve clinical care and ensure that algorithms can be safely deployed and monitored in clinical practices on a large scale. The framework “outlines a standardised pathway for developers to create and mature algorithms from idea to widespread clinical use,” says McGinty.

“We believe developing AI use cases that follow this path will be the way that radiological professionals can most effectively influence the development of AI models that will have the greatest benefit to our professions and our patients.” The capabilities of AI in this instance were confirmed earlier this year, after SOPHiA GENETICS revealed that its technology had gained radiomics competencies – a promising nudge in the right direction.

It is clear that AI is entering the next phase of its development, and it’s reassuring to see institutions taking the right steps to enable the technology to be applied to a wider range of diseases. This is exactly what Lee Cooper is trying to achieve over at the Emory University School of Medicine. His team are focused on improving how data is used for clinical management of patients diagnosed with a form of brain tumour, known as glioma, as well as developing algorithms that consume genomic profiles and digital pathology images to predict how rapidly a patient’s cancer will progress following diagnosis.

“Some of the recent advances in a technology called deep learning enable software algorithms to identify molecular patterns in complex genomic data and visual patterns in images of histology that can very accurately predict clinical outcomes,” he comments. “This information can help physicians formulate better treatment strategies to extend life or minimise side effects, and can help patients to better plan for the future.”

But, it would seem Cooper and his team aren’t satisfied with those results. Instead they are taking things further by exploring how machine learning and AI can be used to assess histology and bring digital pathology together with genomics in a single framework to improve prognostic accuracy.

“Histology reflects so much about a patient’s disease, but extracting information from digital pathology images can be very difficult,” he stresses. “The emergence of deep learning has been really transformative in this area. Previously, we developed algorithms that explicitly encode expert knowledge. This approach often referred to now as “engineering,” was limited by our knowledge and biases. Deep learning does not rely on expert knowledge; it starts with a naïve model that adapts to learn relationships from the data in an unbiased manner. This learning approach can address some really interesting problems, including predicting survival or disease progression.”
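One simple way to picture “bringing digital pathology together with genomics in a single framework” is late fusion: features extracted from each modality are combined into a single risk score. The sketch below uses a hand-weighted linear combination purely for illustration; in a real system like the one Cooper describes, the features would come from deep networks and the weights would be learned from outcome data rather than set by hand.

```python
def fused_risk_score(histology_features, genomic_features,
                     histology_weights, genomic_weights, bias=0.0):
    """Combine histology- and genomics-derived features into one risk score.

    Higher scores indicate faster expected progression. The weights here
    are illustrative assumptions; a trained model would learn them, for
    example against a survival objective.
    """
    score = bias
    score += sum(w * x for w, x in zip(histology_weights, histology_features))
    score += sum(w * x for w, x in zip(genomic_weights, genomic_features))
    return score

# Toy example: two histology-derived features and one genomic feature.
print(fused_risk_score([1.0, 2.0], [0.5], [0.1, 0.2], [0.4]))
```

The design choice worth noting is that each modality contributes independently to the final score, so either input can be inspected, improved or replaced without retraining the other.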

A Complicated Technology to Implement

I’m sure this has already crossed your mind: AI is an exceptionally vast field, but that doesn’t mean it isn’t plagued by some of the common problems we deal with on a regular basis in genomics, which makes it a much more complicated technology to implement. One of those is data collection; without data, the technology is unable to progress. Cooper has stressed the need for a stronger focus here. “I read a quote recently about machine learning, something like “it’s hard to imagine a problem that cannot be solved given enough data” – I tend to agree with that. Sharing data could significantly accelerate progress, and efforts like the NIH Data Commons or the Oncology Research Information Exchange Network (ORIEN) to establish data standards and exchanges are a great idea. While there are clearly incentives to sharing data, there are also disincentives that need to be overcome, particularly if you are a large institute.”

Despite regular advances, AI is still very much a work in progress. But, there is a lot about this technology that makes it a front-runner, the most obvious being that it’s already having a positive impact in the clinic, improving the diagnosis and treatment of patients. There’s no denying that its future looks bright, and “within the decade we will see greater adoption of AI in clinical settings in pathology,” explains Cooper. And with the technology being applied to more and more diseases, its scope is only getting larger. However, we can’t shy away from its ‘dark side’.

Not only will “the details of its use in practice take many years to develop,” says McGinty, but its perceived threat also remains a concern for healthcare professionals. In this instance, I believe we should stay focused on the advances we have already witnessed, and get excited about the future relationship between AI and healthcare professionals. By the sounds of it, they could very well be the perfect match.