AI

Credit: Margot Wood

By Richard Lumb, with excerpts from AI and Business: A Practical Guide, published by Access AI and written by Michael Garwood. You can download that guide here.

You’ve probably read about AI (artificial intelligence) in the news, usually associated with (real and imminent) fears of job losses and (real and, thankfully, less imminent) forecasts of human demise.

AI is already being applied in genomics.

In fact, it’s highly likely that the organisation you work for is already integrating AI into some aspect of its operation – something other than genomics. Hopefully not replacing YOU with a robot. A cute one, who works harder than you. And smells nicer.

I thought it would be useful to write a blog post that would explain what AI is, and why you should be excited (and a little scared).

What is AI?

Rather than get caught up in those fears and forecasts, it helps to start with a solid definition.

In short, AI is a field of computer science aimed at programming computers to do things that normally require human intelligence.

Here’s a more complete definition from our friends at Gartner:

‘Artificial intelligence is technology that appears to emulate human performance typically by learning, coming to its own conclusions, appearing to understand complex content, engaging in natural dialogs with people, enhancing human cognitive performance (also known as cognitive computing) or replacing people on execution of non-routine tasks.

‘Applications include autonomous vehicles, automatic speech recognition and generation and detecting novel concepts and abstractions (useful for detecting potential new risks and aiding humans to quickly understand very large bodies of ever changing information).’

If you have a spare 30 minutes, check out Tim Urban’s brilliantly entertaining two-part ‘Wait But Why’ series on AI, starting with this one.

If you don’t have time for the long version, read on.

How scary/exciting is AI?

The answer ranges from ‘pretty scary’ to ‘pretty exciting’, depending partly on what type of AI you’re talking about. Broadly, there are three types:

Narrow or Weak AI

This is a descriptive term used for AI that can demonstrate human-like intelligence, but only for a specific task or tasks. Think of it as AI which has no level of consciousness.

A useful example is Siri, the digital personal assistant found on Apple products. This form of AI uses speech recognition and natural language processing (NLP) to identify what it’s being asked, and then retrieves the answers via the internet. It will also perform hands-free tasks on the device for you, such as making calls, sending texts and emails, and even checking and managing your schedule.
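
To make the ‘identify what it’s being asked’ step concrete, here’s a heavily simplified sketch in Python. Real assistants use trained speech and language models; this toy version (with invented intents and keywords) just matches words to commands, falling back to a web search when nothing matches.

```python
# Toy "intent recognition" sketch -- a stand-in for the NLP step an
# assistant like Siri performs. The intents and keywords are invented.

INTENTS = {
    "call": ["call", "phone", "dial"],
    "message": ["text", "message", "email"],
    "schedule": ["calendar", "schedule", "meeting", "appointment"],
}

def recognise_intent(utterance: str) -> str:
    """Return the first intent whose keywords appear in the utterance."""
    words = utterance.lower().split()
    for intent, keywords in INTENTS.items():
        if any(keyword in words for keyword in keywords):
            return intent
    return "web_search"  # no device task matched: look the answer up online

print(recognise_intent("Text Sam that I'm running late"))  # -> message
print(recognise_intent("What is the capital of Peru"))     # -> web_search
```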

You may also have read stories about IBM’s Deep Blue computer playing then world chess champion Garry Kasparov in 1996 and 1997 (winning the 1997 match), or Google DeepMind’s AlphaGo computer beating world Go champion Lee Sedol in March 2016.

These are again all examples of Weak AI in action. They are limited to the specific tasks they are programmed to do. Deep Blue may beat a champion at chess, but ask it to play a game of checkers and it wouldn’t know where to start.

When you hear about using AI in genomics, it almost certainly concerns Weak AI. But it’s not really ‘weak’. It’s just focused.

Weak AI will not decide to drop any bombs or bring down the internet (unless we program and teach it to). For the serious existential threats, we need to look elsewhere.

General or Strong AI

Also described as True AI or Full AI.

Strong AI is designed to exhibit self-aware consciousness. That means it’s able to think and understand for itself in the same way you and I do, using cognitive learning to mirror the processes of a human brain.

Using the definition from UC Berkeley: ‘Strong AI is a term used to describe a certain mind-set of artificial intelligence development. Strong AI’s goal is to develop artificial intelligence to the point where the machine’s intellectual capability is functionally equal to a human’s.

‘The ideal Strong AI machine would be built in the form of a person, have the same sensory perception as a human, and go through the same education and learning processes as a human child. Essentially, the machine would be “born” as a child and eventually develop to an adult in a way analogous to human development.’

This is the type of AI you typically read about or see in the science fiction genre, but it is still some years from being achieved. How many, exactly, is still very much up for debate. Estimates range from 10 years (less frequently expressed) to 100 years, with the average settling at around 30 years. For a much more detailed analysis, check out this excellent article.

But it doesn’t stop with General AI. Prepare for your mind to be blown.

Super Intelligent AI

For those of you whose thoughts of AI fall into the ‘sense of dread, trepidation, or fear’ bracket, you may want to skip this next bit.

Artificial Super Intelligence (ASI) is, as you’d expect, a giant step up from Strong AI (itself often referred to as Artificial General Intelligence, or AGI). It will be (emphasis on the future tense) superior to any level of human intelligence and will (potentially), if allowed, be in complete control of its own decision making.

This form of AI has been discussed by some of the world’s leading tech companies and world leaders as potentially having a detrimental impact on the human race if not governed correctly.

The idea of super intelligence is not new and has been discussed (loosely) since the term AI was first coined back in 1956.

Alan Turing, the ‘godfather of AI’, famously stated: “It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers… They would be able to converse with each other to sharpen their wits. At some stage therefore, we should have to expect the machines to take control.” 

But when will this happen? Again, there is no concrete answer. Oxford University professor, philosopher and author Nick Bostrom noted in his book ‘Superintelligence’ that, based on several expert surveys, this level of intelligence could arrive between 2075 and 2090. Other reports suggest much later, or even not at all.

Perhaps for now, it’s best to leave that one for the kids to worry about. In the meantime, here’s what Bill Gates thinks:

“First the machines will do a lot of jobs for us and not be super intelligent. A few decades after that though, the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

Bill Gates, Co-founder, Microsoft


There’s so much more I could write by way of an introduction to AI. For a comprehensive overview, check out this free report from Access AI.


Machine Learning and Genomics

Now that we’ve established what AI is and its three distinct types, how does it impact genomics?

Mostly through a branch of AI called machine learning.

What is machine learning?

Where better to start than everyone’s favourite academic reference source (winky face), Wikipedia:

‘Machine learning is the subfield of computer science that gives computers the ability to learn without being explicitly programmed. Evolved from the study of pattern recognition and computational learning theory in artificial intelligence, machine learning explores the study and construction of algorithms that can learn from and make predictions on data.’

Here’s an example of machine learning in action:

Have you ever wondered why or how your email account deposits certain emails (spam) into the ‘Junk’ folder? This isn’t guesswork, nor is it someone personally sorting them for you. It’s a machine or, more accurately, a computer algorithm that has been trained to spot patterns and to detect and predict unsolicited content on its own. No human intervention.
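
To make that concrete, here’s a minimal sketch of a learned spam filter in Python. It assumes scikit-learn is installed and uses a tiny invented training set; a real filter learns its patterns from millions of labelled emails.

```python
# A toy spam filter: learn word-frequency patterns that separate spam
# from legitimate mail, then classify new messages. Training data invented.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "win a free prize now",            # spam
    "cheap meds limited time offer",   # spam
    "meeting moved to 3pm",            # legitimate
    "here are the quarterly figures",  # legitimate
]
labels = ["spam", "spam", "ham", "ham"]

# Turn each email into word counts, then fit a Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["free prize offer"]))         # -> ['spam']
print(model.predict(["figures for the meeting"]))  # -> ['ham']
```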

Confused? Here’s another example.

Are you one of the 1.86 billion people that use Facebook? You may have noticed that when you upload pictures, the platform automatically detects when there is an image of a person, prompting you to tag them. This is machine learning in action, loosely analogous to how a human brain recognises faces. By extracting and analysing the data, the software has learnt to identify a person both by physical appearance and by name, and to predict what you’re about to do next – saving you the effort.
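
Here’s a sketch of the detection half of that feature, assuming the opencv-python package and a local file called photo.jpg (both my assumptions, not Facebook’s actual system). Recognising who each face belongs to would need a further model trained on labelled examples.

```python
# Detect faces in a photo with a pre-trained classifier shipped with OpenCV.
# This is only the "find the face" step of photo tagging.

import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("photo.jpg")  # assumes this file exists locally
grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Each hit is an (x, y, width, height) box around a detected face --
# the spot where the platform would prompt you to tag someone.
faces = cascade.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
print(f"Found {len(faces)} face(s): {list(faces)}")
```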

Starting to make sense? Here’s one more.

If you’re one of the 244 million people with an Amazon.com account (30 million of them active monthly users), you may have noticed that it’s no longer all about searching for something you might like. Instead, Amazon, like many other online retailers, offers you personalised recommendations. A personal shopper, if you will. No human involvement. The machine (software) has used your data to build a profile of you and make an accurate prediction of what you might like, the same way a friend might, but with hard evidence.
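
Here’s a minimal sketch of the underlying idea – ‘people who liked X also liked Y’ – using a tiny invented ratings matrix. Real recommenders work with millions of users and items, but the principle is the same.

```python
# Item-based collaborative filtering on a toy ratings matrix.
# Rows = users, columns = items; 0 means "not yet rated".

import numpy as np

ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 0, 0],
    [0, 0, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

# Cosine similarity between item columns: items rated alike score high.
norms = np.linalg.norm(ratings, axis=0)
similarity = (ratings.T @ ratings) / np.outer(norms, norms)

def recommend(user: int, top_n: int = 1) -> list:
    """Score unseen items by similarity to what the user already rated."""
    scores = similarity @ ratings[user]
    scores[ratings[user] > 0] = -np.inf  # don't re-recommend rated items
    return list(np.argsort(scores)[::-1][:top_n])

print(recommend(0))  # -> [2], the one item user 0 hasn't yet rated
```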

In summary, machine learning is mostly about pattern recognition.

And back to genomics…

That’s what the more complex elements of genomic data analysis involve. Pattern recognition.

The potential for machine learning to dramatically improve the effectiveness and efficiency of pattern recognition – focusing on those patterns within genomic data that deliver meaningful clinical insights – is profound.

In genomics, interpretation at the clinical ‘back end’ is the main limiting factor. That’s mostly where AI comes in – at least for now. It’s about understanding the relationship between genotype and phenotype, and using that to deliver better preventative and reactive healthcare.
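
As a purely hypothetical illustration of that idea, here’s a sketch of training a classifier to flag variants as likely pathogenic from a handful of invented features. The feature names, data, and labels are all illustrative, not a real clinical pipeline.

```python
# Hypothetical genotype-to-phenotype sketch: learn to flag variants from
# invented features. Everything here is synthetic and for illustration only.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Invented per-variant features: [conservation score, allele frequency,
# predicted protein impact]. 200 synthetic training variants.
X = rng.random((200, 3))
# A synthetic rule standing in for curated clinical labels.
y = (X[:, 0] > 0.6) & (X[:, 1] < 0.3)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

new_variant = [[0.8, 0.05, 0.7]]  # highly conserved, rare, high impact
print(model.predict_proba(new_variant))  # probability per class
```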

For genomics, AI couldn’t arrive quickly enough. The cost of whole genome sequencing continues to plummet. In January of this year, Illumina unveiled a new machine that looks likely to make consumer genomics a whole order of magnitude more accessible – potentially at just $100. AI could also be used to cut analysis time from days or hours down to seconds.

As a direct result of these and similar developments, genomics will become much more widespread in the clinic.

And as clinical use of genomics takes off, we will need something much better to analyse the ensuing tsunami of newly generated genomic data.

Luckily, a bunch of different people and companies are positioning themselves to meet that demand.

Who are they? And what are they up to?

Stay tuned to Front Line Genomics to find out.

But that’s potentially just the start. Take epigenetics, for instance. AI could be used to build complex predictive models that help us understand the influence of environmental and other factors on gene expression. These new insights could be used to support life decisions that dramatically reduce the risk of illness and enhance quality of life. That’s a potentially dynamite consumer application that we could access on something like a smartphone.

Or if Elon Musk has his way, ‘neural lace’ technology.

And it doesn’t stop there. Bring genome editing and synthetic biology into the mix, and it truly does become mind-boggling. Throw in some terraforming and transhumanism…

 

Richard Lumb is Founder and Chairman of Front Line Genomics, and Founder and CEO of Access AI. You can contact him here.