Computers excel at finding a signal in the noise, but we’re still wary of AI doctors.
An occupant of the SPHERE house, wearing a lot of sensors. Credit: SPHERE
In an unassuming, two-story Victorian town house in Bristol, people are being filmed, monitored, and tracked 24/7. Hidden sensors keep a constant, watchful eye as the occupants go about their business. But what these folks lose in privacy could be our collective gain in life expectancy, if the long-term data bears this out.
Pivotal to the £15-million (~$21M) Sensor Platform for Healthcare in a Residential Environment (SPHERE) project, this house has been invisibly fitted with dozens of cameras and sensors while its occupants are asked to don wearable devices. The aim is to research how health is related to everyday lifestyle and living conditions over time.
The smart home observes everything, from how long the occupants slouch in front of the TV to how active they are, whether sitting, walking, or exercising. The house captures these pieces of information and can contextualize them against one another. It takes note of how much and how often the occupants eat and drink and which appliances are being used. It even records when occupants sleep and keeps track of temperatures around the house.
“Many long-term health conditions are highly correlated to lifestyle,” says Ian Craddock, an engineering professor at the University of Bristol and the research team leader at the Toshiba Telecommunications Research Laboratory. Small changes in long-term lifestyle factors can reveal valuable information about the state of a person’s current health and even offer an early warning sign of trouble ahead.
This may be individual data collection at its extreme, but such initiatives are becoming increasingly common, and many take far less invasive forms. For instance, at Apple’s most recent product event, the company boasted it had over 1 billion active devices worldwide. And in the last few years, Apple has leveraged these—particularly newer iPhones and its Apple Watch—for data collection in conjunction with health professionals and researchers. Its open source ResearchKit (unveiled in 2015) helped create a Parkinson’s diagnostic app that sparked the “largest Parkinson’s study in history” in less than a year, according to Apple VP Jeff Williams. This week the company released an open source CareKit to aid treatment in the same way its iDevices previously aided research; a Parkinson’s app was again among the first announced creations.
Tech-driven automated health data collection obviously opens up tremendous possibilities. But a vast trove of information is useless without the ability to efficiently store, analyze, individualize, and implement this information on a case-by-case basis. Despite any science fiction depictions of super-capable, user-facing robodoctors of the future, this behind-the-scenes legwork is the genuine niche where artificial intelligence can revolutionize health.
But 20 years after AI publicly burst onto the scene with IBM’s Deep Blue thrashing chess world champion Garry Kasparov, has the tech finally started helping us save lives?
Bigger, healthier data
As SPHERE’s house project continues to expand, with plans to fit many more properties with sensors and smart devices, the initiative is developing additional ways of mining the data it acquires. The researchers want to produce reports that should increase in quality over time as the data itself grows richer.
Again, the key to any data collection process in healthcare is making sense of everything you have gathered, and luckily AI and machine learning technologies are beginning to make their mark on the health sector. As well as producing summary reports on the data, technologies are being developed that will autonomously evaluate and interpret it, offer recommendations, and make predictions.
Not all these developments fit the consulting-room stereotype of AI in healthcare. Barbara Han, a research scientist at the Cary Institute of Ecosystem Studies in Millbrook, New York, has been using machine learning to model and predict the geographical spread of the Ebola virus.
The problem of keeping track of the wildlife that might carry the virus, given that biologists have identified 1.6 million species of animal, is one uniquely suited to big data and machine learning. “Our algorithm can deal with incomplete data sets,” Han wrote in a report for IEEE Spectrum. “Machine learning also deals well with complexity. Ecological analyses can easily include dozens of variables, but it’s often not clear how those variables interact.
“Moreover, our approach counteracts the sampling bias that can skew the study of infectious diseases,” she says, referring to the fact that better-quality data from America and Europe should not override the more pertinent data coming in from Africa.
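Han’s published approach relies on tree-based machine learning, which copes naturally with patchy, interacting variables. As a rough sketch of the general idea (not her actual pipeline; the species traits, labels, and data below are all invented), here is how a modern gradient-boosted classifier can rank candidate reservoir species even when many trait measurements are missing:

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical species-trait matrix: rows are species, columns are
# ecological traits (body mass, litter size, range overlap, ...).
# NaNs mark traits never measured for a species -- the incomplete
# data sets the model has to tolerate.
X = rng.normal(size=(500, 12))
X[rng.random(X.shape) < 0.3] = np.nan        # ~30 percent missing
y = rng.random(500) < 0.1                    # True = known virus carrier

# Histogram-based gradient boosting handles missing values natively,
# learning which branch NaNs should follow at each split.
model = HistGradientBoostingClassifier(max_iter=200).fit(X, y)

# Rank poorly studied species by predicted carrier probability,
# giving field biologists a shortlist of what to sample first.
unstudied = rng.normal(size=(50, 12))
unstudied[rng.random(unstudied.shape) < 0.3] = np.nan
risk = model.predict_proba(unstudied)[:, 1]
print(np.argsort(risk)[::-1][:10])           # top-10 candidate indices
```

The point is that tree ensembles split on whatever traits happen to be present, so a species with half its traits unmeasured still gets a usable risk score.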
Han’s algorithm-based hunt for Ebola carriers led to the identification of a species of bat that can carry Ebola-like viruses and, as it turns out, is not confined to Africa. Public health officials might want to take note.
Even the Earl of Wessex is getting eye exams via smartphone thanks to PEEK. Credit: Getty Images
A signal in the social noise
Another unusual approach to intelligent data mining can be found in the analysis of social media. Back in 2009, the Centers for Disease Control and Prevention (CDC) in the US used a combination of traditional and social media to invite members of the public to report symptoms related to the H1N1 virus outbreak. This made it possible to monitor both the symptoms and the affected regions, helping health services direct their response.
Taking this concept a step further, researchers are currently investigating ways of using social media networks to gather information on mental health. Such data could build population models for use by social policymakers, or it could trigger red flags when cross-referenced keywords that the system has learnt to associate with suicidal tendencies appear. In a more commercial direction, machine learning could mine social media for adverse drug reactions, improving pharma companies’ ability to react quickly to side-effect alerts that might otherwise be missed.
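As a crude sketch of that last idea, the following flags posts in which a drug name co-occurs with symptom terms. Everything here, from the drug name to the symptom lexicon, is an invented placeholder, and a production pharmacovigilance system would use trained language models rather than keyword matching:

```python
import re

DRUG = "examplumab"                      # hypothetical drug name
SYMPTOMS = {"rash", "dizziness", "nausea", "headache"}

def flag_post(post: str) -> set[str]:
    """Return any symptom terms that co-occur with a drug mention."""
    words = set(re.findall(r"[a-z]+", post.lower()))
    return words & SYMPTOMS if DRUG in words else set()

posts = [
    "Week two on examplumab and the rash is back",
    "Loving the new coffee place downtown",
    "Anyone else getting dizziness with examplumab?",
]
for post in posts:
    if hits := flag_post(post):
        print(f"possible adverse reaction {sorted(hits)}: {post}")
```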
Naturally, the big question hanging over social media as a source of information is whether the raw data is any good. There are much more accurate ways of obtaining medical data, albeit for more overtly physical ailments, through the use of mobile and wearable devices. What makes wristbands and smartphones so valuable to the healthcare sector is not just their widespread use but how powerful they have become as highly connected, multi-functional computing devices.
For example, consider the purpose of regular eye examinations. These do more than just assess sight impairment; they also reveal wider health issues, including easily curable eye diseases and the onset of diabetes. Replacing bulky equipment with portable, everyday hardware, essential in locations that lack a local ophthalmologist, could bring enormous health benefits to remote areas.
Currently, a product called PEEK (the Portable Eye Examination Kit) is undergoing field trials in Kenya, Mali, Malawi, Tanzania, Botswana, Madagascar, and India, as well as at home in the UK where it was developed. It is a smartphone app with a low-cost adapter for retinal imaging that can test for core vision problems such as visual acuity and color and contrast sensitivity, and it can image and grade cataracts.
So far, PEEK is capable of highlighting obvious problems and allows an operator to take retinal photographs to send back to specialists in the lab. It lacks the autonomous analysis of an artificial intelligence that would allow it to make its own diagnosis. However, it is typical of the way in which the volume of high-quality data being collected by handheld and wearable devices continues to swell.
Diagnosing and treating cancer in the cloud
In Star Trek: Voyager, the doctor is a hologram. What form will AI-powered practitioners take?
As Barbara Han’s research with Ebola makes clear, the ability of AI-based processing to handle vast amounts of data from disparate information sources is just one advantage. AI is also particularly good at identifying patterns in the digital noise that the human eye might miss. And possibly the most complex and challenging field of patient healthcare today is the diagnosis and treatment of cancers.
Flatiron Health is one of a number of major big-data specialists to establish themselves in the oncology sector. Its OncologyCloud platform incorporates electronic health records, practice management, and—significantly for US hospitals—billing data while providing detailed patient treatment records in an analytics dashboard. Its most recent installation was at West Cancer Center, which has 10 clinic sites around Tennessee.
“We wanted an innovative, medical oncology, cloud-based electronic health record that could provide our clinicians with the oncology workflows and clinical pathways they need,” Lee Schwartzberg, the center’s executive director, tells Ars.
That is the essential final component that completes a technology environment ready for AI to thrive—the ability to access data and run applications via the cloud.
Photos of Watson in a medical setting are hard to come by. Here you can see how Watson produces a number of possible answers then chooses the one with the highest confidence. Credit: Getty Images
One night in Bangkok and the world’s your oyster
With events like Deep Blue’s chess victory or its successor Watson’s more recent win on the TV gameshow Jeopardy in 2011, AIs were introduced theatrically to audiences as huge, humming, van-sized cubes of hardware with obligatory flashing lights. As countless Cold War thrillers of the 1960s informed us, this is not a good look for an AI-based diagnostician hanging around a consulting room.
The cloud changes all of that. It is very rare that a customer would consider ordering a physical installation of IBM Watson on-site, which is just as well since there is no such thing. Today, Watson almost exclusively exists as an instance in the cloud. But what is Watson exactly? And now that its fame-garnering gameshow days are over, what is it doing for healthcare?
For health sector clients, an instance of Watson is composed of several technology attributes. The first is the ability to understand natural human language (either typed or spoken), including specific instances of medical nomenclature. The second is the ability to be trained using what is called a “corpus” of information, such as best practice guidelines, quality medical literature, details of new developments in the field, knowledge of local approaches to treatment, and so on.
This way, Watson understands a question you put to it and then runs many thousands of variations of that question against the training corpus, generating candidate answers. As such, Watson can provide recommended answers ranked by confidence.
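IBM doesn’t publish Watson’s internals, but the general shape, scoring candidate answers against a corpus and ranking them by confidence, can be sketched generically. The toy below uses TF-IDF text similarity as a stand-in for Watson’s far richer evidence scoring; the corpus and candidates are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus: best-practice snippets, each supporting one
# candidate recommendation. Real corpora span guidelines, literature,
# local treatment knowledge, and so on.
corpus = {
    "recommend treatment A": "early-stage disease in otherwise healthy patients",
    "recommend treatment B": "advanced disease or failure of prior treatment",
    "recommend watchful waiting": "slow-growing disease in elderly low-risk patients",
}

def ranked_answers(question: str) -> list[tuple[str, float]]:
    """Score each candidate's supporting passage against the question
    and return the candidates ranked by that confidence score."""
    answers, passages = zip(*corpus.items())
    vec = TfidfVectorizer().fit(list(passages) + [question])
    sims = cosine_similarity(vec.transform([question]),
                             vec.transform(passages))[0]
    return sorted(zip(answers, sims), key=lambda pair: -pair[1])

for answer, conf in ranked_answers("elderly patient with early-stage disease"):
    print(f"{conf:.2f}  {answer}")
```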
An important third attribute of a Watson instance is the machine-learning aspect. As Matthew Howard, UK head of IBM Watson Health, puts it, this is “the ability to improve the ranking of the outputs by training the system, telling it when it gets it right or wrong. You can do this by giving it case notes and patient examples, and seeing how effectively it arrives at an answer for those patients. You are training the system based on what we call the ‘ground truth’ of how that answer should be given.”
Essentially, an organization will fire up its instance of Watson and train it against its own corpus before running transactions in an appropriate way. Watson’s answers will ultimately depend upon how it has been trained and configured for a particular client. “A commitment to training is a critical part of all cognitive technologies,” Howard says. “You don’t just ingest information and you’re ready to go.”
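In the abstract, Howard’s “ground truth” training is ordinary supervised learning over clinician-labelled cases. A minimal sketch, again generic rather than IBM’s method, with synthetic features and labels:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Each row describes one (case, candidate answer) pair -- e.g. evidence
# scores produced by the retrieval stage. All values here are synthetic.
X = rng.normal(size=(300, 5))
# "Ground truth" from case notes: True if clinicians confirmed this
# candidate was the right answer for the case. We fabricate a label
# that depends on the first two features so there is a learnable signal.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300)) > 0

# Training on labelled cases teaches the model which evidence patterns
# predict a correct recommendation.
reranker = LogisticRegression().fit(X, y)

# At question time, the learned model turns raw evidence scores into a
# confidence used to reorder the candidate list, best first.
new_cases = rng.normal(size=(4, 5))
confidence = reranker.predict_proba(new_cases)[:, 1]
print(np.argsort(confidence)[::-1])   # candidate indices, best first
```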
Howard is also keen to manage expectations while stressing the intended functionality of the system. Watson is designed to present to healthcare professionals a set of ranked recommendations, not to make an outright diagnosis, and certainly not to take decisions out of their hands. A hospital could use it to help with complex diagnosis and treatment—oncology, again, is a popular application. Pharma companies could just as easily buy a Watson Discovery Advisor for R&D purposes to help make optimum use of their vast corpus of proprietary information sources.
“Cognitive technology is very good at working at volume, something that humans find difficult,” Howard says. “It also removes bias: it is not necessarily there to agree with the person using it. But in a clinical environment, you can always drill down to find the information that led Watson to make its recommendations.”
At this stage, IBM Watson Health is in commercial discussions with potential clients across the world, but IBM keeps these agreements very close to the vest. One of the few public examples we could find of an active healthcare deployment of Watson is at Bumrungrad International Hospital in Thailand where the AI helps with the treatment of cancer patients.
Watson for Oncology at Bumrungrad ingests the patient’s own personal data; the fast-growing pool of medical literature; guidelines from recognized world-class experts; and the experience of specialists at the Memorial Sloan Kettering Cancer Center in New York. In return, it outputs expert personalized treatment recommendations for consideration and use by the hospital’s own oncologists.
More such sites are likely to be announced during the course of this year, but Howard plays down the likelihood of Watson (or any future competitors) replacing a human diagnostician. Even if it were feasible, he doesn’t think it would be desirable. “I can see public Watsons but not for the diagnosis and treatment of patients,” Howard says. “It would not be right and proper to ask these questions outside of an engagement of a professional. And we need doctors to treat our patients.”
Howard notes that simply asking a computer to dole out treatment unchallenged would contravene local healthcare laws wherever you go. On the other hand, he imagines a day very soon when “patients will begin to see, as part of their consultation process, the treatment recommendations augmented by these technologies.” This, one hopes, would stop them from idly searching the Web for their symptoms.
“Actually, one of the use cases for a Watson advisor is as a second opinion,” Howard says. “A Watson solution, appropriately trained, can give you an instantaneous second opinion.”
Otherwise, Howard regards Watson as “augmented” rather than “artificial” intelligence, and he sees strong potential for sharing information between different instances of Watson to enrich each other’s quality of learning. For example, sharing safety signal detection in the treatment of patients—taking away the resource-intensive human labor of tracking real-world adverse events, reading literature and reports, and writing summaries—could be of particular interest to those in life-science organizations.
The potential power of artificial (or augmented) intelligences sharing information was thrust into the spotlight just this month. AlphaGo, developed by DeepMind, honed its Go-playing skills by playing against other instances of itself. Likewise, another group of researchers recently reported that 14 AI-controlled robot arms had learned how to pick objects up by watching and learning from one another’s failed attempts.
How do humans feel about being treated by computers?
According to research released this month by Ipsos MORI on behalf of the Wellcome Trust, it seems that humans are mostly fine with computers helping with their medical care, for now. Of course, there are some reservations. The research found that a slight majority of people (53 percent) support the idea of patient data being used by commercial organizations for research, though that support depends on the type of organization. Hospitals and charities were seen as good; Big Pharma, not so much.
Itself a healthcare big-data proponent, the Wellcome Trust is an active participant in the Expert Advisory Group on Data Access (EAGDA), a group it convened with the Medical Research Council, Economic and Social Research Council, and Cancer Research UK. While the likes of IBM Watson might provide very good reasons for ingesting, storing, and sharing patient data, the Wellcome Trust’s current position is cautious rather than evangelical.
“People are naturally cautious about the complexities and sensitivities surrounding their personal information, especially where these may not have been fully explained to them in the past,” says Nicola Perrin, Wellcome’s head of policy. “We must make sure that there are no surprises for people about how their data could be used, especially by commercial organizations, and to do this it is critical that the government, the NHS, and researchers work together to communicate and engage the public.”
One last emerging technology might help the public make up its mind: the ability to add a human face to the augmented intelligence. Faces are much easier to interact with than text readouts or robots, even when the faces are themselves computer-generated, according to research from the likes of the Auckland Face Simulator Project in New Zealand.
For the moment, these faces are being tested for use as virtual assistants and personal education systems, but adding an avatar to AI healthcare provision seems a perfect fit. Regardless of whether or not the machine ever becomes what a patient ultimately interacts with, AI has clearly already made an impact and solidified its place in our overall approach to health. The only lingering question is how great that impact turns out to be.
Alistair Dabbs is a London-based freelance technology journalist, author, and columnist most commonly specializing in digital imaging and responsive publishing despite his print industry roots.