Certainly impressive, and it looks like a real leap forward even with the errors that occurred. This is of course an enormous task for any computer, but to achieve success even in certain instances is extremely impressive and very exciting. Here we are, 13 years on from Deep Blue’s famous feat of beating Garry Kasparov at chess. The New York Times featured this in the magazine over the weekend. As they point out, this is approaching the innovation we have seen on Star Trek:
The computer on Star Trek is a question-answering machine; it understands what you’re asking and provides just the right chunk of response that you needed. When is the computer going to get to a point where the computer knows how to talk to you?
Well, it seems we have stepped a lot closer to the Hollywood vision that has been in place since 1966. In fact I have been making this point for a number of years: we have been fooled into believing speech recognition achieved much more than recognizing words. Spock’s original interaction with the computer in 1967
“Computer, compute to the last digit the value of pi” — Spock (Wolf in the Fold)
was asking for much more than just speech recognition: it required comprehension, and then action based on that comprehension.
Over time we have seen many instances, but the challenge of comprehension is brought home in Star Trek IV: The Voyage Home, when Scotty discovers that speaking to a computer and expecting it to understand was beyond its capabilities:
As we see (even in Hollywood), computers continue to struggle with the complexity of language (Direction Unclear):
But with Watson’s success in what is a good analogy for the complexity of human language, we are approaching the point of genuine interaction with technology, and as some of the contestants intimated:
Several made references to Skynet, the computer system in the “Terminator” movies that achieves consciousness and decides humanity should be destroyed. “My husband and I talked about what my role in this was,” Samantha Boardman, a graduate student, told me jokingly. “Was I the thing that was going to help the A.I. become aware of itself?”
I think we are still a ways away from this, but the change in approach matters: rather than trying to teach computers all the variations of data and linkage, we allow the system to “learn” by feeding in data and creating algorithms that link data statistically for future inference.
Much like the challenge in medicine, Watson applies extensive knowledge that has been previously analyzed and stored and, importantly, applies multiple algorithms to come up with a stack-ranked list of answers. In fact, of all the predictive systems available, those that take multiple predictions from different sources and then select the most frequent tend to be the most accurate.
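As a minimal sketch of that most-frequent-answer idea (the model outputs below are invented purely for illustration):

```python
from collections import Counter

def most_frequent_prediction(predictions):
    """Majority vote: pool the predictions from several independent
    sources and return the answer that occurs most often."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical outputs from three different predictive models.
vote = most_frequent_prediction(["pneumonia", "bronchitis", "pneumonia"])
```

The pooled answer wins not because any one model is trusted, but because independent sources agree on it.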
Watson’s speed allows it to try thousands of ways of simultaneously tackling a “Jeopardy!” clue. Most question-answering systems rely on a handful of algorithms, but Ferrucci decided this was why those systems do not work very well: no single algorithm can simulate the human ability to parse language and facts. Instead, Watson uses more than a hundred algorithms at the same time to analyze a question in different ways, generating hundreds of possible solutions. Another set of algorithms ranks these answers according to plausibility; for example, if dozens of algorithms working in different directions all arrive at the same answer, it’s more likely to be the right one. In essence, Watson thinks in probabilities. It produces not one single “right” answer, but an enormous number of possibilities, then ranks them by assessing how likely each one is to answer the question.
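The passage above can be sketched in miniature: many independent scorers each propose answers with a confidence, and answers that several scorers converge on accumulate the highest combined plausibility. The scorers here are toy stand-ins, not Watson’s actual algorithms.

```python
from collections import defaultdict

def rank_candidates(clue, scorers):
    """Pool evidence from many independent scorers: each proposes
    (answer, confidence) pairs, and agreement across scorers pushes
    an answer up the ranking."""
    totals = defaultdict(float)
    for scorer in scorers:
        for answer, confidence in scorer(clue):
            totals[answer] += confidence
    # Produce a ranked list of possibilities, not one "right" answer.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Two toy scorers standing in for Watson's hundred-plus algorithms.
scorer_a = lambda clue: [("Toronto", 0.2), ("Chicago", 0.7)]
scorer_b = lambda clue: [("Chicago", 0.6), ("Boston", 0.3)]
ranked = rank_candidates("a Jeopardy! clue", [scorer_a, scorer_b])
```

Because "Chicago" draws support from both scorers, it rises to the top of the list even though neither scorer alone was fully confident in it.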
Thinking about this system and its application to medicine, we are stepping ever closer to the analysis of multiple inputs: signs, symptoms, and subsequently examination, laboratory testing, and imaging. A number of years ago I saw a similar solution, in very basic form, that analyzed inputs as they arrived and started to produce a short list for differential diagnosis. The limitations at the time related to computing power and inputs and, to some degree, the capture of knowledge in a form that could then be used. Watson turns this process on its head, providing a means to input knowledge in large quantities that can then be analyzed, cataloged, and applied. There remains the question of what is valid information that can and should be accepted, but even with this problem, automatically processing the rapidly expanding knowledge base provides a means to help clinicians who today do not have the time to process all the moves/adds/changes to the clinical corpus of knowledge:
The problem right now is the procedures, the new procedures, the new medicines, the new capability is being generated faster than physicians can absorb on the front lines and it can be deployed
I don’t see call centers being the route of interaction; much more likely it will serve as an adjunct tool, providing guidance and short lists of differential diagnoses to clinicians at the point of care, along with the steps (additional history, examination, or investigation) that can help rule out or confirm the various choices. This may not be a patient-level tool, but as an adjunct to clinical knowledge it is likely to offer significant support to clinical care and help improve the diagnosis and treatment of patients.
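A toy sketch of such a differential short list, scoring candidate diagnoses by the weight of the findings observed so far. The conditions, findings, and weights are invented purely for illustration, not clinical content:

```python
# Hypothetical knowledge base mapping diagnoses to typical findings
# and evidence weights; a real system would derive these from the
# analyzed clinical corpus described above.
KNOWLEDGE = {
    "influenza": {"fever": 0.9, "cough": 0.8, "myalgia": 0.7},
    "strep throat": {"fever": 0.8, "sore throat": 0.9},
    "common cold": {"cough": 0.6, "sore throat": 0.5},
}

def differential(findings):
    """Score each diagnosis by the evidence its expected findings
    contribute, producing a ranked short list rather than one answer."""
    scores = {
        dx: sum(weights.get(f, 0.0) for f in findings)
        for dx, weights in KNOWLEDGE.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# As new findings arrive, the short list re-ranks.
shortlist = differential({"fever", "cough"})
```

Each new history item, examination finding, or test result simply re-scores the list, which is the adjunct, point-of-care role described above.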
Combine this with a speech recognition tool that accurately renders the clinical data, and you have some level of real-time, evidence-based medicine that will revolutionize healthcare. DoctorNet will become self-aware… very soon.
You can also follow me here on Medium, on Twitter, or on Facebook, or sign up to receive my posts each week.