Nearly four years have passed since IBM’s Watson beat two human beings at Jeopardy, and not just any two human beings: Ken Jennings, record-holder for the longest winning streak (74 appearances), and Brad Rutter, the largest all-time money-winner ($3.25 million). As I’ve noted before, IBM moved with alacrity to begin commercializing the Watson technology, and now the Wharton School’s online publication, Knowledge@Wharton, has an interview with Brad Becker, who holds the either intriguing or nonsensical title “chief design officer for IBM Watson.”
The article is headlined “The ‘Humane’ Promise of Cognitive Computing” (their single quotation marks), and I decided I had to read it to find out whether that implied Watson is not actually out to take all our jobs, or whether he fully intends to do so but will only go about it in the most fastidious, polite, and deferential manner.
Mr. Becker certainly begins the conversation innocently enough:
[It’s based on] the idea that technology should work for people, not the other way around. For a long time, people have worked to better understand technology. Watson is technology that works to understand us. It’s more humane, it’s helpful to humans, it speaks our language, it can deal with ambiguity, it can create hypotheses, it can learn from us.
I think I understand what Mr. Becker is trying to accomplish here: something meant to be as soothing as “Nothing to see here, folks; move along.” But it comes off, at least to me, as equally ineffectual. I admire Moore’s Law as much as the next guy, but I’m not sure I can enthusiastically embrace a technology that, among other things, is designed to “learn from us.” I’m sure that during the Cold War the KGB learned a lot from the CIA, and vice versa, but not every calculated campaign to learn from another needs to be embraced. Not only that, but after Watson learns everything we have to teach, won’t he expect to move on towards greater things and leave humanity in the dust?
But enough speculation. Let’s talk about what Watson’s doing now and IBM’s ambitions for him in the near-term future.
At the most elementary and prosaic level, dealing with Watson (the “human/machine interface,” in engineer-speak) is meant to be far more natural for, and accommodating to, humans, rather than being designed around the machine’s limitations and computers’ notorious lack of imagination and sense of context. So IBM Research is delving into how the people actually using the technology express their needs, so that Watson can understand, as opposed to the humans having to figure out how Watson would prefer to converse if left to his own devices, as it were.
And what’s this about “cognitive computing”?
The concept they seem to be driving at is, at bottom, “that technology should work for people, and not the other way around.” (There’s a theme developing here.)
Most intriguing is that with Watson IBM is striving to “move beyond just question and answer to discovery.” “Discovery,” in this sense, is finding the answer to questions you didn’t even ask. This holds potential power because, after all, anyone can ask, and most of us can answer, the most obvious questions. It’s the unasked and tacit questions that can be a source of unexpected and genuine value.
What are the connections between different elements that have not been adequately explored? It’s one thing to find an obvious connection, but it’s another to go find a weaker, less obvious or more indirect connection that warrants exploration.
Becker gives one concrete and one hypothetical example.
In the concrete case, Baylor College of Medicine used Watson to digest 70,000 scientific papers and articles on a protein called p53, which is involved in many types of cancer, and assigned Watson the task of predicting which other proteins might turn p53 on or off. In the space of a few weeks, Watson suggested six such proteins worthy of additional research. Typically, industry discovers one such protein annually.
We already are familiar with some aspects of computerized “learning,” as in voice recognition: the more you work with the program, the better it becomes at correctly recognizing words, punctuation, etc. in your voice and accent. But there is an area of this that overlaps with your excellent concern about people becoming overly reliant on computer output. [This is a huge problem in my work, where computerized model results are implicitly granted virtually oracular status.] All of us have prejudices, and almost all of us are prone to confirmation bias. As a computer “learns” from us, there is a good possibility that its estimation of the probabilities of answers will drift to align with the responses we give during the ongoing data analysis. It would be interesting to learn how Watson addresses this issue, so that the power of “learning” is coupled with some powers of “skepticism.”
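The drift worry above is easy to demonstrate with a toy model. The sketch below is purely illustrative and has nothing to do with Watson’s actual algorithms: a learner keeps Laplace-smoothed approval counts for two candidate answers and updates them from user feedback. If the user approves their favoured answer most of the time regardless of the truth, the learner’s “confidence” ends up mirroring the user’s bias rather than reality.

```python
import random

def update(counts, answer, approved):
    """Record one piece of user feedback for a candidate answer."""
    approvals, total = counts[answer]
    counts[answer] = (approvals + (1 if approved else 0), total + 1)

def probability(counts, answer):
    """Laplace-smoothed estimate of how 'right' the learner thinks an answer is."""
    approvals, total = counts[answer]
    return (approvals + 1) / (total + 2)

# Two candidate answers; suppose B is objectively the better one.
counts = {"A": (0, 0), "B": (0, 0)}

# A biased user approves A 90% of the time and B only 20% of the time,
# independent of which answer is actually correct.
random.seed(0)
for _ in range(200):
    update(counts, "A", approved=random.random() < 0.9)
    update(counts, "B", approved=random.random() < 0.2)

# The learner's estimates now mirror the user's bias, not reality:
# A scores high and B scores low.
print(probability(counts, "A"))
print(probability(counts, "B"))
```

Without some built-in skepticism (weighting feedback against independent evidence, for instance), any feedback-driven learner is vulnerable to exactly this kind of drift.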