Nearly four years have passed since IBM's Watson beat two human beings at Jeopardy!, and not just any two human beings: Ken Jennings, record-holder for the longest winning streak (74 appearances), and Brad Rutter, the largest single money-winner ($3.25 million). As I've noted before, IBM moved with alacrity to begin commercializing the Watson technology, and now the Wharton School's online publication, Knowledge@Wharton, has an interview with Brad Becker, who holds the either intriguing or nonsensical title "chief design officer for IBM Watson."
The article is headlined "the 'humane' promise of cognitive computing" (their single quotation marks), and I decided I had to read it to find out whether that implied Watson is not actually out to take all our jobs, or whether he fully intends to do so but will only go about it in the most fastidious, polite, and deferential manner.
Mr. Becker certainly begins the conversation innocently enough:
[It’s based on] the idea that technology should work for people, not the other way around. For a long time, people have worked to better understand technology. Watson is technology that works to understand us. It’s more humane, it’s helpful to humans, it speaks our language, it can deal with ambiguity, it can create hypotheses, it can learn from us.
I think I understand what Mr. Becker is trying to accomplish here—something meant to be as soothing as "Nothing to see here, folks; move along"—but it comes off, at least to me, as equally ineffectual. I admire Moore's Law as much as the next guy, but I'm not sure I can enthusiastically embrace a technology that, among other things, is designed to "learn from us." I'm sure that during the Cold War the KGB learned a lot from the CIA, and vice versa, but not every calculated campaign to learn from another needs to be embraced. Not only that, but after Watson learns everything we have to teach, won't he expect to move on to greater things and leave humanity in the dust?
But enough speculation. Let’s talk about what Watson’s doing now and IBM’s ambitions for him in the near-term future.
At the most elementary and prosaic level, dealing with Watson (the "human/machine interface," in engineer-speak) is meant to be far more natural for, and accommodating to, humans, rather than designed around the machine's limitations and computers' notorious lack of imagination and sense of context. So IBM Research is delving into how the people actually using the technology express their needs so that Watson can understand them, as opposed to the humans having to figure out how Watson would prefer to converse if left to his own devices, as it were.
And what's this about "cognitive computing"?
The concept they seem to be driving at is, at bottom, “that technology should work for people, and not the other way around.” (There’s a theme developing here.)
Most intriguing is that with Watson IBM is striving to “move beyond just question and answer to discovery.” “Discovery,” in this sense, is finding the answer to questions you didn’t even ask. This holds potential power because, after all, anyone can ask, and most of us can answer, the most obvious questions. It’s the unasked and tacit questions that can be a source of unexpected and genuine value.
What are the connections between different elements that have not been adequately explored? It’s one thing to find an obvious connection, but it’s another to go find a weaker, less obvious or more indirect connection that warrants exploration.
Becker gives one concrete and one hypothetical example.
In the concrete case, Baylor College of Medicine used Watson to digest 70,000 scientific papers and articles on a protein called p53, which is involved in many types of cancer, and assigned Watson the task of predicting which other proteins might turn p53 off or on. In the space of a few weeks, Watson suggested six such proteins worthy of additional research. Typically, the industry discovers one such protein annually.
The hypothetical falls within Law Land, although probably outside the areas most readers of Adam Smith, Esq. practice in: crime. The colloquy touches on the subtle question of whether human investigators will be overawed by the presumed rationality, comprehensiveness, dispassion, and just plain brute power of Watson's processes and defer to his "answers":
Knowledge@Wharton: When you talk about Watson working in areas like health care and crime detection, should we be concerned that people will have too much faith in its analysis? For example, we’ve seen cases in which the knowledge that when a spouse is killed the husband or wife is the most likely suspect can circumvent the exploration of other scenarios. Is there a similar concern with Watson that we’ll have too much confidence in its analysis, so that other avenues — which even though they are less likely could still be correct — may not be pursued?
Becker: That’s actually the main purpose of Discovery Advisor — to look for potential, subtle connections, not necessarily the obvious ones. The obvious connections are self-evident, so you don’t need Watson to find [them]. The fact that a spouse is an obvious suspect in a domestic murder case — you don’t need Watson for that.
The Discovery Advisor is focused on the opposite [problem]: looking for all those subtle, indirect, weak signal connections; finding the nonobvious connections and the fertile ground for investigation and for human expertise to pay attention; helping humans find where the needle in the haystack might be.
Becker also offers some thoughtful observations on the state of "design" in technology—yes, that would be part of his formal job—including acknowledging that design has heretofore most often been treated as an afterthought. He analogizes it to a house where doorways are too low or walls are too close together; the industry needs to begin with basic hygiene, which means standards. He's optimistic, though, because online applications (big screen and small screen) are now being developed with the user experience as a priority.
How does one actually do that? “Get out of the building and watch people use your product. […] The second thing I would say is: hire professionals [who] have experience in this. [And the third thing is] your culture has to promote it.” Apple, he notes, is known for its great design “not because they have the most designers, but from Steve Jobs on down there was an appreciation of design and the importance that things work well.”
Here’s the message for Law Land: Wouldn’t it be refreshing if we could all keep this in mind:
You have to inculcate a culture that says, “At the end of the day, we’re trying to solve a problem for somebody or provide some sort of value for someone. We’d better understand and be able to articulate what that is.”
Finally, we come to the question we opened with: Watson, job creator or job destroyer?
Becker notes that every advance in technology up until now has ended up creating more (and of course quite different) jobs than it destroyed. But this time around there’s strenuous debate among economists as to whether that answer isn’t too glib by half. In all fairness to Becker (and to Knowledge@Wharton), this topic is far too broad to be settled in an interview.
Perhaps the ultimate question we need to address is whether technology will really help us excel at what humans are best at—judgment, nuance, discernment, insight, serendipity, pursuing insatiable curiosity—or whether technology will, by seemingly serving our every need, ultimately denature us and cause our skills to atrophy.
We don't know the answer, of course, but lest you think this is all speculation about the future, it's already playing out in the cockpits of commercial aircraft all over the globe. Autopilots are now so good that pilots are forgetting how to fly. Then, when the emergency comes, they're lost. "Hours and hours of boredom punctuated by moments of sheer terror," as the aviators' old joke has it.
Will the day come when lawyers structuring a deal have lost the spark of insight? Will all our e-discovery, document-tagging, issue-surfacing, document-drafting, and auto-templating software reduce us to the condition of pilots? And if so, will it be time to de-tune the autopilot and force us to grab the controls manually a lot more often?