June 23 was the 100th birthday of Alan Turing, and I was delinquent not to have written about him closer to the date, but I was searching (excuses, excuses) for the appropriate BigLaw hook.

Google took the occasion to honor him with one of its famous “Google doodles.”

Don’t recognize the name?  Well, you are using a machine based on his key insights to read Adam Smith, Esq. at this very instant, as am I in composing and publishing this piece.

Turing was born in London in 1912 and died by his own hand (cyanide poisoning) on June 7, 1954, aged 41, in Cheshire, England.  In his short life, among many other things, he:

  • graduated with first-class honors in mathematics from King’s College, Cambridge, where he was also elected a fellow at age 22
  • studied at the Institute for Advanced Study and obtained his PhD from Princeton
  • was a leading figure in the British code-breaking effort during WWII, headquartered at Bletchley Park, which ultimately broke the Germans’ “Enigma machine” cryptography, for which he earned the OBE in 1945 (although both the award and the work for which he earned it remained secret for many years)
  • presented a paper in 1946 detailing the first design of a stored-program computer, and
  • proposed an experiment called the “Turing test,” the gold standard of artificial intelligence to this day, which defined a machine as “intelligent” if a human being conversing with it could not tell it apart from another human being—to date no machine has passed the Turing test.

Tragically, homosexuality was criminal in the United Kingdom in 1952, when Turing acknowledged a sexual relationship with a 19-year-old man and pleaded guilty to a charge of gross indecency. The conviction cost him his security clearance and, with it, any chance of doing further cryptographic work for the British government.  Prime Minister Gordon Brown officially apologized on behalf of the British government in 2009 for “the appalling way he was treated.”

But this isn’t actually an essay about Alan Turing; it’s about computers’ ability to embody competence without comprehension, and it’s dedicated to all of you who subscribe to the belief that computers will never be able to (pick one or more):

  • find essentially duplicative documents in an e-discovery warehouse (a toy sketch of this one follows right after this list);
  • identify key individuals, entities, times, and places;
  • sort documents for privilege;
  • sort them for relevance;
  • extract key concepts;
  • identify and present sets of documents appropriate for preparing particular witnesses;
  • and so on and on up the food chain.
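For the skeptics, it is worth seeing how humble the core of the first item on that list can be. What follows is a toy sketch in Python, written purely for illustration; the function names and sample documents are mine, and real e-discovery platforms layer far more sophisticated techniques (minhashing, semantic clustering, trained classifiers) on top of the same basic move: turn text into overlapping word “shingles” and measure how much two documents’ shingles overlap.

```python
# Toy near-duplicate detector: compare documents by the overlap of their
# k-word "shingles."  Illustrative only; production e-discovery tools use
# far more sophisticated variants of this idea.

def shingles(text, k=3):
    """Return the set of k-word sequences ("shingles") in a document."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Overlap between two shingle sets: 1.0 = identical, 0.0 = disjoint."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

doc1 = "The parties agree to settle all claims arising from the merger"
doc2 = "The parties agree to settle all claims arising out of the merger"
doc3 = "Lunch on Tuesday at the usual place around noon perhaps"

s1, s2, s3 = shingles(doc1), shingles(doc2), shingles(doc3)
print(round(jaccard(s1, s2), 2))   # substantial overlap: likely near-duplicates
print(round(jaccard(s1, s3), 2))   # 0.0: unrelated documents
```

A program like this has no idea what a merger or a settlement is; it just counts overlapping strings, and yet it reliably flags the pair a human reviewer would also flag. Hold that thought.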

Step back. I mentioned that in 1946 Turing presented a paper detailing the design of a stored-program computer, but his key insight had come 10 years earlier, in a 1936 paper on computable numbers, and it took the form of this simple declarative sentence:

It is possible to invent a single machine which can be used to compute any computable sequence.

Not only did Turing declare this to be true, he proceeded to show exactly how to make such a machine.

Now, things called “computers” existed before Turing’s insight, but they were human beings, people with enough skill at math, patience, and punctiliousness about their work to generate reliable results from  hours of computation, day after day.  (Not surprisingly, many were women.)  Turing was about to turn on its head the assumption that difficult problems required the persistent application of a highly educated intelligence to solve them, much as Darwin had turned on its head a century earlier the (sacred) assumption that it required an omniscient and omnipotent Creator to deliver up everything from man to cocker spaniels to honeybees to algae.

Before Darwin, and before Turing, the world was top-down; after them it was bottom-up. The Atlantic published a nice piece outlining just this thesis about 10 days ago, in which they quoted Robert Beverley MacKenzie (a resolute 19th-Century critic of Darwin):

In the theory with which we have to deal, Absolute Ignorance is the artificer; so that we may enunciate as the fundamental principle of the whole system, that, in order to make a perfect and beautiful machine, it is not requisite to know how to make it. This proposition will be found, on careful examination, to express, in condensed form, the essential purport of the Theory, and to express in a few words all Mr. Darwin’s meaning; who, by a strange inversion of reasoning, seems to think Absolute Ignorance fully qualified to take the place of Absolute Wisdom in all the achievements of creative skill.

Although the good Mr. MacKenzie meant this as parody of the highest order, he has actually summarized quite pithily a key insight of evolution—and for that matter anticipated by about 80 years Turing’s theory of computing machines. We could paraphrase MacKenzie and put the following words into Turing’s mouth: In order to make a perfect and beautiful computing machine, it is not necessary to know arithmetic.

This is the concept of “competence without comprehension.”

Now, to be sure, it is counterintuitive to the point of being positively odd to postulate that a heedless, oblivious, purposeless process (evolution) can beaver away across the millennia and end up yielding shockingly complex, efficient, and nuanced creatures without ever having had an inkling of understanding what it was up to—or consciousness that it was “up to” anything at all.  Some people still can’t get their heads around it.

And of course nothing could be further from the way we educate and train ourselves, striving precisely for ever greater, wider, broader, deeper, and more subtle comprehension, hopefully over an entire lifetime. That, I hasten to add, is resoundingly the right approach for our species. But it’s not the way a computer, or evolution, works.

Computers—be they human or silicon—take the symbols in front of them at any given moment and, depending on the content and arrangement of those symbols, perform an operation upon them, which in turn generates a new set of symbols. Repeat.  After Turing described this technique as the way the human computers performed their tasks, he wrote, disarmingly:

We may now construct a machine to do the work of this computer.
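It is worth seeing just how little machinery that sentence requires. Here is a minimal sketch of such a machine in Python (my illustration, not Turing’s notation): a fixed read-write-move loop whose entire behavior lives in a rule table handed to it as data. Hand the same loop a different table and it computes a different sequence, which is the germ of the “single machine” claim quoted earlier. The table here is loosely in the spirit of the first example in Turing’s 1936 paper, a machine that prints 0 and 1 alternately, forever.

```python
# A bare-bones Turing-style machine: read a symbol, look up a rule,
# write, move, change state, repeat.  The loop never changes; only the
# rule table does -- which is why a single machine can compute any
# computable sequence, given the right table.

from collections import defaultdict

# rules[(state, symbol_read)] = (symbol_to_write, move, next_state)
rules = {
    ("A", " "): ("0", +1, "B"),
    ("B", " "): (" ", +1, "C"),
    ("C", " "): ("1", +1, "D"),
    ("D", " "): (" ", +1, "A"),
}

tape = defaultdict(lambda: " ")   # an unbounded tape of blank squares
state, head = "A", 0

for _ in range(20):               # run 20 steps instead of forever
    write, move, state = rules[(state, tape[head])]
    tape[head] = write
    head += move

print("".join(tape[i] for i in range(max(tape) + 1)))   # 0 1 0 1 0 ...
```

Nothing in that loop “knows” what 0 and 1 mean, or that it is producing a sequence at all; it just does what the table says, step after step.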

But wait! The computer program executing these instructions had to have been written by a human being exercising creativity and judgment, right? Haven’t we just relocated the role of “real” intelligence up a level, rather than stripping it out of the picture altogether?

Actually, plausible as it sounds at first blush, that objection altogether misses the point of the mechanisms of evolution and of computing. What if the computer could reprogram itself and thereby “learn”? (“Learn,” in this sense, means improve itself.) Think of it this way:

[Turing] saw clearly that all the versatility and self-modifiability of human thought — learning and re-evaluation and, ultimately, language and problem-solving, for instance — could in principle be constructed out of these building blocks. Call this the bubble-up theory of mind …

Turing, like Darwin, broke down the mystery of intelligence (or Intelligent Design) into what we might call atomic steps of dumb happenstance, which, when accumulated by the millions, added up to a sort of pseudo-intelligence. The Central Processing Unit of a computer doesn’t really know what arithmetic is, or understand what addition is, but it “understands” the “command” to add two numbers and put their sum in a register — in the minimal sense that it reliably adds when called upon to add and puts the sum in the right place. Let’s say it sorta understands addition. A few levels higher, the operating system doesn’t really understand that it is checking for errors of transmission and fixing them, but it sorta understands this, and reliably does this work when called upon. A few further levels higher, when the building blocks are stacked up by the billions and trillions, the chess-playing program doesn’t really understand that its queen is in jeopardy, but it sorta understands this, and IBM’s Watson on Jeopardy sorta understands the questions it answers.
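To make “sorta understands addition” concrete, here is a toy version of what sits at the very bottom of that stack (my own illustration, not anything from the Atlantic piece): addition assembled from nothing but one-bit logical operations, the software analogue of the adder circuits etched into a CPU. No single step knows what a number is; the competence lives entirely in the arrangement.

```python
# Addition built out of "dumb" one-bit operations (a ripple-carry adder).
# No individual step knows what a number is, yet stacked together the
# steps reliably add -- competence without comprehension in miniature.

def full_adder(a, b, carry_in):
    """One column of binary addition, expressed as bare boolean logic."""
    s = a ^ b ^ carry_in                          # sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))    # carry bit
    return s, carry_out

def add(x, y, width=8):
    """Add two small non-negative integers by rippling a carry through
    `width` one-bit full adders, much as simple hardware does."""
    result, carry = 0, 0
    for i in range(width):
        bit_x = (x >> i) & 1
        bit_y = (y >> i) & 1
        s, carry = full_adder(bit_x, bit_y, carry)
        result |= s << i
    return result

print(add(19, 23))   # 42
```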

The strength of thinking this way (using the “sorta” formulation, which is the Atlantic author’s, along with the emphasis) is that it helps us understand the actually quite profound mystery of how a human brain, composed of billions of neurons, and nothing but billions of neurons, can somehow spontaneously give rise to consciousness, Versailles and the French Revolution, the Metropolitan Museum of Art, Wagner’s Ring Cycle, the Constitution, capitalism, or the theories of evolution and computing themselves.

Maybe it’s simpler or more accessible to approach this using the evolution paradigm in lieu of the consciousness paradigm.

The Atlantic author puts it this way:

Before there were bacteria there were sorta bacteria, and before there were mammals there were sorta mammals and before there were dogs there were sorta dogs, and so forth.

Now let’s get back to Law Land (or, you might be thinking, reality).

I would like to conclude by urging you to accept the inevitability that computer-driven e-discovery, document generation, and knowledge management are only going to get more and more capable and sophisticated. There is no limit to what Turing machines can do.

And, coincidentally, in the same week as the 100th anniversary of Turing’s birth, Google announced that it had exposed a neural network running on 16,000 computer processors to YouTube, and that the network had spontaneously learned how to identify the image of a cat, without being told what a cat was, or what to look for at all.

“It is worth noting that our network is still tiny compared to the human visual cortex, which is a million times larger in terms of the number of neurons and synapses,” the researchers wrote.

Although that network is dwarfed by the immense scale of biological brains, the Google research provides new evidence that existing machine-learning algorithms improve greatly as the machines are given access to large pools of data.

“The Stanford/Google paper pushes the envelope on the size and scale of neural networks by an order of magnitude over previous efforts,” said David A. Bader, executive director of high-performance computing at the Georgia Tech College of Computing. He said that rapid increases in computer technology would close the gap within a relatively short period of time: “The scale of modeling the full human visual cortex may be within reach before the end of the decade.”
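Google’s network itself is far beyond anything that fits in a blog post, but the underlying idea, a program discovering structure in data nobody labeled for it, can be shown with a much humbler algorithm. The sketch below uses plain k-means clustering (not Google’s method) on made-up two-dimensional points standing in for YouTube frames; the program is never told what the groups are, or even that there are groups, yet it finds them.

```python
# A humble cousin of the Google/Stanford result: discovering groups in
# unlabeled data.  Plain k-means on toy 2-D points, nothing like their
# network, but the same spirit -- nobody tells the program what the
# clusters "are," or even that they exist.

import random

def squared_distance(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def kmeans(points, k, iterations=20):
    centers = random.sample(points, k)            # random initial guesses
    for _ in range(iterations):
        # assign each point to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: squared_distance(p, centers[i]))
            clusters[nearest].append(p)
        # move each center to the average of the points assigned to it
        for i, cluster in enumerate(clusters):
            if cluster:
                centers[i] = tuple(sum(coord) / len(cluster) for coord in zip(*cluster))
    return centers

# two unlabeled "blobs" of synthetic data
random.seed(1)
blob_a = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100)]
blob_b = [(random.gauss(8, 1), random.gauss(8, 1)) for _ in range(100)]

print(kmeans(blob_a + blob_b, k=2))   # one center near (0, 0), one near (8, 8)
```

The Google system replaced those toy points with millions of video frames and the simple averaging step with a very large neural network, but the family resemblance is real: scale the data and the machinery up far enough and “sorta recognizing a cat” falls out.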

The implications are yours to draw; just wanted you to know.
