Belatedly, some of you might opine, it is time to make an introductory contribution to the great debate about the impact of Generative AI on the practice, and more importantly the business and revenue/expense model, of BigLaw.  (After all, this is Adam Smith, Esq. and not Thomson Reuters.)  By way of explanatory mitigation for the delay, it is an enormous topic, certainly the most daunting, profound, and complex to have arisen since I founded ASE over two decades ago, and I have been waiting to see published a cogent and comprehensive article on the topic which I would be able to point you towards as an introductory survey piece.  Sorry to report I have not yet seen any such article.  So let’s just jump in.

I anticipate that my approach in this series will be to provide selectively edited extracts from some of the leading (insightful, reflective) writers on this topic to provide you all with a commonly shared baseline of commentary before I offer my own views.

There are few better places to start than with Richard Susskind, who over the past few decades has published an engaging and nearly comprehensive body of work on a wide array of topics at the intersection of law and society.  In this Part 1 I will give you a window into his latest book, How to Think About AI: A Guide for the Perplexed (Oxford University Press: 2025). [See footnote below.]  These extracts are not presented in any particular order but my hope is that by the time you get to the end of this column your mind will be spurred into your own internal conversation pursuing a cogent view of your own on this new frontier.

There is no apparent finishing line in the global competition to develop digital technologies. No one in the world’s leading tech companies or in research laboratories in the US, China, or South Korea is expecting the tech job to be over soon. There’s no final destination in sight. Rather, it’s a relentless race without end. 

Many, if not most, professional workers regard themselves as artists in their own craft. They look upon their work as the very embodiment of what machines will never be able to do. All manner of biases and dissonances are going on here, but the undeniable fact is that the professions and the white-collar workforce are overflowing with those who believe that AI has massive potential but “not for us.”

 

In the context of AI, by way of illustration, some leading professional firms – lawyers, accountants, and consultants – are now recognizing what this will mean for them in the long run. They see that their main competition in the years ahead will not be with one another. Instead they will be vying with AI-empowered organizations that will be able to undertake a wide range of tasks and activities without engaging professional advisors. Or AI businesses may develop systems that eliminate the need for professional work altogether.

 

Massively capable systems will displace humans here and elsewhere. And it is dawning on firms that if they do not build the systems that replace them, then others certainly will. This self-disruption may seem like self-cannibalization, but if there’s going to be cannibalization, it’s best to be first to the feast. Astute leaders can see that this self-disruption cannot be brought about from within. They need to develop self-disruptive and self-destructive systems and services from entirely different structures that are nimbler, that are heavily populated by technologists, that are managed and capitalized quite differently from traditional firms, and that are focused on licensing products and solutions rather than charging for human services in six-minute units.

 

It’s much easier to talk about and dedicate resources to the risks rather than to the opportunities. It’s tempting to succumb to technological myopia, that is, to misjudge the future potential of systems by fixating on today’s limitations, and so to be inclined to underinvest. Under the everyday pressures of keeping major organizations up and running, leaders tend to focus their energies on short-term wrinkles rather than long-term seismic shifts, kicking commitment to AI down the road.  The “not us” lobby, too, will inhibit progress in many areas, insisting that AI systems cannot take on the work they do and discouraging us from trying.

 

To answer all the questions thrown up by “what if AGI?” thinking, we should engage our best philosophers. More, we must draw on centuries of our finest philosophical thinking. We need to be guided by Plato, Aristotle, and Kant rather than, with respect, Sam Altman, Elon Musk, and Mark Zuckerberg.

“Plato, Aristotle, and Kant?”  Srsly?  Yes, seriously.  Are you at your firm assigning the design of your strategic roadmap in case of “what if AGI?” to your very best thinkers?  If not, do you think they have something more consequential to do? Do they themselves think that?

We have observed in our travels that leadership at law firms can act as if it’s afraid of the partnership.  This is not leadership, and it’s not a recipe for building or sustaining a durable and high-performing firm.  On the other hand, it is your choice.  Cards face up, folks.


Image generated by Google Gemini


Footnote referenced above:

A thoughtful review covering three recent books on GenAI appeared in this past Sunday’s NY Times Book Review.  Here, in capsule form, is what the reviewer had to say:

  1. “If Anyone Builds It, Everyone Dies.”  “Their book’s claim is simple enough: ‘If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of A.I., then everyone, everywhere on Earth, will die.’ The authors cannot be faulted for indirectness….Critics of A.I. doomerism maintain that the mind-set suffers from several interlocking conceptual flaws, including that it fails to define the terms of its discussion — words like ‘intelligence’ or ‘superintelligence’ or ‘will’ — and that it becomes vacuous and unspecific at key moments and thus belongs more properly to the realm of science fiction than to serious debates over technology and its impacts. Unfortunately, ‘If Anyone Builds It, Everyone Dies’ contains all these flaws and more. The book reads like a Scientology manual….”  [Can you say “acidulous?” –Bruce]
  2. “The AI Con.”  “[The authors] Bender, a computational linguist, and Hanna, a tech sociologist, look at artificial intelligence and see not a force for destruction or creation, but a colossal scam. They have found a rich lode to mine; the hype machine around artificial intelligence has entered its rococo period….

    The authors are excellent at tearing down Silicon Valley overstatement, and their skepticism is a welcome corrective. There’s just one problem. You can use ChatGPT right now, and it is astonishing.

  3. “How to Think About AI,” by Richard Susskind – the source of this article’s many quotes.  The review opens by stating its premise as clearly as possible:  “The shelf of general guides to artificial intelligence is crowded by now, but HOW TO THINK ABOUT AI: A Guide for the Perplexed (Oxford, 202 pp., $13.99) is one of the best, filled with real insight and common sense, and refusing to engage in either fear-mongering or a casual dismissal of other, more opinionated takes. Susskind, a prolific British writer on A.I., has been studying the subject since the 1980s, and both fear and loathing diminish with perspective.”

    “Susskind is honest and clear, but at this juncture in the history of artificial intelligence, honesty and clarity are unfortunately deeply unsatisfying. “We do not have the vocabulary and concepts to capture and discuss the way that our increasingly capable systems work,” Susskind writes. “Instead, we root our debate in language that relates to humans.” He is absolutely correct. I have been waiting for somebody to say this in a book for years. The truth is that when you turn a decent and informed mind like Susskind’s on to the state of artificial intelligence, the most perceptive thing he has to say is that we don’t know much. But at least his book is not one-sided or catastrophizing. It is frank about the confusion and mystery that any candid approach to artificial intelligence entails, and that is as good a record as exists right now.”
