I don’t know about you, but I find talent markets fascinating. They have several characteristics that make them quite distinct from regular old goods and services markets:
- Talent is extremely heterogeneous; it’s not as if there’s another Honda Accord where that one came from.
- Talent is what economists call both “excludable” and “rivalrous,” meaning that if I hire you, Suzie can’t hire you at the same time. (Knowledge is the classic non-rivalrous and non-excludable good: everyone can know the same thing at the same time without impairing anyone else’s knowledge of it, and without shutting off anyone else’s access to it.)
- Talent is notoriously difficult to judge in advance of actually experiencing it, that is, without hiring the individual and putting them to work in your organization. Some other markets approach this condition of “ignorance until purchased,” such as attending performing arts events or taking a vacation to a previously unknown locale, but the stakes tend to be much higher for all parties concerned in talent markets.
- Once talent is hired, it’s stickier than most other purchases. You can walk out of the movie theater or reconfigure your travel plans, but once you hire someone, short of felonious or otherwise appalling behavior, you’re stuck with them for a decent interval.
All this leads to a number of devices and stratagems that attempt to mitigate uncertainty and delay serious resource commitments until some first-hand evaluation can be performed. For example:
- Performance bonuses paid in arrears, that is, after the activity you wish to reward has (or hasn’t) happened;
- Likewise, commission payment structures;
- Deferred compensation in general.
Given all that’s at stake in the talent market, firms find it worthwhile to invest considerable imagination and resources in trying to assess new hires before they’re taken on. In the world of professional services—not to mention industries where it’s even more widespread—tools such as aptitude tests, psychometric profiling, and compatibility assessments are universal.
What types of tests? It depends on the organization, of course, and what they’re looking for, but firms such as D. E. Shaw (an exotic quant trading firm where Jeff Bezos got his start before decamping for Seattle), Google, McKinsey, and Sanford Bernstein (an elite investment research boutique) use these tests religiously. Candidates might be asked questions that appear oddball but are designed to expose how they think. “How many pubs in Great Britain?” “How many cans of Coke are sold in the US annually?” (They’re not looking for the most accurate answer; they’re looking for the rigor and imagination of your analytic process.)
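To make the point about analytic process concrete, here is a minimal sketch of how a candidate might decompose the pubs question. Every figure in it is an illustrative assumption of mine, not data the interviewer expects anyone to have memorized; what matters is the chain of reasoning.

```python
# Back-of-envelope Fermi estimate for "How many pubs in Great Britain?"
# Every input below is an assumed, round figure; the exercise is about
# the decomposition, not the precision of the final number.

population = 65_000_000          # assumed population of Great Britain
people_per_household = 2.5       # assumed average household size
households = population / people_per_household

households_per_pub = 500         # assumed: roughly one pub per 500 households
pubs = households / households_per_pub

print(f"Estimated pubs: {pubs:,.0f}")   # ~52,000 on these assumptions
```

Whether the true figure is 40,000 or 60,000 matters far less than whether the candidate can break an unfamiliar question into estimable pieces and defend each assumption.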
An hour or two, even a day, of objective testing seems a small gate to pass through for one of these intensely desirable jobs.
But before talking about BigLaw, let’s go to a talent market where even more money is at stake: The NFL.
I am very glad to see a publication of this repute raising and addressing this matter. In the interest of disclosure, I graduated from a 50-60ish school with high marks, and had success against my peers in many interscholastic competitions, winning two and placing in the teens (out of one hundred competitors) in another. These peers included students from top schools, several of whom I competed against directly.
Unlike in sports, however, there were no talent evaluators (beyond judges) present. Nor, as you mention, did there seem to be much interest in engaging law journal members in any meaningful way. I use my own example not to draw conclusions about a larger sample of talent, or to prove my own (whatever it may be; who knows), but to illustrate where steps could be taken to “Moneyball” talent.
Specifically, every professional sports team employs scouts to identify future talent. Whether you hail from Missouri State, Chapel Hill, or Southern Cal, it doesn’t matter: Strike batters out, catch passes consistently, or play great defense, and you’ll earn an opportunity. Indeed, even the most die-hard stat heads will tell you that the eyes must assess what the paper says.
I believe the same kind of evaluation can be done effectively for law students, because there are ample opportunities to do so. Attending interscholastic competitions, talking with professors, and determining whether a journal maintains something as fundamental as a consistent publication schedule are all easy things to do. To be sure, there may be barriers to certain means of evaluation, but I don’t believe that should prevent discerning inquiry.
Great post, Russell (and Bruce, of course), and certainly an improvement over hiring by résumé, but this still assumes some correlation between law school and practice, which I think is very valid in some areas of practice but not so much in others. One of my best associates worked as the head of a construction crew before law school, and I think it is those talents of organization and execution under a tight timeline that make him successful in supporting my tort trial practice (plus a healthy fear of going back to “waiting for a truck full of drywall in the rain,” as he says). So maybe “scouting” combined with a practice-specific “Wonderlic” of our own? Bruce, sounds like there is money to be made here.
Dear Bruce,
Do you anticipate future articles in which you elaborate on what sorts of matters should (or could) be included in “scouting” and how those might be structured? I know that JD Match includes such tools, so I am not asking you to expose your IP, but rather, perhaps, to discuss the concepts that may need to be included and others that may be useful. Along with approaches to validation, of course.
Mark