The New York Times reported recently, in advance of the annual NFL scouting “combine” (the recruiting event of the year for the hundreds of college players aspiring to the pros), that a new component of the players’ extensive job interviewing will be:
“an hourlong psychological assessment designed to determine and quantify the nebulous qualities that coaches have long believed make the most successful players — motivation, competitiveness, passion and mental toughness — and to divine how each player learns best. The new test, like the [previous test], is mandatory for the more than 300 players who attend, and it will be given for the first time Friday.”
Here’s the protocol:
To determine their personalities, the test will ask players a series of questions about their preferences and behavior. To evaluate their cognitive abilities, it might tell them to look at four diagrams and figure out how they relate. Then, to measure how quickly they can adjust their thinking, the items they are comparing might change, forcing the players to determine their relationships anew.
To see how they learn best, the test will present questions in verbal and graphic form. Players will have an hour to take the exam on a computer.
Imagine if a comparable tool were available to law firms.
The criteria we’ve been using to evaluate entry-level talent have been unidimensional for decades: The students with the highest GPAs at the best law schools. Great in theory, but it turns out to have terrible predictive value for success. In fact, the proportion of associates from Tier 1 law schools exceeds the proportion of partners from Tier 1 schools by a sizable and statistically significant margin. And law review is negatively correlated with making partner. Yet these are the criteria we continue to hire on. What on earth are we thinking?
Again, given all that’s at stake, why wouldn’t we want to look for different, or at least additional, tools? For anyone who cares to look at the actual evidence, we’re clearly doing something wrong. I hate to break this to you, but any head of recruiting at a Fortune 500 company whose criteria consistently predicted failure rather than success would have a very short tenure indeed.
Virtually every law firm I’ve talked to about this admits their criteria aren’t filtering for everything that matters, although a few stoutly maintain that “the best predictor of future success is past success,” resolutely ignoring the reality that academic success and worldly success are poorly (or, à la law review, negatively) correlated. And the vast majority wish they had better tools. The problems are threefold:
- They don’t know what those tools might be and they’re not in the business of designing them;
- They refuse to be the first to put their applicants through an extra hoop (“they’d just go somewhere else”); and
- Even if the tools existed and were offered by some independent third party, so as not to be tarred by association with any specific firm, they don’t think they’d know what to do with the results.
What if all three of those reservations could be solved? What if you had available:
- A proven capability assessment developed by a firm that does nothing else;
- Offered by a disinterested third party;
- Which you could validate by comparing the clusters of candidates it generates to the clusters of lawyers in your own firm? (After all, you know who the winners, losers, and can’t-yet-tells are in your own firm.)
It may be on the horizon.
But just because it’s good enough for the NFL…
I am very glad to see a publication of this repute raising and addressing this matter. In the interest of disclosure, I graduated from a 50-60ish school with high marks, and had success against my peers in many interscholastic competitions, winning two and placing in the teens (out of one hundred competitors) in another. These peers included students from top schools, several of whom I competed directly against.
Unlike sports, however, there were no talent evaluators (beyond judges) present. Nor, as you mention, did there seem to be much interest in engaging law journal members in any meaningful way. I use my own example not to generalize from a sample of one, or in an attempt to prove my own talent (whatever it may be; who knows), but to illustrate where steps could be taken to “Moneyball” talent.
Specifically, every professional sports team employs scouts to identify future talent. Whether you hail from Missouri State, Chapel Hill, or Southern Cal, it doesn’t matter: Strike batters out, catch passes consistently, or play great defense, and you’ll earn an opportunity. Indeed, even the most die-hard stat heads will tell you that the eyes must assess what the paper says.
I believe the same kind of evaluation can be done effectively for law students, because there are ample opportunities to do so. Attending interscholastic competitions, talking with professors, and determining whether a journal maintains something as fundamental as a consistent publication schedule are easy things to do. To be sure, there may be barriers to certain means of evaluation, but I don’t believe that should prevent discerning inquiry.
Great post, Russell (and Bruce, of course), and certainly an improvement over hiring by resume, but this still assumes some correlation between law school performance and practice, which I think is very valid in some areas of practice but not so much in others. One of my best associates worked as the head of a construction crew before law school, and I think it is those talents of organization and execution under a tight timeline that make him successful in supporting my tort trial practice (plus a healthy fear of going back to “waiting for a truck full of drywall in the rain,” as he says). So maybe “scouting” combined with a practice-specific “Wonderlic” of our own? Bruce, sounds like there is money to be made here.
Dear Bruce,
Do you anticipate future articles in which you elaborate on what sorts of matters should (or could) be included in “scouting” and how those might be structured? I know that JD Match includes such tools, so I am not asking you to expose your IP, but rather perhaps to discuss the concepts that may need to be included and others that may be useful. Along with approaches to validation, of course.
Mark