Finally, “Placement Success” (USNWR language) accounts for the final 20%. The problem is that this metric is as arbitrary as they come: a dart-board combination of employment at graduation and employment nine months (270 days) later. This is one of those consummately meaningless metrics, which nevertheless achieves the daunting hat trick of:
- Saying nothing about what any actual, sentient, human being would define as employment prospects or care about;
- Being consummately manipulable (schools have been known to hire their own grads for part-time jobs at the magic nine month mark—pure coincidence, I hasten to assure you); while
- Still sounding objective, unbiased, and above the fray.
So what goes into the ATL rankings?
First, a bit more about their philosophy, starting with why they only list 50 schools, not the entire steroidally expanding universe now approaching 200 with the ABA cheering loudly on the sidelines. Forgive the extensive quote, but this is such a refreshingly revolutionary, clear-eyed, and needed approach that all of us who’ve been imprisoned in the USNWR cave for so many years need the oxygen it provides:
Why would we limit the list to only 50 schools? Well, there are only a certain number of schools whose graduates are realistically in the running for the best jobs and clerkships. Only a certain number of schools are even arguably “national” schools. Though there is bound to be something arbitrary about any designated cutoff, we had to make a judgment call. In any event, the fact that one law school is #98 and another is #113—in any rankings system—is not a useful piece of consumer information.
The basic premise underlying the ATL approach to ranking schools: the economics of the legal job market are so out of balance that it is proper to consider some legal jobs as more equal than others. In other words, a position as an associate with a large firm is a “better” employment outcome than becoming a temp doc reviewer or even an associate with a small local firm. That might seem crassly elitist, but then again only the Biglaw associate has a plausible prospect of paying off his student loans.
In addition to placing a higher premium on “quality” (i.e., lucrative) job outcomes, we also acknowledge that “prestige” plays an out-sized role in the legal profession. We can all agree that Supreme Court clerkships and federal judgeships are among the most “prestigious” gigs to be had. Our methodology rewards schools for producing both.
Now more than ever, potential law students should prioritize their future job prospects over all other factors in deciding whether to attend law school. So the relative quality of law schools is best viewed through the prism of how they deliver on the promise of gainful legal employment. The bottom line is that we have a terrible legal job market. Of the 60,000 legal sector jobs lost in 2008-9, only 10,000 have come back. So the industry is down 50,000 jobs and there is no reason to believe they will ever reappear. If you ignore school-funded positions (5% of the total number of jobs), this market is worse than its previous low point of 1993-4. The time has come for a law school ranking that relies on nothing but employment outcomes.
So what matters, and what doesn’t? What does not matter are “inputs”: LSAT scores, GPAs, student scholarships, and more. What does matter are results: full-time legal jobs requiring a JD, school costs, and alumni satisfaction. At a more conceptual level, here’s why it’s important that USNWR is looking at inputs and ATL is looking at outputs.
Judging quality by what goes into a good or service rather than by how it actually performs in practice is an error of the first order. This is not an abstract notion: Would you rather drive over a bridge whose engineers had designed it to support XXX thousand tons (XXX being appropriately defined, of course), or one whose engineers had relied on an expensive steel alloy without calculating what weight it could actually support? Yet it’s such a common fallacy as to defy belief. Closer to home, the ABA, in accrediting law schools, values such antique things as the amount spent on the library and (conversely) won’t give credit for adjunct professors. The upshot is that such input-based criteria invite the Law of Unintended Consequences to kick in brutally.
Many is the conversation I’ve had lately about the seemingly glacial, but accelerating, migration of our industry from prestige-driven to outcomes-driven, and this is one milestone along that salutary path. (The enormous issue of “prestige” vs “outcomes” is worthy of one or more columns of its own, but lest anyone take immediate umbrage at my questioning one of the profession’s century-old pillars, suffice to say for now that prestige should follow from consistently superior outcomes and not be floating without visible means of support in mid-air, thanks to some ancient and indecipherable runic decree.)
So I fundamentally agree with the methodology. A second welcome element is that the graphic presentation is both vivid and informative:
And finally, as one has come to expect from Above the Law, the writing is fresh and conversational. For example, answering the self-posed question of why the employment score merits a 30% weighting, they write: