Why Law Firm Rankings Are Useless
blog.litimetrics.com: "The job of a lawyer is to persuade, which is part legal reasoning and part hard work. Since the ability to persuade directly affects litigation outcomes, participants are willing to pay highly regarded barristers upwards of $28,000 a day, and some law firm partners an hourly rate of over $1,000. In this post, we analyse data from 22,000 cases to measure the differential impact law firms and barristers have on litigation outcomes."
No. That's what TV lawyers are paid for. That's the equivalent of learning about doctors from watching MASH. Most doctors are not surgeons. Most lawyers are not litigators. Many great and valuable lawyers will never win any case. IMHO the best lawyer is the one whose client never faces the uncertainty and protracted negativity that is modern litigation, just as a great and valuable doctor is one whose patients avoid the knife. (Apples and oranges, but the point stands.)
Lawyering is, in my world, mostly about listening to clients and understanding how their realities fit into external standards such as laws and industry norms. It's about hedging, documentation and interpretation. And sometimes it's about holding the client's feet to the fire. Litigators only appear once all of that has failed. Litigators may have the highest hourly rates, but they do not have the highest salaries overall. Those are reserved for corporate counsel and board members.
Depends on the subject matter and the willingness of a plaintiff and defendant to negotiate a fair settlement.
In some areas, big money comes with big litigation.
As they acknowledge, it is essential to account for case difficulty in comparing success rates. Having been on both sides of the "v." I can say it is easier on the defense side (at least on the civil side). Defense counsel representing Big Cos. will have a better record not only because those companies are often the target of weaker lawsuits, but because various rules intended to filter out those weaker lawsuits stack the deck against plaintiffs. Thus, there are fantastic lawyers working at Sierra Club or NRDC, but they're not going to have the same record of wins as similarly-good lawyers at a big New York firm.
According to the article, their analysis does account for that. They allegedly "calculate the probability of a successful outcome for the applicant" using machine learning. But there is no description of how they do that. If I had an algorithm that produced halfway decent results in an automated fashion, I wouldn't use it to get into the legal technology market. I'd set myself up as a litigation funder and make boatloads of money with accurate valuations of potential investments.
> If I had an algorithm that produced halfway decent results in an automated fashion, I wouldn't use it to get into the legal technology market. I'd set myself up as a litigation funder and make boatloads of money with accurate valuations of potential investments.
Legalist is a recent YC startup aiming to do exactly that:
http://www.newyorker.com/business/currency/what-litigation-f...
If I were interested in litigation funding, I wouldn't evaluate the ability of law firms to litigate but rather the ability of lawyers to accurately predict the outcome of cases (regardless of whether they're actually representing the client) -- e.g. associates at law firms frequently write memos assessing the probability of success before a case makes its way to trial. Seems like there's value in correlating those memos with actual outcomes.
The article says they have a machine-learning algorithm that can "calculate the probability of a successful outcome" of a case. Then they calculate firms' ability to improve (or not) that outcome. But if I had such an algorithm I'd just use it directly to decide what cases to fund.
My understanding from talking to litigation funders is that lawyers are terrible at valuing cases. There is money to be made for anyone that can use technology to improve those predictions.
> Thus, there are fantastic lawyers working at Sierra Club or NRDC, but they're not going to have the same record of wins as similarly-good lawyers at a big New York firm.
Not a problem for the analysis of law firms. Sierra Club is a less effective law firm than the big New York firm.
> Not a problem for the analysis of law firms. Sierra Club is a less effective law firm than the big New York firm.
Why is this not a problem? The parent comment was saying that the Sierra Club takes harder cases and is often the plaintiff, not the defendant, so they may be more effective as a law firm but have worse numbers because the win/loss numbers don't mean the same things in different contexts.
It's also worth noting that the Sierra Club isn't a law firm at all. It's an environmental organization that, among many other things, sometimes uses the legal system to advance its goals.
"To sidestep the randomness of observed outcomes, we focus on modeling the probability of a successful outcome for a given case. Secondly, to account for the context or difficulty of the case, we measure the change in the probability of a successful litigation outcome, from substituting a legal service provider in. The difficulty of the case is not important, but rather the impact a legal service provider has on that original probability of success. From now on, we will refer to this measure as the impact on expected performance. This substitution exercise is known as a counterfactual, as we apply the substitutions to historical data."
-----------------
how exactly did they model the probability of successful outcomes for given cases?
because those depend on highly detailed fact patterns, many of which are largely qualitative in nature
if they were actually able to put all those in a database that you could then model
that'd be a more interesting accomplishment to read about
Right. The post links to a slideshow "model" for counterfactuals from a talk given at Hulu. I won't pretend to analyze it scientifically, but it's hard to shake the feeling that it's circular in this application.
We want to rank lawyers based on effect on outcome, not win-loss record. But it looks like the model is one that derives "counterfactual" outcomes by making comparisons based on win-loss records.
Can someone give a better lay-level discussion of the expected outcome model?
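For concreteness, here is one way the substitution exercise could work. This is purely illustrative: the post doesn't describe litimetrics' features or model, so the firm-effect table and the additive form below are my own guesses, not their actual method.

```python
# Toy stand-in for a fitted outcome model: P(success | case, firm).
# A real system would train a classifier on case features; here the
# firm effects are a made-up lookup table for illustration only.
FIRM_SKILL = {"Firm A": 0.10, "Firm B": -0.05, "Firm C": 0.00}

def predicted_success(case_difficulty, firm):
    """Baseline chance of success for a case, shifted by a hypothetical firm effect."""
    base = 1.0 - case_difficulty  # easier cases succeed more often
    return min(1.0, max(0.0, base + FIRM_SKILL[firm]))

def counterfactual_impact(case_difficulty, actual_firm, substitute_firm):
    """Change in predicted success from substituting a different firm into the same case."""
    return (predicted_success(case_difficulty, substitute_firm)
            - predicted_success(case_difficulty, actual_firm))

# Swap Firm A into a hard case actually handled by Firm C:
# a positive number means the substitution raises expected success.
print(counterfactual_impact(0.7, "Firm C", "Firm A"))
```

Averaging that delta over many historical cases would give the "impact on expected performance" the quote describes. The circularity worry is that the underlying probability model is itself fit on the same win/loss records.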
Why not use a modified Elo rating? That way you can not only easily figure out if you're going to win, but simulate the outcome of different scenarios?
The problem is the second assumption of Elo's system, per wikipedia: "if a player wins a game, he is assumed to have performed at a higher level than his opponent for that game. Conversely, if he loses, he is assumed to have performed at a lower level."
But you can't assume that about lawsuits. If the client should have paid ten million in damages, the better lawyer might limit the award to nominal damages. But the same nominal damages loss might be a disaster in another case the lawyer should have won.
Lawsuits aren't zero-sum, bilateral agreements to compete like most sports. Parties aren't just competing for a 'win,' but looking to maximize gain or minimize loss.
What is a modified elo rating?
Elo rating is a system used to gauge chess players. You win or lose points depending on the rating difference before a match. If the favourite loses, it forfeits more points to the underdog than the underdog would forfeit by losing.
You can apply this to a wide variety of sports where most teams don't get to play most other teams; e.g. you can guess that Germany are better than India at soccer despite the rarity of the fixture.
And of course you can modify the system to take account of various issues such as the retirement of better players, which tends to remove points from the system.
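For readers who want the mechanics, a minimal Elo sketch in Python (the 400-point scale and K=32 are conventional choices, nothing specific to this thread):

```python
def expected_score(rating_a, rating_b):
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_a, rating_b, score_a, k=32):
    """Return new ratings after a game; score_a is 1 for a win, 0.5 draw, 0 loss."""
    ea = expected_score(rating_a, rating_b)
    new_a = rating_a + k * (score_a - ea)
    new_b = rating_b + k * ((1 - score_a) - (1 - ea))
    return new_a, new_b

# An upset: a 1400-rated underdog beats a 1600-rated favourite.
# The favourite loses more points than it would against an equal opponent.
print(update(1600, 1400, 0))
```

Note the zero-sum bookkeeping: points gained by one side are exactly the points lost by the other, which is the assumption the damages-award comment above says doesn't hold for lawsuits.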
As someone who works with a tax law firm that is ranked pretty high on the legal500 (and obviously, I know a rank can be gamed relatively easily), I must say the experiences I've had with them have been excellent. For all intents and purposes, they are exactly what I would expect from a good law firm.
I worked with several "independent" lawyers before, and honestly, they are nowhere near close to the level of skill of my current law firm. In addition, they're quite responsive, much more than the average independent lawyer. And yes, they're more expensive, but not outrageously so. They seem to reach a favourable outcome faster, and seem to be more effective.
Tl;dr: Love the lawyer I work with in this firm, he gets the job done when I need him to + has a great team with broad experience to back him up. He's managed to resolve any tax issue before we even went to court.
I worked on applying machine learning algorithms to data mined from court judgments during my PhD. You want to know what we learned? That it's better to be white and rich than black and poor. Period. Everything else was bullshit.
What about black and rich vs white and poor?
Sample size too small I'd guess.
How much better?
Lol.
It is a legitimate question.
Also, being a lawyer: if someone comes to me, it's because of me, not because of the firm I am working in. And trying to rank lawyers? Why not. But what's the point? I wouldn't accept being ranked anyway.
> being a Lawyer, if someone comes to me, it's because of me, not because of the firm I am working in
Are you speaking hypothetically, i.e., 'if I were a lawyer'? I'm not an attorney but for a variety of reasons I have relatively deep insight into the legal industry, and I think this statement is generally not true.
As in any field, most business comes from referrals and networking, and much of that happens within a firm. If your career is at a top firm then you are going to build a much different network than a similar attorney who opens their own office in a strip mall. When Goldman Sachs needs an outside attorney, where do they go and who do they ask? Even ignoring referrals, just imagine who is there when you are invited to dinner by your colleague: Is it the mayor and local Fortune 500 CEO, one of whom takes a liking to you and asks you to join the board of a local charity, or is it the lovely neighbors who drive Uber or run a small medical practice, and ask for some legal tips?
Personally, I don't enjoy the hunt for status and I think it impedes socio-economic mobility, but I'm afraid that's the way it works.
I have a tax lawyer I work with, and I couldn't care less which firm he works at. However, he does work at a good firm, and a good name helps in negotiations (in this case, with the tax authority). A while back, I had a tax audit, and surprisingly the person doing the audit told me my firm can turn "black" into "white". That says quite a bit about the name and reputation of the firm.
I am a lawyer yes. But let's say I have a good cv.
> By applying machine learning algorithms to data mined from court judgments...
And, with that sentence, this post becomes bullshit.
The overwhelming majority of disputes do not reach litigation. The overwhelming majority of litigation does not reach judgment. And then a gigantic proportion of those judgments are appealed, and a non-trivial percentage are overturned.
Even at that, only on the order of a single-digit percentage of disputes ever reaches a verdict.
This is like judging a hospital system based on its success rate for transplant surgery. It's just deeply ignorant of what constitutes the vast, vast majority of the practice of law - even in disputes.
This article has no value whatsoever, aside from furnishing a witty aside at a cocktail party about how deeply misunderstood the legal profession is. This is particularly shocking given that litimetrics is a legal service provider.
> Although A has not faced B, if both have faced C, then we have some information about the hypothetical A and B matchup. This is the same reasoning that allows one to compare tennis players across different eras.
The implication is that lawyer efficacy, based on judgments, is subject to the transitive property? Sweet mercy, that is ridiculous.
This post is a catastrophe.
> This post is a catastrophe.
This is a pretty ungenerous reading of the article. Your complaint seems to be mostly about the generalizability of the model (a legitimate complaint, as you've explained) rather than its statistical validity. It still provides some interesting insights about the performance of firms at trial. Notably, there isn't a correlation between favorable judgments in those cases which have gone to trial and traditional law firm rankings. That's surprising, even if it doesn't generalize to overall law firm "success".
A large part of statistics is devoted to making valid inference from noisy samples, largely by making sensible assumptions. Even in your example, it is common practice to assess a surgeon on outcomes of surgery, even though a lot of a surgeon's work does not lead to surgery.
We aren't claiming to assess the wide practice of law, just litigation. We do Litigation Analytics.
This is a great point about making sensible assumptions. Too often I see evidence that people think data analysis should be devoid of assumptions, and that any assumptions completely invalidate the analysis. In reality, almost all analysis rests on some assumptions, and the good questions to ask are how plausible the assumptions are and what the plausible impact is if they turn out to be wrong.
What do you count as a "court judgment" and how are they evaluated? Are, for instance, a successful motion to dismiss and a jury decision for the defendant weighed equally? Or do you give higher weight to the motion to dismiss since that's usually a better outcome for the defendant than making it to trial?
And do you look at motions other than actual judgments? E.g., if Firm A wins a motion to suppress some piece of evidence and then there's a subsequent settlement, should we infer that Firm A "won" the case, even if there's no judgment to that effect?
So you sue me and get a judgment for $100,000.
If I felt that I owed you nothing, surely this counts as a "loss" for me at trial.
If, however, you had originally sued for $50M and I felt I might owe you some money but no more than $1M so I decided to fight it, the $100k judgment would be considered an enormous "win" for me.
What data does your analysis use to distinguish between the two scenarios given that the verdicts are identical?
do you understand the difference between "noisy" and "biased"?
I understand sarcasm, that's for sure.
As a matter develops, parties tend to drop out if they think that their chances have declined beyond some threshold. But if they continue, or more likely, their legal representation suggests they continue, it implies that they think they have a good chance of success.
Of course, you can get irrational litigants, but most of the time, if it goes to judgment, both parties think they have a high chance of success. Otherwise they would have bailed.
It seems like surgery has similar issues. Some surgeons might only take easy cases, while others might take higher risk cases.
I'd add that the majority of civil cases are resolved through non-public deals whereby no third party can tell who has won or lost.
That said, it might be interesting to apply machine learning at the appellate level. There, the outcomes are public and more readable by machines. Something as simple as looking for tell-tales such as "reverse", "dismissed" or "we side with the appellant" in holdings could be mapped to firms.
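A minimal sketch of that tell-tale approach (the phrases and the winner mapping below are my guesses at typical disposition language, not a validated rule set):

```python
import re

# Hypothetical tell-tale phrases mapped to which side prevailed on appeal.
# Order matters: the first matching pattern wins.
TELL_TALES = {
    r"\breversed\b": "appellant",
    r"\bvacated\b": "appellant",
    r"\bwe side with the appellant\b": "appellant",
    r"\baffirmed\b": "appellee",
    r"\bappeal dismissed\b": "appellee",
}

def classify_holding(text):
    """Return 'appellant', 'appellee', or None if no phrase matches."""
    lowered = text.lower()
    for pattern, winner in TELL_TALES.items():
        if re.search(pattern, lowered):
            return winner
    return None

print(classify_holding("The judgment of the district court is AFFIRMED."))
```

Real holdings are messier ("affirmed in part, reversed in part"), so in practice this would only be a first-pass label to hand-check before mapping outcomes to firms.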
Hell, if the lawyers have done a good job, both sides walk away from a settlement thinking that they won.
Or that neither won.
Your comment is filled with unhelpful hyperbole that makes civil, constructive discussion much more difficult.
It seems reasonable to me that a firm with a great track record of judgments is more likely to succeed in other significant areas than a firm without a similarly great track record of judgments. That is, I would expect a strong, positive correlation between winning judgments and other desirable outcomes in disputes, like winning settlements.
We could even measure that correlation through a representative sample of firms to prove or disprove that hypothesis (and the OP probably should), but dismissing it outright with hyperbole isn't an effective response.
> It seems reasonable to me that a firm with a great track record of judgments is more likely to succeed in other significant areas than a firm without a similarly great track record of judgments.

That seems like a strong enough assumption to require quite careful analysis and support. I know very little about the practice of law, but I am certain many other fields of endeavor just don't work like this.
Doesn't your whole post rely on the assumption that outcomes in the cases they measured don't correlate closely with outcomes in the (far bigger array of) cases they didn't measure? Don't you have to show that before you can definitely say their analysis is completely invalid?
Depends on what is meant by bullshit. Yes, if it meant that the study was absolutely wrong. No, if it meant that the study failed to provide any evidence whatsoever that there exists a correlation.
In science guess who has the burden?
In a legal context the burden of proof is broken down into the burden of production and the burden of persuasion. If you fail to meet the burden of production it's fair to say a claim is bullsh*t. It necessarily follows that if you haven't met the burden of production you cannot have met the burden of persuasion as a logical matter.
Remember, any claim must stand on its own terms. It doesn't matter what is the reality or truth of the matter. We can never know the absolute truth of something. We can only attain a slightly firmer grip on reality when people propose persuasive arguments. Maybe you can mine a failed argument for useful bits, but a good rule of thumb is that it's not worth the time if the argument cannot even meet a minimal burden. If it wasn't worth the claimant's time why would it be worth yours? That's strong circumstantial evidence to move on.
Yes, it does, and yes, that is correct.
Lawyers can drop clients with cases that they think are losers and refuse to take them to trial. And, very, very often, before or directly after closing arguments, if the writing is on the wall, a case settles.
Additionally, what sorts of cases are we talking about? Insurance defense? Toxic torts? Data breach?
The 'winnability' of these cases varies highly. There are many, many law firms whose entire business model is built on taking no-brainer winning cases. Think injury lawyers. They may - perversely - have very low winning percentages, because 90-95% of their cases settle, and only the real squeakers get to trial.
Let's look at the inverse. Do you really need Quinn Emanuel or Gibson Dunn if you have a slam-dunk case? Or do you need the best litigators around when your case is a total coin-toss? And, in that event, is it an example of bad lawyering if your crack team loses because they are pushing the bounds of appellate advocacy?
The idea of judging 'law firms' without further taxonomic distinction, generally, by trial disposition is just - it is frankly idiotic.
It is always easy to criticize a data-driven analysis by saying its assumptions could be wrong. In the real world, all analysis is based on assumptions, some of which you can always claim might not be correct. But you have to really present an argument as to why and by how much the assumption is likely to be wrong, you can't just state that the assumption might be wrong. The assumption that cases which don't settle are not at all indicative of how well a lawyer performs is a very bold claim, much bolder in my mind (admittedly I am not a lawyer) than the claim that there should be some correlation between lawyer performance and results in cases that did not settle. It is possible that lower ranking lawyers settle less often, I'd like to see the data on that.
Furthermore, on average the effects you are mentioning will wash out, unless there is a systematic bias whereby lower-ranking firms and higher-ranking firms settle in different manners.
There's a difference between saying ones assumptions are wrong and stating that the units of measure are completely meaningless.
Why should we expect a correlation? Why should we assume they even thought about the issue, if it isn't mentioned in their analysis?
Why should we expect there isn't a correlation? They do mention it in their analysis. At the end of the day, just because there might be a systematic bias in your result doesn't mean there is a systematic bias.
All real-world analysis (especially for observational studies) rests on certain assumptions. It is always true that these assumptions might be wrong, but it is important to think about whether or not the assumption is plausible. It seems plausible that on average, a lawyer who is able to get better outcomes when they don't settle is also able to get better outcomes when they do settle.
Furthermore, even if most cases are settled, the rare cases that do go to trial can have an outsized impact. Usually people settle because a bad judgment is devastating (as well as not wanting to pay legal costs).
> This is particularly shocking given that litimetrics is a legal service provider.
Not so shocking. Law firm ranking services are their competition. They drew a conclusion from the data that supports their business.
Law firm rankings absolutely can provide a value. Both on the firm and client side of the transaction.
Imagine you're on the client side of a qui tam (whistleblower) or immigration case. Are you going to bring that case to a law firm without visibility into a) how the firm is trusted by its clients and b) how the law firm is regarded among other law firms? That would be crazy!
Heck, in a qui tam you can't even get quality analytics into how well a firm litigates a case because the best qui tam cases get picked up and tried by the Federal Government!
(Disclosure: I'm a dev at a law firm.)
Rankings seem to be very subjective and to be a function of firm biz dev teams: http://www.chambersandpartners.com/submissions
Size of marketing team might correlate with success/positive market perception though.
While not all cases go to trial, it is important to know what your chances would be at trial with that particular firm. This is the same metric that the other side will primarily look at when considering the terms of any settlement.
Therefore, I disagree that this post "becomes bullshit" merely because it relies on trial outcomes.
Having spent the last 12 years of my life wrapped up in a civil trial related to estate law and learned a ton along the way about how lawyers have failed me, I would much rather have known how often a firm's cases go to trial and if not, which way the case went.
It will take _years_ before your case actually goes to trial and you will meet plenty of lawyers along the way if you aren't careful who will happily fuck off with your money. And you'll have to sue them separately to deal with that, if you even have the energy to spare.
Good luck having a full-time job through that mess.
> While not all cases go to trial, it is important to know what your chances would be at trial with that particular firm.
Read above to my last reply.
This is really a poor way to think about law firms.
Is the firm a toxic tort firm? Is it an appellate advocacy firm? Is it maritime law? Is it an intellectual property dispute? A business dispute?
Win percentages are totally meaningless without more information, including client-positive settlement rates.
This metric is totally useless.
>This metric is totally useless.
You honestly believe that an opponent won't take a 90%+ trial win rate into consideration when considering a settlement? All you hear from lawyers pretrial is "our chances at trial are X" and then based on whether X is high or low, they'll make a decision to go to trial or settle.
"88% trials and arbitrations won": on the front page of Quinn Emanuel's website.
> You honestly believe that an opponent won't take a 90%+ trial win rate into consideration when considering a settlement?
First, I'm a 7th year corporate lawyer.
Second, of course I would look at it. However, it would not be dispositive, and I would likely discount it super, super heavily. I have vastly superior data about the winnability of my case, as a result of being the attorney on the case, than I would get from knowing the 'win' ratio of my opponent's law firm - which is almost totally irrelevant to the case at hand. Ceteris paribus, bad lawyering loses more cases than good lawyering wins. With this in mind, the thing to know is whether your opponent is a bozo or competent. Regardless of your opponent's competence, you act as if he is at least as smart as you, and I would litigate as hard against Ted Olson as I would against someone no one has ever heard of. The controlling factors come down to your client's budget, your opponent's budget, the animus they have toward each other and, lastly, the facts.
Additionally, the channel checks that I perform on opposition attorneys are far, far more sophisticated than algorithmically computed win rates. Maybe I'd use it as a data point, but the frank reality is that I have far better means at my disposal for judging the merits of a case and the efficacy of my litigation opponents, including Google, the phone on my desk and my judgment. Law is a very close-knit and reputationally driven profession. I do channel checks on opposing counsel frequently. It is an extraordinarily rare situation where I cannot get a second opinion, from someone whose judgment I trust, on another law firm or another lawyer when I really need one.
Finally, what is missing from this whole consideration is that lawsuits are not fair fights, and they are highly fact- and context-dependent. Not every game has the same rules; the rules and the facts may be skewed. Does it matter if a particular NFL team is last in the league if they are going to be playing a HS JV team in freeze-tag? Accordingly, it is not like the win ratio of a poker player or a football team. The case itself very often has a huge skew in favor of one opponent or the other. If I had a winning case and you told me that my opponents were Abraham Lincoln's zombie and Scalia's ghost, I'd still fight it.
Let's make an analogy: let's look at batting averages as a predictor of general sports ability. But instead of confining the batting averages to major league players and major league pitchers, let's make it the lifetime batting average of every American over the age of 18, for every game of stick-ball, alley-ball and beer league softball they have ever played in their lives, refereed or not, as a predictor of their ability to play "sports" generally, regardless of their age.
I think that is a fairer analogy to the 'win' ratio for trial dispositions.
Do they mention win percentages? I read the post to suggest that ratios/percentages were too simplistic a measure.
"The overwhelming majority of disputes do not reach litigation. ..."
I'm shopping for a personal injury lawyer. How to pick? The two referrals I got didn't pan out. I don't know any lawyers. Aha, search the court documents to see who the winners are!
I (finally) consulted my paralegal friend. He made the same points you have.
Additionally, he said the courts all use different IT systems. One could use an expensive service to do the search, but it's hardly worth it for laypersons.
Too bad.
I did start to wonder if one could data mine the filings, just to see who's getting the most cases, and maybe infer something actionable from that.
This is just a case study of law firm rankings in general. Popular law firm ranking systems use criteria that may not be relevant to your business. When ranking law firms, you should use a custom ranking system.