Published on:

Law departments want to generate value. What is the value of an acquisition? The amount paid to buy a company may be clear, but the worth of the deal depends in part on how far out you look. What is the value of a license agreement? Projected revenue? Possibly, but over how many years? What value comes from a law firm obtaining a zoning variance? Tell me how many years the estimate should cover.

Just as you cannot begin to quantify value without stating a period of time over which returns accrue, you cannot assess risk, and for the same reasons of timing. How risky is filing a patent application? It depends on when you stop looking into the future. How risky is buying a company? Quantification of both value and risk ultimately depends on an arbitrarily chosen time frame. As Keynes famously wrote, “in the long run we are all dead.” (And this point about duration leaves out the discount rate you select.)
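To make the duration point concrete, here is a small, purely hypothetical Python sketch: it values the same projected cash flow over different horizons and discount rates, and the answer moves substantially with both choices. The cash flow, horizons, and rates are invented for illustration, not drawn from any deal.

```python
# An illustrative sketch of the point about time horizons and discount
# rates: the "value" of the same projected cash flow changes with both
# choices. The cash flow, horizons, and rates are all hypothetical.
def present_value(annual_cash_flow: float, years: int, discount_rate: float) -> float:
    """Sum of discounted annual cash flows over the chosen horizon."""
    return sum(
        annual_cash_flow / (1 + discount_rate) ** year
        for year in range(1, years + 1)
    )

for years in (3, 10, 30):
    for rate in (0.05, 0.12):
        value = present_value(1_000_000, years, rate)
        print(f"{years:>2} years at {rate:.0%}: ${value:,.0f}")
```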

Value and risk are complements of each other. Future risk diminishes future value; the higher the potential value, the larger the possible risks. Both are suffused with Knightian uncertainty (See my post of Jan. 13, 2006: risk means the probability of an outcome can be calculated, whereas uncertainty means it cannot.). The period of time we accept or assume for projecting value obtained or risks run therefore takes on great significance.

Published on:

Patrick Dransfield, the Publishing Director of Pacific Business Press, suggested another ploy of prestigious firms, which he calls bundling. This blog has referred to unbundling as the practice of taking away from law firms tasks that others can do better, cheaper, or both. Dransfield sees bundling as akin to an antitrust tie-in violation: to get brains you must also take brawn. Worse, law firm lawyers bundle their billing rates across all levels of service.

“Prior to the financial crisis, large international law firms bundled up their services with the same skill and cunning as the Bordeaux Premier Cru wine broker, effectively telling inhouse counsel that if they wanted the premium ‘bet the firm’ innovative service at the top end (i.e. the equivalent of a Bordeaux First Growth), then they’d have to accept this service bundled up with …the work of less experienced associates and paralegals underneath. Secondly, the same charge–out rates for elevated work whose price is ‘beyond market forces’ was also charged for partners providing lower commodity work, such as Employment Contracts, and the like.”

I have not witnessed the first form of express bundling – “You don’t cherry pick us on sophisticated services unless you hire us for commodity work!” – at work in the United States. But his point about the same hourly rate for high end and low end work holds true. The quote comes from Asian-MENA Counsel, Nov. 2011, at 20.

Published on:

Self-help best-sellers (“You can be anything you want to be if you [do this magical thing]!!”) leave me more than a bit cynical, so it was with considerable interest that I read about willpower in the NY Times, Nov. 27, 2011 at SR8. It’s hard for in-house counsel to slog through the final pages of a turgid contract, to review a bill that runs into hundreds of thousands of dollars, or to push for the conversion of data by the end of the month – in short, to demonstrate resolute willpower.

The research described in the article disputes the notion that willpower has a set biological limit, governed by the hypothalamus or your glucose level, and cannot be increased. The authors found that those who believe their willpower does not face natural limits are much more able to push on and exercise self-control. If you change your mind-set from willpower constrained to willpower unleashed, you will benefit.

Published on:

General counsel would do well to ponder two contrarian points made recently about leadership. A commentary on the latest book by Jim Collins, Great by Choice, in the Economist, Nov. 26, 2011 at 80, refers to two common beliefs that he challenges. Collins does not believe that “turbulent times call for bold and risk-loving leaders.” To the contrary, most of the business leaders Collins profiles are risk-averse to the point of paranoia.

Nor does Collins agree that doing something novel and innovative is the only virtue that counts. Efficiency, continuous improvement, and gradual adoption of ideas tested by other law departments will serve better over time than an adventurous pioneering step. Thoughtfully stay within the envelope; improve within the box.

Perhaps these findings resonate with me because in my consulting I rarely find support for dramatic, breakthrough paths forward. Neither the anti-bold nor the anti-new view fits with those who urge transformations, but then my tent is not pitched in their camp.

Published on:

Two websites are particularly well known for analyzing politicians’ statements for accuracy: FactCheck and PolitiFact. Reading about them in the Economist, Nov. 26, 2011 at 43, I found myself wishing there were equivalents for articles about law department management (or blogs, for that matter). In some measure I have cast myself in that role. When facts or benchmarks regarding legal departments come to my attention, one of my first reactions is to question the believability and accuracy of whatever is asserted. Hardly credulous, more like a pain-in-the-neck quant pedant, I am troubled when numbers are tossed around carelessly. Even if a number sounds right, was the methodology for arriving at it sound?

Surveys by interested parties leak the most, but other times writers seize on a number and don’t bother to confirm it against other sources or to poke at it for even surface plausibility. A vivid and disturbing example is all the guesstimation of the size of the U.S. legal market.

The article recommends crowdsourcing tools, comment boxes for online articles, retractions and corrections by the publication, as well as “Standardisation – of data sources, measures of factual reliability, and platforms for sharing information.” I’m all for that and I hope this blog contributes to clarity and reliability in the facts about the legal industry.

Published on:

When you hear of a statistical finding, you should want to understand that number’s reliability. If the research that produced the number were repeated several times, how much would the results vary?

Consider an example. Let’s make the simplifying assumption that the participants in the GC Metrics benchmark survey make up a reasonably random and representative sample of, at the least, U.S. law departments. The margin of error for findings from a set of normally distributed numbers shrinks in inverse proportion to the square root of the size of the set. Hence, a benchmark finding based on 200 law departments – the participant base of the HBR Consulting (née Hildebrandt Baker Robbins) report – rests on a square root of about 14.1, while a finding from the GC Metrics report, based on 800 law departments, four times as many, rests on a square root of about 28.3, twice as large. That means the margin of error shrinks by half from the smaller survey to the four-times-larger one.

A close approximation of the margin of error is 0.98/√n, where n is the sample size. With 800 law departments (n = 800), the margin of error comes to approximately 3.5 percent: a finding based on that group could vary up or down by 3.5 percentage points and be just as likely as the finding given. For 200 law departments, the swing is 6.9 percent. Four times as many participants cuts the confidence interval in half, so results from the larger set are more precise and reliable (See my post of Dec. 9, 2005: margin of error and sample size; Aug. 30, 2006: sampling error; April 22, 2007: error; and Oct. 31, 2007: formula for margin of error.). With benchmarks, respondent size matters.
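For readers who want to check the arithmetic, here is a minimal Python sketch of the 0.98/√n approximation described above. The function name and the rounding are mine; only the formula and the two sample sizes come from the post.

```python
# Minimal sketch of the 0.98/sqrt(n) approximation of the margin of error.
# The sample sizes are the ones mentioned above; the rest is illustrative.
from math import sqrt

def margin_of_error(n: int) -> float:
    """Approximate 95% margin of error for a sample of size n."""
    return 0.98 / sqrt(n)

for n in (200, 800):
    print(f"n = {n}: margin of error ≈ {margin_of_error(n):.1%}")

# Prints roughly 6.9% for n = 200 and 3.5% for n = 800.
```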

Published on:

Metrics from small law departments exhibit much more variability than the same metrics from large law departments. For example, from one year to the next, outside counsel spending per lawyer will swing higher or lower for law departments with one to three lawyers than for departments with 20 or more lawyers. The explanation, drawn from Daniel Kahneman, Thinking, Fast and Slow (Farrar, Straus & Giroux 2011) Chapter 10, comes from what he calls the “Law of Small Numbers.” Kahneman explains that “extreme outcomes (both high and low) are more likely to be found in small than in large samples.” Think of it this way: small law departments operate on a smaller sample of incoming invoices than do larger departments, so the variability (the standard deviation in the annual sets of invoices) is greater.
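A rough simulation can make Kahneman’s point concrete. In this hypothetical Python sketch, the invoice distribution and batch sizes are invented; the only claim is the statistical one, that averages of small batches scatter far more widely than averages of large ones.

```python
# A rough simulation of the "Law of Small Numbers": draw the same kind of
# random invoice amounts in small and large batches and compare how much
# the batch averages bounce around. All figures are made up.
import random
import statistics

random.seed(42)

def batch_means(batch_size: int, batches: int = 1000) -> list[float]:
    """Average of `batch_size` simulated invoices, repeated `batches` times."""
    return [
        statistics.mean(random.gauss(10_000, 4_000) for _ in range(batch_size))
        for _ in range(batches)
    ]

small = batch_means(30)    # stands in for a small department's invoices
large = batch_means(600)   # stands in for a large department's invoices

print(f"Std dev of small-batch averages: {statistics.stdev(small):,.0f}")
print(f"Std dev of large-batch averages: {statistics.stdev(large):,.0f}")
# The small batches produce far more extreme highs and lows.
```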

As a second illustration, year-over-year variability will tend to be greater from smaller benchmark surveys than from larger ones. If 150 companies take part in back-to-back years, it is less reliable to state something like “a 2% increase in total legal spending” than if 850 law departments take part each year.

Sadly, Kahneman notes, “We pay more attention to the content of messages than to information about their reliability” (at 118).

Published on:

More law department sales. Vendors, let me know of your successes. I still hope to announce license arrangements. So, to prime the pump, note that LexisNexis CounselLink was chosen by Fannie Mae and Hawaiian Electric, according to Met. Corp. Counsel, Nov. 2011 at 41.

Arguments by analogy are fallacies. “Almost any analogy between any two things contains some grain of truth, but one cannot tell what that is until one has an independent explanation for what is analogous to what, and why.” David Deutsch elaborates on this point in The Beginning of Infinity: Explanations that Transform the World (Viking 2011) at 371. To say that a well-run law department is a sewing machine requires the reader to step back and know all kinds of fundamentals about both sides of that analogic metaphor (See my post of Oct. 12, 2010: the fundamental cognitive function of metaphors.).

Eigenvectors and matrix mathematics. A matrix could be a table of the five law firms you paid the most during the past five years. The first column names the firm and the five columns to the right give, for each year, the ranking of the firm, where a 1 means it was paid the most that year, a 2 the second most, and so on. The five rows of yearly rankings form a 5x5 matrix, and mathematical tools can calculate the “score” of each firm. That score comes from the so-called “first-rank” eigenvector. Eigenvectors are useful for sophisticated mathematical functions, we learn from John D. Barrow, 100 Essential Things You Didn’t Know You Didn’t Know: Math Explains Your World (Norton 2008) at 223-24, who gives an example of a matrix (See my post of Oct. 29, 2011: matrices.).
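For the curious, here is a minimal numpy sketch of the mechanics. The 5x5 matrix of yearly rankings is made up, and treating the normalized principal eigenvector as a “score” simply follows the loose description above, not Barrow’s worked example.

```python
# A minimal numpy sketch of the eigenvector idea. The 5x5 matrix of yearly
# rankings is invented; row i holds firm i's rank (1 = paid the most) in
# each of five years.
import numpy as np

rankings = np.array([
    [1, 1, 2, 1, 3],
    [2, 3, 1, 2, 1],
    [3, 2, 4, 5, 2],
    [4, 5, 3, 3, 5],
    [5, 4, 5, 4, 4],
])

eigenvalues, eigenvectors = np.linalg.eig(rankings)
# Take the eigenvector paired with the largest-magnitude eigenvalue
# (the principal, or "first-rank," eigenvector) and normalize it so the
# entries sum to 1, which makes them easier to read as scores.
principal = eigenvectors[:, np.argmax(np.abs(eigenvalues))].real
scores = principal / principal.sum()

for firm, score in enumerate(scores, start=1):
    print(f"Firm {firm}: score {score:.2f}")
```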

Published on:

Our evolution equipped us to create causal explanations for events much more readily than to grasp underlying statistical explanations, to use the terms of Daniel Kahneman, Thinking, Fast and Slow (Farrar, Straus & Giroux 2011). Causal explanations, often in the form of a narrative, explain what has happened in terms of people or understood forces doing explicable things to bring it about. “Chris had a good day in court, so we won.” “Mega Corp.’s CEO had to close the deal before year end to get his bonus, so they conceded several points.”

Statistical thinking, by contrast, derives conclusions about individual cases from properties of categories and ensembles. Chris’s company typically prevails on 75 percent of its cases. Or, more than 60 percent of deals that reach a certain stage go on to close.

Savings attributed to the start of a new process, new software, or new training often exemplify a causal explanation: “We did X and Y followed.” Our quick System 1 mind favors neat patterns and stories that fit snugly together. In fact, it weaves them in a snap and out of few facts, but they may often be wrong or fanciful. Our slower System 2 mind can turn to probabilities and bigger-picture explanations.

Published on:

Early in this year’s data collection for the GC Metrics benchmark survey, 69 law departments stated how many years they had been using their matter management system. The average number of years those departments had had their system installed was 6.4. Later, with 652 law departments in the survey, just over twice as many (139) had submitted longevity data. The average dropped a year, to 5.4, and the median was 4 years. The law department with the longest history of a matter management system had just reached the two-decade mark!

Given the cost, the diversion of time and energy, and the risks that law departments face when they license and install a matter management system, it surprises me to see a five-year average life. But it could be that several of these law departments had another system before the current one, so their average years of using that kind of software would be higher. The question asked only about the current system and the number of years since its installation date.