Articles Posted in Benchmarks

Published on:

Law departments can share in benchmarking surveys nearly all of their metrics without concern for harming their company. Spending, staffing, matter loads and other data do not yield anyone a competitive advantage so long as the data is aggregated, normalized and anonymous (See my post of Oct. 17, 2005 that urges general counsel to take part in surveys.). I would argue that even if you disclosed this information to your arch-rival, your competitor could do nothing nefarious with it.

This is far from saying that all information can be disclosed without repercussions. For example, amounts paid in settlement or settlement strategy are highly confidential. So are amounts set as reserves on individual cases. A practice of registering patents in different corporate names should be kept quiet, as should negotiating positions for basic contracts, such as terms and conditions. Some trends in lawsuits should never see the light of day any more than should specifics about what is spent on certain law firms.

Published on:

Deep in an article about economics expert witnesses, in the Wall St. J., Vol. 249, March 19, 2007 at A1, A11, there appears at first read an astonishing factoid. “Peter Nordberg, a partner in the Philadelphia law firm of Berger & Montague, counts 87 cases since 2000 where economic or accounting witnesses have come under scrutiny by federal appeals courts. The witnesses’ testimony was knocked out 40% of the time.”

Shocking! The testimony of four out of ten expensive, fancy-resume experts eviscerated? Can’t you see the headline: “40 percent of experts’ money spent by law departments goes down the drain”?

Wrong. Any time an analysis of metrics draws conclusions from a non-representative set of data, the conclusions must be fairly described. The quoted figure came not from all cases where economic or accounting witnesses took the stand but only from the small subset of those cases that were appealed, and the appeals might have been taken precisely because the expert testimony was suspect.

Published on:

A number is a digit – “Ten is larger than eight.”

A digit used as an adjective becomes a datum: “We have ten cases.”

Data presented to make a larger point than mere enumeration become a metric: “The ten cases pending are below the volume we typically face” (See my post of Sept. 25, 2006 on net score analyses; March 25, 2005 on weighting client satisfaction scores; and March 12, 2006 and Dec. 31, 2006 on nominal versus inflation-adjusted figures.).

Published on:

Let’s convert a common description for stock markets – volatility – to an application for law departments. A law department tracks the amount of all bills that come from outside counsel each quarter. The department then creates a chart with columns corresponding to the number of bills it receives during the quarter by equal value ranges (e.g., $1 to $1,000; $1,001 to $2,000, etc.), increasing to a column for the largest bill range on the right. With that data someone could calculate the standard deviation for that humped graph. If the bills are normally distributed (which is a significant if), one standard deviation on either side of the average bill size covers about 68 percent of the bills.

If the law department diligently does this quarter by quarter, the standard deviations will vary from one quarter to the next. They vary because the frequency of amounts billed in a quarter has unpredictability, some volatility. Sometimes there will be very large bills, sometimes a preponderance of relatively small bills. Likewise, the prices of shares on an exchange exhibit variability, and standard deviation captures that volatility.

Over time, the law department may see in its patterns of volatility a way to better predict its spending in future quarters (See my posts of July 25, 2005; and Nov. 13, 2005 on power law distributions; and Jan. 14, 2007 on correlations and the squares of them to give the explanatory percentage.).
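A minimal sketch in Python of the quarter-by-quarter calculation described above, using made-up bill amounts (the dollar figures and the $1,000 bin width are illustrative, not drawn from any real department):

```python
import statistics
from collections import Counter

# Hypothetical outside-counsel bill amounts for one quarter (made-up data)
bills = [850, 1200, 950, 4300, 700, 2100, 1800, 12500, 600, 3200]

# Bucket the bills into equal $1,000 value ranges, as in the chart described above
histogram = Counter((b - 1) // 1000 for b in bills)
for bucket in sorted(histogram):
    low, high = bucket * 1000 + 1, (bucket + 1) * 1000
    print(f"${low:>6,} to ${high:>6,}: {histogram[bucket]} bills")

mean = statistics.mean(bills)
stdev = statistics.pstdev(bills)  # population standard deviation

# Share of bills within one standard deviation of the mean; this approaches
# 68 percent only if the bills really are normally distributed
within = sum(1 for b in bills if abs(b - mean) <= stdev) / len(bills)
print(f"mean ${mean:,.0f}, std dev ${stdev:,.0f}, {within:.0%} within one std dev")
```

Tracking the standard deviation each quarter, rather than only the mean, is what surfaces the volatility the post describes: a quarter with one outsized bill shows up immediately as a jump in the deviation.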

Published on:

Met. Corp. Counsel, Vol. 15, Feb. 2007 at 28, includes an interview of Mark Holton, the general counsel of RJ Reynolds. Holton uses the chic term “data mining” repeatedly and uses it interchangeably with “data analysis” to describe what his department does with the e-billing data it amasses. He doesn’t mention a matter management system, per se, but he makes much of the value of digging into the invoice data, thinking about what it means, communicating those conclusions to inside and outside lawyers, and reporting on it.

Old wine in new jugs, to me, but whatever the term – the trendy “data mining,” or the traditional “analysis of data,” “process re-engineering” (See my post of Jan. 25, 2007 about all processes produce data for mining; and July 21, 2005 about a consortium to share legal data for mining.), or “business intelligence” (See my post of March 23, 2006 on this marketing term that has many definitions.) – the goal is the same: learn something useful from the bills of outside counsel and act on that knowledge.

Published on:

In its “Management Report 2006,” Team Factors Ltd. presents data from 102 New Zealand corporations. A summary of the report states that within those companies the median number of lawyers per 1,000 employees was 6.86. I question the usefulness of such a metric (See my post of June 7, 2006 that criticizes a metric of lawyers per 1,000 employees; yet see my post of Dec. 22, 2005 that refers to 2.55 lawyers per 1,000 US employees.).

The report goes one step further, moreover, and shows (at 7) “median legal costs per employee” at [NZ]$1,719. My reservations are compounded. Not only are employee counts widely variable, because of the business models of various companies, but total legal spending is a number that has eluded a standard definition (See my post of Sept. 4, 2005 on the inclusiveness of “total legal spending”.). Combine a spurious metric with an amorphous figure and the resulting data has all meaning sucked out of it.

Published on:

Five posts savaged an assortment of metrics from the November 10, 2005 overview of the Open Legal Standards Initiative (OLSI, a non-profit entity) (See my posts of Sept. 13, 2006 (2); Sept. 17, 2006; Nov. 2 and 5, 2006.). Nina Wong, the CEO of Corporate Legal Standard (a for-profit company), has graciously and correctly pointed out that I confused the two entities; OLSI, not Corporate Legal Standard, distributed the draft collection of metrics for comment.

Wong also brought to my attention an article, “Constructing Standard Metrics of Your Law Department’s Value,” ACC Docket, Nov./Dec. 2006 at 74, by Jeffrey Carr, Steven Lauer and Wong, that provides background on the initiative and the revised set of metrics. Many, if not all, of the metrics I castigated did not survive into the final draft of the metrics as described by the article.

Published on:

Legal metrics catch my eye, and the Nat’l L.J., Vol. 29, Jan. 8, 2007 at 8, in a profile of Matthew Fawcett, the general counsel of JDS Uniphase Corp., offered some eye-catchers. The profile says that the company’s “legal arm” – it does the legwork? – consists of 30 lawyers and that they perform nine-tenths of their total work in-house. That says to me that of all the legal services needed by JDS Uniphase, only one-tenth is done by external providers.

Now the eyes are caught: a paragraph later, the profile gives Fawcett’s estimate that “66% of the legal department’s annual spending goes to external providers.” Nothing unusual, since a median spending ratio is about 40 percent inside, 60 percent outside.

But, and I’m probably missing something and certainly being microscopic, if two-thirds of that large law department’s budget goes to pay for 10 percent of the company’s legal work, something is out of kilter.
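To put a rough number on the mismatch, a back-of-the-envelope calculation from the two figures in the profile (treating “work” as a uniform unit, which is admittedly a simplification):

```python
# Figures from the profile: 90% of the work is done in-house,
# and 66% of the annual budget goes to external providers
inside_work, outside_work = 0.90, 0.10
inside_spend, outside_spend = 0.34, 0.66

# Budget share consumed per share of work, inside versus outside
inside_rate = inside_spend / inside_work     # about 0.38
outside_rate = outside_spend / outside_work  # 6.6

ratio = outside_rate / inside_rate
print(f"a unit of outside work costs roughly {ratio:.0f} times a unit of inside work")
```

On those figures, a unit of outside work consumes roughly 17 times the budget of a unit of inside work, which is why the two percentages sit so oddly together.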

Published on:

One hundred percent of the readers of this blog should realize that I like statistical analyses, think they are insightful, and wish the law department industry had more and better statistics (See my posts of May 31, 2006 urging in-house counsel to become comfortable with statistics; Sept. 4, 2005 on Richard Thaler and the value of statistics and evidence in decisions; and Sept. 4, 2005 on how legal intuition should yield more often to metrics.).

Let’s catalogue the appearances on this blog of 20 or so statistics terms or concepts:

See my posts of Jan. 20, 2006 on Bayesian statistics; May 31, July 25, and Oct. 24, 2005 on bell curves; April 5 and May 10, 2005 on correlation and Jan. 14, 2007 on the amount of variance in an independent variable explained by correlation; June 30, 2006 on several dispersion statistics; Oct. 22, 2006 on Gini coefficients; June 6, 2006 on trend-line equations with least squares calculation; Jan. 3, 2007 and March 10, 2005 on linear and exponential growth; Jan. 14, 2007 on log-log analyses; Dec. 9, 2005 on margin of error; Nov. 30, 2005 on mean, median and average; May 31, 2005 and Jan. 1, 2006 on normalized data; Nov. 13, 2005 on power laws and bell curves; Aug. 14, 2005 on regression analysis and Feb. 7, 2006 on regression to the mean; Jan. 6, 2006 on rolling averages; Nov. 25, 2006 on weighted averages and references cited; Nov. 6, 2006 on standard deviation; and June 19, 2006 and Feb. 8, 2006 [lawyers and productivity] and Dec. 8, 2006 [patents and financial performance] on statistical significance.

Published on:

Michael Woods made several points regarding my comments on PricewaterhouseCoopers’ study of patent litigation (See my post of Dec. 31, 2006.).

First, wrote Woods, “although 53% of the cases result in damage awards, according to the study, the study does not count injunctions, summary judgments, and motions to dismiss. The ‘success’ of a plaintiff in achieving an injunction can sometimes be far more valuable than damages.” For a plaintiff, injunctive relief may be as much a success as damages, or a different kind of success. Point well taken.

Second, Woods notes that as to the 70 percent of the patent cases that were overturned, adjusted, or remanded, in some of them damage awards might have been increased. That an appellate court could add to an award of damages was a possibility new to me. Thanks, Michael.