Articles Posted in Benchmarks

Published on:

An article in strategy + business, Winter 2009 at 42, reports the “R&D intensity” of 1,000 global companies. In 2008 that intensity, the percentage of sales devoted to R&D, was 3.6 percent, the same as in 2007. Two thoughts about this benchmark deserve mention.

First, is it accurate to say that R&D typically spends around six times what legal spends (See my post of Jan. 12, 2009 #1: R&D spend is 7 times total legal spend.)? Compare the R&D percentage to the 0.4 to 0.7 percent of sales that legal departments typically report (See my post of Aug. 21, 2008: total legal spend as percent of revenue with 9 references and one metapost.).
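The multiple implied by those two percentages is easy to check with quick arithmetic; a minimal sketch, using only the figures cited in this post:

```python
# Implied ratio of R&D intensity to legal spend intensity,
# using the percentages cited in this post.
rd_intensity = 3.6                 # R&D as percent of sales (2008)
legal_low, legal_high = 0.4, 0.7   # legal spend as percent of revenue

ratio_high = rd_intensity / legal_low    # 9.0x if legal spend is at the low end
ratio_low = rd_intensity / legal_high    # ~5.1x if legal spend is at the high end

print(f"R&D outspends legal by roughly {ratio_low:.1f}x to {ratio_high:.1f}x")
```

So “around six times” sits comfortably inside the range the two benchmarks imply.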

Second, does R&D spend exclude the costs of patent prosecution and protection and of trademark registration and protection? (See my post of March 2, 2009: R&D with 12 references.). No company should double count these costs, so there needs to be a consistent definition of what’s an R&D cost and what’s a legal cost.

Published on:

Empsight International compiles benchmarks from large law departments as a competitor to Thomson Hildebrandt’s survey. Empsight’s website states that its most recent CLCM Law survey covers 170 companies, which had median revenue of $12.9 billion, and covers 7,748 lawyers out of 13,558 overall incumbents.

Although I could criticize myself for the calculations that follow, let me squeeze some juice from those numbers. First, lawyers make up 57 percent of total staff, which implies roughly 1.3 lawyers for every support person, reasonably close to the typical one-to-one ratio of lawyers to support staff (See my post of Oct. 27, 2009: one-to-one ratio of lawyers to support staff with 9 references.).

With a further squeeze, what comes out is that the survey cohort averages about 45.6 lawyers per department. At the median revenue of $12.9 billion, dividing average lawyers by median revenue yields 3.5 lawyers per billion, which is a quite plausible ratio (See my post of Feb. 25, 2009: lawyers per billion with 22 references and one metapost.).
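For readers who want to retrace the squeeze, here is the arithmetic spelled out, using only the survey figures quoted above:

```python
# Back-of-the-envelope calculations from the Empsight survey figures
# quoted above (170 companies, 7,748 lawyers, 13,558 total incumbents,
# $12.9 billion median revenue).
lawyers = 7748
total_staff = 13558
companies = 170
median_revenue_billions = 12.9

pct_lawyers = lawyers / total_staff * 100            # ~57 percent of total staff
lawyers_per_dept = lawyers / companies               # ~45.6 lawyers per department
lawyers_per_billion = lawyers_per_dept / median_revenue_billions  # ~3.5
```

Note that dividing average lawyers by median revenue mixes two different statistics, so the 3.5 figure is an approximation, not a true median benchmark.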

Published on:

A review of a book about “evidence-based medicine” launched this post. Essentially, EBM emphasizes tests and data collection as the guide to the efficacy of medical interventions. Don’t rely on doctors’ anecdotal conclusions, but deliberately gather figures and wade into the statistics of whether aspirin reduces heart disease or flu shots prevent H1N1.

In the domain of legal department operations, we sorely need EBM — Evidence-Based Management. We need data on the efficacy of legal services. Unfortunately, we cannot run controlled experiments, only natural experiments (See my post of Aug. 1, 2006: natural experiments.). We can gather benchmark metrics from law departments that subscribe to a certain practice and the same metrics from otherwise similar departments that don’t. Similarly, we can dig much deeper into practice group metrics to start to untangle – with the insights of statistics – the effects on performance and productivity of different structures and practices. We can accumulate evidence of effectiveness and gradually clear some of the jungle of ignorance. Evidence for management practices from benchmarks is the best we can do.

Published on:

No

But allow me to elaborate.

If you retain fewer firms, the average number of matters will rise, all other things being equal. If you define matters more broadly, the number will fall. If you allow firms to charge time to general matters, the number will fall. If you put in place fixed-fee agreements, the number will fall. If you do business in more countries and retain local counsel, the number will fall. If you assign large matters to certain firms, the number will be distorted by that practice. Finally, other law departments do not track this metric, so you will have no comparison. For all these reasons, the metric is not worth gathering, at least without other metrics that might explain it.

Published on:

Only eight people have taken the moment or two to complete my poll, to the right, about the most useful benchmark metrics for their legal department. Do it now; don’t read the answer below!

Seven of the eight respondents (87.5%) chose “total legal spend as a percentage of revenue.” Four chose “inside-to-outside spend ratio”; three chose “lawyers per billion of revenue”; and one chose “external legal spend per lawyer.” Remember, everyone could choose two out of the list of six. No one picked as their two most useful “internal spend per lawyer” or “total legal spend per 1,000 employees.”

Published on:

“You can’t prevent people from gaming numbers, no matter how outstanding your organization. The moment you choose to manage by a metric, you invite your managers to manipulate it. Metrics are only proxies for performance. Someone who has learned how to optimize a metric without actually having to perform will often do just that. To create an effective performance measurement system, you have to work with that fact rather than resort to wishful thinking and denial.” This blunt reality comes from the Harvard Bus. Rev., Vol. 86, Oct. 2009 at 100.

A drawback to any benchmarking scorecard, therefore, is the risk that whoever’s ox is gored may turn to bull. Numbers will mysteriously transmute (See my post of March 11, 2009: gaming and manipulating with 12 references.). People will adjust their efforts and reporting to the priorities set by benchmarks.

Some of the article’s solutions to the problem of gaming are to (1) diversify the benchmarks, because it is harder to play around with several at once; (2) draw on various sources for metrics, because it is more difficult to fiddle with multiple contributors; and (3) vary the time periods covered, because that creates different scales and intervals. Another solution is to gather the metrics over several periods of time. A lawyer, for example, might manipulate the number of patents filed by splitting some inventions into more than one patent, but if the lawyer regularly does so there will still be meaningful data over several years. Stated differently, inflated or distorted measurements, when done consistently, still yield insights about performance — perhaps not the underlying “true” performance, but performance relative to prior years.

Published on:

Raw numbers have some usefulness, but they offer much more when weighting fits them into a context. So, for instance, 100 patent applications filed in 2009 tells something about a law department, but that number weighted by R&D spending, invention reports submitted, or company revenue tells more. Or weighting that figure against some other figure, such as licensing fees received, puts it in perspective.

For that reason, namely more insight from context, this blog has frequently mentioned various circumstances where weighting numbers makes sense (See my post of Nov. 30, 2005: one way to compute a weighted average; Dec. 31, 2006: a second way to calculate a weighted average; Nov. 25, 2006: weight surveys that cover multiple business components by component revenue; Feb. 20, 2006: a weighted average for litigation cycle time; Feb. 6, 2007: weight the components of law firms’ proposals; April 16, 2009: weight RFP responses; Sept. 5, 2007: probability-weighted sample; Dec. 31, 2006: how to weight lawyers per billion by revenue; March 25, 2009: weightings and a grid analysis; April 1, 2009: benchmarks should calculate weighted averages correctly; June 15, 2009: weight survey results to be nationally representative; and Nov. 8, 2009: weight multiple attributes in evaluations of law firms rather than ask a global rating question.).
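As one concrete illustration of the weighting theme, here is a hypothetical sketch, in the spirit of the posts cited above, of a revenue-weighted average of lawyers per billion. The departments and figures are invented for illustration:

```python
# A hypothetical sketch of a revenue-weighted average of lawyers per billion.
# The figures below are invented for illustration only.
departments = [
    # (revenue in $ billions, lawyers per billion of revenue)
    (2.0, 5.0),
    (10.0, 3.5),
    (25.0, 2.0),
]

total_revenue = sum(rev for rev, _ in departments)
# Weight each department's ratio by its share of total revenue.
weighted_avg = sum(rev * ratio for rev, ratio in departments) / total_revenue

simple_avg = sum(ratio for _, ratio in departments) / len(departments)
# The simple average (3.5) overstates the weighted average (~2.57),
# because the small department counts as much as the large ones.
```

The gap between the two averages shows why an unweighted survey mean can mislead a general counsel benchmarking against it.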

Published on:

I have been compiling a guide to law department benchmarks. While researching it, I realized that this blog has hundreds of metrics, but only some of them fall into the category most general counsel would call benchmarks. They would define benchmarks as commonly accepted metrics of law department performance. Not that the particular metrics produced are granted gospel status, but the way to calculate them is accepted, and so is their significance. So no one says that four lawyers per billion is right; everyone agrees that lawyers divided by billions of revenue ranks among the most important benchmark calculations and metrics.

Less well known are benchmarks that compare the performance figures of law departments on a selectively acknowledged basis. For example, I am completing a benchmark study for leading trademark holders. Thousands of active marks per full-time-equivalent trademark lawyer tells those experts quite a lot, but as an esoteric figure it offers little to most general counsel.

Particular legal departments track some numbers, but their general counsel do not see it as worthwhile to obtain similar numbers from other law departments. They may feel some comparative metric would be fine but far too hard to define and collect. Consider the percentage of contracts reviewed. A general counsel can tally that number for herself, but to achieve methodological reliability and persuade a sufficient number of other general counsel to gather it would likely be too painful for the insights obtained.

Published on:

An article in the Practical Law J., Vol. 1, Nov. 2009 at 70, offers a benchmark calculation about Computer Sciences Corporation (CSC). That calculation creates an opportunity to reflect on the shortcomings of the common benchmark, lawyers per billion. At $16.5 billion in revenue and around 100 in-house lawyers, CSC employs six lawyers per billion, perhaps a bit below the typical figure you find in technology companies.

When you think about that metric – lawyers per billion – for law departments generally, you realize its infirmities. Fundamentally, it leaves out the support staff who can make such a difference in the productivity of those lawyers (See my post of Oct. 27, 2009: one-to-one ratio.). With that omission, what is also lost is the more nuanced understanding of specialty roles, such as IT support, knowledge managers, paralegals experienced in various practice areas, and electronic discovery mavens.

No benchmark metric can bear more than its own weight, but going further, lawyers per billion of revenue takes no account of where the lawyers are located (See my post of Jan. 16, 2009: decentralized law departments physically with 13 references.).

Published on:

An article in the Admin. Sciences Quarterly, Vol. 54, June 2009 at 282, describes an index to measure how even or skewed nonprofits’ funding distributions were across multiple sources. The same formula could describe the distribution of a legal department’s spending on, say, the 20 law firms on which it spends the most during a fiscal year. [I picked 20 simply to create a cutoff point and because most law departments in fact engage 20 or more law firms each year.] If several departments computed the figure, we would have a benchmark for the concentration of outlay on law firms.

The formula is

n
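Whatever the article’s exact index may be, one standard measure of how concentrated spending is across n sources is the Herfindahl-Hirschman index, the sum of squared shares. A minimal sketch, with invented spend figures and no claim that it matches the ASQ formula:

```python
# A sketch of the Herfindahl-Hirschman index (HHI) as one standard measure
# of spend concentration across law firms. The ASQ article's exact formula
# may differ; the spend figures below are invented for illustration.

def hhi(spends):
    """Sum of squared shares of total spend: equals 1/n when spend is
    perfectly even across n firms, and 1.0 when one firm gets everything."""
    total = sum(spends)
    return sum((s / total) ** 2 for s in spends)

even = hhi([1.0] * 20)             # 20 firms, equal spend -> 0.05 (= 1/20)
skewed = hhi([81.0] + [1.0] * 19)  # one dominant firm -> ~0.66
```

A department whose index sits near 1/20 spreads its outside spend evenly over its top 20 firms; a figure approaching 1.0 signals heavy convergence on a few firms.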