Articles Posted in Benchmarks

Published on:

An article by Thomson Hildebrandt consultants in Exec. Counsel, April/May 2010 at 27, compares the hourly cost of inside lawyers with that of outside counsel. As found by the group’s survey, “the fully-loaded inside hourly cost per lawyer is $214. The effective rate of outside counsel is 35-50 percent higher.”

That would put outside counsel at $290 to $320 per hour, which seems light to me on the external side. Perhaps the external rate covers only lawyers and what they bill, but does not load in paralegals, other timekeepers or disbursements. Major US law departments push into the low $200s for their all-in attorney rate. The General Counsel Metrics survey shows a median cost per lawyer hour of $191 for 225 US departments.
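A quick back-of-the-envelope check of that range (my arithmetic, not the survey’s):

    # Implied outside-counsel rates from the survey's inside figure
    inside_rate = 214               # fully-loaded inside cost per lawyer hour ($)
    low = inside_rate * 1.35        # 35 percent higher
    high = inside_rate * 1.50       # 50 percent higher
    print(round(low), round(high))  # 289 and 321, roughly $290 to $320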

Published on:

The profiles of general counsel that appear in various trade publications dwell on employees, revenue and the number of lawyers, but they don’t ask about spending or non-lawyer staff. Even so, there is much to glean.

Take one instance. In Practical Law, June 2010 at 92, we learn that Procter & Gamble’s 135,000 employees worldwide generated $76.7 billion in revenue in the company’s latest fiscal year, thanks in no small part, I am sure, to its law department of 315 lawyers based in at least nine cities outside the US as well as two US cities.

From those base facts we can calculate that P&G has 429 employees per in-house lawyer and 4.1 lawyers for every billion dollars of revenue.
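For anyone who wants to reproduce the two ratios, the arithmetic is this (figures from the profile cited above):

    # Benchmark ratios from the P&G profile figures
    employees = 135_000
    lawyers = 315
    revenue_billions = 76.7
    print(round(employees / lawyers))            # 429 employees per lawyer
    print(round(lawyers / revenue_billions, 1))  # 4.1 lawyers per $1 billion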

Published on:

Using my data from legal departments of 42 technology companies, I calculated their standardized scores. Standardized scores convert a metric into standard deviations above or below the average (See my post of Aug. 4, 2009: compare differences in terms of standard deviations; July 31, 2009 #4: also known as a z-score analysis; and Jan. 4, 2010: z-scores to create indices.).

The formula takes each department’s figure (such as the number of lawyers it reported) and subtracts from it the average for the data set (26.6 for these tech companies). Then it divides the result by the standard deviation of the set (52.7). That standard deviation tells us that about two-thirds of the departments have between 1 and 79 lawyers (the average minus one standard deviation falls below zero, so the practical floor is one lawyer).

For one particular department I chose, its standardized score is 0.120, which expresses in terms of the standard deviation that the department is slightly larger than the group average. On a bell curve, that department would sit slightly to the right of the peak. When you calculate standardized scores, you can compare many departments on the same basis. Outliers become more visible, since 95 percent of the figures will fall within two standard deviations if the figures are normally distributed.
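A minimal sketch of the calculation, using the mean (26.6) and standard deviation (52.7) given above; the lawyer counts are hypothetical, not the 42 companies’ actual figures:

    # Standardized (z) scores: standard deviations above or below the mean
    mean, sd = 26.6, 52.7
    lawyer_counts = [3, 12, 33, 140]  # hypothetical departments
    z_scores = [(x - mean) / sd for x in lawyer_counts]
    print([round(z, 3) for z in z_scores])
    # [-0.448, -0.277, 0.121, 2.152]
    # A department with 33 lawyers scores about 0.12, matching the
    # example above: slightly larger than the group average.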

Published on:

In a network market for a service, the value of the service increases with the number of its adopters. A clear example is a benchmark survey, since the value of its reports increases directly with the growth of the participant group (the same holds for collective evaluations of law firms). In fact, value grows faster than participant numbers. This multiplicative quality is referred to as a “network effect.”

The more easily law departments can find comparable departments in a benchmark data set, the more valuable its findings become to them, which in turn attracts more law departments to join. A snowball effect of viral marketing takes off. At some point, the benchmark service with the most members and capabilities will dominate the scope and interpretation of the benchmark metrics it collects (See my post of Feb. 12, 2006: legal wikis will spread as people appreciate the value; Nov. 13, 2007: network effects and law department management; Nov. 17, 2008: references for vendors build network effects; and Dec. 17, 2009: Metcalfe’s law.).
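Metcalfe’s law makes the “faster than participant numbers” point concrete: if a network’s value tracks the number of possible pairings among n participants, n(n-1)/2, then doubling participation roughly quadruples value. A toy illustration:

    # Metcalfe-style growth: potential pairings among n participants
    def pairings(n):
        return n * (n - 1) // 2

    for n in (100, 200, 400):
        print(n, pairings(n))  # 100 -> 4950, 200 -> 19900, 400 -> 79800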


Tune into the network! Submit your department’s data now and get the July release with more than 525 law departments! Add your six pieces of 2009 staffing and spending data here.

Published on:

With so much of my effort over the past five months going to the General Counsel Metrics benchmark survey, I made a bad pun to lead into my posts on benchmark methodology.

Several of them grapple with the meanings of key terms (See my post of Dec. 15, 2009: how to define “full-time equivalent”; Feb. 4, 2010: nuances of “lawyer” as in lawyer per billion; March 2, 2010: “industry”; Feb. 23, 2010: revenue and accounting treatments of insurance companies; March 30, 2010: more “lawyer” nuances; April 16, 2010: are European patent agents “lawyers”?; and June 16, 2010: what do we cover with “litigation”.).

A handful of posts fret about the completeness and accuracy of data submitted on benchmark surveys (See my post of Sept. 29, 2009: needed: independent audits of benchmark methodology; Feb. 10, 2010: varying reliability of benchmark metrics; Feb. 19, 2010: representativeness of participants; Feb. 22, 2010: incomplete revenue figures from privately held companies; May 10, 2010: data confounded by branch offices; May 20, 2010: four methodological challenges when you collect benchmarks; and June 14, 2010: reliability of answers to a question on outsourcing.).

Published on:

Having offered ten reasons for initial reluctance to participate in a benchmarking study, I realized, to my disappointment, that I had not exhausted the topic (See my post of Aug. 29, 2008: ten reasons some CLOs defer benchmark surveys.). In fairness to general counsel who decline to include their department’s data in a benchmark survey, here are six more excuses.

  1. Futile – they don’t believe they can change their department’s or company’s fundamentals, so why bother to learn about management metrics?
  2. Higher priorities – other demands on their time and attention take temporary precedence

Published on:

My inaugural column for the online InsideCounsel, dated May 24th, looked at an example of correlations between inside and outside spending of law departments.

We are at an exciting point in law department research because the database of metrics from 600+ law departments permits analyses that could not previously have been done reliably. Correlations, as illustrated in the Morrison on Metrics column, are but one of the statistical tools that we can now bring to bear. Insights from them will improve hugely on mere descriptive metrics. A step beyond correlation, regression will be the exciting next frontier, especially as regression calculations can be tied to actual practices.
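For readers who want to try the idea on their own numbers, here is a minimal sketch of a Pearson correlation between inside and outside spending; the figures are invented for illustration, not drawn from the 600+ department database:

    # Pearson correlation between inside and outside legal spending
    # (hypothetical figures, $ millions)
    inside = [2.1, 3.4, 5.0, 7.2, 11.8]
    outside = [4.0, 6.1, 9.5, 12.9, 22.4]

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var_x = sum((x - mx) ** 2 for x in xs)
        var_y = sum((y - my) ** 2 for y in ys)
        return cov / (var_x * var_y) ** 0.5

    print(round(pearson(inside, outside), 3))  # near 1.0: strongly correlated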

Published on:

A recently published study of litigation had to set some parameters for the general counsel who responded to a survey. What activities should they count as “litigation” and which should they exclude?

“Respondents were instructed not to consider internal investigations or the broader universe of administrative or regulatory proceedings that might precede or replace the filing of a complaint.” By my understanding, therefore, EEOC investigations and the equivalent state inquiries would be excluded as would environmental or OSHA fact-finding proceedings. Nor would this definition count workers’ compensation determinations. Rate-change requests by utilities would not be included. SEC investigations or efforts by state counterparts would fall outside this scope of litigation. I suppose arbitrations and mediations, whether with employees or customers, might also be deemed non-litigation.

The definition is not airtight but I commend the University of Denver’s Institute for the Advancement of the American Legal System (IAALS) not only for its research but for its careful effort to define this key term.

Published on:

One way to help us understand a data set is a stem-and-leaf display. Click on the example below. It illustrates the 32 smallest law departments that took part in the General Counsel Metrics benchmark since the first release on June 1st.

Each number in the left column – the stem – is the tens digit of a department’s lawyer count. The top row shows that single-digit law departments – those with a stem of zero – accounted for 19 of the 32.

To the right of the vertical line are the “leaves.” Four departments had one lawyer, three had two lawyers, two had three lawyers and so on, up to one with nine lawyers. Moving down a row, where departments had at least 10 lawyers, indicated by the stem of 1, one department had 10 (the 0), one had 12 (the 2) and so on. One more example: among departments with 40 or more lawyers, there were only two – one survey participant had 42 lawyers and one had 46. I used free software from Shodor to create this display. Once you learn to decipher it, a stem-and-leaf display conveys detailed data conveniently.
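To show how such a display is built, here is a minimal sketch; the lawyer counts are hypothetical, not the 32 departments from the survey:

    # Build a stem-and-leaf display: stem = tens digit, leaf = units digit
    from collections import defaultdict

    counts = [1, 1, 2, 3, 5, 9, 10, 12, 17, 23, 28, 31, 42, 46]  # hypothetical
    stems = defaultdict(list)
    for c in sorted(counts):
        stems[c // 10].append(c % 10)

    for stem in sorted(stems):
        print(stem, "|", " ".join(str(leaf) for leaf in stems[stem]))
    # 0 | 1 1 2 3 5 9
    # 1 | 0 2 7
    # 2 | 3 8
    # 3 | 1
    # 4 | 2 6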