Articles Posted in Benchmarks

Published on:

An earlier post looked at total legal spending as a percentage of revenue, drawing on data from the 1993 Law Department Spending Survey of Price Waterhouse and its successor, the 2007 Law Department Survey of Hildebrandt (See my post of Dec. 5, 2007.).

Also noteworthy is the stability of other basic metrics over the 14-year period. In 1992, the median figure for total legal spending (TLS) as a percent of revenues was 0.40 percent (See my post of Jan. 18, 2007 on lawyers per billion of revenue across industries.). Fourteen years later, that figure had barely budged, at 0.44 percent.

Outside counsel spending as a percent of revenues shifted slightly, from 0.21 to 0.19 percent, while total inside legal spending dipped a tiny amount, from 0.14 percent to 0.13 percent. Given the number of departments in the two surveys, neither the TLS nudge upward nor the components’ minor adjustments are likely to be statistically significant (See my post of June 19, 2006 on statistical significance.).

Published on:

The 1993 Law Department Spending Survey of Price Waterhouse, which reported 1992 data, had 196 participants, with median worldwide revenue of $3.4 billion. Fourteen years later, the 2007 Law Department Survey of Hildebrandt, the successor of the PW survey, has 172 benchmark participants, with median worldwide revenue of $10.4 billion. Adjusted for inflation (stated in constant dollars), the 1992 median revenue of $3.4 billion equates to about $4.88 billion in today’s dollars.

In the 1993 study, the median number of lawyers was 20 (US only), and median total law department staff (US) was 45. After fourteen years, the comparable figures are 28 lawyers and 59 total legal staff. Thus, worldwide revenue for these comparable populations of mostly large law departments roughly tripled in nominal dollars (a 206 percent increase), while US lawyers went up 40 percent and total US legal headcount went up 31 percent.
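The growth arithmetic behind these comparisons can be sketched as follows, using only the survey figures cited above (the inflation factor is the one implied by the survey's constant-dollar statement):

```python
# Benchmark figures from the 1993 survey (1992 data) and the 2007 survey.
rev_1992, rev_2007 = 3.4, 10.4        # median worldwide revenue, $ billions
lawyers_1992, lawyers_2007 = 20, 28   # median number of US lawyers
staff_1992, staff_2007 = 45, 59       # median total US legal staff

def pct_change(old, new):
    """Percentage change from old to new."""
    return (new - old) / old * 100

print(round(pct_change(rev_1992, rev_2007)))          # 206 -- roughly a tripling
print(round(pct_change(lawyers_1992, lawyers_2007)))  # 40
print(round(pct_change(staff_1992, staff_2007)))      # 31

# Constant-dollar view: $3.4B in 1992 equates to about $4.88B in 2007
# dollars, implying a cumulative inflation factor of roughly 1.44.
inflation_factor = 4.88 / 3.4
real_growth = pct_change(rev_1992 * inflation_factor, rev_2007)
print(round(real_growth))  # about 113 percent growth in real terms
```

Even after stripping out inflation, real median revenue roughly doubled while US legal headcount grew by less than half, which is the gap the surrounding text is describing.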

Note that more growth occurred in the lawyer ranks than in the non-lawyer ranks (See my post of Nov. 28, 2007 about the decline in support-staff ratios.). Increased numbers of non-US lawyers and legal staff account for some of the ability to support the larger revenue.

Published on:

The usefulness of benchmark data depends in part on the number of survey respondents, because minor score differentials (such as the variation between a 4.1 and a 4.2 on a five-point scale) may be significant only with larger sample sizes (See my posts of Dec. 9, 2005 on margin of error generally; and Aug. 29, 2006 on subgroup analyses.). My faithful and intellectually insatiable readership demands a more precise explanation of margin of error.

n = 2 * z2

D2
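This formula appears to be the common approximation for the per-group sample size needed to distinguish two averages. A minimal sketch, assuming z is the standard score for the desired confidence level (1.96 for 95 percent) and D is the smallest difference worth detecting, expressed in standard-deviation units (the five-point-scale numbers below are hypothetical):

```python
import math

def sample_size(z, d):
    """Approximate per-group sample size: n = 2 * z**2 / d**2.
    d is the smallest detectable difference in standard-deviation
    units; z corresponds to the chosen confidence level."""
    return math.ceil(2 * z ** 2 / d ** 2)

# Detecting a 0.1-point gap on a five-point scale whose standard
# deviation is 1.0 means d = 0.1:
print(sample_size(1.96, 0.1))  # 769 respondents per group
# A full-point gap needs far fewer respondents:
print(sample_size(1.96, 1.0))  # 8
```

The quadratic term in the denominator is the point of the post: halving the difference you want to detect quadruples the respondents you need, which is why a 4.1-versus-4.2 gap rarely means anything in a small survey.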

Published on:

Some surveys and the interpretation of their results deliberately lean a particular way because the surveying organization has a bias and seeks certain results. Admittedly, no survey can be completely neutral, but that should be the goal. Examples of possible survey bias have been plentiful on this blog. Here is a sampling where, in order, bias might have seeped in due to financial gain, ideological goals, or manipulated methodology.

Law firms, consultants and vendors apply some spin so they can sell their services (See my posts of April 7, 2006 on surveys conducted by law firms; and June 18, 2007 on a consultant’s headlines about law departments “firing” law firms; April 3, 2005 regarding research by Blackberry on PDAs; Feb. 26, 2005 on compensation data from executive search firms; May 4, 2007 on international arbitration costs and PwC data; and May 27, 2007 on some data from the American Arbitration Association.).

Proponents of causes want to find data that strengthens their cause, which might tilt their survey efforts (See my post of May 27, 2007 about an ADR survey; Oct. 18, 2005 on latent desires for telecommuting; Oct. 18, 2006 on diversity efforts; July 18, 2006 on women researchers finding advantages with women managers; and June 7, 2006 regarding the US Chamber of Commerce and its ranking of states’ judicial systems.).

Published on:

When a survey announces that the average or median figure for something one year was X and the same figure rose or fell Y percent the next year, someone who wants to make use of that data must have confidence that the survey populations both years were either very large or reasonably similar. Otherwise, any change may be mostly regression to the mean (See my post of Feb. 7, 2006 for that concept.) or reflect a different set of participants in year two.

The smaller the set of participants, the greater the built-in variability. The concern about consistent data sets is particularly acute when a survey reports averages, because one large result can skew the comparison.
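A small sketch illustrates why averages are fragile when the participant pool changes while medians are not (all spend figures below are hypothetical):

```python
# Hypothetical outside-counsel spend ($ millions) for nine departments,
# plus one very large newcomer in year two's survey population.
year_one = [1.1, 1.3, 1.4, 1.6, 1.8, 2.0, 2.2, 2.5, 2.9]
year_two = year_one + [40.0]  # one giant participant joins the survey

def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    s = sorted(xs)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

print(round(mean(year_one), 2), round(mean(year_two), 2))
# mean jumps from 1.87 to 5.68, driven entirely by the newcomer
print(round(median(year_one), 2), round(median(year_two), 2))
# median barely moves: 1.8 versus 1.9
```

A survey reporting the average would announce a threefold increase in spending year over year, when in fact nothing changed except who filled out the questionnaire.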

Published on:

An interview in the Met. Corp. Counsel, Vol. 15, July 2007 at 27, spills over with metrics. In the Eastern District of Virginia “there is a 15% chance of the defendants succeeding on a motion to transfer.” Later the interviewee notes that in almost every District Court, “patentee plaintiffs have well over a 50 percent chance of winning a summary judgment motion.” Data also show that in two Courts “plaintiffs have more than an 80% chance of success at a jury trial.” As to the speed with which cases are handled, which depends in part on judges’ caseloads, there is great variance. In the Eastern District of Virginia fewer than three cases per judge were filed in 2006, whereas in the District of Delaware more than 47 cases per judge were filed.

Each of these metrics – and others of a similar kind – enables law departments to do two things. They can establish performance metrics for law firms that handle matters on a fixed fee. They can also base incentive payments on whether or not law firms match or better the norms of performance. To manage well, it helps to have metrics.

Published on:

Multiple-choice questions appear frequently on surveys, even though they are beset with methodological traps. Especially egregious are those questions that invite respondents to “choose all that apply” (See my posts of July 21, 2005 about that instruction; Dec. 20, 2005 that criticizes such a methodology; and March 13, 2006 on “choose more than one.”).

An improvement is to ask respondents to pick their top three or four of a large set (See my posts of Aug. 14, 2005 on this variation as well as Nov. 5, 2006; July 4, 2006 on “pick the top 5”; Nov. 5, 2006 on “pick your top three from ten.”).

Even better is to ask respondents to rank their choices (See my posts of Dec. 20, 2005 on putting priorities on the choices; and March 31, 2007 on alignment with clients.). One problem with this step is that respondents will rank even undesirable choices. They can also misread the instructions and reverse their ranking from what you intended.

Published on:

It’s an abysmal survey that mostly asks for data in ranges. With range data (for example, “Check size of department: 1-3 lawyers; 4-8 lawyers; 9-15…”), other than parroting back the results, all you can create are three-by-three or four-by-four tables, or however many ranges you have, to show how your data fits with other data; otherwise you cannot do calculations. I must confess I have sometimes tried to select a number below the mid-point of each range, but my makeshift efforts to use those figures in calculations are undoubtedly problematic and fraught with error.

I grant that it may be easier for respondents to pick a range, but against that slight advantage stands the difficulty of setting appropriate ranges. If the ranges are not equivalent from one survey to the next, another challenge arises because the scales have changed. Ask for actual numbers so that you can do calculations with the results.

Published on:

A previous post explored the statistical relationships in large US law departments between both lawyers and total legal spending compared to market capitalization (See my post of July 1, 2007.). That data set, admittedly small with only 35 companies, enticed me into calculating some additional correlations.

The correlation between the number of lawyers in each of these law departments and their companies’ 2005 revenue was 0.600. That moderate relationship reflects in part the wide range of staffing models and ratios.

Turn then to three other, related relationships. The correlation between lawyers and inside spending was very strong, at 0.915, because most internal spending is compensation and lawyers bulk largest in that cost. Hence, the link between more or fewer lawyers and higher or lower internal spending ought to be quite close.
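For readers who want to reproduce such figures from their own benchmark data, the Pearson correlation coefficient can be computed directly. A sketch (the lawyer counts and spend figures below are hypothetical, not the survey's):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical figures: number of lawyers vs. inside spend ($ millions)
lawyers = [10, 25, 40, 60, 90]
inside_spend = [3.0, 11.0, 14.0, 26.0, 33.0]
print(round(pearson(lawyers, inside_spend), 3))  # 0.986, a strong link
```

A coefficient near 0.9, as with the survey's 0.915, means the two series move together almost in lockstep, while a figure like 0.600 leaves substantial room for department-to-department variation.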

Published on:

Multiple-choice questions on surveys present several challenges (See my post of Dec. 20, 2005 on several methodological issues.). Because surveys use them all the time, it is worth noting one more criterion to keep in mind for multiple-choice inquiries.

One egregious error is for a survey question to give several choices, some of which overlap with others, while at the same time to omit reasonable, obvious choices (See my post of May 10, 2006 for some sloppiness from a Canadian survey; and March 31, 2007 about obstacles to alignment with clients.).

The ultimate goal for a surveyor is to provide all the choices that respondents might want to select and yet to avoid overlap between any of them. Consulting firms refer to this two-pronged test as MECE (pronounced Me Sea) – Mutually Exclusive and Collectively Exhaustive. It is an unattainable goal but it is a guideline to improve the quality of surveys and the conclusions drawn from them.