Articles Posted in Benchmarks

Published on:

It hurt to realize that all is not well with one metric: median sick days taken per year by in-house lawyers. We know from the Fin. Times, July 18, 2006 at 6 that “on average, managers in the UK take just 3.19 days off a year due to sickness — about half the rate recorded by other staff.” That precision means the finding is not doctored. We do not know the comparable figure for the workaholic in-house counsel of the world.

I have a malingering fear that the absent metric will not get well. I can’t put to bed the worry that the ward of law department management may never recover from this lack of an operating benchmark.

Published on:

Here is a snippet of data from GC Mid Atlantic, Sept. 2006 at 29-30: “The Esquire Group, a national legal search and consulting firm, reports that its average hourly rate for contract attorneys is $80 per hour, while the average hourly rate of in-house counsel is $108 per hour and outside counsel rates average $185 per hour.”

The fact is, contract attorneys cost about the same per hour as employed attorneys, except that they do not carry the typical 25-to-30 percent benefits load. As to the rate comparison between in-house counsel and outside counsel, both figures seem low (See my post of Oct. 18, 2005 on how to calculate the former figure.), and the gap seems much larger than other data suggests (See my post of Nov. 16, 2005 and a $190 an hour outside figure.).
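To make that load adjustment concrete, here is a back-of-the-envelope sketch in Python. The $80 and $108 figures come from the Esquire Group quote above; whether the benefits load should be added to or backed out of the in-house figure depends on what that figure already includes, which I cannot tell from the quote, so the sketch shows both directions.

```python
# Rough arithmetic only; which adjustment applies depends on whether the
# quoted $108 in-house rate already reflects the benefits load.
CONTRACT_RATE = 80.0   # $/hour for contract attorneys (Esquire Group)
IN_HOUSE_RATE = 108.0  # $/hour for in-house counsel (Esquire Group)

for load in (0.25, 0.30):
    grossed_up = IN_HOUSE_RATE * (1 + load)   # if the $108 excludes benefits
    backed_out = IN_HOUSE_RATE / (1 + load)   # if the $108 already includes benefits
    print(f"{load:.0%} load: grossed up ${grossed_up:.0f}/hr, "
          f"backed out ${backed_out:.0f}/hr, versus contract ${CONTRACT_RATE:.0f}/hr")
```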

Published on:

Previous posts have flayed the Nov. 10, 2005 release by The Open Legal Standard of its “top 25 general law department metrics” (at 9) (See my post of Sept. 13, 2006.). Unfortunately, criticism of seven of those metrics doesn’t end the onslaught. More are flawed.

What sense is there in “Percentage of in-house time devoted to counseling/proactive risk-reduction efforts” (No. 22)? Every time an in-house lawyer communicates with a client could be deemed “counseling,” and every time an in-house lawyer spots a possible legal risk and does something about it could be deemed “proactive risk-reduction.” The parts of this metric consume the whole.

“Percentage of cost of resolving a matter associated with non-professional staff time” (No. 23) bewilders me. It may be a clunky way to measure leverage: how heavily does a department use paralegals (who might be insulted to be lumped in as non-professionals)? But even so, the “cost of resolving” implies settlement dollars, and those funds bear no evident relationship to the time secretaries and file clerks spend on a matter. What does this strange metric mean, even if it could be collected?

Published on:

Earlier I savaged three of the 27 general metrics proposed by the Corporate Legal Standard (See my post of Sept. 13, 2006.). My poison pen still has ink in it.

Two of the metrics concern themselves with “the ratio of cost of legal research conducted” by outsourcing firms as compared to law firms (No. 10) or internally as compared to externally (No. 11). Sorry, neither metric matters an iota to law departments.

One metric excludes law departments with fewer than six or seven lawyers – the plurality of US law departments – because it looks at the “ratio of non-management in-house attorneys to in-house attorneys” (No. 17). Not only is the definition of “management attorney” elusive and arguable, but in smaller departments only the general counsel manages other lawyers. If management extends to secretaries or paralegals, every lawyer is likely to be a “management attorney.”

Published on:

The Nov. 10, 2005 report by The Corporate Legal Standard includes at page 9 a table of 26 metrics, which pertain to cycle time (3 metrics), cost (2), process efficiency (14), and productivity (7). The listing presumes that the direction of improvement for each metric is obvious: shorter cycle time, more matters with budgets, more matters that use prior work product, more time devoted to strategic planning. My objections to the list are many, but let me start with three (See my post of Sept. 13, 2006 for more disagreements.).

“Percentage of matters handled under alternative fee arrangements” (No. 7) allows too much opportunity for gaming. Better to focus on the percentage of fees paid under alternative fee arrangements. Not all legal services, such as sporadic counseling, fall into countable “matters,” and law departments can define matters finely or grossly (See my post of April 17, 2006 on how accounting standards dictate some matters.). More important, smaller commodity matters (See my post of Sept. 13, 2006 on the meaning of “commodity legal work.”) lend themselves more to alternative billing arrangements than do expensive, unusual matters. Still, fees count more than matters.

“Percentage of matters handled entirely consistently with established law department procedures” (No. 9) is ridiculous. Who would make all these assessments? How appropriate are the department’s “procedures”? What does it mean for a procedure to be “established”? Where is the value in checking boxes of compliance? Those are but a few of my many objections to this putative metric.

Published on:

The lifeblood of comparative empirical research on how law departments function has to be survey data, since no one aside from a handful of consultants develops any multi-department understanding. We rely on responses by law department lawyers, even though a well-documented flaw in many surveys calls the accuracy of the data into question.

The flaw is that many survey respondents dislike saying that they have no knowledge or no basis for an answer; they do not blush at giving an uninformed opinion (See my posts of May 17, 2006, where many respondents assessed judicial districts even though they had no first-hand experience; of Aug. 28, 2006, on probably subjective opinions as to whether law firms make too much money; and of May 24, 2006, on the dreamed-up payoffs from alternative billing arrangements.).

We often have nothing better to draw on than survey responses, but we should not blind ourselves to the risk that some respondents gave answers even though they didn’t have a clue.

Published on:

While I have complained about the lack of benchmark metrics for practice areas (See my posts of July 20, 2005 and of May 28, 2005 on this missing set of metrics.), in fact some have appeared.

Metrics for litigators are discussed (See my posts of Jan. 25, 2006 on lawsuits pending; May 31, 2005 on Canadian case loads per litigator; and June 15, 2006 on claims per lawsuit.). Intellectual property data also shows up (See my posts of Aug. 3, 2005 and July 18, 2006 on patents; and April 9, 2006 on trademarks.), as does data on international mergers and acquisitions deals per lawyer (See my post of Dec. 22, 2005.). Contracts handled per commercial lawyer make an appearance (See my post of Jan. 6, 2006.), as do hints of HR matters per lawyer (See my posts of Jan. 3, 2006 on EEOC charges and June 7, 2006 on lawyers per 1,000 employees.).

Still incognito are metrics that suggest the appropriate staff for environmental work, antitrust and import/export. Soon these too, and other practice group metrics, will be unmasked (See my post of April 7, 2006 on international lawyers in the US; and Dec. 22, 2005 on compliance spend compared to legal spend.).

Published on:

The NY Times, Aug. 27, 2006 at 10WK, discusses a smorgasbord of survey-methodology risks (See my post of Aug. 29, 2006 on margin of error and subgroups.). False precision and non-randomness deserve comment.

“When a polling story presents data down to tenths of a percentage point, what the pollster demonstrates is not precision but pretension.” An example is BTI’s survey about law department technology (See my post of April 26, 2006 on its survey of “more than 200 lawyers.”). With a margin of error of more than plus or minus ten points, it is misleading to report that respondents most wanted technology solutions to be “user friendly” at the level of 19.3%. BTI should have written much more broadly, along the lines of “roughly one in five chose user friendly.”
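To show how little work that decimal point can do, here is a rough Python sketch (my own illustration, not a calculation from BTI or the Times) of how many respondents a simple random sample would need before reporting to fractions of a percentage point becomes defensible, assuming the textbook 95 percent formula.

```python
import math

def sample_size_needed(p, margin, z=1.96):
    """Respondents needed for a 95% margin of error of +/- `margin` around proportion p."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

# How many answers would it take before "19.3%" deserved its decimal place?
for margin in (0.005, 0.001):   # plus or minus half a point, then a tenth of a point
    print(f"±{margin:.1%}: {sample_size_needed(0.193, margin):,} respondents")
```

Tens or hundreds of thousands of respondents, in other words, not a couple of hundred.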

The Times also emphasizes that the respondents to a survey must be randomly selected for there to be statistical reliability. For example, if a survey of law departments collects data only through an internet site, randomness evaporates. One reason is that a significant number of in-house lawyers do not venture into the online ether; another is that, of those who did go online, only some chose to respond, which creates a self-selection bias (See my post of May 14, 2005 for an example of selection bias, as well as my fretting about systemic bias in surveys in my posts of April 9, 2005 about a survey by Serengetti and Aug. 27, 2005 about a survey of IT respondents.).

Published on:

An excellent commentary on survey methodology, in the NY Times, Aug. 27, 2006 at 10WK, discusses sampling error (See my post of Dec. 9, 2005 and its description of margin of error; and my post of Jan. 30, 2006 on Kirkpatrick & Lockhart’s “±10%” results.). The statistical term “sampling error” properly applies only to a randomly sampled survey population; it describes the range of approximation around a survey’s results. The article explains that there is even a formula for calculating the error range when comparing one survey’s results to another survey’s similar question and results. None of the annual surveys published about law departments has ever done that calculation.

In this post, though, my point concerns subgroups. I wrote earlier about a survey with responses from about 400 in-house counsel (See my post of Aug. 28, 2006 on the 34% who had fired or considered firing a firm.). Assuming the respondents were randomly distributed – invited to participate and participating without any pattern – the margin of error for that number of responses would be about plus or minus five points. Accordingly, the survey should have pointed out that, at a 95 percent confidence level, the true figure could fall anywhere between 29% and 39%.

If anyone tried to extend that range to a subgroup, such as law departments with more than five lawyers, the number of respondents in that group would be less than 400, so the range of approximation (sampling error) would increase.
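Here is a minimal sketch of that arithmetic in Python, assuming simple random sampling and the textbook 95 percent formula; the subgroup size of 120 is invented purely to show how the margin widens.

```python
import math

def margin_95(p, n):
    """95% margin of error for a proportion p from a simple random sample of n respondents."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# Full sample: roughly 400 respondents, 34% of whom had fired or considered firing a firm.
print(f"n=400: ±{margin_95(0.34, 400):.1%}")   # about ±4.6 points, i.e., roughly 29%-39%

# A hypothetical subgroup of 120 respondents (a made-up size, for illustration only):
print(f"n=120: ±{margin_95(0.34, 120):.1%}")   # about ±8.5 points: the range widens
```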

Published on:

A year ago I wrote about a Corporate Counsel survey (See my post of July 16, 2005 on law firm and law department relations.). This year the renamed monthly, InsideCounsel, July 2006 at 52, repeated a question from 2005: “Most law firms pad their bills.” In 2005, 36% of the respondents agreed that most law firms pad their bills, 37% disagreed, and 28% were neutral. This year, 42% agreed (a six-point increase over 2005) and 33% disagreed (a four-point drop), with 24% neutral.

Set aside the ambiguity of “most” and the question of what level and frequency of inflated bills creates a “pad.” Just because the lawyer who reviews a bill knocks something off it does not mean that the law firm improperly padded it. Could methodological influences account for some or all of the swing toward “agreed” and its condemnation of widespread law firm greed? Or have perceptions drastically soured?

Last year’s survey had 295 law department respondents whereas this year’s had 407, of which about 200 were general counsel. Let’s not dwell on what is probably a very low response rate (See my posts of April 8, 2005 about employee satisfaction surveys and response rates and Nov. 24, 2005 about client satisfaction survey rates.). About a third more responded this year, but what difference that makes, other than giving the 2006 results somewhat more representativeness, I can’t tell. We also can’t tell how many lawyers replied to both surveys. The less the overlap, the more credibility the shift toward suspicion of padding has. Another subject of speculation is the possibility that last year’s survey results raised the consciousness of in-house counsel and that alone bumped up the percentage who agreed to greed.
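Presumably this is the kind of calculation the Times article discussed above has in mind for comparing one survey’s result with another’s. Here is a minimal sketch in Python, assuming both years were simple random samples, which, given the likely self-selection, they probably were not.

```python
import math

def diff_margin_95(p1, n1, p2, n2):
    """95% margin of error for the difference between two independent proportions."""
    return 1.96 * math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

# InsideCounsel padding question: 36% of 295 respondents agreed in 2005, 42% of 407 in 2006.
margin = diff_margin_95(0.36, 295, 0.42, 407)
print(f"observed swing: {0.42 - 0.36:.1%}, margin of error on the difference: ±{margin:.1%}")
# The six-point swing sits inside the roughly ±7-point margin, so sampling noise
# alone could account for some or all of the apparent shift in attitudes.
```

On that arithmetic alone, the swing toward “agreed” is suggestive rather than conclusive.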