Articles Posted in Benchmarks

Published on:

Consider one more contributor to the mystery of why total legal spending as a percentage of revenue (TLS/Rev) declines as companies grow larger. TLS/Rev is widely recognized as the preeminent metric in terms of reliability and importance (See my post of Dec. 5, 2007: stability of the ratio over a decade.), and reasons for its shrinkage as revenue grows appeared in my article in Legal Times, Jan. 28, 2008. R&D spending may be another reason.

All other things being equal, it seems reasonable to hypothesize that a company with more R&D spending generates higher legal costs than a company with less. Patent lawyers and staff, patent application fees, foreign patent associates, patent infringement disputes, and licensing work would all cost more as R&D generates proprietary and patentable ideas (not to mention trademark work). Whether those increased IP-related costs rise faster than corporate revenue is an empirical question.
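To make the hypothesis concrete, here is a minimal Python sketch. Every figure in it is invented, and the assumption that IP legal work runs about 4 percent of R&D spend is mine, chosen purely for illustration:

```python
# A back-of-the-envelope sketch with invented figures; the 4% IP-cost rate is
# an assumption for illustration, not a benchmark.

IP_COST_RATE = 0.04  # assumed: IP legal work costs ~4% of R&D spend

companies = [
    # (name, revenue $M, R&D spend $M, non-IP legal spend $M) -- all invented
    ("SmallCo",    500,    50,   3.0),
    ("MidCo",    5_000,   400,  20.0),
    ("BigCo",   50_000, 2_500, 120.0),
]

for name, revenue, rnd, other_legal in companies:
    ip_legal = IP_COST_RATE * rnd      # IP-driven legal cost rises with R&D
    tls = ip_legal + other_legal       # total legal spending
    print(f"{name}: TLS/Rev = {tls / revenue:.3%}, "
          f"IP share of legal spend = {ip_legal / tls:.0%}")
```

Under these made-up numbers, absolute legal spending rises with R&D while TLS/Rev still falls, which is exactly why the question has to be settled empirically rather than by intuition.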

Published on:

A methodological spoke in the wheel of many surveys is selection bias. If survey data comes mostly from whatever surfaces on its own, because someone chooses to respond, the data may not be representative of the whole universe. That shortcoming is what statisticians refer to as “selection bias” (See my posts of Aug. 27, 2005; Dec. 1, 2006; and Dec. 3, 2006: general discussions.).

Several posts have raised a red flag about the possibility that data I commented on was skewed by this bias (See my posts of May 14, 2005: an example regarding knowledge management; Aug. 30, 2006: those who chose to reply to an internet survey; Oct. 16, 2006: diversity statistics; May 27, 2007: former users of an arbitration service; and July 2, 2007: discounts granted by law firms.). Since no surveyor can force law departments to respond, all data in this industry suffers from some degree of selection bias.
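To see how strong the distortion can be, here is a minimal Python simulation of the law-firm discount example. All numbers are invented, and the response behavior is an assumption: departments that won bigger discounts are more willing to say so in a survey.

```python
import random

random.seed(1)

# True discounts across 10,000 hypothetical law departments, uniform 0-20%.
population = [random.uniform(0, 0.20) for _ in range(10_000)]

# Assumed response behavior: willingness to respond rises with the discount,
# from about 2% at no discount to about 22% at a 20% discount.
respondents = [d for d in population if random.random() < 0.02 + d]

true_mean = sum(population) / len(population)
survey_mean = sum(respondents) / len(respondents)
print(f"True average discount:   {true_mean:.1%}")
print(f"Survey average discount: {survey_mean:.1%} (n={len(respondents)})")
```

The survey average comes out several percentage points above the true average, even with more than a thousand respondents. Sample size does not cure selection bias.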

Selection bias is not the same phenomenon as either adverse selection or survivor bias (See my posts of Feb. 9, 2008: adverse selection; and March 20, 2007: expert witness testimony.).

Published on:

Many times I have dissected survey results where I suspected poor methodology, so I have pulled together my posts on the subject. I follow a framework (in bold) drawn from the Manual for Complex Litigation (See my posts of Jan. 16, 2006: the Manual on trustworthy surveys; and Oct. 26, 2007: only poor methodology could explain bizarre results.).

The population was properly chosen and defined.

The sample reported on must be representative of the population (See my posts of May 14, 2005 and March 28, 2005.). That starting point turns on how the respondents were invited (See my posts of Oct. 16, 2006 and June 22, 2008.). The survey should also obtain enough respondents for the statistics to be meaningful (See my posts of May 11, 2008: large survey on morale in law departments; April 22, 2007: power tests and sample size; and March 28, 2005: number of respondents.). Low participation rates cast doubt on attributing findings to the entire population (See my posts of Dec. 19, 2007 and April 9, 2005: few respondents from a large invitee pool; and May 8, 2007 #4: bill padding example.).
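To put rough numbers on the sample-size point, here is a short Python sketch of the textbook margin-of-error calculation. It assumes a simple random sample, which a self-selected survey never is, so real surveys fare even worse:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p estimated from a
    simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (30, 100, 400, 1600):
    moe = margin_of_error(0.5, n)  # p = 0.5 is the worst case
    print(f"n = {n:4d}: a reported 50% carries a margin of about ±{moe:.1%}")
```

The margin shrinks only with the square root of the sample size: quadrupling the respondents merely halves the uncertainty, which is why a survey of 30 departments tells you very little.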

Published on:

I bump into metrics and I can’t resist probing them. For example, a press release by the Association of Corporate Counsel, dated June 30, 2008, announces that “global growth drives agenda of in-house lawyers in top companies.”

I couldn’t help trying to figure out whether we should rely on that sweeping conclusion. The first paragraph says that “more than 100 senior corporate counsel” were polled at a recent conference, so I assume 100 is close enough, since promoters always give the highest number possible. The conference used an audience response system (electronic voting pads and software) to survey the attendees. Of the attendees, “36% were counsel of companies with more than $10 billion in revenues, and 34% with $1-$10 billion in revenues.” That means that roughly 30 percent of the lawyers, presumably 30 to 35 of them, work in companies with less than $1 billion in revenue. Companies of that size are unlikely to be deeply immersed in global transactions and the attendant legal problems.
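A quick Python check of the release’s arithmetic, assuming the attendance fell somewhere between 100 and 115 (the release says only “more than 100”), bears out those figures:

```python
# Re-running the press release's arithmetic. The 36% and 34% come from the
# release; the attendance range is my assumption.

for total in (100, 115):
    over_10b = round(0.36 * total)   # companies above $10B in revenue
    mid      = round(0.34 * total)   # companies between $1B and $10B
    under_1b = total - over_10b - mid
    print(f"{total} respondents -> about {under_1b} from companies under $1B")
```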

Moreover, “85% of the respondents were corporate counsel, with 62% in a chief legal officer (CLO) role or a direct report to the CLO.” But that means nearly four out of ten of the corporate counsel do not report to the CLO, so they were presumably more junior lawyers, with less perspective on the company and its preparedness for global legal issues.

Published on:

This blog has at least a dozen posts on specific benchmarks for law departments, and I will eventually compile and publish them as a metapost. Meanwhile, aspects of benchmarks other than specific metrics deserve mention.

A general counsel ought to give thought to how best to present benchmark data to senior executives (See my posts of March 19, 2005: metrics to defend, not to change; Oct. 1, 2006: visual display of quantitative data; and May 8, 2008: online tool to help graphically present data.).

Processes may be more important to learn about than metrics, but they are trickier to study (See my posts of May 18, 2008: harder to do; May 14, 2005 and Oct. 18, 2005: metrics, practices or both; Nov. 2, 2006: process improvement ratios; Feb. 4, 2008: visits to other departments; and Jan. 13, 2008: benchmarking bad practices.).

Published on:

For several years now I have chafed when clients respond to a recommendation with “Who else does this?” Lawyers like to follow precedent, and many of them are allergic to risk and change, so I understand the question. Still, I haven’t yet screwed up my courage enough to say, “Who cares? If the change we are considering makes sense for your department, why does it matter whether others have tramped the path flat?”

I feel this way especially because I believe that all practices are embedded in a layered context, such that someone else who adopts only part of another law department’s practice cannot really follow suit (See my posts of Nov. 11, 2007: complex contexts; and Nov. 27, 2007: best practices ride roughshod over context.).

Do what makes sense for your circumstances, even if it makes you a pioneer.

Published on:

If a company tracks total spending by staff functions – IT, Facilities, HR, and Finance – as a percentage of revenue, then each function can show relative performance over time as a benchmark against the other functions (See my posts of April 9, 2005: finance, IT and HR benchmarks; and Sept. 4, 2005: total spend as a percentage of revenue for staff groups.).

Additionally, comparing changes by function in personnel per thousand employees, or in internal spending per person, lets you benchmark relative performance within a company. You would report not the absolute numbers but the changes in ratios. Ratios are key (See my posts of March 12, 2006: librarians to lawyers; Feb. 4, 2007: partner time to other timekeepers’ time; Dec. 22, 2005: compliance and ethics spending to legal spending; June 15, 2005: D&O defense costs to settlements; and Sept. 13, 2005: external spend on vendors to law firms.).
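Here is a minimal Python sketch of such a within-company report. All figures are hypothetical; only the shape of the output matters:

```python
# Hypothetical figures throughout. Each function's spend-to-revenue ratio is
# compared year over year, and only the change in the ratio gets reported.

revenue = {"2007": 4_000, "2008": 4_400}   # company revenue, $M (invented)

spend = {                                  # spend by staff function, $M (invented)
    "Legal":      {"2007": 16.0, "2008": 17.2},
    "IT":         {"2007": 60.0, "2008": 68.0},
    "HR":         {"2007": 22.0, "2008": 23.0},
    "Facilities": {"2007": 30.0, "2008": 31.5},
}

for function, years in spend.items():
    r_then = years["2007"] / revenue["2007"]
    r_now = years["2008"] / revenue["2008"]
    change = (r_now - r_then) / r_then
    print(f"{function:10s}: {r_then:.3%} -> {r_now:.3%} ({change:+.1%})")
```

A report like this shows at a glance whether the law department is holding its ratio steadier than IT or HR, without ever disclosing a dollar figure.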

Published on:

When economists publish articles based on analyzed datasets, they post the dataset online so that others can test it or make use of it in other ways, according to Ian Ayres, Super-Crunchers: Why Thinking-By-Numbers is the New Way to be Smart (Bantam 2007). It would be wonderful if benchmark data about law departments could, with confidentiality preserved, be made at least partially available for everyone to study.

We need a Creative Commons-style license from survey respondents: I will give you my data, but it must be made available to all others who can make use of it (See my post of June 6, 2006: Empirical Legal Studies.). That way there would be a shared pool of raw metrics, available to all researchers, with company names and industries deleted or coded so that no one could figure out which law department a particular number came from. A way around this worry, perhaps, would be to post online only ratios, not absolute numbers.
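Here is a rough Python sketch of that idea. The column names and rows are my inventions, not any real dataset: identifying fields are hashed away, and only ratios survive into the published file.

```python
import csv, hashlib, io

# Two invented rows; every column name is an assumption for illustration.
raw = [
    {"company": "Acme Corp", "revenue_m": 8_000,
     "total_legal_spend_m": 32.0, "lawyers": 40},
    {"company": "Globex", "revenue_m": 2_500,
     "total_legal_spend_m": 12.5, "lawyers": 18},
]

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["id", "tls_rev_pct", "lawyers_per_bn_rev"])
writer.writeheader()
for row in raw:
    writer.writerow({
        # A one-way hash replaces the company name. In practice you would add
        # a secret salt, since bare hashes of known names can be reversed.
        "id": hashlib.sha256(row["company"].encode()).hexdigest()[:8],
        "tls_rev_pct": round(100 * row["total_legal_spend_m"] / row["revenue_m"], 2),
        "lawyers_per_bn_rev": round(row["lawyers"] / (row["revenue_m"] / 1_000), 1),
    })
print(out.getvalue())
```

Publishing only ratios such as TLS/Rev and lawyers per billion of revenue keeps the pool useful for benchmarking while making it much harder to trace a number back to a company.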

My dream is probably a long way off, although I happen to believe that most law department data, even if shouted from the rooftops, won’t help another law department (See my post of April 15, 2007: what information law departments should be concerned about disclosing.). The biggest obstacle is that those who collect the data make money from it and view it as a proprietary asset, a source of publicity and knowledge. It should be the analysis and the clarity of the graphics that distinguish someone, not the raw data itself.

Published on:

Many organizations want to survey law departments. In my recent gargantuan collection, I listed 72 of my posts during 2007 that drew on a survey of law departments, most of which had been conducted that year (See my post of March 2, 2008.). Because some survey results deserved more than one post, I estimate that I found approximately 45 different surveys.

At least 35 different organizations conducted those surveys: law firms, trade publications, vendors, and consultants, most of them service providers of one kind or another. They seek metrics to gain insight into their market, to help them sell more of their services or products, or to attract favorable publicity (See my post of Aug. 5, 2007: possible bias in surveys by interested parties.).

Nearly all the surveys I wrote about targeted US law departments. If the average survey had 100 respondents after a five-percent response rate, then on average 2,000 law departments received each survey. Actually, my guess would be that the average number of participants invited to submit data is much higher and the response rate somewhat lower (See my posts of April 9, 2005: very low participation rates in a survey by a software vendor; and April 9, 2005: 3.6% rate.).
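For what it is worth, here is the arithmetic as a short Python check; the 5 percent rate is my assumption from above, and the 3.6 percent comes from the April 9, 2005 post:

```python
# Invitees implied by a respondent count: invitees = respondents / response rate.

respondents = 100
for rate in (0.05, 0.036):
    print(f"{respondents} respondents at {rate:.1%} implies "
          f"{respondents / rate:,.0f} departments invited")
```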

Published on:

Surveys of law departments go on all the time (See my post of Oct. 17, 2005: the plethora of law-department surveys.), and I implore readers to send word of any to me. I warmly embrace every survey of law departments that I can lay my hands on (See my posts of Oct. 10, 2005: 10 surveys cited in 2005; June 27, 2006: collects 9 surveys by interested parties; and Aug. 5, 2007: suggests several more.).

What I hadn’t realized was how common surveys of law departments are. Metrics buff that I am, I laboriously combed the 1,000 posts I published in 2007 for ones that drew on a multi-department survey. I have listed them below in chronological order by quarter: an astonishing 72!

The first quarter started with 18 posts (See my posts of Feb. 11, 2007: ACC’s Seventh Annual Chief Legal Officer Survey on pro bono; Feb. 11, 2007 [four posts]: ACC’s Seventh Annual Chief Legal Officer Survey; Feb. 16, 2007: American Lawyer survey of law firms on markups of contract attorneys; Feb. 18, 2007: Corp. Secretary: corporate responsibility officers; Feb. 19, 2007: Altman Weil/LexisNexis Martindale-Hubbell on time needed to find outside counsel; Feb. 19, 2007: ACC’s Seventh Annual Chief Legal Officer Survey on firing law firms; March 7, 2007 [Brad Blickstein]: In-house Tech survey; March 9, 2007: International Paralegal Managers Association on billable hours; March 13, 2007: procurement; March 17, 2007: Legal Week and UK firings of firms; March 26, 2007: InsideCounsel on hiring preferences [four posts]; and March 31, 2007: LexisNexis Martindale-Hubbell/Altman Weil on obstacles to alignment with clients.).