Articles Posted in Benchmarks

Published on:

Almost 200 senior in-house lawyers responded to a survey question that asked them to identify which “tracking metrics” they currently use. The survey data comes courtesy of a LexisNexis CounselLink study, “Effects of the Current Economic Downturn on U.S. Law Departments” (2009) at 14.

The eighth most frequently tracked metric (16.8% use it) is “documents/transactions completed.” This metric jumped out at me because departments have to have at least a rudimentary matter tracking system to be able to count documents/transactions completed. Further, I suspect that it really means “matters” completed, where a “transaction” equals a “matter.” To count contracts completed would be highly unusual, let alone the broader range of “documents” such as correspondence and memos.

Also deserving of comment are the two data analytics of “cost vs. average for similar matters” and “time spent vs. average for similar matters.” With each metric in use by about 13 percent of the survey respondents, those legal departments must have calculated averages for amounts spent on and hours billed by outside counsel. If that is true and representative, fixed fees should be much more widely accepted, since those departments would know approximately what a matter should cost and the distribution of cost outcomes. I doubt this.

Published on:

I wrote about the article by Bill Turner of Womble Carlyle on Monte Carlo simulations, and emailed him with some questions (See my post of June 26, 2009: Monte Carlo and sensitivity analysis.). One was about law departments that have used the firm’s Monte Carlo capabilities. While unwilling to disclose the name of a specific law department that has used the software, Turner wrote that “we have used this tool to develop value-driven alternative engagements for some clients.”

Turner also wrote clearly about multivariate regression. “In multivariate regression, you’re often testing to see how much of a variable, X, contributed to the forecast Y, and whether X is significant in the model, overall. The Monte Carlo tools perform sensitivity analysis by creating rank correlation coefficients between the assumptions and the model forecasts while the simulation is running. These coefficients indicate the strength with which the assumptions and the forecasts change together. The coefficients are squared and normalized to 100%.

“One of the key differences between using these techniques vs. multivariate regression is that multivariate regression analysis is usually run against actual data (where variables are then tested for significance, multicollinearity, heteroskedasticity, autocorrelation, etc.) whereas Monte Carlo analysis creates the data (and then runs the sensitivity) based on defined parameters. In principle, the concepts are similar, though the sensitivity analysis demonstrated in the paper involves how much the model assumptions contributed to the variation in the forecast rather than how much the assumptions contributed to the forecast itself. The goal is to measure risk.”
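Turner’s description of the sensitivity calculation can be sketched in a few lines. This is a minimal illustration, not the firm’s actual software: the two “assumptions” and the forecast formula are hypothetical, with assumption A deliberately weighted three times as heavily as B.

```python
import math
import random
import statistics

# Sketch of the sensitivity calculation described above: rank-correlate
# each assumption's draws with the forecast, then square the
# coefficients and normalize them to 100%.
random.seed(1)
N = 5_000
a = [random.uniform(0, 10) for _ in range(N)]   # assumption A
b = [random.uniform(0, 10) for _ in range(N)]   # assumption B
forecast = [3 * x + y for x, y in zip(a, b)]    # A is weighted 3x

def ranks(xs):
    """Replace each value with its rank (1 = smallest)."""
    order = sorted(range(len(xs)), key=xs.__getitem__)
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def pearson(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rank_corr(xs, ys):
    return pearson(ranks(xs), ranks(ys))

squared = {name: rank_corr(xs, forecast) ** 2
           for name, xs in (("A", a), ("B", b))}
total = sum(squared.values())
for name, s in squared.items():
    print(f"assumption {name}: {s / total:5.1%} of forecast variation")
```

With these weights, assumption A should account for roughly nine times as much of the variation in the forecast as B, which is exactly the kind of insight the sensitivity analysis delivers.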

Published on:

A clear explanation of the statistical technique known as Monte Carlo simulation appears in Womble Carlyle’s FocusExtra, 2Q2009 at 1, by Bill Turner. The newsletter explains the technique in the context of estimating the cost of a lawsuit through trial. Essentially, for each of four stages of a litigation, the law firm and the law department — or just the law department — estimate the most likely cost, as well as a low-cost estimate and a high-cost estimate. Software can then run many iterations that draw from that table to produce a bell-shaped curve of likely outcomes (See my post of May 15, 2005: Monte Carlo simulations as computational models.).

Because the outcome curve approximates a normal (Gaussian) distribution, the law department can calculate confidence intervals for any given total cost. That means you can say, for example, with 80 percent confidence that the total cost will not exceed $1.2 million.

A law department can also use the data and software for a sensitivity analysis. A sensitivity analysis tells how much of the variation in the outcome each factor in the model creates, the factors here being the four stages of litigation. This analysis is similar to what a multiple regression analysis can calculate.
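The estimate-and-simulate steps described above can be sketched briefly. The stage names and low / most-likely / high figures below are hypothetical placeholders, and the triangular distribution is one plausible choice for turning three estimates into a random draw:

```python
import random
import statistics

# A minimal Monte Carlo cost model in the spirit of the newsletter:
# low / most-likely / high cost estimates (in $000s) for four stages
# of a litigation, drawn as triangular distributions and summed
# over many iterations.
stages = {
    "pleadings": (50, 100, 200),
    "discovery": (300, 600, 1200),
    "motions":   (100, 200, 400),
    "trial":     (200, 400, 900),
}

random.seed(42)
N = 10_000
draws = {name: [random.triangular(lo, hi, mode) for _ in range(N)]
         for name, (lo, mode, hi) in stages.items()}
totals = sorted(sum(draws[name][i] for name in stages) for i in range(N))

print(f"mean total cost: ${statistics.mean(totals):,.0f}K")
print(f"80th percentile: ${totals[int(0.8 * N)]:,.0f}K")

# Crude sensitivity: because the stage draws are independent, their
# variances add, so each stage's share of the total variance shows
# how much of the spread in outcomes it creates.
var_total = sum(statistics.pvariance(draws[name]) for name in stages)
for name in stages:
    share = statistics.pvariance(draws[name]) / var_total
    print(f"{name:9s} drives {share:5.1%} of the variance")
```

With these made-up figures, discovery has by far the widest cost range, so it dominates the sensitivity ranking — which is the kind of finding that tells a department where tighter budgeting matters most.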

Published on:

Queuing theory and its models often assume that the rates of arrival of work and delivery of service can be described by a Poisson distribution (See my post of Jan. 20, 2006: one of many kinds of distributions of numbers; and Aug. 16, 2006: predicts likelihood of event during a given time period.).

Let me describe what a Poisson distribution looks like graphically. Visualize a column chart that shows on the horizontal, bottom axis the number of client requests for legal services that arrive in a legal department each week. The number of requests per week increases as you move to the right.

The vertical axis shows the relative frequency of each of those weeks, expressed as percentages increasing from quite low to perhaps 20 percent. Thus, the tallest column in the middle might be 15 requests a week, which happens during 18 percent of all weeks; the lowest column on the left corresponds to one request during a week, which happens five percent of the time; the lowest column on the right represents 25 requests for legal assistance during a week, which happens one percent of the time. The overall shape of the columns is somewhat like a bell curve, but with a longer tail to the right.

Published on:

The challenge when a legal group commissions a custom benchmarking project is not creating the questions, finding comparable law departments, confirming the accuracy of the metrics, or analyzing the results. Rather, it is the weeks and weeks needed to persuade other companies to submit data and the passage of time while you await their decisions. The delays of benchmarking are like the delays of selling your house: you can’t push buyers to make an offer and you can’t strong-arm general counsel to agree to take part. So you wait and wait.

Eventually you have to decide that the data is sufficient and close down the drawn-out effort to get one more company into the data set.

I have done well more than a score of benchmark projects for legal departments and am continually frustrated by how much like pulling teeth it is to extract the metrics. I can be confident about deliverables, effort, cost (even to the point of guaranteeing a fixed cost) and value, but I can’t control the waiting game.

Published on:

A physicist, Jorge Hirsch, devised a formula to rate the quality of the scientific papers published by a scientist. “The h-index is the number n of a researcher’s papers that have been cited by other papers at least n times. High numbers = important science = important scientist.” According to Wired, June 2009 at 94, “similar statistical approaches have become standard practice in Internet search algorithms and on social networking sites.”

Legal departments could translate this idea into the number of matters for which a law firm was hired. Let me explain. Law firm One has been retained 100 times during the past three years. Law firm Two has been retained for 50 matters, while law firm Three has handled 25 matters. Down the list, law firm Twelve has been retained 12 times. Thus the twelfth firm (n = 12) has been retained by your department at least 12 times, so your “h[law firm]-index” is 12. Every legal department can figure out its own h[law firm]-index and, if that information is shared, compare measures of concentration of outsourced work. The lower the number, the more concentrated the work.

The index is a descriptive metric for concentration of work given to your firms. An employment firm may top the list if it handles many smallish matters. Or a patent boutique may if it handles many patents. But eventually, down the list a ways, some firm will hit the h[law firm]-index point.
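The calculation is simple enough to sketch. The matter counts below are hypothetical, padded out from the example in the text, and the second call shows why a lower index signals more concentrated work:

```python
# Sketch of the h[law firm]-index: sort firms by matters handled,
# descending, and find the largest rank n at which the nth firm
# has been retained at least n times.
matters_per_firm = [100, 50, 25, 22, 20, 18, 17, 16, 15, 14, 13, 12, 5, 3, 1]

def h_index(counts):
    h = 0
    for rank, n_matters in enumerate(sorted(counts, reverse=True), start=1):
        if n_matters >= rank:
            h = rank
        else:
            break
    return h

print(h_index(matters_per_firm))   # 12, as in the example above
print(h_index([1000, 1, 1, 1]))    # 1: work highly concentrated in one firm
```

A department that parks a thousand matters with a single firm scores 1, while one that spreads 50 matters over each of 50 firms scores 50 — the index rises as work disperses.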

Published on:

We talk of inside lawyers per billion of revenue but it would be more insightful to convert that metric to lawyer hours worked and to add outside counsel hours, thus producing total lawyer hours per billion. (See my post of Nov. 8, 2005: measuring total hours of legal advice given.)

Inside lawyer hours are readily calculated, with a few assumptions such as 1,850 lawyering hours (See my post of May 21, 2009: chargeable hours per year by inside lawyers with 12 references.)

Outside counsel hours can be estimated based on effective billing rates per firm divided into totals billed during a year or the exact number of hours can be toted up from a matter management or e-billing system. Either way, translate dollars worked to hours, as you should do with an RFP (See my post of Dec. 5, 2005: translate dollars billed to lawyer hours worked.).
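The conversion described above is back-of-the-envelope arithmetic. Every figure below is a hypothetical illustration, not a benchmark:

```python
# Convert inside headcount and outside spend into total lawyer
# hours per billion dollars of revenue. All inputs are hypothetical.
revenue_billions = 4.0
inside_lawyers   = 10
hours_per_lawyer = 1850        # chargeable hours per inside lawyer
outside_spend    = 6_000_000   # dollars billed by outside counsel
effective_rate   = 300         # blended effective billing rate, $/hour

inside_hours  = inside_lawyers * hours_per_lawyer    # 18,500
outside_hours = outside_spend / effective_rate       # 20,000
per_billion   = (inside_hours + outside_hours) / revenue_billions
print(f"total lawyer hours per $1B of revenue: {per_billion:,.0f}")
```

With these inputs the metric comes out to 9,625 hours per billion, and — tellingly — slightly more than half of those hours come from outside counsel, a split the lawyers-per-billion metric alone would never reveal.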

Published on:

A fascinating article in the NY Times, May 11, 2009 at B7, describes an online service that computes the answers to questions by drawing on collections of data the company has amassed. Some 100 employees at Wolfram Research have gathered, verified and organized huge amounts of data. When a user types in a query, the software tries to determine the relevant area of knowledge and find the answers, “often by performing calculations on its data.” The site does not search the Internet for answers.

On its website the service claims, modestly, to be the “first step in an ambitious, long-term project to make all systematic knowledge immediately computable by anyone. You enter your question or calculation, and Wolfram Alpha uses its built-in algorithms and growing collection of data to compute the answer.”

One can imagine sometime in the next decade large amounts of benchmark data from law departments poured into a Wolfram Alpha database. Its computational prowess and gaudy visuals could make the current crop of survey reports look pitifully rigid and shallow.

Published on:

Matthew W. Geekie, general counsel of Graybar Electric Co., is the only male in a law department of 13 people. The other three lawyers and the remaining nine staff are all female. Moreover, Graybar’s 2008 net sales as a distributor of electrical, communications and networking products surpassed $5.4 billion, securing it the No. 455 slot on the Fortune 500. These facts come from the Nat. L. J.

This data means that Graybar employs less than one lawyer per billion dollars of revenue! That number is so low – most big companies employ more than three lawyers for every billion dollars of revenue – that this remarkable leanness (dare I say Amazonian ability) deserves kudos.