Scientists define artifacts as “observations in their research that are produced entirely by some aspect of the method of research,” as Daniel Kahneman explains in Thinking, Fast and Slow (Farrar, Straus & Giroux 2011) at 110. Artifacts, the weeds in the garden of benchmarks, crop up when the way data is collected or prepared for analysis distorts the findings. This risk goes to methodology, not to sample size or analysis (See my post of Feb. 19, 2010: representativeness of survey respondents; May 20, 2010: four methodological bumps; May 25, 2010: effective response rates; and June 13, 2010: example of well-described methodology.).
Many artifacts lurk around benchmark surveys of law departments:
Delivery: it might make a difference whether the survey was mailed, available only online, or conducted by telephone or in person (See my post of Feb. 12, 2009: included telephone solicitation.).