If someone surveys law departments to collect data and publishes the findings, they should give readers a minimum set of background facts about the methodology. The findings have credibility only to the extent that the methodology holds up to scrutiny. Here are some questions they should answer.
How many law departments responded, out of how many invited, and by what method of invitation? Good surveys have a sizable number of respondents: perhaps 100 or more, representing 5 to 10 percent of everyone reached by a broad-based invitation.
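As a rough illustration of the arithmetic a reader might do with those disclosed numbers, the sketch below applies the rule-of-thumb thresholds mentioned above. All figures and function names are invented for the example; they do not come from any particular survey.

```python
# Hypothetical check of a survey's headline numbers against the
# rough benchmarks discussed above: 100+ respondents, and a response
# rate in the neighborhood of 5 to 10 percent of a broad invitation.

def response_rate(respondents: int, invited: int) -> float:
    """Return the response rate as a percentage of those invited."""
    return 100.0 * respondents / invited

def clears_rule_of_thumb(respondents: int, invited: int) -> bool:
    """Apply the rough thresholds from the text (illustrative only)."""
    rate = response_rate(respondents, invited)
    return respondents >= 100 and rate >= 5.0

# Example: 120 law departments responded out of 1,800 invited.
print(f"{response_rate(120, 1_800):.1f}%")      # about 6.7 percent
print(clears_rule_of_thumb(120, 1_800))         # True
```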
Were the respondents representative of law departments generally? If the analysis purports to speak broadly about U.S. law departments, then the survey's respondents need to roughly match the characteristics of that wider population, for example in company size, industry, and department headcount.
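One way a reader could probe that match, sketched here under assumed data: compare the respondent mix on some characteristic against the known mix in the broader population. The category labels and counts below are invented, and a chi-square goodness-of-fit test is just one possible check, not something survey publishers necessarily report.

```python
# Hypothetical comparison of respondents (by company-size band) against
# an assumed breakdown of the overall population of U.S. law departments.
from scipy.stats import chisquare

# Respondent counts by company-size band (invented figures).
observed = [55, 40, 25]               # small, mid-size, large companies

# Assumed share of each band in the broader population (invented).
population_share = [0.60, 0.25, 0.15]

total = sum(observed)
expected = [share * total for share in population_share]

# Chi-square goodness-of-fit: a small p-value suggests the respondent
# mix differs meaningfully from the population it claims to represent.
stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p_value:.3f}")
```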