Having written approvingly about so-called 360˚ assessments, in which managers, peers, and direct reports evaluate a lawyer, I think of them as useful (See my post of Dec. 26, 2007: 360˚ instruments with 5 references.). What a person learns from 360-degree feedback can be constructive, but an article in Talent Mgt., Aug. 2007 at 12, raises some disturbing points that vendors and advocates won't tell you.
- "Most 360-feedback instruments measure competencies that are highly correlated with one another, making it difficult to discern specific areas on which to focus developmental efforts."

- "If you use average scores to summarize rater feedback, without some indication of rater agreement, it's easy to misinterpret polarized feedback. This can lead to behavioral changes that might actually be destructive."

- "Correlations among rater groups are only modest, inviting difficulty in knowing what the differences among groups really mean or where to put one's energy to modify behavior."

- "Little research exists about whether qualitative or quantitative results from 360-feedback optimize acceptance and behavior change."

- "The 'effect size,' how much behavior actually changes, from 360-feedback is typically very low."
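The second point is easy to see with a little arithmetic. Here is a quick sketch of my own (the numbers are hypothetical, not from the article): two raters who score a lawyer 1 and two who score her 5 produce the same average as four raters who all score her 3, so a report that shows only the mean hides the split, while even a simple measure of spread such as the standard deviation flags it.

```python
from statistics import mean, stdev

# Hypothetical ratings on a 1-to-5 scale (illustrative only, not real data)
polarized = [1, 1, 5, 5]   # raters split sharply on this competency
uniform   = [3, 3, 3, 3]   # raters agree the behavior is middling

for label, scores in [("polarized", polarized), ("uniform", uniform)]:
    avg = mean(scores)
    spread = stdev(scores)  # rough proxy for rater (dis)agreement
    print(f"{label}: mean = {avg:.1f}, std dev = {spread:.2f}")

# Both groups average 3.0, but the standard deviations (2.31 vs. 0.00)
# expose the polarized feedback that the mean alone would conceal.
```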
Ouch. With blows like these landing all around, it’s a wonder 360˚ assessments stand up.