A common and—at this point—hopelessly clichéd response to this history is to invoke Goodhart’s Law: when a measure becomes a target, it ceases to be a good measure. The solution, in this interpretation, is to find better measures—more sophisticated metrics, altmetrics, multi-dimensional assessment. If citation counts can be gamed, supplement them with other indicators. If any single indicator can be gamed, use a portfolio. The search, as Keith Hoskin has put it, is always for “better targets.”
But this response misses what is actually wrong. Goodhart's Law names a cybernetic feedback problem: the measure gets corrupted, so fix the measure. Goal displacement, Merton's diagnosis, is a different diagnosis made on a different patient. The problem lies not in the metric but in the organizational form that needs metrics to function. The emphasis is inverted: Goodhart asks about the validity of the measure; Merton asks about the consequences for the thing being measured and for the organization doing the measuring. One implies we should repair the instrument; the other suggests the instrument is working exactly as institutional logic requires.
Framing the dysfunction as “Goodhart’s Law” constrains what questions you can ask. It leads, inevitably, to the search for better indicators—exactly the program that metascience reformers have pursued for the past two decades through pre-registration, registered reports, open science initiatives, DORA declarations, altmetrics manifestos.15 These are not trivial interventions, and some may help at the margins, at least until the players in the game learn to manipulate them. But they remain inside the frame, tinkering with measures while leaving the organizational form untouched. The measures don’t just assess science; they reshape what kinds of projects researchers choose to pursue, what questions seem worth asking, what work seems worth doing.16 Fixing the metric doesn’t fix that.