If you follow this blog, then there’s a good chance you have an interest in how to measure fuzzy concepts such as governance and corruption; it’s an issue Global Integrity tracks closely. If so, then you should read Gerardo L. Munck’s recently published Measuring Democracy. It’s a hard-hitting critique of the conventional wisdom (and the widely used data) behind efforts to measure democracy.
In Gerry’s slim but powerful volume, he takes up many of the same questions we’ve been wrestling with in the context of “democracy” and democracy-promotion. Can we measure “democracy”? And if we can, what are best practices to guide such a measurement exercise?
The book is Measuring Democracy: A Bridge Between Scholarship and Politics (Johns Hopkins University Press, 2009). We should disclose that we’ve worked with Gerry in the past and think highly of his work.
In Measuring Democracy, the author promotes a series of simple yet powerful best practices that should be the basis for any scholarly or practitioner-based assessment of democracy at the country level. Those best practices center on a framework (developed with Jay Verkuilen) that emphasizes three key attributes of any measurement tool or resultant data:
- the precision of conceptualization (in other words, are you precise in defining the concept you’re interested in and in developing a tool to measure it?);
- the consistency of actual measurement, especially the importance of clear and identifiable scoring criteria (or “coding,” to use the social science parlance);
- and the rationale behind the selection of an aggregation methodology to transform disparate and disaggregated data into accessible indices.
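That third point, aggregation, is easy to underestimate. Here’s a minimal, purely illustrative Python sketch (the countries and sub-scores are invented for this post, not drawn from the book or from any real index) showing how the same disaggregated data can produce opposite country rankings under an additive rule versus a “weakest-link” rule:

```python
# Hypothetical sub-scores on three democracy attributes (0-10 scale),
# invented solely to illustrate how aggregation choices matter.
scores = {
    "Country A": [9, 9, 2],  # strong on two attributes, very weak on one
    "Country B": [6, 6, 6],  # uniformly middling
}

def aggregate_mean(subscores):
    """Additive aggregation: strength on one attribute can
    compensate for weakness on another."""
    return sum(subscores) / len(subscores)

def aggregate_min(subscores):
    """'Weakest-link' aggregation: the index is only as good as
    the worst attribute, with no compensation across attributes."""
    return min(subscores)

for country, s in scores.items():
    print(country,
          "mean =", round(aggregate_mean(s), 2),
          "min =", aggregate_min(s))
```

Under the additive rule Country A outranks Country B; under the weakest-link rule the ranking flips. Neither rule is “wrong,” which is precisely Munck’s point: the choice needs an explicit rationale, not silence.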
None of this is rocket science, but Measuring Democracy lays bare just how many of the most widely used democracy data are poorly constructed, misused, or both. It saves its most strident critiques for Freedom House’s Freedom in the World data. Whether attacking Freedom House for its opaque and ill-designed aggregation methodology (“In short, the numerous conceptual and measurement problems that weaken the Freedom House indices are compounded by the blatant disregard of the challenge of aggregation.”), or what Munck sees as the organization’s simplistic conceptual approach to measuring democracy (“Freedom House includes so many attributes…and does so with such little thought about the relationships among such attributes…that it is hardly surprising that a large number of distinct or at least vaguely related aspects of democracy are lumped together.”), Measuring Democracy pulls no punches. As I read the book, I couldn’t help but compare Munck’s take-down of Freedom House to the deconstruction of the World Bank Institute’s Worldwide Governance Indicators offered in 2006 by Arndt and Oman in their (also recommended) Uses and Abuses of Governance Indicators. Ouch.
Why do these issues of measurement matter?
As Munck succinctly puts it:
Although this measurement movement is resulting in more and better data on politics, the limitations of current knowledge should be acknowledged. Such an acknowledgment is particularly critical because data on politics are increasingly used in the world of politics. NGOs use data for purposes of advocacy; a variety of actors regularly invoke statistical analyses on the causes and consequences of democracy to justify their support of, or opposition to, different policies; and governments, [intergovernmental organizations], and the [multilateral development banks] link data on politics to policy choices and governance-related conditionalities. Moreover, such an acknowledgment is key because information presented in quantitative form is generally accorded a special status. After all, one of the selling points of using data on politics is that they draw on the power of an association with science and hence are treated with considerable deference by public officials and the public. Yet this assumed scientific status verges on being a misrepresentation given the current state of knowledge regarding the measurement of political concepts.
Here’s a real-life example: if you subscribe to Munck’s critique of the Freedom House data and Arndt and Oman’s critique of the WGI, then seven of the seventeen indicators used by the U.S. Millennium Challenge Corporation to make decisions on which countries to give large aid packages to are essentially bunk. This presents a bit of a challenge for an aid organization that defines itself by relying on third-party data for ostensibly objective and apolitical decision-making, noting that, “Before a country can become eligible to receive assistance, MCC looks at their performance on independent and transparent policy indicators.” Houston, we have a problem.
You can grab a copy of Measuring Democracy on Amazon.com.
While you’re shopping, check out Global Integrity’s A Users’ Guide to Measuring Corruption, which looks at issues specific to measuring corruption and governance.
— Nathaniel Heller
UPDATE 18 June 2009 9:00 EST: My use of the term “bunk” was aimed at the misuse of the Freedom House and WGI data, not the actual data themselves.
@ Dale: No, there aren't specific cases of "mis-measured" countries laid out in the book. The critique is more methodological in nature.
Got any good examples to cite, to illustrate the difference? E.g., vast differences in country rankings? Just curious, haven't had time to take a look at it yet.