What do Stalinist-era accounting practices in Hungary, financial reports, and post-WWII U.S. property tax assessments have in common with measuring governance and corruption? More than we might think, if we take seriously the argument advanced by a former professor of mine, Martha Lampland.
In a working paper (PDF) published recently, Lampland (a historical anthropologist) suggests that numbers are accorded many meanings that vary greatly depending on how they are used in different science and policy domains.
In particular, she distinguishes between provisional numbers and false numbers. Provisional numbers are just that: temporary metrics that can be useful tools for strategizing and planning. They serve as heuristic techniques and important benchmarks, even though they may appear to be stable entities in particular situations. False numbers subordinate relative accuracy to the accomplishment of other tasks and objectives. Making this analytic distinction allows her to show that numbers and quantitative measurement tools do not always have to be scientifically precise in order for them to be effective and practical.
Drawing on the work of other academics, Lampland presents several vivid historical and contemporary examples to substantiate her claim about provisional numbers. Quarterly financial reports, for instance, frequently rely on temporary metrics to forecast potential market trends.
Another setting for the efficacious role of temporary numbers is the U.S. tax code. Isaac Martin has recently shown that property tax assessments were calibrated not to market prices but to local jurisdictions and their voting and political alliances. The decoupling of market real estate values from taxation rates thus served the interests of both county tax authorities and taxpayers. The tax revolts of the early 1970s were a reaction against the court-ordered movement to standardize tax rates.
Lampland’s own study of agrarian work science in prewar capitalist and socialist Hungary lends rich empirical support to her notion of false numbers. She shows how party officials during the Stalinist era strove to modernize the economy and society by increasing labor productivity, but were hampered by farmers’ indifference to the formal, written bookkeeping and accounting procedures that the socialist bureaucracy had devised to document labor activities.
To remedy this deficiency, the Stalinist state embarked on a systematic campaign to teach bookkeeping in agricultural communities, held sporadic labor competitions, and thrust reluctant industrial workers upon equally reluctant cooperative farmers so that the former could review and correct the accounting practices of the “benighted” villager. In all of this, the party apparatus tolerated numerical discrepancies in written records early on because they were useful benchmarks of progress, learning, and the accumulation of knowledge, rather than misleading signs that were “bad” and “wrong.” (Over time, however, the level of tolerance diminished as the state increasingly punished those implicated in accounting misrepresentation.)
I recommend reading this piece, then, because it prods those of us in the governance and corruption indices business to take a step back and reflect more critically on how people actually construct and make use of numbers in context. Lampland’s material sheds light on why and how numerical indicators are not always fixed, stable signs that make claims to verisimilitude and definitive status. Rather, quantifiable measures can in certain situations be estimates that — precisely because of their provisional and malleable character — provide useful entry points to making more informed and discrete judgments, strategizing, and policy making.
“Misreading provisional numbers obscures the social contexts in which they can be powerful tools and meaningful signs, despite their ephemeral character. They constitute forecasting benchmarks for investing in financial markets. They have formed the basis of tax schedules, while strengthening the ability of government officials to respond promptly to the demands of constituents. Provisional numbers enable scientists to think creatively about a problem: model its possible contours, consider various configurations, prompt new answers. In each instance, crucial tasks are achieved; significant investment in form has occurred. And in each instance, the actions devoted to formalizing practices are necessarily recursive, that is, they must be repeated over and over again to achieve the task at hand” [quoting Lampland].
Take the Integrity Scorecard of our Global Integrity Report, for instance, as viewed from the perspective of a user. The Report assesses the existence and effectiveness of public integrity laws, institutions, and accountability mechanisms as well as citizen access to such mechanisms — the ingredients that make up the foundation of a country’s good governance and anti-corruption architecture — based on a mix of qualitative and quantitative approaches.
The lead researcher assigns aggregate numerical scores to six major categories and twenty-three sub-categories that are generated from more than 300 indicators. In turn, these scores are used to produce an overall country score. (For more details on our methodology, see our Methodology White Paper.) These numbers can provide the user with a snapshot of how a particular agency or legal regulation functions and performs in a particular country.
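To make that roll-up concrete, here is a minimal sketch of how nested score aggregation of this kind might look in Python. The equal-weight averaging, the category names, and the 0–100 scale are all illustrative assumptions on my part, not the Report’s actual rules; the real weighting is defined in the Methodology White Paper.

```python
# Hypothetical sketch: indicator scores average into sub-category scores,
# sub-category scores into category scores, and those into an overall
# country score. Equal weights are assumed purely for illustration.
from statistics import mean

def roll_up(categories: dict[str, dict[str, list[float]]]) -> float:
    """categories maps category -> sub-category -> indicator scores (0-100)."""
    category_scores = []
    for subcats in categories.values():
        subcat_scores = [mean(scores) for scores in subcats.values()]
        category_scores.append(mean(subcat_scores))
    return mean(category_scores)

# Invented sample data, far smaller than the Report's 300+ indicators.
sample = {
    "Civil Society": {"Media freedom": [80, 90], "NGO environment": [70]},
    "Oversight": {"Ombudsman": [60, 65], "Audit institution": [75, 85]},
}
print(round(roll_up(sample), 1))  # prints 74.4
```

The point of the sketch is only that the overall number is a summary several layers removed from the underlying indicators, which is exactly why it works better as a provisional entry point than as a definitive verdict.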
But the numbers alone are likely to be more limited in their utility if, say, the user wants detailed information to identify specific areas of reform for planning purposes. Since the Integrity Scorecard complements quantitative figures with qualitative narratives, users are provided with a broader contextual analysis that can enable such data to be more “actionable.” The numerical scores found in the Global Integrity Report — or other indices similar to it — can thus be viewed as provisional numbers (in Lampland’s sense) if they are used as an entry point or benchmark for further analysis and planning.
The bigger point I wish to make is that we shouldn’t fetishize and get hung up on numbers per se when designing or using measurement instruments. This in no way implies that we should sacrifice rigor and accuracy when crafting and interpreting numbers. It is, however, a reminder that “the science of measuring corruption is more an art form than a precisely defined empirical process” (see our book A Users’ Guide to Measuring Corruption).
— Raymond June
via facebook: "Terrific read: a reminder that most numbers are a means to an end, not an end in itself. Numbers as constructs in a functional context and not some sort of platonic forms."
Not too cynical. These data are abstractions, so when we build labels and rankings, there's a degree of arbitrary decision making that goes into them. It's easier to make headlines (the real purpose of most rankings) if the news is bad.
Our index produces aggregate scores, with a nice normal distribution. We label the mode of the distribution "weak", with some of the better performers in the "moderate" tier. A handful reach "strong" and none have earned "very strong". Which, for advocacy purposes, is useful. Does that make it exaggeration? I'd say no, but I recognize that we're on unstable footing.
Nice summary of the article. For a similar argument about the strategic use of numbers in the West Africa context, see Mariane Ferme. "The Violence of Numbers: Consensus, Competition, and the Negotiation of Disputes in Sierra Leone." Cahiers d'Etudes Africaines XXXVII-2-4 (1998): 150.
This makes me wonder about other strategic uses of corruption indices. Mightn't anti-corruption advocates gain from exaggerating the extent of corruption in their societies? Perhaps the most compelling story arc would be high corruption rates, with incremental positive change each year to show that they are making an impact. Too cynical?