By Johannes Tonn — March 8, 2016.
Last week, I participated in the launch of the Governance Data Alliance (GDA), a consortium of organizations joining forces to better understand how the marketplace for governance data works and how the field collectively — data producers, users and funders — might be able to make it work more efficiently. As a founding member of the GDA, Global Integrity is excited about the next steps the alliance will take to broaden its membership, better understand users and their needs, and explore the nuts and bolts of collaborative data collection exercises.
During the launch, in addition to the unveiling of a novel and long-desired dashboard presenting governance data in one place, we explored key findings of the GDA-commissioned report “Governance Data: Who uses it and why?” — written by Sam Custer and her team at AidData. The report resonates well with our own thinking on a number of dimensions (see for example our piece on Measuring Governance: What’s the point?) and asks important questions about the uptake and impact of governance data.
Revisiting our theory of change
For us, the report speaks to the need to revisit the question of why we do assessments, who we think will use them, and how we think they will make a difference. While the report does not go so far as to fundamentally question the validity of the (often implicitly stated) theories of change on which data producers base their assessments, it corroborates our disquiet about the applicability and soundness of some of our assumptions: Does the availability of governance data, generated through externally-driven assessments, translate — through its uptake by local reformers — into evidence-based policy change toward better governance?
There is anecdotal evidence of this proposition holding true: we are regularly approached by governments, civil society, and media to discuss our findings, and there are documented instances of reformers referencing our data either as justification or as a benchmark for new aspirations. Overall, however, the evidence that our data makes a difference is patchy. And for every government engaging with the data, we are keenly aware of the many potential actors in other geographical areas who have not engaged, or who find the data is based on concepts that — according to these actors — are not applicable to their institutional context and everyday realities.
Toward governance assessments 2.0
And indeed, governance assessments are often directed at local actors rather than developed and investigated in concert with them. Instead of defining and eventually measuring context-specific concepts grounded in local applicability, assessments — including our own — often paint an idealistic picture of how a particular best-practice form of a specific institution should look, and prioritize cross-country comparability at the expense of contextual relevance. Comparability, however, risks making the data less useful to country-level reformers, the actors who are actually in a position to drive reform and sustain policy decisions over time.
We believe the field of data producers ought to collectively rethink the benchmark of what constitutes useful governance assessments by making progress toward conceptualizing and fleshing out what we call “Governance Assessments 2.0” — assessments that prioritize context, ownership and best-fit governance approaches over the prevailing orthodoxy of cross-country comparability and presumed best practice.
We do not propose doing away with the concept of comparability entirely, nor do we intend to reduce our commitment to analytical rigor and independently verified information. These principles are — and must be — essential cornerstones of reliable and trustworthy data collection exercises.
However, we think we can make progress toward measuring locally applicable and (very possibly) locally defined, non-trivial guideposts for meaningful reform, without religiously measuring countries against best-practice benchmarks that are sometimes generic and unattainable, and at other times nonsensical. And we are excited to see the World Development Report 2017 concept note foreshadow the need for context-sensitive indicators — focusing on the utility of measuring governance functions rather than forms in an attempt to crystallize the factors that actually matter for development outcomes.
Over the next few months, we will explore the possibility of partnering with AidData and other organizations in the business of doing governance assessments 2.0, working out how such assessments might look in practice and which countries might want to have a first go at a deep dive into context-specific measurement. We will experiment with and iterate on methodological choices by conducting pilot assessments, with the aim of supporting local actors in their pursuit of best-fit institutional reforms. We welcome your comments and thoughts at johannes.tonn@globalintegrity.org.
Images: courtesy of the Governance Data Alliance (top) and Susan Sermoneta (construction), via Flickr.