We have been reviewing idea submissions for many weeks now and are nearly ready to take a shortlist to our final Lightning Round. Throughout the evaluation process, we made some unexpected observations. One stood out: some of the shockingly different ideas we had called for skipped a seemingly crucial step.
Whether it’s journalists tracking money flows in politics, mappers visualizing education disparities across communities, or non-profits publishing their aid-funded projects, those working to illuminate public information, engage citizens in decision-making, and strengthen mechanisms for holding those responsible to account are using a myriad of tools.
Transparency, accountability, and participation, however, are terms vague enough to be invoked without targeting any tangible challenge.
With these difficult-to-concretize goals, the T&A community shows signs of falling into a trap: generating tools while leaving open the question of what they are for. Are they working to address problems, or are they in search of them?
At Global Integrity, we came face-to-face with this issue while vetting ideas submitted to our innovation fund, TESTING 1 2 3.
A select few tools grabbed our attention right off the bat, owing to their innovativeness – unique designs that excited us because we had never heard of them before. It was only when we reviewed them more carefully that we noticed a shared failure to “address a specific, identifiable challenge,” one of our key criteria for selection.
Flashy ideas, such as transforming the functionality of a common tech tool or finding a niche application for an established method, run the risk of leapfrogging the goal and landing straight at a solution.
The question is, does this matter? Are we at risk of producing tools that yield disappointing results? Or can there be unexpected benefits to them – for example, can a tool conceived to help X end up fixing Y?
Send us your examples to help answer these questions. We want to figure out the best way to approach this issue and steer research and innovation towards the challenges we seek to address.
–Photo Credit: Beshef/Flickr
Dear Global Integrity Staff,
As far as I understand, the #testing123 project has faced the problem of producing tools with disappointing results from its very conception. But in terms of further investigating what works and what doesn’t, almost any resulting outcome is a contribution to the previous state of the art. In other words, and if I understood the question correctly, there is no disappointing result if we (you) are able, to a certain extent, to keep track of what happened; perhaps by asking selected projects to provide the information from which to start a case study analysis, or something similar.
An example of work by the IDRC, which I think faced a more or less similar dilemma, is the one I post here. To some extent, this book attempts, and successfully achieves, a compilation of what works and what does not, after huge amounts of money were invested in development projects. They gather these experiences and provide useful, theoretically driven cases that I am sure have served to enlighten not only other organizations but also students around the world. (The e-book is free and available electronically.)
KNOWLEDGE TO POLICY: Making the Most of Development Research
http://www.idrc.ca/EN/Resources/Publications/Pages/IDRCBookDetails.aspx?PublicationID=70
See this post at: http://controlaelgasto.blogspot.mx/2013/02/innovation-fund-tools-in-search-of.html
I think the governance community should not be afraid of trying and failing. There is so much we can learn from our failures, provided (and this, for me, is the important part) we do not sweep them under the carpet but take the time to reflect on them. In the worst case scenario, thanks to someone else trying and not succeeding, the rest of us would know that under given circumstances certain approaches do not work. This would save us resources (money, credibility, social capital, etc.). In the best case scenario, there are unintended consequences that benefit other initiatives. For example, thanks to an organization trying an approach to gather information to improve participatory budget processes, the rest of us could learn that there are tools we can use to inform programs intended to increase parents’ participation in primary schools in rural areas.
This does not mean that those of us who work in the field of informed advocacy should stop being critical about our thinking process. When developing tools and approaches to gather or produce information, it is important to have a clearly defined objective: it is not about gathering information for the sake of gathering it. Nor is it only about trial and error: we need to take some time to craft a strategy that, given our resource constraints, puts us at a point where we can design an instrument that helps us get closer to whatever it is we want to change.
All this to say that I look forward to reading about the ideas supported by the TESTING 1 2 3 Fund.