Michael Moses – Director, Programs & Advocacy
August 9, 2017
A couple of weeks ago, Global Integrity gathered up a bunch of markers and reams of flipchart paper, and headed down to a conference room in the basement of the OpenGov Hub for our second quarterly review of 2017. Our quarterly reviews are an opportunity to discuss how things are going in our different programs, to reflect together on emerging lessons and challenges, and to strategize on how we can strengthen our effectiveness in the months to come. They’re one of the many ways in which we’re implementing our learning plan, and putting our commitment to data-driven learning and adaptation into practice.
To kick things off, each of our program leads gave a quick presentation laying out: a challenge they’ve faced in the past few months in implementing their program and driving progress toward its intended outcomes; how they’ve adapted to that challenge; and the results of that adaptation. After each presentation, we split into pairs and spent a few minutes reflecting on what had been shared, and thinking through other ways we might tackle the challenge in question.
After doing this for each program, we brought it all back together, working as a group to identify cross-programmatic lessons, and figure out how we might apply those lessons to our work moving forward.
So, what did we cover? Lots, but for the sake of brevity, we’ll confine ourselves to two programs in particular:
First, in our Integrity and Anti-Corruption program, we’re working with local research teams to produce innovative, detailed data on governance and social issues in Africa. Our Africa Integrity Indicators (AII) are designed to support reformers at the country level in identifying and acting on gaps between law and practice (supporting the use of data is integral to what we do at Global Integrity). But despite robust engagement from a number of users (including governments and the Mo Ibrahim Foundation), since the most recent round of data went live in April, we’ve not seen as much uptake and action from country-level actors as we’d hoped.
Second, in our leadership of the OpenGov Hub, we’re focused on building a vibrant community of open government reformers. But doing so sustainably, without over-stretching the two staff members who support the Hub, while also encouraging other Hub members to step up and lead the Hub Working Groups and other Hub activities, remains a persistent challenge.
In our small group discussions around these challenges — which mixed individuals from different teams and program areas across Global Integrity — and in our overall group reflection, we identified a key similarity between them: both are, at their core, issues of incentives. Potential users of AII data have their own goals, and may not immediately see a link between the detailed governance data we produce and the issues they care about. Similarly, Hub members might not see how taking on additional leadership responsibilities lines up with their own incentives and helps them accomplish their aims.
The insight that incentives matter is not new, by any stretch – we’ve been explicit about the importance of engaging with incentives for a while now, as have many others working in governance and development. But our experiences with AII and the Hub clearly illustrate how incentives matter in our own work, and lay bare how crucial it is to better understand the needs of users and partners, and to explore how — given our own capacity and incentives — we can help them meet those needs.
So where do we go from here? How do we apply this insight in practice? It starts with doing a better job of listening, figuring out where our incentives align with those of potential partners, and clearly communicating how we can help them achieve their goals more effectively.
For the Africa Integrity Indicators project, this means we’ll be deepening our engagement with potential users — including through offline (i.e., non-email) means — to learn more about their interests, and to explore whether and how the data we generate on governance in Africa can help advance those interests. At the Hub, we’ll be digging into the results of our recent survey of all Hub members, and testing a few different approaches to more effectively incentivizing and sustaining distributed leadership across the Hub community.
We’ll report back in a few months on how we’re doing, and what we’re learning as we continue to strengthen our effectiveness and impact. In the meantime, we’d love to hear from you — how do you support cross-programmatic learning, reflection, and adaptation in your organization? What challenges do you face in doing so? Let us know @GlobalIntegrity.