Monitoring and evaluation for civic tech: Part 2
Merlin Chatwin is Code for Canada’s Monitoring and Evaluation Lead. In this series, he explores how we might reimagine and improve the way we measure the impact of civic tech projects. You can read Part 1 here.
Over the last few months, I’ve had fascinating conversations with academics at restaurants in Toronto and Edmonton, visited and video-conferenced with government folks across Canada working on digital service and public engagement teams, and shared coffee with advocates looking to help civic tech get to the next level in their communities.
As you can imagine, the conversations were far-reaching, but they always came back to the challenges we need to address in creating and sustaining the social change we all desire from civic tech work. As part of my role with Code for Canada, I’ve been having these conversations while simultaneously doing a literature review on monitoring and evaluation in civic tech.
From the beginning, I experienced two distinct challenges:
- Very few people in the civic tech ecosystem are afforded the time and space to specifically focus on monitoring and evaluation; and
- Globally, there is limited literature on what ‘good’ civic tech looks like and how it can be measured.
We’ll be publishing an article later in the year that proposes a way forward for civic tech evaluation in Canada, but in the meantime, I wanted to share some of what I’ve been hearing as a way to catalyze more conversations.
I’ve taken insights from the conversations and early readings on civic tech M&E and synthesized them into some ‘polarity’ tensions. The way I’m thinking about these (aligned with writing on polarity management) is that neither side of a tension is wrong or inherently bad, but as a sector, we need to figure out how to continuously adapt where we sit within it.
Here are some of the polarities, in no particular order:
‘Getting sh*t done’ vs. Monitoring and Evaluation
One of the most common themes I’ve seen, whether in conversation or in the literature, is that monitoring and evaluation work isn’t seen as a priority. It’s either not happening at all, or it’s happening off the side of someone’s desk. People talk about M&E as something that’s “time-consuming” or that takes resources away from efforts to “get shit done.”
When time is dedicated to M&E, it’s often because it’s imposed as a requirement, either by governments or by funders. Unfortunately, this has turned what should be a tool for leadership, learning and improvement into an exercise in checking boxes. As a result, this kind of M&E too often focuses on low-hanging fruit (website hits, clicks, or other digital interactions) and shies away from more substantive evaluation.
Product development vs. relationship evolution
Something the people I spoke with consistently brought up is that technology is a means to an end. Much of what civic tech is trying to achieve through product and platform creation is a fundamental change in the relationship between government and the public. This can look different based on context, but ultimately it’s about harnessing the intelligence of the collective, providing access to important information, and bringing people back into the decision-making processes that impact them.
At the risk of stating the obvious, this is hard to measure. That doesn’t mean we shouldn’t. Monitoring and evaluating subtle changes in behaviour and relationships isn’t as appealing as proving an app saved the world, but it’s this gritty work at the human level that makes the real and sustained difference.
“It’s entirely possible that a failed product can still lead to a positive change in the relationship between government and the public.”
I heard consistently that improvements in the way governments and the public collaborate will ensure technology is applied in ways that actually address complex civic challenges. This doesn’t mean that we don’t evaluate the products, but that we don’t end there. After all, it’s entirely possible that a failed product can still lead to a positive change in the relationship between government and the public.
Stating ambitious goals vs. risk of accountability
Another theme that came up in conversations, time and time again, is that folks working in civic tech and digital government are averse to (or even afraid of) stating ambitious goals. It’s easy to see why. Community organizations and social enterprise start-ups don’t want lofty goals tied to their funding in case they don’t meet them. Public servants face a similar struggle; failure to meet goals can be seen as a failure of public trust, or worse, as a “waste of taxpayer money.”
Ultimately, this is hindering their ability to work in the open, iterate, adapt, and learn from failure. The volume of conversations about ‘learning from failure’ is increasing, but there is still pressure to ‘prove’ success rather than demonstrate learning and growth. When the standard is set that civic tech initiatives must prove they achieved a stated goal, it’s not surprising when governments or community organizations avoid putting ambitious goals on the record.
This is not to say that accountability isn’t important, but it seems that a shift in how we think about success and working in the open is necessary. How can the civic tech ecosystem use M&E to learn, adapt, and grow, while doing a better job of managing expectations along the way?
Contribution causal claims vs. attribution causal claims (We helped move this forward vs. we did this)
A theme of many of the conversations (often because I asked this question directly) was how to manage the need to ‘prove’ an initiative was successful. Governments and funders increasingly favour ‘Impact Evaluation’; they want to prove, through the use of control groups, that a given initiative was the sole cause of a given impact.
“Requiring organizations to ‘prove’ that their intervention is the sole reason for a positive impact on a beneficiary group is counter to the culture of civic tech.”
In my conversations, there were two key issues with this type of evaluation. First, community organizations understand that the complexity inherent in civic tech work limits their ability to make attribution claims (saying we did this without any help). Community organizing, civic participation, and related work are complex and should be done collaboratively, with multiple interventions aimed at bringing about the necessary change. There’s a common refrain in civic tech: “build with, not for.” Requiring organizations to ‘prove’ that their intervention is the sole reason for a positive impact on a beneficiary group is counter to the culture of civic tech.
Second, these types of evaluations are often beyond the human and financial resource capacity of many civic tech organizations. There’s no money for civic tech start-ups to create control groups and conduct experiments or quasi-experiments to prove causation. People are interested in how to conduct rigorous M&E that is contextually appropriate and within the existing resources for civic tech initiatives.
These tensions are a result of people trying to do good work and answer tough questions. Naming and addressing M&E challenges in the field is necessary for civic tech to move to the next level of difference-making.
The current lack of M&E is a sign of sector disempowerment; the civic tech ecosystem is not empowered to design an intervention, implement it, and have it achieve different results than originally intended. Improved M&E is a part of a broader culture change, one that empowers people to be ambitious, do their best work, and learn how to do it better.
As always: if you’re working on similar things, I’d love to connect. Send me a message at merlin@codefor.ca and we can chat!