This was the question that we asked six international civil society organisations at the very beginning of our project. What seemed like an easy – or at least straightforward – question proved to trigger a whole set of further questions that reached far beyond the teams responsible for monitoring, evaluation, accountability and learning (MEAL).
This blog article summarises some of the learnings from six pilot studies on people-powered decision making that Accountable Now and CPC Analytics have conducted with 350, BRAC, CIVICUS, Greenpeace, Restless Development and TECHO over the past eight months.
Theory: Connecting dynamic accountability, feedback, and sustainable impact
Before describing some of the learnings from the pilot projects, we must ask how accountability is linked to data from stakeholders (in particular, feedback data). Despite its recognised importance, accountability has often remained a mere reporting task: time-intensive and rarely read. Data has not played a major role, and certainly not data from day-to-day operations.
However, this practice is about to change. The 27 members of Accountable Now have agreed on 12 commitments that capture a globally shared, dynamic understanding of accountability. At the core of this understanding lies a definition of accountability as “a constant dialogue with our key stakeholders about what they want, what they offer and how we can work together effectively.” But dialogue is not the end of the process. The collected feedback needs to find its way into decisions at all levels of the organisations, and the actions resulting from those decisions will again be subject to feedback.
Understood this way, accountability becomes inextricably linked to measuring the impact of an organisation's work. In practice, however, we very often stop at counting inputs or showing outputs and outcomes. Measuring impact remains a key challenge for most CSOs – particularly when we speak about having a sustainable impact. A process as described above – what we called “people-powered decision making” – will not only influence how organisations work with their stakeholders, but also deepen the sense of ownership and partnership stakeholders feel.
So much for the theory. But how can this notion of accountability be translated into actual projects? One way forward is illustrated in the figure below, where the implementation process is described along four phases: open up, understand, interpret, and evolve. Each phase involves interaction with the relevant stakeholder or stakeholder group. Importantly, each phase is likely to trigger changes – both within the organisation itself and among stakeholders. But the notion of change goes beyond these internal and behavioural changes: it also captures the potential of each phase to increase the impact of CSO activities and the accountability of CSOs towards their stakeholders. The following points describe those phases in brief:
Open up: The initial phase is about reaching out to stakeholders and identifying both room for improvement in a given process and the stakeholders’ willingness to get involved. In the case of strategic partner organisations, this would most likely mean discussing gaps or opportunities for improvement in the current collaboration. For Restless Development, for example, the question was how to integrate partner organisations more regularly and effectively into their Youth Power campaigns – given the diverse group of partners and the settings in which those partners operate. If the stakeholders to be addressed are direct beneficiaries, these interactions would be less structured. Nevertheless, the goal from the CSO’s perspective is clear: open up your processes to new ways and voices.
- Change: The decision to involve stakeholder feedback in a decision-making process starts a thorough thought process on both sides. The CSO must focus on a project, a stakeholder group or a challenge that is particularly important to the organisation. For stakeholders, on the other hand, the mere fact that they are asked to contribute to a process – and that their input is recognised as (potentially) useful – raises awareness of the project.
Understand: The second phase is a moment of focus for the CSO: What do we already know about our stakeholders? What kind of data will be able to answer our questions? How do we collect the data – surveys, machine-generated data, secondary data? Interaction with the stakeholder group is vital for this last question in particular. The choices on depth, frequency, and quantity of data, as well as the tools applied for data collection, depend on the willingness of stakeholders to get involved and, sometimes, on the available technology.
- Change: Identifying useful data sources and transforming them into indicators (operationalisation) implies prioritising some questions over others. The CSO’s team will have to decide on concrete indicators that are accepted both internally and externally. For the stakeholders, this phase is an opportunity to voice their opinions and raise issues that would otherwise have been lost. TECHO has been an illuminating case in point: while designing the feedback process for TECHO volunteers, an internal debate revealed that other national chapters had similar intentions, and the feedback tools were synchronised accordingly. In that process, different interests need to be considered to make the analysis meaningful for different teams and inclusive for the stakeholders.
Interpret: Once the data is collected and analysed, sharing the results with decision makers in the organisation and with the stakeholder group is the next crucial phase. Data-backed indicators represent a powerful argument in discussions, and they make processes more transparent and measurable. Decision makers within CSOs will have to coordinate the discussion and eventually prioritise one issue over another. The work with BRAC Dairy in Bangladesh illustrated this: requirements voiced by smallholder dairy farmers through the feedback ranged in complexity from easy to implement (e.g. the opening hours of milk collection centres) to more fundamental (e.g. an increase in milk prices). Strong interaction with stakeholders is therefore needed in this interpretation phase – particularly when those stakeholders are strategic partner organisations.
- Change: For stakeholders, receiving the overall results of the data collection is an important step towards feeling involved. It allows them to place their own opinion within a wider picture. What was thought to be a priority issue might turn out to be relatively minor when set against the entirety of issues raised; conversely, a new topic might arise that was not anticipated. Organisations, on the other hand, gain tools at this stage that help them steer operational processes.
Evolve: This phase is about closing the feedback loop and taking action. Ideally, the interpretation phase will have delivered a clear way forward, backed not only by data but by a joint decision of different teams within the CSO. The resulting action points might – depending on the initial challenge selected – range from small improvements in how a service is delivered to the restructuring of day-to-day operations in an entire project. In any case, these decisions and actions need to be visible to stakeholders. Whether this takes the form of a one-way message or an involved debate depends on the circumstances, but without sharing the results of the feedback process, many of its positive effects will most likely be lost.
- Change: Only at this step does people-powered decision making contribute to accountability. If stakeholders can see that their feedback leads to change, positive effects emerge from the process: ideally, the process becomes more efficient, more effective, and/or stakeholders feel more ownership.
It is fundamental to recognise that changes happen throughout the entire process and affect both the stakeholders and the CSOs. Those changes can also raise expectations – a fact that needs to be factored into the planning of the entire process. One of the worst-case scenarios is creating frustration amongst stakeholders through a lack of follow-up.
Of course, those phases are not as neatly sequential as the diagram suggests. In practice, there will be overlaps as well as an element of trial and error – it might well be, for example, that data collection reveals a weakness in the questionnaire. In our pilot projects, however, those iterations did not lead to failure, but rather to learning and innovation.
Reality: Managing organisations, technology, and involvement
The process framework above shows the great potential of people-powered decision making. At the same time, it outlines how involved such a process is: it can be time-consuming and iterative, and it can become expensive if data collection is difficult.
When we started the six pilot studies, we were confronted with a wildly diverse set of organisations that had to address different stakeholder groups. Motivations for increasing the involvement of stakeholders through feedback data varied too. The questions ranged from “How can we integrate external stakeholders in campaign design in a data-driven, transparent, and more inclusive way?” (e.g. Greenpeace and CIVICUS) and “How can we ensure we notice the needs of our volunteers in a timely way?” (e.g. TECHO) to “How can we get regular feedback from our stakeholders to improve our services?” (e.g. 350.org, Restless Development, and BRAC Dairy). Naturally, those different “problem statements” involved different degrees of stakeholder involvement, sophistication in data collection, and internal resources.
Interestingly, however, the question that initially seemed most important to the CSOs’ teams was which technology and tools to apply. This might be because we all wish to believe that technology is there to make our lives easier, or because it is the most opaque topic when talking about data. As the projects moved on, however, it became clear that the technical challenges, albeit present, might not be the biggest roadblocks.
What we realised was that there are at least three more categories of challenges – aside from technology – that organisations face when they wish to become more “feedback data-driven”: focus challenges, data challenges, and organisational challenges. Table 1 characterises each of the categories by a set of questions that need to be answered along the process outlined above.
The above list of questions certainly does not cover every question that arises during such a project, but it gives an idea of where challenges can be expected. The relevance of those challenges also changes as a project moves from phase to phase: focus challenges are particularly present during the “open up” and “understand” phases, while in the last phase, where concrete actions are to be taken from the feedback results, organisational challenges become most important.
The degree of stakeholder involvement also influences which of the challenges is more pronounced. Where the stakeholders addressed are beneficiaries, questions about how to implement such a system become more salient.
Learnings: Data, decisions, and people
Over the past eight months, the six international CSOs started a process to integrate feedback data into their decision making – supported by Accountable Now and CPC Analytics. In that process, they have worked their way through different tasks:
- Identify a current challenge of the organisation where data from stakeholders or from interactions with stakeholders are crucial and can lead to operational improvement
- Assess the available knowledge about stakeholders within the organisation that could help improve decision making
- Build on this knowledge to design and execute a data collection process that can inform decision making
- Analyse the feedback data and bring it back to the organisation’s leadership
Knowledge ≠ data: While organisations usually know their stakeholders quite well, data on stakeholders is scarce – none of the organisations was able to identify existing stakeholder data at the beginning of the project. Mere knowledge, however, is not data: while knowledge is mostly “stored” with specific teams within the CSO, data can easily be shared and compared.
Internal buy-in and support: Wherever CSOs pooled skills from different disciplines and areas of responsibility, the project progressed much more quickly. Ideally, people from an operational team and a technology team come together to start the process. Management support proved crucial from the start, too.
Incentives from donors: Dynamic accountability is linked to ensuring sustainable impact, and pressure from donor agencies proved to be an important driver for integrating extensive feedback collection mechanisms.
Known technology > experiments: Given the time pressure during the piloting phase, all of the pilot organisations relied on existing tools to collect, analyse and show data. In some cases, existing software solutions were customised in-house, but the general approach was to keep investments to a minimum or rely on existing internal tools.
Who closes the feedback loop?: It remains a challenging task to communicate the results of the analysis back into the organisation at large. While communication to stakeholders was good overall, following up with specific actions for improvement is more difficult. Data helps organisations improve, become more transparent, and communicate more precisely. Nevertheless, only three of the six pilots had a clear idea of which decisions would be informed by the feedback data currently being collected.
Our pilot projects are currently in their final months. Check back at the end of the year for another blog post with final learnings, and in the meantime check out some of our other blog posts!