Process

Gathering and analysing outcomes

Gathering and analysing outcomes is how we describe evaluation. It involves observing and gathering evidence over the course of an initiative, and using that evidence to evaluate the ‘success’ or impact of the initiative.

Why do we need to do it? It is generally important for initiatives to demonstrate that outcomes have been achieved, or to show that it is plausible for outcomes to be achieved in the future through a given course of action. Data collection and evidence analysis – where they support your approach and the changes you make – create a sense of legitimacy and validity for an initiative. This is very important when pitching to funding bodies, but it’s also important for the people you are working with and for. It’s essential to evaluate whether you are achieving what you set out to do, and to understand what is working and what is not. By collecting qualitative data, such as stories of impact from people (in addition to quantitative data), you can better understand what participants perceive to be significant about the work you are undertaking, and compare that with your own perceptions.

“Where did they land at the end of it all? Did it matter? Did it change anything? Was it just fun? There’s a lot to be learned from a reality check [evaluation] for all of us.” – Shane Phillips, Lake Cargelligo Community Connector

It is important to note that there is a lot of information out there about this topic. Whilst we provide an overview here, we want to emphasise that evaluation is an entire field and profession, with its own significant body of literature, practitioners around the world, and a wide range of tools and techniques that need to be matched appropriately to the contexts in which they are used. We use the term ‘evaluation’ broadly, to mean the process of monitoring, evaluation and learning (MEL) specific to a project, initiative or program. Experts can be hired to develop a MEL framework, approach and plan, as well as to conduct evaluation on your behalf.

We have included evaluation in this curriculum because it’s important that you and your stakeholders understand the practice and can use evaluation to strengthen your innovation work. There are some evaluation approaches that can be used by non-expert practitioners, but we caution against using them without the instruction, guidance and neutrality that an expert third party would bring. Our goal in this module is to give you an introduction, some tools that you can give a go, and links to further information if you want to develop your skills further.

The ideas and stories captured here were shared by members of the Regional Innovators Network (RIN) during a peer learning session on 11 December 2018.

Quick Summary

What does it mean?

  •     What is the value of an evaluation process?
  •     What is evidence?
  •     Ongoing evaluation, not just waiting for ‘the end’
  •     One form of evaluation used in social innovation: Developmental Evaluation

How to do it

  •     Starting with the end in mind: Naming outcomes
  •     STEP 1. Using Theory of Change to identify desired outcomes
  •     STEP 2. Introduction to the MEL evaluation framework
  •     STEP 3. Trying out the MEL framework for yourself
  •     Techniques for capturing the evidence you’ll need in your evaluation
  •     Ethics and Consent

What hinders

  •     Not being flexible
  •     Being too flexible
  •     Not reaching out for support

What does it mean?

 What is the value of an evaluation process?

 Gathering evidence and undertaking evaluation takes resources, time and some planning. However, it can help us reach the following goals:

  •     Establish a baseline – understand what the current state is in the community
  •     Illuminate areas of potential – see how ideas go by taking prototypes out into the community and getting feedback from people about where there is room for growth and change
  •     Build an argument for how to move forward – through the monitoring, evaluation and learning process, you can build an argument for what actions should be taken, what actions should not be taken and who should take the actions
  •     Obtain buy-in and investment in further stages – when you have gathered the necessary evidence and can show a path for how things could move forward based on that evidence, you are well-equipped to bring other stakeholders along on the journey and secure (further) funding

 What is evidence?

In innovation, evidence is sought to determine a link (or lack thereof) between action and outcome. Evidence is data – facts, information and stories – that demonstrates whether a ‘proposition’ is true and valid. It is the trail of data over the course of an initiative that establishes links between the actions that were taken and the outcomes that have occurred. In social change, causality can be difficult to determine and outcomes can take a long time to achieve, but innovations (and any change, really) haven’t done their job if they haven’t created some kind of shift toward the outcomes we seek.

In gathering evidence, questions of authenticity, bias, repeatability, substantiveness and correlation to the claim must be addressed: evidence should be authentic, unbiased, repeatable, substantive and clearly linked to the claim it supports.

 Ongoing evaluation, not just waiting for ‘the end’

Within the work of social innovation, evaluation can be done:

  •     In the early stages of innovation (see Developmental Evaluation below)
  •     Throughout the entire process of the intervention/initiative
  •     At completion

 “This approach to evaluation is interesting because traditionally you reach the evaluation point and that suggests you’ve reached the end of the work. But this approach shows how evaluation should be constantly feeding into the design process ongoing.” – Shane Phillips, Lake Cargelligo Community Connector

 One form of evaluation used in social innovation: Developmental Evaluation

One challenge of producing evidence for innovation is that there is never evidence for an idea that hasn’t been tried yet – an innovation is something new, after all!

Among the many approaches to evaluation, you may be familiar with the most common and widely accepted types within western approaches to research – randomised controlled trials, for instance. You may think of evaluation as something similar to a conclusion, a review or a reflection at the completion of a project. But evaluation for innovation is typically not feasible through these approaches, so a form of evaluation called ‘Developmental Evaluation’ is often used instead. The focus of Developmental Evaluation is to ‘learn what works’ rather than to ‘demonstrate that something you’ve already created works’. Developmental Evaluation is about listening and learning – it’s a continuous development loop.

 Here is an explanation of Developmental Evaluation taken from one of our resources, the website www.betterevaluation.org: 

Developmental Evaluation (DE) is an evaluation approach that can assist social innovators to develop social change initiatives in complex or uncertain environments. DE originators liken their approach to the role of research & development in the private sector product development process because it facilitates real-time, or close to real-time, feedback to program staff thus facilitating a continuous development loop.

Michael Quinn Patton is careful to describe this approach as one choice that is responsive to context. This approach is not intended as the solution to every situation.

  •       Developmental Evaluation is particularly suited to innovation, radical program re-design, replication, complex issues and crises
  •       In these situations, DE can help by framing concepts, testing quick iterations, tracking developments and surfacing issues

 Link: https://www.betterevaluation.org/en/plan/approach/developmental_evaluation

 

How to do it

 In this section of the module, we step back and provide a few foundational steps you need to jump in and have a go. 

 Starting with the end in mind: Naming outcomes

Whenever we think about undertaking an evaluation of our work, we first need to understand what evidence to look for. The evidence you’ll need is determined by what you are trying to measure. What would you like to understand or prove? For instance, are you measuring social impact, process improvement or personal development?

The best way to determine your metrics is to start with the end in mind – by naming your desired outcomes. What are the long-term goals, strategic priorities and aspirational changes your initiative hopes to make in the world? These are your desired outcomes.

In STEP 1, you will learn how to clearly articulate your desired outcomes using the Theory of Change as a tool. We will also discuss the difference between outcomes and outputs.

 STEP 1. Using Theory of Change to identify desired outcomes

 In this video, you will find an introduction to:

  •     Working with a Theory of Change and why we need to use it
  •     Naming desired outcomes
  •     The difference between an output and an outcome
  •     The links between outputs and outcomes

 

Below is an image of the ‘narrative version of the theory of change’ as discussed in the above video. As the video mentions, this narrative format provides a (relatively) simple way to explain the value of your work to stakeholders, collaborators and investors.

This next video steps through an example of a detailed Theory of Change for TACSI’s Family by Family program. This complex ‘program logic’ version is useful for explaining to funders exactly what you’ll be doing and teasing out the layers of action, change and impact. This amount of detail is often a requirement for grant applications.

 

Here is an image of the ‘Program logic version of the theory of change’ as discussed in the last video. It is meant to be read bottom to top and roughly in columns left to right.

  

What is the difference between outputs and outcomes? Outputs are the deliverables of a project – for example, a new youth program. An output is never a sure sign that the outcome has been achieved: outputs are a means to an end, not the end itself. Depending on the scale of a project, there may be several outputs stacked together to achieve an overall outcome.

NOTE:  For more detailed information on Theory of Change, refer to the Tools section of the RIN platform or follow this link:  https://regionalinnovation.com.au/network/tools/tool-2

 

STEP 2. Introduction to the MEL evaluation framework

When you establish desired outcomes using the Theory of Change, you also map out how you might get to those outcomes and the assumptions you are holding about the initiative, your approach and the outcomes. With the Theory of Change in hand, you are ready to develop key evaluation questions and then create a framework for how you might monitor and evaluate what happens. (Note: if you don’t have a Theory of Change, you might have a design process to which you could attach your evaluation framework instead.)

Key evaluation questions establish what you want to know and learn through the evaluation process. They typically cover the extent to which outcomes have been achieved, how and why outcomes are being achieved, the significance of impacts, the effectiveness of the process, and the capability built along the way.

The following video gives an introduction to the MEL evaluation framework.

 MEL – monitoring, evaluation and learning – involves:

  •     Monitoring – based on the key evaluation questions, tracking and capturing data that relates to changes over time on an ongoing basis or at key agreed intervals
  •     Evaluation – assessing the extent to which change has taken place (if any). Reviewing what has happened against assumptions. Determining the (potential) effectiveness of the interventions that have been put in place.
  •     Learning – using the evaluation process to inform and improve results. Incorporating what has been understood through the evaluation and applying those learnings into the initiative’s approach and activities.

The ‘Learning’ component is crucial for undertaking evaluation within the work of innovation, because this is what feeds your design process and ensures you continue iterating. This is also referred to as ‘evidence-based learning’.

 

STEP 3. Trying out the MEL framework for yourself

The MEL framework will be the scaffolding for your evaluation; it will be the guard rails that keep your questions, time and efforts relevant, on track and on task.

Steps to setting up and implementing a MEL plan (a simple structured sketch follows these steps):

1. Get ready for evaluation by answering the following questions:

  • Why are we evaluating this? Purpose, objective and scope of evaluation
  • Who is the evaluation for? Audience
  • What do you want to learn? Decide on your evaluation ‘threads’ or ‘tracks’ based on your key evaluation questions (e.g. one track for team capability and one for participant impact) and write down clear evaluation questions and sub-questions for each track. (See page 3 of the RIN Evaluation Framework document for an example of Key Evaluation Questions)
  • What approach will you take? What methods will you use? Choose appropriate evaluation methods and principles. For example, using a ‘Most Significant Change’ tool, ‘Impact Stories’ tool, surveys, a reflection circle, other data. Guidance from a professional can be critical here.
  • What process and timeline will you follow? Will you set up a timeline with milestones now to keep you on track? What resources do you need?

2. Develop and run a test – innovation often involves running tests of ideas or early prototypes (refer to ‘Making and Learning’), either as part of your MEL plan or as a subset of it.

3. Monitor – be methodical in how you capture information. Be as clean, clear and well-documented as possible. Be accurate.

4. Evaluate and learn – once you have run a test (or tests), you will need to evaluate the information you have gathered and apply the learnings. This includes:

    1. Analysis – Determine the meaning of the information you have gathered
    2. Synthesis – Explore the opportunities that these insights reveal – how might you improve what you’re doing? Do you need to pivot? Do new ideas emerge out of the learning process?
    3. Iteration – Adapt and evolve your problem framing, ideas and prototypes until you find solutions that effectively address the challenge or opportunity
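
If it helps to keep your plan in a structured, digital form, below is a minimal sketch (in Python, purely for illustration) of how the elements from step 1 – purpose, audience, tracks with key evaluation questions, methods and milestones – might be recorded. Every name, question and milestone in it is a hypothetical example, not part of the RIN framework.

# A minimal, illustrative sketch of a MEL plan's core elements.
# Every name, question and milestone below is a hypothetical example.
mel_plan = {
    "purpose": "Understand whether the youth program builds local capability",
    "audience": ["funders", "community steering group"],
    "tracks": [
        {
            "name": "team capability",
            "key_evaluation_question": "What capability has the team built, and how?",
            "methods": ["After Action Review", "reflection workshops"],
        },
        {
            "name": "participant impact",
            "key_evaluation_question": "What changes (if any) do participants report?",
            "methods": ["impact log", "Most Significant Change"],
        },
    ],
    "milestones": ["end of prototype round 1", "mid-year review", "final evaluation"],
}

# At each milestone, review the data gathered for each track against its
# key evaluation question, and record what you will change as a result.
for track in mel_plan["tracks"]:
    print(track["name"], "->", track["key_evaluation_question"])

A shared document or spreadsheet works just as well; what matters is that the purpose, audience, tracks, questions, methods and milestones are written down and agreed before you start.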

Techniques for capturing the evidence you’ll need in your evaluation

You should aim to make the physical task of gathering evidence as easy and quick as possible, without cutting corners and losing the richness of data you need. Recording ‘moments of impact’ – brief feedback that has been shared, comments from participants, etc. – is a useful habit to develop, as it builds your evidence over time. Evaluation, and evaluative thinking, is a key part of the innovation process.

 There are a number of methods that can be used in the early stages of innovation, for instance:

  •     Learning briefs (to capture developmental learning, pivots and insights)
  •     ORID framework (for reflection)
  •     After Action Review (for reflection)
  •     Reflection workshops (for data collection, co-analysis, and reflection)
  •     The ‘What Else’ Test (for light contribution analysis)
  •     Impact log (for gathering impact stories)
  •     Most Significant Change

 Below are two methods of gathering evidence favoured by the TACSI team.

 

#1 – TACSI’s favourite quick method: Impact Log (Impact Stories)

Impact stories are typically quotes and anecdotes that speak to someone’s experience of change. They occur spontaneously, and it’s really helpful to capture them. Sometimes they come by email; sometimes they occur in an off-hand comment or at an event. Keeping an actual impact log of stories – physical or digital – can be useful for noticing what matters to people and demonstrating what is valued. Include the quote, the date and a brief description of the context.
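
If you keep the log digitally, its structure can stay very simple. Here is a minimal sketch in Python of one way to append entries to a spreadsheet-style (CSV) log – the field names and the example entry are our own illustrative assumptions, not a prescribed format:

import csv
from datetime import date
from pathlib import Path

# Illustrative field names - adapt to whatever your team agrees to capture.
FIELDS = ["date", "quote", "context", "source"]
LOG_FILE = Path("impact_log.csv")

def log_impact(quote: str, context: str, source: str) -> None:
    """Append one impact story to the log, creating the file on first use."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "quote": quote,
            "context": context,
            "source": source,  # e.g. 'email', 'workshop', 'off-hand comment'
        })

# Hypothetical example entry:
log_impact(
    quote="I never thought I'd see the young people run this themselves.",
    context="Comment after the second community workshop",
    source="workshop",
)

A shared spreadsheet achieves the same thing; what matters is capturing the quote, the date and the context consistently.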

Impact stories can also be sought out and captured via case studies. A case study is longer – typically 1–5 pages – and should include a description of the context and challenge, the process used and action(s) taken, and the result. Evidence that backs up the claim of impact should be included, such as quotes, facts and data points.

#2 – TACSI’s favourite qualitative method: Most Significant Change (MSC)

Most Significant Change is a qualitative and participatory technique for gathering evidence for monitoring and evaluation.

The process involves:

  •     Collecting stories of what people consider to be the most significant change resulting from an action, program, project, etc.
  •     Panel(s) of stakeholders systematically reviewing the stories and selecting, based on agreed criteria, what they feel is the most significant change
  •     Reading the stories aloud and having in-depth conversations about the value of the reported changes as part of the review
  •     Feeding back and communicating the results

MSC should not be used as a standalone method – it should be used in combination with other methods. It’s a great tool for gathering in-depth insights into particular moments of impact but other tools are needed to gather evidence of the breadth of impact.

Click here for a link to the ‘Most Significant Change’ canvas created by TACSI for the RIN.  

 

Ethics and Consent

When you develop tests, be sure you have consent to collect the data you want to collect – and to use it in the way you will need to. See the resources linked below for more information about ethical requirements when gathering information from people.

 

What hinders?

Not being flexible

A design process requires flexibility. Evaluation in the world of innovation can be a bit different to evaluation in other disciplines. In some instances, it may be appropriate to keep the evaluation questions the same throughout a project as a control. However, in the work of social innovation it is common for the evaluation questions to shift and change throughout the process, usually at key milestones. Your team will need to ask: When we set up our key evaluation questions, were we asking the right questions? Do the questions need to change for the next round of evaluation?

Being too flexible

 Much of the work of social innovation is about training ourselves and our collaborators to be flexible, open to change, curious and able to sit with ambiguity. So, it seems strange to tell you to put the brakes on! But it’s important to take a considered and disciplined approach with evaluation in the interest of generating evidence which is authentic, unbiased and reliable. Set milestones within the project which represent a point to pause and pivot your evaluation framework if needed. Pivoting outside of these milestones could compromise the data you have set out to collect.

“It’s a personal challenge to the way that we do things, to make an effort to be really considered. And one of the things you taught me is that although there is flexibility in planning, and the need to be flexible, there is also the need to be thoughtful and intentional, right to the very end.” – Shane Phillips, Lake Cargelligo Community Connector

 Not reaching out for support

If you recognise that your project would benefit from an evaluation and a thorough ‘proof of concept’ – with evidence to show stakeholders the value of your work – and you don’t have the capacity to undertake the evaluation yourself, ask for help. There are consultancies (like TACSI’s partners Clear Horizon) who can come on board with a fresh and objective view to gather and analyse the outcomes of your work and provide a thorough evaluation. You can also ask them to teach you some new skills for next time.

 

Resources:

Gathering & Analysing Outcomes – TACSI presentation

Resources for Developmental Evaluation

Resources for creating a Monitoring, Evaluation and Learning (MEL) plan

Resources for evaluation methods

Resources for analysing and synthesising insights

Resources for ethics and consent
