The Ten Steps    


Ten Steps for Evaluation Success

Early Intervention Foundation – March 2019

 

The EIF “10 steps for evaluation success” framework has been developed to address concerns about whether the services or interventions provided are beneficial for the young people and families who most need them. It recognises that evaluation methods frequently feel daunting, especially if they are unfamiliar or require specialist technical knowledge and expertise. It also recognises that those involved in the delivery of interventions often feel uncertain about how to evaluate them, both in terms of process and content.

 

In 2020 the Youth Endowment Fund provided Values Education for Life with a capacity-building grant so that such an evaluation could be undertaken, with the aim of further developing an intervention programme called Success for All. This programme showed potential, but lacked the level of evaluation required to justify a further grant for substantial analysis and evaluation at a practical level.

 

Values Education for Life decided to utilise the “Ten Steps for Evaluation Success” as a framework for such capacity building, and welcomed Professors Ann Higgins D’Allesandra, Helen Haste and Peter Langdon to work with David Rowse, Chair of Values Education for Life, to provide the technical knowledge and expertise needed to deliver a useful practical and theory-based analysis of the potential of the programme.

 

This group has now been working on the project for the last ten months, and the results are recorded on this website as evidence of its expertise and its analysis of the theory relating to good practice within the first five steps of the EIF evaluation framework.

 

There is still much useful work to do, but this is now dependent on a further grant that would allow further confirmation within steps one to seven to substantiate the potential of this intervention programme.


The Ten Steps for Evaluation Success are listed below:

 

(i)         Confirm your Theory of Change

  • This step explains what the intervention will achieve and why it is important to the personal development of the young person.
  • It requires those developing the intervention to consult the scientific evidence to ensure that the theory of change is rooted in what is known about young people’s development.
  • It utilises participatory methods, as an active element of the process, to confirm that the intervention’s theory is founded on scientific evidence.

           

(ii)        Develop Your Logic Model

This step produces a graphic representation of how the intervention’s activities should support its intended outcomes.

  • It identifies how the intervention’s resources, activities and participant outputs will support its intended outcomes.
  • It identifies and explains the key assumptions underpinning the relationship between the programme’s resources, activities, outputs and outcomes specified within the logic model.
  • It considers the external conditions necessary for the logic model to work.

 

(iii)       Create a Blueprint

This step identifies specific learning objectives for each of the intervention’s core activities and then links them to short-term outcomes.

  • An intervention blueprint will link each activity to a specific learning objective.
  • Descriptions will illustrate how each learning objective will lead to the intervention’s intended short-term and long-term child and family outcomes.
  • It develops attractive and engaging intervention activities and learning materials that support a wide range of learning styles and needs.

 

 (iv)      Conduct a Feasibility Study – can it work in a practical sense?

This step tests whether the intervention is able to achieve its intended outputs. These include the core activities, as well as its ability to recruit and retain its intended participants.

  • This step will specify the intervention’s core activities and identify the factors that support or interfere with their successful delivery.
  • It will use qualitative research methods to understand which factors contribute to the success of the intervention from the perspective of those delivering it.
  • It will use qualitative methods to understand how those receiving the intervention perceive its benefits, and whether these perceptions are consistent with the intervention’s original theory of change.
  • It will establish how best to recruit and retain participants.
  • It will develop systems for monitoring participant reach and core delivery targets (a simple illustrative sketch of such a record follows this list).
  • It will apply methods for verifying user satisfaction.
  • It will track and document intervention costs.
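To make the monitoring tasks above more concrete, the following is a minimal Python sketch of the kind of record that could be used to track participant reach, delivery against session targets and costs. The field names, session names and figures are hypothetical illustrations, not data from the Success for All feasibility work.

```python
# A minimal, hypothetical sketch of a Step 4 monitoring record: participant
# reach, delivery against core session targets, and costs. All names and
# figures below are illustrative assumptions, not Success for All data.
from dataclasses import dataclass

@dataclass
class SessionRecord:
    session: str          # name of the core activity delivered
    planned_places: int   # intended number of participants (delivery target)
    attended: int         # participants who actually attended (reach)
    cost: float           # delivery cost for the session, in pounds

records = [
    SessionRecord("Values workshop 1", planned_places=15, attended=12, cost=180.0),
    SessionRecord("Values workshop 2", planned_places=15, attended=14, cost=180.0),
    SessionRecord("Family session 1", planned_places=10, attended=7, cost=120.0),
]

total_attendances = sum(r.attended for r in records)
attendance_rate = total_attendances / sum(r.planned_places for r in records)
total_cost = sum(r.cost for r in records)

print(f"Total attendances: {total_attendances}")
print(f"Attendance against target: {attendance_rate:.0%}")
print(f"Total delivery cost so far: £{total_cost:.2f}")
```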


(v)        Pilot for Outcomes

Pilot studies are relatively inexpensive evaluations which investigate an intervention’s potential for improving its intended child outcomes. Pilot studies are particularly useful for determining which measures are most appropriate for testing child outcomes, as well as how best to recruit and retain a sufficiently large and representative study sample. Step 5 provides the opportunity to learn:

  • The importance of validated measures and how to select and use them to measure pre- and post-intervention change.
  • Methods for determining an adequate sample size based on the intervention’s anticipated effects.
  • Methods for recruiting and retaining participants from the intervention’s target population.
  • Analytic methods for determining whether changes in child outcomes are statistically significant (a brief illustrative sketch follows this list).
  • How to interpret the findings from pilot studies and use them for designing more rigorous evaluations.
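As a concrete illustration of the sample-size and significance-testing tasks listed above, the short Python sketch below uses two widely used statistical libraries, statsmodels and scipy. The effect size, significance level, power and the pre/post scores are illustrative assumptions only, not parameters or findings from the Success for All pilot.

```python
# Illustrative sketch of two Step 5 tasks: (a) estimating the sample size
# needed to detect an anticipated effect, and (b) testing whether pre/post
# change on a validated measure is statistically significant.
# All numbers are hypothetical, not Success for All data.
from statsmodels.stats.power import TTestPower
from scipy import stats

# (a) Sample size for a paired (pre/post) design, assuming a modest
#     standardised effect of d = 0.3, 5% significance and 80% power.
power_analysis = TTestPower()
required_n = power_analysis.solve_power(effect_size=0.3, alpha=0.05, power=0.8)
print(f"Participants needed (approx.): {required_n:.0f}")

# (b) Paired t-test on hypothetical pre/post scores from an outcome measure.
pre_scores = [12, 15, 11, 14, 13, 10, 16, 12]   # baseline scores
post_scores = [14, 18, 13, 15, 16, 12, 19, 14]  # follow-up scores
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```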

 

(vi)       Test for Efficacy

This step provides a rigorous evaluation designed to determine whether an intervention works under ideal circumstances. Efficacy studies do this through research designs that systematically reduce potential sources of study bias, so that causality can confidently be attributed to the intervention model. Step six provides the opportunity to learn:

  • How to determine whether an intervention is ready for an efficacy study.
  • Ways in which potential sources of bias can ‘threaten’ the validity of a study’s findings.

  • How a comparison group and methods such as random assignment can be used to reduce potential sources of study bias (a simple sketch of random assignment follows this list).
  • Strategies for reducing all sources of potential bias throughout the duration of the efficacy study.
  • Strategies for increasing the likelihood that the study will take place under ideal circumstances.
  • How to interpret findings from efficacy studies.
  • What to do when a rigorously conducted efficacy study fails to observe any positive effect on a child outcome of interest.
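As a simple illustration of the random assignment mentioned above, the Python sketch below randomly allocates a set of hypothetical participant identifiers to an intervention group and a comparison group. Real efficacy studies would normally also use concealed allocation and may stratify by factors such as school or age group; the identifiers and the fixed seed here are purely illustrative.

```python
# Minimal sketch of simple random assignment to intervention and comparison
# groups, one common way of reducing selection bias in an efficacy study.
# Participant IDs are hypothetical placeholders.
import random

participants = [f"P{i:03d}" for i in range(1, 41)]  # 40 hypothetical participant IDs
random.seed(2024)            # fixed seed so the allocation can be reproduced and audited
random.shuffle(participants)

midpoint = len(participants) // 2
intervention_group = sorted(participants[:midpoint])
comparison_group = sorted(participants[midpoint:])

print(f"Intervention group ({len(intervention_group)}): {intervention_group[:5]} ...")
print(f"Comparison group ({len(comparison_group)}): {comparison_group[:5]} ...")
```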

 

(vii)      Test for Effectiveness

An effectiveness study is a rigorous evaluation designed to determine whether the positive child outcomes observed in the efficacy study can be replicated in real-world circumstances. From the perspective of EIF, it will also be useful if an effectiveness study (or a previous efficacy study) considers whether the intervention can be confidently associated with child benefits that are sustained for a year or longer. Step seven describes:

  • How effectiveness studies can be conducted in real world circumstances.
  • Methods for measuring change for a year or longer.
  • How effectiveness studies can be used to understand for whom and under what circumstances the intervention has its greatest impact.
  • How to interpret disappointing findings observed in effectiveness studies.

 

(viii)     Refine and Monitor

Once an intervention has been shown to provide benefits for young people that are meaningful from a public health perspective and sustainable within real-world settings, further testing is required to develop quality assurance systems to ensure that these benefits remain replicable. Step eight provides the opportunity to learn:

  • How evaluation methods can be incorporated into the running of an intervention to monitor its quality on an ongoing basis.
  • How to monitor child outcomes on an ongoing basis.
  • How monitoring systems can be used to determine when an intervention is appropriate for an individual child’s needs or when referral to other services may be necessary.
  • How rapid-cycle evaluations and micro-trials can be used to test and refine an intervention’s active ingredients.
  • Evaluation methodologies for testing an intervention’s workforce requirements.


(ix)       Adapt and Transport

As interventions are taken to scale, the diversity of the contexts in which they are offered will naturally increase. When interventions are ‘transported’ into new cultures, substantial changes may be particularly necessary. This step provides the opportunity to learn how evaluation methods can be used to:

  • Determine the extent to which intervention contents are relevant within new cultures and countries.
  • Determine whether the intervention’s intended child outcomes are upheld through ongoing piloting.
  • Make decisions about the extent to which interventions developed in one country are needed and will ‘fit’ within the context of another.

 

(x)        Take to Scale

While taking an intervention to scale is the last step in the EIF ten-step framework, it does not mean that the intervention’s evaluation journey is over. Instead, it signifies that evaluation cycles have successfully been integrated into the intervention’s delivery systems to verify that it will remain effective when offered at scale. Step ten helps to identify all the quality processes necessary for offering interventions at scale, including those which help local systems determine if they are ready to offer an intervention in a way that will ensure that it remains effective. Step ten offers the opportunity to learn:

  • Methods for assessing local system readiness.
  • The role of the intervention provider for informing system readiness.
  • The ways in which technical support can be used to inform system readiness and install interventions within local systems.
  • Methods for offering and using technical support, including licensing, purveyors and independent intermediaries.

 

Where to next for Success for All?

 

“Success for All” has collected considerable evidence to show that its intervention programme is effective with young people at risk of educational and social exclusion at steps one to five, as evidenced by the following material published on this website.

It now seeks further financial support to develop its knowledge and understanding at these steps, and also at steps six to ten. This would allow for more in-depth evaluation and, ultimately, for the programme to be offered more extensively to other providers who work with disadvantaged and at-risk young people.