Using Implementation Science to Measure Fidelity


A fundamental need of programs that use evidence-based practices is to assess whether the practice (such as a school curriculum or a home visitation model) is delivered with fidelity. James Bell Associates (2009) defines fidelity as “the extent to which the delivery of an intervention adheres to the protocol or program model as intended by the developers of the intervention.”

Evaluators work with model purveyors to define the core components of the intervention. Winter and Szulanski (2001, p. 733) state that core components specify “which traits are replicable, how these attributes are created, and the characteristics of environments in which they are worth replicating.” LeCroy & Milligan Associates recently conducted a curriculum implementation assessment to develop and test fidelity metrics (that define core components) and instruments for home visitation programs that use the Growing Great Kids (GGK) curriculum, developed by Great Kids, Inc. This study used five stages of fidelity assessment. Fidelity assessments are applicable to many domains, such as child welfare, mental health, juvenile justice, early childhood education, and substance abuse prevention and treatment.
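
To make the idea of a fidelity metric concrete, here is a minimal sketch in Python, assuming adherence is scored as the proportion of core components delivered as intended in one observed session. The component names and scoring rule are illustrative assumptions, not the GGK study's actual metrics.

```python
# Illustrative sketch: score adherence as the share of core components
# delivered as intended in one observed session. The component names
# below are hypothetical, not the GGK study's actual fidelity metrics.

def adherence_score(observed: dict[str, bool]) -> float:
    """Return the proportion of core components marked as delivered."""
    if not observed:
        raise ValueError("No core components were rated.")
    return sum(observed.values()) / len(observed)

session = {
    "followed_module_sequence": True,
    "used_required_materials": True,
    "modeled_parent_child_activity": False,
    "completed_session_documentation": True,
}

print(f"Adherence: {adherence_score(session):.0%}")  # Adherence: 75%
```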

Our work is also influenced by the principles of implementation science (see Fixsen et al., 2005). Implementation science seeks to understand the processes and procedures (e.g., professional development, coaching, and mentoring) that promote the transfer and adoption of evidence-based intervention practices in real-world contexts. Two types of practices are necessary to achieve desired outcomes when implementing a program with fidelity: (1) implementation practices and (2) intervention practices.

In this context, implementation practices refer to the methods and procedures used by trainers, coaches, and supervisors to promote practitioners’ use of evidence-based interventions with fidelity. One way to measure implementation fidelity is for evaluators to collect data on the extent to which staff training and supervision are conducted in ways that promote fidelity to the intervention. For the GGK study, we developed a Pre and Post Training Survey instrument to measure changes in trainees’ knowledge, skills, and confidence in using the GGK curriculum. We also developed a Supervision Observation Protocol and used it to collect data on the extent to which supervisors adequately support their staff in using GGK on a daily basis with fidelity to the curriculum model.
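
As an illustration of how pre/post training survey data might be summarized, here is a minimal sketch that computes the mean gain per survey domain. The 1–5 ratings and the three-trainee dataset are hypothetical, not data from the GGK study.

```python
# Illustrative sketch: mean pre-to-post gain per training survey domain.
# Ratings are hypothetical 1-5 Likert-type scores for three trainees.

pre = {"knowledge": [2, 3, 2], "skills": [3, 2, 3], "confidence": [2, 2, 3]}
post = {"knowledge": [4, 4, 3], "skills": [4, 3, 4], "confidence": [4, 3, 4]}

for domain in pre:
    gains = [after - before for before, after in zip(pre[domain], post[domain])]
    print(f"{domain}: mean gain = {sum(gains) / len(gains):+.2f}")
```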

The second practice needed to achieve desired outcomes when implementing a program with fidelity is the intervention practice. Intervention practices refer to the methods and strategies used by practitioners (e.g., home visitation staff, teachers) to effect changes or produce desired outcomes in a targeted population or group of recipients (e.g., parents, children, families). Fidelity of the intervention is twofold: the degree to which the evidence-based intervention is adopted and used as intended by practitioners, and the degree to which it produces expected outcomes. To assess home visitation staff fidelity in implementing the GGK curriculum with families, we developed a Home Visit Observation Protocol, interviewed staff, and performed basic and in-depth case record reviews. Family outcomes are measured through our Healthy Families Arizona Longitudinal Study.
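
To illustrate how visit-level observation ratings could roll up into a practitioner-level fidelity summary, here is a hedged sketch. The visit scores, the 0.80 cut-point, and the coaching-flag rule are assumptions for illustration only.

```python
# Illustrative sketch: summarize Home Visit Observation Protocol ratings
# per home visitor and flag low mean adherence for coaching follow-up.
# The visit-level scores and the 0.80 threshold are assumed values.

from statistics import mean

visits = {
    "visitor_a": [0.90, 0.85, 0.95],
    "visitor_b": [0.70, 0.65, 0.75],
}

THRESHOLD = 0.80  # assumed cut-point, not from the GGK study

for visitor, scores in visits.items():
    avg = mean(scores)
    status = "flag for coaching follow-up" if avg < THRESHOLD else "meets fidelity"
    print(f"{visitor}: mean adherence {avg:.0%} ({status})")
```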

References cited:
Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network (FMHI Publication #231).

James Bell Associates. (2009, October). Evaluation brief: Measuring implementation fidelity. Arlington, VA: Author.

Winter, S. G., & Szulanski, G. (2001). Replication as strategy. Organization Science, 12(6), 730–743.

About the Author:
Michele Schmidt, MPA, is a Senior Evaluation Associate at LeCroy & Milligan Associates. She will present a workshop on this topic, Developing a Curriculum Implementation Assessment to Examine Model Fidelity, at the American Evaluation Association 2014 conference in Denver, CO; the concurrent session is scheduled for 10/17/2014 from 4:30 PM to 5:15 PM.
