5 Strategies to Effectively Monitor and Evaluate Program Implementation

Many of our studies involve monitoring and evaluating program implementation to learn more about how programs are delivered and the degree to which they are delivered as intended. Examining program implementation helps program developers, stakeholders, and evaluators better understand how certain factors (e.g., adherence, dosage, quality of delivery, participant engagement, modifications) might influence a program’s intended outcomes.

At Magnolia Consulting, we find the following five strategies help us effectively monitor and evaluate implementation:

    • Understand the theory behind the program. We recommend reviewing the program’s logic model to ensure a clear understanding of the theory behind the program. A logic model provides a visual description of how critical program components are expected to result in short-, intermediate-, and long-term outcomes. This can help evaluators understand which program components should be implemented and monitored during the study.
    • Attend program training. When feasible, we recommend evaluators attend program training(s). Attending program training(s) can help evaluators learn about different program components, better understand how each component should be delivered, and become familiar with which program resources are available to support implementation (e.g., English language supports). This experience can help evaluators understand and identify when implementation is or is not going as intended.
    • Align evaluation efforts to implementation guidelines. When possible, we recommend aligning evaluation efforts to program implementation guidelines. Implementation guidelines provide detailed guidance on the expected critical program features and the extent to which each component should be implemented. Along with guiding those delivering the program, they can also help evaluators determine whether the program is implemented with fidelity to the guidelines.
    • Use multiple measures. We recommend selecting or creating multiple measures to properly monitor and assess various aspects of program implementation. For example, we use a variety of measures to monitor program implementation based on the study’s goals and budget, such as implementation logs, classroom observations, surveys, interviews, and program usage data. The use of multiple measures decreases bias and allows for response validation through triangulation (i.e., cross-verification from two or more sources), which helps ensure an accurate assessment of program implementation.
    • Keep track of response rates and missing data. We recommend tracking implementation data regularly to avoid missing data. For example, if a study uses weekly implementation logs, response rates and missing data should be monitored on a weekly basis to ensure that measures are completed in a timely manner. A complete data set gives evaluators more valid information about program implementation than one with missing data points.
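The weekly response-rate check in the last strategy can be sketched in a few lines of code. This is a minimal illustration, not part of any particular study's tooling; the roster IDs and log data below are hypothetical, and a real study would pull these records from its data-collection system.

```python
# Hypothetical sketch: flag missing weekly implementation logs so the
# evaluation team can follow up before the data gap grows.

def weekly_response_rate(expected_ids, received_ids):
    """Return the response rate and a sorted list of missing respondents."""
    missing = sorted(set(expected_ids) - set(received_ids))
    rate = (len(expected_ids) - len(missing)) / len(expected_ids)
    return rate, missing

# Hypothetical teacher roster and the logs received in week 3
roster = ["T01", "T02", "T03", "T04", "T05"]
week3_logs = ["T01", "T03", "T04"]

rate, missing = weekly_response_rate(roster, week3_logs)
print(f"Week 3 response rate: {rate:.0%}")       # Week 3 response rate: 60%
print(f"Follow up with: {', '.join(missing)}")   # Follow up with: T02, T05
```

Running a check like this each week makes it easy to send targeted reminders while the reporting window is still open, rather than discovering gaps at analysis time.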

An Evaluator’s Guide to Performing Successful Site Visit Observations

For external evaluators, site visits (e.g., visiting schools) provide an opportunity to experience the activity, program, or product firsthand while also offering a chance to connect in person with study participants. Because these observations are an important addition to report findings, Magnolia Consulting follows several steps to navigate them successfully. Based on our experience, here are five key guidelines for successful site visit observations:

    • Conduct an in-person study orientation. Conducting an in-person study orientation before site visits offers the opportunity to meet and interact with participants, which helps to establish a trusting relationship. Having a positive rapport with participants is important because it allows for open communication and can help reduce any participant concerns associated with site visit observations. During the orientation, inform participants about the purpose of the site visits, expectations for participation, and how visits will be scheduled to minimize classroom disruption; also explain how the data will be used and how evaluators will maintain participant confidentiality. If the budget does not allow for an in-person orientation, webinars with video access are also a helpful way to introduce yourself to participants.
    • Create an observation protocol. Using an observation checklist or protocol helps to reduce bias associated with observations and ensures that pre-established guidelines are followed. These protocols typically focus on the quality and extent to which an activity, program, or product is implemented and/or aligned to best instructional practices in the field (e.g., reading instruction, STEM). If multiple evaluators perform observations across participants or sites, the measure should be checked across observers for agreement and accuracy.
    • Enlist a site coordinator. If the budget allows, arrange for an on-site coordinator who knows the site’s participants and inner workings. This individual can communicate with the evaluation team’s program coordinator on details throughout the study, such as scheduling observations across multiple participants at the site. Furthermore, the site coordinator can communicate details with participants before the observation and answer any questions that arise. In addition to coordinating site visit observations, the site coordinator might take on other helpful tasks, such as managing consent forms and ensuring assessments are distributed and returned in an organized manner.
    • Be flexible with scheduling. Allowing flexibility and accommodating a site’s scheduling needs shows understanding and recognition of “real world” complications. If possible, create a schedule that follows the site’s established routines. This may require conducting observations across several days rather than consolidating multiple sessions into a shorter timeframe. If the evaluation budget is limited and requires observations to be performed within a brief period, work with the site coordinator to determine an alternative schedule that is both considerate of the site and workable for the study. If appropriate, send participants a direct email one week prior to the visit restating the purpose of the observation and confirming the schedule. This helps ensure that no observation comes as a surprise.
    • Follow up with sites. Email site coordinators and participants a day or two after conducting site visit observations to express gratitude for their time and willingness to be observed. This follow-up also gives them an opportunity to ask any remaining questions about the site visit or the study.
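The observer-agreement check mentioned in the observation-protocol guideline can be made concrete with a small calculation. The sketch below computes simple percent agreement and Cohen's kappa (one common chance-corrected agreement statistic, named here as an illustration rather than as the firm's required method) for two observers rating the same checklist items; the ratings are hypothetical.

```python
# Hypothetical sketch: compare two observers' yes/no checklist ratings
# to check inter-observer agreement before pooling observation data.

def percent_agreement(a, b):
    """Share of items on which both observers gave the same rating."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: agreement corrected for chance, for two raters."""
    n = len(a)
    po = percent_agreement(a, b)                     # observed agreement
    labels = set(a) | set(b)
    pe = sum((a.count(l) / n) * (b.count(l) / n)     # expected-by-chance
             for l in labels)
    return (po - pe) / (1 - pe)

# Hypothetical ratings on eight checklist items
obs1 = ["yes", "yes", "no", "yes", "no", "yes", "no", "no"]
obs2 = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]

print(f"Percent agreement: {percent_agreement(obs1, obs2):.0%}")  # 75%
print(f"Cohen's kappa: {cohens_kappa(obs1, obs2):.2f}")           # 0.50
```

If agreement falls below whatever threshold the study sets, that is a signal to retrain observers or tighten the protocol's item definitions before further visits.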