Many of our studies involve monitoring and evaluating program implementation to learn more about how programs are delivered and the degree to which they are delivered as intended. Examining program implementation helps program developers, stakeholders, and evaluators better understand how certain factors (e.g., adherence, dosage, quality of delivery, participant engagement, modifications) might influence a program’s intended outcomes.
At Magnolia Consulting, we find the following five strategies help us to effectively monitor and evaluate implementation:
- Understand the theory behind the program. We recommend reviewing the program’s logic model to ensure a clear understanding of the theory behind the program. A logic model provides a visual description of how critical program components are expected to result in short-, intermediate-, and long-term outcomes. This can help evaluators understand which program components should be implemented and monitored during the study.
- Attend program training. When feasible, we recommend evaluators attend program training(s). Attending program training(s) can help evaluators learn about different program components, better understand how each component should be delivered, and become familiar with which program resources are available to support implementation (e.g., English language supports). This experience can help evaluators understand and identify when implementation is or is not going as intended.
- Align evaluation efforts to implementation guidelines. When possible, we recommend aligning evaluation efforts to program implementation guidelines. Implementation guidelines provide detailed guidance on the expected critical program features and the extent to which each component should be implemented. Along with providing guidance to those delivering programs, they can also help evaluators determine if the program is actually implemented with fidelity to the guidelines.
- Use multiple measures. We recommend selecting or creating multiple measures to properly monitor and assess various aspects of program implementation. For example, we use a variety of measures to monitor program implementation based on the study’s goals and budget, such as implementation logs, classroom observations, surveys, interviews, and program usage data. The use of multiple measures decreases bias and allows for response validation through triangulation (i.e., cross-verification from two or more sources), which helps ensure an accurate assessment of program implementation.
- Keep track of response rates and missing data. We recommend tracking implementation data regularly to avoid missing data. For example, if a study uses weekly implementation logs, response rates and missing data should be monitored on a weekly basis to ensure that measures are completed in a timely manner. Complete data sets provide evaluators with more valid information regarding program implementation than data sets with missing data points.
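To make the weekly tracking in the last strategy concrete, here is a minimal Python sketch; the participant IDs and log data are hypothetical, and any real study would substitute its own data source. It computes one week's response rate and flags participants with missing logs:

```python
# Hypothetical weekly implementation-log tracker; participant IDs and
# received-log data are illustrative, not from an actual study.
participants = ["T01", "T02", "T03", "T04"]

# (participant_id, week) pairs for each log actually received
received_logs = [
    ("T01", 1), ("T02", 1), ("T03", 1), ("T04", 1),
    ("T01", 2), ("T03", 2),
]

def weekly_status(participants, received_logs, week):
    """Return the response rate and the nonresponders for one week."""
    responders = {pid for pid, wk in received_logs if wk == week}
    missing = [pid for pid in participants if pid not in responders]
    return len(responders) / len(participants), missing

rate, missing = weekly_status(participants, received_logs, 2)
print(f"Week 2 response rate: {rate:.0%}; follow up with: {missing}")
```

Running a check like this each week surfaces missing logs while participants can still reasonably complete them.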
At Magnolia Consulting, we often use online surveys as a data collection tool in our evaluation studies. Online surveys are an efficient and effective way to gather information from participants to answer evaluation questions. While survey response rates can be impacted by many different factors, it’s important to aim for a high response rate. A high survey response rate gives credibility to survey data, whereas a low response rate may pose several challenges, such as (a) decreasing the statistical power of the data, (b) undermining the ability to generalize the results to a larger population, and (c) indicating a nonresponse bias within the sample, meaning that survey respondents differ in meaningful ways from nonrespondents (Nulty, 2008). With this in mind, we make every effort to secure a high response rate when we use a survey for data collection. Here are seven tips we recommend for increasing online survey response rates based on our experience:
- Add sender email address(es) to approved senders list. A common problem when sending an online survey occurs when the survey is incorrectly flagged as unwanted spam. Emails received by survey participants may go directly to their spam mailbox or may “bounce” if the sender’s email address is not on an approved list of senders. To reduce the chance of this happening, contact participants prior to survey administration to request that they add any necessary email addresses to their approved senders list and to let them know a survey invitation is coming. If issues persist, email the participant from a different email address that has fewer restrictions.
- Send a pre-survey letter or notification. Prior to sending the actual survey, have a respected authority figure (such as a school principal or district-level administrator) send a survey notification letter to participants. This letter should explain why the survey is important to the school or district, which helps situate the survey in the local information needs of the participants. It should also let participants know that a survey is coming from an independent evaluator, so they can keep an eye out for the survey invitation.
- Use an engaging and informative survey invitation. When sending a survey via email, the initial survey invitation is your first chance to grab the attention of your participants. First, use a short, engaging email subject line to ensure your invitation is not buried in their inbox. Next, the body of the email should provide all relevant information that participants will need to know about the survey, including the following: (a) a clear survey purpose and why participant feedback is important, (b) an accurate estimate of the time needed to complete the survey, (c) a date by which the survey should be completed, (d) information on incentives for completing the survey (if applicable), (e) a statement about survey confidentiality and/or anonymity, and (f) a contact name and email address for any questions about the survey.
- Ensure your survey is easy to complete. As you are creating your survey, keep certain readability factors in mind: Is the survey clear and easy to read? Is it free of jargon? Is it too long and time-consuming? You don’t want survey participants to lose interest because a survey is difficult to read or too lengthy. One suggestion for improving survey quality is to keep questions as straightforward and simple as possible, as complex questions carry a high cognitive load. Another is to spread questions across several pages rather than presenting one long survey on a single page; to do this, break pages into segments of related questions. Finally, if possible, ensure your survey is optimized for both computers and mobile devices such as cell phones and tablets.
- Ensure participants have protected time to complete the survey. When we send surveys to teachers, we often ask principals to give teachers time during scheduled school events, such as meetings, to complete the survey. This way, teachers aren’t left to complete the survey on their own time outside of the school day.
- Send reminders and follow up with nonresponders. Track survey responses closely and monitor response rates on a daily or weekly basis. This is especially important if survey responses are time sensitive, such as when the survey measures change before and after an intervention. It is essential to follow up with nonresponders by sending at least two reminder emails. In some cases, you can contact participants by phone or mail if you have that information. As a last effort, you may also enlist the help of the respected authority figure mentioned in tip two.
- Show appreciation for time and effort. With everyone’s busy schedules, it is important to remind participants how valuable their feedback is to the overall goals of the evaluation. To show appreciation for your participants’ time and effort, offer any assistance (e.g., emailing a paper copy) to make survey completion less burdensome for them. Incentives can be used to show appreciation for completing the survey at both the teacher level (for example, a chance to win an Amazon gift card or using donorschoose.org to fund classroom supplies) and the school level (for example, a gift card to put toward a pizza party or a contribution to a school fundraiser). Be sure to check with the school or district to learn whether there are any restrictions on teachers or administrators receiving incentives. Finally, always thank your survey participants for their time and effort toward completing the survey.
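The tracking-and-reminder workflow in tip six can be sketched in a few lines of Python. The email addresses, completion set, and reminder cap below are illustrative assumptions, not a prescribed tool:

```python
# Hypothetical survey reminder tracker; addresses and thresholds are
# invented for illustration.
MAX_REMINDERS = 2  # send at least two reminder emails (tip six)

roster = {"a@school.org": 0, "b@school.org": 0, "c@school.org": 0}  # reminders sent
completed = {"b@school.org"}  # addresses that have finished the survey

def next_reminder_batch(roster, completed):
    """Emails that still owe a response and have reminders remaining."""
    return [email for email, sent in roster.items()
            if email not in completed and sent < MAX_REMINDERS]

batch = next_reminder_batch(roster, completed)
for email in batch:
    roster[email] += 1  # record the reminder so efforts aren't duplicated

print(f"Remind: {batch}; response rate: {len(completed) / len(roster):.0%}")
```

Once a nonresponder exhausts the reminder cap, the list stops including them, which is the point at which a phone call or an appeal to the authority figure from tip two becomes the next step.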
Nulty, D. (2008). The adequacy of response rates to online and paper surveys: What can be done? Assessment & Evaluation in Higher Education, 33(3), 301–314.
For external evaluators, site visits (e.g., visiting schools) provide an opportunity to experience the activity, program, or product firsthand while also offering a chance to connect in person with participants from a study. As these observations provide an important addition to report findings, Magnolia Consulting follows several steps for successfully navigating site visit observations. Based on our experience, here are five key guidelines for successful site visit observations:
- Conduct an in-person study orientation. Conducting an in-person study orientation before site visits offers the opportunity to meet and interact with participants, which helps to establish a trusting relationship. Having a positive rapport with participants is important as it allows for open communication and can help reduce any participant concerns associated with site visit observations. During the study orientation, inform participants of details about the purpose of the site visits, expectations for participation, and how site visits will be scheduled to minimize disruption in the classroom. During the orientation, explain how the data will be used and how evaluators will maintain participant confidentiality. If the budget does not allow for an in-person orientation, webinars with video access are also a helpful tool to introduce yourself to the participants.
- Create an observation protocol. Using an observation checklist or protocol helps to reduce bias associated with observations and ensures that preestablished guidelines are followed. These protocols typically focus on the quality and extent to which an activity, program, or product is implemented and/or aligned to best instructional practices in the field (e.g., reading instruction, STEM). If multiple evaluators perform observations across participants or sites, the measure should be checked across observers for agreement and accuracy.
- Enlist a site coordinator. If budget allows, plan for having an on-site coordinator who knows the site’s participants and inner workings. This individual can communicate with the evaluation team’s program coordinator on details throughout the study such as scheduling observations across multiple participants at the site. Furthermore, the site coordinator can communicate details with participants before the observation and answer any questions that arise. In addition to coordinating site visit observations, the site coordinator might be responsible for other helpful tasks such as managing consent forms and ensuring assessments are distributed and returned in an organized manner.
- Be flexible with scheduling. Allowing flexibility and accommodating a site’s scheduling needs shows understanding and recognition of “real world” complications. If possible, create a schedule that follows the site’s established routines. This may require conducting observations across several days rather than consolidating multiple sessions into a shorter timeframe. If the evaluation budget is limited and requires observations to be performed within a brief period, work with the site coordinator to find a schedule that is both considerate and workable. If appropriate, send participants a direct email one week prior to the visit restating the purpose of the observation and confirming the schedule. This helps ensure an observation does not come as a surprise.
- Follow up with sites. Email site coordinators and participants a day or two after conducting site visit observations to express gratitude for their time and willingness to be observed. This follow-up also allows for any questions pertaining to the site visit or study.
At Magnolia Consulting, one of our specialties is designing and implementing curriculum efficacy or effectiveness studies, and we are often tasked with recruiting sites (i.e., schools or school districts) to participate. Finding potential sites that both fit the requirements for a given study and are able to participate can feel like finding a needle in a haystack. Based on our experience, it is possible to find these sites, but it can be challenging and requires a well-thought-out plan of action. We have found that an organized, collaborative, and personal approach to recruitment is fundamental to success. The following list includes several key elements for effectively navigating the recruitment process:
- Start early! Allow ample time for the recruitment process, as it can be quite time consuming to identify sites and to fully bring them onboard. In terms of study implementation, it is easier to confirm sites early rather than at the last minute. If possible, start recruitment in early spring before testing or before summer break when contacts may be out of the office.
- Create clear study documents. As each study is unique in terms of site selection criteria and benefits to participating sites, it is important to develop clear study descriptions for potential sites. Consider visually appealing ways to present information (e.g., a one-page infographic or handout about the study), as well as various methods of dissemination to a wide audience (e.g., website links or mailing lists). Ensure team members within your company review these documents and are able to clearly explain the study details to potential sites.
- Develop a list of potential applicants. Before contacting sites, consider developing a tool to track potential sites, which may include details on site demographics and student enrollment information. Sources for this list may include information from a national database (e.g., National Center for Education Statistics) or a curriculum provider’s mailing list of users and may be limited by specific requirements of the study, such as certain areas of the country, size or locale (urban, suburban, rural) of the site, use of specific programs, or access to technology.
- Create email and phone protocols. Utilizing email and phone communication protocols for initial contacts and any follow-up communications provides an outline for professional, consistent messaging across multiple interactions and staff members. Being approachable, positive, and grateful in all correspondence sets the stage for a potential longer-term connection.
- Track all efforts. Tracking every interaction with potential sites is essential. For example, in an Excel spreadsheet or Google Sheet, it is possible to track all dates and methods of contact, name/phone/email of the contact, site selection criteria met, and key points from the communication, such as next steps. Tracking efforts streamlines the process, promotes greater understanding of recruitment efforts among team members, and supports the study team in making the final decision on which sites to include in the study.
- Follow up! Consistent, timely follow-up with contacts in a way that balances persistence with consideration of busy school and district schedules is key. It is also generally good practice to communicate whether a site is selected to participate in the study and to show appreciation for time invested.
- Confirm final sites. As study sites are selected, continue to communicate regularly with them regarding next steps. For example, ensure that all district and school approval processes are followed and request that sites sign a Memorandum of Understanding (MOU), which outlines the roles and responsibilities of all study parties. Once MOUs are signed, move forward with next steps regarding various study start-up tasks.
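The tracking spreadsheet described above can also be mirrored in a small script when a team wants automatic follow-up flags. This sketch is a hypothetical illustration; the site names, column fields, and 14-day follow-up window are invented, not a Magnolia Consulting standard:

```python
# Hypothetical recruitment tracker mirroring the spreadsheet columns
# described above (contact, dates, criteria met, MOU status).
from datetime import date

sites = [
    {"name": "District A", "contact": "J. Smith", "last_contact": date(2024, 3, 1),
     "criteria_met": True,  "mou_signed": False},
    {"name": "District B", "contact": "R. Lee",   "last_contact": date(2024, 2, 10),
     "criteria_met": False, "mou_signed": False},
    {"name": "District C", "contact": "M. Cruz",  "last_contact": date(2024, 3, 5),
     "criteria_met": True,  "mou_signed": True},
]

def needs_follow_up(sites, today, max_gap_days=14):
    """Eligible sites without a signed MOU whose last contact is stale."""
    return [s["name"] for s in sites
            if s["criteria_met"] and not s["mou_signed"]
            and (today - s["last_contact"]).days > max_gap_days]

print(needs_follow_up(sites, date(2024, 3, 20)))
```

The same record structure supports the final-selection step: filtering on `criteria_met` and `mou_signed` gives the confirmed site list.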
The Every Student Succeeds Act (ESSA) promotes the use of federal funds to purchase programs that have evidence of effectiveness in increasing student success. But how can state and local education leaders find programs and practices that meet ESSA evidence standards? As part of a partnership between Regional Educational Laboratory (REL) Central at Marzano Research Associates and the Nebraska Department of Education, we provided technical support in developing a systematic process for conducting evidence searches. If you are on a quest for rigorous evidence to support a program or practice, here are eight steps you can follow to conduct a search.
- Step 1: Define constructs. To start, it is important to fully understand what you are looking for! Therefore, the first step of the process is to thoroughly define the program or practice that is the subject of your search, including its expected outcomes.
- Step 2: Identify search databases. The Internet is bursting with search engines! However, some will be more relevant to the search than others. In this step, take some time to determine which search engines and databases will be most appropriate to this specific search. If the search is in the realm of education, websites and search databases such as What Works Clearinghouse (WWC), REL publications, and Evidence for ESSA are good places to start because the studies on these sites have already been reviewed against rigorous standards.
- Step 3: Determine inclusion criteria. There is so much literature out there! Therefore, it is necessary to set criteria for including a study in your review. Good criteria will help you weed out studies that aren’t relevant to your search and focus your attention on those with the highest potential to meet evidence standards. Criteria might include study design characteristics, such as a randomized controlled trial or quasi-experimental design, or program characteristics, such as relevant outcomes.
- Step 4: Determine search terms. Before beginning the search, determine and record the search terms to be used in the search process. Be sure to consider synonyms and alternative terms for the constructs you defined in Step 1.
- Step 5: Prepare the search database. Before you begin, you’ll need to prepare a database to document your search results and describe studies that meet inclusion criteria. We use an Excel spreadsheet with column headers to capture study design characteristics that speak to the study’s rigor and program characteristics that help define the program and explain why it’s effective and how it can be replicated. Include additional column headers for any information relevant to your specific search purposes.
- Step 6: Conduct the search. Now you are ready to conduct the search! Enter the search terms identified in Step 4 into the search engines and databases identified in Step 2 and review the results. If a study seems relevant, review the abstract or the entire study more carefully to determine if it meets your inclusion criteria. As you conduct the search, document the number of initial results, the number of abstracts reviewed, and the number of studies identified for further consideration in the spreadsheet or other tool prepared in Step 5. Tracking these numbers creates a historical record of your search and captures its breadth and depth.
- Step 7: Document studies. Yay, a study meets your inclusion criteria! Document those studies that fit your criteria in the spreadsheet you prepared in Step 5. This creates a record of these studies, so they can be further examined and reviewed.
- Step 8: Review the search process. Throughout the search process, it is important to periodically evaluate the findings and the process in general to ensure the review is on track and identifying the most relevant studies. Review the studies you’ve captured. Are you finding studies that align with your search goals? If not, revisit the search design (Steps 1–5) and revise steps as needed. Also review the results of the search to determine whether the studies you’ve documented provide strong supporting evidence for your program or practice.
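The documentation habit in Steps 5–7 can be sketched as a simple search log. The databases, search term, counts, and study IDs below are hypothetical placeholders, not results from an actual search:

```python
# Hypothetical evidence-search log following Steps 5-7; all entries
# are invented for illustration.
search_log = []

def record_search(database, term, n_results, n_abstracts, included_ids):
    """Document one query: counts for Step 6, included studies for Step 7."""
    search_log.append({
        "database": database,
        "term": term,
        "initial_results": n_results,
        "abstracts_reviewed": n_abstracts,
        "included": list(included_ids),
    })

record_search("WWC", "early literacy tutoring", 42, 9, ["Smith2019"])
record_search("Evidence for ESSA", "early literacy tutoring", 17, 5, [])

total_included = sum(len(row["included"]) for row in search_log)
print(f"Queries run: {len(search_log)}, studies included: {total_included}")
```

Totaling the log at any point supports the Step 8 check: if queries keep running but few studies survive the inclusion criteria, that is the signal to revisit Steps 1–5.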
These eight easy steps will help you organize any evidence search so you find the most relevant studies to support your program or practice in a systematic and efficient manner. For more information on meeting ESSA requirements for evidence-based programs and practices, check out WestEd’s resources for states.
Research on sensitive subjects such as illegal activities, drug use, and sexual topics can pose some challenges for evaluators. These topics may result in unintended consequences, such as difficult emotions or potential legal ramifications for those involved. At Magnolia Consulting, we have conducted multiple qualitative studies on the highly sensitive topic of educator sexual misconduct, which is the abuse of students by school personnel. Based on our experience, we have identified three tasks that often present challenges during the research process: (a) recruitment, (b) sample selection, and (c) qualitative data collection through focus groups and interviews. Below, we outline these three challenges, along with solutions we employ.
Recruitment
- Challenge: While recruitment can be challenging for any study, recruiting participants for a sensitive-topic study faces additional hurdles. The topic may trigger difficult or painful emotional responses from participants who have a personal connection to the subject, which may discourage participation. Additionally, participants may be afraid to discuss or disclose illegal behavior unless their confidentiality is ensured.
- Solution: To encourage participation in the study, researchers may want to emphasize the value of others learning from the findings. Study participants should also be made aware of how the research findings will be used to advance knowledge in the field. Additionally, when first reaching out to potential participants, direct oral communication (via phone or in person) is key. Once potential participants have been identified, it is important for researchers to take proper steps to ensure confidentiality in the study to alleviate legal concerns of disclosing behaviors or knowledge.
Sample Selection
- Challenge: Many sensitive-topic studies start with a limited number of known possible participants, which is further decreased by potential participants not meeting study sample parameters. In addition, a number of eligible participants may not wish to engage in the research study. This selection bias may mean that the results of the study are not generalizable.
- Solution: If possible, researchers should identify as large an initial pool of potential participants as they can. To minimize selection bias among eligible participants, researchers should work to remove barriers and increase incentives for participation. Again, ensuring confidentiality is essential.
Qualitative Data Collection
- Challenge: Some participants in sensitive-topic studies may not feel comfortable being open with researchers. Hesitation to share perspectives can negatively impact the quality of the data collected.
- Solution: Establishing rapport with study participants before data collection is an important step in gaining trust. One method of doing so is to hold an in-person orientation with key participants. This allows for face-to-face meeting time, presentation of background information, and questions from participants. Another option is to conduct a pilot of the protocols, which helps researchers adjust questions and make modifications to increase the comfort and openness of participants.
Despite the various barriers facing a researcher embarking on sensitive-topic research, it is possible to work through these challenges by using a solution-focused approach.
Qualitative analysis can sometimes seem daunting, particularly when dealing with large amounts of data. We have found qualitative data analysis (QDA) software to be immensely helpful with collaboration, coding, organization, and analysis. The software we use, ATLAS.ti, allows us to collaborate across platforms; co-develop, share, and organize codes; and analyze data through different techniques. Below are important points to consider when selecting and using any qualitative analysis tool:
- Look at collaboration capabilities across platforms. When choosing QDA software, consider your organization’s collaboration needs. Do team members use a single platform, or is there a split between PC and Mac users? ATLAS.ti has recently expanded to allow for file sharing across platforms, which is particularly helpful when analyzing large data sets. Each team member can take and analyze a section of the data on a PC or Mac, and the analysis can later be merged into one master file on either platform.
- Consider ways to create and share your codes. As part of the coding process, we use a shared Excel codebook with predetermined codes that can be revised as team members identify emergent codes. A shared codebook allows our team members to easily share, collaborate on, and organize our thematic codes. ATLAS.ti also has an internal codebook function that allows users to define predetermined and emergent codes; however, we have not found an effective way to make this codebook continually accessible to all team members.
- Find ways to organize your codes. One of the strongest benefits of QDA software is the amount of organization it offers. Within ATLAS.ti, we can merge multiple thematic codes into one larger code, break apart codes, and organize codes through color coding and the creation of larger code groups. This is particularly helpful when the analysis process begins with “lumping” data to look for overarching themes and then moves into “splitting” the data in those themes to look for more detailed themes or smaller categories.
- Understand that QDA software will not run an analysis for you. While QDA software supports various analyses (e.g., content analysis, thematic analysis, analytic induction), it will not run an analysis for you. Unlike SPSS or SAS, you cannot select an analysis method and have the program run the analysis or provide an output of results. Qualitative analysis is still subject to the skills of the individual analyst, but QDA software offers a tool for better collaboration, coding, and organization.
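The "lumping" of detailed codes into overarching themes described above can be illustrated outside any particular tool. This sketch mimics, rather than uses, QDA-software code groups; the coded segments, codes, and theme names are hypothetical:

```python
# Minimal sketch of lumping thematic codes into larger code groups;
# segments, codes, and themes are invented for illustration.
from collections import Counter

# (document, code) pairs, as if exported from coded transcripts
segments = [
    ("teacher interview 1", "pacing"),
    ("teacher interview 2", "pacing"),
    ("teacher interview 2", "materials access"),
    ("principal interview", "scheduling"),
]

# lump detailed codes under overarching themes
code_groups = {
    "pacing": "Implementation barriers",
    "scheduling": "Implementation barriers",
    "materials access": "Resources",
}

theme_counts = Counter(code_groups[code] for _, code in segments)
print(theme_counts["Implementation barriers"])  # segments per theme
```

Splitting works in reverse: filtering `segments` to one theme's codes recovers the detailed categories within it. As the last point notes, the counting is mechanical, but deciding which codes belong together remains the analyst's judgment.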