


MSPnet Academy Discussions



Topic: "MSPnet Academy: Making Sense of Measuring Implementation in Educational Research"

Topic Posts

MSPnet Academy Discussion
February 25th - March 12th, 2015

"MSPnet Academy: Making Sense of Measuring Implementation in Educational Research"

Presenters: Jeanne Century and Amy Cassata, Outlier Research & Evaluation, CEMSE | University of Chicago


Overview: Over the past several years, the National Science Foundation, the Institute of Education Sciences in the Department of Education, and other funders have brought to light the critical importance of rigorously measuring implementation fidelity (the extent to which an intervention is enacted as planned) and the contextual factors that affect fidelity. Researchers are expected to discuss the psychometric properties of their measures and to describe specifically their approaches to analysis and how those analyses will be used. This poses a challenge, however, because implementation measurement is still relatively new, and a general consensus about what it is and how to do it is only now emerging.

This webinar will provide participants with a high-level overview of the key issues related to implementation measurement, including: 1) definitions, theory, and background; 2) study design and measurement approaches; 3) analysis strategies; and 4) differences between implementation measurement and other kinds of studies (such as design-based implementation research). The webinar will describe each of these issues and give concrete examples of both progress and challenges in each area. It will also address where implementation measurement fits into the study types outlined in the Common Guidelines and will include specific examples from NSF- and IES-funded studies.


This archived topic is open to the public.

This topic has 9 posts, showing all.

Thank you for your great questions!

posted by: Jeanne Century on 2/25/2015 3:08 pm

We appreciated all of the interest and questions. We welcome more conversation; we are eager to bring more coherence to this area of work in education.

How are you measuring implementation?

posted by: Amy Cassata on 2/25/2015 3:14 pm

We would love to hear more about your experiences in measuring implementation. What can you share with us and others about what you have done, and how?

Whitepaper?

posted by: Michael Culbertson on 2/27/2015 5:42 pm

Hello, Jeanne and Amy,

Thanks for your presentation. Do you have a whitepaper or some other document that describes the ideas from your webinar in more detail?

Thanks,
Michael Culbertson

Book chapter

posted by: Amy Cassata on 2/27/2015 6:03 pm

Hi Michael,
Jeanne and I recently wrote a chapter in the book "Treatment Integrity: A Foundation for Evidence-Based Practice in Applied Psychology," published by the APA, that gives a good written description of the concepts and frameworks we presented in the webinar.

In the chapter, we provide examples of how we have applied our theory in the contexts of two different types of STEM innovations. We also discuss important issues to consider in implementation measurement, identify ongoing challenges, and give recommendations for how those challenges may be overcome.

Reference:
Century, J., & Cassata, A. (2014). Conceptual foundations for measuring the implementation of educational innovations. In L. M. H. Sanetti & T. R. Kratochwill (Eds.), Treatment Integrity: A Foundation for Evidence-Based Practice in Applied Psychology (pp. 81-108). Washington, DC: American Psychological Association.

Analysis strategies?

posted by: Kristin Bass on 3/9/2015 1:21 pm

Dear Jeanne and Amy,

Thank you so much for your implementation presentation! It's given me a lot to think about in my work. Given that you collect such a broad range of implementation data in your projects, how do you analyze and synthesize all of it?

I know that you'd planned to speak a bit about this in your presentation but ran out of time. Would it be possible for you to share a few key points about analysis in this discussion forum?

Best regards,
Kristin Bass

Analysis strategies

posted by: Amy Cassata on 3/11/2015 3:02 pm

Hi Kristin,
Thanks for your interest in our work! We certainly have come up against (and continue to experience) challenges in analyzing our data. First, I will point you toward some of our most recent work-in-progress that illustrates our analysis strategies. Then, I'll mention some of our ongoing challenges.

I will be presenting some of the results of our Everyday Mathematics implementation study at this year's AERA meeting in Chicago. Using teacher questionnaires, we measured variation in teachers' use of EM components within and across five districts, as well as the contextual factors that influenced their use. We used multiple regression analysis to explore relationships between specific factors and specific EM components. I'll be presenting our poster on Monday, April 20, from 8:15 to 9:45 am. We will also upload a paper to the online repository that should be available at the end of March.
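To make the regression step concrete, here is a minimal sketch of what such an analysis can look like in Python with statsmodels. The file and variable names (e.g., small_group_use, pd_hours) are invented placeholders for a component-use score and contextual factors, not our actual Everyday Mathematics measures.

    # Minimal sketch: ordinary least squares regression of a component-use
    # score on contextual factors. All file and variable names are
    # illustrative placeholders, not the actual study measures.
    import pandas as pd
    import statsmodels.formula.api as smf

    teachers = pd.read_csv("teacher_questionnaire.csv")  # hypothetical file

    # One model per component: contextual factors predicting reported use
    # of a "small group work" component, with district as a fixed effect.
    model = smf.ols(
        "small_group_use ~ pd_hours + years_experience + admin_support + C(district)",
        data=teachers,
    ).fit()
    print(model.summary())

In a setup like this, one would fit a separate model for each component of interest and inspect which contextual factors show reliable associations with its use.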

We have also explored ways to analyze relationships between component use and student achievement using a path analysis approach. I gave a presentation at the 2014 IES PI meeting that shares details on this analytic approach and some of our preliminary findings:
http://www.slideshare.net/AmyCassataPhD/ies-pi-meeting-pres-full
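For readers who have not worked with path analysis, here is a rough sketch of the general idea only: a simple recursive path model estimated as a chain of regressions, with the indirect effect computed as a product of path coefficients. The variable names are invented, and this is not the analysis from the presentation above.

    # Rough sketch of a recursive path model estimated as chained OLS
    # regressions: component use -> engagement -> achievement.
    # All variable and file names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("implementation_and_outcomes.csv")  # hypothetical file

    # Path a: component use predicting the intermediate variable
    m1 = smf.ols("engagement ~ component_use", data=df).fit()
    # Paths b and c': intermediate variable and component use predicting achievement
    m2 = smf.ols("achievement ~ engagement + component_use", data=df).fit()

    a = m1.params["component_use"]
    b = m2.params["engagement"]
    print("indirect effect (a*b):", a * b)
    print("direct effect (c'):", m2.params["component_use"])

Dedicated structural equation modeling packages can estimate all of the paths simultaneously and report fit indices; the chained-regression version above is just the simplest way to see the logic.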

post updated by the author 3/11/2015

Analysis challenges

posted by: Amy Cassata on 3/11/2015 3:56 pm

Here is just a sampling of the analysis challenges we've encountered in our work. Have any of you dealt with similar issues?

- Where does "student engagement" fit in an analytic model predicting student achievement? Our theory states that student engagement components (how students behave and participate) are essential parts of an intervention. We had to decide to treat student engagement as an independent, dependent, or mediating variable. We ultimately decided to test a model with student engagement as a variable that mediates the effects of teacher instruction on achievement.

- Where do structural components fit in an analytic model? For example, are they "containers" in which interactions happen (in which case they would be expected to influence interactions), or do they happen independently of interactions? Both answers could be right, depending on the nature of the intervention and underlying theory of how the components work together to produce outcomes. This decision also depends on the nature of structures. Structures like "lesson order" and "lesson omission" function differently than structures like "small group structure."

- How can we combine data collected at different grain sizes? For example, unit-level questionnaires were our primary data source, but they couldn't tell us the whole story about implementation. For gathering data on teachers' adaptations to the written lessons, lesson-level teacher logs were much more detailed and accurate. We are still developing approaches that allow us to leverage both sources of data; a minimal data-merging sketch appears at the end of this post.

I hope these examples are helpful. We are interested in hearing about your analysis challenges and the strategies you have used to address them.
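On the grain-size point, the most basic way to combine the two sources is to aggregate the finer-grained data up to the coarser level and then join the tables. Here is a minimal sketch; the column and file names are invented placeholders, not our actual pipeline.

    # Sketch: aggregating lesson-level teacher logs to the unit level and
    # merging them with unit-level questionnaire data.
    # All file and column names are invented placeholders.
    import pandas as pd

    logs = pd.read_csv("lesson_logs.csv")            # one row per teacher x lesson
    questionnaires = pd.read_csv("unit_survey.csv")  # one row per teacher x unit

    # Summarize each teacher's lesson-level adaptations within a unit
    unit_logs = (
        logs.groupby(["teacher_id", "unit"])
            .agg(lessons_logged=("lesson_id", "count"),
                 lessons_adapted=("adapted", "sum"),
                 lessons_omitted=("omitted", "sum"))
            .reset_index()
    )

    merged = questionnaires.merge(unit_logs, on=["teacher_id", "unit"], how="left")
    merged["adaptation_rate"] = merged["lessons_adapted"] / merged["lessons_logged"]

Of course, aggregation like this discards exactly the lesson-level detail that made the logs valuable in the first place, which is part of why the grain-size problem remains an open challenge for us.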

Assessing Teacher and Student Learning

posted by: Mary Townsend on 3/12/2015 11:06 am

I'm interested in the implications of this research for the practices of teachers trying to improve their instruction...which is a bit different from your purpose. However, I've shared your post with other school leaders because it really gets to the challenge of assessing student learning for the practitioner and the conditions that improve teacher instructional skill. Regular school folks can't do this type of analysis. The point I want to make is that when "school folks" attempt to look at data, we often get a flawed picture of what is going on, because we make meaning from incomplete information or because the assessment items we use actually have flaws. So how can we help teachers and their supporters know what in their instruction is working and what needs to be revised, changed, or discarded?

RE: Assessing Teacher and Student Learning

posted by: Amy Cassata on 3/16/2015 4:52 pm

Hi Mary,

Making our findings useful to school and district leaders is a top priority for us, and doing so is a collaborative process of ongoing communication across multiple days and meetings. In general, we spend a lot of time up front describing what our research study is about and what kind of information we can provide. We share copies of our survey items and ask leaders to identify what they are most interested in learning about, so we can focus our data collection and reports on what is most important to them. We learn what kinds of questions they have about implementation and what problems they are looking to address. For example, one district may want to know whether its PD is useful and sufficient; another may simply want to know how the different teachers and schools across the district are using the program.

We have found that sending reports (or creating a slide show) with simple, descriptive information at the school or district level (e.g., bar charts of item-by-item response frequencies) is often useful to district leaders. Program leaders have a sense of the type of implementation they would like to see from their teachers. Our data provide a snapshot of teachers' implementation of a particular program at the item level and also identify supports for, or barriers to, that implementation.
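As a purely illustrative aside, a chart of that kind takes only a few lines to produce; the data file and item name below are invented placeholders, not our actual survey.

    # Sketch: item-level response frequencies for one district as a bar chart.
    # The data file and item name are invented placeholders.
    import pandas as pd
    import matplotlib.pyplot as plt

    responses = pd.read_csv("district_survey.csv")  # one row per teacher
    item = "uses_open_response_tasks"               # hypothetical survey item

    counts = responses[item].value_counts().sort_index()
    counts.plot(kind="bar")
    plt.xlabel("Response category")
    plt.ylabel("Number of teachers")
    plt.title("Response frequencies: " + item)
    plt.tight_layout()
    plt.savefig(item + "_frequencies.png")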

One important thing to note is that we have seen some district leaders direct their teachers to enact the curriculum strictly "by the book." Yet in our work, we have observed that teachers consistently make adaptations to meet the needs of their students in their particular settings, and not all of these adaptations are negative. We are still in the process of learning which types or patterns of implementation are optimal, and which types of adaptations are useful versus detrimental. These are issues that we continue to explore in our research. Only by continuing to gather and analyze data, repeatedly examining implementation patterns and outcomes across different contexts, will we as a field be able to learn the key ingredients, the range of acceptable adaptations, and the environments that best support student achievement.