At SPARK, we see blended learning and the use of technology as tools to leverage, not as ends in themselves. As such, we measure the effectiveness of our blended learning model by looking at student achievement results and satisfaction/engagement surveys from parents, staff, and students. We aren't willing to subject some students to blended learning and leave others aside, so I doubt we will ever be able to make meaningful comparisons between achievement with and without technology. That's fine, though, because blended learning is a whole greater than the sum of its parts. Our models are not just about technology implementation; they also include culture-setting, behavior management, social-emotional development, peer and teacher relationship-building, and much of the other soft "glue" that makes the whole model effective.
For me, it starts with selecting the right data points. For example, if your blended program is designed to improve students' reading skills, you might use iReady, SRI, or NWEA MAP. You would run a diagnostic, then run your blended program, and then administer the assessment again, perhaps a month later, to see whether scores have improved. The tricky part is selecting the right data points, because you need to be able to collect the data efficiently and have it reported in a usable form.
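The diagnostic-then-reassess step above can be sketched as a simple growth calculation. This is a minimal illustration, not any vendor's reporting format: the student IDs, scores, and the idea of keying pre/post exports by student are all hypothetical assumptions for the example.

```python
# Hypothetical pre/post diagnostic scores keyed by student ID; in practice
# these would come from an assessment platform's export (e.g. a CSV download).
pre = {"s01": 540, "s02": 610, "s03": 575}
post = {"s01": 565, "s02": 612, "s03": 600}

def growth_report(pre_scores, post_scores):
    """Return per-student point gains and the average gain across students."""
    gains = {sid: post_scores[sid] - pre_scores[sid]
             for sid in pre_scores if sid in post_scores}
    avg = sum(gains.values()) / len(gains) if gains else 0.0
    return gains, avg

gains, avg_gain = growth_report(pre, post)
print(gains)     # per-student point gains between diagnostic and re-test
print(avg_gain)  # average gain for the group
```

Keeping the calculation this simple is deliberate: the hard part, as noted above, is getting the data out of the assessment tool in a usable form, not the arithmetic.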
Identify the key factors in implementing the program with fidelity, along with the specific skills and standards the program claims to support. Hold staff accountable for implementing programs with fidelity. This includes administrators building master schedules that allow appropriate blocks of content time.
Then, align external pre-, mid-, and post-assessment measurements to monitor progress. This will help determine whether the skills being worked on in a program are being generalized and mastered outside of the program.
If progress is not being made, then adjusting time or changing programs is necessary to personalize the learning pathway for each child. This can happen within an MTSS structure that looks at a variety of measurements, including formative, benchmark, and summative assessments; these can be technology- or non-technology-based.
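The adjustment logic described above can be sketched as a small rule over multiple measures. This is a hypothetical sketch, not a real MTSS implementation: the measure names, growth numbers, and the majority-below-threshold rule are all illustrative assumptions.

```python
# Hypothetical minimum acceptable point gain per measure; a real team would
# set thresholds from local norms, not a constant like this.
MIN_GROWTH = 5

# Illustrative growth data per student across formative, benchmark,
# and summative measures (technology- or paper-based alike).
students = {
    "s01": {"formative": 8, "benchmark": 12, "summative": 7},
    "s02": {"formative": 2, "benchmark": 1, "summative": 3},
}

def needs_adjustment(measures, min_growth=MIN_GROWTH):
    """Flag a student when a majority of measures fall below the threshold."""
    below = sum(1 for gain in measures.values() if gain < min_growth)
    return below > len(measures) / 2

# Students flagged here would get more time or a program change.
flagged = [sid for sid, m in students.items() if needs_adjustment(m)]
print(flagged)
```

The point of looking at several measures at once, rather than any single score, is the same one made above: a program's own internal data should never be the sole basis for keeping or cutting it.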
I have never been shy about asking program vendors and strategists how they can help us determine, using our own external data measurements, whether a program is effective. When we have not seen effectiveness based on external measurements, we have discontinued the program.
That's a really good question. In Lewisville ISD, we look at the following indicators: course completion, grade distribution, state assessment data, and enrollment demographics. Additionally, we look at parent and student survey data. We also conducted an interesting study this year in which we compared a teacher's face-to-face classes and blended classes on the indicators above. We found that the blended classes outperformed the traditional classes.
I look at student growth on the SBAC. I'm blending because I'm trying to close an achievement gap, so in theory the pace and personalization tighten my instruction. On the surface, my students might collaborate, engage with e-learning tools, and select activities, but in the end I need to show that quality instruction is occurring by design.