That's a really good question. In Lewisville ISD, we look at the following indicators: course completion, grade distribution, state assessment data, and enrollment demographics. Additionally, we look at parent and student survey data. We also conducted an interesting study this year in which we compared a teacher's face-to-face classes with that same teacher's blended classes, using the data indicators above. We found that the blended classes outperformed the traditional classes.
This is such an important question. We have a couple of organizing frameworks for thinking about this. First, when you design a blended program, what problem are you trying to solve? Some blended programs are initially just trying to expand access; others are looking to target specific student populations to bolster test scores. Measuring a program against its intended purpose is key - otherwise we may hold early implementation to too high a bar and kill innovation before it can take off.
It's also worth noting that if you're using blended learning to personalize learning, some blended approaches may be working really well for some students and not for others. Traditionally, if an intervention only works for a subset of students, we throw it out - but if you can measure blended learning results at the individual student level, you can start to offer a menu of experiences that fit different students' preferences and needs.
Here's a video where we talk through some of these distinctions: http://christenseninstitute.org/blog/unpacking-whether-blended-learning-works/