That being said, I am a professor, and every now and then I should probably profess something. So, for the record: I am adamantly opposed to the use of Value Added Models to measure teacher effectiveness.
For those who don't know, Value Added Modeling (or VAM, as I'll call it from here on out) is essentially a way of looking at student test scores and trying to figure out how much of those scores is attributable to the teacher. In theory, students with better teachers will perform better than they did the year before, and they'll perform better relative to other students with other teachers. VAM seeks to use all that comparative data to isolate each teacher's unique contribution to students' scores. This way, we can measure how much value a teacher has added to students' education, and we can make staffing and/or training decisions accordingly.
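To make that more concrete, here's a rough sketch of what a typical value-added specification looks like. (This is generic textbook notation for illustration; it's not the particular model any given state or district uses.)

$$
A_{it} = \lambda A_{i,t-1} + X_{it}\beta + \tau_{j(i,t)} + \varepsilon_{it}
$$

Here $A_{it}$ is student $i$'s test score in year $t$, $A_{i,t-1}$ is the same student's score from the year before, $X_{it}$ is a set of student characteristics the model controls for, and $\tau_{j(i,t)}$ is the estimated effect of the student's teacher $j$: the "value added." Everything the model fails to capture lands in the error term $\varepsilon_{it}$, and whether a given influence ends up in $\tau$ or in $\varepsilon$ is exactly the question.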
On the surface, this is not a bad idea. Teachers are responsible for student learning, and we are absolutely justified in establishing systems that assess teachers' effectiveness. Student performance data is a vital part of this process. In fact, teachers already use student data to evaluate and adapt their own teaching. In Washington State, the edTPA, TPEP, and ProTeach (not to mention National Board Certification) all operate according to this central premise: effective and proficient teachers establish and use systems for collecting and analyzing student data, and they use their analyses to adjust and refine their own practices. TPEP, the official evaluation system mandated for use by school districts, adds a supervisory component to this: it's not enough for a teacher to be able to do this on her own; she has to be able to prove it to her boss.
So it's not like teachers aren't using student data, and it's not like they're not being held accountable for student performance. In that sense, adding VAM to teacher evaluations might be overkill, but that's not really the problem.
The trouble with VAM is twofold: 1) it's being used in really stupid ways; 2) it's not necessarily measuring what it claims to be measuring. It's the latter point that I want to address right now.
Increasingly, research is demonstrating that VAM-derived measures of teacher effectiveness might not be assessing the teacher so much as the demographics of the students in the classroom. For example, Newton et al. (2010) analyzed various Value Added Models of teacher effectiveness and found that regardless of the model used, and regardless of which variables were controlled for statistically, the demographics of the students always had a significant correlation with the teachers' effectiveness scores. Specifically, as the number of English Language Learner (ELL) students went up, a teacher's effectiveness score went down. Again, this held even after they had statistically controlled for ELL status, and it happened no matter which Value Added Model they were using. Their conclusion:
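To see how a demographic effect can survive a statistical control, here's a toy simulation of my own (this is not Newton et al.'s analysis, and every effect size in it is invented for illustration). The true teacher effects are built to be independent of classroom demographics, and the model dutifully controls for each student's individual ELL status. But a classroom-level effect of ELL concentration is left out, and since it's constant within each classroom, it gets absorbed straight into the teacher estimates:

```python
# Toy simulation: a student-level ELL control doesn't remove a
# classroom-level ELL effect, so it leaks into the teacher estimates.
# All effect sizes here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_teachers, class_size = 200, 25

true_effect = rng.normal(0.0, 0.5, n_teachers)  # independent of demographics
ell_share = rng.uniform(0.0, 0.8, n_teachers)   # classroom ELL concentration

teacher_id = np.repeat(np.arange(n_teachers), class_size)
is_ell = (rng.random(n_teachers * class_size)
          < ell_share[teacher_id]).astype(float)

# Year-to-year gain: teacher effect, an individual ELL effect (which the
# model below controls for), a classroom-level ELL effect (which it omits),
# and noise.
gain = (true_effect[teacher_id]
        - 0.5 * is_ell
        - 2.0 * ell_share[teacher_id]
        + rng.normal(0.0, 1.0, n_teachers * class_size))

def class_means(x):
    """Per-classroom means (each classroom has exactly class_size students)."""
    return np.bincount(teacher_id, weights=x) / class_size

# Fixed-effects ("within") estimator: demean gain and ELL status within each
# classroom, then regress to recover the individual ELL coefficient.
ell_dm = is_ell - class_means(is_ell)[teacher_id]
gain_dm = gain - class_means(gain)[teacher_id]
beta_ell = (ell_dm @ gain_dm) / (ell_dm @ ell_dm)

# Each teacher's "value added" is what's left of the classroom mean gain
# after subtracting the modeled individual ELL effect.
est_effect = class_means(gain) - beta_ell * class_means(is_ell)

print("corr(true effect, ELL share):      %+.2f"
      % np.corrcoef(true_effect, ell_share)[0, 1])
print("corr(estimated effect, ELL share): %+.2f"
      % np.corrcoef(est_effect, ell_share)[0, 1])
```

On this setup, the correlation between the true teacher effects and ELL share comes out near zero, while the correlation between the estimated effects and ELL share comes out clearly negative: the teachers with more ELL students look worse even though, by construction, they aren't. That's the general shape of the problem. Controlling for a variable at the student level doesn't guarantee you've removed its influence at the classroom level.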
"We simply cannot measure precisely how much individual teachers contribute to student learning, given the other factors involved in the learning process, the current limitations of tests and methods, and the current state of our educational system."
I could go on, but that sums it up pretty well. Granted, the arguments both for and against VAM are more complicated than I've outlined here. There are some compelling arguments in favor of it (e.g., that it works if you measure students' year-to-year gains instead of using static scores), and there are compelling counterarguments to those arguments (e.g., that even rates of learning might covary with certain demographic measures). In any case, we have both theoretical and empirical reasons to doubt the suitability of VAM as a measure of teacher effectiveness. The problem isn't that we're trying to hold teachers accountable; the problem is that we may be trying to hold them accountable for something that is simply out of their control.
So, of course, it makes perfect sense that the federal government would mandate VAM as part of each state's teacher evaluation system, and it makes even more sense that they would try to extend VAM further, using student test scores to measure the value added by a teacher's preparation program.
That's right: the Department of Education is pushing to use K-12 student test scores to measure and rank the effectiveness of teacher education programs. We'll talk about this particular stupidity in part two.