Tuesday, February 10, 2015

The Trouble with Value Added Models: Part One

Although I'm an educational researcher, I'm not given to weighing in publicly on various policy debates. It's not that I don't have opinions; it's just that all the yelling and ad hominem fallacies are really, really boring.

That being said, I am a professor, and every now and then I should probably profess something. So - for the record, I am adamantly opposed to Value Added Models of measuring teacher effectiveness.

For those who don't know, Value Added Modeling (or VAM, as I'll call it from here on out) is essentially a way of looking at student test scores and trying to figure out how much of those scores is due to the teacher. In theory, students with better teachers will perform better than they did the year before, and they'll perform better relative to students with other teachers. VAM seeks to use all that comparative data to isolate each teacher's unique contribution to students' scores. This way, we can measure how much value a teacher has added to students' education, and we can make staffing and/or training decisions accordingly.
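To make the idea concrete, here's a toy sketch of the simplest possible version of the "value added" logic: compare each teacher's average year-to-year student gain to the overall average gain. The teacher names and scores are entirely made up, and real VAMs are far more elaborate (statistical controls, multiple years of data, shrinkage estimators), but the core comparative move looks something like this:

```python
# Toy illustration of the simplest "value added" idea: a teacher's value
# added is her students' mean score gain minus the overall mean gain.
# All teachers and scores below are hypothetical, for illustration only;
# real Value Added Models add covariates and far more machinery.

from statistics import mean

# (teacher, prior_year_score, current_year_score) -- fabricated data
records = [
    ("Teacher A", 60, 70), ("Teacher A", 55, 68), ("Teacher A", 72, 80),
    ("Teacher B", 58, 62), ("Teacher B", 65, 66), ("Teacher B", 70, 74),
]

def value_added(records):
    """Each teacher's mean student gain minus the overall mean gain."""
    gains = {}
    for teacher, prior, current in records:
        gains.setdefault(teacher, []).append(current - prior)
    overall = mean(g for gs in gains.values() for g in gs)
    return {t: mean(gs) - overall for t, gs in gains.items()}

print(value_added(records))
```

Even this stripped-down version hints at the problem the research keeps finding: if the things driving those gains (say, classroom demographics) aren't fully captured by the model, they end up baked into the teacher's number.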

On the surface, this is not a bad idea. Teachers are responsible for student learning, and we are absolutely justified in creating and establishing systems that assess teachers' effectiveness. Student performance data is a vital and valuable part of this process. In fact, teachers already use student data to evaluate and adapt their own teaching. In Washington State, the edTPA, TPEP, and ProTeach (not to mention National Board Certification) all operate according to this central premise: effective and proficient teachers will establish and use systems for collecting and analyzing student data, and they will use their analyses to adjust and refine their own practices. TPEP, which is the official and mandated evaluation system used by school districts, adds a supervisory component to this. It's not enough for a teacher to be able to do this on her own; she has to be able to prove it to her boss.

So it's not like teachers aren't using student data, and it's not like they're not being held accountable for student performance. In that sense, it's possible that adding VAM to teacher evaluations might be overkill, but that's not really the problem.

The trouble with VAM is twofold: 1) it's being used in really stupid ways; 2) it's not necessarily measuring what it claims to be measuring. It's the latter point that I want to address right now.

Increasingly, research is demonstrating that VAM-derived measures of teacher effectiveness might not be assessing the teacher so much as the demographics of the students in the classroom. For example, Newton et al. (2010) analyzed various Value Added Models of teacher effectiveness and determined that regardless of the model used, and regardless of which variables they controlled for statistically, the demographics of the students always had a significant correlation with the teacher's effectiveness scores. Specifically, as the number of ELL (English Language Learner) students went up, the teacher's effectiveness scores went down. Again, this held even after they had statistically controlled for ELL status, and it happened no matter which Value Added Model they were using. Their conclusion:

"We simply cannot measure precisely how much individual teachers contribute to student learning, given the other factors involved in the learning process, the current limitations of tests and methods, and the current state of our educational system."

I could go on, but that sums it up pretty well. Granted, the arguments both for and against VAM are more complicated than I've outlined here. There are some compelling arguments in favor of it (e.g., it works better if you measure students' year-to-year gains instead of using static measures), and there are some compelling counterarguments to those arguments (e.g., even rates of learning may covary with certain demographic measures). In any case, we have both theoretical and empirical reasons to doubt the suitability of VAM as a measure of teacher effectiveness. The problem isn't that we're trying to hold teachers accountable; the problem is that we may be trying to hold them accountable for something that is simply out of their control.

So, of course, it makes perfect sense that the federal government would mandate VAM as part of each state's teacher evaluation system, and it makes even more sense that they would try to extend VAM to use student test scores as a measure of the value added by the teacher's preparation program.

That's right: the Department of Education is pushing to use K-12 student test scores to measure and rank the effectiveness of teacher education programs. We'll talk about this particular stupidity in part two.


Monday, February 9, 2015

I have a new article out in the Australian Journal of Teacher Education. It's the first in a series of articles I have coming out from my most recent line of research.

Here's the short version:

- Person A wants to be a teacher. Person A also has a pretty clear set of expectations about what that's going to be like.

- Those expectations are most likely wrong. We profs call them "misaligned."

- If Person A doesn't revise her expectations, she's going to run smack into a brick wall of reality once she enters a classroom. It's going to hurt. It's going to be disillusioning. And, unfortunately, it's probably going to result in her leaving the profession early. This smackdown with reality is referred to as "practice shock."

- So, teacher preparation programs like the one I work in will do their students a great deal of good if they can help them confront and revise their expectations before they get into the field. That way, when they do experience practice shock (because there's no way to avoid it completely), they'll be able to use it as a platform for growth instead of becoming disillusioned and burning out.

But how can preparation programs do that? That's what my research is about.

Here's the article. Remember, this is just the first step in a long research process. I don't have all the answers. Heck - at this point, I don't even have all the questions.

And here's a picture of a kangaroo, because, you know, Australia.




Thank you to the faculty and staff in the School of Education at Seattle Pacific University. I couldn't be prouder to call myself an SPU alumnus, and I'm humbled and honored.

Teaching: Three Simple Words


The inner sanctum

Teach.

Adjust.

Repeat.

How hard can it be?

I'm an assistant professor of education. I have a doctorate in curriculum and instruction from a well-respected institution. I have years of teaching experience at both the high school and collegiate levels. I've published in the right journals and presented at the right conferences. I'm the go-to guy, at the head of the class, and, if you squint your eyes real tight, I'm an expert in my field.

But here's the truth: I'm still a beginner.

This is what years of professional experience have taught me: teaching is hard. Though aspects of it get easier, it never gets easy, and mastery is always out of reach. It's a profoundly frustrating endeavor, particularly in a consumer-based culture, because the payoff moment when the perfect student rolls out the factory door never arrives. It's like running a marathon where the finish line moves at the same pace you do. You sweat and agonize for hour upon hour, only to discover that you haven't left the starting line. There's no summit to reach and no final clue to decipher. There's no pot of gold at the end of the rainbow, because the rainbow never ends. It's a road trip with no destination, and if you ask "are we there yet?", the answer will always be "no."

And that's exactly as it should be. This is both the simple truth and profound mystery of teaching. Expertise isn't a matter of getting it right; it's a matter of getting it wrong in a systematic fashion. It's not about knowing all the answers. It's about being hyper-aware of what you don't know. It's about making plans based on the best information you have and then gathering new information in a rigorous and reliable fashion. It's about absolute fidelity to your one little corner of the puzzle even though you'll never get to see what it looks like when it's finished.

So teaching is hard. I tried to find a witty wrap-up for this post, but that pretty much sums it up right there.