

Two years in a row now I have suffered through what I like to call The Great Student Growth Goal Debacle, and I finally have to say something about it publicly. Cuz it’s driving me absolutely ass-bat crazy.
Good teachers set goals for themselves, but all teachers have always set goals for their bosses, the principals and vice principals charged with evaluating the effectiveness of the teachers working in their schools. These goals, traditionally, have been personal and loosely organized around broad categories like curriculum, student learning, student success, and professional development. I’ve had goals in the past as broad as the successful implementation of new curriculum, or the piloting of a new kind of assessment, or ways in which I would like to facilitate staff development or collaborate with my colleagues, or ways in which I would like to reinvigorate my practice. Sometimes these goals would include my participation in some kind of coursework or workshop that would have minimal influence on my daily practice, but every once in a while, as with my discovery of The Courage to Teach work inspired by Parker Palmer, a developmental experience would totally transform my work and my life. Such has been the tradition of teachers setting goals for the school year—an imperfect, hit-or-miss system, flawed mostly because (someone concluded) the evidence for growth could often be vague, sketchy, too personal, or worse: the goals often had no measurable impact on the kids under the teacher’s charge. How, specifically, did the teacher’s goal result in improved student learning or achievement? It’s an important question; no argument from me about that. But somewhere along the line, the Powers That Be decided that the nebulousness of teacher goal setting and the resulting dearth of quantifiable data on student learning were a huge problem.
Enter the Student Growth Goal. Over the last two or three years we have moved to a system in which teachers write three big goals: two “student growth goals” and one goal that focuses on professional development. For the student growth goals, one must generate some baseline data, a starting point after which some teaching will occur, followed by an ending point at which another assessment takes place that supposedly measures the same skill after the teaching. The baseline data is compared to the final assessment data and, voila! We now know how student learning has improved. Right? It makes a great deal of sense, but unfortunately, in the actual implementation, tracking, and interpreting of the data that results from these kinds of student growth goals, a clear picture of student growth does NOT emerge. Mostly what does occur is the gnashing of teeth and the pulling out of hair and the loss of sleep and the extreme exasperation one might feel about the complete waste of otherwise perfectly sound and bountiful teacher energy. Normally this would be true for me, and I would now be, as I was at this time last year, totally aggravated and stressed. I would not have time to be writing this essay. This year I have seniors, four groups of which will be taking their finals a full two weeks before the underclassmen. So I have time this year, not only to complete my damn student growth goals on time, but to reflect on the experience for my own and perhaps for others’ edification. And truth be told, it’s stressful this year anyway, not because I worry I won’t get them done, but because of how absolutely frustrated I am with the process and with the results.
I have concluded that I do not like student growth goals. There are several reasons why writing student growth goals, at least as the practice stands now, is bad for teachers and a seriously flawed way to evaluate teacher effectiveness.
Student growth goals often measure a minute snapshot from the whole plethora of things a student might be learning or skills a student might be developing. Let’s take a look at the goal I wrote for my College Writing students: Elaboration of Evidence—the way a student is able, in an expository essay, to string together a logical chain of proofs for their claims. Granted, it’s an important part of being able to write effectively about any subject, but it is one particular aspect of a task that is infinitely more complex than this single trait, and certainly more nuanced and interesting. Why did I choose Elaboration of Evidence? Because it’s relatively easy to quantify—and that leads to the next problem with the process of writing student growth goals:
It rewards teachers, or at least encourages them, to write goals for which it would be virtually impossible NOT to show student growth. On the first day of class I asked my students to write an on-demand essay about how to write a good essay, making sure that their claims were clear and that they provided an elaboration of evidence for those claims. I cannot take credit for this assignment—the course was assigned to me at the eleventh hour and I was taking a helpful cue from a colleague. At any rate—it was the first day of class in September, I gave them a totally humdrum writing prompt with vague or minimal instructions, I applied high standards to evaluating the work, which took forever by the way, and again, voila! They did terribly. It would be a walk in the park to show how much they had improved over the course of the entire semester. Not that it’s a bad goal. It’s a fairly admirable goal to show how students have improved in this particular writing skill. But it’s disingenuous, to say the least, to use this kind of baseline data as a starting point. It would be like a math teacher testing students on the Pythagorean Theorem before she had even introduced the concept. Had I not had student growth goals hanging over my head, I wouldn’t have given that particular assignment, and I certainly wouldn’t have spent the kind of time on it that I did, diligently working my tail off to collect assessment data that would impact their grades only minimally (being a formative assessment) unless they outright just didn’t do it. So there’s this:
Student growth goal management and documentation is unwieldy and time-consuming for such a tiny snapshot of a student’s “learning.” To score this one formative assessment, I had to take a sick day in order to work my way carefully through a single class of 35 students writing terribly about how to write a good essay, looking only at this one trait.
Because teachers are not trained research technicians, a lot of this unwieldiness and wasted time is spent gathering and reporting bad data. The final data I collected at the end of first semester, four months ago, data that I am just now trying to make sense of, was fatally flawed: the expectations were different, and the final paper included research and documentation and was turned in at the very end of the semester, giving me a ridiculously inadequate time frame for grading and forcing me to award students a holistic score negotiated through self-evaluation and teacher moderation. Comparing the skill of Elaboration of Evidence from the first sample with this massive number in the grade book for the research paper was clearly an apples-and-oranges kind of deal. So today, I made some shit up by turning both the pre-assessment and the final assessment into grades, not to be recorded in the grade book, mind you, but to be written down in side-by-side columns for easy quantification. Big, dumb, quantifiable grades. Letters and percentages. As a measure of student learning? Meaningless.
My second student growth goal fared no better. I spent a number of hours, maybe three or four, over the last week gathering, accumulating, and interpreting equally bad, old, inconclusive data from assessments in IB English in order to fill out a form that will take another hour or more of my time, so that I can report, for the student growth goals I wrote for my IB English students, that out of 40 kids, 16 improved, 10 stayed the same, 2 got worse, and 12 students, for whom I have no baseline data, ranged from inadequate to excellent on the trait of Appreciating the Writer’s Choices in a work of literature. First of all, this data is just stupid. It doesn’t take into account any of the answers to these common-sense questions: Were students responding to the same piece of literature? No. Was there an effort to make sure that the piece they wrote about the second time was equal in difficulty or complexity to the first? No. Of the two students who did worse the second time, what did you do to make them worse at a skill they’d been practicing all semester? I don’t know. Why don’t you have baseline data for a full 12 of the 40 students in the sample? Because kids were absent, or they submitted the work late, or they didn’t follow instructions for submission to Google Classroom, or because I’m not skilled at keeping 60 balls in the air at once. If this data were actually used to evaluate my teaching, I would need, in my 26th year as a high school English teacher, some remediation. Certainly, I am not a “distinguished” teacher. I may not even be, by this measure, “proficient!” And this feels about as bullshitty as it gets.
I know what they say, and I believe it: good teaching makes a difference. But what’s good teaching, and what kind of difference does it make, and for how many kids, and can these differences ever really be measured objectively? Too many variables beyond the teacher’s control and beyond the impact of the teacher’s teaching can influence a student’s success on an assessment and in school generally. And that is primarily why, along with all the other issues I’ve raised above, standardized test results, and student growth goals based on data (at least the kind that I’ve gathered), should never be used to evaluate a teacher’s effectiveness. Isn’t it better to have goals that are meaningful but not quantifiable than to have goals that are objectively measurable but meaningless because the data is often faulty or fabricated or both? I am of the opinion that the best work teachers do and the best learning students experience defy quantification. And I am so thankful, ever so thankful, that I work where I do, in this particular school building in this particular district, because as far as I know, no teacher has ever been disciplined or reprimanded in any way as a result of showing negligibly positive or clearly negative results from their student growth goals. But I know for a fact that my school and my district are not typical, and that for other teachers this process may be not just aggravating but diminishing, demoralizing, fear-inducing, and livelihood-threatening. That’s terrifying to me.
No one has trained teachers how to do this kind of thing well, let alone established whether it’s even possible to do it well given a student load of 160 to 200 kids, and that’s another huge part of the problem. It’s just a thing that’s landed in our laps after it’s landed in our administrators’ laps after it’s landed in the laps of our administrators’ bosses. And no one questions the damn thing. We just do it poorly and move on.
The hours that I have spent writing down numbers in columns say nothing about what I’ve taught, how I’ve taught it, what students have learned, what learning actually IS, how students feel in my room, or what they will remember about literature, about writing, about me and about my interactions with them after they go. I’d argue that these things, ultimately, are the things that determine the effectiveness of a teacher. After that, I’ll tell you what you can do with student growth goals.