03/30/2013 | ROBERT J. MARZANO Ph.D and MICHAEL TOTH
But 2008 signaled the beginning of new reforms from many angles that would have a significant impact on K-12 teachers across the nation. School systems found themselves under a microscope, and for once, reformers generally agreed on the fix. The focus of the new reforms was squarely on how to best and most accurately measure teacher performance.
Driving this new angle of attack was the publication of two biting policy reports on teacher evaluation and retention, “Rush to Judgment” and “The Widget Effect.” Both papers argued that teacher evaluation systems were in crisis; most systems failed to address the quality of classroom instruction or measure students’ learning. Teachers were often evaluated on a binary system in which 90 percent of teachers were rated highly effective or effective. School districts were squandering enormous potential for student and teacher improvement.
Spurred by these assessments, the Bill and Melinda Gates Foundation embraced the cause, spending $290 million on pilot projects “to transform how teachers are recruited, developed, rewarded, and retained.” The Gates Foundation added another $45 million in research for its Measures of Effective Teaching (MET) project, aiming “to better understand and define what makes a teacher effective.” The Obama administration dangled the sweetest carrot of all, Race to the Top: a $4.35 billion grant program to spur a nationwide education overhaul, offering states significant funding if they were willing to rebuild their current evaluation systems.
The pace of change was fast and furious. Too fast, some argued. Just a few years later, many districts across the U.S. are still struggling to keep up. The recent Elementary and Secondary Education Act (ESEA) waiver process, which effectively allows states to opt out of No Child Left Behind in exchange for rigorous plans to improve instruction, has enticed more states to begin redeveloping their evaluation systems. Within this flurry of activity, vital questions are still unresolved. Is teacher evaluation meant to measure teacher performance? Are teachers to be retained, promoted, or fired based on the data collected by evaluators? Or is the goal of teacher evaluation to increase teacher expertise year by year over the course of a long career? And if helping teachers improve is the goal, are current evaluation systems built to meet that goal?
Not surprisingly, initial conceptions of teacher evaluation models, and the instruments, tools, and data used to assess teacher performance, have already evolved since the conversation began. Education experts are beginning to identify important trends. One of the most important is an appreciation for evaluation models that measure and reward teacher growth — and that actually give teachers the tools to improve.
A Model to Develop Teacher Expertise
Not surprisingly, as administrators began to implement teacher evaluation systems around the U.S., researchers were making efforts to quantify how well those systems worked. The Bill and Melinda Gates Foundation’s three-year MET study concluded with the January 2013 release of its final report, Ensuring Fair and Reliable Measures. Districts in Oklahoma, New Mexico, Florida, and elsewhere were conducting classroom-based studies to measure whether specific teaching strategies caused measurable increases in student achievement.
In 2009, the state of Florida worked with Robert Marzano to develop his long-standing and widely used teacher development framework into the state-approved teacher evaluation model. From the first, Florida Department of Education officials saw the need for a model that would do more than merely measure teacher performance. The Gates Foundation’s MET project had revealed significant error in measurements of teacher skill based on classroom observations. The team acknowledged that a model designed with measurement as its only goal would be flawed.
Thus, the goal was to develop a model that accurately and objectively gauged teacher expertise as observed during individual lessons. Beyond that, the team was committed to a model that would help teachers progress during the course of their careers, from enthusiastic — if inexperienced — classroom neophytes straight out of college to accomplished professionals, step-by-step, year-by-year.
A model built to successfully develop and sustain teacher expertise would be both comprehensive and specific. A comprehensive model would be broad enough to identify all the classroom behaviors associated with raising student achievement. But the model would also have to be specific enough to give teachers tangible classroom strategies to develop skills. Those “high-probability strategies,” so-called because they were likely to drive student achievement, would, in addition, be tied to specific types of lessons to produce best results.
The Marzano Teacher Evaluation Model identified 41 high-probability strategies that would cover a range of teacher behaviors in different types of lessons. Each research-based strategy had a clear correlation to driving student achievement. The model also put in place a focused system of regular feedback aligned to the 41 strategies. This feedback loop would produce continuous improvement of instruction through a model of professional development called deliberate practice, drawn from the research of Swedish cognitive psychologist K. Anders Ericsson. Teachers would participate in intense rounds of practice, feedback, reflection, and more practice. Teachers identified specific areas of weakness and had access to focused tools for development of those skills. The model acknowledged and rewarded teacher growth. And it gave teachers a stake, a large stake, in driving their own development.
The model returned autonomy to teachers, restoring the self-sufficiency eroded by years of haphazard reforms. Teachers used a developmental scale to guide and track their own skill development. With help from supervisors and coaches, teachers could pinpoint their current skill levels when they used specific strategies and set goals to improve. Each year teachers identified a set of classroom strategies to work on. They would begin to feel like professionals responsible for building and maintaining their expertise.
The evaluation system had to be transparent. There were no “gotchas!” in the evaluation cycle. Teachers, coaches, and school leaders could track progress all along the way. Instead of a single yearly observation, there would be many. Tools were built into the model to show teachers how to hone their skills, compare their performance with others, and collaborate with colleagues. Teachers using the model, scoring themselves on the scale, were remarkably candid about the areas of their practice that needed improvement.
Most importantly, instead of being adversaries, teachers and supervisors were partners working toward achievement of a common goal: improved classroom instruction to generate gains in student achievement.
Four Domains for Continuous Improvement of Instruction
A structure for continuous improvement of instruction threaded through the Marzano model’s four domains:

Domain 1 — Classroom Instruction: teachers used classroom strategies for specific lessons.

Domain 2 — Planning and Preparing: they planned individual lessons with careful attention to clear learning progressions, scaffolded those lessons into units, and built units toward annual learning goals.

Domain 3 — Reflecting on Teaching: they analyzed and adjusted their performance and planning based on classroom experience, reflected on whether students had met learning targets, and developed focused professional growth plans.

Domain 4 — Collegiality and Professionalism: they collaborated with colleagues and mentors to share their strategies and experiences.

The model functioned as a working plan for professional development as teachers cycled through the four domains.