First thing on a Monday, and the first tweet I see is Larry Ferlazzo’s response to today’s New York Times column on Scientifically Tested Tests. Even though my first magna-cup of killer coffee is still just steaming next to me, this one is worth a response. Larry is right that some of the suggestions from Williams College prof Susan Engel are “decent.” I would go so far as to say they are both logical and (sadly) innovative. Using a student’s own writing, and using a flexible topic that draws on his/her own expertise in that piece, is a terrific way to see student thinking skills, writing skills, and core achievement in language. Larry is also right that one or two of the suggestions are a bit odd. If we were to “test” reading knowledge through a name-it-and-claim-it author contest, the same teachers who teach to the test would make kids memorize lists of authors.
The column suggests two aspects of testing that can make it meaningful (my own emphasis and analogies added):
1.) Writing is a powerful and high-def way to see the details of a student’s accomplishments and understanding. As anyone involved with writing knows, writing is thinking. Writing about reading adds another layer of complexity by using one brain-rich process (writing) to discuss and reveal what is occurring in another (reading). Writing is thinking, and reading is a miracle of the human brain. Put the two together, whether in print, in electronic form, or in some mediated version, and you have a 1080p window into student learning and thinking.
2.) There exists a mutually agreed-upon “standard” for what defines “well-educated children.” Professor Engel assumes that everyone agrees on the standard she describes. I wonder whether they do. I read standards from states, professional organizations, and the Common Core, and most of the time I say, “Of course we want our kids to be able to do that.” But in translating those standards into assessments, we risk either analyzing each one down to the micron, and thus trivializing it, or losing the global nature of thinking (and writing) through subdivision. The whole is greater than its parts, and perhaps that wholeness is what we have lost.
Larry wonders aloud how such high-altitude assessment can help inform teaching, and that is a good question. Having taught very, very bright kids for many years, I found that one of the most stimulating and intellectually challenging parts of my job was analyzing what was going on in each child, his “strengths and needs,” when writing Gifted IEPs. Essentially, I was using global analysis, drawing on a great deal of writing, selected portfolio pieces, and regular observations recorded in both my journals and the students’ own journals, to describe where each child was now and where he needed to go next. The decisions were never solely mine; there was input from parents and from the students themselves, too. They were daunted by a 6-by-6-foot banner in my classroom that simply asked, “What do YOU think?” and they took it very seriously.
But here is the rub: How do you take such a process of analyzing writing, selected student work, anecdotal input, and more, and make it consistent from student to student, teacher to teacher, school to school? How do you make it “scientific”? How do you enter it into a relational database for comparative analysis? How do you find TIME for such difficult depth? And in practical terms, how do you teach teachers and administrators to do it in a way that is meaningful to others beyond your classroom door? Such work requires analytical writing by the adults, a skill that, alas, they may not have learned well in school, either.
I keep coming back to writing as the 1080p view of accomplishment. Now, if we could start inspiring kids to write every day instead of once a marking period, perhaps we’d have taken a first step.