The 2005 ATP "Innovations in Testing" conference provided glimpses of the latest in curriculum and testing technology, straight from the industry professionals who research, develop, design and sell it.
At a table during the first day's luncheon, I found myself with representatives from the Buros Center for Testing and--I'm serious--the Institute of Mental Measurements at the University of Nebraska. Two senior faculty members had brought along two star grad students in psychometrics, both of whom were trying to pump me up to enter the field. "The demand is so great," one told me. "I've already been guaranteed a job. This is very common now. The money is incredibly good." (Little did he know: I was just one of those "teacher" people--unqualified, as Lee Jones of Riverside Publishing would say, to write her own test items.)
Setting aside my objections to relying solely on automated testing to evaluate individual learning--and individual people--I thought about how the wave of national demand might in fact be making it increasingly difficult to find highly qualified people to design the tests teachers are required to administer. (Forget the "highly qualified" teacher: what about the "highly qualified" psychometrician?)
As in any industry, surging consumption of a product does not guarantee that the manufacturing process will protect quality as it scales to meet demand. In fact, inflated demand and frenetic production increase the probability of error. And right now I'm only talking about test design. You may remember the gentleman from Thomson Prometric, who said he could "curl my hair" with stories about industry complicity in designing poor (or marginally ethical) workplace test instruments. That doesn't even touch test administration, data management, and score reporting. How many times has your district mixed up test booklets, miscopied items, called committees to re-word in-house assessments, or tried to re-route data that was mis-crunched the first time around? How many times have you asked yourself: Given the errors and inconsistencies I can see without looking very hard, what else is underneath? Why would conscientious teachers be labeled "defiant" or "insubordinate" (even by union leadership) if they questioned the instrument itself--or refused to administer it?
Consider: I saw one demonstration of cutting-edge teacher certification assessments now used in England. In what's called "simulation" testing, would-be teachers must prove themselves competent in spelling, statistics, word processing, spreadsheets, database management, PowerPoint, email, and Internet navigation. The software tracks and logs the number and order of steps chosen to complete each task, then computes a level of teacher efficiency. This final threshold for UK certification reminds me of temp-worker tests I took between jobs in college. The emphasis on simple clerical skills (what Susan Ohanian has called "paraprofessional" skills) suggests a decreasing expectation that teachers will ask questions, innovate, or create. The consumer model demands obedience from teachers. And here's the rhetorical trope: the more obedient you are, the more you'll be praised for being an active and creative participant!
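For the curious, here is a minimal sketch of what "logging the number and order of steps" and turning it into an efficiency score could look like. It is an assumption-laden illustration, not the UK system's actual scoring: the TaskSession class, the step log, and the ratio of ideal steps to steps taken are all mine.

```python
# Hypothetical sketch of simulation-test scoring: record each action a
# candidate takes, then compare the total against a reference "ideal"
# step count for the task. All names and the scoring formula are
# illustrative assumptions, not the actual certification software.

from dataclasses import dataclass, field
from time import time

@dataclass
class TaskSession:
    task_name: str
    ideal_steps: int                       # reference step count for the task
    steps: list = field(default_factory=list)

    def log_step(self, action: str) -> None:
        """Record one user action (menu click, keyboard command, etc.)."""
        self.steps.append((time(), action))

    def efficiency(self) -> float:
        """Ratio of the reference step count to the steps actually taken.
        1.0 means the candidate matched the ideal path; extra steps lower it."""
        if not self.steps:
            return 0.0
        return min(1.0, self.ideal_steps / len(self.steps))

# Usage: a spreadsheet task whose "ideal" completion takes 4 steps.
session = TaskSession(task_name="insert_spreadsheet_formula", ideal_steps=4)
for action in ["open_menu", "choose_formula", "select_range", "confirm", "undo", "confirm"]:
    session.log_step(action)
print(round(session.efficiency(), 2))   # 0.67 -- wandering off the ideal path costs points
```

Notice what such a metric rewards: not judgment, not adaptation, just the shortest click-path.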
Wandering through the poster sessions in the hotel lobby between breakout workshops, I found a vast range of quality and polish. Some companies, such as Promissor, Thomson, and Pearson, used top-of-the-line electronic equipment to display their products. A few showcased research presentations were simply PowerPoint print-outs tacked to boards. One woman, who worked in Florida teacher certification testing, handed out packets. She told me that in the previous year, thirty-five thousand tests had been ordered for teacher certification--but that the actual demand had been one hundred thousand. She said the state was having trouble keeping up with the demand, and that it was trying to move toward automated essay scoring by scanning teachers' handwritten texts into computers.
I found three companies pushing automated essay scoring for students: Pearson VUE (with KAT, Knowledge Analysis Technologies), ETS Pulliam (Criterion Online), and Vantage Learning (IntelliMetric). Company spokesmen emphasized that typing was the key to making this most efficient in classrooms, but when I asked how companies dealt with essays when computers weren't available, or with writing in the primary grades, I got this interesting tidbit: handwritten essays can be shipped overnight to India, where they are transcribed at very low cost--with automated scores still returning to the teacher within a day! (Such transcription work was undoubtedly performed by some of the non-PhD caste in India, though Bill Gates had made no reference to this in his speech.) When I expressed doubt, one salesman at Vantage seemed so proud he had to insist. "It sounds inefficient," he said. "But it's affordable and it works. You'd be surprised."
The buy-in for English teachers seems obvious: test corporations can reduce your grading workload. But there's something else, too. If teacher workload can be reduced by automation, why decrease class size? New brands of "Teachnology" can reduce inefficiency by streamlining the human teachers and students out of each other's way. For example, eInstruction was demonstrating its current line of Classroom Performance System (CPS) programs. Individual students use remote-control devices to answer banks of multiple-choice questions over the Internet. (Ownership of these massive "question banks," by the way, is a very big deal for ATP.)
With CPS remotes, teachers can employ LCD displays, PowerPoint and SmartBoards to broadcast formative “assessment practices” on classroom screens while the students click their responses. Then the computer--not the teacher--selects successive questions based on the group’s aggregate results for each item. Every time the class responds to an item, the screen can display a bar graph and percentage of collective results. Instant, outcome-based feedback! (Or: lots of trees but no forest.)
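What does "the computer selects successive questions" actually involve? eInstruction didn't show its internals, but a minimal sketch of that kind of aggregate-driven logic might look like the following. The question bank, the 70 percent threshold, and the rule of repeating a skill until the class clears it are my own illustrative assumptions, not the vendor's algorithm.

```python
# Hypothetical sketch of aggregate-driven item selection: after each item,
# tally the clicker responses, report the percent correct, and choose the
# next question from the same skill if the class fell below a threshold.
# Bank structure and the 70% threshold are assumptions for illustration.

from collections import Counter

QUESTION_BANK = {
    "fractions": [
        {"prompt": "1/2 + 1/4 = ?", "answer": "B"},
        {"prompt": "3/4 - 1/2 = ?", "answer": "C"},
    ],
    "decimals": [
        {"prompt": "0.5 * 0.2 = ?", "answer": "A"},
    ],
}

def percent_correct(responses: list, answer: str) -> float:
    """Share of clicker responses matching the keyed answer."""
    tally = Counter(responses)
    return 100.0 * tally[answer] / len(responses) if responses else 0.0

def next_skill(current_skill: str, pct: float, threshold: float = 70.0) -> str:
    """Stay on the current skill below the threshold; otherwise move on."""
    if pct < threshold:
        return current_skill
    remaining = [s for s in QUESTION_BANK if s != current_skill]
    return remaining[0] if remaining else current_skill

# Usage: 10 students answer the first fractions item; only half get it right.
responses = ["B", "B", "A", "C", "B", "B", "A", "D", "B", "C"]
pct = percent_correct(responses, QUESTION_BANK["fractions"][0]["answer"])
print(f"{pct:.0f}% correct")           # 50% correct -- shown as the bar graph
print(next_skill("fractions", pct))    # fractions -- the class stays on the skill
```

The point of the sketch is what it leaves out: nothing in the aggregate tells the machine which individual students missed the item, or why. Hence the trees without the forest.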
It was amusing that we conference attendees used the CPS remotes to complete evaluations for presentations, because over the course of three days, various people would step up to a microphone somewhere and remind us to “make sure you return the CPS that you picked up by mistake.” Here was a crowd of professional adults and the CPS units were still getting lost, pilfered, and probably broken. I thought: What if these people were teaching five sections of seventh graders every day?
That is, I suppose, an imaginative, funny, and frankly inefficient question.
Monday, October 24, 2005