TESTING THE TEACHERS
DAVID BROOKS | NYT NEWS SERVICE
THERE’S an atmosphere of grand fragility hanging over America’s colleges. The grandeur comes from the surging application rates, the international renown, the fancy new dining and athletic facilities. The fragility comes from the fact that colleges are charging more money, but it’s not clear how much actual benefit they are providing.
Colleges are supposed to produce learning. But, in their landmark study, ‘Academically Adrift,’ Richard Arum and Josipa Roksa found that, on average, students experienced a pathetic seven percentile point gain in skills during their first two years in college and a marginal gain in the two years after that. The exact numbers are disputed, but the study suggests that nearly half the students showed no significant gain in critical thinking, complex reasoning and writing skills during their first two years in college.
This research followed the Wabash Study, which found that student motivation actually declines over the first year in college. Meanwhile, according to surveys of employers, only a quarter of college graduates have the writing and thinking skills necessary to do their jobs.
In their book, ‘We’re Losing Our Minds,’ Richard P Keeling and Richard H Hersh argue that many colleges and universities see themselves passively as ‘a kind of bank with intellectual assets that are available to the students.’ It is up to the students, 19- and 20-year-olds, to provide the motivation, to identify which assets are most important and to figure out how to use them.
Colleges today are certainly less demanding. In 1961, students spent an average of 24 hours a week studying. Today’s students spend a little more than half that time, a trend not explained by changing demographics.
This is an unstable situation. At some point, parents are going to decide that $160,000 is too high a price if all you get is an empty credential and a fancy car-window sticker. One part of the solution is found in three little words: value-added assessments. Colleges have to test more to find out how they’re doing.
It’s not enough to just measure inputs, the way the US News-style rankings mostly do. Colleges and universities have to be able to provide prospective parents with data that will give them some sense of how much their students learn.
There has to be some way to reward schools that actually do provide learning and punish schools that don’t. There has to be a better way to get data so schools themselves can figure out how they’re doing in comparison with their peers.
In 2006, the Spellings commission, led by then-Secretary of Education Margaret Spellings, recommended a serious accountability regime. Specifically, the commission recommended using a standardised test called the Collegiate Learning Assessment to provide accountability data. Colleges and grad schools use standardised achievement tests to measure students on the way in; why shouldn’t they use them to measure students on the way out? Many people in higher ed are understandably anxious about importing the No Child Left Behind accountability model onto college campuses. But the good news is that colleges and universities are not reacting to the idea of testing and accountability with blanket hostility, the way some of the members of the K-12 establishment did.
If you go to the Web page of the Association of American Colleges and Universities and click on ‘assessment,’ you will find a dazzling array of experiments that institutions are running to figure out how to measure learning.
Some schools, like Bowling Green and Portland State, are doing portfolio assessments, which measure the quality of student papers and improvement over time. Others, like Worcester Polytechnic Institute and Southern Illinois University Edwardsville, use capstone assessment, creating a culminating project in which the students display their skills in a way that can be compared and measured.
The challenge is not getting educators to embrace the idea of assessment. It’s mobilising them to actually enact it in a way that’s real and transparent to outsiders. The second challenge is deciding whether testing should be tied to federal dollars or kept more voluntary.
Should we impose a coercive testing regime that would reward and punish schools based on results? Or should we let schools adopt their own preferred systems? Given how little we know about how to test college students, the voluntary approach is probably best for now. Foundations, academic conferences or even magazines could come up with assessment methods. Each assessment could represent a different vision of what college is for.
Groups of similar schools could congregate around the assessment model that suits their vision. Then they could broadcast the results to prospective parents, saying, ‘We may not be prestigious or as expensive as X, but here students actually learn.’ This is the beginning of college reform. If you’ve got a student at or applying to college, ask the administrators these questions: ‘How much do students here learn? How do you know?’