The news seems to be everywhere these days: we need new kinds of assessments that will truly measure students' capacities to do work that matters. Yesterday Education Sector published a report called "Measuring Skills for the 21st Century," and today researchers at UC Berkeley released a report on the LSAT, suggesting that while that test may predict first-year law school performance, it could be replaced by a different kind of admission test that might actually predict the performance of future lawyers. For a while now, schools and colleges have had plenty to chew on from psychologist Robert Sternberg, whose research into intelligence and its measurement has generated a number of new approaches to secondary school and college admission, already in use at schools as traditional as Choate Rosemary Hall and Phillips Academy (Andover). Tufts University, where Sternberg is now dean of Arts and Sciences, has added a number of essay questions to its application process with the purpose of eliciting evidence of creative and practical intelligence and moral reasoning to supplement traditional academic performance data such as grades and SAT or ACT scores.
Why we need better assessments—let's even call them tests, at least in the context of both admission and the macro measurement of academic achievement—seems obvious enough. The workforce requirements of a new age, even with the economy as flat at the moment as Thomas Friedman's globalized world, are going to favor those who possess both precise content knowledge and a mastery of basic intellectual processes—reading, computation, scientific reasoning, technique in its many forms—as well as creativity, problem-solving skills, the ability to reason and communicate from multiple perspectives, and the ability to work collaboratively in multiple contexts and across cultural boundaries. The SAT, state assessments, and the average math or history examination measure at most a few of these capacities, often in narrow contexts or in no context at all. A few tests go a bit deeper—some IB and AP examinations, the ACT—but all are constrained by the challenges of both accurate norming and consistent scoring of any answers that aren't laid out as a row of bubbles.
Classroom teachers have known this for decades, and the spread of new ideas about classroom assessment—projects, simulations, structured discussions, presentations, service-learning—is actually old news. But the next schools and prospective colleges to which students apply see evidence of this kind of assessment only secondhand, embedded in letter grades or fleetingly described in letters of recommendation. What students are truly learning, even in New Progressivist schools, must usually be deduced or teased out of the data. Most schools haven't really figured out how to either measure or report deeper learning, especially in a summative context.
Better measures are out there. Education Sector cites the College Work and Readiness Assessment (CWRA), a handful of high-quality, high-standards simulations, and the United Kingdom's Key Stage 3 Information Communications Technology Literacy Assessment, while the Aurora and Kaleidoscope batteries developed by Robert Sternberg's group are already in use. These tests at least approach the kind of holistic measurement of a range of capacities—including traditional content knowledge and skills—that will help schools continue to develop curriculum that meets the needs of a real world of work, civic engagement, and personal development.
For independent schools espousing New Progressivist ideals, traditional private school admission tests like the Secondary School Admission Test and the Educational Records Bureau's Independent School Entrance Examination, even supplemented by batteries of annual tests of academic progress, limit assessment to only some of the skills and knowledge we want our students to develop as they enter and pass through our programs. Schools need to acknowledge that they should be looking for new kinds of minds, open to and prepared for learning experiences that go beyond what "old-style" tests measure.
The time seems right for many, many schools—not just the handful already using the CWRA or Kaleidoscope—to actively seek out and incorporate the kinds of new assessments that measure a broad range of vital skills. Here again, independent schools have the gift of being able to freely try new ideas and to think proactively and deeply about the meaning and value of such assessments.
In the twentieth century, independent schools were disproportionately the pilots of "old-style" testing—the SAT, the Advanced Placement program. In an age when there were few mechanisms for identifying intellectual talent, these tests (born in part of the confidence that the early Progressives placed in psychometric testing as holding the key to human potential) served their purpose. But in a world gone mad with state assessments, where "test prep" and "teaching to the test" have become part of a college admission climate in which the term "gaming the system" is heard all too frequently, those tests and their ilk are headed the way of the dodo.
Now it is time for a new generation of educational leaders to grab hold of what we already know about intelligence and about the needs of a changing society, and to find ways to apply the promise of psychometrics in whole new ways to a new era in educational assessment.