PISA: The Morning After

Yesterday was PISA Day, an opportunity for concerned educators and citizens to think about the latest round of results from this important international comparative assessment.

Not surprisingly, at least to those of us who follow the rhetoric and reality of comparative data, U.S. performance was basically unchanged from three years ago; some other countries (e.g., Poland, Germany) improved; some of the traditional “stars” (e.g., Finland) experienced a decline; and policy makers and commentators were quick to pronounce on the meaning of the results.

PISA is a remarkable program, in terms of the breadth of its coverage (65 education systems, including selected states in the U.S. and Shanghai as separate from China) and the care taken to provide reliable estimates of the math, reading, and scientific literacy of samples of 15-year-olds. We have come a long way since the early days of international comparative assessment, in terms of sampling methods, psychometric quality, and reporting of results.

Interpretation, though, remains a challenge. For descriptive purposes, PISA provides a trove of interesting information, which, along with TIMSS, NAEP, and PIAAC (another OECD project), should be studied by anyone who cares about the ongoing pursuit of improved educational opportunity locally, nationally, and globally. The more complicated task, though, is deriving sound policy inferences from these descriptive data. The patterns of relationships are not clear enough to support definitive conclusions about the relative success of various reforms in the U.S. and elsewhere; about the relationship of test performance to national economic outcomes; or about what exactly we should do next as we struggle to expand access to high-quality educational opportunities for our students.

For example, it seems that Massachusetts again did better than the overall U.S. average and on par with some of the biggest international “winners.”  Florida fared more poorly.  So people who like what Massachusetts has been doing must be pleased, and would be inclined therefore to like what PISA measures.  People who like Florida’s hard-charging accountability reforms are surely disappointed, and some of them must now be skeptical about whether PISA is the right tool to gauge the effects of reform.  It is no small irony that some of the harshest critics of PISA (and testing generally) are willing to use the latest results to vindicate their claims about the success or failure of various reform initiatives. 

Similarly, there is the implicit (and in some cases explicit) attempt to tie PISA scores to our current or future economic stature. Here, too, the hazards of inferring cause and making predictions from purely correlational and descriptive data are profound. As I’ve written elsewhere, the U.S. performed near or at the bottom on the First International Mathematics assessment in 1964, and indeed our economic productivity growth declined in the subsequent decade (when many of those high school kids who had taken the test were in the labor market). But countries that significantly outperformed us on the math test, e.g., Britain and Japan, experienced an even more dramatic productivity growth slowdown, suggesting that those test results could not, alone, explain much about the economy or predict much about the future. Average annual labor productivity in the U.S. increased by about 4.2% between 1979 and 2011, compared to 2.5% in Germany, which outperforms us on PISA. Again, it’s not easy to infer simple relations from these data. Does academic achievement matter to economic outcomes? Yes, without a doubt. But so much else matters, and probably more, that drawing quick inferences from results on one assessment of one age group – and, again, sounding loud alarm bells about our impending economic doom and, worse yet, blaming everything on schools and teachers – is a recipe for the ultimate erosion of respect for what is otherwise a useful tool for comparative study. I’ve tried to make this argument elsewhere.

Finally, there is the question of what to do next. On this, I was inspired by the wise comment of John Jackson, of the Schott Foundation for Public Education, whom I met at a dinner with a number of education policy makers and researchers. John asked whether it was possible that we in the U.S. had essentially “maxed out” on the impact of reform in terms of its effects on PISA scores, and if so, what should guide us as we continue to work on educational improvement. That’s the right question, and though I believe there is useful information in PISA, I also believe that answering it will require considerably more nuance in our understanding of the results. (To get a sense of the complexity of these issues, see the recent work of Martin Carnoy and Richard Rothstein.)

For me, the most important issue to focus on is not where we stand on average, but rather how to cope with the ravaging effects of growing economic inequality on educational opportunity and the life chances of our youth. In other words, we need to work on the variance more than the mean, to acknowledge the effects of poverty, and to concentrate on policies and programs that can restore opportunity (my colleagues Richard Murnane and Greg Duncan are co-authors of a book with that title, due out early next year). A good place to start would be with a sustained program of research and policy that builds on the foundations of work such as Whither Opportunity? If we care about the American dream – and really want to reaffirm the nation’s commitment to upward mobility and improved quality of life for all our people – we should not distract ourselves with foolish attempts to use a single assessment as the guide to policy and practice. PISA poses important questions; the answers aren’t as obvious.
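The distinction between working on the variance and working on the mean can be made concrete with a toy calculation. The numbers below are hypothetical, not actual PISA scores: two systems can post an identical average while differing sharply in spread, which is the dimension where inequality shows up.

```python
import statistics

# Hypothetical score distributions for two education systems.
# These are illustrative numbers only, not real PISA data.
system_a = [480, 490, 500, 510, 520]  # tightly clustered around the mean
system_b = [380, 440, 500, 560, 620]  # same mean, much wider spread

for name, scores in [("System A", system_a), ("System B", system_b)]:
    mean = statistics.mean(scores)
    spread = statistics.stdev(scores)
    print(f"{name}: mean = {mean:.0f}, std dev = {spread:.1f}")
```

A league table built on means alone would rank these two systems as equals, even though the second leaves its lowest-scoring students far behind, which is exactly the kind of difference a variance-focused analysis is meant to surface.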

December 4, 2013


EdWeek Commentary Condemns Misdirected Blame in Atlanta Cheating Scandal; Garners Mixed Reaction from Readers

Reactions to my recent Education Week commentary on the Atlanta cheating scandal have been interesting (see also this Washington Post blog post by Valerie Strauss that highlights my commentary). They mostly relate to the question I raised: whether the system is to blame, or whether individuals, even when faced with strong pressure and incentives for opportunistic behavior, should act morally and legally. And, of course, several of the commentaries rehearse the somewhat well-known criticisms of testing.

As best I can tell, though, readers didn’t take up the issue I mentioned regarding the National Assessment of Educational Progress (NAEP) scores, which apparently improved during Beverly Hall’s tenure in Atlanta. If student performance was really getting better, then there was arguably less reason to engage in the alleged tampering. We won’t know whether the people accused of the cheating considered any of this, or indeed whether most of them knew about the NAEP results in the first place. Meanwhile, suspicions have been raised about whether there had also been mischief in the NAEP sampling and scoring. For an especially lucid analysis of the NAEP results and how they relate to the Atlanta situation, I refer you to this excellent article by our friend Marshall (Mike) Smith, former Under Secretary of Education and Dean of the Stanford Graduate School of Education.

April 15, 2013
