Key Points
- Average US student assessment scores appear to tell a simple story: Performance rose through the 2000s, plateaued during the 2010s, and then declined sharply during the pandemic. But looking beyond the averages reveals more complex and concerning patterns.
- Student achievement began declining around 2013, but these declines—both before and after the pandemic—were overwhelmingly driven by the bottom half of test takers. Since 2011, the US has experienced the highest absolute achievement gap growth internationally.
- Adult performance trends on international assessments during this period reflect those of students, suggesting that external societal factors—not just school-related influences—are at play.
- Determining the drivers of student achievement trends is not simple. Most conventional explanations struggle to fully account for the patterns identified in this report.
Executive Summary
Although national test scores provide clear evidence on student achievement across time, they do not illuminate what is driving gains or losses. Nonetheless, careful examination of test scores can corroborate some explanations for changes in student achievement and discount others.
This report examines recent trends in US student achievement, as measured by national and international assessments, and identifies four key trends that any satisfactory explanation of recent US student performance should account for: a downward trend beginning around 2013; declines driven by the bottom half of the distribution, both before and after the pandemic; higher absolute achievement gap growth in the US than other nations; and adult assessment trends that closely match those of students. The report concludes by evaluating how common explanations of student achievement trajectories align with these trends.
Introduction
Once or twice a year, a familiar sequence of events repeats itself: A new round of national test scores is released. The results make the news because they are new and because test scores are important. And then gallons of ink are spilled arguing why scores changed as they did. It’s cable TV, or MTV, or violent video games, or the internet, or the phones. It’s the teachers unions, or the parents, or the inaction of policymakers, or the meddling of No Child Left Behind. It’s the advent of MySpace—no, Facebook—or Instagram, definitely TikTok. It’s the obesity epidemic, the loss of recess, the overemphasis on sports, the shriveling of arts and music, or the lack of extracurricular options. It’s students’ behavior, or lack of ambition, or increasing disrespect, or timidity, or mental health, or unhealthy perfectionism, or laziness.
National test scores are important. They provide us with a big picture of student achievement at the national level and tell us whether achievement is improving, falling, or holding steady. For those concerned with the effectiveness of the schools we invest so much in, the trajectory of the children we care for, or the future of the economy we depend on, national test scores are valuable indicators.
However, although national test scores do provide strong evidence of student achievement across time, they are not great at telling us why students get the scores they do. Test scores tell us where students are academically, not why they are there. Nonetheless, careful examination of test scores can provide circumstantial evidence to corroborate some possible explanations for changes in student achievement and discount others.
In the wake of the pandemic, test score releases may be more important than ever. Two years ago, scores from the 2022 administration of the National Assessment of Educational Progress (NAEP)—the self-styled “Nation’s Report Card”—showed just how far students had fallen behind during the pandemic. And in late January 2025, the release of scores from the 2024 NAEP assessment, the second post-pandemic NAEP, will help education researchers, pundits, and policymakers answer one of the biggest questions in education right now: Are students and schools recovering from COVID learning loss?
But even that anodyne question includes at least three implicit stories of student achievement that, I argue in this report, are not supported by a careful analysis of recent test score trends. The first implicit story is that declining student performance in recent years is solely attributable to the pandemic: If test scores have declined, that is the result of the pandemic’s aftereffects; if test scores have improved, that is because we have moved past the pandemic.
Given that the pandemic was probably the biggest disruption to American schooling in over a century, this story has a lot to recommend it. However, it suffers from pandemic myopia, leading to a woefully incomplete picture of how performance has declined in recent years and why. Indeed, average test scores began declining well before the pandemic, suggesting that the decline in test scores over the pandemic was perhaps not solely attributable to the pandemic and that test scores could continue to decline in the coming years for reasons that have little to do with COVID-19.
The second implicit story is that things are getting much worse for all students. For good reason, average scores get the headlines, and averages have been trending downward. However, averages can mask what’s going on underneath, and over the past decade, test scores show a sharp divergence in how high-achieving and low-achieving students are performing. In fact, on a number of tests and in a number of subjects, higher-achieving students have held steady over the past decade.
The third implicit story is that changes to student performance are the result of what happens in schools: Tests assess students on the skills they practice in the classroom, and so changes to test scores suggest changes to what happens in the classroom. For example, although the pandemic was not something that happened only in schools, the pandemic affected instructional quality, leading to declines in test scores.
While this story is understandable, recent test scores suggest that factors outside of school might play a considerable role. To wit, during the pandemic, test scores declined significantly for American adults, the vast majority of whom were not in school at any point during the pandemic. I argue that, while perplexing, these changes to adult test scores align too well with changes to K–12 test scores to be ignored.
Changes to test scores are not easy to account for, and in this report I don’t try to. Instead, I look at test results from a number of national and international tests and identify several patterns and trends that any satisfactory explanation of student performance over the past decade needs to account for.
In this report, I analyze longitudinal average and selected percentile scores from a range of nationally representative assessments spanning various subjects, grades, and age groups. The main NAEP assessments cover reading, math, science, and history for grades four, eight, and 12 from 1992 to 2022. The NAEP long-term trend (LTT) assessments gauge reading and math for ages nine and 13 from 1978 to 2022. The Trends in International Mathematics and Science Study (TIMSS) assesses science and math for grades four and eight from 1995 to 2023. The Program for International Student Assessment (PISA) evaluates 15-year-olds in reading, math, and science from 2000 to 2022. I also include tests of numeracy and literacy in adult populations, using the Program for the International Assessment of Adult Competencies (PIAAC) with other linked assessments, spanning 1998 to 2023. Together, these assessments provide a comprehensive view of US educational and skill trends across different populations over the past two decades.