You know you’re making progress in an education policy debate when education professors and journalists start printing what used to be a conservative counter-argument as an unfortunate and perplexing fact. Last week, Education Week published an article titled: “Preschool Studies Show Lagging Results. Why?” The lede notes that although studies from the 1960s and 1970s showed strong positive results, “more recent experimental studies of preschool don’t show as strongly positive results for students’ academic and social outcomes.”
This is not, of course, an honest way to report the data. The average reader will assume that recent experimental studies show positive—albeit less positive—results. In fact, as a team of education researchers reports, the weight of the evidence shows harm.
For years, pre-K advocates, education journalists, and Democrats have argued that the strong positive results from the Perry Preschool Project and the Abecedarian Study prove that universal pre-K would significantly help children. That claim was always about as logical as stating that because chemotherapy can defeat childhood leukemia, all kids should take aspirin daily.
The Perry Preschool Project served fifty-eight black students classified by researchers as “functionally retarded,” and relied heavily on home visits to teach parents how to better teach their children. The Abecedarian Study served fifty-seven deeply disadvantaged, mostly black children, starting from birth, for five years in a single care center, at a total cost of about $85,000 per pupil. Both showed striking positive results. Therefore, pre-K advocates argued, sending kids to public school one year sooner would also yield striking positive results.
It seemed that only a handful of conservative policy experts were willing to point out that these studies were irrelevant to the contemporary question of expanding pre-K. But in a recent Annenberg Institute working paper titled: “Why Are Preschool Programs Becoming Less Effective?” academics noted: “In hindsight, it is naïve to believe that findings from the early RCT studies would generalize to public programs operating at scale that serve millions of children each year at a fraction of the cost.” Yes, it is.
But it’s perhaps worse than naïve to stare at a spate of negative findings and frame them as “less” positive. In their report, the researchers summarize much of the major recent evidence on the effects of pre-K.
In contrast to the Perry and Abecedarian programs, which showed increased educational attainment, higher adult earnings, and lower crime among participants, students in the other programs analyzed in the report had poorer outcomes by most measures. The report includes an analysis of a long-run study of Head Start, as well as analyses of pre-K programs in Boston, Tennessee, Georgia, and North Carolina.
The one study included in the report that’s not a randomized controlled trial is the long-run Head Start study. In 2009, Harvard professor David Deming published a study showing long-run benefits for students who attended Head Start in the 1960s and 1970s by comparing results between siblings who did and did not attend. But in 2019, Deming’s students extended his methods to cover the original subjects over a longer time horizon, as well as students who attended Head Start in the 1980s. The originally observed benefits largely faded, and the impacts for the later cohort of students were largely negative (more likely to be diagnosed with a learning disability, more behavior problems at school, more idleness as an adult, more teen parenthood).
The Tennessee pre-K study showed that by third and sixth grade, participants had lower academic achievement, more behavior problems, and were more likely to be diagnosed with a disability. The Georgia pre-K study found negative effects on reading and math achievement by the end of fourth grade. The North Carolina pre-K study showed lower math skills and worse behavior by the end of kindergarten. (Although, the authors note, the researchers managed to dis-establish statistical significance “after multiple testing hypothesis corrections.”)
I’ve previously noted the strangeness of the Boston study, in which—contrary to established theory—long-run benefits largely accrued to white and affluent students. As I wrote at the time, the “findings appear theoretically incoherent, and theoretically incoherent findings are often a symptom of some other serious empirical problem.” The authors of this report are also dubious: because another study found null short-run effects, they write that Boston’s program “lack[s] a clear mechanism linking pre-k attendance to adult functioning.”
It’s unlikely that any amount of evidence would persuade the education establishment that pre-K programs harm students. But it is progress, nonetheless, to see researchers and journalists admit that the evidence doesn’t prove pre-K is a panacea.