Over the past year, eerily human artificial intelligence has crossed from the province of science fiction into daily reality. Displaying an astonishing ability to write poetry, code programs, summarize research, and ace the SAT, AI has profound implications for education.
The early response has tended to take one of two forms: sky-is-falling panic or an odd triumphalism. On the one hand, school districts have moved to ban AI from local servers and fearful educators have worried that their jobs are at risk. On the other, techno-optimists are eager to explain that schools no longer need to do the tedious work of teaching students to write because AI will do it for them.
Needless to say, I suspect both takes are deeply flawed. As I explore in my new book The Great School Rethink, technological advances, ubiquitous educational choice, and the dislocations wrought by the pandemic create new opportunities to tackle persistent challenges in schooling.
But all of this starts with being clear-eyed about the technology and how it’s used. For instance, last winter, just weeks after ChatGPT debuted, 30 percent of four-year college students said they were already turning in AI-written assignments. As AI becomes more powerful, convenient, and familiar, those numbers will skyrocket. How can educators know whether students are writing their own book reports, term papers, or admissions essays?
I mean, AI is really good at school. GPT-4, the successor to ChatGPT, has tested at the 93rd percentile on the SAT reading and writing test, the 90th on the bar exam, the 77th on the certified sommelier exam, and the 84th on Advanced Placement macroeconomics. And teachers and professors find it tough to distinguish AI-written work from student work, largely because AI tends to be banal and rote in a manner similar to so much student writing. (That so many students write like chatbots is a pretty good distillation of what we need to do better.)
Now, some readers may say, “Wait a moment. Aren’t these challenges about grading, not learning?” It’s a really useful distinction. But remember that schools are supposed to help students master knowledge and skills. That means we need to know what students have learned. That’s where “grading” and assessment come in.
After all, it’s a big problem if a student uses AI to pass a driver’s exam and then gets behind the wheel without knowing to stop at a red light (or which pedal is the brake). Similarly, calculators are terrific time-savers once students have mastered multiplication, but students won’t actually learn or understand computation if they’re just mashing buttons.
While this distinction regarding the use of calculators may seem obvious today, it wasn’t when they were first introduced. Indeed, the introduction of calculators heralded the same mix of unhelpful wishful thinking and hand-wringing that we see with AI today. But the plain truth is that — whether the subject is division, Dante, or driving — AI makes it easy for learners to show proficiency without ever mastering essential knowledge or skills. That’s a problem. It’s clearer how we might guard against that with a driver’s exam or the LSAT; it’s more complicated when it comes to essays, book reports, and term papers.
That raises a huge issue that is simultaneously disorienting, troubling, and exciting: AI is about to upend what we think of as “good teaching.” Skilled teachers typically try to maximize the amount of class time devoted to discussions, interactive problem-solving, or hands-on-shoulder coaching. Consequently, teachers try to have students do much of their writing at home. Teachers have always had to manage concerns about plagiarism or untoward assistance, but those older forms of cheating lacked the easy, wholesale efficiency of AI. This means that teachers will need to rethink familiar, time-tested routines.
Teachers will need to bring writing back into the classroom, where they can observe the writing process and engage with students. They’ll need to check in with students at each stage of that process, as they develop a thesis, frame arguments, identify sources, and so forth. There will be a place for more oral presentations. And all this is good! While some teachers may initially balk, helping students become more conscientious writers and more purposeful thinkers is a terrific thing.
Ubiquitous information and on-demand writing are powerful tools, but their value depends on how they’re used. If AI can write a business plan or create a potent deepfake image, students need to know how to consider the sources and judge the merits for themselves. If the world is rife with AI-produced analysis and content, it’s ever more crucial that students learn how to distinguish wheat from chaff.
What does this imply for the shape of schooling? There’s a strong case for pushing more emphasis on cultural literacy (as envisioned by E.D. Hirsch), if only so that students have some basis for evaluating the validity of AI handiwork. Testing and assessment are going to loom large in all of this, as we seek ways to ensure that students are actually learning the things they need to know. If you thought efforts to teach students to navigate online environments and to spot fake news were important before, just wait until AI-written summaries start treating AI-generated fabrications as authoritative sources.
AI will also prove to be a powerful resource as we wrestle with these changes. New AI tutoring systems make it possible for students to have an always-available mentor who can answer an extraordinary range of questions. Those same AI resources facilitate the creation of immersive virtual environments in which students can access learning experiences that are otherwise impossible, dangerous, or ludicrously expensive—whether that’s terraforming Mars, experimenting with toxic chemicals, or touring the Louvre.
It’s a good bet that many familiar debates about pedagogy, assessment, instruction, and the shape of schooling are about to be reshaped. We just don’t yet know which ones. Or exactly how.