“Extreme work” is back, and it isn’t pretty. Reports from the AI sector point to founders pressing for 70–80-hour weeks (or more), arguing that extreme effort is the price of being first to human-level artificial general intelligence (AGI) and that there’s no trophy—or stock options—for second place.
The evidence on how excessive work affects employees and companies suggests otherwise. Pushing teams past sustainable limits may raise output in the short term, but it also degrades performance, raises error rates, and increases health risks. Hard-core programming might grab a headline this year, but it can leave behind a truckload of expensive and potentially dangerous technical debt the next.
A 2014 study by Stanford University economist John Pencavel shows that output per hour declines steeply after roughly 50 hours per week, and that hours beyond 55–60 actually subtract from output because of the additional time required to fix mistakes. Serious health risks rise as well. One meta-analysis of 37 studies found that working more than 55 hours per week raises the chance of stroke and heart attack by 35 percent and 17 percent, respectively, adding to business costs through absenteeism, turnover, and diminished cognitive performance. Creativity and judgment—the very human skills most needed in an AI-infused workplace—also suffer, because overwork impairs attention, working memory, and decision-making. The converse holds as well: people who step away from direct cognitive effort are more likely to generate novel insights than those who do not.
This is a nearly impossible argument to make to Silicon Valley’s doyens, who created the “rise and grind” culture and still inhabit it. As the race for AGI has heated up, AI leaders are doubling down on extreme work habits. The Pragmatic Engineer reported this week on moves by a number of firms to institute “996” schedules (9:00 a.m. to 9:00 p.m., six days per week) as a kind of minimum expectation. The practice originated in China and was eventually banned by the government amid rising public discontent and a recognition that excessive work hurts innovation and competitiveness. Silicon Valley founders have ignored this history on the theory that the opportunity for workers to gain “generational wealth” outweighs the business costs and human impact of their policies. It’s a free market, the argument goes, and workers can go elsewhere if they don’t like the terms of employment.
The stakes of this debate may extend beyond tech workers. The sprint toward AGI, whatever its economic and strategic value, creates potential risks if the humans building it are pushed beyond their limits. “Technical debt”—the flaws in design and code that multiply with fatigue—grows more consequential with a technology that is beginning to show the capability to improve itself. Mistakes in today’s models are liable to be replicated and magnified in tomorrow’s AI products and systems. At least in theory, fatigue-driven technical debt could be further exacerbated as agentic AI systems are linked together: the technical-debt “virus” spreads from “sick” systems to “healthy” ones, raising the risk of cascading and intensifying errors.
There is also a basic contradiction at play. Telling workers they must sacrifice sleep, health, and family to be part of “humanity’s greatest breakthrough” is less leadership than psychological coercion parading as a mission. AI promises to reduce drudgery and expand human potential. Re-creating 19th-century factory conditions to build a 21st-century technology is a category error we might all come to regret.