President Biden’s executive order this week on artificial intelligence (AI) brings to mind his split media personality, which consists of the avuncular “Uncle Joe” and the more Machiavellian “Dark Brandon.” This bifurcated political personality has the advantage of keeping his opponents guessing, but in the case of the EO, it creates a policy jumble that is going to take a while to sort out.
Dark Brandon’s EO threatens to unleash a tsunami of heavy-handed, poorly considered regulation, which can breed regulatory capture, stifle innovation, and deprive the nation’s businesses and workers of critically needed opportunities to boost sluggish productivity. Uncle Joe’s EO makes genuine efforts toward easing workers’ transition to a new era of human-machine collaboration. Yet even Uncle Joe needs some work.
The labor and workforce development provisions of the EO direct the secretary of labor to develop employee privacy guidelines and best practices for employers; allocate funding for education and training opportunities in AI fields; call for AI training programs across federal agencies; and seek to make it easier for the US to attract AI talent from abroad. Most of these provisions make sense, especially the immigration and federal agency capacity initiatives.
Of concern, however, is the requirement for the secretary to publish a report within 180 days on “job-displacement risks and career opportunities related to AI, including effects on job skills and evaluation of applicants and workers.” Here, Uncle Joe is out over his skis. Our labor market information (LMI) system, which would form the basis of any guesstimate of AI’s impact on jobs and skills, is hopelessly unprepared for this task. As my AEI colleague Mason Bishop likes to say, our LMI system is like a 1930s workforce system trying to keep up in an iPhone age. Asking it to produce such a report would be like entering an Edsel in a race against a spanking-new Lamborghini.
This data challenge is compounded by the complexity of the situation we are facing. Even the most sophisticated analyses of AI employment impacts done by academic and private-sector researchers have varied wildly over the past 15 years. From Frey and Osborne’s 2013 study to recent predictions by the McKinsey Global Institute, many researchers have forecasted that new technologies, including AI, will automate large sections of the global economy and possibly disemploy tens of millions. Others have found that jobs destroyed by AI will be offset by employment growth elsewhere. If, as some argue, AI spells doom for jobs, why would a recent study find that sectors with greater exposure to AI-related automation show employment gains?
A more deliberative, thoughtful approach to this very important issue is needed, and the work probably cannot be completed in anything close to six months. One reason we need a different approach is that AI is such a “quicksilver” technology that we currently lack a framework for evaluating its impact on jobs and skills. Every time employment researchers think they have it reasonably nailed down, AI slips away.
As NYU labor economist Julia Lane argued in a recent AEI paper, AI doesn’t fit the typical mold of an industry or service, because it is the product of complex flows of ideas generated by universities, researchers, and the private sector, shaped by constraints imposed by government regulation. Lane and other labor market experts are actively pursuing new theoretical models that can accommodate this new “industry of ideas” and its impact on the workforce.
Moreover, since these impacts are likely to be extremely far-reaching and unpredictable, it makes more sense to study them at the local and regional levels rather than at the national level. Producing useful guidance for education, training, and employment planning requires familiarity with the local contexts in which AI is deployed. Information at the national level, while useful in other ways, often obscures important differences across the country. Lane addressed this issue in a second paper, arguing for a reinvention of LMI systems along the lines of the agricultural extension service. Such an approach could make data and analysis more available and actionable at the local and regional levels and build the capacity of land-grant universities to serve as partners in workforce planning.
As AI advances and the need to support workers becomes more obvious, access to reliable and useful labor market data is increasingly critical. On this front, Uncle Joe shows some good instincts, but they need to be backed up by deeper reflection on how we can best gather, analyze, and effectively use labor market information about AI.