The future is fast arriving—as the past year’s developments in artificial intelligence make clear—but the national government is nowhere near ready.
Over the past year, major breakthroughs in artificial intelligence have exploded into public consciousness in the form of new tools that were immediately and widely available.
First came the release of a handful of sophisticated image-generating AI tools—Stable Diffusion, OpenAI’s DALL-E 2, Midjourney, and various spinoffs. The availability and impressiveness of these tools led to playful experimentation, tentative adoption in some industries, and lots of speculation and debate about the moral, social, and legal implications of the techniques used to produce the images, and about the future of human-made art and illustration.
Then, last November, OpenAI released ChatGPT, a chatbot built on a large language model trained on vast amounts of text and capable of answering questions in a somewhat human-like manner. ChatGPT, while clunky and prone to thin and sometimes inaccurate results, offered a first glimmer of what was on the horizon. Then Microsoft’s new Bing chat landed in mid-February, markedly more capable than the OpenAI technology behind the original ChatGPT. Microsoft announced an additional $10 billion investment in OpenAI, and suddenly it seemed like the whole computer-science industrial complex was racing to market with new tools similar to Bing Chat, promises of such tools, or strategic partnerships intended to spark further AI innovation.
All of that feels like recounting, if not ancient history, then some pretty stale news, even though some of it happened almost literally yesterday. AI chat tools seem to be sprouting everywhere. One U.K.-based law firm claimed that an AI chat tool it adopted, named Harvey, was radically simplifying legal research and drafting, eliminating a lot of heavy lifting for attorneys and paralegals. Other firms are now reportedly lining up to adopt the tool. At this early stage of implementation, problems are cropping up, including the documented tendency of AI chatbots to make things up—or, in the vernacular, to “hallucinate” responses. Hence the need to keep skilled humans in the loop, especially while the training wheels are still on the technology.
Meanwhile, the big daddy of the web-hosting world, Amazon Web Services, has entered into a partnership with Hugging Face (do you ever wonder where the tech world gets its naming conventions?) to permit, well, everyone to integrate AI models into their customer offerings. And OpenAI’s Codex tool is prompting a reassessment of the longer-term need to train lots of new people in basic coding skills, while raising the premium on the most skilled coders.
In short, we may look back upon the period from mid-2022 to early 2023 as the historical nanosecond at which America’s high-tech industry, for good and ill, split the AI atom.
On balance, this is a hopeful moment. After untold billions of dollars in investment and years of painstaking development of databases, language models, algorithms, and microprocessor hardware, the broader economy (rather than only the startup sector) seems poised to start reaping AI’s benefits in new productivity and efficiency. But no new good is ever completely without alloy, and problems surely lie ahead. What’s concerning is the relative absence of safeguards around the further development and deployment of AI technology. To use a historical analogy, the Gatling gun just showed up in a town without a sheriff. It might be useful and entirely benign—but given human nature, as amply demonstrated in the long history of the world, it also might not.
Where’s the sheriff?
As usual, the federal government is lagging significantly behind technological development. Thus it has always been and will always be. Our government is meant to move slowly; too much energy in the executive can lead to overregulation and endanger freedom. In an area as fast-moving as AI with its immense consequences for our economy and international competitiveness, we don’t need an activist federal government excessively slowing or limiting development.
At the same time, if regulation lags too much, that can be problematic as well. A recent study by the Stanford Human-Centered AI (HAI) project provides some insight into federal AI adoption and use that does not, to put it mildly, engender great confidence in the ability of federal agencies to contribute well and meaningfully to deliberations about how to regulate the technology. Under three Trump-era measures—the AI in Government Act of 2020, Executive Order 13859 on AI leadership, and Executive Order 13960 on AI in government—federal agencies were tasked with assessing their own use of AI. Of the 45 legal requirements contained in these measures, the Stanford study found that fewer than 40 percent had been implemented. The Office of Management and Budget had not yet developed an AI “occupational series” to guide the hiring of federal employees with the expertise to use and manage AI systems, nor had it released guidance to agencies on AI acquisition, bias mitigation, or a timeline for public comment on AI uses.
At the agency level, the picture is particularly grim. As of December 2022, 88 percent of the covered agencies had failed to submit the Agency AI Plans required under the executive order on AI leadership. AI use case inventories (descriptions of how each agency is currently using AI) had been published by only 24 percent of the 220 covered agencies; among large agencies, 48 percent had failed to publish them. Among agencies that have published inventories, the Stanford study noted significant omissions—including the use of facial-recognition technology by Customs and Border Protection, which seems particularly problematic from the standpoints of privacy and security. Not all the news was bad, however. The U.S. Department of Health and Human Services seems to have benefited from its relationship with the health sector, completing a thorough self-review of AI use cases and formulating potential regulatory actions.
None of the above is meant to be especially critical of federal efforts. Even some firms that produce AI tools have difficulty explaining how they work or the logic behind their tools’ “reasoning.” The halting federal results are merely emblematic of just how far behind the federal government is in positioning itself and its workforce to understand and manage the AI resources it already has, not to mention those that will soon be on offer from eager federal IT contractors.
The approaching technological tsunami, combined with the unpreparedness documented in the Stanford analysis, raises an obvious question: Can federal agencies that are unable to account for even their own use of existing AI products establish sensible regulatory frameworks meant to ensure AI safety, privacy, and security for the rest of us?
The regulatory future
Over the past twelve months, I had the opportunity to serve on a bipartisan project of the U.S. Chamber of Commerce called the AI Commission on Competition, Inclusion, and Innovation. The commission’s report is due out this week, and, without preempting its findings, I have a few high-level observations relevant to the regulatory moment we are facing.
One is the huge promise AI holds for improving the lives, health, and economic opportunities of Americans. The existing advances in medicine alone are breathtaking in terms of improving the diagnosis and treatment of disease. New tools will merge with existing ones to create unforeseen opportunities and consequences. This dynamic will make it difficult, and perhaps impossible, for any government to stay abreast of change well enough to regulate effectively without overshooting in ways that stifle the sector, or undershooting and missing key challenges that expose the public to undesirable risks to safety and privacy.
In light of the lack of regulatory readiness at the federal level, we need to be looking for innovative approaches to guiding AI without strangling it. This process should begin with building the capacity of the federal government to understand AI and its progeny. The federal government should partner with the private sector to create a standing, blue-ribbon commission—perhaps a presidential commission housed at the National Institute of Standards and Technology, elsewhere in the Department of Commerce, in the Department of Energy, or in some other appropriate agency—tasked with monitoring developments in AI technology. (The National Academy of Sciences should also host its own parallel panel outside the government.)
Following the model of the various government bioethics commissions that were tasked with informing policy debates and public understanding of biotechnology, the hearings and reports of this AI commission should both cover important immediate-term technological breakthroughs and provide insight and guidance on longer-term regulatory matters. The commission should be made up of individuals from a broad range of backgrounds, including law and technology, certainly, but also such fields as cognitive science, philosophy, ethics, social welfare, workforce and labor, health, business, and education. AI is going to touch every aspect of our lives and will require thoughtful examination by experts in every domain of human activity. The reports and recommendations of the commission could help guide Congress, the White House, and other federal policymakers on needed corrections to law and regulation, and could also serve as something of an early-warning system for identifying emerging AI challenges requiring immediate attention.
A second opportunity would be to substantially expand the existing Government Accountability Office (GAO) resources devoted to the study of artificial intelligence. In 2019, the GAO created a Science, Technology Assessment, and Analytics (STAA) team to replace capacity lost when the congressional Office of Technology Assessment was eliminated. Over the past several years, STAA has produced reports on AI in health care and national security, as well as recommendations on AI transparency and accountability. With the sudden acceleration of AI and its growing army of unknown unknowns for the economy and society, STAA should be expanded to increase its ability to help elected officials and the public better understand AI and its social and economic implications.
When it comes to AI, we need, as a nation, to move deliberately and with urgency to better understand and shape the introduction of new technology in ways that will deliver the most public benefit while preventing and remediating its inevitable downsides. As it stands, we aren’t anywhere close to being ready for a future that has already begun to arrive.