Here’s what to know about President Biden’s sweeping AI executive order

'AI policy is like running a decathlon, where we don’t get to pick and choose which events we do,' says White House Advisor for AI, Ben Buchanan.
Photo of President Biden in White House Press Room
The executive order seems to focus on both regulating and investing in AI technology. Anna Moneymaker/Getty Images

Today, President Joe Biden signed a sweeping executive order outlining plans for governmental oversight and corporate regulation of artificial intelligence. Signed on October 30, the order aims to address widespread issues such as privacy concerns, bias, and misinformation enabled by a multibillion-dollar industry increasingly entrenching itself within modern society. Though the solutions so far remain largely conceptual, the White House’s Executive Order Fact Sheet makes clear that US regulators intend both to rein in and to benefit from the wide range of emerging and re-branded “artificial intelligence” technologies.

[Related: Zoom could be using your ‘content’ to train its AI.]

In particular, the administration’s executive order seeks to establish new standards for AI safety and security. Invoking the Defense Production Act, the order instructs companies designing AI that could pose a “serious risk” to national economic, public, or military security to share their safety test results and other critical information with US regulators, though it is not immediately clear who would assess such risks, or on what scale. Additionally, safety standards soon to be set by the National Institute of Standards and Technology must be met before any such AI programs can be publicly released.

Drawing the map along the way 

“I think in many respects AI policy is like running a decathlon, where we don’t get to pick and choose which events we do,” Ben Buchanan, the White House Senior Advisor for AI, told PopSci via phone call. “We have to do safety and security, we have to do civil rights and equity, we have to do worker protections, consumer protections, the international dimension, government use of AI, [while] making sure we have a competitive ecosystem here.”

“Probably some of [the order’s] most significant actions are [setting] standards for AI safety, security, and trust. And then require that companies notify us of large-scale AI development, and that they share the tests of those systems in accordance with those standards,” says Buchanan. “Before it goes out to the public, it needs to be safe, secure, and trustworthy.”

Too little, too late?

Longtime critics of the still-largely unregulated AI tech industry, however, claim the Biden administration’s executive order is too little, too late.

“A lot of the AI tools on the market are already illegal,” Albert Fox Cahn, executive director of the tech privacy advocacy nonprofit Surveillance Technology Oversight Project, said in a press release. Cahn contended that the “worst forms of AI,” such as facial recognition, deserve bans instead of regulation.

“[M]any of these proposals are simply regulatory theater, allowing abusive AI to stay on the market,” he continued, adding that, “the White House is continuing the mistake of over-relying on AI auditing techniques that can be easily gamed by companies and agencies.”

Buchanan tells PopSci the White House already has a “good dialogue” with companies such as OpenAI, Meta, and Google, although officials are “certainly expecting” those companies to “hold up their end of the bargain on the voluntary commitments that they made” earlier this year.

A long road ahead

In Monday’s announcement, President Biden also urged Congress to pass bipartisan data privacy legislation “to protect all Americans, especially kids,” from the risks of AI technology. Although some states including Massachusetts, California, Virginia, and Colorado have proposed or passed such legislation, the US currently lacks comprehensive legal safeguards akin to the EU’s General Data Protection Regulation (GDPR). In effect since 2018, the GDPR heavily restricts companies’ access to consumers’ private data and empowers regulators to levy large fines on businesses found to violate the law.

[Related: Your car could be capturing data on your sex life.]

The White House’s newest calls for data privacy legislation, however, “are unlikely to be answered,” Sarah Kreps, a professor of government and director of the Tech Policy Institute at Cornell University, tells PopSci via email. “… [B]oth parties agree that there should be action but can’t agree on what it should look like.”

A federal hiring push is now underway to help staff the numerous announced projects, alongside additional funding opportunities, all of which can be found via the administration’s new governmental website portal.