Last week, Rep. Ted Lieu (D-CA) introduced the Block Nuclear Launch by Autonomous Artificial Intelligence Act alongside Sen. Edward Markey (D-MA) and numerous other bipartisan co-sponsors. The bill’s objective is as straightforward as its name: ensuring AI never has the final say in American nuclear strategy.

“While we all try to grapple with the pace at which AI is accelerating, the future of AI and its role in society remains unclear. It is our job as Members of Congress to have responsible foresight when it comes to protecting future generations from potentially devastating consequences,” Rep. Lieu said in the bill’s announcement, adding, “AI can never be a substitute for human judgment when it comes to launching nuclear weapons.”

He’s not the only one to think so—a 2021 Human Rights Watch report co-authored by Harvard Law School’s International Human Rights Clinic stated that “[r]obots lack the compassion, empathy, mercy, and judgment necessary to treat humans humanely, and they cannot understand the inherent worth of human life.”

If passed, the bill would legally codify existing Department of Defense procedures found in its 2022 Nuclear Posture Review, which states that “in all cases, the United States will maintain a human ‘in the loop’ for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapon employment.” Additionally, the bill would bar the use of federal funds to launch nuclear weapons via an automated system without “meaningful human control,” according to its announcement.

The proposed legislation comes at a time when the power of generative AI, including chatbots like ChatGPT, is increasingly part of the public discourse. But the surreal spectrum between “amusing chatbot responses” and “potential existential threats to humanity” is not lost on Lieu. He certainly never thought part of his civic responsibilities would include crafting legislation to stave off a Skynet scenario, he tells PopSci.

As a self-described “recovering computer science major,” Lieu says he is amazed by what AI programs can now accomplish. “Voice recognition is pretty amazing now. Facial recognition is pretty amazing now, although it is more inaccurate for people with darker skin,” he says, referring to long-documented patterns of algorithmic bias.

It was the past year’s release of generative AI programs such as OpenAI’s GPT-4, however, that led Lieu to see the potential for harm.

“It’s creating information and predicting scenarios,” he says of the available tech. “That leads to different concerns, including my view that AI, no matter how smart it gets, should never have operative control of nuclear weapons.”

Lieu believes it’s vital to begin discussing AI regulations to curtail three major consequences. First is the proliferation of misinformation and other content “harmful to society.” Second is reining in AI that, while not existentially threatening to humanity, “can still just straight-up kill you.” He references San Francisco’s November 2022 multi-vehicle crash that injured multiple people and was allegedly caused by a Tesla engaged in its controversial Autopilot self-driving mode.

“When your cellphone malfunctions, it isn’t going at 50 miles per hour,” he says.

Finally, there is the “AI that can destroy the world, literally,” says Lieu. And this is where he believes the Block Nuclear Launch by Autonomous Artificial Intelligence Act can help, at least in some capacity. Essentially, if the bill becomes law, AI systems could still provide analysis and strategic suggestions regarding nuclear events, but the ultimate say-so would rest firmly in human hands.

Going forward, Lieu says there needs to be a larger regulatory approach to handling AI issues, because Congress “doesn’t have the bandwidth or capacity to regulate AI in every single application.” He’s open to a set of AI risk standards agreed upon by federal agencies, or potentially a separate agency dedicated to generative and future advanced AI. On Thursday, the Biden administration unveiled plans to offer $140 million in funding for new research centers aimed at monitoring and regulating AI development.

When asked if he fears society faces a new “AI arms race,” Lieu concedes it is “certainly a possibility,” but points to the existence of current nuclear treaties. “Yes, there is a nuclear weapons arms race, but it’s not [currently] an all-out arms race. And so it’s possible to not have an all-out AI arms race,” says Lieu.

“Countries are looking at this, and hopefully they will get together to say, ‘Here are just some things we are not going to let AI do.’”