The White House’s new ‘AI Bill of Rights’ plans to tackle racist and biased algorithms

The roadmap towards American data privacy is a good start, but not legally binding.
[Image: Robot hand typing on a laptop keyboard. Experts call it a good first step, but much more will need to happen. Credit: Deposit Photos]

The Biden administration and the Office of Science and Technology Policy (OSTP) today announced a detailed blueprint for an AI Bill of Rights focused on protecting Americans’ privacy and safety. But industry experts warn that little can come of it without proper legislative enforcement.

“In America and around the world, systems supposed to help with patient care have proven unsafe, ineffective, or biased,” reads the report’s introduction. “Algorithms used in hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination. Unchecked social media data collection has been used to threaten people’s opportunities, undermine their privacy, or pervasively track their activity—often without their knowledge or consent.”

[Related: How to stop school devices from sharing your family’s data.]

The OSTP’s blueprint centers on five pillars meant to better protect Americans as smart technologies play an increasingly major role in our lives: protecting citizens from unsafe and ineffective systems; guarding against algorithmic bias to ensure more equitable usage; building in safeguards that give people agency over their data; keeping the public informed about automated systems and their ramifications; and making it simple to opt out of AI systems in favor of human interactions whenever possible.

Observers note that, although the OSTP’s industry landscape survey is thorough, the blueprint can only do so much at the moment. “It is disheartening to see the lack of coherent federal policy to tackle desperately needed challenges posed by AI, such as federally coordinated monitoring, auditing, and reviewing actions to mitigate the risks and harm brought by deployed or open-source foundation models,” Russell Wald, the Stanford Institute for Human-Centered AI’s director of policy, explained to MIT Technology Review.

[Related: App privacy depends a lot on where you were when you downloaded it.]

Still, there is broad consensus among lawmakers that American data privacy and consumer protections need to at least catch up with those in other regions of the world. The European Union, for example, is currently pushing for AI responsibility and corporate accountability, and already has stringent data protections in place for its citizens. Despite this rare political overlap, however, a unified push for reform has yet to materialize.

“These technologies are causing real harms in the lives of Americans—harms that run counter to our core democratic values, including the fundamental right to privacy, freedom from discrimination, and our basic dignity,” a senior administration official told reporters at a press conference this morning.