The world’s first comprehensive set of regulations on artificial intelligence just moved one step closer to finalization. The European Parliament overwhelmingly passed a “draft law” of its AI Act on Wednesday. Once member states finish negotiating the bill’s final form, the sweeping regulations could dramatically affect biometric surveillance, data privacy, and AI development within the European Union. The changes will also set the tone for other nations’ approaches to the powerful, controversial technology. The regulations could be finalized by the end of the year.

“While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose,” Brando Benifei, a Member of the European Parliament (MEP) representing Italy, said in a statement, adding, “We want AI’s positive potential for creativity and productivity to be harnessed, but we will also fight to protect our position and counter dangers to our democracies and freedom.”

[Related: AI ‘pastor’ leaves churchgoers surprised but uninspired.]

If enacted, the AI Act would prohibit a number of invasive technologies, such as real-time remote biometric identification in public spaces, as well as biometric categorization systems focused on “gender, race, ethnicity, citizenship status, religion, [and] political orientation.” Other forms of AI deemed illegal would include predictive policing tools, emotion recognition, and untargeted scraping of facial images from the internet or CCTV footage, which the European Parliament considers a violation of human rights and the right to privacy.

As the European news publication Euractiv noted on Wednesday, EU lawmakers also introduced a tiered classification system for enforcement, with so-called “General Purpose AI” facing fewer restrictions than large language models such as OpenAI’s ChatGPT. If passed, the new laws would require all AI-generated content to be labeled and force companies to disclose any copyrighted material used in their training data.

[Related: Big Tech’s latest AI doomsday warning might be more of the same hype.]

Despite signing multiple high-profile statements warning of the dangers of unchecked AI, Big Tech leaders such as OpenAI’s Sam Altman have recently cautioned against “overregulation.” Altman went so far as to threaten to withdraw OpenAI’s services from the EU if the laws proved too stringent. He also said he believed Europe’s AI laws would “get pulled back,” a claim EU lawmakers immediately rejected.

“If OpenAI can’t comply with basic data governance, transparency, safety and security requirements, then their systems aren’t fit for the European market,” Dutch MEP Kim van Sparrentak said at the time.