Senior European Union officials say the rules, first proposed in 2021, will protect citizens from the potential risks of a fast-moving technology while fostering innovation on the continent.
The EU has been rushing to pass the new law ever since ChatGPT, created by Microsoft-backed OpenAI, burst onto the scene in late 2022 and unleashed a global AI race.
The surge of interest in generative AI came after ChatGPT wowed the world with its human-like capabilities, from digesting complex text to creating poems in seconds or passing medical exams.
Other examples of generative AI models include DALL-E and Midjourney, which create images; other models generate sounds, all from simple prompts in everyday language.
The legislation was supported by 523 lawmakers in the European Parliament in Strasbourg, France, with 46 voting against.
The 27 EU countries are expected to approve the text in April before the law is published in the Official Journal of the EU in May or June. Brando Benifei, an Italian lawmaker who pushed the text through parliament with Romanian MEP Dragos Tudorache, said:
“Today is again an historic day on our long path towards regulation of AI. (This is) the first regulation in the world that is putting a clear path towards a safe and human-centric development of AI.”
Tudorache told journalists before the vote:
“We managed to find that very delicate balance between the interest to innovate and the interest to protect.”
The EU’s internal market commissioner, Thierry Breton, hailed the vote. He said:
“I welcome the overwhelming support from the European Parliament for the EU AI Act. Europe is now a global standard-setter in trustworthy AI.”
The rules covering AI models such as ChatGPT will come into force 12 months after the law is formally adopted, while companies will have to comply with most other provisions within two years.
The EU rules, known as the Artificial Intelligence Act, are based on a risk-based approach: the riskier the system, the stricter the requirements – with an outright ban on AI tools deemed to pose the greatest threat.
For example, high-risk AI vendors must conduct a risk assessment and ensure their products meet the law’s requirements before they are made available to the public.
Companies face fines ranging from €7.5 million to €35 million ($8.2 million to $38.2 million) for violations, depending on the type of offence and the size of the company.
Using artificial intelligence for predictive policing, or for systems that use biometric information to determine a person’s race, religion or sexual orientation, is strictly prohibited.
The rules also prohibit real-time facial recognition in public places, with limited exceptions for law enforcement; even then, police must obtain judicial approval before any AI deployment.