The AI law agreed by the EU arrives with the aim of establishing standards for security and fundamental rights, preventing the technology from being used for spurious purposes.
After long and arduous negotiations within the European Union (EU), an agreement has been reached on what is billed as the world’s first artificial intelligence (AI) law. The legislation will impose stricter rules on the use of the most talked-about technology of the moment and put the brakes on the most potentially dangerous AI applications.
Negotiators from the European Parliament and EU member states agreed last Friday evening in Brussels on a groundbreaking law to regulate AI. The European Parliament emphasizes that it is the world’s first AI law, and Thierry Breton, European Commissioner for the Internal Market, has described the agreement as “historic”.
The European Commission (EC) first proposed the law in April 2021. Under the regulation agreed by the European Parliament and member states, AI systems will be categorized according to their level of risk, and the riskiest applications could be banned outright from EU territory.
Under the law, stringent transparency rules will also be imposed on companies active in the field of AI, such as OpenAI, Microsoft and Google. These companies will be required, for example, to disclose what data was used to train their AI systems and how they comply with copyright rules.
The new European regulation aims to establish standards for security and fundamental rights that prevent the technology from being used for repression, manipulation or discrimination. At the same time, it avoids over-regulation that would ultimately undermine the competitiveness of EU countries.
Such a law could become a model for regulating AI worldwide (or at least the EU would like it to be). The United States is also working on rules to regulate the technology, but lawmakers’ plans are still at an early stage and envisage less stringent rules than the EU’s.
The regulation of AI foundation models and the use of the technology for biometric surveillance were the thorniest issues in the negotiations to agree on the law.
AI typically refers to applications built on “machine learning,” a technique in which software combs through large volumes of data to find patterns and draw conclusions from them. AI applications can be used in many areas. They can, for example, evaluate CT scan images far more quickly and accurately than humans. Autonomous cars rely on AI to predict the behavior of other road users. And chatbots and the automatic playlists of music streaming services are powered by the same technology.
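To make that idea concrete, here is a toy sketch (not from the article) of the “find patterns, draw conclusions” loop described above. It assumes Python with the scikit-learn library installed, and uses a small built-in dataset of handwritten digit images as a stand-in for the medical-imaging example.

```python
# Toy illustration of machine learning: a model searches labeled example
# data for patterns, then uses those patterns to draw conclusions about
# inputs it has never seen. Assumes scikit-learn is installed.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# 8x8 grayscale images of handwritten digits, flattened to 64 features.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# "Training" = fitting the model to the statistical regularities in the data.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# "Drawing conclusions" = classifying images the model was never shown.
print(f"Accuracy on unseen images: {model.score(X_test, y_test):.2%}")
```

The systems covered by the EU law work on the same principle, only at vastly larger scale, with far more data and far more powerful models.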
It is worth noting, however, that negotiations within the EU recently came close to collapse over the regulation of so-called foundation models. Such models are very powerful, are trained on huge amounts of data and serve as the basis for a whole range of applications; ChatGPT, for example, is built on one such foundation model. Germany, France and Italy had previously argued for regulating only specific AI applications, leaving the foundation models themselves untouched.
The rules for AI-supported biometric surveillance for national security purposes were also the subject of strong controversy. In the end, the technology will be generally prohibited and may be used in only three cases: to search for victims of kidnapping or human trafficking, to prevent terrorist attacks or respond to attacks that have already been carried out, and to identify persons suspected of terrorism, murder, rape or sex trafficking.
The European Parliament and EU member states must now approve the AI law, but after agreement was reached last Friday, that approval is shaping up to be a mere formality.