European Union's AI Act, explained
In development since 2021, the EU AI Act aims to bring order and governance to the use of artificial intelligence while turning the bloc into a competitive global player where AI flourishes. But it is not a perfect solution, some argue.
The AI Act, which has been in the works for over two years, is expected to be a landmark piece of EU legislation governing the use of artificial intelligence in Europe.
On its website, the European Commission observes that “the way we approach Artificial Intelligence (AI) will define the world we live in the future. To help build a resilient Europe for the Digital Decade, people and businesses should be able to enjoy the benefits of AI while feeling safe and protected.”
The EC also notes that its objective in drawing up an AI Act “translates into the European approach to excellence and trust through concrete rules and actions”.
Lawmakers have proposed classifying different AI tools according to their perceived level of risk, from low to unacceptable. Governments and companies using these tools will have different obligations, depending on the risk level.
The Future of Life Institute’s website on the European Union AI Act points out the risk levels as such: “First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned.
“Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements.
“Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated.”
What is the scope of the act?
The Act is expansive and will govern anyone who provides a product or a service that uses AI.
The Act will cover systems that can generate output such as content, predictions, recommendations, or decisions that influence the environments they interact with.
Apart from uses of AI by companies, it will also look at AI used in the public sector and law enforcement. It will work in tandem with other laws such as the General Data Protection Regulation (GDPR).
Those using AI systems which interact with humans, are used for surveillance purposes, or can be used to generate "deepfake" content face strong transparency obligations.
The EC website declares that the EU AI Act will help the bloc become internationally competitive by:
- “enabling the development and uptake of AI in the EU;
- “making the EU the place where AI thrives from the lab to the market;
- “ensuring that AI works for people and is a force for good in society;
- “building strategic leadership in high-impact sectors.”
What's considered 'high risk'?
A number of AI tools may be considered high risk, such as those used in critical infrastructure, law enforcement, or education.
They are one level below "unacceptable," and therefore are not banned outright.
Instead, those using high-risk AIs will likely be obliged to complete rigorous risk assessments, log their activities, and make data available to authorities for scrutiny. That is likely to increase compliance costs for companies.
The "high risk" categories in which AI use will be strictly controlled include areas such as law enforcement, migration, critical infrastructure, product safety and the administration of justice.
What is a 'GPAIS'?
A GPAIS (General Purpose AI System) is a category proposed by lawmakers to account for AI tools with more than one application, such as generative AI models like ChatGPT.
Lawmakers are currently debating whether all forms of GPAIS will be designated high risk, and what that would mean for technology companies looking to adopt AI into their products. The draft does not clarify what obligations AI system manufacturers would be subject to.
A recent article in Politico explores the difficulty European lawmakers are having while trying to classify ChatGPT and its ilk.
“The rise of ChatGPT is now forcing the European Parliament to [rewrite their draft plans],” Gian Volpicelli writes. “In February the lead lawmakers on the AI Act, [Brando] Benifei and [Dragos] Tudorache, proposed that AI systems generating complex texts without human oversight should be part of the “high-risk” list — an effort to stop ChatGPT from churning out disinformation at scale.
“The idea was met with scepticism by right-leaning political groups in the European Parliament, and even parts of Tudorache's own Liberal group.”
What if a company breaks the rules?
The proposals say those found in breach of the AI Act face fines of up to 30 million euros or 6 percent of global annual turnover, whichever is higher.
For a company like Microsoft, which is backing ChatGPT creator OpenAI, that could mean a fine of over $10 billion if it were found to be violating the rules.
When will the AI Act come into force?
While the industry expects the Act to be passed this year, there is no concrete deadline. The Act is currently being debated by parliamentarians; once they reach common ground, a trilogue will follow between representatives of the European Parliament, the Council of the European Union and the European Commission.
After the terms are finalised, there would be a grace period of around two years to allow affected parties to comply with the regulations.
The Future of Life Institute (FLI) website suggests that all is not well with the AI Act: “There are several loopholes and exceptions in the proposed law,” they write.
“These shortcomings limit the Act’s ability to ensure that AI remains a force for good in your life. Currently, for example, facial recognition by the police is banned unless the images are captured with a delay or the technology is being used to find missing children.”
FLI also points out that the proposed law does not have the flexibility for alteration and growth when artificial intelligence technology inevitably progresses and flourishes beyond the scope of the Act: “If in two years’ time a dangerous AI application is used in an unforeseen sector, the law provides no mechanism to label it as ‘high-risk’.”