The current draft of the EU AI Act aims to divide artificial intelligence systems into different strands of risk based on their intended use, and to ensure that providers of AI systems comply with the requirements of the risk level assigned to each system.
LOCATION: Brussels/Stockholm
EFFECTIVE: June 14, 2023
30-second take | 3-minute deeper dive | 3-second links
30-second take
- The European Union’s flagship draft artificial intelligence regulation—the EU AI Act—took a major step toward becoming law, after lawmakers voted to approve the text of the law that would ban real-time facial recognition and place new transparency requirements on generative AI tools like ChatGPT.
- The EU AI Act is the first law on AI by a major regulator anywhere and, like the General Data Protection Regulation (GDPR), has potential to become a global standard.
- The act divides AI systems into different strands of risk, based on the intended use of the system: prohibited practices, high-risk, limited risk, and minimal risk.
- The AI regulation provides for substantial fines in the event of non-compliance. Those found in breach of the act face fines of up to EUR 30 million or 6% of global annual turnover, whichever is higher.
- Recent changes to the act’s text include the statement that any company using generative AI tools must disclose copyrighted material used to train their systems. Companies working on a “high-risk application” must also conduct a fundamental rights impact assessment and evaluate environmental impact.
- While it’s expected that the EU AI Act will be passed this year, there is no concrete deadline.
3-minute deeper dive
European Union lawmakers on June 14, 2023 voted to approve the text of the EU Artificial Intelligence Act, a major step toward the proposed regulation becoming law. The EU AI Act is the first law on AI by a major regulator anywhere, and like the EU’s General Data Protection Regulation (GDPR) it has potential to become a global standard. Lawmakers voted on text that would ban real-time facial recognition and place new transparency requirements on generative AI tools like ChatGPT.
What is in the EU AI Act?
Initially proposed in April 2021, the EU AI Act aims to apply to what the EU calls “AI systems,” meaning systems developed through machine learning approaches as well as logic- and knowledge-based approaches. Experts note that this is a broad definition intended to accommodate future developments in AI technology, but it already extends to much of present-day AI software. That said, it’s worth noting that not all AI systems will ultimately be subject to the obligations under the EU AI Act.
The act divides artificial intelligence systems into different strands of risk, based on the intended use of the system. The four strands are as follows:
- Prohibited practices: AI systems that use social scoring (i.e. assigning a person a social score that leads to unfavorable treatment), facial recognition, manipulative techniques that exploit the vulnerabilities of specific groups of people (e.g. due to their age) to distort their behavior, and dark-pattern AI.
- High-risk AI systems: AI systems with use cases in education, employment, justice, and immigration law, among other use cases.
- Limited risk AI systems: This includes chatbots, emotion recognition and biometric categorization, and systems generating “deep fake” or synthetic content.
- Minimal risk AI systems: This includes spam filters or AI-enabled video games.
Providers of AI systems will be obligated to ensure that each system complies with the requirements of the risk level it is assigned. The regulation provides for substantial fines in the event of non-compliance. Those found in breach of the EU AI Act face fines of up to EUR 30 million or 6% of global annual turnover, whichever is higher.
Recent changes to the act’s text include a ban on the use of the technology in biometric surveillance and a requirement that generative AI systems like ChatGPT disclose AI-generated content. The current draft also states that any company using generative AI tools must disclose copyrighted material used to train their systems. Companies working on a “high-risk application” must also conduct a fundamental rights impact assessment and evaluate environmental impact. In addition, AI systems that could be used to influence voters and the outcome of elections, as well as systems used by social media platforms with over 45 million users—a category that includes Meta and Twitter—were added to the high-risk list.
When will the EU AI Act become law?
While it’s expected that the EU AI Act will be passed this year, there is no concrete deadline. After EU lawmakers reach common ground, there will be a trilogue—an inter-institutional negotiation process—between representatives of the European Parliament, the Council of the European Union and the European Commission. Reuters reports that after the terms are finalized, there would be a grace period of about two years to allow affected parties to comply with the regulations.
3-second links
- The EU Artificial Intelligence Act
- AI 101: The Regulatory Framework
- UK policy white paper takes “pro-innovation approach” to AI regulation
- What does responsible AI and machine learning look like for business leaders?
The information provided on this website does not, and is not intended to, constitute legal advice; instead, all information, content, and materials available on this site are for general informational purposes only. Readers should consult with their own attorney regarding legal matters.