The European Commission just opened a new chapter in the rollout of the EU AI Act, and it starts with a question: how can we make compliance simpler?
Earlier this week, the Commission launched a public consultation aimed at reducing the regulatory burden tied to the AI Act. It’s part of a broader effort to keep Europe competitive in AI, especially as the US and China continue to charge ahead. The Commission isn’t looking at AI governance in isolation; it’s connecting the dots across cloud infrastructure, data access, digital skills, and industrial adoption.
But what does this mean for ethics and compliance teams?
First, let’s be clear: the Commission isn’t walking back its risk-based framework. High-risk AI systems—the ones used in hiring, credit scoring, or biometric surveillance—will still face strict scrutiny. What’s on the table is how those obligations are interpreted and executed. The goal is to support innovation without compromising on trust or safety.
For compliance teams, this could mean fewer ambiguities around risk classification, more user-friendly documentation and assessment templates with sector-specific guidance to support implementation, and a reduction in overlapping or redundant rules across EU digital legislation.
The Commission is especially focused on input from smaller players and mid-sized businesses, the ones often squeezed between the promise of AI and the pressure of compliance. These companies have the most to gain from regulatory clarity, and the most to lose if simplification efforts stall.
As someone who’s worked in ethics and compliance for nearly two decades, I’ve seen this movie before. Vague or overly complex rules don’t just slow innovation; they weaken the systems designed to keep it ethical.
Even if this consultation doesn’t lead to formal legal amendments, it’s something we should all be watching. The way the AI Act is interpreted, supported, and enforced will shape compliance strategy for years to come.
Now’s the time to revisit your AI governance frameworks and identify where your current, or future, use of AI intersects with high-risk categories. Keep an eye on these public consultations and emerging guidance, and advocate internally for cross-functional collaboration on AI compliance.
If Europe wants AI that’s both competitive and trustworthy, simplification will need to mean more than just easier paperwork. It has to make compliance more effective, not just less painful. And for compliance leaders, that’s not a threat to prepare for—it’s an opportunity to shape what comes next.