EU Issues New Compliance Guidelines for AI Models with Systemic Risks

By futureTEKnow | Editorial Team

The European Union just made a significant move in artificial intelligence regulation. On July 18, 2025, the European Commission released detailed guidelines for AI models classified as carrying systemic risk. The obligations these guidelines clarify take effect on August 2, 2025, and target some of the most powerful foundation models on the market, including those from OpenAI, Google, Meta, Anthropic, and Mistral. Here’s what technology leaders and AI companies need to understand about the evolving landscape.

What Is a Systemic Risk AI Model?

The EU defines “systemic risk” AI models as those with highly advanced capabilities that could meaningfully impact public health, safety, fundamental rights, or the well-being of society. Under the AI Act, a model is presumed to pose systemic risk when the compute used to train it exceeds 10^25 floating-point operations. These are not your average chatbots: they are massive, general-purpose AI systems, often built with billions of parameters and trained on staggering datasets, capable of a broad range of tasks.

What’s Required from AI Providers Under the New Guidelines?

The rules are strict—and clear:

  • Extensive risk assessments: Providers must identify, evaluate, and mitigate risks related to their models’ deployment.

  • Adversarial testing: Companies are expected to conduct regular “red-teaming” or structured testing to find and address vulnerabilities and unforeseen behaviors.

  • Reporting requirements: Significant incidents must be reported to the EU Commission without delay, along with proposed corrective actions.

  • Cybersecurity demands: Robust cybersecurity measures are non-negotiable, aimed at protecting the model and its deployment infrastructure.

  • Transparency: Providers of foundation or general-purpose models must create and maintain technical documentation, adopt clear copyright policies, and publish detailed summaries of the data used to train each model.

Why Does This Matter?

For the first time, multinationals building general-purpose AI platforms, regardless of where they operate, face the same compliance bar if they want to serve the EU market. Penalties for non-compliance are hefty, reaching up to 7% of global annual turnover or fixed fines in the tens of millions of euros for the most serious violations. These obligations also extend to open-source developers whose models are considered systemically risky and made accessible to EU users.

What About the Codes of Practice?

Many details, including precisely how “systemic risk” is measured and the type of testing required, are spelled out in the General-Purpose AI Code of Practice, which the AI Act called for by May 2025 and which was published shortly before these guidelines. Importantly, leading AI companies are invited to sign the code voluntarily and to help shape how it is applied in practice. How this dialogue evolves could determine the regulatory burden and technical standards for years to come.

Industry Response and the Road Ahead

While some companies have voiced concerns about the regulatory burden and its potential to stifle innovation, the European Commission asserts these guidelines provide “clarity and direction” for compliance and risk management. The reality is that the new EU requirements are likely to become de facto global standards, as major U.S. AI providers are unlikely to forgo the European market just to sidestep regulation.

As the August 2025 deadline approaches, AI providers should act now, reviewing their risk frameworks, technical documentation, and compliance structures to stay ahead of both regulation and industry best practice.

