By futureTEKnow | Editorial Team
The European Union just made a significant move in artificial intelligence regulation. On July 18, 2025, the European Commission released detailed guidelines for AI models classified as carrying systemic risk. These rules, which take effect on August 2, 2025, target some of the most powerful foundation models on the market, including those from OpenAI, Google, Meta, Anthropic, and Mistral. Here's what technology leaders and AI companies need to understand about the evolving landscape.
The EU defines "systemic risk" AI models as those exhibiting highly advanced computational abilities that could meaningfully impact public health, safety, fundamental rights, or the well-being of society. These are not your average chatbots: they are massive, general-purpose AI systems, often with billions of parameters and trained on staggering datasets, capable of a broad range of tasks.
The rules are strict—and clear:
Extensive risk assessments: Providers must identify, evaluate, and mitigate risks related to their models’ deployment.
Adversarial testing: Companies are expected to conduct regular “red-teaming” or structured testing to find and address vulnerabilities and unforeseen behaviors.
Reporting requirements: Significant incidents must be reported to the EU Commission without delay, along with proposed corrective actions.
Cybersecurity demands: Robust cybersecurity measures are non-negotiable, aimed at protecting the model and its deployment infrastructure.
Transparency: Providers of foundation or general-purpose models must produce and maintain technical documentation, adopt clear copyright policies, and publish detailed summaries of the data used to train each model.
For the first time, multinationals building general-purpose AI platforms—regardless of where they operate—face the same bar for compliance if they want to serve the EU market. Penalties for non-compliance are hefty, reaching up to 7% of global turnover or significant fixed fines. These obligations also extend to open-source developers whose models are considered systemically risky and made accessible to EU users.
Many details, including precisely how "systemic risk" is measured and the type of testing required, will be clarified through an AI Code of Practice, which the Act required to be finalized by May 2025. Importantly, leading AI companies are invited to participate in shaping the Code. How this dialogue evolves could determine the regulatory burden and technical standards for years to come.
While some companies have voiced concerns about the regulatory burden and its potential to stifle innovation, the European Commission asserts these guidelines provide “clarity and direction” for compliance and risk management. The reality is that the new EU requirements are likely to become de facto global standards, as major U.S. AI providers are unlikely to forgo the European market just to sidestep regulation.
As the August 2025 deadline approaches, AI providers should act now, reviewing their risk frameworks, technical documentation, and compliance structures to keep ahead of both regulation and industry best practice.