The EU is a Trailblazer, and the AI Act Proves It

B. Valle

Summary Bullets:

• On August 2, 2025, the second stage of the EU AI Act came into force, including obligations for general purpose models.

• The AI Act's first set of provisions became applicable in February 2025; the legislation follows a staggered approach, with the final wave expected on August 2, 2027.

August 2025 has been marked by the enforcement of a new set of rules under the AI Act, the world's first comprehensive AI legislation, which is being implemented in gradual stages. Like GDPR was for data privacy in the 2010s, the AI Act will serve as the global blueprint for governing the transformative technology of AI for decades to come. Recent news of the latest legal action, this time brought against OpenAI by the parents of 16-year-old Adam Raine, who ended his life after months of intensive use of ChatGPT, has thrown into stark relief the technology's potential for harm and the need to regulate it.

The AI Act follows a risk management approach; it aims to ensure transparency and accountability for AI systems and their developers. Although it was enacted into law in 2024, the first wave of enforcement proper came last February (please see GlobalData's take on The AI Act: landmark regulation comes into force), covering "unacceptable risk," including AI systems considered a clear threat to societal safety. The second wave, implemented this month, covers general purpose AI (GPAI) models and is arguably the most important, at least in terms of scope. The next steps are expected to follow in August 2026 ("high-risk systems") and August 2027 (final steps of implementation).

From August 2, 2025, GPAI providers must comply with transparency and copyright obligations when placing their models on the EU market. This applies not only to EU-based companies but to any organization with operations in the EU. GPAI models already on the market before August 2, 2025, must be brought into compliance by August 2, 2027. For the purposes of the law, GPAI models include those trained with over 10^23 floating point operations (FLOP) and capable of generating language (whether text or audio), text-to-image, or text-to-video.

Providers of GPAI systems must keep technical documentation about the model, including a sufficiently detailed summary of its training corpus. In addition, they must implement a policy to comply with EU copyright law. Within the group of GPAI models there is a special tier considered to pose "systemic risk": very advanced models that only a small handful of providers develop. Firms within this tier face additional obligations, for instance, notifying the European Commission when developing a model deemed to pose systemic risk and taking steps to ensure the model's safety and security. The classification of which models pose systemic risk can change over time as the technology evolves. There are exceptions: AI used for national security, military, and defense purposes is exempt under the act. Some open-source systems are also outside the reach of the legislation, as are AI models developed using publicly available code.

The European Commission has published a template to help providers summarize the data used to train their models, as well as the GPAI Code of Practice, developed by independent experts as a voluntary tool for AI providers to demonstrate compliance with the AI Act. Signatories include Amazon, Anthropic, Cohere, Google, IBM, Microsoft, Mistral AI, OpenAI, and ServiceNow, but there are glaring absences, notably Meta (at the time of writing). The code covers the transparency and copyright rules that apply to all GPAI models, with additional safety and security rules for the systemic risk tier.

The AI Act has drawn criticism for its disproportionate impact on startups and SMBs, with some experts arguing that it should include exceptions for technologies that have yet to gain any hold on the general public and lack wide impact or potential for harm. Others say it could slow down progress among European organizations in the process of training their AI models, and that the rules are confusing. Last July, several tech lobbies, including CCIA Europe, urged the EU to pause implementation of the act, arguing that the roll-out had been too rushed, without weighing the potential consequences… Sound familiar?

However, the act has been developed in collaboration with thousands of stakeholders in the private sector, at a time when businesses are craving regulatory guidance. It is also introducing standard security practices across the EU during a critical period of adoption, and it sets a global benchmark for others to follow in a time of great upheaval. After the AI Act, the US and other countries will find it increasingly hard to keep ignoring the calls for more responsible AI, a commendable effort that will make history.
