Summary Bullets:

• On February 4, 2025, the European Commission published its Guidelines on prohibited AI practices, as defined by the AI Act, which itself came into force on August 1, 2024.
• The AI Action Summit took place in Paris, France, on February 10 and 11, 2025, with heads of state and government, leaders of international organizations, and CEOs in attendance.
It has been a busy few weeks for observers of AI in Europe: first, the issuance of new guidance around the AI Act, the most comprehensive regulatory framework for AI to date; second, the AI Action Summit, hosted by France and co-chaired by India. The stakes were high, with almost 100 countries and over 1,000 private sector and civil society representatives in attendance, and the ensuing debate delivered in spades. With the summit following the publication of the new guidance by a matter of days, part of the event concentrated on the tension between regulation and innovation.
The AI Action Summit provided a platform to ponder the question: does innovation trump regulation? It can be argued, however, that ignoring the risks inherent to AI will not necessarily accelerate innovation, and that Europe's current challenges have more to do with market fragmentation and a lack of venture capital. On this view, democratic governments need to enact practical measures rather than platitudes, focusing on the risks that misuse of AI models poses to social, political, and economic stability around the world.
The AI Act follows a four-tier risk-based system. The highest level, “unacceptable risk”, covers AI systems considered a clear threat to societal safety. Eight practices are included: harmful AI-based manipulation and deception; harmful AI-based exploitation of vulnerabilities; social scoring; individual criminal offence risk assessment or prediction; untargeted scraping of the internet or CCTV material to create or expand facial recognition databases; emotion recognition in workplaces and education institutions; biometric categorization to deduce certain protected characteristics; and real-time remote biometric identification for law enforcement purposes in publicly accessible spaces.
Provisions at this level, including the scraping of the internet to create facial recognition databases, came into force on February 2, 2025. These systems are now banned, and companies that do not comply face fines of up to EUR35 million or 7% of their global annual revenue, whichever is higher. Enforcement in the lower tiers, however, will have to wait until August 2025.
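To gauge the exposure, the penalty ceiling is simply the greater of the fixed amount and the revenue-based amount. A minimal arithmetic sketch in Python (the figures come from the Act; the function name is ours, for illustration only):

```python
def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Ceiling for non-compliance fines in the prohibited-practices tier:
    EUR 35 million or 7% of global annual revenue, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_revenue_eur)

# Below EUR 500 million in revenue, the fixed EUR 35 million ceiling dominates;
# above that crossover, the revenue-based ceiling takes over.
print(f"EUR {max_fine_eur(1_000_000_000):,.0f}")  # EUR 70,000,000
```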
The next level down, “high risk”, covers AI use cases that can pose serious risks to health, safety, or fundamental rights. Examples include AI used in critical infrastructure (e.g., transport), whose failure could put the life and health of citizens at risk; AI used in education, which may determine access to education and the course of someone's professional life (e.g., the scoring of exams); and AI-based safety components of products (e.g., AI applications in robot-assisted surgery). Although they will not be banned, high-risk AI systems will be subject to legal obligations before they can be put on the market, including adequate risk assessment and mitigation systems and detailed documentation providing all the information necessary to assess their compliance.
Below high risk sits a tier of limited, transparency-related risk, followed by “minimal or no risk”. The former entails lighter transparency obligations: developers and deployers must ensure that end users are aware they are interacting with AI, in practical cases such as chatbots and deepfakes. Explainability is also enshrined in the legislation, as AI companies may have to share information about why an AI system has made a prediction and taken an action. Systems posing minimal or no risk, the vast majority, face no additional obligations.
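Taken together, the tiers described above form a simple decision ladder, from banned outright to unregulated. A schematic sketch (illustrative only; the tier names follow the Act, while the one-line summaries paraphrase the paragraphs above):

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright (e.g., social scoring, untargeted face scraping)"
    HIGH = "allowed only after risk assessment, mitigation, and documentation"
    LIMITED = "transparency duties (e.g., disclose chatbots, label deepfakes)"
    MINIMAL = "no additional obligations"

# Print the ladder from most to least restrictive.
for tier in RiskTier:
    print(f"{tier.name}: {tier.value}")
```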
During the summit, the impact of this new guidance was discussed, with the US criticizing European regulation and warning against cooperation with China. The US and the UK refused to sign the summit declaration on ‘inclusive’ AI, a snub that dashed hopes for a unified approach to regulating the technology. The document was backed by 60 signatories, including France, China, India, Japan, Australia, and Canada. Companies such as OpenAI, which not long ago was urging the US Congress to regulate AI, have argued that the AI Act may hold Europe back when it comes to commercial development of AI.
The summit took place at a time of fast-paced change, with Chinese startup DeepSeek challenging the US through the recent release of its open-weight R1 model. Another open-source player, French startup Mistral AI, which had just released its Le Chat assistant, played a significant role. The company announced partnerships with the national employment agency in France, European defense company Helsing, and Stellantis, the car manufacturer that owns the Peugeot, Citroën, Fiat, and Jeep brands. The launch of the EUR200-billion InvestAI initiative, which includes financing for four AI gigafactories to train large AI models, was seen as part of a broader strategy to foster open and collaborative development of advanced AI models in the EU.
