On 10 July 2025, the European Union unveiled a new code of practice on AI regulation, providing some of the first detail on how EU regulators plan to implement the AI Act passed last year. Lawyers from member firms of the European law firm association ADVANT offer their perspectives on this development and its implications below.
Comments from Paolo Lazzarino, Partner at ADVANT Nctm (Italy):
“The new Code of Practice released by the European Commission on July 10, 2025, marks a significant step toward transparency in artificial intelligence. One of its core elements is the requirement for developers of generative AI models to disclose what data was used to train them. This isn’t just a formality—it allows users, journalists, and other developers to understand the foundations behind AI-generated content. Think of it as a nutrition label for AI: knowing what a model was ‘fed’ helps to assess the reliability of what it produces.
“This focus on transparency aims to build public trust and increase corporate accountability. If we know whether a model’s data comes from particular media outlets or archives, we can better evaluate its potential biases and limitations. While the Code is voluntary, companies that adopt it demonstrate a commitment to responsible AI, anticipating the binding requirements that will come into force under the EU AI Act in the coming years.”
Comments from Paolo Gallarati, Partner at ADVANT Nctm (Italy):
“This will also help raise awareness of the fair processing of personal data in AI training models, with a view to preserving the right balance between the legitimate interests of AI developers and data subjects’ consent. In fact, big data and machine learning can pierce the veil of anonymised data, enabling the identification of individuals with technical means whose affordability was unimaginable just a few years ago.”
Comments from Giulio Uras, Counsel at ADVANT Nctm (Italy):
“From a compliance standpoint, the EU’s newly released code of practice for general-purpose AI systems reveals not only the technical direction of AI Act enforcement, but also the political and economic balancing act the Union is currently engaged in.
“While framed as a voluntary tool, the code is clearly intended to become the de facto compliance path for major AI providers. For legal and compliance professionals working within the AI Act’s risk-based framework, the immediate challenge is operational: how to ensure conformity and due diligence in an environment where upstream transparency — particularly in relation to model documentation and training data — remains discretionary and, in many cases, asymmetrical.
“Beyond the legal mechanics, however, the broader picture is harder to ignore. The EU’s attempt to ‘simplify’ compliance via soft law mechanisms is, in reality, a defensive maneuver. With geopolitical uncertainty increasing — and transatlantic tensions, industrial policy shifts, and global AI races accelerating — Europe’s regulatory approach risks becoming both overly cautious and structurally rigid. The code’s voluntary nature may ease the short-term burden on industry, but it also delays legal certainty and fosters fragmented compliance strategies across jurisdictions and actors.
“Moreover, the EU’s efforts to accommodate industry concerns, while politically expedient, arguably dilute the AI Act’s foundational promise of trustworthy and safe AI. In practice, this risks creating a compliance framework that is neither robustly enforceable nor truly innovation-friendly — particularly for EU-based firms that do not have the scale or leverage of the major GPAI developers.”