EU Sets Key Benchmarks for Regulating AI Development with Code of Practice

Ainvest · Wednesday, Jul 16, 2025, 1:20 pm ET · 2 min read

The European Union has set key benchmarks for regulating AI development through its voluntary Code of Practice, which covers transparency, copyright compliance, and systemic risk management. The code aims to help firms comply with the EU AI Act and to provide legal clarity. Relying on Big Tech to regulate itself is widely viewed as naive, given profit pressures and geopolitical competition, and other countries, such as India, have yet to enact dedicated AI laws.

The European Commission has published a voluntary Code of Practice to assist AI companies in complying with the EU AI Act, marking a significant milestone in the regulation of artificial intelligence. The code, which addresses transparency, copyright, and systemic risk management, aims to reduce compliance burdens and provide greater legal clarity for signatories. This article delves into the key aspects of the code and its implications for the AI industry.

The EU AI Act, approved in 2024, is the first comprehensive legal framework governing AI. It classifies AI applications into risk categories—unacceptable, high, limited, and minimal—and imposes obligations accordingly. The act also carries fines of up to 7% of a company’s global annual revenue for violations. The voluntary Code of Practice seeks to clarify legal obligations under the act, particularly for providers of general-purpose AI models like ChatGPT [1].

The code is structured into three core chapters: Transparency, Copyright, and Safety and Security. The Transparency chapter includes a model documentation form, a user-friendly tool to help companies demonstrate compliance with transparency requirements. The Copyright chapter offers practical solutions to meet the AI Act’s obligation to comply with EU copyright law. The Safety and Security chapter outlines concrete state-of-the-art practices for managing systemic risks, particularly for advanced AI systems [1].

The drafting process involved 13 independent experts and extensive consultations with stakeholders, including AI developers, industry organizations, academics, civil society organizations, and representatives of EU member states. The code will take effect on August 2, 2025, with enforcement starting in 2026 for new AI models and 2027 for existing ones. AI firms such as OpenAI and Google are reviewing the code, with a decision on signatory status pending [1].

While the code is voluntary, signatories stand to benefit from lower administrative burdens and greater legal certainty. The development matters because expecting Big Tech to regulate itself is widely regarded as naive given profit pressures and geopolitical competition, and countries such as India have yet to enact dedicated AI laws, leaving a gap in global AI governance [1].

In contrast, the European Commission has also shelved plans for a digital tax targeting Big Tech companies, opting instead for a broader range of levies. This shift reflects the EU's ongoing negotiations with the U.S. over a broader trade agreement. The proposed alternatives include a new EU-wide tax on tobacco products, a levy on discarded electronic equipment, and a corporate tax targeting companies with over €50 million in annual EU revenue [2].

The European Union's approach to AI regulation underscores its commitment to ensuring that AI systems used within its borders are safe, transparent, and respectful of fundamental human rights. As other countries continue to develop their AI frameworks, the EU's voluntary Code of Practice serves as a valuable guide for global AI governance.

References:
[1] https://www.pymnts.com/news/artificial-intelligence/2025/european-union-publishes-final-ai-code-guide-compliance-tech-companies/
[2] https://9to5mac.com/2025/07/14/eu-likely-to-shelve-digital-tax-plan-that-would-target-apple-and-other-big-tech-companies/


