EU Sets Key Benchmarks for Regulating AI Development with Code of Practice
By Ainvest
Wednesday, Jul 16, 2025, 1:20 pm ET
The EU AI Act, approved in 2024, is the first comprehensive legal framework governing AI. It classifies AI applications into risk categories—unacceptable, high, limited, and minimal—and imposes obligations accordingly. The act also carries fines of up to 7% of a company’s global annual revenue for violations. The voluntary Code of Practice seeks to clarify legal obligations under the act, particularly for providers of general-purpose AI models like ChatGPT [1].
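The 7% revenue cap can be illustrated with a small calculation. The helper below is purely hypothetical (it is not an official calculator, and real penalties depend on the violation category and other statutory caps); it simply shows the scale of exposure the cap implies:

```python
# Illustrative only: the AI Act caps certain fines at 7% of a company's
# global annual revenue. This hypothetical helper just applies that cap.

def max_ai_act_fine(global_annual_revenue_eur: float, cap_rate: float = 0.07) -> float:
    """Return the upper bound of a fine under the 7% revenue cap."""
    return global_annual_revenue_eur * cap_rate

# A company with EUR 10 billion in global annual revenue could face
# fines of up to EUR 700 million.
print(f"{max_ai_act_fine(10_000_000_000):,.0f}")  # prints 700,000,000
```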
The code is structured into three core chapters: Transparency, Copyright, and Safety and Security. The Transparency chapter includes a model documentation form, a user-friendly tool to help companies demonstrate compliance with transparency requirements. The Copyright chapter offers practical solutions to meet the AI Act’s obligation to comply with EU copyright law. The Safety and Security chapter outlines concrete state-of-the-art practices for managing systemic risks, particularly for advanced AI systems [1].
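To make the Transparency chapter's "model documentation form" more concrete, the sketch below shows the kind of fields such a form might capture. The field names are illustrative assumptions, not the EU's official template:

```python
# Hypothetical sketch of a model documentation record; the field names
# are assumptions for illustration, not the official EU form.
from dataclasses import dataclass, field

@dataclass
class ModelDocumentation:
    provider: str                  # legal entity offering the model
    model_name: str
    intended_uses: list[str]       # use cases the provider supports
    training_data_summary: str     # high-level description of data sources
    copyright_policy_url: str      # where the provider's copyright policy lives
    systemic_risk_measures: list[str] = field(default_factory=list)

doc = ModelDocumentation(
    provider="ExampleAI",
    model_name="example-model-1",
    intended_uses=["text generation", "summarization"],
    training_data_summary="Public web text and licensed corpora",
    copyright_policy_url="https://example.com/copyright-policy",
)
print(doc.model_name)
```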
The drafting process involved 13 independent experts and extensive consultations with stakeholders, including AI developers, industry organizations, academics, civil society organizations, and representatives of EU member states. The code will take effect on August 2, 2025, with enforcement starting in 2026 for new AI models and 2027 for existing ones. AI firms such as OpenAI and Google are reviewing the code, with a decision on signatory status pending [1].
While the code is voluntary, signatories will benefit from lower administrative burdens and greater legal certainty. This development is significant, as the assumption that Big Tech will regulate itself is deemed naive given profit pressures and geopolitical competition. Other countries, such as India, have yet to develop dedicated AI laws, leaving a gap in global AI governance [1].
Separately, the European Commission has shelved plans for a digital tax targeting Big Tech companies, opting instead for a broader range of levies. The shift reflects the EU's ongoing negotiations with the U.S. over a wider trade agreement. The proposed alternatives include a new EU-wide tax on tobacco products, a levy on discarded electronic equipment, and a corporate tax on companies with over €50 million in annual EU revenue [2].
The European Union's approach to AI regulation underscores its commitment to ensuring that AI systems used within its borders are safe, transparent, and respectful of fundamental human rights. As other countries continue to develop their AI frameworks, the EU's voluntary Code of Practice serves as a valuable guide for global AI governance.
References:
[1] https://www.pymnts.com/news/artificial-intelligence/2025/european-union-publishes-final-ai-code-guide-compliance-tech-companies/
[2] https://9to5mac.com/2025/07/14/eu-likely-to-shelve-digital-tax-plan-that-would-target-apple-and-other-big-tech-companies/
