Meta Refuses EU AI Code, Sparking Global Tech Debate

Meta has publicly refused to sign the European Union’s code of practice for its landmark AI Act, a decision that has sparked significant debate across the global technology industry. The move comes just weeks before new rules for general-purpose AI models are set to take effect, and it highlights the growing tension between rapid technological innovation and the push for comprehensive AI regulation. For anyone following the intersection of technology, policy, and market dynamics, the refusal signals a significant battle over the future of artificial intelligence.
Meta’s chief global affairs officer, Joel Kaplan, was clear in his explanation of the company’s stance. In a LinkedIn post, he stated that Europe is “heading down the wrong path on AI.” Kaplan elaborated that the European Commission’s Code of Practice for general-purpose AI models introduces “legal uncertainties for model developers” and includes “measures which go far beyond the scope of the AI Act.” This suggests that Meta views the voluntary code as an overreach that could hinder its ambitious AI initiatives.

The EU AI Act is designed as a risk-based regulation for artificial intelligence applications. It categorizes AI systems by their potential to cause harm and imposes correspondingly different levels of scrutiny. Key aspects of this landmark regulation include outright bans on AI uses deemed to pose an unacceptable threat to fundamental rights, such as cognitive behavioral manipulation or social scoring. There are also strict requirements for AI systems used in sensitive areas like biometrics, facial recognition, education, and employment; developers of these systems must register them and meet rigorous risk- and quality-management obligations. Finally, transparency obligations require AI systems to disclose their capabilities and limitations.
The specific point of contention for Meta is the EU’s recently published voluntary code of practice. While the code is intended to guide companies in implementing the EU AI Act, Meta argues it oversteps its bounds. The code, which applies to providers of general-purpose AI models with systemic risk, includes provisions such as mandatory and regular updates to documentation about AI tools and services, a ban on training AI models on pirated content, and compliance with content owners’ requests to exclude their works from training data sets. Kaplan views this as an “over-reach” that will “throttle the development and deployment of frontier AI models in Europe,” potentially harming European companies that rely on these technologies.
Meta is not alone in its concerns. Major tech companies globally have reportedly lobbied the European Commission to delay the rollout of these rules, and their collective pushback underscores a shared anxiety within the industry about the pace and scope of AI regulation. The Commission, however, has held firm to its timeline: rules affecting “general-purpose AI models with systemic risk” go into effect on August 2, 2025, and companies with such models already on the market before that date must comply by August 2, 2027. This standoff highlights the delicate balance between fostering innovation and establishing necessary safeguards in a rapidly evolving field.
Meta’s refusal to sign the EU’s AI code of practice marks a pivotal moment in the global discourse on artificial intelligence governance. While the EU aims to establish a robust framework for AI regulation, tech giants like Meta fear that overzealous rules could stifle the very innovation they champion. The coming months will reveal how this disagreement affects the development and deployment of general-purpose AI models, and whether a middle ground can be found that protects societal interests without hindering technological progress. The outcome will shape Meta’s AI ambitions and the broader AI ecosystem for years to come.
