Meta's Global Affairs Chief Slams EU AI Code, Calls It A Burden On 'Western Open Source AI Models'
Generated by AI agent · Harrison Brooks
Wednesday, February 5, 2025, 3:26 am ET · 2 min read
Meta's Global Affairs Chief, Joel Kaplan, has criticized the European Union's (EU) AI Code of Practice, saying it imposes "additional burdens on Western open source AI models." Kaplan made the remarks at Meta's EU Innovation Day event in Brussels, as reported by Bloomberg. The EU's fragmented and inconsistent regulatory approach to AI has raised concerns among leading companies, including Meta, SAP, and Spotify, which argue that it risks stifling innovation and leaving Europe at a competitive disadvantage in the global AI race.
An open letter signed by over 50 leading companies and researchers, including Meta, highlights the difficulties facing companies that develop open-source AI models. The letter argues that without clear rules allowing the use of European data to train these advanced models, Europe risks missing out on AI's transformative potential. Open models, freely available for modification and use by businesses, researchers, and public institutions, are seen as crucial for driving productivity, scientific discovery, and economic growth.
Meta, a staunch supporter of open-source AI, has defended its significant investment in the technology, particularly after the rise of a cheaper open-source AI model from a Chinese rival. Kaplan also announced that Meta will soon introduce Community Notes, a feature allowing users to fact-check content on its platforms in the U.S. The tool is expected to replace third-party fact-checkers on the company's social networks.

Meta's difficulty in rolling out its AI assistant in Europe is a useful case study. Like many companies, Meta has been using users' publicly posted data to train its AI models. However, it paused this practice in the EU and U.K. in June following complaints from privacy activists to European data protection authorities. The activists argue that Meta lacks a valid legal basis for using Europeans' data to train AI models and that it is violating the GDPR's purpose limitation principle.
Meta's stance is that if its models can't be trained to understand Europe-specific idioms, knowledge, and culture, the deployment isn't worth it. The company has also confirmed that, due to regulatory uncertainty, it will withhold the release of its upcoming AI models in the EU, including the anticipated multimodal Llama model, which can understand and interpret images.
Meta's refusal to comply with the EU's AI Code of Practice could intensify its ongoing regulatory tensions with the bloc. In November 2024, the European Commission fined Meta roughly $841 million for violating EU antitrust laws. In January 2023, the European Consumer Organisation (BEUC) raised concerns about Meta's "pay-or-consent" data policy, suggesting it might breach consumer protection laws, data privacy regulations, and the Digital Markets Act. Europe is Meta's second-largest market after the U.S., with ad revenue growth of 22% in Europe on a user geography basis, exceeding North America's 18%.
Meta's CFO, Susan Li, has acknowledged that regulatory headwinds, including those in the EU and U.S., could significantly affect the company's business and financial results, and the company says it continues to monitor an active regulatory landscape.
In conclusion, Meta's Global Affairs Chief, Joel Kaplan, has slammed the EU's AI Code of Practice, calling it a burden on 'Western open source AI models.' The company's concerns about the EU's regulatory environment and its impact on AI innovation and investment in Europe highlight the importance of regulatory certainty and harmonization for tech companies operating in the region. As Meta and other tech giants push back against the EU's regulatory approach, the future of AI development and innovation in Europe remains uncertain.
Editorial Disclosure and AI Transparency: Ainvest News uses advanced Large Language Model (LLM) technology to synthesize and analyze real-time market data. To ensure the highest standards of integrity, every article undergoes a rigorous human-in-the-loop verification process.
While AI assists with data processing and initial drafting, a professional Ainvest editorial staff member independently reviews, verifies, and approves all content to ensure its accuracy and compliance with the editorial standards of Ainvest Fintech Inc. This human oversight is designed to mitigate AI hallucinations and ensure financial context.
Investment Disclaimer: This content is provided for informational purposes only and does not constitute professional investment, legal, or financial advice. Markets carry inherent risks. Users are advised to conduct independent research or consult a certified financial advisor before making any decisions. Ainvest Fintech Inc. disclaims all liability for actions taken based on this information.
