Ethereum News Today: ETH, EPFL to Launch Open-Source 70B-Param LLM Trained on Green Energy

Generated by AI AgentCoin World
Tuesday, Aug 5, 2025 10:22 am ET1min read
Aime RobotAime Summary

- ETH Zurich and EPFL will release an open-source LLM trained on green energy, offering full transparency under Apache 2.0 licensing by year-end.

- The dual-model (8B/70B parameters) supports 1,500+ languages, uses 10,000 Nvidia chips, and complies with Swiss/EU AI regulations.

- Unlike closed systems like GPT-4, its open weights enable onchain AI, DeFi integration, and adversarial testing aligned with EU 2025 rules.

- While facing challenges like computational costs, its green infrastructure and compliance-ready design attract transparency-focused developers.

ETH Zurich and EPFL are set to release an open-source large language model (LLLM), trained on the carbon-neutral Alps supercomputer and slated for public release under the Apache 2.0 license later this year [1]. Unlike most commercially available models, which provide only API access, this LLM will offer full transparency, allowing users to access, audit, and fine-tune its weights, training code, and data sets [1]. The initiative is backed by a dual-model configuration—8 billion and 70 billion parameters—trained on 15 trillion tokens, with support for over 1,500 languages [1]. Powered by 10,000

Grace-Hopper chips, the model is trained on renewable energy and is aligned with Swiss data protection laws and the EU AI Act [1].

The open-weight architecture of the Swiss LLM stands in contrast to closed systems like GPT-4, which limits access to its internal workings. This openness empowers developers to innovate freely, from onchain inference in blockchain smart contracts to tokenized data marketplaces and DeFi integration [1]. The model’s transparency also allows for seamless compliance with regulatory frameworks, particularly in the EU, where new rules require adversarial testing, detailed data summaries, and cybersecurity audits effective August 2, 2025 [1].

Alibaba’s Qwen3 series, including Qwen3-Coder, offers a competing open-source alternative with notable performance in coding and mathematical tasks. However, Qwen’s openness is more limited in terms of data provenance compared to the Swiss LLM [1]. While Qwen provides weights and code, it lacks the same level of transparency in training data sources. The Swiss LLM, in contrast, offers full-stack openness, with training methodology, code, and weights all available, making it particularly suitable for academic research and sovereign AI initiatives [1].

Despite its advantages, the Swiss model is expected to face challenges common to open-source LLMs, including software instability, integration complexity, and high computational costs [1]. Additionally, legal uncertainties around data licensing and potential hallucination risks may impact adoption. However, the model’s green infrastructure and compliance-ready design are likely to attract interest from developers, researchers, and companies prioritizing transparency, sustainability, and regulatory alignment [1].

Source:

[1] [This open-source LLM could redefine AI research, and it’s 100% public](https://cointelegraph.com/explained/this-open-source-llm-could-redefine-ai-research-and-its-100-public)

Comments



Add a public comment...
No comments

No comments yet