AInvest Newsletter
Daily stocks & crypto headlines, free to your inbox



Fluence, a decentralized cloudless computing platform, announced on October 3, 2025, the launch of its GPU compute service for AI workloads, offering costs up to 85% lower than centralized cloud providers[1]. This expansion into GPU resources follows Fluence's existing CPU-based virtual server marketplace, which generates over $1 million in annual recurring revenue (ARR) and has saved customers $3.5 million compared to traditional cloud services[1]. The platform now provides GPU containers for immediate deployment, with GPU virtual machines (VMs) and bare metal support planned for the coming weeks[1]. The initiative is supported by a partnership with Spheron Network, a key compute provider, which aims to expand Fluence's global infrastructure network[1].
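To put the headline figure in concrete terms, the sketch below shows what "up to 85% lower" means for a single GPU billed hourly. The rates are assumed placeholders for illustration only, not Fluence's or any provider's published prices.

```python
# Hypothetical illustration of the claimed "up to 85% lower" GPU pricing.
# Both hourly rates are assumptions, not published prices.
centralized_rate = 2.00    # assumed $/GPU-hour at a centralized cloud
decentralized_rate = 0.30  # assumed $/GPU-hour on a decentralized marketplace

savings = 1 - decentralized_rate / centralized_rate
print(f"Savings: {savings:.0%}")  # → Savings: 85%

# Monthly cost of one GPU running continuously (~730 hours)
hours = 730
print(f"Centralized:   ${centralized_rate * hours:,.2f}")    # → $1,460.00
print(f"Decentralized: ${decentralized_rate * hours:,.2f}")  # → $219.00
```

At these assumed rates, a team running a handful of GPUs around the clock would see the kind of cumulative savings the article attributes to Fluence's existing CPU marketplace.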
The move addresses rising compute costs and rigid pricing structures in the AI sector, where hyperscalers like AWS, Google, and Microsoft dominate. Fluence's decentralized model aggregates capacity from enterprise-grade data centers, enabling direct access to high-performance hardware at reduced costs. Co-Founder Evgeny Ponomarev emphasized that the platform's integration of Spheron's decentralized GPUs removes barriers to AI development, allowing teams to scale projects from research to deployment[1]. Spheron's co-founder Prashant Maurya noted that the collaboration provides "dependable, decentralized compute power" to accelerate AI workflows[1].
Fluence's GPU containers are optimized for fine-grained AI workloads, with the platform prioritizing flexibility for developers. The service supports a range of AI applications, including model inference, fine-tuning, and agentic computing. By leveraging Fluence's existing infrastructure, which hosts thousands of blockchain nodes for clients such as Antier, NEO, and RapidNode, the platform positions itself as a scalable alternative to centralized cloud providers[1]. The launch aligns with Fluence's Vision 2026, which envisions a global GPU-powered marketplace to meet growing AI demand[1].
The company's roadmap for 2025–2026 includes expanding GPU node onboarding, introducing confidential computing for data privacy, and integrating AI model deployment templates[3]. These developments aim to simplify AI workflows while ensuring security through trusted execution environments (TEEs).
Fluence also plans to launch a Guardian Program, in which community members monitor network uptime and enforce service-level agreements (SLAs) for decentralized infrastructure[3]. The initiative reflects Fluence's broader mission to democratize access to compute resources, countering the centralization risks posed by hyperscalers[4].
Industry analysts highlight the strategic significance of Fluence's approach. By combining DePIN (Decentralized Physical Infrastructure Network) principles with AI infrastructure, Fluence challenges the status quo of cloud dominance. The platform's partnerships with Spheron and dKloud, another DePIN project integrating Fluence's virtual servers into enterprise workflows, underscore its focus on usability and scalability. Fluence's native token, FLT, underpins governance, staking, and coordination within the network, reinforcing its role as a tokenized compute asset[3].
Developers can access GPU containers via the Fluence Console at fluence.network/gpu, with documentation available at fluence.dev/docs. The launch marks a pivotal step in DePIN's evolution, offering an alternative to centralized cloud models while addressing AI's exponential compute demands[1].