Fluence Challenges AI Cloud Giants with 85% Cheaper Decentralized GPUs

Generated by AI | AgentCoin World
Friday, Oct 3, 2025, 2:58 am ET
Summary

- Fluence launches GPU service with 85% lower costs than cloud providers, targeting AI workloads.

- Partnership with Spheron expands decentralized infrastructure to challenge AWS/Google/Microsoft dominance.

- Platform integrates enterprise-grade GPUs and blockchain nodes to democratize compute access.

- Vision 2026 aims to create global GPU marketplace with confidential computing and community SLA enforcement.

Fluence, a decentralized "cloudless" computing platform, announced on October 3, 2025 the launch of its GPU compute service for AI workloads, offering costs up to 85% lower than centralized cloud providers [1]. The expansion into GPU resources builds on Fluence's existing CPU-based virtual server marketplace, which generates over $1 million in annual recurring revenue (ARR) and has saved customers $3.5 million compared with traditional cloud services [1]. The platform now provides GPU containers for immediate deployment, with GPU virtual machines (VMs) and bare-metal support planned for the coming weeks [1]. The initiative is backed by a partnership with Spheron Network, a key compute provider, which aims to expand Fluence's global infrastructure network [1].
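To put the headline figure in concrete terms, the sketch below works through what "up to 85% lower" would mean for a month of continuous GPU use. The hourly rate is a hypothetical placeholder, not published Fluence or hyperscaler pricing; only the 85% savings fraction comes from the announcement.

```python
# Illustrative arithmetic only: the $2.00/hr rate is an assumed placeholder,
# not an actual price from Fluence, AWS, Google, or Microsoft.
centralized_rate = 2.00                     # $/GPU-hour (hypothetical)
savings_fraction = 0.85                     # "up to 85% lower" per the announcement
decentralized_rate = centralized_rate * (1 - savings_fraction)

hours = 24 * 30                             # one month of continuous usage
centralized_cost = centralized_rate * hours
decentralized_cost = decentralized_rate * hours

print(f"Centralized:   ${centralized_cost:,.2f}/month")
print(f"Decentralized: ${decentralized_cost:,.2f}/month")
print(f"Saved:         ${centralized_cost - decentralized_cost:,.2f}/month")
```

At the assumed rate, a single GPU running around the clock would drop from $1,440 to $216 per month; the absolute savings scale linearly with fleet size and usage.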

The move addresses rising compute costs and rigid pricing structures in the AI sector, where hyperscalers like AWS, Google, and Microsoft dominate. Fluence's decentralized model aggregates capacity from enterprise-grade data centers, enabling direct access to high-performance hardware at reduced cost. Co-founder Evgeny Ponomarev emphasized that the platform's integration of Spheron's decentralized GPUs removes barriers to AI development, allowing teams to scale projects from research to deployment [1]. Spheron co-founder Prashant Maurya noted that the collaboration provides "dependable, decentralized compute power" to accelerate AI workflows [1].

Fluence's GPU containers are optimized for fine-grained AI workloads, with the platform prioritizing flexibility for developers. The service supports a range of AI applications, including model inference, fine-tuning, and agentic computing. By leveraging Fluence's existing infrastructure, which hosts thousands of blockchain nodes for clients such as Antier, NEO, and RapidNode, the platform positions itself as a scalable alternative to centralized cloud providers [1]. The launch aligns with Fluence's Vision 2026, which envisions a global GPU-powered marketplace to meet growing AI demand [1].
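For readers unfamiliar with the container-based deployment model the service describes, a minimal sketch of what a GPU inference container typically looks like is shown below. This is purely illustrative: the base image choice and the `serve.py` entrypoint are assumptions for the example, not part of Fluence's documented tooling.

```dockerfile
# Hypothetical sketch of a GPU inference container, NOT a Fluence artifact.
# CUDA runtime base image gives the container GPU driver libraries.
FROM nvidia/cuda:12.4.1-runtime-ubuntu22.04

RUN apt-get update && apt-get install -y python3 python3-pip && \
    pip3 install torch transformers

WORKDIR /app
# serve.py (assumed) would load a model and expose an inference endpoint.
COPY serve.py .

CMD ["python3", "serve.py"]
```

An image like this is what "GPU containers for immediate deployment" generally means in practice: the provider supplies the GPU-attached host, and the developer ships a self-contained image such as the one above.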

The company's roadmap for 2025–2026 includes expanding GPU node onboarding, introducing confidential computing for data privacy, and integrating AI model deployment templates [3]. These developments aim to simplify AI workflows while ensuring security through trusted execution environments (TEEs).

Fluence also plans to launch a Guardian Program, in which community members monitor network uptime and enforce service-level agreements (SLAs) for decentralized infrastructure [3]. The initiative reflects Fluence's broader mission to democratize access to compute resources, countering the centralization risks posed by hyperscalers [4].

Industry analysts highlight the strategic significance of Fluence's approach. By combining DePIN (Decentralized Physical Infrastructure Network) principles with AI infrastructure, Fluence challenges the status quo of cloud dominance. The platform's partnerships with Spheron and dKloud, another DePIN project that integrates Fluence's virtual servers into enterprise workflows, underscore its focus on usability and scalability. Fluence's native token, FLT, underpins governance, staking, and coordination within the network, reinforcing its role as a tokenized compute asset [3].

Developers can access GPU containers via the Fluence Console at fluence.network/gpu, with documentation available at fluence.dev/docs. The launch marks a pivotal step in DePIN's evolution, offering an alternative to centralized cloud models while addressing AI's exponential compute demands [1].
References

[1] Fluence Launches Global and Affordable GPU Compute for AI
[3] Fluence AI Roadmap: Delivering A Neutral Compute Layer for the Future of Intelligence with FLT
[4] Fluence's 2025–2026 Roadmap: Building the AI Infrastructure We Deserve
