AWS Parallel Computing Service Supports Capacity Blocks for Reserved GPU Access

Thursday, Sep 18, 2025, 11:19 am ET

AWS Parallel Computing Service (PCS) now supports Amazon EC2 Capacity Blocks for Machine Learning, enabling customers to reserve and schedule accelerated instances for future use, including NVIDIA Hopper-series GPU instances and AWS Trainium instances. Capacity Blocks let customers reserve blocks of instances for durations of up to six months, at discounted rates compared to on-demand pricing. Capacity Blocks support in PCS bridges the gap between flexible on-demand instances and long-term reservations, making it well suited to customers running ML or HPC workloads that require consistent access to powerful accelerators.
