AWS Parallel Computing Service Supports Capacity Blocks for Reserved GPU Access

Thursday, September 18, 2025, 11:19 am ET · 1 min read

AWS Parallel Computing Service (PCS) now supports Amazon EC2 Capacity Blocks for ML, enabling customers to reserve and schedule accelerated instances for future use, including instances powered by NVIDIA Hopper-series GPUs and AWS Trainium. Capacity Blocks let customers reserve blocks of instances for durations of up to six months at discounted rates compared to on-demand pricing. Capacity Blocks support in PCS bridges the gap between flexible on-demand instances and long-term reservations, making it well suited to customers running ML or HPC workloads that require consistent access to powerful accelerators.

