AWS Announces Customization Capabilities for Amazon Nova Using Amazon SageMaker AI

Thursday, Jul 17, 2025, 10:16 pm ET · 1 min read

AWS announces the release of Amazon Nova customization capabilities in Amazon SageMaker AI. Customers can now customize Nova Micro, Nova Lite, and Nova Pro across the model training lifecycle using pre-training, supervised fine-tuning, and alignment techniques. These techniques are provided as ready-to-use Amazon SageMaker recipes and can be easily deployed to Amazon Bedrock for on-demand and provisioned throughput inference. The customization techniques supported for Nova models include supervised fine-tuning, alignment, continued pre-training, and knowledge distillation. The optimal technique depends on the goal, the complexity of the use case, the availability of data and compute resources, and the desired performance, cost, and flexibility.

Amazon Web Services (AWS) has announced a suite of customization capabilities for Amazon Nova in Amazon SageMaker AI. These capabilities allow customers to tailor Nova Micro, Nova Lite, and Nova Pro models across the entire model training lifecycle, including pre-training, supervised fine-tuning, and alignment. The customization techniques are available as ready-to-use Amazon SageMaker recipes, which can be seamlessly deployed to Amazon Bedrock for both on-demand and provisioned throughput inference.

Amazon Nova foundation models are designed to power diverse generative AI use cases across various industries. As deployments scale, businesses often need models that reflect proprietary knowledge, workflows, and brand requirements. While prompt optimization and retrieval-augmented generation (RAG) can integrate general-purpose foundation models into applications, business-critical workflows often require model customization to meet specific accuracy, cost, and latency requirements.

AWS offers several customization techniques, including supervised fine-tuning (SFT), alignment, continued pre-training (CPT), and knowledge distillation. The choice of technique depends on the goals, complexity of the use case, availability of data and compute resources, and the desired performance, cost, and flexibility. Supervised fine-tuning can be implemented through parameter-efficient fine-tuning (PEFT) or full fine-tuning (FFT). Alignment can be achieved using direct preference optimization (DPO) or proximal policy optimization (PPO). Continued pre-training expands foundational model knowledge through self-supervised learning on large quantities of unlabeled proprietary data. Knowledge distillation transfers knowledge from a larger "teacher" model to a smaller, faster, and more cost-efficient "student" model.
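To make the distillation idea concrete, the following is a minimal sketch of a Hinton-style distillation loss: the KL divergence between the teacher's and student's temperature-softened output distributions. This is an illustration of the general technique only, not AWS's implementation, which is managed inside the SageMaker recipes; the function names and the temperature value are illustrative.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; a higher T softens the distribution,
    # exposing more of the teacher's "dark knowledge" about non-top classes.
    scaled = [l / T for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(v - m) for v in scaled]
    total = sum(exps)
    return [v / total for v in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) over softened distributions, scaled by T^2
    # so gradient magnitudes stay comparable across temperatures.
    p = softmax(teacher_logits, T)  # teacher's soft targets
    q = softmax(student_logits, T)  # student's predictions
    return T * T * sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))
```

When the student's logits match the teacher's, the loss is zero; the further the student's distribution drifts from the teacher's, the larger the penalty, which is what drives the smaller model to mimic the larger one.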

Early access customers, including Cosine AI, Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory (CSAIL), Volkswagen, Amazon Customer Service, and Amazon Catalog Systems Service, are already successfully using Amazon Nova customization capabilities. AWS provides comprehensive documentation and example implementations through the SageMaker HyperPod recipes repository on GitHub.

The customization techniques are currently available in the US East (N. Virginia) region. For more information, visit the Amazon Nova customization webpage and Amazon Nova user guide. To get started, users can access the Amazon SageMaker AI console.

