Amazon SageMaker HyperPod features

Scale and accelerate generative AI model development across thousands of AI accelerators

Task governance

Amazon SageMaker HyperPod provides full visibility and control over compute resource allocation across generative AI model development tasks, such as training and inference. SageMaker HyperPod automatically manages task queues, ensuring the most critical tasks are prioritized while using compute resources more efficiently to reduce model development costs. In a few short steps, administrators can define priorities for different tasks and set limits on how many compute resources each team or project can use. Data scientists and developers then create tasks (for example, a training run, fine-tuning a particular model, or making predictions with a trained model) that SageMaker HyperPod runs automatically, adhering to the compute resource limits and priorities the administrator set. When a high-priority task must be completed immediately but all compute resources are in use, SageMaker HyperPod automatically frees up compute resources from lower-priority tasks. It also uses idle compute resources to accelerate waiting tasks. SageMaker HyperPod provides a dashboard where administrators can monitor and audit tasks that are running or waiting for compute resources.
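The priority-and-preemption behavior described above can be sketched as a small scheduler. This is a hedged illustration of the general technique, not the SageMaker HyperPod API; the `Task` and `Scheduler` names and fields are assumptions made for this example.

```python
import heapq
from dataclasses import dataclass
from itertools import count

# Illustrative sketch of priority scheduling with preemption (hypothetical
# names; not the SageMaker HyperPod API). Lower priority number = more urgent.

@dataclass
class Task:
    name: str
    priority: int
    gpus_needed: int

class Scheduler:
    def __init__(self, total_gpus):
        self.free_gpus = total_gpus
        self.running = []      # list of (priority, seq, task)
        self.waiting = []      # min-heap ordered by (priority, seq)
        self._seq = count()

    def submit(self, task):
        heapq.heappush(self.waiting, (task.priority, next(self._seq), task))
        self._dispatch()

    def _dispatch(self):
        # Use idle capacity first: start waiting tasks, most urgent first.
        while self.waiting and self.waiting[0][2].gpus_needed <= self.free_gpus:
            prio, seq, task = heapq.heappop(self.waiting)
            self.free_gpus -= task.gpus_needed
            self.running.append((prio, seq, task))
        # Then preempt lower-priority running tasks for a more urgent one.
        while self.waiting:
            prio, seq, task = self.waiting[0]
            victims = [r for r in self.running if r[0] > prio]
            reclaimable = self.free_gpus + sum(v[2].gpus_needed for v in victims)
            if task.gpus_needed > reclaimable:
                break  # even full preemption cannot fit this task yet
            heapq.heappop(self.waiting)
            for v in sorted(victims, key=lambda r: -r[0]):  # least urgent first
                if task.gpus_needed <= self.free_gpus:
                    break
                self.running.remove(v)
                self.free_gpus += v[2].gpus_needed
                heapq.heappush(self.waiting, v)  # preempted task waits again
            self.free_gpus -= task.gpus_needed
            self.running.append((prio, seq, task))
```

With an 8-GPU cluster fully occupied by a low-priority fine-tuning task, submitting a high-priority task preempts the fine-tuning run and requeues it, which mirrors the "frees up compute resources from lower-priority tasks" behavior in the text.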

Flexible training plans

To meet your training timelines and budgets, SageMaker HyperPod helps you create the most cost-efficient training plans that use compute resources from multiple blocks of compute capacity. Once you approve a training plan, SageMaker HyperPod automatically provisions the infrastructure and runs the training jobs on these compute resources without requiring any manual intervention, saving you weeks of effort otherwise spent aligning jobs with compute availability.
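The idea of assembling a plan from discrete blocks of capacity can be illustrated with a simple cost-greedy selection. The block data, function name, and greedy heuristic below are assumptions for illustration only, not the actual SageMaker HyperPod planner.

```python
# Hypothetical sketch: pick capacity blocks to cover a training job's
# GPU-hour requirement at the lowest cost. Each block is
# (name, gpu_hours, cost_usd). Not the real HyperPod planning logic.

def plan_training(required_gpu_hours, blocks):
    chosen, covered, cost = [], 0, 0.0
    # Greedy: cheapest cost per GPU-hour first.
    for name, gpu_hours, price in sorted(blocks, key=lambda b: b[2] / b[1]):
        if covered >= required_gpu_hours:
            break
        chosen.append(name)
        covered += gpu_hours
        cost += price
    if covered < required_gpu_hours:
        raise ValueError("not enough capacity available for this plan")
    return chosen, cost
```

For example, given blocks offering 100, 200, and 50 GPU-hours at different prices, the sketch covers a 250 GPU-hour requirement by combining the two cheapest blocks per GPU-hour.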

Optimized recipes

SageMaker HyperPod recipes help data scientists and developers of all skill levels benefit from state-of-the-art performance while quickly getting started training and fine-tuning publicly available generative AI models, including Llama 3.1 405B, Mixtral 8x22B, and Mistral 7B. Each recipe includes a training stack that has been tested by AWS, removing weeks of tedious work spent testing different model configurations. You can switch between GPU-based and AWS Trainium–based instances with a one-line recipe change, enable automated model checkpointing for improved training resiliency, and run workloads in production on SageMaker HyperPod.
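To make the "one-line change" concrete, here is a hedged sketch of what a recipe-style configuration might look like. The keys, values, and instance type names below are illustrative assumptions, not the actual SageMaker HyperPod recipe schema.

```python
# Illustrative recipe-style configuration (hypothetical schema, not the
# actual SageMaker HyperPod recipe format).

def make_recipe(instance_type="ml.p5.48xlarge"):
    return {
        "model": "llama-3.1-405b",                 # publicly available model
        "instance_type": instance_type,             # the single line changed to
                                                    # switch accelerator families
        "checkpointing": {"enabled": True,          # automated checkpointing
                          "interval_steps": 500},   # for training resiliency
        "precision": "bf16",
    }

gpu_recipe = make_recipe()                          # GPU-based instances
trn_recipe = make_recipe("ml.trn1.32xlarge")        # AWS Trainium instances
```

Everything except the instance type stays identical between the two configurations, which is the point of the recipe abstraction described above.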

High-performing distributed training

SageMaker HyperPod accelerates distributed training by automatically splitting your models and training datasets across AWS accelerators. It optimizes your training job for AWS network infrastructure and cluster topology, and streamlines model checkpointing by tuning how often checkpoints are saved, minimizing overhead during training.
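The checkpoint-frequency tradeoff mentioned above (checkpointing too often wastes time writing state; too rarely wastes time recomputing after a failure) is classically captured by Young's approximation for the optimal checkpoint interval. This generic formula is offered as an illustration of the tradeoff, not as HyperPod's internal logic.

```python
import math

# Young's approximation for the checkpoint interval that minimizes
# expected overhead: interval = sqrt(2 * C * MTBF), where C is the time
# to write one checkpoint and MTBF is the mean time between failures.
# A generic result, not SageMaker HyperPod's internal algorithm.

def optimal_checkpoint_interval(checkpoint_cost_s, mtbf_s):
    return math.sqrt(2 * checkpoint_cost_s * mtbf_s)

# Example: a 60 s checkpoint write with a 24 h mean time between failures
# suggests checkpointing roughly every 54 minutes.
interval = optimal_checkpoint_interval(60, 24 * 3600)
```

Intuitively, more frequent failures or cheaper checkpoints both push the optimal interval down, which is why the right frequency depends on cluster scale and hardware reliability.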

Advanced experimentation and observability tools

You can use built-in AI tools in SageMaker HyperPod to improve model performance. For example, managed TensorBoard in SageMaker helps you save development time by visualizing the model architecture to identify and remediate convergence issues. Integration with Amazon CloudWatch Container Insights provides deeper insights into cluster performance, health, and utilization. Managed MLflow in SageMaker helps you efficiently manage experiments at scale.

Workload scheduling and orchestration

SageMaker HyperPod is highly customizable through Slurm or Amazon Elastic Kubernetes Service (Amazon EKS): you can select and install any frameworks or tools you need. All clusters are provisioned with the instance type and count you choose, and they are retained for your use across workloads. With Amazon EKS support in SageMaker HyperPod, you can manage and operate clusters with a consistent Kubernetes-based administrator experience and efficiently run and scale workloads, from training to fine-tuning to inference. You can also share compute capacity and switch between Slurm and Amazon EKS for different types of workloads.

Automatic cluster health check and repair

If any instance becomes defective during a model development workload, SageMaker HyperPod automatically detects and addresses the infrastructure issue. To detect faulty hardware, SageMaker HyperPod regularly runs an array of health checks on accelerator and network integrity.
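The detect-and-replace loop described above can be sketched as follows. The check names, instance representation, and function names are hypothetical, chosen only to illustrate the pattern; they are not the SageMaker HyperPod health-check API.

```python
# Illustrative sketch of a cluster health-check-and-repair pass
# (hypothetical check names and data layout, not the HyperPod API).

HEALTH_CHECKS = ("accelerator_status", "nccl_bandwidth", "network_link")

def failed_checks(instance, checks=HEALTH_CHECKS):
    """Return the names of health checks the instance did not pass."""
    return [c for c in checks if not instance["checks"].get(c, False)]

def repair_cluster(instances):
    """Partition instances into healthy ones and ones flagged for
    replacement; a real system would swap failures for spare nodes."""
    healthy, replaced = [], []
    for inst in instances:
        if failed_checks(inst):
            replaced.append(inst["id"])
        else:
            healthy.append(inst["id"])
    return healthy, replaced
```

An instance failing any single check (for example, degraded interconnect bandwidth) is flagged, while fully passing instances keep running the workload undisturbed.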