Amazon SageMaker Studio features

Perform end-to-end ML development with a fully managed IDE
JupyterLab

Launch fully managed JupyterLab in seconds. Use the latest web-based interactive development environment for notebooks, code, and data. Its flexible and extensible interface allows you to easily configure and arrange machine learning (ML) workflows, and you can use the AI-powered inline coding companion to quickly author, debug, explain, and test code.
Code Editor, based on Code-OSS

Use the lightweight and powerful code editor, and boost productivity with its familiar shortcuts, terminal, debugger, and refactoring tools. Choose from thousands of Visual Studio Code–compatible extensions available in the Open VSX extension gallery to enhance your development experience. Enable version control and cross-team collaboration through GitHub repositories. Use the most popular ML frameworks out of the box with the preconfigured Amazon SageMaker distribution. Seamlessly integrate with AWS services through the AWS Toolkit for Visual Studio Code, including built-in access to AWS data sources such as Amazon Simple Storage Service (Amazon S3) and Amazon Redshift, and increase coding efficiency with Amazon CodeWhisperer, the AI-powered coding companion.
RStudio

Use the fully managed IDE for R with a console; a syntax-highlighting editor that supports direct code execution; and tools for plotting, history, debugging, and workspace management. Use preconfigured R packages such as devtools, tidyverse, shiny, and rmarkdown to generate insights, and publish them using RStudio Connect. You can seamlessly switch among the RStudio, JupyterLab, and Code Editor IDEs for R and Python development.
Access and evaluate FMs

Quickly get started with generative AI development using hundreds of publicly available FMs and prebuilt solutions that can be deployed in just a few steps from Amazon SageMaker JumpStart. Evaluate, compare, and select the best FMs for your use case within minutes using Amazon SageMaker Clarify, based on criteria such as accuracy, robustness, toxicity, and bias. Get started with FM evaluations using curated prompt datasets, or extend the evaluation with your own custom prompt datasets. Human evaluations can be used for more subjective dimensions such as creativity and style.
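At its core, a prompt-dataset evaluation scores a model's responses against expected answers. The sketch below illustrates that idea locally; it is not the SageMaker Clarify API, and `toy_model` and the dataset are hypothetical stand-ins for a deployed FM endpoint and a curated prompt dataset:

```python
# Minimal sketch of evaluating a model against a prompt dataset, the kind of
# workflow SageMaker Clarify automates. `toy_model` is a hypothetical
# stand-in for a real FM endpoint.

def toy_model(prompt: str) -> str:
    # Trivial "model": returns a canned answer when it recognizes a keyword.
    answers = {"capital of France": "Paris", "2 + 2": "4"}
    for key, value in answers.items():
        if key in prompt:
            return value
    return "unknown"

def evaluate(model, dataset):
    """Score a model on (prompt, expected) pairs; returns accuracy."""
    correct = sum(1 for prompt, expected in dataset if model(prompt) == expected)
    return correct / len(dataset)

dataset = [
    ("What is the capital of France?", "Paris"),
    ("What is 2 + 2?", "4"),
    ("Who wrote Hamlet?", "Shakespeare"),
]

accuracy = evaluate(toy_model, dataset)
print(f"accuracy: {accuracy:.2f}")  # prints accuracy: 0.67
```

The same loop generalizes to other automated dimensions (robustness, toxicity) by swapping the exact-match comparison for the appropriate scorer.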
Prepare data at scale

Simplify your data workflows with a unified environment for data engineering, analytics, and ML. Run Spark jobs interactively using Amazon EMR and AWS Glue serverless Spark environments, and monitor them using the Spark UI. Use the built-in data preparation capability to visualize data, identify data quality issues, and apply recommended solutions to improve data quality. Automate your data preparation workflows quickly by scheduling your notebook as a job in a few steps. Store, share, and manage ML model features in a central feature store.
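Conceptually, a feature store keys each record by a record identifier plus an event time and serves the latest value per identifier. A minimal in-memory sketch of that behavior (this is an illustration only, not the SageMaker Feature Store API):

```python
# In-memory sketch of feature-store semantics: records are keyed by a record
# identifier and an event time, and reads return the latest record per key.
# Illustrative only; not the SageMaker Feature Store API.

class TinyFeatureStore:
    def __init__(self):
        self._records = {}  # record_id -> (event_time, features)

    def put_record(self, record_id, event_time, features):
        # Keep only the newest record per identifier (online-store behavior).
        current = self._records.get(record_id)
        if current is None or event_time >= current[0]:
            self._records[record_id] = (event_time, features)

    def get_record(self, record_id):
        entry = self._records.get(record_id)
        return None if entry is None else entry[1]

store = TinyFeatureStore()
store.put_record("customer-42", event_time=1, features={"avg_order_value": 31.5})
store.put_record("customer-42", event_time=2, features={"avg_order_value": 35.0})
print(store.get_record("customer-42"))  # {'avg_order_value': 35.0}
```

A managed feature store adds what this sketch omits: durable offline history for training, low-latency online reads for inference, and sharing across teams.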

Train models quickly with optimized performance

Amazon SageMaker offers high-performing distributed training libraries and built-in tools to optimize model performance. You can automatically tune your models and visualize and correct performance issues before deploying the models to production.
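The idea behind automatic model tuning can be sketched as a search over a hyperparameter space that scores each candidate and keeps the best. Below is a hedged illustration using random search; the `objective` function is a hypothetical stand-in for launching a training job and returning a validation metric:

```python
import random

# Sketch of automatic hyperparameter tuning as random search. `objective` is
# a hypothetical stand-in for a training run that returns a validation score
# (higher is better); here its maximum is at lr=0.1, batch_size=64.

def objective(learning_rate: float, batch_size: int) -> float:
    return -((learning_rate - 0.1) ** 2) - ((batch_size - 64) / 64) ** 2

def random_search(n_trials: int, seed: int = 0):
    rng = random.Random(seed)
    best_score, best_params = float("-inf"), None
    for _ in range(n_trials):
        params = {
            "learning_rate": rng.uniform(0.001, 0.3),
            "batch_size": rng.choice([16, 32, 64, 128]),
        }
        score = objective(**params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

params, score = random_search(n_trials=50)
print(params)
```

Managed tuning replaces this loop with parallel training jobs and smarter search strategies (such as Bayesian optimization) over the same kind of parameter ranges.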

Deploy models for optimal inference performance and cost 

Deploy your models with a broad selection of ML infrastructure and deployment options to help meet your ML inference needs. SageMaker inference is fully managed and integrates with MLOps tools, so you can scale your model deployment, reduce inference costs, manage models more effectively in production, and reduce operational burden.
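Choosing among those deployment options usually comes down to the workload profile. The sketch below maps a few common profiles onto SageMaker's inference options (real-time, serverless, asynchronous, and batch transform); the thresholds are illustrative assumptions, not AWS guidance:

```python
# Illustrative mapping from workload profile to a SageMaker inference option.
# The 6 MB cutoff and traffic labels are assumptions for this sketch.

def pick_inference_option(traffic: str, payload_mb: float, latency_sensitive: bool) -> str:
    if traffic == "offline":
        return "batch transform"         # score a whole dataset, no endpoint
    if payload_mb > 6 or not latency_sensitive:
        return "asynchronous inference"  # queue large or slow requests
    if traffic == "intermittent":
        return "serverless inference"    # scale to zero between bursts
    return "real-time inference"         # steady traffic, low latency

print(pick_inference_option("steady", payload_mb=0.5, latency_sensitive=True))
# real-time inference
```

In practice the decision also weighs instance type, cost, and autoscaling behavior, but the traffic pattern and payload size are a useful first cut.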

Deliver high-performance production ML models

SageMaker provides purpose-built MLOps and governance tools to help you automate, standardize, and streamline documentation processes across the ML lifecycle. Using SageMaker MLOps tools, you can easily train, test, troubleshoot, deploy, and govern ML models at scale while maintaining model performance in production.