Transform responsible AI from theory into practice
Promoting the safe and responsible development of AI as a force for good
Building AI responsibly at AWS
The rapid growth of generative AI brings promising new innovation, and at the same time raises new challenges. At AWS, we are committed to developing AI responsibly, taking a people-centric approach that prioritizes education, science, and our customers, to integrate responsible AI across the end-to-end AI lifecycle.
Core dimensions of responsible AI
Fairness
Considering impacts on different groups of stakeholders
Explainability
Understanding and evaluating system outputs
Privacy and security
Appropriately obtaining, using, and protecting data and models
Safety
Preventing harmful system output and misuse
Controllability
Having mechanisms to monitor and steer AI system behavior
Veracity and robustness
Achieving correct system outputs, even with unexpected or adversarial inputs
Governance
Incorporating best practices into the AI supply chain, including providers and deployers
Transparency
Enabling stakeholders to make informed choices about their engagement with an AI system
Services and tools
AWS offers services and tools to help you design, build, and operate AI systems responsibly.
Implementing safeguards in generative AI
Amazon Bedrock Guardrails helps you implement safeguards tailored to your generative AI applications and aligned with your responsible AI policies. Guardrails provides additional customizable safeguards on top of the native protections of FMs, delivering safety protections that are among the best in the industry by:
- Blocking up to 85% more harmful content
- Filtering over 75% of hallucinated responses for RAG and summarization workloads
- Enabling customers to customize and apply safety, privacy, and truthfulness protections within a single solution
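As a rough illustration of how these safeguards can be invoked from application code, the sketch below calls the Bedrock Runtime ApplyGuardrail API through boto3 to screen a piece of user input before it reaches a model. The guardrail ID, version, and region are placeholders for values from your own account; the same guardrail can also be attached directly to model invocation calls.

```python
import boto3

# Bedrock Runtime client; region and credentials come from your AWS configuration.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholder identifiers -- substitute the ID and version of a guardrail you created.
GUARDRAIL_ID = "your-guardrail-id"
GUARDRAIL_VERSION = "1"

def check_input(text: str) -> bool:
    """Return True if the guardrail allows the text, False if it intervenes."""
    response = bedrock_runtime.apply_guardrail(
        guardrailIdentifier=GUARDRAIL_ID,
        guardrailVersion=GUARDRAIL_VERSION,
        source="INPUT",  # evaluate user input; use "OUTPUT" for model responses
        content=[{"text": {"text": text}}],
    )
    return response["action"] != "GUARDRAIL_INTERVENED"

if __name__ == "__main__":
    print(check_input("How do I reset my account password?"))
```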
Foundation model (FM) evaluations
Model Evaluation on Amazon Bedrock helps you evaluate, compare, and select the best FMs for your specific use case based on custom metrics, such as accuracy, robustness, and toxicity. You can also use Amazon SageMaker Clarify and fmeval for model evaluation.
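As a minimal, hedged sketch of the comparison step (outside the managed evaluation jobs), the snippet below sends the same prompt to two candidate Bedrock models through the Converse API so their outputs can be reviewed side by side. The model IDs and prompt are examples; systematic scoring against metrics such as accuracy, robustness, or toxicity is what Model Evaluation on Amazon Bedrock or fmeval adds on top.

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Example candidate model IDs -- substitute models enabled in your account.
CANDIDATES = [
    "anthropic.claude-3-haiku-20240307-v1:0",
    "amazon.titan-text-express-v1",
]

def ask(model_id: str, prompt: str) -> str:
    """Send one prompt to one model and return the text of its reply."""
    response = bedrock_runtime.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 256, "temperature": 0.2},
    )
    return response["output"]["message"]["content"][0]["text"]

prompt = "Summarize the key points of our refund policy in two sentences."
for model_id in CANDIDATES:
    print(f"--- {model_id} ---")
    print(ask(model_id, prompt))
```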
Detecting bias and explaining predictions
Biases are imbalances in data or disparities in the performance of a model across different groups. Amazon SageMaker Clarify helps you mitigate bias by examining specific attributes to detect potential bias during data preparation, after model training, and in your deployed model.
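A minimal sketch of a pre-training bias check with the SageMaker Python SDK is shown below, assuming a CSV training set in S3 with a binary label and an "age" facet column; the bucket paths, column names, and thresholds are placeholders.

```python
from sagemaker import Session, clarify, get_execution_role

session = Session()
role = get_execution_role()  # assumes a SageMaker environment; otherwise pass an IAM role ARN

processor = clarify.SageMakerClarifyProcessor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

# Placeholder dataset layout: CSV with a binary label column and an "age" facet column.
data_config = clarify.DataConfig(
    s3_data_input_path="s3://your-bucket/train/train.csv",
    s3_output_path="s3://your-bucket/clarify/bias-report",
    label="approved",
    headers=["approved", "age", "income", "tenure"],
    dataset_type="text/csv",
)
bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],    # the favorable label value
    facet_name="age",
    facet_values_or_threshold=[40],   # group boundary to compare against
)

# Computes pre-training metrics (for example, class imbalance) and writes a report to S3.
processor.run_pre_training_bias(
    data_config=data_config,
    data_bias_config=bias_config,
    methods="all",
)
```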
Understanding a model’s behavior is important to develop more accurate models and make better decisions. Amazon SageMaker Clarify provides greater visibility into model behavior, so you can provide transparency to stakeholders, inform humans making decisions, and track whether a model is performing as intended.
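Continuing the sketch above, per-feature explanations can be produced with a SHAP configuration. This assumes a trained model already registered in SageMaker that accepts CSV input; the model name and baseline row are placeholders.

```python
# Reuses the imports, `processor`, and `data_config` from the bias sketch above.
model_config = clarify.ModelConfig(
    model_name="your-trained-model",  # a model registered in SageMaker (placeholder name)
    instance_type="ml.m5.xlarge",
    instance_count=1,
    accept_type="text/csv",
    content_type="text/csv",
)

# SHAP baseline: one or more reference rows of feature values (without the label column).
shap_config = clarify.SHAPConfig(
    baseline=[[35, 50000, 3]],
    num_samples=100,
    agg_method="mean_abs",
)

# Produces per-feature attributions and writes an explainability report to S3.
processor.run_explainability(
    data_config=data_config,
    model_config=model_config,
    explainability_config=shap_config,
)
```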
Monitoring and human review
Monitoring is important to maintain high-quality machine learning (ML) models and help ensure accurate predictions. Amazon SageMaker Model Monitor automatically detects and alerts you to inaccurate predictions from deployed models. And with Amazon SageMaker Ground Truth you can apply human feedback across the ML lifecycle to improve the accuracy and relevancy of models.
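A hedged sketch of setting up a data-quality monitoring schedule with the SageMaker Python SDK follows; it assumes an endpoint deployed with data capture enabled, and the endpoint name, S3 paths, and hourly schedule are placeholders you would adapt.

```python
from sagemaker import Session, get_execution_role
from sagemaker.model_monitor import DefaultModelMonitor, CronExpressionGenerator
from sagemaker.model_monitor.dataset_format import DatasetFormat

session = Session()
role = get_execution_role()

monitor = DefaultModelMonitor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    volume_size_in_gb=20,
    max_runtime_in_seconds=3600,
)

# 1) Profile the training data to establish baseline statistics and constraints.
monitor.suggest_baseline(
    baseline_dataset="s3://your-bucket/train/train.csv",  # placeholder path
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri="s3://your-bucket/monitor/baseline",
)

# 2) Check captured endpoint traffic against the baseline every hour.
monitor.create_monitoring_schedule(
    monitor_schedule_name="my-endpoint-data-quality",
    endpoint_input="your-endpoint-name",                  # placeholder endpoint
    output_s3_uri="s3://your-bucket/monitor/reports",
    statistics=monitor.baseline_statistics(),
    constraints=monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)
```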
Improving governance
ML Governance from Amazon SageMaker provides purpose-built tools that give you tighter control over and greater visibility into your ML models and projects. You can easily capture and share model information and stay informed on model behavior, like bias, all in one place.
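One of these governance tools is Amazon SageMaker Model Cards. The hedged sketch below creates a minimal draft card with boto3; the card name and description are placeholders, and only a small fragment of the model card content schema is shown.

```python
import json
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# Minimal model card content; the full schema also covers intended uses,
# training details, evaluation results, and more.
content = {
    "model_overview": {
        "model_description": "Credit-approval classifier trained on 2024 application data.",
    }
}

sm.create_model_card(
    ModelCardName="credit-approval-model-card",  # placeholder name
    Content=json.dumps(content),
    ModelCardStatus="Draft",                     # Draft | PendingReview | Approved | Archived
)
```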
AWS AI Service Cards
AI Service Cards are a resource to enhance transparency by providing you with a single place to find information on the intended use cases and limitations, responsible AI design choices, and performance optimization best practices for our AI services and models.