Artificial intelligence (AI) and machine learning (ML) are among the most transformative technologies of our generation, helping us tackle business and societal problems, improve customer experiences, and spur innovation. Along with the widespread use and growing scale of AI comes the recognition that we must all build responsibly. At AWS, we think responsible AI encompasses a number of core dimensions, including fairness and bias, explainability, robustness, governance, transparency, privacy, and security.
Our commitment to developing AI and ML in a responsible way is integral to how we build our services, engage with customers, and drive innovation. We are also committed to providing customers with tools and resources to develop and use AI/ML responsibly, from enabling ML builders with a fully managed development environment to helping customers embed AI services into common business use cases.
Our customers want to know that the technology they are using was developed in a responsible way. They want resources and guidance to implement that technology responsibly at their own organization. And most importantly, they want to ensure that the technology they roll out is for everyone’s benefit, especially their end-users’. At AWS, we want to help them bring this vision to life.
To deliver the transparency that customers are asking for, we are excited to launch AWS AI Service Cards, a new resource to help customers better understand our AWS AI services. AI Service Cards are a form of responsible AI documentation that provide customers with a single place to find information on the intended use cases and limitations, responsible AI design choices, and deployment and performance optimization best practices for our AI services. They are part of a comprehensive development process we undertake to build our services in a responsible way that addresses fairness and bias, explainability, robustness, governance, transparency, privacy, and security. At AWS re:Invent 2022 we’re making the first three AI Service Cards available: Amazon Rekognition – Face Matching, Amazon Textract – AnalyzeID, and Amazon Transcribe – Batch (English-US).
Each AI Service Card contains four sections, covering basic concepts to help customers better understand the service, intended use cases and limitations, responsible AI design considerations, and best practices for deployment and performance optimization.
The content of the AI Service Cards addresses a broad audience of customers, technologists, researchers, and other stakeholders who seek to better understand key considerations in the responsible design and use of an AI service.
Our customers use AI in an increasingly diverse set of applications. The intended use cases and limitations section provides information about common uses for a service and helps customers assess whether the service is a good fit for their application. For example, in the Amazon Transcribe – Batch (English-US) Card we describe the service use case of transcribing general-purpose vocabulary spoken in US English from an audio file. If a company wants to automatically transcribe a domain-specific event, such as an international neuroscience conference, it can add custom vocabularies and custom language models that cover scientific terminology to increase the accuracy of the transcription.
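To illustrate, here is a minimal sketch of how such a batch transcription job could be started with the AWS SDK for Python (boto3), assuming a custom vocabulary and a custom language model have already been created; the resource names, bucket, and file paths below are hypothetical placeholders.

```python
import boto3

transcribe = boto3.client("transcribe")

# Start a batch transcription job that applies a custom vocabulary and a
# custom language model. Both resources must already exist in your account;
# the names and S3 URIs here are placeholders for illustration only.
transcribe.start_transcription_job(
    TranscriptionJobName="neuroscience-keynote-001",              # hypothetical job name
    LanguageCode="en-US",
    Media={"MediaFileUri": "s3://my-bucket/talks/keynote.wav"},   # placeholder input
    OutputBucketName="my-bucket",                                 # placeholder output bucket
    Settings={"VocabularyName": "neuroscience-terms"},            # hypothetical custom vocabulary
    ModelSettings={"LanguageModelName": "neuroscience-clm"},      # hypothetical custom language model
)

# Poll get_transcription_job until TranscriptionJobStatus is COMPLETED,
# then read the transcript JSON from the output bucket.
```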
In the design section of each AI Service Card, we explain key responsible AI design considerations across important areas, such as our test-driven methodology, fairness and bias, explainability, and performance expectations. We provide example performance results on an evaluation dataset that is representative of a common use case. This example is just a starting point though, as we encourage customers to test on their own datasets to better understand how the service will perform on their own content and use cases in order to deliver the best experience for their end customers. And this is not a one-time evaluation. To build in a responsible way, we recommend an iterative approach where customers periodically test and evaluate their applications for accuracy or potential bias.
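As one illustration of what such iterative testing could look like for a transcription use case, the sketch below computes word error rate (WER) against a customer-provided ground-truth transcript; the metric choice and the sample strings are our own illustrative assumptions, not part of the Service Card.

```python
# Compare service output against your own ground-truth transcripts with word
# error rate (WER), a common transcription accuracy metric.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance between reference and hypothesis, normalized by reference length."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Re-run this over a representative sample of your own audio whenever you change
# vocabularies, language models, or the upstream audio pipeline.
print(word_error_rate("patients with mild aphasia", "patients with mild of asia"))  # 0.5
```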
In the best practices for deployment and performance optimization section, we lay out key levers that customers should consider to optimize the performance of their application for real-world deployment. It’s important to explain how customers can optimize the performance of an AI system that acts as a component of their overall application or workflow to get the maximum benefit. For example, in the Amazon Rekognition Face Matching Card that covers adding face recognition capabilities to identity verification applications, we share steps customers can take to increase the quality of the face matching predictions incorporated into their workflow.
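Two of those levers in an identity verification flow are filtering out low-quality face images and choosing a similarity threshold suited to the application's risk tolerance. The sketch below shows how these might be applied with boto3; the bucket, object keys, and the specific threshold value are hypothetical, not prescriptions from the Service Card.

```python
import boto3

rekognition = boto3.client("rekognition")

# Compare a face from an ID document against a selfie, discarding faces that are
# too blurry, small, or poorly lit to match reliably, and only accepting matches
# above a high similarity threshold (99 is an illustrative value).
response = rekognition.compare_faces(
    SourceImage={"S3Object": {"Bucket": "id-docs", "Name": "applicant/id-card.jpg"}},
    TargetImage={"S3Object": {"Bucket": "id-docs", "Name": "applicant/selfie.jpg"}},
    QualityFilter="AUTO",        # filter out low-quality face detections
    SimilarityThreshold=99,      # only return matches at or above this similarity
)

if response["FaceMatches"]:
    similarity = response["FaceMatches"][0]["Similarity"]
    print(f"Faces matched with similarity {similarity:.1f}%")
else:
    # Route to a secondary check (for example, human review) rather than auto-rejecting.
    print("No match above threshold")
```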
Offering our customers the resources and tools they need to transform responsible AI from theory to practice is an ongoing priority for AWS. Earlier this year we launched our Responsible Use of Machine Learning guide that provides considerations and recommendations for responsibly using ML across all phases of the ML lifecycle. AI Service Cards complement our existing developer guides and blog posts, which provide builders with descriptions of service features and detailed instructions for using our service APIs. And with Amazon SageMaker Clarify and Amazon SageMaker Model Monitor, we offer capabilities to help detect bias in datasets and models and better monitor and review model predictions through automation and human oversight.
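As a rough illustration of how a bias check with SageMaker Clarify might be wired up, here is a minimal pre-training bias analysis using the SageMaker Python SDK; the dataset location, label and facet columns, role ARN, and the choice of bias metrics are all assumptions made for the example.

```python
from sagemaker import clarify

# A minimal sketch: scan a CSV training dataset in S3 for label imbalance with
# respect to a sensitive attribute before training. All names below are placeholders.
processor = clarify.SageMakerClarifyProcessor(
    role="arn:aws:iam::123456789012:role/ClarifyRole",   # placeholder execution role
    instance_count=1,
    instance_type="ml.m5.xlarge",
)

data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/train.csv",        # placeholder dataset path
    s3_output_path="s3://my-bucket/clarify-output/",
    label="approved",                                     # hypothetical binary label column
    dataset_type="text/csv",
)

bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],   # the favorable outcome
    facet_name="gender",             # hypothetical attribute to check for imbalance
)

# Report class imbalance (CI) and difference in proportions of labels (DPL).
processor.run_pre_training_bias(
    data_config=data_config,
    bias_config=bias_config,
    methods=["CI", "DPL"],
)
```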
At the same time, we continue to advance responsible AI across other key dimensions, such as governance. At re:Invent today we launched a new set of purpose-built tools to help customers improve governance of their ML projects with Amazon SageMaker Role Manager, Amazon SageMaker Model Cards, and Amazon SageMaker Model Dashboard. Learn more on the AWS News blog and website about how these tools help to streamline ML governance processes.
Education is another key resource that helps advance responsible AI. At AWS we are committed to building the next generation of developers and data scientists in AI through the AI and ML Scholarship Program and AWS Machine Learning University (MLU). This week at re:Invent we launched a new, public MLU course on fairness considerations and bias mitigation across the ML lifecycle. Taught by the same Amazon data scientists who train AWS employees on ML, this free course features 9 hours of lectures and hands-on exercises, and it's easy to get started.
We are excited to bring a new transparency resource to our customers and the broader community and provide additional information on the intended uses, limitations, design, and optimization of our AI services, informed by our rigorous approach to building AWS AI services in a responsible way. Our hope is that AI Service Cards will act as a useful transparency resource and an important step in the evolving landscape of responsible AI. AI Service Cards will continue to evolve and expand as we engage with our customers and the broader community to gather feedback and continually iterate on our approach.
Contact our group of responsible AI experts to start a conversation.