This post is co-written with Dean Steel and Simon Gatie from Aviva.
With a presence in 16 countries and serving over 33 million customers, Aviva is a leading insurance company headquartered in London, UK. With a history dating back to 1696, Aviva is one of the oldest and most established financial services organizations in the world. Aviva’s mission is to help people protect what matters most to them—be it their health, home, family, or financial future. To achieve this effectively, Aviva harnesses the power of machine learning (ML) across more than 70 use cases. Previously, ML models at Aviva were developed using a graphical UI-driven tool and deployed manually. This approach led to data scientists spending more than 50% of their time on operational tasks, leaving little room for innovation, and posed challenges in monitoring model performance in production.
In this post, we describe how Aviva built a fully serverless MLOps platform based on the AWS Enterprise MLOps Framework and Amazon SageMaker to integrate DevOps best practices into the ML lifecycle. This solution establishes MLOps practices to standardize model development, streamline ML model deployment, and provide consistent monitoring. We illustrate the entire setup of the MLOps platform using the first real-world ML use case that Aviva onboarded onto it.
Approximately 47% of ML projects never reach production, according to Gartner. Despite the advancements in open source data science frameworks and cloud services, deploying and operating these models remains a significant challenge for organizations. This struggle highlights the importance of establishing consistent processes, integrating effective monitoring, and investing in the necessary technical and cultural foundations for a successful MLOps implementation.
For a company like Aviva, which handles approximately 400,000 insurance claims annually and pays out about £3 billion in settlements, the pressure to deliver a seamless digital experience to customers is immense. To meet this demand amid rising claim volumes, Aviva recognizes the need for increased automation through AI technology. Developing and deploying more ML models is therefore crucial to support their growing workload.
To prove the platform could handle the onboarding and industrialization of ML models, Aviva picked their Remedy use case as the first project. This use case concerns a claim management system that employs a data-driven approach to determine whether submitted car insurance claims qualify as total loss or repair cases, as illustrated in the following diagram.
The successful deployment and evaluation of the Remedy use case on the MLOps platform was intended to serve as a blueprint for future use cases, providing maximum efficiency by using templated solutions.
To handle the complexity of operationalizing ML models at scale, AWS provides an MLOps offering called the AWS Enterprise MLOps Framework, which can be used for a wide variety of use cases. The offering encapsulates a best practices approach to building and managing MLOps platforms, based on the consolidated knowledge gained from a multitude of customer engagements carried out by AWS Professional Services over the last 5 years. The proposed baseline architecture can be logically divided into four building blocks that are sequentially deployed into the provided AWS accounts, as illustrated in the following diagram.
The building blocks are as follows:
The framework implements the infrastructure deployment from a primary governance account to separate development, staging, and production accounts. Developers can use the AWS Cloud Development Kit (AWS CDK) to customize the solution to align with their company’s specific account setup. Aviva adapted the framework to a three-account structure (development, staging, and production), as depicted in the following architecture diagram. The governance components, which facilitate model promotion with consistent processes across accounts, are integrated into the development account.
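As an illustration, the following minimal AWS CDK sketch (in Python) shows how such a multi-account deployment might be wired up. The account IDs, region, and stack contents are placeholders, not Aviva’s actual configuration.

```python
# Minimal AWS CDK sketch of a three-account MLOps setup.
# Account IDs, the region, and stack contents are illustrative placeholders.
import aws_cdk as cdk
from aws_cdk import Stack
from constructs import Construct


class MlopsInfraStack(Stack):
    """Placeholder for the MLOps infrastructure deployed into each account."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # SageMaker resources, buckets, pipelines, etc. would be defined here.


app = cdk.App()

# One stack instance per target account; the development account also hosts
# the governance components that drive promotion into staging and production.
for name, account in {
    "Dev": "111111111111",      # development (hosts governance components)
    "Staging": "222222222222",  # staging
    "Prod": "333333333333",     # production
}.items():
    MlopsInfraStack(
        app,
        f"MlopsInfra{name}",
        env=cdk.Environment(account=account, region="eu-west-1"),
    )

app.synth()
```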
The processing, training, and inference code for the Remedy use case was developed by Aviva’s data science team in SageMaker Studio, a cloud-based environment designed for collaborative work and rapid experimentation. When experimentation is complete, the resulting seed code is pushed to an AWS CodeCommit repository, initiating the CI/CD pipeline for the construction of a SageMaker pipeline. This pipeline comprises a series of interconnected steps for data processing, model training, parameter tuning, model evaluation, and the registration of the generated models in the Amazon SageMaker Model Registry.
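The following is a condensed sketch of what such a SageMaker pipeline definition might look like using the SageMaker Python SDK. The script name, S3 paths, and the choice of a built-in XGBoost container are illustrative assumptions, not the actual Remedy implementation.

```python
# Sketch of a SageMaker pipeline with processing and training steps.
# Script name, bucket paths, and algorithm choice are assumptions.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import ProcessingStep, TrainingStep

role = sagemaker.get_execution_role()
session = sagemaker.Session()

# Data processing step running a (hypothetical) preprocessing script.
processor = SKLearnProcessor(framework_version="1.2-1", role=role,
                             instance_type="ml.m5.xlarge", instance_count=1)
process_step = ProcessingStep(name="PrepareClaimsData", processor=processor,
                              code="preprocessing.py")

# Training step using a built-in algorithm as a stand-in for the actual model.
estimator = Estimator(
    image_uri=sagemaker.image_uris.retrieve(
        "xgboost", session.boto_region_name, version="1.7-1"),
    role=role, instance_type="ml.m5.xlarge", instance_count=1,
)
train_step = TrainingStep(
    name="TrainRemedyModel", estimator=estimator,
    inputs={"train": TrainingInput("s3://example-bucket/remedy/train")},
    depends_on=[process_step],
)

# Create or update the pipeline, then start a run.
pipeline = Pipeline(name="RemedyTrainingPipeline",
                    steps=[process_step, train_step])
pipeline.upsert(role_arn=role)
pipeline.start()
```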
Amazon SageMaker Automatic Model Tuning enabled Aviva to use advanced tuning strategies and overcome the complexities associated with implementing parallelism and distributed computing. The initial step involved a hyperparameter tuning process (Bayesian optimization), during which approximately 100 model variations were trained (5 steps with 20 models trained concurrently in each step). This feature integrates with Amazon SageMaker Experiments to provide data scientists with insights into the tuning process. The optimal model is then evaluated in terms of accuracy, and if it exceeds a use case-specific threshold, it is registered in the SageMaker Model Registry. A custom approval step was constructed so that only Aviva’s lead data scientist can permit the deployment of a model through a CI/CD pipeline to a SageMaker real-time inference endpoint in the development environment for further testing and subsequent promotion to the staging and production environments.
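The tuning setup described above maps naturally onto the SageMaker Python SDK’s HyperparameterTuner. The following sketch reflects the numbers from the text (Bayesian strategy, roughly 100 jobs, 20 in parallel); the objective metric, hyperparameter ranges, and S3 paths are assumed for illustration.

```python
# Sketch of Bayesian hyperparameter tuning: ~100 candidates, 20 in parallel.
# Objective metric, ranges, and data paths are illustrative assumptions.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.tuner import (ContinuousParameter, HyperparameterTuner,
                             IntegerParameter)

session = sagemaker.Session()
estimator = Estimator(
    image_uri=sagemaker.image_uris.retrieve(
        "xgboost", session.boto_region_name, version="1.7-1"),
    role=sagemaker.get_execution_role(),
    instance_type="ml.m5.xlarge",
    instance_count=1,
)

tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:auc",  # assumed objective metric
    objective_type="Maximize",
    hyperparameter_ranges={
        "eta": ContinuousParameter(0.01, 0.3),
        "max_depth": IntegerParameter(3, 10),
    },
    strategy="Bayesian",
    max_jobs=100,          # ~100 model variations in total
    max_parallel_jobs=20,  # 20 models trained concurrently per step
)
tuner.fit({"train": "s3://example-bucket/remedy/train",
           "validation": "s3://example-bucket/remedy/validation"})
print(tuner.best_training_job())  # best candidate, evaluated before registration
```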
To realize the actual business value of Aviva’s ML model, it was necessary to integrate the inference logic with Aviva’s internal business systems. The inference workflow is responsible for combining the model predictions, external data, and business logic to generate a recommendation for claims handlers. The recommendation takes one of three possible outcomes.
The following diagram illustrates the workflow.
The workflow starts with a request to an API endpoint hosted on Amazon API Gateway originating from a claims management system, which invokes an AWS Step Functions workflow that uses AWS Lambda functions to complete the required steps.
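As one illustration, a single Lambda step in such a workflow might call the model endpoint and apply business rules to its prediction, along the lines of the following sketch. The endpoint name, payload shape, thresholds, and outcome labels are all hypothetical.

```python
# Sketch of one Lambda step: score the claim with the model endpoint, then
# apply business rules. Endpoint name, payload, and thresholds are hypothetical.
import json

import boto3

runtime = boto3.client("sagemaker-runtime")


def handler(event, context):
    # Score the claim features passed in by the Step Functions state.
    response = runtime.invoke_endpoint(
        EndpointName="remedy-endpoint",  # hypothetical endpoint name
        ContentType="application/json",
        Body=json.dumps(event["claim_features"]),
    )
    score = json.loads(response["Body"].read())["total_loss_probability"]

    # Illustrative business logic combining the score with claim data;
    # the actual outcome labels and thresholds are placeholders.
    if score >= 0.8:
        recommendation = "TOTAL_LOSS"
    elif score <= 0.2:
        recommendation = "REPAIR"
    else:
        recommendation = "REFER_TO_HANDLER"  # ambiguous cases go to a human

    return {"recommendation": recommendation, "score": score}
```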
The ability to obtain real-time access to detailed data for each state machine run and task is critical for effective oversight and enhancement of the system. This includes providing claim handlers with the comprehensive details behind decision summaries, such as model outputs, external API calls, and applied business logic, to make sure recommendations are based on accurate and complete information. Snowflake is Aviva’s preferred data platform, and it receives data from Step Functions state machine runs through Amazon CloudWatch Logs. A series of filters screens for data pertinent to the business. This data is then sent to an Amazon Data Firehose delivery stream and relayed to an Amazon Simple Storage Service (Amazon S3) bucket, which is accessed by Snowflake. The data generated by all runs is used by Aviva business analysts to create dashboards and management reports, facilitating insights such as monthly views of total losses by region or average repair costs by vehicle manufacturer and model.
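A minimal sketch of how such log forwarding could be configured with boto3 follows: a subscription filter on the state machine’s log group forwards matching events to the Firehose delivery stream. The log group name, filter pattern, and ARNs are placeholders.

```python
# Sketch: subscribe a Step Functions log group to a Firehose delivery stream.
# Log group name, filter pattern, and ARNs are placeholders.
import boto3

logs = boto3.client("logs")

logs.put_subscription_filter(
    logGroupName="/aws/vendedlogs/states/remedy-workflow",  # hypothetical
    filterName="remedy-to-snowflake",
    # Keep only events pertinent to the business, e.g. completed task states.
    filterPattern='{ $.type = "TaskStateExited" }',
    destinationArn=("arn:aws:firehose:eu-west-1:111111111111:"
                    "deliverystream/remedy-logs"),
    roleArn="arn:aws:iam::111111111111:role/CWLtoFirehoseRole",
)
```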
The described solution processes personally identifiable information (PII), making customer data protection the core security focus of the solution. Customer data is protected through networking restrictions: processing runs inside the VPC, where data is logically separated in transit. The data is encrypted in transit between processing steps and encrypted at rest using AWS Key Management Service (AWS KMS). Access to production customer data is restricted on a need-to-know basis, and only authorized parties are allowed to access the production environment where this data resides.
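For illustration, the following sketch shows how these controls might be applied to a SageMaker processing job using the SageMaker Python SDK: VPC-only networking, inter-container traffic encryption, and KMS keys for volumes and outputs. The subnet, security group, role, and key identifiers are placeholders.

```python
# Sketch: VPC networking and KMS encryption on a SageMaker processing job.
# Subnet, security group, role, and key identifiers are placeholders.
from sagemaker.network import NetworkConfig
from sagemaker.sklearn.processing import SKLearnProcessor

network = NetworkConfig(
    enable_network_isolation=True,          # no outbound internet access
    subnets=["subnet-0abc1234"],            # run inside the VPC
    security_group_ids=["sg-0abc1234"],
    encrypt_inter_container_traffic=True,   # encrypt data in transit
)

processor = SKLearnProcessor(
    framework_version="1.2-1",
    role="arn:aws:iam::111111111111:role/SageMakerExecutionRole",
    instance_type="ml.m5.xlarge",
    instance_count=1,
    network_config=network,
    volume_kms_key="alias/mlops-data",  # KMS key for EBS volumes (at rest)
    output_kms_key="alias/mlops-data",  # KMS key for job outputs in S3
)
```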
The second security focus of the solution is protecting Aviva’s intellectual property. The code the data scientists and engineers work on is stored securely in the dev AWS account, private to Aviva, in CodeCommit Git repositories. The training data and the artifacts of the trained models are stored securely in S3 buckets in the dev account, protected by AWS KMS encryption at rest, with AWS Identity and Access Management (IAM) policies restricting access to the buckets to only the authorized SageMaker endpoints. The code pipelines are also private to the account, and reside in the customer’s AWS environment.
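A bucket policy along these lines could look like the following sketch, which denies object reads to every principal except a designated SageMaker execution role. The bucket name and role ARN are placeholders, and a production policy would typically carve out additional administrative roles.

```python
# Sketch: restrict artifact-bucket reads to a single SageMaker execution role.
# Bucket name and role ARN are placeholders.
import json

import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllButSageMakerRole",
        "Effect": "Deny",
        # Deny everyone except the designated execution role.
        "NotPrincipal": {
            "AWS": "arn:aws:iam::111111111111:role/SageMakerExecutionRole"
        },
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::mlops-dev-artifacts/*",
    }],
}

boto3.client("s3").put_bucket_policy(
    Bucket="mlops-dev-artifacts", Policy=json.dumps(policy)
)
```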
The auditability of the workflows is provided by logging the steps of inference and decision-making in CloudWatch Logs. The logs are likewise encrypted at rest with AWS KMS, and are configured with a lifecycle policy that guarantees availability of audit information for the required compliance period. To maintain the security of the project and operate it securely, the accounts are enabled with Amazon GuardDuty and AWS Config, and AWS CloudTrail is used to monitor activity within the accounts. The custom software that must be monitored for security vulnerabilities resides primarily in the Lambda functions implementing the business workflows; this processing code is written mostly in Python, using libraries that are periodically updated.
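As a sketch, applying such a retention policy to a log group takes a single boto3 call; the log group name and retention period below are illustrative, and the actual period would match Aviva’s compliance requirements.

```python
# Sketch: set a retention (lifecycle) policy on the audit log group.
# Log group name and retention period are illustrative placeholders.
import boto3

logs = boto3.client("logs")
logs.put_retention_policy(
    logGroupName="/aws/vendedlogs/states/remedy-workflow",  # hypothetical
    retentionInDays=2557,  # ~7 years; an illustrative compliance window
)
```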
This post provided an overview of the partnership between Aviva and AWS, which resulted in the construction of a scalable MLOps platform. This platform was developed using the open source AWS Enterprise MLOps Framework, which integrated DevOps best practices into the ML lifecycle. Aviva is now capable of replicating consistent processes and deploying hundreds of ML use cases in weeks rather than months. Furthermore, Aviva has transitioned entirely to a pay-as-you-go model, resulting in a 90% reduction in infrastructure costs compared to the company’s previous on-premises ML platform solution.
Explore the AWS Enterprise MLOps Framework on GitHub and learn more about MLOps on Amazon SageMaker to see how it can accelerate your organization’s MLOps journey.