
Improve employee productivity using generative AI with Amazon Bedrock

The Employee Productivity GenAI Assistant Example is a practical AI-powered solution designed to streamline writing tasks, allowing teams to focus on creativity rather than repetitive content creation. Built on AWS technologies like AWS Lambda, Amazon API Gateway, and Amazon DynamoDB, this tool automates the creation of customizable templates and supports both text and image inputs. Using generative AI models such as Anthropic’s Claude 3 from Amazon Bedrock, it provides a scalable, secure, and efficient way to generate high-quality content. Whether you’re new to AI or an experienced user, the simplified interface lets you quickly harness the sample code’s capabilities, enhancing your team’s writing and freeing them to focus on more valuable tasks.

By using Amazon Bedrock and generative AI on AWS, organizations can accelerate their innovation cycles, unlock new business opportunities, and deliver innovative solutions powered by the latest advancements in generative AI technology, while maintaining high standards of security, scalability, and operational efficiency.

AWS takes a layered approach to generative AI, providing a comprehensive stack that covers the infrastructure for training and inference, tools to build with large language models (LLMs) and other foundation models (FMs), and applications that use these models. At the bottom layer, AWS offers advanced infrastructure like graphics processing units (GPUs), AWS Trainium, AWS Inferentia, and Amazon SageMaker, along with capabilities like UltraClusters, Elastic Fabric Adapter (EFA), and Amazon EC2 Capacity Blocks for efficient model training and inference. The middle layer, Amazon Bedrock, provides a managed service that allows you to choose from industry-leading models, customize them with your own data, and use security, access controls, and other features. This layer includes capabilities like guardrails, agents, Amazon Bedrock Studio, and customization options. The top layer consists of applications like Amazon Q Business, Amazon Q Developer, Amazon Q in QuickSight, and Amazon Q in Connect, which enable you to use generative AI for various tasks and workflows. This post focuses exclusively on the middle layer, tools with LLMs and other FMs, specifically Amazon Bedrock and its capabilities for building and scaling generative AI applications.

Employee Productivity GenAI Assistant Example: Key features

In this section, we discuss the key features of the Employee Productivity GenAI Assistant Example and its console options.

The Playground page of the Employee Productivity GenAI Assistant Example is designed to interact with Anthropic’s Claude language models on Amazon Bedrock. In this example, we explore how to use the Playground feature to request a poem about New York City, with the model’s response dynamically streamed back to the user.

This process includes the following steps:

  1. The Playground interface provides a dropdown menu to choose the specific AI model to be used. In this case, choose anthropic.claude-3-sonnet-20240229-v1:0, which is a version of Anthropic’s Claude 3.
  2. In the Input field, enter the prompt “Write a poem about NYC” to request the AI model to compose a poem about New York.
  3. After you enter the prompt, choose Submit. This sends the API request to Amazon Bedrock, which hosts Anthropic’s Claude 3 Sonnet model.

As the AI model processes the request and generates the poem, it’s streamed back to Output in real time, allowing you to observe the text being generated word by word or line by line.
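The Playground flow above can be sketched in a few lines of Python. This is a minimal illustration, not the repository's actual backend code: the function names are hypothetical, but the request body follows the Anthropic Messages API format that Amazon Bedrock expects, and `invoke_model_with_response_stream` is the Bedrock Runtime call used for streaming responses.

```python
import json

# Model ID used by the Playground (Anthropic's Claude 3 Sonnet on Amazon Bedrock).
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"

def build_request_body(prompt: str, max_tokens: int = 1024) -> str:
    """Build the Anthropic Messages API request body expected by Amazon Bedrock."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": [{"type": "text", "text": prompt}]}],
    })

def assemble_stream(events) -> str:
    """Concatenate text deltas from a Bedrock response stream into the final output."""
    parts = []
    for event in events:
        chunk = json.loads(event["chunk"]["bytes"])
        if chunk.get("type") == "content_block_delta":
            parts.append(chunk["delta"].get("text", ""))
    return "".join(parts)

def stream_poem():
    """Send the Playground prompt and print the response as it streams in.

    Requires AWS credentials and Amazon Bedrock model access in us-east-1.
    """
    import boto3
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.invoke_model_with_response_stream(
        modelId=MODEL_ID, body=build_request_body("Write a poem about NYC"))
    for event in response["body"]:
        chunk = json.loads(event["chunk"]["bytes"])
        if chunk.get("type") == "content_block_delta":
            print(chunk["delta"].get("text", ""), end="", flush=True)
```

Each `content_block_delta` event carries a fragment of the generated text, which is why the Playground can render the poem word by word instead of waiting for the full completion.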

The Templates page lists various predefined sample prompt templates, such as Interview Question Crafter, Perspective Change Prompt, Grammar Genie, and Tense Change Prompt.

Now let’s create a template called Product Naming Pro:

  1. Add a customized prompt by choosing Add Prompt Template.
  2. Enter Product Naming Pro as the name and Create catchy product names from descriptions and keywords as the description.
  3. Choose anthropic.claude-3-sonnet-20240229-v1:0 as the model.

The template section includes a System Prompt option. In this example, we provide the System Prompt with guidance on creating effective product names that capture the essence of the product and leave a lasting impression.

The ${INPUT_DATA} field is a placeholder variable that allows template users to provide their input text, which will be incorporated into the prompt used by the system. The visibility of the template can be set as Public or Private. A public template can be seen by authenticated users within the deployment of the solution, making sure that only those with an account and proper authentication can access it. In contrast, a private template is only visible to your own authenticated user, keeping it exclusive to you. Additional information, such as the creator’s email address, is also displayed.
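The `${INPUT_DATA}` substitution can be sketched with Python's standard-library `string.Template`, whose placeholder syntax matches the template field exactly. This is an illustrative sketch; the solution's actual substitution code may differ, and the example system prompt below is invented for the Product Naming Pro scenario.

```python
from string import Template

def render_prompt(template_text: str, input_data: str) -> str:
    """Substitute the user's input into the template's ${INPUT_DATA} placeholder."""
    return Template(template_text).safe_substitute(INPUT_DATA=input_data)

# Hypothetical system prompt for the Product Naming Pro template.
system_prompt = (
    "You are a product naming expert. Create catchy product names from the "
    "following description and keywords:\n${INPUT_DATA}"
)

rendered = render_prompt(system_prompt, "Noise-canceling wireless headphones")
```

Using `safe_substitute` rather than `substitute` means a template with a stray `$` or an unknown placeholder renders without raising an error, which is a reasonable choice when template text is user-authored.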

The interface showcases the creation of a Product Naming Pro template designed to generate catchy product names from descriptions and keywords, enabling efficient prompt engineering.

On the Activity page, you can choose a prompt template to generate output based on provided input.

The following steps demonstrate how to use the Activity feature:

  1. Choose the Product Naming Pro template created in the previous section.
  2. In the input field, enter a description: A noise-canceling, wireless, over-ear headphone with a 20-hour battery life and touch controls. Designed for audiophiles and frequent travelers.
  3. Add relevant keywords: immersive, comfortable, high-fidelity, long-lasting, convenient.
  4. After you provide the input description and keywords, choose Submit.

The output section displays five suggested product names generated from the input, for example, SoundScape Voyager, AudioOasis Nomad, EnvoyAcoustic, FidelityTrek, and SonicRefuge Traveler.

The template has processed the product description and keywords to create catchy and descriptive product name suggestions that capture the essence of the noise-canceling, wireless, over-ear headphones designed for audiophiles and frequent travelers.

The History page displays logs of the interactions and activities performed within the application, including requests made on the Playground and Activity pages.

At the top of the interface, a notification indicates that text has been copied to the clipboard, enabling you to copy generated outputs or prompts for use elsewhere.

The View and Delete options allow you to review the full details of the interaction or delete the entry from the history log, respectively.

The History page provides a way to track and revisit past activities within the application, providing transparency and allowing you to reference or manage your previous interactions with the system. The history saves your inputs and outputs on the Playground and Activity page (at the time of writing, Chat page history is not yet supported). You can only see the history of your own user requests, safeguarding security and privacy, and no other users can access your data. Additionally, you have the option to delete records stored in the history at any time if you prefer not to keep them.

The interactive chat interface displays a chat conversation. The user is greeted by the assistant, and then chooses the Product Naming Pro template and provides a product description for a noise-canceling, wireless headphone designed for audiophiles and frequent travelers. The assistant responds with an initial product name recommendation based on the description. The user then requests additional recommendations, and the assistant provides five more product name suggestions. This interactive conversation highlights how the chat functionality allows continued natural language interaction with the AI model to refine responses and explore multiple options.

In the following example, the user chooses an AI model (for example, anthropic.claude-3-sonnet-20240229-v1:0) and provides input for that model. An image named headphone.jpg has been uploaded, and the user asks “Please describe the image uploaded in detail to me.”

The user chooses Submit and the AI model’s output is displayed, providing a detailed description of the headphone image. It describes the headphones as “over-ear wireless headphones in an all-black color scheme with a sleek and modern design.” It mentions the matte black finish on the ear cups and headband, as well as the well-padded soft leather or leatherette material for comfort during extended listening sessions.

This demonstrates the power of multimodal models like Anthropic’s Claude 3 family on Amazon Bedrock, allowing you to upload and use up to six images on the Playground or Activity pages as inputs for generating context-rich, multimodal responses.
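A multimodal request like the headphone example combines an image block and a text block in a single user message. The sketch below builds that request body in the Messages API format Claude 3 accepts on Amazon Bedrock (images are passed base64-encoded); the function name is illustrative, not taken from the repository.

```python
import base64
import json

def build_image_request(image_bytes: bytes, question: str,
                        media_type: str = "image/jpeg") -> str:
    """Build a multimodal Messages API body: one image block plus a text block."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1024,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64",
                            "media_type": media_type,
                            "data": base64.b64encode(image_bytes).decode("utf-8")}},
                {"type": "text", "text": question},
            ],
        }],
    })
```

In the solution's flow, the image bytes would come from the S3 Images bucket (downloaded via a pre-signed URL) rather than from local disk, but the request shape sent to Amazon Bedrock is the same.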

Solution overview

The Employee Productivity GenAI Assistant Example is built on robust AWS serverless technologies such as AWS Lambda, API Gateway, DynamoDB, and Amazon Simple Storage Service (Amazon S3), maintaining scalability, high availability, and security through Amazon Cognito. These technologies provide a foundation that allows the Employee Productivity GenAI Assistant Example to respond to user needs on-demand while maintaining strict security standards. The core of its generative abilities is derived from the powerful AI models available in Amazon Bedrock, which help deliver tailored and high-quality content swiftly.

The following diagram illustrates the solution architecture.

The workflow of the Employee Productivity GenAI Assistant Example includes the following steps:

  1. Users access a static website hosted in the us-east-1 AWS Region, secured with AWS WAF. The frontend of the application consists of a React application hosted on an S3 bucket (S3 React Frontend), distributed using Amazon CloudFront.
  2. Users can initiate REST API calls from the static website, which are routed through an API Gateway. API Gateway manages these calls and interacts with multiple components:
    1. The API interfaces with a DynamoDB table to store and retrieve template and history data.
    2. The API communicates with a Python-based Lambda function to process requests.
    3. The API generates pre-signed URLs for image uploads and downloads to and from an S3 bucket (S3 Images).
  3. API Gateway integrates with Amazon Cognito for user authentication and authorization, managing users and groups.
  4. Users upload images to the S3 bucket (S3 Images) using the pre-signed URLs provided by API Gateway.
  5. When users request image downloads, a Lambda authorizer function written in Java is invoked, recording the request in the history database (DynamoDB table).
  6. For streaming data, users establish a WebSocket connection with an API Gateway WebSocket, which interacts with a Python Lambda function to handle the streaming data. The streaming data undergoes processing before being transmitted to an Amazon Bedrock streaming service.
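
The WebSocket portion of step 6 can be sketched as a small routing Lambda. This is a simplified illustration of how an API Gateway WebSocket handler is typically structured, not the repository's actual code: API Gateway delivers `$connect`/`$disconnect` lifecycle events plus custom routes, and the real handler would forward the prompt to Amazon Bedrock and post streamed chunks back over the connection (omitted here).

```python
import json

def handler(event, context=None):
    """Minimal routing sketch for an API Gateway WebSocket Lambda."""
    route = event["requestContext"]["routeKey"]
    connection_id = event["requestContext"]["connectionId"]

    # Lifecycle routes: accept the connection open/close.
    if route in ("$connect", "$disconnect"):
        return {"statusCode": 200}

    # Custom route: parse the client's message. A real handler would call
    # bedrock-runtime here and stream deltas back via post_to_connection
    # on the API Gateway Management API.
    message = json.loads(event.get("body") or "{}")
    return {
        "statusCode": 200,
        "body": json.dumps({"connectionId": connection_id,
                            "prompt": message.get("prompt", "")}),
    }
```

Keeping routing separate from the Bedrock streaming logic makes the handler easy to test without AWS credentials, since the lifecycle and message-parsing paths are pure functions of the incoming event.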

Running generative AI workloads in Amazon Bedrock offers a robust and secure environment that seamlessly scales to help meet the demanding computational requirements of generative AI models. The layered security approach of Amazon Bedrock, built on the foundational principles of the comprehensive security services provided by AWS, provides a fortified environment for handling sensitive data and processing AI workloads with confidence. Its flexible architecture lets organizations use AWS elastic compute resources to scale dynamically with workload demands, providing efficient performance and cost control. Furthermore, the modular design of Amazon Bedrock empowers organizations to integrate their existing AI and machine learning (ML) pipelines, tools, and frameworks, fostering a seamless transition to a secure and scalable generative AI infrastructure within the AWS ecosystem.

In addition to the interactive features, the Employee Productivity GenAI Assistant Example provides a robust architectural pattern for building generative AI solutions on AWS. By using Amazon Bedrock and AWS serverless services such as Lambda, API Gateway, and DynamoDB, the Employee Productivity GenAI Assistant Example demonstrates a scalable and secure approach to deploying generative AI applications. You can use this architecture pattern as a foundation to build various generative AI solutions tailored to different use cases. Furthermore, the solution includes a reusable component-driven UI built on the React framework, enabling developers to quickly extend and customize the interface to fit their specific needs. The example also showcases the implementation of streaming support using WebSockets, allowing for real-time responses in both chat-based interactions and one-time requests, enhancing the user experience and responsiveness of the generative AI assistant.

Prerequisites

You should have the following prerequisites:

  • An AWS account
  • Permission to use Lambda, API Gateway, Amazon Bedrock, Amazon Cognito, CloudFront, AWS WAF, Amazon S3, and DynamoDB

Deploy the solution

To deploy and use the application, complete the following steps:

  1. Clone the GitHub repository into your AWS environment:
    git clone https://github.com/aws-samples/improve-employee-productivity-using-genai
  2. See the How to Deploy Locally section if you want to deploy from your computer.
  3. See How to Deploy via AWS CloudShell if you want to deploy from AWS CloudShell in your AWS account.
  4. After deployment is complete, see Post Deployment Steps to get started.
  5. See Demos to see examples of the solution’s capabilities and features.

Cost estimate for running the Employee Productivity GenAI Assistant Example

The cost of running the Employee Productivity GenAI Assistant Example will vary depending on the Amazon Bedrock model you choose and your usage patterns, as well as the Region you use. The primary cost drivers are the Amazon Bedrock model pricing and the AWS services used to host and run the application.

For this example, let’s assume a scenario with 50 users, each using this example code five times a day, with an average of 500 input tokens and 200 output tokens per use.

The total monthly token usage calculation is as follows:

  • Input tokens: 3.75 million
    • 500 tokens per request * 5 requests per day * 50 users * 30 days = 3.75 million tokens
  • Output tokens: 1.5 million
    • 200 tokens per request * 5 requests per day * 50 users * 30 days = 1.5 million tokens

The estimated monthly costs (us-east-1 Region) for different Anthropic’s Claude models on Amazon Bedrock would be the following:

  • Anthropic’s Claude 3 Haiku model:
    • Amazon Bedrock: $2.81
      • 3.75 million input tokens at $0.00025/thousand tokens = $0.9375
      • 1.5 million output tokens at $0.00125/thousand tokens = $1.875
    • Other AWS services: $16.51
    • Total: $19.32
  • Anthropic’s Claude 3 and 3.5 Sonnet model:
    • Amazon Bedrock: $33.75
      • 3.75 million input tokens at $0.003/thousand tokens = $11.25
      • 1.5 million output tokens at $0.015/thousand tokens = $22.50
    • Other AWS services: $16.51
    • Total: $50.26
  • Anthropic’s Claude 3 Opus model:
    • Amazon Bedrock: $168.75
      • 3.75 million input tokens at $0.015/thousand tokens = $56.25
      • 1.5 million output tokens at $0.075/thousand tokens = $112.50
    • Other AWS services: $16.51
    • Total: $185.26
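
The estimates above reduce to one formula: token volume divided by 1,000, times the per-thousand-token price, plus the fixed cost of the other AWS services. The helper below reproduces the blog's numbers; the function name and parameter defaults are ours, while the prices and token volumes come from the scenario above.

```python
def monthly_cost(input_price_per_1k: float, output_price_per_1k: float,
                 input_tokens: int = 3_750_000, output_tokens: int = 1_500_000,
                 other_services: float = 16.51) -> float:
    """Estimate monthly cost: Bedrock token charges plus other AWS services."""
    bedrock = (input_tokens / 1000) * input_price_per_1k \
            + (output_tokens / 1000) * output_price_per_1k
    return bedrock + other_services

haiku  = monthly_cost(0.00025, 0.00125)   # ≈ $19.32
sonnet = monthly_cost(0.003,   0.015)     # ≈ $50.26
opus   = monthly_cost(0.015,   0.075)     # ≈ $185.26
```

Swapping in your own usage assumptions (users, requests per day, tokens per request) is a quick way to re-run the estimate before choosing a model.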

These estimates don’t consider the AWS Free Tier for eligible services, so your actual costs might be lower if you’re still within the Free Tier limits. Additionally, the pricing for AWS services might change over time, so the actual costs might vary from these estimates.

The beauty of this serverless architecture is that you can scale resources up or down based on demand, making sure that you only pay for the resources you consume. Some components, such as Lambda, Amazon S3, CloudFront, DynamoDB, and Amazon Cognito, might not incur additional costs if you’re still within the AWS Free Tier limits.

For a detailed breakdown of the cost estimate, including assumptions and calculations, refer to the Cost Estimator.

Clean up

When you’re done, delete any resources you no longer need to avoid ongoing costs.

To delete the stack, use the following command:

./deploy.sh --delete --region=<your-aws-region> --email=<your-email>

For example:

./deploy.sh --delete --region=us-east-1 --email=abc@example.com

For more information about how to delete the resources from your AWS account, see the How to Deploy Locally section in the GitHub repo.

Summary

The Employee Productivity GenAI Assistant Example is a cutting-edge sample code that uses generative AI to automate repetitive writing tasks, freeing up resources for more meaningful work. It uses Amazon Bedrock and generative AI models to create initial templates that can be customized. You can input both text and images, benefiting from the multimodal capabilities of AI models. Key features include a user-friendly playground, template creation and application, activity history tracking, interactive chat with templates, and support for multimodal inputs. The solution is built on robust AWS serverless technologies such as Lambda, API Gateway, DynamoDB, and Amazon S3, maintaining scalability, security, and high availability.

Visit our GitHub repository and try it firsthand.

By using Amazon Bedrock and generative AI on AWS, organizations can accelerate innovation cycles, unlock new business opportunities, and deliver AI-powered solutions while maintaining high standards of security and operational efficiency.


About the Authors

Samuel Baruffi is a seasoned technology professional with over 17 years of experience in the information technology industry. Currently, he works at AWS as a Principal Solutions Architect, providing valuable support to global financial services organizations. His vast expertise in cloud-based solutions is validated by numerous industry certifications. Away from cloud architecture, Samuel enjoys soccer, tennis, and travel.

Somnath Chatterjee is an accomplished Senior Technical Account Manager at AWS, dedicated to guiding customers in crafting and implementing their cloud solutions on AWS. He collaborates strategically with customers to help them run cost-optimized and resilient workloads in the cloud. Beyond his primary role, Somnath specializes in the Compute technical field community. He is an SAP on AWS Specialty certified professional and EFS SME. With over 14 years of experience in the information technology industry, he excels in cloud architecture and helps customers achieve their desired outcomes on AWS.

Mohammed Nawaz Shaikh is a Technical Account Manager at AWS, dedicated to guiding customers in crafting and implementing their AWS strategies. Beyond his primary role, Nawaz serves as an AWS GameDay Regional Lead and is an active member of the AWS NextGen Developer Experience technical field community. With over 16 years of expertise in solution architecture and design, he is not only a passionate coder but also an innovator, holding three US patents.
