In the field of technology and creative design, logo design and creation have adapted and evolved at a rapid pace. From the hieroglyphs of ancient Egypt to the sleek minimalism of today's tech giants, the visual identities that define our favorite brands have undergone a remarkable transformation.
Today, the world of creative design is once again being transformed by the emergence of generative AI. Designers and brands now have opportunities to push the boundaries of creativity, crafting logos that are not only visually stunning but also responsive to their environments and tailored to the preferences of their target audiences.
Amazon Bedrock enables access to powerful generative AI models like Stable Diffusion through a user-friendly API. These models can be integrated into the logo design workflow, allowing designers to rapidly ideate, experiment, generate, and edit a wide range of unique visual images. Integrating Amazon Bedrock with AWS serverless computing, networking, and content delivery services like AWS Lambda, Amazon API Gateway, and AWS Amplify facilitates the creation of an interactive tool to generate dynamic, responsive, and adaptive logos.
In this post, we walk through how AWS can help accelerate a brand’s creative efforts with access to a powerful image-to-image model from Stable Diffusion available on Amazon Bedrock to interactively create and edit art and logo images.
Stability AI's image-to-image model, SDXL, is a deep learning model that generates images based on text descriptions, images, or other inputs. It first converts the text into numerical values that summarize the prompt, then uses those values to generate an image representation. Finally, it upscales the image representation into a high-resolution image. Stable Diffusion can also generate new images based on an initial image and a text prompt. For example, it can fill in a line drawing with colors, lighting, and a background that makes sense for the subject. Stable Diffusion can also be used for inpainting (reconstructing or replacing masked regions within an existing image) and outpainting (extending an image beyond its original borders).
One of its primary applications lies in advertising and marketing, where it can be used to create personalized ad campaigns and an unlimited number of marketing assets. Businesses can generate visually appealing and tailored images based on specific prompts, enabling them to stand out in a crowded marketplace and effectively communicate their brand message. In the media and entertainment sector, filmmakers, artists, and content creators can use this as a tool for developing creative assets and ideating with images.
The following diagram illustrates the solution architecture.
This architecture workflow involves the following steps:
To set up this solution, complete the following prerequisites:

- An AWS account with access to Amazon Bedrock in the us-east-1 Region.
- Model access to Stability AI's SDXL model in Amazon Bedrock.
- An S3 bucket to store the output images.
To deploy the backend resources for the solution, we create a stack using an AWS CloudFormation template. You can upload the template directly, or upload it to an S3 bucket and link to it during the stack creation process. During the creation process, provide values for the apiGatewayName, apiGatewayStageName, s3BucketName, and lambdaFunctionName parameters. If you created a new S3 bucket earlier, enter that name for s3BucketName; this bucket is where output images are stored. When the stack creation is complete, all the backend resources are ready to be connected to the frontend UI.
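If you prefer to script the deployment instead of using the console, the following is a minimal sketch using boto3. The stack name and parameter values are illustrative placeholders, and the template is assumed to be saved locally as template.yaml:

```python
import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")

# The CloudFormation template is assumed to be saved locally as template.yaml
with open("template.yaml") as f:
    template_body = f.read()

cloudformation.create_stack(
    StackName="logo-generator-backend",  # placeholder stack name
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_IAM"],  # the template creates IAM resources
    Parameters=[
        {"ParameterKey": "apiGatewayName", "ParameterValue": "logo-gen-api"},
        {"ParameterKey": "apiGatewayStageName", "ParameterValue": "dev"},
        {"ParameterKey": "s3BucketName", "ParameterValue": "my-logo-output-bucket"},
        {"ParameterKey": "lambdaFunctionName", "ParameterValue": "logo-gen-function"},
    ],
)

# Block until the stack finishes creating before wiring up the frontend
waiter = cloudformation.get_waiter("stack_create_complete")
waiter.wait(StackName="logo-generator-backend")
```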
The frontend resources play an integral part in creating an interactive environment for your end-users. Complete the following steps to integrate the frontend and backend:

1. In the API Gateway console, generate and download the JavaScript SDK for the deployed API stage. The download is a folder named apiGateway-js-sdk.
2. Download the provided index.html file. This file is configured to integrate with the JavaScript SDK by simply placing it in the folder.
3. After index.html is placed in the folder, select the contents of the folder and compress them into a .zip file (don't compress the apiGateway-js-sdk folder itself).
4. Deploy the .zip file with AWS Amplify. The deployment will take a few seconds. When deployment is complete, Amplify provides a domain URL that you can use to access the application. The application is ready to be tested at the domain URL.
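If you'd rather create the .zip file programmatically, a one-line sketch like the following works, assuming the SDK folder is named apiGateway-js-sdk as described; passing the folder as root_dir archives its contents without the folder itself:

```python
import shutil

# Archives the *contents* of apiGateway-js-sdk into frontend.zip,
# not the folder itself, matching the requirement above
shutil.make_archive("frontend", "zip", root_dir="apiGateway-js-sdk")
```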
Before we move on to testing the solution, let's explore the CloudFormation template. This template sets up an API Gateway API with appropriate rules and paths, a Lambda function, and the necessary permissions in AWS Identity and Access Management (IAM). The API accepts requests on the path /action/{actionInput}/prompt/{promptInput}/{proxy+}. The {promptInput} value is a placeholder variable for the prompt that users input in the frontend. Similarly, {actionInput} is the choice the user selected for how they want to generate the image. Both are used in the backend Lambda function to process and generate images.

Let's dive into the details of the Python code that generates and manipulates images using the Stability AI model. There are three ways of using the Lambda function: provide a text prompt to generate an initial image, upload an image and include a text prompt to adjust the image, or reupload a generated image and include a prompt to adjust it.
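To make the request flow concrete, here is a hypothetical call to the deployed API for the first of these three actions; the domain, stage name, prompt, and trailing proxy segment are all placeholders:

```python
import urllib.parse

import requests

# Placeholder API Gateway stage URL from the CloudFormation deployment
API = "https://abc123.execute-api.us-east-1.amazonaws.com/dev"

action = "GenerateInit"  # generate a new image from a text prompt
prompt = urllib.parse.quote("a minimalist geometric fox logo, flat colors")

# The trailing segment satisfies the {proxy+} part of the resource path
response = requests.get(f"{API}/action/{action}/prompt/{prompt}/img")
print(response.json())  # expected to include a file name and pre-signed URL
```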
The code contains the following constants:

- Style preset: Guides the image model toward a particular style (such as photographic, digital-art, or cinematic). We used digital-art for this post.
- Clip guidance preset: The CLIP guidance preset to use (FAST_BLUE, FAST_GREEN, NONE, SIMPLE, SLOW, SLOWER, or SLOWEST).
- Sampler: The sampler to use for the diffusion process (DDIM, DDPM, K_DPMPP_SDE, K_DPMPP_2M, K_DPMPP_2S_ANCESTRAL, K_DPM_2, K_DPM_2_ANCESTRAL, K_EULER, K_EULER_ANCESTRAL, K_HEUN, or K_LMS).

The function handler(event, context) is the main entry point for the Lambda function. It processes the input event, which contains the promptInput and actionInput parameters. Based on the actionInput, it performs one of the following actions:

- If the action is GenerateInit, it generates a new image using the generate_image_with_bedrock function, uploads it to Amazon S3, and returns the file name and a pre-signed URL.
- If the user uploaded an image along with a prompt, it passes both to the generate_image_with_bedrock function, uploads the new image to Amazon S3, and returns the file name and a pre-signed URL.
- If the user reuploaded a previously generated image along with a prompt, it likewise passes both to the generate_image_with_bedrock function, uploads the new image to Amazon S3, and returns the file name and a pre-signed URL.

The function generate_image_with_bedrock(prompt, init_image_b64=None) generates an image using the Amazon Bedrock runtime service: it builds the request body from the prompt and the hyperparameters (including the base64-encoded initial image for the image editing actions), invokes the SDXL model, and decodes the base64-encoded image from the response.
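The following is a minimal sketch of what this function can look like, assuming the SDXL model ID stability.stable-diffusion-xl-v1 and the constant values described above; the specific hyperparameter values are illustrative:

```python
import base64
import json

import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

def generate_image_with_bedrock(prompt, init_image_b64=None):
    # Request body follows the SDXL schema on Amazon Bedrock
    body = {
        "text_prompts": [{"text": prompt}],
        "cfg_scale": 10,   # how strictly the image follows the prompt
        "steps": 50,       # number of diffusion steps
        "seed": 0,         # fixed seed for reproducible results
        "style_preset": "digital-art",
        "clip_guidance_preset": "FAST_GREEN",
        "sampler": "K_DPMPP_2S_ANCESTRAL",
    }
    # For the two image-editing actions, pass the initial image so the
    # model performs image-to-image generation instead of text-to-image
    if init_image_b64 is not None:
        body["init_image"] = init_image_b64

    response = bedrock_runtime.invoke_model(
        modelId="stability.stable-diffusion-xl-v1",
        body=json.dumps(body),
    )
    response_body = json.loads(response["body"].read())
    # The model returns the generated image as a base64-encoded string
    return base64.b64decode(response_body["artifacts"][0]["base64"])
```

From here, the handler can upload the returned bytes to Amazon S3 with put_object and create a pre-signed URL with generate_presigned_url, as described above.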
To obtain more personalized outputs, you can adjust the hyperparameter values in the function, such as cfg_scale, steps, seed, and style_preset. For the style preset, we used digital-art.

The following screenshot shows a simple UI. You can choose to either generate a new image or edit an image using text prompts.
The following screenshots show iterations of sample logos we created using the UI. The text prompts are included under each image.
To clean up, delete the CloudFormation stack and the S3 bucket you created.
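If you scripted the deployment earlier, the teardown can be scripted too. The following is a minimal sketch reusing the placeholder names from the deployment example:

```python
import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")
s3 = boto3.resource("s3")

# Empty the output bucket first; a non-empty bucket can't be deleted
bucket = s3.Bucket("my-logo-output-bucket")  # placeholder bucket name
bucket.objects.all().delete()
bucket.delete()

cloudformation.delete_stack(StackName="logo-generator-backend")
```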
In this post, we explored how you can use Stability AI and Amazon Bedrock to generate and edit images. By following the instructions and using the provided CloudFormation template and the frontend code, you can generate unique and personalized images and logos for your business. Try generating and editing your own logos, and let us know what you think in the comments. To explore more AI use cases, refer to AI Use Case Explorer.