
Best practices for prompt engineering with Meta Llama 3 for Text-to-SQL use cases

With the rapid growth of generative artificial intelligence (AI), many AWS customers are looking to take advantage of publicly available foundation models (FMs) and technologies. This includes Meta Llama 3, Meta’s publicly available large language model (LLM). The partnership between Meta and Amazon signifies collective generative AI innovation, and Meta and Amazon are working together to push the boundaries of what’s possible.

In this post, we provide an overview of the Meta Llama 3 models available on AWS at the time of writing, and share best practices on developing Text-to-SQL use cases using Meta Llama 3 models. All the code used in this post is publicly available in the accompanying GitHub repository.

Background of Meta Llama 3

Meta Llama 3, the successor to Meta Llama 2, maintains the same 70-billion-parameter capacity but achieves superior performance through enhanced training techniques rather than sheer model size. This approach underscores Meta’s strategy of optimizing data utilization and methodologies to push AI capabilities further. The release includes new models based on Meta Llama 2’s architecture, available in 8-billion- and 70-billion-parameter variants, each offering base and instruct versions. This segmentation allows Meta to deliver versatile solutions suitable for different hardware and application needs.

A significant upgrade in Meta Llama 3 is the adoption of a tokenizer with a 128,256-token vocabulary, enhancing text encoding efficiency for multilingual tasks. The 8-billion-parameter model integrates grouped-query attention (GQA) for improved processing of longer data sequences, enhancing real-world application performance. Training involved a dataset of over 15 trillion tokens across two GPU clusters, significantly more than was used for Meta Llama 2. Meta Llama 3 Instruct, optimized for dialogue applications, underwent fine-tuning with over 10 million human-annotated samples using advanced techniques like proximal policy optimization and supervised fine-tuning. Meta Llama 3 models are licensed permissively, allowing redistribution, fine-tuning, and derivative work creation, while now requiring explicit attribution. This licensing update reflects Meta’s commitment to fostering innovation and collaboration in AI development with transparency and accountability.

Prompt engineering best practices for Meta Llama 3

The following are best practices for prompt engineering for Meta Llama 3:

  • Base model usage – Base models offer the following:
    • Prompt-less flexibility – Base models in Meta Llama 3 excel in continuing sequences and handling zero-shot or few-shot tasks without requiring specific prompt formats. They serve as versatile tools suitable for a wide range of applications and provide a solid foundation for further fine-tuning.
  • Instruct versions – Instruct versions offer the following:
    • Structured dialogue – Instruct versions of Meta Llama 3 use a structured prompt format designed for dialogue systems. This format maintains coherent interactions by guiding system responses based on user inputs and predefined prompts (see the example after this list).
  • Text-to-SQL parsing – For tasks like Text-to-SQL parsing, note the following:
    • Effective prompt design – Engineers should design prompts that accurately reflect user queries to SQL conversion needs. Meta Llama 3’s capabilities enhance accuracy and efficiency in understanding and generating SQL queries from natural language inputs.
  • Development best practices – Keep in mind the following:
    • Iterative refinement – Continuous refinement of prompt structures based on real-world data improves model performance and consistency across different applications.
    • Validation and testing – Thorough testing and validation make sure that prompt-engineered models perform reliably and accurately across diverse scenarios, enhancing overall application effectiveness.
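To make the structured dialogue format concrete, the following is a minimal sketch of the special-token template that Meta Llama 3 Instruct models expect; the system and user strings are placeholders you would replace with your own prompts.

system_prompt = "You are a mysql query expert whose output is a valid sql query."
user_prompt = "How many unique airplane producers are represented in the database?"

# Meta Llama 3 Instruct wraps each turn in header and end-of-turn special tokens
llama3_prompt = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    f"{system_prompt}<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>\n\n"
    f"{user_prompt}<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)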

By implementing these practices, engineers can optimize the use of Meta Llama 3 models for various tasks, from generic inference to specialized natural language processing (NLP) applications like Text-to-SQL parsing, using the model’s capabilities effectively.

Solution overview

The demand for using LLMs to improve Text-to-SQL queries is growing because they enable non-technical users to access and query databases using natural language. This democratizes access to generative AI and improves efficiency in writing complex queries without needing to learn SQL or understand complex database schemas. For example, if you’re a financial customer and you have a MySQL database of customer data spanning multiple tables, you could use Meta Llama 3 models to build SQL queries from natural language. Additional benefits include:

  • Improved accuracy – LLMs can generate SQL queries that more accurately capture the intent behind natural language queries, thanks to their advanced language understanding capabilities. This reduces the need to rephrase or refine your queries.
  • Handling complexity – LLMs can handle complex queries involving multiple tables (which we demonstrate in this post), joins, filters, and aggregations, which would be challenging for rule-based or traditional Text-to-SQL systems. This expands the range of queries that can be handled using natural language.
  • Incorporating context – LLMs can use contextual information like database schemas, table descriptions, and relationships to generate more accurate and relevant SQL queries. This helps bridge the gap between ambiguous natural language and precise SQL syntax.
  • Scalability – After they’re trained, LLMs can generalize to new databases and schemas without extensive retraining or rule-writing, making them more scalable than traditional approaches.

For the solution, we follow a Retrieval Augmented Generation (RAG) pattern to generate SQL from a natural language query using the Meta Llama 3 70B model on Amazon SageMaker JumpStart, a hub that provides access to pre-trained models and solutions. SageMaker JumpStart provides a seamless and hassle-free way to deploy and experiment with the latest state-of-the-art LLMs like Meta Llama 3, without the need for complex infrastructure setup or deployment code. With just a few clicks, you can have Meta Llama 3 models up and running in a secure AWS environment under your virtual private cloud (VPC) controls, maintaining data security. SageMaker JumpStart offers access to a range of Meta Llama 3 model sizes (8B and 70B parameters). This flexibility allows you to choose the appropriate model size based on your specific requirements. You can also incrementally train and tune these models before deployment.
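For reference, the following is a minimal sketch of deploying a Meta Llama 3 Instruct model with the SageMaker Python SDK through JumpStart; the model ID and inference parameters shown are assumptions to verify against the JumpStart model catalog in your Region.

from sagemaker.jumpstart.model import JumpStartModel

# Model ID is an assumption; confirm the exact identifier in the JumpStart catalog
model = JumpStartModel(model_id="meta-textgeneration-llama-3-70b-instruct")
predictor = model.deploy(accept_eula=True)  # the endpoint runs in your account under your VPC controls

response = predictor.predict({
    "inputs": "Explain what a SQL JOIN does in one sentence.",
    "parameters": {"max_new_tokens": 128, "temperature": 0.1},
})
print(response)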

The solution also includes an embeddings model hosted on SageMaker JumpStart and publicly available vector databases like ChromaDB to store the embeddings.

ChromaDB and other vector engines

In the realm of Text-to-SQL applications, ChromaDB is a powerful, publicly available, embedded vector database designed to streamline the storage, retrieval, and manipulation of high-dimensional vector data. Seamlessly integrating with machine learning (ML) and NLP workflows, ChromaDB offers a robust solution for applications such as semantic search, recommendation systems, and similarity-based analysis. ChromaDB offers several notable features:

  • Efficient vector storage – ChromaDB uses advanced indexing techniques to efficiently store and retrieve high-dimensional vector data, enabling fast similarity searches and nearest neighbor queries.
  • Flexible data modeling – You can define custom collections and metadata schemas tailored to your specific use cases, allowing for flexible data modeling.
  • Seamless integration – ChromaDB can be seamlessly embedded into existing applications and workflows, providing a lightweight and performant solution for vector data management.

Why choose ChromaDB for Text-to-SQL use cases?

  • Efficient vector storage for text embeddings – ChromaDB’s efficient storage and retrieval of high-dimensional vector embeddings are crucial for Text-to-SQL tasks. It enables fast similarity searches and nearest neighbor queries on text embeddings, facilitating accurate mapping of natural language queries to SQL statements.
  • Seamless integration with LLMs – ChromaDB can be quickly integrated with LLMs, enabling RAG architectures. This allows LLMs to use relevant context, such as providing only the relevant table schemas necessary to fulfill the query.
  • Customizable and community support – ChromaDB offers flexibility and customization with an active community of developers and users who contribute to its development, provide support, and share best practices. This provides a collaborative and supportive landscape for Text-to-SQL applications.
  • Cost-effective – ChromaDB eliminates the need for expensive licensing fees, making it a cost-effective choice for organizations of all sizes.

By using vector database engines like ChromaDB, you gain more flexibility for your specific use cases and can build robust and performant Text-to-SQL systems for generative AI applications.

Solution architecture

The solution uses the AWS services and features illustrated in the following architecture diagram.

The process flow includes the following steps:

  1. A user sends a text query specifying the data they want returned from the databases.
  2. Database schemas, table structures, and their associated metadata are processed through an embeddings model hosted on SageMaker JumpStart to generate embeddings.
  3. These embeddings, along with additional contextual information about table relationships, are stored in ChromaDB to enable semantic search, allowing the system to quickly retrieve relevant schema and table context when processing user queries.
  4. The query is sent to ChromaDB, where it is converted to vector embeddings using the text embeddings model hosted on SageMaker JumpStart. The generated embeddings are used to perform a semantic search on ChromaDB.
  5. Following the RAG pattern, ChromaDB outputs the relevant table schemas and table context that pertain to the query. Only relevant context is sent to the Meta Llama 3 70B model. The augmented prompt is created using this information from ChromaDB as well as the user query.
  6. The augmented prompt is sent to the Meta Llama 3 70B model hosted on SageMaker JumpStart to generate the SQL query.
  7. After the SQL query is generated, you can run the SQL query against Amazon Relational Database Service (Amazon RDS) for MySQL, a fully managed cloud database service that allows you to quickly operate and scale your relational databases like MySQL.
  8. From there, the output is sent back to the Meta Llama 3 70B model hosted on SageMaker JumpStart to provide a response to the user.
  9. The response is sent back to the user.

Depending on where your data lives, you can implement this pattern with other relational database management systems such as PostgreSQL or alternative database types, depending on your existing data infrastructure and specific requirements.
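As a minimal sketch of step 7, the following shows one way to run the generated SQL against an RDS for MySQL instance using the PyMySQL client; the endpoint, credentials, and query are placeholders, and you could swap in a PostgreSQL client such as psycopg2 if your data lives in a different engine.

import pymysql

# Connection details are placeholders; use your own RDS endpoint and credentials
connection = pymysql.connect(
    host="<your-rds-endpoint>.rds.amazonaws.com",
    user="admin",
    password="<your-password>",
    database="airline_db",
)

generated_sql = "SELECT COUNT(DISTINCT Producer) FROM airline_db.airplanes;"

with connection.cursor() as cursor:
    cursor.execute(generated_sql)  # run the LLM-generated query
    results = cursor.fetchall()    # these rows are later passed back to the model for analysis

connection.close()
print(results)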

Prerequisites

Complete the following prerequisite steps:

  1. Have an AWS account.
  2. Install the AWS Command Line Interface (AWS CLI) and have the AWS SDK for Python (Boto3) set up.
  3. Request model access on the Amazon Bedrock console for access to the Meta Llama 3 models.
  4. Have access to use Jupyter notebooks (whether locally or on Amazon SageMaker Studio).
  5. Install packages and dependencies for LangChain, the Amazon Bedrock SDK (Boto3), and ChromaDB.

Deploy the Text-to-SQL environment to your AWS account

To deploy your resources, use the provided AWS CloudFormation template, which is a tool for deploying infrastructure as code. Supported AWS Regions are US East (N. Virginia) and US West (Oregon). Complete the following steps to launch the stack:

  1. On the AWS CloudFormation console, create a new stack.
  2. For Template source, choose Upload a template file, then upload the YAML template for deploying the Text-to-SQL environment.
  3. Choose Next.
  4. Name the stack text2sql.
  5. Keep the remaining settings as default and choose Submit.
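If you prefer to launch the stack programmatically rather than through the console, the following is a minimal Boto3 sketch; the local template file name is a placeholder, and the IAM capability flag is an assumption that applies only if the template creates IAM resources.

import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")

# The template file name is a placeholder for the downloaded Text-to-SQL template
with open("text2sql-template.yaml") as template_file:
    template_body = template_file.read()

cloudformation.create_stack(
    StackName="text2sql",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_IAM"],  # assumption: required if the template creates IAM resources
)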

The template stack should take 10 minutes to deploy. When it’s done, the stack status will show as CREATE_COMPLETE.

  1. When the stack is complete, navigate to the stack’s Outputs tab.
  2. Choose the SagemakerNotebookURL link to open the SageMaker notebook in a separate tab.
  3. In the SageMaker notebook, navigate to the Meta-Llama-on-AWS/blob/text2sql-blog/RAG-recipes directory and open llama3-chromadb-text2sql.ipynb.
  4. If the notebook prompts you to set the kernel, choose the conda_pytorch_p310 kernel, then choose Set kernel.

Implement the solution

You can use the following Jupyter notebook, which includes all the code snippets provided in this section, to build the solution. In this solution, you can choose which service (SageMaker JumpStart or Amazon Bedrock) to use as the model hosting service using ask_for_service() in the notebook. Amazon Bedrock is a fully managed service that offers a choice of high-performing FMs. We give you the choice between solutions so that your teams can evaluate whether SageMaker JumpStart is preferred or whether your teams want to reduce operational overhead with the user-friendly Amazon Bedrock API. You have the choice to use SageMaker JumpStart to host the embeddings model of your choice or Amazon Bedrock to host the Amazon Titan Embeddings model (amazon.titan-embed-text-v2:0).
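If you choose Amazon Bedrock to host the embeddings model, the following is a minimal sketch of invoking the Amazon Titan Embeddings model with Boto3; the Region is an assumption, so use the Region where you requested model access.

import json

import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")  # Region is an assumption

response = bedrock_runtime.invoke_model(
    modelId="amazon.titan-embed-text-v2:0",
    body=json.dumps({"inputText": "How many unique airplane producers are represented in the database?"}),
)
embedding = json.loads(response["body"].read())["embedding"]
print(len(embedding))  # dimensionality of the returned vector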

Now that the notebook is ready to use, follow the instructions in the notebook. With these steps, you create an RDS for MySQL connector, ingest the dataset into an RDS database, ingest the table schemas into ChromaDB, and generate Text-to-SQL queries to run your prompts and analyze data residing in Amazon RDS.

  1. Create a SageMaker endpoint with the BGE Large En v1.5 Embedding model from Hugging Face:
    bedrock_ef = AmazonSageMakerEmbeddingFunction()
  2. Create a collection in ChromaDB for the RAG framework:
    chroma_client = chromadb.Client()
    collection = chroma_client.create_collection(name="table-schemas-titan-embedding", embedding_function=bedrock_ef, metadata={"hnsw:space": "cosine"})
  3. Build the document with the table schema and sample questions to enhance the retriever’s accuracy:
    # The doc uses a structured format for clearly identifying the table schemas and questions
    doc1 = "<table_schemas>\n"
    doc1 += f"<table_schema>\n {settings_airplanes['table_schema']} \n</table_schema>\n".strip()
    doc1 += "\n</table_schemas>"
    doc1 += f"\n<questions>\n {questions} \n</questions>"
  4. Add documents to ChromaDB:
    collection.add(
        documents=[
            doc1,
        ],
        metadatas=[
            {"source": "mysql", "database": db_name, "table_name": table_airplanes},
        ],
        ids=[table_airplanes],  # unique for each doc
    )
  5. Build the prompt (final_question) by combining the user input in natural language (user_query), the relevant metadata from the vector store (vector_search_match), and instructions (details). A sketch of the format_instructions helper used here appears after these steps:
    instructions = [
        {
            "role": "system",
            "content": """You are a mysql query expert whose output is a valid sql query.
            Only use the following tables:
            It has the following schemas:
            <table_schemas>
            {table_schemas}
            </table_schemas>
            Always combine the database name and table name to build your queries. You must identify these two values before providing a valid SQL query.
            Please construct a valid SQL statement to answer the following question, and return only the mysql query in between <sql></sql>.
            """,
        },
        {
            "role": "user",
            "content": "{question}",
        },
    ]
    tmp_sql_sys_prompt = format_instructions(instructions)
  6. Submit a question to ChromaDB and retrieve the table schema SQL:
    # Query/search for the most similar result
    docs = collection1.query(
        query_texts=[question],
        n_results=1,
    )
    pattern = r"<table_schemas>(.*)</table_schemas>"
    table_schemas = re.search(pattern, docs["documents"][0][0], re.DOTALL).group(1)
    print(f"ChromaDB - Schema Retrieval: \n{table_schemas.strip()}")
  7. Invoke Meta Llama 3 on SageMaker and prompt it to generate the SQL query. The function get_llm_sql_analysis will run and pass the SQL query results to Meta Llama 3 to provide a comprehensive analysis of the data:
    # Generate a prompt to get the LLM to provide an SQL query
    SQL_SYS_PROMPT = PromptTemplate.from_template(tmp_sql_sys_prompt).format(
        question=question,
        table_schemas=table_schemas,
    )

    results = get_llm_sql_analysis(
        question=question,
        sql_sys_prompt=SQL_SYS_PROMPT,
        qna_sys_prompt=QNA_SYS_PROMPT,
    )
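The format_instructions helper used in step 5 is defined in the accompanying notebook; the following is a plausible sketch of such a helper, assuming it assembles the role-based messages into the Meta Llama 3 Instruct special-token template shown earlier.

def format_instructions(instructions):
    """Assemble role-based messages into a Meta Llama 3 Instruct prompt (illustrative sketch)."""
    prompt = "<|begin_of_text|>"
    for message in instructions:
        prompt += (
            f"<|start_header_id|>{message['role']}<|end_header_id|>\n\n"
            f"{message['content']}<|eot_id|>"
        )
    # Leave the assistant header open so the model generates the response
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt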

Although Meta Llama 3 doesn’t natively support function calling, you can simulate an agentic workflow. In this approach, a query is first generated, then run, and the results are sent back to Meta Llama 3 for interpretation.
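A minimal sketch of that loop follows, assuming hypothetical invoke_llm and run_sql helpers that call the SageMaker JumpStart endpoint and run the query against Amazon RDS for MySQL, respectively; the prompt placeholders are also assumptions.

import re

def generate_run_and_analyze(question, sql_sys_prompt, qna_sys_prompt):
    # Step 1: ask Meta Llama 3 for a SQL query wrapped in <sql></sql> tags
    llm_output = invoke_llm(sql_sys_prompt)  # invoke_llm is a hypothetical endpoint wrapper
    sql_query = re.search(r"<sql>(.*?)</sql>", llm_output, re.DOTALL).group(1).strip()

    # Step 2: run the generated query against the database
    rows = run_sql(sql_query)  # run_sql is a hypothetical RDS for MySQL query helper

    # Step 3: send the query and its results back to the model for a natural language analysis
    analysis_prompt = qna_sys_prompt.format(question=question, sql_query=sql_query, results=rows)
    return invoke_llm(analysis_prompt)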

Run queries

For our first query, we provide the input “How many unique airplane producers are represented in the database?” The following is the table schema retrieved from ChromaDB:

<table_schema>
CREATE TABLE airline_db.airplanes -- Table name
(
Airplane_id INT(10), -- airplane id
Producer VARCHAR(20), -- name of the producer
Type VARCHAR(10), -- airplane type
PRIMARY KEY (Airplane_id)
)

</table_schema>

The following is the generated query:

SELECT COUNT(DISTINCT Producer) FROM airline_db.airplanes;

The following is the data analysis generated from the previous SQL query:

Based on the SQL query results, we can see that the output is a single value: `(4)`.

This result indicates that there are **4 unique airplane producers** represented in the database.

In other words, the query has counted the number of distinct airplane producers in the database, and the answer is 4. This means that there are four different companies or entities that produce airplanes, and they are all represented in the database.

Therefore, the answer to the original question is: **There are 4 unique airplane producers represented in the database.**

For our second query, we ask “Find the airplane IDs and producers for airplanes that have flown to New York.” The following are the table schemas retrieved from ChromaDB:


<table_schema>
CREATE TABLE airline_db.airplanes -- Table name
(
Airplane_id INT(10), -- airplane id
Producer VARCHAR(20), -- name of the producer
Type VARCHAR(10), -- airplane type
PRIMARY KEY (Airplane_id)
)
</table_schema>
<table_schema>
CREATE TABLE airline_db.flights -- Table name
(
Flight_number VARCHAR(10), -- flight id
Arrival_time VARCHAR(20), -- arrival time (YYYY-MM-DDTH:M:S)
Arrival_date VARCHAR(20), -- arrival date (YYYY-MM-DD)
Departure_time VARCHAR(20), -- departure time (YYYY-MM-DDTH:M:S)
Departure_date VARCHAR(20), -- departure date (YYYY-MM-DD)
Destination VARCHAR(20), -- destination
Airplane_id INT(10), -- airplane id
PRIMARY KEY (Flight_number),
FOREIGN KEY (Airplane_id) REFERENCES airplanes(Airplane_id)
)
</table_schema>

The following is our generated query:


SELECT a.Airplane_id, a.Producer
FROM airline_db.airplanes a
JOIN airline_db.flights f ON a.Airplane_id = f.Airplane_id
WHERE f.Destination = 'New York';

The following is the data analysis generated from the previous SQL query:

Based on the provided SQL query results, we can analyze and interpret the output as follows:

The result set contains a single row with two columns:

* `airplane_id`: 6
* `producer`: 'Airbus'

This suggests that there is only one airplane that has flown to New York, and its details are as follows:

* The airplane has an ID of 6.
* The producer of this airplane is Airbus.

Therefore, the answer to the original question is that the airplane with ID 6, produced by Airbus, has flown to New York.

Clean up

To avoid incurring continued AWS usage charges, delete all the resources you created as part of this post. Make sure you delete the SageMaker endpoints you created within the application before you delete the CloudFormation stack.

Conclusion

In this post, we explored a solution that uses the vector engine ChromaDB and Meta Llama 3, a publicly available FM hosted on SageMaker JumpStart, for a Text-to-SQL use case. We shared a brief history of Meta Llama 3, best practices for prompt engineering with Meta Llama 3 models, and an architecture pattern using few-shot prompting and RAG to extract the relevant schemas stored as vectors in ChromaDB. Finally, we provided a solution with code samples that gives you flexibility to choose SageMaker JumpStart or Amazon Bedrock for a more managed experience to host Meta Llama 3 70B, Meta Llama 3 8B, and embeddings models.

The use of publicly available FMs and services alongside AWS services helps drive more flexibility and provides more control over the tools being used. We recommend following the SageMaker JumpStart GitHub repo for getting started guides and examples. The solution code is also available in the following GitHub repo.

We look forward to your feedback and ideas on how you apply these solutions to your business needs.


About the Authors

Marco Punio is a Sr. Specialist Solutions Architect focused on generative AI strategy, applied AI solutions, and conducting research to help customers hyperscale on AWS. Marco is based in Seattle, WA, and enjoys writing, reading, exercising, and building applications in his free time.

Armando Diaz is a Solutions Architect at AWS. He focuses on generative AI, AI/ML, and data analytics. At AWS, Armando helps customers integrate cutting-edge generative AI capabilities into their systems, fostering innovation and competitive advantage. When he’s not at work, he enjoys spending time with his wife and family, hiking, and traveling the world.

Breanne Warner is an Enterprise Solutions Architect at Amazon Web Services supporting healthcare and life science (HCLS) customers. She is passionate about supporting customers to leverage generative AI and evangelizing model adoption. Breanne is also on the Women@Amazon board as co-director of Allyship with the goal of fostering inclusive and diverse culture at Amazon. Breanne holds a Bachelor of Science in Computer Engineering.

Varun Mehta is a Solutions Architect at AWS. He is passionate about helping customers build enterprise-scale Well-Architected solutions on the AWS Cloud. He works with strategic customers who are using AI/ML to solve complex business problems. Outside of work, he loves to spend time with his wife and kids.

Chase Pinkerton is a Startups Solutions Architect at Amazon Web Services. He holds a Bachelor’s in Computer Science with a minor in Economics from Tufts University. He’s passionate about helping startups grow and scale their businesses. When not working, he enjoys road cycling, hiking, playing volleyball, and photography.

Kevin Lu is a Technical Business Developer intern at Amazon Web Services on the Generative AI team. His work focuses primarily on machine learning research as well as generative AI solutions. He is currently an undergraduate at the University of Pennsylvania, studying computer science and math. Outside of work, he enjoys spending time with friends and family, golfing, and trying new food.
