Today, we’re announcing structured outputs on Amazon Bedrock—a capability that uses constrained decoding to return validated, schema-compliant JSON responses from foundation models.
This represents a paradigm shift in AI application development. Instead of validating JSON responses and writing fallback logic for when they fail, you can move straight to building with the data. With structured outputs, you can build zero-validation data pipelines that trust model outputs, reliable agentic systems that confidently call external functions, and simplified application architectures without retry logic.
In this post, we explore the challenges of traditional JSON generation and how structured outputs solves them. We cover the two core mechanisms—JSON Schema output format and strict tool use—along with implementation details, best practices, and practical code examples. Whether you’re building data extraction pipelines, agentic workflows, or AI-powered APIs, you’ll learn how to use structured outputs to create reliable, production-ready applications. Our companion Jupyter notebook provides hands-on examples for every feature covered here.
For years, getting structured data from language models meant crafting detailed prompts, hoping for the best, and building elaborate error-handling systems. Even with careful prompting, developers routinely encounter malformed responses that trigger JSON.parse() errors, parsing exceptions, or failed json.loads() calls.

In production systems, these failures compound. A single malformed response can cascade through your pipeline, requiring retries that increase latency and costs. For agentic workflows where models call tools, invalid parameters can break function calls entirely.
Consider a booking system requiring passengers: int. Without schema enforcement, the model might return passengers: "two" or passengers: "2"—syntactically valid JSON, but semantically wrong for your function signature.
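As a quick illustration, a minimal schema for this scenario might look like the following sketch; the destination field is hypothetical, but typing passengers as an integer is what rules out string values during decoding:

```python
# Hypothetical schema for the booking example above. Typing "passengers"
# as integer lets constrained decoding rule out values like "two" or "2".
booking_schema = {
    "type": "object",
    "properties": {
        "destination": {"type": "string"},
        "passengers": {"type": "integer"},
    },
    "required": ["destination", "passengers"],
    "additionalProperties": False,  # required for structured outputs
}
```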
Structured outputs on Amazon Bedrock isn’t an incremental improvement—it’s a fundamental shift from probabilistic to deterministic output formatting. Through constrained decoding, Amazon Bedrock restricts model responses to conform to your specified JSON schema. Two complementary mechanisms are available:
| Feature | Purpose | Use case |
|---|---|---|
| JSON Schema output format | Control the model’s response format | Data extraction, report generation, API responses |
| Strict tool use | Validate tool parameters | Agentic workflows, function calling, multi-step automation |
These features can be used independently or together, giving you precise control over both what the model outputs and how it calls your functions.
Structured outputs uses constrained sampling with compiled grammar artifacts. When you make a request with a schema, Amazon Bedrock compiles the schema into a grammar artifact, caches the artifact for reuse, and applies it during token sampling so the generated response stays within your schema.

Changing the JSON schema structure or a tool’s input schema invalidates the cache, but changing only name or description fields does not.
The following example demonstrates structured outputs with the Converse API:
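Below is a minimal Python sketch, assuming the outputConfig.textFormat parameter shape and the JSON-string jsonSchema.schema field described in the API comparison table later in this post; the model ID, prompt, and ticket schema are illustrative.

```python
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

# Illustrative schema: extract a support ticket from free-form text.
ticket_schema = {
    "type": "object",
    "properties": {
        "customer_email": {"type": "string"},
        "priority": {"type": "string", "enum": ["low", "medium", "high"]},
        "summary": {"type": "string"},
    },
    "required": ["customer_email", "priority", "summary"],
    "additionalProperties": False,
}

response = bedrock_runtime.converse(
    modelId="<your-model-id>",  # placeholder; use a supported model
    messages=[{
        "role": "user",
        "content": [{"text": "Extract a ticket from: 'URGENT: checkout is broken, contact jane@example.com'"}],
    }],
    # Structured outputs: the schema is passed as a JSON string under
    # outputConfig.textFormat, per the parameter names in this post.
    outputConfig={
        "textFormat": {
            "jsonSchema": {
                "schema": json.dumps(ticket_schema),
            }
        }
    },
)

# The model's text content is schema-conforming JSON.
output_text = response["output"]["message"]["content"][0]["text"]
ticket = json.loads(output_text)
print(ticket)
```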
Output:
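For the illustrative request above, a conforming response might look like this (values are hypothetical):

```json
{
  "customer_email": "jane@example.com",
  "priority": "high",
  "summary": "Checkout is broken and blocking purchases."
}
```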
The response conforms to your schema—no additional validation required.
To use structured outputs effectively, follow these guidelines:
- Set additionalProperties: false on all objects. This is required for structured outputs to work. Without it, your schema won’t be accepted.
- Use descriptive field names. Specific names like customer_email outperform generic names like field1.
- Use enum for constrained values. When a field has a limited set of valid values, use enum to constrain options. This improves accuracy and produces valid values.
- Check stopReason in every response. Two scenarios can produce non-conforming responses: refusals (when the model declines for safety reasons) and token limits (when max_tokens is reached before completing). Handle both cases in your code.

Structured outputs supports the following subset of JSON Schema features:

- Types: object, array, string, integer, number, boolean, null
- enum (strings, numbers, booleans, or nulls only)
- const, anyOf, allOf (with limitations)
- $ref, $defs, and definitions (internal references only)
- Format keywords: date-time, time, date, duration, email, hostname, uri, ipv4, ipv6, uuid
- minItems (only values 0 and 1)

The following JSON Schema features are not supported:

- External $ref references
- Numeric constraints (minimum, maximum, multipleOf)
- String length constraints (minLength, maxLength)
- additionalProperties set to anything other than false

When building applications where models call tools, set strict: true in your tool definition to constrain tool parameters to match your input schema exactly.
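Here is a minimal sketch of such a tool definition with the Converse API, using the toolSpec.strict flag described in the API comparison table below; the get_weather tool, its fields, and the model ID are illustrative:

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

# Illustrative weather tool with strict parameter validation.
tool_config = {
    "tools": [{
        "toolSpec": {
            "name": "get_weather",
            "description": "Get the current weather for a location.",
            "inputSchema": {
                "json": {
                    "type": "object",
                    "properties": {
                        "location": {"type": "string"},
                        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                    },
                    "required": ["location", "unit"],
                    "additionalProperties": False,
                }
            },
            # Strict tool use: constrain tool-call parameters to the schema.
            "strict": True,
        }
    }]
}

response = bedrock_runtime.converse(
    modelId="<your-model-id>",  # placeholder
    messages=[{
        "role": "user",
        "content": [{"text": "What's the weather in Berlin, in celsius?"}],
    }],
    toolConfig=tool_config,
)

# Any toolUse block in the response carries parameters constrained to the schema.
for block in response["output"]["message"]["content"]:
    if "toolUse" in block:
        print(block["toolUse"]["input"])  # e.g. {"location": "Berlin", "unit": "celsius"}
```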
With strict: true, structured outputs constrains the output so that:

- The location field is always a string
- The unit field is always either celsius or fahrenheit

The notebook demonstrates use cases that span industries.
Our testing revealed clear patterns for when to use each feature:
- Use the JSON Schema output format when you need to control the shape of the model’s response itself, such as data extraction, report generation, or API responses.
- Use strict tool use when the model calls tools and the parameters must conform to your function signatures, as in agentic workflows and multi-step automation.
- Use both together when you need precise control over both what the model outputs and how it calls your functions.
API comparison: Converse compared to InvokeModel
Both the Converse API and InvokeModel API support structured outputs, with slightly different parameter formats:
| Aspect | Converse API | InvokeModel (Anthropic Claude) | InvokeModel (open-weight models) |
|---|---|---|---|
| Schema location | outputConfig.textFormat | output_config.format | response_format |
| Tool strict flag | toolSpec.strict | tools[].strict | tools[].function.strict |
| Schema format | JSON string in jsonSchema.schema | JSON object in schema | JSON object in json_schema.schema |
| Best for | Conversational workflows | Single-turn inference (Claude) | Single-turn inference (open-weight) |
Note: The InvokeModel API uses different request field names depending on the model type. For Anthropic Claude models, use output_config.format for JSON schema outputs. For open-weight models, use response_format instead.
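As a rough sketch only, an InvokeModel request for an Anthropic Claude model might pass the schema under output_config.format as named in the table above; the exact body shape beyond the fields the table lists is an assumption here, so consult the documentation for the definitive format.

```python
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

# Illustrative schema, reused from the Converse example above.
ticket_schema = {
    "type": "object",
    "properties": {
        "customer_email": {"type": "string"},
        "priority": {"type": "string", "enum": ["low", "medium", "high"]},
    },
    "required": ["customer_email", "priority"],
    "additionalProperties": False,
}

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [
        {"role": "user", "content": "Extract a ticket: 'checkout broken, contact jane@example.com'"}
    ],
    # Per the table above, Claude via InvokeModel takes the schema as a
    # JSON object under output_config.format; additional keys may be
    # required, so check the Amazon Bedrock documentation.
    "output_config": {"format": {"schema": ticket_schema}},
}

response = bedrock_runtime.invoke_model(
    modelId="<your-claude-model-id>",  # placeholder
    body=json.dumps(body),
)
result = json.loads(response["body"].read())
print(result)
```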
Choose the Converse API for multi-turn conversations and the InvokeModel API when you need direct model access with provider-specific request formats.
Structured outputs is generally available in all commercial AWS Regions for select Amazon Bedrock model providers.
The feature works seamlessly with streaming through ConverseStream or InvokeModelWithResponseStream.

In this post, you discovered how structured outputs on Amazon Bedrock reduces the uncertainty of AI-generated JSON through validated, schema-compliant responses. By using the JSON Schema output format and strict tool use, you can build reliable data extraction pipelines, robust agentic workflows, and production-ready AI applications—without custom parsing or validation logic.

Whether you’re extracting data from documents, building intelligent automation, or creating AI-powered APIs, structured outputs delivers the reliability your applications demand.
Structured outputs is now generally available on Amazon Bedrock. To use structured outputs with the Converse APIs, update to the latest AWS SDK. To learn more, see the Amazon Bedrock documentation and explore our sample notebook.
What workflows could validated, schema-compliant JSON unlock in your organization? The notebook provides everything you need to find out.