AutoScout24 is Europe’s leading automotive marketplace platform that connects buyers and sellers of new and used cars, motorcycles, and commercial vehicles across several European countries. Their long-term vision is to build a Bot Factory, a centralized framework for creating and deploying artificial intelligence (AI) agents that can perform tasks and make decisions within workflows, to significantly improve operational efficiency across their organization.
As generative AI agents (systems that can reason, plan, and act) become more powerful, the opportunity to improve internal productivity at AutoScout24 was clear, and several engineering teams began experimenting with the technology. As that experimentation accelerated, AutoScout24 recognized an opportunity to pioneer a standardized approach to AI development. While teams had experimented successfully with various tools and frameworks on Amazon Web Services (AWS), AutoScout24 envisioned a unified, enterprise-grade framework that could enable faster innovation. Their goal was to establish a paved path that makes it easier for teams across the organization to build secure, scalable, and maintainable AI agents. The AutoScout24 AI Platform Engineering team partnered with the AWS Prototyping and Cloud Engineering (PACE) team in a three-week AI bootcamp. The goal was to move from fragmented experiments to a coherent strategy by creating a reusable blueprint, the Bot Factory, to standardize how future AI agents are built and operated within the company.
To ground the Bot Factory blueprint in a tangible business case, the team targeted a significant operational cost: internal developer support. The problem was well-defined. AutoScout24 AI Platform engineers were spending up to 30% of their time on repetitive tasks like answering questions, granting access to tools, and locating documentation. This support tax reduced overall productivity: it diverted skilled engineers from high-priority feature development and forced other developers to wait for routine requests to be completed. An automated support bot was an ideal first use case because it needed to perform two core agent functions:

1. Retrieve knowledge: find answers to developers' questions in internal documentation.
2. Take action: execute routine changes in external systems, such as granting access to tools or assigning licenses.
By building a bot that could do both, the team could validate the blueprint while delivering immediate business value.
In this post, we explore the architecture that AutoScout24 used to build their standardized AI development framework, enabling rapid deployment of secure and scalable AI agents.
The architecture is designed with a simple, decoupled flow so the system is both resilient and straightforward to maintain. The diagram provides a simplified view focused on the core generative AI workflow. In a production environment, additional AWS services such as AWS Identity and Access Management (IAM), Amazon CloudWatch, AWS X-Ray, AWS CloudTrail, AWS WAF, and AWS Key Management Service (AWS KMS) could be integrated to enhance security, observability, and operational governance.
Here is how a request flows through the system:

1. A developer posts a question or request in a Slack thread where the support bot is available.
2. Slack forwards the event to an Amazon API Gateway endpoint.
3. API Gateway acknowledges Slack immediately, and the message is enqueued in Amazon SQS, decoupling ingestion from processing.
4. A consumer picks the message up from the queue, derives the thread's sessionId, and invokes the orchestrator agent hosted on Amazon Bedrock AgentCore.
5. The orchestrator calls the foundation model, which either answers from the knowledge base or delegates an action (such as a GitHub license assignment) to a specialized agent.
6. The response is posted back to the originating Slack thread.
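One way the ingestion step can look is sketched below, assuming a small Lambda proxy behind API Gateway; the queue URL, environment variable, and event handling are illustrative assumptions rather than AutoScout24's actual implementation:

```python
import json
import os

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ["SUPPORT_BOT_QUEUE_URL"]  # placeholder environment variable


def handler(event, context):
    """Receive a Slack event via API Gateway, enqueue it, and return immediately."""
    body = json.loads(event["body"])

    # Slack's one-time URL verification handshake must be answered synchronously.
    if body.get("type") == "url_verification":
        return {"statusCode": 200, "body": body["challenge"]}

    # Defer the actual work to the queue so Slack gets a fast acknowledgement.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(body))
    return {"statusCode": 200, "body": ""}
```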
A crucial implementation detail is how the system uses AgentCore's complete session isolation. To maintain conversational context, the system generates a unique, deterministic sessionId for each Slack thread by combining the channel ID and the thread's timestamp. This sessionId is passed with every agent invocation within that thread, so the agent treats all messages in the thread as one continuous conversation, while other threads receive different sessionIds and therefore keep separate contexts. AgentCore provisions isolated resources per sessionId, so context and state do not leak between threads. In practice, if a developer sends multiple messages in one Slack thread, the agent remembers the earlier parts of that conversation; each thread's history is preserved automatically by AgentCore.
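A minimal sketch of this idea, assuming the orchestrator is hosted on Amazon Bedrock AgentCore Runtime and invoked through the boto3 bedrock-agentcore client (the runtime ARN, payload shape, and function names are placeholders, not AutoScout24's code):

```python
import json

import boto3

agent_core = boto3.client("bedrock-agentcore")

# Placeholder ARN for the deployed orchestrator agent runtime.
AGENT_RUNTIME_ARN = "arn:aws:bedrock-agentcore:eu-west-1:111122223333:runtime/support-bot-example"


def invoke_for_thread(channel_id: str, thread_ts: str, text: str):
    # Deterministic per-thread session ID: the same Slack thread always maps to
    # the same AgentCore session, so the agent keeps that thread's history,
    # while other threads get their own isolated sessions.
    session_id = f"{channel_id}-{thread_ts}"

    response = agent_core.invoke_agent_runtime(
        agentRuntimeArn=AGENT_RUNTIME_ARN,
        runtimeSessionId=session_id,
        payload=json.dumps({"prompt": text}).encode("utf-8"),
    )
    # The response payload contains the agent's reply to post back to the thread.
    return response
```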
This session management strategy is also vital for observability. Because each interaction carries a unique sessionId, it can be traced end to end with AWS X-Ray: from the Slack message arriving at API Gateway and being enqueued in SQS, through the orchestrator's processing, the call to the foundation model, and subsequent tool invocations (such as a knowledge base lookup or a GitHub API call), to the response posted back to Slack.
Timing and metadata on each step show where time is spent. If a step fails or is slow (for example, a timeout on an external API call), X-Ray pinpoints which step caused the issue. This visibility is invaluable for diagnosing problems quickly and building confidence in the system's behavior.
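As an illustration of how the sessionId ties into tracing (a sketch assuming the message consumer runs on AWS Lambda with active tracing enabled; names are placeholders), the per-thread ID can be recorded as an X-Ray annotation so every trace for one conversation can be filtered in the console:

```python
from aws_xray_sdk.core import patch_all, xray_recorder

# Instrument boto3 and common HTTP clients so downstream calls (SQS, Bedrock,
# GitHub API) appear as subsegments in the trace.
patch_all()


def process_thread_message(channel_id: str, thread_ts: str, text: str) -> None:
    session_id = f"{channel_id}-{thread_ts}"

    # Annotations are indexed by X-Ray, so traces for one Slack thread can be
    # found with a filter expression such as: annotation.session_id = "<id>".
    with xray_recorder.in_subsegment("handle_slack_message") as subsegment:
        subsegment.put_annotation("session_id", session_id)
        subsegment.put_metadata("thread_ts", thread_ts)
        # ... invoke the orchestrator agent and post the reply back to Slack ...
```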
The Bot Factory architecture designed by the AutoScout24 and AWS teams is event-driven, serverless, and built on a foundation of managed AWS services. This approach provides a resilient and scalable pattern that can be adapted for new use cases.
The solution builds on Amazon Bedrock and its integrated capabilities:

- Amazon Bedrock foundation models provide the reasoning engine, and different models can be selected for the orchestrator and the lighter worker agents.
- Amazon Bedrock AgentCore hosts and runs the agents, providing isolated sessions, scaling, and managed conversation state with minimal custom infrastructure.
- A knowledge base over AutoScout24's internal documentation backs the informational queries handled by the Knowledge Base agent.
This solution provides a significant advantage for AutoScout24. Instead of building foundational infrastructure for session management, security, and observability, they use AgentCore’s purpose-built services. This allows the team to focus on the agent’s business logic rather than the underlying infrastructure. AgentCore also provides built-in security and isolation features. Each agent invocation runs in its own isolated container, helping to prevent data leakage between sessions. Agents are assigned specific IAM roles to restrict their AWS permissions (following the principle of least privilege). Credentials or tokens needed by agent tools (such as a GitHub API key) are stored securely in AWS Secrets Manager and accessed at runtime. These features give the team a secure environment for running agents with minimal custom infrastructure.
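For example, a GitHub tool might fetch its token at runtime along these lines (a minimal sketch; the secret name and JSON key are assumptions, and the agent's execution role would only need secretsmanager:GetSecretValue on that one secret):

```python
import json

import boto3


def get_github_token(secret_id: str = "bot-factory/github-token") -> str:
    """Fetch the GitHub API token from AWS Secrets Manager at call time."""
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_id)
    # Assumes the secret value is stored as a JSON object with a "token" field.
    return json.loads(response["SecretString"])["token"]
```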
The agent itself was built using the Strands Agents SDK, an open-source framework that simplifies defining an agent’s logic, tools, and behavior in Python. This combination proves effective: Strands to build the agent, and AgentCore to securely run it at scale. The team adopted a sophisticated “agents-as-tools” design pattern, where a central orchestrator agent acts as the main controller. This orchestrator does not contain the logic for every possible task. Instead, it intelligently delegates requests to specialized, single-purpose agents. For the support bot, this included a Knowledge Base agent for handling informational queries and a GitHub agent for executing actions like assigning licenses. This modular design makes it straightforward to extend the system with new capabilities, such as adding a PR review agent, without re-architecting the entire pipeline. Running these agents on Amazon Bedrock further enhances flexibility, since the team can choose from a broad range of foundation models. More powerful models can be applied to complex reasoning tasks, while lighter, cost-efficient models are well-suited for routine worker agents such as GitHub license requests or operational workflows. This ability to mix and match models allows AutoScout24 to balance cost, performance, and accuracy across their agent architecture.
Using the Strands Agents SDK helped the team define the orchestrator agent with concise, declarative code. The framework uses a model-driven approach: the developer focuses on defining the agent’s instructions and tools, and the foundation model handles the reasoning and planning. The orchestrator agent can be expressed in just a few dozen lines of Python. The following simplified sketch (illustrative only, with placeholder model ID, prompt, and module names; not intended for direct use) shows how such an agent is configured with a model, a system prompt, and a list of tools, which in this architecture represent the specialized agents:
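```python
from strands import Agent
from strands.models import BedrockModel

# The specialized agents are wrapped as Strands @tool functions (one of them is
# shown in the next snippet); this module path is a placeholder.
from bot_factory.tools import github_copilot_seat_assistant, knowledge_base_assistant

ORCHESTRATOR_PROMPT = """
You are an internal developer-support assistant.
Answer informational questions with the knowledge base tool, and route
license or access requests to the GitHub tool. Ask for missing details.
"""

orchestrator = Agent(
    # Illustrative model ID; a more capable model can back the orchestrator
    # while lighter models back the worker agents.
    model=BedrockModel(model_id="anthropic.claude-3-5-sonnet-20240620-v1:0"),
    system_prompt=ORCHESTRATOR_PROMPT,
    tools=[knowledge_base_assistant, github_copilot_seat_assistant],
)

# The foundation model plans the steps and decides which tool (agent) to call.
result = orchestrator("Can you assign a GitHub Copilot license to octocat?")
```

Calling the orchestrator with a user's message lets the model decide which specialized agent to invoke; none of that routing logic is hand-written.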
Another example is the GitHub Copilot license agent, which is implemented as a Strands tool function using the @tool decorator. The function creates a GitHubCopilotSeatAgent, passes the user’s request (a GitHub username) to it, and returns the result. The following simplified sketch (module paths and names are placeholders) illustrates the pattern:
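```python
from strands import tool

# GitHubCopilotSeatAgent is the team's specialized worker agent; this import
# path is a placeholder for wherever that class lives in the Bot Factory code.
from bot_factory.agents import GitHubCopilotSeatAgent


@tool
def github_copilot_seat_assistant(github_username: str) -> str:
    """Assign a GitHub Copilot seat to the given GitHub username."""
    agent = GitHubCopilotSeatAgent()
    result = agent(github_username)
    return str(result)
```

In Strands, the function's name, signature, and docstring become the tool description that the orchestrator's model consults when deciding whether to delegate a request here.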
Key benefits of this approach include clear separation of concerns. The developer writes declarative code focused on the agent’s purpose. The complex infrastructure logic, including scaling, session management, and secure execution, is handled by Amazon Bedrock AgentCore. This abstraction enables rapid development and allowed AutoScout24 to move from prototype to production more quickly. The tools list effectively makes other agents callable functions, allowing the orchestrator to delegate tasks without needing to know their internal implementation.
The Bot Factory project delivered results that extend beyond the initial prototype: it creates immediate business value and establishes a strategic foundation for future AI innovation at AutoScout24. The key outcomes were:

- A production support bot in Slack that answers developers' questions from internal documentation and automates routine actions such as GitHub Copilot license requests, addressing the repetitive work that had consumed up to 30% of AI Platform engineers' time.
- A reusable Bot Factory blueprint: a paved path that standardizes how teams across AutoScout24 build, secure, and operate future AI agents.
- A faster route from prototype to production, because session management, isolation, scaling, and observability are handled by managed services instead of custom infrastructure.
AutoScout24’s partnership with AWS turned fragmented generative AI experiments into a scalable, standardized framework. By adopting Amazon Bedrock AgentCore, the team moved their support bot from prototype to production while staying true to their Bot Factory vision. AgentCore manages session state and scaling, so engineers can focus on high-value business logic instead of infrastructure. The outcome is more than a support bot: it’s a reusable foundation for building enterprise agents. With AgentCore, AutoScout24 can move from prototype to production efficiently, setting a model for how organizations can standardize generative AI development on AWS. To start building enterprise agents with Amazon Bedrock, explore the following resources:

- Amazon Bedrock AgentCore
- Strands Agents SDK