Optimizing Test-Time Compute for LLMs: A Meta-Reinforcement Learning Approach with Cumulative Regret Minimization

Enhancing the reasoning abilities of LLMs by optimizing test-time compute is a critical research challenge. Current approaches primarily rely on fine-tuning models with search traces or on RL with binary outcome rewards, and these methods may not exploit test-time compute efficiently. Recent research suggests that increasing test-time compute can improve reasoning by generating longer solution traces and incorporating structured steps such as reflection, planning, and algorithmic search. Two key challenges remain: whether LLMs allocate computational resources effectively based on task complexity, and whether they can discover solutions to more difficult problems when given a larger test-time compute budget. Addressing these questions is crucial for improving efficiency and generalization in LLM reasoning.

Recent advancements in scaling test-time compute have explored training separate verifiers for selection-based methods like best-of-N or beam search, which can sometimes be more effective than increasing data or model size. However, fine-tuning on unfamiliar search traces may lead to memorization rather than genuine reasoning improvements. RL-based approaches have demonstrated promise in generating chain-of-thought reasoning, enabling models to introspect, plan, and refine their outputs. However, increasing reasoning length does not always correlate with higher accuracy, as models may generate unnecessarily long sequences without meaningful progress. To address this, recent efforts have incorporated structured reward mechanisms and length penalties to encourage efficient reasoning, ensuring that models focus on producing informative, concise solutions rather than excessive computation.

Researchers from Carnegie Mellon University & Hugging Face investigate optimizing test-time compute for LLMs by refining how models allocate computational resources during reasoning. Instead of relying solely on outcome-reward RL, they introduce a fine-tuning approach that balances exploration and exploitation, ensuring steady progress toward correct answers. Their method incorporates a dense reward bonus to quantify progress, improving efficiency. Evaluations on mathematical benchmarks demonstrate that this approach significantly outperforms existing methods, enhancing both accuracy and token efficiency. Their findings also suggest that optimizing for progress minimizes computational regret while improving solution discovery without sacrificing accuracy.

The problem of optimizing test-time compute is framed as a meta reinforcement learning (meta RL) challenge. The goal is to maximize an LLM’s performance within a given test-time token budget by balancing exploration and exploitation. Instead of solely optimizing for outcomes, the proposed Meta Reinforcement Fine-Tuning (MRT) approach minimizes cumulative regret by rewarding progress across sequential episodes. This budget-agnostic strategy allows LLMs to make steady progress regardless of training constraints. By incorporating a reward bonus based on incremental improvements, MRT ensures efficient test-time compute usage, enhancing adaptability and response accuracy within deployment constraints.
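
To make the progress-bonus idea concrete, here is a minimal sketch (not the paper's implementation): it assumes we can estimate, after each reasoning episode, the probability that the model would reach the correct answer if it stopped there, and it rewards each episode by the improvement it produces in that estimate, on top of the usual binary outcome reward.

def mrt_style_rewards(success_probs, final_outcome, baseline=0.0, bonus_weight=1.0):
    """Toy illustration of a dense progress bonus.

    success_probs[j] estimates the probability that the model would reach the
    correct answer if it stopped after episode j (for example, by sampling a few
    rollouts from that prefix); final_outcome is the usual 0/1 outcome reward.
    """
    rewards = []
    prev = baseline
    for p in success_probs:
        # Each episode is rewarded by how much it improves the success estimate.
        rewards.append(bonus_weight * (p - prev))
        prev = p
    if rewards:
        # The binary outcome reward is still granted at the end of the trace.
        rewards[-1] += final_outcome
    return rewards

# Example: three reasoning episodes that steadily raise the success estimate.
print(mrt_style_rewards([0.1, 0.4, 0.7], final_outcome=1.0))  # roughly [0.1, 0.3, 1.3]

Under this view, an episode that consumes tokens without improving the chance of success earns no bonus, which is the behavior the cumulative-regret framing is designed to discourage.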

The study evaluates the effectiveness of MRT in optimizing test-time computation, with a focus on achieving high accuracy while maintaining computational efficiency. The study presents key findings, compares MRT’s efficiency with prior methods, and conducts ablation experiments on token budget and progress. MRT consistently outperforms baseline models and outcome-reward RL (GRPO), achieving state-of-the-art results in its size category. It also improves out-of-distribution robustness and delivers larger performance gains with weaker models. Furthermore, MRT significantly enhances token efficiency, requiring fewer tokens for comparable accuracy. Additional experiments highlight its effectiveness in backtracking search and linearized evaluations.

In conclusion, the study reframes optimizing test-time compute as a meta reinforcement learning (meta RL) problem, introducing cumulative regret as a key metric. State-of-the-art outcome-reward RL models fail to minimize regret, often struggling with novel queries within a token budget. This limitation arises from training solely with outcome rewards, which lack the granularity to guide stepwise progress. To address this, MRT is proposed, incorporating a dense reward bonus that encourages incremental improvement. MRT enhances test-time compute efficiency, achieving 2-3x better performance and 1.5x greater token efficiency in mathematical reasoning compared to outcome-reward RL, though several open questions remain.

Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project.

The post Optimizing Test-Time Compute for LLMs: A Meta-Reinforcement Learning Approach with Cumulative Regret Minimization appeared first on MarkTechPost.

Getting started with computer use in Amazon Bedrock Agents

Computer use is a breakthrough capability from Anthropic that allows foundation models (FMs) to visually perceive and interpret digital interfaces. This capability enables Anthropic’s Claude models to identify what’s on a screen, understand the context of UI elements, and recognize actions that should be performed such as clicking buttons, typing text, scrolling, and navigating between applications. However, the model itself doesn’t execute these actions—it requires an orchestration layer to safely implement the supported actions.
Today, we’re announcing computer use support within Amazon Bedrock Agents using Anthropic’s Claude 3.5 Sonnet V2 and Anthropic’s Claude Sonnet 3.7 models on Amazon Bedrock. This integration brings Anthropic’s visual perception capabilities as a managed tool within Amazon Bedrock Agents, providing you with a secure, traceable, and managed way to implement computer use automation in your workflows.
Organizations across industries struggle with automating repetitive tasks that span multiple applications and systems of record. Whether processing invoices, updating customer records, or managing human resource (HR) documents, these workflows often require employees to manually transfer information between different systems – a process that’s time-consuming, error-prone, and difficult to scale.
Traditional automation approaches require custom API integrations for each application, creating significant development overhead. Computer use capabilities change this paradigm by allowing machines to perceive existing interfaces just as humans do.
In this post, we create a computer use agent demo that provides the critical orchestration layer that transforms computer use from a perception capability into actionable automation. Without this orchestration layer, computer use would only identify potential actions without executing them. The computer use agent demo powered by Amazon Bedrock Agents provides the following benefits:

Secure execution environment – Execution of computer use tools in a sandbox environment with limited access to the AWS ecosystem and the web. Note that Amazon Bedrock Agents does not currently provide a sandbox environment; this demo supplies one
Comprehensive logging – Ability to track each action and interaction for auditing and debugging
Detailed tracing capabilities – Visibility into each step of the automated workflow
Simplified testing and experimentation – Reduced risk when working with this experimental capability through managed controls
Seamless orchestration – Coordination of complex workflows across multiple systems without custom code

This integration combines Anthropic’s perceptual understanding of digital interfaces with the orchestration capabilities of Amazon Bedrock Agents, creating a powerful agent for automating complex workflows across applications. Rather than build custom integrations for each system, developers can now create agents that perceive and interact with existing interfaces in a managed, secure way.
With computer use, Amazon Bedrock Agents can automate tasks through basic GUI actions and built-in Linux commands. For example, your agent could take screenshots, create and edit text files, and run built-in Linux commands. Using Amazon Bedrock Agents and compatible Anthropic’s Claude models, you can use the following action groups:

Computer tool – Enables interactions with user interfaces (clicking, typing, scrolling)
Text editor tool – Provides capabilities to edit and manipulate files
Bash – Allows execution of built-in Linux commands

Solution overview
An example computer use workflow consists of the following steps:

Create an Amazon Bedrock agent and use natural language to describe what the agent should do and how it should interact with users, for example: "You are a computer use agent capable of using the Firefox web browser for web search."
Add the computer use action groups supported by Amazon Bedrock Agents to your agent using the CreateAgentActionGroup API.
Invoke the agent with a user query that requires computer use tools, for example, “What is Amazon Bedrock, can you search the web?”
The Amazon Bedrock agent uses the tool definitions at its disposal and decides to use the computer action group to take a screenshot of the environment. Using the return control capability of Amazon Bedrock Agents, the agent then responds with the tool or tools that it wants to execute. The return control capability is required for using computer use with Amazon Bedrock Agents.
The workflow parses the agent response and executes the returned tool in a sandbox environment, then gives the output back to the Amazon Bedrock agent for further processing (a simplified sketch of this loop follows the list).
The Amazon Bedrock agent continues to respond with tools at its disposal until the task is complete.
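
The return-control loop in the preceding steps can be sketched with the Bedrock Agents runtime API. The following is a simplified illustration, not the repository's implementation: it assumes a deployed agent and alias, uses a hypothetical execute_tool_in_sandbox helper for the environment container, omits error handling, and the exact result payload shape should be checked against the InvokeAgent API reference.

import uuid
import boto3

runtime = boto3.client("bedrock-agent-runtime", region_name="us-west-2")
session_id = str(uuid.uuid4())

def execute_tool_in_sandbox(invocation_input):
    """Hypothetical helper: forwards the requested tool call (screenshot, click,
    bash command, and so on) to the environment container and returns its output."""
    raise NotImplementedError

def agent_turn(input_text=None, session_state=None):
    kwargs = {
        "agentId": "<YOUR_AGENT_ID>",
        "agentAliasId": "<YOUR_AGENT_ALIAS_ID>",
        "sessionId": session_id,
    }
    if input_text:
        kwargs["inputText"] = input_text
    if session_state:
        kwargs["sessionState"] = session_state
    response = runtime.invoke_agent(**kwargs)

    # The completion is an event stream; a returnControl event means the agent
    # wants the orchestration layer to execute one or more tools on its behalf.
    for event in response["completion"]:
        if "returnControl" in event:
            rc = event["returnControl"]
            results = []
            for inv in rc["invocationInputs"]:
                tool_output = execute_tool_in_sandbox(inv)
                # Placeholder result shape; check the InvokeAgent API reference
                # for the exact functionResult/apiResult structure.
                results.append({"functionResult": {"responseBody": {"TEXT": {"body": tool_output}}}})
            # Return the tool output so the agent can decide on the next step.
            return agent_turn(session_state={
                "invocationId": rc["invocationId"],
                "returnControlInvocationResults": results,
            })
        if "chunk" in event:
            print(event["chunk"]["bytes"].decode())

agent_turn("What is Amazon Bedrock, can you search the web?")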

You can recreate this example in the us-west-2 AWS Region with the AWS Cloud Development Kit (AWS CDK) by following the instructions in the GitHub repository. This demo deploys a containerized application using AWS Fargate across two Availability Zones in the us-west-2 Region. The infrastructure operates within a virtual private cloud (VPC) containing public subnets in each Availability Zone, with an internet gateway providing external connectivity. The architecture is complemented by essential supporting services, including AWS Key Management Service (AWS KMS) for security and Amazon CloudWatch for monitoring, creating a resilient, serverless container environment that alleviates the need to manage underlying infrastructure while maintaining robust security and high availability.
The following diagram illustrates the solution architecture.

At the core of our solution are two Fargate containers managed through Amazon Elastic Container Service (Amazon ECS), each protected by its own security group. The first is our orchestration container, which not only handles the communication between Amazon Bedrock Agents and end users, but also orchestrates the workflow that enables tool execution. The second is our environment container, which serves as a secure sandbox where the Amazon Bedrock agent can safely run its computer use tools. The environment container has limited access to the rest of the ecosystem and the internet. We utilize service discovery to connect Amazon ECS services with DNS names.
The orchestration container includes the following components:

Streamlit UI – A Streamlit application that facilitates interaction between the end user and the computer use agent
Return control loop – The workflow responsible for parsing the tools that the agent wants to execute and returning the output of these tools

The environment container includes the following components:

UI and pre-installed applications – A lightweight UI and pre-installed Linux applications like Firefox that can be used to complete the user’s tasks
Tool implementation – Code that executes computer use tools in the environment, such as "screenshot" or "double-click"
Quart (RESTful) JSON API – A Quart-based RESTful JSON API that receives requests from the orchestration container and executes tools in the sandbox environment

The following diagram illustrates these components.

Prerequisites

AWS Command Line Interface (AWS CLI) installed (follow the instructions here) and configured with credentials (follow the instructions here).
Python 3.11 or later.
Node.js 14.15.0 or later.
AWS CDK CLI installed (follow the instructions here).
Model access enabled for Anthropic's Claude 3.5 Sonnet V2 and Anthropic's Claude 3.7 Sonnet.
Boto3 version 1.37.10 or later.

Create an Amazon Bedrock agent with computer use
You can use the following code sample to create a simple Amazon Bedrock agent with computer, bash, and text editor action groups. It is crucial to provide a compatible action group signature when using Anthropic’s Claude 3.5 Sonnet V2 and Anthropic’s Claude 3.7 Sonnet as highlighted here.

Model – Action group signatures

Anthropic's Claude 3.5 Sonnet V2 – computer_20241022, text_editor_20241022, bash_20241022
Anthropic's Claude 3.7 Sonnet – computer_20250124, text_editor_20250124, bash_20250124

import boto3
import time

# Placeholder for the Amazon Bedrock Agent execution role ARN
agent_role_arn = "arn:aws:iam::<YOUR_ACCOUNT_ID>:role/<YOUR_AGENT_EXECUTION_ROLE>"

# Step 1: Create the Bedrock Agents client
bedrock_agent = boto3.client("bedrock-agent", region_name="us-west-2")

# Step 2: Create an agent
create_agent_response = bedrock_agent.create_agent(
    agentResourceRoleArn=agent_role_arn,  # Amazon Bedrock Agent execution role
    agentName="computeruse",
    description="""Example agent for computer use.
    This agent should only operate on
    sandbox environments with limited privileges.""",
    foundationModel="us.anthropic.claude-3-7-sonnet-20250219-v1:0",
    instruction="""You are a computer use agent capable of using the Firefox
    web browser for web search.""",
)

time.sleep(30)  # wait for the agent to be created

# Step 3.1: Create and attach the computer action group
bedrock_agent.create_agent_action_group(
    actionGroupName="ComputerActionGroup",
    actionGroupState="ENABLED",
    agentId=create_agent_response["agent"]["agentId"],
    agentVersion="DRAFT",
    parentActionGroupSignature="ANTHROPIC.Computer",
    parentActionGroupSignatureParams={
        "type": "computer_20250124",
        "display_height_px": "768",
        "display_width_px": "1024",
        "display_number": "1",
    },
)

# Step 3.2: Create and attach the bash action group
bedrock_agent.create_agent_action_group(
    actionGroupName="BashActionGroup",
    actionGroupState="ENABLED",
    agentId=create_agent_response["agent"]["agentId"],
    agentVersion="DRAFT",
    parentActionGroupSignature="ANTHROPIC.Bash",
    parentActionGroupSignatureParams={
        "type": "bash_20250124",
    },
)

# Step 3.3: Create and attach the text editor action group
bedrock_agent.create_agent_action_group(
    actionGroupName="TextEditorActionGroup",
    actionGroupState="ENABLED",
    agentId=create_agent_response["agent"]["agentId"],
    agentVersion="DRAFT",
    parentActionGroupSignature="ANTHROPIC.TextEditor",
    parentActionGroupSignatureParams={
        "type": "text_editor_20250124",
    },
)

# Step 3.4: Create a weather action group (custom tool using return control)
bedrock_agent.create_agent_action_group(
    actionGroupName="WeatherActionGroup",
    agentId=create_agent_response["agent"]["agentId"],
    agentVersion="DRAFT",
    actionGroupExecutor={
        "customControl": "RETURN_CONTROL",
    },
    functionSchema={
        "functions": [
            {
                "name": "get_current_weather",
                "description": "Get the current weather in a given location.",
                "parameters": {
                    "location": {
                        "type": "string",
                        "description": "The city, e.g., San Francisco",
                        "required": True,
                    },
                    "unit": {
                        "type": "string",
                        "description": 'The unit to use, e.g., fahrenheit or celsius. Defaults to "fahrenheit"',
                        "required": False,
                    },
                },
                "requireConfirmation": "DISABLED",
            }
        ]
    },
)

time.sleep(10)

# Step 4: Prepare the agent
bedrock_agent.prepare_agent(agentId=create_agent_response["agent"]["agentId"])

Example use case
In this post, we demonstrate an example where we use Amazon Bedrock Agents with the computer use capability to complete a web form. In the example, the computer use agent can also switch Firefox tabs to interact with a customer relationship management (CRM) agent to get the required information to complete the form. Although this example uses a sample CRM application as the system of record, the same approach works with Salesforce, SAP, Workday, or other systems of record with the appropriate authentication frameworks in place.

In the demonstrated use case, you can observe how well the Amazon Bedrock agent performed with the computer use tools. Our implementation filled in the customer ID, customer name, and email fields by visually examining the Excel data. However, for the overview field, it decided to select the cell and copy the data because the information wasn't completely visible on the screen. Finally, the CRM agent was used to get additional information on the customer.
Best practices
The following are some ways you can improve the performance for your use case:

Implement Security Groups, Network Access Control Lists (NACLs), and Amazon Route 53 Resolver DNS Firewall domain lists to control access to the sandbox environment.
Apply AWS Identity and Access Management (IAM) and the principle of least privilege to assign limited permissions to the sandbox environment.
When providing the Amazon Bedrock agent with instructions, be concise and direct. Specify simple, well-defined tasks and provide explicit instructions for each step.
Understand computer use limitations as highlighted by Anthropic here.
Complement return of control with user confirmation to help safeguard your application from malicious prompt injections by requesting confirmation from your users before invoking a computer use tool.
Use multi-agent collaboration and computer use with Amazon Bedrock Agents to automate complex workflows.
Implement safeguards by filtering harmful multimodal content based on your responsible AI policies for your application by associating Amazon Bedrock Guardrails with your agent.

Considerations
The computer use feature is made available to you as a beta service as defined in the AWS Service Terms. It is subject to your agreement with AWS and the AWS Service Terms, and the applicable model EULA. Computer use poses unique risks that are distinct from standard API features or chat interfaces. These risks are heightened when using the computer use feature to interact with the internet. To minimize risks, consider taking precautions such as:

Operate computer use functionality in a dedicated virtual machine or container with minimal privileges to minimize direct system exploits or accidents
To help prevent information theft, avoid giving the computer use API access to sensitive accounts or data
Limit the computer use API’s internet access to required domains to reduce exposure to malicious content
To enforce proper oversight, keep a human in the loop for sensitive tasks (such as making decisions that could have meaningful real-world consequences) and for anything requiring affirmative consent (such as accepting cookies, executing financial transactions, or agreeing to terms of service)

Any content that you enable Anthropic’s Claude to see or access can potentially override instructions or cause the model to make mistakes or perform unintended actions. Taking proper precautions, such as isolating Anthropic’s Claude from sensitive surfaces, is essential – including to avoid risks related to prompt injection. Before enabling or requesting permissions necessary to enable computer use features in your own products, inform end users of any relevant risks, and obtain their consent as appropriate.
Clean up
When you are done using this solution, make sure to clean up all the resources. Follow the instructions in the provided GitHub repository.
Conclusion
Organizations across industries face significant challenges with cross-application workflows that traditionally require manual data entry or complex custom integrations. The integration of Anthropic’s computer use capability with Amazon Bedrock Agents represents a transformative approach to these challenges.
By using Amazon Bedrock Agents as the orchestration layer, organizations can alleviate the need for custom API development for each application, benefit from comprehensive logging and tracing capabilities essential for enterprise deployment, and implement automation solutions quickly.
As you begin exploring computer use with Amazon Bedrock Agents, consider workflows in your organization that could benefit from this approach. From invoice processing to customer onboarding, HR documentation to compliance reporting, the potential applications are vast and transformative.
We’re excited to see how you will use Amazon Bedrock Agents with the computer use capability to securely streamline operations and reimagine business processes through AI-driven automation.
Resources
To learn more, refer to the following resources:

Computer use with Amazon Bedrock Agents guide
Computer use with Amazon Bedrock Agents implementation
Computer use with Anthropic’s Claude implementation
Computer use with Anthropic guide
Amazon Bedrock Agent Samples

About the Authors
Eashan Kaushik is a Specialist Solutions Architect AI/ML at Amazon Web Services. He is driven by creating cutting-edge generative AI solutions while prioritizing a customer-centric approach to his work. Before this role, he obtained an MS in Computer Science from NYU Tandon School of Engineering. Outside of work, he enjoys sports, lifting, and running marathons.
Maira Ladeira Tanke is a Tech Lead for Agentic workloads in Amazon Bedrock at AWS, where she enables customers on their journey to develop autonomous AI systems. With over 10 years of experience in AI/ML. At AWS, Maira partners with enterprise customers to accelerate the adoption of agentic applications using Amazon Bedrock, helping organizations harness the power of foundation models to drive innovation and business transformation. In her free time, Maira enjoys traveling, playing with her cat, and spending time with her family someplace warm.
Raj Pathak is a Principal Solutions Architect and Technical advisor to Fortune 50 and Mid-Sized FSI (Banking, Insurance, Capital Markets) customers across Canada and the United States. Raj specializes in Machine Learning with applications in Generative AI, Natural Language Processing, Intelligent Document Processing, and MLOps.
Adarsh Srikanth is a Software Development Engineer at Amazon Bedrock, where he develops AI agent services. He holds a master’s degree in computer science from USC and brings three years of industry experience to his role. He spends his free time exploring national parks, discovering new hiking trails, and playing various racquet sports.
Abishek Kumar is a Senior Software Engineer at Amazon, bringing over 6 years of valuable experience across both retail and AWS organizations. He has demonstrated expertise in developing generative AI and machine learning solutions, specifically contributing to key AWS services including SageMaker Autopilot, SageMaker Canvas, and AWS Bedrock Agents. Throughout his career, Abishek has shown passion for solving complex problems and architecting large-scale systems that serve millions of customers worldwide. When not immersed in technology, he enjoys exploring nature through hiking and traveling adventures with his wife.
Krishna Gourishetti is a Senior Software Engineer for the Bedrock Agents team in AWS. He is passionate about building scalable software solutions that solve customer problems. In his free time, Krishna loves to go on hikes.

Evaluating RAG applications with Amazon Bedrock knowledge base evaluation

Organizations building and deploying AI applications, particularly those using large language models (LLMs) with Retrieval Augmented Generation (RAG) systems, face a significant challenge: how to evaluate AI outputs effectively throughout the application lifecycle. As these AI technologies become more sophisticated and widely adopted, maintaining consistent quality and performance becomes increasingly complex.
Traditional AI evaluation approaches have significant limitations. Human evaluation, although thorough, is time-consuming and expensive at scale. Automated metrics are fast and cost-effective, but they can only evaluate the correctness of an AI response without capturing other evaluation dimensions or explaining why an answer is problematic. Furthermore, traditional automated evaluation metrics typically require ground truth data, which is difficult to obtain for many AI applications; for those involving open-ended generation or retrieval augmented systems, defining a single "correct" answer is practically impossible. Finally, metrics such as ROUGE and F1 can be fooled by shallow linguistic similarities (word overlap) between the ground truth and the LLM response, even when the actual meaning is very different. These challenges make it difficult for organizations to maintain consistent quality standards across their AI applications, particularly for generative AI outputs.
Amazon Bedrock has recently launched two new capabilities to address these evaluation challenges: LLM-as-a-judge (LLMaaJ) under Amazon Bedrock Evaluations and a brand new RAG evaluation tool for Amazon Bedrock Knowledge Bases. Both features rely on the same LLM-as-a-judge technology under the hood, with slight differences depending on whether a model or a RAG application built with Amazon Bedrock Knowledge Bases is being evaluated. These evaluation features combine the speed of automated methods with human-like nuanced understanding, enabling organizations to:

Assess AI model outputs across various tasks and contexts
Evaluate multiple evaluation dimensions of AI performance simultaneously
Systematically assess both retrieval and generation quality in RAG systems
Scale evaluations across thousands of responses while maintaining quality standards

These capabilities integrate seamlessly into the AI development lifecycle, empowering organizations to improve model and application quality, promote responsible AI practices, and make data-driven decisions about model selection and application deployment.
This post focuses on RAG evaluation with Amazon Bedrock Knowledge Bases, provides a guide to set up the feature, discusses nuances to consider as you evaluate your prompts and responses, and finally discusses best practices. By the end of this post, you will understand how the latest Amazon Bedrock evaluation features can streamline your approach to AI quality assurance, enabling more efficient and confident development of RAG applications.
Key features
Before diving into the implementation details, we examine the key features that make the capabilities of RAG evaluation on Amazon Bedrock Knowledge Bases particularly powerful. The key features are:

Amazon Bedrock Evaluations

Evaluate Amazon Bedrock Knowledge Bases directly within the service
Systematically evaluate both retrieval and generation quality in RAG systems to inform changes to knowledge base build-time or runtime parameters

Comprehensive, understandable, and actionable evaluation metrics

Retrieval metrics: Assess context relevance and coverage using an LLM as a judge
Generation quality metrics: Measure correctness, faithfulness (to detect hallucinations), completeness, and more
Provide natural language explanations for each score in the output and on the console
Compare results across multiple evaluation jobs for both retrieval and generation
Metric scores are normalized to a 0–1 range

Scalable and efficient assessment

Scale evaluation across thousands of responses
Reduce costs compared to manual evaluation while maintaining high quality standards

Flexible evaluation framework

Support both ground truth and reference-free evaluations
Equip users to select from a variety of metrics for evaluation
Support evaluation of fine-tuned or distilled models on Amazon Bedrock
Provide a choice of evaluator models

Model selection and comparison

Compare evaluation jobs across different generating models
Facilitate data-driven optimization of model performance

Responsible AI integration

Incorporate built-in responsible AI metrics such as harmfulness, answer refusal, and stereotyping
Seamlessly integrate with Amazon Bedrock Guardrails

These features enable organizations to comprehensively assess AI performance, promote responsible AI development, and make informed decisions about model selection and optimization throughout the AI application lifecycle. Now that we’ve explained the key features, we examine how these capabilities come together in a practical implementation.
Feature overview
The Amazon Bedrock Knowledge Bases RAG evaluation feature provides a comprehensive, end-to-end solution for assessing and optimizing RAG applications. This automated process uses the power of LLMs to evaluate both retrieval and generation quality, offering insights that can significantly improve your AI applications.
The workflow is as follows, as shown moving from left to right in the following architecture diagram:

Prompt dataset – Prepared set of prompts, optionally including ground truth responses
JSONL file – Prompt dataset converted to JSONL format for the evaluation job
Amazon Simple Storage Service (Amazon S3) bucket – Storage for the prepared JSONL file
Amazon Bedrock Knowledge Bases RAG evaluation job – Core component that processes the data, integrating with Amazon Bedrock Guardrails and Amazon Bedrock Knowledge Bases.
Automated report generation – Produces a comprehensive report with detailed metrics and insights at individual prompt or conversation level
Report analysis – Analysis of the report to derive actionable insights for RAG system optimization

Designing holistic RAG evaluations: Balancing cost, quality, and speed
RAG system evaluation requires a balanced approach that considers three key aspects: cost, speed, and quality. Although Amazon Bedrock Evaluations primarily focuses on quality metrics, understanding all three components helps create a comprehensive evaluation strategy. The following diagram shows how these components interact and feed into a comprehensive evaluation strategy, and the next sections examine each component in detail.

Cost and speed considerations
The efficiency of RAG systems depends on model selection and usage patterns. Costs are primarily driven by data retrieval and by token consumption during retrieval and generation, while speed depends on model size and complexity as well as prompt and context size. For applications that require high-performance content generation at lower latency and cost, model distillation can be an effective way to create a generator model, producing smaller, faster models that maintain the quality of larger models for specific use cases.
Quality assessment framework
Amazon Bedrock knowledge base evaluation provides comprehensive insights through various quality dimensions:

Technical quality through metrics such as context relevance and faithfulness
Business alignment through correctness and completeness scores
User experience through helpfulness and logical coherence measurements
Responsible AI through built-in metrics such as harmfulness, stereotyping, and answer refusal

Establishing baseline understanding
Begin your evaluation process by choosing default configurations in your knowledge base (vector or graph database), such as default chunking strategies, embedding models, and prompt templates. These are just some of the possible options. This approach establishes a baseline performance, helping you understand your RAG system’s current effectiveness across available evaluation metrics before optimization. Next, create a diverse evaluation dataset. Make sure this dataset contains a diverse set of queries and knowledge sources that accurately reflect your use case. The diversity of this dataset will provide a comprehensive view of your RAG application performance in production.
Iterative improvement process
Understanding how different components affect these metrics enables informed decisions about:

Knowledge base configuration (chunking strategy or embedding size or model) and inference parameter refinement
Retrieval strategy modifications (semantic or hybrid search)
Prompt engineering refinements
Model selection and inference parameter configuration
Choice between different vector stores including graph databases

Continuous evaluation and improvement
Implement a systematic approach to ongoing evaluation:

Schedule regular offline evaluation cycles aligned with knowledge base updates
Track metric trends over time to identify areas for improvement
Use insights to guide knowledge base refinements and generator model customization and selection

Prerequisites
To use the knowledge base evaluation feature, make sure that you have satisfied the following requirements:

An active AWS account.
Selected evaluator and generator models enabled in Amazon Bedrock. You can confirm that the models are enabled for your account on the Model access page of the Amazon Bedrock console.
Confirm the AWS Regions where the models are available and the applicable quotas.
Complete the knowledge base evaluation prerequisites related to AWS Identity and Access Management (IAM) role creation, and add permissions for an S3 bucket so the evaluation job can access and write output data.

You also need to set up and enable CORS on your S3 bucket.

Have an Amazon Bedrock knowledge base created and sync your data such that it’s ready to be used by a knowledge base evaluation job.
If you're using a custom model instead of an on-demand model for your generator model, make sure you have sufficient quota for running Provisioned Throughput during inference. Go to the Service Quotas console and check the following quotas:

Model units no-commitment Provisioned Throughputs across custom models
Model units per provisioned model for [your custom model name]
Both fields need to have enough quota to support your Provisioned Throughput model unit. Request a quota increase if necessary to accommodate your expected inference workload.

Prepare input dataset
To prepare your dataset for a knowledge base evaluation job, you need to address two things: the dataset requirements and the input format.

Dataset requirements:

Maximum 1,000 conversations per evaluation job (1 conversation is contained in the conversationTurns key in the dataset format)
Maximum 5 turns (prompts) per conversation
File must use JSONL format (.jsonl extension)
Each line must be a valid JSON object and complete prompt
Stored in an S3 bucket with CORS enabled

Use the following format:

Retrieve-only evaluation jobs:

Special note: On March 20, 2025, the referenceContexts key will change to referenceResponses. The content of referenceResponses should be the expected ground truth answer that an end-to-end RAG system would have generated given the prompt, not the expected passages/chunks retrieved from the Knowledge Base.

{
    "conversationTurns": [{
        ## required for the Context Coverage metric
        "referenceContexts": [{
            "content": [{
                "text": "This is a reference response used as ground truth"
            }]
        }],
        ## your prompt to the model
        "prompt": {
            "content": [{
                "text": "This is a prompt"
            }]
        }
    }]
}

Retrieve-and-generate evaluation jobs:

{
    "conversationTurns": [{
        ## optional ground truth response
        "referenceResponses": [{
            "content": [{
                "text": "This is a reference response used as ground truth"
            }]
        }],
        ## your prompt to the model
        "prompt": {
            "content": [{
                "text": "This is a prompt"
            }]
        }
    }]
}
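
To produce this file programmatically, you can serialize each prompt (and its optional ground truth) as one JSON object per line and upload the result to your CORS-enabled S3 bucket. The snippet below is a minimal sketch; the bucket name, key, and example data are placeholders.

import json
import boto3

examples = [
    {"prompt": "What is Amazon Bedrock?",
     "reference": "Amazon Bedrock is a fully managed service for foundation models."},
    # ... up to 1,000 conversations per evaluation job
]

lines = []
for ex in examples:
    record = {
        "conversationTurns": [{
            "prompt": {"content": [{"text": ex["prompt"]}]},
            # Optional ground truth, used by retrieve-and-generate evaluation jobs
            "referenceResponses": [{"content": [{"text": ex["reference"]}]}],
        }]
    }
    lines.append(json.dumps(record))

boto3.client("s3").put_object(
    Bucket="<YOUR_BUCKET>",
    Key="evaluation_data/input.jsonl",
    Body="\n".join(lines).encode("utf-8"),
)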

Start a knowledge base RAG evaluation job using the console
Amazon Bedrock Evaluations provides you with an option to run an evaluation job through a guided user interface on the console. To start an evaluation job through the console, follow these steps:

On the Amazon Bedrock console, under Inference and Assessment in the navigation pane, choose Evaluations and then choose Knowledge Bases.
Choose Create, as shown in the following screenshot.
Give an Evaluation name, a Description, and choose an Evaluator model, as shown in the following screenshot. This model will be used as a judge to evaluate the response of the RAG application.
Choose the knowledge base and the evaluation type, as shown in the following screenshot. Choose Retrieval only if you want to evaluate only the retrieval component and Retrieval and response generation if you want to evaluate the end-to-end retrieval and response generation. Select a model, which will be used for generating responses in this evaluation job.
(Optional) To change inference parameters, choose Configurations. You can experiment with different values of temperature and top-P, update knowledge base prompt templates, associate guardrails, update the search strategy, and configure the number of chunks retrieved. The following screenshot shows the Configurations screen.
Choose the Metrics you would like to use to evaluate the RAG application, as shown in the following screenshot.
Provide the S3 URIs for the evaluation data and the evaluation results. You can use the Browse S3 option to select these locations.
Select a service (IAM) role with the proper permissions. This includes service access to Amazon Bedrock, the S3 buckets in the evaluation job, the knowledge base in the job, and the models being used in the job. You can also create a new IAM role in the evaluation setup and the service will automatically give the role the proper permissions for the job.
Choose Create.
You will be able to check the evaluation job In Progress status on the Knowledge Base evaluations screen, as shown in the following screenshot.
Wait for the job to be complete. This could be 10–15 minutes for a small job or a few hours for a large job with hundreds of long prompts and all metrics selected. When the evaluation job has been completed, the status will show as Completed, as shown in the following screenshot.
When it's complete, select the job to observe its details. The following screenshot shows the Metric summary.
You should also observe a directory with the evaluation job name in the Amazon S3 path. You can find the output S3 path from your job results page in the evaluation summary section.
You can compare two evaluation jobs to gain insights about how different configurations or selections are performing. You can view a radar chart comparing performance metrics between two RAG evaluation jobs, making it simple to visualize relative strengths and weaknesses across different dimensions, as shown in the following screenshot.

On the Evaluation details tab, examine score distributions through histograms for each evaluation metric, showing average scores and percentage differences. Hover over the histogram bars to check the number of conversations in each score range, helping identify patterns in performance, as shown in the following screenshots.

Start a knowledge base evaluation job using Python SDK and APIs
To use the Python SDK for creating a knowledge base evaluation job, follow these steps. First, set up the required configurations:

import boto3
from datetime import datetime

# Generate a unique name for the job
job_name = f"kb-evaluation-{datetime.now().strftime('%Y-%m-%d-%H-%M-%S')}"

# Configure your knowledge base and model settings
knowledge_base_id = "<YOUR_KB_ID>"
evaluator_model = "mistral.mistral-large-2402-v1:0"
generator_model = "anthropic.claude-3-sonnet-20240229-v1:0"
role_arn = "arn:aws:iam::<YOUR_ACCOUNT_ID>:role/<YOUR_IAM_ROLE>"

# Specify S3 locations for evaluation data and output
input_data = "s3://<YOUR_BUCKET>/evaluation_data/input.jsonl"
output_path = "s3://<YOUR_BUCKET>/evaluation_output/"

# Configure retrieval settings
num_results = 10
search_type = "HYBRID"

# Create the Bedrock client
bedrock_client = boto3.client('bedrock')

For retrieval-only evaluation, create a job that focuses on assessing the quality of retrieved contexts:

retrieval_job = bedrock_client.create_evaluation_job(
    jobName=job_name,
    jobDescription="Evaluate retrieval performance",
    roleArn=role_arn,
    applicationType="RagEvaluation",
    inferenceConfig={
        "ragConfigs": [{
            "knowledgeBaseConfig": {
                "retrieveConfig": {
                    "knowledgeBaseId": knowledge_base_id,
                    "knowledgeBaseRetrievalConfiguration": {
                        "vectorSearchConfiguration": {
                            "numberOfResults": num_results,
                            "overrideSearchType": search_type
                        }
                    }
                }
            }
        }]
    },
    outputDataConfig={
        "s3Uri": output_path
    },
    evaluationConfig={
        "automated": {
            "datasetMetricConfigs": [{
                "taskType": "Custom",
                "dataset": {
                    "name": "RagDataset",
                    "datasetLocation": {
                        "s3Uri": input_data
                    }
                },
                "metricNames": [
                    "Builtin.ContextRelevance",
                    "Builtin.ContextCoverage"
                ]
            }],
            "evaluatorModelConfig": {
                "bedrockEvaluatorModels": [{
                    "modelIdentifier": evaluator_model
                }]
            }
        }
    }
)

For a complete evaluation of both retrieval and generation, use this configuration:

retrieve_generate_job = bedrock_client.create_evaluation_job(
    jobName=job_name,
    jobDescription="Evaluate retrieval and generation",
    roleArn=role_arn,
    applicationType="RagEvaluation",
    inferenceConfig={
        "ragConfigs": [{
            "knowledgeBaseConfig": {
                "retrieveAndGenerateConfig": {
                    "type": "KNOWLEDGE_BASE",
                    "knowledgeBaseConfiguration": {
                        "knowledgeBaseId": knowledge_base_id,
                        "modelArn": generator_model,
                        "retrievalConfiguration": {
                            "vectorSearchConfiguration": {
                                "numberOfResults": num_results,
                                "overrideSearchType": search_type
                            }
                        }
                    }
                }
            }
        }]
    },
    outputDataConfig={
        "s3Uri": output_path
    },
    evaluationConfig={
        "automated": {
            "datasetMetricConfigs": [{
                "taskType": "Custom",
                "dataset": {
                    "name": "RagDataset",
                    "datasetLocation": {
                        "s3Uri": input_data
                    }
                },
                "metricNames": [
                    "Builtin.Correctness",
                    "Builtin.Completeness",
                    "Builtin.Helpfulness",
                    "Builtin.LogicalCoherence",
                    "Builtin.Faithfulness"
                ]
            }],
            "evaluatorModelConfig": {
                "bedrockEvaluatorModels": [{
                    "modelIdentifier": evaluator_model
                }]
            }
        }
    }
)

To monitor the progress of your evaluation job, use this configuration:

# Depending on the job type, retrieve the ARN of the job and monitor it to take any downstream actions.
evaluation_job_arn = retrieval_job['jobArn']
# or: evaluation_job_arn = retrieve_generate_job['jobArn']

response = bedrock_client.get_evaluation_job(
    jobIdentifier=evaluation_job_arn
)
print(f"Job Status: {response['status']}")
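
Because evaluation jobs can take from a few minutes to several hours, it is often convenient to poll the job status until it reaches a terminal state. The following is a simple sketch using the same get_evaluation_job call; the exact terminal status strings should be confirmed against the API reference.

import time

def wait_for_evaluation_job(client, job_arn, poll_seconds=60):
    """Poll the evaluation job until it finishes, then return the final status."""
    while True:
        status = client.get_evaluation_job(jobIdentifier=job_arn)["status"]
        print(f"Job Status: {status}")
        # Assumed terminal states; adjust to the values documented for the API.
        if status in ("Completed", "Failed", "Stopped"):
            return status
        time.sleep(poll_seconds)

# Example usage with the retrieval-only job created earlier
wait_for_evaluation_job(bedrock_client, retrieval_job["jobArn"])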

Interpreting results
After your evaluation jobs are completed, Amazon Bedrock RAG evaluation provides a detailed comparative dashboard across the evaluation dimensions.

The evaluation dashboard includes comprehensive metrics, but we focus on one example, the completeness histogram shown below. This visualization represents how well responses cover all aspects of the questions asked. In our example, we notice a strong right-skewed distribution with an average score of 0.921. The majority of responses (15) scored above 0.9, while a small number fell in the 0.5-0.8 range. This type of distribution helps quickly identify if your RAG system has consistent performance or if there are specific cases needing attention.

Selecting specific score ranges in the histogram reveals detailed conversation analyses. For each conversation, you can examine the input prompt, generated response, number of retrieved chunks, ground truth comparison, and most importantly, the detailed score explanation from the evaluator model.
Consider this example response that scored 0.75 for the question, “What are some risks associated with Amazon’s expansion?” Although the generated response provided a structured analysis of operational, competitive, and financial risks, the evaluator model identified missing elements around IP infringement and foreign exchange risks compared to the ground truth. This detailed explanation helps in understanding not just what’s missing, but why the response received its specific score.
This granular analysis is crucial for systematic improvement of your RAG pipeline. By understanding patterns in lower-performing responses and specific areas where context retrieval or generation needs improvement, you can make targeted optimizations to your system—whether that’s adjusting retrieval parameters, refining prompts, or modifying knowledge base configurations.
Best practices for implementation
These best practices help build a solid foundation for your RAG evaluation strategy:

Design your evaluation strategy carefully, using representative test datasets that reflect your production scenarios and user patterns. If you have large workloads greater than 1,000 prompts per batch, optimize your workload by employing techniques such as stratified sampling to promote diversity and representativeness within your constraints such as time to completion and costs associated with evaluation.
Schedule periodic batch evaluations aligned with your knowledge base updates and content refreshes because this feature supports batch analysis rather than real-time monitoring.
Balance metrics with business objectives by selecting evaluation dimensions that directly impact your application’s success criteria.
Use evaluation insights to systematically improve your knowledge base content and retrieval settings through iterative refinement.
Maintain clear documentation of evaluation jobs, including the metrics selected and improvements implemented based on results. The job creation configuration settings in your results pages can help keep a historical record here.
Optimize your evaluation batch size and frequency based on application needs and resource constraints to promote cost-effective quality assurance.
Structure your evaluation framework to accommodate growing knowledge bases, incorporating both technical metrics and business KPIs in your assessment criteria.

To help you dive deeper into the scientific validation of these practices, we’ll be publishing a technical deep-dive post that explores detailed case studies using public datasets and internal AWS validation studies. This upcoming post will examine how our evaluation framework performs across different scenarios and demonstrate its correlation with human judgments across various evaluation dimensions. Stay tuned as we explore the research and validation that powers Amazon Bedrock Evaluations.
Conclusion
Amazon Bedrock knowledge base RAG evaluation enables organizations to confidently deploy and maintain high-quality RAG applications by providing comprehensive, automated assessment of both retrieval and generation components. By combining the benefits of managed evaluation with the nuanced understanding of human assessment, this feature allows organizations to scale their AI quality assurance efficiently while maintaining high standards. Organizations can make data-driven decisions about their RAG implementations, optimize their knowledge bases, and follow responsible AI practices through seamless integration with Amazon Bedrock Guardrails.
Whether you’re building customer service solutions, technical documentation systems, or enterprise knowledge base RAG, Amazon Bedrock Evaluations provides the tools needed to deliver reliable, accurate, and trustworthy AI applications. To help you get started, we’ve prepared a Jupyter notebook with practical examples and code snippets. You can find it on our GitHub repository.
We encourage you to explore these capabilities in the Amazon Bedrock console and discover how systematic evaluation can enhance your RAG applications.

About the Authors
Ishan Singh is a Generative AI Data Scientist at Amazon Web Services, where he helps customers build innovative and responsible generative AI solutions and products. With a strong background in AI/ML, Ishan specializes in building Generative AI solutions that drive business value. Outside of work, he enjoys playing volleyball, exploring local bike trails, and spending time with his wife and dog, Beau.
Ayan Ray is a Senior Generative AI Partner Solutions Architect at AWS, where he collaborates with ISV partners to develop integrated Generative AI solutions that combine AWS services with AWS partner products. With over a decade of experience in Artificial Intelligence and Machine Learning, Ayan has previously held technology leadership roles at AI startups before joining AWS. Based in the San Francisco Bay Area, he enjoys playing tennis and gardening in his free time.
Adewale Akinfaderin is a Sr. Data Scientist–Generative AI, Amazon Bedrock, where he contributes to cutting edge innovations in foundational models and generative AI applications at AWS. His expertise is in reproducible and end-to-end AI/ML methods, practical implementations, and helping global customers formulate and develop scalable solutions to interdisciplinary problems. He has two graduate degrees in physics and a doctorate in engineering.
Evangelia Spiliopoulou is an Applied Scientist in the AWS Bedrock Evaluation group, where the goal is to develop novel methodologies and tools to assist automatic evaluation of LLMs. Her overall work focuses on Natural Language Processing (NLP) research and developing NLP applications for AWS customers, including LLM Evaluations, RAG, and improving reasoning for LLMs. Prior to Amazon, Evangelia completed her Ph.D. at Language Technologies Institute, Carnegie Mellon University.
Jesse Manders is a Senior Product Manager on Amazon Bedrock, the AWS Generative AI developer service. He works at the intersection of AI and human interaction with the goal of creating and improving generative AI products and services to meet our needs. Previously, Jesse held engineering team leadership roles at Apple and Lumileds, and was a senior scientist in a Silicon Valley startup. He has an M.S. and Ph.D. from the University of Florida, and an MBA from the University of California, Berkeley, Haas School of Business.

Google DeepMind's Gemini Robotics: Unleashing Embodied AI with Zero-Shot Control and Enhanced Spatial Reasoning

Google DeepMind has shattered conventional boundaries in robotics AI with the unveiling of Gemini Robotics, a suite of models built upon the formidable foundation of Gemini 2.0. This isn’t just an incremental upgrade; it’s a paradigm shift, propelling AI from the digital realm into the tangible world with unprecedented “embodied reasoning” capabilities.

Gemini Robotics: Bridging the Gap Between Digital Intelligence and Physical Action

At the heart of this innovation lies Gemini Robotics, an advanced vision-language-action (VLA) model that transcends traditional AI limitations. By introducing physical actions as a direct output modality, Gemini Robotics empowers robots to autonomously execute tasks with a level of understanding and adaptability previously unattainable. Complementing this is Gemini Robotics-ER (Embodied Reasoning), a specialized model engineered to refine spatial understanding, enabling roboticists to seamlessly integrate Gemini’s cognitive prowess into existing robotic architectures.

These models herald a new era of robotics, promising to unlock a diverse spectrum of real-world applications. Google DeepMind’s strategic partnerships with industry leaders like Apptronik, for the integration of Gemini 2.0 into humanoid robots, and collaborations with trusted testers, underscore the transformative potential of this technology.

Key Technological Advancements:

Unparalleled Generality: Gemini Robotics leverages Gemini’s robust world model to generalize across novel scenarios, achieving superior performance on rigorous generalization benchmarks compared to state-of-the-art VLA models.

Intuitive Interactivity: Built on Gemini 2.0’s language understanding, the model facilitates fluid human-robot interaction through natural language commands, dynamically adapting to environmental changes and user input.

Advanced Dexterity: The model demonstrates remarkable dexterity, executing complex manipulation tasks like origami folding and intricate object handling, showcasing a significant leap in robotic fine motor control.

Versatile Embodiment: Gemini Robotics’ adaptability extends to various robotic platforms, from bi-arm systems like ALOHA 2 and Franka arms to advanced humanoid robots like Apptronik’s Apollo.

Gemini Robotics-ER: Pioneering Spatial Intelligence

Gemini Robotics-ER elevates spatial reasoning, a critical component for effective robotic operation. By enhancing capabilities such as pointing, 3D object detection, and spatial understanding, this model enables robots to perform tasks with heightened precision and efficiency.

Gemini 2.0: Enabling Zero and Few-Shot Robot Control

A defining feature of Gemini 2.0 is its ability to facilitate zero and few-shot robot control. This eliminates the need for extensive robot action data training, enabling robots to perform complex tasks “out of the box.” By uniting perception, state estimation, spatial reasoning, planning, and control within a single model, Gemini 2.0 surpasses previous multi-model approaches.

Zero-Shot Control via Code Generation: Gemini Robotics-ER leverages its code generation capabilities and embodied reasoning to control robots using API commands, reacting and replanning as needed. The model’s enhanced embodied understanding results in a near 2x improvement in task completion compared to Gemini 2.0.

Few-Shot Control via In-Context Learning (ICL): By conditioning the model on a small number of demonstrations, Gemini Robotics-ER can quickly adapt to new behaviors.

The perception and control APIs, together with agentic orchestration during an episode, make up the system used for zero-shot control.
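
To illustrate the general shape of such a loop, here is a purely hypothetical Python sketch of code-generation-based zero-shot control. Every name here (RobotAPI, generate_control_code, task_complete) is an assumption for illustration and is not part of Google DeepMind's released system.

class RobotAPI:
    """Hypothetical perception and control interface exposed to the model."""
    def capture_image(self): ...
    def detect_objects(self, image): ...
    def move_gripper_to(self, x, y, z): ...
    def grasp(self): ...

def zero_shot_episode(model, robot, task, max_steps=20):
    for _ in range(max_steps):
        observation = robot.capture_image()
        # The model receives the current observation plus a description of the
        # API it may call, and emits a short snippet of RobotAPI calls.
        code = model.generate_control_code(task=task, observation=observation,
                                           api_description=RobotAPI.__doc__)
        exec(code, {"robot": robot})  # illustrative only; a real system would sandbox this
        if model.task_complete(task, robot.capture_image()):
            return True  # task achieved, end the episode
        # Otherwise continue: the model reacts to the new state and replans.
    return False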

Commitment to Safety 

Google DeepMind prioritizes safety through a multi-layered approach, addressing concerns from low-level motor control to high-level semantic understanding. The integration of Gemini Robotics-ER with existing safety-critical controllers and the development of mechanisms to prevent unsafe actions underscore this commitment.

The release of the ASIMOV dataset and the framework for generating data-driven “Robot Constitutions” further demonstrates Google DeepMind’s dedication to advancing robotics safety research.

Intelligent robots are getting closer…

Check out the full Gemini Robotics report and Gemini Robotics. All credit for this research goes to the researchers of this project.

The post Google DeepMind’s Gemini Robotics: Unleashing Embodied AI with Zero-Shot Control and Enhanced Spatial Reasoning appeared first on MarkTechPost.

Aya Vision Unleashed: A Global AI Revolution in Multilingual Multimodal Power!

Cohere For AI has just dropped a bombshell: Aya Vision, an open-weights vision model that's about to redefine multilingual and multimodal communication. Prepare for a seismic shift as we shatter language barriers and unlock the true potential of AI across the globe!

Smashing the Multilingual Multimodal Divide!

Let’s face it, AI has been speaking with a frustratingly limited vocabulary. But not anymore! Aya Vision explodes onto the scene, obliterating the performance gap between languages and modalities. This isn’t just an incremental improvement; it’s a quantum leap, extending multimodal magic to 23 languages, reaching over half the planet’s population. Imagine AI finally speaking your language, understanding the rich tapestry of your culture.

Aya Vision: Where Vision Meets Linguistic Brilliance!

This is not your average vision model. Aya Vision is a linguistic virtuoso, a visual maestro, and a global communicator all rolled into one. From crafting captivating image captions to answering complex visual questions, it's a powerhouse of multimodal understanding. Picture this: you snap a photo of a stunning piece of art from your travels, and Aya Vision instantly unveils its history, style, and cultural significance, bridging worlds with a single image.

Performance That Will Blow Your Mind!

Multilingual Domination: Aya Vision obliterates the competition, leaving leading open-weights models in the dust when it comes to multilingual text generation and image understanding.

Parameter Prowess: The 8B model is a lean, mean, performance machine, crushing giants like Qwen2.5-VL 7B, Gemini Flash 1.5 8B, Llama-3.2 11B Vision, and Pangea 7B with jaw-dropping win rates!

32B Titan: The 32B model sets a new gold standard, outperforming even larger models like Llama-3.2 90B Vision, Molmo 72B, and Qwen2-VL 72B with breathtaking efficiency.

Efficiency Unleashed: Aya Vision proves you don’t need monstrous models to achieve monumental results, outperforming models 10x its size!

Algorithmic Alchemy: Secret ingredients like synthetic annotations, multilingual data scaling, and multimodal model merging have been masterfully combined to create this AI masterpiece.

Open Weights, Open Doors, Open World!

Cohere For AI isn’t just building groundbreaking AI; they’re democratizing it. Aya Vision’s 8B and 32B models are now freely available on Kaggle and Hugging Face. 

Want to contribute?

Cohere For AI invites researchers worldwide to join the Aya initiative, apply for research grants, and collaborate in their open science community. Aya Vision is a huge step forward into the future of multilingual multimodal AI.

Check out the Aya Vision blog post, the Aya Initiative, Kaggle, and Hugging Face. All credit for this research goes to the researchers of this project.


The post Aya Vision Unleashed: A Global AI Revolution in Multilingual Multimodal Power! appeared first on MarkTechPost.

Simular Releases Agent S2: An Open, Modular, and Scalable AI Framework …

In today’s digital landscape, interacting with a wide variety of software and operating systems can often be a tedious and error-prone experience. Many users face challenges when navigating through complex interfaces and performing routine tasks that demand precision and adaptability. Existing automation tools frequently fall short in adapting to subtle interface changes or learning from past mistakes, leaving users to manually oversee processes that could otherwise be streamlined. This persistent gap between user expectations and the capabilities of traditional automation calls for a system that not only performs tasks reliably but also learns and adjusts over time.

Simular has introduced Agent S2, an open, modular, and scalable framework designed to assist with computer use agents. Agent S2 builds upon the foundation laid by its predecessor, offering a refined approach to automating tasks on computers and smartphones. By integrating a modular design with both general-purpose and specialized models, the framework can be adapted to a variety of digital environments. Its design is inspired by the human brain’s natural modularity, where different regions work together harmoniously to handle complex tasks, thereby fostering a system that is both flexible and robust.

Technical Details and Benefits

At its core, Agent S2 employs experience-augmented hierarchical planning. This method involves breaking down long and intricate tasks into smaller, more manageable subtasks. The framework continuously refines its strategy by learning from previous experiences, thereby improving its execution over time. An important aspect of Agent S2 is its visual grounding capability, which allows it to interpret raw screenshots for precise interaction with graphical user interfaces. This eliminates the need for additional structured data and enhances the system’s ability to correctly identify and interact with UI elements. Moreover, Agent S2 utilizes an advanced Agent-Computer Interface that delegates routine, low-level actions to expert modules. Complemented by an adaptive memory mechanism, the system retains useful experiences to guide future decision-making, resulting in a more measured and effective performance.

Results and Insights

Evaluations on real-world benchmarks indicate that Agent S2 performs reliably in both computer and smartphone environments. On the OSWorld benchmark—which tests the execution of multi-step computer tasks—Agent S2 achieved a success rate of 34.5% on a 50-step evaluation, reflecting a modest yet consistent improvement over earlier models. Similarly, on the AndroidWorld benchmark, the framework reached a 50% success rate in executing smartphone tasks. These results underscore the practical benefits of a system that can plan ahead and adapt to dynamic conditions, ensuring that tasks are completed with improved accuracy and minimal manual intervention.

Conclusion

Agent S2 represents a thoughtful approach to enhancing everyday digital interactions. By addressing common challenges in computer automation through a modular design and adaptive learning, the framework provides a practical solution for managing routine tasks more efficiently. Its balanced combination of proactive planning, visual understanding, and expert delegation makes it well-suited for both complex computer tasks and mobile applications. In an era where digital workflows continue to evolve, Agent S2 offers a measured, reliable means of integrating automation into daily routines—helping users achieve better outcomes while reducing the need for constant manual oversight.

Check out the Technical details and GitHub Page. All credit for this research goes to the researchers of this project.


The post Simular Releases Agent S2: An Open, Modular, and Scalable AI Framework for Computer Use Agents appeared first on MarkTechPost.

How GoDaddy built a category generation system at scale with batch inf …

This post was co-written with Vishal Singh, Data Engineering Leader at the Data & Analytics team of GoDaddy.
Generative AI solutions have the potential to transform businesses by boosting productivity and improving customer experiences, and using large language models (LLMs) in these solutions has become increasingly popular. However, inference of LLMs as single model invocations or API calls doesn’t scale well with many applications in production.
With batch inference, you can run multiple inference requests asynchronously to process a large number of requests efficiently. You can also use batch inference to improve the performance of model inference on large datasets.
This post provides an overview of a custom solution developed by the AWS Generative AI Innovation Center for GoDaddy, a domain registrar, registry, web hosting, and ecommerce company that seeks to make entrepreneurship more accessible by using generative AI to provide personalized business insights to over 21 million customers—insights that were previously only available to large corporations. In this collaboration, the Generative AI Innovation Center team created an accurate and cost-efficient generative AI–based solution using batch inference in Amazon Bedrock, helping GoDaddy improve their existing product categorization system.
Solution overview
GoDaddy wanted to enhance their product categorization system that assigns categories to products based on their names. For example:

Input: Fruit by the Foot Starburst

Output: color -> multi-colored, material -> candy, category -> snacks, product_line -> Fruit by the Foot,…

GoDaddy used an out-of-the-box Meta Llama 2 model to generate the product categories for six million products, where a product is identified by an SKU. The generated categories were often incomplete or mislabeled. Moreover, employing an LLM for individual product categorization proved to be a costly endeavor. Recognizing these limitations, GoDaddy sought a more accurate and cost-efficient approach to product categorization to improve their customer experience.
This solution uses the following components to categorize products more accurately and efficiently:

Batch processing in Amazon Bedrock using the Meta Llama 2 and Anthropic’s Claude models
Amazon Simple Storage Service (Amazon S3) to store product and output data
AWS Lambda to orchestrate the Amazon Bedrock models

The key steps are illustrated in the following figure:

A JSONL file containing product data is uploaded to an S3 bucket, triggering the first Lambda function. Amazon Bedrock batch processes this single JSONL file, where each row contains input parameters and prompts. It generates an output JSONL file with a new model_output value appended to each row, corresponding to the input data.
The first Lambda function spins up an Amazon Bedrock batch processing endpoint and passes the S3 file location (a minimal sketch of this step follows the list).
The Amazon Bedrock endpoint performs the following tasks:

It reads the product name data and generates a categorized output, including category, subcategory, season, price range, material, color, product line, gender, and year of first sale.
It writes the output to another S3 location.

The second Lambda function performs the following tasks:

It monitors the batch processing job on Amazon Bedrock.
It shuts down the endpoint when processing is complete.
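As an illustration, a minimal sketch of the first Lambda function might look like the following; the bucket, role ARN, model ID, and output location are placeholders rather than the values used by GoDaddy, and error handling is omitted:

import uuid

import boto3

bedrock = boto3.client("bedrock")


def handler(event, context):
    """Triggered by the S3 upload; starts an Amazon Bedrock batch inference job for the file."""
    record = event["Records"][0]["s3"]
    input_uri = f"s3://{record['bucket']['name']}/{record['object']['key']}"

    response = bedrock.create_model_invocation_job(
        jobName=f"product-categorization-{uuid.uuid4().hex[:8]}",
        roleArn="arn:aws:iam::111122223333:role/bedrock-batch-role",   # placeholder
        modelId="anthropic.claude-instant-v1",                         # placeholder
        inputDataConfig={"s3InputDataConfig": {"s3Uri": input_uri}},
        outputDataConfig={"s3OutputDataConfig": {"s3Uri": "s3://<output-bucket>/batch-results/"}},  # placeholder
    )
    return {"jobArn": response["jobArn"]}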

The security measures are inherently integrated into the AWS services employed in this architecture. For detailed information, refer to the Security Best Practices section of this post.

We used a dataset that consisted of 30 labeled data points and 100,000 unlabeled test data points. The labeled data points were generated by llama2-7b and verified by a human subject matter expert (SME). As shown in the following screenshot of the sample ground truth, some fields have N/A or missing values, which isn’t ideal because GoDaddy wants a solution with high coverage for downstream predictive modeling. Higher coverage for each possible field can provide more business insights to their customers.

The distribution of the number of words or tokens per SKU shows only mild outlier concern, which makes it suitable to bundle many products into a single prompt and potentially obtain more efficient model responses.

The solution delivers a comprehensive framework for generating insights within GoDaddy’s product categorization system. It’s designed to be compatible with a range of LLMs on Amazon Bedrock, features customizable prompt templates, and supports batch and real-time (online) inferences. Additionally, the framework includes evaluation metrics that can be extended to accommodate changes in accuracy requirements.
In the following sections, we look at the key components of the solution in more detail.
Batch inference
We used Amazon Bedrock for batch inference processing. Amazon Bedrock provides the CreateModelInvocationJob API to create a batch job with a unique job name. This API returns a response containing jobArn. Refer to the following code:

Request:
POST /model-invocation-job HTTP/1.1
Content-type: application/json

{
  "clientRequestToken": "string",
  "inputDataConfig": {
    "s3InputDataConfig": {
      "s3Uri": "string",
      "s3InputFormat": "JSONL"
    }
  },
  "jobName": "string",
  "modelId": "string",
  "outputDataConfig": {
    "s3OutputDataConfig": {
      "s3Uri": "string"
    }
  },
  "roleArn": "string",
  "tags": [{
    "key": "string",
    "value": "string"
  }]
}

Response:
HTTP/1.1 200
Content-type: application/json

{
  "jobArn": "string"
}

We can monitor the job status using GetModelInvocationJob with the jobArn returned on job creation. The following are valid statuses during the lifecycle of a job:

Submitted – The job is marked Submitted when the JSON file is ready to be processed by Amazon Bedrock for inference.
InProgress – The job is marked InProgress when Amazon Bedrock starts processing the JSON file.
Failed – The job is marked Failed if there was an error while processing. The error can be written into the JSON file as a part of modelOutput. If it was a 4xx error, it’s written in the metadata of the Job.
Completed – The job is marked Completed when the output JSON file is generated for the input JSON file and has been uploaded to the S3 output path submitted as a part of the CreateModelInvocationJob in outputDataConfig.
Stopped – The job is marked Stopped when a StopModelInvocationJob API is called on a job that is InProgress. A terminal state job (Succeeded or Failed) can’t be stopped using StopModelInvocationJob.

The following is example code for the GetModelInvocationJob API:

GET /model-invocation-job/jobIdentifier HTTP/1.1

Response:
{
  'ResponseMetadata': {
    'RequestId': '081afa52-189f-4e83-a3f9-aa0918d902f4',
    'HTTPStatusCode': 200,
    'HTTPHeaders': {
      'date': 'Tue, 09 Jan 2024 17:00:16 GMT',
      'content-type': 'application/json',
      'content-length': '690',
      'connection': 'keep-alive',
      'x-amzn-requestid': '081afa52-189f-4e83-a3f9-aa0918d902f4'
    },
    'RetryAttempts': 0
  },
  'jobArn': 'arn:aws:bedrock:<region>:<account-id>:model-invocation-job/<id>',
  'jobName': 'job47',
  'modelId': 'arn:aws:bedrock:<region>::foundation-model/anthropic.claude-instant-v1:2',
  'status': 'Submitted',
  'submitTime': datetime.datetime(2024, 1, 8, 21, 44, 38, 611000, tzinfo=tzlocal()),
  'lastModifiedTime': datetime.datetime(2024, 1, 8, 23, 5, 47, 169000, tzinfo=tzlocal()),
  'inputDataConfig': {'s3InputDataConfig': {'s3Uri': <path to input jsonl file>}},
  'outputDataConfig': {'s3OutputDataConfig': {'s3Uri': <path to output jsonl.out file>}}
}
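If you prefer the AWS SDK over raw HTTP, a minimal boto3 polling loop might look like the following sketch; the job ARN is a placeholder, and the terminal statuses follow the list above:

import time

import boto3

bedrock = boto3.client("bedrock")
job_arn = "arn:aws:bedrock:<region>:<account-id>:model-invocation-job/<id>"  # placeholder

while True:
    status = bedrock.get_model_invocation_job(jobIdentifier=job_arn)["status"]
    print(f"Job status: {status}")
    if status in ("Completed", "Failed", "Stopped"):
        break
    time.sleep(60)  # check once a minute until the job reaches a terminal state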

When the job is complete, the S3 path specified in s3OutputDataConfig will contain a new folder with an alphanumeric name. The folder contains two files:

json.out – The following code shows an example of the format:

{
  "processedRecordCount": <number>,
  "successRecordCount": <number>,
  "errorRecordCount": <number>,
  "inputTokenCount": <number>,
  "outputTokenCount": <number>
}

<file_name>.jsonl.out – Contains the successfully processed records; for each record, modelOutput contains a list of categories for the given product name in JSON format.

We then process the jsonl.out file in Amazon S3. This file is parsed using LangChain’s PydanticOutputParser to generate a .csv file. The PydanticOutputParser requires a schema to be able to parse the JSON generated by the LLM. We created a CCData class that contains the list of categories to be generated for each product as shown in the following code example. Because we enable n-packing, we wrap the schema with a List, as defined in List_of_CCData.

from typing import List, Optional

from pydantic import BaseModel, Field


class CCData(BaseModel):
    product_name: Optional[str] = Field(default=None, description="product name, which will be given as input")
    brand: Optional[str] = Field(default=None, description="Brand of the product inferred from the product name")
    color: Optional[str] = Field(default=None, description="Color of the product inferred from the product name")
    material: Optional[str] = Field(default=None, description="Material of the product inferred from the product name")
    price: Optional[str] = Field(default=None, description="Price of the product inferred from the product name")
    category: Optional[str] = Field(default=None, description="Category of the product inferred from the product name")
    sub_category: Optional[str] = Field(default=None, description="Sub-category of the product inferred from the product name")
    product_line: Optional[str] = Field(default=None, description="Product Line of the product inferred from the product name")
    gender: Optional[str] = Field(default=None, description="Gender of the product inferred from the product name")
    year_of_first_sale: Optional[str] = Field(default=None, description="Year of first sale of the product inferred from the product name")
    season: Optional[str] = Field(default=None, description="Season of the product inferred from the product name")


class List_of_CCData(BaseModel):
    list_of_dict: List[CCData]

We also use OutputFixingParser to handle situations where the initial parsing attempt fails. The following screenshot shows a sample generated .csv file.
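As a rough sketch of this parsing step (import paths vary across LangChain versions, and the LLM passed to OutputFixingParser is left as an assumption):

from langchain.output_parsers import OutputFixingParser, PydanticOutputParser

# Parser built around the List_of_CCData schema defined above
parser = PydanticOutputParser(pydantic_object=List_of_CCData)


def parse_model_output(model_output, fixing_llm):
    """Parse one modelOutput string from the jsonl.out file into a List_of_CCData object."""
    try:
        return parser.parse(model_output)
    except Exception:
        # OutputFixingParser asks an LLM to repair malformed JSON, then retries the parse
        fixing_parser = OutputFixingParser.from_llm(parser=parser, llm=fixing_llm)
        return fixing_parser.parse(model_output)

The parsed list_of_dict entries can then be collected into a pandas DataFrame and written out as the .csv file mentioned above.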

Prompt engineering
Prompt engineering involves the skillful crafting and refining of input prompts. This process entails choosing the right words, phrases, sentences, punctuation, and separator characters to efficiently use LLMs for diverse applications. Essentially, prompt engineering is about effectively interacting with an LLM. The most effective strategy for prompt engineering varies based on the specific task and data—here, data card generation for GoDaddy SKUs.
Prompts consist of particular inputs from the user that direct LLMs to produce a suitable response or output based on a specified task or instruction. These prompts include several elements, such as the task or instruction itself, the surrounding context, full examples, and the input text that guides LLMs in crafting their responses. The composition of the prompt will vary based on factors like the specific use case, data availability, and the nature of the task at hand. For example, in a Retrieval Augmented Generation (RAG) use case, we provide additional context and add a user-supplied query in the prompt that asks the LLM to focus on contexts that can answer the query. In a metadata generation use case, we can provide the image and ask the LLM to generate a description and keywords describing the image in a specific format.
In this post, we organize the prompt engineering solution into two steps: output generation and format parsing.
Output generation
The following are best practices and considerations for output generation:

Provide simple, clear and complete instructions – This is the general guideline for prompt engineering work.
Use separator characters consistently – In this use case, we use the newline character \n.
Deal with default output values such as missing – For this use case, we don’t want special values such as N/A or missing, so we put multiple instructions in line, aiming to exclude the default or missing values.
Use few-shot prompting – Also termed in-context learning, few-shot prompting involves providing a handful of examples, which can be beneficial in helping LLMs understand the output requirements more effectively. In this use case, 0–10 in-context examples were tested for both Llama 2 and Anthropic’s Claude models.
Use packing techniques – We combined multiple SKU and product names into one LLM query, so that some prompt instructions can be shared across different SKUs for cost and latency optimization. In this use case, 1–10 packing numbers were tested for both Llama 2 and Anthropic’s Claude models.
Test for good generalization – You should keep a hold-out test set and correct responses to check if your prompt modifications generalize.
Use additional techniques for Anthropic’s Claude model families – We incorporated the following techniques:

Enclosing examples in XML tags:

<example>
H: <question> The list of product names is:
{few_shot_product_name} </question>
A: <response> The category information generated with absolutely no missing value, in JSON format is:
{few_shot_field} </response>
</example>

Using the Human and Assistant annotations:

\n\nHuman:


\n\nAssistant:

Guiding the assistant prompt:

\n\nAssistant: Here are the answers with NO missing, unknown, null, or N/A values (in JSON format):

Use additional techniques for Llama model families – For Llama 2 model families, you can enclose examples in [INST] tags:

[INST]
If the list of product names is:
{few_shot_product_name}
[/INST]

Then the answer with NO missing, unknown, null, or N/A values is (in JSON format):

{few_shot_field}

[INST]
If the list of product names is:
{product_name}
[/INST]

Then the answer with NO missing, unknown, null, or N/A values is (in JSON format):

Format parsing
The following are best practices and considerations for format parsing:

Refine the prompt with modifiers – Refinement of task instructions typically involves altering the instruction, task, or question part of the prompt. The effectiveness of these techniques varies based on the task and data. Some beneficial strategies in this use case include:

Role assumption – Ask the model to assume it’s playing a role. For example:

You are a Product Information Manager, Taxonomist, and Categorization Expert who follows instruction well.

Prompt specificity – Being very specific and providing detailed instructions to the model can help generate better responses for the required task.

EVERY category information needs to be filled based on BOTH product name AND your best guess. If you forget to generate any category information or leave it as missing or N/A, then innocent people will die.

Output format description – We provided the JSON format instructions through a JSON string directly, as well as through the few-shot examples indirectly.

Pay attention to few-shot example formatting – The LLMs (Anthropic’s Claude and Llama) are sensitive to subtle formatting differences. Parsing time was significantly improved after several iterations on few-shot examples formatting. The final solution is as follows:

few_shot_field = (
    '{"list_of_dict"'
    + ':['
    + ', \n'.join([true_df.iloc[i].to_json() for i in range(num_few_shot)])
    + ']}'
)

Use additional techniques for Anthropic’s Claude model families – For Anthropic’s Claude models, we instructed them to format the output in JSON:

{
  "list_of_dict": [
    {
      "some_category": "your_generated_answer",
      "another_category": "your_generated_answer"
    },
    {
      <category information for the 2nd product name, in JSON format>
    },
    {
      <category information for the 3rd product name, in JSON format>
    }
    // ... {additional product information, in JSON format} ...
  ]
}

Use additional techniques for Llama 2 model families – For the Llama 2 model, we instructed it to format the output in JSON format as follows:

Format your output in the JSON format (ensure to escape special characters): The output should be formatted as a JSON instance that conforms to the JSON schema below. As an example, for the schema {"properties": {"foo": {"title": "Foo", "description": "a list of strings", "type": "array", "items": {"type": "string"}}}, "required": ["foo"]} the object {"foo": ["bar", "baz"]} is a well-formatted instance of the schema. The object {"properties": {"foo": ["bar", "baz"]}} is not well-formatted.
Here is the output schema:
{"properties": {"list_of_dict": {"title": "List Of Dict", "type": "array", "items": {"$ref": "#/definitions/CCData"}}}, "required": ["list_of_dict"], "definitions": {"CCData": {"title": "CCData", "type": "object", "properties": {"product_name": {"title": "Product Name", "description": "product name, which will be given as input", "type": "string"}, "brand": {"title": "Brand", "description": "Brand of the product inferred from the product name", "type": "string"}, "color": {"title": "Color", "description": "Color of the product inferred from the product name", "type": "string"}, "material": {"title": "Material", "description": "Material of the product inferred from the product name", "type": "string"}, "price": {"title": "Price", "description": "Price of the product inferred from the product name", "type": "string"}, "category": {"title": "Category", "description": "Category of the product inferred from the product name", "type": "string"}, "sub_category": {"title": "Sub Category", "description": "Sub-category of the product inferred from the product name", "type": "string"}, "product_line": {"title": "Product Line", "description": "Product Line of the product inferred from the product name", "type": "string"}, "gender": {"title": "Gender", "description": "Gender of the product inferred from the product name", "type": "string"}, "year_of_first_sale": {"title": "Year Of First Sale", "description": "Year of first sale of the product inferred from the product name", "type": "string"}, "season": {"title": "Season", "description": "Season of the product inferred from the product name", "type": "string"}}}}}
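Putting the preceding prompt engineering pieces together, a packed Anthropic’s Claude prompt might be assembled roughly as follows. This is illustrative only: product_names and few_shot_product_name are assumed variables, and the template text is not the exact template used in the experiments.

# Pack several SKUs into one query so the shared instructions are paid for only once
n_pack = 5
packed_product_names = "\n".join(f"- {name}" for name in product_names[:n_pack])

prompt = (
    "\n\nHuman: You are a Product Information Manager, Taxonomist, and "
    "Categorization Expert who follows instruction well.\n"
    "<example>\n"
    f"H: <question> The list of product names is:\n{few_shot_product_name} </question>\n"
    "A: <response> The category information generated with absolutely no missing value, "
    f"in JSON format is:\n{few_shot_field} </response>\n"
    "</example>\n"
    f"The list of product names is:\n{packed_product_names}\n"
    "\n\nAssistant: Here are the answers with NO missing, unknown, null, or N/A values "
    "(in JSON format):"
)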
Models and parameters
We used the following prompting parameters:

Number of packings – 1, 5, 10
Number of in-context examples – 0, 2, 5, 10
Format instruction – JSON format pseudo example (shorter length), JSON format full example (longer length)

For Llama 2, the model choices were meta.llama2-13b-chat-v1 or meta.llama2-70b-chat-v1. We used the following LLM parameters:

{
  "temperature": 0.1,
  "top_p": 0.9,
  "max_gen_len": 2048
}

For Anthropic’s Claude, the model choices were anthropic.claude-instant-v1 and anthropic.claude-v2. We used the following LLM parameters:

{
  "temperature": 0.1,
  "top_k": 250,
  "top_p": 1,
  "max_tokens_to_sample": 4096,
  "stop_sequences": ["\n\nHuman:"],
  "anthropic_version": "bedrock-2023-05-31"
}

The solution is straightforward to extend to other LLMs hosted on Amazon Bedrock, such as Amazon Titan (switch the model ID to amazon.titan-tg1-large, for example), Jurassic (model ID ai21.j2-ultra), and more.
Evaluations
The framework includes evaluation metrics that can be extended further to accommodate changes in accuracy requirements. Currently, it involves five different metrics (a minimal sketch of the coverage computations follows this list):

Content coverage – Measures the proportion of missing values in the output generation step.
Parsing coverage – Measures the proportion of missing samples in the format parsing step:

Parsing recall on product name – An exact match serves as a lower bound for parsing completeness (parsing coverage is the upper bound for parsing completeness) because in some cases, two virtually identical product names need to be normalized and transformed to be an exact match (for example, “Nike Air Jordan” and “nike. air Jordon”).
Parsing precision on product name – For an exact match, we use a similar metric to parsing recall, but use precision instead of recall.

Final coverage – Measures the proportion of missing values across both the output generation and format parsing steps.
Human evaluation – Focuses on holistic quality evaluation such as accuracy, relevance, and comprehensiveness (richness) of the text generation.
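The following minimal sketch shows how the coverage computations might look; the field list and normalization are simplified assumptions rather than the exact evaluation code:

import re

FIELDS = ["brand", "color", "material", "price", "category", "sub_category",
          "product_line", "gender", "year_of_first_sale", "season"]
MISSING_VALUES = {"", "n/a", "missing", "unknown", "null", "none"}


def normalize(name):
    """Lowercase and strip punctuation so near-identical product names can match exactly."""
    return re.sub(r"[^a-z0-9 ]", "", name.lower()).strip()


def content_coverage(records):
    """Share of generated field values that are not missing (records are parsed CCData dicts)."""
    values = [str(record.get(field, "")).strip().lower() for record in records for field in FIELDS]
    return sum(value not in MISSING_VALUES for value in values) / len(values)


def parsing_recall(parsed_names, input_names):
    """Share of input product names recovered after parsing, using exact match on normalized names."""
    parsed = {normalize(name) for name in parsed_names}
    return sum(normalize(name) in parsed for name in input_names) / len(input_names)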

Results
The following are the approximate sample input and output lengths under some best performing settings:

Input length for Llama 2 model family – 2,068 tokens for 10-shot, 1,585 tokens for 5-shot, 1,319 tokens for 2-shot
Input length for Anthropic’s Claude model family – 1,314 tokens for 10-shot, 831 tokens for 5-shot, 566 tokens for 2-shot, 359 tokens for zero-shot
Output length with 5-packing – Approximately 500 tokens

Quantitative results
The following table summarizes our consolidated quantitative results.

To be concise, the table contains only some of our final recommendations for each model type.
The metrics used are latency and accuracy.
The best-performing configuration overall was Anthropic’s Claude-Instant with zero-shot prompting (template6).

| Batch process service | Model | Prompt | Batch latency, 5-packing (test set = 20) | Batch latency, 5-packing (test set = 5k) | GoDaddy rqmt @ 5k | Near-real-time latency, 1-packing | Recall on parsing exact match | Final content coverage |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Amazon Bedrock batch inference | Llama2-13b | zero-shot | n/a | n/a | 3600s | n/a | n/a | n/a |
| Amazon Bedrock batch inference | Llama2-13b | 5-shot (template12) | 65.4s | 1704s | 3600s | 72/20=3.6s | 92.60% | 53.90% |
| Amazon Bedrock batch inference | Llama2-70b | zero-shot | n/a | n/a | 3600s | n/a | n/a | n/a |
| Amazon Bedrock batch inference | Llama2-70b | 5-shot (template13) | 139.6s | 5299s | 3600s | 156/20=7.8s | 98.30% | 61.50% |
| Amazon Bedrock batch inference | Claude-v1 (instant) | zero-shot (template6) | 29s | 723s | 3600s | 44.8/20=2.24s | 98.50% | 96.80% |
| Amazon Bedrock batch inference | Claude-v1 (instant) | 5-shot (template12) | 30.3s | 644s | 3600s | 51/20=2.6s | 99% | 84.40% |
| Amazon Bedrock batch inference | Claude-v2 | zero-shot (template6) | 82.2s | 1706s | 3600s | 104/20=5.2s | 99% | 84.40% |
| Amazon Bedrock batch inference | Claude-v2 | 5-shot (template14) | 49.1s | 1323s | 3600s | 104/20=5.2s | 99.40% | 90.10% |
The following tables summarize the scaling effect in batch inference.

When scaling from 5,000 to 100,000 samples, only eight times more computation time was needed.
Performing categorization with individual LLM calls for each product would have increased the inference time for 100,000 products by approximately 40 times compared to the batch processing method.
The accuracy in coverage remained stable, and cost scaled approximately linearly.

| Batch process service | Model | Prompt | Batch latency, 5-packing (test set = 20) | Batch latency, 5-packing (test set = 5k) | GoDaddy rqmt @ 5k | Batch latency, 5-packing (test set = 100k) | Near-real-time latency, 1-packing |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Amazon Bedrock batch | Claude-v1 (instant) | zero-shot (template6) | 29s | 723s | 3600s | 5733s | 44.8/20=2.24s |
| Amazon Bedrock batch | Anthropic’s Claude-v2 | zero-shot (template6) | 82.2s | 1706s | 3600s | 7689s | 104/20=5.2s |

| Batch process service | Model | Near-real-time latency, 1-packing | Parsing recall on product name (test set = 5k) | Parsing recall on product name (test set = 100k) | Final content coverage (test set = 5k) | Final content coverage (test set = 100k) |
| --- | --- | --- | --- | --- | --- | --- |
| Amazon Bedrock batch | Claude-v1 (instant) | 44.8/20=2.24s | 98.50% | 98.40% | 96.80% | 96.50% |
| Amazon Bedrock batch | Anthropic’s Claude-v2 | 104/20=5.2s | 99% | 98.80% | 84.40% | 97% |

The following table summarizes the effect of n-packing. Llama 2 has an output length limit of 2,048 tokens and fits up to around 20-packing. Anthropic’s Claude has a higher limit. We tested on 20 ground truth samples for 1, 5, and 10 packing and selected results from all model and prompt templates. The scaling effect on latency was more obvious in the Anthropic’s Claude model family than in Llama 2, and Anthropic’s Claude had better generalizability than Llama 2 when extending the packing numbers in output.
We only tried few-shot prompting with Llama 2 models, which showed improved accuracy over zero-shot.

| Batch process service | Model | Prompt | Latency (test set = 20), npack = 1 | Latency, npack = 5 | Latency, npack = 10 | Final coverage, npack = 1 | Final coverage, npack = 5 | Final coverage, npack = 10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Amazon Bedrock batch inference | Llama2-13b | 5-shot (template12) | 72s | 65.4s | 65s | 95.90% | 93.20% | 88.90% |
| Amazon Bedrock batch inference | Llama2-70b | 5-shot (template13) | 156s | 139.6s | 150s | 85% | 97.70% | 100% |
| Amazon Bedrock batch inference | Claude-v1 (instant) | zero-shot (template6) | 45s | 29s | 27s | 99.50% | 99.50% | 99.30% |
| Amazon Bedrock batch inference | Claude-v1 (instant) | 5-shot (template12) | 51.3s | 30.3s | 27.4s | 99.50% | 99.50% | 100% |
| Amazon Bedrock batch inference | Claude-v2 | zero-shot (template6) | 104s | 82.2s | 67s | 85% | 97.70% | 94.50% |
| Amazon Bedrock batch inference | Claude-v2 | 5-shot (template14) | 104s | 49.1s | 43.5s | 97.70% | 100% | 99.80% |

Qualitative results
We noted the following qualitative results:

Human evaluation – The categories generated were evaluated qualitatively by GoDaddy SMEs. The categories were found to be of good quality.
Learnings – We used an LLM in two separate calls: output generation and format parsing. We observed the following:

For this use case, we saw Llama 2 didn’t perform well in format parsing but was relatively capable in output generation. To be consistent and make a fair comparison, we required the LLM used in both calls to be the same—the API calls in both steps should all go to llama2-13b-chat-v1, or they should all go to anthropic.claude-instant-v1. However, GoDaddy chose Llama 2 as the LLM for category generation. For this use case, we found that using Llama 2 for output generation only and Anthropic’s Claude for format parsing was suitable because of Llama 2’s relatively lower model capability.
Format parsing is improved through prompt engineering (JSON format instruction is critical) to reduce the latency. For example, with Anthropic’s Claude-Instant on a 20-test set and averaging multiple prompt templates, the latency can be reduced by approximately 77% (from 90 seconds to 20 seconds). This directly eliminates the necessity of using a JSON fine-tuned version of the LLM.

Llama2 – We observed the following:

Llama2-13b and Llama2-70b models both need the full instruction as format_instruction() in zero-shot prompts.
Llama2-13b seems to be worse in content coverage and formatting (for example, it can’t correctly escape characters such as \"), which can incur significant parsing time and cost and also degrade accuracy.
Llama 2 shows clear performance drops and instability when the packing number varies among 1, 5, and 10, indicating poorer generalizability compared to the Anthropic’s Claude model family.

Anthropic’s Claude – We observed the following:

Anthropic’s Claude-Instant and Claude-v2, regardless of using zero-shot or few-shot prompting, need only partial format instruction instead of the full instruction format_instruction(). It shortens the input length, and is therefore more cost-effective. It also shows Anthropic’s Claude’s better capability in following instructions.
Anthropic’s Claude generalizes well when varying packing numbers among 1, 5, and 10.

Business takeaways
We had the following key business takeaways:

Improved latency – Our solution processes 5,000 products in 12 minutes, which is 80% faster than GoDaddy’s requirement (5,000 products in 1 hour). Using batch inference in Amazon Bedrock demonstrates efficient batch processing capabilities, and further scalability is anticipated as AWS deploys more cloud instances, leading to additional time and cost savings.
More cost-effectiveness – The solution built by the Generative AI Innovation Center using Anthropic’s Claude-Instant is 8% more affordable than the existing proposal using Llama2-13b while also providing 79% more coverage.
Enhanced accuracy – The deliverable produces 97% category coverage on both the 5,000 and 100,000 hold-out test sets, exceeding GoDaddy’s requirement of 90%. The comprehensive framework is able to facilitate future iterative improvements over the current model parameters and prompt templates.
Qualitative assessment – The category generation is of satisfactory quality per human evaluation by GoDaddy SMEs.

Technical takeaways
We had the following key technical takeaways:

The solution features both batch inference and near real-time inference (2 seconds per product) capability and multiple backend LLM selections.
Anthropic’s Claude-Instant with zero-shot is the clear winner:

It was best in latency, cost, and accuracy on the 5,000 hold-out test set.
It showed better generalizability to higher packing numbers (number of SKUs in one query), with potentially more cost and latency improvement.

Iteration on prompt templates shows improvement on all these models, suggesting that good prompt engineering is a practical approach for the categorization generation task.
Input-wise, increasing to 10-shot may further improve performance, as observed in small-scale science experiments, but it also increases the cost by around 30%. Therefore, we tested at most 5-shot in large-scale batch experiments.
Output-wise, increasing to 10-packing or even 20-packing (Anthropic’s Claude only; Llama 2 has a 2,048 output length limit) might further improve latency and cost (because more SKUs can share the same input instructions).
For this use case, we saw Anthropic’s Claude model family having better accuracy and generalizability, for example:

Final category coverage performance was better with Anthropic’s Claude-Instant.
When increasing packing numbers from 1, 5, to 10, Anthropic’s Claude-Instant showed improvement in latency and stable accuracy in comparison to Llama 2.
To achieve the final categories for the use case, we noticed that Anthropic’s Claude required a shorter prompt input to follow the instruction and had a longer output length limit for a higher packing number.

Next steps for GoDaddy
The following are the recommendations that the GoDaddy team is considering as a part of future steps:

Dataset enhancement – Aggregate a larger set of ground truth examples and expand programmatic evaluation to better monitor and refine the model’s performance. On a related note, if the product names can be normalized by domain knowledge, the cleaner input is also helpful for better LLM responses. For example, the product name ”<product_name> Power t-shirt, ladyfit vest or hoodie” can prompt the LLM to respond for multiple SKUs, instead of one SKU (similarly, “<product_name> – $5 or $10 or $20 or $50 or $100”).
Human evaluation – Increase human evaluations to provide higher generation quality and alignment with desired outcomes.
Fine-tuning – Consider fine-tuning as a potential strategy for enhancing category generation when a more extensive training dataset becomes available.
Prompt engineering – Explore automatic prompt engineering techniques to enhance category generation, particularly when additional training data becomes available.
Few-shot learning – Investigate techniques such as dynamic few-shot selection and crafting in-context examples based on the model’s parameter knowledge to enhance the LLMs’ few-shot learning capabilities.
Knowledge integration – Improve the model’s output by connecting LLMs to a knowledge base (internal or external database) and enabling it to incorporate more relevant information. This can help to reduce LLM hallucinations and enhance relevance in responses.

Conclusion
In this post, we shared how the Generative AI Innovation Center team worked with GoDaddy to create a more accurate and cost-efficient generative AI–based solution using batch inference in Amazon Bedrock, helping GoDaddy improve their existing product categorization system. We implemented n-packing techniques and used Anthropic’s Claude and Meta Llama 2 models to improve latency. We experimented with different prompts to improve the categorization with LLMs and found that the Anthropic’s Claude model family gave better accuracy and generalizability than the Llama 2 model family. The GoDaddy team will test this solution on a larger dataset and evaluate the categories generated from the recommended approaches.
If you’re interested in working with the AWS Generative AI Innovation Center, please reach out.
Security Best Practices

Amazon S3
Amazon Bedrock
StepFunctions
AWS Lambda

References

How SnapLogic built a text-to-pipeline application with Amazon Bedrock to translate business intent into action

How Q4 Inc. used Amazon Bedrock, RAG, and SQLDatabaseChain to address numerical and structured dataset challenges building their Q&A chatbot

About the Authors
Vishal Singh is a Data Engineering leader at the Data and Analytics team of GoDaddy. His key focus area is building data products and generating insights from them by applying data engineering tools along with generative AI.
Yun Zhou is an Applied Scientist at AWS where he helps with research and development to ensure the success of AWS customers. He works on pioneering solutions for various industries using statistical modeling and machine learning techniques. His interest includes generative models and sequential data modeling.
Meghana Ashok is a Machine Learning Engineer at the Generative AI Innovation Center. She collaborates closely with customers, guiding them in developing secure, cost-efficient, and resilient solutions and infrastructure tailored to their generative AI needs.
Karan Sindwani is an Applied Scientist at AWS where he works with AWS customers across different verticals to accelerate their use of Gen AI and AWS Cloud services to solve their business challenges.
Vidya Sagar Ravipati is a Science Manager at the Generative AI Innovation Center, where he uses his vast experience in large-scale distributed systems and his passion for machine learning to help AWS customers across different industry verticals accelerate their AI and cloud adoption.

Benchmarking customized models on Amazon Bedrock using LLMPerf and Lit …

Open foundation models (FMs) allow organizations to build customized AI applications by fine-tuning for their specific domains or tasks, while retaining control over costs and deployments. However, deployment can be a significant portion of the effort, often requiring 30% of project time, because engineers must carefully optimize instance types and configure serving parameters through iterative testing. This process can be both complex and time-consuming, requiring specialized knowledge to achieve the desired performance.
Amazon Bedrock Custom Model Import simplifies deployments of custom models by offering a straightforward API for model deployment and invocation. You can upload model weights and let AWS handle an optimal, fully managed deployment. This makes sure that deployments are performant and cost effective. Amazon Bedrock Custom Model Import also handles automatic scaling, including scaling to zero. When not in use and there are no invocations for 5 minutes, it scales to zero. You pay only for what you use in 5-minute increments. It also handles scaling up, automatically increasing the number of active model copies when higher concurrency is required. These features make Amazon Bedrock Custom Model Import an attractive solution for organizations looking to use custom models on Amazon Bedrock providing simplicity and cost-efficiency.
Before deploying these models in production, it’s crucial to evaluate their performance using benchmarking tools. These tools help to proactively detect potential production issues such as throttling and verify that deployments can handle expected production loads.
This post begins a blog series exploring DeepSeek and open FMs on Amazon Bedrock Custom Model Import. It covers the process of performance benchmarking of custom models in Amazon Bedrock using popular open source tools: LLMPerf and LiteLLM. It includes a notebook with step-by-step instructions to deploy a DeepSeek-R1-Distill-Llama-8B model, but the same steps apply for any other model supported by Amazon Bedrock Custom Model Import.
Prerequisites
This post requires an Amazon Bedrock custom model. If you don’t have one in your AWS account yet, follow the instructions from Deploy DeepSeek-R1 distilled Llama models with Amazon Bedrock Custom Model Import.
Using open source tools LLMPerf and LiteLLM for performance benchmarking
To conduct performance benchmarking, you will use LLMPerf, a popular open-source library for benchmarking foundation models. LLMPerf simulates load tests on model invocation APIs by creating concurrent Ray Clients and analyzing their responses. A key advantage of LLMPerf is its wide support of foundation model APIs. This includes LiteLLM, which supports all models available on Amazon Bedrock.
Setting up your custom model invocation with LiteLLM
LiteLLM is a versatile open source tool that can be used both as a Python SDK and a proxy server (AI gateway) for accessing over 100 different FMs using a standardized format. LiteLLM standardizes inputs to match each FM provider’s specific endpoint requirements. It supports Amazon Bedrock APIs, including InvokeModel and Converse APIs, and FMs available on Amazon Bedrock, including imported custom models.
To invoke a custom model with LiteLLM, you use the model parameter (see Amazon Bedrock documentation on LiteLLM). This is a string that follows the bedrock/provider_route/model_arn format.
The provider_route indicates the LiteLLM implementation of request/response specification to use. DeepSeek R1 models can be invoked using their custom chat template using the DeepSeek R1 provider route, or with the Llama chat template using the Llama provider route.
The model_arn is the model Amazon Resource Name (ARN) of the imported model. You can get the model ARN of your imported model in the console or by sending a ListImportedModels request.
For example, the following script invokes the custom model using the DeepSeek R1 chat template.
import time

from litellm import completion

while True:
    try:
        response = completion(
            model=f"bedrock/deepseek_r1/{model_id}",
            messages=[
                {"role": "user", "content": """Given the following financial data:
- Company A's revenue grew from $10M to $15M in 2023
- Operating costs increased by 20%
- Initial operating costs were $7M

Calculate the company's operating margin for 2023. Please reason step by step."""},
                {"role": "assistant", "content": "<think>"},
            ],
            max_tokens=4096,
        )
        print(response["choices"][0]["message"]["content"])
        break
    except Exception:
        # Retry while the model scales up from zero (cold start)
        time.sleep(60)
After the invocation parameters for the imported model have been verified, you can configure LLMPerf for benchmarking.
Configuring a token benchmark test with LLMPerf
To benchmark performance, LLMPerf uses Ray, a distributed computing framework, to simulate realistic loads. It spawns multiple remote clients, each capable of sending concurrent requests to model invocation APIs. These clients are implemented as actors that execute in parallel. llmperf.requests_launcher manages the distribution of requests across the Ray Clients, and allows for simulation of various load scenarios and concurrent request patterns. At the same time, each client will collect performance metrics during the requests, including latency, throughput, and error rates.
Two critical metrics for performance include latency and throughput:

Latency refers to the time it takes for a single request to be processed.
Throughput measures the number of tokens that are generated per second.

Selecting the right configuration to serve FMs typically involves experimenting with different batch sizes while closely monitoring GPU utilization and considering factors such as available memory, model size, and specific requirements of the workload. To learn more, see Optimizing AI responsiveness: A practical guide to Amazon Bedrock latency-optimized inference. Although Amazon Bedrock Custom Model Import simplifies this by offering pre-optimized serving configurations, it’s still crucial to verify your deployment’s latency and throughput.
Start by configuring token_benchmark.py, a sample script that facilitates the configuration of a benchmarking test. In the script, you can define parameters such as:

LLM API: Use LiteLLM to invoke Amazon Bedrock custom imported models.
Model: Define the route, API, and model ARN to invoke similarly to the previous section.
Mean/standard deviation of input tokens: Parameters to use in the probability distribution from which the number of input tokens will be sampled.
Mean/standard deviation of output tokens: Parameters to use in the probability distribution from which the number of output tokens will be sampled.
Number of concurrent requests: The number of users that the application is likely to support when in use.
Number of completed requests: The total number of requests to send to the LLM API in the test.

The following script shows an example of how to invoke the model. See this notebook for step-by-step instructions on importing a custom model and running a benchmarking test.
python3 ${{LLM_PERF_SCRIPT_DIR}}/token_benchmark_ray.py \
  --model "bedrock/llama/{model_id}" \
  --mean-input-tokens {mean_input_tokens} \
  --stddev-input-tokens {stddev_input_tokens} \
  --mean-output-tokens {mean_output_tokens} \
  --stddev-output-tokens {stddev_output_tokens} \
  --max-num-completed-requests ${{LLM_PERF_MAX_REQUESTS}} \
  --timeout 1800 \
  --num-concurrent-requests ${{LLM_PERF_CONCURRENT}} \
  --results-dir "${{LLM_PERF_OUTPUT}}" \
  --llm-api litellm \
  --additional-sampling-params '{{}}'
At the end of the test, LLMPerf will output two JSON files: one with aggregate metrics, and one with separate entries for every invocation.
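A few lines of Python are enough to inspect the aggregate-metrics file; the file name pattern below is an assumption, so check the contents of your results directory:

import glob
import json

# LLMPerf writes one file with aggregate metrics and one with per-request entries
summary_path = sorted(glob.glob("benchmark-results/*summary*.json"))[0]  # match to your --results-dir
with open(summary_path) as f:
    summary = json.load(f)

print(json.dumps(summary, indent=2))  # inspect aggregate latency and throughput statistics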
Scale to zero and cold-start latency
One thing to remember is that because Amazon Bedrock Custom Model Import will scale down to zero when the model is unused, you need to first make a request to make sure that there is at least one active model copy. If you obtain an error indicating that the model isn’t ready, you need to wait for approximately ten seconds and up to 1 minute for Amazon Bedrock to prepare at least one active model copy. When ready, run a test invocation again, and proceed with benchmarking.
Example scenario for DeepSeek-R1-Distill-Llama-8B
Consider a DeepSeek-R1-Distill-Llama-8B model hosted on Amazon Bedrock Custom Model Import, supporting an AI application with low traffic of no more than two concurrent requests. To account for variability, you can adjust parameters for token count for prompts and completions. For example:

Number of clients: 2
Mean input token count: 500
Standard deviation input token count: 25
Mean output token count: 1000
Standard deviation output token count: 100
Number of requests per client: 50
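Under these assumptions, the benchmark invocation from the previous section could be filled in roughly as follows; the script path and results directory are placeholders, and model_id is the ARN of your imported model:

import subprocess

# Scenario parameters for the low-traffic DeepSeek-R1-Distill-Llama-8B test described above
params = {
    "mean-input-tokens": 500,
    "stddev-input-tokens": 25,
    "mean-output-tokens": 1000,
    "stddev-output-tokens": 100,
    "num-concurrent-requests": 2,
    "max-num-completed-requests": 100,  # 2 clients x 50 requests each
}

cmd = [
    "python3", "token_benchmark_ray.py",      # path to the LLMPerf script (placeholder)
    "--model", f"bedrock/llama/{model_id}",   # model_id: ARN of the imported model
    "--llm-api", "litellm",
    "--timeout", "1800",
    "--results-dir", "benchmark-results",
    "--additional-sampling-params", "{}",
]
for flag, value in params.items():
    cmd += [f"--{flag}", str(value)]

subprocess.run(cmd, check=True)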

This illustrative test takes approximately 8 minutes. At the end of the test, you will obtain a summary of results of aggregate metrics:
inter_token_latency_s
p25 = 0.010615988283217918
p50 = 0.010694698716183695
p75 = 0.010779359342088015
p90 = 0.010945443657517748
p95 = 0.01100556307365132
p99 = 0.011071086908721675
mean = 0.010710014800224604
min = 0.010364670612635254
max = 0.011485444453299149
stddev = 0.0001658793389904756
ttft_s
p25 = 0.3356793452499005
p50 = 0.3783651359990472
p75 = 0.41098671700046907
p90 = 0.46655246950049334
p95 = 0.4846706690498647
p99 = 0.6790834719300077
mean = 0.3837810468001226
min = 0.1878921090010408
max = 0.7590946710006392
stddev = 0.0828713133225014
end_to_end_latency_s
p25 = 9.885957818500174
p50 = 10.561580732000039
p75 = 11.271923759749825
p90 = 11.87688222009965
p95 = 12.139972019549713
p99 = 12.6071144856102
mean = 10.406450886010116
min = 2.6196457750011177
max = 12.626598834998731
stddev = 1.4681851822617253
request_output_throughput_token_per_s
p25 = 104.68609252502657
p50 = 107.24619111072519
p75 = 108.62997591951486
p90 = 110.90675007239598
p95 = 113.3896235445618
p99 = 116.6688412475626
mean = 107.12082450567561
min = 97.0053466021563
max = 129.40680882698936
stddev = 3.9748004356837137
number_input_tokens
p25 = 484.0
p50 = 500.0
p75 = 514.0
p90 = 531.2
p95 = 543.1
p99 = 569.1200000000001
mean = 499.06
min = 433
max = 581
stddev = 26.549294727074212
number_output_tokens
p25 = 1050.75
p50 = 1128.5
p75 = 1214.25
p90 = 1276.1000000000001
p95 = 1323.75
p99 = 1372.2
mean = 1113.51
min = 339
max = 1392
stddev = 160.9598415942952
Number Of Errored Requests: 0
Overall Output Throughput: 208.0008834264341
Number Of Completed Requests: 100
Completed Requests Per Minute: 11.20784995697034

In addition to the summary, you will receive metrics for individual requests that can be used to prepare detailed reports like the following histograms for time to first token and token throughput.

Analyzing performance results from LLMPerf and estimating costs using Amazon CloudWatch
LLMPerf gives you the ability to benchmark the performance of custom models served in Amazon Bedrock without having to inspect the specifics of the serving properties and configuration of your Amazon Bedrock Custom Model Import deployment. This information is valuable because it represents the expected end user experience of your application.
In addition, the benchmarking exercise can serve as a valuable tool for cost estimation. By using Amazon CloudWatch, you can observe the number of active model copies that Amazon Bedrock Custom Model Import scales to in response to the load test. ModelCopy is exposed as a CloudWatch metric in the AWS/Bedrock namespace and is reported using the imported model ARN as a label. The plot for the ModelCopy metric is shown in the figure below. This data will assist in estimating costs, because billing is based on the number of active model copies at a given time.
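A minimal sketch for pulling the ModelCopy metric with boto3 follows; the dimension name and period are assumptions, so confirm them against the metric as it appears in your CloudWatch console:

from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")
model_arn = "arn:aws:bedrock:<region>:<account-id>:imported-model/<id>"  # placeholder

now = datetime.now(timezone.utc)
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/Bedrock",
    MetricName="ModelCopy",
    Dimensions=[{"Name": "ModelArn", "Value": model_arn}],  # dimension name is an assumption
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,                # 5-minute buckets, matching the billing increments
    Statistics=["Maximum"],
)
for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Maximum"])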

Conclusion
While Amazon Bedrock Custom Model Import simplifies model deployment and scaling, performance benchmarking remains essential to predict production performance, and compare models across key metrics such as cost, latency, and throughput.
To learn more, try the example notebook with your custom model.
Additional resources:

Deploy DeepSeek-R1 distilled Llama models with Amazon Bedrock Custom Model Import
Learn more about LLMPerf and LiteLLM

About the Authors
Felipe Lopez is a Senior AI/ML Specialist Solutions Architect at AWS. Prior to joining AWS, Felipe worked with GE Digital and SLB, where he focused on modeling and optimization products for industrial applications.
Rupinder Grewal is a Senior AI/ML Specialist Solutions Architect with AWS. He currently focuses on the serving of models and MLOps on Amazon SageMaker. Prior to this role, he worked as a Machine Learning Engineer building and hosting models. Outside of work, he enjoys playing tennis and biking on mountain trails.
Paras Mehra is a Senior Product Manager at AWS. He is focused on helping build Amazon Bedrock. In his spare time, Paras enjoys spending time with his family and biking around the Bay Area.
Prashant Patel is a Senior Software Development Engineer in AWS Bedrock. He’s passionate about scaling large language models for enterprise applications. Prior to joining AWS, he worked at IBM on productionizing large-scale AI/ML workloads on Kubernetes. Prashant has a master’s degree from NYU Tandon School of Engineering. While not at work, he enjoys traveling and playing with his dogs.

Creating asynchronous AI agents with Amazon Bedrock

The integration of generative AI agents into business processes is poised to accelerate as organizations recognize the untapped potential of these technologies. Advancements in multimodal artificial intelligence (AI), where agents can understand and generate not just text but also images, audio, and video, will further broaden their applications. This post discusses agentic AI–driven architecture and ways of implementing it.
The emergence of generative AI agents in recent years has contributed to the transformation of the AI landscape, driven by advances in large language models (LLMs) and natural language processing (NLP). Companies like Anthropic, Cohere, and Amazon have made significant strides in developing powerful language models capable of understanding and generating human-like content across multiple modalities, revolutionizing how businesses integrate and utilize artificial intelligence in their processes.
These AI agents have demonstrated remarkable versatility, being able to perform tasks ranging from creative writing and code generation to data analysis and decision support. Their ability to engage in intelligent conversations, provide context-aware responses, and adapt to diverse domains has revolutionized how businesses approach problem-solving, customer service, and knowledge dissemination.
One of the most significant impacts of generative AI agents has been their potential to augment human capabilities through both synchronous and asynchronous patterns. In synchronous orchestration, just like in traditional process automation, a supervisor agent orchestrates the multi-agent collaboration, maintaining a high-level view of the entire process while actively directing the flow of information and tasks. This approach allows businesses to offload repetitive and time-consuming tasks in a controlled, predictable manner.
Alternatively, asynchronous choreography follows an event-driven pattern where agents operate autonomously, triggered by events or state changes in the system. In this model, agents publish events or messages that other agents can subscribe to, creating a workflow that emerges from their collective behavior. These patterns have proven particularly valuable in enhancing customer experiences, where agents can provide round-the-clock support, resolve issues promptly, and deliver personalized recommendations through either orchestrated or event-driven interactions, leading to increased customer satisfaction and loyalty.
Agentic AI architecture
Agentic AI architecture is a shift in process automation: it extends traditional autonomous agents with the capabilities of AI, with the purpose of imitating cognitive abilities and enhancing the actions of those agents. This architecture can enable businesses to streamline operations, enhance decision-making processes, and automate complex tasks in new ways.
Much like traditional business process automation through technology, agentic AI architecture is the design of AI systems that resolve complex problems with limited or indirect human intervention. These systems are composed of multiple AI agents that converse with each other or execute complex tasks through a series of choreographed or orchestrated processes. This approach empowers AI systems to exhibit goal-directed behavior, learn from experience, and adapt to changing environments.
The difference between a single agent invocation and a multi-agent collaboration lies in the complexity and the number of agents involved in the process.
When you interact with a digital assistant like Alexa, you’re typically engaging with a single agent, also known as a conversational agent. This agent processes your request, such as setting a timer or checking the weather, and provides a response without needing to consult other agents.
Now, imagine expanding this interaction to include multiple agents working together. Let’s start with a simple travel booking scenario:
Your interaction begins with telling a travel planning agent about your desired trip. In this first step, the AI model, in this case an LLM, is acting as an interpreter and user experience interface between your natural language input and the structured information needed by the travel planning system. It’s processing your request, which might be a complex statement like “I want to plan a week-long beach vacation in Hawaii for my family of four next month,” and extracting key details such as the destination, duration, number of travelers, and approximate dates.
The LLM is also likely to infer additional relevant information that wasn’t explicitly stated, such as the need for family-friendly accommodations or activities. It might ask follow-up questions to clarify ambiguous points or gather more specific preferences. Essentially, the LLM is transforming your casual, conversational input into a structured set of travel requirements that can be used by the specialized booking agents in the subsequent steps of the workflow.
This initial interaction sets the foundation for the entire multi-agent workflow, making sure that the travel planning agent has a clear understanding of your needs before engaging other specialized agents.
By adding another agent, the flight booking agent, the travel planning agent can call upon it to find suitable flights. The travel planning agent needs to provide the flight booking agent with relevant information (dates, destinations), then wait for and process the flight booking agent’s response to incorporate the flight options into its overall plan.
Now, let’s add another agent to the workflow: a hotel booking agent to support finding accommodations. With this addition, the travel planning agent must also communicate with the hotel booking agent, which needs to make sure that the hotel dates align with the flight dates and return that information so the overall plan can include both flight and hotel options.
As we continue to add agents, such as a car rental agent or a local activities agent, each new addition receives relevant information from the travel planning agent, performs its specific task, and returns its results to be incorporated into the overall plan. The travel planning agent acts not only as the user experience interface, but also as a coordinator, deciding when to involve each specialized agent and how to combine their inputs into a cohesive travel plan.
This multi-agent workflow allows more complex tasks to be accomplished by taking advantage of the specific capabilities of each agent. The system remains flexible because agents can be added or removed based on the specific needs of each request, without requiring significant changes to the existing agents and with minimal change to the overall workflow.
For more on the benefits of breaking tasks into agents, see How task decomposition and smaller LLMs can make AI more affordable.
Process automation with agentic AI architecture
The preceding scenario, just like traditional process automation, follows a common orchestration pattern, where the multi-agent collaboration is orchestrated by a supervisor agent. The supervisor agent acts like a conductor leading an orchestra, telling each instrument when to play and how to harmonize with others. For this approach, Amazon Bedrock Agents enables generative AI applications to execute multi-step tasks orchestrated by an agent, and you can create a multi-agent collaboration with Amazon Bedrock Agents to solve complex tasks. This is done by designating an Amazon Bedrock agent as a supervisor agent and associating one or more collaborator agents with that supervisor. For more details, see the documentation on creating and configuring Amazon Bedrock Agents and Use multi-agent collaboration with Amazon Bedrock Agents.
The following diagram illustrates the supervisor agent methodology.

Supervisor agent methodology

Following traditional process automation patterns, the other end of the spectrum from synchronous orchestration is asynchronous choreography: an asynchronous, event-driven multi-agent workflow. In this approach, there is no central orchestrating agent (supervisor). Agents operate autonomously: actions are triggered by events or changes in the system’s state, and agents publish events or messages that other agents can subscribe to. The workflow emerges from the collective behavior of the agents reacting to events asynchronously. It’s more like a jazz improvisation, where each musician responds to what others are playing without a conductor. The following diagram illustrates this event-driven workflow.

Event-driven workflow methodology

The event-driven pattern in asynchronous systems operates without predefined workflows, creating a dynamic and potentially chaotic processing environment. While agents subscribe to and publish messages through a central event hub, the flow of processing is determined organically by the message requirements and the available subscribed agents. Although the resulting pattern may resemble a structured workflow when visualized, it’s important to understand that this is emergent behavior rather than orchestrated design. The absence of centralized workflow definitions means that message processing occurs naturally based on publication timing and agent availability, creating a fluid and adaptable system that can evolve with changing requirements.
The choice between synchronous orchestration and asynchronous event-driven patterns fundamentally shapes how agentic AI systems operate and scale. Synchronous orchestration, with its supervisor agent approach, provides precise control and predictability, making it ideal for complex processes requiring strict oversight and sequential execution. This pattern excels in scenarios where the workflow needs to be tightly managed, audited, and debugged. However, it can create bottlenecks as all operations must pass through the supervisor agent. Conversely, asynchronous event-driven systems offer greater flexibility and scalability through their distributed nature. By allowing agents to operate independently and react to events in real-time, these systems can handle dynamic scenarios and adapt to changing requirements more readily. While this approach may introduce more complexity in tracking and debugging workflows, it excels in scenarios requiring high scalability, fault tolerance, and adaptive behavior. The decision between these patterns often depends on the specific requirements of the system, balancing the need for control and predictability against the benefits of flexibility and scalability.
Getting the best of both patterns
You can use a single agent to route messages to other agents based on the context of the event data (message) at runtime, with no prior knowledge of the downstream agents and without relying on each agent subscribing to an event hub. This is traditionally known as the message broker or event broker pattern, which for the purposes of this article we call the agent broker pattern, to represent the brokering of messages to AI agents. The agent broker pattern is a hybrid approach that combines elements of both centralized synchronous orchestration and distributed asynchronous event-driven systems.
The key to this pattern is that a single agent acts as a central hub for message distribution but doesn’t control the entire workflow. The broker agent determines where to send each message based on its content or metadata, making routing decisions at runtime. The processing agents are decoupled from each other and from the message source, only interacting with the broker to receive messages. The agent broker pattern differs from the supervisor pattern in that it routes a message to an agent without awaiting a response, whereas the supervisor awaits responses from its collaborating agents. The following diagram illustrates the agent broker methodology.

Agent broker methodology

Following an agent broker pattern, the system is still fundamentally event-driven, with actions triggered by the arrival of messages. New agents can be added to handle specific types of messages without changing the overall system architecture. How to implement this type of pattern is explained later in this post.
This pattern is often used in enterprise messaging systems, microservices architectures, and complex event processing systems. It provides a balance between the structure of orchestrated workflows and the flexibility of pure event-driven systems.
Agentic architecture with the Amazon Bedrock Converse API
Traditionally, we might have had to sacrifice some flexibility in the broker pattern by updating the routing logic in the broker whenever additional processes (agents) were added to the architecture. This is, however, not the case when using the Amazon Bedrock Converse API. With the Converse API, we can use a tool to complete an Amazon Bedrock model response. The only change needed when adding an agent to the collaboration is additional configuration stored outside of the broker.
To let a model use a tool to complete a response for a message, the message and the definitions for one or more tools (agents) are sent to the model. If the model determines that one of the tools can help generate a response, it returns a request to use the tool.
AWS AppConfig, a capability of AWS Systems Manager, is used to store each agent’s tool context data as a single configuration in a managed data store, to be sent with the Converse API tool request. Using AWS Lambda as the message broker to receive all messages and send requests to the Converse API with the tool context stored in AWS AppConfig, the architecture allows additional agents to be added to the system without updating the routing logic: agents are ‘registered’ as tool context in the configuration stored in AWS AppConfig, which the Lambda function reads at runtime (when an event message is received). For more information about when to use AWS AppConfig, see AWS AppConfig use cases.
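To make this concrete, the following sketch shows how a broker Lambda function might read the registered agents’ tool context from AWS AppConfig and map it into the toolConfig structure that the Converse API expects. The application, environment, and profile identifiers, as well as the shape of the stored configuration (name, description, inputSchema, ingress fields), are illustrative assumptions rather than a prescribed schema; in practice you might also read the configuration through the AWS AppConfig Lambda extension instead of the appconfigdata API.

import json
import boto3

appconfig = boto3.client("appconfigdata")

def load_agent_registry():
    # Read the agent tool context stored in AWS AppConfig (identifiers are hypothetical)
    session = appconfig.start_configuration_session(
        ApplicationIdentifier="agent-broker",
        EnvironmentIdentifier="prod",
        ConfigurationProfileIdentifier="agent-registry",
    )
    config = appconfig.get_latest_configuration(
        ConfigurationToken=session["InitialConfigurationToken"]
    )
    # Assumed to be a JSON list of entries such as:
    # [{"name": "flight_booking_agent", "description": "Books flights ...",
    #   "inputSchema": {...}, "ingress": "https://sqs.../flight-agent-ingress"}]
    return json.loads(config["Configuration"].read())

def build_tool_config(agents):
    # Map each registered agent to a Converse API tool specification
    return {
        "tools": [
            {
                "toolSpec": {
                    "name": agent["name"],
                    "description": agent["description"],
                    "inputSchema": {"json": agent["inputSchema"]},
                }
            }
            for agent in agents
        ]
    }

Registering a new agent then amounts to appending one entry to this configuration; no broker code changes are required.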
Implementing the agent broker pattern
The following diagram demonstrates how Amazon EventBridge and Lambda act as a central message broker, with the Amazon Bedrock Converse API to let a model use a tool in a conversation to dynamically route messages to appropriate AI agents.

Agent broker architecture

Messages sent to EventBridge are routed through an EventBridge rule to Lambda. The EventBridge Lambda function performs three tasks as the agent broker (a code sketch follows the list):

Query AWS AppConfig for all agents’ tool context. An agent tool context is a description of the agent’s capability along with the Amazon Resource Name (ARN) or URL of the agent’s message ingress.
Provide the agent tool context along with the inbound event message to the Amazon Bedrock LLM through the Converse API; in this example, using an Amazon Bedrock tools-compatible LLM. Through the Converse API, the LLM compares the event message context with the agent tool context and returns a response to the requesting Lambda function containing the recommended tool or tools that should be used to process the message.
Receive the response from the Converse API request containing one or more tools that should be called to process the event message, and hand the event message to the ingress of the recommended tools.
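
The following is a minimal sketch of a brokering Lambda function that performs these three tasks. It assumes the load_agent_registry and build_tool_config helpers from the earlier AppConfig sketch are available, that each registered agent entry carries an SQS ingress queue URL, and that the inbound EventBridge event carries the message in its detail field; the model ID is illustrative, and any Converse API tool-compatible model could be used.

import json
import boto3

bedrock = boto3.client("bedrock-runtime")
sqs = boto3.client("sqs")

MODEL_ID = "amazon.nova-pro-v1:0"  # illustrative; use any tool-compatible model

def handler(event, context):
    # Task 1: query AWS AppConfig for all agents' tool context
    agents = load_agent_registry()
    tool_config = build_tool_config(agents)
    ingress_by_name = {agent["name"]: agent["ingress"] for agent in agents}

    # Task 2: ask the model which agent(s) should process the inbound event message
    message_text = json.dumps(event["detail"])
    response = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": message_text}]}],
        toolConfig=tool_config,
    )

    # Task 3: hand the event message to the ingress of each recommended tool (agent)
    for block in response["output"]["message"]["content"]:
        if "toolUse" in block:
            tool = block["toolUse"]
            sqs.send_message(
                QueueUrl=ingress_by_name[tool["name"]],
                MessageBody=json.dumps(
                    {"input": tool["input"], "sourceEvent": event["detail"]}
                ),
            )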

In this example, the architecture demonstrates brokering messages asynchronously to an Amazon SageMaker based agent, an Amazon Bedrock agent, and an external third-party agent, all from the same agent broker.
Although the brokering Lambda function could connect directly to the SageMaker or Amazon Bedrock agent API, the architecture provides adaptability and scalability in message throughput by allowing messages from the agent broker to be queued, in this example with Amazon Simple Queue Service (Amazon SQS), and processed according to the capability of the receiving agent. For adaptability, the Lambda function subscribed to the agent ingress queue provides additional system prompts (pre-prompting of the LLM for specific tool context), message formatting, and the functions required for the expected input and output of the agent request.
To add new agents to the system, the only integration requirements are to update AWS AppConfig with the new agent tool context (a description of the agent’s capability and its ingress endpoint) and to make sure the brokering Lambda function has permissions to write to the agent ingress endpoint.
Agents can be added to the system without rewriting the Lambda function or performing an integration that requires downtime; the new agent is available on the next instantiation of the brokering Lambda function.
Implementing the supervisor pattern with an agent broker
Building upon the agent broker pattern, the architecture can be extended to handle more complex, stateful interactions. Although the broker pattern effectively uses AWS AppConfig and Amazon Bedrock’s Converse API tool use capability for dynamic routing, its unidirectional nature has limitations. Events flow in and are distributed to agents, but complex scenarios like travel booking require maintaining context across multiple agent interactions. This is where the supervisor pattern provides additional capabilities without compromising the flexible routing we achieved with the broker pattern.
Returning to the travel booking example: the system has a broker agent and several task-based agents to which events are pushed. When processing a request like “Book a 3-night trip to Sydney from Melbourne during the first week of September for 2 people”, we encounter several challenges. Although this statement contains clear intent, it lacks critical details that the agents might need, such as:

Specific travel dates
Accommodation preferences and room configurations

The broker pattern alone can’t effectively manage these information gaps while maintaining context between agent interactions. This is where adding supervisor capabilities to the broker agent provides:

Contextual awareness between events and agent invocations
Bi-directional information flow capabilities

The following diagram illustrates the supervisor pattern workflow.

Supervisor pattern architecture

When a new event enters the system, the workflow initiates the following steps:

The event is assigned a unique identifier for tracking
The supervisor performs the following actions:

Evaluates which agents to invoke (brokering)
Creates a new state record with the identifier and timestamp
Provides this contextual information to the selected agents along with their invocation parameters

Agents process their tasks and emit ‘task completion’ events back to EventBridge (see the sketch after this list)
The supervisor performs the following actions:

Collects and processes completed events
Evaluates the combined results and context
Determines if additional agent invocations are needed
Continues this cycle until all necessary actions are completed
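
As referenced in the workflow above, when an agent finishes its task it emits a ‘task completion’ event back to EventBridge. A minimal sketch of such an emission is shown below; the event bus name, source, detail type, and detail fields, including the tracking identifier propagated by the supervisor, are assumptions for illustration.

import json
import boto3

events = boto3.client("events")

def emit_task_completion(tracking_id, agent_name, result):
    # Publish a 'task completion' event that the supervisor correlates by tracking_id
    events.put_events(
        Entries=[{
            "EventBusName": "agentic-workflow-bus",   # hypothetical bus name
            "Source": f"agents.{agent_name}",
            "DetailType": "task.completed",
            "Detail": json.dumps({
                "trackingId": tracking_id,            # identifier assigned when the event entered the system
                "agent": agent_name,
                "result": result,
            }),
        }]
    )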

This pattern handles scenarios where agents might return varying results or request additional information. The supervisor can:

Derive missing information from other agent responses
Request additional information from the source
Coordinate with other agents to resolve information gaps

To handle information gaps without architectural modifications, we can introduce an answers agent to the existing system. This agent operates within the same framework as other agents, but specializes in context resolution. When agents report incomplete information or require clarification, the answers agent can:

Process queries about missing information
Emit task completion events with enhanced context
Allow the supervisor to resume workflow execution with newly available information, the same way that it would after another agent emits its task-completion event.

This enhancement enables complex, multi-step workflows while maintaining the system’s scalability and flexibility. The supervisor can manage dependencies between agents, handle partial completions, and make sure that the necessary information is gathered before finalizing tasks.
Implementation considerations:
Implementing the supervisor pattern on top of the existing broker agent architecture provides the advantages of both the broker pattern and the complex state management of orchestration. State management can be handled through Amazon DynamoDB, while maintaining the use of EventBridge for event routing and AWS AppConfig for agent configuration. The Amazon Bedrock Converse API continues to play a crucial role in agent selection, but now with added context from the supervisor’s state management. This allows you to preserve the dynamic routing capabilities we established with the broker pattern while adding the sophisticated workflow management needed for complex, multi-step processes.
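A minimal sketch of that DynamoDB-backed state management, assuming a hypothetical table keyed by the tracking identifier, might look like the following; table and attribute names are illustrative, not prescriptive.

import time
import boto3

dynamodb = boto3.resource("dynamodb")
state_table = dynamodb.Table("agent-workflow-state")  # hypothetical table keyed by trackingId

def create_state_record(tracking_id, pending_agents):
    # Record which agents the supervisor invoked and when
    state_table.put_item(Item={
        "trackingId": tracking_id,
        "createdAt": int(time.time()),
        "pendingAgents": set(pending_agents),  # stored as a DynamoDB string set
        "results": {},
    })

def record_completion(tracking_id, agent_name, result):
    # Called when a task-completion event arrives; returns the agents still pending
    response = state_table.update_item(
        Key={"trackingId": tracking_id},
        UpdateExpression="SET #res.#a = :r DELETE pendingAgents :done",
        ExpressionAttributeNames={"#res": "results", "#a": agent_name},
        ExpressionAttributeValues={":r": result, ":done": {agent_name}},
        ReturnValues="ALL_NEW",
    )
    return response["Attributes"].get("pendingAgents", set())

When record_completion returns an empty set, the supervisor knows all invoked agents have reported back and can finalize the plan or decide whether further invocations are needed.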
Conclusion
Agentic AI architecture, powered by Amazon Bedrock and AWS services, represents a leap forward in the evolution of automated AI systems. By combining the flexibility of event-driven systems with the power of generative AI, this architecture enables businesses to create more adaptive, scalable, and intelligent automated processes. The agent broker pattern offers a robust solution for dynamically routing complex tasks to specialized AI agents, and the agent supervisor pattern extends these capabilities to handle sophisticated, context-aware workflows.
These patterns take advantage of the strengths of the Amazon Bedrock Converse API, Lambda, EventBridge, and AWS AppConfig to create a flexible and extensible system. The broker pattern excels at dynamic routing and seamless agent integration, while the supervisor pattern adds crucial state management and contextual awareness for complex, multi-step processes. Together, they provide a comprehensive framework for building sophisticated AI systems that can handle both simple routing and complex, stateful interactions.
This architecture not only streamlines operations, but also opens new possibilities for innovation and efficiency across various industries. Whether implementing simple task routing or orchestrating complex workflows requiring maintained context, organizations can build scalable, maintainable AI systems that evolve with their needs while maintaining operational stability.
To get started with an agentic AI architecture, consider the following next steps:

Explore Amazon Bedrock – If you haven’t already, sign up for Amazon Bedrock and experiment with its powerful generative AI models and APIs. Familiarize yourself with the Converse API and its tool use capabilities.
Prototype your own agent broker – Use the architecture outlined in this post as a starting point to build a proof-of-concept agent broker system tailored to your organization’s needs. Start small with a few specialized agents and gradually expand.
Identify use cases – Analyze your current business processes to identify areas where an agentic AI architecture could drive significant improvements. Consider complex, multi-step tasks that could benefit from AI assistance.
Stay informed – Keep up with the latest developments in AI and cloud technologies. AWS regularly updates its offerings, so stay tuned for new features that could enhance your agentic AI systems.
Collaborate and share – Join AI and cloud computing communities to share your experiences and learn from others. Consider contributing to open-source projects or writing about your implementation to help advance the field.
Invest in training – Make sure your team has the necessary skills to work with these advanced AI technologies. Consider AWS training and certification programs to build expertise in your organization.

By embracing an agentic AI architecture, you’re not just optimizing your current processes – you’re positioning your organization at the forefront of the AI revolution. Start your journey today and unlock the full potential of AI-driven automation for your business.

About the Authors
Aaron Sempf is Next Gen Tech Lead for the AWS Partner Organization in Asia-Pacific and Japan. With over 20 years in distributed system engineering design and development, he focuses on solving for large scale complex integration and event driven systems. In his spare time, he can be found coding prototypes for autonomous robots, IoT devices, distributed solutions, and designing agentic architecture patterns for generative AI assisted business automation.
Joshua Toth is a Senior Prototyping Engineer with over a decade of experience in software engineering and distributed systems. He specializes in solving complex business challenges through technical prototypes, demonstrating the art of the possible. With deep expertise in proof of concept development, he focuses on bridging the gap between emerging technologies and practical business applications. In his spare time, he can be found developing next-generation interactive demonstrations and exploring cutting-edge technological innovations.
Sara van de Moosdijk, simply known as Moose, is an AI/ML Specialist Solution Architect at AWS. She helps AWS customers and partners build and scale AI/ML solutions through technical enablement, support, and architectural guidance. Moose spends her free time figuring out how to fit more books in her overflowing bookcase.

Building an Interactive Bilingual (Arabic and English) Chat Interface …

In this tutorial, we implement a Bilingual Chat Assistant powered by Arcee’s Meraj-Mini model, deployed on Google Colab using a T4 GPU. This tutorial showcases the capabilities of open-source language models while providing practical, hands-on experience in deploying state-of-the-art AI solutions within the constraints of free cloud resources. We’ll use a powerful stack of tools including:

Arcee’s Meraj-Mini model

Transformers library for model loading and tokenization

Accelerate and bitsandbytes for efficient quantization

PyTorch for deep learning computations

Gradio for creating an interactive web interface

# Enable GPU acceleration
!nvidia-smi --query-gpu=name,memory.total --format=csv

# Install dependencies
!pip install -qU transformers accelerate bitsandbytes
!pip install -q gradio

First, we enable GPU acceleration by querying the GPU’s name and total memory using the nvidia-smi command. We then install and update key Python libraries—such as transformers, accelerate, bitsandbytes, and gradio—to support machine learning tasks and deploy the interactive application.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True
)

model = AutoModelForCausalLM.from_pretrained(
    "arcee-ai/Meraj-Mini",
    quantization_config=quant_config,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("arcee-ai/Meraj-Mini")

Then we configure 4-bit quantization settings using BitsAndBytesConfig for efficient model loading, load the "arcee-ai/Meraj-Mini" causal language model along with its tokenizer from Hugging Face, and automatically map devices for optimal performance.

chat_pipeline = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.1,
    do_sample=True
)

Here we create a text generation pipeline tailored for chat interactions using Hugging Face’s pipeline function. It configures maximum new tokens, temperature, top_p, and repetition penalty to balance diversity and coherence during text generation.

def format_chat(messages):
    # Build a prompt using the model's chat template delimiters
    prompt = ""
    for msg in messages:
        prompt += f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n"
    prompt += "<|im_start|>assistant\n"
    return prompt

def generate_response(user_input, history=[]):
    history.append({"role": "user", "content": user_input})
    formatted_prompt = format_chat(history)
    output = chat_pipeline(formatted_prompt)[0]['generated_text']
    # Keep only the newly generated assistant turn
    assistant_response = output.split("<|im_start|>assistant\n")[-1].split("<|im_end|>")[0]
    history.append({"role": "assistant", "content": assistant_response})
    return assistant_response, history

We define two functions to facilitate a conversational interface. The first function formats a chat history into a structured prompt with custom delimiters, while the second appends a new user message, generates a response using the text-generation pipeline, and updates the conversation history accordingly.

import gradio as gr

with gr.Blocks() as demo:
    chatbot = gr.Chatbot()
    msg = gr.Textbox(label="Message")
    clear = gr.Button("Clear History")

    def respond(message, chat_history):
        # Convert Gradio's (user, assistant) tuples into role/content dicts for the model
        chat_history = chat_history or []
        history = []
        for user_msg, bot_msg in chat_history:
            history.append({"role": "user", "content": user_msg})
            history.append({"role": "assistant", "content": bot_msg})
        response, _ = generate_response(message, history)
        # Clear the textbox and append the new turn to the visible chat history
        return "", chat_history + [(message, response)]

    msg.submit(respond, [msg, chatbot], [msg, chatbot])
    clear.click(lambda: None, None, chatbot, queue=False)

demo.launch(share=True)

Finally, we build a web-based chatbot interface using Gradio. It creates UI elements for the chat history, message input, and a clear-history button, and defines a response function that calls the text-generation pipeline, clears the input box, and updates the conversation. The demo is then launched with sharing enabled for public access.

Here is the Colab Notebook.

This AI Paper Introduces R1-Searcher: A Reinforcement Learning-Based F …

Large language models (LLMs) primarily depend on their internal knowledge, which can be inadequate when handling real-time or knowledge-intensive questions. This limitation often leads to inaccurate responses or hallucinations, making it essential to enhance LLMs with external search capabilities. By leveraging reinforcement learning, researchers are actively working on methods to improve these models’ ability to retrieve and integrate relevant information beyond their static knowledge base.

Current LLMs’ restricted access to up-to-date and domain-specific knowledge is a major issue. Since these models are trained on vast datasets that may not include recent developments, they struggle with answering dynamic questions requiring real-time information. While retrieval-augmented generation (RAG) methods have been introduced to mitigate this issue, existing solutions rely heavily on structured prompting and supervised fine-tuning (SFT). These approaches often lead to overfitting, limiting the model’s generalization ability across different datasets. There is a need for an alternative that allows LLMs to autonomously interact with external search systems, improving their adaptability and accuracy.

Previous methods have attempted to incorporate external search functionality into LLMs using iterative prompting, supervised fine-tuning, and tree-based search techniques like Monte Carlo Tree Search (MCTS). While these methods show some improvements, they rely on expensive computational resources and proprietary models. Supervised fine-tuning, for instance, forces models to memorize reasoning paths, which negatively impacts their ability to generalize to new scenarios. Some retrieval-based strategies introduce multi-step query refinement techniques but often require human intervention or predefined prompt templates. These limitations necessitate the development of a more autonomous and efficient search mechanism for LLMs.

A research team from the Renmin University of China and DataCanvas Alaya NeW introduced R1-Searcher, a novel reinforcement learning framework designed to improve LLMs’ ability to retrieve external knowledge effectively. This framework employs a two-stage reinforcement learning approach to enable LLMs to invoke an external search system without requiring human-crafted prompts or prior supervised fine-tuning. By focusing solely on reinforcement learning, R1-Searcher allows models to explore and learn optimal retrieval strategies autonomously, improving accuracy and efficiency in reasoning tasks.

The R1-Searcher framework is structured in two phases. The first phase encourages the model to initiate external search actions, providing retrieval-based rewards without considering the final answer’s correctness. This phase ensures that the model learns to invoke search queries correctly. The second phase refines this capability by introducing an answer-based reward system, which evaluates whether the retrieved information contributes to solving the given problem. The reinforcement learning process relies on a tailored loss function that penalizes incorrect or unnecessary searches while rewarding the effective use of external knowledge. Unlike previous retrieval-based techniques, this approach allows LLMs to integrate reasoning and retrieval dynamically, improving their adaptability across diverse tasks.
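
To make the two-stage reward design concrete, the following is an illustrative sketch of how such rewards could be shaped; it is not the paper’s exact formulation, and the specific values and penalty terms are assumptions for illustration only.

def stage1_reward(invoked_search, search_format_ok):
    # Stage 1 (illustrative): reward correctly formatted external search calls,
    # independent of whether the final answer is correct
    return 0.5 if (invoked_search and search_format_ok) else 0.0

def stage2_reward(answer_correct, num_searches, search_format_ok, search_penalty=0.1):
    # Stage 2 (illustrative): reward answer correctness, with small penalties
    # for malformed or redundant searches
    reward = 1.0 if answer_correct else 0.0
    if not search_format_ok:
        reward -= 0.5
    reward -= search_penalty * max(0, num_searches - 1)
    return reward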

Experimental evaluations demonstrated that R1-Searcher outperformed existing retrieval-augmented methods, including GPT-4o-mini-based models. On the HotpotQA dataset, accuracy improved by 48.22%, while on the 2WikiMultiHopQA dataset, it achieved a 21.72% increase. Further, it showed strong generalization capabilities by outperforming other models on the Bamboogle dataset, achieving an 11.4% improvement over comparable retrieval-based approaches. Unlike previous techniques, which relied on closed-source models and extensive computational resources, R1-Searcher provided superior performance while maintaining efficiency in search and reasoning tasks. The study also demonstrated that this approach successfully mitigated common issues related to hallucinations and misinformation in LLM-generated responses.

The findings indicate that enhancing LLMs with autonomous search capabilities can significantly improve their accuracy and generalization. Using reinforcement learning instead of supervised fine-tuning, R1-Searcher allows models to learn optimal retrieval strategies dynamically, eliminating reliance on memorized responses. This approach represents a major advancement in artificial intelligence, addressing the limitations of existing models while ensuring they remain adaptable to evolving knowledge requirements. The study’s results highlight the potential for reinforcement learning to revolutionize knowledge integration in LLMs, making them more reliable for diverse reasoning tasks.

Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project.


HybridNorm: A Hybrid Normalization Strategy Combining Pre-Norm and Pos …

Transformers have revolutionized natural language processing as the foundation of large language models (LLMs), excelling in modeling long-range dependencies through self-attention mechanisms. However, as these models grow deeper and more complex, training stability presents a significant challenge that directly impacts performance. Researchers face a troublesome trade-off between two primary normalization strategies: Pre-Layer Normalization (Pre-Norm) and Post-Layer Normalization (Post-Norm). Pre-Norm offers improved training stability but compromises final model performance, while Post-Norm delivers superior generalization and performance at the cost of training difficulty. This stability-performance dilemma has hindered the advancement of transformer architectures.

Existing methods have tried to enhance transformer architectures’ computational efficiency and model expressiveness. Architecture modifications like Multi-head Latent Attention (MLA) and Mixture of Experts (MoE) have improved performance across various tasks but require careful integration with normalization layers. Among normalization types, methods like RMSNorm have shown effectiveness in specific contexts by addressing internal covariate shift using root mean square statistics. Regarding attention normalization, QK-Norm enhances stability by normalizing query and key components, while QKV-Norm extends this approach to include value components. Solutions like DeepNorm address training instability by scaling residual connections, while Mix-LN applies Post-Norm to earlier layers and Pre-Norm to deeper layers.

Researchers from Peking University, SeedFoundation-Model ByteDance, and Capital University of Economics and Business have proposed HybridNorm, a normalization strategy that effectively combines the strengths of the Pre-Norm and Post-Norm approaches in transformer architectures. It implements a dual normalization technique within each transformer block: applying QKV normalization within the attention mechanism while utilizing Post-Norm in the feed-forward network (FFN). This strategic combination addresses the longstanding stability-performance trade-off that has challenged transformer model development. The approach proves particularly effective for LLMs, where training stability and performance optimization are crucial.
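
As a rough illustration of the idea (a simplified sketch, not the authors’ reference implementation), a transformer block following this recipe normalizes the projected queries, keys, and values inside the attention mechanism and applies post-normalization around the feed-forward sublayer. LayerNorm is used below for simplicity; RMSNorm is a common alternative, and details such as the norm placement around the attention output may differ in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridNormBlock(nn.Module):
    # Simplified sketch: QKV normalization in attention, Post-Norm around the FFN
    def __init__(self, d_model, n_heads, d_ff):
        super().__init__()
        self.n_heads = n_heads
        self.head_dim = d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)
        self.q_norm = nn.LayerNorm(d_model)
        self.k_norm = nn.LayerNorm(d_model)
        self.v_norm = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
        self.ffn_norm = nn.LayerNorm(d_model)  # Post-Norm: applied after the residual addition

    def _split_heads(self, t):
        b, s, _ = t.shape
        return t.view(b, s, self.n_heads, self.head_dim).transpose(1, 2)

    def forward(self, x):
        # Attention sublayer: normalize the projected queries, keys, and values
        q = self._split_heads(self.q_norm(self.q_proj(x)))
        k = self._split_heads(self.k_norm(self.k_proj(x)))
        v = self._split_heads(self.v_norm(self.v_proj(x)))
        attn = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        attn = attn.transpose(1, 2).reshape(x.shape)
        x = x + self.out_proj(attn)
        # Feed-forward sublayer with Post-Norm: normalize after adding the residual
        return self.ffn_norm(x + self.ffn(x))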

HybridNorm is evaluated across two model series: dense models (550M and 1B parameters) and MoE models. The 1B dense model contains approximately 1.27 billion parameters with an architecture similar to Llama 3.2. For the MoE variant, researchers implemented the OLMoE framework, which activates only 1.3B parameters from a total of 6.9B. The 550M dense model features a model dimension of 1536, an FFN dimension of 4096, and 16 attention heads. The larger 1.2B model expands these dimensions to 2048 and 9192, respectively, with 32 attention heads. The MoE-1B-7B model implements a specialized configuration with 16 attention heads and a model dimension of 2048, and selectively activates 8 experts from a pool of 64, enabling more efficient computational resource allocation.

The experimental results reveal HybridNorm’s superior performance across dense and MoE models. In dense model evaluations, both HybridNorm and HybridNorm* configurations show consistently lower training loss and validation perplexity than traditional Pre-Norm approaches. Downstream benchmark evaluations show HybridNorm* outperforming the Pre-Norm across diverse tasks, achieving the highest average scores with improvements in BasicArithmetic (+3.11), HellaSwag (+1.71), and COPA (+3.78). In the MoE model, HybridNorm* maintains its advantage with consistently lower training loss and validation perplexity throughout training. Downstream task evaluations for MoE models show improvements in reasoning-intensive tasks like ARC-C (+2.35), ARC-E (+2.40), and OpenbookQA (+0.81).

In conclusion, researchers introduced HybridNorm, a significant advancement in transformer architecture design that resolves the traditional trade-off between training stability and model performance. It strategically combines Pre-Norm and Post-Norm techniques within each transformer block, applying QKV normalization in the attention mechanism and Post-Norm in the feed-forward network. This hybrid strategy creates a balanced normalization framework that stabilizes gradient flow while maintaining strong regularization effects. Moreover, the consistent performance gains across various model scales highlight HybridNorm’s versatility and scalability in transformer design. As transformer models continue to scale, HybridNorm offers a practical solution for developing more robust and performant large-scale neural networks.

Check out the Paper. All credit for this research goes to the researchers of this project.


Exploring creative possibilities: A visual guide to Amazon Nova Canvas

Compelling AI-generated images start with well-crafted prompts. In this follow-up to our Amazon Nova Canvas Prompt Engineering Guide, we showcase a curated gallery of visuals generated by Nova Canvas—categorized by real-world use cases—from marketing and product visualization to concept art and design exploration.
Each image is paired with the prompt and parameters that generated it, providing a practical starting point for your own AI-driven creativity. Whether you’re crafting specific types of images, optimizing workflows, or simply seeking inspiration, this guide will help you unlock the full potential of Amazon Nova Canvas.
Solution overview
Getting started with Nova Canvas is straightforward. You can access the model through the Image Playground on the AWS Management Console for Amazon Bedrock, or through APIs. For detailed setup instructions, including account requirements and necessary permissions, visit our documentation on Creative content generation with Amazon Nova. Our previous post on prompt engineering best practices provides comprehensive guidance on crafting effective prompts.
A visual guide to Amazon Nova Canvas
In this gallery, we showcase a diverse range of images and the prompts used to generate them, highlighting how Amazon Nova Canvas adapts to various use cases—from marketing and product design to storytelling and concept art.
All images that follow were generated using Nova Canvas at a 1280x720px resolution with a CFG scale of 6.5, seed of 0, and the Premium setting for image quality. This resolution also matches the image dimensions expected by Nova Reel, allowing you to take these images into Amazon Nova Reel to experiment with video generation.
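If you prefer to work through the API rather than the Image Playground, a request along the following lines reproduces the settings used for this gallery. This is a minimal sketch using the Bedrock Runtime InvokeModel operation; verify the request fields and model ID against the current Amazon Nova Canvas documentation before relying on them.

import base64
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

body = {
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {
        "text": "Dramatic wide-angle shot of a rugged mountain range at sunset, "
                "with a lone tree silhouetted in the foreground, creating a striking focal point."
    },
    "imageGenerationConfig": {
        "numberOfImages": 1,
        "width": 1280,        # matches the gallery settings and Nova Reel's expected dimensions
        "height": 720,
        "cfgScale": 6.5,
        "seed": 0,
        "quality": "premium",
    },
}

response = bedrock.invoke_model(
    modelId="amazon.nova-canvas-v1:0",
    body=json.dumps(body),
)
result = json.loads(response["body"].read())
with open("mountain_sunset.png", "wb") as f:
    f.write(base64.b64decode(result["images"][0]))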
Landscapes

Overhead perspective of winding river delta, capturing intricate branching waterways and sediment patterns. Soft morning light revealing subtle color gradations between water and land. Revealing landscape’s hidden fluid dynamics from bird’s-eye view.

Sparse arctic tundra landscape at twilight, expansive white terrain with isolated rock formations silhouetted against a deep blue sky. Low-contrast black and white composition capturing the infinite horizon, with subtle purple hues in the shadows. Ultra-wide-angle perspective emphasizing the vastness of negative space and geological simplicity.

Wide-angle aerial shot of patchwork agricultural terrain at golden hour, with long shadows accentuating the texture and topography of the land. Emphasis on the interplay of light and shadow across the geometric field divisions.

Dynamic drone perspective of a dramatic shoreline at golden hour, capturing long shadows cast by towering sea stacks and coastal cliffs. Hyper-detailed imagery showcasing the interplay of warm sunlight on rocky textures and the cool, foamy edges of incoming tides.

Dramatic wide-angle shot of a rugged mountain range at sunset, with a lone tree silhouetted in the foreground, creating a striking focal point.

Wide-angle capture of a hidden beach cove, surrounded by towering cliffs, with a shipwreck partially visible in the shallow waters.

Character portraits

A profile view of a weathered fisherman, silhouetted against a pastel dawn sky. The rim lighting outlines the shape of his beard and the texture of his knit cap. Rendered with high contrast to emphasize the rugged contours of his face and the determined set of his jaw.

A weathered fisherman with a thick gray beard and a knit cap, framed against the backdrop of a misty harbor at dawn. The image captures him in a medium shot, revealing more of his rugged attire. Cool, blue tones dominate the scene, contrasting with the warm highlights on his face.

An intimate portrait of a seasoned fisherman, his face filling the frame. His thick gray beard is flecked with sea spray, and his knit cap is pulled low over his brow. The warm glow of sunset bathes his weathered features in golden light, softening the lines of his face while still preserving the character earned through years at sea. His eyes reflect the calm waters of the harbor behind him.

A seaside cafe at sunrise, with a seasoned barista’s silhouette visible through the window. Their kind smile is illuminated by the warm glow of the rising sun, creating a serene atmosphere. The image has a dreamy, soft-focus quality with pastel hues.

A dynamic profile shot of a barista in motion, captured mid-conversation with a customer. Their smile is genuine and inviting, with laugh lines accentuating their seasoned experience. The cafe’s interior is rendered in soft bokeh, maintaining the cinematic feel with a shallow depth of field.

A front-facing portrait of an experienced barista, their welcoming smile framed by the sleek espresso machine. The background bustles with blurred cafe activity, while the focus remains sharp on the barista’s friendly demeanor. The lighting is contrasty, enhancing the cinematic mood.

Fashion photography

A model with sharp cheekbones and platinum pixie cut in a distressed leather bomber jacket stands amid red smoke in an abandoned subway tunnel. Wide-angle lens, emphasizing tunnel’s converging lines, strobed lighting creating a sense of motion.

A model with sharp cheekbones and platinum pixie cut wears a distressed leather bomber jacket, posed against a stark white cyclorama. Low-key lighting creates deep shadows, emphasizing the contours of her face. Shot from a slightly lower angle with a medium format camera, highlighting the jacket’s texture.

Close-up portrait of a model with defined cheekbones and a platinum pixie cut, emerging from an infinity pool while wearing a wet distressed leather bomber jacket. Shot from a low angle with a tilt-shift lens, blurring the background for a dreamy fashion magazine aesthetic.

A model with sharp cheekbones and platinum pixie cut is wearing a distressed leather bomber jacket, caught mid-laugh at a backstage fashion show. Black and white photojournalistic style, natural lighting.

Side profile of a model with defined cheekbones and a platinum pixie cut, standing still amidst the chaos of Chinatown at midnight. The distressed leather bomber jacket contrasts with the blurred neon lights in the background, creating a sense of urban solitude.

Product photography

A flat lay featuring a premium matte metal water bottle with bamboo accents, placed on a textured linen cloth. Eco-friendly items like a cork notebook, a sprig of eucalyptus, and a reusable straw are arranged around it. Soft, natural lighting casts gentle shadows, emphasizing the bottle’s matte finish and bamboo details. The background is an earthy tone like beige or light gray, creating a harmonious and sustainable composition.

Angled perspective of the premium water bottle with bamboo elements, positioned on a natural jute rug. Surrounding it are earth-friendly items: a canvas tote bag, a stack of recycled paper notebooks, and a terracotta planter with air-purifying plants. Warm, golden hour lighting casts long shadows, emphasizing textures and creating a cozy, sustainable atmosphere. The scene evokes a sense of eco-conscious home or office living.

An overhead view of the water bottle’s bamboo cap, partially unscrewed to reveal the threaded metal neck. Soft, even lighting illuminates the entire scene, showcasing the natural variations in the bamboo’s color and grain. The bottle’s matte metal body extends out of frame, creating a minimalist composition that draws attention to the sustainable materials and precision engineering.

An angled view of a premium matte metal water bottle with bamboo accents, showcasing its sleek profile. The background features a soft blur of a serene mountain lake. Golden hour sunlight casts a warm glow on the bottle’s surface, highlighting its texture. Captured with a shallow depth of field for product emphasis.

A pair of premium over-ear headphones with a matte black finish and gold accents, arranged in a flat lay on a clean white background. Organic leaves for accents. small notepad, pencils, and a carrying case are neatly placed beside the headphones, creating a symmetrical and balanced composition. Bright, diffused lighting eliminates shadows, emphasizing the sleek design without distractions. A shadowless, crisp aesthetic.

An overhead shot of premium over-ear headphones resting on a reflective surface, showcasing the symmetry of the design. Dramatic side lighting accentuates the curves and edges, casting subtle shadows that highlight the product’s premium build quality.

An extreme macro shot focusing on the junction where the leather ear cushion meets the metallic housing of premium over-ear headphones. Sharp details reveal the precise stitching and material textures, while selective focus isolates this area against a softly blurred, dark background, showcasing the product’s premium construction.

An overhead shot of premium over-ear headphones resting on a reflective surface, showcasing the symmetry of the design. Dramatic side lighting casts long shadows, accentuating the curves of the headband and the depth of the ear cups against a minimalist white background.

A dynamic composition of premium over-ear headphones floating in space, with the headband and ear cups slightly separated to showcase individual components. Rim lighting outlines each piece, while a gradient background adds depth and sophistication.

A smiling student holding up her smartphone, displaying a green matte screen for easy image replacement, in a classroom setting.

Overhead view of a young man typing on a laptop with a green matte screen, surrounded by work materials on a wooden table.

Food photography

Monochromatic macarons arranged in precise geometric pattern. Strong shadow play. Architectural lighting. Minimal composition.

A pyramid of macarons in ombre pastels, arranged on a matte black slate surface. Dramatic side lighting from left. Close-up view highlighting texture of macaron shells. Garnished with edible gold leaf accents. Shot at f/2 aperture for shallow depth of field.

Disassembled macaron parts in zero-g chamber. Textured cookie halves, viscous filling streams, and scattered almond slivers drifting. High-contrast lighting with subtle shadows on off-white. Wide-angle shot showcasing full dispersal pattern.

Architectural design

A white cubic house with floor-to-ceiling windows, interior view from living room. Double-height space, floating steel staircase, polished concrete floors. Late afternoon sunbeams streaming across minimal furnishings. Ultra-wide architectural lens.

A white cubic house with floor-to-ceiling windows, kitchen and dining space. Monolithic marble island, integrated appliances, dramatic shadows from skylight above. Shot from a low angle with a wide-angle lens, emphasizing the height and openness of the space, late afternoon golden hour light streaming in.

An angular white modernist house featuring expansive glass walls, photographed for Architectural Digest’s cover. Misty morning atmosphere, elongated infinity pool creating a mirror image, three-quarter aerial view, lush coastal vegetation framing the scene.

A white cubic house with floor-to-ceiling windows presented as detailed architectural blueprints. Site plan view showing landscaping and property boundaries, technical annotations, blue background with white lines, precise measurements and zoning specifications visible.

A white cubic house with floor-to-ceiling windows in precise isometric projection. X-ray style rendering revealing internal framework, electrical wiring, and plumbing systems. Technical cross-hatching on load-bearing elements and foundation.

Concept art

A stylized digital painting of a bustling plaza in a futuristic eco-city, with soft impressionistic brushstrokes. Crystalline towers frame the scene, while suspended gardens create a canopy overhead. Holographic displays and eco-friendly vehicles add life to the foreground. Dreamlike and atmospheric, with glowing highlights in sapphire and rose gold.

A stylized digital painting of an elevated park in a futuristic eco-city, viewed from a high angle, with soft impressionistic brushstrokes. Crystalline towers peek through a canopy of trees, while winding elevated walkways connect floating garden platforms. People relax in harmony with nature. Dreamlike and atmospheric, with glowing highlights in jade and amber.

Concept art of a floating garden platform in a futuristic city, viewed from below. Translucent roots and hanging vines intertwine with advanced technology, creating a mesmerizing canopy. Soft bioluminescent lights pulse through the vegetation, casting ethereal patterns on the ocean’s surface. A gradient of deep purples and blues dominates the twilight sky.

An enchanted castle atop a misty cliff at sunrise, warm golden light bathing the ivy-covered spires. A wide-angle view capturing a flock of birds soaring past the tallest tower, set against a dramatic sky with streaks of orange and pink. Mystical ambiance and dynamic composition.

A magical castle rising from morning fog on a rugged cliff face, bathed in cool blue twilight. A low-angle shot showcasing the castle’s imposing silhouette against a star-filled sky, with a crescent moon peeking through wispy clouds. Mysterious mood and vertical composition emphasizing height.

An enchanted fortress clinging to a mist-shrouded cliff, caught in the moment between night and day. A panoramic view from below, revealing the castle’s reflection in a tranquil lake at the base of the cliff. Ethereal pink and purple hues in the sky, with a V-formation of birds flying towards the castle. Serene atmosphere and balanced symmetry.

Illustration

Japanese ink wash painting of a cute baby dragon with pearlescent mint-green scales and tiny wings curled up in a nest made of cherry blossom petals. Delicate brushstrokes, emphasis on negative space.

Art nouveau-inspired composition centered on an endearing dragon hatchling with gleaming mint-green scales. Sinuous morning glory stems and blossoms intertwine around the subject, creating a harmonious balance. Soft, dreamy pastels and characteristic decorative elements frame the scene.

Watercolor scene of a cute baby dragon with pearlescent mint-green scales crouched at the edge of a garden puddle, tiny wings raised. Soft pastel flowers and foliage frame the composition. Loose, wet-on-wet technique for a dreamy atmosphere, with sunlight glinting off ripples in the puddle.

A playful, hand-sculpted claymation-style baby dragon with pearlescent mint scales and tiny wings, sitting on a puffy marshmallow cloud. Its soft, rounded features and expressive googly eyes give it a lively, mischievous personality as it giggles and flaps its stubby wings, trying to take flight in a candy-colored sky.

A whimsical, animated-style render of a baby dragon with pearlescent mint scales nestled in a bed of oversized, bioluminescent flowers. The floating island garden is bathed in the warm glow of sunset, with fireflies twinkling like stars. Dynamic lighting accentuates the dragon’s playful expression.

Graphic design

A set of minimalist icons for a health tracking app. Dual-line design with 1.5px stroke weight on solid backgrounds. Each icon uses teal for the primary line and a lighter shade for the secondary line, with ample negative space. Icons maintain consistent 64x64px dimensions with centered compositions. Clean, professional aesthetic suitable for both light and dark modes.

Stylized art deco icons for fitness tracking. Geometric abstractions of health symbols with gold accents. Balanced designs incorporating circles, triangles, and zigzag motifs. Clean and sophisticated.

Set of charming wellness icons for digital health tracker. Organic, hand-drawn aesthetic with soft, curvy lines. Uplifting color combination of lemon yellow and fuchsia pink. Subtle size variations among icons for a dynamic, handcrafted feel.

Lush greenery tapestry in 16:9 panoramic view. Detailed monstera leaves overlap in foreground, giving way to intricate ferns and tendrils. Emerald and sage watercolor washes create atmospheric depth. Foliage density decreases towards center, suggesting an enchanted forest clearing.

Modern botanical line drawing in 16:9 widescreen. Forest green single-weight outlines of stylized foliage. Negative space concentrated in the center for optimal text placement. Geometric simplification of natural elements with a focus on curves and arcs.

3D sculptural typography spelling out “BRAVE” with each letter made from a different material, arranged in a dynamic composition.

Experimental typographic interpretation of “BRAVE” using abstract, interconnected geometric shapes that flow and blend organically. Hyper-detailed textures reminiscent of fractals and natural patterns create a mesmerizing, otherworldly appearance with sharp contrast.

A dreamy photograph overlaid with delicate pen-and-ink drawings, blending reality and fantasy to reveal hidden magic in ordinary moments.

Surreal digital collage blending organic and technological elements in a futuristic style.

Abstract figures emerging from digital screens, gradient color transitions, mixed textures, dynamic composition, conceptual narrative style.

Abstract humanoid forms materializing from multiple digital displays, vibrant color gradients flowing between screens, contrasting smooth and pixelated textures, asymmetrical layout with visual tension, surreal storytelling aesthetic.

Abstract figures emerging from digital screens, glitch art aesthetic with RGB color shifts, fragmented pixel clusters, high contrast scanlines, deep shadows cast by volumetric lighting.

Conclusion
The examples showcased here are just the beginning of what’s possible with Amazon Nova Canvas. For even greater control, you can guide generations with reference images, use custom color palettes, or make precise edits—such as swapping backgrounds or refining details—with simple inputs. Plus, with built-in safeguards such as watermarking and content moderation, Nova Canvas offers a responsible and secure creative experience. Whether you’re a professional creator, a marketing team, or an innovator with a vision, Nova Canvas provides the tools to bring your ideas to life.
We invite you to explore these possibilities yourself and discover how Nova Canvas can transform your creative process. Stay tuned for our next installment, where we’ll dive into the exciting world of video generation with Amazon Nova Reel.
Ready to start creating? Visit the Amazon Bedrock console today and bring your ideas to life with Nova Canvas. For more information about features, specifications, and additional examples, explore our documentation on creative content generation with Amazon Nova.

Creative content generation with Amazon Nova
Prompting best practices for Amazon Nova content creation models
Image and video prompt engineering for Amazon Nova Canvas and Amazon Nova Reel

About the authors
Yanyan Zhang is a Senior Generative AI Data Scientist at Amazon Web Services, where she has been working on cutting-edge AI/ML technologies as a Generative AI Specialist, helping customers use generative AI to achieve their desired outcomes. Yanyan graduated from Texas A&M University with a PhD in Electrical Engineering. Outside of work, she loves traveling, working out, and exploring new things.
Kris Schultz has spent over 25 years bringing engaging user experiences to life by combining emerging technologies with world class design. As Sr. Solutions Architect within Amazon AGI, he influences the development of Amazon’s first-party generative AI models. Kris is passionate about empowering users and creators of all types with generative AI tools and knowledge.
Sanju Sunny is a Generative AI Design Technologist with AWS Prototyping & Cloud Engineering (PACE), specializing in strategy, engineering, and customer experience. He collaborates with customers across diverse industries, leveraging Amazon’s customer-obsessed innovation mechanisms to rapidly conceptualize, validate, and prototype innovative products, services, and experiences.
Nitin Eusebius is a Sr. Enterprise Solutions Architect at AWS, experienced in Software Engineering, Enterprise Architecture, and AI/ML. He is deeply passionate about exploring the possibilities of generative AI. He collaborates with customers to help them build well-architected applications on the AWS platform, and is dedicated to solving technology challenges and assisting with their cloud journey.

Benchmarking Amazon Nova and GPT-4o models with FloTorch

Based on original post by Dr. Hemant Joshi, CTO, FloTorch.ai
A recent evaluation conducted by FloTorch compared the performance of Amazon Nova models with OpenAI’s GPT-4o.
Amazon Nova is a new generation of state-of-the-art foundation models (FMs) that deliver frontier intelligence and industry-leading price-performance. The Amazon Nova family of models includes Amazon Nova Micro, Amazon Nova Lite, and Amazon Nova Pro, which support text, image, and video inputs while generating text-based outputs. These models offer enterprises a range of capabilities, balancing accuracy, speed, and cost-efficiency.
Using its enterprise software, FloTorch conducted an extensive comparison between Amazon Nova models and OpenAI’s GPT-4o models with the Comprehensive Retrieval Augmented Generation (CRAG) benchmark dataset. FloTorch’s evaluation focused on three critical factors—latency, accuracy, and cost—across five diverse topics.
Key findings from the benchmark study:

GPT-4o demonstrated a slight advantage in accuracy over Amazon Nova Pro
Amazon Nova Pro outperformed GPT-4o in efficiency, operating 21.97% faster while being 65.26% more cost-effective
Amazon Nova Micro and Amazon Nova Lite outperformed GPT-4o mini by 4 and 2 percentage points in accuracy, respectively
In terms of affordability, Amazon Nova Micro and Amazon Nova Lite were 73.10% and 56.59% cheaper than GPT-4o mini, respectively
Amazon Nova Micro and Amazon Nova Lite also demonstrated faster response times, with 20.48% and 26.60% improvements, respectively

In this post, we discuss the findings from this benchmarking in more detail.
The growing need for cost-effective AI models
The landscape of generative AI is rapidly evolving. OpenAI launched GPT-4o in May 2024, and Amazon introduced Amazon Nova models at AWS re:Invent in December 2024. Although GPT-4o has gained traction in the AI community, enterprises are showing increased interest in Amazon Nova due to its lower latency and cost-effectiveness.
Large language models (LLMs) are generally proficient in responding to user queries, but they sometimes generate overly broad or inaccurate responses. Additionally, LLMs might provide answers that extend beyond the company-specific context, making them unsuitable for certain enterprise use cases.
One of the most critical applications for LLMs today is Retrieval Augmented Generation (RAG), which enables AI models to ground responses in enterprise knowledge bases such as PDFs, internal documents, and structured data. This is a crucial requirement for enterprises that want their AI systems to provide responses strictly within a defined scope.
To better serve enterprise customers, the evaluation aimed to answer three key questions:

How does Amazon Nova Pro compare to GPT-4o in terms of latency, cost, and accuracy?
How do Amazon Nova Micro and Amazon Nova Lite perform against GPT-4o mini in these same metrics?
How well do these models handle RAG use cases across different industry domains?

By addressing these questions, the evaluation provides enterprises with actionable insights into selecting the right AI models for their specific needs—whether optimizing for speed, accuracy, or cost-efficiency.
Overview of the CRAG benchmark dataset
The CRAG dataset was released by Meta for testing RAG systems with factual queries across five domains and eight question types, covering a large number of question-answer pairs. The five domains in the CRAG dataset are Finance, Sports, Music, Movie, and Open (miscellaneous). The eight question types are simple, simple_w_condition, comparison, aggregation, set, false_premise, post-processing, and multi-hop. The following table provides example questions with their domain and question type.

Domain
Question
Question Type

Sports
Can you carry less than the maximum number of clubs during a round of golf?
simple

Music
Can you tell me how many grammies were won by arlo guthrie until 60th grammy (2017)?
simple_w_condition

Open
Can i make cookies in an air fryer?
simple

Finance
Did meta have any mergers or acquisitions in 2022?
simple_w_condition

Movie
In 2016, which movie was distinguished for its visual effects at the oscars?
simple_w_condition

The evaluation considered 200 queries from this dataset, representing five domains and two question types: simple and simple_w_condition. Both question types are common from users, and a typical Google search for a query such as “Can you tell me how many grammies were won by arlo guthrie until 60th grammy (2017)?” will not return the correct answer (one Grammy). FloTorch used these queries and their ground truth answers to create a subset benchmark dataset. The CRAG dataset also provides the top five search result pages for each query. These five webpages act as a knowledge base (source data) to constrain the RAG model’s responses. The goal is to index these five webpages dynamically using a common embedding algorithm and then use a retrieval (and reranking) strategy to retrieve chunks of data from the indexed knowledge base to infer the final answer.
Evaluation setup
The RAG evaluation pipeline consists of several key components, as illustrated in the following diagram.

In this section, we explore each component in more detail.
Knowledge base
FloTorch used the top five HTML webpages provided with the CRAG dataset for each query as the knowledge base source data. HTML pages were parsed to extract text for the embedding stage.
Chunking strategy
FloTorch used a fixed chunking strategy with a chunk size of 512 tokens (four characters are usually around one token) and a 10% overlap between chunks. FloTorch plans further experiments with different chunking strategies, chunk sizes, and overlap percentages, and will update this post with the results.
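To make the strategy concrete, the following is a minimal sketch of fixed-size chunking with 10% overlap, assuming the rough four-characters-per-token approximation mentioned above; the helper name and character-based splitting are illustrative, not FloTorch’s implementation.

def chunk_text(text: str, chunk_tokens: int = 512, overlap_pct: float = 0.10, chars_per_token: int = 4):
    """Split text into fixed-size chunks with a percentage overlap (illustrative sketch)."""
    chunk_chars = chunk_tokens * chars_per_token      # ~2,048 characters per chunk
    step = int(chunk_chars * (1 - overlap_pct))       # advance so consecutive chunks share ~10%
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_chars]
        if chunk.strip():
            chunks.append(chunk)
    return chunks
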
Embedding strategy
FloTorch used the Amazon Titan Text Embeddings V2 model on Amazon Bedrock with an output vector size of 1024. With a maximum input token limit of 8,192 for the model, the system successfully embedded chunks from the knowledge base source data as well as short queries from the CRAG dataset efficiently. Amazon Bedrock APIs make it straightforward to use Amazon Titan Text Embeddings V2 for embedding data.
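As an illustration of this step, here is a minimal sketch of embedding a text chunk with Amazon Titan Text Embeddings V2 through the Amazon Bedrock runtime API; the Region and helper name are assumptions, not details from the study.

import json
import boto3

# Region is an assumption; use the Region where Amazon Bedrock is enabled for your account
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

def embed_text(text: str, dimensions: int = 1024, normalize: bool = True):
    """Return the Titan Text Embeddings V2 vector for a piece of text."""
    body = json.dumps({"inputText": text, "dimensions": dimensions, "normalize": normalize})
    response = bedrock_runtime.invoke_model(modelId="amazon.titan-embed-text-v2:0", body=body)
    return json.loads(response["body"].read())["embedding"]
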
Vector database
FloTorch selected Amazon OpenSearch Service as the vector database for its high-performance metrics. The implementation included a provisioned three-node sharded OpenSearch Service cluster. Each provisioned node was an r7g.4xlarge instance, selected for its availability and sufficient capacity to meet the performance requirements. FloTorch used HNSW indexing in OpenSearch Service.
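For reference, here is a hedged sketch of creating a k-NN index with HNSW in OpenSearch Service using opensearch-py; the index name, field names, engine, and distance metric are illustrative choices, not necessarily those used by FloTorch.

from opensearchpy import OpenSearch

# Authentication and SigV4 signing are omitted for brevity; the endpoint is a placeholder
os_client = OpenSearch(hosts=[{"host": "<your-opensearch-endpoint>", "port": 443}], use_ssl=True)

index_body = {
    "settings": {"index": {"knn": True}},
    "mappings": {
        "properties": {
            "text": {"type": "text"},
            "embedding": {
                "type": "knn_vector",
                "dimension": 1024,  # matches the Titan Text Embeddings V2 output size
                "method": {"name": "hnsw", "engine": "faiss", "space_type": "l2"},
            },
        }
    },
}
os_client.indices.create(index="crag_titan", body=index_body)
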
Retrieval (and reranking) strategy
FloTorch used a k-nearest neighbors (k-NN) retrieval strategy with k = 5 for retrieved chunks. The experiments excluded reranking algorithms to make sure retrieved chunks remained consistent for both models when inferring the answer to the provided query. The following code snippet embeds the given query and passes the embeddings to the search function:

import os
import logging
from typing import List

logger = logging.getLogger(__name__)

def search_results(interaction_ids: List[str], queries: List[str], k: int):
    """Retrieve search results for queries."""
    results = []
    embedding_max_length = int(os.getenv("EMBEDDING_MAX_LENGTH", 1024))
    normalize_embeddings = os.getenv("NORMALIZE_EMBEDDINGS", "True").lower() == "true"

    for interaction_id, query in zip(interaction_ids, queries):
        try:
            # Embed the query with Amazon Titan Text Embeddings V2, then run a k-NN search
            _, _, embedding = create_embeddings_with_titan_bedrock(query, embedding_max_length, normalize_embeddings)
            results.append(search(interaction_id + "_titan", embedding, k))
        except Exception as e:
            logger.error(f"Error processing query {query}: {e}")
            results.append(None)
    return results
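
The snippet above relies on a search helper that the post does not show. A hedged sketch of what such a k-NN lookup against the per-query OpenSearch index might look like follows; os_client is assumed to be an OpenSearch client for the provisioned domain, and the field names mirror the illustrative index mapping above.

def search(index_name: str, embedding: list, k: int):
    """Run a k-NN query against the per-query index and return the top-k chunks (illustrative sketch)."""
    query = {
        "size": k,
        "query": {"knn": {"embedding": {"vector": embedding, "k": k}}},
    }
    hits = os_client.search(index=index_name, body=query)["hits"]["hits"]
    return [{"text": hit["_source"]["text"], "score": hit["_score"]} for hit in hits]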

Inferencing
FloTorch accessed the GPT-4o model through the OpenAI API and the Amazon Nova Pro model through Amazon Bedrock conversation APIs. GPT-4o supports a context window of 128,000 tokens, compared to 300,000 tokens for Amazon Nova Pro. The maximum output token limit of GPT-4o is 16,384, vs. 5,000 for Amazon Nova Pro. The benchmarking experiments were conducted without Amazon Bedrock Guardrails functionality. The implementation used the universal gateway provided by the FloTorch enterprise version to enable consistent API calls using the same function and to track token count and latency metrics uniformly. The inference function code is as follows:

from tqdm import tqdm

def generate_responses(dataset_path: str, model_name: str, batch_size: int, api_endpoint: str, auth_header: str,
                       max_tokens: int, search_k: int, system_prompt: str):
    """Generate responses for queries."""
    results = []

    for batch in tqdm(load_data_in_batches(dataset_path, batch_size), desc="Generating responses"):
        interaction_ids = [item["interaction_id"] for item in batch]
        queries = [item["query"] for item in batch]
        # Retrieve the top-k chunks for each query and attach them to the batch items
        search_results_list = search_results(interaction_ids, queries, search_k)

        for i, item in enumerate(batch):
            item["search_results"] = search_results_list[i]

        responses = send_batch_request(batch, model_name, api_endpoint, auth_header, max_tokens, system_prompt)

        for i, response in enumerate(responses):
            results.append({
                "interaction_id": interaction_ids[i],
                "query": queries[i],
                "prediction": response.get("choices", [{}])[0].get("message", {}).get("content") if response else None,
                "response_time": response.get("response_time") if response else None,
                "response": response,
            })

    return results

Evaluation
Both models were evaluated by running batch queries. A batch of eight was selected to comply with Amazon Bedrock quota limits as well as GPT-4o rate limits. The query function code is as follows:

import time
import requests
from typing import Dict, List

def send_batch_request(batch: List[Dict], model_name: str, api_endpoint: str, auth_header: str, max_tokens: int,
                       system_prompt: str):
    """Send batch queries to the API."""
    headers = {"Authorization": auth_header, "Content-Type": "application/json"}
    responses = []

    for item in batch:
        query = item["query"]
        query_time = item["query_time"]
        retrieval_results = item.get("search_results", [])

        # Build the grounding context from the retrieved chunks
        references = "# References \n" + "\n".join(
            [f"Reference {_idx + 1}:\n{res['text']}\n" for _idx, res in enumerate(retrieval_results)])
        user_message = f"{references}\n------\n\nUsing only the references listed above, answer the following question:\nQuestion: {query}\n"

        payload = {
            "model": model_name,
            "messages": [{"role": "system", "content": system_prompt},
                         {"role": "user", "content": user_message}],
            "max_tokens": max_tokens,
        }

        try:
            start_time = time.time()
            response = requests.post(api_endpoint, headers=headers, json=payload, timeout=25000)
            response.raise_for_status()
            response_json = response.json()
            response_json["response_time"] = time.time() - start_time
            responses.append(response_json)
        except requests.RequestException as e:
            logger.error(f"API request failed for query: {query}. Error: {e}")
            responses.append(None)

    return responses

Benchmarking on the CRAG dataset
In this section, we discuss the latency, accuracy, and cost measurements of benchmarking on the CRAG dataset.
Latency
Latency for each query response was calculated as the difference between two timestamps: the timestamp when the API call is made to the inference LLM, and the timestamp when the entire response is received from the inference endpoint. A lower latency indicates a faster-performing LLM, making it suitable for applications requiring rapid response times. The study indicates that latency can be further reduced for both models through optimizations and caching techniques; however, the evaluation focused on measuring out-of-the-box latency performance for both models.
Accuracy
FloTorch used a modified version of the local_evaluation.py script provided with the CRAG benchmark for accuracy evaluations. The script was enhanced to provide proper categorization of correct, incorrect, and missing responses. The default GPT-4o evaluation LLM in the evaluation script was replaced with the mixtral-8x7b-instruct-v0:1 model API. Additional modifications to the script enabled monitoring of input and output tokens and latency as described earlier.
Cost
Cost calculations were straightforward because both Amazon Nova Pro and GPT-4o have published price per million input and output tokens separately. The calculation methodology involved multiplying input tokens by corresponding rates and applying the same process for output tokens. The total cost for running 200 queries was determined by combining input token and output token costs. OpenSearch Service provisioned cluster costs were excluded from this analysis because the cost comparison focused solely on the inference level between Amazon Nova Pro and GPT-4o LLMs.
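As a simple illustration of this methodology, the following sketch computes inference cost from token counts and per-million-token rates; the token counts and prices shown are placeholders for illustration, not the published rates or totals used in the study.

def inference_cost(input_tokens: int, output_tokens: int,
                   input_price_per_million: float, output_price_per_million: float) -> float:
    """Multiply token counts by their per-million-token rates and sum input and output costs."""
    return (input_tokens / 1_000_000) * input_price_per_million + \
           (output_tokens / 1_000_000) * output_price_per_million

# Example with hypothetical totals over 200 queries and hypothetical prices
total_cost = inference_cost(input_tokens=350_000, output_tokens=40_000,
                            input_price_per_million=0.80, output_price_per_million=3.20)
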
Results
The following table summarizes the results.

Metric
Amazon Nova Pro
GPT-4o
Observation

Accuracy on subset of the CRAG dataset
51.50% (103 correct responses out of 200)
53.00% (106 correct responses out of 200)
GPT-4o outperforms Amazon Nova Pro by 1.5 percentage points on accuracy

Cost for running inference for 200 queries
$0.00030205
$0.000869537
Amazon Nova Pro saves 65.26% in costs compared to GPT-4o

Average latency (seconds)
1.682539835
2.15615045
Amazon Nova Pro is 21.97% faster than GPT-4o

Average of input and output tokens
1946.621359
1782.707547
Typical GPT-4o responses are shorter than Amazon Nova responses

For simple queries, Amazon Nova Pro and GPT-4o have similar accuracy (55 and 56 correct responses, respectively), but for simple queries with conditions, GPT-4o performs slightly better than Amazon Nova Pro (50 vs. 48 correct answers). Imagine you are part of an organization running an AI assistant service that handles 1,000 questions per month from each of 10,000 users (10 million queries per month). Amazon Nova Pro would save your organization $5,674.88 per month ($68,098 per year) compared to GPT-4o.
Let’s look at similar results for Amazon Nova Micro, Amazon Nova Lite, and GPT-4o mini models on the same dataset.

Metric
Amazon Nova Lite
Amazon Nova Micro
GPT-4o mini
Observation

Accuracy on subset of the CRAG dataset
52.00% (104 correct responses out of 200)
54.00% (108 correct responses out of 200)
50.00% (100 correct responses out of 200)
Both Amazon Nova Lite and Amazon Nova Micro outperform GPT-4o mini, by 2 and 4 percentage points, respectively

Cost for running inference for 200 queries
$0.00002247 (56.59% cheaper than GPT-4o mini)
$0.000013924 (73.10% cheaper than GPT-4o mini)
$0.000051768
Amazon Nova Lite and Amazon Nova Micro are cheaper than GPT-4o mini by 56.59% and 73.10%, respectively

Average latency (seconds)
1.553371465 (26.60% faster than GPT-4o mini)
1.6828564 (20.48% faster than GPT-4o mini)
2.116291895
Amazon Nova models are at least 20% faster than GPT-4o mini

Average of input and output tokens
1930.980769
1940.166667
1789.54
GPT-4o mini returns shorter answers

Amazon Nova Micro is significantly faster and less expensive than GPT-4o mini while providing more accurate answers. If you run a service that handles about 10 million queries each month, switching to Amazon Nova Micro would cut inference costs by roughly 73% compared to GPT-4o mini, while delivering slightly more accurate results.
Conclusion
Based on these tests for RAG use cases, Amazon Nova models produce comparable or higher accuracy at significantly lower cost and latency than the GPT-4o and GPT-4o mini models. FloTorch is continuing further experimentation with other relevant LLMs for comparison. Future research will include additional experiments with other query types, such as comparison, aggregation, set, false_premise, post-processing, and multi-hop queries.
Get started with Amazon Nova on the Amazon Bedrock console. Learn more at the Amazon Nova product page.
About FloTorch
FloTorch.ai helps enterprise customers design and manage agentic workflows in a secure and scalable manner. FloTorch’s mission is to help enterprises make data-driven decisions in the end-to-end generative AI pipeline, including but not limited to model selection, vector database selection, and evaluation strategies. FloTorch offers an open source version for customers who want scalable experimentation with different chunking, embedding, retrieval, and inference strategies. The open source version works in a customer’s AWS account, so you can experiment in your own environment with your proprietary data. Interested users are invited to try out FloTorch from AWS Marketplace or from GitHub. FloTorch also offers an enterprise version of this product for scalable experimentation with LLM models and vector databases on cloud platforms. The enterprise version also includes a universal gateway with a model registry to custom define new LLMs and a recommendation engine to suggest new LLMs and agent workflows. For more information, contact us at info@flotorch.ai.

About the author
Prasanna Sridharan is a Principal Gen AI/ML Architect at AWS, specializing in designing and implementing AI/ML and Generative AI solutions for enterprise customers. With a passion for helping AWS customers build innovative Gen AI applications, he focuses on creating scalable, cutting-edge AI solutions that drive business transformation. You can connect with Prasanna on LinkedIn.
Dr. Hemant Joshi has over 20 years of industry experience building products and services with AI/ML technologies. As CTO of FloTorch, Hemant is engaged with customers to implement State of the Art GenAI solutions and agentic workflows for enterprises.

Deploy DeepSeek-R1 distilled models on Amazon SageMaker using a Large …

DeepSeek-R1 is a large language model (LLM) developed by DeepSeek AI that uses reinforcement learning to enhance reasoning capabilities through a multi-stage training process from a DeepSeek-V3-Base foundation. A key distinguishing feature is its reinforcement learning step, which was used to refine the model’s responses beyond the standard pre-training and fine-tuning process. By incorporating RL, DeepSeek-R1 can adapt more effectively to user feedback and objectives, ultimately enhancing both relevance and clarity. In addition, DeepSeek-R1 employs a chain-of-thought (CoT) approach, meaning it’s equipped to break down complex queries and reason through them in a step-by-step manner. This guided reasoning process allows the model to produce more accurate, transparent, and detailed answers. This model combines RL-based fine-tuning with CoT capabilities, aiming to generate structured responses while focusing on interpretability and user interaction. With its wide-ranging capabilities, DeepSeek-R1 has captured the industry’s attention as a versatile text-generation model that can be integrated into various workflows such as agents, logical reasoning, and data interpretation tasks.
DeepSeek-R1 uses a Mixture of Experts (MoE) architecture and is 671 billion parameters in size. The MoE architecture activates only 37 billion parameters per token, enabling efficient inference by routing queries to the most relevant expert clusters. This approach allows the model to specialize in different problem domains while maintaining overall efficiency.
DeepSeek-R1 distilled models bring the reasoning capabilities of the main R1 model to more efficient architectures based on popular open models like Meta’s Llama (8B and 70B) and Alibaba’s Qwen (1.5B, 7B, 14B, and 32B). Distillation refers to a process of training smaller, more efficient models to mimic the behavior and reasoning patterns of the larger DeepSeek-R1 model, using it as a teacher. For example, DeepSeek-R1-Distill-Llama-8B offers an excellent balance of performance and efficiency. By integrating this model with Amazon SageMaker AI, you can benefit from the AWS scalable infrastructure while maintaining high-quality language model capabilities.
In this post, we show how to use the distilled models in SageMaker AI, which offers several options to deploy the distilled versions of the R1 model.
Solution overview
You can use DeepSeek’s distilled models within the AWS managed machine learning (ML) infrastructure. We demonstrate how to deploy these models on SageMaker AI inference endpoints.
SageMaker AI offers a choice of which serving container to use for deployments:

LMI container – A Large Model Inference (LMI) container with different backends (vLLM, TensorRT-LLM, and Neuron). See the following GitHub repo for more details.
TGI container – A Hugging Face Text Generation Inference (TGI) container. You can find more details in the following GitHub repo.

In the following code snippets, we use the LMI container example. See the following GitHub repo for more deployment examples using TGI, TensorRT-LLM, and Neuron.
LMI containers
LMI containers are a set of high-performance Docker containers purpose built for LLM inference. With these containers, you can use high-performance open source inference libraries like vLLM, TensorRT-LLM, and Transformers NeuronX to deploy LLMs on SageMaker endpoints. These containers bundle together a model server with open source inference libraries to deliver an all-in-one LLM serving solution.
LMI containers provide many features, including:

Optimized inference performance for popular model architectures like Meta Llama, Mistral, Falcon, and more
Integration with open source inference libraries like vLLM, TensorRT-LLM, and Transformers NeuronX
Continuous batching for maximizing throughput at high concurrency
Token streaming
Quantization through AWQ, GPTQ, FP8, and more
Multi-GPU inference using tensor parallelism
Serving LoRA fine-tuned models
Text embedding to convert text data into numeric vectors
Speculative decoding support to decrease latency

LMI containers provide these features through integrations with popular inference libraries. A unified configuration format enables you to use the latest optimizations and technologies across libraries. To learn more about the LMI components, see Components of LMI.
Prerequisites
To run the example notebooks, you need an AWS account with an AWS Identity and Access Management (IAM) role with permissions to manage resources created. For details, refer to Create an AWS account.
If this is your first time working with Amazon SageMaker Studio, you first need to create a SageMaker domain. Additionally, you might need to request a service quota increase for the corresponding SageMaker hosting instances. In this example, you host the DeepSeek-R1-Distill-Llama-8B model on an ml.g5.2xlarge SageMaker hosting instance.
Deploy DeepSeek-R1 for inference
The following is a step-by-step example that demonstrates how to programmatically deploy DeepSeek-R1-Distill-Llama-8B for inference. The code for deploying the model is provided in the GitHub repo. You can clone the repo and run the notebook from SageMaker AI Studio.

Configure the SageMaker execution role and import the necessary libraries:

!pip install --force-reinstall --no-cache-dir sagemaker==2.235.2

import json
import boto3
import sagemaker

# Set up IAM Role
try:
    role = sagemaker.get_execution_role()
except ValueError:
    iam = boto3.client("iam")
    role = iam.get_role(RoleName="sagemaker_execution_role")["Role"]["Arn"]

There are two ways to deploy an LLM like DeepSeek-R1 or its distilled variants on SageMaker:

Deploy uncompressed model weights from an Amazon S3 bucket – In this scenario, you need to set the HF_MODEL_ID variable to the Amazon Simple Storage Service (Amazon S3) prefix that has the model artifacts. This method is generally much faster, with the model typically downloading in just a couple of minutes from Amazon S3. A configuration sketch for this option appears after the Hugging Face Hub example that follows.
Deploy directly from Hugging Face Hub (requires internet access) – To do this, set HF_MODEL_ID to the Hugging Face repository or model ID (for example, “deepseek-ai/DeepSeek-R1-Distill-Llama-8B”). However, this method tends to be slower and can take significantly longer to download the model compared to using Amazon S3. This approach will not work if enable_network_isolation is enabled, because it requires internet access to retrieve model artifacts from the Hugging Face Hub.

In this example, we deploy the model directly from the Hugging Face Hub:

vllm_config = {
    "HF_MODEL_ID": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    "OPTION_TENSOR_PARALLEL_DEGREE": "max",
    "OPTION_ROLLING_BATCH": "vllm",
    "OPTION_MAX_ROLLING_BATCH_SIZE": "16",
}

The OPTION_MAX_ROLLING_BATCH_SIZE parameter limits the number of concurrent requests that the endpoint can process. We set it to 16 to limit GPU memory requirements; adjust it based on your latency and throughput requirements.
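For completeness, here is a hedged sketch of the Amazon S3 variant described earlier (the first deployment option); the bucket and prefix are placeholders, and the remaining options mirror the Hugging Face Hub configuration above.

# Sketch of the S3-based configuration (option 1); point HF_MODEL_ID at a prefix containing uncompressed model artifacts
vllm_config_s3 = {
    "HF_MODEL_ID": "s3://<your-bucket>/deepseek-r1-distill-llama-8b/",
    "OPTION_TENSOR_PARALLEL_DEGREE": "max",
    "OPTION_ROLLING_BATCH": "vllm",
    "OPTION_MAX_ROLLING_BATCH_SIZE": "16",
}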

Create and deploy the model:

# Create a Model object
lmi_model = sagemaker.Model(
    image_uri=inference_image_uri,
    env=vllm_config,
    role=role,
    name=model_name,
    enable_network_isolation=True,  # Ensures model is isolated from the internet
    vpc_config={
        "Subnets": ["subnet-xxxxxxxx", "subnet-yyyyyyyy"],
        "SecurityGroupIds": ["sg-zzzzzzzz"],
    },
)
# Deploy to SageMaker
lmi_model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
    container_startup_health_check_timeout=1600,
    endpoint_name=endpoint_name,
)
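
The snippet above references inference_image_uri, model_name, and endpoint_name without defining them. The following is a minimal sketch of how they might be set; the image URI is a placeholder for the LMI container image published for your AWS Region and desired version, and the base names are illustrative.

import sagemaker

# Placeholder: substitute the LMI container image URI for your Region and version
inference_image_uri = "<LMI container image URI for your Region>"
model_name = sagemaker.utils.name_from_base("deepseek-r1-distill-llama-8b")
endpoint_name = sagemaker.utils.name_from_base("deepseek-r1-distill-llama-8b-ep")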

Make inference requests:

sagemaker_client = boto3.client("sagemaker-runtime", region_name="us-east-1")
# Attach a Predictor to the deployed endpoint (also used for clean-up later)
predictor = sagemaker.Predictor(endpoint_name=endpoint_name)

input_payload = {
    "inputs": "What is Amazon SageMaker? Answer concisely.",
    "parameters": {"max_new_tokens": 250, "temperature": 0.1}
}

serialized_payload = json.dumps(input_payload)

response = sagemaker_client.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType="application/json",
    Body=serialized_payload
)
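
Continuing from the request above, a small sketch of reading the generated text follows; the generated_text field assumes the default LMI response schema, so verify it against your container version.

# Read and decode the streaming response body returned by invoke_endpoint
result = json.loads(response["Body"].read().decode("utf-8"))
print(result.get("generated_text", result))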

Performance and cost considerations
The ml.g5.2xlarge instance provides a good balance of performance and cost. For large-scale inference, use larger batch sizes for real-time inference to optimize cost and performance. You can also use batch transform for offline, large-volume inference to reduce costs. Monitor endpoint usage to optimize costs.
Clean up
Clean up your resources when they’re no longer needed:

predictor.delete_endpoint()

Security
You can configure advanced security and infrastructure settings for the DeepSeek-R1 model, including virtual private cloud (VPC) networking, service role permissions, encryption settings, and EnableNetworkIsolation to restrict internet access. For production deployments, it’s essential to review these settings to maintain alignment with your organization’s security and compliance requirements.
By default, the model runs in a shared AWS managed VPC with internet access. To enhance security and control access, you should explicitly configure a private VPC with appropriate security groups and IAM policies based on your requirements.
SageMaker AI provides enterprise-grade security features to help keep your data and applications secure and private. We do not share your data with model providers, unless you direct us to, providing you full control over your data. This applies to all models—both proprietary and publicly available, including DeepSeek-R1 on SageMaker.
For more details, see Configure security in Amazon SageMaker AI.
Logging and monitoring
You can monitor SageMaker AI using Amazon CloudWatch, which collects and processes raw data into readable, near real-time metrics. These metrics are retained for 15 months, allowing you to analyze historical trends and gain deeper insights into your application’s performance and health.
Additionally, you can configure alarms to monitor specific thresholds and trigger notifications or automated actions when those thresholds are met, helping you proactively manage your deployment.
For more details, see Metrics for monitoring Amazon SageMaker AI with Amazon CloudWatch.
Best practices
It’s always recommended to deploy your LLM endpoints inside your VPC and behind a private subnet, without internet gateways and preferably with no egress. Ingress from the internet should also be blocked to minimize security risks.
Always apply guardrails to make sure incoming prompts and outgoing model responses are validated for safety, bias, and toxicity. You can guard your SageMaker endpoint model responses with Amazon Bedrock Guardrails. See DeepSeek-R1 model now available in Amazon Bedrock Marketplace and Amazon SageMaker JumpStart for more details.
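As one way to do this, the following sketch validates a SageMaker endpoint response with the Amazon Bedrock ApplyGuardrail API; the guardrail ID and version are placeholders for a guardrail you create in Amazon Bedrock beforehand.

import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")  # Region is an assumption

def passes_guardrail(model_output: str) -> bool:
    """Return True when the model output is not blocked by the configured guardrail."""
    result = bedrock_runtime.apply_guardrail(
        guardrailIdentifier="<your-guardrail-id>",
        guardrailVersion="1",
        source="OUTPUT",                              # validate model responses; use "INPUT" for prompts
        content=[{"text": {"text": model_output}}],
    )
    return result["action"] != "GUARDRAIL_INTERVENED"
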
Inference performance evaluation
In this section, we focus on the inference performance of DeepSeek-R1 distilled variants on SageMaker AI. Evaluating LLMs in terms of end-to-end latency, throughput, and resource efficiency is crucial for ensuring responsiveness, scalability, and cost-effectiveness in real-world applications. Optimizing these metrics directly impacts user experience, system reliability, and deployment feasibility at scale. For this post, we test all DeepSeek-R1 distilled variants—1.5B, 7B, 8B, 14B, 32B, and 70B—across four performance metrics:

End-to-end latency (time between sending a request and receiving the response)
Throughput (tokens per second)
Time to first token
Inter-token latency

The main purpose of this performance evaluation is to give you an indication of the relative performance of distilled R1 models on different hardware for generic traffic patterns. We didn’t try to optimize the performance for each model/hardware/use case combination. These results should not be treated as the best possible performance of a particular model on a particular instance type. You should always perform your own testing using your own datasets, traffic patterns, and I/O sequence lengths.
Scenarios
We tested the following scenarios:

Container/model configuration – We used LMI container v14 with default parameters, except MAX_MODEL_LEN, which was set to 10,000 (no chunked prefill and no prefix caching). On instances with multiple accelerators, we sharded the model across all available GPUs. A sketch of the corresponding container configuration appears after this list.
Tokens – We evaluated SageMaker endpoint-hosted DeepSeek-R1 distilled variants on performance benchmarks using two sample input token lengths. We ran each test 50 times and measured the average across the different metrics, first with concurrency 1 and then with concurrency 10.

Short-length test – 512 input tokens and 256 output tokens.
Medium-length test – 3072 input tokens and 256 output tokens.

Hardware – We tested the distilled variants on a variety of instance types with 1, 4, or 8 GPUs per instance. In the following table, a green cell indicates that a model was tested on that particular instance type, and a red cell indicates that a model wasn’t tested on that instance type, either because the instance was excessive for the given model size or too small to fit the model in memory.
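
As a reference point for the container setup described in the first list item, here is a hedged sketch of the corresponding LMI environment configuration; the option names follow the LMI OPTION_* convention, but treat the values as assumptions and verify the exact options against the LMI documentation for your container version.

# Illustrative benchmark configuration for one distilled variant (not the exact settings used in the tests)
benchmark_config = {
    "HF_MODEL_ID": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",  # swap in the variant under test
    "OPTION_ROLLING_BATCH": "vllm",
    "OPTION_MAX_MODEL_LEN": "10000",                            # matches the scenario described above
    "OPTION_TENSOR_PARALLEL_DEGREE": "max",                     # shard across all GPUs on multi-GPU instances
}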

Box plots
In the following sections, we use a box plot to visualize model performance. A box plot is a concise visual summary that displays a dataset’s median, interquartile range (IQR), and potential outliers using a box for the middle 50% of the data, with whiskers extending to the smallest and largest non-outlier values. By examining the median’s placement within the box, the box’s size, and the whiskers’ lengths, you can quickly assess the data’s central tendency, variability, and skewness, as illustrated in the following figure.
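If you want to reproduce this style of visualization for your own measurements, a minimal matplotlib sketch follows; the latency values are made up for illustration only.

import matplotlib.pyplot as plt

# Hypothetical end-to-end latency samples (seconds) per instance type
latencies = {
    "ml.g5.xlarge": [1.4, 1.5, 1.6, 1.7, 2.1],
    "ml.g6.xlarge": [1.6, 1.7, 1.9, 2.0, 2.6],
}
plt.boxplot(list(latencies.values()), labels=list(latencies.keys()))
plt.ylabel("End-to-end latency (s)")
plt.title("Concurrency = 1")
plt.show()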

DeepSeek-R1-Distill-Qwen-1.5B
This model can be deployed on a single GPU instance. The results indicate that the ml.g5.xlarge instance outperforms the ml.g6.xlarge instance across all measured performance criteria and concurrency settings.
The following figure illustrates testing with concurrency = 1.

The following figure illustrates testing with concurrency = 10.

DeepSeek-R1-Distill-Qwen-7B
DeepSeek-R1-Distill-Qwen-7B was tested on ml.g5.2xlarge and ml.g6e.2xlarge. Among all instances, ml.g6e.2xlarge demonstrated the highest performance.
The following figure illustrates testing with concurrency = 1.

The following figure illustrates testing with concurrency = 10.

DeepSeek-R1-Distill-Llama-8B
DeepSeek-R1-Distill-Llama-8B was benchmarked across ml.g5.2xlarge, ml.g5.12xlarge, ml.g6e.2xlarge, and ml.g6e.12xlarge, with ml.g6e.12xlarge demonstrating the highest performance among all instances.
The following figure illustrates testing with concurrency = 1.

The following figure illustrates testing with concurrency = 10.

DeepSeek-R1-Distill-Qwen-14B
We tested this model on ml.g6.12xlarge, ml.g5.12xlarge, ml.g6e.48xlarge, and ml.g6e.12xlarge. The instance with 8 GPUs (ml.g6e.48xlarge) showed the best results.
The following figure illustrates testing with concurrency = 1.

The following figure illustrates testing with concurrency = 10.

DeepSeek-R1-Distill-Qwen-32B
This is a fairly large model, and we only deployed it on multi-GPU instances: ml.g6.12xlarge, ml.g5.12xlarge, and ml.g6e.12xlarge. The latest generation (ml.g6e.12xlarge) showed the best performance across all concurrency settings.
The following figure illustrates testing with concurrency = 1.

The following figure illustrates testing with concurrency = 10.

DeepSeek-R1-Distill-Llama-70B
We tested this model on two different 8-GPU instances: ml.g6e.48xlarge and ml.p4d.24xlarge. The latter showed the best performance.
The following figure illustrates testing with concurrency = 1.

The following figure illustrates testing with concurrency = 10.

Conclusion
Deploying DeepSeek models on SageMaker AI provides a robust solution for organizations seeking to use state-of-the-art language models in their applications. The combination of DeepSeek’s powerful models and SageMaker AI managed infrastructure offers a scalable and efficient approach to natural language processing tasks.
The performance evaluation section presents a comprehensive assessment of all DeepSeek-R1 distilled models across four key inference metrics, using 13 different NVIDIA accelerator instance types. This analysis offers valuable insights to help you select the optimal instance type for deploying the DeepSeek-R1 solution.
Check out the complete code in the following GitHub repos:

Deploy a DeepSeek model Quantized LLaMA 3.1 70B Instruct Model Using SageMaker Endpoints and SageMaker Large Model Inference (LMI) Container
Deploy DeepSeek R1 Large Language Model from HuggingFace Hub on Amazon SageMaker
Deploy deepseek-ai/DeepSeek-R1-Distill-* models on Amazon SageMaker using LMI container
Deploy DeepSeek R1 Llama on AWS Inferentia using SageMaker Large Model Inference Container
Interactive fine-tuning of Foundation Models with Amazon SageMaker Training using @remote decorator

For additional resources, refer to:

Amazon SageMaker Documentation
Deepseek Model Hub
Hugging Face on Amazon SageMaker

About the Authors
Dmitry Soldatkin is a Senior AI/ML Solutions Architect at Amazon Web Services (AWS), helping customers design and build AI/ML solutions. Dmitry’s work covers a wide range of ML use cases, with a primary interest in Generative AI, deep learning, and scaling ML across the enterprise. He has helped companies in many industries, including insurance, financial services, utilities, and telecommunications. You can connect with Dmitry on LinkedIn.
Vivek Gangasani is a Lead Specialist Solutions Architect for Inference at AWS. He helps emerging generative AI companies build innovative solutions using AWS services and accelerated compute. Currently, he is focused on developing strategies for fine-tuning and optimizing the inference performance of large language models. In his free time, Vivek enjoys hiking, watching movies, and trying different cuisines.
Prasanna Sridharan is a Principal Gen AI/ML Architect at AWS, specializing in designing and implementing AI/ML and Generative AI solutions for enterprise customers. With a passion for helping AWS customers build innovative Gen AI applications, he focuses on creating scalable, cutting-edge AI solutions that drive business transformation. You can connect with Prasanna on LinkedIn.
Pranav Murthy is an AI/ML Specialist Solutions Architect at AWS. He focuses on helping customers build, train, deploy and migrate machine learning (ML) workloads to SageMaker. He previously worked in the semiconductor industry developing large computer vision (CV) and natural language processing (NLP) models to improve semiconductor processes using state of the art ML techniques. In his free time, he enjoys playing chess and traveling. You can find Pranav on LinkedIn.