Build scalable containerized RAG based generative AI applications in A …

Generative artificial intelligence (AI) applications are commonly built using a technique called Retrieval Augmented Generation (RAG) that provides foundation models (FMs) access to additional data they didn’t have during training. This data is used to enrich the generative AI prompt to deliver more context-specific and accurate responses without continuously retraining the FM, while also improving transparency and minimizing hallucinations.
In this post, we demonstrate a solution using Amazon Elastic Kubernetes Service (EKS) with Amazon Bedrock to build scalable and containerized RAG solutions for your generative AI applications on AWS while bringing your unstructured user file data to Amazon Bedrock in a straightforward, fast, and secure way.
Amazon EKS provides a scalable, secure, and cost-efficient environment for building RAG applications with Amazon Bedrock, and it enables efficient deployment and monitoring of AI-driven workloads that use Bedrock FMs for inference. It enhances performance with optimized compute instances, auto-scales GPU workloads while reducing costs through Amazon EC2 Spot Instances and AWS Fargate, and provides enterprise-grade security through native AWS mechanisms such as Amazon VPC networking and AWS IAM.
Our solution uses Amazon S3 as the source of unstructured data and populates an Amazon OpenSearch Serverless vector database through Amazon Bedrock Knowledge Bases with the user’s existing files, folders, and associated metadata. This enables a RAG scenario with Amazon Bedrock by enriching the generative AI prompt, using Amazon Bedrock APIs, with your company-specific data retrieved from the OpenSearch Serverless vector database.
Solution overview
The solution uses Amazon EKS managed node groups to automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for the Amazon EKS Kubernetes cluster. Every managed node in the cluster is provisioned as part of an Amazon EC2 Auto Scaling group that’s managed for you by EKS.
The EKS cluster consists of a Kubernetes deployment that runs across two Availability Zones for high availability, where each node in the deployment hosts multiple replicas of a Bedrock RAG container image registered in and pulled from Amazon Elastic Container Registry (ECR). This setup makes sure that resources are used efficiently, scaling up or down based on demand. The Horizontal Pod Autoscaler (HPA) is set up to further scale the number of pods in our deployment based on their CPU utilization.
The RAG Retrieval Application container uses Bedrock Knowledge Bases APIs and Anthropic’s Claude 3.5 Sonnet LLM hosted on Bedrock to implement a RAG workflow. The solution provides the end user with a scalable endpoint to access the RAG workflow using a Kubernetes service that is fronted by an Amazon Application Load Balancer (ALB) provisioned via an EKS ingress controller.
The RAG Retrieval Application container orchestrated by EKS enables RAG with Amazon Bedrock by enriching the generative AI prompt received from the ALB endpoint with data retrieved from an OpenSearch Serverless index that is synced via Bedrock Knowledge Bases from your company-specific data uploaded to Amazon S3.
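The core retrieval logic can be expressed with a few Amazon Bedrock API calls. The following is a minimal sketch (not the exact code from the sample repository) of how such a container might call the Bedrock Knowledge Bases RetrieveAndGenerate API with boto3; the Region, knowledge base ID, and Claude 3.5 Sonnet model identifier are assumptions you would replace with your own values.

import boto3

# Assumptions: Region, knowledge base ID, and model identifier are placeholders for your own values.
REGION = "us-east-1"
KB_ID = "<Knowledge Base ID>"
MODEL_ARN = f"arn:aws:bedrock:{REGION}::foundation-model/anthropic.claude-3-5-sonnet-20240620-v1:0"

bedrock_agent_runtime = boto3.client("bedrock-agent-runtime", region_name=REGION)

def answer(prompt: str) -> str:
    # Retrieve relevant chunks from the OpenSearch Serverless index behind the knowledge base
    # and let Claude 3.5 Sonnet generate a grounded response in a single call.
    response = bedrock_agent_runtime.retrieve_and_generate(
        input={"text": prompt},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": KB_ID,
                "modelArn": MODEL_ARN,
            },
        },
    )
    return response["output"]["text"]

print(answer("What is a Bedrock knowledge base?"))

The sample container exposes similar logic behind the /query endpoint that the ALB fronts.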
The following architecture diagram illustrates the various components of our solution:

Prerequisites
Complete the following prerequisites:

Ensure model access in Amazon Bedrock. In this solution, we use Anthropic’s Claude 3.5 Sonnet on Amazon Bedrock.
Install the AWS Command Line Interface (AWS CLI).
Install Docker.
Install Kubectl.
Install Terraform.

Deploy the solution
The solution is available for download on the GitHub repo. Cloning the repository and using the Terraform template will provision the components with their required configurations:

Clone the Git repository:

sudo yum install -y unzip
git clone https://github.com/aws-samples/genai-bedrock-serverless.git
cd eksbedrock/terraform

From the terraform folder, deploy the solution using Terraform:

terraform init
terraform apply -auto-approve

Configure EKS

Configure a secret for the ECR registry:

aws ecr get-login-password --region <aws_region> | docker login --username AWS --password-stdin <your account id>.dkr.ecr.<aws_region>.amazonaws.com/bedrockragrepo

docker pull <your account id>.dkr.ecr.<aws_region>.amazonaws.com/bedrockragrepo:latest

aws eks update-kubeconfig --region <aws_region> --name eksbedrock

kubectl create secret docker-registry ecr-secret \
  --docker-server=<your account id>.dkr.ecr.<aws_region>.amazonaws.com \
  --docker-username=AWS \
  --docker-password=$(aws ecr get-login-password --region <aws_region>)

Navigate to the kubernetes/ingress folder:

Make sure that the AWS_Region variable in the bedrockragconfigmap.yaml file points to your AWS region.
Replace the image URI in line 20 of the bedrockragdeployment.yaml file with the image URI of your bedrockrag image from your ECR repository.

Provision the EKS deployment, service and ingress:

cd ..
kubectl apply -f ingress/

Create a knowledge base and upload data
To create a knowledge base and upload data, follow these steps:

Create an S3 bucket and upload your data into the bucket. For this post, we uploaded two files, the Amazon Bedrock User Guide and the Amazon FSx for ONTAP User Guide, into our S3 bucket.
Create an Amazon Bedrock knowledge base. Follow the steps here to create a knowledge base. Accept all the defaults, including the Quick create a new vector store option in Step 7 of the instructions, which creates an Amazon OpenSearch Serverless vector search collection as your knowledge base.

In Step 5c of the instructions to create a knowledge base, provide the S3 URI of the object containing the files for the knowledge base data source.
Once the knowledge base is provisioned, obtain the Knowledge Base ID from the Bedrock Knowledge Bases console for your newly created knowledge base.
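If you prefer to script this step, the knowledge base ID can also be retrieved with the AWS SDK. The following is a small sketch using boto3; the Region is an assumption.

import boto3

bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")  # Region is an assumption

# List the knowledge bases in the account and print their IDs, names, and statuses.
for kb in bedrock_agent.list_knowledge_bases()["knowledgeBaseSummaries"]:
    print(kb["knowledgeBaseId"], kb["name"], kb["status"])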

Query using the Application Load Balancer
You can query the model directly using the API front end provided by the Application Load Balancer (ALB) that the Kubernetes (EKS) ingress controller provisioned. Navigate to the Load Balancers page on the Amazon EC2 console and obtain the DNS name of your ALB to use as your API endpoint:

curl -X POST "<ALB DNS name>/query" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "What is a bedrock knowledgebase?", "kbId": "<Knowledge Base ID>"}'

Cleanup
To avoid recurring charges, clean up your account after trying the solution:

From the terraform folder, destroy the resources provisioned by the Terraform template: terraform apply -destroy
Delete the Amazon Bedrock knowledge base. From the Amazon Bedrock console, select the knowledge base you created in this solution, select Delete, and follow the steps to delete the knowledge base.

Conclusion
In this post, we demonstrated a solution that uses Amazon EKS with Amazon Bedrock and provides you with a framework to build your own containerized, automated, scalable, and highly available RAG-based generative AI applications on AWS. Using Amazon S3 and Amazon Bedrock Knowledge Bases, our solution automates bringing your unstructured user file data to Amazon Bedrock within the containerized framework. You can use the approach demonstrated in this solution to automate and containerize your AI-driven workloads while using Amazon Bedrock FMs for inference with built-in efficient deployment, scalability, and availability from a Kubernetes-based containerized deployment.
For more information about how to get started building with Amazon Bedrock and EKS for RAG scenarios, refer to the following resources:

Amazon Bedrock Workshop GitHub repo
Amazon EKS Workshop
Build RAG-based generative AI applications in AWS using Amazon Bedrock and Amazon FSx for NetApp ONTAP

About the Authors
Kanishk Mahajan is Principal, Solutions Architecture at AWS. He leads cloud transformation and solution architecture for AWS customers and partners. Kanishk specializes in containers, cloud operations, migrations and modernizations, AI/ML, resilience, and security and compliance. He is a Technical Field Community (TFC) member in each of those domains at AWS.
Sandeep Batchu is a Senior Security Architect at Amazon Web Services, with extensive experience in software engineering, solutions architecture, and cybersecurity. Passionate about bridging business outcomes with technological innovation, Sandeep guides customers through their cloud journey, helping them design and implement secure, scalable, flexible, and resilient cloud architectures.

How Hexagon built an AI assistant using AWS generative AI services

This post was co-written with Julio P. Roque of Hexagon ALI.
Recognizing the transformative benefits of generative AI for enterprises, we at Hexagon’s Asset Lifecycle Intelligence division sought to enhance how users interact with our Enterprise Asset Management (EAM) products. Understanding these advantages, we partnered with AWS to embark on a journey to develop HxGN Alix, an AI-powered digital worker using AWS generative AI services. This blog post explores the strategy, development, and implementation of HxGN Alix, demonstrating how a tailored AI solution can drive efficiency and enhance user satisfaction.
Forming a generative AI strategy: Security, accuracy, and sustainability
Our journey to build HxGN Alix was guided by a strategic approach focused on customer needs, business requirements, and technological considerations. In this section, we describe the key components of our strategy.
Understanding consumer generative AI and enterprise generative AI
Generative AI serves diverse purposes, with consumer and enterprise applications differing in scope and focus. Consumer generative AI tools are designed for broad accessibility, enabling users to perform everyday tasks such as drafting content, generating images, or answering general inquiries. In contrast, enterprise generative AI is tailored to address specific business challenges, including scalability, security, and seamless integration with existing workflows. These systems often integrate with enterprise infrastructures, prioritize data privacy, and use proprietary datasets to provide relevance and accuracy. This customization allows businesses to optimize operations, enhance decision-making, and maintain control over their intellectual property.

Commercial compared to open source LLMs
We used multiple evaluation criteria, as illustrated in the following figure, to determine whether to use a commercial or open source large language model (LLM).

The evaluation criteria are as follows:

Cost management – Help avoid unpredictable expenses associated with LLMs.
Customization – Tailor the model to understand domain-specific terminology and context.
Intellectual property and licensing – Maintain control over data usage and compliance.
Data privacy – Uphold strict confidentiality and adherence to security requirements.
Control over the model lifecycle – By using open source LLMs, we’re able to control the lifecycle of model customizations based on business needs. This control makes sure updates, enhancements, and maintenance of the model are aligned with evolving business objectives without dependency on third-party providers.

The path to enterprise generative AI: Crawl, walk, run
By adopting a phased approach (as shown in the following figure), we were able to manage development effectively. Because the technology is new, it was paramount to carefully build the right foundation for adoption of generative AI across different business units.

The phases of the approach are:

Crawl – Establish foundational infrastructure with a focus on data privacy and security. This phase focused on establishing a secure and compliant foundation to enable the responsible adoption of generative AI. Key priorities included implementing guardrails around security, compliance, and data privacy, making sure that customer and enterprise data remained protected within well-defined access controls. Additionally, we focused on capacity management and cost governance, making sure that AI workloads operated efficiently while maintaining financial predictability. This phase was critical in setting up the necessary policies, monitoring mechanisms, and architectural patterns to support long-term scalability.
Walk – Integrate customer-specific data to enhance relevance while maintaining tenant-level security. With a solid foundation in place, we transitioned from proof of concept to production-grade implementations. This phase was characterized by deepening our technical expertise, refining operational processes, and gaining real-world experience with generative AI models. As we integrated domain-specific data to improve relevance and usability, we continued to reinforce tenant-level security to provide proper data segregation. The goal of this phase was to validate AI-driven solutions in real-world scenarios, iterating on workflows, improving accuracy, and optimizing performance for production deployment.
Run – Develop high-value use cases tailored to customer needs, enhancing productivity and decision-making. Using the foundations established in the walk phase, we moved toward scaling development across multiple teams in a structured and repeatable manner. By standardizing best practices and development frameworks, we enabled different products to adopt AI capabilities efficiently. At this stage, we focused on delivering high-value use cases that directly enhanced customer productivity, decision-making, and operational efficiency.

Identifying the right use case: Digital worker
A critical part of our strategy was identifying a use case that would offer the best return on investment (ROI), depicted in the following figure. We pinpointed the development of a digital worker as an optimal use case because of its potential to:

Enhance productivity – Recognizing that the productivity of any AI solution lies in a digital worker capable of handling advanced and nuanced domain-specific tasks
Improve efficiency – Automate routine tasks and streamline workflows
Enhance user experience – Provide immediate, accurate responses to user inquiries
Support high security environments – Operate within stringent security parameters required by clients

By focusing on a digital worker, we aimed to deliver significant value to both internal teams and end-users.
Introducing Alix: A digital worker for asset lifecycle intelligence
HxGN Alix is our AI-powered chat assistant designed to act as a digital worker to revolutionize user interaction with EAM products. Developed to operate securely within high-security environments, HxGN Alix serves multiple functions:

Streamline information access – Provide users with quick, accurate answers, alleviating the need to navigate extensive PDF manuals
Enhance internal workflows – Assist Customer Success managers and Customer Support teams with efficient information retrieval
Improve customer satisfaction – Offer EAM end-users an intuitive tool to engage with, thereby elevating their overall experience

By delivering a tailored, AI-driven approach, HxGN Alix addresses specific challenges faced by our clients, transforming the user experience while upholding stringent security standards.
Understanding system needs to guide technology selection
Before selecting the appropriate technology stack for HxGN Alix, we first identified the high-level system components and expectations of our AI assistant infrastructure. Through this process, we made sure that we understood the core components required to build a robust and scalable solution. The following figure illustrates the core components that we identified.

The non-functional requirements are:

Regional failover – Maintain system resilience with the ability to fail over seamlessly in case of Regional outages, preserving service availability.
Model lifecycle management – Establish a reliable mechanism for customizing and deploying machine learning models.
LLM hosting – Host the AI models in an environment that provides stability, scalability, and adheres to our high-security requirements.
Multilingual capabilities – Make sure that the assistant can communicate effectively in multiple languages to cater to our diverse user base.
Safety tools – Incorporate safeguards to promote safe and responsible AI use, particularly with regard to data protection and user interactions.
Data storage – Provide secure storage solutions for managing product documentation and user data, adhering to industry security standards.
Retrieval Augmented Generation (RAG) – Enhance the assistant’s ability to retrieve relevant information from stored documents, thereby improving response accuracy and providing grounded answers.
Text embeddings – Use text embeddings to represent and retrieve relevant data, making sure that high-accuracy retrieval tasks are efficiently managed.

Choosing the right technology stack
To develop HxGN Alix, we selected a combination of AWS generative AI services and complementary technologies, focusing on scalability, customization, and security. We finalized the following architecture to serve our technical needs.

The AWS services include:

Amazon Elastic Kubernetes Service (Amazon EKS) – We used Amazon EKS for compute and model deployment. It facilitates efficient deployment and management of Alix’s models, providing high availability and scalability. We were able to use our existing EKS cluster, which already had the required safety, manageability, and integration with our DevOps environment. This allowed for seamless integration and used existing investments in infrastructure and tooling.
Amazon Elastic Compute Cloud (Amazon EC2) G6e instances – AWS provides comprehensive, secure, and cost-effective AI infrastructure. We selected G6e.48xlarge instances powered by NVIDIA L40S GPUs—the most cost-efficient GPU instances for deploying generative AI models under 12 billion parameters.
Mistral NeMo – We chose Mistral NeMo, a 12-billion parameter open source LLM built in collaboration with NVIDIA and released under the Apache 2.0 license. Mistral NeMo offers a large context window of up to 128,000 tokens and is designed for global, multilingual applications. It’s optimized for function calling and performs strongly in multiple languages, including English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, Korean, Arabic, and Hindi. The model’s multilingual capabilities and optimization for function calling aligned well with our needs.
Amazon Bedrock Guardrails – Amazon Bedrock Guardrails provides a comprehensive framework for enforcing safety and compliance within AI applications. It enables the customization of filtering policies, making sure that AI-generated responses align with organizational standards and regulatory requirements. With built-in capabilities to detect and mitigate harmful content, Amazon Bedrock Guardrails enhances user trust and safety while maintaining high performance in AI deployments. This service allows us to define content moderation rules, restrict sensitive topics, and establish enterprise-level security for generative AI interactions. A minimal sketch of invoking a guardrail programmatically appears after this list.
Amazon Simple Storage Service (Amazon S3) – Amazon S3 provides secure storage for managing product documentation and user data, adhering to industry security standards.
Amazon Bedrock Knowledge Bases – Amazon Bedrock Knowledge Bases enhances Alix’s ability to retrieve relevant information from stored documents, improving response accuracy. This service stood out as a managed RAG solution, handling the heavy lifting and enabling us to experiment with different strategies and solve complex challenges efficiently. More on this is discussed in the development journey.
Amazon Bedrock – We used Amazon Bedrock as a fallback solution to handle Regional failures. In the event of zonal or Regional outages, the system can fall back to the Mistral 7B model using Amazon Bedrock multi-Region endpoints, maintaining uninterrupted service.
Amazon Bedrock Prompt Management – This feature of Amazon Bedrock simplifies the creation, evaluation, versioning, and sharing of prompts within the engineering team to get the best responses from foundation models (FMs) for our use cases.
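As referenced in the Amazon Bedrock Guardrails item above, guardrails can be applied to a self-hosted model’s output through the standalone ApplyGuardrail API. The following is a minimal, illustrative boto3 sketch; the guardrail ID, version, Region, and variable contents are placeholders, not Hexagon’s actual configuration.

import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")  # Region is an assumption

user_question = "What does the work order module do?"            # placeholder values for illustration
retrieved_context = "Documentation excerpts returned by RAG..."  # placeholder
model_answer = "Draft answer produced by the self-hosted LLM..." # placeholder

# Run contextual grounding and content checks on the model output before returning it to the user.
result = bedrock_runtime.apply_guardrail(
    guardrailIdentifier="<guardrail-id>",  # placeholder
    guardrailVersion="1",
    source="OUTPUT",
    content=[
        {"text": {"text": retrieved_context, "qualifiers": ["grounding_source"]}},
        {"text": {"text": user_question, "qualifiers": ["query"]}},
        {"text": {"text": model_answer, "qualifiers": ["guard_content"]}},
    ],
)

if result["action"] == "GUARDRAIL_INTERVENED":
    model_answer = "I'm unable to provide a grounded answer to that question."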

The development journey
We embarked on the development of HxGN Alix through a structured, phased approach.
The proof of concept
We initiated the project by creating a proof of concept to validate the feasibility of an AI assistant tailored for secure environments. Although the industry has seen various AI assistants, the primary goal of the proof of concept was to make sure that we could develop a solution while adhering to our high security standards, which required full control over the manageability of the solution.
During the proof of concept, we scoped the project to use an off-the-shelf NeMo model deployed on our existing EKS cluster without integrating internal knowledge bases. This approach helped us verify the ability to integrate the solution with existing products, control costs, provide scalability, and maintain security—minimizing the risk of late-stage discoveries.
After releasing the proof of concept to a small set of internal users, we identified a healthy backlog of work items that needed to be addressed before going live, including enhancements in security, architectural improvements, network topology adjustments, prompt management, and product integration.
Security enhancements
To adhere to the stringent security requirements of our customers, we used the secure infrastructure provided by AWS. With models deployed in our existing production EKS environment, we were able to use existing tooling for security and monitoring. Additionally, we used isolated private subnets to make sure that code interacting with models wasn’t connected to the internet, further enhancing information protection for users.
Because user interactions are in free-text format and users might input content including personally identifiable information (PII), it was critical not to store any user interactions in any format. This approach provided complete confidentiality of AI use, adhering to strict data privacy standards.
Adjusting response accuracy
During the proof of concept, it became clear that integrating the digital worker with our products was essential. Base models had limited knowledge of our products and often produced hallucinations. We had to choose between pretraining the model with internal documentation or implementing RAG. RAG became the obvious choice for the following reasons:

We were in the early stages of development and didn’t have enough data to pretrain our models
RAG helps ground the model’s responses in accurate context by retrieving relevant information, reducing hallucinations

Implementing a RAG system presented its own challenges and required experimentation. Key challenges are depicted in the following figure.

These challenges include:

Destruction of context when chunking documents – The first step in RAG is to chunk documents to transform them into vectors for meaningful text representation. However, applying this method to tables or complex structures risks losing relational data, which can result in critical information not being retrieved, causing the LLM to provide inaccurate answers. We evaluated various strategies to preserve context during chunking, verifying that important relationships within the data were maintained. To address this, we used the hierarchical chunking capability of Amazon Bedrock Knowledge Bases, which helped us preserve the context in the final chunk.
Handling documents in different formats – Our product documentation, accumulated over decades, varied greatly in format. The presence of non-textual elements, such as tables, posed significant challenges. Tables can be difficult to interpret when directly queried from PDFs or Word documents. To address this, we normalized and converted these documents into consistent formats suitable for the RAG system, enhancing the model’s ability to retrieve and interpret information accurately. We used the FM parsing capability of Amazon Bedrock Knowledge Bases, which processed the raw document with an LLM before creating a final chunk, verifying that data from non-textual elements was also correctly interpreted.
Handling LLM boundaries – User queries sometimes exceed the system’s capabilities, such as when they request comprehensive information, like a complete list of product features. Because our documentation is split into multiple chunks, the retrieval system might not return all the necessary documents. To address this, we adjusted the system’s responses so the AI agent could provide coherent and complete answers despite limitations in the retrieved context. We created custom documents containing FAQs and special instructions for these cases and added them to the knowledge base. These acted as few-shot examples, helping the model produce more accurate and complete responses.
Grounding responses – By nature, an LLM completes sentences based on probability, predicting the next word or phrase by evaluating patterns from its extensive training data. However, sometimes the output isn’t accurate or factually correct, a phenomenon often referred to as hallucination. To address this, we use a combination of specialized prompts along with contextual grounding checks from Amazon Bedrock Guardrails.
Managing one-line conversation follow-ups – Users often engage in follow-up questions that are brief or context-dependent, such as “Can you elaborate?” or “Tell me more.” When processed in isolation by the RAG system, these queries might yield no results, making it challenging for the AI agent to respond effectively. To address this, we implemented mechanisms to maintain conversational context, enabling HxGN Alix to interpret and respond appropriately.

We tested two approaches:

Prompt-based search reformulation – The LLM first identifies the user’s intent and generates a more complete query for the knowledge base. Although this requires an additional LLM call, it yields highly relevant results, keeping the final prompt concise.
Context-based retrieval with chat history – We sent the last five messages from the chat history to the knowledge base, allowing broader results. This approach provided faster response times because it involved only one LLM round trip.

The first method worked better with large document sets by focusing on highly relevant results, whereas the second approach was more effective with a smaller, focused document set. Both methods have their pros and cons, and results vary based on the nature of the documents.
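To make the first approach concrete, the following is a minimal sketch of prompt-based search reformulation followed by knowledge base retrieval. It uses the Amazon Bedrock Converse and Knowledge Bases Retrieve APIs purely for illustration (in our deployment, the reformulation call goes to the self-hosted Mistral NeMo model on Amazon EKS); the model ID, knowledge base ID, and Region are placeholders.

import boto3

REGION = "us-east-1"                               # placeholder
KB_ID = "<knowledge-base-id>"                      # placeholder
MODEL_ID = "mistral.mistral-large-2402-v1:0"       # illustrative stand-in for the self-hosted model

bedrock_runtime = boto3.client("bedrock-runtime", region_name=REGION)
bedrock_agent_runtime = boto3.client("bedrock-agent-runtime", region_name=REGION)

def reformulate(chat_history: str, follow_up: str) -> str:
    # Ask the LLM to turn a terse follow-up ("Can you elaborate?") into a standalone search query.
    prompt = (
        "Rewrite the user's follow-up as a standalone search query.\n"
        f"Conversation so far:\n{chat_history}\n"
        f"Follow-up: {follow_up}\n"
        "Standalone query:"
    )
    response = bedrock_runtime.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"].strip()

def retrieve(query: str, top_k: int = 5) -> list:
    # Query the knowledge base with the reformulated, self-contained search string.
    response = bedrock_agent_runtime.retrieve(
        knowledgeBaseId=KB_ID,
        retrievalQuery={"text": query},
        retrievalConfiguration={"vectorSearchConfiguration": {"numberOfResults": top_k}},
    )
    return [result["content"]["text"] for result in response["retrievalResults"]]

history = "User: What does the EAM work order module do?\nAssistant: It manages maintenance work orders..."
chunks = retrieve(reformulate(history, "Can you elaborate?"))

The second approach skips the reformulation call and instead passes the last few turns of chat history directly as the retrieval query.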
To address these challenges, we developed a pipeline of steps to receive accurate responses from our digital assistant.
The following figure summarizes our RAG implementation journey.

Adjusting the application development lifecycle
For generative AI systems, the traditional application development lifecycle requires adjustments. New processes are necessary to manage accuracy and system performance:

Testing challenges – Unlike traditional code, generative AI systems can’t rely solely on unit tests. Prompts can return different results each time, making verification more complex.
Performance variability – Responses from LLMs can vary significantly in latency, ranging from 1–60 seconds depending on the user’s query, unlike traditional APIs with predictable response times.
Quality assurance (QA) – We had to develop new testing and QA methodologies to make sure that Alix’s responses were consistent and reliable.
Monitoring and optimization – Continuous monitoring was implemented to track performance metrics and user interactions, allowing for ongoing optimization of the AI system.

Conclusion
The successful launch of HxGN Alix demonstrates the transformative potential of generative AI in enterprise asset management. By using AWS generative AI services and a carefully selected technology stack, we optimized internal workflows and elevated user satisfaction within secure environments. HxGN Alix exemplifies how a strategically designed AI solution can drive efficiency, enhance user experience, and meet the unique security needs of enterprise clients.
Our journey underscores the importance of a strategic approach to generative AI—balancing security, accuracy, and sustainability—while focusing on the right use case and technology stack. The success of HxGN Alix serves as a model for organizations seeking to use AI to solve complex information access challenges.
By using the right technology stack and strategic approach, you can unlock new efficiencies, improve user experience, and drive business success. Connect with AWS to learn more about how AI-driven solutions can transform your operations.

About the Authors
Julio P. Roque is an accomplished Cloud and Digital Transformation Executive and an expert at using technology to maximize shareholder value. He is a strategic leader who drives collaboration, alignment, and cohesiveness across teams and organizations worldwide. He is multilingual, with an expert command of English and Spanish, understanding of Portuguese, and cultural fluency in Japanese.
Manu Mishra is a Senior Solutions Architect at AWS, specializing in artificial intelligence, data and analytics, and security. His expertise spans strategic oversight and hands-on technical leadership, where he reviews and guides the work of both internal and external customers. Manu collaborates with AWS customers to shape technical strategies that drive impactful business outcomes, providing alignment between technology and organizational goals.
Veda Raman is a Senior Specialist Solutions Architect for generative AI and machine learning at AWS. Veda works with customers to help them architect efficient, secure, and scalable machine learning applications. Veda specializes in generative AI services like Amazon Bedrock and Amazon SageMaker.

Multimodal AI Needs More Than Modality Support: Researchers Propose General-Level and General-Bench to Evaluate True Synergy in Generalist Models

Artificial intelligence has grown beyond language-focused systems, evolving into models capable of processing multiple input types, such as text, images, audio, and video. This area, known as multimodal learning, aims to replicate the natural human ability to integrate and interpret varied sensory data. Unlike conventional AI models that handle a single modality, multimodal generalists are designed to process and respond across formats. The goal is to move closer to creating systems that mimic human cognition by seamlessly combining different types of knowledge and perception.

The challenge faced in this field lies in enabling these multimodal systems to demonstrate true generalization. While many models can process multiple inputs, they often fail to transfer learning across tasks or modalities. This absence of cross-task enhancement—known as synergy—hinders progress toward more intelligent and adaptive systems. A model may excel in image classification and text generation separately, but it cannot be considered a robust generalist without the ability to connect skills from both domains. Achieving this synergy is essential for developing more capable, autonomous AI systems.

Many current tools rely heavily on large language models (LLMs) at their core. These LLMs are often supplemented with external, specialized components tailored to image recognition or speech analysis tasks. For example, existing models such as CLIP or Flamingo integrate language with vision but do not deeply connect the two. Instead of functioning as a unified system, they depend on loosely coupled modules that mimic multimodal intelligence. This fragmented approach means the models lack the internal architecture necessary for meaningful cross-modal learning, resulting in isolated task performance rather than holistic understanding.

Researchers from the National University of Singapore (NUS), Nanyang Technological University (NTU), Zhejiang University (ZJU), Peking University (PKU), and others proposed an AI framework named General-Level and a benchmark called General-Bench. These tools are built to measure and promote synergy across modalities and tasks. General-Level establishes five levels of classification based on how well a model integrates comprehension, generation, and language tasks. The benchmark is supported by General-Bench, a large dataset encompassing over 700 tasks and 325,800 annotated examples drawn from text, images, audio, video, and 3D data.

The evaluation method within General-Level is built on the concept of synergy. Models are assessed by task performance and their ability to exceed state-of-the-art (SoTA) specialist scores using shared knowledge. The researchers define three types of synergy—task-to-task, comprehension-generation, and modality-modality—and require increasing capability at each level. For example, a Level-2 model supports many modalities and tasks, while a Level-4 model must exhibit synergy between comprehension and generation. Scores are weighted to reduce bias from modality dominance and encourage models to support a balanced range of tasks.

The researchers tested 172 large models, including over 100 top-performing MLLMs, against General-Bench. Results revealed that most models do not demonstrate the needed synergy to qualify as higher-level generalists. Even advanced models like GPT-4V and GPT-4o did not reach Level 5, which requires models to use non-language inputs to improve language understanding. The highest-performing models managed only basic multimodal interactions, and none showed evidence of total synergy across tasks and modalities. For instance, the benchmark showed 702 tasks assessed across 145 skills, yet no model achieved dominance in all areas. General-Bench’s coverage across 29 disciplines, using 58 evaluation metrics, set a new standard for comprehensiveness.

This research clarifies the gap between current multimodal systems and the ideal generalist model. The researchers address a core issue in multimodal AI by introducing tools prioritizing integration over specialization. With General-Level and General-Bench, they offer a rigorous path forward for assessing and building models that handle various inputs and learn and reason across them. Their approach helps steer the field toward more intelligent systems with real-world flexibility and cross-modal understanding.

Check out the Paper and Project Page. All credit for this research goes to the researchers of this project.

The post Multimodal AI Needs More Than Modality Support: Researchers Propose General-Level and General-Bench to Evaluate True Synergy in Generalist Models appeared first on MarkTechPost.

A Step-by-Step Guide on Building, Customizing, and Publishing an AI-Focused Blogging Website with Lovable.dev and Seamless GitHub Integration

In this tutorial, we will guide you step-by-step through creating and publishing a sleek, modern AI blogging website using Lovable.dev. Lovable.dev simplifies website creation, enabling users to effortlessly develop visually appealing and responsive web pages tailored to specific niches, like AI and technology. We’ll demonstrate how to build a homepage quickly, integrate interactive components, and deploy your site live in just a few clicks. We will also guide you on how to publish it on GitHub.

Step 01: Start with a simple or detailed template according to the website’s requirements, theme, color, background, and functionalities.

Step 02: Lovable.dev will generate the site and code base after the prompt. If you need to make any changes, you can ask to update the website according to your needs.

Step 03: After the site is updated, you can publish it directly from Lovable.dev. It will also provide a domain from Lovable.

Step 04: Now, you can add any existing domain if you have one. Or you can buy a new domain according to availability and your needs.

Step 05: The site can also be published on GitHub. To do that, start by connecting Lovable with GitHub.

Step 06: In the next step, authorize lovable.dev to connect to GitHub. 

Step 07: Now that GitHub is connected, click on manage in connected organizations to link the GitHub repos and publish your current project on GitHub.

Step 08: After executing the above steps to connect GitHub, you can see that the GitHub repo of your current project is connected and published on GitHub.

Step 09: Now, you can make changes directly from GitHub. You can also share and clone the repo to work locally and push and modify anything in the project.

Step 10: You can edit in VS Code or access the GitHub repository information from the lovable editor.

Step 11: After all the changes, the final preview is ready to be used and modified according to the website’s needs.

In conclusion, following this tutorial, we’ve successfully created a fully functional AI-focused blogging website using Lovable.dev. Not only have we constructed a visually appealing platform optimized for readability and user engagement, but we’ve also published our site live. We have also connected the project to GitHub and published it there to make collaboration and future changes easier and more convenient. With these few steps, we now have a working website ready to be used.
The post A Step-by-Step Guide on Building, Customizing, and Publishing an AI-Focused Blogging Website with Lovable.dev and Seamless GitHub Integration appeared first on MarkTechPost.

Offline Video-LLMs Can Now Understand Real-Time Streams: Apple Researchers Introduce StreamBridge to Enable Multi-Turn and Proactive Video Understanding

Video-LLMs process whole pre-recorded videos at once. However, applications like robotics and autonomous driving need causal perception and interpretation of visual information online. This fundamental mismatch shows a limitation of current Video-LLMs, as they are not naturally designed to operate in streaming scenarios where timely understanding and responsiveness are paramount. The transition from offline to streaming video understanding presents two key challenges. First, multi-turn real-time understanding requires models to process the most recent video segment while maintaining historical visual and conversational context. Second, proactive response generation demands human-like behavior where the model actively monitors the visual stream and provides timely outputs based on unfolding content without explicit prompts.

Video-LLMs have gained significant attention for video understanding, combining visual encoders, modality projectors, and LLMs to generate contextual responses from video content. Several approaches have emerged to address the challenge of streaming video understanding. VideoLLMOnline and Flash-VStream introduced specialized online objectives and memory architectures for handling sequential inputs. MMDuet and ViSpeak developed dedicated components for proactive response generation. Multiple benchmark suites have been used to evaluate streaming capabilities, including StreamingBench, StreamBench, SVBench, OmniMMI, and OVO-Bench.

Researchers from Apple and Fudan University have proposed StreamBridge, a framework to transform offline Video-LLMs into streaming-capable models. It addresses two fundamental challenges in adapting existing models into online scenarios: limited capability for multi-turn real-time understanding and lack of proactive response mechanisms. StreamBridge combines a memory buffer with a round-decayed compression strategy, supporting long-context interactions. It also incorporates a decoupled, lightweight activation model that integrates seamlessly with existing Video-LLMs for proactive response generation. Further, the researchers introduced Stream-IT, a large-scale dataset designed for streaming video understanding, featuring mixed video-text sequences and diverse instruction formats.

StreamBridge framework is evaluated using mainstream offline Video-LLMs, LLaVA-OV-7B, Qwen2-VL-7B, and Oryx-1.5-7B. The Stream-IT dataset is added with approximately 600K samples from established datasets to maintain general video understanding capabilities, including LLaVA-178K, VCG-Plus, and ShareGPT4Video. OVO-Bench and StreamingBench are used for multi-turn real-time understanding, focusing on their real-time tasks. General video understanding is evaluated across seven benchmarks, including three short-video datasets (MVBench, PerceptionTest, TempCompass) and four long-video benchmarks (EgoSchema, LongVideoBench, MLVU, VideoMME).

The evaluation results show that Qwen2-VL† improved with average scores increasing from 55.98 to 63.35 on OVO-Bench and 69.04 to 72.01 on Streaming-Bench. In contrast, LLaVA-OV† experiences slight performance decreases, dropping from 64.02 to 61.64 on OVO-Bench and from 71.12 to 68.39 on Streaming-Bench. Fine-tuning on the Stream-IT dataset yields substantial improvements across all models. Oryx-1.5† achieves gains of +11.92 on OVO-Bench and +4.2 on Streaming-Bench. Moreover, Qwen2-VL† reaches average scores of 71.30 on OVO-Bench and 77.04 on Streaming-Bench after Stream-IT fine-tuning, outperforming even proprietary models like GPT-4o and Gemini 1.5 Pro, showing the effectiveness of StreamBridge’s approach in enhancing streaming video understanding capabilities.

In conclusion, researchers introduced StreamBridge, a method to transform offline Video-LLMs into effective streaming-capable models. Its dual innovations, a memory buffer with round-decayed compression strategy and a decoupled lightweight activation model, address the core challenges of streaming video understanding without compromising general performance. Further, the Stream-IT dataset is introduced for streaming video understanding, with specialized interleaved video-text sequences. As streaming video understanding becomes increasingly essential in robotics and autonomous driving, StreamBridge offers a generalizable solution that transforms static Video-LLMs into dynamic, responsive systems capable of meaningful interaction in continuously evolving visual environments.

Check out the Paper. All credit for this research goes to the researchers of this project.

The post Offline Video-LLMs Can Now Understand Real-Time Streams: Apple Researchers Introduce StreamBridge to Enable Multi-Turn and Proactive Video Understanding appeared first on MarkTechPost.

Build an intelligent community agent to revolutionize IT support with …

In the era of AI and machine learning (ML), there is a growing emphasis on enhancing security, especially in IT contexts. In this post, we demonstrate how your organization can reduce the end-to-end burden of resolving regular challenges experienced by your IT support teams—from understanding errors and reviewing diagnoses, remediation steps, and relevant documentation, to opening external support tickets using common third-party services such as Jira.
We show how Amazon Q Business can streamline your end-to-end troubleshooting processes by using your preexisting documentation and ticketing systems while approaching complex IT issues in a conversational dialogue. This solution illustrates the benefits of incorporating Amazon Q as a supplemental tool in your IT stack.
Benefits of Amazon Q Business
The following are some relevant benefits of Amazon Q Business:

Scalability – As an AWS cloud-based service, Amazon Q is highly scalable and able to handle numerous concurrent requests from multiple employees without performance degradation. This makes it suitable for organizations with a large IT department consisting of many employees who intend to use Amazon Q as an intelligent agent assistant.
Increased productivity – Because Amazon Q can handle a large volume of customer inquiries simultaneously, this frees up human employees (such as IT support engineers) to focus on more complex or specialized tasks, thereby improving overall productivity.
Natural language understanding (NLU) – Users can interact with the Amazon Q Business application using natural language (such as English). This enables more natural and intuitive conversational experiences without requiring your agents to learn new APIs or languages.
Customization and personalization – Developers can customize the knowledge base and responses to cater to the specific needs of their application and users, enabling more personalized experiences. In this post, we discuss an IT support use case for Amazon Q Business and how to configure it to index and search custom audit logs.

Solution overview
Our use case focuses on the challenges around troubleshooting, specifically within systems and applications for IT support and help desk operations. We use Amazon Q Business to train on our internal documentation and runbooks to create a tailored Amazon Q application that offers personalized instructions, source links to relevant documentation, and seamless integration with ticketing services like Jira for escalation requirements. Our goal is to reduce the time and effort required for IT support teams and others to diagnose challenges, review runbooks for remediation, and automate the escalation and ticketing process.
The following diagram illustrates the solution architecture.

The solution consists of the following key integrations:

Jira plugin – Amazon Q Business supports integration with Jira; you can use the AI assistant UI to search, read, create, and delete Jira tickets. Changes made using this plugin by Amazon Q can then be viewed within your Jira console.
Web crawling – Amazon Q Business uses web crawlers to index and ingest product documentation websites, making sure that the latest information is available for answering queries.
Amazon S3 connector – Organizations can upload product documents directly to Amazon Simple Storage Service (Amazon S3), enabling Amazon Q Business to access and incorporate this information into its knowledge base.
Jira data source – If your Jira environment rarely changes, or if you want to have more granular control over Amazon Q interactions with Jira, then you can use Jira as a simple data source. Here, Amazon Q will have read-only access to Jira.

Prerequisites
As a prerequisite to deploying this solution, you will need to set up Jira and Confluence using an Atlassian account. If you already have these set up, you can use your existing account. Otherwise, you can create an Atlassian account and set up Jira and Confluence using the free version.

Sign up with your email or through a social identity provider. If you sign up using email, you must verify your email through a One Time Password (OTP).

Enter a name for your site and choose Continue.

Choose Other and choose Continue.

If asked for a starting template, you can choose the Project management template and choose Start now.
Enter a name for your project and choose Get started.

Your UI should now look like the following screenshot.
Now you have created an Atlassian account and Jira project.
For example purposes, we created a few tasks within the Jira console. We will come back to these later.
Create an Amazon Q application
You are now ready to create an Amazon Q application:

Sign in to your AWS account on the AWS Management Console and set your preferred AWS Region.
Open the Amazon Q console.
If you haven’t already, complete the steps to connect to AWS IAM Identity Center, creating either an organization instance or account instance.

After you have completed your configuration of IAM Identity Center and connected it within Amazon Q, you should see the following success message on the Amazon Q console.

On the Amazon Q Business console, choose Applications in the navigation pane, then choose Create an application.
For Application name, enter a name (for example, QforITTeams).
Leave the remaining options as default and choose Next.

You have the choice of selecting an existing Amazon Kendra retriever or using the Amazon Q native retriever. For more information on the retriever options, see Creating an index for an Amazon Q Business application. For this post, we use the native retriever.
Keep the other default options and choose Next.

Amazon Q offers a suite of default data sources for you to choose from, including Amazon S3, Amazon Relational Database Service (Amazon RDS), Slack, Salesforce, Confluence, code repositories in GitHub, on-premises stores (such as IBM DB2), and more. For our sample set up, we are using sample AWS Well-Architected documentation, for which we can use a web crawler. We also want to use some sample runbooks (we have already generated and uploaded these to an S3 bucket).
Let’s set up our Amazon S3 data source first.

For Add a data source, choose Amazon S3.

Under Name and description, enter a name and description.

Complete the steps to add your Amazon S3 data source. For our use case, we create a new AWS Identity and Access Management (IAM) service role according to the AWS recommendations for standard use cases. AWS will automatically provision the role for us following the principle of least privilege.
After you add the data source, run the sync by choosing Sync now.

Wait 5–10 minutes for your data to finish syncing to Amazon Q.

Now let’s add our web crawler and link to some AWS Well-Architected documentation.

Add a second data source and choose Web crawlers.
Under Source, select Source URLs and enter the source URLs you want to crawl.

For this use case, we entered some links to public AWS documentation; you have the option to configure authentication and a web proxy in order to crawl intranet documents as well.

After you create the data source, choose Sync now to run the sync.

Add an IAM Identity Center user
While our data sources are busy syncing, let’s create an IAM Identity Center user for us to test the Amazon Q Business application web experience:

On the Amazon Q Business console, navigate to your application.
Under Groups and users, choose Manage access and subscriptions, and choose Add groups and users.
Select Add new users and choose Next.
After you create the user, you can add it by choosing Assign existing users and groups and searching for the user by first name.
After you add the user, you can edit their subscription access. We upgrade our user’s access to Q Business Pro for our testing.

Deploy the web experience
After the data sources have completed their sync, you can move to the testing stage to confirm things are working so far:

On the Amazon Q Business console, choose Applications in the navigation pane.
Select your application and choose Deploy web experience.
On the application details page, choose Customize web experience.
Customize the title, subtitle, and welcome message as needed, then choose Save.
Choose View web experience.

Let’s test some prompts on the data that our Amazon Q application has seen.
First, let’s ask some questions around the provided runbooks stored in our S3 bucket that we previously added as a data source to our application. In the following example, we ask about information for restarting an Amazon Elastic Compute Cloud (Amazon EC2) instance.
As shown in the following screenshot, Amazon Q has not only answered our question, but it also cited its source for us, providing a link to the .txt file that contains the runbook for Restarting an EC2 Instance.
Let’s ask a question about the Well-Architected webpages that we crawled. For this query, we can ask if there is a tool we can use to improve our AWS architecture. The following screenshot shows the reply.
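Beyond the web experience, you can also query the application programmatically. The following is a minimal boto3 sketch using the Amazon Q Business ChatSync API; the application ID and Region are placeholders, and the call assumes your credentials already carry the IAM Identity Center user context required by the application.

import boto3

qbusiness = boto3.client("qbusiness", region_name="us-east-1")  # Region is an assumption

response = qbusiness.chat_sync(
    applicationId="<q-business-application-id>",  # placeholder
    userMessage="How do I restart an EC2 instance according to our runbooks?",
)

# Print the answer and the cited sources (runbooks in S3 or crawled documentation pages).
print(response["systemMessage"])
for source in response.get("sourceAttributions", []):
    print("Source:", source.get("title"), source.get("url"))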

Set up Jira as a data source
In this section, we set up Jira as a data source for our Amazon Q application. This will allow Amazon Q to search data in Jira. For instructions, see Connecting Jira to Amazon Q Business.
After you have set up Jira as a data source, test out your Amazon Q Business application. Go to the web experience chat interface URL and ask it about one of your Jira tickets. The following screenshot shows an example.

Set up a Jira plugin
What if you encounter a situation where your user, an IT support professional, can’t find the solution with the provided internal documents and runbooks that Amazon Q has been trained on? Your next step might be to open a ticket in Jira. Let’s add a plugin for Jira that allows you to submit a Jira ticket through the Amazon Q chat interface. For more details, see Configuring a Jira Cloud plugin for Amazon Q Business. In the previous section, we added Jira as a data source, allowing Amazon Q to search data contained in Jira. By adding Jira as a plugin, we will allow Amazon Q to perform actions within Jira.
Complete the following steps to add the Jira plugin:

On the Amazon Q Business console, navigate to your application.
Choose Plugins in the navigation pane.
Choose Add plugin.

For Plugin name, enter a name.
For Domain URL, enter https://api.atlassian.com/ex/jira/yourInstanceID, where the value of yourInstanceID is the value at https://my-site-name.atlassian.net/_edge/tenant_info.
For OAuth2.0, select Create a new secret, and enter your Jira client ID and client secret.

If you require assistance retrieving these values, refer to the prerequisites.

Complete creating your plugin.

After you have created the plugin, return to the application web experience to try it out. The first time you use the Jira plugin within the Amazon Q chat interface, you might be asked to authorize access. The request will look similar to the following screenshots.

After you provide Amazon Q authorization to access Jira, you’re ready to test out the plugin.
First, let’s ask Amazon Q to create some draft text for our ticket.

Next, we ask Amazon Q to use this context to create a task in Jira. This is where we use the plugin. Choose the options menu (three dots) next to the chat window and choose the Jira plugin.

Ask it to generate a Jira task. Amazon Q will automatically recognize the conversation and input its data within the Jira ticket template for you, as shown in the following screenshot. You can customize the fields as needed and choose Submit.
You should receive a response similar to the following screenshot.

Amazon Q has created a new task for us in Jira. We can confirm that by viewing our Jira console. There is a task for updating the IT runbooks to meet disaster recovery objectives.
If we open that task, we can confirm that the information provided matches the information we passed to the Jira plugin.
Now, let’s test out retrieving an existing ticket and modifying it. In the following screenshot, Amazon Q is able to search through our Jira Issues and correctly identify the exact task we were referring to.
We can ask Amazon Q about some possible actions we can take.

Let’s ask Amazon Q to move the task to the “In Progress” stage.

The following screenshot shows the updated view of our Jira tasks on the Jira console. The ticket for debugging the Amazon DynamoDB application has been moved to the In Progress stage.

Now, suppose we wanted to view more information for this task. We can simply ask Amazon Q. This saves us the trouble of having to navigate our way around the Jira UI.

Amazon Q is even able to extract metadata about the ticket, such as last-updated timestamps, its creator, and other components.

You can also delete tasks in Jira using the Amazon Q chat interface. The following is an example of deleting the DynamoDB ticket. You will be prompted to confirm the task ID (key). The task will be deleted after you confirm.
Now, if we view our Jira console, the corresponding task is gone.
Clean up
To clean up the resources that you have provisioned, complete the following steps:

Empty and delete any S3 buckets you created.
Downgrade your IAM Identity Center user subscription to Amazon Q.
Delete any Amazon Q related resources, including your Amazon Q Business application.
Delete any additional services or storage provisioned during your tests.

Conclusion
In this post, we configured IAM Identity Center for Amazon Q and created an Amazon Q application with connectors to Amazon S3, web crawlers, and Jira. We then customized our Amazon Q application for a use case targeting IT specialists, and we sent some test prompts to review our runbooks for issue resolution as well as to get answers to questions regarding AWS Well-Architected practices. We also added a plugin for Jira so that IT support teams can create Jira issues and tickets automatically with Amazon Q, taking into account the full context of our conversation.
Try out Amazon Q Business for your own use case, and share your feedback in the comments. For more information about using Amazon Q Business with Jira, see Improve the productivity of your customer support and project management teams using Amazon Q Business and Atlassian Jira.

About the Authors
Dylan Martin is a Solutions Architect (SA) at Amazon Web Services based in the Seattle area. Dylan specializes in developing Generative AI solutions for new service and feature launches. Outside of work, Dylan enjoys motorcycling and studying languages.
Ankit Patel is a Solutions Developer at AWS based in the NYC area. As part of the Prototyping and Customer Engineering (PACE) team, he helps customers bring their innovative ideas to life through rapid prototyping, using the AWS platform to build, orchestrate, and manage custom applications.

This AI Paper Introduces Effective State-Size (ESS): A Metric to Quantify Memory Utilization in Sequence Models for Performance Optimization

In machine learning, sequence models are designed to process data with temporal structure, such as language, time series, or signals. These models track dependencies across time steps, making it possible to generate coherent outputs by learning from the progression of inputs. Neural architectures like recurrent neural networks and attention mechanisms manage temporal relationships through internal states. The ability of a model to remember and relate previous inputs to current tasks depends on how well it utilizes its memory mechanisms, which are crucial in determining model effectiveness across real-world tasks involving sequential data.

One of the persistent challenges in the study of sequence models is determining how memory is used during computation. While the size of a model’s memory—often measured as state or cache size—is easy to quantify, it does not reveal whether that memory is being effectively used. Two models might have similar memory capacities but very different ways of applying that capacity during learning. This discrepancy means existing evaluations fail to capture critical nuances in model behavior, leading to inefficiencies in design and optimization. A more refined metric is needed to observe memory utilization rather than mere memory size.

Previous approaches to understanding memory use in sequence models relied on surface-level indicators. Visualizations of operators like attention maps or basic metrics, such as model width and cache capacity, provided some insight. However, these methods are limited because they often apply only to narrow classes of models or do not account for important architectural features like causal masking. Further, techniques like spectral analysis are hindered by assumptions that do not hold across all models, especially those with dynamic or input-varying structures. As a result, they fall short of guiding how models can be optimized or compressed without degrading performance.

Researchers from Liquid AI, The University of Tokyo, RIKEN, and Stanford University introduced an Effective State-Size (ESS) metric to measure how much of a model’s memory is truly being utilized. ESS is developed using principles from control theory and signal processing, and it targets a general class of models that include input-invariant and input-varying linear operators. These cover a range of structures such as attention variants, convolutional layers, and recurrence mechanisms. ESS operates by analyzing the rank of submatrices within the operator, specifically focusing on how past inputs contribute to current outputs, providing a measurable way to assess memory utilization.

The calculation of ESS is grounded in analyzing the rank of operator submatrices that link earlier input segments to later outputs. Two variants were developed: tolerance-ESS, which uses a user-defined threshold on singular values, and entropy-ESS, which uses normalized spectral entropy for a more adaptive view. Both methods are designed to handle practical computation issues and are scalable across multi-layer models. The ESS can be computed per channel and sequence index and aggregated as average or total ESS for comprehensive analysis. The researchers emphasize that ESS is a lower bound on required memory and can reflect dynamic patterns in model learning.
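To make the two variants concrete, here is a minimal sketch of how tolerance-ESS and entropy-ESS could be computed from the singular values of an operator submatrix that links past inputs to future outputs. This is an illustration only; the specific tolerance value, normalization, and submatrix construction are our assumptions rather than the paper's exact specification.

import numpy as np

def tolerance_ess(submatrix: np.ndarray, tol: float = 1e-3) -> int:
    # Count singular values above a user-defined threshold (an effective rank).
    s = np.linalg.svd(submatrix, compute_uv=False)
    if s.size == 0 or s[0] == 0:
        return 0
    return int(np.sum(s / s[0] > tol))

def entropy_ess(submatrix: np.ndarray) -> float:
    # Soft effective rank via the exponential of the normalized spectral entropy.
    s = np.linalg.svd(submatrix, compute_uv=False)
    if s.sum() == 0:
        return 0.0
    p = s / s.sum()
    p = p[p > 0]
    return float(np.exp(-np.sum(p * np.log(p))))

# Toy causal operator: the submatrix T[t:, :t] captures how inputs before
# position t contribute to outputs at or after position t.
rng = np.random.default_rng(0)
T = np.tril(rng.normal(size=(8, 8)))
t = 4
sub = T[t:, :t]
print("tolerance-ESS:", tolerance_ess(sub), " entropy-ESS:", round(entropy_ess(sub), 2))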

Empirical evaluation confirmed that ESS correlates closely with performance across various tasks. In multi-query associative recall (MQAR) tasks, ESS normalized by the number of key-value pairs (ESS/kv) showed a stronger correlation with model accuracy than theoretical state-size (TSS/kv). For instance, models with high ESS consistently achieved higher accuracy. The study also revealed two failure modes in model memory usage: state saturation, where ESS nearly equals TSS, and state collapse, where ESS remains underused. Also, ESS was successfully applied to model compression via distillation. Higher ESS in teacher models resulted in greater loss when compressing to smaller models, showing ESS’s utility in predicting compressibility. It also tracked how end-of-sequence tokens modulated memory use in large language models like Falcon Mamba 7B.

The study outlines a precise and effective approach to solving the gap between theoretical memory size and actual memory use in sequence models. Through the development of ESS, the researchers offer a robust metric that brings clarity to model evaluation and optimization. It paves the way for designing more efficient sequence models and enables using ESS in regularization, initialization, and model compression strategies grounded in clear, quantifiable memory behavior.

Check out the Paper. All credit for this research goes to the researchers of this project.

The post This AI Paper Introduces Effective State-Size (ESS): A Metric to Quantify Memory Utilization in Sequence Models for Performance Optimization appeared first on MarkTechPost.

LightOn AI Released GTE-ModernColBERT-v1: A Scalable Token-Level Semantic Search Model for Long-Document Retrieval and Benchmark-Leading Performance

Semantic retrieval focuses on understanding the meaning behind text rather than matching keywords, allowing systems to provide results that align with user intent. This ability is essential across domains that depend on large-scale information retrieval, such as scientific research, legal analysis, and digital assistants. Traditional keyword-based methods fail to capture the nuance of human language, often retrieving irrelevant or imprecise results. Modern approaches rely on converting text into high-dimensional vector representations, enabling more meaningful comparisons between queries and documents. These embeddings aim to preserve semantic relationships and provide more contextually relevant outcomes during retrieval.

Among many, the primary challenge in semantic retrieval is the efficient handling of long documents and complex queries. Many models are restricted by fixed-length token windows, commonly around 512 or 1024 tokens, which limits their application in domains that require processing full-length articles or multi-paragraph documents. As a result, crucial information that appears later in a document may be ignored or truncated. Furthermore, real-time performance is often compromised due to the computational cost of embedding and comparing large documents, especially when indexing and querying must occur at scale. Scalability, accuracy, and generalization to unseen data remain persistent challenges in deploying these models in dynamic environments.

In earlier research, models like ModernBERT and other sentence-transformer-based tools have dominated the semantic embedding space. They often use mean pooling or simple aggregation techniques to generate sentence vectors over contextual embeddings. While such methods work for short and moderate-length documents, they struggle to maintain precision when faced with longer input sequences. These models also rely on dense vector comparisons, which become computationally expensive when handling millions of documents. Also, even though they perform well on standard benchmarks like MS MARCO, they show reduced generalization to diverse datasets, and re-tuning for specific contexts is frequently required.

Researchers from LightOn AI introduced GTE-ModernColBERT-v1. This model builds upon the ColBERT architecture, integrating the ModernBERT foundation developed by Alibaba-NLP. By distilling knowledge from a base model and optimizing it on the MS MARCO dataset, the team aimed to overcome limitations related to context length and semantic preservation. The model was trained using 300-token document inputs but demonstrated the ability to handle inputs as large as 8192 tokens. This makes it suitable for indexing and retrieving longer documents with minimal information loss. Their work was deployed through PyLate, a library that simplifies the indexing and querying of documents using dense vector models. The model supports token-level semantic matching using the MaxSim operator, which evaluates similarity between individual token embeddings rather than compressing them into a single vector.

GTE-ModernColBERT-v1 transforms text into 128-dimensional dense vectors and utilizes the MaxSim function for computing semantic similarity between query and document tokens. This method preserves granular context and allows fine-tuned retrieval. It integrates with PyLate’s Voyager indexing system, which manages large-scale embeddings using an efficient HNSW (Hierarchical Navigable Small World) index. Once documents are embedded and stored, users can retrieve top-k relevant documents using the ColBERT retriever. The process supports full pipeline indexing and lightweight reranking for first-stage retrieval systems. PyLate provides flexibility in modifying document length during inference, enabling users to handle texts much longer than the model was originally trained on, an advantage rarely seen in standard embedding models.
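As a generic illustration of the MaxSim operator described above (a sketch of the scoring idea, not PyLate's actual API), the relevance of a document to a query can be computed by taking, for each query token embedding, its maximum cosine similarity over all document token embeddings and summing the results:

import numpy as np

def maxsim_score(query_emb: np.ndarray, doc_emb: np.ndarray) -> float:
    # query_emb: (num_query_tokens, dim); doc_emb: (num_doc_tokens, dim)
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    d = doc_emb / np.linalg.norm(doc_emb, axis=1, keepdims=True)
    sim = q @ d.T                        # token-to-token cosine similarities
    return float(sim.max(axis=1).sum())  # best match per query token, then sum

# Toy example with 128-dimensional token embeddings, the size used by the model.
rng = np.random.default_rng(0)
query = rng.normal(size=(6, 128))
documents = [rng.normal(size=(300, 128)), rng.normal(size=(40, 128))]
scores = [maxsim_score(query, doc) for doc in documents]
print("Ranking (best first):", list(np.argsort(scores)[::-1]))

Because each query token only needs its best-matching document token, late interaction of this kind preserves token-level context while remaining cheap enough to rerank candidates retrieved from an approximate index such as HNSW.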

On the NanoClimate dataset, the model achieved a MaxSim Accuracy@1 of 0.360, Accuracy@5 of 0.780, and Accuracy@10 of 0.860. Precision and recall scores were consistent, with MaxSim Recall@3 reaching 0.289 and Precision@3 at 0.233. These scores reflect the model’s ability to retrieve accurate results even in longer-context retrieval scenarios. When evaluated on the BEIR benchmark, GTE-ModernColBERT outperformed previous models, including ColBERT-small. For example, it scored 54.89 on the FiQA2018 dataset, 48.51 on NFCorpus, and 83.59 on the TREC-COVID task. The average performance across these tasks was significantly higher than baseline ColBERT variants. Notably, in the LongEmbed benchmark, the model scored 88.39 in Mean score and 78.82 in LEMB Narrative QA Retrieval, surpassing other leading models such as voyage-multilingual-2 (79.17) and bge-m3 (58.73).

These results suggest that the model offers robust generalization and effective handling of long-context documents, outperforming many contemporaries by almost 10 points on long-context tasks. It is also highly adaptable to different retrieval pipelines, supporting indexing and reranking implementations. Such versatility makes it an attractive solution for scalable semantic search.

Several Key Highlights from the Research on GTE-ModernColBERT-v1 include:

GTE-ModernColBERT-v1 uses 128-dimensional dense vectors with token-level MaxSim similarity, based on ColBERT and ModernBERT foundations.

Though trained on 300-token documents, the model generalizes to documents up to 8192 tokens, showing adaptability for long-context retrieval tasks.

Accuracy@10 reached 0.860, Recall@3 was 0.289, and Precision@3 was 0.233, demonstrating strong retrieval accuracy.

On the BEIR benchmark, the model scored 83.59 on TREC-COVID and 54.89 on FiQA2018, outperforming ColBERT-small and other baselines.

Achieved a mean score of 88.39 in the LongEmbed benchmark and 78.82 in LEMB Narrative QA, surpassing previous SOTA by nearly 10 points.

Integrates with PyLate’s Voyager index, supports reranking and retrieval pipelines, and is compatible with efficient HNSW indexing.

The model can be deployed in pipelines requiring fast and scalable document search, including academic, enterprise, and multilingual applications.

In conclusion, this research provides a meaningful contribution to long-document semantic retrieval. By combining the strengths of token-level matching with scalable architecture, GTE-ModernColBERT-v1 addresses several bottlenecks that current models face. It introduces a reliable method for processing and retrieving semantically rich information from extended contexts, significantly improving precision and recall.

Check out the Model on Hugging Face. All credit for this research goes to the researchers of this project.

The post LightOn AI Released GTE-ModernColBERT-v1: A Scalable Token-Level Semantic Search Model for Long-Document Retrieval and Benchmark-Leading Performance appeared first on MarkTechPost.

A Coding Implementation of Accelerating Active Learning Annotation with Adala and Google Gemini

In this tutorial, we’ll learn how to leverage the Adala framework to build a modular active learning pipeline for medical symptom classification. We begin by installing and verifying Adala alongside required dependencies, then integrate Google Gemini as a custom annotator to categorize symptoms into predefined medical domains. Through a simple three-iteration active learning loop, prioritizing critical symptoms such as chest pain, we’ll see how to select, annotate, and visualize classification confidence, gaining practical insights into model behavior and Adala’s extensible architecture.

!pip install -q git+https://github.com/HumanSignal/Adala.git
!pip list | grep adala

We install the latest Adala release directly from its GitHub repository, and the subsequent pip list | grep adala command scans the environment's package list for any entry containing "adala," providing quick confirmation that the library was installed successfully.

import sys
import os
print("Python path:", sys.path)
print("Checking if adala is in installed packages...")
!find /usr/local -name "*adala*" -type d | grep -v "__pycache__"

!git clone https://github.com/HumanSignal/Adala.git
!ls -la Adala

We print out the current Python module search paths and then search the /usr/local directory for any installed "adala" folders (excluding __pycache__) to verify that the package is available. Next, we clone the Adala GitHub repository into the working directory and list its contents to confirm that all source files were fetched correctly.

import sys
sys.path.append('/content/Adala')

By appending the cloned Adala folder to sys.path, we’re telling Python to treat /content/Adala as an importable package directory. This ensures that subsequent import Adala… statements will load directly from your local clone rather than (or in addition to) any installed version.

!pip install -q google-generativeai pandas matplotlib

import google.generativeai as genai
import pandas as pd
import json
import re
import numpy as np
import matplotlib.pyplot as plt
from getpass import getpass

We install the Google Generative AI SDK alongside data-analysis and plotting libraries (pandas and matplotlib), then import key modules, genai for interacting with Gemini, pandas for tabular data, json and re for parsing, numpy for numerical operations, matplotlib.pyplot for visualization, and getpass to prompt the user for their API key securely.

try:
    from Adala.adala.annotators.base import BaseAnnotator
    from Adala.adala.strategies.random_strategy import RandomStrategy
    from Adala.adala.utils.custom_types import TextSample, LabeledSample
    print("Successfully imported Adala components")
except Exception as e:
    print(f"Error importing: {e}")
    print("Falling back to simplified implementation...")

This try/except block attempts to load Adala’s core classes, BaseAnnotator, RandomStrategy, TextSample, and LabeledSample so that we can leverage its built-in annotators and sampling strategies. On success, it confirms that the Adala components are available; if any import fails, it catches the error, prints the exception message, and gracefully falls back to a simpler implementation.

GEMINI_API_KEY = getpass("Enter your Gemini API Key: ")
genai.configure(api_key=GEMINI_API_KEY)

We securely prompt you to enter your Gemini API key without echoing it to the notebook. Then we configure the Google Generative AI client (genai) with that key to authenticate all subsequent calls.

CATEGORIES = ["Cardiovascular", "Respiratory", "Gastrointestinal", "Neurological"]

class GeminiAnnotator:
    def __init__(self, model_name="models/gemini-2.0-flash-lite", categories=None):
        self.model = genai.GenerativeModel(model_name=model_name,
                                           generation_config={"temperature": 0.1})
        self.categories = categories

    def annotate(self, samples):
        results = []
        for sample in samples:
            prompt = f"""Classify this medical symptom into one of these categories:
{', '.join(self.categories)}.
Return JSON format: {{"category": "selected_category",
"confidence": 0.XX, "explanation": "brief_reason"}}

SYMPTOM: {sample.text}"""

            try:
                response = self.model.generate_content(prompt).text
                json_match = re.search(r'({.*})', response, re.DOTALL)
                result = json.loads(json_match.group(1) if json_match else response)

                labeled_sample = type('LabeledSample', (), {
                    'text': sample.text,
                    'labels': result["category"],
                    'metadata': {
                        "confidence": result["confidence"],
                        "explanation": result["explanation"]
                    }
                })
            except Exception as e:
                labeled_sample = type('LabeledSample', (), {
                    'text': sample.text,
                    'labels': "unknown",
                    'metadata': {"error": str(e)}
                })
            results.append(labeled_sample)
        return results

We define a list of medical categories and implement a GeminiAnnotator class that wraps Google Gemini’s generative model for symptom classification. In its annotate method, it builds a JSON-returning prompt for each text sample, parses the model’s response into a structured label, confidence score, and explanation, and wraps those into lightweight LabeledSample objects, falling back to an “unknown” label if any errors occur.

sample_data = [
    "Chest pain radiating to left arm during exercise",
    "Persistent dry cough with occasional wheezing",
    "Severe headache with sensitivity to light",
    "Stomach cramps and nausea after eating",
    "Numbness in fingers of right hand",
    "Shortness of breath when climbing stairs"
]

text_samples = [type('TextSample', (), {'text': text}) for text in sample_data]

annotator = GeminiAnnotator(categories=CATEGORIES)
labeled_samples = []

We define a list of raw symptom strings and wrap each one in a lightweight TextSample object so it can be passed to the annotator. We then instantiate the GeminiAnnotator with the predefined category set and prepare an empty labeled_samples list to store the results of the upcoming annotation iterations.

print("\nRunning Active Learning Loop:")
for i in range(3):
    print(f"\n--- Iteration {i+1} ---")

    remaining = [s for s in text_samples if s not in [getattr(l, '_sample', l) for l in labeled_samples]]
    if not remaining:
        break

    scores = np.zeros(len(remaining))
    for j, sample in enumerate(remaining):
        scores[j] = 0.1
        if any(term in sample.text.lower() for term in ["chest", "heart", "pain"]):
            scores[j] += 0.5

    selected_idx = np.argmax(scores)
    selected = [remaining[selected_idx]]

    newly_labeled = annotator.annotate(selected)
    for sample in newly_labeled:
        sample._sample = selected[0]
    labeled_samples.extend(newly_labeled)

    latest = labeled_samples[-1]
    print(f"Text: {latest.text}")
    print(f"Category: {latest.labels}")
    print(f"Confidence: {latest.metadata.get('confidence', 0)}")
    print(f"Explanation: {latest.metadata.get('explanation', '')[:100]}...")

This active‐learning loop runs for three iterations, each time filtering out already‐labeled samples and assigning a base score of 0.1—boosted by 0.5 for keywords like “chest,” “heart,” or “pain”—to prioritize critical symptoms. It then selects the highest‐scoring sample, invokes the GeminiAnnotator to generate a category, confidence, and explanation, and prints those details for review.

categories = [s.labels for s in labeled_samples]
confidence = [s.metadata.get("confidence", 0) for s in labeled_samples]

plt.figure(figsize=(10, 5))
plt.bar(range(len(categories)), confidence, color='skyblue')
plt.xticks(range(len(categories)), categories, rotation=45)
plt.title('Classification Confidence by Category')
plt.tight_layout()
plt.show()

Finally, we extract the predicted category labels and their confidence scores and use Matplotlib to plot a vertical bar chart, where each bar’s height reflects the model’s confidence in that category. The category names are rotated for readability, a title is added, and tight_layout() ensures the chart elements are neatly arranged before display.

In conclusion, by combining Adala’s plug-and-play annotators and sampling strategies with the generative power of Google Gemini, we’ve constructed a streamlined workflow that iteratively improves annotation quality on medical text. This tutorial walked you through installation, setup, and a bespoke GeminiAnnotator, and demonstrated how to implement priority-based sampling and confidence visualization. With this foundation, you can easily swap in other models, expand your category set, or integrate more advanced active learning strategies to tackle larger and more complex annotation tasks.

Check out the Colab Notebook here. All credit for this research goes to the researchers of this project.

The post A Coding Implementation of Accelerating Active Learning Annotation with Adala and Google Gemini appeared first on MarkTechPost.

A Coding Guide to Unlock mem0 Memory for Anthropic Claude Bot: Enabling Context-Rich Conversations

In this tutorial, we walk you through setting up a fully functional bot in Google Colab that leverages Anthropic’s Claude model alongside mem0 for seamless memory recall. Combining LangGraph’s intuitive state-machine orchestration with mem0’s powerful vector-based memory store will empower our assistant to remember past conversations, retrieve relevant details on demand, and maintain natural continuity across sessions. Whether you’re building support bots, virtual assistants, or interactive demos, this guide will equip you with a robust foundation for memory-driven AI experiences.

!pip install -qU langgraph mem0ai langchain langchain-anthropic anthropic

First, we install and upgrade LangGraph, the Mem0 AI client, LangChain with its Anthropic connector, and the core Anthropic SDK, ensuring we have all the latest libraries required for building a memory-driven Claude chatbot in Google Colab. Running it upfront will avoid dependency issues and streamline the setup process.

import os
from typing import Annotated, TypedDict, List

from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage
from langchain_anthropic import ChatAnthropic
from mem0 import MemoryClient

We bring together the core building blocks for our Colab chatbot: the operating-system interface for API keys, Python's typed dictionaries and annotation utilities for defining conversational state, LangGraph's graph and message decorators to orchestrate chat flow, LangChain's message classes for constructing prompts, the ChatAnthropic wrapper to call Claude, and Mem0's client for persistent memory storage.

os.environ["ANTHROPIC_API_KEY"] = "Use Your Own API Key"
MEM0_API_KEY = "Use Your Own API Key"

We securely inject our Anthropic and Mem0 credentials into the environment and a local variable, ensuring that the ChatAnthropic client and Mem0 memory store can authenticate properly without hard-coding sensitive keys throughout our notebook. Centralizing our API keys here, we maintain a clean separation between code and secrets while enabling seamless access to the Claude model and persistent memory layer.

llm = ChatAnthropic(
    model="claude-3-5-haiku-latest",
    temperature=0.0,
    max_tokens=1024,
    anthropic_api_key=os.environ["ANTHROPIC_API_KEY"]
)
mem0 = MemoryClient(api_key=MEM0_API_KEY)

We initialize our conversational AI core: first, we create a ChatAnthropic instance configured to talk to Claude 3.5 Haiku (claude-3-5-haiku-latest) at zero temperature for deterministic replies and up to 1024 tokens per response, using our stored Anthropic key for authentication. We then spin up a Mem0 MemoryClient with our Mem0 API key, giving our bot a persistent vector-based memory store to save and retrieve past interactions seamlessly.

class State(TypedDict):
    messages: Annotated[List[HumanMessage | AIMessage], add_messages]
    mem0_user_id: str

graph = StateGraph(State)

def chatbot(state: State):
    messages = state["messages"]
    user_id = state["mem0_user_id"]

    memories = mem0.search(messages[-1].content, user_id=user_id)

    context = "\n".join(f"- {m['memory']}" for m in memories)
    system_message = SystemMessage(content=(
        "You are a helpful customer support assistant. "
        "Use the context below to personalize your answers:\n" + context
    ))

    full_msgs = [system_message] + messages
    ai_resp: AIMessage = llm.invoke(full_msgs)

    mem0.add(
        f"User: {messages[-1].content}\nAssistant: {ai_resp.content}",
        user_id=user_id
    )

    return {"messages": [ai_resp]}

We define the conversational state schema and wire it into a LangGraph state machine: the State TypedDict tracks the message history and a Mem0 user ID, and graph = StateGraph(State) sets up the flow controller. Within the chatbot, the most recent user message is used to query Mem0 for relevant memories, a context-enhanced system prompt is constructed, Claude generates a reply, and that new exchange is saved back into Mem0 before returning the assistant’s response.

graph.add_node("chatbot", chatbot)
graph.add_edge(START, "chatbot")
graph.add_edge("chatbot", "chatbot")
compiled_graph = graph.compile()

We plug our chatbot function into LangGraph's execution flow by registering it as a node named "chatbot" and connecting the built-in START marker to that node, so the conversation begins there; a self-loop edge then routes each new user message back into the same logic. Calling graph.compile() transforms this node-and-edge setup into an optimized, runnable graph object that manages each turn of our chat session automatically.

def run_conversation(user_input: str, mem0_user_id: str):
    config = {"configurable": {"thread_id": mem0_user_id}}
    state = {"messages": [HumanMessage(content=user_input)], "mem0_user_id": mem0_user_id}
    for event in compiled_graph.stream(state, config):
        for node_output in event.values():
            if node_output.get("messages"):
                print("Assistant:", node_output["messages"][-1].content)
                return

if __name__ == "__main__":
    print("Welcome! (type 'exit' to quit)")
    mem0_user_id = "customer_123"
    while True:
        user_in = input("You: ")
        if user_in.lower() in ["exit", "quit", "bye"]:
            print("Assistant: Goodbye!")
            break
        run_conversation(user_in, mem0_user_id)

We tie everything together by defining run_conversation, which packages our user input into the LangGraph state, streams it through the compiled graph to invoke the chatbot node, and prints out Claude’s reply. The __main__ guard then launches a simple REPL loop, prompting us to type messages, routing them through our memory-enabled graph, and gracefully exiting when we enter “exit”.

In conclusion, we’ve assembled a conversational AI pipeline that combines Anthropic’s cutting-edge Claude model with mem0’s persistent memory capabilities, all orchestrated via LangGraph in Google Colab. This architecture allows our bot to recall user-specific details, adapt responses over time, and deliver personalized support. From here, consider experimenting with richer memory-retrieval strategies, fine-tuning Claude’s prompts, or integrating additional tools into your graph.

Check out the Colab Notebook here. All credit for this research goes to the researchers of this project.

The post A Coding Guide to Unlock mem0 Memory for Anthropic Claude Bot: Enabling Context-Rich Conversations appeared first on MarkTechPost.

Huawei Introduces Pangu Ultra MoE: A 718B-Parameter Sparse Language Model Trained Efficiently on Ascend NPUs Using Simulation-Driven Architecture and System-Level Optimization

Sparse large language models (LLMs) based on the Mixture of Experts (MoE) framework have gained traction for their ability to scale efficiently by activating only a subset of parameters per token. This dynamic sparsity allows MoE models to retain high representational capacity while limiting computation per token. However, with their increasing complexity and model size approaching trillions of parameters, training them efficiently requires algorithmic innovation and a tightly integrated hardware-software optimization. These challenges are especially relevant when deploying models on non-standard AI accelerators like Ascend NPUs, which require specific architectural alignment to deliver optimal performance.
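For readers unfamiliar with this kind of sparsity, the following is a minimal, generic sketch of top-k expert routing, illustrating why only a subset of parameters is active per token. It is an illustration of the MoE mechanism in general, not Huawei's Pangu implementation or anything Ascend-specific.

import numpy as np

def moe_layer(tokens, router_w, experts, k=2):
    # Toy top-k Mixture-of-Experts layer: each token activates only k experts.
    logits = tokens @ router_w                             # (num_tokens, num_experts)
    logits = logits - logits.max(axis=-1, keepdims=True)   # stabilize softmax
    probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
    outputs = np.zeros_like(tokens)
    for t, (tok, p) in enumerate(zip(tokens, probs)):
        top_k = np.argsort(p)[-k:]                         # indices of the k best experts
        weights = p[top_k] / p[top_k].sum()                # renormalize routing weights
        for w, e in zip(weights, top_k):
            outputs[t] += w * experts[e](tok)
    return outputs

rng = np.random.default_rng(0)
dim, num_experts = 16, 8
# Each "expert" is a simple linear map in this sketch.
expert_weights = [rng.normal(size=(dim, dim)) / np.sqrt(dim) for _ in range(num_experts)]
experts = [lambda x, W=W: x @ W for W in expert_weights]
tokens = rng.normal(size=(4, dim))
print(moe_layer(tokens, rng.normal(size=(dim, num_experts)), experts).shape)  # (4, 16)

Because each token touches only k experts, compute per token stays roughly constant as the number of experts grows, which is the property that makes the load-balancing and communication issues discussed next so consequential at scale.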

A major technical challenge lies in the inefficient utilization of hardware resources while training sparse LLMs. Since only a portion of parameters are active for each token, workloads across devices become unbalanced, leading to synchronization delays and underused processing power. This imbalance also affects memory utilization as different experts process different numbers of tokens, sometimes exceeding capacity. These inefficiencies are compounded at a large scale, such as across thousands of AI chips, where communication and memory management bottlenecks significantly hinder throughput. The inability to fully harness the computational promise of sparsity in practice restricts the deployment of such models on hardware systems like Ascend NPUs.

Several strategies have been proposed to tackle these challenges. These include auxiliary losses to balance token distribution across experts and drop-and-pad strategies that limit expert overload by discarding tokens exceeding capacity. However, these techniques either reduce model performance or introduce inefficiencies in memory and computation. Other efforts include heuristic expert placement and traditional communication patterns like All-to-All dispatching, but these often fail to scale well or maintain high throughput. Moreover, standard memory-saving techniques like recomputation are usually coarse-grained, targeting whole layers instead of specific operations, leading to increased runtime without proportional memory savings.

Researchers from the Pangu team at Huawei Cloud introduced a highly structured and optimized training approach for large MoE models tailored to Ascend NPUs. They developed Pangu Ultra MoE, a sparse LLM with 718 billion parameters, focusing on aligning model architecture and system design with the capabilities of the Ascend hardware. Their approach begins with a simulation-based model configuration process that evaluates thousands of architecture variants using metrics grounded in actual hardware behavior. These simulations inform design decisions before any physical training is undertaken, thus saving substantial computational resources and enabling informed tuning of model hyperparameters.

The simulation method analyzes combinations of parameters such as the number of layers, hidden size, and expert count using a five-dimensional parallelism strategy that includes Pipeline Parallelism, Tensor Parallelism, Expert Parallelism, Data Parallelism, and Context Parallelism. The final model configuration adopted by Huawei included 256 experts, a hidden size of 7680, and 61 transformer layers. To further optimize performance, researchers integrated an Adaptive Pipe Overlap mechanism to mask communication costs and used hierarchical All-to-All communication to reduce inter-node data transfer. They employed fine-grained recomputation, such as recomputing only key-value vectors in attention modules, and introduced tensor swapping to offload activation memory to host devices dynamically.

Pangu Ultra MoE achieved a Model Flops Utilization (MFU) of 30.0% and processed tokens at a rate of 1.46 million per second using 6,000 Ascend NPUs. The baseline MFU was 18.9% with 0.61 million tokens per second on 4,000 NPUs. The researchers also introduced dynamic expert placement strategies, improving device-level load balance and achieving a relative 10% MFU improvement. The model performed competitively on benchmark evaluations, attaining 81.3% on AIME2024, 97.4% on MATH500, 94.8% on CLUEWSC, and 91.5% on MMLU. In the healthcare domain, it outperformed DeepSeek R1 by scoring 87.1% on MedQA and 80.8% on MedMCQA, confirming its strength in domain-specific applications.

This study illustrates how the Pangu team at Huawei effectively tackled the core difficulties of training massive MoE models on specialized hardware. Their systematic architecture search, efficient communication techniques, and tailored memory optimizations represent a strong framework for scalable AI training. The work demonstrates practical ways to unlock the performance potential of sparse models and sets a direction for future system-aware AI design.

Check out the Paper here. All credit for this research goes to the researchers of this project.

The post Huawei Introduces Pangu Ultra MoE: A 718B-Parameter Sparse Language Model Trained Efficiently on Ascend NPUs Using Simulation-Driven Architecture and System-Level Optimization appeared first on MarkTechPost.

ZeroSearch from Alibaba Uses Reinforcement Learning and Simulated Documents to Teach LLMs Retrieval Without Real-Time Search

Large language models are now central to various applications, from coding to academic tutoring and automated assistants. However, a critical limitation persists in how these models are designed; they are trained on static datasets that become outdated over time. This creates a fundamental challenge because the language models cannot update their knowledge or validate responses against fresh, real-world data. As a result, while these models demonstrate strong performance on reasoning tasks or structured queries, their answers can still include fabricated or obsolete information, reducing their reliability in real-world usage. To maintain credibility, especially for applications requiring updated knowledge such as news, research, or product reviews, models must interact with external data sources in a timely and cost-efficient manner.

The core problem lies in teaching these models to effectively retrieve and incorporate external information. While fine-tuned pretraining helps develop a strong baseline understanding, the capacity to conduct meaningful, dynamic searches is missing. Equipping language models with this ability introduces practical constraints. Search engines used for external information retrieval provide varying document quality that introduces inconsistency in model training. Moreover, integrating reinforcement learning to simulate real-world searching requires large-scale interactions with live APIs, running up hundreds of thousands of calls, which becomes prohibitively expensive. This results in a bottleneck for academic research and commercial deployment, where cost and training scalability are critical.

Various methods have been developed to enhance language models’ search and retrieval capabilities. Some early techniques relied on prompt-based instructions that guided the model through processes like generating sub-queries or managing multi-step searches. These methods, however, heavily relied on manual tuning and often required extensive computational resources to ensure consistent outputs. Other approaches leaned on supervised fine-tuning for smaller models to perform more targeted retrieval, with models like Self-RAG and RetroLLM emerging in this space. There have also been experiments with techniques like Monte Carlo Tree Search to expand possible answer paths during inference dynamically. Reinforcement learning-based solutions like Search-R1 and DeepResearcher allowed models to interact directly with real search engines, offering a training experience closer to how users behave. However, these innovations still suffer from either complexity, high computational demand, or financial cost due to live interaction constraints.

Researchers from Tongyi Lab at Alibaba Group introduced an innovative solution called ZeroSearch. This reinforcement learning framework removes the need for live API-based search entirely. Instead, it uses another language model to simulate the behavior of a search engine. The simulation model is fine-tuned through supervised training to generate documents that either help or mislead the policy model, depending on whether the content is designed to be relevant or noisy. This allows complete control over the document quality and cost while enabling a realistic retrieval training experience. A key innovation lies in using curriculum-based learning during training, which means gradually introducing harder retrieval tasks by adjusting how much noise is present in the generated documents. This progression helps the policy model develop resilience and better reasoning skills over time without ever making a real search query.

The structure of ZeroSearch involves distinct phases in the reasoning process. The model first thinks internally using designated tags, then generates queries if it determines that additional information is needed. Finally, it outputs an answer only when sufficient context is acquired. This structured approach enforces clarity in decision-making and has been shown to improve transparency and answer quality. A minimal change in prompts guides document generation for the simulated search engine that controls whether the document appears helpful or misleading. The simulated LLM is fine-tuned using interaction data where each retrieval trajectory is labeled based on the correctness of the final answer. The policy model is taught to handle straightforward and complex search conditions by systematically varying document quality. A performance scaling function determines how much noise is introduced at each training stage, increasing the model’s ability to navigate uncertainty over time.
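A simplified way to picture this structure is to parse a rollout into its phases before computing rewards. The sketch below uses the think/search/answer tag names mentioned later in this article; the exact prompt and tag format in the paper may differ.

import re

PHASES = ("think", "search", "answer")

def parse_rollout(text: str) -> dict:
    # Extract the content of each structured phase from a model rollout, if present.
    out = {}
    for tag in PHASES:
        match = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.DOTALL)
        out[tag] = match.group(1).strip() if match else None
    return out

rollout = (
    "<think>I need more context before answering.</think>"
    "<search>ZeroSearch simulated search engine training</search>"
    "<answer>It replaces live search with an LLM-simulated retriever.</answer>"
)
print(parse_rollout(rollout))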

A 3-billion-parameter model was able to simulate the retrieval process for training purposes effectively. The results became particularly notable with larger models. A 7B retrieval module performed at a level comparable to Google Search in response quality, and a 14B model even surpassed Google Search benchmarks. ZeroSearch also showed flexibility, functioning effectively across base and instruction-tuned LLMs of different sizes. It integrates well with a range of reinforcement learning algorithms, including PPO, GRPO, and Reinforce++, and it uses a reward design based on the F1 score rather than exact match to discourage the model from generating excessively long answers just to increase keyword overlap. Furthermore, ZeroSearch uses a masking mechanism during backpropagation to ensure that gradients are only computed on the policy model's outputs, stabilizing training without sacrificing performance.
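To see why an F1-based reward discourages excessively long answers, consider this simplified token-overlap sketch (our illustration, not the authors' exact reward code): padding the prediction with extra words lowers precision and therefore the reward, whereas a pure keyword-overlap signal would not penalize it.

from collections import Counter

def f1_reward(prediction: str, gold: str) -> float:
    # Token-overlap F1 between a predicted answer and the reference answer.
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    if not pred_tokens or not gold_tokens:
        return 0.0
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(f1_reward("Paris", "Paris"))                                 # 1.0
print(f1_reward("Paris is the capital city of France", "Paris"))   # 0.25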

The research demonstrates a clear and efficient alternative to real-time search engine reliance. Using simulation-driven document generation removes the need for high-cost APIs, and the quality of training input is controlled with precision. The method also boosts model reasoning capability by introducing progressive noise and uncertainty, effectively mimicking how real-world data retrieval might fail or mislead. The policy model is trained to extract the most useful information. These traits make ZeroSearch a scalable and practical solution for commercial-grade applications.

This approach successfully identifies and addresses the twin challenges of document quality variability and economic cost that have limited real-time search integration in language model training. It combines document simulation, structured interaction, and reinforcement learning to ensure effectiveness and scalability. By relying solely on simulated data generation, the researchers achieved superior or comparable results to existing methods while removing all dependency on costly APIs.

Several Key Takeaways from the Research include the following:

A 3B model simulated realistic document retrieval effectively with zero API cost.

A 7B retrieval module matched Google Search performance in benchmark tests.

The 14B model exceeded real search engine performance.

Reinforcement learning was performed with a curriculum-based rollout that gradually introduced noise.

A simulation LLM generated both relevant and noisy documents via lightweight supervised fine-tuning.

Structured interaction phases (<think>, <search>, <answer>) improved model clarity and accuracy.

F1-based rewards discouraged reward hacking by penalizing irrelevant answer length.

Compatible with major RL algorithms including PPO, GRPO, and Reinforce++.

Training was stabilized using a gradient masking mechanism to prevent instability from simulated tokens.

Check out the Paper and Model on Hugging Face.

The post ZeroSearch from Alibaba Uses Reinforcement Learning and Simulated Documents to Teach LLMs Retrieval Without Real-Time Search appeared first on MarkTechPost.

AI That Teaches Itself: Tsinghua University’s ‘Absolute Zero’ Trains LLMs With Zero External Data

LLMs have shown advancements in reasoning capabilities through Reinforcement Learning with Verifiable Rewards (RLVR), which relies on outcome-based feedback rather than imitating intermediate reasoning steps. Current RLVR works face critical scalability challenges as they heavily depend on manually curated collections of questions and answers for training. As reasoning models advance, constructing large-scale, high-quality datasets becomes increasingly unsustainable, similar to bottlenecks identified in LLM pretraining. Moreover, exclusive dependency on human-designed tasks may constrain AI systems’ capacity for autonomous learning and development, especially as they evolve beyond human intellectual capabilities.

Researchers have explored various approaches to enhance LLM reasoning capabilities. STaR pioneered self-bootstrapping using expert iteration and rejection sampling of outcome-verified responses to improve CoT reasoning. The o1 model deployed this concept at scale, achieving state-of-the-art results, and R1 later became the first open-weight model to match or surpass o1’s performance by introducing the “zero” setting where RL is applied directly to the base LLM. Further, self-play paradigms have evolved from Schmidhuber’s early two-agent setups to more complex implementations like AlphaGo and AlphaZero. Recent methods such as SPIN, Self-Rewarding Language Models, SPC, and SPAG have applied self-play to language models for alignment and reasoning.

Researchers from Tsinghua University, Beijing Institute for General Artificial Intelligence, and Pennsylvania State University have proposed an RLVR paradigm called Absolute Zero to enable a single model to autonomously generate and solve tasks that maximize its own learning progress without relying on any external data. Under this method, researchers have introduced the Absolute Zero Reasoner (AZR) that self-evolves its training curriculum and reasoning ability through a code executor that validates proposed code reasoning tasks and verifies answers, providing a unified source of verifiable reward to guide open-ended yet grounded learning. AZR can be effectively implemented across different model scales and remains compatible with various model classes, suggesting broad applicability.

LLMs provide an ideal framework for implementing AZR in multitask learning contexts. During each online rollout iteration in the absolute zero setting’s objective equation, AZR proposes new reasoning tasks based on task type and past self-generated examples, with explicit prompting to generate diverse tasks and then attempts to solve them, receiving grounded feedback for its model responses. AZR utilizes a code executor as both a flexible interface and verifiable environment, enabling automatic construction, execution, and validation of code reasoning tasks. Lastly, the AZR Algorithm includes buffer initialization, Task Proposal Inputs and Buffer Management, valid task construction, solution validation, and advantage estimator calculation through Task-Relative REINFORCE++.
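The role of the code executor as a unified source of verifiable reward can be illustrated with a drastically simplified sketch (our own toy example with hypothetical function names, not the AZR codebase): a proposed program and input are executed to establish the ground-truth output, and a solver's answer is rewarded only if it reproduces that output.

def run_program(program: str, arg):
    # Execute a self-contained function named `f` and apply it to arg.
    # Toy example only; a real system would sandbox and time-limit execution.
    namespace = {}
    exec(program, namespace)
    return namespace["f"](arg)

def propose_task(program: str, test_input):
    # Proposer side: the task is valid if the program runs and yields an output.
    return run_program(program, test_input)

def reward_solution(program: str, test_input, predicted_output) -> float:
    # Solver side: binary, executor-verified reward.
    return 1.0 if run_program(program, test_input) == predicted_output else 0.0

task_program = "def f(x):\n    return sorted(set(x))"
gold_output = propose_task(task_program, [3, 1, 3, 2])          # [1, 2, 3]
print(reward_solution(task_program, [3, 1, 3, 2], [1, 2, 3]))   # 1.0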

The Absolute Zero Reasoner-Coder-7B has achieved state-of-the-art performance in the 7B overall average and coding average categories, surpassing previous best models by 1.8 absolute percentage points despite being entirely out-of-distribution for both math and code reasoning benchmarks. It outperforms models trained with expert-curated human data in coding by 0.3 absolute percentage points while never accessing such data itself. Scaling analysis reveals that AZR delivers greater gains on larger models, with the 7B and 14B models continuing to improve beyond 200 training steps while the 3B model plateaus. Out-of-distribution performance gains increase with model size: +5.7, +10.2, and +13.2 for 3B, 7B, and 14B, respectively.

In conclusion, researchers introduced the Absolute Zero paradigm to address data limitations in existing RLVR frameworks. Under this method, researchers present AZR, which trains models to propose and solve code-related reasoning tasks grounded by a code executor. However, there is a limitation regarding safety management in self-improving systems. The team observed several instances of safety-concerning CoT reasoning from the Llama-3.1-8B model, termed “uh-oh moments.” The findings indicate that while the Absolute Zero paradigm reduces human intervention needs in task curation, ongoing oversight remains necessary to address lingering safety concerns, highlighting a critical direction for future research.

Check out the Paper, the Model on Hugging Face, and the GitHub Page.

The post AI That Teaches Itself: Tsinghua University’s ‘Absolute Zero’ Trains LLMs With Zero External Data appeared first on MarkTechPost.

Google Redefines Computer Science R&D: A Hybrid Research Model that Merges Innovation with Scalable Engineering

Computer science research has evolved into a multidisciplinary effort involving logic, engineering, and data-driven experimentation. With computing systems now deeply embedded in everyday life, research increasingly focuses on large-scale, real-time systems capable of adapting to diverse user needs. These systems often learn from massive datasets and must handle unpredictable interactions. As the scope of computer science broadens, so does the methodology, requiring tools and approaches that accommodate scalability, responsiveness, and empirical validation over purely theoretical models.

The difficulty arises when connecting innovative ideas to practical applications without losing the depth and risk inherent in true research. Rapid development cycles, product deadlines, and user expectations often overlap with the uncertain timelines and exploratory nature of research. The challenge is enabling meaningful innovation while maintaining relevance and practical outcomes. Finding a structure where exploration and implementation coexist is essential to making real progress in this demanding and high-impact field.

Traditionally, the division between research and engineering has led to inefficiencies. Research teams create conceptual models or prototypes, which are later handed over to engineering teams for scaling and integration. This separation often results in delays, failures in technology transfer, and difficulty adapting ideas to real-world use. Even when research has academic value, the lack of immediate relevance or scalable deployment options limits its broader impact. Conventional dissemination methods, such as peer-reviewed papers, don’t always align with the fast-moving demands of technology development.

Google introduced a hybrid research model integrating researchers directly into product and engineering teams. This approach was designed to reduce delays between ideation and implementation, enabling faster and more relevant outcomes. Researchers at Google, a company that runs at the intersection of massive computing infrastructure and billions of users, operate within small teams that remain involved from concept to deployment. By embedding development research, the risk of failure is offset by iterative learning and empirical data gathered from actual user interactions. This model promotes cross-functional innovation where knowledge flows seamlessly between domains.

The methodology adopted by Google supports research through robust infrastructure and real-time experimentation. Teams write production-ready code early and rely on continuous feedback from deployed services. Elaborate prototypes are avoided, as they slow the path to real user impact. Google’s services model allows even small teams to access powerful computing resources and integrate complex features quickly. Their projects are modularized, breaking long-term goals into smaller, achievable components. This structure keeps motivation high and provides frequent opportunities for measurable progress. Research is not isolated from engineering but rather supported by it, ensuring that practical constraints and user behavior shape every line of code and every experiment.

The results of this model are substantial. Google published 279 research papers in 2011, a steep rise from 13 in 2003, showing an increased emphasis on sharing its scientific advancements. High-impact systems such as MapReduce, BigTable, and the Google File System originated within this hybrid structure and have become foundational to modern computing. Over 1,000 open-source projects and hundreds of public APIs have emerged from this integrated approach. Google Translate and Voice Search are examples of small research teams that transitioned ideas into large-scale products. Contributions extend to global standards, with team members shaping specifications like HTML5.

By deeply connecting research with product development, Google has built a model that fosters innovation and delivers it at scale. Its hybrid research system empowers teams to work on difficult problems without being detached from practical realities. Projects are designed with user impact and academic relevance in mind, allowing teams to adjust direction quickly when goals are unmet. This has led to projects such as Google Health being re-evaluated when they did not yield the expected outcomes, showing the model’s flexibility and pragmatism.

Combining experimentation, real-world data, and scalable engineering, Google has built a framework that makes research outcomes more tangible and impactful. This paper clearly shows how a unified approach to research and engineering can bridge the gap between innovation and usability, offering a potential blueprint for other technology-driven organizations.

Check out the Paper.

The post Google Redefines Computer Science R&D: A Hybrid Research Model that Merges Innovation with Scalable Engineering appeared first on MarkTechPost.

ServiceNow AI Released Apriel-Nemotron-15b-Thinker: A Compact Yet Powerful Reasoning Model Optimized for Enterprise-Scale Deployment and Efficiency

AI models today are expected to handle complex tasks such as solving mathematical problems, interpreting logical statements, and assisting with enterprise decision-making. Building such models demands the integration of mathematical reasoning, scientific understanding, and advanced pattern recognition. As the demand for intelligent agents in real-time applications, like coding assistants and business automation tools, continues to grow, there is a pressing need for models that combine strong performance with efficient memory and token usage, making them viable for deployment in practical hardware environments.

A central challenge in AI development is the resource intensity of large-scale reasoning models. Despite their strong capabilities, these models often require significant memory and computational resources, limiting their real-world applicability. This creates a gap between what advanced models can achieve and what users can realistically deploy. Even well-resourced enterprises may find running models demanding dozens of gigabytes of memory or high inference costs unsustainable. The issue is not just about building smarter models, but ensuring they are efficient and deployable in real-world platforms. High-performing models such as QWQ‑32b, o1‑mini, and EXAONE‑Deep‑32b excel at tasks involving mathematical reasoning and academic benchmarks. However, their dependence on high-end GPUs and high token consumption limits their use in production settings. These models highlight the ongoing trade-off in AI deployment: achieving high accuracy at the cost of scalability and efficiency.

Addressing this gap, researchers at ServiceNow introduced Apriel-Nemotron-15b-Thinker. This model consists of 15 billion parameters, a relatively modest size compared to its high-performing counterparts, yet it demonstrates performance on par with models almost twice its size. The primary advantage lies in its memory footprint and token efficiency. While delivering competitive results, it requires nearly half the memory of QWQ‑32b and EXAONE‑Deep‑32b. This directly contributes to improved operational efficiency in enterprise environments, making it feasible to integrate high-performance reasoning models into real-world applications without large-scale infrastructure upgrades.

The development of Apriel-Nemotron-15b-Thinker followed a structured three-stage training approach, each designed to enhance a specific aspect of the model’s reasoning capabilities. In the initial phase, termed Continual Pre-training (CPT), the model was exposed to over 100 billion tokens. These tokens were not generic text but carefully selected examples from domains requiring deep reasoning, mathematical logic, programming challenges, scientific literature, and logical deduction tasks. This exposure provided the foundational reasoning capabilities that distinguish the model from others. The second stage involved Supervised Fine-Tuning (SFT) using 200,000 high-quality demonstrations. These examples further calibrated the model’s responses to reasoning challenges, enhancing performance on tasks that require accuracy and attention to detail. The final tuning stage, GRPO (Group Relative Policy Optimization), refined the model’s outputs by optimizing alignment with expected results across key tasks. This pipeline ensures the model is intelligent, precise, structured, and scalable.

In enterprise-specific tasks such as MBPP, BFCL, Enterprise RAG, MT Bench, MixEval, IFEval, and Multi-Challenge, the model delivered competitive or superior performance compared to larger models. Regarding production efficiency, it consumed 40% fewer tokens than QWQ‑32b, significantly lowering inference costs. From a memory standpoint, it achieves all this with approximately 50% of the memory needed by QWQ‑32b and EXAONE-Deep‑32b, indicating a substantial improvement in deployment feasibility. Even in academic benchmarks, such as AIME-24, AIME-25, AMC-23, MATH-500, and GPQA, the model held its own, often equaling or surpassing the performance of other larger models, all while being significantly lighter in computational demand.

Several Key Takeaways from the Research on Apriel-Nemotron-15b-Thinker:

Apriel-Nemotron-15b-Thinker has 15 billion parameters, significantly smaller than QWQ-32b or EXAONE-Deep-32b, but performs competitively.

Uses a 3-phase training, 100B+ tokens in CPT, 200K fine-tuning demos in SFT, and final GRPO refinement.

Consumes around 50% less memory than QWQ-32b, allowing for easier deployment on enterprise hardware.

Uses 40% fewer tokens in production tasks than QWQ-32b, reducing inference cost and increasing speed.

Outperforms or equals larger models on MBPP, BFCL, Enterprise RAG, and academic tasks like GPQA and MATH-500.

Optimized for Agentic and Enterprise tasks, suggesting utility in corporate automation, coding agents, and logical assistants.

Designed specifically for real-world use, avoiding over-reliance on lab-scale compute environments.

Check out the Model on Hugging Face.

The post ServiceNow AI Released Apriel-Nemotron-15b-Thinker: A Compact Yet Powerful Reasoning Model Optimized for Enterprise-Scale Deployment and Efficiency appeared first on MarkTechPost.