2024 Year in Review: Features, Features & More Features!

The confetti is ready, the champagne’s on ice, and we’re here to celebrate what has been an epic year for the team here at Customers.ai!

2024 wasn’t just a banner year for ecommerce, it was a record-shattering one. With brands raking in massive sales during Black Friday and Cyber Monday (our customers led the charge, pulling in some serious ROI), it’s clear that innovation and agility are the keys to the kingdom.

And speaking of innovation, the team at Customers.ai didn’t just rise to the occasion – we soared. 

Last night at our holiday dinner, Devin, one of our AEs, asked me what I thought of this year. I said I had two words: Challenging and exciting. As a founder, it’s always a hard answer. pic.twitter.com/m5byoBkLyj — Larry Kim (@larrykim) December 19, 2024

This year, we launched more features than ever before, making 2024 our most transformative year yet. 

The product you know and love is leaps and bounds ahead of where it was just 12 months ago. Not only are we outpacing the competition, but we can confidently say we’re now offering the most accurate visitor ID tool on the market.

With so many groundbreaking updates rolled out, it’s only fitting to look back and celebrate the work that made it all possible. From revamping customer journeys to pioneering tools that connect the dots like never before, this year has been all about giving marketers the data they need to drive success.

So, without further ado, let’s get into the top 10 features that made 2024 a standout year for Customers.ai.

See Who Is On Your Site Right Now!

Get names, emails, phone numbers & more.

Try it Free, No Credit Card Required

Start Your Free Trial

1. Signal: Recognize Return Visitors Like Never Before

Signal was one of our customers’ favorite features we rolled out this year because of how impactful it’s been. With Klaviyo only tracking cookies for seven days and most visitor ID tools treating everyone like they’re brand-new, a lot of returning shoppers were slipping through the cracks.

Stop ignoring your return visitors. Look what happened when our customer added https://t.co/jkRIu5kbND Signal to only their abandoned cart flows: a 30% increase in conversions, a 34% increase in conversion rate, and a 29% increase in revenue per recipient. If you aren’t using… pic.twitter.com/P2DmmFZwdW — CustomersAI (@CustomersAI) December 5, 2024

Not with Signal. It identifies return visitors across sessions, even weeks or months later, so you can engage them like the VIPs they are. No more generic emails or missed opportunities.

Why is this such a big deal? 

Return visitors are already warmed up and way more likely to convert. By recognizing them, you can ditch the one-size-fits-all approach and send tailored messages that actually connect.

Since launching, Signal has been a major win for our customers, boosting engagement and turning return visitors into loyal shoppers. It’s proof that the little details, like recognizing someone who’s been here before, can make a massive impact.

Learn more about resignaling return visitors to Klaviyo >>

2. Super CAPI EMQ Enhancer: Meta Ads Just Got Smarter

If you’re running Meta Ads, you already know how crucial it is to have accurate data. Getting that data in a post-iOS 14 world? Not exactly like unwrapping a shiny gift under the tree. 

That’s why Super CAPI has been such a standout feature this year.

Super CAPI takes Meta’s Conversions API (CAPI) to the next level by feeding the platform ultra-accurate, first-party data in real-time. 

The result? Better ad targeting, smarter optimization, and, yep, more sales at a lower cost. Just look at what Harvest Hosts saw after implementing Super CAPI.

What makes Super CAPI so impactful? 

It connects the dots between your site visitors and Meta Ads without relying on outdated tracking methods. And for our customers, Super CAPI has delivered more wins than Santa on Christmas Eve. 

If you’re tired of wasting budget on guesses, this feature has you covered.

Learn more about Super CAPI >> 

3. Customer Journey Tracking: All the Insights, None of the Guesswork

Tracking visitors is cool but tracking their entire journey? That’s next-level! 

Customer journey tracking was one of the first features we launched in 2024 and it’s been getting better ever since. 

This feature doesn’t just show you who’s visiting, it maps out their entire experience across your website, Klaviyo, and Shopify. Did they open your email? Add something to their cart? Finally make that purchase? 

You’ll see it all in the Customers.ai dashboard, clear as day.

What makes this so amazing is the fact that it brings CDP-level capabilities to everyone – not just the Amazons and Walmarts of the world. 

Now, even smaller ecommerce brands can tap into serious data power, build custom audience segments, and deliver hyper-personalized experiences.

Think smarter targeting, better engagement, and ultimately, more conversions. And the best part? It’s all wrapped up in one easy-to-use dashboard. No reindeer required.

4. Shopify Visitor ID + Remarketing App: Integration That Delivers

This year, we unwrapped something big – a direct integration with Shopify. No small feat.

With the Shopify Visitor ID + Remarketing App, you’re not just figuring out who’s lurking on your site, you’re turning those “just browsing” moments into cha-ching moments. 

The app connects seamlessly with your Shopify store, identifying visitors, tracking what they’re checking out, and syncing that data to fuel remarketing campaigns that actually hit the mark. Abandoned carts? Solved. Repeat shoppers? Targeted with precision. It’s like having a direct line to Santa’s list but for your customers.

Why does this matter? 

Because now, brands of all sizes can use visitor data like pros. No more guessing games or generic campaigns. You get real insights about your audience and can use them to create tailored offers that convert.

Whether it’s holiday shoppers or year-round browsers, this app gives you everything you need to keep customers coming back for more. Consider it your ecommerce gift that keeps on giving.

Learn more about the Shopify Visitor ID & Remarketing App >>

5. Klaviyo Revenue Reporting: ROI You Can See

Launched in Q1 and only getting better, Klaviyo Revenue Reporting lets you see exactly how much revenue your Customers.ai contacts are driving after syncing with Klaviyo.

Here’s how it works – we identify your visitors, drop them into your email flows, and now you can track every dollar they bring in – straight from your Customers.ai dashboard. 

Sales from triggered emails? Check. 

Revenue from return visitors Klaviyo can’t track? Double check.

Attribution doesn’t have to be a mystery anymore. With this feature, you can finally see what’s working (so you can keep doing it) and what’s not (so you can stop wasting budget). 

The result? True ROI, no guesswork.

New case study coming in: $32,900 in attributable revenue post email send, 4.5x return on investment, and 14,000 high-intent contacts identified. The best part? The agency responsible is bringing on more clients. Happy holidays to us. pic.twitter.com/j4M2Ibjhyw — CustomersAI (@CustomersAI) December 18, 2024

It’s like the best holiday gift. Total clarity and confidence in your data, all tied up with a bow. 

See how Customers.ai is helping improve flow rates in Klaviyo >>

6. Google Ads Custom Audience Integration: Your Ads, Smarter

2024 was the year we made Google Ads way more powerful for our customers. Previously, we didn’t have a direct integration – but now? Game on.

No more manual list uploads, no more worrying about privacy compliance, just better targeting.

With our Google Ads integration, you can sync your Customers.ai data directly into Google Ads audiences, making it easier to retarget existing customers or find new ones who match your best buyers. It’s all automatic, so you can focus on creating killer campaigns instead of wrangling spreadsheets.

And the privacy concerns? Handled. 

Your data stays secure, and you’re still fully compliant with all the latest rules. It’s targeting made simple, efficient, and way less stressful. Happy holidays indeed.

If you’ve been looking for a reason to get more out of your Google Ads budget, this integration makes it effortless. Consider it one more way we’re helping you crush your marketing goals in 2025. 

Learn more about Google Ads Custom Match audiences >>

7. Identity Resolution & Data Enrichment API

You know what’s great? Having a stack of data at your fingertips. You know what’s even better? Making that data work exactly the way you need it to. 

That’s where our Identity Resolution API comes in.

This feature lets you take visitor identification to the next level by integrating it directly into your own systems. 

Want to enrich your CRM with visitor insights? Done. 

Building custom dashboards? No problem. 

With the API, you’re in total control of where your data goes and how it powers your marketing.

And since it’s the holidays, here’s the ultimate gift – Customers.ai is the only platform offering this level of visitor identification through an API. 

Your competitors will be wishing they were on your nice list.

Learn more about our visitor identification API >> 

8. Postcards to Anonymous Visitors: Because Some Shoppers Love Snail Mail

In a perfect world, every shopper would click your ads, open your emails, and hit “Buy Now” on the first visit. But we all know that’s fantasy land. 

For those visitors ghosting your digital campaigns, we added a way to send a personal touch – postcards. Now available directly through Customers.ai.

Just like we do with emails and ads, we use enriched visitor data to send customized postcards straight to your prospects’ mailboxes. It’s simple, it’s effective, and it’s the perfect way to re-engage shoppers who’ve gone radio silent online.

Why is this such a big deal? 

Direct mail has an average open rate of 80-90%. Compare that to 20-30% for email and you can see there are some real benefits here. 

And hey, with the holidays here and everyone checking their mail for holiday cards, there’s no better time to show up where your customers least expect it. 

Learn more about our direct mail integration >> 

9. Drag-and-Drop Email Builder: Beautiful Emails, Zero Fuss

Who has time for clunky email editors? Not you and definitely not us. 

That’s why we rolled out the Drag-and-Drop Email Builder, making it ridiculously easy to create gorgeous, on-brand emails directly within Customers.ai.

From adding images and logos to customizing layouts, it’s all as simple as click, drag, and done.

And the results? Stellar! 

One of our partners, Lilikoi Agency, used the editor to create a welcome email for Solaris Solar, which hit a 37.5% open rate and a 9.9% click rate. Not too shabby, right?

New feature alert! The new and improved drag-and-drop email editor. Our partner Lilikoi Agency used it to set up an on-brand welcome email to high-intent visitors for Solaris Solar. The result? A 37.5% open rate and 9.9% click rate. pic.twitter.com/8Gs8zERIN7 — CustomersAI (@CustomersAI) December 4, 2024

So, if you’ve been avoiding using the email editor, this one’s for you. It’s fast, intuitive, and just what you need to spread a little holiday cheer straight to your customers’ inboxes.

10. ROI Tracking for Lead Gen: Finally, Clarity

One of our newest features, rolled out in November, is already making waves – ROI Tracking for Lead Gen. 

Lead generation ROI has always been a bit of a mystery. The sales cycle can drag on, forms get involved, and tracking conversions feels like trying to untangle holiday lights.

Not anymore. 

Customers.ai now tracks and reports on lead gen conversions, giving you a clear view of what’s working and what’s not. We identify your anonymous visitors, you remarket to them through emails or ads, and when they convert on your lead gen forms, we connect the dots and report it back to you.

Sounds simple enough, right? 

Whether you’re in B2B, insurance, finance, or automotive, this feature gives you tangible ROI insights on your lead generation efforts. Just in time for the new year.

Wrapping Up 2024: A Year of Big Moves

Whew! What a year it’s been! 

Rolling out one or two major features is a big deal, but ten? That’s a whole new level of hustle and we’re here for it. 

From helping businesses identify return visitors and track customer journeys to making email campaigns easier and more effective, these updates have made a real difference for us and, most importantly, for you.

Our customers are seeing incredible results, staying with us longer, and using these tools to take their marketing to places we couldn’t have imagined a year ago. 

And while we’re all about celebrating the wins from 2024, trust us when we say… you ain’t seen nothing yet!

We’ve got even bigger things planned for 2025 and we can’t wait to show you what’s coming.

So here’s to all the success we’ve shared this year and to even more wins in the next!


Important Next Steps

See what targeted outbound marketing is all about. Capture and engage your first 500 website visitor leads with Customers.ai X-Ray website visitor identification for free.

Talk and learn about sales outreach automation with other growth enthusiasts. Join Customers.ai Island, our Facebook group of 40K marketers and entrepreneurs who are ready to support you.

Advance your marketing performance with Sales Outreach School, a free tutorial and training area for sales pros and marketers.

The post 2024 Year in Review: Features, Features & More Features! appeared first on Customers.ai.

GitHub’s AI Programming Copilot Goes Free for VS Code Developers

Software development presents numerous challenges, from debugging complex code to navigating legacy systems and adapting to rapidly evolving technologies. These obstacles can hamper productivity, increase error rates, and steepen the learning curve for developers. While AI tools offer promising solutions, high subscription costs and limited accessibility have often excluded many, especially students and open-source contributors. GitHub’s latest announcement offers a significant step toward leveling the playing field for developers worldwide.

GitHub is Making Its AI Programming Copilot Free for VS Code Developers

GitHub has revealed that its AI-powered coding assistant, Copilot, will now be available for free to all developers using Visual Studio Code (VS Code). Originally launched in 2021, Copilot enhances coding by providing intelligent suggestions, completing lines of code, and even generating entire functions. By offering a free tier, GitHub is making AI-driven programming assistance more accessible and inclusive.

Although the free version comes with certain limitations, such as usage caps and restricted features, it retains the core functionalities that make Copilot a powerful tool. Integrating this free version with VS Code—one of the most popular integrated development environments (IDEs)—ensures that developers can seamlessly access the tool within their existing workflows.

Technical Details and Benefits

At its core, Copilot is powered by OpenAI Codex, a machine learning model fine-tuned specifically for programming tasks. By leveraging natural language processing (NLP), Copilot provides context-aware suggestions, enabling developers to write code more efficiently and with fewer errors.

One of Copilot’s key capabilities is its ability to generate boilerplate code, saving developers valuable time on repetitive tasks. Users can simply describe what they need in plain language, and Copilot will generate functional code snippets, often accompanied by comments and optimized logic. This is particularly useful for junior developers learning best practices and for experienced programmers tackling tight deadlines.
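To make the workflow concrete, here is a hand-written approximation of the kind of exchange described above: a developer writes a one-line natural-language comment, and the assistant fills in the boilerplate. This is an illustrative sketch, not actual Copilot output.

```python
import csv
import io

# Prompt comment a developer might write:
# "parse a CSV string into a list of row dictionaries"
def parse_csv(text: str) -> list[dict]:
    """Plausible generated boilerplate: read CSV text, one dict per row."""
    return list(csv.DictReader(io.StringIO(text)))

rows = parse_csv("name,age\nAda,36\nAlan,41\n")
```

In practice the developer reviews and edits the suggestion; the value is in skipping the repetitive scaffolding, not in accepting code unseen.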

Moreover, Copilot’s contextual understanding allows it to adapt to the project’s existing codebase, ensuring that its suggestions align with the overall structure and style. Its broad support for multiple programming languages and frameworks enhances its versatility, making it a valuable asset for solo developers and collaborative teams alike.

Preliminary data from GitHub highlights Copilot’s impact on developer productivity. Studies suggest that users have experienced a 50% reduction in time spent on repetitive coding tasks. Additionally, Copilot’s intelligent suggestions have contributed to a reduction in initial implementation errors by identifying potential issues early.

The introduction of a free tier is anticipated to expand Copilot’s user base, particularly among students, hobbyists, and open-source contributors. Early feedback emphasizes the tool’s dual role as a learning resource and a productivity booster. By lowering barriers to access, GitHub is enabling more developers to benefit from AI-driven assistance, fostering a culture of experimentation and growth.

Conclusion

GitHub’s decision to make Copilot free for VS Code developers is a noteworthy step toward democratizing access to advanced AI tools. By addressing common challenges in software development, this initiative has the potential to reshape how developers approach their work. While the free tier includes some limitations, it demonstrates GitHub’s commitment to inclusivity and innovation.

As AI technology continues to evolve, tools like Copilot will become increasingly integral to the development process. Whether streamlining workflows, enhancing learning, or enabling more ambitious projects, Copilot’s availability marks a significant milestone in the journey toward a more collaborative and productive future in tech. For developers at all levels, this is an opportunity to explore the transformative potential of AI in programming.

Check out the Details. All credit for this research goes to the researchers of this project. Also, don’t forget to follow us on Twitter and join our Telegram Channel and LinkedIn Group. Don’t Forget to join our 60k+ ML SubReddit.

Trending: LG AI Research Releases EXAONE 3.5: Three Open-Source Bilingual Frontier AI-level Models Delivering Unmatched Instruction Following and Long Context Understanding for Global Leadership in Generative AI Excellence….
The post GitHub’s AI Programming Copilot Goes Free for VS Code Developers appeared first on MarkTechPost.

OpenAI Just Announced API Access to o1 (Advanced Reasoning Model)

Artificial intelligence has made significant progress over the years, yet certain challenges remain, particularly in advanced reasoning. Many AI models struggle with generalization, often falling short in scenarios requiring logical deduction, multi-step decision-making, or nuanced understanding. These limitations are particularly evident in areas such as financial forecasting, medical diagnostics, and complex programming tasks. Developers and researchers have long sought a model capable of addressing these gaps without extensive customization. OpenAI’s recently introduced o1 model aims to address these persistent challenges.

OpenAI Just Announced API Access to o1 (Advanced Reasoning Model)

OpenAI has announced API access to its o1 model, a system designed to excel in advanced reasoning tasks. This move allows developers to integrate the model’s reasoning capabilities into a variety of applications. OpenAI describes o1 as a model focused on solving multi-step reasoning problems while maintaining contextual understanding and accuracy.

Initially available to select developers, the model is being rolled out gradually to ensure it meets usability and reliability standards. This careful approach enables developers to explore its potential in areas like intelligent tutoring systems, virtual assistants, and other reasoning-intensive applications.

Technical Details and Benefits

The o1 model builds on advancements in transformer architectures and refined pretraining techniques. It leverages a dataset specifically curated for reasoning tasks, optimizing it for logic-heavy domains. Key features of the model include:

Multi-step Reasoning: o1 is designed to handle problems requiring several layers of reasoning, such as puzzles or strategic planning.

Contextual Understanding: The model shows improved ability to retain and apply context in extended conversations or intricate scenarios.

Precision in Reasoning: With an architecture tuned for accuracy, o1 aims to minimize errors, even in ambiguous or complex queries.

Customizability: Through OpenAI’s “function calling” feature, developers can adapt the model to specific use cases, enhancing its versatility.

These features make the model applicable to diverse industries. For instance, it can support legal analysis, enhance educational tools focused on problem-solving, or improve financial modeling by adapting to dynamic market conditions.
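As a rough sketch of the “function calling” customization mentioned above, the snippet below only builds an OpenAI-style request payload (no network call is made). The tool name `lookup_rate` and its schema are hypothetical examples, not part of any official API.

```python
def build_request(model: str, user_message: str) -> dict:
    """Assemble a chat request that advertises one callable tool to the model."""
    tool = {
        "type": "function",
        "function": {
            "name": "lookup_rate",  # hypothetical tool for illustration
            "description": "Fetch the current interest rate for a market.",
            "parameters": {
                "type": "object",
                "properties": {"market": {"type": "string"}},
                "required": ["market"],
            },
        },
    }
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "tools": [tool],
    }

payload = build_request("o1", "What is the current EUR swap rate?")
```

The idea is that the model, seeing the tool schema, can respond with a structured call to `lookup_rate` instead of free text, which the application then executes.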

Results and Insights

Early evaluations suggest that o1 offers meaningful improvements over previous models. In benchmarks such as the Big-Bench Reasoning Challenge (BBRC) and the AI2 Reasoning Challenge (ARC), o1 demonstrated enhanced performance, particularly in multi-step logic tasks. It reduced errors in contextual understanding and showed notable gains in reasoning accuracy.

Case studies further illustrate its potential. In healthcare, the model has been tested for diagnostic reasoning, identifying complex disease patterns with promising accuracy. Similarly, in software development, it has been used to debug and optimize code, helping developers save significant time on error analysis. These results highlight the model’s practicality in solving real-world challenges.

Conclusion

OpenAI’s o1 model addresses some of the key challenges in advanced reasoning, offering developers a tool to navigate complex tasks more effectively. By making the model accessible through its API, OpenAI is opening up opportunities for innovation across industries. As the rollout continues, the model’s capabilities will likely evolve based on user feedback, ensuring it remains a valuable resource for a wide range of applications. The o1 model represents a thoughtful step forward in the pursuit of AI systems capable of reasoning with depth and precision.

Check out the Details. All credit for this research goes to the researchers of this project.

The post OpenAI Just Announced API Access to o1 (Advanced Reasoning Model) appeared first on MarkTechPost.

Alibaba AI Research Releases CosyVoice 2: An Improved Streaming Speech Synthesis Model

Speech synthesis technology has made notable strides, yet challenges remain in delivering real-time, natural-sounding audio. Common obstacles include latency, pronunciation accuracy, and speaker consistency—issues that become critical in streaming applications where responsiveness is paramount. Additionally, handling complex linguistic inputs, such as tongue twisters or polyphonic words, often exceeds the capabilities of existing models. To address these issues, researchers at Alibaba have unveiled CosyVoice 2, an enhanced streaming TTS model designed to resolve these challenges effectively.

Introducing CosyVoice 2

CosyVoice 2 builds upon the foundation of the original CosyVoice, bringing significant upgrades to speech synthesis technology. This enhanced model focuses on refining both streaming and offline applications, incorporating features that improve flexibility and precision across diverse use cases, including text-to-speech and interactive voice systems.

Key advancements in CosyVoice 2 include:

Unified Streaming and Non-Streaming Modes: Seamlessly adaptable to various applications without compromising performance.

Enhanced Pronunciation Accuracy: A reduction of pronunciation errors by 30%-50%, improving clarity in complex linguistic scenarios.

Improved Speaker Consistency: Ensures stable voice output across zero-shot and cross-lingual synthesis tasks.

Advanced Instruction Capabilities: Offers precise control over tone, style, and accent through natural language instructions.

Innovations and Benefits

CosyVoice 2 integrates several technological advancements to enhance its performance and usability:

Finite Scalar Quantization (FSQ): Replacing traditional vector quantization, FSQ optimizes the use of the speech token codebook, improving semantic representation and synthesis quality.

Simplified Text-Speech Architecture: Leveraging pre-trained large language models (LLMs) as its backbone, CosyVoice 2 eliminates the need for additional text encoders, streamlining the model while boosting cross-lingual performance.

Chunk-Aware Causal Flow Matching: This innovation aligns semantic and acoustic features with minimal latency, making the model suitable for real-time speech generation.

Expanded Instructional Dataset: With over 1,500 hours of training data, the model enables granular control over accents, emotions, and speech styles, allowing for versatile and expressive voice generation.
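To give a feel for the first item, here is a toy sketch of finite scalar quantization: each latent dimension is squashed to a bounded range and rounded to one of a small, fixed set of levels, so the codebook is implicit rather than learned. This is a minimal illustration of the idea, not CosyVoice 2’s actual implementation.

```python
import math

def fsq_quantize(vec: list[float], levels: int = 5) -> list[int]:
    """Map each value through tanh to (-1, 1), then round to one of `levels` bins."""
    half = (levels - 1) / 2
    # Integer codes in [-half, half]; the full codebook size is levels ** len(vec)
    return [round(math.tanh(x) * half) for x in vec]

codes = fsq_quantize([0.1, -2.0, 3.5])
```

Because every bin is used by construction, FSQ sidesteps the codebook-collapse problems of learned vector quantization, which is one reason it can improve codebook utilization.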

Performance Insights

Extensive evaluations of CosyVoice 2 underscore its strengths:

Low Latency and Efficiency: Response times as low as 150ms make it well-suited for real-time applications like voice chat.

Improved Pronunciation: The model achieves significant enhancements in handling rare and complex linguistic constructs.

Consistent Speaker Fidelity: High speaker similarity scores demonstrate the ability to maintain naturalness and consistency.

Multilingual Capability: Strong results on Japanese and Korean benchmarks highlight its robustness, though challenges remain with overlapping character sets.

Resilience in Challenging Scenarios: CosyVoice 2 excels in difficult cases such as tongue twisters, outperforming previous models in accuracy and clarity.

Conclusion

CosyVoice 2 is a thoughtful advance over its predecessor, addressing key limitations in latency, accuracy, and speaker consistency with scalable solutions. The integration of advanced features like FSQ and chunk-aware flow matching offers a balanced approach to performance and usability. While opportunities remain to expand language support and refine complex scenarios, CosyVoice 2 lays a strong foundation for the future of speech synthesis. By bridging offline and streaming modes, it delivers high-quality, real-time audio generation for diverse applications.

Check out the Paper, Hugging Face Page, Pre-Trained Model, and Demo. All credit for this research goes to the researchers of this project.

The post Alibaba AI Research Releases CosyVoice 2: An Improved Streaming Speech Synthesis Model appeared first on MarkTechPost.

How Fastweb fine-tuned the Mistral model using Amazon SageMaker HyperPod

This post is co-written with Marta Cavalleri and Giovanni Germani from Fastweb, and Claudia Sacco and Andrea Policarpi from BIP xTech.
AI’s transformative impact extends throughout the modern business landscape, with telecommunications emerging as a key area of innovation. Fastweb, one of Italy’s leading telecommunications operators, recognized the immense potential of AI technologies early on and began investing in this area in 2019. With a vision to build a large language model (LLM) trained on Italian data, Fastweb embarked on a journey to make this powerful AI capability available to third parties.
Training an LLM is a compute-intensive and complex process, which is why Fastweb, as a first step in their AI journey, used AWS generative AI and machine learning (ML) services such as Amazon SageMaker HyperPod.
SageMaker HyperPod can provision and maintain large-scale, resilient compute clusters powered by thousands of accelerators such as AWS Trainium chips and NVIDIA H200 and H100 graphics processing units (GPUs). Its flexibility, however, allowed Fastweb to deploy a small, agile, on-demand cluster, enabling efficient resource utilization and cost management that aligned well with the project’s requirements.
In this post, we explore how Fastweb used cutting-edge AI and ML services to embark on their LLM journey, overcoming challenges and unlocking new opportunities along the way.
Fine-tuning Mistral 7B on AWS
Fastweb recognized the importance of developing language models tailored to the Italian language and culture. To achieve this, the team built an extensive Italian language dataset by combining public sources and acquiring licensed data from publishers and media companies. Using this data, in their first experiment with LLM training, Fastweb fine-tuned Mistral 7B, a state-of-the-art LLM, successfully adapting it to tasks such as summarization, question answering, and creative writing in Italian. The fine-tuned model applies a nuanced understanding of Italian culture to its responses, producing output that is contextually appropriate and culturally sensitive.
The team opted for fine-tuning on AWS. This strategic decision was driven by several factors:

Efficient data preparation – Building a high-quality pre-training dataset is a complex task, involving assembling and preprocessing text data from various sources, including web sources and partner companies. Because the final, comprehensive pre-training dataset was still under construction, it was essential to begin with an approach that could adapt existing models to Italian.
Early results and insights – Fine-tuning allowed the team to achieve early results in training models on the Italian language, providing valuable insights and preliminary Italian language models. This enabled the engineers to iteratively improve the approach based on initial outcomes.
Computational efficiency – Fine-tuning requires significantly less computational power and less time to complete compared to a complete model pre-training. This approach streamlined the development process and allowed for a higher volume of experiments within a shorter time frame on AWS.

To facilitate the process, the team created a comprehensive dataset encompassing a wide range of tasks, constructed by translating existing English datasets and generating synthetic elements. The dataset was stored in an Amazon Simple Storage Service (Amazon S3) bucket, which served as a centralized data repository. During the training process, the SageMaker HyperPod cluster was connected to this S3 bucket, enabling effortless retrieval of the dataset elements as needed.
The integration of Amazon S3 and the SageMaker HyperPod cluster exemplifies the power of the AWS ecosystem, where various services work together seamlessly to support complex workflows.
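A minimal sketch of this retrieval pattern might look like the following, assuming a hypothetical sharded-JSONL layout in the bucket (the shard naming scheme and helper names are invented for illustration, not Fastweb’s actual setup).

```python
import os

def shard_keys(prefix: str, num_shards: int) -> list[str]:
    """Build S3 object keys for a sharded JSONL dataset (hypothetical layout)."""
    return [f"{prefix}/shard-{i:05d}.jsonl" for i in range(num_shards)]

def download_shards(bucket: str, prefix: str, num_shards: int, dest_dir: str) -> list[str]:
    """Pull each shard from S3 onto local disk and return the local file paths."""
    import boto3  # imported here so the key-building helper runs without the AWS SDK
    s3 = boto3.client("s3")
    local_paths = []
    for key in shard_keys(prefix, num_shards):
        dest = os.path.join(dest_dir, os.path.basename(key))
        s3.download_file(bucket, key, dest)  # streams the object to local disk
        local_paths.append(dest)
    return local_paths
```

On a HyperPod cluster, each training node would typically fetch only the shards it needs, keeping S3 as the single source of truth for the dataset.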
Overcoming data scarcity with translation and synthetic data generation
When fine-tuning a custom version of the Mistral 7B LLM for the Italian language, Fastweb faced a major obstacle: high-quality Italian datasets were extremely limited or unavailable. To tackle this data scarcity challenge, Fastweb had to build a comprehensive training dataset from scratch to enable effective model fine-tuning.
While establishing strategic agreements to acquire licensed data from publishers and media companies, Fastweb employed two main strategies to create a diverse and well-rounded dataset: translating open source English training data into Italian and generating synthetic Italian data using AI models.
To use the wealth of information available in English, Fastweb translated open source English training datasets into Italian. This approach made valuable data accessible and relevant for Italian language training. Both LLMs and open source translation tools were used for this process.
The open source Argos Translate tool was used for bulk translation of datasets with simpler content. Although LLMs offer superior translation quality, Argos Translate is free, extremely fast, and well-suited for efficiently handling large volumes of straightforward data. For complex datasets where accuracy was critical, LLMs were employed to provide high-quality translations.
To further enrich the dataset, Fastweb generated synthetic Italian data using LLMs. This involved creating a variety of text samples covering a wide range of topics and tasks relevant to the Italian language. High-quality Italian web articles, books, and other texts served as the basis for training the LLMs to generate authentic-sounding synthetic content that captured the nuances of the language.
The resulting sub-datasets spanned diverse subjects, including medical information, question-answer pairs, conversations, web articles, science topics, and more. The tasks covered were also highly varied, encompassing question answering, summarization, creative writing, and others.
Each subset generated through translation or synthetic data creation underwent meticulous filtering to maintain quality and diversity. A similarity check was performed to deduplicate the data; if two elements were found to be too similar, one was removed. This step was crucial in maintaining variability and preventing bias from repetitive or overly similar content.
The deduplication process involved embedding dataset elements using a text embedder, then computing cosine similarity between the embeddings to identify similar elements. Meta’s FAISS library, renowned for its efficiency in similarity search and clustering of dense vectors, was used as the underlying vector database due to its ability to handle large-scale datasets effectively.
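As a concrete illustration of this step, the following minimal sketch (toy vectors and a hypothetical 0.95 threshold; the actual pipeline used a text embedder and FAISS rather than this brute-force loop) deduplicates elements by pairwise cosine similarity:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def deduplicate(embeddings, threshold=0.95):
    """Keep an element only if it is not too similar to any already-kept element."""
    kept = []
    for idx, emb in enumerate(embeddings):
        if all(cosine(emb, embeddings[k]) < threshold for k in kept):
            kept.append(idx)
    return kept

# Toy embeddings: items 0 and 1 are near-duplicates, item 2 is distinct.
embs = [[1.0, 0.0, 0.1], [0.99, 0.01, 0.12], [0.0, 1.0, 0.0]]
print(deduplicate(embs))  # → [0, 2]
```

At the scale of hundreds of thousands of elements, the quadratic pairwise comparison is replaced by an approximate nearest-neighbor index such as FAISS, which is what made the filtering tractable.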
After filtering and deduplication, the remaining subsets were postprocessed and combined to form the final fine-tuning dataset, comprising 300,000 training elements. This comprehensive dataset enabled Fastweb to effectively fine-tune their custom version of the Mistral 7B model, achieving high performance and diversity across a wide range of tasks and topics.
All data generation and processing steps were run in parallel directly on the SageMaker HyperPod cluster nodes within a single working environment, highlighting the cluster’s versatility for tasks beyond model training.
The following diagram illustrates two distinct data pipelines for creating the final dataset: the upper pipeline uses translations of existing English datasets into Italian, and the lower pipeline employs custom generated synthetic data.

The computational cost of training an LLM
The computational cost of training LLMs scales approximately with the number of parameters and the amount of training data. As a general rule, for each model parameter being trained, approximately 24 bytes of memory are required. This means that to fully fine-tune a 7 billion parameter model like Mistral 7B, at least 156 GB of hardware memory is necessary, not including the additional overhead of loading training data.
The following table provides additional examples.

LLM Model Size vs. Training Memory

Number of Parameters | Memory Requirement
---------------------|-------------------
500 million          | 12 GB
1 billion            | 23 GB
2 billion            | 45 GB
3 billion            | 67 GB
5 billion            | 112 GB
7 billion            | 156 GB
10 billion           | 224 GB

Parameter-efficient fine-tuning (PEFT) methods minimize the number of trainable parameters, whereas quantization reduces the number of bits per parameter, often with minimal negative impact on the final training results.
Despite these memory-saving techniques, fine-tuning large models still demands substantial GPU memory and extended training times. This makes distributed training essential, allowing the workload to be shared across multiple GPUs, thereby enabling the efficient handling of such large-scale computational tasks.
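The 24-bytes-per-parameter rule of thumb can be expressed as a quick estimator; the values in the table above approximately match this formula when memory is counted in binary (2^30-byte) gigabytes. As noted, this simplification ignores the overhead of loading training data:

```python
BYTES_PER_PARAM = 24  # weights, gradients, and optimizer states combined

def full_finetune_memory_gb(num_params: float) -> float:
    """Approximate GPU memory needed to fully fine-tune a model,
    in binary gigabytes (2**30 bytes), matching the table above."""
    return num_params * BYTES_PER_PARAM / 2**30

print(round(full_finetune_memory_gb(7e9)))   # Mistral 7B → 156
print(round(full_finetune_memory_gb(10e9)))  # → 224
```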
The following table and figure illustrate the allocation of GPU memory during each phase of LLM training.

Solution overview
Training LLMs often requires significant computational resources that can exceed the capabilities of a single GPU. Distributed training is a powerful technique that addresses this challenge by distributing the workload across multiple GPUs and nodes, enabling parallel processing and reducing training time. SageMaker HyperPod simplifies the process of setting up and running distributed training jobs, providing preconfigured environments and libraries specifically designed for this purpose.
There are two main techniques for distributed training: data parallelization and model parallelization. Data parallelization involves distributing the training data across multiple GPUs, whereas model parallelization splits the model itself across different GPUs.
To take advantage of distributed training, a cluster of interconnected GPUs, often spread across multiple physical nodes, is required. SageMaker HyperPod allows for both data and model parallelization techniques to be employed simultaneously, maximizing the available computational resources. Also, SageMaker HyperPod provides resilience through features like automatic fault detection and recovery, which are crucial for long-running training jobs. SageMaker HyperPod allows for the creation of personalized Conda environments, enabling the installation of necessary libraries and tools for distributed training.
One popular library for implementing distributed training is DeepSpeed, a Python optimization library that handles distributed training and makes it memory-efficient and fast by enabling both data and model parallelization. The choice to use DeepSpeed was driven by the availability of an extensive, already-developed code base, ready to be employed for training experiments. The high flexibility and environment customization capabilities of SageMaker HyperPod made it possible to create a personalized Conda environment with all the necessary libraries installed, including DeepSpeed.
The following diagram illustrates the two key parallelization strategies offered by DeepSpeed: data parallelism and model parallelism. Data parallelism involves replicating the entire model across multiple devices, with each device processing a distinct batch of training data. In contrast, model parallelism distributes different parts of a single model across multiple devices, enabling the training of large models that exceed the memory capacity of a single device.

To help meet the demanding computational requirements of training LLMs, we used the power and flexibility of SageMaker HyperPod clusters, orchestrated with Slurm. While HyperPod also supports orchestration with Amazon EKS, our research team had prior expertise with Slurm. The cluster configuration was tailored to our specific training needs, providing optimal resource utilization and cost-effectiveness.
The SageMaker HyperPod cluster architecture consisted of a controller machine that orchestrated the training job’s coordination and resource allocation. The training tasks ran on two compute nodes, g5.12xlarge instances equipped with high-performance GPUs. These compute nodes handled the bulk of the computational workload, using their GPUs to accelerate the training process.
The AWS managed high-performance Lustre file system (Amazon FSx for Lustre) mounted on the nodes provided high-speed data access and transfer rates, which are essential for efficient training operations.
SageMaker HyperPod is used to launch large clusters with thousands of GPUs for pre-training LLMs, but one of its key advantages is its flexibility: it also allows for the creation of small, agile, on-demand clusters. This versatility made it possible to use resources only when needed, avoiding unnecessary costs.
For the DeepSpeed configuration, we followed the standard recommended setup, enabling data and model parallelism across the two g5.12xlarge nodes of the cluster, for a total of 8 GPUs.
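The post doesn’t reproduce Fastweb’s exact DeepSpeed configuration; as an illustrative sketch, a minimal JSON along these lines (hypothetical batch sizes, ZeRO stage, and precision settings) enables memory-efficient data parallelism across the 8 GPUs. Note that 4 micro-batches x 2 accumulation steps x 8 GPUs gives the global batch size of 64:

```json
{
  "train_batch_size": 64,
  "train_micro_batch_size_per_gpu": 4,
  "gradient_accumulation_steps": 2,
  "bf16": { "enabled": true },
  "zero_optimization": {
    "stage": 3,
    "overlap_comm": true
  },
  "gradient_clipping": 1.0
}
```

ZeRO stage 3 partitions optimizer states, gradients, and parameters across GPUs, which is what makes a 7B full fine-tune fit within the cluster’s aggregate memory.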
Although more advanced techniques were available, such as offloading some computation to the CPU during training, our cluster was sized with a comfortable GPU memory margin. With 192 GiB (206 GB) of overall GPU memory available, even accounting for the additional GPU memory needed to keep dataset batches in memory during training, we had ample resources to train a 7B parameter model without these advanced techniques. The following figure describes the infrastructure setup of our training solution.

Training results and output examples
After completing the training process, Fastweb’s fine-tuned language model demonstrated a significant performance improvement on Italian language tasks compared to the base model. Evaluated on an internal benchmark dataset, the fine-tuned model achieved an average accuracy increase of 20% across a range of tasks designed to assess its general understanding of the Italian language.
The benchmark tasks focused on three key areas: question answering, common sense reasoning, and next word prediction. Question answering tasks tested the model’s ability to comprehend and provide accurate responses to queries in Italian. Common sense reasoning evaluated the model’s grasp of common sense knowledge and its capacity to make logical inferences based on real-world scenarios. Next word prediction assessed the model’s understanding of language patterns and its ability to predict the most likely word to follow in a given context.
To evaluate the fine-tuned model’s performance, we initiated our interaction by inquiring about its capabilities. The model responded by enumerating its primary functions, emphasizing its ability to address Fastweb-specific topics. The response was formulated in correct Italian with a very natural syntax, as illustrated in the following example.

Afterwards, we asked the model to generate five titles for a presentation on the topic of AI.

Just for fun, we asked what the most famous sandwich is. The model responded with a combination of typical Italian ingredients and added that there is a wide variety of choices.

Lastly, we asked the model to provide us with a useful link to understand the recent EU AI Act. The model provided a working link, along with a helpful description.

Conclusion
Using SageMaker HyperPod, Fastweb successfully fine-tuned the Mistral 7B model as a first step in their generative AI journey, significantly improving its performance on tasks involving the Italian language.
Looking ahead, Fastweb plans to also deploy their next models on Amazon Bedrock using the Custom Model Import feature. This strategic move will enable Fastweb to quickly build and scale new generative AI solutions for their customers, using the broad set of capabilities available on Amazon Bedrock.
By harnessing Amazon Bedrock, Fastweb can further enhance their offerings and drive digital transformation for their customers. This initiative aligns with Fastweb’s commitment to staying at the forefront of AI technology and fostering innovation across various industries.
With their fine-tuned language model running on Amazon Bedrock, Fastweb will be well-positioned to deliver cutting-edge generative AI solutions tailored to the unique needs of their customers. This will empower businesses to unlock new opportunities, streamline processes, and gain valuable insights, ultimately driving growth and competitiveness in the digital age.
Fastweb’s decision to use the Custom Model Import feature in Amazon Bedrock underscores the company’s forward-thinking approach and their dedication to providing their customers with the latest and most advanced AI technologies. This collaboration with AWS further solidifies Fastweb’s position as a leader in digital transformation and a driving force behind the adoption of innovative AI solutions across industries.
To learn more about SageMaker HyperPod, refer to Amazon SageMaker HyperPod and the Amazon SageMaker HyperPod workshop.

About the authors
Marta Cavalleri is the Manager of the Artificial Intelligence Center of Excellence (CoE) at Fastweb, where she leads teams of data scientists and engineers in implementing enterprise AI solutions. She specializes in AI operations, data governance, and cloud architecture on AWS.
Giovanni Germani is the Manager of Architecture & Artificial Intelligence CoE at Fastweb, where he leverages his extensive experience in Enterprise Architecture and digital transformation. With over 12 years in Management Consulting, Giovanni specializes in technology-driven projects across telecommunications, media, and insurance industries. He brings deep expertise in IT strategy, cybersecurity, and artificial intelligence to drive complex transformation programs.
Claudia Sacco is an AWS Professional Solutions Architect at BIP xTech, collaborating with Fastweb’s AI CoE and specialized in architecting advanced cloud and data platforms that drive innovation and operational excellence. With a sharp focus on delivering scalable, secure, and future-ready solutions, she collaborates with organizations to unlock the full potential of cloud technologies. Beyond her professional expertise, Claudia finds inspiration in the outdoors, embracing challenges through climbing and trekking adventures with her family.
Andrea Policarpi is a Data Scientist at BIP xTech, collaborating with Fastweb’s AI CoE. With a strong foundation in computer vision and natural language processing, he is currently exploring the world of Generative AI and leveraging its powerful tools to craft innovative solutions for emerging challenges. In his free time, Andrea is an avid reader and enjoys playing the piano to relax.
Giuseppe Angelo Porcelli is a Principal Machine Learning Specialist Solutions Architect for Amazon Web Services. With several years of software engineering and an ML background, he works with customers of any size to understand their business and technical needs and design AI and ML solutions that make the best use of the AWS Cloud and the Amazon Machine Learning stack. He has worked on projects in different domains, including MLOps, computer vision, and NLP, involving a broad set of AWS services. In his free time, Giuseppe enjoys playing football.
Adolfo Pica has a strong background in cloud computing, with over 20 years of experience in designing, implementing, and optimizing complex IT systems and architectures and with a keen interest and hands-on experience in the rapidly evolving field of generative AI and foundation models. He has expertise in AWS cloud services, DevOps practices, security, data analytics and generative AI. In his free time, Adolfo enjoys following his two sons in their sporting adventures in taekwondo and football.
Maurizio Pinto is a Senior Solutions Architect at AWS, specialized in cloud solutions for telecommunications. With extensive experience in software architecture and AWS services, he helps organizations navigate their cloud journey while pursuing his passion for AI’s transformative impact on technology and society.

Using natural language in Amazon Q Business: From searching and creati …

Many enterprise customers across various industries are looking to adopt Generative AI to drive innovation, user productivity, and enhance customer experience. Generative AI–powered assistants such as Amazon Q Business can be configured to answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. Amazon Q Business understands natural language and allows users to receive immediate, permissions-aware responses from enterprise data sources with citations. This capability supports various use cases such as IT, HR, and help desk.
With custom plugins for Amazon Q Business, you can enhance the application environment to enable your users to use natural language to perform specific tasks related to third-party applications — such as Jira, Salesforce, and ServiceNow — directly from within their web experience chat.
Enterprises that have adopted ServiceNow can improve their operations and boost user productivity by using Amazon Q Business for various use cases, including incident and knowledge management. Users can search ServiceNow knowledge base (KB) articles and incidents in addition to being able to create, manage, and track incidents and KB articles, all from within their web experience chat.
In this post, we’ll demonstrate how to configure an Amazon Q Business application and add a custom plugin that gives users the ability to use a natural language interface provided by Amazon Q Business to query real-time data and take actions in ServiceNow. By the end of this hands-on session, you should be able to:

Create an Amazon Q Business application and integrate it with ServiceNow using a custom plugin.
Use natural language in your Amazon Q web experience chat to perform read and write actions in ServiceNow such as querying and creating incidents and KB articles in a secure and governed fashion.

Prerequisites
Before proceeding, make sure that you have the necessary AWS account permissions and services enabled, along with access to a ServiceNow environment with the required privileges for configuration.
AWS

Have an AWS account with administrative access. For more information, see Setting up for Amazon Q Business. For a complete list of AWS Identity and Access Management (IAM) roles for Amazon Q Business, see IAM roles for Amazon Q Business. Although we’re using admin privileges for the purpose of this post, it’s a security best practice to apply least privilege permissions and grant only the permissions required to perform a task.
Subscribe to the Amazon Q Business Pro plan which includes access to custom plugins to enable users to execute actions in third-party applications. For information on what is included in the tiers of user subscriptions, see Amazon Q Business pricing document.

ServiceNow

Obtain a ServiceNow Personal Developer Instance or use a clean ServiceNow developer environment. You will need an account that has admin privileges to perform the configuration steps in ServiceNow.

Solution overview
The following architecture diagram illustrates the workflow for Amazon Q Business web experience with enhanced capabilities to integrate it seamlessly with ServiceNow.

The implementation includes the following steps:

The solution begins with configuring Amazon Q Business using the AWS Management Console. This includes setting up the application environment, adding users to AWS IAM Identity Center, selecting the appropriate subscription tier, and configuring the web experience for users to interact with. The environment can optionally be configured to provide real-time data retrieval using a native retriever, which pulls information from indexed data sources, such as Amazon Simple Storage Service (Amazon S3), during interactions.
The next step involves adjusting the global controls and response settings for the application environment guardrails to allow Amazon Q Business to use its large language model (LLM) knowledge to generate responses when it cannot find responses from your connected data sources.
Integration with ServiceNow is achieved by setting up an OAuth Inbound application endpoint in ServiceNow, which authenticates and authorizes interactions between Amazon Q Business and ServiceNow. This involves creating an OAuth API endpoint in ServiceNow and using the web experience URL from Amazon Q Business as the callback URL. The setup makes sure that Amazon Q Business can securely perform actions in ServiceNow with the same scoped permissions as the user signing in to ServiceNow.
The final step of the solution involves enhancing the application environment with a custom plugin for ServiceNow using APIs defined in an OpenAPI schema. The plugin allows Amazon Q Business to securely interact with ServiceNow’s REST APIs, enabling operations such as querying, creating, and updating records dynamically and in real time.

Configuring the Amazon Q Business application
To create an Amazon Q Business application, sign in to the Amazon Q Business console. As a prerequisite to creating an Amazon Q Business application, follow the instructions in the Configuring an IAM Identity Center instance section. Amazon Q Business integrates with IAM Identity Center to enable managing user access to your Amazon Q Business application. This is the recommended method for managing human access to AWS resources and the method used for the purpose of this blog.
Amazon Q Business also supports identity federation through IAM. When you use identity federation, you can manage users with your enterprise identity provider (IdP) and use IAM to authenticate users when they sign in to Amazon Q Business.
Create and configure the Amazon Q Business application:

In the Amazon Q Business console, choose Application from the navigation pane and then choose Create application.
Enter the following information for your Amazon Q Business application:

Application name: Enter a name for quick identification, such as my-demo-application.
Service access: Select the Create and use a new service-linked role (SLR). A service-linked role is a unique type of IAM role that is linked directly to Amazon Q Business. Service-linked roles are predefined by Amazon Q Business and include the permissions that the service requires to call other AWS services on your behalf.
Choose Create.

 After creating your Amazon Q Business application environment, create and select the retriever and provision the index that will power your generative AI web experience. The retriever pulls data from the index in real time during a conversation. On the Select Retriever page:

Retrievers: Select Use native retriever.
Index provisioning: Select Starter, which is ideal for proof-of-concept or developer workloads. See Index types for more information.
Number of units: Enter 1. This indicates the capacity units that you want to provision for your index. Each unit is 20,000 documents.
Choose Next.

After you select a retriever for your Amazon Q Business application environment, you can optionally connect other data sources to it. Because a data source isn’t required for this session, we won’t configure one. For more information on connecting data sources to an Amazon Q Business application, see connecting data sources.

Choose Next.

As an account admin, you can add users to your IAM Identity Center instance from the Amazon Q Business console. After you add users or groups to an application environment, you can then choose the Amazon Q Business tier for each user or group. On the Add groups and users page:

Choose Add groups and users.
In the Add new users dialog box that opens, enter the details of the user. The details you must enter for a single user include: Username, First name, Last name, email address, Confirm email address, and Display name.
Choose Next and then Add. The user is automatically added to an IAM Identity Center directory and an email invitation to join Identity Center is sent to the email address provided.
After adding a user or group, choose the Amazon Q Business subscription tier for each user or group. From the Current subscription dropdown menu, select Q Business Pro.
For the Web experience service access, select Create and use a new service role.
Choose Create application.

Upon successful completion, Amazon Q Business returns a web experience URL that you can share with the users you added to your application environment. The Web experience URL (in this case: https://xxxxxxxx.chat.qbusiness.us-east-1.on.aws/) will be used when creating an OAuth application endpoint in ServiceNow. Note that your web experience URL will be different from the one shown here.

Enhancing an Amazon Q Business application with guardrails
By default, an Amazon Q Business application is configured to respond to user chat queries using only enterprise data. Because we didn’t configure a data source for the purpose of this post, you will use Admin controls and guardrails to allow Amazon Q to use its LLM world knowledge to generate responses when it cannot find responses from your connected data sources.
Configure the application guardrails:

From the Amazon Q Business console, choose Applications in the navigation pane. Select the name of your application from the list of applications.
From the navigation pane, choose Enhancements, and then choose Admin Controls and guardrails.
In Global Controls, choose Edit.
In Response settings under Application guardrails, select Allow Amazon Q to fall back to LLM knowledge.

Configuring ServiceNow
To allow Amazon Q Business to connect to your ServiceNow instance, you need to create an OAuth inbound application endpoint. OAuth-based authentication validates the identity of the client that attempts to establish a trust on the system by using an authentication protocol. For more information, see OAuth Inbound and Outbound authentication.
Create an OAuth application endpoint for external client applications to access the ServiceNow instance:

In the ServiceNow console, navigate to All, then System OAuth, then Application Registry and then choose New. On the interceptor page, select Create an OAuth API endpoint for external clients and then fill in the form with details for Name and Redirect URL. The other fields are automatically generated by the ServiceNow OAuth server.

The Redirect URL is the callback URL that the authorization server redirects to. Enter the web experience URL of your Amazon Q Business application environment (the client requesting access to the resource), appended with oauth/callback.
For this example, the URL is: https://xxxxxxxx.chat.qbusiness.us-east-1.on.aws/oauth/callback

For Auth Scope, set the value to useraccount. The scope API response parameter defines the amount of access granted by the access token, which means that the access token has the same rights as the user account that authorized the token. For example, if Abel Tuter authorizes an application by providing login credentials, then the resulting access token grants the token bearer the same access privileges as Abel Tuter.
Choose Submit.

This creates an OAuth client application record and generates a client ID and client secret, which Amazon Q Business needs to access the restricted resources on the instance. You will need this authentication information (client ID and client secret) in the following custom plugin configuration process.
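As a quick sanity check, the Redirect URL entered above is simply the web experience URL with oauth/callback appended; a minimal sketch using Python’s standard library (the instance subdomain is a placeholder):

```python
from urllib.parse import urljoin

# Web experience URL returned by the Amazon Q Business console (placeholder subdomain).
web_experience_url = "https://xxxxxxxx.chat.qbusiness.us-east-1.on.aws/"
redirect_url = urljoin(web_experience_url, "oauth/callback")
print(redirect_url)  # → https://xxxxxxxx.chat.qbusiness.us-east-1.on.aws/oauth/callback
```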

Enhancing the Amazon Q Business application environment with custom plugins for ServiceNow
To integrate with external applications, Amazon Q Business uses APIs, which are configured as part of the custom plugins.
Before creating a custom plugin, you need to create or edit an OpenAPI schema, outlining the different API operations that you want to enable for your custom plugin. Amazon Q Business uses the configured third-party OpenAPI specifications to dynamically determine which API operations to perform to fulfill a user request. Therefore, the OpenAPI schema definition has a big impact on API selection accuracy and might require design optimizations. In order to maximize accuracy and improve efficiency with an Amazon Q Business custom plugin, follow the best practices for configuring OpenAPI schema definitions.
To configure a custom plugin, you must define at least one and a maximum of eight API operations that can be invoked. To define the API operations, create an OpenAPI schema in JSON or YAML format. You can create OpenAPI schema files and upload them to Amazon S3. Alternatively, you can use the OpenAPI text editor in the console, which will validate your schema.
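Before uploading a schema, a small helper can verify that it stays within the one-to-eight operation limit; this is a hypothetical validation sketch, not part of the Amazon Q Business tooling:

```python
# Amazon Q Business custom plugins allow between one and eight API operations.
MAX_OPERATIONS = 8
HTTP_METHODS = {"get", "put", "post", "delete", "patch", "head", "options", "trace"}

def count_operations(schema: dict) -> int:
    """Count HTTP operations defined across all paths of an OpenAPI schema dict."""
    return sum(1 for path_item in schema.get("paths", {}).values()
                 for key in path_item if key in HTTP_METHODS)

def validate(schema: dict) -> None:
    n = count_operations(schema)
    assert 1 <= n <= MAX_OPERATIONS, f"{n} operations; must be between 1 and {MAX_OPERATIONS}"
```

The ServiceNow sample schema in this post defines five operations (GET and POST on /api/now/table/{tableName}; GET, DELETE, and PATCH on /api/now/table/{tableName}/{sys_id}), comfortably within the limit.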
For this post, a working sample of an OpenAPI Schema for ServiceNow is provided in JSON format. Before using it, edit the template file and replace <YOUR_SERVICENOW_INSTANCE_URL> in the following sections with the URL of your ServiceNow instance.
You can use the REST API Explorer to browse available APIs, API versions, and methods for each API. The explorer enables you to test REST API requests straight from the user interface. The Table API provides endpoints that allow you to perform create, read, update, and delete (CRUD) operations on existing tables. The calling user must have sufficient roles to access the data in the table specified in the request. For additional information on assigning roles, see Managing roles.
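To make the Table API query parameters concrete, here is a hedged sketch (hypothetical instance URL and incident number) of building the GET request URL that the plugin’s get operation describes:

```python
from urllib.parse import urlencode

# Hypothetical instance and query values for illustration.
instance = "https://devxxxxxx.service-now.com"
params = {
    "sysparm_query": "number=INC0010001",                # encoded filter, e.g. by incident number
    "sysparm_fields": "number,short_description,state",  # comma-separated fields to return
    "sysparm_limit": "1",                                # max results per page
}
url = f"{instance}/api/now/table/incident?{urlencode(params)}"
print(url)
```

The calling user’s OAuth token is sent with this request, so the response is scoped to the records that user is allowed to see.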
{
  "openapi": "3.0.1",
  "info": {
    "title": "Table API",
    "description": "Allows you to perform create, read, update and delete (CRUD) operations on existing tables",
    "version": "latest"
  },
  "externalDocs": {
    "url": "https://docs.servicenow.com/?context=CSHelp:REST-Table-API"
  },
  "servers": [
    {
      "url": "YOUR_SERVICENOW_INSTANCE_URL"
    }
  ],
  "paths": {
    "/api/now/table/{tableName}": {
      "get": {
        "description": "Retrieve records from a table",
        "parameters": [
          {
            "name": "tableName",
            "in": "path",
            "description": "Table Name",
            "required": true,
            "schema": {"type": "string"}
          },
          {
            "name": "sysparm_query",
            "in": "query",
            "description": "An encoded query string used to filter the results like Incidents Numbers or Knowledge Base IDs etc",
            "required": true,
            "schema": {"type": "string"}
          },
          {
            "name": "sysparm_fields",
            "in": "query",
            "description": "A comma-separated list of fields to return in the response",
            "required": false,
            "schema": {"type": "string"}
          },
          {
            "name": "sysparm_limit",
            "in": "query",
            "description": "The maximum number of results returned per page",
            "required": false,
            "schema": {"type": "string"}
          }
        ],
        "responses": {
          "200": {
            "description": "ok",
            "content": {
              "application/json": {
                "schema": {"$ref": "#/components/schemas/incident"}
              }
            }
          }
        }
      },
      "post": {
        "description": "Create a record",
        "parameters": [
          {
            "name": "tableName",
            "in": "path",
            "description": "Table Name",
            "required": true,
            "schema": {"type": "string"}
          }
        ],
        "requestBody": {
          "content": {
            "application/json": {
              "schema": {
                "type": "object",
                "properties": {
                  "short_description": {
                    "type": "string",
                    "description": "Short Description"
                  },
                  "description": {
                    "type": "string",
                    "description": "Full Description for Incidents only"
                  },
                  "caller_id": {
                    "type": "string",
                    "description": "Caller Email"
                  },
                  "state": {
                    "type": "string",
                    "description": "State of the incident",
                    "enum": ["new", "in_progress", "resolved", "closed"]
                  },
                  "text": {
                    "type": "string",
                    "description": "Article Body Text for Knowledge Bases Only (KB)"
                  }
                },
                "required": ["short_description", "caller_id"]
              }
            }
          },
          "required": true
        },
        "responses": {
          "200": {
            "description": "ok",
            "content": {
              "application/json": {}
            }
          }
        }
      }
    },
    "/api/now/table/{tableName}/{sys_id}": {
      "get": {
        "description": "Retrieve a record",
        "parameters": [
          {
            "name": "tableName",
            "in": "path",
            "description": "Table Name",
            "required": true,
            "schema": {"type": "string"}
          },
          {
            "name": "sys_id",
            "in": "path",
            "description": "Sys ID",
            "required": true,
            "schema": {"type": "string"}
          },
          {
            "name": "sysparm_fields",
            "in": "query",
            "description": "A comma-separated list of fields to return in the response",
            "required": false,
            "schema": {"type": "string"}
          }
        ],
        "responses": {
          "200": {
            "description": "ok",
            "content": {
              "application/json": {},
              "application/xml": {},
              "text/xml": {}
            }
          }
        }
      },
      "delete": {
        "description": "Delete a record",
        "parameters": [
          {
            "name": "tableName",
            "in": "path",
            "description": "Table Name",
            "required": true,
            "schema": {"type": "string"}
          },
          {
            "name": "sys_id",
            "in": "path",
            "description": "Sys ID",
            "required": true,
            "schema": {"type": "string"}
          }
        ],
        "responses": {
          "200": {
            "description": "ok",
            "content": {
              "application/json": {},
              "application/xml": {},
              "text/xml": {}
            }
          }
        }
      },
      "patch": {
        "description": "Update or modify a record",
        "parameters": [
          {
            "name": "tableName",
            "in": "path",
            "description": "Table Name",
            "required": true,
            "schema": {"type": "string"}
          },
          {
            "name": "sys_id",
            "in": "path",
            "description": "Sys ID",
            "required": true,
            "schema": {"type": "string"}
          }
        ],
        "requestBody": {
          "content": {
            "application/json": {
              "schema": {
                "type": "object",
                "properties": {
                  "short_description": {
                    "type": "string",
                    "description": "Short Description"
                  },
                  "description": {
                    "type": "string",
                    "description": "Full Description for Incidents only"
                  },
                  "caller_id": {
                    "type": "string",
                    "description": "Caller Email"
                  },
                  "state": {
                    "type": "string",
                    "description": "State of the incident",
                    "enum": ["new", "in_progress", "resolved", "closed"]
                  },
                  "text": {
                    "type": "string",
                    "description": "Article Body Text for Knowledge Bases Only (KB)"
                  }
                },
                "required": ["short_description", "caller_id"]
              }
            }
          },
          "required": true
        },
        "responses": {
          "200": {
            "description": "ok",
            "content": {
              "application/json": {},
              "application/xml": {},
              "text/xml": {}
            }
          }
        }
      }
    }
  },
  "components": {
    "schemas": {
      "incident": {
        "type": "object",
        "properties": {
          "sys_id": {
            "type": "string",
            "description": "Unique identifier for the incident"
          },
          "number": {
            "type": "string",
            "description": "Incident number"
          },
          "short_description": {
            "type": "string",
            "description": "Brief description of the incident"
          }
        }
      }
    },
    "securitySchemes": {
      "oauth2": {
        "type": "oauth2",
        "flows": {
          "authorizationCode": {
            "authorizationUrl": "YOUR_SERVICENOW_INSTANCE_URL/oauth_auth.do",
            "tokenUrl": "YOUR_SERVICENOW_INSTANCE_URL/oauth_token.do",
            "scopes": {
              "useraccount": "Access equivalent to the user's account"
            }
          }
        }
      }
    }
  },
  "security": [
    {
      "oauth2": ["useraccount"]
    }
  ]
}
The URL for the ServiceNow instance used in this post is: https://devxxxxxx.service-now.com/. After updating the sections of the template with the URL for this specific instance, the JSON should look like the following:
"servers": [
  {
    "url": "https://devxxxxxx.service-now.com/"
  }
],

"securitySchemes": {
  "oauth2": {
    "type": "oauth2",
    "flows": {
      "authorizationCode": {
        "authorizationUrl": "https://devxxxxxx.service-now.com/oauth_auth.do",
        "tokenUrl": "https://devxxxxxx.service-now.com/oauth_token.do",
        "scopes": {
          "useraccount": "Access equivalent to the user's account"
        }
      }
    }
  }
}
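For reference, the create-record operation defined in the schema corresponds to a plain ServiceNow Table API POST. The following Python sketch builds that request; the helper name, instance URL, and field values are illustrative, not part of the official walkthrough:

```python
# Illustrative sketch: the Table API request behind the plugin's
# "Create a record" action. Instance URL and field values are placeholders.

def build_create_record_request(instance_url: str, table: str, fields: dict):
    """Return the (url, payload) pair for POST /api/now/table/{tableName}."""
    url = f"{instance_url.rstrip('/')}/api/now/table/{table}"
    # The schema marks short_description and caller_id as required.
    missing = {"short_description", "caller_id"} - fields.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    return url, fields

url, payload = build_create_record_request(
    "https://devxxxxxx.service-now.com/",
    "incident",
    {
        "short_description": "Unable to log in after system upgrade",
        "caller_id": "user@example.com",
        "state": "new",
    },
)
# A real call would attach the OAuth bearer token obtained through the
# authorization-code flow configured above, e.g.:
# requests.post(url, json=payload, headers={"Authorization": f"Bearer {token}"})
```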
To create a custom plugin for ServiceNow:

Sign in to the Amazon Q Business console.
Choose Applications in the navigation pane, and then select your application from the list of applications.
In the navigation pane, choose Enhancements, and then choose Plugins.
In Plugins, choose Add plugin.
In Add plugins, choose Custom plugin.
In Custom plugin, enter the following information:

In Name and description, for Plugin name: Enter a name for your Amazon Q plugin.
In API schema, for API schema source, select Define with in-line OpenAPI schema editor.
Select JSON as the format for the schema.
Remove any sample schema that appears in the inline OpenAPI schema editor and replace it with the text from the provided sample JSON template, updated with your ServiceNow instance URL.

In Authentication: Select Authentication required.
For AWS Secrets Manager secret, choose Create and add a new secret. You need to store the ServiceNow OAuth authentication credentials in a Secrets Manager secret to connect your third-party application to Amazon Q. In the window that opens, enter the details in the form:

Secret name: A name for your Secrets Manager secret.
Client ID: The Client ID from ServiceNow OAuth configuration in the previous section.
Client secret: The Client Secret from ServiceNow OAuth configuration in the previous section.
OAuth callback URL: The URL the user needs to be redirected to after authentication. This will be your web experience URL. For this example, it’s: https://xxxxxxxx.chat.qbusiness.us-east-1.on.aws/oauth/callback. Amazon Q Business will handle OAuth tokens in this URL.

In Choose a method to authorize Amazon Q Business: Select Create and add a new service role. The console will generate a service role name. To connect Amazon Q Business to third-party applications that require authentication, you need to give the Amazon Q role permissions to access your Secrets Manager secret. This will enable an Amazon Q Business custom plugin to access the credentials needed to sign in to the third-party service.
Choose Add plugin to add your plugin.

Upon successful completion, the plugin will appear under Plugins with a Build status of Ready and a Plugin status of Active.
Using Amazon Q Business web experience chat to take actions in ServiceNow
Users can launch your Amazon Q Business web experience in two ways:

AWS access portal URL provided in an invitation email sent to the user to join AWS IAM Identity Center.
Web experience URL shared by the admin.

Navigate to the deployed web experience URL and sign in with your AWS IAM Identity Center credentials. After signing in, choose the New conversation icon in the left-hand menu to start a conversation.
Example: Search Knowledge Base Articles in ServiceNow for user issue and create an incident
The following chat conversation example illustrates a typical use case of Amazon Q Business integrated with custom plugins for ServiceNow. These features allow you to perform a wide range of tasks tailored to your organization’s needs.
In this example, we initiate a conversation in the web experience chat to search for KB articles related to “log-in issues” in ServiceNow by invoking a plugin action. After the user submits a prompt, Amazon Q Business queries ServiceNow through the appropriate API to retrieve the results and provides a response with related KB articles. We then proceed by asking Amazon Q Business for more details to see if any of the KB articles directly addresses the user’s issue. When no relevant KB articles pertaining to the user’s issue are found, we ask Amazon Q Business to summarize the conversation and create a new incident in ServiceNow, making sure the issue is logged for resolution.
User prompt 1 – I am having issues logging in to the intranet and want to know if there are any ServiceNow KB articles on log-in issues. Perform the search on both Short Description and Text field using LIKE operator
Before submitting the preceding prompt for an action to create an incident in ServiceNow, choose the vertical ellipsis to open Conversation settings, then choose Use a Plugin to select the corresponding custom plugin for ServiceNow. If this is the first time a user is accessing the custom plugin or if their past sign-in has expired, the user will need to authenticate. After authenticating successfully, Amazon Q Business will perform the requested task.
Choose Authorize.
If the user isn’t already signed in to ServiceNow, they will be prompted to enter their credentials. For this example, the user signing in to ServiceNow is the admin user and API actions performed in ServiceNow by Amazon Q Business on behalf of the user will have the same level of access as the user within ServiceNow.
Choose Allow for Amazon Q Business to connect to ServiceNow and perform the requested task on your behalf.

After verifying that the user is authorized, Amazon Q Business executes the request and responds with the information it retrieved. We then proceed to retrieve additional details with the following prompt.
User prompt 2 – Can you list the KB number and short description in a tabular form?
Because no KB articles related to the user’s issue were found, we ask Amazon Q to summarize the conversation context and create an incident with the following prompt.
User prompt 3 – The error I get is “Unable to Login After System Upgrade”. Summarize my issue and create an incident with detailed description and add a note that this needs to be resolved asap.
In response to your prompt for an action, Amazon Q displays a review form where you can modify or fill in the necessary information.
To successfully complete the action, choose Submit.
Note: The caller_id value entered in the following example is a valid ServiceNow user.
Your web experience will display a success message if the action succeeds, or an error message if the action fails. In this case, the action succeeded and Amazon Q Business responded accordingly.

The following screenshot shows that the incident was created successfully in ServiceNow.

Troubleshooting common errors
To have a seamless experience with third-party application integrations, it’s essential to thoroughly test, identify, and troubleshoot unexpected behavior.
A common error encountered in Amazon Q Business is API Response too large, which occurs when an API response size exceeds the current limit of 100 KB. While prompting techniques are essential for obtaining accurate and relevant answers, optimizing API responses to include only the necessary and relevant data is crucial for better response times and enhanced user experience.
The REST API Explorer (shown in the following figure) in ServiceNow is a tool that allows developers and administrators to interact with and test the ServiceNow REST APIs directly from within the ServiceNow environment. It provides a user-friendly interface for making API requests, viewing responses, and understanding the available endpoints and data structures. Using this tool simplifies the process of testing and integrating with ServiceNow.
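One practical way to stay under the response-size limit is to request only the fields the assistant actually needs. The sketch below is illustrative (the helper name is ours); `sysparm_fields` appears in the schema above, and `sysparm_limit` is ServiceNow's standard paging parameter:

```python
# Illustrative sketch: trimming a Table API response to stay under the
# 100 KB plugin limit by requesting only the needed fields and capping rows.
from urllib.parse import urlencode

def build_trimmed_query(instance_url: str, table: str,
                        fields: list[str], limit: int = 10) -> str:
    """Build a GET URL that returns only `fields`, at most `limit` records."""
    params = {"sysparm_fields": ",".join(fields), "sysparm_limit": limit}
    return f"{instance_url.rstrip('/')}/api/now/table/{table}?{urlencode(params)}"

url = build_trimmed_query(
    "https://devxxxxxx.service-now.com/",
    "kb_knowledge",
    ["number", "short_description"],
)
```

Trying the trimmed query in the REST API Explorer first is an easy way to confirm the response stays compact before wiring it into the plugin.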
Clean up
To clean up AWS configurations, sign in to the Amazon Q Business console.

From the Amazon Q Business console, in Applications, select the application that you want to delete.
Choose Actions and select Delete.
To confirm deletion, enter Delete.

This will take a few minutes to finish. When completed, the application and the configured custom plugin will be deleted.
When you delete the Amazon Q Business application, the users created as part of the configuration are not automatically deleted from IAM Identity Center. Use the instructions in Delete users in IAM Identity Center to delete the users created for this post.
To clean up in ServiceNow, release the Personal Developer Instance provisioned for this post by following the instructions in the ServiceNow Documentation.
Conclusion
The integration of generative AI-powered assistants such as Amazon Q Business with enterprise systems such as ServiceNow offers significant benefits for organizations. By using natural language processing capabilities, enterprises can streamline operations, enhance user productivity, and deliver better customer experiences. The ability to query real-time data and create incidents and knowledge articles through a secure and governed chat interface transforms how users interact with enterprise data and applications. As demonstrated in this post, enhancing Amazon Q Business to integrate with ServiceNow using custom plugins empowers users to perform complex tasks effortlessly, driving efficiency across various business functions. Adopting this technology not only modernizes workflows, but also positions enterprises at the forefront of innovation.
Learn more

Amazon Q main product page
Get started with Amazon Q
Introducing Amazon Q, a new generative AI-powered assistant (preview)
Improve developer productivity with generative-AI powered Amazon Q in Amazon CodeCatalyst (preview)
Upgrade your Java applications with Amazon Q Code Transformation (preview)
New generative AI features in Amazon Connect, including Amazon Q, facilitate improved contact center service
New Amazon Q in QuickSight uses generative AI assistance for quicker, easier data insights (preview)
Amazon Q brings generative AI-powered assistance to IT pros and developers (preview)

About the Author
Siddhartha Angara is a Senior Solutions Architect at Amazon Web Services. He helps enterprise customers design and build well-architected solutions in the cloud, accelerate cloud adoption, and build Machine Learning and Generative AI applications. He enjoys playing the guitar, reading and family time!

How to Convert Anonymous Website Visitors into Leads

You’ve nailed it. You’re using Customers.ai to identify those anonymous visitors sneaking around your site. You know who’s there, where they’ve been, and maybe even what caught their eye. Big win, right?

Well, kind of.

Identifying your anonymous visitors is just step one. 

The real challenge – the part that separates good marketers from great ones – is figuring out what these visitors want, guiding them toward the right funnel, and turning them into legit leads who actually convert.

Visitor identification isn’t about collecting data for the sake of it. 

It’s about using that data to understand intent, prioritize opportunities, and build connections that lead to sales. So, what’s next? How do you convert those anonymous visitors into real revenue?

See Who Is On Your Site Right Now!

Get names, emails, phone numbers & more.

Try it Free, No Credit Card Required

Start Your Free Trial

Step 1: Decode Anonymous Visitor Intent

Unfortunately for us marketers, most visitors aren’t ready to buy the moment they land on your site. In fact, studies show that 96% of website visitors aren’t immediately prepared to make a purchase.

That’s why understanding intent is the first step in converting anonymous traffic into leads.

So, how do you decode what someone actually wants? 

Look for the signals:

Page Visits and Time Spent

Is someone spending time on your pricing or product pages? That’s a high-intent action. Casual browsers don’t hang out there.

High-Value Actions

Adding something to the cart, downloading a guide, or signing up for a webinar shows clear interest. These actions indicate they’re further along the decision-making process.

Engagement With Pop-Ups or Chatbots

If someone interacts with your pop-up offer or drops a question into your chatbot, they’re giving you a direct clue about their intent.

Decoding these signals will help you separate serious leads from casual visitors. 

And once you understand intent, you can prioritize who to nurture and how to engage them, without wasting time or annoying people with emails they aren’t ready for.

Step 2: Segment Your Anonymous Visitors by Intent

Once you’ve identified your anonymous visitor intent, you can start dividing these visitors into audiences or segments. 

This “micro-segmentation” will help you target the right people with the right message and convert those anonymous visitors who are ready to buy much faster.

Here’s how to do it, with real-world examples that actually work.

High-Intent Visitor Segments

Start with your high-intent visitors. These are your hottest leads, showing clear buying signals.

Example 1: Visitors who spend more than 3 minutes on your pricing page. Set up a follow-up flow with a direct CTA like “Ready to get started? Book a demo today.”

Example 2: Users who add items to their cart but don’t complete the purchase. Trigger a cart abandonment email offering free shipping or a discount code.

Example 3: Someone who requested a demo or consultation. Fast-track them to your sales team for immediate outreach.

When building your high-intent segments, focus on actions that demonstrate clear purchase intent, like visiting pricing pages, adding to cart, or requesting a demo. 

Prioritize these leads for immediate follow-ups with personalized messaging and time-sensitive offers to capitalize on their readiness to buy.

Medium Intent Visitor Segments

These visitors are engaged but not quite ready to buy. That means you can’t just send them emails selling them everything you have. 

You know what that will get you? Moved into the spam folder. No thanks.

Your goal is to nurture them.

Example 1: Blog readers who’ve visited multiple posts. Drop them into a content-based email sequence sharing related articles or guides.

Example 2: Visitors who download gated content like whitepapers or eBooks. Send them follow-ups showcasing how your product solves the problem discussed in the resource.

Example 3: Repeat visitors who haven’t taken action. Use retargeting ads offering a limited-time discount or invite them to a webinar.

For medium-intent segments, focus on adding value rather than pushing for an immediate sale. 

Tailor your follow-ups to educate, inspire, and build trust, so when they’re ready to buy, you’re the obvious choice. 

Low Intent Anonymous Visitor Segments

Low intent visitors are those browsing with no clear focus. 

Example 1: Someone who landed on your homepage but bounced after 30 seconds. Serve them a pop-up offering a 10% discount to encourage engagement.

Example 2: General traffic visiting non-specific pages (e.g., your blog or About page). Use display ads to drive them toward an introductory offer.

Example 3: First-time visitors referred from social media. Send a follow-up campaign focused on brand storytelling to build trust.

You can’t convert visitors into leads before they’re ready, but low intent doesn’t mean not worth your time. After all, they did come to your site. It just means you need to nurture them so you can better understand what they’re interested in.

Bonus Tip: Use dynamic segmentation tools to automatically adjust these groups in real-time based on visitor behavior. For example, someone who goes from reading a blog post to viewing the pricing page should instantly move from medium to high intent. This keeps your targeting sharp and your funnels optimized.
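As a rough illustration of how this kind of segmentation logic might look in practice (the signal names and thresholds here are invented for the example, not Customers.ai's actual scoring):

```python
# Hypothetical intent-scoring sketch; signal names and thresholds are
# invented for illustration and mirror the examples in this post.

def classify_intent(visitor: dict) -> str:
    """Bucket a visitor into high / medium / low intent from behavior signals."""
    if visitor.get("requested_demo") or visitor.get("added_to_cart"):
        return "high"
    if visitor.get("pricing_page_seconds", 0) > 180:  # 3+ minutes on pricing
        return "high"
    if visitor.get("pages_viewed", 0) >= 3 or visitor.get("downloaded_content"):
        return "medium"
    return "low"

# A blog reader who jumps to the pricing page moves up in real time:
visitor = {"pages_viewed": 4}
before = classify_intent(visitor)      # medium intent
visitor["pricing_page_seconds"] = 200
after = classify_intent(visitor)       # now high intent
```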

Webinar: 2024 Playbook

Increasing Traffic & Conversions in the Age of AI

Free Webinar: Watch Now

Step 3: Build Lead Funnels That Convert

The secret to a high-performing funnel is alignment. When you match the lead’s intent with a tailored funnel, you guide them smoothly toward conversion without overwhelming them. 

Let’s break down how to build intent-driven funnels that actually work.

High-Intent Funnels

These visitors are primed to buy, so your funnel should focus on closing the deal quickly.

Simplify the Path: Send them directly to a product page, pricing page, or one-click demo signup. Example: For a B2B SaaS lead who requested a demo, follow up with a personalized calendar link to schedule the meeting immediately.

Remove Friction: Offer fast-tracking options like “Skip the line” demos or limited-time offers. Example: Use a cart abandonment email with “Your items are still waiting—complete your checkout in one click!”

Leverage Urgency: Add time-sensitive CTAs like “Sign up now and get 10% off your first month” to push high-intent leads across the finish line.

Brand Example: 

Allbirds uses cart abandonment recovery emails to target high-intent visitors. 

Their email features the product image, a one-click “Complete My Purchase” button, and free shipping as an incentive, making it easy for customers to finish their transactions.

Medium-Intent Funnels

These leads are interested but need a little more nurturing before they commit.

Educate and Build Trust: Use automated email campaigns that share case studies, product tutorials, or customer testimonials. Example: Send a series like “How [Customer Name] Increased ROI by 35% Using [Your Product].”

Offer Value-Based Content: Direct them to gated resources like whitepapers, free trials, or detailed guides. Example: “Download our free 7-step guide to [achieving your goal]—no strings attached.”

Use Micro-Conversions: Create smaller CTAs like “Subscribe for updates” or “Take our quick quiz to find the best solution for you.” These mini-commitments move them closer to conversion.

Brand Example: 

Warby Parker targets medium-intent visitors who’ve used their virtual try-on tool. 

They send follow-up emails showcasing frames similar to the ones tried, customer reviews, and a CTA to book an in-store consultation.

Low-Intent Funnels

For these visitors, your goal is awareness and engagement, not an immediate sale.

Highlight Your Brand Story: Use retargeting ads and social campaigns to introduce your brand and showcase your unique value. Example: Run a carousel ad with customer testimonials or key product features.

Capture the Lead First: Focus on collecting contact info with simple offers like “Sign up for 10% off your first purchase” or “Join our community for insider updates.”

Drive Engagement: Send them to engaging content, like blogs or webinars, to keep them interested and learning about your brand.

Brand Example: 

Glossier runs retargeting ads for visitors who browsed their product pages but didn’t purchase. 

The ads highlight glowing customer reviews, showcase bestsellers, and offer a 10% discount code to nudge visitors toward their first purchase.

Remember, keep your funnels simple. A 50-step funnel might sound impressive, but it’s unnecessary. 

We compared two customer flows (one good / one bad) and here’s what we saw: Top accounts had short, intent-based sequences with offers, while the worst performers drowned in endless drip emails. Moral of the story: Less is more when it comes to email marketing! pic.twitter.com/uvsnpHQuhA— CustomersAI (@CustomersAI) October 7, 2024

Focus on building short, actionable funnels with clear next steps based on intent and optimize from there. 

Unlock High-Intent Leads Hiding on Your Site

Book a demo of Customers.ai’s U.S. website visitor identification, customer journey insights and remarketing platform to skyrocket conversions and sales.

Book a Demo

Step 4: Nurture With Precision

Nurturing isn’t about sending a barrage of emails and hoping one works. 

It’s about building trust with every interaction. Your leads are real people, so treat them that way. 

Show them that you understand where they are and give them exactly what they need, no fluff.

Skip the Sales Overload, Deliver Real Value

Nobody wants a generic, “Buy now!” email when they’re still trying to figure out what you do. 

Look at what they’ve engaged with (pages visited, resources downloaded, or questions asked) and use that data to craft your follow-ups. For example:

Downloaded your whitepaper? Follow up with, “Here’s how [Client Name] used these strategies to double their sales.”

Viewed your pricing page? Share a case study or video showing ROI for businesses like theirs.

Make Every Interaction Personal

People don’t connect with brands, they connect with experiences. Tailor your touchpoints so they feel like the next logical step in the conversation.

High Intent: “We noticed you’ve been checking out [product]. Let us know if you’d like a personal walkthrough or a quick Q&A session.”

Medium Intent: “Still exploring? Here are 3 ways customers use [your product] to hit their goals.”

Low Intent: “Curious about who we are? Here’s a quick look at our story and what makes us different.”

Automate Smarter, Not Harder

Automation is great but not when it sacrifices authenticity. 

Use smart workflows to trigger emails based on real-time behaviors, but keep the tone conversational. Example: If a lead revisits your pricing page multiple times, send them an email like, “Hey [Name], we noticed you’re diving into our pricing details. Want a breakdown of the most popular options?”

Pro Tip: Layer your nurturing strategy with “milestone moments.” For example:

After 3 days of no action, send a soft nudge: “Ready to take the next step?”

After 7 days, escalate with value: “Here’s a free resource to help you decide.”

After 14 days, add urgency: “Act now—this offer ends in 48 hours.”

The goal is to create an experience where leads feel seen, understood, and guided – not pushed. 

Email Deliverability Hacks:

Unlocking Inboxes 2

HOSTED BY

Larry Kim

Founder and CEO, Customers.ai

Free Webinar: Watch Now

Step 5: Measure and Refine

Building sales funnels and nurturing strategies is only half the battle. The real wins come from constant optimization. 

To turn good results into great ones, you need to track the right metrics and make data-driven tweaks.

What to Track:

Data is your roadmap. Focus on these metrics to understand what’s working and what’s not:

Conversion Rates by Funnel: Measure how effectively each funnel moves leads through the stages. Are high-intent visitors converting as expected? Are medium-intent leads stalling?

Engagement Metrics: Keep an eye on email clicks, resource downloads, webinar sign-ups, and other key actions. If engagement drops off, your content or timing might need adjustment.

Time to Convert: Track how long it takes leads to move from anonymous visitors to sales-qualified leads. Long delays may signal friction points in your funnel.

Optimize Based on Data

Once you know what’s happening, take action. Small changes can lead to big improvements:

Test New CTAs: If click-through rates are low, swap vague CTAs like “Learn More” for action-driven ones like “Get Your Free Demo Now.”

Revamp Email Copy: If engagement is weak, test different tones, formats, or subject lines. A conversational style might outperform a formal one.

Adjust Timing: Analyze when leads are dropping off. For example, if they abandon after a second email, try sending it sooner or add a touchpoint between emails.

Segment Smarter: If certain segments are underperforming, refine your criteria. Maybe a “medium-intent” lead needs different triggers or content to move forward.

Pro Tip: Use heatmaps or click-tracking tools to visualize where leads are engaging most on your emails or landing pages. This can uncover hidden friction points or opportunities to improve.

Every tweak, test, and optimization brings you closer to creating funnels that convert.

Start Converting Your Anonymous Website Visitors Into Leads

Identifying anonymous visitors is just the start. 

The real work begins when you figure out what they want, segment them smartly, and guide them through funnels that actually convert. That’s how you turn anonymous traffic into leads, and into real, paying customers.

And if you’re looking for tools to make this easier?

Start your free trial of Customers.ai. We are built to help you identify, nurture, and convert like a pro. 

Stop wasting traffic and start converting those anonymous visitors into leads and revenue!


Important Next Steps

See what targeted outbound marketing is all about. Capture and engage your first 500 website visitor leads with Customers.ai X-Ray website visitor identification for free.

Talk and learn about sales outreach automation with other growth enthusiasts. Join Customers.ai Island, our Facebook group of 40K marketers and entrepreneurs who are ready to support you.

Advance your marketing performance with Sales Outreach School, a free tutorial and training area for sales pros and marketers.

The post How to Convert Anonymous Website Visitors into Leads appeared first on Customers.ai.

Plain Text Emails: How Old-School Emails are Dominating Modern Inboxes

It’s almost 2025 and that means it’s time to forget flashy designs, animated GIFs, and emails that look like they belong in a magazine. Plain text emails are in!

Wait, what?

Are plain text emails back? Last week at DTC Next @Melanie_Balke talked about using plain text emails. Today’s @DTCNewsletter also hits on plain text emails. Have we come full circle? Are plain text emails the new ‘it’ girl? pic.twitter.com/ii6DxIITL0— CustomersAI (@CustomersAI) December 9, 2024

It turns out that plain text emails are quietly crushing it. In fact, plain text emails can outperform their fancier HTML counterparts with 10-20% higher deliverability rates. 

Why? Because they skip the spam filters, feel personal, and stand out in an inbox full of overly designed noise.

Who knew?

Well, apparently Melanie Balke, who made it clear during her DTC Next session. According to Balke, simpler is smarter when it comes to email. Plain text emails get delivered, they get opened, and they get acted on.

So, if you’re still obsessing over perfecting your email design, it might be time to hit pause. 

Let’s break down why plain text emails are winning and how you can use this seemingly outdated strategy to dominate the inbox.

Why Plain Text Emails Are the Next ‘It’ Thing

If you’re an email marketer, you’ve no doubt spent hours perfecting beautifully designed emails. We’re talking editing visuals, testing layouts, and making sure every pixel is perfect.

Yet, more often than not, they still struggle to break through the noise. 

That’s where plain text emails come in. 

They’re not flashy, but they’re ridiculously effective. Why? Because they strip email down to what matters – the message. 

Here’s why plain text emails are the hot new thing (again):

They Get Delivered

HTML emails can get caught in spam filters thanks to heavy code, external images, or embedded elements. 

Remember that spam filters are essentially gatekeepers and they will flag anything that feels like marketing. 

Plain text emails glide into the inbox because they’re clean, simple, and light on code. If your open rates are slipping, deliverability could be the problem, and plain text might be the solution to your email deliverability troubles.

They Feel Personal

Email marketers know the key is to talk to your audience, not at them, and plain text emails mimic the emails people send to friends and coworkers every day.

No distractions, no over-the-top graphics. Just straight-up communication. 

That personal feel builds trust, improves engagement, and makes it more likely someone will respond or click.

They’re Fast

Between copy, design, testing, revisions, and more, crafting an email can take anywhere from 2 to 5 hours on average. 

Plain text emails cut that process in half. 

Write the message, test it, and send it. They’re fast to create, fast to tweak, and fast to get results. For marketers managing multiple campaigns, speed and efficiency are non-negotiable.

Stats Back It Up

Let’s talk numbers because data matters. According to Campaign Monitor, emails with fewer visuals see 27% higher click-through rates. 

Plus, plain text emails often outperform HTML in deliverability, ensuring your message actually lands in front of your audience. More clicks AND better delivery? Plain text wins.

While plain text emails might feel like a step backward, they’re actually a great strategy for cutting through the noise. For marketers looking to optimize every email they send, plain text might just be the way to go. 


Where Plain Text Emails Beat HTML Emails (and Where They Don’t)

While we’re certainly singing the praises of plain text emails here, it’s important to remember that HTML emails still have their place, and knowing when to use each one is what separates advanced email marketers from the pack.

When Plain Text Wins

Cold Outreach or Direct Communication: Trying to connect with someone for the first time? Whether it’s a new lead or a B2B prospect, plain text feels more personal and less “salesy,” making it more likely to grab attention.

Post-Purchase Follow-Ups or Support Emails: A simple “Thanks for your order” or “How’s your experience so far?” in plain text feels authentic. It shows you care, which builds trust and loyalty.

Flash Sale Reminders or “Just Checking In” Campaigns: Plain text emails work perfectly for quick nudges. A short, clear reminder like “Hey, your 20% off code expires tonight” cuts through the inbox clutter better than a flashy graphic.

When HTML Still Matters

Product Launches That Need Visuals: If you’re introducing a new product line, visuals are non-negotiable. People want to see what you’re offering, making HTML the obvious choice.

Highly Branded Emails Like Newsletters: For recurring campaigns like newsletters, where consistency and design reinforce brand identity, HTML helps you deliver a polished and professional experience.

The takeaway here? 

Plain: Plain text emails shine when the goal is personal, direct communication. 

HTML: HTML comes in strong when visuals are essential for storytelling or showcasing products. 

The best idea? Test both formats in your campaigns to see what resonates best with your audience. 

How to Make Plain Text Emails That Actually Convert

You’ve nailed down when to use plain text emails instead of HTML emails, but how do you make them convert? Ah, the magic question we all wish we had the answer to. 

While plain text might be simple, it’s not automatic. It takes strategy to turn a “hey, just checking in” into clicks, replies, and revenue. 

Here are five tips on writing plain text emails:

1. Write Like a Human

Can we please forget the corporate jargon and buzzwords? Ugh.

Plain text emails work because they feel personal. 

Write as if you’re talking directly to the recipient. Short sentences, conversational tone, and a clear point are key. 

Instead of “Our exclusive offer is now available,” say, “Hey [Name], your 20% off is waiting—grab it before it’s gone.”
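For teams assembling these sends programmatically, a minimal sketch with Python’s standard library shows what “plain text only” means at the message level (the sender, recipient, and copy below are placeholders):

```python
from email.message import EmailMessage

def build_plain_text_email(name: str, offer_link: str) -> EmailMessage:
    """Compose a conversational, plain text email with no HTML part."""
    msg = EmailMessage()
    msg["Subject"] = f"{name}, your 20% off is waiting"
    msg["From"] = "larry@example.com"    # placeholder sender
    msg["To"] = "shopper@example.com"    # placeholder recipient
    # set_content() with a plain string produces a text/plain body
    msg.set_content(
        f"Hey {name},\n\n"
        "Your 20% off is waiting - grab it before it's gone:\n"
        f"{offer_link}\n\n"
        "- Larry"
    )
    return msg

email = build_plain_text_email("Jamie", "https://example.com/sale")
print(email.get_content_type())  # text/plain
```

Because nothing ever calls `add_alternative()` with HTML, the message goes out as a single text/plain part, which is exactly the format this post is advocating.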

2. Structure for Scanners

People get way too many emails to read. They skim. 

Keep your email easy to scan by using short paragraphs, line breaks, and even bullet points where necessary. For example:

What’s the offer?

Why should they care?

What’s the next step?

This format respects their time and ensures the key points don’t get lost.

3. Add a Strong CTA

A plain text email without a clear call-to-action is just a note, so be specific and action-oriented. 

“Click here” is outdated and uninspiring. Instead, try: “Grab 20% off here before midnight” or “Check out the sale now—it’s ending soon.” 

Strong, time-sensitive language motivates readers to act.

4. Track It Without Clutter

Just because plain text looks simple doesn’t mean you can’t measure success. 

Use UTM parameters in your links to track clicks and engagement. This allows you to get detailed insights into performance without needing flashy HTML or tracking pixels. 

Tools like Google Analytics or your ESP make this easy.
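As a sketch, here’s one way to stamp UTM parameters onto a link before it goes into the email (the helper name and parameter values are illustrative):

```python
from urllib.parse import urlencode, urlparse, urlunparse, parse_qsl

def add_utm(url: str, source: str, medium: str, campaign: str) -> str:
    """Append standard UTM parameters to a link, preserving existing ones."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    })
    return urlunparse(parts._replace(query=urlencode(query)))

link = add_utm("https://example.com/sale", "newsletter", "email", "flash_sale")
print(link)
# https://example.com/sale?utm_source=newsletter&utm_medium=email&utm_campaign=flash_sale
```

Every click on the tagged link then shows up in Google Analytics (or your ESP’s reports) attributed to that campaign, with no pixels or HTML required.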

5. Leverage the P.S. Line

A plain text email’s best-kept secret? The postscript! 

Adding a P.S. at the end catches the eye of skimmers and gives you one last chance to drive your message home. 

Use it to highlight urgency (“P.S. This deal expires at midnight—don’t miss it!”), reinforce your offer (“P.S. Your 20% off code is WELCOME20”), or add a personal touch (“P.S. Got questions? Just reply to this email—I’d love to help”). 

It’s simple, subtle, and surprisingly effective at boosting conversions.

Plain text emails might not have the visual appeal of HTML but when written and structured well, they’re conversion machines.

Ready to Test Plain Text Emails? Here’s Your Next Step

The proof is in the performance. If you’re still on the fence about plain text emails, it’s time to put them to the test. 

Run a side-by-side A/B test: One version with your best-designed HTML email and the other as a clean, plain text version with the same message and CTA.

Pay attention to the metrics:

Open rates: Which email actually lands and gets opened?

Click-through rates: Which one drives action?

Deliverability: Is one version consistently ending up in the spam folder?

Chances are, you’ll see that plain text emails aren’t just easier to create, they outperform when it comes to engagement and ROI.
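Tallying those per-variant metrics is simple once the send is done; here’s a minimal sketch with hypothetical counts:

```python
def rate(events: int, delivered: int) -> float:
    """Per-variant rate: opens or clicks divided by emails delivered."""
    return events / delivered if delivered else 0.0

# Hypothetical results from a 50/50 split test
html_variant  = {"delivered": 5000, "opens": 900,  "clicks": 110}
plain_variant = {"delivered": 5000, "opens": 1150, "clicks": 160}

for name, v in [("HTML", html_variant), ("Plain text", plain_variant)]:
    open_rate = rate(v["opens"], v["delivered"])
    ctr = rate(v["clicks"], v["delivered"])
    print(f"{name}: open rate {open_rate:.1%}, CTR {ctr:.1%}")
```

With real numbers plugged in, the side-by-side print-out makes the winner obvious at a glance.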

Start with your next campaign. Try plain text and watch how your audience responds. You might just find that less really is more when it comes to email marketing.

Let’s See More of Those Plain Text Emails!

Plain text emails might not win any beauty contests but they’re winning where it matters – in the inbox! 

They’re personal, effective, and cut straight to the point, which is exactly what your audience wants.

So, if you’re tired of spending hours perfecting flashy designs that get ignored, maybe it’s time to embrace the simplicity. 

Start testing plain text emails in your next campaign and see how they stack up.

Want to add more emails to your test? Start your free trial of Customers.ai and get 500 contacts today!


Important Next Steps

See what targeted outbound marketing is all about. Capture and engage your first 500 website visitor leads with Customers.ai X-Ray website visitor identification for free.

Talk and learn about sales outreach automation with other growth enthusiasts. Join Customers.ai Island, our Facebook group of 40K marketers and entrepreneurs who are ready to support you.

Advance your marketing performance with Sales Outreach School, a free tutorial and training area for sales pros and marketers.

Plain Text Email FAQs

What are plain text emails?

Plain text emails consist of unformatted text without images, links, or HTML, ensuring compatibility with all email clients and devices.

How do plain text emails differ from HTML emails?

Plain text emails only include simple text, while HTML emails feature formatting, images, and multimedia elements. Plain text emails are cleaner and more universally supported, whereas HTML emails are visually engaging but prone to deliverability issues.

What are the benefits of plain text emails?

Benefits include higher deliverability rates, improved engagement due to their personal feel, faster loading times, and universal compatibility across devices and email clients.

Do plain text emails perform better than HTML emails?

Plain text emails often outperform HTML emails in key metrics like deliverability and click-through rates. Their simplicity avoids spam filters, and their authenticity boosts engagement.

What are the best use cases for plain text emails?

Plain text emails are ideal for cold outreach, transactional messages (e.g., order confirmations), follow-ups, and campaigns where a personal, one-on-one feel is important.

How can I personalize plain text emails?

You can personalize plain text emails by including the recipient’s name, referencing past purchases or interactions, and tailoring the content to match their preferences or behaviors.

What are some best practices for writing plain text emails?

Keep the message clear and concise. Use short paragraphs and line breaks for readability. Include one clear call-to-action. Avoid overloading the email with links. Test your email to ensure it displays well across devices.

Do plain text emails increase deliverability?

Yes, plain text emails are less likely to be flagged as spam because they lack heavy code, images, or links that often trigger spam filters.

Can plain text emails track engagement?

Yes, engagement can be tracked by using unique, trackable links or by monitoring responses to email campaigns through your email service provider.

How do plain text emails impact click-through rates?

Plain text emails often improve click-through rates by focusing attention on the message and call-to-action without the distractions of images or flashy designs.

What tools can I use to create plain text emails?

Most email marketing platforms like Mailchimp, Klaviyo, and HubSpot offer options to create and send plain text emails. These tools also provide analytics to track performance.

Are plain text emails better for mobile users?

Yes, plain text emails are highly mobile-friendly because they load quickly and adapt seamlessly to various screen sizes without layout issues.

How can I test the effectiveness of plain text emails?

Run A/B tests by sending plain text emails to one segment of your audience and HTML emails to another. Compare metrics like open rates, click-through rates, and conversions.

What are common mistakes to avoid in plain text emails?

Overloading the email with too many links. Writing long, dense paragraphs without breaks. Skipping a clear call-to-action. Ignoring testing across devices.

Can plain text emails include links?

Yes, but they should be minimal and clear. Include one or two links that are easy to understand, such as “Visit us at [yourdomain.com].”

How do plain text emails build trust?

Plain text emails feel more authentic and personal, resembling a message from a friend rather than a brand, which helps establish trust with recipients.

Can I automate plain text email campaigns?

Yes, plain text emails can be automated using tools like Klaviyo, Mailchimp, or ActiveCampaign to deliver personalized, time-based sequences.

How do plain text emails perform in B2B marketing?

In B2B marketing, plain text emails excel in outreach, follow-ups, and transactional communication, often yielding higher response rates due to their straightforward tone.

Are plain text emails compliant with email regulations?

Yes, plain text emails comply with regulations like GDPR and CAN-SPAM as long as they include required elements like an opt-out link and sender information.

Do plain text emails work for ecommerce?

Yes, plain text emails work well for ecommerce in situations like cart abandonment reminders, order confirmations, and personalized follow-ups to increase trust and engagement.

How can I make plain text emails visually appealing?

Although they lack images, you can enhance readability by using bullet points, short paragraphs, and line spacing to structure content effectively.

Do plain text emails work for newsletters?

Yes, plain text emails can be effective for newsletters, especially when the goal is to share valuable content or updates in a more conversational tone.

How can plain text emails support drip campaigns?

Plain text emails are ideal for drip campaigns as they deliver concise, personalized messages that feel less like marketing and more like genuine communication.

What industries benefit most from plain text emails?

Industries like B2B, SaaS, ecommerce, and consulting benefit from plain text emails for their ability to foster personal connections and drive action.

How can I improve subject lines for plain text emails?

Keep subject lines short, clear, and action-oriented. Use personalization and urgency to grab attention, like “Your free trial ends today” or “Quick update, [Name].”
The post Plain Text Emails: How Old-School Emails are Dominating Modern Inboxes appeared first on Customers.ai.

Google Released State of the Art ‘Veo 2’ for Video Generation and …

Video and Image generation innovations are improving the quality of visuals and focusing on making AI models more responsive to detailed prompts. AI tools have opened new possibilities for artists, filmmakers, businesses, and creative professionals by achieving more accurate representations of real-world physics and human movement. AI-generated visuals are no longer limited to generic images and videos; they now allow for high-quality, cinematic outputs that closely mimic human creativity. This progress reflects the immense demand for technology that efficiently produces professional-grade results, offering opportunities across industries from entertainment to advertising.

The challenge in AI-based video and image generation has always been achieving realism and precision. Earlier models often struggled with inconsistencies in video content, such as hallucinated objects, distorted human movements, and unnatural lighting. Similarly, image generation tools sometimes failed to follow user prompts accurately or rendered textures and details poorly. These shortcomings undermined their usability in professional settings where flawless execution is critical. AI models needed a better understanding of physics-based interactions, lighting effects, and intricate artistic detail, all of which are fundamental to visually appealing and accurate outputs.

Existing tools like Veo and Imagen have provided considerable improvements but still have limitations. Veo allowed creators to generate video content with custom backgrounds and cinematic effects, while Imagen produced high-quality images in various art styles. YouTube creators, enterprise customers on Vertex AI, and artists through VideoFX and ImageFX extensively used these tools. Still, both often ran into technical constraints, such as inconsistent detail rendering, limited resolution, and an inability to adapt seamlessly to complex user prompts. As a result, creators required tools that combined precision, realism, and flexibility to meet professional standards.

Google Labs and Google DeepMind introduced Veo 2 and an upgraded Imagen 3 to address these problems. These models represent the next generation of AI-driven tools to achieve state-of-the-art video and image generation results. Veo 2 focuses on video production with improved realism, supporting resolutions up to 4K and extending video lengths to several minutes. It incorporates a deep understanding of cinematographic language, enabling users to specify lenses, cinematic effects, and camera angles. For instance, prompts like “18mm lens” or “low-angle tracking shot” allow the model to create wide-angle shots or immersive cinematic effects. Imagen 3 enhances image generation by producing richer textures, brighter visuals, and precise compositions across various art styles. These tools are now accessible through platforms like VideoFX, ImageFX, and Whisk, Google’s new experiment that combines AI-generated visuals with creative remixing capabilities.

Veo 2 brings several upgrades to video generation. The central one is its improved understanding of real-world physics and human expression. Unlike earlier models, Veo 2 accurately renders complex movements, natural lighting, and detailed backgrounds while minimizing hallucinated artifacts like extra fingers or floating objects. Users can create videos with genre-specific effects, motion dynamics, and storytelling elements. For example, the tool allows prompts to include phrases such as “shallow depth of field” or “smooth panning shot,” resulting in videos that mirror professional filmmaking techniques. Imagen 3 similarly delivers exceptional improvements by following prompts with greater fidelity. It generates photorealistic textures, detailed compositions, and art styles ranging from anime to impressionism. These models offer professional-grade visual content creation that adapts to user requirements.


In head-to-head comparisons judged by human raters, Veo 2 outperformed leading video models in realism, quality, and prompt adherence. Imagen 3 achieved state-of-the-art results in image generation, excelling in texture precision, composition accuracy, and color grading. The upgraded models also feature SynthID watermarks to identify outputs as AI-generated, ensuring ethical usage and mitigating misinformation risks.

Alongside Veo 2 and the improved Imagen 3, the team introduced Whisk, a new experimental tool that integrates Imagen 3 with Google’s Gemini model for image-based visualizations. Whisk allows users to upload or create images and remix their subjects, scenes, and styles to generate new visuals. It combines the latest Imagen 3 model with Gemini’s visual understanding and description capabilities: Gemini automatically writes a detailed caption of the images and feeds those descriptions into Imagen 3, letting users easily remix subjects, scenes, and styles in fun, new ways. For instance, the tool can transform a hand-drawn concept into a polished digital output by analyzing and enhancing the image through AI algorithms.

Some of the highlights of ‘Veo 2’:

Veo 2 creates videos at up to 4K resolution with extended lengths of several minutes.

It reduces hallucinated artifacts such as extra objects or distorted human movements.

Also, it accurately interprets cinematographic language (lens type, camera angles, and motion effects).

Veo 2 improves understanding of real-world physics and human expressions for greater realism.

It allows cinematic prompts, such as “low-angle tracking shots” and “shallow depth of field,” to produce professional outputs.

It integrates with Google Labs’ VideoFX platform for widespread usability.

Some of the highlights of ‘Improved Imagen 3’:

Now, Imagen 3 produces brighter, more detailed images with improved textures and compositions.

It accurately follows prompts across diverse art styles, including photorealism, anime, and impressionism.

Imagen 3 enhances color grading and detail rendering for sharper, richer visuals.

It minimizes inconsistencies in generated outputs, achieving state-of-the-art image quality.

It is accessible through Google Labs’ ImageFX platform and supports creative applications.


In conclusion, Google Labs and DeepMind introduce parallel upgrades in AI-driven video and image generation. Veo 2 and Imagen 3 set new benchmarks for professional-grade content creation by addressing long-standing challenges in visual realism and user control. These tools improve video and image fidelity, enabling creators to specify intricate details and achieve cinematic outputs. With innovations like Whisk, users gain access to creative workflows that were previously unattainable. The combination of precision, ethical safeguards, and creative flexibility positions Veo 2 and Imagen 3 to have a lasting, positive impact on AI-generated visuals.

Check out the details for Veo 2 and Imagen 3. All credit for this research goes to the researchers of this project. Also, don’t forget to follow us on Twitter and join our Telegram Channel and LinkedIn Group. Don’t Forget to join our 60k+ ML SubReddit.

Trending: LG AI Research Releases EXAONE 3.5: Three Open-Source Bilingual Frontier AI-level Models Delivering Unmatched Instruction Following and Long Context Understanding for Global Leadership in Generative AI Excellence….
The post Google Released State of the Art ‘Veo 2’ for Video Generation and ‘Improved Imagen 3’ for Image Creation: Setting New Standards with 4K Video and Several Minutes Long Video Generation appeared first on MarkTechPost.

Self-Calibrating Conformal Prediction: Enhancing Reliability and Uncer …

In machine learning, reliable predictions and uncertainty quantification are critical for decision-making, particularly in safety-sensitive domains like healthcare. Model calibration ensures predictions accurately reflect true outcomes, making them robust against extreme over- or underestimation and facilitating trustworthy decision-making. Predictive inference methods, such as Conformal Prediction (CP), offer a model-agnostic and distribution-free approach to uncertainty quantification by generating prediction intervals that contain the true outcome with a user-specified probability. However, standard CP only provides marginal coverage, averaging performance across all contexts. Achieving context-conditional coverage, which accounts for specific decision-making scenarios, typically requires additional assumptions. As a result, researchers have developed methods to provide weaker but practical forms of conditional validity, such as prediction-conditional coverage.

Recent advancements have explored different approaches to conditional validity and calibration. Techniques like Mondrian CP apply context-specific binning schemes or regression trees to construct prediction intervals but often lack calibrated point predictions and self-calibrated intervals. SC-CP addresses these limitations using isotonic calibration to discretize the predictor adaptively, achieving improved conformity scores, calibrated predictions, and self-calibrated intervals. Additionally, methods like Multivalid-CP and difficulty-aware CP further refine prediction intervals by conditioning on class labels, prediction set sizes, or difficulty estimates. While approaches like Venn-Abers calibration and its regression extensions have been explored, challenges persist in balancing model accuracy, interval width, and conditional validity without increasing computational overhead.

Researchers from the University of Washington, UC Berkeley, and UCSF have introduced Self-Calibrating Conformal Prediction. This method combines Venn-Abers calibration and conformal prediction to deliver both calibrated point predictions and prediction intervals with finite-sample validity conditional on these predictions. Extending the Venn-Abers method from binary classification to regression enhances prediction accuracy and interval efficiency. Their framework analyzes the interplay between model calibration and predictive inference, ensuring valid coverage while improving practical performance. Real-world experiments demonstrate its effectiveness, offering a robust and efficient alternative to feature-conditional validity in decision-making tasks requiring both point and interval predictions.

Self-Calibrating Conformal Prediction (SC-CP) is a modified version of CP that ensures both finite-sample validity and post-hoc applicability while achieving perfect calibration. It introduces Venn-Abers calibration, an extension of isotonic regression, to produce calibrated predictions in regression tasks. Venn-Abers generates prediction sets that are guaranteed to include a perfectly calibrated point prediction by iteratively calibrating over imputed outcomes and leveraging isotonic regression. SC-CP further conformalizes these predictions, constructing intervals centered around the calibrated outputs with quantifiable uncertainty. This approach effectively balances calibration and predictive performance, especially in small samples, by accounting for overfitting and uncertainty through adaptive intervals.
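SC-CP builds on standard split conformal prediction; for intuition, here is a minimal sketch of that baseline on synthetic data, using absolute residuals as conformity scores (the Venn-Abers calibration layer the paper adds is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: y = 2x + Gaussian noise
x = rng.uniform(0, 1, 2000)
y = 2 * x + rng.normal(0, 0.1, 2000)

# Split the data: fit a simple model on one half, calibrate on the other
x_fit, y_fit = x[:1000], y[:1000]
x_cal, y_cal = x[1000:], y[1000:]
slope = (x_fit @ y_fit) / (x_fit @ x_fit)  # least squares through the origin

def predict(xs):
    return slope * xs

# Conformity scores: absolute residuals on the held-out calibration set
scores = np.abs(y_cal - predict(x_cal))
alpha = 0.1  # target 90% coverage
n = len(scores)
# Finite-sample-corrected quantile guarantees >= 1 - alpha marginal coverage
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction interval for any new point: [prediction - q, prediction + q]
x_new = rng.uniform(0, 1, 5000)
y_new = 2 * x_new + rng.normal(0, 0.1, 5000)
covered = np.mean(np.abs(y_new - predict(x_new)) <= q)
print(f"Empirical coverage: {covered:.3f}")
```

The guarantee this sketch illustrates is only marginal, averaged over all points; SC-CP’s contribution is making such coverage hold conditionally on the (calibrated) prediction itself.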

The method is evaluated on the MEPS dataset, predicting healthcare utilization while assessing prediction-conditional validity across sensitive subgroups. The dataset comprises 15,656 samples with 139 features, including race as the sensitive attribute. The data is split into training, calibration, and test sets, and XGBoost trains the initial model under two settings: poorly calibrated (untransformed outcomes) and well-calibrated (transformed outcomes). SC-CP is compared against Marginal, Mondrian, CQR, and Kernel methods. Results show SC-CP achieves narrower intervals, improved calibration, and fairer predictions with reduced subgroup calibration errors. Unlike baselines, SC-CP adapts to heteroscedasticity, achieving desired coverage and interval efficiency.

In conclusion, SC-CP effectively integrates Venn-Abers calibration with Conformal Prediction to deliver calibrated point predictions and prediction intervals with finite-sample validity. By extending Venn-Abers calibration to regression tasks, SC-CP ensures robust prediction accuracy while improving interval efficiency and coverage conditional on forecasts. Experimental results, particularly on the MEPS dataset, highlight its ability to adapt to heteroscedasticity, achieve narrower prediction intervals, and maintain fairness across subgroups. Compared to traditional methods, SC-CP offers a practical and computationally efficient approach, making it particularly suitable for safety-critical applications requiring reliable uncertainty quantification and trustworthy predictions.

Check out the Paper. All credit for this research goes to the researchers of this project.

The post Self-Calibrating Conformal Prediction: Enhancing Reliability and Uncertainty Quantification in Regression Tasks appeared first on MarkTechPost.

Mechanisms of Localized Receptive Field Emergence in Neural Networks

A notable aspect of peripheral responses in the animal nervous system is localization, where the linear receptive fields of simple-cell neurons respond to specific, contiguous regions much smaller than their total input domain. However, localization poses a critical challenge in understanding neural information processing across sensory systems. Traditional machine learning approaches generate weight distributions that span entire input signals, diverging from biological neural networks’ localized processing strategies. This fundamental difference has motivated researchers to develop artificial learning models capable of generating localized receptive fields from naturalistic stimuli.

Existing research has explored multiple approaches to address the localization challenge in neural networks. Sparse coding, independent component analysis (ICA), and related compression methods have used a top-down strategy. These techniques aim to generate efficient input signal representations by optimizing explicit sparsity or independence criteria within critically parameterized regimes. More recent work found that localized receptive fields can develop in simple feedforward neural networks when trained on data models approximating natural visual inputs. Computational simulations reveal that these networks develop increased sensitivity to higher-order input statistics, with even single neurons learning localized receptive fields.

Researchers from Yale University and the Gatsby Unit & SWC, UCL have presented an analytical account of the mechanisms behind localized receptive field emergence. Building upon previous work, they describe the underlying principles driving localization in neural networks. The paper addresses the challenges of analyzing higher-order input statistics using existing tools that typically assume Gaussianity. By strategically separating the learning process into two distinct stages, the researchers developed analytical equations that capture the early-stage learning dynamics of a single-neuron model trained on idealized naturalistic data. The proposed method presents a unique analytical model that provides a concise description of the higher-order statistical structure driving localization.

The research focuses on a two-layer feedforward neural network with a nonlinear activation function and scalar output. The architecture’s ability to learn rich features has made it a critical subject of ongoing theoretical neural network analyses, highlighting its significance in understanding complex learning dynamics. The theoretical framework establishes an analytical model for localization dynamics in a single-neuron architecture. The researchers identified necessary and sufficient conditions for localization, initially demonstrated for a binary response scenario. Notably, the conditions developed for the single-neuron architecture were empirically validated for a multi-neuron architecture. Also, the proposed architectures would fail to learn localized receptive fields if trained on elliptical distributions.

The research findings reveal critical insights into the localization of neural network weights. When the parameters NLGP(g) and Kur(k) produce a negative excess kurtosis, the Inverse Participation Ratio (IPR) approaches its maximum value of 1.0, indicating highly localized weights. Conversely, positive excess kurtosis results in an IPR near zero, suggesting non-localized weight distributions. For the Ising model, the integrated receptive field precisely matches the simulated field’s peak position in 26 out of 28 initial conditions (93% accuracy). The results highlight excess kurtosis as a primary driver of localization, showing the phenomenon is largely independent of other data distribution properties.
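The IPR referenced here has a direct formula: for a weight vector w, IPR = Σ w_i⁴ / (Σ w_i²)², which equals 1.0 for a one-hot (fully localized) vector and 1/N for a uniform one. A quick sketch:

```python
import numpy as np

def ipr(w: np.ndarray) -> float:
    """Inverse Participation Ratio: 1.0 = fully localized, 1/N = fully spread."""
    w2 = w ** 2
    return float((w2 ** 2).sum() / w2.sum() ** 2)

n = 100
localized = np.zeros(n)
localized[42] = 1.0                  # one-hot: all weight on a single input
spread = np.ones(n) / np.sqrt(n)     # uniform: weight spread over all inputs

print(ipr(localized))  # 1.0
print(ipr(spread))     # ≈ 0.01 (= 1/N)
```

Reading the paper’s result through this lens: weight vectors trained on negative-excess-kurtosis data concentrate their mass like the one-hot case (IPR near 1.0), while positive excess kurtosis leaves them spread out (IPR near 1/N).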

In conclusion, researchers highlight the significant contributions of the analytical approach to understanding emergent localization in neural receptive fields. This approach aligns with recent research that repositions data-distributional properties as a primary mechanism for complex behavioral patterns. Through effective analytical dynamics, the researchers found that specific data properties, particularly covariance structure and marginals, fundamentally shape localization in neural receptive fields. Also, the researchers acknowledged the current data model as a simplified abstraction of early sensory systems, recognizing limitations such as the inability to capture orientation or phase selectivity. These set promising directions for future investigative work for noise-based frameworks or expanded computational models.

Check out the Paper. All credit for this research goes to the researchers of this project.

The post Mechanisms of Localized Receptive Field Emergence in Neural Networks appeared first on MarkTechPost.

Simplify multimodal generative AI with Amazon Bedrock Data Automation

Developers face significant challenges when using foundation models (FMs) to extract data from unstructured assets. This data extraction process requires carefully identifying models that meet the developer’s specific accuracy, cost, and feature requirements. Additionally, developers must invest considerable time optimizing price performance through fine-tuning and extensive prompt engineering. Managing multiple models, implementing safety guardrails, and adapting outputs to align with downstream system requirements can be difficult and time consuming.
Amazon Bedrock Data Automation in public preview helps address these and other challenges. This new capability from Amazon Bedrock offers a unified experience for developers of all skillsets to easily automate the extraction, transformation, and generation of relevant insights from documents, images, audio, and videos to build generative AI–powered applications. With Amazon Bedrock Data Automation, customers can fully utilize their data by extracting insights from their unstructured multimodal content in a format compatible with their applications. Amazon Bedrock Data Automation’s managed experience, ease of use, and customization capabilities help customers deliver business value faster, eliminating the need to spend time and effort orchestrating multiple models, engineering prompts, or stitching together outputs.
In this post, we demonstrate how to use Amazon Bedrock Data Automation in the AWS Management Console and the AWS SDK for Python (Boto3) for media analysis and intelligent document processing (IDP) workflows.
Amazon Bedrock Data Automation overview
You can use Amazon Bedrock Data Automation to generate standard outputs and custom outputs. Standard outputs are modality-specific default insights, such as video summaries that capture key moments, visual and audible toxic content, explanations of document charts, graph figure data, and more. Custom outputs use customer-defined blueprints that specify output requirements using natural language or a schema editor. The blueprint includes a list of fields to extract, data format for each field, and other instructions, such as data transformations and normalizations. This gives customers full control of the output, making it easy to integrate Amazon Bedrock Data Automation into existing applications.
Using Amazon Bedrock Data Automation, you can build powerful generative AI applications and automate use cases such as media analysis and IDP. Amazon Bedrock Data Automation is also integrated with Amazon Bedrock Knowledge Bases, making it easier for developers to generate meaningful information from their unstructured multimodal content to provide more relevant responses for Retrieval Augmented Generation (RAG).
Customers can get started with standard outputs for all four modalities (documents, images, videos, and audio) and with custom outputs for documents and images. Custom outputs for video and audio will be supported when the capability is generally available.
Amazon Bedrock Data Automation for images, audio, and video
To take a media analysis example, suppose that customers in the media and entertainment industry are looking to monetize long-form content, such as TV shows and movies, through contextual ad placement. To deliver the right ads at the right video moments, you need to derive meaningful insights from both the ads and the video content. Amazon Bedrock Data Automation enables your contextual ad placement application by generating these insights. For instance, you can extract valuable information such as video summaries, scene-level summaries, content moderation concepts, and scene classifications based on the Interactive Advertising Bureau (IAB) taxonomy.
To get started with deriving insights with Amazon Bedrock Data Automation, you can create a project where you can specify your output configuration using the AWS console, AWS Command Line Interface (AWS CLI) or API.
To create a project on the Amazon Bedrock console, follow these steps:

Expand the Data Automation dropdown menu in the navigation pane and select Projects, as shown in the following screenshot.
From the Projects console, create a new project and provide a project name, as shown in the following screenshot.
From within the project, choose Edit, as shown in the following screenshot, to specify or modify an output configuration. Standard output is the default way of interacting with Amazon Bedrock Data Automation. It can be used with audio, documents, images, and videos, and you can have one standard output configuration per data type for each project.
For customers who want to analyze images and videos for media analysis, standard output can be used to generate insights such as image summary, video scene summary, and scene classifications with IAB taxonomy. You can select the image summarization, video scene summarization, and IAB taxonomy checkboxes from the Standard output tab and then choose Save changes to finish configuring your project, as shown in the following screenshot.
To test the standard output configuration using your media assets, choose Test, as shown in the following screenshot.

The next example uses the project to generate insights for a travel ad.

Upload an image, then choose Generate results, as shown in the following screenshot, for Amazon Bedrock Data Automation to invoke an inference request.
Amazon Bedrock Data Automation will process the uploaded file based on the project’s configuration, automatically detecting that the file is an image and then generating a summary and IAB categories for the travel ad.
After you have generated insights for the ad image, you can generate video insights to determine the best video scene for effective ad placement. In the same project, upload a video file and choose Generate results, as shown in the following screenshot.

Amazon Bedrock Data Automation will detect that the file is a video and will generate insights for the video based on the standard output configuration specified in the project, as shown in the following screenshot.

These insights from Amazon Bedrock Data Automation can help you place relevant ads in your video content, which can improve content monetization.
Intelligent document processing with Amazon Bedrock Data Automation
You can use Amazon Bedrock Data Automation to automate IDP workflows at scale, without needing to orchestrate complex document processing tasks such as classification, extraction, normalization, or validation.
To take a mortgage example, a lender wants to automate the processing of a mortgage lending packet to streamline their IDP pipeline and improve the accuracy of loan processing. Amazon Bedrock Data Automation simplifies the automation of complex IDP tasks such as document splitting, classification, data extraction, output format normalization, and data validation. Amazon Bedrock Data Automation also incorporates confidence scores and visual grounding of the output data to mitigate hallucinations and help improve result reliability.
For example, you can generate custom output by defining blueprints, which specify output requirements using natural language or a schema editor, to process multiple file types in a single, streamlined API. Blueprints can be created using the console or the API, and you can use a catalog blueprint or create a custom blueprint for documents and images.
For all modalities, this workflow consists of three main steps: creating a project, invoking the analysis, and retrieving the results.
The following solution walks you through a simplified mortgage lending process with Amazon Bedrock Data Automation using the AWS SDK for Python (Boto3), which is straightforward to integrate into an existing IDP workflow.
Prerequisites
Before you invoke the Amazon Bedrock API, make sure you have the following:

An AWS account that provides access to AWS services, including Amazon Bedrock Data Automation and Amazon Simple Storage Service (Amazon S3)
The AWS CLI set up
An AWS Identity and Access Management (IAM) user set up for the Amazon Bedrock Data Automation API and appropriate permissions added to the IAM user
The IAM user access key and secret key to configure the AWS CLI and permissions
The latest Boto3 library
The minimum Python version 3.8 configured with your integrated development environment (IDE)
An S3 bucket

Create custom blueprint
In this example, you have the lending packet, as shown in the following image, which contains three documents: a pay stub, a W-2 form, and a driver’s license.

Amazon Bedrock Data Automation has sample blueprints for these three documents that define commonly extracted fields. However, you can also customize Amazon Bedrock Data Automation to extract specific fields from each document. For example, you can extract only the gross pay and net pay from the pay stub by creating a custom blueprint.
To create a custom blueprint using the API, you can use the CreateBlueprint operation using the Amazon Bedrock Data Automation Client. The following example shows the gross pay and net pay being defined as properties passed to CreateBlueprint, to be extracted from the lending packet:

import json

import boto3

# Client for the Amazon Bedrock Data Automation control plane
bedrock_data_automation_client = boto3.client('bedrock-data-automation')

bda_create_blueprint_response = bedrock_data_automation_client.create_blueprint(
    blueprintName='CUSTOM_PAYSLIP_BLUEPRINT',
    type='DOCUMENT',
    blueprintStage='LIVE',
    schema=json.dumps({
        '$schema': 'http://json-schema.org/draft-07/schema#',
        'description': 'default',
        'documentClass': 'default',
        'type': 'object',
        'properties': {
            'gross_pay_this_period': {
                'type': 'number',
                'inferenceType': 'extractive',
                'description': 'The gross pay for this pay period from the Earnings table'
            },
            'net_pay': {
                'type': 'number',
                'inferenceType': 'extractive',
                'description': 'The net pay for this pay period from the bottom of the document'
            }
        }
    }),
)

The CreateBlueprint response returns the blueprintARN for the pay stub’s custom blueprint:

'blueprintArn: arn:aws:bedrock:us-west-2:<AWS_ACCOUNT_ID>:blueprint/<BLUEPRINT_ID>'

Configure Amazon Bedrock Data Automation project
To begin processing files using blueprints with Amazon Bedrock Data Automation, you first need to create a data automation project. To process a multiple-page document containing different file types, you can configure a project with different blueprints for each file type.
Amazon Bedrock Data Automation can apply multiple document blueprints within one project, so you can process different types of documents in the same project, each with its own custom extraction logic.
When using the API to create a project, you invoke the CreateDataAutomationProject operation. The following is an example of how you can configure custom output using the custom blueprint for the pay stub and the sample blueprints for the W-2 and driver’s license:

bda_stage = 'LIVE'  # matches the blueprintStage used when creating the blueprint

bda_bedrock_automation_create_project_response = bedrock_data_automation_client.create_data_automation_project(
    projectName='TEST_PROJECT',
    projectDescription='test BDA project',
    projectStage=bda_stage,
    standardOutputConfiguration={
        'document': {
            'outputFormat': {
                'textFormat': {
                    'types': ['PLAIN_TEXT']
                },
                'additionalFileFormat': {
                    'state': 'ENABLED',
                }
            }
        },
    },
    customOutputConfiguration={
        'blueprints': [
            {
                'blueprintArn': 'arn:aws:bedrock:us-west-2:<AWS_ACCOUNT_ID>:blueprint/<BLUEPRINT_ID>'
            },
            {
                'blueprintArn': 'arn:aws:bedrock:us-west-2:aws:blueprint/bedrock-data-automation-public-w2-form'
            },
            {
                'blueprintArn': 'arn:aws:bedrock:us-west-2:aws:blueprint/bedrock-data-automation-public-us-driver-license'
            },
        ],
    },
    overrideConfiguration={
        'document': {
            'splitter': {
                'state': 'ENABLED'
            }
        }
    },
)

The CreateProject response returns the projectARN for the project:

'arn:aws:bedrock:us-west-2:<AWS_ACCOUNT_ID>:data-automation-project/<PROJECT_ID>'

To process different types of documents using multiple document blueprints in a single project, Amazon Bedrock Data Automation uses a splitter configuration, which must be enabled through the API. The following is the override configuration for the splitter, and you can refer to the Boto3 documentation for more information:

overrideConfiguration={
    'document': {
        'splitter': {
            'state': 'ENABLED' | 'DISABLED'
        }
    }
},

Upon creation, the API validates the input configuration and creates a new project, returning the projectARN:

'arn:aws:bedrock:us-west-2:<AWS_ACCOUNT_ID>:data-automation-project/<PROJECT_ID>'

Test the solution
Now that the blueprint and project setup is complete, the InvokeDataAutomationAsync operation from the Amazon Bedrock Data Automation runtime can be used to start processing files. This API call initiates the asynchronous processing of files in an S3 bucket, in this case the lending packet, using the configuration defined in the project by passing the project's ARN:

# Runtime client for invoking and monitoring data automation jobs
bedrock_data_automation_runtime_client = boto3.client('bedrock-data-automation-runtime')

bda_invoke_data_automation_async_response = bedrock_data_automation_runtime_client.invoke_data_automation_async(
    inputConfiguration={'s3Uri': '<S3_URI>'},
    outputConfiguration={'s3Uri': '<S3_URI>'},
    dataAutomationConfiguration={
        'dataAutomationArn': 'arn:aws:bedrock:us-west-2:<AWS_ACCOUNT_ID>:data-automation-project/<PROJECT_ID>',
        'stage': 'LIVE'
    }
)

InvokeDataAutomationAsync returns the invocationARN:

'arn:aws:bedrock:us-west-2:<AWS_ACCOUNT_ID>:data-automation-invocation/<INVOCATION_ID>'

GetDataAutomationStatus can be used to view the status of the invocation, using the InvocationARN from the previous response:

bda_get_data_automation_status_response = bedrock_data_automation_runtime_client.get_data_automation_status(
    invocationArn='arn:aws:bedrock:us-west-2:<AWS_ACCOUNT_ID>:data-automation-invocation/<INVOCATION_ID>'
)
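For long-running jobs, the status call is typically wrapped in a polling loop. The following is a minimal sketch, assuming the response carries a 'status' field with values such as 'Created', 'InProgress', and 'Success' (verify the exact field names in the Boto3 documentation for your SDK version); the client is passed in as a parameter so the loop can be exercised with a stub:

```python
import time

def wait_for_invocation(client, invocation_arn, poll_seconds=10, max_attempts=60):
    """Poll GetDataAutomationStatus until the job reaches a terminal state.

    Assumes the response carries a 'status' key with values such as
    'Created', 'InProgress', or 'Success' -- check the Boto3 documentation
    for the exact field names in your SDK version.
    """
    for _ in range(max_attempts):
        response = client.get_data_automation_status(invocationArn=invocation_arn)
        status = response.get('status')
        if status not in ('Created', 'InProgress'):
            # Terminal state reached (success or error); return the full response
            return response
        time.sleep(poll_seconds)
    raise TimeoutError(f'Invocation {invocation_arn} did not finish in time')
```

Injecting the client also keeps the helper independent of AWS credentials, so it can be unit tested with a fake client before wiring it into the workflow.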

When the job is complete, view the results in the S3 bucket used in the outputConfiguration by navigating to the ~/JOB_ID/0/custom_output/ folder.
From the following sample output, Amazon Bedrock Data Automation associated the pay stub file with the custom pay stub blueprint with a high level of confidence:

'matched_blueprint': {
    'arn': '<BLUEPRINT_ARN>', 'name': 'CUSTOM_PAYSLIP_BLUEPRINT', 'confidence': 0.99959725
}

Using the matched blueprint, Amazon Bedrock Data Automation was able to accurately extract each field defined in the blueprint:

'inference_result': {
    'net_pay': 291.9, 'gross_pay_this_period': 452.43
}

Additionally, Amazon Bedrock Data Automation returns confidence scores and bounding box information for each field:

'explainability_info': [{
    'net_pay': {'success': true, 'confidence': 0.96484375, 'geometry': [{'boundingBox': …
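Downstream code can use these scores to gate which extracted fields it trusts. The helper below is an illustrative sketch (not part of the Amazon Bedrock Data Automation API) that keeps only fields whose confidence clears a threshold, assuming the result layout shown in the samples above:

```python
def extract_confident_fields(inference_result, explainability_info, threshold=0.9):
    """Return only the extracted fields whose confidence meets the threshold.

    inference_result: dict of field name -> extracted value
    explainability_info: list of dicts mapping field name -> metadata,
        including a 'confidence' score, mirroring the sample output above.
    """
    # Build a lookup of per-field confidence scores
    confidence = {}
    for block in explainability_info:
        for field, meta in block.items():
            confidence[field] = meta.get('confidence', 0.0)
    # Keep only fields that clear the threshold; the rest can be routed
    # to human review instead of being accepted automatically
    return {
        field: value
        for field, value in inference_result.items()
        if confidence.get(field, 0.0) >= threshold
    }
```

Fields that fall below the threshold can then be queued for human review rather than flowing straight into the loan processing pipeline.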

This example demonstrates how customers can use Amazon Bedrock Data Automation to streamline and automate an IDP workflow, including complex document processing tasks such as data extraction, normalization, and validation. By reducing operational complexity and improving processing efficiency, it helps lenders handle higher loan processing volumes, minimize errors, and drive operational excellence.
Cleanup
When you’re finished evaluating this feature, delete the S3 bucket and any objects to avoid any further charges.
Summary
Customers can get started with Amazon Bedrock Data Automation, which is available in public preview in the US West (Oregon) AWS Region. Learn more about Amazon Bedrock Data Automation and how to automate the generation of accurate information from unstructured content for building generative AI–based applications.

About the authors
Ian Lodge is a Solutions Architect at AWS, helping ISV customers in solving their architectural, operational, and cost optimization challenges. Outside of work, he enjoys spending time with his family, playing ice hockey, and woodworking.
Alex Pieri is a Solutions Architect at AWS that works with retail customers to plan, build, and optimize their AWS cloud environments. He specializes in helping customers build enterprise-ready generative AI solutions on AWS.
Raj Pathak is a Principal Solutions Architect and Technical advisor to Fortune 50 and Mid-Sized FSI (Banking, Insurance, Capital Markets) customers across Canada and the United States. Raj specializes in Machine Learning with applications in Generative AI, Natural Language Processing, Intelligent Document Processing, and MLOps.

How TUI uses Amazon Bedrock to scale content creation and enhance hote …

TUI Group is one of the world’s leading tourism groups, providing 21 million customers with an unmatched holiday experience in 180 regions. TUI Group covers the end-to-end tourism chain with over 400 owned hotels, 16 cruise ships, 1,200 travel agencies, and 5 airlines covering all major holiday destinations around the globe. At TUI, crafting high-quality content is a crucial component of its promotional strategy.
The TUI content teams are tasked with producing high-quality content for its websites, including product details, hotel information, and travel guides, often using descriptions written by hotel and third-party partners. This content needs to adhere to TUI’s tone of voice, which is essential to communicating the brand’s distinct personality. But as its portfolio expands with more hotels and offerings, scaling content creation has proven challenging. This presents an opportunity to augment and automate the existing content creation process using generative AI.
In this post, we discuss how we used Amazon SageMaker and Amazon Bedrock to build a content generator that rewrites marketing content following specific brand and style guidelines. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies such as AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI. Amazon SageMaker helps data scientists and machine learning (ML) engineers build FMs from scratch, evaluate and customize FMs with advanced techniques, and deploy FMs with fine-grain controls for generative AI use cases that have stringent requirements on accuracy, latency, and cost.
Through experimentation, we found that a two-phase approach worked best to make sure the output aligned with TUI’s tone of voice requirements. The first phase fine-tuned a smaller large language model (LLM) on a large corpus of data. The second phase used a different LLM for post-processing. Fine-tuning on static data let us generate content that mimics the TUI brand voice in a way that prompt engineering alone could not capture. Employing a second model with few-shot examples helped verify that the output adhered to specific formatting and grammatical rules. Because the second model draws on a more dynamic dataset, we can quickly adjust the output for different brand requirements in the future. Overall, this approach resulted in higher quality content and allowed TUI to improve content quality at a higher velocity.
Solution overview
The architecture consists of a few key components:

LLM models – We evaluated different approaches and found that a two-model solution performed the best. This consists of a fine-tuned Meta Llama model to generate a description for the given hotel and Anthropic’s Claude model to reformat its output. Fine-tuning and hosting the Meta Llama 2 model was done on Amazon SageMaker, and Anthropic’s Claude 2 was consumed from Amazon Bedrock through API calls.
Orchestration – We created a state machine using AWS Step Functions to make calls in a batch format to the two LLMs and fetch the search engine optimization (SEO) score for the generated content from a third-party API. If the SEO content score is above a defined threshold (80%), the generated content is stored in an Amazon DynamoDB table and can later be reviewed by the content team directly in the front-end UI. Through this process, we maintain and monitor content quality at scale.
Human in the loop feedback – We developed a custom React front-end application to gather feedback from the content team to facilitate continuous improvement and future model fine-tuning. You can use the feedback to fine-tune a base model on SageMaker using reinforcement learning from human feedback (RLHF) to improve performance.
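The SEO threshold check in the Step Functions workflow can be sketched as a small gating function. This is illustrative only; the tuple layout and hotel IDs are assumptions, and the 80% cutoff matches the threshold described above:

```python
SEO_THRESHOLD = 80  # minimum SEO score (percent) before content is stored for review

def should_store_content(seo_score, threshold=SEO_THRESHOLD):
    """Return True when generated content clears the SEO score threshold
    and should be written to DynamoDB for the content team to review."""
    return seo_score >= threshold

def route_generated_content(items):
    """Split a batch of (hotel_id, description, seo_score) tuples into
    IDs whose content should be stored and IDs to send back for regeneration."""
    store, regenerate = [], []
    for hotel_id, description, score in items:
        (store if should_store_content(score) else regenerate).append(hotel_id)
    return store, regenerate
```

Keeping the threshold in one place makes it easy to tune the quality bar as the content team's feedback accumulates.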

The following diagram is the high-level architecture of the solution.

Prompt engineering
We started by crafting prompts that include the most common issues identified by the TUI content team, including tone of voice, accuracy, length, and grammar. We provided the LLM with a few examples of curated content within the context window. Although the generated output followed the guidance, the writing style didn’t meet TUI’s tone of voice requirements.
 Example prompt:

You are an experienced British copywriter for TUI. TUI is a world-leading travel company. You are an expert in generating hotel descriptions, based on TUI’s tone of voice. TUI’s tone of voice can be described as upbeat, enthusiastic, and promotional. Avoid all the words in the following list: {banned words}
Write at most 100 words.
Your hotel descriptions must follow TUI’s tone of voice and apply SEO guidelines.
These are some good examples. You should mimic below.
{examples}.
Human: {input}
Assistant:
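A template like this can be filled in programmatically before each model call. The helper below is a sketch; the banned-word list and curated examples are placeholders for TUI's actual assets:

```python
# Few-shot prompt template mirroring the example above; the banned words
# and curated examples are supplied at call time.
PROMPT_TEMPLATE = """You are an experienced British copywriter for TUI. TUI is a world-leading travel company. You are an expert in generating hotel descriptions, based on TUI's tone of voice. TUI's tone of voice can be described as upbeat, enthusiastic, and promotional. Avoid all the words in the following list: {banned_words}
Write at most {max_words} words.
Your hotel descriptions must follow TUI's tone of voice and apply SEO guidelines.
These are some good examples. You should mimic below.
{examples}
Human: {input_text}
Assistant:"""

def build_prompt(input_text, banned_words, examples, max_words=100):
    """Fill the few-shot template with banned words and curated examples."""
    return PROMPT_TEMPLATE.format(
        banned_words=', '.join(banned_words),
        max_words=max_words,
        examples='\n\n'.join(examples),
        input_text=input_text,
    )
```

Assembling the prompt from data this way keeps the banned-word list and example set easy to update without touching the template itself.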

Here is an example of the results we received.

Original text
Generated text

Boutique interiors and a great location combine at this small but perfectly formed hotel. It may be small but this 15-room hotel packs a punch with its interior design. The immaculate pool is set in an open atrium area. And there’s funky cocktail bar and lounge, too. The location is every bit as impressive as the styling. It’s about a 10-minute walk to the beach and less than half that to the centre of Thassos Town.
This petite hotel brings boutique style to Thassos Town. Its immaculate pool and funky cocktail bar create an oasis in the heart of the action. The beach is just a 10-minute walk away, so you can easily split your time between sunbathing and sightseeing. With only 15 rooms, the experience feels intimate yet worldly. The interiors marry modern design with traditional touches for a unique look. And the central location means popular attractions like the Archaeological Museum and shopping street are right on your doorstep.

To evaluate these results, we generated a dataset of 150 hotel descriptions using two different prompt versions and asked our business teams to review it. They flagged significant issues in 98% of the output. Anthropic’s Claude 2 struggled to adhere to TUI’s tone of voice and branding guidelines, frequently employed abbreviations, and favored American English over British English. It also failed to follow the SEO guidelines provided and sometimes generated false information about hotel facilities and locations. The following image shows a list of these challenges and how the LLM handled them. Of the six challenges, the LLM met only one.

Fine-tuning Llama 2 using PEFT on Amazon SageMaker JumpStart
These issues and poor feedback led us to conclude that prompt engineering alone would not adequately address the problem. As a result, we decided to pursue an alternative approach: fine-tuning a smaller large language model to rewrite the text in accordance with TUI’s tone of voice. We used a curated set of hotel descriptions written by TUI copywriters so that the model would have better alignment with our guidelines.
We selected Meta Llama 2, one of the top open source LLMs available at the time, through Amazon SageMaker JumpStart, and chose the 13B parameter version to apply parameter-efficient fine-tuning (PEFT), specifically quantized low-rank adaptation (QLoRA). This technique quantizes the pre-trained model to 4 bits and adds small low-rank adapters for fine-tuning. We fine-tuned the model on a single ml.g5.4xlarge instance in about 20 hours using a relatively small dataset of around 4,500 hotels. We also tested the Llama 2 7B and 70B models. We found that the 7B model didn’t perform well enough, and the 70B model incurred much higher costs without significant improvement.
We used common natural language processing (NLP) evaluation metrics, such as perplexity for evaluation and monitoring during training, and established daily feedback loops with the content team to refine the test set. The output from the fine-tuned Meta Llama 2 13B model effectively mimicked the TUI tone of voice and adhered to most SEO guidelines, even those not specified in the prompt.
For more information, refer to Fine-tune Llama 2 for text generation on Amazon SageMaker JumpStart.

Original text
Generated text

The NH München City Süd hotel, formerly known as NH München am Ring, is located southeast of Munich. The location is ideal for reaching the highway towards Salzburg and provides good access to the airport and Exhibition Center. You can also get to the city center in 20 minutes by hopping on the S-Bahn at the Karl-Preis-Platz station. The peaceful residential location offers an excellent opportunity to relax in one of the 163 rooms. These are decorated in white with red accents. Rooms feature free Wi-Fi and electronic safes that are large enough for your laptop. If you need to work, there are spacious desks in every room and apartments with seperate living rooms are available for longer term guests. The suites also come with separate living rooms and foldaway beds are available upon request. The breakfast will get you ready for a busy day with a large spread of cold cuts, cheeses, fresh pastries, and fruit. These come accompanied with freshly brewed coffee, tea, and fresh fruit juices. For lunch and dinner, you will find many restaurant options in close proximity to the hotel. There is a sauna for relaxation and you can park your car in the garage.
Make yourself at home in Munich at the München City Süd. This former NH München am Ring sits just southeast of the Bavarian capital, putting both the highway to Salzburg and the airport within easy reach. The S-Bahn station Karl-Preis-Platz is a quick walk away too, so you can be in the city center in just 20 minutes. The neighborhood here is peaceful, giving you the chance to properly relax during your stay. The hotel’s 163 rooms are decorated in calming white and red hues. Free Wi-Fi and laptop-sized safes come as standard, while spacious desks make working easy. For longer visits, apartments with separate living areas are available. The suites take things up a notch with their own living spaces and foldaway beds on request. Fuel up each morning with a vast breakfast spread of cold cuts, cheeses, pastries and fruit. All washed down with coffee, tea and fresh juices. You’ll find many dining options close by for other meals. After a busy day, unwind in the sauna or park up in the garage.

The following image shows a list of the challenges and how the LLM handled them. Of the six challenges, the LLM met four.

Integrating Anthropic’s Claude 2 for further improvement
To further improve the output, we introduced Anthropic’s Claude 2 using Amazon Bedrock as a final refinement step. This included converting American spelling to British spelling, writing numbers one through nine in words and larger numbers in digits, correcting typos and capitalization errors, minimizing banned words, incorporating essential TUI branding words, and adding missing hotel information. We also implemented a feedback mechanism in the UI to use data for ongoing fine-tuning in production. By using Anthropic Claude 2, we make sure that the final output applies the remaining formatting rules.
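Some of these formatting rules are deterministic, so they can also be spot-checked outside the LLM. The following is an illustrative sketch, not TUI's actual pipeline; the word lists are toy examples of the British-spelling and number-formatting rules described above:

```python
import re

# Illustrative rule tables -- toy examples, not TUI's actual lists
AMERICAN_TO_BRITISH = {'color': 'colour', 'center': 'centre', 'traveler': 'traveller'}
DIGIT_WORDS = {str(n): w for n, w in enumerate(
    ['zero', 'one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine'])}

def spell_out_small_numbers(text):
    """Replace standalone digits 1-9 with their word form, per the style rule;
    multi-digit numbers like 10 are left as digits."""
    return re.sub(r'\b([1-9])\b', lambda m: DIGIT_WORDS[m.group(1)], text)

def flag_american_spellings(text):
    """Return any American spellings found, mapped to their British forms."""
    words = re.findall(r'[a-z]+', text.lower())
    return {w: AMERICAN_TO_BRITISH[w] for w in words if w in AMERICAN_TO_BRITISH}
```

Checks like these can run after the Claude 2 refinement step to confirm the formatting rules actually held before content reaches the review UI.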

Original text
Generated text

As its name suggests, Vienna Sporthotel has loads lined up for sports fans. There are 10 tennis courts and four golf simulators here, plus, you can book a session with a coach to brush up on your skills. On top of that, there’s an outdoor pool, a well-stocked fitness room and a big sauna. To fuel up for all that exercise, you can grab a plate at the breakfast buffet. Location-wise, this place is set up for sightseeing. Hop on a bus and you can reach the likes of The Hofburg, Belvedere Palace and Karlskirche in less than half an hour. You’re only a 10-minute stroll away from the nearest restaurants and cafes, too.
With a name like Vienna Sporthotel, you can expect this place to serve up plenty for fans of sport and activity. Brush up your skills on one of 10 tennis courts or in the four golf simulators, then recharge in the outdoor pool or well-equipped gym. There’s even a coach on hand to help perfect your technique. When you’re ready to refuel, tuck into the breakfast buffet. Then get set for sightseeing – a bus ride whisks you to top Vienna attractions like The Hofburg, Belvedere Palace and Karlskirche in under 30 minutes. You’re also just a short stroll from local eateries and coffee shops.

The following image shows a list of the challenges and how the LLM handled them. The LLM met all six challenges.

Key outcomes
The final architecture consists of a fine-tuned Meta Llama 2 13B model and Anthropic Claude 2, using the strengths of each model. In a blind test, these dynamically generated hotel descriptions were rated higher than those written by humans in 75% of a sample of 50 hotels. We also integrated a third-party API to calculate SEO scores for the generated content, and we observed up to 4% uplift in SEO scores for the generated content compared to human written descriptions. Most significantly, the content generation process is now five times faster, enhancing our team’s productivity without compromising quality or consistency. We can generate a vast number of hotel descriptions in just a few hours, a task that previously took months.
Takeaways
Moving forward, we plan to explore how this technology can address current inefficiencies and quality gaps, especially for hotels that our team hasn’t had the capacity to curate. We plan to expand this solution to more brands and regions within the TUI portfolio, including producing content in various languages and tailoring it to meet the specific needs of different audiences.
Throughout this project, we learned a few valuable lessons:

Few-shot prompting is cost-effective and sufficient when you have limited examples and specific guidelines for responses. Fine-tuning can help significantly improve model performance when you need to tailor content to match a brand’s tone of voice, but can be resource intensive and is based on static data sources that can get outdated.
Fine-tuning the Llama 70B model was much more expensive than Llama 13B and did not result in significant improvement.
Incorporating human feedback and maintaining a human-in-the-loop approach is essential for protecting brand integrity and continuously improving the solution. The collaboration between TUI engineering, content, and SEO teams was crucial to the success of this project.

Although Meta Llama 2 and Anthropic’s Claude 2 were the latest state-of-the-art models available at the time of our experiment, since then we have seen the launch of Meta Llama 3 and Anthropic’s Claude 3.5, which we expect can significantly improve the quality of our outputs. Amazon Bedrock also now supports fine-tuning for Meta Llama 2, Cohere Command Light, and Amazon Titan models, making it simpler and faster to test models without managing infrastructure.

About the Authors
Nikolaos Zavitsanos is a Data Scientist at TUI, specializing in developing customer-facing Generative AI applications using AWS services. With a strong background in Computer Science and Artificial Intelligence, he leverages advanced technologies to enhance user experiences and drive innovation. Outside of work, Nikolaos plays water polo and competes at a national level. Connect with Nikolaos on LinkedIn.
Hin Yee Liu is a Senior Prototyping Engagement Manager at Amazon Web Services. She helps AWS customers to bring their big ideas to life and accelerate the adoption of emerging technologies. Hin Yee works closely with customer stakeholders to identify, shape and deliver impactful use cases leveraging Generative AI, AI/ML, Big Data, and Serverless technologies using agile methodologies. In her free time, she enjoys knitting, travelling and strength training. Connect with Hin Yee on LinkedIn.

15 High-Converting Lead Capture Tactics to Turn Maybe to Yes

Did you know that 70% of your website visitors leave without taking action? That’s a massive chunk of potential leads walking out the door!  

But here’s the good news…capturing leads doesn’t require tricks, gimmicks, or begging. It’s about delivering value and reducing friction.

In our Lead Capture Guide, we walked through the basics of capturing leads. Now, we’re taking it further with 15 proven lead capture tactics you can put to work right now. 

These lead capture strategies are sharp, specific, and ready to turn those “maybes” into actual leads. Let’s dive in.

Unlock High-Intent Leads Hiding on Your Site

Book a demo of Customers.ai’s U.S. website visitor identification, customer journey insights and remarketing platform to skyrocket conversions and sales.

Book a Demo

Lead Capture Tactic #1: Use a Visitor Identification Technology

Most website visitors stay anonymous…browsing your site, clicking around, and leaving without a trace. That’s a huge missed opportunity! 

With visitor identification technology, you can de-anonymize those visitors, capture their information, and turn them into actionable leads.

Here’s where Customers.ai takes it to the next level. We don’t just identify anonymous website visitors, we can also identify users who started filling out a form but didn’t hit “submit.” That’s right…lost leads are no longer lost!

How it works:

Identify Anonymous Visitors: Customers.ai’s tech pinpoints who’s visiting your site, even if they never fill out a form. You get access to real names, email addresses, and other valuable contact information so you can follow up with targeted outreach.

Recover Abandoned Forms: Someone started filling out your lead form but got distracted or changed their mind? Customers.ai can identify that partial data, so you can retarget them or reconnect with an offer they can’t refuse.

Why it works: Traditional lead capture relies entirely on form submissions, but 90-98% of visitors leave without completing a form. Yikes! Visitor identification flips the script by capturing lead data you’d otherwise miss.

Pro Tip: Pair this technology with automated follow-ups. Once you’ve identified your visitors, trigger an email or SMS flow with an offer or reminder to re-engage them.

Visitor identification technology is a smarter way to maximize your lead capture efforts and make sure no potential lead slips through the cracks.

Lead Capture Tactic #2: Use a Strong, Value-Packed Lead Magnet

A lead magnet is an incentive you offer in exchange for a visitor’s contact information. Think of it as your ticket to starting a conversation with potential customers. 

But here’s the catch – it needs to provide real value. 

People don’t hand over their email addresses for anything less than something that feels genuinely useful, exclusive, or actionable.

The best lead magnets solve a problem, save time, or help your audience achieve a goal faster. They also align with your audience’s interests and where they are in their buying journey. 

For example, if you’re selling skincare, a guide to “5 Ingredients That Fix Dry Skin Fast” will get sign-ups. Running an ecommerce store? A 10% discount code or “early access to new drops” works wonders.


Here are examples of lead magnets that convert:

Exclusive Content: eBooks, case studies, or in-depth guides. If you’re a B2B brand, whitepapers that offer advanced industry insights can work well.

Discount Codes: Offering 10-15% off a first purchase is a proven way to capture ecommerce leads and get hesitant buyers off the fence.

Free Tools and Templates: Tools like calculators, planners, or checklists are irresistible when they solve an immediate problem. Example: a budgeting template for a financial services brand.

Webinars or Video Workshops: Share insights your audience can’t get anywhere else. Focus on actionable takeaways that give immediate value.

Quizzes: A quiz like “What’s Your Perfect Skincare Routine?” delivers a result while collecting lead data in the background. Combine this with a discount on the recommended products for bonus points.

What makes a lead magnet work? 

It’s not just about the format, it’s also about the promise. 

Your lead magnet should answer one key question: “Why should I give you my email?” Make the benefit clear, specific, and relevant to your audience. For example, instead of saying, “Sign up for our newsletter,” try “Get weekly tips to double your conversions in 10 minutes or less.” 

Always position your lead magnet as something worth their time and email.

Pro tip: Keep the process frictionless. Use a simple opt-in form with a name and email field—anything more will hurt conversions. 

Once you’ve delivered the lead magnet, follow up with automated email flows that nurture the relationship and guide your lead toward the next step.

Lead Capture Tactic #3. Leverage Time-Sensitive Offers

Urgency is one of the most effective ways to push visitors from “maybe later” to “yes, right now.” 

Time-sensitive offers tap into the fear of missing out (FOMO) and create a psychological nudge that encourages immediate action. 

The key here is to make the offer feel both valuable and temporary. Visitors need to believe that waiting will cost them.

What works best for urgency?

Limited-Time Discounts: Offer a percentage off (like 10-20%) or free shipping, but tie it to a deadline. Example: “20% off your first order—expires in 24 hours!”

Exclusive Early Access: Give leads a chance to access products, content, or features before anyone else. Example: “Sign up today for early access to our summer collection.”

Free Trials: For SaaS or subscription-based businesses, a free trial for a limited period works well. Add urgency with a “limited spots available” message to amplify the effect.

Flash Deals: Highlight short, unexpected deals that require users to act immediately. For example: “This deal ends at midnight—don’t miss out!”

Why time-sensitive offers work: 

Urgency creates a sense of scarcity, which triggers action. When visitors know there’s a deadline, they’re less likely to procrastinate. Instead of saving the offer for later (and forgetting about it), they’re motivated to act now.

How to implement it effectively:

Make the urgency clear: Use bold messaging like “Ends in 24 hours!” or “Offer expires today!” Combine this with countdown timers on your website, emails, or SMS for added pressure.

Keep the CTA simple: Phrases like “Claim Your Discount” or “Shop Now” work best for urgency-driven campaigns. Make the next step easy to take.

Use urgency across channels: Reinforce the limited-time offer via pop-ups, follow-up emails, and SMS reminders. Example:

Pop-up: “20% off your first order—claim it now before it’s gone!”

Email: “Your 20% off ends tonight! Don’t miss out.”

SMS: “Last call! Your 20% off code expires in 3 hours: [link].”

Pro tip: Don’t overdo it. Fake urgency (“only 5 spots left!” when that’s not true) will erode trust and hurt long-term results. 

Use real, meaningful deadlines, and always deliver on the promises you make. Done right, time-sensitive offers turn visitors into leads and leads into buyers.

Lead Capture Tactic #4. Add a Smart Exit-Intent Pop-Up

Exit-intent pop-ups are your last chance to capture visitors who are about to leave your site. 

Using exit-intent technology, these pop-ups trigger when a user’s mouse movement signals they’re about to bounce, giving you a critical opportunity to turn an exit into a conversion.
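Under the hood, exit-intent detection typically watches for the cursor making a fast move toward the top of the viewport. Here is a minimal browser-side sketch; the 10-pixel threshold and the `showExitPopup` hook are illustrative assumptions, not any specific vendor’s implementation:

```typescript
// Minimal exit-intent heuristic: the pointer is near the top edge of the
// viewport and moving upward (toward the address bar or tab close button).
function isExitIntent(clientY: number, movementY: number): boolean {
  // clientY <= 10: within 10px of the top edge (illustrative threshold).
  // movementY < 0: the pointer is moving upward.
  return clientY <= 10 && movementY < 0;
}

// Browser wiring (commented out so the sketch runs anywhere):
// document.addEventListener("mouseout", (e) => {
//   // relatedTarget === null means the pointer left the page entirely.
//   if (e.relatedTarget === null && isExitIntent(e.clientY, e.movementY)) {
//     showExitPopup(); // your pop-up function (hypothetical)
//   }
// });
```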

Your exit-intent pop-up needs to offer something compelling. 

If a visitor hasn’t taken action yet, they need a nudge. Perhaps something valuable that makes them rethink leaving. 

The most effective exit offers are clear, time-sensitive, and focused on solving a problem or delivering value.

Here are examples of exit-intent offers that work:

Discounts: “Wait! Here’s 15% off if you complete your purchase now.”

Lead Magnets: “Don’t leave empty-handed. Download our free guide to [X] now.”

Free Shipping: “Before you go, get free shipping on your order for the next 2 hours!”

Abandoned Cart Recovery: “Your cart is waiting! Complete checkout now and save 10%.”

To make exit pop-ups work, focus on clean design and persuasive messaging. Use a bold headline that grabs attention, a clear CTA (e.g., “Claim My Discount”), and minimal form fields—like just name and email. If you’re offering a discount, add urgency by setting a time limit, like “expires in 24 hours.”

Pro tip: Pair your exit-intent pop-up with behavioral targeting. For example, show different offers based on where the user is on your site: a discount for cart abandoners, a lead magnet for blog readers, or free shipping for product page browsers. This ensures your exit offer is relevant and increases your chances of capturing the lead.

A smart exit-intent pop-up gives your visitors a reason to stay engaged and helps you recover leads that would otherwise be lost.

Lead Capture Tactic #5. Gamify Your Lead Capture (Spin-to-Win, Scratch-Offs)

Gamification is a proven way to make lead capture fun, interactive, and highly engaging. 

Instead of asking for an email with a boring form, tools like spin-to-win wheels, scratch-offs, and prize draws make it feel like a game, one where the user wins just for participating.

Why does this work? 

People love the thrill of a chance to win something, even if the “game” is rigged in their favor. Temu has become known for their spin-to-win wheel.

This approach not only boosts engagement but also captures leads who might otherwise ignore a traditional pop-up. Plus, gamified lead capture can drive immediate action, like purchases, by offering enticing rewards.

Here’s how to do it right:

Make the Rewards Clear: Offer discounts, free shipping, or small freebies. Example: “Spin the wheel for up to 30% off your first order!” If every spin is a win, users are more likely to play.

Keep It Simple: Gamified tools should be quick, easy, and mobile-friendly. The fewer steps required to play (e.g., name and email), the better the conversion rate.

Highlight Urgency: Tie rewards to time-sensitive offers, like “Your 15% off code expires in 24 hours,” to encourage users to act fast.

Add Personalization: Use tools that let you tailor the experience based on user behavior. For example, if someone adds products to their cart but doesn’t buy, a spin-to-win pop-up offering a small discount could seal the deal.
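Mechanically, an “every spin wins” wheel is just a weighted random draw over a prize list. A small sketch, with illustrative prizes and weights, and an injectable random source so the wheel can be tested deterministically:

```typescript
interface Prize { label: string; weight: number; }

// "Every spin wins": draw one prize from a weighted list.
// rng defaults to Math.random but can be injected for testing.
function spinWheel(prizes: Prize[], rng: () => number = Math.random): string {
  const total = prizes.reduce((sum, p) => sum + p.weight, 0);
  let roll = rng() * total;
  for (const p of prizes) {
    roll -= p.weight;
    if (roll < 0) return p.label;
  }
  return prizes[prizes.length - 1].label; // guard against float rounding
}

// Illustrative prize table: bigger discounts get smaller weights.
const wheel: Prize[] = [
  { label: "10% off", weight: 50 },
  { label: "15% off", weight: 30 },
  { label: "Free shipping", weight: 15 },
  { label: "30% off", weight: 5 },
];
```

Because every slice of the wheel is a prize, the visitor always “wins,” while the weights keep your biggest discount rare.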

Pro Tip: Use gamification strategically. It’s a great tool, but don’t overdo it. Too many “games” can feel gimmicky and cheapen your brand. The key is to deliver a fun experience while still maintaining trust and professionalism.

Gamified lead capture works because it combines psychology (the excitement of winning) with real incentives, all while capturing valuable contact information. 

Lead Capture Tactic #6. Create a High-Value Quiz

Quizzes turn lead capture into an interactive experience while delivering personalized value. Instead of a generic “Sign up here,” you’re engaging visitors with something they want.

Why do quizzes work? 

People love discovering something about themselves! They are also more willing to exchange their contact information when they know they’ll get a relevant, personalized outcome.

Take Summersalt’s fit quiz for example. The quiz helps you understand what suit you should buy – if that’s not personalized, I don’t know what is.

Here’s how to create a high-value quiz that converts:

Solve a Problem or Spark Curiosity: Make sure your quiz addresses something your audience cares about. Examples:

Skincare brand: “Find Your Perfect Skincare Routine in 60 Seconds”

Marketing platform: “What’s Your Marketing Style? Take the Quiz to Find Out!”

Fitness brand: “What’s the Best Workout Plan for You?”

Keep It Short and Sweet: Aim for 5-7 questions max to keep engagement high. Use multiple-choice questions that are easy to answer quickly.

Make the Results Actionable: Your quiz outcome should deliver something users can apply immediately, like product recommendations, tips, or insights. Pair it with a relevant CTA. Example: “Here’s your perfect skincare routine—get 10% off your first order to try it out.”

Capture the Lead Before Delivering Results: Always ask for a name and email before revealing quiz outcomes. Use friendly messaging like, “Where should we send your results?” to make the ask feel natural.
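The steps above boil down to a small scoring function plus an email gate. A sketch, where the skin-type answers and routine names are hypothetical examples:

```typescript
type Answer = "dry" | "oily" | "combo";

// Tally the quiz answers and return the routine matching the most
// common skin-type response (hypothetical quiz logic).
function quizResult(answers: Answer[]): string {
  const counts = new Map<Answer, number>();
  for (const a of answers) counts.set(a, (counts.get(a) ?? 0) + 1);
  let best: Answer = answers[0];
  counts.forEach((n, a) => {
    if (n > (counts.get(best) ?? 0)) best = a;
  });
  const routines: Record<Answer, string> = {
    dry: "Hydrating routine",
    oily: "Oil-control routine",
    combo: "Balanced routine",
  };
  return routines[best];
}

// Gate the result: only reveal it once an email has been captured.
function revealResult(answers: Answer[], email: string | null): string {
  if (!email) return "Where should we send your results?";
  return quizResult(answers);
}
```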

Pro Tip: Use follow-up email flows to nurture quiz takers. For example, if a user takes a “Find Your Perfect Skincare Routine” quiz, send an automated email series featuring products matched to their results, customer reviews, and exclusive offers.

Quizzes work because they deliver value upfront while making the lead capture process engaging and personal. 

Lead Capture Tactic #7. Offer Early Access or VIP Perks

Exclusivity is a powerful motivator, and offering early access or VIP perks taps directly into your audience’s fear of missing out (FOMO). 

When people feel like they’re getting special treatment, they’re far more likely to hand over their contact information to stay in the loop.

Sephora is a pro at this with their VIB program. One of the core benefits? Early access to products and sales!

How it works: Offer something that feels exclusive and valuable, something they can’t get unless they sign up. Examples include:

Early Access: Give your VIPs first dibs on new product launches before they’re released to the public. Example: “Be the first to shop our Spring collection—sign up now for early access.”

Exclusive Discounts: VIP-only coupons or promo codes that reward loyal subscribers. Example: “Join our VIP list and get 15% off your first order + insider-only offers.”

Sneak Peeks: Offer behind-the-scenes content or previews of upcoming releases to make your audience feel like insiders. Example: “Get an exclusive first look at our newest products—straight to your inbox.”

Why it works: VIP perks are about positioning your audience as part of something special. This sense of exclusivity builds excitement, drives sign-ups, and fosters brand loyalty. We all want to be VIPs!

Pro Tip: Highlight the value of joining with a strong CTA. Use phrases like “Don’t miss out,” “Unlock VIP perks,” or “Get first access” to emphasize exclusivity and urgency.

When done right, VIP sign-ups create a community of loyal customers who are excited to hear from you and act quickly on your offers. The result? More leads!

Lead Capture Tactic #8. Use Simple, High-Converting Forms

When it comes to lead capture forms, shorter is always better. 

The more fields you ask visitors to fill out, the higher the friction. Every extra field drops your conversion rate. 

To maximize leads, stick to the essentials: name and email. If you absolutely need more info (like phone numbers for SMS), ask for it after you’ve built initial trust.

Here’s how to make your forms convert:

Keep It Short: Limit forms to 1-2 fields. Example: “Enter your name and email to get 15% off your first order.”

Enable Autofill: Make the process effortless by supporting browser autofill. The easier it is to complete, the more leads you’ll capture.

Use One-Click Sign-Ups: Tools like Google or social logins can streamline sign-ups by eliminating manual typing altogether. Example: “Sign up with Google and claim your discount instantly.”

Make It Mobile-Friendly: Over half of your traffic is probably coming from mobile. Ensure forms are easy to tap, scroll, and complete on smaller screens.

Add Microcopy for Reassurance: Include trust-building phrases like “We’ll never spam you” or “Unsubscribe anytime” to ease hesitation.

Pro Tip: Test your forms regularly. A small tweak like changing “Submit” to “Get My Discount” can have a big impact on conversions.

Simple, frictionless forms convert because they respect your audience’s time. Ask for what you need, make it effortless, and watch your leads grow.

Lead Capture Tactic #9. Add a Sticky Header or Footer CTA

Sticky headers or footers are a subtle, non-intrusive way to keep your lead capture CTA visible as users scroll through your site. 

Unlike pop-ups, which can feel disruptive, sticky CTAs blend seamlessly into the browsing experience while staying top-of-mind.

How to make it work:

Keep It Simple and Clear: Your CTA should be short, direct, and tied to a valuable offer. Example: “Get 15% off—subscribe now!” or “Unlock VIP perks—Join the list.”

Stay Non-Intrusive: Use clean, minimal design that doesn’t take up too much screen space. Sticky elements should enhance the user experience, not interrupt it.

Make It Actionable: Add a single, bold button that opens your opt-in form or lead magnet. Example: “Claim Your Discount.”

Test Placement: Sticky headers are great for desktop, while sticky footers often perform better on mobile. Test both to see what resonates with your audience.

Why it works: Sticky CTAs keep your offer visible without distracting from the content. As users scroll and explore, the gentle reminder encourages them to sign up when they’re ready.

Pro Tip: Pair your sticky CTA with a small incentive, like a discount or exclusive access, to increase conversions. Keep it visually appealing but easy to ignore if a visitor isn’t interested.

Sticky headers or footers quietly drive sign-ups by staying in sight and ready to convert. Try it out!

Lead Capture Tactic #10. Use Social Proof to Boost Credibility

Trust is everything, and social proof is the shortcut to building it. 

Adding testimonials, reviews, or subscriber counts near your lead capture forms reassures visitors that signing up is worth it because other people already think so.

Why it works: People trust people. When they see that others are subscribing, buying, or benefiting, they’re more likely to follow suit. 

How to add social proof that converts:

Subscriber Count: Highlight how many people have already signed up. Example: “Join 10,000+ marketers getting weekly tips to boost conversions.”

Testimonials: Add quick, punchy quotes from happy customers. Example: “This newsletter helped me double my email ROI—highly recommend!”

Star Ratings: If your product or service has reviews, show those ratings near your lead form. Example: “Rated 4.9/5 by over 3,000 users.”

Logos or Trust Badges: Display brand logos or certifications to add credibility. Example: “As featured in Shopify, Forbes, and HubSpot.”

Pro Tip: Make your social proof specific and relevant. Vague claims like “Best newsletter ever” won’t cut it. Highlight tangible benefits, numbers, or names to make it believable.

When visitors see proof that others trust your brand, hesitation fades, and sign-ups soar. Trust sells, and social proof makes trust visible.

Lead Capture Tactic #11. Launch an Interactive Lead Capture Chatbot

Chatbots are powerful tools for capturing leads in a way that feels natural and interactive. 

Instead of static forms, a chatbot creates a conversational flow that engages visitors, making it easier to collect their info.

Why it works: People respond better to quick, back-and-forth interactions. A chatbot feels less like a form and more like a personalized experience, reducing friction and boosting conversions.

Here’s how to use a chatbot for lead capture:

Start with an Engaging Hook: Ask a question that grabs attention. Example: “Want 15% off your first order? Drop your email, and we’ll send your code!”

Keep It Simple: Focus on collecting just 1-2 pieces of info, like an email or phone number. Use follow-ups to gather more data later.

Offer Value Instantly: Once they share their info, deliver on the promise right away. Example: “Here’s your 15% off code: SAVE15. Happy shopping!”

Make It Feel Personalized: Add their name and tailor responses based on their actions. Example: “Thanks, Sarah! Keep an eye on your inbox for exclusive deals.”
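The steps above amount to a small scripted state machine: ask one question at a time, store each reply, then deliver the reward. A minimal sketch with hypothetical prompts and reward copy:

```typescript
type Step = { key: string; prompt: string };

// Scripted lead-capture bot: walks through steps one reply at a time,
// records each answer, and delivers the promised reward at the end.
class LeadBot {
  private i = 0;
  readonly answers: Record<string, string> = {};
  constructor(private steps: Step[], private reward: string) {}

  next(reply?: string): string {
    // Store the reply to the previous prompt, if there was one.
    if (reply !== undefined && this.i > 0) {
      this.answers[this.steps[this.i - 1].key] = reply;
    }
    if (this.i < this.steps.length) return this.steps[this.i++].prompt;
    return this.reward; // deliver on the promise right away
  }
}
```

With the captured `answers` in hand, you can tag the lead for a segmented follow-up flow, as the Pro Tip below suggests.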

Pro Tip: Use chatbots to segment leads automatically. For instance, if someone interacts with a product-related question, tag them for a product-focused email flow.

Chatbots capture leads without disrupting the user experience. They’re fast, frictionless, and feel like a two-way conversation.

Lead Capture Tactic #12. Add a Countdown Timer for Urgency

Urgency gets people moving, and countdown timers are one of the most effective ways to drive immediate action. 

By combining a timer with a lead capture pop-up, banner, or form, you’re giving visitors a reason to act right now instead of bouncing.

Why it works: Scarcity and time limits trigger FOMO (fear of missing out). When users see the clock ticking down, they’re less likely to procrastinate and more likely to sign up or claim your offer.
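The timer itself is simple arithmetic against a fixed deadline. A sketch of the display logic, clamped at zero so the banner never shows a negative (fake-looking) count:

```typescript
// Format the time left until a deadline as "HH:MM:SS" for a banner or
// pop-up. Returns "00:00:00" once the deadline has passed.
function countdown(deadline: Date, now: Date): string {
  const ms = Math.max(0, deadline.getTime() - now.getTime());
  const total = Math.floor(ms / 1000);
  const pad = (n: number) => String(n).padStart(2, "0");
  const h = Math.floor(total / 3600);
  const m = Math.floor((total % 3600) / 60);
  const s = total % 60;
  return `${pad(h)}:${pad(m)}:${pad(s)}`;
}
```

In the browser you would call this once a second (e.g., via `setInterval`) and write the result into the banner; crucially, the deadline is a fixed timestamp, so refreshing the page never resets the clock.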

How to implement countdown timers effectively:

Pair Them with Strong Offers: Urgency works best when the reward is clear and valuable. Example: “Free shipping ends in 3 hours. Sign up now to claim it!”

Place Them Strategically: Add countdown timers to exit-intent pop-ups, sticky banners, or landing pages. Keep them front and center where users can’t miss them.

Make the Timer Real: Stick to your deadlines. If the countdown resets or feels fake, it erodes trust and reduces conversions long-term.

Use Dynamic Timers for Personalization: Set countdowns tailored to the user, like “Your 10% off code expires in 30 minutes.” This keeps the urgency feeling personal.

Pro Tip: Test different durations to see what drives the best results. For most campaigns, 24 hours or less works best for maintaining urgency without overwhelming visitors.

Countdown timers turn hesitation into action. They’re simple to implement, highly effective, and a perfect way to add urgency to your lead capture efforts.

Lead Capture Tactic #13. Personalize Offers Based on User Behavior

Generic pop-ups and lead capture forms are easy to ignore. But when offers feel tailored to a user’s behavior or interests, they convert like crazy! 

Dynamic lead capture tools let you show personalized messages based on actions visitors take on your site, like browsing specific pages, abandoning their cart, or lingering too long on a product.

Why it works: Personalization makes users feel seen. Instead of throwing out a one-size-fits-all message, you’re offering something relevant and valuable based on what they care about.
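Behavior-based targeting boils down to a rules function that maps signals (current page, cart contents, dwell time, return visits) to an offer. A sketch, where the paths, thresholds, and copy are illustrative assumptions, not Customers.ai functionality:

```typescript
// Simple behavioral signals collected for the current visitor.
interface Visitor {
  path: string;          // current page path
  cartItems: number;     // items in the cart
  secondsOnPage: number; // dwell time on this page
  returning: boolean;    // seen before, but not yet subscribed
}

// Rules are checked in priority order: cart abandonment first,
// then returning visitors, then content readers, then category pages.
function chooseOffer(v: Visitor): string {
  if (v.cartItems > 0) return "Sign up now for 10% off and free shipping.";
  if (v.returning) return "Welcome back! Join our VIP list for exclusive deals.";
  if (v.path.startsWith("/blog") && v.secondsOnPage > 60)
    return "Enjoying our content? Get more insights in your inbox.";
  if (v.path.startsWith("/skincare"))
    return "Join our list for expert tips and 15% off your first order.";
  return "Subscribe for updates.";
}
```

The ordering is the design choice that matters: the highest-intent signal (a filled cart) should always win over weaker ones like dwell time.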

Here’s how to personalize offers effectively:

Page-Specific Offers: Show targeted lead capture messages based on what a user is viewing. Example: “Looks like you love skincare! Join our list for expert tips and 15% off your first order.”

Cart Abandonment Pop-Ups: Trigger a dynamic pop-up when someone adds items to their cart but tries to leave. Example: “Don’t leave your favorites behind! Sign up now for 10% off and free shipping.”

Time on Page Prompts: If a visitor spends a lot of time on a page, trigger a tailored lead form to capture their interest. Example: “Enjoying our content? Sign up to get more insights straight to your inbox.”

Returning Visitor Recognition: Show special offers to returning visitors who haven’t signed up yet. Example: “Welcome back! Don’t miss out—join our VIP list for exclusive deals.”

Pro Tip: Use lead capture tools that integrate with your website analytics to track user behavior and automate these personalized messages. Keep the tone conversational and natural so it feels less like a sales pitch and more like helpful guidance.

Dynamic, behavior-based lead capture works because it meets users where they are and gives them a reason to act. 

Lead Capture Tactic #14. Optimize Your Mobile Lead Capture Experience

Over 50% of web traffic comes from mobile devices, so if your lead capture forms or pop-ups aren’t mobile-friendly, you’re in trouble. 

Mobile users won’t struggle to pinch, zoom, or fill out clunky forms. They’ll simply bounce. Bye!

Optimizing your mobile experience is critical to capturing leads.

Here’s how to nail mobile lead capture:

Keep It Short and Simple: Use minimal fields (name and email) and ensure forms are easy to complete on smaller screens. The fewer taps required, the higher the conversion rate.

Design for Tap, Not Click: Buttons and CTAs need to be large enough to tap with a thumb. Avoid tiny links or hard-to-press elements. Example: “Get My Code” as a big, bold button.

Use Tap-to-Text for SMS Sign-Ups: Tap-to-text lets users sign up for SMS with a single tap by pre-populating their messaging app with your opt-in keyword. It’s seamless, fast, and frictionless. Example: “Sign up for VIP deals—tap here to text ‘JOIN’ and get started!”

Trigger Mobile-Friendly Pop-Ups: Pop-ups on mobile must be unobtrusive. Use slide-ins or sticky banners instead of full-screen interruptions. Make closing them easy with a visible “X.”

Test Across Devices: What works on desktop might not translate to mobile. Regularly test your forms, pop-ups, and CTAs to ensure they look clean and load quickly on all devices.
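The tap-to-text mechanic above is just an `sms:` link with the opt-in keyword pre-filled in the message body. A sketch; the phone number and keyword are placeholders, and the combined `?&body=` form is a commonly used workaround for historical iOS/Android differences in how the body parameter is read:

```typescript
// Build a tap-to-text link that opens the visitor's messaging app with
// the opt-in keyword pre-filled. Number and keyword are placeholders.
function smsOptInLink(phone: string, keyword: string): string {
  return `sms:${phone}?&body=${encodeURIComponent(keyword)}`;
}

// Usage: drop the result into an anchor tag's href, e.g.
// <a href="sms:+15551234567?&body=JOIN">Text JOIN for VIP deals</a>
```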

Pro Tip: Use geo-targeting for mobile visitors to personalize offers even further. Example: “Hey NYC shoppers, get free same-day delivery—sign up now!”

Mobile lead capture is all about simplicity and speed. 

Lead Capture Tactic #15. Follow Up Fast with an Automated Welcome Sequence

Capturing the lead is just the start. The real payoff comes from how quickly you follow up. 

Automating a welcome sequence ensures that new leads receive immediate value, builds trust with your brand, and keeps them engaged. Delayed responses? That’s a fast track to losing their interest.

Here’s how to craft a high-impact automated welcome sequence:

Deliver the Promise Instantly: If you offered a discount or resource, send it right away. Example: “Welcome to the club! Here’s your 20% off code: WELCOME20.” Include a clear CTA to shop or download.

Set the Tone: Use the first message to introduce your brand’s personality and what subscribers can expect (e.g., exclusive offers, tips, or updates). Keep it friendly, fun, and aligned with your voice.

Guide Them to the Next Step: Move new leads toward taking action—shopping, consuming content, or joining your community. Example: “Here are our most-loved products to get you started.”

Follow Up with a Nurture Flow: A well-timed second or third email/SMS keeps momentum going. Example: “Still deciding? Don’t forget—your 20% off code expires in 48 hours!”
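Scheduling-wise, the sequence above is just a list of messages with send times relative to the signup moment. A sketch with illustrative copy and delays (not a prescribed cadence):

```typescript
const HOUR = 3_600_000; // milliseconds per hour

// Build a welcome flow anchored to the signup time: instant delivery of
// the promised code, a product nudge a day later, and an expiry reminder.
function welcomeSchedule(signup: Date): { message: string; sendAt: Date }[] {
  const at = (hours: number) => new Date(signup.getTime() + hours * HOUR);
  return [
    { message: "Welcome to the club! Here's your 20% off code: WELCOME20.", sendAt: at(0) },
    { message: "Here are our most-loved products to get you started.", sendAt: at(24) },
    { message: "Still deciding? Your 20% off code expires soon!", sendAt: at(48) },
  ];
}
```

Anchoring every send time to the signup timestamp (rather than fixed clock times) is what makes the flow feel immediate for every lead, whenever they join.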

Pro Tip: Use segmentation to personalize your welcome flow. If someone signed up for skincare tips, send them content and product recommendations tailored to that interest.

An automated welcome sequence ensures your new leads feel seen, valued, and motivated to take the next step. Respond quickly and deliver on your promise.

Level Up Your Lead Capture Tactics with Customers.ai

These lead capture tactics are not about tricking your customers. They are about delivering value, creating urgency, and keeping the process simple. 

From visitor identification technology and gamified pop-ups to dynamic personalization and countdown timers, these 15 proven lead capture tactics are designed to turn those passive visitors into engaged leads (and revenue!).

To start, audit your current lead capture strategy. Are you offering real value? Is your process seamless, mobile-friendly, and engaging? If not, start implementing these tactics. We are confident you’ll see immediate improvements in your lead flow and conversions.

Are you ready to stop guessing and start capturing? Get your free trial of Customers.ai and capture 500 leads free!

See Who Is On Your Site Right Now!

Get names, emails, phone numbers & more.

Try it Free, No Credit Card Required

Start Your Free Trial

Important Next Steps

See what targeted outbound marketing is all about. Capture and engage your first 500 website visitor leads with Customers.ai X-Ray website visitor identification for free.

Talk and learn about sales outreach automation with other growth enthusiasts. Join Customers.ai Island, our Facebook group of 40K marketers and entrepreneurs who are ready to support you.

Advance your marketing performance with Sales Outreach School, a free tutorial and training area for sales pros and marketers.

Lead Capture Tactic FAQs

1. What are lead capture tactics?

Lead capture tactics are strategies designed to collect contact information from potential customers, like names, emails, or phone numbers. By using forms, pop-ups, or gated content, businesses convert anonymous website visitors into actionable leads for future marketing campaigns.

2. Why are lead capture strategies important?

Lead capture strategies are critical for building a pipeline of potential customers. Without them, businesses lose opportunities to engage with interested visitors, limiting their ability to nurture relationships and drive sales.

3. What are the different types of lead capture forms?

There are several types of lead capture forms, each serving a unique purpose:

In-page forms: Embedded directly into landing pages or blogs for seamless sign-ups.

Pop-up forms: Appear over page content to immediately grab user attention.

Slide-in forms: Subtle forms that slide into view as users scroll down the page.

Multi-step forms: Break the process into multiple steps to reduce overwhelm and improve completion rates.

4. How can I improve my lead capture tactics?

To improve lead capture tactics, focus on delivering value and reducing friction. Offer compelling incentives like eBooks or discounts, simplify your forms to just the essentials, and test CTAs (calls-to-action) to determine what drives conversions. Mobile optimization and A/B testing are also key for identifying what resonates best with your audience.

5. What are some best practices for lead capture?

The best lead capture practices focus on value, trust, and user experience:

Personalize the experience: Tailor messaging based on user behavior or interests.

Create urgency: Use limited-time offers to motivate immediate action.

Communicate value clearly: Tell users exactly what they’ll get in return for their information.

Use trust signals: Highlight testimonials, reviews, or security badges to establish credibility and reduce hesitation.

6. How does visitor identification technology enhance lead capture?

Visitor identification technology identifies anonymous website visitors who haven’t filled out a form. Platforms like Customers.ai can reveal names, emails, and behavior insights, allowing you to reconnect with high-potential leads who would have otherwise disappeared.

7. What role does content marketing play in lead capture strategies?

Content marketing fuels lead capture by attracting and engaging visitors with valuable content. High-quality blog posts, guides, or videos can inspire users to share their contact details in exchange for exclusive resources like templates or whitepapers.

8. How can I use social proof in lead capture tactics?

Social proof builds trust and reduces hesitation. Display customer testimonials, positive reviews, or “Join 10,000+ subscribers” messaging near forms to show potential leads that others trust your brand, making them more likely to sign up.

9. What are the most effective lead capture strategies for B2B businesses?

B2B businesses benefit from lead capture strategies tailored to longer sales cycles. Effective tactics include offering industry-specific whitepapers, hosting webinars with actionable insights, and using account-based marketing (ABM) to target decision-makers directly.

10. How can I optimize lead capture forms for mobile users?

Mobile optimization is key, as over half of web traffic comes from mobile devices. Keep forms short (name and email only), use large tappable buttons, and ensure fast load times to reduce friction for mobile users.

11. What metrics should I track to evaluate lead capture effectiveness?

To measure success, focus on these key metrics:

Conversion rate: The percentage of visitors who submit their contact information.

Bounce rate: Visitors who leave the page without engaging.

Cost per lead: The expense incurred to acquire a single lead.

Lead quality: The potential value of each lead based on behavior or fit.

12. What are gated content lead capture tactics?

Gated content tactics involve offering valuable resources, like guides or templates, in exchange for contact information. Users “unlock” the content by completing a lead form, ensuring they see the exchange as a fair trade.

13. How does personalization improve lead capture performance?

Personalization increases relevance and engagement. By tailoring offers or messages to a user’s behavior—like viewing a specific product page—brands can show highly targeted forms or pop-ups, resulting in higher conversion rates.

14. What role does a lead magnet play in lead capture?

Lead magnets are incentives that motivate users to share their contact info. Examples include discount codes, free trials, or downloadable resources. A strong lead magnet addresses a specific pain point and immediately delivers value.

15. How can gamification improve lead capture tactics?

Gamification, like spin-to-win pop-ups or scratch-offs, makes lead capture fun and interactive. Users engage with a “game” to unlock rewards (e.g., discounts), increasing sign-ups while creating a positive experience.

16. What’s the best timing for lead capture pop-ups?

The ideal timing depends on user behavior. Trigger pop-ups after a specific duration (e.g., 10 seconds), percentage scrolled, or exit intent. Testing different triggers will help you find the sweet spot for engagement.

17. How do countdown timers create urgency in lead capture?

Countdown timers highlight limited-time offers to drive immediate action. When users see a clock ticking down—like “Get 15% off in the next 2 hours”—they’re more likely to act quickly and submit their information.

18. What types of lead capture tools are available?

Lead capture tools include form builders, pop-up tools, visitor identification platforms, and marketing automation software. Popular tools include HubSpot, OptinMonster, and Customers.ai.

19. How can chatbots improve lead capture?

Chatbots simplify lead capture by creating a conversational experience. Instead of filling out a form, visitors answer quick, engaging questions like “Want 10% off? Drop your email here!” to submit their contact information.

20. Why is A/B testing important for lead capture tactics?

A/B testing allows you to compare variations of forms, CTAs, or offers to see what resonates best. Small changes, like button text or form length, can lead to significant improvements in conversion rates.

21. How can I reduce friction in the lead capture process?

To reduce friction, limit fields to essentials (like name and email), enable autofill, and simplify the design. Avoid intrusive pop-ups and ensure forms are quick and easy to complete, especially on mobile.

22. What’s the role of exit-intent pop-ups in lead capture?

Exit-intent pop-ups trigger when a user moves to leave the page, offering a final incentive to capture their information. Examples include discounts, free shipping, or lead magnets that encourage users to stay and engage.

23. How do you align lead capture tactics with user intent?

Align tactics with intent by understanding where users are in the funnel. New visitors respond well to lead magnets, while returning users may prefer exclusive offers or content upgrades tailored to their needs.

24. What industries benefit most from lead capture strategies?

While all industries can benefit, lead capture is especially valuable for ecommerce, SaaS, B2B, and digital service providers. Each industry can tailor tactics, like discounts for ecommerce or webinars for B2B brands.

25. How do I ensure lead capture tactics comply with regulations?

To comply with GDPR, CCPA, or TCPA, always get explicit consent before collecting data. Use opt-in checkboxes, provide clear privacy policies, and ensure users can easily unsubscribe or opt out at any time.
The post 15 High-Converting Lead Capture Tactics to Turn Maybe to Yes appeared first on Customers.ai.

Kill the Chaos: How to Simplify Your Email and SMS Flows

Let’s face it. Most email and SMS flows are a hot mess of chaos and confusion. 

Too many messages, too much segmentation, and way too much effort. The result? Your audience tunes out, your engagement tanks, and you’re left wondering why your open rates look like ghost towns.

The truth is, as marketers, we’ve made things too complicated. According to Aaron Orendorff at DTC Next, most brands are tripping over their own complexity. Instead of guiding customers toward a purchase, they’re overwhelming them. The reality is, simple flows perform better. 

In fact, according to recent stats, email campaigns with just 1-3 messages get over 90% higher engagement rates than overly complex series. Combine that with SMS, where open rates hover around 98%, and you’ve got a recipe for streamlined success.

So what should marketers be doing? The answer certainly isn’t ‘do more’. It’s about doing what works. 

Let’s break down what that looks like.


The Problem: Why Overcomplicated Email & SMS Flows Don’t Work

Too Many Steps, Too Few Results

The more complicated your workflows are, the less they actually work. 

Every extra step you add to an email or SMS flow increases the chances of losing your audience. 

Why? Because too many messages feel spammy, and too much segmentation leads to disjointed, inconsistent communication.

Let’s talk mistakes:

Redundant messages: You sent the “cart reminder” email three times. By the fourth, your audience isn’t just ignoring you, they’re unsubscribing.

Over-segmentation: Segmenting down to ultra-specific audiences might sound smart, but it often leaves you chasing tiny groups instead of focusing on bigger opportunities. Example: Do you really need a separate flow for “VIPs who bought green socks in March?”

Timing misfires: If your email flow hits someone with three messages in a day and then goes silent for weeks, you’re confusing your audience, not converting them.

Bottom line? Overcomplicated flows create friction. And friction kills engagement.

Burnout for Both You and Your Audience

Now let’s talk about you, the marketer. If your flow chart looks like a spider web, it’s no surprise you’re overwhelmed. 

Too many branches, too many split conditions, too much everything! 

Complex workflows don’t just stress out your team, they also waste your time. Instead of refining and optimizing, you’re constantly trying to untangle the mess you’ve built.

And your audience? They feel it, too.

A jumbled flow creates a bad experience. Too many emails or texts can feel desperate, pushy, and irrelevant.

Inconsistency kills trust. If messaging is too segmented or poorly timed, customers won’t know what to expect next, and that uncertainty makes them tune out.

Aaron Orendorff nailed it at DTC Next: “You’re not guiding your customers. You’re overwhelming them.” 

According to @AaronOrendorff it’s time to let go of the illusion that SMS and email need to have this extremely curated, overlapping, interlocking journey. Why? Because none of your customers pay attention to you as much as you do! Want more tips? Get Aaron’s entire… pic.twitter.com/5hGk5oBC0F — Larry Kim (@CustomersAI) December 16, 2024

When your workflows are clean and intentional, you can focus on driving results instead of fighting chaos.

The Solution: Simplifying Your Email and SMS Flows

Combine Email and SMS Strategically

Email and SMS don’t need to compete. Sending them in sync is like a one-two punch – email delivers the full story and SMS seals the deal with a quick nudge. 

The best part? It’s simple and doesn’t double your workload.

Here’s how it works:

Send a promo email highlighting the offer or product.

Follow it up with an SMS reminder an hour or two later for anyone who hasn’t clicked.

Example:

Email: “Last day for 20% off – don’t miss it!”

SMS: “Hey [Name], your 20% off ends tonight. Don’t miss out: [link].”

No extra segmentation. No unnecessary complexity. Just one clear, consistent message across two high-performing channels.

Take Laura Geller Beauty – they nail this strategy.

First, they send a clean promo email showcasing the sale or new product.

A few hours later, they follow up with a quick SMS reminder for anyone who hasn’t clicked.

The result? A 3.9x growth in quarterly SMS revenue!

Email builds the hype, SMS seals the deal, and the numbers speak for themselves.

Focus on Key Actions, Not Over-Segmentation

Instead of creating 27 flows for every micro-behavior, focus on the actions that actually move the needle:

Purchases – Post-purchase thank-yous, upsells, or review requests.

Sign-ups – Welcome emails or SMS to make a strong first impression.

Abandoned carts – Your biggest opportunity to recover lost revenue.

That’s it. Build flows that align with your customer’s natural journey and ditch the fluff.

Pro tip: Use dynamic content to personalize within a single flow. You don’t need 10 segments when one smartly written message can do the job.

Automate the Clean-Up

If someone clicks, buys, or signs up, take them out of the flow immediately. There’s nothing worse than getting hit with a “Don’t forget about us!” email after you’ve already made a purchase.

Automation tools make this easy:

Set triggers to pull users out of flows once they take the desired action.

Update audiences in real-time so you’re not bombarding loyal customers with irrelevant messages.

Clean-up automation ensures every message has a purpose. Plus, you save yourself the headache of manually managing audiences.
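The exit-on-conversion rule above can be sketched in a few lines. This is a hypothetical illustration: the flow names, event names, and data structures are made up, and most automation platforms give you this as a built-in exit trigger rather than code you write yourself.

```python
# Hypothetical "exit on conversion" sketch: map each flow to the events
# that should pull a user out of it, then drop the user from any flow
# whose goal the incoming event satisfies.

EXIT_EVENTS = {
    "abandoned_cart": {"purchase"},                       # they bought: stop reminding
    "welcome_series": {"purchase", "profile_complete"},   # strong first impression made
}

def process_event(enrollments, user, event):
    """Return the flows the user should remain in after this event."""
    current = enrollments.get(user, set())
    remaining = {f for f in current if event not in EXIT_EVENTS.get(f, set())}
    enrollments[user] = remaining
    return remaining
```

The point of the sketch is the shape of the rule, not the code: every flow declares its goal up front, so a single conversion event cleans the user out of every flow it satisfies at once.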

Simplified flows = smoother journeys, better engagement, and higher ROI. 

No chaos, no confusion, just results.


Practical Tips for Building Streamlined Email & SMS Flows

Did you know that 45% of marketers say they spend too much time managing overly complex workflows? That isn’t great news. 

What is good news? We’re gonna tell you how to clean up your email and SMS flows with tips the pros swear by. No wasted time here!

1. One Message, One Job

Every email or SMS should have one goal. Not three, not two – one. 

Whether it’s getting a click, driving a purchase, or collecting a review, make the call-to-action (CTA) crystal clear. Too many options = no action.

Example:

Email: “Your cart’s waiting. Complete your purchase now and save 10%.”

SMS: “Hey [Name], don’t forget! Your 10% off cart expires tonight: [link].”

No fluff. Just focus on the next step.

2. Stop Writing Novels

Long-winded emails are a turn-off. If your audience has to scroll forever, they’re gone (oh and Gmail will probably clip it anyway). Keep it short, punchy, and scannable:

Emails: A killer subject line, 1-2 sentences of value, and a strong CTA.

SMS: 1-2 lines. You’re in, you’re out, they’re clicking.

Pro Tip: Use visuals in email to say more with less. For example, product shots, GIFs, or UGC all grab attention without word overload.

3. Test Simple vs. Complex

If you think your 12-step nurture flow is working, test it. Run a shorter version of the same flow side by side: fewer steps, tighter messaging, and synced SMS nudges.

Look at the metrics:

Open rates

Click-through rates

Conversion rates

Chances are, simpler flows will outperform. 

We compared two customer flows (one good / one bad) and here’s what we saw: Top accounts had short, intent-based sequences with offers, while the worst performers drowned in endless drip emails. Moral of the story: Less is more when it comes to email marketing! pic.twitter.com/uvsnpHQuhA — CustomersAI (@CustomersAI) October 7, 2024

Don’t be afraid to cut what isn’t working.

4. Watch Your Timing

The timing of your messages can make or break your flow. Here’s a quick cheat sheet:

Abandoned cart recovery: Email within 1 hour, SMS within 2-3 hours.

Post-purchase: A thank-you email immediately, a review request 3-5 days later.

Promos: Email in the morning, SMS in the afternoon for a final nudge.

Syncing your timing keeps your messaging relevant and avoids bombarding your audience.
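The cheat sheet above boils down to a small offset table. Here’s an illustrative sketch: the offsets mirror the article’s suggestions, but the `SCHEDULE` table and `plan_sends` helper are hypothetical, not any real platform’s API.

```python
# Illustrative send scheduler: each flow maps to (channel, delay-after-trigger)
# pairs taken from the timing cheat sheet above.
from datetime import datetime, timedelta

SCHEDULE = {
    "abandoned_cart": [("email", timedelta(hours=1)),        # email within 1 hour
                       ("sms", timedelta(hours=3))],         # SMS within 2-3 hours
    "post_purchase":  [("email_thanks", timedelta(0)),       # thank-you immediately
                       ("email_review", timedelta(days=4))], # review ask 3-5 days later
}

def plan_sends(flow, triggered_at):
    """Return (channel, send_time) pairs for a flow triggered at triggered_at."""
    return [(channel, triggered_at + offset) for channel, offset in SCHEDULE[flow]]
```

Keeping the timing in one table like this is exactly what makes a flow auditable: you can see at a glance whether email and SMS are stacked too close together.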

Simplified flows don’t mean boring flows. They’re clean, intentional, and built to convert. Keep testing, keep trimming, and let the results guide you.

Less Is Truly More for Email & SMS Flows

The bottom line: simplicity is easier, and it’s smarter.

Clean, focused email and SMS flows guide your customers to take action without overwhelming them. And as Laura Geller Beauty proved with a 3.9x boost in SMS revenue, streamlined workflows deliver real results.

So, what’s next? Audit your current email flows. Look for:

Messages that overlap or repeat.

Segments that are too small to matter.

Workflows with way too many steps.

Trim the fat, simplify the journey, and watch your engagement grow.

Need help? Start your free trial of Customers.ai and see how we can help streamline your flows. Because less chaos = more conversions.


Important Next Steps

See what targeted outbound marketing is all about. Capture and engage your first 500 website visitor leads with Customers.ai X-Ray website visitor identification for free.

Talk and learn about sales outreach automation with other growth enthusiasts. Join Customers.ai Island, our Facebook group of 40K marketers and entrepreneurs who are ready to support you.

Advance your marketing performance with Sales Outreach School, a free tutorial and training area for sales pros and marketers.

Email and SMS Flow FAQs

1. What is the difference between email flows and SMS flows?

Email flows are automated sequences of emails designed to guide a customer through specific actions, like onboarding or purchasing. SMS flows work similarly but focus on shorter, more urgent messages that get straight to the point. The best results happen when you combine both—email delivers detail, SMS delivers impact.

2. How do you decide which messages go to email vs SMS?

Use email for content that requires more context—product details, visuals, or storytelling. Use SMS for urgency, reminders, or short updates. For example, send the offer details via email and the “Last chance!” nudge via SMS.

3. What’s the ideal timing for abandoned cart flows?

Send the first email within 1 hour of abandonment to strike while the intent is high. Follow up with an SMS reminder within 2-3 hours of the first message and a second email 24 hours later. Time-sensitive offers like discounts should always appear early in the flow.

4. How many messages should be in an email or SMS flow?

Aim for 3-5 messages per flow. Enough to nurture the customer without overwhelming them. Start with a welcome or reminder, add value in the middle, and close strong with urgency or an incentive.

5. How do you personalize email and SMS flows without over-segmenting?

Use dynamic content like merge tags for names, location-based offers, or purchase history within a single flow. This keeps your audience engaged without forcing you to build dozens of micro-segments.

6. What’s the best way to prevent unsubscribes in email and SMS flows?

Avoid bombarding your audience with too many messages in a short time. Keep emails relevant, limit SMS to essentials, and always include clear opt-out options.

7. How do you measure the success of email and SMS flows?

Track open rates, click-through rates (CTR), conversion rates, and unsubscribes. For SMS, also monitor response rates and delivery rates. Ultimately, revenue per flow is the most telling metric.

8. How do you align email and SMS messaging to avoid redundancy?

Sync your timing and content so the two channels support each other. For example, if an email highlights a promotion, send a short SMS reminder a few hours later—don’t repeat the same message verbatim.

9. How often should you test and optimize your email and SMS flows?

Test monthly to stay on top of performance. Focus on one variable at a time: subject lines, timing, CTA placement, or message length. Keep what works, cut what doesn’t.

10. Should SMS flows replace email flows?

No—SMS is a complement, not a replacement. Email allows you to share more details, visuals, and context. SMS drives urgency and action. Together, they’re far more effective than either channel alone.

11. What’s a good SMS opt-in strategy?

Promote SMS sign-ups with clear value: exclusive offers, early access, or reminders customers can’t miss. Use a keyword like “JOIN” to make opt-ins simple and compliant.

12. Can you automate both email and SMS flows at the same time?

Yes, most marketing automation tools allow you to set triggers for both channels within a single workflow. For example, “if no email click, send SMS.” This keeps your communication seamless.
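That “if no email click, send SMS” branch can be sketched as a tiny function. The `send_email`, `send_sms`, and `clicked` callables here are placeholders for whatever your automation tool provides; a real tool would evaluate the click check only after a configured wait window.

```python
# Hypothetical sketch of a single-workflow email + SMS trigger:
# email goes first, SMS fires only if the email wasn't clicked.

def email_then_sms(user, send_email, send_sms, clicked):
    """Email first; follow up by SMS only if the email wasn't clicked."""
    send_email(user)
    if clicked(user):          # in a real tool, checked after the wait window
        return "email_clicked"
    send_sms(user)
    return "sms_sent"
```

The design choice worth copying is that both channels live in one workflow with one condition, rather than two parallel flows you have to keep in sync by hand.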

13. How do you write effective SMS messages for flows?

Keep it short (under 160 characters), direct, and action-oriented. Include a clear CTA and avoid jargon. Example: “Your cart is waiting! Complete your order now and get 10% off: [link]”.
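The 160-character rule is easy to enforce with a pre-send check. This is a hypothetical sketch that only counts characters; real SMS segmentation also depends on encoding (GSM-7 vs UCS-2), which this ignores.

```python
# Hypothetical pre-send length check for the 160-character SMS guideline.

def fits_one_segment(message, limit=160):
    """True if the message fits a single standard SMS segment."""
    return len(message) <= limit
```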

14. How can you recover sales with SMS and email flows?

Use both channels to tackle abandoned carts, post-purchase follow-ups, and reminders. Start with an email for details and follow with a time-sensitive SMS to nudge action.

15. What are the most important triggers for automated flows?

Key triggers include welcome sign-ups, abandoned carts, post-purchase thank-yous, and lapsed customer re-engagement. These flows generate the most revenue with minimal effort.

16. How do you manage SMS flow timing to avoid annoying customers?

Send SMS during reasonable hours (e.g., 9 AM–8 PM) and avoid sending more than 2-3 texts per week. Make sure timing aligns with your email schedule to avoid overlap.

17. What are the compliance rules for SMS flows?

Always get explicit opt-in consent, include opt-out instructions in every text, and follow regional laws like TCPA in the U.S. Non-compliance can lead to penalties or carrier blocking.

18. How do you reduce friction in your email and SMS flows?

Remove unnecessary steps, like asking for too much info upfront. Simplify CTAs (“Click here” or “Shop now”) and automate clean-up so users exit flows once they take action.

19. What types of offers perform best in email and SMS flows?

Time-sensitive discounts (e.g., “Ends tonight”), free shipping, and personalized recommendations based on past behavior tend to drive the most clicks and conversions.

20. How do you ensure consistency between email and SMS branding?

Use the same tone of voice, offers, and CTAs across both channels. Your audience should feel like they’re hearing from one brand, not two disconnected systems.
The post Kill the Chaos: How to Simplify Your Email and SMS Flows appeared first on Customers.ai.