AWS performs fine-tuning on a Large Language Model (LLM) to classify toxic speech for a large gaming customer

The video gaming industry has an estimated user base of over 3 billion worldwide1. It consists of massive numbers of players virtually interacting with each other every single day. Unfortunately, as in the real world, not all players communicate appropriately and respectfully. In an effort to create and maintain a socially responsible gaming environment, AWS Professional Services was asked to build a mechanism that detects inappropriate language (toxic speech) within online gaming player interactions. The overall business outcome was to improve the organization’s operations by automating an existing manual process and to improve user experience by increasing speed and quality in detecting inappropriate interactions between players, ultimately promoting a cleaner and healthier gaming environment.
The customer ask was to create an English language detector that classifies voice and text excerpts into their own custom-defined toxic language categories. They wanted to first determine if a given language excerpt is toxic, and then classify the excerpt into a specific customer-defined category of toxicity, such as profanity or abusive language.
AWS ProServe solved this use case through a joint effort between the Generative AI Innovation Center (GAIIC) and the ProServe ML Delivery Team (MLDT). The AWS GAIIC is a group within AWS ProServe that pairs customers with experts to develop generative AI solutions for a wide range of business use cases using proof of concept (PoC) builds. AWS ProServe MLDT then takes the PoC through production by scaling, hardening, and integrating the solution for the customer.
This customer use case will be showcased in two separate posts. This post (Part 1) serves as a deep dive into the scientific methodology. It explains the thought process and experimentation behind the solution, including the model training and development process. Part 2 will delve into the productionized solution, explaining the design decisions and data flow, and illustrating the model training and deployment architecture.
This post covers the following topics:

The challenges AWS ProServe had to solve for this use case
Historical context about large language models (LLMs) and why this technology is a perfect fit for this use case
AWS GAIIC’s PoC and AWS ProServe MLDT’s solution from a data science and machine learning (ML) perspective

Data challenge
The main challenge AWS ProServe faced with training a toxic language classifier was obtaining enough labeled data from the customer to train an accurate model from scratch. AWS received about 100 samples of labeled data from the customer, which is far fewer than the 1,000 samples generally recommended in the data science community for fine-tuning an LLM.
As an added inherent challenge, natural language processing (NLP) classifiers are historically known to be very costly to train and require a large body of text, known as a corpus, to produce accurate predictions. A rigorous and effective NLP solution, if provided sufficient amounts of labeled data, would be to train a custom language model using the customer’s labeled data. The model would be trained solely with the players’ game vocabulary, making it tailored to the language observed in the games. The customer had both cost and time constraints that made this solution unviable. AWS ProServe was forced to find a solution to train an accurate language toxicity classifier with a relatively small labeled dataset. The solution lay in what’s known as transfer learning.
The idea behind transfer learning is to use the knowledge of a pre-trained model and apply it to a different but relatively similar problem. For example, if an image classifier was trained to predict whether an image contains a cat, you could use the knowledge the model gained during its training to recognize other animals, like tigers. For this language use case, AWS ProServe needed to find a language classifier that had already been trained to detect toxic language and fine-tune it using the customer’s labeled data.
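To make the idea of transfer learning concrete, the following is a minimal sketch in PyTorch that reuses a pretrained image classifier and retrains only a new output layer, echoing the cat-and-tiger example above. The model choice (torchvision’s ResNet-18) and the class count are illustrative assumptions and not part of the actual solution.

import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

# Load a backbone that was pretrained on a large, generic image dataset.
model = resnet18(weights=ResNet18_Weights.DEFAULT)

# Freeze the pretrained weights so the general knowledge they encode is preserved.
for param in model.parameters():
    param.requires_grad = False

# Replace the output layer with a new head for the target task
# (for example, cat vs. tiger) and train only this layer on the new labels.
model.fc = nn.Linear(model.fc.in_features, 2)
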
The solution was to find and fine-tune an LLM to classify toxic language. LLMs are neural networks with a massive number of parameters, typically on the order of billions, that have been trained on large amounts of unlabeled data. Before going into the AWS solution, the following section provides an overview of the history of LLMs and their historical use cases.
Tapping into the power of LLMs
LLMs have recently become the focal point for businesses looking for new applications of ML, ever since ChatGPT captured the public mindshare by becoming the fastest-growing consumer application in history2, reaching 100 million active users by January 2023, just 2 months after its release. However, LLMs are not a new technology in the ML space. They have been used extensively to perform NLP tasks such as analyzing sentiment, summarizing corpuses, extracting keywords, translating speech, and classifying text.
Due to the sequential nature of text, recurrent neural networks (RNNs) had been the state of the art for NLP modeling. Specifically, the encoder-decoder network architecture was formulated because it provides an RNN structure capable of taking an input of arbitrary length and generating an output of arbitrary length. This was ideal for NLP tasks like translation, where an output phrase in one language could be predicted from an input phrase in another language, typically with differing numbers of words between the input and output. The Transformer architecture3 (Vaswani 2017) was a breakthrough improvement on the encoder-decoder; it introduced the concept of self-attention, which allowed the model to focus its attention on different words in the input and output phrases. In a typical encoder-decoder, each word is interpreted by the model in an identical fashion. As the model sequentially processes each word in an input phrase, the semantic information at the beginning may be lost by the end of the phrase. The self-attention mechanism changed this by adding an attention layer to both the encoder and decoder blocks, so that the model could put different weightings on certain words from the input phrase when generating a certain word in the output phrase. Thus the basis of the transformer model was born.
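For intuition, the following is a minimal sketch of the scaled dot-product attention computation at the heart of self-attention. It is a simplified illustration (a single head, with no masking or learned projection matrices), not a full Transformer layer.

import math

import torch

def scaled_dot_product_attention(queries, keys, values):
    # Compare each position's query against every position's key.
    scores = queries @ keys.transpose(-2, -1) / math.sqrt(queries.size(-1))
    # Convert the scores into weights that sum to 1 across positions.
    weights = torch.softmax(scores, dim=-1)
    # Each output is a weighted mix of the values, which lets the model emphasize
    # different input words when producing each output word.
    return weights @ values

# Example: a sequence of 5 token positions with 16-dimensional representations.
x = torch.randn(5, 16)
output = scaled_dot_product_attention(x, x, x)  # self-attention: Q, K, V come from the same sequence
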
The transformer architecture was the foundation for two of the most well-known and popular LLMs in use today, the Bidirectional Encoder Representations from Transformers (BERT)5 (Devlin 2018) and the Generative Pretrained Transformer (GPT)4 (Radford 2018). Later versions of the GPT model, namely GPT-3 and GPT-4, are the engines that power the ChatGPT application. The final piece of the recipe that makes LLMs so powerful is the ability to distill information from vast text corpuses without extensive labeling or preprocessing via self-supervised pre-training, an approach popularized for NLP by methods such as ULMFiT. In this pre-training phase, general text can be gathered and the model is trained on the task of predicting the next word based on previous words; the benefit here is that any input text used for training comes inherently prelabeled based on the order of the text. LLMs are truly capable of learning from internet-scale data. For example, the original BERT model was pre-trained on the BookCorpus and entire English Wikipedia text datasets.
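As a simple illustration of why ordinary text comes inherently prelabeled for next-word prediction, the following sketch turns a sentence into (context, next word) training pairs using nothing but the order of the words.

# Any sentence can be decomposed into prediction examples with no manual labeling.
text = "not all players communicate appropriately and respectfully"
tokens = text.split()

# Each prefix of the sentence is an input; the word that follows it is the label.
training_pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

for context, next_word in training_pairs:
    print(context, "->", next_word)
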
This new modeling paradigm has given rise to two new concepts: foundation models (FMs) and Generative AI. As opposed to training a model from scratch with task-specific data, which is the usual case for classical supervised learning, LLMs are pre-trained to extract general knowledge from a broad text dataset before being adapted to specific tasks or domains with a much smaller dataset (typically on the order of hundreds of samples). The new ML workflow now starts with a pre-trained model dubbed a foundation model. It’s important to build on the right foundation, and there are an increasing number of options, such as the new Amazon Titan FMs, to be released by AWS as part of Amazon Bedrock. These new models are also considered generative because their outputs are human interpretable and in the same data type as the input data. While past ML models were discriminative, such as classifying images of cats vs. dogs, LLMs are generative because their output is the next set of words based on input words. That allows them to power interactive applications such as ChatGPT that can be expressive in the content they generate.
Hugging Face has partnered with AWS to democratize FMs and make them easy to access and build with. Hugging Face has created a Transformers API that unifies more than 50 different transformer architectures across different ML frameworks, including access to pre-trained model weights in its Model Hub, which has grown to over 200,000 models as of this writing. In the next sections, we explore the proof of concept, the solution, and the FMs that were tested and chosen as the basis for solving this toxic speech classification use case for the customer.
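As an example of how the Transformers API exposes Model Hub checkpoints, the following sketch loads one of the pre-trained models discussed later in this post and runs it on a sample phrase. The returned label names and scores depend on the checkpoint, so treat this purely as illustrative usage rather than part of the delivered solution.

from transformers import pipeline

# Download a pre-trained checkpoint from the Hugging Face Model Hub and wrap it
# in a ready-to-use text classification pipeline.
classifier = pipeline(
    "text-classification",
    model="cardiffnlp/bertweet-base-offensive",
)

print(classifier("you played terribly and should uninstall the game"))
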
AWS GAIIC proof of concept
AWS GAIIC chose to experiment with LLM foundation models based on the BERT architecture to fine-tune a toxic language classifier. A total of three models from Hugging Face’s Model Hub were tested:

vinai/bertweet-base
cardiffnlp/bertweet-base-offensive
cardiffnlp/bertweet-base-hate

All three models are based on the BERTweet architecture, which is trained using the RoBERTa pre-training procedure. The RoBERTa pre-training procedure is the outcome of a replication study of BERT pre-training that evaluated the effects of hyperparameter tuning and training set size on the recipe for training BERT models6 (Liu 2019). The study sought to find a pre-training method that improved the performance of BERT without changing the underlying architecture, and it concluded that the following pre-training modifications substantially improve performance:

Training the model with bigger batches over more data
Removing the next sentence prediction objective
Training on longer sequences
Dynamically changing the masking pattern applied to the training data

The bertweet-base model uses the preceding pre-training procedure from the RoBERTa study to pre-train the original BERT architecture using 850 million English tweets. It is the first public large-scale language model pre-trained for English tweets.
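To illustrate the dynamic masking modification listed above, the following sketch uses Hugging Face’s DataCollatorForLanguageModeling, which re-samples the masked positions every time a batch is built. This is an illustrative stand-in, not the exact tooling used to pre-train BERTweet.

from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")

# mlm=True enables the masked language modeling objective; the masked positions are
# chosen at random each time the collator is called, so the masking pattern changes
# across epochs instead of being fixed once during preprocessing.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm=True,
    mlm_probability=0.15,
)

examples = [tokenizer("good luck have fun everyone")]
batch = collator(examples)  # different tokens are masked on every call
print(batch["input_ids"])
print(batch["labels"])      # -100 everywhere except the masked positions
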
Pre-trained FMs using tweets were thought to fit the use case for two main theoretical reasons:

The length of a tweet is very similar to the length of an inappropriate or toxic phrase found in online game chats
Tweets come from a population with a large variety of different users, similar to that of the population found in gaming platforms

AWS decided to first fine-tune the base BERTweet model with the customer’s labeled data to get a baseline. They then chose to fine-tune two other FMs, bertweet-base-offensive and bertweet-base-hate, which were further pre-trained specifically on more relevant toxic tweets to achieve potentially higher accuracy. The bertweet-base-offensive model uses the base BERTweet FM and is further pre-trained on 14,100 annotated tweets that were deemed offensive7 (Zampieri 2019). The bertweet-base-hate model also uses the base BERTweet FM but is further pre-trained on 19,600 tweets that were deemed hate speech8 (Basile 2019).
To further enhance the performance of the PoC model, AWS GAIIC made two design decisions:

Created a two-stage prediction flow where the first model acts as a binary classifier that determines whether a piece of text is toxic or not. The second model is a fine-grained model that classifies toxic text into the customer’s defined toxicity types. Only if the first model predicts the text as toxic does it get passed to the second model (see the sketch after this list).
Augmented the training data by adding a subset of a third-party labeled toxic text dataset from a public Kaggle competition (Jigsaw Toxicity) to the original 100 samples received from the customer. They mapped the Jigsaw labels to the associated customer-defined toxicity labels and split the data 80/20 into training and test sets to validate the model.

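A minimal sketch of this two-stage prediction flow follows. The model paths and label names are hypothetical placeholders, because the fine-tuned artifacts themselves are customer specific and not published.

from transformers import pipeline

# Hypothetical local paths to the two fine-tuned models (placeholders).
binary_classifier = pipeline("text-classification", model="./binary-toxicity-model")
fine_grained_classifier = pipeline("text-classification", model="./fine-grained-toxicity-model")

def classify_toxicity(text: str) -> str:
    """Stage 1 decides toxic vs. non-toxic; stage 2 runs only on toxic text."""
    stage_one = binary_classifier(text)[0]
    if stage_one["label"] == "toxic":  # assumed label name
        return fine_grained_classifier(text)[0]["label"]
    return "non-toxic"
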
AWS GAIIC used Amazon SageMaker notebooks to run their fine-tuning experiments and found that the bertweet-base-offensive model achieved the best scores on the validation set. The following table summarizes the observed metric scores.

Model          Precision   Recall   F1    AUC
Binary         .92         .90      .91   .92
Fine-grained   .81         .80      .81   .89
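
For reference, metrics like those in the preceding table are typically computed along the following lines. The exact evaluation code and averaging scheme used in the PoC are not published, so this is only a sketch of the binary case.

from sklearn.metrics import f1_score, precision_score, recall_score, roc_auc_score

# y_true: ground-truth labels from the held-out test set.
# y_pred: predicted class labels; y_score: predicted probabilities for the toxic class.
def binary_classification_report(y_true, y_pred, y_score):
    return {
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
        "auc": roc_auc_score(y_true, y_score),
    }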

From this point, GAIIC handed off the PoC to the AWS ProServe ML Delivery Team to productionize the PoC.
AWS ProServe ML Delivery Team solution
To productionize the model architecture, the AWS ProServe ML Delivery Team (MLDT) was asked by the customer to create a solution that is scalable and easy to maintain. There were a few maintenance challenges with the two-stage model approach:

Two models require double the amount of model monitoring, and retraining schedules can become inconsistent because one model may need to be retrained more often than the other.
Running two models costs more than running one.
Inference is slower because each request passes through two models.

To address these challenges, AWS ProServe MLDT had to figure out how to turn the two-stage model architecture into a single model architecture while still being able to maintain the accuracy of the two-stage architecture.
The solution was to first ask the customer for more training data, and then to fine-tune the bertweet-base-offensive model on all the labels, including non-toxic samples, as a single model. The idea was that fine-tuning one model with more data would yield results comparable to fine-tuning the two-stage model architecture on less data. To collapse the two-stage architecture into a single model, AWS ProServe MLDT updated the pre-trained model’s multi-label classification head to include one extra node representing the non-toxic class.
The following is a code sample of how you would fine-tune a pre-trained model from the Hugging Face Model Hub using their Transformers library and alter the model’s classification head to predict the desired number of classes. AWS ProServe MLDT used this blueprint as its basis for fine-tuning. It assumes that you have your training data and validation data ready and in the correct input format.
First, the required Python modules are imported and the tokenizer for the desired pre-trained model is loaded from the Hugging Face Model Hub:

# Imports.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorWithPadding,
    Trainer,
    TrainingArguments,
)

# Load the tokenizer for the pretrained model from the model hub.
model_checkpoint = "cardiffnlp/bertweet-base-offensive"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)

The pre-trained model then gets loaded and prepped for fine-tuning. This is the step where the number of toxicity classes and the training hyperparameters get defined:

# Load the pretrained model into a sequence classifier to be fine-tuned and define
# the number of classes you want to classify in the num_labels parameter.
model = AutoModelForSequenceClassification.from_pretrained(
    model_checkpoint,
    num_labels=[number of classes],
)

# Set your training parameter arguments. The following are some key parameters that AWS ProServe MLDT tuned:
training_args = TrainingArguments(
    output_dir=[enter input],  # required by most versions of TrainingArguments
    num_train_epochs=[enter input],
    per_device_train_batch_size=[enter input],
    per_device_eval_batch_size=[enter input],
    evaluation_strategy="epoch",
    logging_strategy="epoch",
    save_strategy="epoch",
    learning_rate=[enter input],
    load_best_model_at_end=True,
    metric_for_best_model=[enter input],
    optim=[enter input],
)

Model fine-tuning starts by passing the training and validation datasets to the Trainer:

# Define a data collator that dynamically pads each batch to its longest sequence.
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)

# Finetune the model from the model_checkpoint, tokenizer, and training_args defined,
# assuming the train and validation datasets are correctly preprocessed.
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=[enter input],
    eval_dataset=[enter input],
    tokenizer=tokenizer,
    data_collator=data_collator,
)

# Finetune model command.
trainer.train()
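
A typical follow-up to training, not shown in the original blueprint, is to evaluate the fine-tuned model on the held-out set and persist it for deployment; the output path here is an arbitrary example.

# Evaluate the fine-tuned model on the eval_dataset defined above.
metrics = trainer.evaluate()
print(metrics)

# Save the fine-tuned model (and the tokenizer passed to the Trainer) for later use.
trainer.save_model("./fine-tuned-toxicity-classifier")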

AWS ProServe MLDT received approximately 5,000 additional labeled data samples, 3,000 non-toxic and 2,000 toxic. They combined this data with the 5,000 samples from the PoC and fine-tuned new one-stage versions of all three bertweet-base models, covering all labels in a single model, using the same 80% train set, 20% test set method. The following table shows that the performance scores were comparable to those of the two-stage model.

Model                               Precision   Recall   F1    AUC
bertweet-base (1-Stage)             .76         .72      .74   .83
bertweet-base-hate (1-Stage)        .85         .82      .84   .87
bertweet-base-offensive (1-Stage)   .88         .83      .86   .89
bertweet-base-offensive (2-Stage)   .91         .90      .90   .92

The one-stage model approach delivered the cost and maintenance improvements while decreasing precision by only 3 percentage points (from .91 to .88). After weighing the trade-offs, the customer opted for AWS ProServe MLDT to productionize the one-stage model.
By fine-tuning one model with more labeled data, AWS ProServe MLDT was able to deliver a solution that met the customer’s threshold for model accuracy, as well as deliver on their ask for ease of maintenance, while lowering cost and increasing robustness.
Conclusion
A large gaming customer was looking for a way to detect toxic language within their communication channels to promote a socially responsible gaming environment. AWS GAIIC created a PoC of a toxic language detector by fine-tuning an LLM to detect toxic language. AWS ProServe MLDT then updated the model training flow from a two-stage approach to a one-stage approach and productionized the LLM for the customer to be used at scale.
In this post, AWS demonstrates the effectiveness and practicality of fine-tuning an LLM to solve this customer use case, shares context on the history of foundation models and LLMs, and introduces the workflow between the AWS Generative AI Innovation Center and the AWS ProServe ML Delivery Team. In the next post in this series, we will dive deeper into how AWS ProServe MLDT productionized the resulting one-stage model using SageMaker.
If you are interested in working with AWS to build a Generative AI solution, please reach out to the GAIIC. They will assess your use case, build out a Generative-AI-based proof of concept, and have options to extend collaboration with AWS to implement the resulting PoC into production.
References

Gamer Demographics: Facts and Stats About the Most Popular Hobby in the World
ChatGPT sets record for fastest-growing user base – analyst note
Vaswani et al., “Attention is All You Need”
Radford et al., “Improving Language Understanding by Generative Pre-Training”
Devlin et al., “BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding”
Liu et al., “RoBERTa: A Robustly Optimized BERT Pretraining Approach”
Zampieri et al., “SemEval-2019 Task 6: Identifying and Categorizing Offensive Language in Social Media (OffensEval)”
Basile et al., “SemEval-2019 Task 5: Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter”

About the authors
James Poquiz is a Data Scientist with AWS Professional Services based in Orange County, California. He has a BS in Computer Science from the University of California, Irvine, and has several years of experience working in the data domain, having played many different roles. Today he works on implementing and deploying scalable ML solutions to achieve business outcomes for AWS clients.
Han Man is a Senior Data Science & Machine Learning Manager with AWS Professional Services based in San Diego, CA. He has a PhD in Engineering from Northwestern University and has several years of experience as a management consultant advising clients in manufacturing, financial services, and energy. Today, he is passionately working with key customers from a variety of industry verticals to develop and implement ML and GenAI solutions on AWS.
Safa Tinaztepe is a full-stack data scientist with AWS Professional Services. He has a BS in computer science from Emory University and has interests in MLOps, distributed systems, and web3.
