AI Will Eat Itself? This AI Paper Introduces A Phenomenon Called Model Collapse That Refers To A Degenerative Learning Process Where Models Start Forgetting Improbable Events Over Time

Stable Diffusion can generate pictures from nothing but words, and GPT-2, GPT-3(.5), and GPT-4 performed impressively on many language challenges; ChatGPT gave the public its first broad exposure to this class of language model. Large language models (LLMs) have established themselves as a permanent fixture and are expected to alter the entire online ecosystem of text and imagery drastically. Training on massive web-scraped data can only be sustained if the makeup of that data is given due consideration: as content generated by LLMs increasingly appears in data scraped from the Internet, data capturing genuine human interactions with these systems will only grow in value.

Researchers from Britain and Canada find that model collapse occurs when one model learns from data generated by another. This degenerative process causes models to lose track of the genuine underlying data distribution over time, even when that distribution itself has not changed. They illustrate the phenomenon with case studies of model collapse in Gaussian Mixture Models, Variational Autoencoders, and Large Language Models. They demonstrate how, over successive generations, learned behaviors converge to an estimate with very small variance, and how this loss of information about the true distribution begins with the disappearance of its tails. In addition, they demonstrate that this outcome is inevitable even in scenarios with nearly ideal conditions for long-term learning, i.e., no function-estimation error.

The researchers conclude by discussing the larger implications of model collapse. They point out how important access to the raw data distribution is in settings where the tails of the underlying distribution matter. Thus, data on genuine human interactions with LLMs will become increasingly valuable as LLM-generated material is posted on the Internet at scale, polluting the data collected to train them.

Model Collapse: What Is It?

Model collapse unfolds across generations of learned generative models, each trained on data produced by the one before it: later generations are corrupted because they were trained on contaminated data and thus misperceive the world. Model collapse can be classified as either “early” or “late,” depending on when it occurs. In early model collapse, the model begins to lose information about the distribution’s tails; in late model collapse, the model entangles different modes of the original distribution and converges to a distribution that bears little resemblance to the original, often with very small variance.

In contrast to catastrophic forgetting, which concerns a single model over time, this process considers many models across time: models do not forget previously learned data but instead begin misinterpreting what they perceive to be real, reinforcing their own beliefs. This occurs due to two distinct sources of error that, compounded over generations, cause a departure from the original model. One particular error mechanism is crucial to the process, as it persists beyond the first generation.

Model Collapse: Causes

The primary and secondary causes of model collapse are as follows:

The primary cause is statistical approximation error, which arises because the number of samples is finite and vanishes only as the sample size approaches infinity.
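The 1/sqrt(n) scaling of this error is easy to see empirically. The sketch below (a hypothetical demo; the sample sizes are arbitrary) estimates the mean of a standard normal from n draws and measures how far the estimate lands from the true value:

```python
import numpy as np

rng = np.random.default_rng(1)

# Statistical approximation error: with n samples, the estimate of a
# distribution's mean is off by O(1/sqrt(n)) and vanishes only as n grows.
mean_abs_error = {}
for n in (100, 10_000, 1_000_000):
    errors = [abs(rng.normal(0.0, 1.0, n).mean()) for _ in range(100)]
    mean_abs_error[n] = float(np.mean(errors))
    print(f"n={n:>9}: mean |estimate - true mean| ~ {mean_abs_error[n]:.5f}")
```

Each hundredfold increase in sample size shrinks the typical error by about a factor of ten, but for any finite n the error never reaches zero, so a model trained on its own samples always inherits some of it.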

The secondary cause is functional approximation error, which arises when the function approximators are insufficiently expressive (or occasionally so expressive that they extend beyond the original distribution).

Each of these factors may exacerbate or ameliorate the likelihood of model collapse. Better approximation power can be a double-edged sword: greater expressiveness can amplify statistical noise just as easily as it can reduce it en route to a better approximation of the underlying distribution.
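Functional approximation error can be seen with an under-expressive model class. The sketch below (a hypothetical toy with arbitrary mixture parameters) fits a single Gaussian to bimodal data, merging the two modes in the way late model collapse entangles them:

```python
import numpy as np

rng = np.random.default_rng(2)

# "Real" data: a bimodal mixture of two well-separated Gaussians.
real = np.concatenate([rng.normal(-4.0, 0.5, 5000), rng.normal(4.0, 0.5, 5000)])

# A single Gaussian is not expressive enough to represent two modes:
# the best fit smears them into one broad unimodal distribution.
mu, sigma = real.mean(), real.std()
print(f"fitted Gaussian: mu = {mu:.2f}, sigma = {sigma:.2f}")

# Samples from the fitted model are densest near mu ~ 0, a region the real
# distribution almost never produces, so a next generation trained on them
# inherits a distorted picture of the data.
synthetic = rng.normal(mu, sigma, 10000)
print(f"mass near zero: real = {(np.abs(real) < 1).mean():.3f}, "
      f"synthetic = {(np.abs(synthetic) < 1).mean():.3f}")
```

The fitted spread (around 4) reflects the distance between the modes rather than the width of either one, which is exactly the kind of mode entanglement described above.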

Model collapse is said to occur in all recursively trained generative models, affecting every model generation. The researchers construct simple mathematical models that exhibit the collapse while remaining tractable enough to yield analytical expressions for the quantities of interest. Their aim is to quantify the impact of the different error sources on the final approximation of the original distribution.
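In the single-Gaussian case, such an analytical recursion follows from standard sampling theory (the identities below are textbook results, not the paper's exact notation): if generation t has mean mu_t and standard deviation sigma_t, and generation t+1 is fit by maximum likelihood to n fresh draws, then

```latex
\mu_{t+1} = \mu_t + \frac{\sigma_t}{\sqrt{n}}\, Z_t, \quad Z_t \sim \mathcal{N}(0,1),
\qquad
\frac{n\,\sigma_{t+1}^{2}}{\sigma_t^{2}} \sim \chi^{2}_{n-1},
\quad\Rightarrow\quad
\mathbb{E}\!\left[\sigma_{t+1}^{2} \,\middle|\, \sigma_t\right] = \frac{n-1}{n}\,\sigma_t^{2}.
```

So the mean performs a random walk whose step size is set by the current spread, while the variance shrinks by a factor of (n - 1)/n in expectation at every generation; compounded over generations, this finite-sample shrinkage is one concrete route to collapse.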

The researchers show that model collapse can be triggered by training on data from another generative model, which induces a distribution shift; as a result, the model misperceives the training task. Long-term learning requires maintaining access to the original data source and keeping data not produced by LLMs readily available over time. How LLM-generated content can be tracked at scale remains unclear, which raises problems about the provenance of content scraped from the Internet and the need to distinguish it from human-generated data. Community-wide coordination is one approach, ensuring that all parties involved in LLM development and deployment communicate and share the data needed to settle provenance questions. With data crawled from the Internet before the technology's widespread adoption, or with direct access to data supplied by humans at scale, it may remain feasible to train subsequent versions of LLMs.

Check out the Paper and Reference Article.

The post AI Will Eat Itself? This AI Paper Introduces A Phenomenon Called Model Collapse That Refers To A Degenerative Learning Process Where Models Start Forgetting Improbable Events Over Time appeared first on MarkTechPost.