In deep learning, symmetry is a crucial inductive bias. Convolutional neural networks exploit the translational symmetry of images, and graph neural networks exploit the permutation symmetry of graphs. Theoretical analyses and practical methods for constructing general group-equivariant neural networks have seen a recent surge of interest.
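The translation case can be made concrete with a minimal numpy sketch (names and the toy signal are illustrative, not from the paper): circular convolution commutes with cyclic shifts, which is exactly the equivariance property CNNs build in.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)   # toy 1-D "signal"
k = rng.standard_normal(16)   # toy convolution kernel

def circ_conv(signal, kernel):
    """Circular convolution via the FFT (periodic boundary conditions)."""
    return np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)).real

shift = 3
# Equivariance: convolving a shifted signal equals shifting the convolved signal.
lhs = circ_conv(np.roll(x, shift), k)
rhs = np.roll(circ_conv(x, k), shift)
assert np.allclose(lhs, rhs)
```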
Equivariant neural networks offer several advantages, but building one first requires knowing the data's symmetry explicitly. Identifying the true symmetries of a dataset can be challenging in practice, and constraining the model to an exact mathematical symmetry may not be optimal.
Researchers from the University of California San Diego, Northeastern University, and IBM Research introduce a novel approach based on generative adversarial training for extracting continuous symmetries from data. The key observation is that a symmetry is a transformation that preserves the data distribution. The method trains a symmetry generator that applies its learned transformations to the training data; when the transformed distribution is indistinguishable from the original dataset, the transformation is an equivariance or invariance of the data.
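The distribution-preservation criterion behind the adversarial setup can be illustrated with a toy sketch (this is not LieGAN's training loop; the dataset, transformations, and the covariance-based test statistic here are all illustrative assumptions): a true symmetry leaves the data distribution unchanged, while a non-symmetry visibly distorts it, which is the signal a discriminator would pick up.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy dataset whose distribution is rotation-invariant (isotropic Gaussian).
X = rng.standard_normal((20000, 2))

theta = 0.7
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
shear = np.array([[1.0, 0.5],
                  [0.0, 1.0]])   # not a symmetry of this distribution

def cov_gap(transform):
    """Distance between covariances of transformed and original samples."""
    return np.linalg.norm(np.cov((X @ transform.T).T) - np.cov(X.T))

# A symmetry leaves the distribution (here, its covariance) unchanged;
# a non-symmetry does not.
rotation_gap = cov_gap(rotation)   # near zero
shear_gap = cov_gap(shear)         # clearly nonzero
```

In LieGAN the comparison is made by a learned discriminator rather than a fixed statistic, so it can detect far subtler distribution shifts than a covariance mismatch.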
Their method, LieGAN, discovers continuous symmetries as matrix groups by drawing on the theory of Lie groups and Lie algebras. Its parameterization lets it handle a variety of symmetries, including discrete group transformations and subsets of groups. LieGAN directly produces an orthogonal basis of the Lie algebra, making the learned symmetry interpretable. The findings demonstrate that the Lie algebra LieGAN learns yields high-quality results on downstream tasks such as N-body dynamics prediction and top quark tagging: exploiting the learned symmetry through equivariant models or data augmentation improves prediction performance across several datasets, creating pipelines for using discovered symmetries in downstream prediction.
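The Lie-algebra parameterization rests on the exponential map: a group element is obtained by exponentiating a linear combination of Lie algebra basis matrices. A minimal sketch, assuming a hand-fixed so(2) generator for illustration (LieGAN instead *learns* the basis from data):

```python
import numpy as np
from scipy.linalg import expm

# Known generator of so(2), fixed by hand here purely for illustration.
L = np.array([[0.0, -1.0],
              [1.0,  0.0]])

rng = np.random.default_rng(0)
w = rng.standard_normal()   # sampled coefficient (how far to transform)
g = expm(w * L)             # group element via the matrix exponential

# Exponentiating so(2) yields a rotation: orthogonal with determinant 1.
assert np.allclose(g @ g.T, np.eye(2))
assert np.isclose(np.linalg.det(g), 1.0)
```

With a learned basis of several matrices, sampling coefficients and exponentiating their combination generates candidate symmetry transformations for the adversarial game.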
To match the performance of equivariant models built with ground-truth symmetries, they also present LieGNN, a modification of the E(n) Equivariant Graph Neural Network (EGNN) that incorporates the symmetries learned by LieGAN.
The present work focuses on global symmetries that form subgroups of the general linear group. However, the researchers believe that replacing LieGAN's simple linear transformation generator with a more expressive one would extend the framework to more general symmetry-discovery settings, including non-connected Lie group symmetries, nonlinear symmetries, and gauge symmetries.
Check Out The Paper and GitHub link.
The post Meet LieGAN: An AI Framework That Uses Generative Adversarial Training To Automatically Discover Equivariances From A Dataset appeared first on MarkTechPost.