Meet Orion-14B: A New Open-source Multilingual Large Language Model Trained on 2.5T Tokens Including Chinese, English, Japanese, and Korean

With recent advances in AI, large language models are being used in many fields. These models are trained on increasingly large datasets and are applied to a wide range of natural language processing (NLP) tasks, such as dialogue systems, machine translation, and information retrieval. Extensive research on LLMs continues to produce new, useful models for NLP.

Recently, researchers from Orion Star have introduced a new framework, Orion-14B. The Orion-14B-Base model has 14 billion parameters and is trained on a massive 2.5 trillion tokens spanning languages such as Chinese, English, Japanese, and Korean. The framework also offers an impressive 200,000-token context length. The Orion-14B series comprises several models, each with specific features and applications.
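
For readers who want to try the released checkpoints, the sketch below shows how such a model would typically be loaded with Hugging Face Transformers. The repository id "OrionStarAI/Orion-14B-Base" and the need for trust_remote_code are assumptions based on the usual pattern for open-source releases, so the official model card should be consulted for the exact usage.

```python
# Minimal sketch: loading the base model with Hugging Face Transformers.
# The repo id and trust_remote_code=True are assumptions; check the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OrionStarAI/Orion-14B-Base"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps the 14B weights manageable
    device_map="auto",
    trust_remote_code=True,
)

prompt = "Large language models are"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```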

Orion-14B includes models tailored to specific tasks. One is Orion-14B-Chat-RAG, fine-tuned on a custom retrieval-augmented generation dataset, which makes it perform well on retrieval-augmented generation tasks. Another is Orion-14B-Chat-Plugin, designed for agent-related scenarios in which the LLM acts as a plugin and function-call system. The framework also offers several other extensions of Orion-14B, including a long-context model, a quantized model, and several other application-oriented models.
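
To make the retrieval-augmented use case concrete, here is a minimal sketch of how retrieved passages might be stitched into a prompt for the Chat-RAG variant. The repository id "OrionStarAI/Orion-14B-Chat-RAG" and the plain "references + question" prompt layout are assumptions for illustration; the actual prompt template is defined by the model's documentation.

```python
# Minimal sketch of a retrieval-augmented call to the Chat-RAG variant.
# Repo id and prompt layout are assumptions, not the documented template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OrionStarAI/Orion-14B-Chat-RAG"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", trust_remote_code=True
)

# Passages returned by whatever retriever is plugged in (BM25, a vector store, ...).
retrieved = [
    "Orion-14B is a multilingual LLM trained on 2.5 trillion tokens.",
    "The series includes chat, RAG, plugin, long-context and quantized variants.",
]
question = "Which languages does Orion-14B cover?"

refs = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(retrieved))
prompt = (
    "Answer using only the references below.\n\n"
    f"{refs}\n\nQuestion: {question}\nAnswer:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```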

The research team emphasized that the Orion-14B series models are adaptable and excel in human-annotated blind tests. The long-chat version can handle lengthy texts and supports up to 320,000 tokens. The quantized versions improve efficiency: they reduce model size by about 70% and increase inference speed by about 30%, with a performance loss of less than 1%. The researchers also highlighted that the model outperforms other models at the 20-billion-parameter scale, excelling in comprehensive evaluations and showing robust multilingual capabilities, particularly on Japanese and Korean test sets.
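
A compressed setup along these lines can be approximated with 4-bit loading, as in the sketch below. The full-precision repository id "OrionStarAI/Orion-14B-Chat" is an assumption, and bitsandbytes quantization is used here as a stand-in for the released quantized checkpoints rather than the authors' own quantization recipe.

```python
# Minimal sketch: loading the chat model with 4-bit weights via bitsandbytes
# to approximate the smaller-footprint setup described above.
# Repo id is an assumption; bitsandbytes requires a CUDA-capable GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "OrionStarAI/Orion-14B-Chat"            # assumed repo id
quant_cfg = BitsAndBytesConfig(load_in_4bit=True)  # 4-bit weight quantization

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_cfg,
    device_map="auto",
    trust_remote_code=True,
)
```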

The dataset used for these models contains multilingual text, focused on English and Chinese, which together account for 90% of the entire dataset. The researchers also aimed to include Japanese and Korean texts making up more than 5% of the content, while the remaining portion covers other languages such as Spanish, French, German, and Arabic. The dataset spans many types of written material, including web pages, news articles, encyclopedic entries, books, source code, and academic publications.
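
As a rough back-of-envelope illustration of what those shares imply for the 2.5-trillion-token corpus, the snippet below computes approximate token budgets. The exact per-language split is not published, so these figures are illustrative only.

```python
# Back-of-envelope token budget implied by the stated corpus shares.
# The exact split is not published; these are illustrative figures only.
total_tokens = 2.5e12  # 2.5 trillion training tokens

shares = {
    "English + Chinese": 0.90,  # stated to account for ~90% of the corpus
    "Japanese + Korean": 0.05,  # stated target of more than 5%; treated as 5% here
    "Other languages":   0.05,  # remainder (Spanish, French, German, Arabic, ...)
}

for group, share in shares.items():
    print(f"{group:>18}: ~{share * total_tokens / 1e12:.3f}T tokens")
```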

The research team noted that they faced many obstacles in building these models. In conclusion, the Orion-14B series is a significant step for multilingual large language models. The series outperforms other open-source models and offers a strong baseline for future LLM research. The researchers are focused on further improving the efficiency of these models, which can strengthen LLM research in this field.

Check out the Paper and Model. All credit for this research goes to the researchers of this project.
