One of the most notable recent advancements in Artificial Intelligence is the development of Large Language Models (LLMs). OpenAI's widely known ChatGPT, built on the GPT-3.5 and GPT-4 architectures, has made headlines for generating content and answering questions much as a human would. Its ability to produce creative yet precise content has opened the door to problem-solving across nearly every industry. With the addition of Chain-of-Thought (CoT) prompting, the impact of LLMs like GPT-3.5 has grown further, driving significant changes in how information is processed. CoT prompting guides an LLM to lay out its reasoning as a series of intermediate steps, producing more comprehensive and elaborate reasoning processes.
Though CoT offers many advantages, its emphasis on intermediate reasoning steps can occasionally lead to hallucinations and compounded errors, making it difficult for models to produce consistent and accurate reasoning chains. Considerable effort has gone into enabling LLMs to perform explicit and rigorous deductive reasoning, drawing inspiration from the deliberate deductive procedures humans use to solve problems. To address these challenges, a team of researchers has introduced Natural Program, a natural-language-based deductive reasoning format that leverages the inherent expressiveness of natural language to achieve rigorous deductive reasoning.
The team has mentioned that this approach breaks the reasoning verification process down into a number of sequential sub-processes. Each sub-process is given only the context and premises required for that particular step, and this decomposition makes the verification task far more tractable. The authors used publicly accessible models such as OpenAI's GPT-3.5-turbo (175B) to run trials on arithmetic and commonsense datasets, demonstrating the effectiveness of their Natural Program-based verification technique. The outcomes showed that the strategy substantially improves the reliability of reasoning chains produced by large language models.
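The decomposition described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not the authors' implementation: the data layout and the `toy_verifier` stand-in (which only checks simple `a + b = c` arithmetic instead of calling an LLM) are assumptions made for the sake of a runnable example. The key idea it captures is that each step's verifier sees only the premises that step cites, never the whole chain.

```python
import re

# Hypothetical example chain: premises plus one deduction step that
# records which premises it relies on.
premises = ["Alice has 3 apples.", "Bob has 5 apples."]
steps = [{"uses": [0, 1], "claim": "Together they have 3 + 5 = 8 apples."}]

def toy_verifier(context, claim):
    # Stand-in for an LLM judge: only checks "a + b = c" arithmetic in the claim.
    m = re.search(r"(\d+)\s*\+\s*(\d+)\s*=\s*(\d+)", claim)
    return m is None or int(m.group(1)) + int(m.group(2)) == int(m.group(3))

def verify_chain(premises, steps, verify_step):
    """Check each step in its own sub-process with only the premises it cites."""
    for step in steps:
        context = [premises[i] for i in step["uses"]]  # minimal context only
        if not verify_step(context, step["claim"]):
            return False  # one failed step invalidates the whole chain
    return True

print(verify_chain(premises, steps, toy_verifier))  # True
```

Because each check is independent and small, a faulty step (say, a claim of "3 + 5 = 9") fails in isolation rather than being buried inside one long verification pass.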
The Natural Program format enables language models to generate precise reasoning steps, ensuring that each subsequent step is rigorously grounded in the ones before it. Using this structure, language models carry out reasoning self-verification step by step, and the resulting reasoning chains are more rigorous and reliable because a verification procedure is integrated into every stage of the deduction.
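To make "grounded on prior steps" concrete, here is an illustrative (not verbatim) example of what such a premise-numbered chain might look like, together with a small parser that recovers which premises a step cites. The `#N.` labels and `(by #1 #2)` citation syntax are assumptions chosen for this sketch; the paper defines the exact format.

```python
import re

# Illustrative Natural Program-style chain: premises are numbered, and each
# deduction step explicitly cites the lines it builds on.
chain = """#1. Alice has 3 apples. (premise)
#2. Bob has 5 apples. (premise)
#3. (by #1 #2) Together they have 3 + 5 = 8 apples."""

def cited_premises(step_line):
    """Return the premise numbers a step explicitly cites, e.g. [1, 2]."""
    m = re.search(r"\(by ([^)]*)\)", step_line)
    return [int(n) for n in re.findall(r"#(\d+)", m.group(1))] if m else []

for line in chain.splitlines():
    print(cited_premises(line))  # [], [], [1, 2]
```

Making citations machine-extractable like this is what allows a verifier to be handed exactly the premises a step depends on and nothing else.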
Some of the key contributions mentioned by the team are –
With the introduction of the Natural Program format, the team has proposed a framework for rigorous deductive reasoning that is well suited to verification and can be easily produced via in-context learning.
It has been shown that lengthy deductive reasoning processes written in the proposed Natural Program format can be reliably self-verified through step-by-step sub-processes that cover only the prerequisite context and premises.
Through experiments, the team has shown how effectively the framework enhances the accuracy, reliability, and interpretability of LLM-generated reasoning steps and solutions.
In conclusion, this framework seems promising for enhancing the deductive reasoning capabilities of language models.
Check Out The Paper and Github.
The post UC San Diego and Qualcomm Researchers Unleash Natural Program: A Powerful Tool for Effortless Verification of Rigorous Reasoning Chains in Natural Language – An AI Game Changer appeared first on MarkTechPost.