Researchers from ETH Zurich Introduce GoT (Graph of Thoughts): A Machine Learning Framework that Advances Prompting Capabilities in Large Language Models (LLMs)

Artificial Intelligence (AI) has seen a rise in the use of Large Language Models (LLMs). LLMs built on the Transformer architecture's decoder-only design, including GPT, PaLM, and LLaMA, have become especially prominent. Prompt engineering has proven a successful and resource-efficient way to apply LLMs to diverse problems: task-specific instructions are embedded directly in the input text, and if those instructions are well written, the LLM can use its autoregressive, token-based generation to produce relevant text and complete the task.
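To make this concrete, here is a minimal Python sketch of prompt engineering, where the task instruction is embedded alongside the actual input. The `build_prompt` helper and the commented-out `query_llm` call are hypothetical stand-ins for whatever completion API is used; they are not from the paper.

```python
# A minimal sketch of prompt engineering: the task-specific instruction
# is embedded directly in the input text the LLM receives.

def build_prompt(task_instruction: str, user_input: str) -> str:
    """Embed a task-specific instruction alongside the actual input."""
    return f"{task_instruction}\n\nInput: {user_input}\nOutput:"

prompt = build_prompt(
    task_instruction="Translate the following English sentence into French.",
    user_input="The weather is pleasant today.",
)
# response = query_llm(prompt)  # hypothetical call to an LLM endpoint
print(prompt)
```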

The Chain-of-Thought (CoT) method expands on prompt engineering. In CoT, the input prompt supplies intermediate reasoning steps in addition to the task description, which considerably improves the LLM's problem-solving ability without any model updates. Going beyond CoT and the more recent Tree of Thoughts (ToT) paradigm, the researchers introduce the Graph of Thoughts (GoT) framework.
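As an illustration, a CoT prompt prepends a worked example whose intermediate reasoning steps the model can imitate. The sketch below is a generic example in the style of the original CoT work, not one taken from this paper; `query_llm` remains a hypothetical stand-in.

```python
# A Chain-of-Thought prompt: the worked example spells out intermediate
# reasoning steps, which the model imitates for the new question.
cot_prompt = """Q: A shop has 23 apples. It uses 20 and buys 6 more. How many apples now?
A: Start with 23 apples. Using 20 leaves 23 - 20 = 3. Buying 6 more gives 3 + 6 = 9.
The answer is 9.

Q: A library has 15 books. It lends out 7 and receives 4 donations. How many books now?
A:"""
# response = query_llm(cot_prompt)  # hypothetical LLM call, as above
```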

GoT represents the reasoning process as an arbitrary graph, enabling LLMs to generate and combine information more flexibly. Individual pieces of information, or LLM thoughts, are the graph's vertices, while the dependencies among them are its edges. Because thoughts can be coupled and interdependent inside the graph, different LLM ideas can be combined into more potent and effective results. In contrast to linear or tree-shaped paradigms that limit how thoughts relate, GoT can capture complex networks of thoughts: it can merge various ideas into a cohesive answer, distill intricate thought networks to their essential components, and improve ideas through feedback loops.
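The sketch below illustrates this idea under stated assumptions; it is not the authors' released code or API. Thoughts are vertices carrying LLM-generated text, parent links are dependency edges, and the hypothetical `generate`, `aggregate`, and `refine` helpers stand in for LLM-backed graph operations of the kind the paper describes.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the GoT idea: thoughts are graph vertices,
# dependencies are edges, and operations act on the graph.

@dataclass
class Thought:
    content: str  # the LLM-generated text for this vertex
    parents: list["Thought"] = field(default_factory=list)  # incoming edges

def generate(prompt: str) -> Thought:
    """Create a new thought from a prompt (LLM call stubbed out)."""
    return Thought(content=f"<LLM output for: {prompt}>")

def aggregate(thoughts: list[Thought]) -> Thought:
    """Merge several thoughts into one, recording dependency edges."""
    merged = "<LLM merge of: " + "; ".join(t.content for t in thoughts) + ">"
    return Thought(content=merged, parents=list(thoughts))

def refine(thought: Thought) -> Thought:
    """Improve a thought via a feedback loop on its source."""
    return Thought(content=f"<LLM refinement of: {thought.content}>",
                   parents=[thought])

# Example: split a task, solve the parts independently, then merge and refine.
a = generate("sort the first half of the list")
b = generate("sort the second half of the list")
merged = refine(aggregate([a, b]))
```

Aggregation is what distinguishes a graph from a tree: in ToT every thought has a single parent, whereas here `aggregate` gives the merged thought several parents, and `refine` chains a thought to its improved version, modeling a feedback loop.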

GoT's effectiveness is illustrated by its performance against existing methods across multiple tasks. On a sorting task, GoT improves sorting quality by 62% over ToT while reducing computing costs by more than 31%, demonstrating its capacity to balance task accuracy with resource efficiency. Extensibility is another of its most notable benefits: the framework adapts easily to new thought transformations, so it can drive creative prompting schemes. This agility is essential as the LLM research and application landscape evolves.

By establishing the GoT framework, this work brings LLM reasoning closer to human thinking, in which thoughts interact, branch out, and influence one another in complex networks. GoT thus bridges the gap between conventional linear techniques and these sophisticated, network-like mental processes, improving LLMs' capacity to handle challenging problems.

Check out the Paper and GitHub. All credit for this research goes to the researchers on this project.
