Meet Eureka: A Human-Level Reward Design Algorithm Powered by Large Language Models (LLMs)

Large Language Models (LLMs) are great at high-level planning but struggle to master low-level tasks like pen spinning. Now, a team of researchers from NVIDIA, UPenn, Caltech, and UT Austin has developed an algorithm called EUREKA that uses advanced LLMs, such as GPT-4, to create reward functions for complex skill acquisition through reinforcement learning. EUREKA outperforms human-engineered rewards and can produce safer, higher-quality rewards through gradient-free, in-context learning from human feedback. This breakthrough paves the way for LLM-powered skill acquisition, as demonstrated by a simulated Shadow Hand mastering pen-spinning tricks.

Reward engineering in reinforcement learning has long posed challenges: existing methods like manual trial-and-error and inverse reinforcement learning lack scalability and adaptability. EUREKA introduces a new approach, utilising LLMs to generate interpretable reward code and refine rewards iteratively. While previous works have explored LLMs for high-level decision-making, EUREKA is groundbreaking in its application to low-level skill-learning tasks, pioneering evolutionary search with LLMs for reward design without initial candidates or few-shot prompting.

LLMs excel at high-level planning but fall short on low-level skills like pen spinning, and reward design in reinforcement learning often relies on time-consuming trial and error. The study presents EUREKA, which leverages advanced coding LLMs, such as GPT-4, to create reward functions for various tasks autonomously, outperforming human-engineered rewards in diverse environments. EUREKA also enables in-context learning from human feedback, enhancing reward quality and safety, and it addresses dexterous manipulation tasks unattainable through manual reward engineering.

EUREKA, an algorithm powered by LLMs like GPT-4, autonomously generates reward functions that excel across 29 open-source RL environments. It employs gradient-free, in-context reinforcement learning from human feedback (RLHF) to enhance reward quality and safety without any model updates. EUREKA's rewards enable training a simulated Shadow Hand in pen-spinning tricks and rapid pen manipulation. By pairing evolutionary search with LLMs for reward design, it eliminates the need for initial candidates or few-shot prompting, marking a significant advancement in reinforcement learning.
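The evolutionary loop described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: `llm_propose`, `train_and_score`, and the weighted reward form are hypothetical stand-ins (the real system queries GPT-4 with the environment's source code and trains full RL policies, feeding back a textual "reward reflection" each round).

```python
import random

def llm_propose(reflection, n_candidates=4):
    """Stand-in for an LLM call that, given a textual reflection on the
    previous round, returns candidate reward functions as executable code.
    Here it is mocked by sampling randomly weighted reward terms."""
    def make_reward(w_dist, w_vel):
        return lambda state: -w_dist * abs(state["dist"]) - w_vel * abs(state["vel"])
    return [make_reward(random.uniform(0, 2), random.uniform(0, 2))
            for _ in range(n_candidates)]

def train_and_score(reward_fn):
    """Stand-in for RL training: score a candidate reward on a fixed probe
    state, as a proxy for task fitness after policy training."""
    probe = {"dist": 0.5, "vel": 0.1}
    return reward_fn(probe)

def eureka_loop(iterations=5):
    """Evolutionary search: propose candidates, keep the best, and build a
    reflection string to condition the next round of proposals."""
    best_score, reflection = float("-inf"), ""
    for _ in range(iterations):
        for candidate in llm_propose(reflection):
            score = train_and_score(candidate)
            if score > best_score:
                best_score = score
        reflection = f"best fitness so far: {best_score:.3f}"
    return best_score

random.seed(0)
print(eureka_loop())
```

In the actual algorithm the reflection step is far richer, summarizing per-component reward statistics from training so the LLM can mutate its reward code in a targeted way; the skeleton of propose, evaluate, select, and reflect is the same.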

EUREKA outperforms L2R (Language to Rewards), showcasing the expressiveness of its generated rewards. It improves consistently over iterations, with its best rewards eventually surpassing human benchmarks. It also produces novel rewards only weakly correlated with human-designed ones, potentially uncovering counterintuitive design principles, and its reward reflection step yields the largest gains on higher-dimensional tasks. Combined with curriculum learning, EUREKA succeeds at dexterous pen-spinning using a simulated Shadow Hand.

EUREKA, a reward design algorithm driven by LLMs, attains human-level reward generation, outperforming expert-written rewards on 83% of tasks with an average normalized improvement of 52%. Combining LLMs with evolutionary search proves a versatile and scalable approach to reward design in challenging, open-ended problems. EUREKA's success in dexterity is evident in its solving of complex tasks, such as dexterous pen spinning, via curriculum learning. Its adaptability and substantial performance gains are promising for diverse reinforcement learning and reward design applications.

Future research avenues include evaluating EUREKA’s adaptability and performance in more diverse and complex environments and with different robot designs. Assessing its real-world applicability beyond simulation is crucial. Exploring synergies with reinforcement learning techniques, like model-based methods or meta-learning, could further enhance EUREKA’s capabilities. Investigating the interpretability of EUREKA’s generated reward functions is essential for understanding its underlying decision-making processes. Enhancing human feedback integration and exploring EUREKA’s potential in various domains beyond robotics are promising directions.

Check out the Paper. All credit for this research goes to the researchers on this project.

The post Meet Eureka: A Human-Level Reward Design Algorithm Powered by Large Language Models (LLMs) appeared first on MarkTechPost.