HawkEyes: Spotting and Evading Instruction Disalignments of LLMs
LLM agents have been demonstrated to be powerful in vision-language planning (VLP) tasks. However, they often struggle with sequential VLP tasks, particularly in adhering to the instructions given in prompts, which limits their overall efficacy. To mitigate such instruction disalignments, this paper proposes HawkEyes, an LLM-based approach that enables any given LLM agent to self-identify and self-avoid instruction disalignments. Instead of altering the intrinsic mechanism of LLM agents, HawkEyes operates externally on their input and output sequences. Specifically, HawkEyes uses LLMs to decompose the instructions in the agent's workflow into primitive constraints, creates oracles to detect disalignments with these constraints, and synthesizes avoidance actions to preempt potential disalignments. This paper also demonstrates the application of HawkEyes to three state-of-the-art LLM agents, assessing its effectiveness on two challenging VLP tasks: WebShop and MoTIF. Evaluation results show that HawkEyes significantly boosts the performance of LLM agents across various agents and tasks. Notably, HawkEyes doubles the success rate of LLM-Planner, a state-of-the-art LLM agent dedicated to sequential VLP, from 17.2% to 34.5% on the MoTIF dataset, showcasing its ability to make LLM planning more flexible and effective in sequential VLP scenarios.
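The external detect-and-avoid loop described in the abstract can be sketched as a wrapper around an agent's proposed action. This is an illustrative assumption of the pipeline's shape only, not the paper's implementation: all names (`Constraint`, `decompose`, `guard`), the hard-coded rule, and the fix-synthesis lambda are hypothetical stand-ins for what the paper delegates to LLMs.

```python
# Hypothetical sketch of a HawkEyes-style external wrapper: decompose an
# instruction into primitive constraints, check the agent's proposed action
# against each constraint's oracle, and substitute a synthesized avoidance
# action when a constraint would be violated.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Constraint:
    """A primitive constraint extracted from the task instruction."""
    description: str
    check: Callable[[str], bool]  # oracle: does the action satisfy it?

def decompose(instruction: str) -> List[Constraint]:
    # The real system would prompt an LLM to produce these; here one
    # example rule for a WebShop-style shopping instruction is hard-coded.
    return [
        Constraint(
            description="stay under the stated budget",
            check=lambda a: "buy" not in a or "overpriced" not in a,
        )
    ]

def guard(instruction: str, proposed_action: str,
          synthesize_fix: Callable[[Constraint, str], str]) -> str:
    """Pass the action through, or replace it if any oracle fails."""
    for c in decompose(instruction):
        if not c.check(proposed_action):
            return synthesize_fix(c, proposed_action)
    return proposed_action

# Usage: a violating action is corrected before reaching the environment.
fix = lambda c, a: f"search for a cheaper alternative (violated: {c.description})"
print(guard("buy a mug under $10", "buy overpriced mug", fix))
print(guard("buy a mug under $10", "buy mug for $8", fix))
```

The key design point the abstract emphasizes is that this logic sits entirely outside the agent: only the input instruction and output action stream are inspected, so no agent internals need to change.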
Sat 20 Apr (all times in the Lisbon time zone)
16:00 - 17:30 | Session 4: Full Papers + Award & Closing | LLM4Code | Room: Luis de Freitas Branco | Chair(s): Prem Devanbu (University of California at Davis)
16:00 (10m) Talk | Investigating the Proficiency of Large Language Models in Formative Feedback Generation for Student Programmers | LLM4Code | Smitha S Kumar (Heriot-Watt University, UAE), Michael Lones (Heriot-Watt University, UK), Manuel Maarek (Heriot-Watt University), Hind Zantout (Heriot-Watt University, UAE) | Pre-print
16:10 (10m) Talk | Tackling Students' Coding Assignments with LLMs | LLM4Code | Pre-print
16:20 (10m) Talk | Applying Large Language Models to Enhance the Assessment of Parallel Functional Programming Assignments (Best Presentation Award) | LLM4Code | Skyler Grandel (Vanderbilt University), Douglas C. Schmidt (Vanderbilt University), Kevin Leach (Vanderbilt University) | Pre-print
16:30 (10m) Talk | An Empirical Study on Usage and Perceptions of LLMs in a Software Engineering Project | LLM4Code | Sanka Rasnayaka (National University of Singapore), Wang Guanlin (National University of Singapore), Ridwan Salihin Shariffdeen (National University of Singapore), Ganesh Neelakanta Iyer (National University of Singapore) | Pre-print
16:40 (10m) Talk | LLMs for Relational Reasoning: How Far are We? | LLM4Code | Zhiming Li (Nanyang Technological University, Singapore), Yushi Cao (Nanyang Technological University), Xiufeng Xu (Nanyang Technological University), Junzhe Jiang (Hong Kong Polytechnic University), Xu Liu (North Carolina State University), Yon Shin Teo (Continental Automotive Singapore Pte. Ltd.), Shang-Wei Lin (Nanyang Technological University), Yang Liu (Nanyang Technological University) | Pre-print
16:50 (10m) Talk | HawkEyes: Spotting and Evading Instruction Disalignments of LLMs | LLM4Code | Dezhi Ran (Peking University), Zihe Song (University of Texas at Dallas), Wenhan Zhang (Peking University), Wei Yang (University of Texas at Dallas), Tao Xie (Peking University)
17:00 (10m) Talk | Semantically Aligned Question and Code Generation for Automated Insight Generation (Best Paper Award) | LLM4Code | Ananya Singha (Microsoft), Bhavya Chopra (Microsoft), Anirudh Khatry (Microsoft), Sumit Gulwani (Microsoft), Austin Henley (University of Tennessee), Vu Le (Microsoft), Chris Parnin (Microsoft), Mukul Singh (Microsoft), Gust Verbruggen (Microsoft) | Pre-print
17:10 (20m) Day closing | Award & Closing | LLM4Code