ICSE 2026
Sun 12 - Sat 18 April 2026 Rio de Janeiro, Brazil

Large Language Models (LLMs) show great promise in software engineering tasks such as Fault Localization (FL) and Automatic Program Repair (APR). This study examines how input order and context size affect LLM performance in FL, a key step for many downstream software engineering tasks. We test different method orderings, quantified by their Kendall Tau distance from the ground truth, including a “perfect” order (ground-truth faulty methods first) and a “worst” order (ground-truth faulty methods last), on two benchmarks comprising Java and Python projects. Our results reveal a strong order bias: when we reverse the code order, Top-1 FL accuracy drops from 57% to 20% on Java projects and from 38% to roughly 3% on Python projects. Breaking inputs into smaller contexts mitigates this bias, narrowing the performance gap from 22% and 6% to just 1% on both benchmarks. We then investigate whether the order bias stems from data leakage by renaming methods with more meaningful alternative names; the trend remains consistent, suggesting the bias is not due to data leakage. We also examine orderings derived from traditional FL techniques and metrics. Ordering by DepGraph’s ranking achieves 48% Top-1 accuracy, outperforming simpler orderings such as CallGraph_DFS. These findings underscore the importance of input structure, context management, and ordering strategy for improving LLM performance in FL and other software engineering tasks.
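For concreteness, the sketch below shows how Kendall Tau distance characterizes how far a candidate method ordering sits from the ground-truth ranking; the method names are hypothetical placeholders, and the paper's actual implementation may differ.

    from itertools import combinations

    def kendall_tau_distance(order_a, order_b):
        """Count pairs of items ranked in opposite relative order."""
        pos_a = {m: i for i, m in enumerate(order_a)}
        pos_b = {m: i for i, m in enumerate(order_b)}
        discordant = 0
        for x, y in combinations(order_a, 2):
            if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) < 0:
                discordant += 1
        return discordant

    # Hypothetical candidate set: the "perfect" order puts the
    # ground-truth faulty method first; the "worst" order reverses it.
    methods = ["m_faulty", "m2", "m3", "m4"]
    perfect = methods
    worst = list(reversed(methods))
    print(kendall_tau_distance(perfect, worst))  # maximal: n*(n-1)/2 = 6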
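The context-splitting idea can be sketched as follows, assuming a hypothetical rank_chunk callable that sends one chunk of methods to the LLM and returns per-method suspiciousness scores; the chunk size is illustrative, not the study's actual context size. Because each chunk is ranked independently, a method's final rank no longer depends on where it sat in the full input.

    def localize_in_chunks(methods, rank_chunk, chunk_size=10):
        """Rank candidate methods chunk by chunk, then merge scores.

        rank_chunk: hypothetical callable mapping a list of methods
        to {method_name: suspiciousness_score}.
        """
        scores = {}
        for i in range(0, len(methods), chunk_size):
            chunk = methods[i:i + chunk_size]
            scores.update(rank_chunk(chunk))
        # Sort all methods by merged score, most suspicious first.
        return sorted(scores, key=scores.get, reverse=True)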
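A CallGraph_DFS-style ordering can be illustrated as a plain depth-first traversal over a call graph given as an adjacency dict; this is a generic sketch under that assumption, not necessarily the exact procedure evaluated in the study.

    def callgraph_dfs_order(call_graph, entry):
        """Depth-first traversal of {caller: [callees]}, yielding one
        candidate method ordering for the LLM prompt."""
        order, seen = [], set()
        stack = [entry]
        while stack:
            m = stack.pop()
            if m in seen:
                continue
            seen.add(m)
            order.append(m)
            # Push callees reversed so they are visited in listed order.
            stack.extend(reversed(call_graph.get(m, [])))
        return order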