On-the-Fly Input Adaptation for Reliable Code Intelligence
Code language models (CLMs) play a central role in software engineering across both generation and classification tasks. However, these models still exhibit notable mispredictions in real-world applications, even when trained on up-to-date data. Existing solutions address this by retraining the model, modifying its architecture, or re-engineering prompts. These approaches incur high computational cost, require substantial effort in data labeling, model updating, and redeployment, and often suffer from poor generalization across tasks and tuning instability across models. This work proposes an alternative strategy based on on-the-fly input adaptation, which improves model behavior without altering its parameters or requiring additional supervision. The method consists of two stages: input validation, which detects inputs likely to cause mispredictions, and input adaptation, which transforms them using syntax- and semantics-preserving operations to better align with the model's learned behavior. This dual strategy reduces mispredictions across diverse code understanding tasks, boosting model performance without retraining. As a scalable and resource-efficient solution, this framework holds significant promise for high-stakes applications in software engineering where reliability is critical.
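The two-stage pipeline described above can be sketched in a few lines. The sketch below is a minimal illustration, not the paper's actual implementation: it uses the model's own prediction confidence as a stand-in validator (an assumption; the real validation criterion is not specified here), and identifier renaming as one example of a semantics-preserving adaptation. `predict_with_adaptation` and its `model` callable are hypothetical names introduced for illustration.

```python
import ast


def rename_locals(code: str, prefix: str = "v") -> str:
    """One example semantics-preserving transformation:
    consistently rename every assigned (locally bound) variable.
    Names that are only read, such as builtins, are left untouched."""
    tree = ast.parse(code)
    assigned = {n.id for n in ast.walk(tree)
                if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Store)}
    mapping = {name: f"{prefix}{i}" for i, name in enumerate(sorted(assigned))}

    class Renamer(ast.NodeTransformer):
        def visit_Name(self, node):
            if node.id in mapping:
                node.id = mapping[node.id]
            return node

    return ast.unparse(Renamer().visit(tree))


def predict_with_adaptation(model, code, threshold=0.9, max_tries=3):
    """Hypothetical driver for the two-stage strategy.
    Stage 1 (validation): flag the input if the model's confidence is low
    (a proxy validator, assumed here for illustration).
    Stage 2 (adaptation): apply a semantics-preserving rewrite and retry."""
    label, conf = model(code)
    tries = 0
    while conf < threshold and tries < max_tries:
        code = rename_locals(code, prefix=f"v{tries}_")  # adapt the input
        label, conf = model(code)                        # re-query, no retraining
        tries += 1
    return label, conf
```

Because the transformation preserves program semantics, any change in the model's output reflects sensitivity to surface form rather than to meaning, which is exactly the behavior the adaptation stage exploits.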