StreamLLM: Enhancing Constraint Programming with Large Language Model-Generated Streamliners
This paper introduces StreamLLM, a method that uses Large Language Models (LLMs) to generate streamliners for constraint programming. Streamliners narrow the search space to improve the efficiency of solving complex problems, but they typically require extensive manual design or exhaustive testing. StreamLLM instead leverages LLMs to propose effective streamliners dynamically, incorporating real-time feedback and empirical tests within the MiniZinc modeling language. Evaluated across six diverse constraint satisfaction problems, StreamLLM demonstrates substantial runtime reductions, reaching up to 99% in some cases. This work highlights the potential of combining symbolic reasoning with machine learning techniques to enhance constraint-solving speed and adaptability.
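The propose-and-test loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the candidate constraints, and the stubbed evaluator are all hypothetical stand-ins for an actual LLM call and actual timed MiniZinc solver runs.

```python
def select_streamliner(candidates, evaluate):
    """Pick the streamliner with the lowest measured runtime among those
    that keep every training instance satisfiable; None if none qualifies.
    `evaluate` stands in for splicing the constraint into a MiniZinc model
    and timing the solver."""
    best, best_time = None, float("inf")
    for cand in candidates:
        sat, runtime = evaluate(cand)
        if sat and runtime < best_time:
            best, best_time = cand, runtime
    return best

# Illustrative LLM-proposed candidates (MiniZinc constraint snippets).
candidates = [
    "constraint forall(i in 1..n-1)(x[i] <= x[i+1]);",  # symmetry break
    "constraint x[1] = 1;",                             # fix a variable
    "constraint sum(x) = 0;",                           # too restrictive
]

# Stubbed evaluator standing in for real empirical tests: (satisfiable?,
# mean runtime in seconds on training instances).
def fake_evaluate(cand):
    results = {
        candidates[0]: (True, 0.4),
        candidates[1]: (True, 1.2),
        candidates[2]: (False, 0.1),  # prunes away all solutions
    }
    return results[cand]

best = select_streamliner(candidates, fake_evaluate)
print(best)  # the symmetry-breaking candidate: fastest and still satisfiable
```

In the real system the feedback would flow back to the LLM, so that failed or slow candidates inform the next round of proposals.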
Sat 3 May · Displayed time zone: Eastern Time (US & Canada)
14:00 - 15:30 | Session 3: Keynote by Asim Munawar and Paper Presentation (NSE) at 215
Chair(s): Sona Ghahremani (Hasso Plattner Institute, University of Potsdam), Ruben Ruiz-Torrubiano (IMC Krems University of Applied Sciences)

14:00 (60m, Keynote) | Reasoning Revolution: Cracking the Code of LLM Intelligence (NSE)

15:00 (30m, Talk) | StreamLLM: Enhancing Constraint Programming with Large Language Model-Generated Streamliners (NSE)
Florentina Voboril (TU Wien), Vaidyanathan Peruvemba Ramaswamy (TU Wien), Stefan Szeider (Vienna University of Technology, TU Wien)