Current Practices for Building LLM-Powered Reasoning Tools Are Ad Hoc—and We Can Do Better
This program is tentative and subject to change.
There is growing excitement about building software verifiers, synthesizers, and other Automated Reasoning (AR) tools by combining traditional symbolic algorithms and Large Language Models (LLMs). Unfortunately, the current practice for constructing such neurosymbolic AR systems is an ad hoc programming model that does not have the strong guarantees of traditional symbolic algorithms, nor a deep enough synchronization of neural networks and symbolic reasoning to unlock the full potential of LLM-powered reasoning. I propose Neurosymbolic Transition Systems as a principled computational model that can underlie infrastructure for building neurosymbolic AR tools. In this model, symbolic state is paired with intuition, and state transitions operate over symbols and intuition in parallel. I argue why this new paradigm can scale logical reasoning beyond current capabilities while retaining the strong guarantees of symbolic algorithms, and I sketch out how the computational model I propose can be reified in a logic programming language.
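To make the proposed model concrete, here is a minimal, hypothetical sketch of a Neurosymbolic Transition System in Python. All names and structure are my own illustration, not taken from the talk: each state pairs a symbolic component (a set of derived facts) with an "intuition" score, a stand-in for guidance an LLM would provide. Intuition orders the search, but only symbolic rule applications extend the state, so the symbolic guarantee is retained even when the intuition is wrong.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    facts: frozenset   # symbolic state: facts derived so far
    intuition: float   # neural guidance score (stubbed here)

def intuition_oracle(facts):
    # Stand-in for an LLM: a real system would query a model here.
    return len(facts) / 3.0

# Symbolic transition rules: premise -> conclusion.
RULES = {"a": "b", "b": "c"}

def step(state):
    # Transitions operate over symbols and intuition in parallel:
    # symbolic rules produce successor states; the oracle re-scores them.
    for premise, conclusion in RULES.items():
        if premise in state.facts and conclusion not in state.facts:
            facts = state.facts | {conclusion}
            yield State(facts, intuition_oracle(facts))

def search(init, goal):
    # Best-first search: intuition picks which state to expand next,
    # but a result is returned only if symbolically derived (sound).
    init_facts = frozenset(init)
    frontier = [State(init_facts, intuition_oracle(init_facts))]
    seen = set()
    while frontier:
        frontier.sort(key=lambda s: -s.intuition)
        state = frontier.pop(0)
        if goal in state.facts:
            return state.facts
        if state.facts in seen:
            continue
        seen.add(state.facts)
        frontier.extend(step(state))
    return None

print(search({"a"}, "c"))  # derives c symbolically via a -> b -> c
```

In a logic-programming reification of this idea, the rule base and derivations would be expressed as Horn clauses rather than a Python dictionary; the sketch only shows the shape of the state and the parallel symbolic/neural transition.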
I am a computer science postdoc at the University of Melbourne supervised by Toby Murray. I work in the areas of programming languages and automated formal methods, with a current focus on automatically proving security properties about binary code.
I earned a PhD in computer science at Harvard University, where I was advised by Stephen Chong. My PhD research focused on combining logic programming and constraint solving, with applications to program analysis and synthesis.
Wed 15 Oct (displayed time zone: Perth)
Session: 16:00 - 17:40

16:00 (15m, Talk): Vibe Coding Needs Vibe Reasoning – Improving Vibe Coding with Formal Verification. LMPL.
16:15 (15m, Talk): Current Practices for Building LLM-Powered Reasoning Tools Are Ad Hoc—and We Can Do Better. LMPL. Aaron Bembenek (The University of Melbourne). Pre-print available.
16:30 (15m, Talk): Composable Effect Handling for Programming LLM-integrated Scripts. LMPL. Di Wang (Peking University). Pre-print available.
16:45 (15m, Talk): The LLM Era Demands Natural-Language-Aligned Theorem Provers for Mathematics. LMPL. Qinxiang Cao, Lihan Xie, and Junchi Yan (Shanghai Jiao Tong University).
17:00 (15m, Talk): Programming Large Language Models with Algebraic Effect Handlers and the Selection Monad. LMPL. Shangyin Tan (University of California, Berkeley), Guannan Wei (Tufts University), Koushik Sen (University of California at Berkeley), Matei Zaharia (UC Berkeley).