Corina Pasareanu
Title: Analysis of Perception Neural Networks via Vision-Language Models
Abstract: The analysis of Deep Neural Networks (DNNs), particularly those used as perception modules, is very challenging due to the networks’ complex and opaque decision-making processes. Multi-modal Vision-Language Models (VLMs) such as CLIP offer an exciting opportunity to interpret the representation space of vision models using natural language. VLMs have been trained on a large body of images accompanied by textual descriptions, and are thus implicitly aware of high-level, human-understandable concepts describing the images.
In this talk, we report on ongoing work that seeks to leverage VLMs for the formal analysis and run-time monitoring of requirements expressed in terms of natural-language concepts, as well as for debugging of perception modules.
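As a rough illustration of the general idea (not the talk's actual method), the sketch below uses an off-the-shelf CLIP model to score a perception module's input image against natural-language concept prompts, which could serve as a simple run-time monitor for a concept-level requirement. The model name, the concept prompts, and the threshold are assumptions made purely for this example.

```python
# A minimal sketch (assumed setup, not the authors' method) of using CLIP as a
# run-time concept monitor: score an input image against natural-language
# concept prompts and flag it if the expected concept's score is too low.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Human-understandable concepts the requirement is phrased in (illustrative).
concepts = ["a pedestrian crossing the road",
            "a stop sign",
            "an empty road"]

def monitor(image_path: str, expected_concept: str, threshold: float = 0.5) -> bool:
    """Return True if the expected concept scores above the threshold for the image."""
    image = Image.open(image_path)
    inputs = processor(text=concepts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    probs = outputs.logits_per_image.softmax(dim=-1)[0]  # one score per concept
    score = probs[concepts.index(expected_concept)].item()
    return score >= threshold

# Example: warn if the current camera frame no longer looks like "a stop sign".
# print(monitor("frame_0001.png", "a stop sign"))
```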
Charles Sutton
Title: Can language models understand code?
Abstract: Large language models are driving major changes in software development tools. There are still things they are less good at, such as repairing their own mistakes or generating code given only test cases. I claim that these are tasks that require code understanding, rather than code generation, and code understanding is a task that models are less good at. I’ll talk about our recent research in augmenting LLMs with execution information, in the hope of improving their ability to perform code understanding tasks. This includes (a) asking the model to predict desired “target states” of the program to guide code generation, (b) augmenting broken programs with execution information, to help the model do repair, and (c) predicting program invariants. There is still much to do, so I will also reflect on the role academic research can play in this fast-moving area.
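As a hedged illustration of item (b), augmenting a broken program with execution information before asking a model to repair it, the sketch below traces a buggy Python function on a failing test and packages the trace into a repair prompt. The traced function, the test, and the prompt format are assumptions for illustration, not the speaker's actual system.

```python
# A minimal sketch: collect a line-level execution trace of a buggy function
# and embed it in a repair prompt that could be sent to an LLM.
import sys
import traceback

def buggy_median(xs):
    xs = sorted(xs)
    return xs[len(xs) // 2]          # wrong for even-length lists

def run_with_trace(fn, *args):
    """Run fn, recording local variable values at each executed line."""
    trace_lines = []

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is fn.__code__:
            trace_lines.append(f"line {frame.f_lineno}: locals={dict(frame.f_locals)}")
        return tracer

    sys.settrace(tracer)
    try:
        result, error = fn(*args), None
    except Exception:
        result, error = None, traceback.format_exc()
    finally:
        sys.settrace(None)
    return result, error, trace_lines

result, error, trace = run_with_trace(buggy_median, [1, 2, 3, 4])
prompt = (
    "The following function fails its test. Fix it.\n\n"
    "Test: buggy_median([1, 2, 3, 4]) should return 2.5, but returned "
    f"{result!r} (error: {error}).\n"
    "Execution trace:\n" + "\n".join(trace)
)
print(prompt)   # this augmented prompt would be sent to an LLM for repair
```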
Koushik Sen
Title: Testing with Large Language Models, Symbolic Execution, and Fuzzing
Abstract: Automation has significantly impacted software testing and analysis in the last two decades. Automated testing techniques, such as symbolic execution, concolic testing, and feedback-directed fuzzing, have found numerous critical faults, security vulnerabilities, and performance bottlenecks in mature and well-tested software systems. The key strength of automated techniques is their ability to quickly search state spaces by performing repetitive and expensive computational tasks at a rate far beyond the human attention span and computation speed. In this talk, I will briefly overview our past and recent research contributions in automated test generation using large language models, symbolic execution, program analysis, constraint solving, and fuzzing. We have combined these techniques to find and rescue $11M from DeFi smart contracts.
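For readers unfamiliar with feedback-directed fuzzing, one of the techniques mentioned above, the following is a minimal sketch of a coverage-guided fuzzing loop: inputs that reach new branches are kept in the corpus and mutated further. The toy target program and the manual branch-coverage instrumentation are assumptions made purely for illustration.

```python
# A minimal sketch of feedback-directed (coverage-guided) fuzzing.
import random

def target(data: bytes, coverage: set) -> None:
    """Toy program under test; records which branches it takes."""
    if len(data) > 3:
        coverage.add("len>3")
        if data[0] == ord("F"):
            coverage.add("F")
            if data[1] == ord("U"):
                coverage.add("FU")
                if data[2] == ord("Z"):
                    coverage.add("FUZ")
                    raise ValueError("crash: magic input found")

def mutate(seed: bytes) -> bytes:
    """Randomly flip, insert, or delete one byte of the seed."""
    data = bytearray(seed) or bytearray(b"\x00")
    op = random.choice(["flip", "insert", "delete"])
    i = random.randrange(len(data))
    if op == "flip":
        data[i] = random.randrange(256)
    elif op == "insert":
        data.insert(i, random.randrange(256))
    elif len(data) > 1:
        del data[i]
    return bytes(data)

def fuzz(iterations: int = 100_000) -> None:
    corpus = [b"AAAA"]               # initial seed
    seen_coverage = set()
    for _ in range(iterations):
        candidate = mutate(random.choice(corpus))
        coverage = set()
        try:
            target(candidate, coverage)
        except ValueError as e:
            print("Found crashing input:", candidate, e)
            return
        if not coverage <= seen_coverage:   # new branch reached: keep this input
            seen_coverage |= coverage
            corpus.append(candidate)
    print("No crash found; corpus size:", len(corpus))

if __name__ == "__main__":
    fuzz()
```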
Tue 29 Oct (times in Pacific Time, US & Canada)

08:30 - 09:00  Keynote: ASE Opening
               Vladimir Filkov (University of California at Davis, USA), Baishakhi Ray (Columbia University, New York; AWS AI Lab), Minghui Zhou (Peking University)

09:00 - 10:00  Keynote: Testing with Large Language Models, Symbolic Execution, and Fuzzing
               Koushik Sen (University of California at Berkeley)

10:00 - 10:30  Coffee break (Catering)

Wed 30 Oct

09:00 - 10:00  Keynote: Analysis of Perception Neural Networks via Vision-Language Models
               Corina S. Pasareanu (Carnegie Mellon University Silicon Valley, NASA Ames Research Center)

Thu 31 Oct

09:00 - 10:00  Keynote: Can language models understand code?
               Charles Sutton (Google Research)

10:00 - 10:30  Coffee break (Catering)

12:00 - 13:30  Lunch (Catering)

15:00 - 15:30  Coffee break (Catering)

16:30 - 17:00  Talk: ASE 2024 Closing Ceremony
               Vladimir Filkov (University of California at Davis, USA)