TraceCaps: Inline Provenance and Risk Enforcement for Agentic Software Engineering
This program is tentative and subject to change.
Large Language Model (LLM) agents are increasingly embedded in software engineering (SE) workflows—planning, coding, testing, and CI/CD. Failures are frequent: prompt injection, unsafe tool use, supply-chain contamination, and memory poisoning. Existing defenses—such as static analysers, provenance attestation, and prompt guardrails—are insufficient: they typically audit after the fact, or operate without cryptographic guarantees or runtime enforcement. We propose TraceCaps, a runtime approach that (i) attaches cryptographically verifiable provenance capsules to each agent step (e.g. prompt, memory, tool call), and (ii) computes a monotone, persistent risk score that gates tool actions inline via policy thresholds (allow, warn, block). Capsules hash and sign events, link to parents, and embed risk features; an accumulator prevents “risk laundering” by subsequent benign steps. Early demonstrations on SWE-bench illustrate how TraceCaps can expose unsafe behaviors and apply runtime governance through risk accumulation. To our knowledge, TraceCaps is the first approach to bind provenance and risk into a single cryptographic substrate, pointing toward a shift from passive audit to runtime, enforceable safety in agentic SE workflows.
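The abstract's two mechanisms—hash-linked, signed capsules per agent step and a monotone risk accumulator with allow/warn/block thresholds—can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the capsule fields, the HMAC stand-in for real asymmetric signatures, and the 0.5/0.8 thresholds are all assumptions made for the example.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # placeholder; a deployed system would use asymmetric keys

def make_capsule(parent_hash: str, event: dict, risk_delta: float) -> dict:
    """Build a provenance capsule: serialize the event, link it to its parent
    capsule's hash, then hash and sign the payload (HMAC as a stand-in)."""
    payload = json.dumps(
        {"parent": parent_hash, "event": event, "risk_delta": risk_delta},
        sort_keys=True,
    ).encode()
    digest = hashlib.sha256(payload).hexdigest()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"parent": parent_hash, "event": event,
            "risk_delta": risk_delta, "hash": digest, "sig": sig}

class RiskAccumulator:
    """Monotone, persistent risk score: later benign steps can never lower it,
    which is what prevents 'risk laundering'."""
    def __init__(self, warn: float = 0.5, block: float = 0.8):
        self.score = 0.0
        self.warn, self.block = warn, block

    def update(self, risk_delta: float) -> str:
        # Clamp contributions to be non-negative and cap the score at 1.0,
        # so the score is non-decreasing across the capsule chain.
        self.score = min(1.0, self.score + max(0.0, risk_delta))
        if self.score >= self.block:
            return "block"
        if self.score >= self.warn:
            return "warn"
        return "allow"

# A chain of agent steps: a benign step after a risky one does not reset the score.
acc = RiskAccumulator()
genesis = make_capsule("0" * 64, {"type": "prompt", "text": "fix bug"}, 0.1)
print(acc.update(genesis["risk_delta"]))   # allow
risky = make_capsule(genesis["hash"], {"type": "tool", "cmd": "curl"}, 0.6)
print(acc.update(risky["risk_delta"]))     # warn
benign = make_capsule(risky["hash"], {"type": "memory", "note": "ok"}, 0.0)
print(acc.update(benign["risk_delta"]))    # still warn, not allow
```

The parent-hash link gives each step a verifiable lineage back to the genesis capsule, while the clamped accumulator realizes the "monotone, persistent" property the abstract describes: gating decisions depend on the whole history, not just the most recent step.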
Wed 15 Apr (displayed time zone: Brasilia, Distrito Federal, Brazil)
14:00 - 15:30 | Dependability and Security 2 | Research Track / Journal-first Papers / New Ideas and Emerging Results (NIER) | Oceania X
14:00 | 15m talk | TraceCaps: Inline Provenance and Risk Enforcement for Agentic Software Engineering | New Ideas and Emerging Results (NIER) | Andre Catarino (Faculty of Engineering, University of Porto), Claudia Mamede (Carnegie Mellon University), Rui Melo (Carnegie Mellon University & FEUP), Rui Maranhao Abreu (University of Lisbon)
14:15 | 15m talk | Can LLMs Hack Enterprise Networks? Autonomous Assumed Breach Penetration-Testing Active Directory Networks | Journal-first Papers
14:30 | 15m talk | PenForge: On-the-Fly Expert Agent Construction for Automated Penetration Testing | New Ideas and Emerging Results (NIER) | Huihui Huang (Singapore Management University), Jieke Shi (Singapore Management University), Junkai Chen (Singapore Management University), Ting Zhang (Monash University), Yikun Li (Singapore Management University), Chengran Yang (Singapore Management University), Eng Lieh Ouh (Singapore Management University), Lwin Khin Shar (Singapore Management University), David Lo (Singapore Management University)
14:45 | 15m talk | Evaluating and Improving the Robustness of Security Attack Detectors Generated by LLMs | Journal-first Papers | Samuele Pasini (Università della Svizzera italiana), Jinhan Kim (Università della Svizzera italiana), Tommaso Aiello (SAP Security Research), Rocio Cabrera Lozoya (SAP Security Research), Antonino Sabetta (SAP), Paolo Tonella (USI Lugano)
15:00 | 15m talk | LLM4JMH: Studying the Use of LLMs for Generating Java Performance Microbenchmarks | Research Track | Zongxiong Chen (Fraunhofer FOKUS), Derui Zhu (Technical University of Munich), Kundi Yao (Ontario Tech University), Weiyi Shang (University of Waterloo), Jinfu Chen (Wuhan University), Jiahui Geng (Mohamed bin Zayed University of Artificial Intelligence), Alexander Pretschner (TU Munich), Jens Grossklags (Technical University of Munich), Manfred Hauswirth (Fraunhofer FOKUS), Sonja Schimmler (Fraunhofer FOKUS & TU Berlin)
15:15 | 15m talk | RulePilot: An LLM-Powered Agent for Security Rule Generation | Research Track | Hongtai Wang (National University of Singapore), Ming Xu (Shanghai Jiao Tong University / National University of Singapore), Yanpei Guo (National University of Singapore), Weili Han (Fudan University), Hoon Wei Lim (Cyber Special Ops-R&D, NCS Group), Jin Song Dong (National University of Singapore)