Closing the Gap between Sensor Inputs and Driving Properties: A Scene Graph Generator for CARLA
The software engineering community has increasingly taken up the task of assuring safety in autonomous driving systems (ADS), applying software engineering principles to create techniques for developing, validating, and verifying these systems. However, developing and analyzing such techniques requires extensive sensor data sets and execution infrastructure with the relevant features and \textit{known semantics} for the task at hand. While the community has invested substantial effort in gathering and curating large-scale data sets and in building simulation infrastructure with varying features, semantic understanding of this data has remained out of reach; researchers have instead relied on limited, manually crafted data sets or bespoke simulation environments to ensure the desired semantics are met.
To address this, we developed CarlaSGG, a plugin for the widely used ADS simulator CARLA that extracts relevant ground-truth spatial and semantic information from the simulator state at runtime in the form of \textit{scene graphs}, enabling automatic online and post-hoc reasoning about the semantics of a scenario and its associated sensor data. The tool has been successfully deployed in the evaluations of several prior software engineering approaches, which we describe to demonstrate its utility. The precision of the semantic information captured in the scene graph can be adjusted by the client application to suit the needs of the implementation. We provide a detailed description of the tool’s design, capabilities, and configurations, with additional documentation accompanying the tool’s online source: https://github.com/less-lab-uva/carla_scene_graphs.
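To make the intended workflow concrete, the following is a minimal sketch of how a client application might attach a scene-graph extractor to a running CARLA simulation. The \texttt{carla.Client} calls and synchronous-mode settings are part of the standard CARLA Python API; the \texttt{SceneGraphExtractor} class, its \texttt{detail} parameter, and the \texttt{extract} method are hypothetical placeholders for the plugin's interface, not its actual API — consult the repository documentation for the real entry points.

\begin{verbatim}
import carla

# Hypothetical import: the actual module and class names may differ;
# see github.com/less-lab-uva/carla_scene_graphs for the real interface.
from carla_scene_graphs import SceneGraphExtractor

# Connect to a running CARLA server (standard CARLA Python API).
client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()

# Run the simulator in synchronous mode so that graph extraction
# aligns with simulation steps (standard CARLA settings).
settings = world.get_settings()
settings.synchronous_mode = True
settings.fixed_delta_seconds = 0.05
world.apply_settings(settings)

# Hypothetical: configure how much semantic detail the generated
# scene graphs carry, standing in for the tool's precision settings.
extractor = SceneGraphExtractor(world, detail='full')

for _ in range(100):
    world.tick()
    # Hypothetical call: extract a ground-truth scene graph for this
    # frame, relating entities (ego, vehicles, pedestrians, lanes) by
    # spatial and semantic edges, enabling online checks of driving
    # properties against the current simulator state.
    graph = extractor.extract()
\end{verbatim}

Because each graph in a workflow like this is derived from ground-truth simulator state rather than from perception output, a client can treat its labels as known semantics when evaluating driving properties online or when annotating recorded sensor data post hoc.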