Optimizing decision making in concolic execution using reinforcement learning
This paper presents an improvement to a new open-source testing tool capable of performing concolic execution on x86 binaries. The novelty lies in using a reinforcement learning solution that reduces the number of symbolically executed states. It does so by learning a set of models that predict how efficient it would be to change the conditions at various branch points. Thus, we first reinterpret the state-of-the-art concolic execution algorithm as a typical reinforcement learning environment, then we build estimation models used to prune states that do not look promising. The architecture of the base model is a Deep Q-Network combined with an LSTM that captures patterns in the ordered set of branch points (the path) resulting from executing the application under test with different inputs generated at runtime (experiments). Various reward functions can provide automatic feedback from the concolic execution environment and thus define different policies. These are customizable in our open-source implementation, so that users can define their own test targets.
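The abstract does not give implementation details, so the following is only an illustrative sketch of the kind of model it describes: an LSTM encodes the ordered branch points of an execution path and a Q-head scores how promising it would be to negate each branch condition, so that unpromising states can be pruned. The class name `BranchPathQNet`, the feature and layer sizes, and the top-k pruning rule are assumptions for illustration, not the authors' code.

```python
# Hypothetical sketch (not the authors' implementation): an LSTM-based
# Q-network that scores the estimated value of negating the condition at
# each branch point along a concolic execution path.
import torch
import torch.nn as nn

class BranchPathQNet(nn.Module):
    def __init__(self, branch_feature_dim=32, hidden_dim=64):
        super().__init__()
        # LSTM captures patterns over the ordered branch points of a path.
        self.lstm = nn.LSTM(branch_feature_dim, hidden_dim, batch_first=True)
        # Q-head: estimated return of flipping the branch at each step.
        self.q_head = nn.Linear(hidden_dim, 1)

    def forward(self, path_features):
        # path_features: (batch, path_length, branch_feature_dim)
        encoded, _ = self.lstm(path_features)
        return self.q_head(encoded).squeeze(-1)  # (batch, path_length)

# Usage sketch: keep only the top-k branch points whose predicted Q-value
# looks promising; the remaining candidate states are pruned.
if __name__ == "__main__":
    net = BranchPathQNet()
    path = torch.randn(1, 10, 32)          # one path with 10 branch points
    q_values = net(path)
    keep = torch.topk(q_values, k=3, dim=1).indices
    print("branch points selected for negation:", keep.tolist())
```

In this reading, the reward signal (e.g. new code coverage obtained after negating a branch) would be supplied by the concolic execution environment and used to train the Q-network, matching the abstract's statement that different reward functions define different policies.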
Sat 24 Oct (displayed time zone: Lisbon)

16:00 - 17:30 | Session IV: A-MOST at Porto
Chair(s): Florian Lorber (Aalborg University)
A-MOST 2020 is held as a virtual workshop via Zoom. Contact amost2020@easychair.org for the details.

16:00 (30m, full paper) | Model-Based Testing of Read Only Graph Queries (A-MOST)
Leen Lambers (Hasso-Plattner-Institut, Universität Potsdam), Sven Schneider (Hasso-Plattner-Institut, Universität Potsdam), Marcel Weisgut (Hasso-Plattner-Institut, Universität Potsdam)
Link to publication | DOI

16:30 (30m, full paper) | Optimizing decision making in concolic execution using reinforcement learning (A-MOST)
Ciprian Paduraru (University of Bucharest), Alin Stefanescu (University of Bucharest), Miruna Gabriela Paduraru (University of Bucharest)
Link to publication | DOI

17:00 (30m, day closing) | Closing (A-MOST)