A Learning Approach to Enhance Assurances for Real-Time Self-Adaptive Systems
The assurance of real-time properties is prone to context variability. Providing such assurance at design time would require checking all possible context and system variations, or predicting which ones will actually occur. Neither case is viable in practice, since there are too many possibilities to foresee. Moreover, the knowledge required to fully provide assurance for self-adaptive systems becomes available only at runtime and is therefore difficult to predict at early development stages. Despite all the efforts on assurances for self-adaptive systems at design time and runtime, there is still a gap in verifying and validating real-time constraints while accounting for context variability. To fill this gap, we propose a method to provide assurance of self-adaptive systems, at design time and runtime, with a special focus on real-time constraints. We combine off-line requirements elicitation and model checking with on-line data collection and data mining to guarantee the system’s goals, both functional and non-functional, with fine-tuning of the adaptation policies towards the optimization of quality attributes. We experimentally evaluate our method on a simulated prototype of a Body Sensor Network (BSN) system implemented in OpenDaVINCI. The results of the validation are promising and show that our method is effective in providing evidence that supports the provision of assurance.
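The abstract describes combining on-line data collection with fine-tuning of adaptation policies against real-time goals. A minimal, purely illustrative sketch of such a runtime loop is shown below; the sensor model, parameter names, deadline, and thresholds are all hypothetical assumptions, not details taken from the paper.

```python
import random

# Illustrative (hypothetical) real-time goal for a BSN-like node.
DEADLINE_MS = 100.0      # assumed response-time deadline
MISS_RATE_GOAL = 0.05    # assumed acceptable deadline-miss rate

def collect_response_times(sampling_rate_hz, n=1000, seed=0):
    """Simulate on-line collection of response times from a sensor node.

    Toy model: higher sampling rates load the node and lengthen
    response times. Real data would come from runtime monitoring.
    """
    rng = random.Random(seed)
    base = 60.0 + 0.5 * sampling_rate_hz
    return [rng.gauss(base, 15.0) for _ in range(n)]

def miss_rate(samples):
    """Fraction of collected samples that missed the deadline."""
    return sum(t > DEADLINE_MS for t in samples) / len(samples)

def tune_policy(sampling_rate_hz, step_hz=5.0, max_iters=20):
    """Fine-tune one adaptation-policy parameter (the sampling rate)
    until the observed deadline-miss rate meets the real-time goal."""
    for _ in range(max_iters):
        rate = miss_rate(collect_response_times(sampling_rate_hz))
        if rate <= MISS_RATE_GOAL:
            break
        sampling_rate_hz -= step_hz
    return sampling_rate_hz

tuned = tune_policy(60.0)
```

In the paper's setting, the goal itself would be established off-line (requirements elicitation and model checking), while a loop of this shape would only adjust policy parameters within the verified envelope using mined runtime data.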
Session: Tue 29 May, 16:00 - 17:10 (Amsterdam/Berlin/Rome/Stockholm/Vienna time zone)
Arthur Rodrigues (University of Brasília), Ricardo Caldas (University of Brasília), Genaína Rodrigues (University of Brasília), Thomas Vogel (Humboldt-Universität zu Berlin), Patrizio Pelliccione (University of Gothenburg & Chalmers University of Technology). Pre-print available.