Towards Understanding Trust in Self-adaptive Systems
Self-adaptive systems (SAS) can autonomously change their structure and dynamically adapt their behavior to (i) attain long-term system goals and (ii) cope with inevitable, hard-to-anticipate dynamics and changes in their operational environments. As SAS directly or indirectly interact with and affect humans, such degrees of autonomy create the necessity for these systems to be trusted or considered trustworthy. While the notions of ‘trust’ and ‘trustworthiness’ have been investigated for over a decade, particularly by the SEAMS community, trust is a broad concept covering diverse notions and techniques, and there is currently no clear view of the state of the art. To that end, we present the outcomes of an exploratory literature study that clarifies how trust as a foundational concept has been concretized and used in SAS. Based on an analysis of 16 articles from the published SEAMS proceedings that centrally focus on trust, we provide (i) a summary of the diverse quality attributes of SAS influenced by trust, (ii) a clarification of the different participant roles in trust establishment in SAS, and (iii) a summary of the trust qualification and quantification approaches used in the academic literature. As such, this review provides a more holistic view of the current state of the art on attaining trust in the engineering of self-adaptive systems, and identifies research gaps worthy of further investigation.