Leveraging Natural-language Requirements for Deriving Better Acceptance Criteria from Models
In many software and systems development projects, analysts specify requirements using a combination of modeling and natural language (NL). In such situations, systematic acceptance testing poses a challenge because defining the acceptance criteria (AC) to be met by the system under test has to account not only for the information in the (requirements) model but also for that in the NL requirements. In other words, neither models nor NL requirements on their own provide a complete picture of the information content relevant to AC. Our work in this paper is prompted by the observation that reconciling the information content of NL requirements and models is necessary for obtaining precise AC. We perform this reconciliation by devising an approach that automatically extracts AC-related information from NL requirements and helps modelers enrich their model with the extracted information. An existing AC derivation technique is then applied to the model enriched with the information extracted from the NL requirements. Using a real case study from the financial domain, we evaluate the usefulness of the AC-related model enrichments recommended by our approach. Our evaluation results are very promising: over our case study system, a group of five domain experts found 89% of the recommended enrichments relevant to AC and yet absent from the original model (a precision of 89%). Furthermore, the experts could not pinpoint any additional information in the NL requirements that was relevant to AC but had not already been brought to their attention by our approach (a recall of 100%).
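To make the extraction step more concrete, the following is a minimal sketch of how AC-related information might be pulled from NL requirements; it assumes a simple cue-phrase heuristic over parsed sentences, and the function name extract_ac_candidates and the cue list AC_CUES are illustrative assumptions, not the actual extraction pipeline evaluated in the paper.

```python
# Minimal sketch: flag requirement sentences likely to carry AC-relevant
# conditions or obligations, as candidates for model enrichment.
# This is a hypothetical heuristic, not the paper's extraction technique.
import spacy

nlp = spacy.load("en_core_web_sm")

# Cue words that often signal conditions and obligations in requirements.
# This list is an assumption chosen for illustration.
AC_CUES = {"if", "when", "unless", "until", "shall", "must", "verify"}

def extract_ac_candidates(requirement_text: str) -> list[str]:
    """Return sentences containing condition/obligation cues, as candidate
    AC-related information to recommend to the modeler."""
    doc = nlp(requirement_text)
    return [
        sent.text.strip()
        for sent in doc.sents
        if any(tok.lower_ in AC_CUES for tok in sent)
    ]

# Example usage on a requirement in the style of the financial domain.
req = ("The system shall reject a transaction "
       "if the account balance is insufficient.")
print(extract_ac_candidates(req))
```

In practice, each flagged sentence would still be reviewed by a modeler before the corresponding enrichment is added to the model, which is consistent with the recommendation-based workflow described above.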