A-MOST will be an online-only event and take place on Saturday 24th October 2020. Details about the schedule and the platform we will use will follow soon.
The increasing complexity, criticality and pervasiveness of software results in new challenges for testing. Model Based Testing (MBT) continues to be an important research area, where new approaches, methods and tools make MBT techniques (for automatic test case generation) more deployable and useful for industry than ever. Following the success of previous editions, the goal of the A-MOST workshop is to bring researchers and practitioners together to discuss state of the art, practice and future prospects in MBT. Topics and sub-topics (not exhaustive):
MODELS
- Models for component, integration and system testing
- Product-line models
- (Hybrid) embedded system models
- Systems-of-systems models
- Architectural models
- Models for orchestration and choreography of services
- Executable models, simulation and model transformations
- Environment and use models
- Non-functional models
- Models for variant-rich and highly configurable systems
- Machine-learning based models
PROCESSES, METHODS AND TOOLS
- Model-based test generation algorithms
- Application of model checking techniques to MBT
- Symbolic execution-based techniques
- Tracing from requirements models to test models
- Performance and predictability of MBT
- Test model evolution during the software life-cycle
- Risk-based approaches for MBT
- Generation of testing infrastructures from models
- Combinatorial approaches for MBT
- Statistical testing
- Non-functional MBT
- Derivation of test models by reverse engineering and machine learning
EXPERIENCES AND EVALUATION
- Estimating dependability (e.g., security, safety, reliability) using MBT
- Coverage metrics and measurements for structural and (non-)functional models
- Cost of testing, economic impact of MBT
- Empirical validation, experiences, case studies using MBT
- The role of MBT in automata learning (model inference, model mining)
- Generating training data for machine learning
- Model-based security testing
- Statistical model checking
This program is tentative and subject to change.
Sat 24 Oct. Times are displayed in time zone: (GMT) Azores.

|09:00 - 09:10|
|09:10 - 10:30|
Ana Paiva (Faculty of Engineering of the University of Porto)
|10:30 - 11:00| Coffee Break
|11:00 - 11:30|
A Tool for the Automatic Generation of Test Cases and Oracles for Simulation Models Based on Functional Requirements
Aitor Arrieta (Mondragon Goi Eskola Politeknikoa), Joseba Andoni Agirre (Universidad Mondragon), Goiuria Sagardui (University of Mondragon)
|11:30 - 12:00|
Shahid Mahmood (Coventry University), Alexy Fouillade (Ecole superieure d'electronique de l'Ouest), Hoang Nga Nguyen (Coventry University), Siraj Ahmed Shaikh (Coventry University)
|12:00 - 12:30|
|12:30 - 14:00| Lunch
|14:00 - 14:30|
Rachid Kherrazi (Akka Technologies)
|14:30 - 15:00|
|15:00 - 15:30|
|15:30 - 16:00| Coffee Break
|16:00 - 16:30|
|16:30 - 17:00|
|17:00 - 17:30|
Papers should not exceed 8 pages for full papers or 4 pages for short experience and position papers, excluding references. This is not a strict limit; if you need more space, contact the chairs. Each submitted paper must conform to the IEEE two-column publication format. Papers will be reviewed by at least three members of the program committee. Accepted papers will be published in the IEEE Digital Library.
The aim of the journal-first category is to further enrich the program of A-MOST, as well as to provide a more flexible path to the publication and dissemination of original research in model-based testing. The published journal paper must meet the following three criteria:
- It should be clearly within the scope of the workshop.
- It should be recent: it should have been accepted and made publicly available in a journal (online or in print) on or after 1 January 2017.
- It should not have been presented at, and must not be under consideration for, journal-first tracks of other conferences or workshops.
The 2-page submission should provide a concise summary of the published journal paper.
Journal-first submissions must be marked as such in the submission’s title, and must explicitly include full bibliographic details (including a DOI) of the journal publication they are based on. Submissions will be judged on the basis of the above criteria, but also considering how well they would complement the workshop’s technical program.