Mon 15 May (displayed time zone: Hobart)
09:00 - 10:30 | Session 1 (Research Track), Meeting Room 111
Chair(s): Mattia Fazzini (University of Minnesota), Jacques Klein (University of Luxembourg), Li Li (Beihang University), Lili Wei (McGill University)
- 09:00, 30m, Talk: Welcome [Research Track]
- 09:30, 60m, Talk: When AI Meets Mobile App Testing: Getting There in Industrial Cases [Research Track]. Tao Xie (Peking University)
11:00 - 12:30
- 11:00, 25m, Paper: Analysis of Library Dependency Networks of Package Managers Used in iOS [Research Track]. Kristiina Rahkema (University of Tartu), Dietmar Pfahl (University of Tartu), Rudolf Ramler (Software Competence Center Hagenberg). Pre-print available.
- 11:25, 25m, Paper: FirmwareDroid: Towards Automated Static-Analysis of Pre-Installed Android Apps [Research Track]. Thomas Sutter (Zurich University of Applied Science/University of Zurich), Bernhard Tellenbach (Armasuisse Cyber-Defence Campus)
- 11:50, 15m, Paper: Vulnerability Propagation in Package Managers Used in iOS Development [Research Track]. Pre-print available.
- 12:05, 10m, Talk: Q&A [Research Track]
13:45 - 15:15
- 13:45, 25m, Paper: Understanding the Impact of Fingerprinting in Android Hybrid Apps [Research Track]. Abhishek Tiwari (University of Passau, Germany), Jyoti Prakash (University of Passau), Alimerdan Rahimov (University of Passau, Germany), Christian Hammer (University of Passau)
- 14:25, 40m, Talk: Leaders Forum Talk: Towards Data-Driven Mobile App Visual Testing [Research Track]. Chunyang Chen (Monash University)
- 15:05, 10m, Talk: Q&A [Research Track]
15:45 - 17:15 | Session 4 (Research Track / Tools and Datasets), Meeting Room 111
Chair(s): Xiaoyu Sun (Australian National University, Australia)
- 15:45, 25m, Paper: Native vs Web Apps: Comparing the Energy Consumption and Performance of Android Apps and their Web Counterparts [Research Track]. Ruben Horn, Abdellah Lahnaoui, Edgardo Reinoso, Sicheng Peng, Vadim Isakov, Tanjina Islam, Ivano Malavolta (all Vrije Universiteit Amsterdam). Pre-print available.
- 16:10, 15m, Paper: Ebserver: Automating Resource-Usage Data Collection of Android Applications [Tools and Datasets]. Wellington de Oliveira Júnior (University of Lisbon), Bernardo de Moraes Santana Júnior, Fernando Castor (Utrecht University & Federal University of Pernambuco), João Paulo Fernandes (LIACC, Universidade do Porto, Porto, Portugal)
- 16:25, 40m, Talk: Leaders Forum Talk: Discovering Requirements Using the App Store: when automation is not enough [Research Track]. Paola Spoletini (Kennesaw State University)
- 17:05, 10m, Talk: Q&A [Research Track]
Tue 16 May (displayed time zone: Hobart)
09:00 - 10:30 | Session 5 (Research Track), Meeting Room 111
Chair(s): Mattia Fazzini (University of Minnesota), Jacques Klein (University of Luxembourg), Li Li (Beihang University), Lili Wei (McGill University)
- 09:30, 60m, Talk: Why vulnerability analysis for Android needs to change fundamentally [Research Track]. Steven Arzt (Fraunhofer SIT; ATHENE)
11:00 - 12:30 | Session 6 (Research Track / Tools and Datasets), Meeting Room 111
Chair(s): Mattia Fazzini (University of Minnesota), Jacques Klein (University of Luxembourg), Li Li (Beihang University), Lili Wei (McGill University)
- 11:00, 20m, Talk: Awards [Research Track]
- 11:21, 29m, Talk: Achieving Energy Efficiency in Mobile Applications: Insights from our Most Influential Paper [Research Track]. Luís Cruz (Delft University of Technology)
- 11:50, 25m, Paper: Reducing the Impact of Breaking Changes to Web Service Clients During Web API [Research Track]. Paul Schmiedmayer (Technical University of Munich), Andreas Bauer (Technical University of Munich), Bernd Bruegge (TU Munich)
- 12:15, 15m, Paper: Issue-Labeler: an ALBERT-based Jira Plugin for Issue Classification [Tools and Datasets]. Waleed Alhindi (Prince Mohammad Bin Fahd University), Abdulrahman Aleid (Prince Mohammad Bin Fahd University), Ilyes Jenhani (Prince Mohammad Bin Fahd University), Mohamed Wiem Mkaouer (Rochester Institute of Technology)
13:45 - 15:15
- 13:45, 15m, Paper: Sensitive and Personal Data: What Exactly Are You Talking About? [NIER (Novel Ideas and Emerging Results)]. Maria Kober, Jordan Samhi (University of Luxembourg), Steven Arzt (Fraunhofer SIT; ATHENE), Tegawendé F. Bissyandé (SnT, University of Luxembourg), Jacques Klein (University of Luxembourg)
- 14:00, 40m, Talk: Leaders Forum Talk: UX is the differential. What can we do as Software Engineers? [Research Track]. Tayana Conte (Universidade Federal do Amazonas)
- 14:40, 30m, Talk: Rising Star: Mining User Interfaces to Support Software Development for Mobile Apps [Research Track]. Kevin Moran (George Mason University)
- 15:10, 5m, Talk: Q&A [Research Track]
15:45 - 17:30
- 15:45, 25m, Paper: Energy-Saving Strategies for Mobile Web Apps and their Measurement: Results from a Decade of Research [Research Track]. Benedikt Dornauer (University of Innsbruck; University of Cologne), Michael Felderer (German Aerospace Center (DLR) & University of Cologne). Pre-print available.
- 16:10, 15m, Paper: On Security and Energy Efficiency in Android Smartphones [Research Track]. João Ferreira da Silva Júnior, Bernardo Santos (University of Porto, Portugal), Wellington de Oliveira Júnior (University of Lisbon), Nuno Antunes (Universidade de Coimbra), Bruno Cabral, João Paulo Fernandes (LIACC, Universidade do Porto, Porto, Portugal)
- 16:25, 50m, Talk: Leaders Forum Talk: Automated Test Reuse of GUI Tests across Similar Android Apps: Opportunities and Challenges [Research Track]. Valerio Terragni (University of Auckland)
- 17:15, 10m, Talk: Q&A [Research Track]
- 17:25, 5m, Talk: Closing [Research Track]
Accepted Papers
- Ebserver: Automating Resource-Usage Data Collection of Android Applications [Tools and Datasets]
- Issue-Labeler: an ALBERT-based Jira Plugin for Issue Classification [Tools and Datasets]
Call for Papers
Introduction
The Tool Demos and Dataset track welcomes submissions on reusable tools and datasets from either practice or research. Submitted tools and datasets should showcase novel, practical contributions that help software architects, designers, researchers, and engineers jumpstart their own workflows and reproduce earlier work. Authors of each accepted submission will give a short presentation of their solution, followed by a detailed demo session.
Both categories may range from early prototypes to in-house or pre-commercialized products. Authors of regular MOBILESoft papers are also welcome to submit an accompanying MOBILESoft Tool Demos and Dataset paper by adding information about the actual demo. Each contribution must be submitted as a short paper of up to four (4) pages (including all references and appendices).
Dataset Showcase
Dataset showcase submissions are expected to include:
- A description of the data source;
- A description of the methodology used to gather the data (including provenance and the tool used to create/generate/gather the data, if any);
- A description of the storage mechanism, including a schema if applicable;
- If the data has been used by the authors or others, a description of how this was done, including references to previously published papers;
- A description of the originality of the dataset (even if the dataset has been used in a published paper, its complete description must be unpublished) and of similar existing datasets, if any;
- A description of the design of any accompanying tool and how to use it in practice;
- Ideas for further research questions that could be answered using the dataset;
- Ideas for further improvements that could be made to the dataset; and
- Any limitations and/or challenges in creating or using the dataset.
Tool Demo Showcase
Tool demo showcase submissions are expected to include:
- A description of the tool, covering its background, motivation, novelty, overall architecture, detailed design, and preliminary evaluation, as well as a link to download or access the tool;
- A description of the design of the tool and how to use it in practice;
- Clear installation instructions and an example dataset that allow the reviewers to run the tool;
- If the tool has been used by the authors or others, a description of how the tool was used, including references to previously published papers, and ideas for future reusability of the tool; and
- Any limitations of using the tool.
Formatting and Submission Instructions
All submissions must conform to the MOBILESoft 2023 formatting and submission instructions available at https://www.acm.org/publications/proceedings-template for both LaTeX and Word users. LaTeX users must use the provided acmart.cls and ACM-Reference-Format.bst without modification, enable the conference format in the preamble of the document (i.e., \documentclass[sigconf,review,anonymous]{acmart}), and use the ACM reference format for the bibliography (i.e., \bibliographystyle{ACM-Reference-Format}). The review option adds line numbers, allowing referees to refer to specific lines in their comments. The conference information can be set using the following command: \acmConference[MobileSoft’23]{The 10th International Conference on Mobile Software Engineering and Systems}{Melbourne, VIC, Australia}.
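For orientation, the commands above can be combined into a minimal document skeleton along the following lines. This is only a sketch: the title, author, and bibliography file name are placeholders, not part of the official template, and the full ACM template documentation remains authoritative.

% Minimal sketch of a submission skeleton combining the settings above.
% Title, author, and the references.bib file name are placeholders.
\documentclass[sigconf,review,anonymous]{acmart}
\acmConference[MobileSoft’23]{The 10th International Conference on
  Mobile Software Engineering and Systems}{Melbourne, VIC, Australia}

\begin{document}
\title{Your Tool or Dataset Paper}  % placeholder title
\author{Anonymous Author}           % suppressed in output by the anonymous option
\maketitle

% ... main text (up to 4 pages) ...

\bibliographystyle{ACM-Reference-Format}
\bibliography{references}           % placeholder .bib file name
\end{document}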
All submissions to the Tool Demo and Dataset track must not exceed 4 pages for the main text, inclusive of all figures, tables, appendices, etc. An extra page is allowed for references. All submissions must be in PDF. The page limit is strict, and it will not be possible to purchase additional pages at any point in the process (including after the paper is accepted).
Papers that do not conform to these guidelines will be desk rejected before the review process.
Submissions may be made through HotCRP at: https://tools-datasets-mobilesoft-2023.hotcrp.com/
The Tool Demo and Dataset track at MOBILESoft 2023 will adopt a double-anonymous review process. No submitted paper may reveal its authors’ identities. The authors must make every effort to honor the double-anonymous review process, and reviewers will be asked to do the same as far as possible. Any author with further questions on double-anonymous reviewing is encouraged to contact the track’s program co-chairs by email. Any submission that does not comply with the double-anonymous review process will be desk-rejected. Further advice, guidance, and explanation about the double-anonymous review process can be found on the ICSE Q&A page: https://conf.researchr.org/track/icse-2023/icse-2023-submitting-to-icse2023--q-a
Review and Evaluation Criteria
Each submission will be reviewed by three members of the program committee. The main evaluation criteria include:
- The value, usefulness, and reusability of the tools and/or datasets.
- The quality of the presentation.
- The clarity of the relation to related work and the relevance to mobile software engineering.
- The availability of tools and/or datasets.
Tools and Datasets Availability
To promote replicability and to disseminate the advances achieved with the tools and datasets, we strongly encourage authors to make their tools and datasets publicly available for download and use, ideally under an open-source software license. The tool/dataset should be made available at the time the paper is submitted for review, but will be treated as confidential until the publication of the paper. Whenever the tool and/or dataset cannot be made publicly available, the paper must include a clear explanation of why this was not possible.
The tool/dataset should include detailed instructions on how to set up the environment (e.g., a requirements.txt file) and how to use the tool/dataset (e.g., how to use the tool with a running example, how to import the data, or how to access the data once it has been imported).
To further increase the visibility of the presented tools and datasets, we require all authors of accepted papers to produce a screencast presenting their tool or dataset. Accepted papers will be linked with their accompanying screencasts on the Tool Demonstration and Dataset track website.