Towards Human-Like Automated Test Generation: Perspectives from Cognition and Problem Solving
Automated testing tools typically create test cases that differ from those human testers write. As a result, the tools are often less effective, the generated tests are harder to understand, and the tools ultimately provide less support to human testers. Here, we propose a framework based on cognitive science and, in particular, an analysis of approaches to problem solving, for identifying the cognitive processes of testers. The framework helps map the test design steps and criteria used in human testing activities, and thus helps us better understand how effective human testers perform their tasks. Ultimately, our goal is to mimic how humans create test cases and thereby design more human-like automated test generation systems. We posit that such systems can better augment and support testers in a way that is meaningful to them.
Thu 20 May (displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna)
17:10 - 17:30: Research Papers session

17:10 (2m, Other): Session opening
17:12 (2m, Poster): A Framework for Intersectional Perspectives in Software Engineering [DOI]
17:14 (2m, Poster): Towards Human-Like Automated Test Generation: Perspectives from Cognition and Problem Solving [Pre-print, Media Attached]
17:16 (2m, Poster): An Initial Exploration of the “Good First Issue” Label for Newcomer Developers. Jan Willem David Alderliesten (Delft University of Technology), Andy Zaidman (Delft University of Technology) [Pre-print]
17:18 (2m, Poster): A Virtual Mentor to Support Question-Writing on Stack Overflow. Nicole Novielli, Fabio Calefato, Federico De Laurentiis, Luigi Minervini, Filippo Lanubile (University of Bari)
17:20 (10m, Other): Q&A + session discussion + closing