Game Software Engineering: A Controlled Experiment Comparing Automated Content Generation Techniques
Background: Video games are complex projects that require a seamless integration of art and software to compose the final product. Software is fundamental to a video game, as it governs the behaviour and attributes that shape the player's experience. Assessing the quality of a video game involves specific quality aspects, namely 'design', 'difficulty', 'fun', and 'immersiveness', which are not considered for traditional software. Moreover, there are no well-established best practices for the empirical assessment of video games as there are for the empirical evaluation of more traditional software.

Aims: Our goal is to carry out a rigorous empirical evaluation of the latest proposals to automatically generate content for video games, following best practices established in software engineering research. Specifically, we compare Procedural Content Generation (PCG) and Reuse-based Content Generation (RCG). Our study also considers the perception of both players and professional developers of the generated content.

Method: We conducted a controlled experiment in which human subjects played with content that was automatically generated for a commercial video game by the two techniques (PCG and RCG), and evaluated it according to specific quality aspects of video games. A total of 44 subjects, including professional developers and players, participated in our experiment.

Results: The results suggest that participants perceive the content generated by RCG to be of higher quality than that generated by PCG.

Conclusions: The results can turn the tide for content generation. So far, RCG has been neglected as a viable option: reuse is typically frowned upon by developers, who aim to avoid repetition in their video games as much as possible. However, our study uncovered that RCG unlocks latent content that is favoured by players and developers alike, opening new horizons for content generation research.
Thu 24 Oct (displayed time zone: Brussels, Copenhagen, Madrid, Paris)
14:00 - 15:30 | Empirical research methods and applications
ESEM Technical Papers / ESEM Emerging Results, Vision and Reflection Papers Track
At Telensenyament (B3 Building - 1st Floor). Chair(s): Valentina Lenarduzzi (University of Oulu)

14:00 (20m) Full-paper | Game Software Engineering: A Controlled Experiment Comparing Automated Content Generation Techniques (ESEM Technical Papers)
Mar Zamorano López (University College London), África Domingo (Universidad San Jorge), Carlos Cetina (Universitat Politècnica de València, Spain), Federica Sarro (University College London)

14:20 (20m) Full-paper | Evaluating Software Modelling Recommendations: Towards Systematic Guidelines for Modelling (ESEM Technical Papers)

14:40 (20m) Full-paper | What do we know about Hugging Face? A systematic literature review and quantitative validation of qualitative claims (ESEM Technical Papers)
Jason Jones (Purdue University), Wenxin Jiang (Purdue University), Nicholas Synovic (Loyola University Chicago), George K. Thiruvathukal (Loyola University Chicago and Argonne National Laboratory), James C. Davis (Purdue University). DOI and pre-print available.

15:00 (15m) Vision and Emerging Results | On the Creation of Representative Samples of Software Repositories (ESEM Emerging Results, Vision and Reflection Papers Track)
June Gorostidi (IN3 - UOC), Adem Ait (University of Luxembourg), Jordi Cabot (Luxembourg Institute of Science and Technology), Javier Luis Cánovas Izquierdo (IN3 - UOC). Pre-print available.

15:15 (15m) Vision and Emerging Results | Can ChatGPT emulate humans in software engineering surveys? (ESEM Emerging Results, Vision and Reflection Papers Track)
Igor Steinmacher (Northern Arizona University), Jacob Mcauley Penney (NAU), Katia Romero Felizardo (UTFPR-CP), Alessandro Garcia (Pontifical Catholic University of Rio de Janeiro, PUC-Rio), Marco Gerosa (Northern Arizona University)