This session will build upon the keynote talk Autonomous Vehicles and Software Safety Engineering, encompassing challenges to ensuring autonomous vehicle safety. While the discussion topic list is flexible, some starter question areas include the following: Should software developers share blame for a fatality? What are the ethics of deploying "beta" software on public roads? (Specifically excluded is any mention of the red-herring "Trolley Problem".) In machine learning, how do we ensure training-data coverage of the operational domain and account for high-risk, heavy-tail events? What about commercial and research software for life-critical systems? Are there gaps between ICSE research results and ensuring AV system-level safety?
Identifier naming is a fairly old research topic, but tool support for it hasn't gained much traction in developers' daily activities, or IDEs, outside of support for naming heuristics like camelCase and under_score. There's been a lot of research on the topic, but the question "What makes an identifier name good?" is still very open and suffers from a significant amount of subjectivity that the field has not controlled for. I'd like to discuss the currently wide-open field of identifier name quality and recommendation, some of the topics that we see published on regularly, and some of the topics that are in sore need of more research (and researchers) in order for us to finally see this research mature and integrate into software developer IDEs and workflows.
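To make concrete how shallow today's IDE-level support is, here is a minimal sketch (hypothetical, not a tool from the literature) of the kind of surface-style heuristic the abstract alludes to; it classifies an identifier as camelCase or snake_case without saying anything about whether the name is actually good:

```python
import re

# Hypothetical illustration: coarse style classification of identifiers.
# Matches lowercase-start camelCase (e.g., getUserName) and snake_case
# (e.g., get_user_name); everything else falls through to "other".
CAMEL_CASE = re.compile(r"^[a-z]+(?:[A-Z][a-z0-9]*)*$")
SNAKE_CASE = re.compile(r"^[a-z]+(?:_[a-z0-9]+)*$")

def classify_style(identifier: str) -> str:
    """Return a coarse naming-style label for an identifier."""
    if SNAKE_CASE.fullmatch(identifier) and "_" in identifier:
        return "snake_case"
    if CAMEL_CASE.fullmatch(identifier) and identifier != identifier.lower():
        return "camelCase"
    return "other"

if __name__ == "__main__":
    for name in ["getUserName", "get_user_name", "x", "HTTPServer2"]:
        print(f"{name} -> {classify_style(name)}")
```

Such checks capture only spelling conventions; the open research question raised here is what lies beyond them, i.e., how to measure and recommend semantically good names.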
The world is awash in bullshit. Politicians are unconstrained by facts. Science is conducted by press release. Higher education rewards bullshit over analytic thought. Startup culture elevates bullshit to high art. Advertisers wink conspiratorially and invite us to join them in seeing through all the bullshit, and take advantage of our lowered guard to bombard us with bullshit of the second order. The majority of administrative activity, whether in private business or the public sphere, seems to be little more than a sophisticated exercise in the combinatorial reassembly of bullshit. The purpose of this BoF is to have a conversation around this topic.
Experiments for research in regression testing, program repair, flaky tests, fuzzing, and more can require large-scale computing resources to run, and their artifacts are quite hard to package and evaluate. I am interested in approaches that make it easier to develop these tools, and also easier to evaluate them.
In software engineering research, impact comes in multiple forms: the timeline of impact can differ, and the impact can be on future research or directly on industrial practice. However, research always involves a certain level of risk and may fail to deliver usable results. Nevertheless, I will argue that software engineering research, of any type, needs to be informed by engineering practice. I will discuss various models and paradigms that help achieve that.