Wed 18 Jul 2018 11:40 - 12:00 at Hanoi - Real-World Benchmarking

In 2017, the Software Development Team at King’s College London performed a benchmarking experiment to compare the warmup time and peak performance of modern programming language Virtual Machines (VMs). The experiment was intended to be the most rigorous of its kind to date. Our results were both surprising and disappointing: not only did few modern VMs achieve a steady state of peak performance when running well-known benchmarks, but some even slowed down over time.
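To make the warmup/steady-state distinction concrete, here is a minimal sketch of the kind of raw data such an experiment collects: per-iteration, in-process timings of a benchmark, plus a deliberately crude heuristic that compares early and late iterations. The function names, fractions, and threshold are illustrative assumptions, not Krun's API; the actual analysis behind the experiment uses proper statistical changepoint detection rather than this head-vs-tail comparison.

```python
import time

def measure_iterations(bench, n_iters=200):
    """Time n_iters in-process executions of a benchmark function.

    Returns a list of per-iteration wall-clock times in seconds. Repeated
    in-process timings like these are the raw data from which warmup and
    steady-state behaviour are judged.
    """
    times = []
    for _ in range(n_iters):
        start = time.perf_counter()
        bench()
        times.append(time.perf_counter() - start)
    return times

def classify(times, head=0.1, tail=0.5, threshold=0.05):
    """Crudely classify a run by comparing early vs. late iteration means.

    `head` and `tail` are fractions of the run; `threshold` is the relative
    difference needed to call the run "warmup" or "slowdown". This is a toy
    heuristic for illustration only.
    """
    n = len(times)
    n_head = max(1, int(n * head))
    n_tail = max(1, int(n * tail))
    early = sum(times[:n_head]) / n_head
    late = sum(times[-n_tail:]) / n_tail
    if late < early * (1 - threshold):
        return "warmup"    # iterations got faster over time
    if late > early * (1 + threshold):
        return "slowdown"  # iterations got slower over time
    return "steady"
```

A run that never stabilises (the "no steady state" case the talk discusses) is exactly the case this kind of coarse classification mishandles, which is why presenting such results needs more care.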

This talk focuses not on the results of our experiment, but on our experiences of developing the “Krun” benchmarking system and the statistical analyses we used to process our data. The talk will discuss the difficulties we encountered in eliminating confounding variables and will show you how to present performance results in the absence of steady states.

Whilst Krun enabled us to collect robust and accurate results for our experiment, it is arguably overkill for everyday use. Ideally we’d like to build a cut-down version of Krun, but this raises the question: which of Krun’s features make the most difference to benchmarking quality?

Slides (benchwork.pdf, 2.56 MiB)

Wed 18 Jul

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

11:00 - 12:30
Real-World Benchmarking (BenchWork) at Hanoi
11:00
10m
Opening Remarks
BenchWork
Karim Ali University of Alberta, Cristina Cifuentes Oracle Labs
11:10
30m
Real World Benchmarks for JavaScript
BenchWork
File Attached
11:40
20m
In Search of Accurate Benchmarking
BenchWork
Edd Barrett King's College London, Sarah Mount King's College London, Laurence Tratt King's College London
File Attached
12:00
30m
AndroZoo: Lessons Learnt After 2 Years of Running a Large Android App Collection
BenchWork
Kevin Allix University of Luxembourg