Building a Node.js Benchmark: Initial Steps
To address our first goal, we start by downloading metadata for all projects on GitHub that use npm, collecting metrics such as: # of dependencies, # of open and closed issues, # of stars, # of topics, size, # of commits, # of contributors, and the proportion of project-specific code to dependency code. We then infer the probability distribution of each metric (e.g., normal, log-normal, exponential) and sample projects so that the original distributions are preserved (see the first sketch below).

To address our second goal, we further select projects with working test suites, pre-download all their dependencies, and build a harness that runs the tests of every project (see the second sketch below). We then package the test environment as a container or virtual machine in which specific versions of Node.js, web browsers, and databases are pre-installed, to minimize variation across users of our benchmark.
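To make the sampling step concrete, here is a minimal sketch on synthetic data. The candidate distribution families match those the abstract names; the decile-stratified draw of 50 projects is our assumption about one way to preserve the population distribution, not the authors' actual pipeline.

```python
# Sketch: infer a metric's distribution, then sample projects so the
# sample preserves it. `stars` stands in for one metric scraped from
# GitHub; all names here are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Stand-in for GitHub metadata: one metric value per project.
stars = rng.lognormal(mean=3.0, sigma=1.5, size=10_000)

# Fit candidate families and keep the best by Kolmogorov-Smirnov statistic.
candidates = [stats.norm, stats.lognorm, stats.expon]
best = min(
    candidates,
    key=lambda d: stats.kstest(stars, d.name, args=d.fit(stars)).statistic,
)
print(f"best-fitting family: {best.name}")

# Stratify projects by deciles of the metric and draw proportionally
# from each bin, so the sample's distribution tracks the population's.
edges = np.quantile(stars, np.linspace(0, 1, 11))
labels = np.digitize(stars, edges[1:-1])  # decile index, 0..9
sample_idx = np.concatenate([
    rng.choice(np.flatnonzero(labels == b), size=5, replace=False)
    for b in range(10)
])  # 10 bins x 5 projects = 50 sampled projects
sample = stars[sample_idx]

# Sanity check: the sample should not deviate significantly from the fit.
print(stats.kstest(sample, best.name, args=best.fit(stars)))
```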
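The harness itself is not described in detail; the following is a minimal sketch under our own assumptions (the `benchmark/projects` layout and the use of `npm test` are hypothetical). Because dependencies are pre-downloaded, each run needs no network access.

```python
# Sketch of a harness that runs every project's test suite and collects
# exit codes. Directory layout and timeout are assumptions.
import json
import pathlib
import subprocess

BENCHMARK_ROOT = pathlib.Path("benchmark/projects")  # assumed layout

def run_suite(project: pathlib.Path) -> dict:
    """Run the project's test suite against its pre-downloaded
    node_modules, with a timeout to guard against hanging suites."""
    try:
        result = subprocess.run(
            ["npm", "test"],
            cwd=project,
            capture_output=True,
            text=True,
            timeout=600,
        )
        code = result.returncode
    except subprocess.TimeoutExpired:
        code = -1  # mark timed-out suites
    return {"project": project.name, "exit_code": code}

if __name__ == "__main__":
    reports = [
        run_suite(p) for p in sorted(BENCHMARK_ROOT.iterdir()) if p.is_dir()
    ]
    print(json.dumps(reports, indent=2))
```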
During the talk, we will explain how we collected various metrics from GitHub, how we inferred their distributions, and how we sampled projects. We will also share our experience building an initial, executable benchmark suite of 50 applications and highlight the challenges we foresee in expanding it. Finally, we will discuss two potential uses of the benchmark, dynamic analyses for performance and for security, and the additional metrics we might need to consider.
Slides: Building nodejs benchmark.pdf (803 KiB)
Wed 18 Jul (times in the Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna time zone)

14:00 - 14:30  Benchmarking WebKit. Saam Barati (Apple)
14:50 - 15:10  Building a Node.js Benchmark: Initial Steps. Petr Maj (Czech Technical University), François Gauthier (Oracle Labs), Celeste Hollenbeck (Northeastern University, USA), Jan Vitek (Northeastern University), Cristina Cifuentes (Oracle Labs)
15:10 - 15:30  A Micro-Benchmark for Dynamic Program Behaviour