ISMM 2018
co-located with PLDI 2018
Mon 18 Jun 2018 14:30 - 15:00 at Discovery AB - Optimizing for the Web and the Cloud Chair(s): Christine H. Flood

The cloud is an increasingly popular platform for deploying applications, as it
lets cloud users provision resources for their applications as needed.
Furthermore, cloud providers are now starting to offer
a "pay-as-you-use" model in which users are charged only for the resources that are
actually used, instead of paying for a statically sized instance. This new model
allows cloud users to save money, and cloud providers to better utilize their
hardware.

However, applications running on top of runtime environments such as the Java Virtual
Machine (JVM) cannot benefit from this new model because they cannot dynamically
adapt the amount of resources they use at runtime. In particular, if an application needs
more memory than was initially predicted at launch time, the JVM will not allow the
application to grow its memory beyond the maximum value defined at launch time. In addition,
the JVM holds on to memory that is no longer being used by the application. This
lack of dynamic vertical scalability completely negates the benefits of the
"pay-as-you-use" model, forcing users to over-provision resources and to lose
money on unused ones.
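
To make the static ceiling concrete, the following minimal Java sketch (not from the paper) reads the heap figures a stock JVM already exposes through the standard MemoryMXBean; the max value it reports is the launch-time limit set with -Xmx, which the JVM will never exceed even if the host has memory to spare.

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    public class HeapCeiling {
        public static void main(String[] args) {
            // Heap figures reported by the standard management interface.
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();

            // 'max' is fixed by -Xmx at launch; a stock JVM never grows the heap
            // past it, and committed memory is returned to the OS only reluctantly.
            System.out.printf("used=%d MB, committed=%d MB, max=%d MB%n",
                    heap.getUsed() >> 20,
                    heap.getCommitted() >> 20,
                    heap.getMax() >> 20);
        }
    }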

We propose a new JVM heap sizing strategy that allows the JVM to dynamically scale
its memory utilization according to the application's needs. First,
we provide a configurable limit on how much the application can grow its memory.
This limit is dynamic and can be changed at runtime, as opposed to the current
static limit that can only be set at launch time. Second, we adapt current
garbage collection policies that control how much the heap can grow and shrink,
so that the heap better fits the application's current usage.
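
As an illustration of the kind of runtime-adjustable policy this builds on (a sketch, not the paper's actual interface), stock HotSpot already exposes the manageable flags MinHeapFreeRatio and MaxHeapFreeRatio, which steer when a collector grows or shrinks the committed heap; they can be changed while the application runs through the HotSpotDiagnosticMXBean:

    import java.lang.management.ManagementFactory;
    import com.sun.management.HotSpotDiagnosticMXBean;

    public class TuneHeapElasticity {
        public static void main(String[] args) {
            HotSpotDiagnosticMXBean hs =
                    ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);

            // MinHeapFreeRatio/MaxHeapFreeRatio are existing manageable HotSpot flags:
            // lowering them tells the collector to keep less free headroom and to
            // shrink the committed heap more aggressively after a collection.
            hs.setVMOption("MinHeapFreeRatio", "10");
            hs.setVMOption("MaxHeapFreeRatio", "20");

            System.out.println("MinHeapFreeRatio = " + hs.getVMOption("MinHeapFreeRatio").getValue());
            System.out.println("MaxHeapFreeRatio = " + hs.getVMOption("MaxHeapFreeRatio").getValue());
        }
    }

The proposal goes further than such ratio tuning: it also makes the maximum heap size itself adjustable at runtime, rather than fixed by -Xmx.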

The proposed solution is implemented in the OpenJDK 9 HotSpot JVM, the new release
of OpenJDK. Changes were also introduced in the Parallel Scavenge collector and
the Garbage First (G1) collector, the new default collector in HotSpot.
Evaluation experiments using real workloads and data show that dynamic vertical memory
scalability can be achieved with negligible throughput and memory overhead. This allows
users to save significant amounts of money by not paying for unused resources, and cloud
providers to better utilize their physical machines.
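
For context, both collectors touched by the implementation are selected and bounded through standard launch options; the command below uses only stock HotSpot flags (sizes and the jar name are illustrative) and shows the kind of fixed launch-time configuration the dynamic limit is meant to replace:

    # Illustrative launch of a JDK 9 HotSpot JVM. -Xmx fixes the heap ceiling for the
    # whole run; -XX:Min/MaxHeapFreeRatio control how eagerly the committed heap grows
    # and shrinks between those bounds. Swap -XX:+UseG1GC for -XX:+UseParallelGC to
    # exercise the Parallel Scavenge collector instead.
    java -XX:+UseG1GC \
         -Xms512m -Xmx4g \
         -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 \
         -jar app.jar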

Mon 18 Jun

Displayed time zone: Eastern Time (US & Canada)

14:00 - 15:30
Optimizing for the Web and the Cloud (ISMM 2018) at Discovery AB
Chair(s): Christine H. Flood Red Hat
14:00
30m
Talk
Hardware-Software Co-optimization of Memory Management in Dynamic Languages
ISMM 2018
Mohamed Ismail Cornell University, USA, G. Edward Suh Cornell University, USA
14:30
30m
Talk
Dynamic Vertical Memory Scalability for OpenJDK Cloud Applications
ISMM 2018
Rodrigo Bruno INESC-ID / Instituto Superior Técnico, University of Lisbon, Paulo Ferreira INESC-ID / Instituto Superior Técnico, University of Lisbon, Ruslan Synytsky Jelastic, Tetiana Fydorenchyk Jelastic, Jia Rao University of Texas at Arlington, USA, Hang Huang Huazhong University of Science and Technology, China, Song Wu Huazhong University of Science and Technology, China
15:00
30m
Talk
OMR: Out-of-Core MapReduce for Large Data Sets
ISMM 2018
Gurneet Kaur, Keval Vora University of California, Riverside, Sai Charan Koduru University of California, Riverside, Rajiv Gupta University of California, Riverside