ISMM 2018
co-located with PLDI 2018
Mon 18 Jun 2018 14:30 - 15:00 at Discovery AB - Optimizing for the Web and the Cloud Chair(s): Christine H. Flood

The cloud is an increasingly popular platform for deploying applications, as it
lets cloud users provision resources for their applications as needed.
Furthermore, cloud providers are now starting to offer
a "pay-as-you-use" model in which users are charged only for the resources that are
actually used, instead of paying for a statically sized instance. This new model
allows cloud users to save money, and cloud providers to better utilize their
hardware.

However, applications running on top of runtime environments such as the Java Virtual
Machine (JVM) cannot benefit from this new model because they cannot dynamically
adapt the amount of resources they use at runtime. In particular, if an application needs
more memory than was predicted at launch time, the JVM will not let the
application grow its memory beyond the maximum value fixed at launch time. In addition,
the JVM holds on to memory that the application no longer uses. This
lack of dynamic vertical scalability negates the benefits of the
"pay-as-you-use" model, forcing users to over-provision resources and to lose
money on resources they never use.
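
The following is a minimal sketch (not taken from the paper) of the static limit
described above, assuming a standard HotSpot launch such as
"java -Xms128m -Xmx512m StaticLimitDemo": once the heap reaches the -Xmx value
fixed at launch, allocation fails with an OutOfMemoryError even if the host
still has plenty of free memory.

    import java.util.ArrayList;
    import java.util.List;

    public class StaticLimitDemo {
        public static void main(String[] args) {
            List<byte[]> retained = new ArrayList<>();
            try {
                while (true) {
                    retained.add(new byte[1024 * 1024]); // keep 1 MiB blocks reachable
                }
            } catch (OutOfMemoryError e) {
                int mib = retained.size();
                retained.clear(); // drop the blocks so printing can still allocate
                // The JVM refuses to grow past -Xmx, even though the cloud
                // instance could supply more memory to the application.
                System.out.println("Heap exhausted after roughly " + mib + " MiB");
            }
        }
    }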

We propose a new JVM heap sizing strategy that allows the JVM to dynamically scale
its memory utilization according to the application's needs. First,
we provide a configurable limit on how much the application can grow its memory.
This limit is dynamic and can be changed at runtime, as opposed to the current
static limit that can only be set at launch time. Second, we adapt the current
Garbage Collection policies that control how much the heap can grow and shrink,
so that the heap better fits what the application is currently using.
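
Below is a hypothetical sketch of the kind of sizing policy described above; the
names (DynamicHeapPolicy, currentMaxHeapBytes, TARGET_OCCUPANCY) are illustrative
assumptions, not HotSpot identifiers. After each collection, the heap is resized
toward what the application actually uses, and never beyond a limit that can be
updated while the JVM is running.

    // Illustrative only: a heap sizing policy with a runtime-adjustable limit.
    final class DynamicHeapPolicy {
        // Runtime-adjustable maximum, e.g. raised or lowered by an external agent.
        private volatile long currentMaxHeapBytes;
        // Aim to keep live data at about 70% of the committed heap.
        private static final double TARGET_OCCUPANCY = 0.70;

        DynamicHeapPolicy(long initialMaxBytes) {
            this.currentMaxHeapBytes = initialMaxBytes;
        }

        // Called when the allowed memory changes (the dynamic limit).
        void setCurrentMaxHeapBytes(long newMaxBytes) {
            this.currentMaxHeapBytes = newMaxBytes;
        }

        // Called after a GC cycle with the number of live bytes: grow if the
        // application needs more, shrink if it uses much less, but never
        // exceed the dynamic limit.
        long computeNewHeapSize(long liveBytes) {
            long target = (long) (liveBytes / TARGET_OCCUPANCY);
            return Math.min(target, currentMaxHeapBytes);
        }
    }

With such a policy, shrinking the heap returns memory that the application no
longer needs, which is what makes the "pay-as-you-use" model effective.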

The proposed solution is implemented in the HotSpot JVM of OpenJDK 9, the new release
of OpenJDK. Changes were also introduced in the Parallel Scavenge collector and in
the Garbage First (G1) collector, the new default collector in HotSpot.
Evaluation experiments using real workloads and data show that dynamic vertical memory
scalability can be achieved with negligible throughput and memory overhead.
This allows users to save significant amounts of money by not paying for unused
resources, and cloud providers to better utilize their physical machines.
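
As a hedged usage sketch: if the dynamic limit were exposed as a "manageable"
HotSpot flag, a running application (or an agent attached to it) could adjust it
through the standard HotSpotDiagnosticMXBean. The flag name "CurrentMaxHeapSize"
below is an assumption made for illustration; the stock -XX:MaxHeapSize flag in
HotSpot is not writeable at runtime.

    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.lang.management.ManagementFactory;

    public class ResizeAtRuntime {
        public static void main(String[] args) {
            HotSpotDiagnosticMXBean hotspot =
                    ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
            // setVMOption only succeeds for flags marked "manageable"; it throws
            // IllegalArgumentException for unknown or non-writeable flags.
            // "CurrentMaxHeapSize" is a hypothetical flag name used for illustration.
            hotspot.setVMOption("CurrentMaxHeapSize",
                    String.valueOf(2L * 1024 * 1024 * 1024)); // raise limit to 2 GiB
            System.out.println(hotspot.getVMOption("CurrentMaxHeapSize"));
        }
    }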

Mon 18 Jun

14:00 - 15:30: ISMM 2018 - Optimizing for the Web and the Cloud at Discovery AB
Chair(s): Christine H. Flood (Red Hat)

14:00 - 14:30 Talk
Mohamed Ismail (Cornell University, USA), G. Edward Suh (Cornell University, USA)

14:30 - 15:00 Talk
Rodrigo Bruno (INESC-ID / Instituto Superior Técnico, University of Lisbon), Paulo Ferreira (INESC-ID / Instituto Superior Técnico, University of Lisbon), Ruslan Synytsky (Jelastic), Tetiana Fydorenchyk (Jelastic), Jia Rao (University of Texas at Arlington, USA), Hang Huang (Huazhong University of Science and Technology, China), Song Wu (Huazhong University of Science and Technology, China)

15:00 - 15:30 Talk
Gurneet Kaur, Keval Vora (University of California, Riverside), Sai Charan Koduru (University of California, Riverside), Rajiv Gupta (UC Riverside)