I am going to present this paper at the 6th International Conference on Cloud Computing and Services Science (CLOSER 2016, Session 5: Cloud Computing Enabling Technology) in Rome, Italy.
- I will present the ppbench benchmark, as well as some insights we gained during our research.
- A full text version of the paper is provided via ResearchGate.
- Information on how to cite the paper can be found here.
Companies like Netflix, Google, Amazon, and Twitter have successfully exemplified elastic and scalable microservice architectures for very large systems. Microservice architectures are often realized by deploying services as containers on container clusters. Containerized microservices often use lightweight, REST-based communication mechanisms. However, this lightweight communication is often routed by container clusters through heavyweight software-defined networks (SDN). Furthermore, services are often implemented in different programming languages, adding complexity to a system, which may result in decreased performance. Astonishingly, it is quite complex to figure out these impacts up front in a microservice design process, due to a lack of specialized benchmarks. This contribution proposes a benchmark intentionally designed for this microservice setting. We advocate that it is more useful to reflect fundamental design decisions and their performance impacts up front in microservice architecture development, not in the aftermath. We present some findings regarding the performance impacts of some TIOBE TOP 50 programming languages (Go, Java, Ruby, Dart), containers (Docker as type representative), and SDN solutions (Weave as type representative).
The presented ppbench benchmark is provided via GitHub. We distribute
ppbench via RubyGems.org, so installing it
on your system (we assume this is your personal laptop or workstation) is very easy.
Assuming you have Ruby 2.2 or higher installed, simply run
gem install ppbench
ppbench provides several commands and parameters to run and analyze your experiments.
ppbench includes online help. Simply run its help command
to get an overview of the commands
ppbench provides.
A benchmark run is started like this:

ppbench run --host http://&lt;pinghostip&gt;:8080 \
            --experiment experiment_tag \
            --machine machine_tag \
            log.csv
All benchmark results are written to a log file (CSV format). These CSV-based log files can then be processed by ppbench's analysis and plotting commands.
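Because the log is plain CSV, it can also be inspected with a few lines of Ruby outside of ppbench. The following is only a minimal sketch: the column names (`experiment`, `machine`, `message_size`, `duration`) are hypothetical placeholders, not the actual ppbench log schema, and the sample file is generated inline for illustration.

```ruby
require 'csv'

# Hypothetical sample log. Real ppbench logs may use different column names;
# check the header line of your own log.csv.
File.write('log.csv', <<~CSV)
  experiment,machine,message_size,duration
  bare-go,m3.xlarge,1024,0.0021
  bare-go,m3.xlarge,2048,0.0035
CSV

# Read the log with headers, so columns can be accessed by name.
rows = CSV.read('log.csv', headers: true)
rows.each do |row|
  puts "#{row['experiment']}: #{row['message_size']} bytes in #{row['duration']} s"
end
```

This kind of quick inspection is handy for sanity-checking a run before feeding a whole set of log files into the analysis commands below.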
ppbench can run a quick summary analysis on a set of collected benchmark files. Simply run
ppbench summary *.csv
to get a tabular summary.
Much more interesting and helpful (summary data is intended for completeness and plausibility checking, not for detailed analysis):
ppbench is able to generate R scripts for the analysis and visualization of benchmark data.
The plot commands of ppbench have several flags to tune the plotting. With the following additional flags,
it is possible to plot 75% confidence bands without showing all measured data points (omitting
--nopoints would show both
confidence bands and all data points).
ppbench transfer-plot --machines m3.xlarge \
                      --experiments bare-dart,bare-go,bare-java \
                      --withbands \
                      --confidence 75 \
                      --nopoints \
                      --pdf graphic.pdf \
                      *.csv | Rscript -
This would produce a much clearer picture with additional descriptive statistical information.
For further information, please read the documentation provided in the Ping Pong GitHub repository.