Looking at the results below, I wondered whether these frameworks would really produce such throughput in an environment with limited CPU and memory, so I used Docker for Mac to verify what kind of results would be obtained in a Kubernetes environment. https://github.com/networknt/microservices-framework-benchmark/blob/master/README.md
The purpose this time is to compare the difference in response speed and throughput when the same resource limits are applied to every framework. The CPU and memory footprint of each framework would have to be tuned individually, so it is not considered in this verification. Also, if I have time, I am thinking of running the same investigation in a GKE environment.
To make this verification easy to reproduce, all of the Dockerfiles and Kubernetes manifest files used for containerization are stored in the GitHub repository.
The load is applied to the target service from a separate load-generating container running wrk.

All load is sent through a Kubernetes Service, and the Pod that actually receives the load (the framework under test) is switched via that Service (a sketch of such a Service is shown after the wrk command below).

The load is generated with wrk, invoked as follows:

```
$ wrk -t4 -c64 -d60s http://microservice.default.svc.cluster.local:30000/ --latency
```
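For reference, here is a minimal sketch of the kind of Service the command above targets. The Service name `microservice` and port 30000 are taken from the URL in the wrk command; the selector label and target port are assumptions, and the actual manifests are in the GitHub repository mentioned above.

```yaml
# Sketch of the Service that fronts the framework under test.
# The name "microservice" and port 30000 come from the URL in the wrk command;
# the selector label and targetPort are assumptions. Pointing the selector at a
# different framework's Pod is how the load target is switched.
apiVersion: v1
kind: Service
metadata:
  name: microservice
  namespace: default
spec:
  selector:
    app: light-4j        # assumed label; change to the framework under test
  ports:
    - port: 30000
      targetPort: 8080   # assumed container port
```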
To exclude warm-up overhead, the command above is run twice and only the second measurement is treated as the valid result, as sketched below.
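A minimal sketch of that two-run procedure, assuming the load is generated from a Pod named `wrk` that has the wrk binary installed; the Pod name and the use of `kubectl exec` are assumptions, not part of the original setup.

```sh
# Minimal sketch of the two-run measurement (assumptions: a load-generator Pod
# named "wrk" with the wrk binary installed; adjust to your own setup).
URL=http://microservice.default.svc.cluster.local:30000/

# 1st run: warm-up only, result is discarded
kubectl exec wrk -- wrk -t4 -c64 -d60s "$URL" --latency > /dev/null

# 2nd run: this result is the one recorded
kubectl exec wrk -- wrk -t4 -c64 -d60s "$URL" --latency
```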
In addition, the container under test is started as a single Pod with the following resource settings:

```yaml
resources:
  requests:
    cpu: 200m
    memory: 400Mi
  limits:
    cpu: 200m
    memory: 400Mi
```
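For context, a hedged sketch of how this resources block might sit inside a Deployment for one of the frameworks. Only `replicas: 1` and the resources block reflect the setup described above; the name, labels, image, and container port are placeholders, and the actual manifests are in the repository.

```yaml
# Sketch of a Deployment for one framework under test. Only "replicas: 1" and
# the resources block reflect the setup described above; the name, labels,
# image, and containerPort are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: light-4j
spec:
  replicas: 1
  selector:
    matchLabels:
      app: light-4j
  template:
    metadata:
      labels:
        app: light-4j
    spec:
      containers:
        - name: light-4j
          image: example/light-4j:latest   # placeholder image
          ports:
            - containerPort: 8080          # placeholder port
          resources:
            requests:
              cpu: 200m
              memory: 400Mi
            limits:
              cpu: 200m
              memory: 400Mi
```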
Also, all of the Dockerfiles and Kubernetes manifest files used for containerization are stored in the repository at https://github.com/h-r-k-matsumoto/microservices-framework-benchmark. The Java applications are containerized with jib and are based on JDK 10, because JDK 8 does not size the number of threads appropriately for the container, as described here: https://qiita.com/h-r-k-matsumoto/items/17349e1154afd610c2e5
The targets are the following six frameworks that I am personally interested in:
Framework | Throughput (req/min) | Average (ms) | 90% line (ms) | 99% line (ms)
---|---|---|---|---
light-4j | 295,125 | 29.35 | 72.17 | 99.01
go-http | 163,557 | 48.11 | 119.78 | 304.28
iris | 143,173 | 66.62 | 174.93 | 571.83
spring-boot2-undertow | 107,540 | 46.04 | 94.61 | 196.11
spring-boot2-tomcat | 38,068 | 117.01 | 290.70 | 492.55
helidon-se | 30,742 | 160.51 | 299.11 | 969.99
**light-4j was certainly blazingly fast.** However, the difference in speed was not as large as on the original site.
For real-world operation, APIs using JSON and gRPC also need to be considered; verifying that part is another item to look into.
Since swap could not be disabled with Docker for Mac, I would like to continue this verification in a GKE environment as well.
Switching frameworks all the way to light-4j would be difficult, but I would at least like to switch to spring-boot2-undertow.
For how to run the benchmark, see https://github.com/h-r-k-matsumoto/microservices-framework-benchmark/blob/fix/master/k8s_reproduce_benchmark/DockerForMac.md
That said, regarding light-4j, there are reports that its performance drops on Kubernetes, so I think I will hold off on it for now. https://gitter.im/networknt/light-4j?at=5bf5948a958fc53895c9dbe5