My previous article compared the performance of ASP.NET Core applications running on Windows and in a Linux + Docker environment on Azure App Service. The topic interested many people, so I decided to write a sequel.
This time I used a different, more reproducible approach that gives more reliable results: the web load is now generated in the cloud by Azure Cloud Agents, driven from Visual Studio and VSTS. In addition, where the previous tests used HTTP, these were run over HTTPS.
Running tests in the cloud environment
Thanks to the excellent work done by Microsoft, running tests in the cloud is very simple: you use the Visual Studio Web Performance tools together with a VSTS account. I ran two series of load tests for each of the following scenarios:
- A response whose body contains the text "Hello World" and a timestamp.
- A response with a 1 KB body.
- A response with a 10 KB body.
- A response with a 50 KB body.
- A response with a 100 KB body.
The tests were configured as follows:
- Each test ran for 5 minutes.
- The initial number of users was 50.
- Every 10 seconds the number of users increased by 10.
- The maximum number of users was 150.
- Requests were sent from the same region (Western Europe) where the applications under test were deployed.
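With these settings the ramp-up finishes 100 seconds into each 5-minute run. The step-load profile can be modeled as follows (a minimal sketch of the pattern, not the VSTS implementation):

```python
def users_at(elapsed_s, start=50, step=10, interval_s=10, cap=150):
    """Concurrent virtual users at a given elapsed time in a step-load profile."""
    return min(cap, start + (elapsed_s // interval_s) * step)

# The ramp reaches the 150-user cap at 100 seconds and stays there
# for the rest of the 5-minute (300-second) run.
print([users_at(t) for t in (0, 30, 90, 100, 300)])  # [50, 80, 140, 150, 150]
```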
Test results (original)
The test output includes summaries of practical value, error reports, and notices of resource-limit violations on the allocated systems (e.g., excessive CPU load).
Example of test output data (original)
I used the same tests as last time (the corresponding code can be found here).
What did I get this time?
Analysis of results
The results obtained this time are consistent with those from the previous run, which used a client machine connected to the Internet over a wired network. Specifically, ASP.NET Core applications deployed on Linux with Docker containers turned out to be much faster than those deployed on a Windows host (both variants running under the corresponding App Service plan). The new results point even more strongly to the superiority of the Linux variant, especially for requests with larger response bodies.
Here is a summary of the test results showing the number of requests processed per second (RPS).
| Scenario | Linux | Windows | Linux +% |
|---|---|---|---|
| Hello World | 646.6 | 432.85 | +49.38% |
| Response with 1 KB body | 623.05 | 431.95 | +44.24% |
| Response with 10 KB body | 573.6 | 361.9 | +58.5% |
| Response with 50 KB body | 415.5 | 210.05 | +97.81% |
| Response with 100 KB body | 294.35 | 143.25 | +105.48% |
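The Linux +% column is just the ratio of the two RPS figures. As a quick sanity check (values copied from the table above):

```python
# RPS figures from the table above: scenario -> (Linux, Windows).
rps = {
    "Hello World": (646.6, 432.85),
    "1 KB body": (623.05, 431.95),
    "10 KB body": (573.6, 361.9),
    "50 KB body": (415.5, 210.05),
    "100 KB body": (294.35, 143.25),
}

for name, (linux, windows) in rps.items():
    gain = (linux / windows - 1) * 100  # relative advantage of Linux, %
    print(f"{name}: +{gain:.2f}%")  # e.g. "Hello World: +49.38%"
```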
Here are the average response times (ms).
| Scenario | Linux | Windows | Linux +% |
|---|---|---|---|
| Hello World | 168.85 | 242.2 | -30.28% |
| Response with 1 KB body | 171.25 | 249.8 | -31.45% |
| Response with 10 KB body | 184.2 | 292.7 | -37.07% |
| Response with 50 KB body | 233.3 | 542.85 | -57.02% |
| Response with 100 KB body | 365.05 | 817.35 | -55.34% |
Where does Linux look worse than Windows (and is that really the case)?
Almost all load tests against the Linux host exceeded the allowed CPU load (Processor\% Processor Time) and produced the corresponding warnings, while none of the tests against the Windows host did. I am not entirely sure I have understood the documentation for this performance counter, which is included by default in every new load test created in Visual Studio. If anyone understands it better, clarification would be appreciated.
Weird charts regarding Windows system performance and throughput
I noticed a strange pattern in the VSTS graphs showing system performance and throughput during the load tests. For the Linux systems these graphs are fairly smooth lines, whereas the Windows graphs look like saw teeth. Here are the graphs for the scenario with a 10 KB response body.
Performance and throughput charts for Linux
Performance and throughput charts for Windows
Other graphs can be found here. Below are the graphs (Linux and Windows) for the scenario where the response body contains 50 KB of data.
Taking my previous tests together with the results obtained here, I can say that, from a performance point of view, a Linux + Docker configuration in Azure is justified.
I gain nothing from presenting Linux in a more favorable light than Windows. I have published all of the test source code along with instructions for reproducing the test environment. If anyone suspects that I tweaked something or made a mistake, they can repeat my tests and point it out. It would also be good if someone independently verified my results.
I decided to run these performance tests and publish the results because I plan to build a web service for an application I wrote in Python, and I was curious whether I could get satisfactory results in Azure on a Linux host running Docker. For the service I plan to use PyPy 3, Gunicorn, Gevent, and Flask, and I believe a project built on this stack will run faster than a comparable ASP.NET Core project on the Kestrel server. But that is another story, and to speak about it with confidence, the appropriate tests need to be done.
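A minimal sketch of how that stack fits together, assuming Flask and Gunicorn are installed (the module name, route, and worker count here are illustrative, not the future service):

```python
from flask import Flask

app = Flask(__name__)

# A trivial endpoint, comparable to the "Hello World" scenario above.
@app.route("/")
def hello():
    return "Hello World"

# Served under Gunicorn with gevent workers, e.g.:
#   gunicorn --worker-class gevent --workers 4 app:app
# The same command applies under PyPy 3 once the packages are installed.
```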
What technology stacks do you use to develop web services?