I wrote a little while ago about how, for running PHP, Nginx was not faster than Apache. At first I figured that it would be and then it turned out not to be, though only by a bit.
But since Apache also has an event-based MPM, I wanted to see if the opposite was true: that if Apache were using its event MPM it would perform about the same as Nginx. I had heard that Apache 2.2’s event MPM wasn’t great (it was experimental) but that 2.4’s was better, possibly even faster than Nginx.
So I had a few spare moments this Friday and figured I would try it out. I ran ab at concurrency levels of 1, 10, 25, 50, 100 and 1000. Like before, the results surprised me.
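A run like that looks something like the following sketch. The URL and the -n request count here are illustrative, not my exact invocation:

```shell
# Print one ab invocation per concurrency level; pipe the output to
# sh to actually run it (requires ab and a server on 127.0.0.1).
build_ab_cmds() {
    url="$1"
    for c in 1 10 25 50 100 1000; do
        printf 'ab -n 10000 -c %s %s\n' "$c" "$url"
    done
}

build_ab_cmds "http://127.0.0.1/index.html"
```

Printing the commands first makes it easy to review the sweep before letting it hammer the server.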
The first run, with Nginx, was impressive. It peaked at 14,000 requests per second. Given the wimpy VM I ran it on, those numbers are pretty good. What surprised me was that Apache managed only half that. I will say for the record that I do not know how to tune the event MPM. But I don’t really have to tune Nginx to get 14k requests per second, so I was expecting a little better from Apache. So I pulled out all of the LoadModule statements I could while still keeping a functional Apache installation. While the numbers improved by 25% or so, they were still well shy of what Nginx was capable of. Then I added the prefork MPM to provide a baseline. Again, I was surprised. The event MPM was faster than the prefork MPM for static content, but not by much.
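For reference, switching between the two MPMs on Apache 2.4 is a matter of loading exactly one MPM module. A minimal sketch; the module paths are assumptions and vary by distribution:

```apache
# Load exactly one MPM. Comment one out and uncomment the other to
# switch between the event and prefork runs compared above.
LoadModule mpm_event_module modules/mod_mpm_event.so
#LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
```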
So it seems that if you are serving static content, Nginx is still your best bet. If you are serving static content from a CDN, or have a load balancer in front of an Apache that is running PHP, then the prefork MPM is the way to go. While the event MPM will help with concurrency, it will not speed up PHP and so is not really needed.
I guess it’s time for you to run a benchmark with a “true” PHP application behind it 🙂
And let us know what version of nginx you’re using… I’m using mostly 1.4.3 and 1.4.7 (I have no idea if there are substantial performance differences between them; the changelogs show mainly bug fixes and security patches).
Perhaps I am confused… doesn’t this show that nginx is way faster than Apache 2.4’s event MPM?
Gwyneth Llewelyn A “true” PHP application would actually make the results a little murkier, because the test would involve more logic being executed. And largely (though some circumstances might negate this), once you are in PHP, the web server doesn’t matter much, so the true performance difference would be minimized. The actual performance difference between the two, once a real PHP application is executing, is negligible; the performance benefit of Nginx largely comes from dishing out static content.
BrianLayman Yep, it does show that Nginx is faster than Apache. I was hoping that Apache’s event MPM would bring Apache close. In the end it doesn’t. The “faster” I was referring to is the common claim that Nginx is faster than Apache and is, therefore, always a better choice. My quick test was the previous blog post I referenced, which showed that, when running PHP (that’s key), the Apache prefork MPM is faster. Just a smidge, but faster. That benefit goes away, however, once static content is included, at which point Nginx is faster.
kschroeder Good point… and the benchmark would actually only reveal performance data for that single application, and not others. So I guess you’re right, you did your benchmark well.
Were these tests run with AllowOverride off?
That is actually a very good question. I don’t recall.
I bet AllowOverride was turned on…
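For anyone re-running this, disabling the per-request .htaccess scan is a single directive. A sketch, assuming a typical DocumentRoot:

```apache
<Directory "/var/www/html">
    # Skip .htaccess lookups on every request entirely; all
    # configuration must then live in the main server config.
    AllowOverride None
</Directory>
```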
I have spent the past week or so setting up a load-balancing environment and have tried quite a few combos. I have tried Varnish with an Nginx backend and an Apache backend, HAProxy with Nginx and Apache backends, Nginx with an Nginx backend, and now my final setup: an Nginx load balancer with Apache 2 running the event MPM and PHP-FPM, and it’s loads faster than any of the other combos. My question is, once the backend servers start getting load, how are they going to hold up? LOL
So that’s an Apache loaded with unnecessary modules (the default config), extra directives, scanning for and loading changes to .htaccess files, and loads of features which Nginx won’t have for years to come, versus Nginx.
I just benchmarked the two (nginx 1.6). The problem is that Nginx is designed for benchmarks. If you tune Apache for benchmarks, it’ll outperform Nginx, and it did in my case.
how do I repeat your benchmark?
Are you running the Apache event MPM + mod_php, or the event MPM + FastCGI?
The Apache event MPM + mod_php really does not buy much [I’m not really sure how it would buy anything, actually]. With mod_php, each Apache thread executes PHP directly, so it makes almost zero use of libevent, libuv or whatever flavor of the month.
Using the event MPM, the idea is that when it comes time to execute a CGI script, the processing is passed off to a separate server and the Apache worker thread goes to sleep until there is a response or a timeout. This means the event-driven server can process many more requests, since it can run more workers.
This is how nginx works. Nginx was built around an event loop from the start, which helps it be so screamingly fast. So to test it, you have to use the Apache event MPM + FastCGI. In this case, the performance for serving PHP will be based on how the web server handles its worker threads.
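A minimal sketch of the event MPM + FastCGI setup described here, handing PHP off to a PHP-FPM pool via mod_proxy_fcgi (the socket path is an assumption; this form needs Apache 2.4.10 or later):

```apache
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so

# Hand .php requests to a separate PHP-FPM pool so the event MPM's
# threads stay free while PHP executes.
<FilesMatch "\.php$">
    SetHandler "proxy:unix:/run/php-fpm/www.sock|fcgi://localhost"
</FilesMatch>
```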
Moreover, benchmarking without longer-running scripts that generate larger content is not really useful. Since the whole purpose of an event loop is to allow more in-flight requests while waiting for results, serving fast 100-byte responses means you’re not testing how the server handles suddenly having 100 different 17 KB responses coming back from FPM. Here the two will behave wildly differently, since Nginx counts on persistent FPM connections by default and Apache counts on unique ones. So 100 17 KB results all coming over a single FastCGI connection will be processed differently than 100 17 KB results over 100 sockets.
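The persistent-connection behavior mentioned here is configurable on the nginx side. A sketch of a keepalive FastCGI upstream (upstream name and socket path are assumptions):

```nginx
upstream php_fpm {
    server unix:/run/php-fpm/www.sock;
    # Keep up to 16 idle FastCGI connections open per worker instead
    # of opening a new socket for every request.
    keepalive 16;
}

server {
    location ~ \.php$ {
        fastcgi_pass php_fpm;
        # Required for keepalive to work with FastCGI upstreams.
        fastcgi_keep_conn on;
    }
}
```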
Nginx also performs some special caching tricks on small responses. Even though you still had to transfer the same 100 bytes over and over to Nginx, once Nginx receives it, based on the hash of the data it will send the data from its copy in cache, avoiding having to copy the response from one region of memory to another, run it through some filters, and then copy it again. It only does this for small responses.
Lastly, I really dislike the common benchmarking method you used: running ab at 1, 10, …
So basically what you’re really testing is how Nginx scales as traffic increases from low to high. Nginx’s default configuration runs many, many more processes/threads than Apache’s does. So in Apache you go: 1 concurrent, 5 concurrent, 10 concurrent (oops, Apache has to start up another 10 threads), 15 concurrent, 20 concurrent (oops, time to start another 10 threads). That is why, if you look at the raw data, you see these really weird sudden drops in performance as concurrency crosses a new thread boundary.
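Those thread-spawn stalls can be smoothed out by pre-creating capacity. A sketch of the relevant event-MPM knobs; the values are illustrative, not tuned:

```apache
<IfModule mpm_event_module>
    StartServers          4
    ThreadsPerChild      25
    # Keep enough spare threads around that a jump in concurrency
    # doesn't have to wait for a new child process to spin up.
    MinSpareThreads     100
    MaxSpareThreads     250
    MaxRequestWorkers   400
</IfModule>
```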
Run it in the opposite direction and you will see an initial high performance for nginx [it ramps up faster to 10,000 concurrent], but then as you go down, the charts start to differ: nginx is more aggressive about cleaning up/killing extra threads. Try varying back and forth: 10,000 to 5,000, to 8,000, to 500, to 3,000. Of course this will not be a nice chart, but it will be real world.
G-WAN has some interesting benchmarking tools and a different metric. I don’t agree with the author on some of his choices [he’s still using small files and sequential ramp-up], but his tools do a better job of tracking memory.