by Kevin Schroeder | 2:14 pm

Having started working at Magento, I have been making myself more familiar with the different parts of the community.  I have spent a fair amount of time over the past several weeks trying to understand how people work with Magento and what their problems are.

One of the things that often comes up is speed.  There are lists of things people can do to try to make Magento faster.  But there's something that bugs me in many of these lists: they often say that to make your Magento installation run faster you need to put certain things on tmpfs or a RAM drive.

Makes sense, right?  The disk is slow, RAM is fast and so a RAM drive must be fast.  Right?


Yes.  The disk is slow.  But that does not mean that the file system is.  The disk is only part of the file system.  The file system includes this nifty thing called a disk block cache.  The file system will cache often-used disk blocks in RAM.

What?  The file system uses RAM?  Yep.  When you type in “top” and you see that value for “cached”?  That is RAM that the operating system is using to store disk blocks.
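The effect is easy to see from a shell on Linux.  This is my own sketch, not from the post; the file name and sizes are arbitrary, and dd's `iflag=direct` (which bypasses the page cache) is not supported on every filesystem:

```shell
# Write a 64 MB file; its blocks land in the page cache.
dd if=/dev/zero of=/tmp/cachetest bs=1M count=64 2>/dev/null

# Buffered read: served from the page cache, typically gigabytes/sec.
dd if=/tmp/cachetest of=/dev/null bs=1M 2>&1 | tail -n1

# Direct read: iflag=direct bypasses the cache and goes to the disk.
# (Fails with EINVAL on filesystems without O_DIRECT support, e.g. tmpfs.)
dd if=/tmp/cachetest of=/dev/null bs=1M iflag=direct 2>&1 | tail -n1

# The "cached" value top shows is the same number reported here:
grep '^Cached:' /proc/meminfo

rm /tmp/cachetest
```

The buffered read should report a rate far beyond what the physical disk can deliver, which is exactly why serving files "from disk" can be as fast as serving them from RAM.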

The result of that caching is this chart.  It measures throughput, in requests per second, for a static resource served by Nginx over HTTP, load tested from a remote host on the same network.

Server Throughput (HTTP Requests per Second)

To conduct the test I did three runs for each of three backing stores: a physical ext3 disk, tmpfs, and a RAM drive.  Each run made 10,000 requests at a concurrency of 100.  The results show that the physical media was actually the fastest!
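The post doesn't name the load-testing tool or the mount points, so this setup sketch is all assumption: ApacheBench stands in for the tool, the paths and sizes are hypothetical, and the mounts require root:

```shell
# Three document roots backed by three different stores (paths hypothetical).
mount -t tmpfs -o size=64m tmpfs /var/www/tmpfs    # tmpfs

modprobe brd rd_size=65536                         # creates /dev/ram0
mkfs.ext3 /dev/ram0
mount /dev/ram0 /var/www/ramdisk                   # RAM block device

# /var/www/disk is assumed to sit on the physical ext3 disk already.

# 10,000 requests at a concurrency of 100, as in the post, from a
# remote host on the same network (ab's flags: -n requests, -c concurrency).
ab -n 10000 -c 100 http://server/static/file.html
```

Because the file is small and requested repeatedly, the ext3 copy is served from the page cache after the first read, which is why all three stores end up in the same ballpark.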

I was actually a little surprised by that.  I was expecting the physical (local) file system merely to keep up; instead it was faster.  I would be willing to chalk that win up to entropy in the system, but the assertion that a RAM drive or tmpfs is faster than the file system is clearly not true.

Don’t get me wrong.  RAM drives are great if you want to explicitly define a file system which WILL stay in RAM.  However, I side with Linus Torvalds (when he was talking about O_DIRECT) that the purpose of an operating system is to manage a lot of this for you.  You might be able to get better results from a RAM drive or tmpfs with some tuning, but it would seem to be a micro-optimization at best or a giant waste of time at worst.



Hey Kevin,
Is there a list of filesystems that support in-memory block caching?

Mar 22.2013 | 07:22 am


    dragooni It is a feature of the kernel and not the file system.  Therefore it would be available to all file systems (I’m pretty sure this is true).  It can be “bypassed” by passing the O_DIRECT option to open() in C which allows the application to directly control physical reads and writes.
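You don't need to write C to try the bypass: dd exposes the same flag (its `oflag=direct` opens the output file with O_DIRECT).  A sketch of mine, with the caveat that some filesystems reject O_DIRECT outright:

```shell
# Buffered write: completes as soon as the blocks are in the page
# cache; the kernel flushes them to the device later.
dd if=/dev/zero of=./odirect-test bs=1M count=32 2>&1 | tail -n1

# Direct write: oflag=direct opens the file with O_DIRECT, so every
# block must reach the device before dd moves on; typically far slower.
# (Fails with EINVAL on filesystems that don't support O_DIRECT.)
dd if=/dev/zero of=./odirect-test bs=1M count=32 oflag=direct 2>&1 | tail -n1

rm ./odirect-test
```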

    Mar 22.2013 | 07:36 am


      kschroeder this is pretty cool and I encourage you to publish information educating people that disk-level caching already exists in memory and they don’t need memcached/redis all the time.
      A recommendation of a good disk-based caching library would also be valuable for people to use/test/benchmark.
      I hope this feedback was useful.

      Mar 22.2013 | 07:41 am


Any idea if the same is true on Windows Server?

Mar 28.2013 | 05:08 pm


    henrylearn2rock I’m pretty sure, yes.  Most, if not all, modern operating systems have disk block caching.  The test is easy: do a code loop writing to the file system and another one reading.  If the timings are vastly different, then the OS is caching.
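The loop test described above can be sketched with dd; file names and sizes are my own choices, and `conv=fsync` is used so the write timing includes the flush to the physical device:

```shell
# Write 128 MB and force it to the device, then read the same file back.
# The read is served from the page cache, so if the OS is caching, it
# should finish far faster than the disk could deliver the data.
time dd if=/dev/zero of=./looptest bs=1M count=128 conv=fsync 2>/dev/null
time dd if=./looptest of=/dev/null bs=1M 2>/dev/null
rm ./looptest
```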

    Mar 29.2013 | 04:31 pm
