Setting max_input_time (with data!)


I asked a question on Twitter about why some of the recommended max_input_time settings seem to be ridiculously large.  Some of the defaults I’ve seen have been upwards of 60 seconds.  After thinking about it, I was a little confused as to why a C program (which is what PHP is, under the hood) would take so long to process string input.
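For context, this is the sort of default I mean; the stock php.ini files that ship with PHP set it along these lines:

```ini
; From the php.ini shipped with PHP: the maximum time in seconds a
; script is allowed to spend parsing request input (GET, POST, uploads).
max_input_time = 60
```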

The reason I was wondering was that I had been thinking about ways to protect PHP from denial-of-service attacks.  Timeouts longer than necessary can exacerbate service availability problems, and while I received some responses, those responses did not contain data.

So I decided to get some data.

I ran the test on a local quad-core VM with about 1GB of memory.  So clearly I wasn’t going to be pushing a lot of data through, but it would be enough to figure out what a typical PHP request would need.

I wrote a little test script using the ZF2 HTTP client which would simulate uploading a file, gathering the elapsed time for sending the request.  I then changed it to measure both read time and full request time.  Read time covers only the span from when the request has been written to the network until data starts coming back.  Since there was almost no data coming back, the read phase should add only a small amount on top of the server-side HTTP processing time.

The script I used was this
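The original listing isn’t reproduced here, but a minimal sketch of the approach, using ZF2’s Zend\Http\Client (the endpoint, payload size, and field names are illustrative, not the originals), would look something like this:

```php
<?php
// Sketch of the kind of script described above, not the original:
// upload an in-memory payload via ZF2's HTTP client and time the request.
// Assumes ZF2 is installed via Composer; the endpoint is hypothetical.
require 'vendor/autoload.php';

use Zend\Http\Client;

$client = new Client('http://localhost/sink.php'); // hypothetical endpoint
$client->setMethod('POST');

// Simulate a file upload with a generated payload (size is illustrative)
$payload = str_repeat('a', 8 * 1024 * 1024); // 8 MB
$client->setFileUpload('test.bin', 'upload', $payload, 'application/octet-stream');

$start = microtime(true);
$response = $client->send(); // full request: write, server parse, read back
$elapsed = microtime(true) - $start;

// Isolating "read time" means timing inside the client's adapter, from
// the end of the request write to the first byte of the response.
printf("status=%d elapsed=%.4fs\n", $response->getStatusCode(), $elapsed);
```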

The read times for multiple file sizes were

The full request times for each were

But most PHP requests are not file uploads; they are URL-encoded form submissions.  So let’s see what happens when we change the data being sent to a regular form submission.
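Before the numbers, it’s worth seeing what URL encoding alone does to a payload. This self-contained sketch (payload contents, field name, and boundary are illustrative) compares the wire size of the same data under each encoding:

```php
<?php
// Illustrative: compare the on-the-wire size of the same payload as
// application/x-www-form-urlencoded vs. multipart/form-data. Non-ASCII
// bytes balloon to three characters each (e.g. "%FF") when URL-encoded,
// and the server then has to decode that whole string back again.

$payload = str_repeat("\xff", 1024 * 1024); // 1 MB of non-ASCII bytes

// urlencoded: every 0xFF byte becomes the three characters "%FF"
$urlencoded = http_build_query(['upload' => $payload]);

// multipart: the payload is embedded verbatim between boundary lines
$boundary = '----testboundary'; // illustrative boundary
$multipart = "--$boundary\r\n"
    . "Content-Disposition: form-data; name=\"upload\"\r\n\r\n"
    . "$payload\r\n--$boundary--\r\n";

printf("urlencoded: %d bytes\n", strlen($urlencoded)); // ~3 MB
printf("multipart:  %d bytes\n", strlen($multipart));  // ~1 MB
```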

I stopped the test there because the system started swapping.

*note* as you can tell from the times, there was a lot of entropy on the system, causing significant variation in response times.  You can expect a system under load to show similar variation.

So there are a couple of things we learned here.

  1. If your system serves simple HTTP requests (no file uploads or crazy form sizes), a max_input_time of 1 second should be sufficient, unless you are under significant load
  2. multipart/form-data processing seems to be MUCH more efficient than URL encoding from a memory usage standpoint (I was not expecting this)
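One practical wrinkle when applying a lower value: max_input_time is a PHP_INI_PERDIR directive, so it cannot be changed with ini_set() at runtime (input parsing happens before your script executes). It has to be set in php.ini, a .user.ini, or, under Apache’s mod_php, an .htaccess entry like the following (the value here is illustrative):

```apacheconf
# .htaccess under Apache mod_php: cap request-input parsing for this
# directory. Pick a higher value for upload endpoints or loaded systems.
php_value max_input_time 2
```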

*note* if you’re wondering why the second batch started at 1MB, it’s because of this change in the testing code

Clearly I could not start at zero.
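The code change itself isn’t shown here, but one plausible shape for such a loop (a guess, not the original code) makes the constraint obvious: if the payload size doubles each pass, it can never advance from a starting size of zero.

```php
<?php
// A guess at the shape of the loop, not the original code: doubling
// the payload size each pass cannot start from zero, because
// 0 * 2 is still 0 -- hence a 1 MB starting point.
for ($size = 1024 * 1024; $size <= 8 * 1024 * 1024; $size *= 2) {
    printf("%d MB\n", $size / (1024 * 1024));
}
// prints: 1 MB, 2 MB, 4 MB, 8 MB
```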
