There are some good rules of thumb for configuring webservers which apply regardless of which one you run (nginx, Apache, etc.).
This page documents a couple of them.
Request Size Limits
Unless you’re allowing remote users to upload images or other large media, you should reject large requests.
A large request leaves your server stuck waiting for the body to be read and processed, and attackers can exploit this for denial of service.
In nginx, incoming requests can be limited in size via:

# Don't allow requests of more than 100k in size.
client_max_body_size 100k;
For Apache the limit is applied like so:
# Don't allow more than 100k to be uploaded.
LimitRequestBody 102400
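The limit need not be global. As an illustrative sketch (the endpoint path here is an assumption, not from the original), nginx lets you keep a tight default while raising the limit only where uploads are expected:

```
http {
    # Tight default for every request.
    client_max_body_size 100k;

    server {
        location /upload {
            # Only the upload endpoint accepts large media.
            client_max_body_size 20m;
        }
    }
}
```

Requests exceeding the applicable limit are rejected with "413 Request Entity Too Large".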
Keep-Alive
The purpose of keep-alive is simple in theory: rather than clients opening a fresh connection to your server for each request they make (the index page, then the CSS, then the images), they open one connection.
When the initial connection is made the socket is kept open, rather than being closed, so that further requests can reuse it.
From the client's point of view this is an improvement: instead of the overhead of establishing, using, and closing multiple connections, only one is used.
However the server is left holding sockets open in the hope that further requests will come; if they don't, resources are needlessly consumed which could be better spent handling fresh visitors.
Generally people suggest leaving a small number of sockets available for keep-alive, or only keeping sockets open for a short period of time – such as five seconds – after which time the chances of a further request are minimal.
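The suggestions above can be sketched in configuration; the five-second timeout and request cap are illustrative values, not recommendations from the original:

```
# nginx: drop idle keep-alive connections after five seconds,
# and bound the number of requests served per connection.
keepalive_timeout  5;
keepalive_requests 100;
```

The equivalent Apache directives:

```
# Apache: enable keep-alive, but keep idle sockets only briefly.
KeepAlive On
KeepAliveTimeout 5
MaxKeepAliveRequests 100
```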
Hostname Lookups
It is almost always a mistake to perform DNS lookups on the clients which make requests against your server.
If your logfiles record only the IP addresses of your visitors, you can configure your log-analysis application (webalizer, etc.) to perform the lookups when it generates its reports.
#
# Don't resolve requesting IPs into hostnames for logging.
#
HostnameLookups Off
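The report-time lookup can be sketched in a few lines of Python. The function names and the log format handled here are illustrative assumptions; the key idea is that each address is resolved at most once, via a cache, rather than once per request as the server would do:

```python
import re
import socket
from functools import lru_cache

# Match the leading IPv4 address of a common-log-format line.
IP_RE = re.compile(r"^(\d{1,3}(?:\.\d{1,3}){3})\b")

@lru_cache(maxsize=None)
def resolve(ip):
    """Resolve an IP once, caching the answer; fall back to the raw IP."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except OSError:
        return ip

def resolve_log_line(line, resolver=resolve):
    """Replace the leading IP of a log line with its hostname."""
    match = IP_RE.match(line)
    if not match:
        return line
    return resolver(match.group(1)) + line[match.end():]
```

Apache ships a `logresolve` utility that performs the same post-processing on an access log.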
When you just can’t scale further
When you can’t scale any further you might gain additional performance by placing a caching proxy in front of your server.
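As a minimal sketch of the idea, nginx itself can act as the caching proxy; the cache path, zone name, backend address, and validity period below are all placeholder assumptions:

```
# Define an on-disk cache zone.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=appcache:10m max_size=1g;

server {
    listen 80;

    location / {
        proxy_pass  http://127.0.0.1:8080;   # the backend being shielded
        proxy_cache appcache;
        # Serve cached copies of successful responses for one minute.
        proxy_cache_valid 200 1m;
    }
}
```

Dedicated caches such as Varnish fill the same role; either way, repeated requests are answered from the cache instead of reaching your server.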
Author: Steve Kemp, of tweaked.io