
Nginx speedup and optimization guide


nginx is a small, fast webserver which generally outperforms most of the alternatives out of the box; however, there is always room for improvement.

In addition to operating as a web-server, nginx can also be used as a reverse HTTP proxy, forwarding requests it receives to different back-end servers.

General Tuning

nginx uses a fixed number of workers, each of which handles incoming requests. The general rule of thumb is that you should have one worker for each CPU-core your server contains.

You can count the CPUs available to your system by running:

$ grep ^processor /proc/cpuinfo | wc -l
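An equivalent, and slightly shorter, approach is the `nproc` tool (part of GNU coreutils on most Linux systems):

```shell
# nproc reports the number of processing units available to the process:
nproc
```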

With a quad-core processor this would give you a configuration that started like so:

# One worker per CPU-core.
worker_processes  4;

events {
    worker_connections  8096;
    multi_accept        on;
    use                 epoll;
}

worker_rlimit_nofile 40000;

http {
    sendfile           on;
    tcp_nopush         on;
    tcp_nodelay        on;
    keepalive_timeout  15;

    # Your content here ..
}

Here we've also raised the worker_connections setting, which specifies how many connections each worker process can handle.

The maximum number of simultaneous connections your server can handle is the product of:

  • worker_processes * worker_connections (= 32384 in our example).
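That arithmetic can be checked directly in the shell:

```shell
# maximum simultaneous connections = workers * connections per worker
echo $((4 * 8096))   # → 32384
```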

We've enabled multi_accept which causes nginx to attempt to immediately accept as many connections as it can, subject to the kernel socket setup.

Finally, use of the epoll event-model is generally recommended for best throughput on Linux.
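The worker_rlimit_nofile setting above is only useful if your operating system actually allows that many open file descriptors; you can inspect the current limits from the shell:

```shell
# Soft per-process file-descriptor limit for the current shell:
ulimit -n

# System-wide maximum number of open file handles (Linux):
cat /proc/sys/fs/file-max
```

If `ulimit -n` is lower than what you've asked nginx to use, raise it (for example via /etc/security/limits.conf, or the systemd unit's LimitNOFILE setting, depending on how nginx is started).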

Compression

One of the first things that many people try to do is to enable the gzip compression module available with nginx. The intention here is that the objects which the server sends to requesting clients will be smaller, and thus faster to send.

However this involves the trade-off common to tuning: performing the compression takes CPU resources from your server, which frequently means that you'd be better off not enabling it at all.

Generally the best approach with compression is to only enable it for large files, and to avoid compressing things that are unlikely to be reduced in size (such as images, executables, and similar binary files).

With that in mind the following is a sensible configuration:

gzip  on;
gzip_vary on;
gzip_min_length 10240;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
gzip_disable "MSIE [1-6]\.";

This enables compression for files that are over 10k, aren't being requested by early versions of Microsoft's Internet Explorer, and only attempts to compress text-based files.
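The reasoning behind skipping binary files can be demonstrated with `gzip` itself. Repetitive text shrinks dramatically, while already-dense data (random bytes here, standing in for JPEGs and other compressed formats) barely shrinks at all, and may even grow slightly (exact sizes will vary):

```shell
# ~12000 bytes of repetitive text compresses to a tiny fraction of its size:
yes "hello world" | head -n 1000 | gzip -c | wc -c

# 10240 random bytes stay roughly the same size after compression:
head -c 10240 /dev/urandom | gzip -c | wc -c
```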

Client Caching

A simple way of reducing the requests your server must handle is to ensure remote clients believe the content they already hold is up-to-date; if so, they won't re-request it at all.

To do this you want to set suitable cache-friendly headers, and a simple way of doing that is to declare that all images, etc, are fixed for a given period of time:

location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
    access_log        off;
    log_not_found     off;
    expires           30d;
}

Here we've disabled logging of media-files, and configured several suffixes which will be considered valid (and thus not re-fetched) for 30 days. You might want to adjust the period if you're editing your CSS files frequently.
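If the content is also served through a CDN or intermediate proxy, you may want to state explicitly that shared caches are allowed to store these responses. A possible variation of the block above (the `add_header` line is the addition; `expires` already emits the Expires and Cache-Control max-age headers):

```nginx
location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
    access_log        off;
    log_not_found     off;
    expires           30d;
    # Explicitly allow shared caches (proxies, CDNs) to store these responses:
    add_header        Cache-Control "public";
}
```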

Filehandle Cache

If you're serving a large number of static files you'll benefit from keeping filehandles to requested files open – this avoids the need to reopen them in the future.

NOTE: You should only run with this enabled if you're not editing the files at the time you're serving them. Because file accesses are cached, any 404s will be cached too; similarly, file sizes are cached, so if you change a file while its handle is cached the content you serve may be stale.

The following is a good example, which can be placed in either the server section of your nginx config, or within the main http block:

open_file_cache          max=2000 inactive=20s;
open_file_cache_valid    60s;
open_file_cache_min_uses 5;
open_file_cache_errors   off;

This configuration block tells the server to cache 2000 open filehandles, closing handles that have had no hits for 20 seconds. The cached handles are considered valid for 60 seconds, and only files that have been accessed at least five times will be considered suitable for caching.

The net result should be that frequently accessed files have handles cached for them, which will cut down on filesystem accesses.

Tip: You might consider the kernel tuning section, which discusses raising the file-handle limit.

Tuning for PHP

Many sites involve the use of PHP, for example any site built around WordPress.

As there is no native mod_php for nginx, the recommended way to run PHP applications involves the use of PHP-FPM. To use PHP-FPM you essentially proxy requests for PHP files as follows:

# execute all .php files via php-fpm
location ~ \.php$ {
    # connect to a unix domain-socket:
    fastcgi_pass   unix:/var/run/php5-fpm.sock;
    fastcgi_index  index.php;

    fastcgi_param   SCRIPT_FILENAME    $document_root$fastcgi_script_name;
    fastcgi_param   SCRIPT_NAME        $fastcgi_script_name;

    fastcgi_buffer_size 128k;
    fastcgi_buffers 256 16k;
    fastcgi_busy_buffers_size 256k;
    fastcgi_temp_file_write_size 256k;

    # This file is present on Debian systems..
    include fastcgi_params;
}

Notice that this uses a Unix domain-socket to connect to FPM, so you need to ensure that you change /etc/php5/fpm/pool.d/www.conf to read:

listen = /var/run/php5-fpm.sock

This ensures that FPM will listen on a domain socket, rather than the default of listening on a network socket (the default is "listen = 127.0.0.1:9000").

By default PHP-FPM will start a number of dedicated workers, each running an instance of PHP. If you're not short of memory you can increase the number of workers to improve concurrent throughput.

Edit the file /etc/php5/fpm/pool.d/www.conf to change the number of children:

; set a fixed number of php workers
pm = static

; start up 12 php processes
pm.max_children = 12

This is an example setting, of the kind we mention in the considerations section – the value of 12 is just an example, which you will need to test under-load to see if it is reasonable for your particular setup.
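If memory is tight, or your traffic is bursty, an alternative is to let FPM scale the pool itself rather than pinning a fixed size. A possible sketch for the same www.conf file (the specific numbers are illustrative and need the same load-testing as above):

```ini
; let FPM grow and shrink the pool between bounds
pm = dynamic

; never run more than 12 workers
pm.max_children = 12

; workers started at boot, and the idle-worker bounds FPM maintains
pm.start_servers = 4
pm.min_spare_servers = 2
pm.max_spare_servers = 6
```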

Finally you can configure PHP-FPM to restart automatically if there are problems. In this example the master process restarts itself if ten child processes die within one minute, and child processes are given ten seconds to react to signals from the master before being dealt with forcibly.

This is a global configuration and belongs in /etc/php5/fpm/php-fpm.conf:

emergency_restart_threshold 10
emergency_restart_interval 1m
process_control_timeout 10s

TIP: You might consider the use of the APC cache, which will cache parsed versions of your PHP-files. This should provide a speedup, even in a PHP-FPM setup.
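On a Debian-style PHP 5 setup, enabling APC is typically a matter of installing the php-apc package and adjusting its ini file; a rough sketch (the path and size are assumptions that vary by distribution and codebase):

```ini
; e.g. /etc/php5/conf.d/apc.ini (path varies by distribution)
apc.enabled = 1

; shared memory for the opcode cache; size it to fit your codebase
apc.shm_size = 64M
```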

Author: Steve Kemp, of  tweaked.io
