
Benchmarking

Considerations

Although it is often possible to improve the responsiveness of your servers and services, you should regularly test your server under load to ensure it continues to function as you expect.

In most of the recipes we have posted we’ve listed suggested values for settings such as max_connections. These are just guidelines that work well on reasonably powered servers.

You might find that, because a single connection requires (for example) 10MB of RAM, you can only sustain 500 simultaneous connections before your system runs out of resources.

You need to find this limit for yourself, and adjust accordingly.
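
As a rough sizing sketch, divide the RAM you can devote to connections by the cost of each one. The figures below are assumptions for illustration, not measurements:

# Assume ~10MB per connection and ~5GB (5120MB) of RAM set aside for connections.
echo $(( 5120 / 10 ))    # roughly 512 connections before memory runs out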

Mitigating Service Failures

Even if things are working well, it is helpful to install some kind of process monitoring on your servers.

These will catch the case where your services exit unexpectedly, and restart them quickly.

There are many tools to consider, such as monit or runit, each suited to a slightly different kind of control:

  • Restarting services when they fail or stop responding.
  • Restarting services which have exited unexpectedly.
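
As a concrete example, runit supervises anything started from a "run" script and restarts it whenever it exits. A minimal sketch of such a script for nginx might look like this (the path /etc/sv/nginx/run and the nginx invocation are assumptions for illustration):

#!/bin/sh
# /etc/sv/nginx/run -- runit will restart this process whenever it exits.
exec 2>&1
exec /usr/sbin/nginx -g 'daemon off;'

The important detail is that nginx runs in the foreground ("daemon off;"), so the supervisor can see when it dies. monit works at a higher level, polling an existing service and restarting it via its init script when a check fails.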

Measuring Performance

Whenever you attempt to tweak your system's performance you need to know what result, if any, you've achieved.

The best way to do that is to use a tool to test your server's performance, or throughput, and run it both before and after making any particular change.

There are several benchmarking applications and tools available; this page lists the ones you're most likely to need.

Webserver Stress-Testing

Webserver benchmarking largely consists of firing off a few thousand requests at a server, and reporting on the min/max/average response time.

Obviously a server “loses points” if some of the responses are errors rather than valid results, but generally the testing is a matter of juggling the number of total requests, or concurrent requests, until you reach a point where the server starts to take too long to respond, or fails completely.

ab

For many years one of the most popular benchmarking tools has been ab, the Apache benchmark tool. It is looking a little dated now, but it still works well.

If you’re running Apache you probably have this installed already, which explains its popularity. Give it a URL, a number of requests, and a concurrency level and it will issue a small report:

ab -c 10 -n 1000 http://example.com/

The example above fires 1000 requests, ten at a time, at the URL http://example.com/, and the results for me look something like this:

Concurrency Level:      10
Time taken for tests:   36.496 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      1610000 bytes
HTML transferred:       1270000 bytes
Requests per second:    27.40 [#/sec] (mean)
Time per request:       364.955 [ms] (mean)
Time per request:       36.496 [ms] (mean, across all concurrent requests)
Transfer rate:          43.08 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:      179  182   1.3    182     192
Processing:   182  183   1.2    182     195
Waiting:      181  183   1.2    182     195
Total:        361  364   1.9    364     383

siege

Siege is very similar to the Apache benchmark tool, and among other options it allows you to request a list of URLs read from a text file.

A simple example might be to test a server with 50 concurrent users, each pausing for a random delay of up to 10 seconds between requests:

siege -d10 -c50 -t60s http://example.com/

The use of -t60s causes the testing to cease after a minute. If you were testing a large site you would probably wish to run it for much longer.

Perhaps a more realistic approach is to save a list of URLs to the file ~/urls.txt, and fire requests from that file, randomly:

siege -d10 -c50 -t60s -i -f ~/urls.txt
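
The file itself is just one URL per line, something like this (placeholder URLs):

http://example.com/
http://example.com/about/
http://example.com/products/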

In either case we’d expect to see output like this:

Transactions:                442 hits
Availability:             100.00 %
Elapsed time:              59.43 secs
Data transferred:           0.54 MB
Response time:              0.37 secs
Transaction rate:           7.44 trans/sec
Throughput:             0.01 MB/sec
Concurrency:                2.72
Successful transactions:         442
Failed transactions:               0
Longest transaction:            0.40
Shortest transaction:           0.36

Online services

There are several online testing sites which you can use to graph your server's response time; these have the advantage that you're not limited by your personal upload speed.

As many require subscriptions or validation they're not suitable for everybody, but as an alternative to testing yourself they can be useful.

Simpler services will just test a single page and report the time taken to load it and all associated resources (CSS, images, etc.). Not quite as valuable, but interesting nonetheless.

Benchmarks Are Unfair

Here we’re going to look at the results of applying some of the nginx-proxying updates to a standard server.

The testing we’re carrying out is of nginx as a reverse-proxy in front of a slow HTTP-server:

  • Internet > nginx > Application server

Posting benchmarks is usually a mistake, because the only thing you care about is the increase in speed in your server, or application. Benchmarks tend to be run on unrelated systems, with different hardware and unrealistic traffic patterns.

The only fair benchmark I can think of posting is of a reverse-proxy, because everything else stays the same. The actual performance of the back-end is largely irrelevant; we're just testing the overhead, or speedup, introduced by the middle layer.

But note that even this benchmark isn't fair or realistic, because it only covers the case of hitting a single static URL; it isn't representative of a real website.

Initial Testing

The initial testing is literally nginx proxying straight through to the application, with zero caching and zero tweaks.

The nginx configuration file is available for download.
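
The file isn't reproduced here, but the heart of an untweaked proxy configuration looks something like the sketch below. The listening port matches the tests which follow; the back-end address is an assumption:

server {
    listen 8888;

    location / {
        # Pass every request straight through to the slow application server.
        proxy_pass http://127.0.0.1:8000;
    }
}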

The testing uses siege to fire off 50 concurrent connections for 2 minutes, via this command:

siege -c 50 -t120s http://localhost:8888/index.html

The results were:

Transactions:     10477 hits
Availability:       100.00 %
Transaction rate:    87.31 trans/sec

Adding Caching

We now configure nginx to cache the results of all static pages, which, as luck would have it, is all of our site, since this is not a dynamic application.

The updated configuration file is available for download.
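
Again the file isn't shown here, but the kind of directives involved are sketched below; the cache path, zone name, sizes, and validity period are assumptions to be tuned for your own site:

# In the http context: where the cache lives, and how large it may grow.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static:10m max_size=100m;

server {
    listen 8888;

    location / {
        proxy_pass        http://127.0.0.1:8000;
        proxy_cache       static;
        proxy_cache_valid 200 10m;   # cache successful responses for ten minutes
    }
}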

We’d expect a significant increase in performance with this caching, because we only expect to hit the back-end once – the rest of the requests will be served from the cache.

As expected our throughput increased, because only a single request was actually processed by the back-end. The rest of the responses came from the cache:

Transactions:     11882 hits
Availability:       100.00 %
Transaction rate:    99.31 trans/sec

We jumped from 87 transactions per second to 99. Not a huge gain, but certainly a measurable one.

Updating Buffer Sizes

Finally we tweak the buffer sizes. The average page response from our simple HTTP server is 50k, but we're only hitting the front page, which is a little smaller.
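
The relevant proxy-buffer directives look something like this; the values are assumptions sized around a ~50k response, and should be tuned against your own pages:

location / {
    proxy_pass               http://127.0.0.1:8000;
    proxy_buffer_size        8k;      # headers and the start of the response
    proxy_buffers            8 16k;   # 8 x 16k = 128k available per connection
    proxy_busy_buffers_size  32k;
}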

Updating the buffer-sizes in our configuration file leads to the following results:

Transactions:     12930 hits
Availability:       100.00 %
Transaction rate:   107.75 trans/sec

This took us from our original result of 87 transactions per second to just over a hundred.

Conclusion

If you’re serving static files then using nginx as a caching reverse-proxy will give you a performance boost.

In the real world most sites are not 100% static, but you can apply this idea to serving /js, /media, /images, and similar paths from an nginx cache, while proxying the dynamic requests to another server.
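
A sketch of that split might look like the following; the path names and back-end address are assumptions:

location ~ ^/(js|media|images)/ {
    proxy_pass        http://127.0.0.1:8000;
    proxy_cache       static;        # served from the cache zone defined earlier
    proxy_cache_valid 200 60m;
}

location / {
    proxy_pass http://127.0.0.1:8000;   # dynamic requests go straight through
}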

Author: Steve Kemp, of tweaked.io
