The performance of your application affects your business more than you might think. Top engineering organizations treat performance not as a nice-to-have, but as a crucial feature of their product. They understand that performance has a direct impact on user experience and, ultimately, their bottom line. Unfortunately, most engineering teams do not regularly test the performance and scalability of their infrastructure. In my last post on performance testing I highlighted a long list of tools that can be used for load testing. In this post we will walk through four performance testing tools: Apache Bench, Siege, Multi-Mechanize, and Bees with Machine Guns. I will show simple examples to get you started performance testing your web applications, regardless of the language they are written in.
Why performance matters
A few statistics about the business impact of performance at major internet companies:
- Amazon and Walmart increased revenue 1% for every 100ms of improvement
- Microsoft found that Bing searches that were 2 seconds slower resulted in a 4.3% drop in revenue per user
- When Mozilla shaved 2.2 seconds off their landing page, Firefox downloads increased 15.4% (60 million more downloads)
- Making Barack Obama’s website 60% faster increased donation conversions by 14% (30 million more dollars)
- Shopzilla sped up average page load time from 6 seconds to 1.2 seconds, and increased revenue by 12% and page views by 25%
As you iterate through the software development cycle it is important to measure application performance and understand the impact of every release. As your production infrastructure evolves you should also track the impact of package and operating system upgrades. Here are some tools you can use to load test your production applications:
Apache Bench
Apache Bench is a simple load testing tool that ships with the Apache httpd server. Here is a simple example that load tests example.com with 10 concurrent connections for 10 seconds.
Install Apache Bench:
apt-get install apache2-utils
Benchmark a web server with 10 concurrent connections for 10 seconds:
ab -c 10 -t 10 -k http://example.com/
Benchmarking example.com (be patient)
Finished 286 requests

Server Software:        nginx
Server Hostname:        example.com
Server Port:            80

Document Path:          /
Document Length:        6642 bytes

Concurrency Level:      10
Time taken for tests:   10.042 seconds
Complete requests:      286
Failed requests:        0
Write errors:           0
Keep-Alive requests:    0
Total transferred:      2080364 bytes
HTML transferred:       1899612 bytes
Requests per second:    28.48 [#/sec] (mean)
Time per request:       351.133 [ms] (mean)
Time per request:       35.113 [ms] (mean, across all concurrent requests)
Transfer rate:          202.30 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        9   20  14.2     17     148
Processing:   117  325  47.8    323     574
Waiting:      112  317  46.3    314     561
Total:        140  346  46.0    341     589

Percentage of the requests served within a certain time (ms)
  50%    341
  66%    356
  75%    366
  80%    372
  90%    388
  95%    408
  98%    463
  99%    507
 100%    589 (longest request)
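Apache Bench can also stop after a fixed number of requests instead of running for a fixed length of time. For example, to send 1,000 requests at the same concurrency, swap -t for -n:

ab -n 1000 -c 10 -k http://example.com/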
Siege + Sproxy
Personally, I prefer Siege to Apache Bench for simple load testing as it is a bit more flexible.
Install Siege:
apt-get install siege
Siege a web server with 10 concurrent connections for 10 seconds:
siege -c 10 -b -t 10S http://example.com/
** SIEGE 2.72
** Preparing 10 concurrent users for battle.
The server is now under siege...
Lifting the server siege...      done.

Transactions:                    263 hits
Availability:                 100.00 %
Elapsed time:                   9.36 secs
Data transferred:               0.35 MB
Response time:                  0.35 secs
Transaction rate:              28.10 trans/sec
Throughput:                     0.04 MB/sec
Concurrency:                    9.82
Successful transactions:         263
Failed transactions:               0
Longest transaction:            0.54
Shortest transaction:           0.19
More often than not you want to load test an entire site, not just a single endpoint. A common approach is to crawl the application to discover every URL and then load test a sample of those URLs (a sampling sketch follows the steps below). The makers of Siege also provide Sproxy, which, in combination with wget, lets you crawl an entire site through a proxy and record every URL accessed. That makes it easy to compile a list of every URL in your application.
1) Start Sproxy, writing all of the URLs it sees to the file urls.txt:
sproxy -o ./urls.txt
2) Use wget through Sproxy to crawl all of the URLs of example.com:
wget -r -o verbose.txt -l 0 -t 1 --spider -w 1 -e robots=on -e "http_proxy = http://127.0.0.1:9001" "http://example.com/"
3) Sort and de-duplicate the list of URLs from our application:
sort -u -o urls.txt urls.txt
4) Siege the list of URLs with 100 concurrent users for 3 minutes:
siege -v -c 100 -i -t 3M -f urls.txt
** SIEGE 2.72
** Preparing 100 concurrent users for battle.
The server is now under siege...
Lifting the server siege...      done.

Transactions:                   2630 hits
Availability:                 100.00 %
Elapsed time:                  90.36 secs
Data transferred:               3.51 MB
Response time:                  0.35 secs
Transaction rate:              88.10 trans/sec
Throughput:                     0.28 MB/sec
Concurrency:                    9.82
Successful transactions:        2630
Failed transactions:               0
Longest transaction:            0.54
Shortest transaction:           0.19
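If the crawl turns up more URLs than you want to hit in a single test, you can siege a random sample rather than the full list. Here is a minimal Python sketch that does the sampling; it assumes the crawl wrote its output to urls.txt in the current directory, and the sample size of 500 is arbitrary:

import random

# Load the de-duplicated URL list produced by the Sproxy/wget crawl.
with open('urls.txt') as f:
    urls = [line.strip() for line in f if line.strip()]

# Keep a random sample of up to 500 URLs (the sample size is arbitrary).
sample = random.sample(urls, min(500, len(urls)))

# Write the sample out so it can be passed to siege with -f.
with open('urls-sample.txt', 'w') as f:
    f.write('\n'.join(sample) + '\n')

Then point Siege at the sampled file instead:

siege -v -c 100 -i -t 3M -f urls-sample.txt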
Multi-Mechanize
When testing web applications you sometimes need to write test scripts that simulate virtual user activity against a site, service, or API. Multi-Mechanize is an open source framework for performance and load testing. It runs concurrent Python scripts to generate load (synthetic transactions) against a remote site or service. Multi-Mechanize is most commonly used for web performance and scalability testing, but it can generate workload against any remote API accessible from Python. Test output reports are saved as HTML or JMeter-compatible XML.
1) Install Multi-Mechanize:
pip install multi-mechanize
2) Bootstrapping a new Multi-Mechanize project is easy, and it generates a sample virtual user script for you:
multimech-newproject demo
import mechanize
import time

class Transaction(object):
    def run(self):
        br = mechanize.Browser()
        br.set_handle_robots(False)
        start_timer = time.time()
        resp = br.open('http://www.example.com/')
        resp.read()
        latency = time.time() - start_timer
        self.custom_timers['homepage'] = latency
        assert (resp.code == 200)
        assert ('Example' in resp.get_data())
3) Run the Multi-Mechanize project and review the generated reports:
multimech-run demo
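The generated script is only a starting point. Because virtual user scripts are plain Python, you can model whatever transactions you care about and record as many custom timers as you like. Below is a rough sketch of a script with two timed steps; the second URL and the timer names are made up for illustration:

import time

import mechanize


class Transaction(object):

    def __init__(self):
        self.custom_timers = {}

    def run(self):
        br = mechanize.Browser()
        br.set_handle_robots(False)

        # Time the homepage.
        start_timer = time.time()
        resp = br.open('http://www.example.com/')
        resp.read()
        self.custom_timers['homepage'] = time.time() - start_timer
        assert (resp.code == 200)

        # Time a second, hypothetical endpoint on the same site.
        start_timer = time.time()
        resp = br.open('http://www.example.com/search?q=test')
        resp.read()
        self.custom_timers['search'] = time.time() - start_timer
        assert (resp.code == 200)

        # Simulate user think time between iterations.
        time.sleep(1)

Each custom timer is reported separately, so you can see how individual steps behave as load increases.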
Bees with Machine Guns
In the real world you need to test your production infrastructure with realistic traffic. In order to generate the amount of load that realistically represents production, you need to use more than one machine. The Chicago Tribune has invested in helping the world solve this problem by creating Bees with Machine Guns. Not only does it have an epic name, but it is also incredibly useful for load testing using many cloud instances via Amazon Web Services. Bees with Machine Guns is a utility for arming (creating) many bees (micro EC2 instances) to attack (load test) targets (web applications).
1) Install Bees with Machine Guns:
pip install beeswithmachineguns
2) Configure Amazon Web Services credentials in ~/.boto:
[Credentials]
aws_access_key_id=xxx
aws_secret_access_key=xxx

[Boto]
ec2_region_name = us-west-2
ec2_region_endpoint = ec2.us-west-2.amazonaws.com
3) Create 2 EC2 instances in the us-west-2b availability zone, using the default security group, the ami-bc05898c image, and the aws-us-west-2 key pair, and log in with the ec2-user user name:
bees up -s 2 -g default -z us-west-2b -i ami-bc05898c -k aws-us-west-2 -l ec2-user
Connecting to the hive.
Attempting to call up 2 bees.
Waiting for bees to load their machine guns...
.
.
.
.
Bee i-3828400c is ready for the attack.
Bee i-3928400d is ready for the attack.
The swarm has assembled 2 bees.
4) Check whether the EC2 instances are ready for battle:
bees report
Read 2 bees from the roster.
Bee i-3828400c: running @ 54.212.22.176
Bee i-3928400d: running @ 50.112.6.191
5) Attack a URL once the EC2 instances are ready for battle:
bees attack -n 100000 -c 1000 -u http://example.com/
Read 2 bees from the roster.
Connecting to the hive.
Assembling bees.
Each of 2 bees will fire 50000 rounds, 125 at a time.
Stinging URL so it will be cached for the attack.
Organizing the swarm.
Bee 0 is joining the swarm.
Bee 1 is joining the swarm.
Bee 0 is firing his machine gun. Bang bang!
Bee 1 is firing his machine gun. Bang bang!
Bee 1 is out of ammo.
Bee 0 is out of ammo.
Offensive complete.

Complete requests:      100000
Requests per second:    1067.110000 [#/sec] (mean)
Time per request:       278.348000 [ms] (mean)
50% response time:      47.500000 [ms] (mean)
90% response time:      114.000000 [ms] (mean)

Mission Assessment: Target crushed bee offensive.
The swarm is awaiting new orders.
6) Spin down all of the EC2 instances:
bees down
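Because the bees are real EC2 instances, they cost money for as long as they are running, and it is easy to forget the final bees down. One option is to wrap the whole workflow in a small script so the instances are always spun down, even if the attack fails part way through. Here is a minimal sketch, reusing the same options as the commands above:

import subprocess


def bees(*args):
    # Run a bees subcommand and raise if it exits non-zero.
    cmd = ['bees'] + list(args)
    print('+ ' + ' '.join(cmd))
    subprocess.check_call(cmd)


try:
    bees('up', '-s', '2', '-g', 'default', '-z', 'us-west-2b',
         '-i', 'ami-bc05898c', '-k', 'aws-us-west-2', '-l', 'ec2-user')
    bees('attack', '-n', '100000', '-c', '1000', '-u', 'http://example.com/')
finally:
    # Always spin the instances down, even if the attack errors out.
    bees('down')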
AppDynamics
With AppDynamics Pro you get in-depth performance metrics to evaluate the scalability and performance of your application. Use the AppDynamics Pro Metrics Browser to track key response times and errors over the duration of your load tests.
Use the AppDynamics Scalability Analysis Report to evaluate how your application performs under load.
Use AppDynamics Pro to compare multiple application releases and see how performance and stability change from release to release.
Get started with AppDynamics Pro today for in-depth application performance management.
As always, please feel free to comment if you think I have missed something or if you have a request for content in an upcoming post.