rails/rails-perftest

Name: rails-perftest

Owner: Ruby on Rails

Description: Benchmark and profile your Rails apps

Created: 2013-01-07 21:39:12.0

Updated: 2018-01-13 05:13:23.0

Pushed: 2017-02-22 13:45:20.0

Homepage: null

Size: 52

Language: Ruby

GitHub Committers

| User                | Most Recent Commit    | # Commits |
| ------------------- | --------------------- | --------- |
| Yves Senn           | 2017-02-22 13:28:19.0 | 38        |
| Anatol Pomozov      | 2013-12-17 19:53:02.0 | 1         |
| Claudio B.          | 2013-04-05 19:47:28.0 | 2         |
| Nicholas Rutherford | 2015-07-02 09:50:56.0 | 3         |
| Eliot Sykes         | 2015-03-30 13:01:32.0 | 1         |
| Rafael França       | 2013-12-17 20:17:44.0 | 3         |
| Miklós Fazekas      | 2014-06-23 08:13:35.0 | 1         |
| Ali Fakheri         | 2015-11-26 14:46:08.0 | 1         |
| Yuji Yaginuma       | 2015-07-07 22:56:55.0 | 1         |

Other Committers

| User | Email | Most Recent Commit | # Commits |
| ---- | ----- | ------------------ | --------- |

README

Performance Testing Rails Applications

This guide covers the various ways of performance testing a Ruby on Rails application.

After reading this guide, you will know:

Performance testing is an integral part of the development cycle. It is very important that you don't make your end users wait too long for a page to load completely. Ensuring a pleasant browsing experience for end users and cutting the cost of unnecessary hardware is important for any non-trivial web application.


Installation

As of Rails 4, performance tests are no longer part of the default stack. If you want to use performance tests, simply follow these instructions.

Add this line to your application's Gemfile:

gem 'rails-perftest'

If you want to benchmark/profile under MRI or REE, add this line as well:

gem 'ruby-prof'

Now run bundle install and you're ready to go.
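Optionally, since performance tests run in the test environment (see below), both gems can live in a :test group. A minimal sketch, assuming you don't need the command line tools outside of tests:

# Gemfile (hypothetical arrangement)
group :test do
  gem 'rails-perftest'
  gem 'ruby-prof' # only needed to benchmark/profile under MRI or REE
end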

Performance Test Cases

Rails performance tests are a special type of integration tests, designed for benchmarking and profiling the test code. With performance tests, you can determine where your application's memory or speed problems are coming from, and get a more in-depth picture of those problems.

Generating Performance Tests

Rails provides a generator called performance_test for creating new performance tests:

$ rails generate performance_test homepage

This generates homepage_test.rb in the test/performance directory:

require 'test_helper'
require 'rails/performance_test_help'

class HomepageTest < ActionDispatch::PerformanceTest
  # Refer to the documentation for all available options
  # self.profile_options = { runs: 5, metrics: [:wall_time, :memory],
  #                          output: 'tmp/performance', formats: [:flat] }

  test "homepage" do
    get '/'
  end
end

Examples

Let's assume your application has the following controller and model:

# routes.rb
root to: 'home#dashboard'
resources :posts

# home_controller.rb
class HomeController < ApplicationController
  def dashboard
    @users = User.last_ten.includes(:avatars)
    @posts = Post.all_today
  end
end

# posts_controller.rb
class PostsController < ApplicationController
  def create
    @post = Post.create(params[:post])
    redirect_to(@post)
  end
end

# post.rb
class Post < ActiveRecord::Base
  before_save :recalculate_costly_stats

  def slow_method
    # I fire gazillion queries sleeping all around
  end

  private

  def recalculate_costly_stats
    # CPU heavy calculations
  end
end

Controller Example

Because performance tests are a special kind of integration test, you can use the get and post methods in them.

Here's the performance test for HomeController#dashboard and PostsController#create:

require 'test_helper'
require 'rails/performance_test_help'

class PostPerformanceTest < ActionDispatch::PerformanceTest
  def setup
    # Application requires logged-in user
    login_as(:lifo)
  end

  test "homepage" do
    get '/dashboard'
  end

  test "creating new post" do
    post '/posts', post: { body: 'lifo is fooling you' }
  end
end

You can find more details about the get and post methods in the
Testing Rails Applications guide.

Model Example

Even though the performance tests are integration tests and hence closer to the request/response cycle by nature, you can still performance test pure model code.

Performance test for Post model:

require 'test_helper'
require 'rails/performance_test_help'

class PostModelTest < ActionDispatch::PerformanceTest
  test "creation" do
    Post.create body: 'still fooling you', cost: '100'
  end

  test "slow method" do
    # Using posts(:awesome) fixture
    posts(:awesome).slow_method
  end
end

Modes

Performance tests can be run in two modes: Benchmarking and Profiling.

Benchmarking

Benchmarking makes it easy to quickly gather a few metrics about each test run. By default, each test case is run 4 times in benchmarking mode.

To run performance tests in benchmarking mode:

$ rake test:benchmark

To run a single test, pass it via TEST:

$ bin/rake test:benchmark TEST=test/performance/your_test.rb

Profiling

Profiling allows you to make an in-depth analysis of each of your tests by using an external profiler. Depending on your Ruby interpreter, this profiler can be native (Rubinius, JRuby) or not (MRI, which uses RubyProf). By default, each test case is run once in profiling mode.

To run performance tests in profiling mode:

$ rake test:profile

Metrics

Benchmarking and profiling run performance tests and give you multiple metrics. The availability of each metric is determined by the interpreter being used (none of them support all metrics) and by the mode in use. A brief description of each metric and its availability across interpreters/modes is given below.
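A test meant to run across interpreters can therefore restrict itself to broadly supported metrics; per the availability tables below, wall time is the safest choice. A minimal sketch (the class name is hypothetical):

require 'test_helper'
require 'rails/performance_test_help'

# Hypothetical example: wall time is the only metric the tables below
# mark as available on every interpreter in both modes.
class PortableMetricsTest < ActionDispatch::PerformanceTest
  self.profile_options = { metrics: [:wall_time] }

  test "homepage" do
    get '/'
  end
end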

Wall Time

Wall time measures the real world time elapsed during the test run. It is affected by any other processes concurrently running on the system.

Process Time

Process time measures the time taken by the process. It is unaffected by any other processes running concurrently on the same system. Hence, process time is likely to be constant for any given performance test, irrespective of the machine load.

CPU Time

Similar to process time, but leverages the more accurate CPU clock counter available on the Pentium and PowerPC platforms.

User Time

User time measures the amount of time the CPU spent in user mode, i.e. within the process itself. It is not affected by other processes or by time the process may spend blocked.

Memory

Memory measures the amount of memory used for the performance test case.

Objects

Objects measures the number of objects allocated for the performance test case.

GC Runs

GC Runs measures the number of times GC was invoked for the performance test case.

GC Time

GC Time measures the amount of time spent in GC for the performance test case.

Metric Availability

Benchmarking

| Interpreter | Wall Time | Process Time | CPU Time | User Time | Memory | Objects | GC Runs | GC Time |
| ----------- | --------- | ------------ | -------- | --------- | ------ | ------- | ------- | ------- |
| MRI         | yes       | yes          | yes      | no        | yes    | yes     | yes     | yes     |
| REE         | yes       | yes          | yes      | no        | yes    | yes     | yes     | yes     |
| Rubinius    | yes       | no           | no       | no        | yes    | yes     | yes     | yes     |
| JRuby       | yes       | no           | no       | yes       | yes    | yes     | yes     | yes     |

Profiling

| Interpreter | Wall Time | Process Time | CPU Time | User Time | Memory | Objects | GC Runs | GC Time |
| ----------- | --------- | ------------ | -------- | --------- | ------ | ------- | ------- | ------- |
| MRI         | yes       | yes          | no       | no        | yes    | yes     | yes     | yes     |
| REE         | yes       | yes          | no       | no        | yes    | yes     | yes     | yes     |
| Rubinius    | yes       | no           | no       | no        | no     | no      | no      | no      |
| JRuby       | yes       | no           | no       | no        | no     | no      | no      | no      |

NOTE: To profile under JRuby you'll need to run export JRUBY_OPTS="-Xlaunch.inproc=false --profile.api" before the performance tests.

Understanding the Output

Performance tests generate different outputs inside the tmp/performance directory, depending on their mode and metric.

Benchmarking

In benchmarking mode, performance tests generate two types of outputs.

Command Line

This is the primary form of output in benchmarking mode. Example:

BrowsingTest#test_homepage (31 ms warmup)
       wall_time: 6 ms
          memory: 437.27 KB
         objects: 5,514
         gc_runs: 0
         gc_time: 19 ms

CSV Files

Performance test results are also appended to .csv files inside tmp/performance. For example, running the default BrowsingTest#test_homepage will generate the following five files:

BrowsingTest#test_homepage_wall_time.csv
BrowsingTest#test_homepage_memory.csv
BrowsingTest#test_homepage_objects.csv
BrowsingTest#test_homepage_gc_runs.csv
BrowsingTest#test_homepage_gc_time.csv

As the results are appended to these files each time the performance tests are run in benchmarking mode, you can collect data over a period of time. This can be very helpful in analyzing the effects of code changes.

Sample output of BrowsingTest#test_homepage_wall_time.csv:

measurement,created_at,app,rails,ruby,platform
0.00738224999999992,2009-01-08T03:40:29Z,,3.0.0,ruby-1.8.7.249,x86_64-linux
0.00755874999999984,2009-01-08T03:46:18Z,,3.0.0,ruby-1.8.7.249,x86_64-linux
0.00762099999999993,2009-01-08T03:49:25Z,,3.0.0,ruby-1.8.7.249,x86_64-linux
0.00603075000000008,2009-01-08T04:03:29Z,,3.0.0,ruby-1.8.7.249,x86_64-linux
0.00619899999999995,2009-01-08T04:03:53Z,,3.0.0,ruby-1.8.7.249,x86_64-linux
0.00755449999999991,2009-01-08T04:04:55Z,,3.0.0,ruby-1.8.7.249,x86_64-linux
0.00595999999999997,2009-01-08T04:05:06Z,,3.0.0,ruby-1.8.7.249,x86_64-linux
0.00740450000000004,2009-01-09T03:54:47Z,,3.0.0,ruby-1.8.7.249,x86_64-linux
0.00603150000000008,2009-01-09T03:54:57Z,,3.0.0,ruby-1.8.7.249,x86_64-linux
0.00771250000000012,2009-01-09T15:46:03Z,,3.0.0,ruby-1.8.7.249,x86_64-linux
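
Since the results are plain CSV, a short script can summarize them. A hypothetical sketch using Ruby's standard csv library (the file name matches the sample above):

# Hypothetical sketch: summarize wall_time measurements collected over
# multiple benchmarking runs.
require 'csv'

rows  = CSV.read('tmp/performance/BrowsingTest#test_homepage_wall_time.csv', headers: true)
times = rows.map { |row| row['measurement'].to_f }

puts "runs:  #{times.size}"
puts "avg:   #{(times.sum / times.size).round(5)} s"
puts "worst: #{times.max.round(5)} s"
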
Profiling

In profiling mode, performance tests can generate multiple types of outputs. The command line output is always presented but support for the others is dependent on the interpreter in use. A brief description of each type and their availability across interpreters is given below.

Command Line

This is a very basic form of output in profiling mode:

BrowsingTest#test_homepage (58 ms warmup)
    process_time: 63 ms
          memory: 832.13 KB
         objects: 7,882

Flat

Flat output shows the metric (time, memory, etc.) measured in each method. Check the Ruby-Prof documentation for a better explanation.

Graph

Graph output shows the metric measured in each method, which methods call it, and which methods it calls. Check the Ruby-Prof documentation for a better explanation.

Tree

Tree output is profiling information in calltree format for use by kcachegrind and similar tools.

Output Availability

|          | Flat | Graph | Tree |
| -------- | ---- | ----- | ---- |
| MRI      | yes  | yes   | yes  |
| REE      | yes  | yes   | yes  |
| Rubinius | yes  | yes   | no   |
| JRuby    | yes  | yes   | no   |
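
For example, to produce only the tree output for kcachegrind on MRI, a test could request just that format. A sketch (the class name is hypothetical; :call_tree appears in the default formats listed below):

require 'test_helper'
require 'rails/performance_test_help'

# Hypothetical example: write only the calltree profile to tmp/performance.
class TreeOutputTest < ActionDispatch::PerformanceTest
  self.profile_options = { formats: [:call_tree] }

  test "homepage" do
    get '/'
  end
end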

Tuning Test Runs

Test runs can be tuned by setting the profile_options class variable on your test class.

require 'test_helper'
require 'rails/performance_test_help'

class BrowsingTest < ActionDispatch::PerformanceTest
  self.profile_options = { runs: 5, metrics: [:wall_time, :memory] }

  test "homepage" do
    get '/'
  end
end

In this example, the test would run 5 times and measure wall time and memory. There are a few configurable options:

| Option   | Description                                | Default                       | Mode      |
| -------- | ------------------------------------------ | ----------------------------- | --------- |
| :runs    | Number of runs.                            | Benchmarking: 4, Profiling: 1 | Both      |
| :output  | Directory to use when writing the results. | tmp/performance               | Both      |
| :metrics | Metrics to use.                            | See below.                    | Both      |
| :formats | Formats to output to.                      | See below.                    | Profiling |

Metrics and formats have different defaults depending on the interpreter in use.

| Interpreter | Mode         | Default metrics                                       | Default formats                               |
| ----------- | ------------ | ----------------------------------------------------- | --------------------------------------------- |
| MRI/REE     | Benchmarking | [:wall_time, :memory, :objects, :gc_runs, :gc_time]   | N/A                                           |
|             | Profiling    | [:process_time, :memory, :objects]                    | [:flat, :graph_html, :call_tree, :call_stack] |
| Rubinius    | Benchmarking | [:wall_time, :memory, :objects, :gc_runs, :gc_time]   | N/A                                           |
|             | Profiling    | [:wall_time]                                          | [:flat, :graph]                               |
| JRuby       | Benchmarking | [:wall_time, :user_time, :memory, :gc_runs, :gc_time] | N/A                                           |
|             | Profiling    | [:wall_time]                                          | [:flat, :graph]                               |

As you've probably noticed by now, metrics and formats are specified using a symbol array with each name underscored.
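For example, a line like the following (a fragment, for illustration) selects one metric and one format by their underscored names:

# e.g. ruby-prof's GraphHtmlPrinter is requested as :graph_html
self.profile_options = { metrics: [:process_time], formats: [:graph_html] }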

Performance Test Environment

Performance tests are run in the test environment, but running them also sets the following configuration parameters:

ActionController::Base.perform_caching = true
ActiveSupport::Dependencies.mechanism = :require
Rails.logger.level = ActiveSupport::Logger::INFO

As ActionController::Base.perform_caching is set to true, performance tests will behave much as they do in the production environment.

Installing GC-Patched MRI 1.x.x

Since Ruby 2 is now mainstream and resolves the old garbage collection issues, these docs have been cut. See the older README explaining how to install optimized Ruby 1 builds.

Command Line Tools

Writing performance test cases can be overkill when you are looking for one-time tests. Rails ships with two command line tools that enable quick and dirty performance testing:

benchmarker

Usage:

Usage: perftest benchmarker 'Ruby.code' 'Ruby.more_code' ... [OPTS]
-r, --runs N                     Number of runs.
                                 Default: 4
-o, --output PATH                Directory to use when writing the results.
                                 Default: tmp/performance
-m, --metrics a,b,c              Metrics to use.
                                 Default: wall_time,memory,objects,gc_runs,gc_time

Example:

$ perftest benchmarker 'Item.all' 'CouchItem.all' --runs 3 --metrics wall_time,memory

profiler

Usage:

Usage: perftest profiler 'Ruby.code' 'Ruby.more_code' ... [OPTS]
-r, --runs N                     Number of runs.
                                 Default: 1
-o, --output PATH                Directory to use when writing the results.
                                 Default: tmp/performance
-m, --metrics a,b,c              Metrics to use.
                                 Default: process_time,memory,objects
-f, --formats x,y,z              Formats to output to.
                                 Default: flat,graph_html,call_tree

Example:

$ perftest profiler 'Item.all' 'CouchItem.all' --runs 2 --metrics process_time --formats flat

NOTE: Metrics and formats vary from interpreter to interpreter. Pass --help to each tool to see the defaults for your interpreter.

Helper Methods

Rails provides various helper methods inside Active Record, Action Controller and Action View to measure the time taken by a given piece of code. The method is called benchmark() in all three components.

Model

Project.benchmark("Creating project") do
  project = Project.create("name" => "stuff")
  project.create_manager("name" => "David")
  project.milestones << Milestone.all
end

This benchmarks the code enclosed in the Project.benchmark("Creating project") do...end block and prints the result to the log file:

Creating project (185.3ms)

Please refer to the API docs for additional options to benchmark().
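For instance, benchmark() accepts a log level and a :silence option (a sketch, assuming the standard ActiveSupport::Benchmarkable options):

# Log the timing at :info level and silence other log output inside the
# block; :level and :silence are ActiveSupport::Benchmarkable options.
Project.benchmark("Creating project", level: :info, silence: true) do
  Project.create("name" => "stuff")
end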

Controller

Similarly, you could use this helper method inside controllers.

def process_projects
  benchmark("Processing projects") do
    Project.process(params[:project_ids])
    Project.update_cached_projects
  end
end

NOTE: benchmark is a class method inside controllers.

View

And in views:

<% benchmark("Showing projects partial") do %>
  <%= render @projects %>
<% end %>

Request Logging

Rails log files contain very useful information about the time taken to serve each request. Here's a typical log file entry:

Processing ItemsController#index (for 127.0.0.1 at 2009-01-08 03:06:39) [GET]
Rendering template within layouts/items
Rendering items/index
Completed in 5ms (View: 2, DB: 0) | 200 OK [http://0.0.0.0/items]

For this section, we're only interested in the last line:

Completed in 5ms (View: 2, DB: 0) | 200 OK [http://0.0.0.0/items]

This data is fairly straightforward to understand. Rails uses milliseconds (ms) to measure the time taken. The complete request spent 5 ms inside Rails, out of which 2 ms were spent rendering views and none communicating with the database. It's safe to assume that the remaining 3 ms were spent inside the controller.

Michael Koziarski has an interesting blog post explaining the importance of using milliseconds as the metric.

Useful Links
Rails Plugins and Gems
Generic Tools
Tutorials and Documentation
Commercial Products

Rails has been lucky to have a few companies dedicated to Rails-specific performance tools:

