Name: Benchmarks
Owner: JuliaArchive
Description: Benchmarks that used to be included in the Julia source
Created: 2018-03-05 19:11:31.0
Updated: 2018-03-06 21:22:25.0
Pushed: 2018-03-05 19:11:50.0
Homepage: none
Size: 3218
Language: Julia
This directory contains benchmarks and related utilities to test Julia's performance. Many of these benchmarks have been ported to the newer BaseBenchmarks package, which contains the benchmark suite used for CI performance testing. In general, new benchmarks should be added to that package instead of placed here (see the BaseBenchmarks README for details).
If you'd like to test the performance of your own package, consider using the BenchmarkTools package.
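For instance, a minimal sketch of benchmarking your own code with BenchmarkTools; the sorting workload here is a hypothetical example, not part of this repository:

```julia
using BenchmarkTools

# Time sort!() on a fresh random vector; the setup expression re-creates
# `v` before each sample, and evals=1 ensures every sample sorts an
# unsorted vector rather than an already-sorted one.
@benchmark sort!(v) setup=(v = rand(1000)) evals=1
```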
For the microbenchmarks used to compare Julia's performance against other languages, see the Microbenchmarks repository.
In `test/perf`, run `make`. It will run the `perf.jl` script in all
the sub-directories and display the test name with the minimum,
maximum, mean, and standard deviation of the wall time of five
repeated test runs, in microseconds.
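For example, from the repository root:

```sh
cd test/perf
make
```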
There is also a `perfcomp.jl` script, but it may not be working with
the rest at the moment.
Tests generally go into one of the following suites:
- `blas`, `lapack`: Performance tests for linear algebra tasks, from
  low-level operations such as matrix multiplies to higher-level
  operations like eigenvalue problems.
- `cat`: Performance tests for concatenation of vectors and matrices.
- `kernel`: Performance tests used to track real-world code examples
  that previously ran slowly.
- `shootout`: Tracks the performance of tests taken from the
  Computer Language Benchmarks Game.
- `sort`: Performance tests of sorting algorithms.
- `spell`: Performance tests of Peter Norvig's spelling corrector.
- `sparse`: Performance tests of sparse matrix operations.

Otherwise, tests live in their own subdirectories, each containing a
`perf.jl` file.
The `perf.jl` files include the shared performance utilities via
`include("../perfutil.jl")` and then run the performance test
functions with the `@timeit` macro. For example:

```julia
@timeit(spelltest(tests1), "spell", "Peter Norvig's spell corrector")
```

with arguments: test function call, name of the test, description,
and, optionally, a group. `@timeit` will do a warm-up and then 5
timings, calculating the min, max, mean, and standard deviation of
the timings.
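For reference, a minimal sketch of what a suite's `perf.jl` might look
like; the `mysum` workload and its test name are hypothetical:

```julia
# Pull in the shared timing utilities, including @timeit.
include("../perfutil.jl")

# Hypothetical workload: naive summation of a random vector.
function mysum(v)
    s = 0.0
    for x in v
        s += x
    end
    return s
end

const v = rand(10_000)

# Warm up, then run 5 timings and report min, max, mean, and
# standard deviation in microseconds.
@timeit(mysum(v), "mysum", "Naive sum of a 10_000-element vector")
```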
If possible, tests should aim to take about 10-100 microseconds.