# Benchee

Library for easy and nice (micro) benchmarking. It allows you to easily compare the performance of different pieces of code/functions. Benchee is also versatile and extensible, relying only on functions - no macros!

Somewhat inspired by benchmark-ips from the Ruby world, but with a more functional spin, of course.

Provides you with:

* average   - average execution time (the lower the better)
* ips       - iterations per second, how often can the given function be executed within one second (the higher the better)
* deviation - standard deviation (how much do the results vary), given as a percentage of the average
* median    - when all measured times are sorted, this is the middle value (or the average of the two middle values when the number of times is even). More stable than the average and somewhat more likely to be a typical value you see.
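
The median described above can be sketched in plain Elixir (an illustrative helper, not part of Benchee's API):

```elixir
# Illustrative median calculation: sort the measured times and take the
# middle value, or the mean of the two middle values for an even count.
median = fn times ->
  sorted = Enum.sort(times)
  len    = length(sorted)
  mid    = div(len, 2)

  if rem(len, 2) == 1 do
    Enum.at(sorted, mid)
  else
    (Enum.at(sorted, mid - 1) + Enum.at(sorted, mid)) / 2
  end
end

median.([5, 1, 9, 3, 7]) # odd count  -> 5
median.([5, 1, 9, 3])    # even count -> 4.0
```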

Benchee does not:

* Keep results of previous runs and compare them. If you want that, have a look at benchfella or bmark.

Make sure to check out the [available plugins](#plugins)!

## Installation

The package is available in Hex and can be installed as follows:

Add benchee to your list of dependencies in `mix.exs`:

```elixir
def deps do
  [{:benchee, "~> 0.1", only: :dev}]
end
```

Install via `mix deps.get`, and then happy benchmarking as described in [Usage](#usage) :)

## Usage

After installing just write a little benchmarking script:

```elixir
list = Enum.to_list(1..10_000)
map_fun = fn(i) -> [i, i * i] end

Benchee.run(%{time: 3},
            [{"flat_map", fn -> Enum.flat_map(list, map_fun) end},
             {"map.flatten",
              fn -> list |> Enum.map(map_fun) |> List.flatten end}])
```

First, configuration options are passed. The only options available so far are:

* `time`   - the time in seconds for how long each individual benchmark should be run and measure. Defaults to 5.
* `warmup` - the warmup time in seconds for which a benchmark should be run without measuring times. This simulates a warm/already running system. Defaults to 2.
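
Both options can be combined in the configuration map, for instance (a minimal sketch using the `Benchee.run` interface from the example above):

```elixir
# Warm up for 5 seconds, then measure each job for 10 seconds
Benchee.run(%{time: 10, warmup: 5},
            [{"reverse", fn -> Enum.reverse(1..1_000) end}])
```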

Running this script produces output like the following:

```
tobi@happy ~/github/benchee $ mix run samples/run.exs
Benchmarking flat_map...
Benchmarking map.flatten...

Name                          ips            average        deviation      median
map.flatten                   1311.84        762.29μs       (±13.77%)      747.0μs
flat_map                      896.17         1115.86μs      (±9.54%)       1136.0μs

map.flatten                   1311.84
flat_map                      896.17          - 1.46x slower
```

See the general description for the meaning of the different statistics.

It is important to note that the way shown here is just the convenience interface. The same benchmark in its more verbose form looks like this:

```elixir
list = Enum.to_list(1..10_000)
map_fun = fn(i) -> [i, i * i] end

Benchee.init(%{time: 3})
|> Benchee.benchmark("flat_map", fn -> Enum.flat_map(list, map_fun) end)
|> Benchee.benchmark("map.flatten",
                     fn -> list |> Enum.map(map_fun) |> List.flatten end)
|> Benchee.measure
|> Benchee.statistics
|> Benchee.Formatters.Console.format
|> IO.puts
```

This is how the "functional transformation" works here:

1. Configure general parameters
2. Run n benchmarks with the given parameters, gathering raw run times per function (done in two steps: first registering the benchmarks with `Benchee.benchmark`, then running them with `Benchee.measure`)
3. Generate statistics based on the raw run times
4. Format the statistics in a suitable way
5. Output the formatted statistics

This is also part of the official API and allows more fine-grained control.
Do you just want the raw run times? Grab them before `Benchee.statistics`! Just want the calculated statistics so you can use your own formatting? Grab the result of `Benchee.statistics`! Or maybe you want to write to a file or send an HTTP POST to some online service? Just replace the `IO.puts`.
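
As a sketch of that last idea, the console formatter can be dropped from the pipeline entirely; here the statistics are simply `inspect`ed instead (assuming the verbose pipeline shown above):

```elixir
list = Enum.to_list(1..10_000)

Benchee.init(%{time: 3})
|> Benchee.benchmark("reverse", fn -> Enum.reverse(list) end)
|> Benchee.measure
|> Benchee.statistics
|> inspect   # any custom formatting could go here instead
|> IO.puts   # ...or write to a file / HTTP POST instead of printing
```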

This way Benchee should be flexible enough to suit your needs and be extended at will.

For more example usages and benchmarks, have a look at the `samples` directory!

## Development

* `mix deps.get` to install dependencies
* `mix test` to run tests, or `mix test.watch` to run them continuously
* `mix credo` or `mix credo --strict` to find code style problems

Happy to review and accept pull requests or issues :)

## Plugins

Packages that work with Benchee one way or another to enhance its functionality.

* BencheeCSV - generate CSV from your Benchee benchmark results so you can import them into your favorite spreadsheet tool and make fancy graphs

(You didn't really expect to find tons of plugins here when the library was just released, did you? ;) )