
# SplitClient

An Elixir client for the [Split.io](https://www.split.io/) feature flag service.

## Installation

If [available in Hex](https://hex.pm/docs/publish), the package can be installed
by adding `split_client` to your list of dependencies in `mix.exs`:

```elixir
def deps do
  [
    {:split_client, "~> 0.1.0"}
  ]
end
```

## Configuration

### Evaluator Configuration

The [Split.io Evaluator](https://help.split.io/hc/en-us/articles/360020037072-Split-Evaluator) is a service that sits in your infrastructure between your application and the Split.io servers.
It provides a REST API for accessing Split.io data and is meant to be used in cases where Split.io has not created an SDK for a specific language like Elixir.

In order for your application to access Split.io feature flag data you'll need to configure
how to access your instance of [Split.io Evaluator](https://help.split.io/hc/en-us/articles/360020037072-Split-Evaluator). You'll need to configure the `evaluator_auth_token` and the
`evaluator_url`.

In production your config should read these values from environment variables. You could
do it like this in `config/prod.exs`:

```elixir
config :split_client,
  evaluator_auth_token: System.fetch_env!("SPLIT_EVALUATOR_AUTH_TOKEN"),
  evaluator_url: System.fetch_env!("SPLIT_EVALUATOR_URL")
```
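
If you run an Evaluator locally you can hard-code its address in `config/dev.exs` instead. The
token and URL below are placeholder values, not defaults of this library:

```elixir
# Placeholder values for a locally running Evaluator; substitute your own.
config :split_client,
  evaluator_auth_token: "local-dev-token",
  evaluator_url: "http://localhost:7548"
```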

### Local Feature Development

When you're developing features locally you don't want to toggle split treatments
in the Split.io web interface. The web interface changes things globally across
your infrastructure and isn't meant for testing changes on your local machine.
Instead you'll make changes in a local YAML file. Add this to your `config/dev.exs`:

```elixir
config :split_client, data_source: SplitClient.DataSourceYAML
```

By default the `SplitClient` will expect this YAML file to live in the root of
your project and be called `split.yml`. You can change this in `config/dev.exs` with:

```elixir
config :split_client, split_file: "my/file/path.yml"
```

The YAML file should follow a format similar to this:

```yml
# - feature_name:
#     treatment: "treatment_applied_to_this_entry"
#     keys: ["single_key_or_list", "other_key_same_treatment"]
#     traffic_type: "customer"
#     config: "{\"desc\" : \"this applies only to ON treatment\"}"
#
# Note that the "treatment" and "traffic_type" keys are mandatory, but both "keys" and
# "config" are optional. If "keys" are omitted they will be a fallback for when a passed key
# doesn't match
# Note that the config is JSON
# Note that `keys` can be a single key or a list of keys
# Not the attributes and bucketing_keys are ignored when using YAML stub

- my_feature:
    treatment: "on"
    keys: "mock_user_id"
    traffic_type: "customer"
    config: "{\"desc\" : \"this applies only to ON treatment\"}"
- some_other_feature:
    treatment: "off"
    traffic_type: "customer"
- my_feature:
    treatment: "off"
    traffic_type: "customer"
```

### Caching and Treatment Updates

Out of the box the `SplitClient` automatically caches treatments with a 1 minute expiration.
You can change this by setting `:cache_ttl` to a number of milliseconds of your choosing:

```elixir
config :split_client, :cache_ttl, :timer.seconds(1)
```

### Test Configuration

Normally you want your application to share a global cache of Split.io
data. This isn't the case when you are running tests. In the testing
environment you want each test to have its own isolated set of
Split data so that each test can be run concurrently. To ensure this
you can swap out the regular Treatments server for a Sandbox in
`config/test.exs`

```elixir
config :split_client, :treatments_server, SplitClient.Sandbox
```

## Split.io Glossary

In Split.io nomenclature, a "Split" is equivalent to a "Feature Flag".

### Function Arguments

* `key` or `matching_key` - A unique identifier for the user, customer, account, etc.
    A key that doesn't match any targeting rule results in the default/control treatment
* `split_name` - The name of the Split to get the treatment for (e.g. `"shiny_feature"`)
* `traffic_type` - The Split.io Traffic Type the splits are associated with (e.g. `"user"`, `"customer"`, `"account"`, etc)

### Function Options

* `:attributes` - A `Map` of custom attributes to target on
    (e.g. `%{plan_type: "premium", paying_customer: true}`)

    See also: [Target Customer Attributes](https://help.split.io/hc/en-us/articles/360020793231-Target-with-custom-attributes)
* `:bucketing_key` - Which part of the distribution to look for a treatment out of 100% (e.g. `"20-39"`)

    See also: [Treatment Order](https://help.split.io/hc/en-us/articles/360030117011-Why-setting-the-order-of-treatments-matters)
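
As a rough illustration of how these pieces fit together, here is a hypothetical call; the
actual function name and argument order depend on the `SplitClient` API, which isn't shown in
this README:

```elixir
# Hypothetical call shape, not a documented SplitClient function.
SplitClient.get_treatment("user_123", "shiny_feature", "customer",
  attributes: %{plan_type: "premium", paying_customer: true},
  bucketing_key: "20-39"
)
```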

## Testing

As described in [Test Configuration](#test-configuration) above, in the testing
environment you want each test to have its own isolated set of
Split data so that each test can be run concurrently. A little bit of setup in each
test file will be required.

```elixir
import SplitClient.Testing
setup :setup_split_client
```

First, you'll want to import the `SplitClient.Testing` module and execute `setup_split_client/1` before
every test. This will give each test its own Split environment that won't be affected by any
other tests running in parallel.
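
A minimal test module skeleton, using a hypothetical module name, looks like this:

```elixir
defmodule MyAppTest.FeatureFlags do
  use ExUnit.Case, async: true

  import SplitClient.Testing

  # Gives every test in this module its own isolated Split environment.
  setup :setup_split_client
end
```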

Second, you'll want to create your Split definitions. This can be the same for all tests in the file
or vary per test. These definitions follow the same rules as the ones in
[Local Feature Development](#local-feature-development).

```elixir
setup do
  create_splits([
    %{
      "my_feature" => %{
        treatment: "on",
        keys: "mock_user_id",
        traffic_type: "customer"
      }
    },
    %{
      "my_feature" => %{
        treatment: "off",
        traffic_type: "customer"
      }
    }
  ])

  :ok
end
```

`create_splits/1` takes a list of maps. Each map should have
a single key, the split name, whose value is a map
with the treatment information.
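
If the definitions need to vary per test, one option (assuming `create_splits/1` can also be
called from within a test body) is to redefine the split inside the test itself. The
`feature_enabled?/1` call below is a hypothetical function in your own application:

```elixir
test "treats unknown users as off" do
  # Override the split for this test only.
  create_splits([
    %{"my_feature" => %{treatment: "off", traffic_type: "customer"}}
  ])

  # `feature_enabled?/1` is a hypothetical application function that asks
  # SplitClient for the "my_feature" treatment.
  refute feature_enabled?("some_other_user_id")
end
```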

### Multi-process Collaboration

#### Explicit Allowance

You may have a process in your application that uses Split.io data
and that lives outside of the test process. In this case you'll need to
explicitly associate the test process with the outside process so that
the isolated Split data can be accessed. You can use `SplitClient.Sandbox.allow/2`
for this.

```elixir
test "test a thing" do
  task =
    Task.async(fn ->
      assert feature_a("mock_user_id") == "New Hotness"
    end)

  # Grant the task process access to this test's isolated Split data.
  SplitClient.Sandbox.allow(self(), task.pid)

  Task.await(task)
end
```

#### Async False

Sometimes there are unfortunate circumstances where you do not own the process
accessing your Split.io data, or the process is otherwise unnamed and inaccessible.
In these circumstances you'll need to make your test `async: false` to
ensure that the test is predictable and only accesses the Split data that it's supposed to.
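
Marking a test module synchronous is standard ExUnit; the module name here is just an example:

```elixir
defmodule MyApp.LegacySplitTest do
  # async: false prevents this module from running concurrently with other
  # test modules, keeping the shared Split data predictable.
  use ExUnit.Case, async: false
end
```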

## Contributing

If you are making improvements to this package there will be some setup involved.

### Accessing a Split.io Evaluator Instance

You'll need to add environment variables of `SPLIT_EVALUATOR_AUTH_TOKEN` and `SPLIT_EVALUATOR_URL` if you would like to test against a real Evaluator instance.

You'll also need to make the Evaluator your data source in the dev environment.
Update `config/dev.exs` to include:

```elixir
config :split_client, :data_source, SplitClient.Boundary.Evaluator
```

### Documentation

Documentation can be generated with [ExDoc](https://github.com/elixir-lang/ex_doc)
and published on [HexDocs](https://hexdocs.pm). Once published, the docs can
be found at [https://hexdocs.pm/split_client](https://hexdocs.pm/split_client).