# RelaxedRateLimiter
This is a high-throughput, approximate rate limiter for the BEAM, started as a [Hammer fork](https://hexdocs.pm/hammer/readme.html).
It prioritizes speed over global correctness.
If you need exact limits, do not use this.
## Installation
RelaxedRateLimiter is available on [hex.pm](https://hex.pm).
To add it to a Mix project, add it to the `deps` function in `mix.exs`:
```elixir
defp deps do
  [
    {:relaxed_rate_limiter, "~> 0.1.2"}
  ]
end
```
## Configuration
If you would like to use the rate limiter on a multi-node system, or with the Out Of Memory guard, add the `RelaxedRateLimiter.Sync` worker to your supervision tree with its configuration:
```elixir
children = [
  # max_memory_cap is set in gigabytes
  {RelaxedRateLimiter.Sync, [check_oom: true, max_memory_cap: 4.0]}
  ...
]

Supervisor.start_link(children, opts)
```
Define your rate limiter as a module:
```elixir
defmodule MyApp.RateLimit do
  use RelaxedRateLimiter, algorithm: :leaky_bucket
end
```
And use it:
```elixir
# Check the rate limit, allowing 10 requests per second
MyApp.RateLimit.hit("some-key", _scale = :timer.seconds(1), _limit = 10)
```
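Since this project started as a Hammer fork, the example below assumes the Hammer-style return convention of `{:allow, count}` / `{:deny, retry_after_ms}`; treat it as an illustrative sketch rather than the definitive API.

```elixir
# Hypothetical sketch: branching on the result of hit/3, assuming
# Hammer-style return values ({:allow, count} | {:deny, retry_after_ms}).
case MyApp.RateLimit.hit("user:42", :timer.seconds(1), 10) do
  {:allow, _count} ->
    # under the limit: proceed with the request
    :ok

  {:deny, retry_after_ms} ->
    # over the limit: reject, telling the client when to retry
    {:error, {:too_many_requests, retry_after_ms}}
end
```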
## Comparison
I honestly suggest you use Hammer if you're not sure which to pick. It's a battle-tested project, developed by excellent engineers, that will fit most use cases better than this one.
|           | [hammer](https://hexdocs.pm/hammer/readme.html) | [ex_rated](https://github.com/grempe/ex_rated) | **relaxed_rate_limiter** |
| --------- | ------------------------- | ------------------------ | ------------------------------------ |
| read      | fast (ETS lookup)         | fast (ETS lookup)        | fast (ETS lookup)                    |
| write     | medium (update+broadcast) | fast (update, no sync)   | fast (update)                        |
| sync      | per-insert broadcast      | none (do it yourself)    | total node count                     |
| count     | strict                    | strict                   | approximate, may drift significantly |
| OOM guard | -                         | -                        | via Sync module                      |
## Functionality
**relaxed_rate_limiter** keeps API rate limiting simple (no broadcast overhead) while still supporting distributed setups. It should not be used to track API or plan usage! It tolerates large drifts and accepts approximation in the rate limiting (**its only purpose is to keep nodes alive**).
1. Leaky bucket algorithm
- Fills up with requests at the input rate
- "Leaks" requests at a constant rate
- Has a maximum capacity (the bucket size)
2. Out Of Memory guard
- Checks node memory usage once per period
- Turns on a global "Too many requests" response until memory returns to a healthy state
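The leaky-bucket bookkeeping above can be sketched as a pure function. This is an illustrative sketch, not the library's actual internals; the module name, state shape, and parameter names are invented for the example.

```elixir
# Illustrative leaky-bucket sketch. State is {level, last_ts_ms}.
# The bucket "leaks" at a constant rate (leak_per_ms); a hit is
# allowed while the current level is below capacity.
defmodule LeakyBucketSketch do
  def hit({level, last_ts}, now_ts, leak_per_ms, capacity) do
    # drain whatever leaked since the last hit, never going below zero
    leaked = (now_ts - last_ts) * leak_per_ms
    level = max(level - leaked, 0)

    if level < capacity do
      {:allow, {level + 1, now_ts}}
    else
      {:deny, {level, now_ts}}
    end
  end
end
```

Because the leak is computed lazily from timestamps, no timer has to tick per key; each hit settles the elapsed leak and then either fills or rejects.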
### Some motivational stuff
I wanted this rate limiter to create near-zero synchronisation overhead between nodes (to prevent message-passing queue growth), and to protect my BEAM runtime from the most dangerous error: out of memory.
You might wonder why I would include an Out Of Memory guard as part of rate limiting, and this solution may be wrong for many systems.
But for BEAM applications that control API usage separately from API rate limiting due to speed constraints, the rate limiter can also carry the OOM guard, for system simplicity and a clear separation of concerns. I don't want to use Plugs and similar mechanisms, since they create additional layers. This will still be slower than building such a guard into Phoenix (or another framework) before message serialization, which should be the best way to avoid OOM errors at 3 a.m., but that is out of scope for this project.
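To make the periodic-check idea concrete, here is a minimal sketch of an OOM guard as a GenServer. It is not `RelaxedRateLimiter.Sync`'s actual implementation; the module name, interval, and flag storage are assumptions for the example.

```elixir
# Hypothetical OOM-guard sketch: every interval, compare total BEAM
# memory against a cap and flip a global "deny everything" flag that
# the rate limiter's hot path can read cheaply.
defmodule OomGuardSketch do
  use GenServer

  @interval :timer.seconds(5)

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  def init(opts) do
    cap_bytes = trunc(Keyword.fetch!(opts, :max_memory_cap_gb) * 1_000_000_000)
    :persistent_term.put({__MODULE__, :deny_all}, false)
    schedule()
    {:ok, cap_bytes}
  end

  # readers check this flag before doing any rate-limit bookkeeping;
  # :persistent_term reads are constant-time and copy-free
  def deny_all?, do: :persistent_term.get({__MODULE__, :deny_all}, false)

  def handle_info(:check, cap_bytes) do
    :persistent_term.put({__MODULE__, :deny_all}, :erlang.memory(:total) > cap_bytes)
    schedule()
    {:noreply, cap_bytes}
  end

  defp schedule, do: Process.send_after(self(), :check, @interval)
end
```

Using `:persistent_term` for the flag keeps the read path off any single process's mailbox, which matches the goal of near-zero synchronisation overhead.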