# Neat-Ex
This [Elixir](http://elixir-lang.org/) library provides the means to define, simulate, and serialize artificial neural networks (ANNs), as well as the means to develop them through use of the NeuroEvolution of Augmenting Topologies algorithm ([NEAT](http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf)), back-propagation, or gradient approximation.
NEAT, unlike back-propagation, develops both the topology and the weights of a network, and it trains against a fitness function rather than training data. The gradient approximation algorithm is like back-propagation in that it performs gradient-based optimization, but instead of calculating the exact gradient from training data, it approximates the gradient using a training function.
Training against a function instead of a fixed dataset allows for more flexibility.
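For instance, a fitness function can reward any measurable behavior, with no labeled dataset at all. The sketch below is only an illustration of that flexibility; it reuses the `Ann.Simulation` calls from the XOR example further down, and the node numbering (input node 1, output node 4) is assumed for the sake of the example:

```elixir
# A sketch only: score a network by a property of its behavior rather than by a
# dataset. Node numbering (input 1, output 4) is assumed for illustration.
fitness = fn ann ->
  sim = Ann.Simulation.new(ann)
  inputs = for x <- -10..10, do: x / 10
  # Reward networks whose output stays within [-1, 1] across a sweep of inputs.
  Enum.count(inputs, fn x ->
    out = Map.get(Ann.Simulation.eval(sim, %{1 => x}).data, 4, 0)
    abs(out) <= 1.0
  end)
end
```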
## Installation
### Add as a dependency
Add `{:neat_ex, "~> 1.1.1"}` to your list of dependencies in `mix.exs`, then run `mix deps.get`.
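For reference, the dependency goes in the `deps` list of your project's `mix.exs` (the surrounding project details are omitted here):

```elixir
defp deps do
  [
    {:neat_ex, "~> 1.1.1"}
  ]
end
```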
### Or clone a copy
To clone and install, do:
```
git clone https://gitlab.com/onnoowl/Neat-Ex.git
cd Neat-Ex
mix deps.get
```
## Documentation
The latest documentation can be found at http://hexdocs.pm/neat_ex/index.html. For usage, see the example below.
## News
### New in Version 1.1.0
This library is expanding to feature more neural training algorithms.
* Updated to Elixir 1.2 and shifted from Dicts and Sets to Maps and MapSets
* Added the [`Backprop`](http://hexdocs.pm/neat_ex/Backprop.html) module for back-propagation
* Added the [`GradAprox`](http://hexdocs.pm/neat_ex/GradAprox.html) module for gradient approximation and optimization of generic parameters
* Added the [`GradAprox.NeuralTrainer`](http://hexdocs.pm/neat_ex/GradAprox.NeuralTrainer.html) module for gradient approximation and optimization of neural networks
#### Backprop vs GradAprox.NeuralTrainer
For training with a dataset, the Backprop module is preferable. If the problem only allows for a fitness/error function, then GradAprox.NeuralTrainer should be used. This module approximates gradients instead of calculating them precisely, by modifying weights slightly and then re-evaluating the fitness function.
See each module's documentation for more details and example usage.
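As a rough illustration of that idea (a generic finite-difference sketch, not the `GradAprox` API or implementation), the gradient of a fitness function can be estimated by nudging each parameter slightly and re-evaluating:

```elixir
# Generic finite-difference gradient estimate: `params` is a list of numbers and
# `fitness` is any function from such a list to a number. This is only a sketch
# of the idea; see the GradAprox docs for the library's actual interface.
approx_gradient = fn params, fitness, eps ->
  base = fitness.(params)
  params
  |> Enum.with_index()
  |> Enum.map(fn {p, i} ->
    nudged = List.replace_at(params, i, p + eps)
    (fitness.(nudged) - base) / eps
  end)
end

# Example: this fitness is maximized at x = 3, y = -1.
fitness = fn [x, y] -> -:math.pow(x - 3, 2) - :math.pow(y + 1, 2) end
approx_gradient.([0.0, 0.0], fitness, 1.0e-4)
#=> approximately [6.0, -2.0]
```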
### (Potential) Upcoming Features
* Newton's Method for gradient optimization
* Back-propagation through time (BPTT) and automated neural network unfolding
## Neat Example Usage
Here's a simple example that shows how to set up an evolution that evolves neural networks to act like binary XORs, where -1s are like 0s (and 1s are still 1s). The expected behavior is listed in `dataset`, and neural networks are assigned a fitness based on how closely they match the expected behavior. After 50 or so generations, or about 10 seconds of computation, the networks exhibit the expected behavior.
```elixir
dataset = [{{-1, -1}, -1}, {{1, -1}, 1}, {{-1, 1}, 1}, {{1, 1}, -1}] # {{in1, in2}, output} -> the expected behavior
fitness = fn ann ->
  sim = Ann.Simulation.new(ann)
  error = Enum.reduce dataset, 0, fn {{in1, in2}, out}, error ->
    result = Map.get(Ann.Simulation.eval(sim, %{1 => in1, 2 => in2, 3 => 1.0}).data, 4, 0) # node 3 is a "bias node"
    error + abs(result - out)
  end
  :math.pow(8 - error, 2)
end
# Make a new network with inputs [1, 2, 3], and outputs [4].
neat = Neat.new_single_fitness(Ann.new([1, 2, 3], [4]), fitness)
# Then evolve it until it reaches fitness level 63 (this fitness function's maximum is 64).
{ann, fitness} = Neat.evolveUntil(neat, 63).best
IO.puts Ann.json(ann) # display a JSON representation of the ANN.
```
### XOR
```
mix xor.single
```
This command runs the sample XOR code, evolving a neural network to act as an XOR logic gate. The resulting network can be viewed by running `./render xor` and then opening `xor.png`.
### FishSim
```
mix fishsim [display_every] [minutes_to_run]
```
This evolves neural networks to act like fish and run away from a shark. Fitness is based on how long the fish survive the shark. The task displays ASCII art of the simulation, where the @ sign is the shark and the digits represent fish, with higher digits indicating a higher relative concentration of fish at that location.
The evolution will only print out every `display_every` generations (default 1, meaning every generation). Setting it to 5, for example, will evolve for 5 generations between each display (which is far faster).
The evolution lasts `minutes_to_run` minutes (default is 60).
```
mix fishsim [display_every] [minutes_to_run] [file_to_record_to]
```
If a file name is given, the simulation records visualization data to that file rather than displaying ASCII art. `display_every` then becomes handy for limiting the size of the visualization file. To view the recording after it's made, use Jonathan's project found [here](https://gitlab.com/Zanthos/FishSimVisualAid), passing the file as the first argument.
When the process finishes, you can view the best fish by running `./render bestFish` and then opening `bestFish.png`.
## Testing (for contributors)
```
mix test
```