# Neat-Ex
This [Elixir](http://elixir-lang.org/) library provides the means to define, simulate, and serialize Artificial Neural Networks (ANNs), as well as the means to develop them through the Neuro-Evolution of Augmenting Topologies algorithm ([NEAT](http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf)), back-propagation, or gradient approximation.

NEAT, unlike back-propagation, develops both topology and neural weights, and it trains with a fitness function instead of training data. The gradient approximation algorithm is like back-propagation in that it uses gradient-based optimization, but instead of calculating the exact gradient from training data, it approximates the gradient using a training function. Training functions allow for more flexibility than training data.
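To make the distinction concrete, here is a minimal sketch; the `toy_fitness` function and its number-pair candidate are made up for illustration and are not part of this library's API:

```elixir
# Training data: a fixed list of {input, expected_output} pairs (the XOR-like
# dataset used in the example further down this README).
dataset = [{{-1, -1}, -1}, {{1, -1}, 1}, {{-1, 1}, 1}, {{1, 1}, -1}]

# Training/fitness function: any function that maps a candidate to a score works,
# even when no "expected output" exists (survival time, game score, etc.).
toy_fitness = fn {x, y} -> -abs(x - y) end

IO.inspect(length(dataset))      #=> 4
IO.inspect(toy_fitness.({3, 5})) #=> -2
```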
## Installation
### Add as a dependency
Add `{:neat_ex, "~> 1.1.1"}` to your list of deps in your mix file, then run `mix deps.get`.
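For example, in `mix.exs` (assuming a standard Mix project with a private `deps/0` function):

```elixir
defp deps do
  [
    {:neat_ex, "~> 1.1.1"}
  ]
end
```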
### Or clone a copy
To clone and install, do:
```sh
git clone https://gitlab.com/onnoowl/Neat-Ex.git
cd Neat-Ex
mix deps.get
```
## Documentation
The latest documentation can be found at http://hexdocs.pm/neat_ex/index.html. For example usage, see the NEAT example below.
## News
### New in Version 1.1.0
This library is expanding to feature more neural training algorithms.
* Updated to Elixir 1.2 and shifted from Dicts and Sets to Maps and MapSets
* Added the [`Backprop`](http://hexdocs.pm/neat_ex/Backprop.html) module for back-propagation
* Added the [`GradAprox`](http://hexdocs.pm/neat_ex/GradAprox.html) module for gradient approximation and optimization of generic parameters
* Added the [`GradAprox.NeuralTrainer`](http://hexdocs.pm/neat_ex/GradAprox.NeuralTrainer.html) module for gradient approximation and optimization of neural networks
#### Backprop vs GradAprox.NeuralTrainer
For training with a dataset, the Backprop module is preferable. If the problem only provides a fitness/error function, then GradAprox.NeuralTrainer should be used. It approximates gradients instead of calculating them precisely, by perturbing the weights slightly and then re-evaluating the fitness function.
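As a rough illustration of the idea, here is a generic central-difference sketch over a map of parameters. This is not the `GradAprox` API, just the underlying technique:

```elixir
# Generic central-difference gradient approximation over a map of parameters.
# `fitness` is any function from a parameter map to a number; `eps` is the perturbation size.
approx_gradient = fn params, fitness, eps ->
  Map.new(params, fn {key, value} ->
    up   = fitness.(Map.put(params, key, value + eps))
    down = fitness.(Map.put(params, key, value - eps))
    {key, (up - down) / (2 * eps)}
  end)
end

# Toy usage: the fitness -(x - 3)^2 is maximized at x = 3, so at x = 0 the
# approximated gradient is roughly 6.0, pointing toward the optimum.
fitness = fn %{x: x} -> -(x - 3) * (x - 3) end
IO.inspect(approx_gradient.(%{x: 0.0}, fitness, 1.0e-4))
```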
See each module's documentation for more details and example usage.
### (Potential) Upcoming Features
* Newton's Method for gradient optimization
* Back-propagation through time (BPTT) and automated neural network unfolding
## Neat Example Usage
Here's a simple example that shows how to set up an evolution that trains neural networks to act like binary XOR gates, where -1 plays the role of 0 (and 1 is still 1). The expected behavior is listed in `dataset`, and each neural network is assigned a fitness based on how closely it matches that behavior. After 50 or so generations, or about 10 seconds of computation, the networks exhibit the expected behavior.
```elixir
#################
# Training data #
#################
dataset = [{{-1, -1}, -1}, {{1, -1}, 1}, {{-1, 1}, 1}, {{1, 1}, -1}] # {{in1, in2}, output} -> the expected behavior
####################
# Fitness function #
####################
fitness = fn ann ->
  sim = Ann.Simulation.new(ann)
  error = Enum.reduce dataset, 0, fn {{in1, in2}, out}, error ->
    # We'll use 1 and 2 as input neurons, and 3 as a bias (it will always be given a value of 1.0)
    inputs = %{1 => in1, 2 => in2, 3 => 1.0}
    # Then we'll evaluate a simulation with these inputs
    evaled_sim = Ann.Simulation.eval(sim, inputs)
    # Then we'll read the output of the 4th neuron, which is our output neuron
    result = Map.fetch!(evaled_sim.data, 4)
    # Then we'll add the distance between the result and the expected output to the accumulated error
    error + abs(result - out)
  end
  # We're supposed to return a fitness (where greater values are better) instead of an error
  # (where greater values are worse). Every neuron outputs values between -1 and 1, so the
  # furthest an output can be from the correct answer is 2. Since our dataset has 4 entries,
  # the maximum error is 4 * 2 = 8, and (8 - error) turns the error into a fitness.
  # Squaring the result increases the selection pressure toward low-error networks.
  :math.pow(8 - error, 2)
end
######################
# Training the model #
######################
# To specify the inputs and outputs, we make a seed_ann. This is how all the ANNs will start.
# Make a new network with inputs [1, 2, 3] and outputs [4], and use Ann.connectAll to connect
# all of the inputs to the outputs for us. This way it starts with some topology.
seed_ann = Ann.new([1, 2, 3], [4]) |> Ann.connectAll
# Make a new Neat struct with this seed_ann and our fitness function
neat = Neat.new_single_fitness(seed_ann, fitness)
# Then evolve it until it reaches fitness level 63 (this fitness function's maximum is 64)
{ann, fitness} = Neat.evolveUntil(neat, 63).best
IO.puts Ann.json(ann) # display a JSON representation of the ANN
###################
# Using the model #
###################
# We can then try out the produced ANN ourselves by making a simulation
simulation = ann
  |> Ann.Simulation.new
  |> Ann.Simulation.eval(%{1 => 1.0, 2 => 1.0, 3 => 1.0})
IO.inspect simulation.data[4] # in our example, the output is at neuron 4
```
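If you want to keep the evolved network around, the JSON string returned by `Ann.json/1` (shown above) can be written to disk with the standard library; the file name here is just an example:

```elixir
# Persist the serialized ANN produced in the example above.
File.write!("xor_ann.json", Ann.json(ann))
```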
### XOR
```sh
mix xor.single
```
This command runs the sample XOR code, evolving a neural network to act as an XOR logic gate. The resulting network can be viewed by running `./render xor` and then opening `xor.png`. Rendering depends on GraphViz and requires the `dot` command to be in your path.
### FishSim
```sh
mix fishsim [display_every] [minutes_to_run]
```
This evolves neural networks to act like fish and run away from a shark. Fitness is based on how long the fish can survive the shark. The task displays ASCII art of the simulation, where the @ sign is the shark and the digits represent fish: a higher digit means a higher relative concentration of fish at that location.
The evolution only prints output every `display_every` generations (default 1, meaning every generation). Setting it to 5, for example, will evolve for 5 generations between each display, which is far faster.
The evolution lasts `minutes_to_run` minutes (the default is 60).
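For example, to display every 5 generations and run for 30 minutes:

```sh
mix fishsim 5 30
```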
```sh
mix fishsim [display_every] [minutes_to_run] [file_to_record_to]
```
If you include a file name, the simulation will record visualization data to that file instead of displaying ASCII art. `display_every` then becomes handy for limiting the size of the visualization file. To view the recording afterwards, use Jonathan's project found [here](https://gitlab.com/Zanthos/FishSimVisualAid) and pass the file as the first argument.
When the process finishes, you can view the best fish by running `./render bestFish` and then opening `bestFish.png`.
## Testing (for contributors)
```sh
mix test
```