Overview
This is a simulation of a population of virtual agents (you could imagine them as living cells,
animals, robots, etc.) that compete for survival. Each agent must eat food to avoid starving, but
there is a limited amount of food available at any time. After a certain amount of time has passed,
the simulation will advance to the next generation, in which a new population of agents is created
from the combination and mutation of the genetic code of the previous population. The better an
agent performs (the more food it collects), the more likely it is to be selected for breeding. This
simulates natural selection, and results in the agents of each successive population performing
better than the last.
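The exact breeding scheme is not spelled out above, so purely as an illustration, the next
generation could be built with fitness-proportional (roulette-wheel) selection, single-point
crossover and per-gene mutation, roughly as in the TypeScript sketch below. Every name and constant
in it (Genome, ScoredAgent, breed, the 0.05 mutation rate, and so on) is a hypothetical stand-in,
not the simulation's actual code.

    // Hypothetical genome: a flat list of numbers (e.g. neural network weights).
    type Genome = number[];

    interface ScoredAgent {
      genome: Genome;
      foodEaten: number; // fitness: the food this agent collected this generation
    }

    // Pick a parent with probability proportional to its fitness (roulette wheel).
    function selectParent(pop: ScoredAgent[], totalFitness: number): ScoredAgent {
      let r = Math.random() * totalFitness;
      for (const agent of pop) {
        r -= agent.foodEaten;
        if (r <= 0) return agent;
      }
      return pop[pop.length - 1]; // fallback for rounding / zero total fitness
    }

    // Single-point crossover of two parent genomes, then small random mutations.
    function breed(a: Genome, b: Genome, mutationRate = 0.05): Genome {
      const cut = Math.floor(Math.random() * a.length);
      const child = [...a.slice(0, cut), ...b.slice(cut)];
      return child.map(g =>
        Math.random() < mutationRate ? g + (Math.random() * 2 - 1) * 0.5 : g
      );
    }

    // Build the genomes of the next generation from the scored previous one.
    function nextGeneration(pop: ScoredAgent[]): Genome[] {
      const total = pop.reduce((sum, agent) => sum + agent.foodEaten, 0);
      return pop.map(() =>
        breed(selectParent(pop, total).genome, selectParent(pop, total).genome)
      );
    }

Better-performing agents are simply more likely to be drawn as parents, which is all that natural
selection needs here; elite agents (described under Interface) would additionally be copied over
unchanged.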
At the start of the simulation, most agents show no useful correlation between
their senses (their current direction, and the direction to the closest food) and their movements,
often resulting in spiral or erratic paths. As time goes on, the agents eventually evolve to track
the closest food, actively moving towards it. The intricacies of these behaviours are completely
different every time the simulation is run, as they evolve organically from the random initial
conditions. For example, later populations tend to develop a predominant direction in which the
agents move. This is because the population shares a lot of its genetic code, and because all agents
moving in one direction seems to result in each agent coming across more food.
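For concreteness, the two senses mentioned above could plausibly be encoded as the agent's heading
plus the relative angle to the nearest food blob, along the lines of the sketch below (the names,
vector type and wrapping convention are assumptions, not taken from the real implementation).

    interface Vec { x: number; y: number; }

    // Relative angle (radians, wrapped to [-PI, PI]) from the agent's heading to the
    // nearest food blob - one plausible "sense" input for the agent's brain.
    function angleToClosestFood(pos: Vec, heading: number, food: Vec[]): number {
      let closest: Vec | null = null;
      let closestDistSq = Infinity;
      for (const f of food) {
        const distSq = (f.x - pos.x) ** 2 + (f.y - pos.y) ** 2;
        if (distSq < closestDistSq) { closestDistSq = distSq; closest = f; }
      }
      if (!closest) return 0; // no food left: feed the brain a neutral value
      const target = Math.atan2(closest.y - pos.y, closest.x - pos.x);
      let diff = target - heading;
      // Wrap so that "slightly left" and "slightly right" map to nearby values.
      while (diff > Math.PI) diff -= 2 * Math.PI;
      while (diff < -Math.PI) diff += 2 * Math.PI;
      return diff;
    }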
Tip: most of the significant population improvement usually happens by around generation 50-75,
which takes a few minutes to reach at maximum speed with drawing turned off.
Interface
- The world: the main part of the interface shows the simulated world. Blue circles are agents,
deep blue circles are elite agents (the best agents from the previous generation, which are brought
across directly), and green circles are food blobs.
- Generation: the current generation of the simulation.
- Time: how long the current generation has run for, measured in abstract units that track real
time and are independent of frame rate.
- Save: save the current state of the simulation.
- Load: load the most recent saved simulation state, if there is one.
- Restart: restart the simulation as if the page had reloaded.
- Pause: pause the simulation time.
- Stop/start drawing: toggle the drawing of the canvas. The simulation runs much faster when
drawing is disabled.
- Advance generation: if checked, the simulation will advance to the next generation when the time
runs out.
- Speed: controls the speed of the simulation, up to 2x faster.
- Alive agents: the number of agents that are alive.
- Dead agents: the number of agents that have died this generation.
- Total food eaten: the total amount of food that has been consumed this generation.
- Average food eaten: the average amount of food that has been consumed by each agent this
generation.
- Best food eaten: the highest amount of food that has been consumed by any single agent this
generation.
- Select best agent: selects the best-performing agent (most food eaten) of the current generation,
so that its details below can be inspected.
- Fullness: how full the selected agent is. The agent dies when this reaches 0.
- Food eaten: how much food the selected agent has consumed this generation.
- Network: the structure of the agent's neural network (brain). Each circle is a node that performs
some computation, and the lines connecting them carry information between nodes, influenced by
the weight of the line (represented by its thickness and colour). Green circles are inputs, red
circles are outputs (see the sketch after this list).
- Population progress: graphs the average and best food consumed for each generation.
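The network panel suggests a small weighted graph evaluated from the green input nodes through to
the red output nodes. Purely as an illustration (the real node computations, activation function and
topology may differ), a dense feedforward evaluation could look like this:

    // One dense layer: output[j] = tanh(bias[j] + sum_i input[i] * weight[i][j]).
    // The weights play the role of the drawn lines' thickness and colour.
    function denseLayer(input: number[], weights: number[][], biases: number[]): number[] {
      return biases.map((b, j) =>
        Math.tanh(input.reduce((sum, x, i) => sum + x * weights[i][j], b))
      );
    }

    // Feed the sense inputs (green nodes) through each layer in turn to get the
    // movement outputs (red nodes), e.g. turn and speed signals.
    function evaluateBrain(
      senses: number[],
      layers: { weights: number[][]; biases: number[] }[]
    ): number[] {
      return layers.reduce(
        (activations, layer) => denseLayer(activations, layer.weights, layer.biases),
        senses
      );
    }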
Controls
- Use the left mouse button to select agents and view their details.
- Use the arrow keys or drag with the right mouse button to pan the canvas around.
- Use the mouse wheel to zoom in and out.