ANNE: A simple XOR training example

---------------------------------------

This is a simple example of a learning session for a feedforward backpropagation neural network.
As the already initiated know, a net correctly classifies the XOR problem if it respects the following table of correspondence:
Input 1   Input 2   Output
   0         0         0
   0         1         1
   1         0         1
   1         1         0
The net must, basically, separate the input values into two classes (0 and 1) by repeated training and backpropagation of the resulting error.
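The training procedure can be sketched in Python as follows. This is a minimal stand-in, not ANNE's actual code; the layer sizes (2-4-1), learning rate, epoch count and random initialization range are all assumptions made for illustration:

```python
import math, random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

N_IN, N_HID = 2, 4
# One weight row per hidden unit; the last entry of each row is the bias.
w_hid = [[random.uniform(-1, 1) for _ in range(N_IN + 1)] for _ in range(N_HID)]
w_out = [random.uniform(-1, 1) for _ in range(N_HID + 1)]

def forward(x):
    hid = [sigmoid(sum(w[i] * x[i] for i in range(N_IN)) + w[N_IN]) for w in w_hid]
    out = sigmoid(sum(w_out[j] * hid[j] for j in range(N_HID)) + w_out[N_HID])
    return hid, out

# The XOR table of correspondence from above.
DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
LR = 0.5

def epoch_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA)

err_before = epoch_error()
for _ in range(20000):
    for x, t in DATA:
        hid, out = forward(x)
        # Output delta: squared error propagated through the output sigmoid.
        d_out = (out - t) * out * (1 - out)
        # Hidden deltas: backpropagated through the output weights.
        d_hid = [d_out * w_out[j] * hid[j] * (1 - hid[j]) for j in range(N_HID)]
        # Plain gradient-descent updates (vanilla backprop, no momentum).
        for j in range(N_HID):
            w_out[j] -= LR * d_out * hid[j]
        w_out[N_HID] -= LR * d_out
        for j in range(N_HID):
            for i in range(N_IN):
                w_hid[j][i] -= LR * d_hid[j] * x[i]
            w_hid[j][N_IN] -= LR * d_hid[j]

err_after = epoch_error()
```

After training, the summed squared error over the four patterns has dropped well below its initial value, and the output for each input pair approaches the target class.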
Here are visual representations of the network's learning:
The images below are successive snapshots of the net's status at different training epochs. You can see how the net separates the input space (which is not linearly separable, i.e. you cannot separate the two classes, 0 and 1, by drawing a single straight line) into three subspaces.
  • Stage 0: epoch 0
  • Stage 1: epoch 450
  • Stage 2: epoch 650
  • Stage 3: epoch 700
  • Stage 4: epoch 800
These pictures illustrate the evolution of the net over a part of the real input space: the [-1.2, 1.2] x [-1.2, 1.2] area of the 2-dimensional plane. The coordinates are chosen so that the area contains the four important points (the points themselves are not explicitly plotted in the image):
  • (0, 0) -- near the top-left of the image;
  • (0, 1) -- near the top-right of the image;
  • (1, 0) -- near the bottom-left of the image;
  • (1, 1) -- near the bottom-right of the image.
A black pixel signifies a value of 0, whereas a red one signifies a value of 1. You can see that many pixels take values in between these extremes.
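A snapshot like the ones above can be produced by sampling the net's output over the [-1.2, 1.2] x [-1.2, 1.2] area and mapping it to a black-to-red ramp. The sketch below is an assumption about how the pictures were made, and `net_output` here is a hand-wired XOR-like surface standing in for the actual trained ANNE network:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def net_output(x, y):
    # Hand-wired 2-2-1 XOR approximation (weights chosen by hand, purely
    # illustrative): h1 fires for "x or y", h2 fires for "x and y".
    h1 = sigmoid(6 * x + 6 * y - 3)
    h2 = sigmoid(6 * x + 6 * y - 9)
    return sigmoid(12 * h1 - 12 * h2 - 6)

WIDTH = HEIGHT = 32
pixels = []
for row in range(HEIGHT):
    y = -1.2 + 2.4 * row / (HEIGHT - 1)
    for col in range(WIDTH):
        x = -1.2 + 2.4 * col / (WIDTH - 1)
        v = net_output(x, y)               # value in (0, 1)
        pixels.append((int(255 * v), 0, 0))  # black (0) -> red (1) ramp
```

The resulting RGB triples can be written out with any image library; the intermediate shades of red are exactly the "in between" pixels mentioned above.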

For this classification, no momentum or other rapid-learning techniques were employed, just vanilla backpropagation.
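For contrast, here are the two weight-update rules side by side as small Python functions (the learning rate and momentum coefficient are illustrative values, not taken from this experiment):

```python
# Vanilla backpropagation update, as used for this classification:
# step straight down the gradient.
def vanilla_step(w, grad, lr=0.5):
    return w - lr * grad

# Momentum update (NOT used here): a velocity term v accumulates past
# gradients, which can speed learning along consistent directions.
def momentum_step(w, v, grad, lr=0.5, mu=0.9):
    v = mu * v - lr * grad
    return w + v, v
```

With zero initial velocity the two rules take the same first step; they diverge only once the velocity term starts accumulating.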

---------------------------------------