This is a simple example of a feedforward backpropagation
neural network learning session:
As the already initiated know,
a net correctly classifies the XOR problem if it respects the following
table of correspondence:
    Input 1 | Input 2 | Output
    --------+---------+-------
       0    |    0    |   0
       0    |    1    |   1
       1    |    0    |   1
       1    |    1    |   0
The net must, basically, separate the input values into two classes
(0 and 1) through repeated training and backpropagation of the
resulting error.
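To make the process concrete, here is a minimal sketch of such a
training session. The 2-2-1 architecture, sigmoid activations,
learning rate and epoch count are illustrative assumptions, not the
exact settings used to produce the snapshots below:

    import numpy as np

    rng = np.random.default_rng(0)

    # XOR training set, matching the correspondence table above.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # One hidden layer of two units is the smallest net that can solve XOR.
    W1 = rng.uniform(-1, 1, size=(2, 2))   # input -> hidden weights
    b1 = np.zeros((1, 2))
    W2 = rng.uniform(-1, 1, size=(2, 1))   # hidden -> output weights
    b2 = np.zeros((1, 1))
    lr = 0.5                               # assumed learning rate

    for epoch in range(5000):
        # Forward pass.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass: propagate the output error toward the inputs.
        err = out - y                        # squared-error gradient w.r.t. out
        d_out = err * out * (1 - out)        # delta at the output unit
        d_h = (d_out @ W2.T) * h * (1 - h)   # deltas at the hidden units

        # Gradient-descent weight updates.
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0, keepdims=True)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0, keepdims=True)

    # Should approach [0, 1, 1, 0]; an unlucky initialization may need
    # a different seed or more epochs.
    print(np.round(out, 2))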
Here are visual representations of the network's learning:
The images below are successive snapshots of the net's state
at different training epochs. You can see how the net separates
the input space (which is not linearly separable, i.e. you cannot
separate the two classes, 0 and 1, by drawing a single line) into
three subspaces.
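Why no single line suffices: suppose a linear boundary
w1*x1 + w2*x2 + b = 0 separated the classes, with output 1 exactly
when w1*x1 + w2*x2 + b > 0. The table above would require

    b < 0                (from input (0, 0) -> 0)
    w1 + w2 + b < 0      (from input (1, 1) -> 0)
    w1 + b > 0           (from input (1, 0) -> 1)
    w2 + b > 0           (from input (0, 1) -> 1)

Adding the last two inequalities gives w1 + w2 + 2b > 0, hence
w1 + w2 + b > -b > 0 (because b < 0), contradicting the second
inequality. A hidden layer lets the net combine more than one such
boundary, which is what produces the three subspaces.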
[Snapshot at epoch 0]
[Snapshot at epoch 450]
[Snapshot at epoch 650]
[Snapshot at epoch 700]
[Snapshot at epoch 800]
These pictures illustrate the evolution of the net's output over a
part of the real input space: the [-1.2, 1.2] x [-1.2, 1.2] area of
the 2-dimensional plane. The coordinates are chosen so that the area
contains the four important points (these points are not explicitly
plotted in the images):
- (0, 0) -- near the top-left of the image;
- (0, 1) -- near the top-right of the image;
- (1, 0) -- near the bottom-left of the image;
- (1, 1) -- near the bottom-right of the image.
A black pixel signifies a value of 0, whereas a red one signifies
a value of 1. You can see that many pixels have a value
in between these extremes.
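A snapshot like the ones above could be rendered roughly as follows,
reusing the trained weights W1, b1, W2, b2 from the earlier sketch.
The grid resolution, the use of the Pillow library, and the exact
axis orientation of the original images are assumptions:

    import numpy as np
    from PIL import Image

    def forward(p):
        # Forward pass through the trained 2-2-1 net from the sketch above.
        h = 1.0 / (1.0 + np.exp(-(p @ W1 + b1)))
        return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

    size = 200                           # assumed image resolution
    axis = np.linspace(-1.2, 1.2, size)  # the [-1.2, 1.2] range from the text
    img = np.zeros((size, size, 3), dtype=np.uint8)
    for row, x1 in enumerate(axis):      # input 1 along the rows
        for col, x2 in enumerate(axis):  # input 2 along the columns
            v = forward(np.array([[x1, x2]]))[0, 0]
            img[row, col] = (int(255 * v), 0, 0)   # 0 -> black, 1 -> pure red

    Image.fromarray(img).save("snapshot.png")

The intermediate shades mentioned above fall out naturally: the
sigmoid output is a continuous value in (0, 1), so pixels near a
decision boundary get a partial red intensity.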