Please do not use the code in this blog for academic work
**************************************************
Over the last couple of days I've been locked away in my room programming a neural network. I know, not the most fun thing I could be doing.
The XOR neural network uses back propagation to calculate the error of each individual connection in the network. This is done by finding the error at the output of the network and then passing it back to the network's previous nodes.
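If you want it in symbols (my own notation, not lifted from the code): with the logistic activation used below, the output node's error is delta_o = (d - o) * o * (1 - o), where d is the desired output and o is the actual output, and each hidden node's error is delta_h = y * (1 - y) * w * delta_o, where y is that node's output and w is its weight to the output node. Both of these turn up in the code further down.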
My current alpha version of the network will train on a set of test data and can then be run on live data to check that it works. At the moment I only have the network working for unipolar input, as the bipolar version is sticking when the output hits zero. This causes the messy problem of multiplying by zero when we pass the error back, so the affected weights stop changing. I still haven't figured out how to solve this, but I'll keep you informed when I do.
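For reference, here's what I mean by unipolar versus bipolar activation (a minimal sketch in my own words; Unipolar and Bipolar are just names I've picked for this post, not from my project code):

using System;

class Activation
{
    // unipolar (logistic) activation: squashes to (0, 1)
    public static double Unipolar(double x)
    {
        return 1.0 / (1.0 + Math.Exp(-x));
    }

    // its derivative written in terms of the output y: y * (1 - y)
    public static double UnipolarDerivative(double y)
    {
        return y * (1 - y);
    }

    // bipolar activation (tanh): squashes to (-1, 1)
    public static double Bipolar(double x)
    {
        return Math.Tanh(x);
    }

    // its derivative in terms of the output y: 1 - y^2
    public static double BipolarDerivative(double y)
    {
        return 1 - y * y;
    }
}

These derivatives and the inputs themselves both appear as factors when the error is passed back, so whenever one of them hits zero the whole weight adjustment is zero, which is the sticking I'm seeing.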
The code below is part of my network:
private void train(object sender, EventArgs e)
{
    // set training set
    //  input 1       input 2       desired out
    td[0, 0] = 0; td[0, 1] = 0; td[0, 2] = 0;
    td[1, 0] = 0; td[1, 1] = 1; td[1, 2] = 1;
    td[2, 0] = 1; td[2, 1] = 0; td[2, 2] = 1;
    td[3, 0] = 1; td[3, 1] = 1; td[3, 2] = 0;

    // randomise weights
    randweights();

    // keep looping over the training set until the error is low enough
    // or we run out of epochs
    do
    {
        E = 0;
        for (vector = 0; vector < 4; vector++)
        {
            forwardsprop(2, 2);               // hidden-layer outputs into y1
            double output = calculate((y1[0] * w2[0]) + (y1[1] * w2[1]));
            double error = calcerror(output); // error at the network output
            backprop(error);                  // update hidden-layer weights
            changeweight(error, 2, 2);        // update output-layer weights
            E += cycleerror(output);          // accumulate this epoch's error
        }
        number++;
    } while (E > Emax && number < epoch);

    MessageBox.Show("Training Complete, E = " + E.ToString() + " Epochs = " + number.ToString());
}
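The loop above calls a few helpers I haven't pasted in (randweights, calculate, calcerror and cycleerror). Roughly, they look like the sketch below; treat it as my best description of what they do rather than the exact code, and note that calcerror folding the o * (1 - o) derivative into the output error is an assumption based on how the weight update is written:

// activation function: the unipolar sigmoid, squashing to (0, 1)
public double calculate(double x)
{
    return 1.0 / (1.0 + Math.Exp(-x));
}

// output error: (desired - actual) times the sigmoid derivative
// (the desired output for the current vector is td[vector, 2])
public double calcerror(double output)
{
    double desired = td[vector, 2];
    return (desired - output) * output * (1 - output);
}

// this vector's contribution to the epoch error E
public double cycleerror(double output)
{
    double diff = td[vector, 2] - output;
    return 0.5 * diff * diff;
}

// start every weight off as a small random value
public void randweights()
{
    Random r = new Random();
    for (int node = 0; node < 2; node++)
        for (int j = 0; j < 2; j++)
            w1[node, j] = r.NextDouble() - 0.5;
    for (int j = 0; j < 2; j++)
        w2[j] = r.NextDouble() - 0.5;
}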
The training loop above is the general overview of what happens in the program: it repeatedly feeds in the training data and calculates new weights for each of the connections. The first step is forward propagation, to get the output of the network.
// propagate forwards through the network
public void forwardsprop(int inputs, int outputs)
{
    int i = vector; // row of the training set currently being used
    int count = 0;
    for (int y = 0; y < outputs; y++)
    {
        // weighted sum of the inputs into this hidden node
        y1[y] = 0;
        for (int j = 0; j < inputs; j++)
        {
            y1[y] += td[i, j] * w1[count, j];
        }
        // squash the sum with the activation function
        y1[y] = calculate(y1[y]);
        count++;
    }
}
This gives the output for each node and also the overall output of the network. Given this overall output we can then find the error in the system, using the desired output from the training set, and pass this value back through the network. This is called back propagation.
// pass errors back through the network to calculate new weights
public void backprop(double error)
{
    for (int i = 0; i < 2; i++)
    {
        // share of the output error carried by this hidden node's connection
        double nodeerror = error * w2[i];
        // multiply by the derivative of the activation, y * (1 - y)
        nodeerror = (y1[i] * (1 - y1[i])) * nodeerror;
        changeweight(nodeerror, i, 1);
    }
}
Once we have the errors for every node in the network we can work out how we need to change the weightings of each of the connections to reduce the amount of error in the network.
// messy way to calculate the new weights for a connection
public void changeweight(double error, int node, int layer)
{
    int i = vector;
    if (layer == 1)
    {
        // hidden-layer weights: adjust by learning rate * error * input
        for (int j = 0; j < 2; j++)
        {
            double wadjust = n * error * td[i, j];
            w1[node, j] += wadjust;
        }
    }
    if (layer == 2)
    {
        // output-layer weights: adjust by learning rate * error * hidden output
        for (int j = 0; j < 2; j++)
        {
            double wadjust = n * error * y1[j];
            w2[j] += wadjust;
        }
    }
}
The weights have now been changed in the network, and the next time you put data into the network the amount of error will be less.
To train the network you repeat this process until you reach Emax, the maximum amount of error the system is allowed, or until you have reached the desired number of epochs (the number of passes over all the data in the training set).
Once the system has been trained you can test it out to see if it actually works for the desired purpose.
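Running the trained network on a live input pair would look something like this (again a sketch; run is just a name I've made up here, but the forward pass is the same shape as forwardsprop above):

// run the trained network on one live input pair
public double run(double x1, double x2)
{
    double[] hidden = new double[2];
    for (int node = 0; node < 2; node++)
    {
        // weighted sum into each hidden node, then squash it
        hidden[node] = calculate(x1 * w1[node, 0] + x2 * w1[node, 1]);
    }
    // weighted sum of the hidden outputs gives the network output
    return calculate(hidden[0] * w2[0] + hidden[1] * w2[1]);
}

Once training has converged, run(1, 0) and run(0, 1) should come out close to 1, and run(0, 0) and run(1, 1) close to 0.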
As can be seen in the images, work still needs to be done on the network. However, the hard stuff is done, so I can finally get on with the other sections of work within my project, along with trying to get the code working for bipolar inputs.