Making of NN4 – Data Structures

Example Neural Network

Below is an example of a neural network, visualized, that NN4 can build in memory. This is an example of a 4-layer network. The minimum NN4 can do is a 3-layer network, e.g. 1 x input layer, 1 x hidden layer and 1 x output layer. The example below has a so-called 2,3,2,1 configuration. Meaning, from left to right: two (2) input neurons in the 1st layer, three (3) neurons in the 1st hidden layer, two (2) neurons in the 2nd hidden layer and one (1) neuron in the output layer. (NN4 in v1.6 is set to a maximum of 8 layers, but nothing is stopping more. It is only limited by machine memory, as you will see later.)

At run time NN4 builds dynamic arrays to hold the neural network model. To build these we need some variables set up, e.g. the number of layers, and some kind of array for neurons per layer. In Pascal this is done like this:

Type
  NeuronsPerLayerType = Array of word;

Var
  NumberOfLayers  : word;
  NeuronsPerLayer : NeuronsPerLayerType;
  layer           : word;

……

NumberOfLayers := 4;
SetLength(NeuronsPerLayer, NumberOfLayers); { will allocate array indexes [0] [1] [2] and [3] }

For layer := 0 to NumberOfLayers-1 Do
  NeuronsPerLayer[layer] := { values from file or parameters from command line etc };

Note the comment "values from file": I chose to save the NN4 parameters in a text file. At run time, NN4 will open the file, read in the text lines, parse them and load the values into the array.
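As a minimal sketch of how such a parse could look (the file name and the exact line layout here are hypothetical, not NN4's actual file format; StrToInt needs SysUtils in the uses clause), assume the first line of the config file holds the layer count followed by the neurons per layer, e.g. "4 2 3 2 1":

```pascal
Var
  ConfigFile : Text;
  TextLine   : String;
  layer, p   : word;

Begin
  Assign(ConfigFile, 'nn4.cfg');  { hypothetical file name }
  Reset(ConfigFile);
  Readln(ConfigFile, TextLine);   { e.g. '4 2 3 2 1' }
  Close(ConfigFile);

  { first value: number of layers }
  p := Pos(' ', TextLine);
  NumberOfLayers := StrToInt(Copy(TextLine, 1, p-1));
  Delete(TextLine, 1, p);

  SetLength(NeuronsPerLayer, NumberOfLayers);
  for layer := 0 to NumberOfLayers-2 do
  begin
    p := Pos(' ', TextLine);
    NeuronsPerLayer[layer] := StrToInt(Copy(TextLine, 1, p-1));
    Delete(TextLine, 1, p);
  end;
  { last value has no trailing delimiter }
  NeuronsPerLayer[NumberOfLayers-1] := StrToInt(TextLine);
End;
```

This uses the NumberOfLayers and NeuronsPerLayer variables declared above.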

Setup of the neuron model in memory

Similar to the above, I chose to build the entire neural network model in memory. Thus at run time, NN4 will build the arrays, and run or learn, all in memory. This of course limits the maximum size, but as mentioned, for me this was an exercise in understanding the fundamentals. And anyway, machines these days have quite a lot of memory, as can be seen later from the various models I tried to build.
In Pascal it looks like this:

Type
  ActivationFunctionType = (Sigmoid, TanH, SoftPlus, Swish); { enumerated type; this set in v1.6, more in a later version }

  NeuronType = Record
    ValueIn,    { will store the sum of weights * prior-layer neuron outputs }
    ValueOut,   { will store the output of the activation function applied to ValueIn }
    Derivative, { will store the derivative outcome, useful for certain activation functions }
    Error : Single; { will store the error value at this neuron node }

    ActivationFunction : ActivationFunctionType; { ** note 1 ** }
  End;


Var
  Neuron : Array of Array of NeuronType; { 2 dimensional array; note ':' here, since this is a variable, not a type }
  layer  : word;

…..

SetLength(Neuron, NumberOfLayers); { set 1st dimension of the array }
for layer := 0 to NumberOfLayers-1 Do
  SetLength(Neuron[layer], NeuronsPerLayer[layer]+1);

A keen eye will have spotted the +1: this is because we want to use the last neuron in each layer to store the Bias. Even though the Bias will “always” be 1, it is better not to hard-code it, as we can then use the same loops for adjusting weights etc.
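Since the Bias is stored rather than hard-coded, one sketch of initializing it (assuming the Neuron array from above has already been allocated) would be:

```pascal
{ set the extra (last) neuron in every layer to act as the Bias }
for layer := 0 to NumberOfLayers-1 do
  Neuron[layer, NeuronsPerLayer[layer]].ValueOut := 1.0;
```

After this, the forward-pass loops can treat the Bias like any other neuron feeding the next layer.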

Note 1 – the activation function can be defined at node level. In all models I show later I stick to using the same activation function across all nodes in the neural network model. It is possible, and some research shows it can help, to mix activation functions between hidden and output layers etc. – but those experiments I will do when I graduate this level :-).

Setup weights array in memory

Think of weights along 3 dimensions: the 1st dimension being the layer they are in, the 2nd dimension the neuron to the left, and the 3rd dimension the neuron to the right. As a reminder, here is the picture again.

Layer 0
From To
N0,0 > N1,0 is W0,0,0
N0,0 > N1,1 is W0,0,1
N0,0 > N1,2 is W0,0,2
N0,1 > N1,0 is W0,1,0
…….
N0,2 > N1,2 is W0,2,2

Layer 1
From To
N1,0 > N2,0 is W1,0,0
…….
N1,2 > N2,2 is W1,2,2

and so on… so in Pascal this looks like this:

Type
  WeightRecord = Record
    Value,                { this is the weight value }
    PriorAdjust : Single; { the adjustment applied during back propagation; storing the prior value drives momentum }
  End;

  WeightType = Array of Array of Array of WeightRecord; { note: 3 dimensional array }

Var
  Weight : WeightType;
  layer, left, right : word;

…….

SetLength(Weight, NumberOfLayers-1); { 1st dimension: one weight layer between each pair of node layers }
{ -1 because there are 4 node layers but only 3 weight layers }
for layer := 0 to NumberOfLayers-2 Do { -2 accommodates indexing starting at 0, and see above }
Begin

  SetLength(Weight[layer], NeuronsPerLayer[layer]+1); { 2nd dim allocated, +1 for the Bias neuron }
  for left := 0 to NeuronsPerLayer[layer] Do
    SetLength(Weight[layer,left], NeuronsPerLayer[layer+1]); { no weight INTO the next layer's Bias, so no +1 here }

End;

Up to this point, we have seen how, from two parameters that describe the network, i.e. the number of layers and the neurons-per-layer array, we can build the neuron and weight arrays in memory. On the next pages we shall see how, using forward and back propagation, we manipulate the values.
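To get a feel for the memory footprint mentioned earlier, a small sketch (using only the arrays defined above; the counting loop itself is mine, not part of NN4) can tally the records allocated:

```pascal
Var
  l, NeuronCount, WeightCount : word;

Begin
  NeuronCount := 0;
  WeightCount := 0;
  for l := 0 to NumberOfLayers-1 do
    NeuronCount := NeuronCount + NeuronsPerLayer[l] + 1;  { +1 for the Bias neuron }
  for l := 0 to NumberOfLayers-2 do
    WeightCount := WeightCount + (NeuronsPerLayer[l]+1) * NeuronsPerLayer[l+1];
  { for the 2,3,2,1 example: 12 neuron records and 20 weight records }
End;
```

Multiplying these counts by SizeOf(NeuronType) and SizeOf(WeightRecord) gives a rough idea of how large a model will fit in memory.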

Data Lines Structures

I chose to also read the data file into memory. This may not be the best choice if there are millions of data lines to use for learning – but for now NN4 in v1.6 is what it is. The data file will have as its first parameter the number of data lines. NN4 will only read (and expects exactly) that many data lines. A data line is a set of values terminated by CR (Carriage Return). Values are expected to be delimited by the delimiter char specified in the config file (typically space or comma). Values should be reals, e.g. 1 or 0.1 or -456.009 etc. The number of values should be the sum of the neurons in the 1st layer (input) and the neurons at the output layer. In our example above the configuration was 2,3,2,1. This means 2 input values and 1 output value, thus each data line must have 3 values. If there are more they are ignored (see later how NN4 produces more values for display purposes).
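For illustration only (the values here are made up as an XOR-style example, not taken from an actual NN4 run), a data file for the 2,3,2,1 configuration with a space delimiter could look like this:

```
4
0 0 0
0 1 1
1 0 1
1 1 0
```

The first line says there are 4 data lines; each following line holds 2 input values and 1 output value.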

Anyhow, the data line structures we need are built in Pascal as shown below:

Type
  DataLineType = Record
    DataIn  : Array of Single;
    DataOut : Array of Single;
  End;

Var
  DataLine : Array of DataLineType;
  DataLinesCount : Word; { stores the max number of lines, defined by the first param in the data file }
  line : word;

…..

DataLinesCount := {read 1st param from file};
{ example how this could be done, but beyond the scope here }
{ open the data file, of text file type, for reading }
Readln(DataFileHandle, TextLine); { fetch entire line }
TextLine := Copy(TextLine, 1, Pos(DelimiterChar, TextLine)-1); { separate value by delimiter; Pascal strings index from 1 }
DataLinesCount := StrToInt(TextLine); { convert to number }
{ end of example }

SetLength(DataLine, DataLinesCount); { gives indexes [0] .. [DataLinesCount-1] }
for line := 0 to DataLinesCount-1 Do
Begin
  SetLength(DataLine[line].DataIn, NeuronsPerLayer[0]);
  SetLength(DataLine[line].DataOut, NeuronsPerLayer[NumberOfLayers-1]); { -1 as indexing is 0,1,…NumberOfLayers-1 }
End;

I know I have mixed type declarations and some procedural code, but never mind – you get the picture.