Monday, September 29, 2008

A20 - Neural Networks

This is another way of classifying piattos and pillows chips. First, we train our system on an input feature set in which the columns are the features (red-and-green contrast, and area normalized to the largest area in the set) and the rows are the individuals.

x =[0.313023 0.7962447;
0.2596636 1.;
0.2721711 0.7661728;
0.3666842 0.842306;
0.8614653 0.8345313;
0.8959559 0.7132170;
0.9718898 0.5795805;
0.9224472 0.5188499]';

The first four rows pertain to piattos chips while the last four refer to pillows chips. We designate piattos as 0 and pillows as 1. Then our target set is
t = [0 0 0 0 1 1 1 1];
We input this into the neural network program and choose a test set whose member classifications we do not know.

testset = [0.3322144 0.8215565;
1.0195367 0.3562358;
0.3121461 0.9116466;
1.043473 0.3339846;
0.4000078 1.;
1.0175605 0.3679583;
0.3930316 1.;
0.9543794 0.2963204 ];

The test result is
0.1190987
0.9510804
0.0952159
0.9528344
0.1154394
0.9507229
0.1122649
0.9477875

which when binarized becomes
0
1
0
1
0
1
0
1.

Our system has classified the test samples with perfect accuracy. Indeed, the test set consists of alternating piattos and pillows photos.
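The binarization step above can be sketched in Python. A cutoff of 0.5 is assumed here, since the post does not state which threshold was used:

```python
# Binarize the network's continuous outputs into class labels.
# Threshold of 0.5 is an assumption; the post does not state the cutoff.
outputs = [0.1190987, 0.9510804, 0.0952159, 0.9528344,
           0.1154394, 0.9507229, 0.1122649, 0.9477875]

classes = [1 if y >= 0.5 else 0 for y in outputs]
print(classes)  # alternating piattos (0) and pillows (1)
```

Since every output is either well below or well above 0.5, the exact choice of threshold does not affect the result here.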

SELF-GRADE: 10/10

Acknowledgment: Jeric Tugaff for the code and Cole Fabros for explaining it to me.

Mr. Tugaff's code modified to process my data:
// Simple feed-forward NN for classifying piattos (0) vs pillows (1)
// ensure the same starting point each time
rand('seed',0);
// network definition: neurons per layer, including input
// 2 neurons in the input layer, 2 in the hidden layer and 1 in the output layer
N = [2,2,1];
// training inputs (after the transpose, each column is one individual)
x = [0.313023 0.7962447; 0.2596636 1.; 0.2721711 0.7661728; 0.3666842 0.842306; 0.8614653 0.8345313; 0.8959559 0.7132170; 0.9718898 0.5795805; 0.9224472 0.5188499]';
// test inputs
x2 = [0.3322144 0.8215565; 1.0195367 0.3562358; 0.3121461 0.9116466; 1.043473 0.3339846; 0.4000078 1.; 1.0175605 0.3679583; 0.3930316 1.; 0.9543794 0.2963204]';
// targets: 0 for piattos, 1 for pillows
t = [0 0 0 0 1 1 1 1];
// learning rate is 0.1 and 0 is the threshold for the error tolerated by the network
lp = [0.1,0];
W = ann_FF_init(N);
// 1000 training cycles
T = 1000;
W = ann_FF_Std_online(x,t,N,W,lp,T);
// classify the test set
y = ann_FF_run(x2,N,W);
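For readers without Scilab, the same 2-2-1 network can be sketched in plain Python. This is an illustrative re-implementation of standard online backpropagation with sigmoid units, not the ANN toolbox itself; the weight initialization, learning rate, and cycle count here are assumptions and may differ from what ann_FF_Std_online does internally:

```python
# Sketch of a 2-2-1 sigmoid network trained by online backpropagation
# on the same chip data. Hyperparameters (lr, epochs, init range) are
# illustrative assumptions, not the Scilab ANN toolbox defaults.
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# training features (rows = samples): [red-green contrast, normalized area]
x = [[0.313023, 0.7962447], [0.2596636, 1.0],
     [0.2721711, 0.7661728], [0.3666842, 0.842306],
     [0.8614653, 0.8345313], [0.8959559, 0.7132170],
     [0.9718898, 0.5795805], [0.9224472, 0.5188499]]
t = [0, 0, 0, 0, 1, 1, 1, 1]  # 0 = piattos, 1 = pillows

random.seed(0)
# weights and biases: 2 hidden neurons, 1 output neuron
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_h = [0.0, 0.0]
w_o = [random.uniform(-1, 1) for _ in range(2)]
b_o = 0.0
lr = 0.5  # assumed learning rate

def forward(xi):
    h = [sigmoid(sum(w * v for w, v in zip(w_h[j], xi)) + b_h[j])
         for j in range(2)]
    y = sigmoid(sum(w * v for w, v in zip(w_o, h)) + b_o)
    return h, y

for _ in range(5000):             # training cycles
    for xi, ti in zip(x, t):      # online (per-sample) weight updates
        h, y = forward(xi)
        d_o = (y - ti) * y * (1 - y)                # output-layer delta
        for j in range(2):
            d_h = d_o * w_o[j] * h[j] * (1 - h[j])  # hidden-layer delta
            w_o[j] -= lr * d_o * h[j]
            for i in range(2):
                w_h[j][i] -= lr * d_h * xi[i]
            b_h[j] -= lr * d_h
        b_o -= lr * d_o

testset = [[0.3322144, 0.8215565], [1.0195367, 0.3562358],
           [0.3121461, 0.9116466], [1.043473, 0.3339846],
           [0.4000078, 1.0], [1.0175605, 0.3679583],
           [0.3930316, 1.0], [0.9543794, 0.2963204]]
preds = [1 if forward(xi)[1] >= 0.5 else 0 for xi in testset]
print(preds)
```

Because the two classes are well separated in the contrast feature alone, even this small network recovers the alternating 0/1 pattern reported above.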
