Monday, September 29, 2008

Project - Image Deblurring

I'm still in the process of searching for references:
[1] http://www.picturesolve.com
[2] http://ieeexplore.ieee.org/iel3/4760/13241/00600843.pdf
[3] "How to Sharpen an Image in Photoshop," http://www.photoshopsupport.com/tutorials/sharpen-an-image/photo-sharpening.html
[4] "Sharpen Effect," http://www.websupergoo.com/helpie/source/2-effects/sharpen.htm

Hehe, we changed topic. The new project is the sharpening of images taken under a microscope. Due to the limited depth of field of the microscope and the sample's varying topography, some parts of the image become blurred. The sharpening filter to be used is a high-boost filter given by
H = (1/9)*[-1 -1 -1; -1 w -1; -1 -1 -1] (1)

where w = 9A - 1 with A > 1. The effect of the above mask is an output image related to the original by

High-boost output = (A - 1)*Original + Highpass [2].
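
As a quick check, here is a minimal Scilab sketch of applying the mask (a sketch only, assuming a grayscale image gray and that a 2-D convolution routine such as conv2 is available):

A = 2; // boost factor, must be > 1
w = 9*A - 1; // center weight of the mask, here w = 17
H = (1/9)*[-1 -1 -1; -1 w -1; -1 -1 -1]; // the mask of Eq. (1)
sharpened = conv2(gray, H, 'same'); // filter the blurred image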

A20 - Neural Networks

This is another way of classifying piattos and pillows chips. First, we train our system on an input feature set. In the matrix below, the columns are the features (red-and-green contrast, and area normalized to the largest area in the set) and the rows are the individuals; the trailing transpose then puts one sample per column, which is the layout the network code expects.

x =[0.313023 0.7962447;
0.2596636 1.;
0.2721711 0.7661728;
0.3666842 0.842306;
0.8614653 0.8345313;
0.8959559 0.7132170;
0.9718898 0.5795805;
0.9224472 0.5188499]';
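
The second column of x is each chip's pixel area from A18 divided by the largest area in the set; as a one-line sketch (areas being a hypothetical vector of the raw pixel areas):

areas_n = areas/max(areas); // e.g. 5428/6817 = 0.7962447, the first entry above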

The first four rows pertain to piattos chips while the last four refer to pillows chips. We designate piattos as 0 and pillows as 1. Then our target set is
t = [0 0 0 0 1 1 1 1];
We input this into the neural network program and choose a test set whose member classifications we do not know.

testset = [0.3322144 0.8215565;
1.0195367 0.3562358;
0.3121461 0.9116466;
1.043473 0.3339846;
0.4000078 1.;
1.0175605 0.3679583;
0.3930316 1.;
0.9543794 0.2963204 ];

The test result is
0.1190987
0.9510804
0.0952159
0.9528344
0.1154394
0.9507229
0.1122649
0.9477875

which when binarized becomes
0
1
0
1
0
1
0
1.

Our system has classified the test samples with perfect accuracy. Indeed, the test set consists of alternating piattos and pillows photos.
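
The binarization is just a threshold at 0.5 on the raw outputs; as a one-line sketch (result being the vector of raw network outputs above):

labels = bool2s(result > 0.5); // 0 -> piattos, 1 -> pillows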

SELF-GRADE: 10/10

Acknowledgment: Jeric Tugaff for the code and Cole Fabros for explaining it to me.

Mr. Tugaff's code modified to process my data:
// Simple NN, based on Jeric Tugaff's example that learns 'and' logic,
// modified to classify the chip feature vectors (Scilab ANN toolbox)

// ensure the same starting point each time
rand('seed',0);

// network definition: neurons per layer, including the input layer
// (2 neurons in the input layer, 2 in the hidden layer, 1 in the output layer)
N = [2,2,1];

// training inputs and test inputs, one column per sample
x = [0.313023 0.7962447; 0.2596636 1.; 0.2721711 0.7661728; 0.3666842 0.842306; 0.8614653 0.8345313; 0.8959559 0.7132170; 0.9718898 0.5795805; 0.9224472 0.5188499]';
x2 = [0.3322144 0.8215565; 1.0195367 0.3562358; 0.3121461 0.9116466; 1.043473 0.3339846; 0.4000078 1.; 1.0175605 0.3679583; 0.3930316 1.; 0.9543794 0.2963204]';

// targets: 0 for piattos, 1 for pillows
t = [0 0 0 0 1 1 1 1];

// learning rate is 0.1 and 0 is the threshold for the error tolerated by the network
lp = [0.1,0];

// initialize the weights, then train online for T cycles
W = ann_FF_init(N);
T = 1000; // 1000 training cycles
W = ann_FF_Std_online(x,t,N,W,lp,T);

// run the trained network on the test set to get the raw outputs listed above
result = ann_FF_run(x2,N,W);

A19 - Probabilistic Classification

I used data from the previous activity. There were again two classes (piattos and pillows), each with four training images. Only one test image was chosen for analysis. The feature set was the same as in the previous activity.

---RG---- ----Area---- --Classification--
0.3130230 5428. piattos
0.2596636 6817. piattos
0.2721711 5223. piattos
0.3666842 5742. piattos
0.8614653 5689. pillows
0.8959559 4862. pillows
0.9718898 3951. pillows
0.9224472 3537. pillows

We have our global feature set (the code at the end of this post uses the same data with the area normalized to the largest area in the set)

x = [0.3130230 5428.;
0.2596636 6817.;
0.2721711 5223.;
0.3666842 5742.;
0.8614653 5689.;
0.8959559 4862.;
0.9718898 3951.;
0.9224472 3537.];

and its classification vector

y = [1;1;1;1;2;2;2;2];

We separate the global feature set into per-class feature sets

x1 = [0.3130230 5428.; 0.2596636 6817.; 0.2721711 5223.; 0.3666842 5742.];
x2 = [0.8614653 5689.; 0.8959559 4862.; 0.9718898 3951.; 0.9224472 3537.];

We get the mean for each feature in each group

u1 = [0.3028855 5802.5];
u2 = [0.9129396 4509.75];

and the global mean vector (with equal class sizes, this is just (u1 + u2)/2)

u = [0.6079125 5156.125];

Then we get the mean-corrected data (xnaught1 and xnaught2) by subtracting the global mean u from each sample, and the covariance matrix for each group (c1 and c2):

xnaught1 = [-0.2948895 271.875;
-0.3482489 1660.875;
-0.3357414 66.875;
-0.2412283 585.875];

xnaught2 = [0.2535528 532.875;
0.2880434 -294.125;
0.3639773 -1205.125;
0.3145347 -1619.125];

c1 = [0.0947876 -205.58834;
-205.58834 795035.89];

c2 = [0.0946674 -224.37948;
-224.37948 1111089.3];

The pooled within-group covariance matrix C is then

C = [0.0947275 -214.98391;
-214.98391 953062.61];

Its inverse, from the 2x2 formula inv([a b; b d]) = [d -b; -b a]/(ad - b^2), is

inv(C) = [21.629463 0.0048790;
0.0048790 0.0000021];

The prior probability vector is p = [4/8; 4/8], since each class has four of the eight training samples.
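
For each class i, the linear discriminant value of a feature vector x is built from the class mean ui, the pooled inverse covariance inv(C), and the prior pi; this is exactly what the code at the end of this post evaluates:

fi(x) = ui*inv(C)*x' - (1/2)*ui*inv(C)*ui' + ln(pi)

A sample is assigned to the class with the larger discriminant value.
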
The linear discriminant functions yield

----f1----- ----f2----- Class
40.193166 38.215655 1
57.712379 55.641361 1
35.908829 33.609495 1
46.444826 44.89887 1
62.954231 64.805784 2
52.618282 54.544249 2
42.555135 44.824398 2
35.055327 36.902364 2

Let our test image be that of a member of the piattos group (suppose we do not know this fact). Its feature vector, with the area normalized to the largest area in its set as in the code below, is

test = [0.3322144 0.8215565];

Its f1 and f2 values are 43.269644 and 41.458361, respectively.

We show the linear discriminant functions plot and use it to classify the test image. Since f1 > f2 for the test sample, the graph tells us that it is a piattos chip.
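
A sketch of the plotting step (my assumption of how the plot was made; f1 and f2 are the discriminant row vectors computed in the code below):

// plot the two discriminant values against the sample index
plot(1:8, f1, 'o-');
plot(1:8, f2, 'x-');
legend(['f1 (piattos)'; 'f2 (pillows)']);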


Code:

// global feature set: RG contrast and area normalized to the largest area in the set
x = [0.313023 0.7962447; 0.2596636 1.; 0.2721711 0.7661728; 0.3666842 0.842306; 0.8614653 0.8345313; 0.8959559 0.7132170; 0.9718898 0.5795805; 0.9224472 0.5188499];
// class labels
y = [1;1;1;1;2;2;2;2];

// per-class feature sets
x1 = [0.313023 0.7962447; 0.2596636 1.; 0.2721711 0.7661728; 0.3666842 0.842306];
x2 = [0.8614653 0.8345313; 0.8959559 0.7132170; 0.9718898 0.5795805; 0.9224472 0.5188499];

// class means and global mean (row vectors of column means)
u1 = mean(x1,'r');
u2 = mean(x2,'r');
u = mean(x,'r');

// mean-corrected data: subtract the global mean from each sample
xnaught1 = [];
for i = 1:size(x1,1)
    xnaught1(i,:) = x1(i,:) - u;
end
xnaught2 = [];
for i = 1:size(x2,1)
    xnaught2(i,:) = x2(i,:) - u;
end

// per-class covariance matrices and the pooled within-group covariance
c1 = xnaught1'*xnaught1/size(xnaught1,1);
c2 = xnaught2'*xnaught2/size(xnaught2,1);
C = (4*c1 + 4*c2)/8;
Cinv = inv(C);

// prior probabilities: four of the eight training samples per class
P = [4/8; 4/8];

// linear discriminant values of the training samples
f1 = u1*Cinv*x' - (u1*Cinv*u1')/2 + log(P(1));
f2 = u2*Cinv*x' - (u2*Cinv*u2')/2 + log(P(2));

// discriminant values of the test sample
test = [0.3322144 0.8215565];
f1test = u1*Cinv*test' - (u1*Cinv*u1')/2 + log(P(1));
f2test = u2*Cinv*test' - (u2*Cinv*u2')/2 + log(P(2));

SELF-GRADE: 9/10. I enjoyed the activity, but it took me a long time to finish.

Monday, September 15, 2008

A18 - Pattern Recognition

For this activity, the aim is to automatically classify images of piattos and pillows chips using features extracted from the training images. There were four training images for the piattos and another four for the pillows. The features used were the red-and-green contrast and the area.
The area was obtained by simply binarizing the images and summing the pixels. The red-and-green contrast is defined as follows.
Let R be the red channel; its contrast is r = (max(R) - min(R))/(max(R) + min(R)).
Let G be the green channel; its contrast is g = (max(G) - min(G))/(max(G) + min(G)).
The red-and-green contrast is then given by rg = sqrt(r*r + g*g).

I designated piattos as 1 and pillows as 2.


Piattos features:
---RG---- ----Area----
0.3130230 5428.
0.2596636 6817.
0.2721711 5223.
0.3666842 5742.
Pillows features:
---RG---- ----Area----
0.8614653 5689.
0.8959559 4862.
0.9718898 3951.
0.9224472 3537.

Then I input a series of images consisting of four consecutive piattos and four consecutive pillows.
Test features:
---RG---- ----Area----
0.3322144 7569.
0.3121461 8399.
0.4000078 9213.
0.3930316 9213.
1.0195367 3282.
1.043473 3077.
1.0175605 3390.
0.9543794 2730.


The output of the program below, which made use of minimum distance classification, was
1 1 1 1 2 2 2 2
so the chips were correctly classified with 100% accuracy.
If I add another class, the vcut class, using the same features, the accuracy drops to 50%. This is because piattos and vcut have nearly the same color and nearly the same area.
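
For reference, minimum distance classification assigns a test feature vector x to the class whose mean feature vector mj is nearest. Expanding ||x - mj||^2 and dropping the term that does not depend on j gives the equivalent linear decision function

dj(x) = x'*mj - (1/2)*mj'*mj,

which is then maximized over j. (The code below uses a variant without the 1/2 factor and picks the minimum of |d| instead.)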


//for piattos: extract the features of the four piattos training images
piattos = [];
for i = 1:4
    M = imread(strcat('D:\ap186\september17\piatos'+string(i)+'.jpg'));
    r = M(:,:,1); // red channel
    g = M(:,:,2); // green channel
    contrast_r = (max(r)-min(r))/(max(r)+min(r));
    contrast_g = (max(g)-min(g))/(max(g)+min(g));
    piattos(1,i) = sqrt(contrast_r*contrast_r + contrast_g*contrast_g); // RG feature
    M_gray = im2gray(M);
    M_bw = im2bw(abs(1-M_gray),0.78); // binarize the inverted grayscale image
    piattos(2,i) = sum(M_bw); // area = number of white pixels
end

//for pillows: same feature extraction, different binarization threshold
pillow = [];
for i = 1:4
    M = imread(strcat('D:\ap186\september17\pillow'+string(i)+'.jpg'));
    r = M(:,:,1);
    g = M(:,:,2);
    contrast_r = (max(r)-min(r))/(max(r)+min(r));
    contrast_g = (max(g)-min(g))/(max(g)+min(g));
    pillow(1,i) = sqrt(contrast_r*contrast_r + contrast_g*contrast_g);
    M_gray = im2gray(M);
    M_bw = im2bw(abs(1-M_gray),0.6);
    pillow(2,i) = sum(M_bw);
end

// class mean feature vectors, one column per class
m = [];
m(:,1) = sum(piattos,'c')/size(piattos,2);
m(:,2) = sum(pillow,'c')/size(pillow,2);

//for the test samples
test = [];
for i = 1:8
    M = imread(strcat('D:\ap186\september17\test\'+string(i)+'.jpg'));
    r = M(:,:,1);
    g = M(:,:,2);
    contrast_r = (max(r)-min(r))/(max(r)+min(r));
    contrast_g = (max(g)-min(g))/(max(g)+min(g));
    test(1,i) = sqrt(contrast_r*contrast_r + contrast_g*contrast_g);
    M_gray = im2gray(M);
    M_bw = im2bw(abs(1-M_gray),0.63);
    test(2,i) = sum(M_bw);
end

//Minimum distance classification
d = [];
for i = 1:8
    for j = 1:2
        // linear decision function for class j and test sample i
        d(j,i) = test(:,i)'*m(:,j) - m(:,j)'*m(:,j);
    end
end

d = abs(d);
x = [];
for i = 1:8
    // find() returns a linear index into the 2x8 matrix d
    x(i) = find(d==min(d(:,i)));
end
for i = 1:length(x)
    // convert the linear index of column i into a row index, i.e. the class label
    x(i) = x(i) - 2*(i-1);
end

x // display the class labels of the eight test samples

SELF-GRADE: 10/10 because I achieved 100% accuracy.

Monday, September 1, 2008

A16 - Image Color Segmentation



September 2, 2008

Suppose I have a reference image Mpatch and an image M that I want to segment. Their NCC (normalized chromaticity coordinate) components are rpatch, gpatch, and bpatch for the patch, and r, g, and b for the image, where, for example, r = R/(R + G + B) for red channel R.
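
The NCC components above were precomputed; here is a minimal sketch of how they can be obtained (my assumption of this step; the same lines applied to Mpatch give rpatch, gpatch, and bpatch):

I = double(M); // work in floating point
S = I(:,:,1) + I(:,:,2) + I(:,:,3); // per-pixel sum R + G + B
r = I(:,:,1)./S; // normalized chromaticity coordinates
g = I(:,:,2)./S; // (pixels with S = 0 would need special handling)
b = I(:,:,3)./S;
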
Code for non-parametric image segmentation by color:


// bin width of the chromaticity histograms
delta_pix = 0.01;

// histogram of the red NCC values of the patch
val=[];
num_=[];
num=[];
counter=1;
for i=0:delta_pix:1
    [x,y]=find((rpatch>=i) & (rpatch<(i+delta_pix))); // pixels falling in this bin
    val(counter)=i;
    num_(counter)=length(x);
    counter=counter+1;
end
num = num_./(size(rpatch,1)*size(rpatch,2)); // normalize to a PDF
plot(val, num);

B = zeros(size(M,1),size(M,2),3); // backprojected probability map

// histogram of the green NCC values of the patch
val=[];
num_=[];
num=[];
counter=1;
for i=0:delta_pix:1
    [x,y]=find((gpatch>=i) & (gpatch<(i+delta_pix)));
    val(counter)=i;
    num_(counter)=length(x);
    counter=counter+1;
end
num = num_./(size(gpatch,1)*size(gpatch,2));
plot(val, num);

// backprojection: each image pixel gets the patch-histogram value of its g chromaticity
for i = 0:delta_pix:1
    [x,y] = find((g>=i) & (g<(i+delta_pix)));
    for k=1:length(x)
        B(x(k),y(k),2) = num((i+delta_pix)/delta_pix); // matched (row,col) pairs
    end
    i // echo loop progress
end
B(:,:,3) = 1-B(:,:,1)-B(:,:,2);

Code for parametric segmentation:

// bin width, then histogram (PDF) of the red NCC values of the patch
delta_pix = 0.01;
val=[];
num_=[];
PDF = [];
counter=1;
for i=0:delta_pix:1
    [x,y]=find((rpatch>=i) & (rpatch<(i+delta_pix)));
    val(counter)=i;
    num_(counter)=length(x);
    counter=counter+1;
end
PDF = num_./(size(rpatch,1)*size(rpatch,2));
plot(val, PDF);
mn = find(PDF==max(PDF)); // the histogram peak estimates the mean
mu_red = val(mn);
sigma_red = stdev(PDF); // width estimate (stdev of the histogram values)

// same procedure for the green NCC values
val=[];
num_=[];
PDF = [];
counter=1;
for i=0:delta_pix:1
    [x,y]=find((gpatch>=i) & (gpatch<(i+delta_pix)));
    val(counter)=i;
    num_(counter)=length(x);
    counter=counter+1;
end
PDF = num_./(size(gpatch,1)*size(gpatch,2));
plot(val,PDF);
mn = find(PDF==max(PDF));
mu_green = val(mn);
sigma_green = stdev(PDF);

// Gaussian PDFs for r and g, sampled at the bin centers
// (note the sigma^2 in the exponent of the Gaussian)
x = [0:delta_pix:1];
pr = exp(-((x-mu_red).^2)/(2*sigma_red^2))/(sigma_red*sqrt(2*%pi));
pg = exp(-((x-mu_green).^2)/(2*sigma_green^2))/(sigma_green*sqrt(2*%pi));

// joint probability map: p(r)*p(g) for every pixel
K = zeros(size(M,1),size(M,2));
for i = 0:delta_pix:1
    [x,y] = find((r>=i) & (r<(i+delta_pix)));
    for k=1:length(x)
        gbin = round(g(x(k),y(k))/delta_pix) + 1; // bin of this pixel's g value
        K(x(k),y(k)) = pr((i+delta_pix)/delta_pix)*pg(gbin);
    end
    i // echo loop progress
end

// Display the raw image and the two probability maps
G = im2gray(B); // non-parametric map
L = im2gray(M); // raw image in grayscale
subplot(311)
imshow(L,[]);
subplot(312)
imshow(G,[]);
subplot(313)
imshow(K,[]); // parametric map (bottom panel of Figure 3)

Figure 1. The raw image.

Figure 2. The reference patch, which belongs to the hand.

Figure 3. The raw image (top) and the maps showing the probability that each pixel belongs to the region of interest, in this case the skin. The center image is the result of non-parametric segmentation; the bottom image is the result of parametric segmentation.

Self-grade: 10/10 because I was able to finish the activity on time and my parametric and non-parametric segmentation results look reasonable.

Thank you, Lei, for helping me with histogramming.