Wednesday, August 27, 2008

A15 - Color Camera Processing

August 28, 2008


We performed white balancing on images captured under different white balance settings. In Figure 1, row 1, we see the original images taken under the inside-cloudy, inside-incandescent, and auto white balance (AWB) settings, respectively. The colors differ from image to image, and the whites of the first two settings do not appear white. We then process the first two images using the Reference White Algorithm (results in the second column) and the Gray World Algorithm (results in the third column). In the Reference White Algorithm, each RGB channel is divided by the average of the corresponding channel over a known white patch; in the Gray World Algorithm, each channel is instead divided by its own average over the whole image, on the assumption that the scene averages to gray. After processing, the whites look whiter, with the processed inside-cloudy image closer to the AWB image. The Gray World results appear slightly darker than the Reference White results.

Figure 1. Myriad-colored objects processed using the Reference White Algorithm and the Gray World Algorithm. The white patches used for processing are in the lower-right corner of the collage.

Next, we try processing objects with different shades of the same hue. We do this for a high-contrast image (with both dark and bright background) and a low-contrast image (bright background only). This time the Gray World Algorithm results are brighter, and white appears whiter in the high-contrast image.

Figure 2. Monochromatic objects processed. 1st row: raw images; 2nd row: Reference White Algorithm results; 3rd row: Gray World Algorithm results. Rightmost image: the white patch.

Code for performing the Reference White and Gray World Algorithms

//1. Define the stack size and load the image
n = 100000000;
stacksize(n);
M = imread('C:\Documents and Settings\AP186user20\Desktop\act15\i - incandescent.jpg');

//2. Reference White Algorithm: average RGB of the known white patch
white = imread('C:\Documents and Settings\AP186user20\Desktop\act15\whitepatch_i-incandescent.jpg');
Rw = mean(white(:,:,1));
Gw = mean(white(:,:,2));
Bw = mean(white(:,:,3));

//3. Divide each channel by its reference white value, then rescale to [0,1]
K(:,:,1) = M(:,:,1)/Rw;
K(:,:,2) = M(:,:,2)/Gw;
K(:,:,3) = M(:,:,3)/Bw;
K = K/max(max(max(K)));
imwrite(K,'C:\Documents and Settings\AP186user20\Desktop\act15\ReferenceWhite_i-incandescent.jpg');
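The script above implements only the Reference White runs; for the Gray World runs, the divisors are replaced by the mean of each channel over the whole image. A minimal sketch of that variant (the output filename is only illustrative):

//Gray World Algorithm: divide each channel by its own mean over the entire image
Rg = mean(M(:,:,1));
Gg = mean(M(:,:,2));
Bg = mean(M(:,:,3));
K2(:,:,1) = M(:,:,1)/Rg;
K2(:,:,2) = M(:,:,2)/Gg;
K2(:,:,3) = M(:,:,3)/Bg;
K2 = K2/max(max(max(K2)));
imwrite(K2,'C:\Documents and Settings\AP186user20\Desktop\act15\GrayWorld_i-incandescent.jpg');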

SELF-GRADE:
10/10 because I was able to finish the activity on time

CREDITS:
Rica, Aiyin, Beth for helping me acquire images and explaining the algorithms

A14 - Stereometry

I have acquired the images. Still processing...


Above are two images of the same object taken from slightly different perspectives (the object was moved to the right by 15 cm).
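As the code below shows, the depth z of each matched point is recovered from the parallax between the two views: with focal length f and displacement b between the shots,
z = b*f/(x2 - x1)
where x1 and x2 are the point's x-coordinates in the first and second image.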

My code:

f = 6; //focal length
b = 150; //displacement between the two shots (15 cm, in mm)

//Load the two views and convert them to grayscale
M1 = imread('C:\Documents and Settings\AP186user20\Desktop\august12_2008\Cube1.jpg');
M2 = imread('C:\Documents and Settings\AP186user20\Desktop\august12_2008\Cube2.jpg');
M1 = im2gray(M1);
M2 = im2gray(M2);

//Pick the same 10 feature points in each image
imshow(M1,[]);
x1 = locate(10,1);
imshow(M2,[]);
x2 = locate(10,1);

//Depth of each point from its parallax along x (element-wise division)
z = b*f./(x2(1,:) - x1(1,:));

//Regridding of the scattered depths - still unfinished (splin2d expects gridded data; see the sketch below)
nx = 40;
ny = 40;
x = linspace(min(x1(1,:)), max(x1(1,:)), nx);
y = linspace(min(x1(2,:)), max(x1(2,:)), ny);
[XP,YP] = ndgrid(x,y);
ZP = interp2d(XP, YP, x1(1,:), x1(2,:), splin2d(x1(1,:), x1(2,:), z, "natural"));
//plot(x2(1,:),x2(:,1), z);
mesh(ZP)
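The splin2d/interp2d pair above expects depth samples on a regular grid, which is not what locate() returns. Assuming Scilab's scattered-data interpolation functions cshep2d and eval_cshep2d are available in this version, a minimal sketch of how the surface could be finished, reusing the XP, YP grid above, might be:

//Sketch: cubic Shepard interpolation of the scattered (x, y, z) samples onto the XP, YP grid
xyz = [x1(1,:)' x1(2,:)' z']; //one row per located point
coefs = cshep2d(xyz);
ZP = eval_cshep2d(XP, YP, coefs);
mesh(ZP)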

SELF-GRADE: 6/10, I have not finished it yet

Thanks to Beth for pictures, Rica and Aiyin for storage

Monday, August 18, 2008

A12 - Geometric Distortion Correction

August 19, 2008

We have an image of a Capiz window, an analog to a grid, whose left side is slightly tilted. We aim to correct this distortion by creating an ideal grid, then mapping into it the grayscale values of the corresponding pixels in the distorted image.


First, we choose a noticeably distorted area in the image. This area is covered by a rectangle in Figure 1. We designate the rectangle as the ideal polygon, i.e., how the set of small squares in the window should appear. The ideal polygon vertices are labeled [xi, yi], while the vertices of the distorted area are labeled [x^i, y^i].

The transformation from [x, y] to [x^, y^] can be expressed by two functions r and s:
x^ = r(x,y)
y^ = s(x,y) (1)
If the polygon is not too large, we can assume that the transformation in each direction is bilinear, so we take
x^ = c1x + c2y + c3xy + c4
y^ = c5x + c6y + c7xy + c8 (2)
From equation (2) and the four vertex correspondences, we obtain the transformation vectors C14 = [c1, c2, c3, c4]t and C58 = [c5, c6, c7, c8]t, where t denotes the transpose, via
C14 = inv(T) X
C58 = inv(T) Y
where inv(T) is the inverse of T, T is a 4 by 4 matrix whose row i is [xi, yi, xiyi, 1], X = [x^1, x^2, x^3, x^4]t and Y = [y^1, y^2, y^3, y^4]t.

For every pixel of the ideal rectangle, we find the corresponding location in the distorted image. If the corresponding coordinates are integer-valued, we simply copy that pixel's gray value into the ideal space. Otherwise, we employ bilinear interpolation: if x^ or y^ is not an integer, the value at (x^, y^) is taken as
v(x^,y^) = ax^ + by^ + cx^y^ + d (3)
To find the unknowns a, b, c and d, we use the four nearest integer-coordinate neighbors of (x^,y^).
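Explicitly, if (x-, y-) and (x+, y+) are the integer coordinates bracketing (x^, y^), evaluating equation (3) at the four corner neighbors gives the 4 by 4 linear system
[x+ y+ x+y+ 1; x+ y- x+y- 1; x- y- x-y- 1; x- y+ x-y+ 1][a; b; c; d] = [v(x+,y+); v(x+,y-); v(x-,y-); v(x-,y+)]
which is solved by inverting the 4 by 4 matrix; this is what the nearestneighbors matrix does in the code below.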

We see that the distortion has been largely corrected. Without bilinear interpolation, the sides of the squares are still crooked; with bilinear interpolation, the sides come out straight, though the resulting image has less contrast.

//1 Call the image
M = imread('D:\ap187\distorted_capiz_window.jpg');
M = im2gray(M);
imshow(M,[]);
[m,k] = size(M);

//2 Select large subgrid vertices (the locate() picks are hard-coded below so the script runs without interaction)
//g = locate(4,1);
g = [46.723164 238.70056;
99.265537 239.83051;
100.96045 41.525424;
50.112994 40.960452 ];
//Convert the located (x,y) pairs to (row, column) coordinates and flip the vertical axis
gprime(:,1) = g(:,2);
gprime(:,2) = g(:,1);
gprime(:,1) = abs(gprime(:,1) - m - 1);
//3 Generate ideal grid
x_1 = gprime(1,1);
x_2 = gprime(1,1);
x_3 = gprime(3,1);
x_4 = x_3;
y_1 = gprime(1,2);
y_2 = gprime(2,2);
y_3 = gprime(2,2);
y_4 = gprime(1,2);

//4 Obtain the transformation vectors C14 and C58
T = [x_1 y_1 x_1*y_1 1;
x_2 y_2 x_2*y_2 1;
x_3 y_3 x_3*y_3 1;
x_4 y_4 x_4*y_4 1];

X = gprime(:,1);
Y = gprime(:,2);

C14 = (inv(T))*X;
C58 = (inv(T))*Y;

//5 Map the distorted image into the ideal space (w/ bilinear interpolation)
v = zeros(m,k);

for x = 5:m-5;
for y = 5:k-5;
t = [x y x*y 1];
xhat = t*C14;
yhat = t*C58;
xhat_integer = int(xhat);
yhat_integer = int(yhat);

if xhat_integer == xhat & yhat_integer == yhat then
if xhat_integer == 0 then
xhat_integer = 1;
end
if yhat_integer == 0 then
yhat_integer = 1;
end

v(x,y) = M(xhat_integer, yhat_integer);

else
xplus = xhat_integer + 1;
xminus = xhat_integer;
yplus = yhat_integer + 1;
yminus = yhat_integer;

nearestneighbors = [xplus yplus xplus*yplus 1;
xplus yminus xplus*yminus 1;
xminus yminus xminus*yminus 1;
xminus yplus xminus*yplus 1];

vhat = [M(xplus,yplus); M(xplus,yminus); M(xminus,yminus); M(xminus,yplus)];
a_b_c_d = inv(nearestneighbors)*vhat;
nu = [x y y*x 1];
v(x,y) = nu*a_b_c_d;
end
end
end

//6 Mapping without bilinear interpolation
v2 = zeros(m,k);

for x = 5:m-5;
for y = 5:k-5;
t = [x y x*y 1];
xhat = t*C14;
yhat = t*C58;
xhat_integer = int(xhat);
yhat_integer = int(yhat);
v2(x,y) = M(xhat_integer, yhat_integer);
end
end

//7 Showing the images, M - original, v - corrected with interpolation, v2 - corrected without interpolation
subplot(221)
imshow(M,[]);
subplot(222)
imshow(v2,[]);
subplot(223)
imshow(v,[]);

I give myself 7/10 because it took me this long to solve the problem.

Thank you to Mark Leo for some explanations.





Wednesday, August 6, 2008

A13 - Photometric Stereo

August 7, 2008
We have 4 synthetic images, namely I1, I2, I3 and I4, of a sphere illuminated from 4 different directions.
The illumination matrix is a composite of 4 vectors given by
V1 = [0.085832 0.17365 0.98106]
V2 = [0.085832 -0.17365 0.98106]
V3 = [0.17365 0 0.98481]
V4 = [0.16318 -0.34202 0.92542].



The illumination is from an infinitely far away source. The relationship between the reflected intensity I, the illumination vector L and the object surface normal N is given by
I = L*N (1).

Since we know I and the illumination matrix V (whose rows are V1 to V4), we can obtain the surface normals via
g = (inv(V'V))V' I (2).
We normalize g by dividing by its magnitude to obtain the unit surface normal
n^ = g/|g| (3).
The components of the normal vector can be converted into the partial derivatives of the surface using the relations
df/dx = -nx/nz and df/dy = -ny/nz.
The derivatives are then integrated via line integration to obtain the surface profile of the hemisphere (see Figure 2). The drawback of using line integration is that it produces small spikes in the reconstruction.
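In the code below, the line integration is carried out as cumulative sums of the derivative images: with p = df/dx and q = df/dy reshaped back into images, the surface is approximated by
f(x,y) ≈ sum of p(x',y) over x' ≤ x + sum of q(x,y') over y' ≤ y,
which is what the two cumsum calls compute (the x-derivative summed along dimension 2, the y-derivative along dimension 1) before the results are added.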


code:
loadmatfile('C:\Documents and Settings\AP186user20\Desktop\august7_2008\photos.mat',['I1','I2','I3','I4']);
subplot(221)
imshow(I1,[]);
subplot(222)
imshow(I2,[]);
subplot(223)
imshow(I3,[]);
subplot(224)
imshow(I4,[]);
[m,k] = size(I1)
I = [I1(:)'; I2(:)'; I3(:)';I4(:)'];
V = [0.085832 0.17365 0.98106;
0.085832 -0.17365 0.98106
0.17365 0 0.98481
0.16318 -0.34202 0.92542];
//Moore-Penrose pseudoinverse of the illumination matrix
MoorePenrose = (inv(V'*V))*V';
g = MoorePenrose*I;
[M,K] = size(g);
//Normalize each column of g to obtain unit normals (the small offset avoids division by zero)
for i = 1:M;
for j = 1:K;
absolutevalue = sqrt( g(1,j)*g(1,j) + g(2,j)*g(2,j) + g(3,j)*g(3,j) );
n(i,j) = g(i,j)./(absolutevalue + 1.0e-015);
end
end
//derivative along the x-axis
p = -n(1,:)./(n(3,:)+1.0e-20);
//derivative along the y-axis
q = -n(2,:)./(n(3,:)+1.0e-20);

Imagexderivative = matrix(p,m,k);
Imageyderivative = matrix(q,m,k);
x=cumsum(Imagexderivative,2);
y= cumsum(Imageyderivative,1);
Image = x+y;
mesh(Image);
Reference
[1] Dr. Soriano's lecture notes

Self-grade: 10/10