
Image Processing: The Fundamentals. Maria Petrou and Panagiota Bosdogianni. Copyright © 1999 John Wiley & Sons Ltd

Print ISBN 0-471-99883-4 Electronic ISBN 0-470-84190-7

Chapter 6

Image Restoration

What is image restoration?

Image restoration is the improvement of an image using objective criteria and prior knowledge as to what the image should look like.

What is the difference between image enhancement and image restoration?

In image enhancement we try to improve the image using subjective criteria, while in image restoration we try to reverse specific damage suffered by the image, using objective criteria.

Why may an image require restoration?

An image may be degraded because the grey values of individual pixels may be altered, or it may be distorted because the position of individual pixels may be shifted away from their correct position. The second case is the subject of geometric restoration.

Geometric restoration is also called image registration because it helps in finding corresponding points between two images of the same region taken from different viewing angles. Image registration is very important in remote sensing, when aerial photographs have to be registered against a map, or two aerial photographs of the same region have to be registered with each other.

How may geometric distortion arise?

Geometric distortion may arise because of the lens or because of irregular movement of the sensor during image capture. In the former case the distortion looks regular, like the examples shown in Figure 6.1. The latter case arises, for example, when an aeroplane photographs the surface of the Earth with a line scan camera. As the aeroplane wobbles, the captured image may be inhomogeneously distorted, with pixels displaced by as much as 4-5 interpixel distances away from their true positions.


Figure 6.1: Examples of geometric distortions caused by the lens. (a) Original. (b) Pincushion distortion. (c) Barrel distortion.

Figure 6.2: The corrected image grid and the distorted image grid. The pixels correspond to the nodes of the grids. Pixel A of the corrected grid corresponds to inter-pixel position A' of the original image.

How can a geometrically distorted image be restored?

We start by creating an empty array of numbers the same size as the distorted image. This array will become the corrected image. Our purpose is to assign grey values to the elements of this array. This can be achieved by performing a two-stage operation: spatial transformation followed by grey level interpolation.

How do we perform the spatial transformation?

Suppose that the true position of a pixel is (x, y) and the distorted position is (x̂, ŷ) (see Figure 6.2). In general there will be a transformation which leads from one set of coordinates to the other, say:

x̂ = φ1(x, y),   ŷ = φ2(x, y)


First we must find to which coordinate position in the distorted image each pixel position of the corrected image corresponds. Here we usually make some assumptions. For example, we may say that the above transformation has the following form:

x̂ = c1 x + c2 y + c3 xy + c4
ŷ = c5 x + c6 y + c7 xy + c8

where c1, c2, ..., c8 are some parameters. Alternatively, we may assume a more general form, where squares of the coordinates x and y appear on the right hand sides of the above equations. The values of the parameters c1, ..., c8 can be determined from the transformation of known points called tie points. For example, in aerial photographs of the surface of the Earth, there are certain landmarks with exactly known positions. There are several such points scattered all over the surface of the Earth. We can use, for example, four such points to find the values of the above eight parameters, and assume that these transformation equations with the derived parameter values hold inside the whole quadrilateral region defined by these four tie points.
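As a concrete illustration (an addition to this transcript, not from the book), the eight parameters can be recovered from four tie points by solving two 4×4 linear systems; a minimal Python sketch, with numpy assumed and illustrative function names:

```python
import numpy as np

def fit_bilinear_transform(ref_pts, dist_pts):
    """Fit xh = c1*x + c2*y + c3*x*y + c4 and
           yh = c5*x + c6*y + c7*x*y + c8
    from four tie points given as (x, y) -> (xh, yh) pairs."""
    A = np.array([[x, y, x * y, 1.0] for x, y in ref_pts])
    cx = np.linalg.solve(A, np.array([p[0] for p in dist_pts]))  # c1..c4
    cy = np.linalg.solve(A, np.array([p[1] for p in dist_pts]))  # c5..c8
    return cx, cy

def map_point(cx, cy, x, y):
    """Position in the distorted image of reference pixel (x, y)."""
    b = np.array([x, y, x * y, 1.0])
    return float(b @ cx), float(b @ cy)
```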

Then, we apply the transformation to find the position A' of point A of the corrected image, in the distorted image.

Why is grey level interpolation needed?

It is likely that point A' will not have integer coordinates, even though the coordinates of point A in the (x, y) space are integer. This means that we do not actually know the grey level value at position A'. That is when the grey level interpolation process comes into play. The grey level value at position A' can be estimated from the values at its four nearest neighbouring pixels in the (x̂, ŷ) space, by some method, for example by bilinear interpolation. We assume that inside each little square the grey level value is a simple function of the positional coordinates:

g(x̂, ŷ) = αx̂ + βŷ + γx̂ŷ + δ

where α, β, γ and δ are some parameters. We apply this formula to the four corner pixels to derive the values of α, β, γ and δ, and then use these values to calculate g(x̂, ŷ) at the position of point A'.

Figure 6.3 below shows in magnification the neighbourhood of point A' in the distorted image with the four nearest pixels at the neighbouring positions with integer coordinates.

Simpler as well as more sophisticated methods of interpolation may be employed. For example, the simplest method that can be used is the nearest neighbour method, where A' gets the grey level value of the pixel which is nearest to it. A more sophisticated method is to fit a higher order surface through a larger patch of pixels around A' and find the value at A' from the equation of that surface.
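The bilinear interpolation step can likewise be sketched in code. Solving for α, β, γ, δ at the four corners is equivalent to weighting the corner values by the fractional distances, which is what this sketch uses (an addition to this transcript; numpy assumed):

```python
import numpy as np

def bilinear_grey_value(img, xh, yh):
    """Estimate the grey value at non-integer position (xh, yh) from the
    four surrounding pixels, i.e. evaluate g = a*x + b*y + c*x*y + d
    fitted to the four corner values."""
    x0, y0 = int(np.floor(xh)), int(np.floor(yh))  # top-left corner [xh], [yh]
    dx, dy = xh - x0, yh - y0                      # local coordinates in [0, 1)
    g00 = img[y0, x0]          # corner (0, 0)
    g10 = img[y0, x0 + 1]      # corner (1, 0)
    g01 = img[y0 + 1, x0]      # corner (0, 1)
    g11 = img[y0 + 1, x0 + 1]  # corner (1, 1)
    return (g00 * (1 - dx) * (1 - dy) + g10 * dx * (1 - dy)
            + g01 * (1 - dx) * dy + g11 * dx * dy)
```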


Figure 6.3: The non-integer position of point A' is surrounded by four pixels at integer positions, with known grey values ([x] means the integer part of x).

Example 6.1

In the figure below, the grid on the right is a geometrically distorted image and has to be registered with the reference image on the left, using points A, B, C and D as tie points. The entries in the image on the left indicate coordinate positions. Assuming that the distortion within the rectangle ABCD can be modelled by bilinear interpolation, and that the grey level value at an interpixel position can be modelled by bilinear interpolation too, find the grey level value at pixel position (2,2) in the reference image.

Suppose that the position (x̂, ŷ) of a pixel in the distorted image is given in terms of its position (x, y) in the reference image by:

x̂ = c1 x + c2 y + c3 xy + c4
ŷ = c5 x + c6 y + c7 xy + c8

We have the following sets of corresponding coordinates between the two grids, using the four tie points:


Pixel      Distorted (x̂, ŷ) coords.    Reference (x, y) coords.
Pixel A    (0, 0)                       (0, 0)
Pixel B    (3, 1)                       (3, 0)
Pixel C    (1, 3)                       (0, 3)
Pixel D    (4, 4)                       (3, 3)

We can use these to calculate the values of the parameters c1, ..., c8:

Pixel A:  0 = c4  and  0 = c8

Pixel B:  3 = 3c1 + c4 ⇒ c1 = 1,  and  1 = 3c5 + c8 ⇒ c5 = 1/3

Pixel C:  1 = 3c2 + c4 ⇒ c2 = 1/3,  and  3 = 3c6 + c8 ⇒ c6 = 1

Pixel D:  4 = 3 + 3 × (1/3) + 9c3 ⇒ c3 = 0,  and  4 = 3 × (1/3) + 3 + 9c7 ⇒ c7 = 0

The distorted coordinates, therefore, of any pixel within the square ABDC are given by:

x̂ = x + y/3,   ŷ = x/3 + y

For x = y = 2 we have x̂ = 2 + 2/3 and ŷ = 2/3 + 2. So, the coordinates of pixel (2,2) in the distorted image are (2⅔, 2⅔). This position is located between pixels in the distorted image, and actually between four pixels with known grey level values.
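As a quick numerical check of the mapping just derived (an illustration added to this transcript, not part of the book), solving the two 4×4 systems from the tie points reproduces c1 = 1, c2 = 1/3, c5 = 1/3, c6 = 1, and maps (2, 2) to (2⅔, 2⅔):

```python
import numpy as np

ref  = [(0, 0), (3, 0), (0, 3), (3, 3)]   # reference (x, y) of A, B, C, D
dist = [(0, 0), (3, 1), (1, 3), (4, 4)]   # distorted (xh, yh) of A, B, C, D

A  = np.array([[x, y, x * y, 1.0] for x, y in ref])
cx = np.linalg.solve(A, np.array([p[0] for p in dist]))  # [1, 1/3, 0, 0]
cy = np.linalg.solve(A, np.array([p[1] for p in dist]))  # [1/3, 1, 0, 0]

b = np.array([2.0, 2.0, 4.0, 1.0])        # basis (x, y, x*y, 1) at (2, 2)
print(b @ cx, b @ cy)                     # 2.666..., 2.666...
```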


We define a local coordinate system (x̃, ỹ), so that the pixel at the top left corner has coordinate position (0,0), the pixel at the top right corner has coordinates (1,0), the one at the bottom left (0,1), and the one at the bottom right (1,1). Assuming that the grey level value between four pixels can be computed from the grey level values at the four corner pixels with bilinear interpolation, we have:

g(x̃, ỹ) = αx̃ + βỹ + γx̃ỹ + δ

Applying this for the four neighbouring pixels we have:


We recognize now that equation (6.7) is the convolution between the undegraded image f(x, y) and the point spread function, and therefore we can write it in terms of their Fourier transforms:

G(u, v) = F(u, v)H(u, v)   (6.8)

where G, F and H are the Fourier transforms of functions g, f and h respectively.

What form does equation (6.5) take for the case of discrete images?

g(i, j) = Σ_{k=1}^{N} Σ_{l=1}^{N} f(k, l) h(i − k, j − l)   (6.9)

We have shown that equation (6.9) can be written in matrix form (see equation (1.25)):

g = Hf   (6.10)

What is the problem of image restoration?

The problem of image restoration is: given the degraded image g, recover the original undegraded image f .

How can the problem of image restoration be solved?

The problem of image restoration can be solved if we have prior knowledge of the point spread function of the degradation process, or of its Fourier transform (the transfer function).

How can we obtain information on the transfer function H(u, v) of the degradation process?

1. From knowledge of the physical process that caused the degradation. For example, if the degradation is due to diffraction, H(u, v) can be calculated. Similarly, if the degradation is due to atmospheric turbulence or due to motion, it can be modelled and H(u, v) calculated.

2. We may try to extract information on H(u, v) or h(α − x, β − y) from the image itself; i.e. from the effect the process has on the images of some known objects, ignoring the actual nature of the underlying physical process that takes place.


Example 6.2

When a certain static scene was being recorded, the camera underwent planar motion parallel to the image plane (x, y). This motion appeared as if the scene moved in the x, y directions by distances x0(t) and y0(t), which are functions of time t. The shutter of the camera remained open from t = 0 to t = T, where T is a constant. Write down the equation that expresses the intensity recorded at pixel position (x, y) in terms of the scene intensity function f(x, y).

The total exposure at any point of the recording medium (say the film) is obtained by integrating the instantaneous exposure over the time interval from 0 to T during which the shutter was open, so we shall have for the blurred image:

g(x, y) = ∫_0^T f(x − x0(t), y − y0(t)) dt   (6.11)

Example 6.3

In Example 6.2, derive the transfer function with which you can model the degradation suffered by the image due to the camera motion, assuming that the degradation is linear with a shift invariant point spread function.

Consider the Fourier transform of g(x, y) defined in Example 6.2:

G(u, v) = ∫_{-∞}^{+∞} ∫_{-∞}^{+∞} g(x, y) e^{−2πj(ux+vy)} dx dy   (6.12)

If we substitute (6.11) into (6.12) we have:

G(u, v) = ∫_{-∞}^{+∞} ∫_{-∞}^{+∞} [∫_0^T f(x − x0(t), y − y0(t)) dt] e^{−2πj(ux+vy)} dx dy   (6.13)

We can exchange the order of the integrals:

G(u, v) = ∫_0^T {∫_{-∞}^{+∞} ∫_{-∞}^{+∞} f(x − x0(t), y − y0(t)) e^{−2πj(ux+vy)} dx dy} dt   (6.14)

The expression inside the braces is the Fourier transform of the function f shifted by x0 and y0 in the x and y directions respectively.


We have shown (see equation (2.67)) that the Fourier transform of a shifted function and the Fourier transform of the unshifted function are related by:

(F.T. of shifted function) = (F.T. of unshifted function) × e^{−2πj(ux0+vy0)}

Therefore:

G(u, v) = ∫_0^T F(u, v) e^{−2πj(ux0(t)+vy0(t))} dt

where F(u, v) is the Fourier transform of the scene intensity function f(x, y), i.e. the unblurred image. F(u, v) is independent of time, so it can come out of the integral sign:

G(u, v) = F(u, v) ∫_0^T e^{−2πj(ux0(t)+vy0(t))} dt

Comparing this equation with (6.8) we conclude that:

H(u, v) = ∫_0^T e^{−2πj(ux0(t)+vy0(t))} dt   (6.15)

Example 6.4

Suppose that the motion in Example 6.2 was in the x direction only, with constant speed s, so that y0(t) = 0 and x0(t) = st. Calculate the transfer function of the motion blurring caused.

In the result of Example 6.3, equation (6.15), substitute y0(t) and x0(t) to obtain:

H(u, v) = ∫_0^T e^{−2πjust} dt = (1 − e^{−2πjusT})/(2πjus) = e^{−πjusT} sin(πusT)/(πus)   (6.16)
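As a quick check added to this transcript (not from the book), the closed form above can be compared with a direct numerical evaluation of the defining integral (6.15); the speed s and exposure T below are illustrative values:

```python
import numpy as np

s, T = 2.0, 0.5                       # illustrative speed and exposure time
u = np.linspace(-5, 5, 1001)

# closed form of equation (6.16), with the u = 0 limit handled separately
H = np.full(u.shape, T, dtype=complex)
nz = u != 0
H[nz] = (np.sin(np.pi * u[nz] * s * T) / (np.pi * u[nz] * s)
         * np.exp(-1j * np.pi * u[nz] * s * T))

# direct numerical evaluation of H(u, v) = integral_0^T exp(-2*pi*j*u*s*t) dt
t = np.linspace(0.0, T, 2001)
H_num = np.trapz(np.exp(-2j * np.pi * u[:, None] * s * t[None, :]), t, axis=1)

print(np.max(np.abs(H - H_num)))      # tiny: the two expressions agree
```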


Example 6.5 (B)

It was established that during the time interval T when the shutter was open, the camera moved in such a way that it appeared as if the objects in the scene moved along the positive y axis with constant acceleration 2α and initial velocity s0, starting from zero displacement. Derive the transfer function of the degradation process for this case.

In this case x0(t) = 0 and

d²y0/dt² = 2α  ⇒  dy0/dt = 2αt + b  ⇒  y0(t) = αt² + bt + c

where α is half the constant acceleration, and b and c are some integration constants. We have the following initial conditions:

t = 0: zero shifting, i.e. c = 0
t = 0: velocity of shifting = s0, so b = s0

Therefore: y0(t) = αt² + s0t

We substitute x0(t) and y0(t) in equation (6.15) for H(u, v):

H(u, v) = ∫_0^T e^{−2πjv(αt²+s0t)} dt = ∫_0^T cos[2πv(αt² + s0t)] dt − j ∫_0^T sin[2πv(αt² + s0t)] dt

We may use the following formulae:

∫ cos(ax² + bx + c) dx = √(π/(2a)) [cos((4ac − b²)/(4a)) C(√a x + b/(2√a)) − sin((4ac − b²)/(4a)) S(√a x + b/(2√a))]

∫ sin(ax² + bx + c) dx = √(π/(2a)) [sin((4ac − b²)/(4a)) C(√a x + b/(2√a)) + cos((4ac − b²)/(4a)) S(√a x + b/(2√a))]

where S(x) and C(x) are

S(x) ≡ √(2/π) ∫_0^x sin(t²) dt
C(x) ≡ √(2/π) ∫_0^x cos(t²) dt

and they are called Fresnel integrals.
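For numerical work, the Fresnel integrals are available in scipy, though with a different normalization (scipy integrates sin(πt²/2)); rescaling the argument recovers the book's definition. A sketch, added to this transcript:

```python
import numpy as np
from scipy.special import fresnel   # returns (S(z), C(z)) with integrand sin/cos(pi t^2 / 2)

def S_book(x):
    """S(x) = sqrt(2/pi) * integral_0^x sin(t^2) dt, via scipy's convention."""
    s, _ = fresnel(x * np.sqrt(2.0 / np.pi))
    return s

def C_book(x):
    _, c = fresnel(x * np.sqrt(2.0 / np.pi))
    return c

print(S_book(0.0), C_book(0.0))      # 0.0, 0.0
print(S_book(50.0), C_book(50.0))    # both oscillate towards 1/2
```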


lim_{x→∞} S(x) = 1/2
lim_{x→∞} C(x) = 1/2
lim_{x→0} S(x) = 0
lim_{x→0} C(x) = 0

Therefore, for s0 ≈ 0 and T → ∞, we have:

C(√(πv/(2α)) (2αT + s0)) → 1/2    C(√(πv/(2α)) s0) → 0
S(√(πv/(2α)) (2αT + s0)) → 1/2    S(√(πv/(2α)) s0) → 0
cos(πvs0²/(2α)) → 1    sin(πvs0²/(2α)) → 0

Therefore equation (6.17) becomes:

H(u, v) ≃ (1/(2√(vα))) × (1 − j)/2 = (1 − j)/(4√(vα))

Example 6.7

How can we infer the point spread function of the degradation process from an astronomical image?

We know that by definition the point spread function is the output of the imaging system when the input is a point source. In an astronomical image, a very distant star can be considered as a point source. By measuring then the brightness profile of a star, we immediately have the point spread function of the degradation process this image has been subjected to.
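In code, this amounts to cutting a window around an isolated star and normalizing it; a minimal sketch added to this transcript (the window half-width is an arbitrary choice):

```python
import numpy as np

def psf_from_star(img, half=7):
    """Estimate the PSF as the normalized neighbourhood of the brightest
    pixel, assumed to be an isolated, distant star on a dark background."""
    i, j = np.unravel_index(np.argmax(img), img.shape)
    patch = img[i - half:i + half + 1, j - half:j + half + 1].astype(float)
    patch -= patch.min()          # crude background removal
    return patch / patch.sum()    # unit volume, as a PSF should have
```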


Example 6.8

Suppose that we have an ideal bright straight line in the scene, parallel to the image axis x. Use this information to derive the point spread function of the process that degrades the captured image.

Mathematically the undegraded image of a bright line can be represented by:

f(x, y) = δ(y)

where we assume that the line actually coincides with the x axis. Then the image of this line will be:

h_l(x, y) = ∫_{-∞}^{+∞} ∫_{-∞}^{+∞} h(x − x′, y − y′) δ(y′) dy′ dx′ = ∫_{-∞}^{+∞} h(x − x′, y) dx′

We change variable x̃ ≡ x − x′, so that dx′ = −dx̃. The limits of x̃ are from +∞ to −∞. Then:

h_l(x, y) = −∫_{+∞}^{−∞} h(x̃, y) dx̃ = ∫_{-∞}^{+∞} h(x̃, y) dx̃   (6.18)

The right hand side of this equation does not depend on x, and therefore the left hand side should not depend on it either; i.e. the image of the line will be parallel to the x axis (or rather coincident with it) and the same all along it:

+" hl (X,Y) = hl (Y) = S_, h l (%Y)d3 -

3 is a dummy variable, independent of X

(6.19)

(6.20)

The point spread function has as Fourier transform the transfer function given by:

J -m

If we set U = 0 in this expression, we obtain

(6.21)


H(0, v) = ∫_{-∞}^{+∞} [∫_{-∞}^{+∞} h(x, y) dx] e^{−2πjvy} dy   (6.22)

where the quantity in brackets is h_l(y) from (6.19). By comparing equation (6.20) with (6.22) we get:

H(0, v) = H_l(v)   (6.23)

That is, the image of the ideal line gives us the profile of the transfer function along a single direction; i.e. the direction orthogonal to the line. This is understandable, as the cross-section of a line orthogonal to its length is no different from the cross-section of a point, and by definition the cross-section of a point is the point spread function of the blurring process. If now we have lots of ideal lines at various orientations in the image, we are going to have information on how the transfer function looks along the directions orthogonal to those lines in the frequency plane. By interpolation we can then calculate H(u, v) at any point in the frequency plane.
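Equation (6.23) translates directly into code: average the image of a horizontal line along its length to get h_l(y), then take its 1-D Fourier transform. A sketch added to this transcript, assuming a noise-free image containing a single horizontal line:

```python
import numpy as np

def transfer_profile_from_line(img_line):
    """From the image of an ideal line along the x axis, return H(0, v).
    Averaging along the line suppresses noise; see equations (6.19)-(6.23)."""
    h_l = img_line.mean(axis=1)        # line profile h_l(y)
    return np.fft.fft(h_l)             # profile of the transfer function
```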

Example 6.9

It is known that a certain scene contains a sharp edge. How can the image of the edge be used to infer some information concerning the point spread function of the imaging device?

Let us assume that the ideal edge can be represented by a step function along the x axis, defined by:

u(y) = 1 for y > 0,   u(y) = 0 for y ≤ 0

The image of this function will be:

h_e(x, y) = ∫_{-∞}^{+∞} ∫_{-∞}^{+∞} h(x − x′, y − y′) u(y′) dx′ dy′

We may define new variables x̃ ≡ x − x′ and ỹ ≡ y − y′. Obviously dx′ = −dx̃ and dy′ = −dỹ. The limits of both x̃ and ỹ are from +∞ to −∞. Then:

h_e(x, y) = ∫_{-∞}^{+∞} ∫_{-∞}^{+∞} h(x̃, ỹ) u(y − ỹ) dx̃ dỹ


Let us take the partial derivative of both sides of this equation with respect to y:

∂h_e(x, y)/∂y = ∫_{-∞}^{+∞} ∫_{-∞}^{+∞} h(x̃, ỹ) ∂u(y − ỹ)/∂y dx̃ dỹ

It is known that the derivative of a step function with respect to its argument is a delta function:

∂h_e(x, y)/∂y = ∫_{-∞}^{+∞} ∫_{-∞}^{+∞} h(x̃, ỹ) δ(y − ỹ) dx̃ dỹ = ∫_{-∞}^{+∞} h(x̃, y) dx̃   (6.24)

If we compare (6.24) with equation (6.18), we see that the derivative of the image of the edge is the image of a line parallel to the edge. Therefore, we can derive information concerning the point spread function of the imaging process by obtaining images of ideal step edges at various orientations. Each such image should be differentiated along a direction orthogonal to the direction of the edge. Each resultant derivative image should be treated as the image of an ideal line and used to yield the profile of the point spread function along the direction orthogonal to the line, as described in Example 6.8.
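A sketch of this chain for a horizontal edge, added to this transcript (differentiate across the edge, then proceed as for a line):

```python
import numpy as np

def transfer_profile_from_edge(img_edge):
    """From the image of an ideal horizontal step edge, return H(0, v):
    the derivative across the edge is the profile of an equivalent line
    (equation (6.24)), which is then treated as in Example 6.8."""
    esf = img_edge.mean(axis=1)        # edge profile, averaged along the edge
    lsf = np.diff(esf)                 # derivative = line spread function
    return np.fft.fft(lsf)
```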

Example 6.10

Use the methodology of Example 6.9 to derive the point spread function of an imaging device.

Using a ruler and black ink we create the chart shown in Figure 6.4.

Figure 6.4: A test chart for the derivation of the point spread function of an imaging device.


This chart can be used to measure the point spread function of our imaging system at orientations 0°, 45°, 90° and 135°. First the test chart is imaged using our imaging apparatus. Then the partial derivative of the image is computed by convolution at orientations 0°, 45°, 90° and 135°, using the Robinson operators. These operators are shown in Figure 6.5.

M0:            M1:            M2:            M3:
-1 -2 -1        0 -1 -2       -1  0  1       -2 -1  0
 0  0  0        1  0 -1       -2  0  2       -1  0  1
 1  2  1        2  1  0       -1  0  1        0  1  2

(a) M0  (b) M1  (c) M2  (d) M3

Figure 6.5: Filters used to compute the derivative in 0, 45, 90 and 135 degrees.
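With the masks written out, the four derivative images are plain convolutions; a sketch using scipy, added to this transcript (the mask signs follow the reconstruction above; only the direction of differentiation matters here):

```python
import numpy as np
from scipy.ndimage import convolve

M0 = np.array([[-1, -2, -1], [ 0, 0,  0], [ 1, 2, 1]])   # derivative across 0 deg
M1 = np.array([[ 0, -1, -2], [ 1, 0, -1], [ 2, 1, 0]])   # across 45 deg
M2 = np.array([[-1,  0,  1], [-2, 0,  2], [-1, 0, 1]])   # across 90 deg
M3 = np.array([[-2, -1,  0], [-1, 0,  1], [ 0, 1, 2]])   # across 135 deg

def directional_derivatives(img):
    """Convolve the test chart image with the four Robinson masks."""
    img = img.astype(float)
    return [convolve(img, M, mode='nearest') for M in (M0, M1, M2, M3)]
```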

Figure 6.6: The point spread function (PSF) of an imaging system, in two different scales. (a) Four profiles of the PSF. (b) Zooming into (a). (c) PSF profile for orientations 0° and 90°. (d) PSF profile for orientations 45° and 135°.


The profiles of the resultant images along several lines orthogonal to the original edges are computed and averaged to produce the four profiles for 0°, 45°, 90° and 135° plotted in Figure 6.6a. These are the profiles of the point spread function. In Figure 6.6b we zoom into the central part of the plot of Figure 6.6a. Two of the four profiles of the point spread function plotted there are clearly narrower than the other two. This is because they correspond to orientations 45° and 135°, and the distance of the pixels along these orientations is √2 times longer than the distance of pixels along 0° and 90°. Thus, the value of the point spread function that is plotted as being 1 pixel away from the peak is in reality approximately 1.4 pixels away. Indeed, if we take the ratio of the widths of the two pairs of profiles, we find the value 1.4.

In Figures 6.6c and 6.6d we plot separately the two pairs of profiles and see that the system has the same behaviour along the 45°, 135° and 0°, 90° orientations. Taking into account the √2 correction for 45° and 135°, we conclude that the point spread function of this imaging system is to a high degree circularly symmetric.

In a practical application these four profiles can be averaged to produce a single cross-section of a circularly symmetric point spread function. The Fourier transform of this 2D function is the system transfer function of the imaging device.

If we know the transfer function of the degradation process, isn't the solution to the problem of image restoration trivial?

If we know the transfer function of the degradation and calculate the Fourier transform of the degraded image, it appears from equation (6.8) that we can obtain the Fourier transform of the undegraded image:

F(u, v) = G(u, v)/H(u, v)   (6.25)

Then, by taking the inverse Fourier transform of F(u, v), we should be able to recover f(x, y), which is what we want. However, this straightforward approach produces unacceptably poor results.

What happens at points (u, v) where H(u, v) = 0?

H(u, v) probably becomes 0 at some points in the (u, v) plane, and this means that G(u, v) will also be zero at the same points, as seen from equation (6.8). The ratio G(u, v)/H(u, v) as it appears in (6.25) will be 0/0, i.e. indeterminate. All this means is that for the particular frequencies (u, v) the frequency content of the original image cannot be recovered. One can overcome this problem by simply omitting the corresponding points in the frequency plane, provided of course that they are countable.


Will the zeroes of H(u, v) and G(u, v) always coincide?

No; if there is the slightest amount of noise in equation (6.8), the zeroes of H(u, v) will not coincide with the zeroes of G(u, v).

How can we take noise into consideration when writing the linear degradation equation?

For additive noise, the complete form of equation (6.8) is:

G(u, v) = F(u, v)H(u, v) + N(u, v)   (6.26)

where N(u, v) is the Fourier transform of the noise field. F(u, v) is then given by:

F(u, v) = G(u, v)/H(u, v) − N(u, v)/H(u, v)   (6.27)

In places where H(u, v) is zero, or even just very small, the noise term may be enormously amplified.

How can we avoid the amplification of noise?

In many cases, |H(u, v)| drops rapidly away from the origin, while |N(u, v)| remains more or less constant. To avoid the amplification of noise when using equation (6.27), we do not use as filter the factor 1/H(u, v), but a windowed version of it, cutting it off at a frequency before |H(u, v)| becomes too small or before its first zero. In other words, we use:

@(U, v) = &!(U, v)G(u, v) - &(U, v)fi(u, v) (6.28)

where

(6.29)

where w0 is chosen so that all zeroes of H(u, v) are excluded. Of course, one may use other windowing functions instead of the above window with its rectangular profile, to make M(u, v) go smoothly to zero at w0.
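A sketch of this windowed inverse filter in the discrete domain, added to this transcript (H is assumed given in unshifted FFT layout, with w0 chosen below its first zero):

```python
import numpy as np

def windowed_inverse_filter(g, H, w0):
    """Restore g using equations (6.28)-(6.29): apply 1/H inside the disc
    of radius w0 in the frequency plane and zero outside it."""
    u = np.fft.fftfreq(g.shape[0])[:, None]
    v = np.fft.fftfreq(g.shape[1])[None, :]
    M = np.zeros_like(H, dtype=complex)
    inside = u**2 + v**2 <= w0**2
    M[inside] = 1.0 / H[inside]
    return np.real(np.fft.ifft2(M * np.fft.fft2(g)))
```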

Example 6.11

Demonstrate the application of inverse filtering in practice by restoring a motion blurred image.


Let us consider the image of Figure 6.7a. To imitate the way this image would look if it were blurred by motion, we take every 10 consecutive pixels along the x axis, find their average value, and assign it to the tenth pixel. This is what would have happened if, when the image was being recorded, the camera had moved 10 pixels to the left: the brightness of a line segment in the scene with length equivalent to 10 pixels would have been recorded by a single pixel. The result would look like Figure 6.7b. The blurred image g(i, j) in terms of the original image f(i, j) is given by the discrete version of equation (6.11):

g(i, j) = (1/i_T) Σ_{k=0}^{i_T−1} f(i − k, j),   i = 0, 1, ..., N − 1   (6.30)

where i_T is the total number of pixels whose brightness is recorded by the same cell of the camera, and N is the total number of pixels in a row of the image. In this example i_T = 10 and N = 128.
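The blurring of equation (6.30) is easy to reproduce; a sketch added to this transcript, treating rows cyclically, consistent with the DFT's periodic view of the image discussed below:

```python
import numpy as np

def motion_blur_rows(f, iT=10):
    """Equation (6.30): each pixel becomes the average of iT consecutive
    pixels along its row (cyclic boundaries)."""
    g = np.zeros(f.shape, dtype=float)
    for k in range(iT):
        g += np.roll(f.astype(float), k, axis=1)   # f(i - k, j) along the row
    return g / iT
```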

The transfer function of the degradation is given by the discrete version of the equation derived in Example 6.4. We shall derive it here. The discrete Fourier transform of g(i, j) is given by:

G(m, n) = Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} g(i, j) e^{−2πj(mi+nj)/N}

If we substitute g(i, j) from equation (6.30) we have:

G(m, n) = (1/i_T) Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} Σ_{k=0}^{i_T−1} f(i − k, j) e^{−2πj(mi+nj)/N}

We rearrange the order of the summations to obtain:

G(m, n) = (1/i_T) Σ_{k=0}^{i_T−1} [Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} f(i − k, j) e^{−2πj(mi+nj)/N}]

The quantity in brackets is the DFT of f(i, j) shifted by k along the i axis. By applying the property of Fourier transforms concerning shifted functions, we have:

G(m, n) = (1/i_T) Σ_{k=0}^{i_T−1} F(m, n) e^{−2πjmk/N}

where F(m, n) is the Fourier transform of the original image. As F(m, n) does not depend on k, it can be taken out of the summation:

G(m, n) = F(m, n) (1/i_T) Σ_{k=0}^{i_T−1} e^{−2πjmk/N}


We then identify the Fourier transform of the degradation process as:

H(m, n) = (1/i_T) Σ_{k=0}^{i_T−1} e^{−2πjmk/N}   (6.32)

The sum on the right hand side of this equation is a geometric progression, with ratio between successive terms:

q ≡ e^{−2πjm/N}

We apply the formula

Σ_{k=0}^{n−1} q^k = (q^n − 1)/(q − 1),   q ≠ 1

to obtain:

Σ_{k=0}^{i_T−1} e^{−2πjmk/N} = (e^{−2πjmi_T/N} − 1)/(e^{−2πjm/N} − 1) = e^{−πjm(i_T−1)/N} sin(πmi_T/N)/sin(πm/N)

Therefore:

H(m, n) = (1/i_T) e^{−πjm(i_T−1)/N} sin(πmi_T/N)/sin(πm/N)   (6.33)

Notice that for m = 0 we have q = 1 and we cannot apply the formula of the geometric progression. Instead, we have a sum of 1's in (6.32), which is equal to i_T, and so:

H(0, n) = 1   for 0 ≤ n ≤ N − 1

It is interesting to compare equation (6.33) with its continuous counterpart, equation (6.16). We can see that there is a fundamental difference between the two equations: in its denominator, equation (6.16) has the frequency u along the blurring axis appearing on its own, while in the denominator of equation (6.33) we have the sine of this frequency. This is because discrete images are treated by the discrete Fourier transform as periodic signals, repeated ad infinitum in all directions.
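Equation (6.33) can be verified against the DFT of the discrete averaging kernel; a sketch added to this transcript, with the values of this example, i_T = 10 and N = 128:

```python
import numpy as np

N, iT = 128, 10
m = np.arange(1, N)

H = np.empty(N, dtype=complex)
H[0] = 1.0                                  # the m = 0 case treated above
H[1:] = (np.exp(-1j * np.pi * m * (iT - 1) / N)
         * np.sin(np.pi * m * iT / N) / (iT * np.sin(np.pi * m / N)))

h = np.zeros(N)
h[:iT] = 1.0 / iT                           # kernel of equation (6.30)
print(np.max(np.abs(H - np.fft.fft(h))))    # ~1e-15: (6.33) matches the DFT
```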

We can analyse the Fourier transform of the blurred image into its real and imaginary parts:

G(m, n) ≡ G1(m, n) + jG2(m, n)