Lecture 11: Neighbourhood Operations (1) - TK3813 - Dr Masri Ayob


Page 1: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

1

Lecture 11

Neighbourhood Operations (1)

TK3813

DR MASRI AYOB

Page 2: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

2

Outline

Convolution and Correlation
Linear Filtering:
• Low pass filtering
  – Mean filtering
  – Gaussian filtering
• High pass filtering
• High boost filtering

Page 3: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

3

Neighborhood Operations

Neighbourhood operations modify pixel values based on the values of nearby pixels. Convolution and Correlation are fundamental neighborhood operations.

Convolution is used to filter images for specific reasons – to remove noise, to remove motion blur, to enhance image features, etc…

Correlation is used to determine the similarity of regions of an image to other regions of interest. Used in pattern recognition and image registration.

Page 4: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

4

Principal Objective of Enhancement

Process an image so that the result is more suitable than the original image for a specific application. Suitability depends on the application: a method which is quite useful for enhancing one image may not be the best approach for enhancing another.

Page 5: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

5

Two Domains

Spatial domain (image plane): techniques based on direct manipulation of the pixels in an image.

Frequency domain: techniques based on modifying the Fourier transform of an image.

There are some enhancement techniques based on various combinations of methods from these two categories.

Page 6: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

6

Good Images

For human vision: the visual evaluation of image quality is a highly subjective process, so it is hard to standardise the definition of a good image.

For machine perception: the evaluation task is easier; a good image is one which gives the best machine recognition results.

A certain amount of trial and error usually is required before a particular image enhancement approach is selected.

Page 7: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

7

Spatial Domain

Procedures that operate directly on pixels.

g(x,y) = T[f(x,y)]

where
  f(x,y) is the input image
  g(x,y) is the processed image
  T is an operator on f defined over some neighborhood of (x,y)

Page 8: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

8

Mask/Filter

The neighbourhood of a point (x,y) can be defined using a square/rectangular (most commonly used) or circular subimage area centred at (x,y). The centre of the subimage is moved from pixel to pixel, starting at the top left corner.

Page 9: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

9

Mask Processing or Filter

The neighborhood is bigger than a single 1x1 pixel. A function of the values of f in a predefined neighborhood of (x,y) determines the value of g at (x,y). The values of the mask coefficients determine the nature of the process. Used in techniques such as image sharpening and image smoothing.

Page 10: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

10

Terminology

Neighborhood operations work with the values of the image pixels and the corresponding values of a subimage. The subimage is called a:

filter, mask, kernel, template, or window.

The values in a filter subimage are referred to as coefficients, rather than pixels. Our focus will be on masks of odd sizes, e.g. 3x3, 5x5, ...

Page 11: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

11

Spatial Filtering Process

Simply move the filter mask from point to point in the image. At each point (x,y), the response of the filter is calculated using a predefined relationship:

R = w_1·z_1 + w_2·z_2 + ... + w_mn·z_mn

where the w_i are the mask coefficients and the z_i are the corresponding image pixel values under the mask.
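A minimal sketch of this response computation, assuming the mask coefficients and the pixel values under the mask have already been gathered into two arrays of equal length (the names are illustrative, not from the lecture code):

// Minimal sketch: the filter response R = w1*z1 + w2*z2 + ... + wmn*zmn,
// where w holds the mask coefficients and z the pixel values under the mask.
static float response(float[] w, int[] z) {
    float r = 0.0f;
    for (int i = 0; i < w.length; ++i)
        r += w[i] * z[i];
    return r;
}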

Page 12: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

12

Smoothing Spatial Filters

Used for blurring and for noise reduction. Blurring is used in preprocessing steps, such as:

removal of small details from an image prior to object extraction, and bridging of small gaps in lines or curves.

Noise reduction can be accomplished by blurring with a linear filter, and also by a nonlinear filter.

Page 13: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

13

Convolution

The value of a pixel is determined by computing a weighted sum of nearby pixels.

Image neighbourhood (pixel values):

82  78  88
65  56  76
60  53  72

Given a "kernel" of weights to be centered on the pixel of interest (the kernel indices run from -1 to +1 in each direction):

 1   0  -1
 2   0  -2
 1   0  -1

Compute the new value of the center pixel by "overlaying" the kernel and computing the weighted sum:

g(x,y) = Σ_{k=-1}^{+1} Σ_{j=-1}^{+1} h(j,k) f(x-j, y-k)

Note that this form only applies to kernels of dimension 3x3. Since the operations revolve around a particular pixel, neighbourhoods are always of odd dimensions (3x3, 5x5, 7x7, ...). The neighbourhoods are nearly always square too.

Page 14: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

14

Convolution

g(x,y) = h(-1,-1) * f(x+1,y+1) + h(0,-1) * f(x,y+1) + h(1,-1) * f(x-1,y+1)
       + h(-1,0)  * f(x+1,y)   + h(0,0)  * f(x,y)   + h(1,0)  * f(x-1,y)
       + h(-1,1)  * f(x+1,y-1) + h(0,1)  * f(x,y-1) + h(1,1)  * f(x-1,y-1)

g(x,y) = -1*82 + 0*78 + 1*88 + -2*65 + 0*56 + 2*76 + -1*60 + 0*53 + 1*72 = 40
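A minimal Java sketch of this single-pixel weighted sum (the array names and storage convention are illustrative, not from the lecture code):

// Minimal sketch: the convolution response at one pixel (x, y) of a 3x3 kernel.
// 'f' is the image as a 2-D int array [row][col]; 'h' holds the kernel weights
// stored row by row as printed above.
static int convolveAt(int[][] f, float[][] h, int x, int y) {
    float sum = 0.0f;
    for (int k = -1; k <= 1; ++k)
        for (int j = -1; j <= 1; ++j)
            sum += h[k + 1][j + 1] * f[y - k][x - j];   // kernel weight * f(x-j, y-k)
    return Math.round(sum);
}

With the kernel and the 3x3 image neighbourhood above stored row by row, and (x, y) = (1, 1) as the centre, this returns 40, matching the expansion.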

[Figure: the 3x3 kernel (axes j, k) overlaid on the image neighbourhood at location (x,y) in the image (axes x, y), illustrating g(x,y) = Σ_{k=-1}^{+1} Σ_{j=-1}^{+1} h(j,k) f(x-j, y-k).]

Page 15: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

15

Convolution

g(x,y) = Σ_{k=-1}^{+1} Σ_{j=-1}^{+1} h(j,k) f(x-j, y-k)

Page 16: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

16

Convolution

The summation is expressed as:

g(x,y) = Σ_{k=-1}^{+1} Σ_{j=-1}^{+1} h(j,k) f(x-j, y-k)

The book is “technically” correct in using this formulation but most implementations and many books directly align the kernel with the image and compute the weighted sum.

For a kernel of width M and height N the more general formula becomes:

g(x,y) = Σ_{k=-N/2}^{N/2} Σ_{j=-M/2}^{M/2} h(j,k) f(x-j, y-k)        (7.5)
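A minimal Java sketch of equation (7.5) for a single-channel image stored row-major in an int array (the names are illustrative; pixels where the kernel falls off the image are simply skipped, one of the border strategies discussed on the following slides):

// Minimal sketch of equation (7.5): brute-force convolution of a w-by-h image
// 'f' (row-major) with an M-by-N kernel 'hk' (row-major). Pixels where the
// kernel would fall off the image are skipped, so the border outputs stay 0.
static float[] convolve(int[] f, int w, int h, float[] hk, int M, int N) {
    int m = M / 2, n = N / 2;
    float[] g = new float[w * h];
    for (int y = n; y < h - n; ++y)
        for (int x = m; x < w - m; ++x) {
            float sum = 0.0f;
            for (int k = -n; k <= n; ++k)
                for (int j = -m; j <= m; ++j)
                    sum += hk[(k + n) * M + (j + m)] * f[(y - k) * w + (x - j)];
            g[y * w + x] = sum;    // g(x,y) = sum over k, j of h(j,k) f(x-j, y-k)
        }
    return g;
}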

Page 17: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

17

Convolution

Convolution is used so frequently in certain domains that it has been given the following shorthand notation:

g(x,y) = (h * f)(x,y)

The above formula implies convolution at a single pixel. To indicate convolution of an entire image with a kernel we write:

g = h * f

Page 18: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

18

Convolution (Implementation Details)

Consider how the following issues affect an implementation:

• The weights may be real-valued numbers.
• The range of values of the output may be significantly changed by the weights.
• What to do about corners and edges?

Page 19: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

19

Convolution (Implementation Details)

The weights may be real-valued numbers:
• The resulting pixel values must either be quantized or maintained as real-valued pixel intensities.

The range of values of the output may be significantly changed by the weights:
• Must increase the pixel resolution (bit depth) or re-scale the computed image.

Page 20: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

20

Convolution (Implementation Details)

What about borders?

Ignore pixels where the kernel "falls off" the image:
• Output pixels may be set to zero
• Input pixels may be copied to the output
• Truncate the output image

Truncate the kernel:
• The kernel is made smaller for processing borders and edges

Copy the last line

Use "circular" or "reflected" indexing

The most common way is just to ignore them and have an output image slightly smaller than the input. Other techniques include truncating the kernel to process the edge pixels correctly, but this can be complex to implement.

Page 21: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

21

Convolution (Implementation Details)

Reflected indexing: pretends the image is "tiled" at each border by a mirror. Imagine a mirror placed vertically at each border that "reflects" the image back upon itself.

Reflection of the x component (let M be the width of the image):

if x < 0 then
    x = -x - 1
else if x >= M then
    x = 2M - x - 1
end

Page 22: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

22

Convolution

Page 23: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

23

Convolution

Circular indexing: pretends the image is infinitely repeated at each border. There is sometimes a good theoretical reason for doing this.

Circular indexing of the x component (let M be the width of the image):

if x < 0 then
    x = x + M
else if x >= M then
    x = x - M
end

Page 24: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

24

Convolution Computation

Page 25: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

25

Convolution Coding

Java has built-in classes to support convolution.

The code is typically (at least on Windows boxes) implemented in native code (usually C).

Page 26: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

26

Convolution Coding

The Kernel class is used specifically for convolution operations.

The ConvolveOp class implements BufferedImageOp and filters images by performing a convolution on an image.

int width = 3, height = 3;
float[] coeffs = new float[width*height];
for (int i = 0; i < coeffs.length; i++) {
    coeffs[i] = 1.0f / coeffs.length;
}
Kernel kernel = new Kernel(width, height, coeffs);

Page 27: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

27

Convolution Coding

EDGE_ZERO_FILL: the default; places zeros on the border.
EDGE_NO_OP: copies border pixels from the input directly to the output.

ConvolveOp op = new ConvolveOp(kernel);
BufferedImage image = op.filter(inputImage, null);

ConvolveOp op1 = new ConvolveOp(kernel, ConvolveOp.EDGE_ZERO_FILL, null);
ConvolveOp op2 = new ConvolveOp(kernel, ConvolveOp.EDGE_NO_OP, null);

The “filter” method returns a gray-scale image if the input is gray-scale

The ConvolveOp class places “zeros” at borders by default. One other option is available by using a different constructor
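Putting these pieces together, here is a minimal, self-contained sketch that blurs an image with a 3x3 mean kernel using the built-in classes (the class name and file names are placeholders; the input is assumed to be a greyscale or RGB image, not an indexed one):

import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;
import java.io.File;
import javax.imageio.ImageIO;

public class MeanFilterDemo {
  public static void main(String[] args) throws Exception {
    BufferedImage input = ImageIO.read(new File("input.png"));   // placeholder file name

    // 3x3 mean kernel: nine equal coefficients summing to 1
    float[] coeffs = new float[9];
    for (int i = 0; i < coeffs.length; i++)
      coeffs[i] = 1.0f / coeffs.length;
    Kernel kernel = new Kernel(3, 3, coeffs);

    // EDGE_NO_OP copies border pixels from the input directly to the output
    ConvolveOp op = new ConvolveOp(kernel, ConvolveOp.EDGE_NO_OP, null);
    BufferedImage output = op.filter(input, null);

    ImageIO.write(output, "png", new File("output.png"));        // placeholder file name
  }
}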

Page 28: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

28

Convolution Code

The built-in convolution code has some limitations:

Only two ways of dealing with borders:
• EDGE_ZERO_FILL
• EDGE_NO_OP
• Would like to have:
  – COPY_BORDER_PIXELS
  – REFLECTED_INDEXING
  – CIRCULAR_INDEXING

Always truncates (without rescaling) all pixel values.
• Would like to rescale in various ways.

Doesn’t take advantage of separable kernels

Maybe we can write more flexible code!

Page 29: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

29

StandardGreyOp Review

public class StandardGreyOp implements BufferedImageOp {

  public BufferedImage filter(BufferedImage src, BufferedImage dest) {
    checkImage(src);
    if (dest == null)
      dest = createCompatibleDestImage(src, null);
    WritableRaster raster = dest.getRaster();
    src.copyData(raster);
    return dest;
  }

  public BufferedImage createCompatibleDestImage(BufferedImage src,
                                                 ColorModel destModel) {
    if (destModel == null)
      destModel = src.getColorModel();
    int width = src.getWidth();
    int height = src.getHeight();
    BufferedImage image = new BufferedImage(destModel,
        destModel.createCompatibleWritableRaster(width, height),
        destModel.isAlphaPremultiplied(), null);
    return image;
  }

  // other methods here
}


Page 30: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

30

Convolution Code

public class NeighbourhoodOp extends StandardGreyOp {

  public static final int NO_BORDER_OP = 1;
  public static final int COPY_BORDER_PIXELS = 2;
  public static final int REFLECTED_INDEXING = 3;
  public static final int CIRCULAR_INDEXING = 4;

  protected int width, height, size;
  protected int borderStrategy;

  public NeighbourhoodOp(int w, int h, int strategy) {
    if (w < 1 || h < 1 || w%2 == 0 || h%2 == 0)
      throw new ImagingOpException("invalid neighbourhood dimensions");
    width = w;
    height = h;
    size = w*h;
    borderStrategy = strategy;
  }

  // reflected indexing: mirror coordinates that fall outside [0, n)
  public static final int refIndex(int i, int n) {
    if (i < 0) return -i-1;
    else if (i >= n) return 2*n-i-1;
    else return i;
  }

  // circular indexing: wrap coordinates that fall outside [0, n)
  public static final int circIndex(int i, int n) {
    if (i < 0) return i+n;
    else if (i >= n) return i-n;
    else return i;
  }

}

Page 31: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

31

Convolution Code

public class NeighbourhoodOp extends StandardGreyOp {

  protected void copyBorders(Raster src, WritableRaster dest) {
    int w = src.getWidth();
    int h = src.getHeight();
    int m = width/2;
    int n = height/2;

    for (int x = 0; x < w; ++x) {      // copy top and bottom
      for (int y = 0; y < n; ++y)
        dest.setSample(x, y, 0, src.getSample(x, y, 0));
      for (int y = h-n; y < h; ++y)
        dest.setSample(x, y, 0, src.getSample(x, y, 0));
    }

    for (int y = 0; y < h; ++y) {      // copy left and right
      for (int x = 0; x < m; ++x)
        dest.setSample(x, y, 0, src.getSample(x, y, 0));
      for (int x = w-m; x < w; ++x)
        dest.setSample(x, y, 0, src.getSample(x, y, 0));
    }
  }

}

Page 32: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

32

Convolution Code

public class ConvolutionOp extends NeighbourhoodOp {

  public static final int SINGLE_PASS = 1;
  public static final int SEPARABLE = 2;

  public static final int NO_RESCALING = 1;
  /** Indicates that maximum output value should be scaled to 255. */
  public static final int RESCALE_MAX_ONLY = 2;
  /** Indicates that range should be scaled to 0-255. */
  public static final int RESCALE_MIN_AND_MAX = 3;

  private Kernel kernel;
  /** Calculation method (single pass or separable). */
  private int calculation;
  private int rescaleStrategy;

  public ConvolutionOp(Kernel kernel) {
    this(kernel, NO_BORDER_OP, SINGLE_PASS, NO_RESCALING);
  }

  public ConvolutionOp(Kernel kernel, int border, int calc, int rescale) {
    super(kernel.getWidth(), kernel.getHeight(), border);
    this.kernel = kernel;
    calculation = calc;
    rescaleStrategy = rescale;
  }

  // ...other stuff here...
}

Page 33: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

33

Convolution Code

public class ConvolutionOp extends NeighbourhoodOp {

  public float[] convolve(BufferedImage image) {
    int w = image.getWidth();
    int h = image.getHeight();
    Raster raster = image.getRaster();
    float[] result = new float[w*h];
    float[] coeff = kernel.getKernelData(null);
    int m = width/2, n = height/2;
    float sum;
    int i, j, k, x, y;

    switch (borderStrategy) {

      case REFLECTED_INDEXING:
        for (y = 0; y < h; ++y)
          for (x = 0; x < w; ++x) {
            for (sum = 0.0f, i = 0, k = -n; k <= n; ++k)
              for (j = -m; j <= m; ++j, ++i)
                sum += coeff[i] * raster.getSample(refIndex(x-j, w), refIndex(y-k, h), 0);
            result[y*w+x] = sum;
          }
        break;

      case CIRCULAR_INDEXING:
        for (y = 0; y < h; ++y)
          for (x = 0; x < w; ++x) {
            for (sum = 0.0f, i = 0, k = -n; k <= n; ++k)
              for (j = -m; j <= m; ++j, ++i)
                sum += coeff[i] * raster.getSample(circIndex(x-j, w), circIndex(y-k, h), 0);
            result[y*w+x] = sum;
          }
        break;

      // ...rest of the code here...
    }
  }
}

Page 34: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

34

Convolution Code

public class ConvolutionOp extends NeighbourhoodOp {

  public BufferedImage filter(BufferedImage src, BufferedImage dest) {
    checkImage(src);
    if (dest == null)
      dest = createCompatibleDestImage(src, null);

    float[] rawData;
    if (calculation == SEPARABLE) {
      rawData = separableConvolve(src);
    } else {
      rawData = convolve(src);
    }

    DataBufferByte buffer = (DataBufferByte) dest.getRaster().getDataBuffer();
    convertToBytes(rawData, buffer.getData());
    return dest;
  }

  protected void convertToBytes(float[] in, byte[] out) {
    if (rescaleStrategy == NO_RESCALING) {
      // round each value and clamp it to the 0-255 range
      for (int i = 0; i < in.length; ++i) {
        int value = Math.round(in[i]);
        if (value < 0) out[i] = (byte) 0;
        else if (value > 255) out[i] = (byte) 255;
        else out[i] = (byte) value;
      }
    } else {
      // other cases go here
    }
  }
}

Page 35: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

35

Convolution

Page 36: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

36

Convolution

Page 37: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

37

Convolution

Page 38: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

38

Frequency

The frequency of a sound wave or audio signal refers to the rate at which the signal changes with time. Frequency in an image refers to changes occurring in space.

Spatial frequency is a measure of how rapidly brightness or colour varies as we traverse an image. Images in which grey level varies slowly and smoothly are characterised solely by components with low spatial frequency. Images containing sudden grey level transitions, fine detail or strong texture will also contain components with high spatial frequencies.

Page 39: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

39

Page 40: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

40

Filtering

Convolution has different effects depending upon the values of the kernel. Convolution is an operation taken between two arrays: (1) the image data and (2) the kernel. It is the primary technique for spatial filtering. Convolution is a linear operation that can, in principle, be undone. Filtering is a way of tuning image frequencies, much like a graphic equalizer.

Linear filtering

Page 41: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

41

Filtering

Page 42: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

42

Filtering

Page 43: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

43

Filtering

Low pass filter:
• Allows low spatial frequencies to pass unchanged.
• Suppresses high frequencies.
• Smooths or blurs the image.
• Tends to reduce noise but also obscures fine detail.

High pass filter:
• Allows high spatial frequencies to pass unchanged.
• Suppresses low frequencies.
• Preserves sudden variations, such as those that occur at the boundaries of objects, but suppresses the more gradual variations.

Page 44: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

44

Filtering

Page 45: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

45

Filtering

Page 46: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

46

Low-Pass Filtering

Page 47: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

47

Low-Pass Filtering

Page 48: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

48

Low-Pass Filtering

Page 49: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

49

Low-Pass Filtering

Average kernel: an equally weighted sum of all pixels in a neighbourhood. Any kernel having all positive coefficients will act as a low-pass filter.

.111  .111  .111
.111  .111  .111
.111  .111  .111

0.04  0.04  0.04  0.04  0.04
0.04  0.04  0.04  0.04  0.04
0.04  0.04  0.04  0.04  0.04
0.04  0.04  0.04  0.04  0.04
0.04  0.04  0.04  0.04  0.04

Consider the two kernels shown above. What do they do?

• Their coefficients sum to 1.
• Convolution with them will not result in an overall brightening of the image.

Page 50: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

50

Low-Pass Filtering

Any kernel having all positive coefficients will act as a low-pass filter.

.111  .111  .111            1  1  1
.111  .111  .111  =  1/9 *  1  1  1
.111  .111  .111            1  1  1

The center pixel becomes the average of all neighboring pixels. Also known as a mean filter.

Page 51: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

51

Low-Pass Filter

Pixel values from the neighbourhood are summed without being weighted. The sum is divided by the number of pixels in the neighbourhood. Convolution with these kernels is therefore equivalent to computing the mean grey level over the neighbourhood defined by the kernel.

These kernels are sometimes described as mean filters.

Page 52: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

52

Low-Pass Filtering

Page 53: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

53

Low-Pass Filtering

Page 54: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

54

Mean Filtering

Mean filters are good at removing noise.

Mean filters “blur” or “smooth” edges (by damping high frequency components and resisting fast changes in intensities)

Kernels are typically normalized so that they sum to 1

Page 55: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

55

Mean Filtering

(100+100+100+100+200+205+100+195+200)/9 = 1300/9 ≈ 144

Page 56: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

56

Gaussian Filter

The Gaussian filter is a 2-D convolution operator that is used to "blur" images and remove detail and noise, much like the mean filter. It is similar to the mean filter, but it uses a different kernel that represents the shape of a Gaussian ("bell-shaped") hump. This kernel has some special properties, which are detailed below. The degree of smoothing is determined by the standard deviation of the Gaussian.

Larger standard deviation Gaussians, of course, require larger convolution kernels in order to be accurately represented.

The Gaussian outputs a "weighted average" of each pixel's neighbourhood, with the average weighted more towards the value of the central pixels.

This is in contrast to the mean filter's uniformly weighted average. A Gaussian provides gentler smoothing and preserves edges better than an identically sized mean filter.
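A minimal sketch of how such a kernel might be generated for use with the java.awt.image.Kernel class; the kernel-size heuristic (about 6 sigma, rounded up to an odd number) is an assumption, not something specified in the lecture:

import java.awt.image.Kernel;

public class GaussianKernelDemo {
  // Build a normalized Gaussian kernel of odd size for a given sigma.
  // The size heuristic (~6*sigma, rounded up to an odd number) is an assumption.
  static Kernel gaussianKernel(float sigma) {
    int size = (int) Math.ceil(6 * sigma);
    if (size % 2 == 0) size++;                 // keep the size odd
    int half = size / 2;
    float[] data = new float[size * size];
    float sum = 0.0f;
    for (int k = -half; k <= half; ++k)
      for (int j = -half; j <= half; ++j) {
        float value = (float) Math.exp(-(j*j + k*k) / (2.0 * sigma * sigma));
        data[(k + half) * size + (j + half)] = value;
        sum += value;
      }
    for (int i = 0; i < data.length; ++i)      // normalize so the coefficients sum to 1
      data[i] /= sum;
    return new Kernel(size, size, data);
  }
}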

Page 57: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

57

Gaussian Filter

3D plot of the Gaussian filter

Page 58: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

58

Gaussian Filter

The Gaussian kernel is separable and symmetric.

To construct the kernel we must sample and quantize! (Basically, we "image" the function.)

The kernel below is an example where sigma = 1.

1   4   7   4   1
4  16  28  16   4
7  28  49  28   7
4  16  28  16   4
1   4   7   4   1

(To normalise this kernel, each coefficient is divided by the sum of all the coefficients, 289.)
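Because the kernel is separable, the 2-D convolution can be carried out as two 1-D passes (a row pass followed by a column pass). A minimal sketch of this idea for a single-channel image stored row-major in a float array; this is an illustration, not the lecture's separableConvolve method, and pixels near the borders are simply left at zero:

// Minimal sketch (assumption): convolve with a separable kernel as two 1-D
// passes, a horizontal pass with 'row' followed by a vertical pass with 'col'.
static float[] separableConvolve(float[] in, int w, int h, float[] row, float[] col) {
    int m = row.length / 2, n = col.length / 2;
    float[] tmp = new float[w * h];
    float[] out = new float[w * h];
    for (int y = 0; y < h; ++y)                    // horizontal pass
        for (int x = m; x < w - m; ++x) {
            float sum = 0.0f;
            for (int j = -m; j <= m; ++j)
                sum += row[j + m] * in[y * w + (x - j)];
            tmp[y * w + x] = sum;
        }
    for (int y = n; y < h - n; ++y)                // vertical pass
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f;
            for (int k = -n; k <= n; ++k)
                sum += col[k + n] * tmp[(y - k) * w + x];
            out[y * w + x] = sum;
        }
    return out;
}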

Page 59: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

59

Gaussian Filter

15 x 15 kernel. Larger kernels result in more blur.

Page 60: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

60

Gaussian Filter

Page 61: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

61

5x5 Gaussian Kernel

5x5 Mean Kernel

Page 62: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

62

Noise Reduction

Gaussian and mean filters are usually used to reduce “noise” in images.

The above image has been corrupted by impulse noise.

Page 63: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

63

Noise Reduction

Page 64: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

64

Noise Reduction

Page 65: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

65

Sharpening Filters (High-Pass Filtering)

These filters highlight fine image detail or de-blur an image. A high-pass filter allows only high-frequency information through.

The main feature is a positive center coefficient and negative perimeter values. The sum of the coefficients is zero, which means that areas of constant intensity are completely eliminated.

-1 -1 -1

-1 8 -1

-1 -1 -1

Page 66: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

66

High-Pass Filtering

The sum of the coefficients in this kernel is zero. This means that, when the kernel is over an area of constant or slowly varying grey level, the result of convolution is zero or some very small number. However, when the grey level is varying rapidly within the neighbourhood, the result of convolution can be a large number (positive or negative).

We need to choose an output image representation that supports negative numbers. If we wish to display or print the image, we must map the pixel values onto a 0-255 range:

• Usually map 0 onto the middle of the range.
• Thus, negative filter responses will show up as dark tones, whereas positive responses will be represented by light tones.
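A minimal sketch (not from the lecture code) of such a mapping: the signed responses from a high-pass convolution are scaled symmetrically so that a response of 0 lands on mid-grey (128):

// Minimal sketch: map signed high-pass responses onto 0-255 for display,
// with a response of 0 mapped to the middle of the range (128).
// 'responses' would be the float[] produced by a convolve() call.
static byte[] mapForDisplay(float[] responses) {
    float maxAbs = 0.0f;
    for (float r : responses)                      // find the largest magnitude
        maxAbs = Math.max(maxAbs, Math.abs(r));
    byte[] out = new byte[responses.length];
    for (int i = 0; i < responses.length; ++i) {
        int value = 128;                           // zero response -> mid grey
        if (maxAbs > 0)
            value = 128 + Math.round(127 * responses[i] / maxAbs);
        out[i] = (byte) Math.max(0, Math.min(255, value));
    }
    return out;
}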

Page 67: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

67

Sharpening Filters

Page 68: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

68

Sharpening Filters

Page 69: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

69

HighPass Examples

Page 70: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

70

HighPass Example

Page 71: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

71

HighPass Example

Page 72: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

72

High Boost Filtering

An image can be sharpened by high-boost filtering, an operation that emphasises the high spatial frequencies present in the image. In the spatial domain, this can be accomplished by convolution with a kernel of the form

-1  -1  -1
-1   c  -1
-1  -1  -1

where c > 8. Larger values of c give more weight to a pixel's true value and less to the difference between it and its surroundings, thereby reducing the sharpening effect. As c gets closer to 8, the degree of sharpening increases. If c = 8, the kernel becomes the high-pass filter. High-boost filtering keeps the "original" image while enhancing (boosting) the high-frequency components.
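A minimal sketch (illustrative, not from the lecture) of building and applying a high-boost kernel with the built-in ConvolveOp; dividing the coefficients by (c - 8) makes them sum to 1 so that the overall brightness is preserved. Assumes the same java.awt.image imports as in the earlier examples, and 'inputImage' is a placeholder:

float c = 11.0f;                       // c > 8; larger c gives milder sharpening
float[] coeffs = {
    -1, -1, -1,
    -1,  c, -1,
    -1, -1, -1
};
for (int i = 0; i < coeffs.length; ++i)
    coeffs[i] /= (c - 8);              // normalize so the coefficients sum to 1
ConvolveOp boost = new ConvolveOp(new Kernel(3, 3, coeffs), ConvolveOp.EDGE_NO_OP, null);
BufferedImage sharpened = boost.filter(inputImage, null);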

Page 73: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

73

Example: High boost filtering

Page 74: 1 Lecture 11 Neighbourhood Operations (1) TK3813 DR MASRI AYOB.

74

Thank you

Q&A