# Final Report: Indian Academy of Sciences Summer Fellowship

Date posted: 01-Nov-2014

## Eight-Week Report

Registration / Application No: ENGS2229S
Name: Amogha P
Institution: National Institute of Technology, Surathkal

Abstract: Alzheimer's disease (AD) is one of the major health problems facing the world today. According to a recent study, 1 out of 8 Americans suffers from Alzheimer's disease. The situation in developing countries such as India is even worse: according to a study conducted by Alzheimer's Disease International, 58% of people with dementia presently live in developing countries, a share projected to rise to 71% by 2050. It is therefore important to have a diagnostic method by which AD can be identified. The only way to confirm AD is autopsy of the brain, so in vivo methods of identifying AD are needed. The present study identifies AD with the help of a support vector machine. Axial images of 302 controls, 54 MCI patients, and 62 AD patients, in which hippocampal atrophy was visible, were taken. These images were segmented into four regions — grey matter, white matter, skull, and voids (atrophy and background) — by an improved fuzzy c-means method, and the segmented images were then classified using a support vector machine, achieving an accuracy of 79.18%. A better classifier, a hidden neural network, was then studied, with which 85.56% accuracy was achieved. Since this is a three-class pattern classification problem, this accuracy is very good.


## Table of Contents

1. Introduction
2. Methodology and Discussion
   a. Preprocessing
   b. Image Segmentation
   c. Training
   d. Testing
3. Results
   a. Support Vector Machine
   b. Hidden Neural Network
4. Conclusion
5. References


## Introduction

Alzheimer's disease is one of the most common neurodegenerative diseases [1]. However, the disease can be confirmed only after autopsy of the brain, so there is a need for an in vivo method of diagnosing Alzheimer's disease [1,2]. The methods presently used are MRI, MRS, PET scan, CT scan, etc. However, diagnosis is based on atrophy ratings or other visual features observed by an expert, which introduces bias, as each expert may rate atrophy differently. Hence, there is a need for an automated method of diagnosing Alzheimer's disease.

The automated method that I am trying to implement identifies Alzheimer's disease from the MRI image of the test subject. The problem to be solved here is a pattern recognition problem, and many classifiers are available in the literature. The classifier I have used is the support vector machine, because in this study the sample size (the number of subjects on whom MRI was done, 400) is much smaller than the feature dimension (the number of pixels in each image, 256×256). This is a classic situation in pattern recognition. The study naturally involves these steps [3]:

1. Preprocessing of the MRI image
2. Segmentation of the image
3. Training the classifier on the MRI images
4. Validation

Preprocessing is done to remove noise and improve the contrast of the image. The information in the image must be usable for SVM-based classification, so segmentation into the different anatomical parts of the brain serves as a way to extract features for the SVM. The segmented images are used to train the SVM to distinguish between AD and control subjects, and validation tests the accuracy of the SVM.


## Methodology

### 1. Preprocessing

The MRI images used were from www.oasis-brains.org, which hosts MRI images of 416 subjects. The subjects were also given the MMSE (Mini Mental State Exam), from which the CDR (Clinical Dementia Rating) is obtained; a CDR of 1 and above indicates probable Alzheimer's disease. The images were processed using the standard procedure of applying the inverse Fourier transform followed by connected component analysis. For my study, I chose the coronal image of the brain, as medial temporal atrophy can be assessed from this view. Histogram equalization was applied to enhance the intensity contrast of the image; however, adaptive histogram equalization was found to outperform normal histogram equalization.

Histogram equalization: Histogram equalization is a common image contrast enhancement method. Consider a discrete grayscale image {x} and let $n_i$ be the number of occurrences of gray level i. The probability of an occurrence of a pixel of level i in the image is

$$p_x(i) = \frac{n_i}{n}, \quad 0 \le i < L,$$

L being the total number of gray levels in the image, n being the total number of pixels in the image, and $p_x(i)$ being in fact the image's histogram for pixel value i, normalized to [0,1].

The cumulative distribution function corresponding to $p_x$ is defined as

$$\mathrm{cdf}_x(i) = \sum_{j=0}^{i} p_x(j),$$

which is also the image's accumulated normalized histogram. We seek a transformation of the form y = T(x) to produce a new image {y}, such that its CDF will be linearized across the value range, i.e.

$$\mathrm{cdf}_y(i) = (i+1)K$$


for some constant K. The properties of the CDF allow us to perform such a transform; it is defined as

$$y = T(k) = \mathrm{cdf}_x(k).$$

Note that T maps the levels into the range [0,1]. In order to map the values back into their original range, the following simple transformation needs to be applied to the result [4]:

$$y' = y \cdot \bigl(\max\{x\} - \min\{x\}\bigr) + \min\{x\}.$$
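The derivation above can be sketched in a few lines of numpy. This is a minimal illustration, not the implementation used in the study; the function name `equalize_histogram` and the `levels` parameter are choices made here.

```python
import numpy as np

def equalize_histogram(img, levels=256):
    """Global histogram equalization of a grayscale image.

    Follows the derivation in the text: compute the normalized
    histogram p_x, its CDF, map each gray level k through the CDF
    (y = T(k) = cdf_x(k)), then rescale to the original range.
    """
    img = np.asarray(img)
    # n_i: count of each gray level; p_x(i) = n_i / n
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / img.size  # accumulated normalized histogram
    # map back from [0,1] to [0, levels-1] via a lookup table
    lut = np.round(cdf * (levels - 1)).astype(img.dtype)
    return lut[img]
```

Applied to an image whose levels cluster at the extremes, this spreads the occupied levels across the available range.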

Adaptive histogram equalization (CLAHE): CLAHE (contrast-limited adaptive histogram equalization) differs from ordinary adaptive histogram equalization in its contrast limiting. This feature can also be applied to global histogram equalization, giving rise to contrast-limited histogram equalization (CLHE), which is rarely used in practice. In the case of CLAHE, the contrast limiting procedure has to be applied to each neighborhood from which a transformation function is derived. CLAHE was developed to prevent the overamplification of noise that adaptive histogram equalization can give rise to; it achieves this by limiting the contrast enhancement of AHE.

The contrast amplification in the vicinity of a given pixel value is given by the slope of the transformation function. This is proportional to the slope of the neighbourhood cumulative distribution function (CDF) and therefore to the value of the histogram at that pixel value. CLAHE limits the amplification by clipping the histogram at a predefined value before computing the CDF. This limits the slope of the CDF and therefore of the transformation function. The value at which the histogram is clipped, the so-called clip limit, depends on the normalization of the histogram and thereby on the size of the neighbourhood region. Common values limit the resulting amplification to between 3 and 4.

It is advantageous not to discard the part of the histogram that exceeds the clip limit but to redistribute it equally among all histogram bins. The redistribution will push some bins over the clip limit again, resulting in an effective clip limit that is larger than the prescribed limit, the exact value of which depends on the image. If this is undesirable, the redistribution procedure can be repeated recursively until the excess is negligible [5].
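The clip-and-redistribute step described above can be illustrated as follows. This is a single-pass sketch of the contrast-limiting idea only; `clip_histogram` is a hypothetical helper, and a full CLAHE would apply it to each neighborhood's histogram before computing that neighborhood's CDF.

```python
import numpy as np

def clip_histogram(hist, clip_limit):
    """Clip a histogram at clip_limit and redistribute the excess
    equally among all bins, as in CLAHE's contrast limiting.

    One pass only: the redistribution may push some bins back over
    the limit, giving the 'effective clip limit' mentioned in the
    text; repeating the procedure would shrink that excess.
    """
    hist = hist.astype(float).copy()
    excess = np.sum(np.maximum(hist - clip_limit, 0.0))  # mass above the limit
    hist = np.minimum(hist, clip_limit)                   # clip the peaks
    hist += excess / hist.size                            # spread excess evenly
    return hist
```

Note that the total histogram mass is preserved, so the resulting CDF still ends at the same value.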


When normal histogram equalization was applied to an image with larger voids, the contrast equalization made the white matter intensity dull, which could cause problems in segmentation. Hence, adaptive histogram equalization was used. Figures 1(a) and 1(b) show the results of both methods; adaptive enhancement performs better, hence it was applied.

Fig. 1: Preprocessed images. (a) Image equalized by normal histogram equalization. (b) Image enhanced by adaptive histogram equalization.

Noise removal: After image contrast adjustment the noise also gets boosted, so noise removal is important for proper segmentation. Noise removal was done with the adaptive Wiener filter, which estimates the local mean and variance around each pixel [6]:

$$\mu = \frac{1}{NM}\sum_{(n_1,n_2)\in\eta} a(n_1,n_2),$$

$$\sigma^2 = \frac{1}{NM}\sum_{(n_1,n_2)\in\eta} a(n_1,n_2)^2 - \mu^2,$$

where $\eta$ is the N-by-M local neighborhood of each pixel in the image A. A pixelwise Wiener filter is created using these estimates:

$$b(n_1,n_2) = \mu + \frac{\sigma^2 - \nu^2}{\sigma^2}\,\bigl(a(n_1,n_2) - \mu\bigr),$$

where $\nu^2$ is the noise variance, taken as the average of all the locally estimated variances.
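The local-statistics formulas above can be sketched in numpy. This follows the standard wiener2-style formulation and is an illustration only; `wiener_filter`, its parameters, and the edge-padding choice are assumptions made here, not the study's actual code.

```python
import numpy as np

def wiener_filter(a, size=3, noise_var=None):
    """Pixelwise adaptive Wiener filter over a size-x-size neighborhood.

    Computes the local mean and variance around each pixel; if
    noise_var is None, it is estimated as the average of all the
    locally estimated variances, as described in the text.
    """
    a = np.asarray(a, dtype=float)
    pad = size // 2
    ap = np.pad(a, pad, mode='edge')
    # local mean and mean-of-squares via sliding windows
    win = np.lib.stride_tricks.sliding_window_view(ap, (size, size))
    mu = win.mean(axis=(-1, -2))
    var = (win ** 2).mean(axis=(-1, -2)) - mu ** 2
    if noise_var is None:
        noise_var = var.mean()
    # b = mu + ((sigma^2 - nu^2) / sigma^2) * (a - mu), with the gain
    # clamped to zero where the local variance falls below the noise variance
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
    return mu + gain * (a - mu)
```

In flat (low-variance) regions the gain approaches zero and the filter returns the local mean, smoothing noise; near edges (high variance) the gain approaches one and the image is left nearly unchanged.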

### 2. Segmentation of Image

The FCM algorithm assigns pixels to each category by using fuzzy memberships. Let $X = \{x_1, x_2, \ldots, x_N\}$ denote an image with N pixels to be partitioned into c clusters, where each $x_j$ represents multispectral (feature) data. The algorithm is an iterative optimization that minimizes the cost function defined as follows:

$$J = \sum_{j=1}^{N}\sum_{i=1}^{c} u_{ij}^{m}\,\lVert x_j - v_i \rVert^2,$$

where $u_{ij}$ represents the membership of pixel $x_j$ in the ith cluster, $v_i$ is the ith cluster center, $\lVert\cdot\rVert$ is a norm metric, and m is a constant. The parameter m controls the fuzziness of the resulting partition, and m = 2 is used in this study. The cost function is minimized when pixels close to the centroid of their cluster are assigned high membership values, and low membership values are assigned to pixels far from the centroid. The membership function represents the probability that a pixel belongs to a specific cluster; in the FCM algorithm, this probability depends solely on the distance between the pixel and each individual cluster center in the feature domain. The membership functions and cluster centers are updated by the following [7]:

$$u_{ij} = \frac{1}{\displaystyle\sum_{k=1}^{c}\left(\frac{\lVert x_j - v_i\rVert}{\lVert x_j - v_k\rVert}\right)^{2/(m-1)}}, \qquad v_i = \frac{\displaystyle\sum_{j=1}^{N} u_{ij}^{m}\, x_j}{\displaystyle\sum_{j=1}^{N} u_{ij}^{m}}.$$
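The alternating updates above can be sketched as follows. This is a minimal standard-FCM illustration (without any spatial modification); the function name, defaults, and convergence test are assumptions, and c = 4 would correspond to the four tissue classes used in this study.

```python
import numpy as np

def fcm(X, c=4, m=2.0, iters=100, tol=1e-5, seed=0):
    """Standard fuzzy c-means on feature vectors X (N x d).

    Alternates the centroid update v_i and the membership update
    u_ij given in the text, stopping when memberships change by
    less than tol between successive iterations.
    """
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1 per pixel
    for _ in range(iters):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]   # centroid update
        d = np.linalg.norm(X[:, None, :] - V[None], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        # u_ij = d_ij^(-p) / sum_k d_kj^(-p)
        U_new = (d ** -p) / np.sum(d ** -p, axis=1, keepdims=True)
        done = np.abs(U_new - U).max() < tol
        U = U_new
        if done:
            break
    return U, V
```

A hard segmentation, when needed, is obtained by assigning each pixel to the cluster with the largest membership (`U.argmax(axis=1)`).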


Starting with an initial guess for each cluster center, FCM converges to a solution for $v_i$ representing a local minimum or a saddle point of the cost function. Convergence can be detected by comparing the changes in the membership function or the cluster centers at two successive iteration steps. This method, however, treats the whole image as a column vector and segments it pixel by pixel, whereas in an MRI it is a known fact that all the white matter forms a connected set, as does all the grey matter. Hence, a small change was made in the algorithm to account for this fact: the membership matrix is convolved with a 5×5 ones matrix, and the update rule is modified so that the membership probabilities of neighboring pixels are added to that of the given pixel, ensuring connectivity [8]. This spatial relationship is important in clustering, but it is not utilized in a standard FCM algorithm.
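The neighborhood step can be sketched as follows. This is one interpretation of the 5×5 ones-matrix convolution described in the text, not the report's exact update rule; `smooth_memberships` is a hypothetical helper, and the renormalization choice is an assumption.

```python
import numpy as np

def smooth_memberships(U, shape, size=5):
    """Spatial modification of FCM memberships.

    For each cluster, reshape the membership column back to the image
    grid and sum memberships over a size-x-size neighborhood
    (convolution with a ones kernel), so spatially connected tissue
    gets mutually reinforcing memberships. Renormalizes so the
    memberships again sum to 1 per pixel.
    """
    H, W = shape
    pad = size // 2
    out = np.empty_like(U, dtype=float)
    for i in range(U.shape[1]):
        u = np.pad(U[:, i].reshape(H, W), pad, mode='edge')
        win = np.lib.stride_tricks.sliding_window_view(u, (size, size))
        out[:, i] = win.sum(axis=(-1, -2)).ravel()   # neighborhood membership mass
    out /= out.sum(axis=1, keepdims=True)
    return out
```

Applied once per iteration after the membership update, this pulls isolated mislabeled pixels toward the label of their neighborhood.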
