Minor_project


INTRODUCTION

Medical Imaging provides a non-invasive technique to look at the functional and structural information of internal organs. There are many different types of modern medical imaging techniques, including Computed Tomography (CT) and Magnetic Resonance Imaging (MRI). The two fundamental branches of emission CT are single photon emission computed tomography (SPECT) and positron emission tomography (PET). Since its introduction in the 1970s, CT has become an important tool in medical imaging, supplementing x-rays and medical ultrasonography. CT completely eliminates the superimposition of images of structures outside the area of interest.

Earlier, mathematical algorithms were used to reconstruct the image. The most famous reconstruction technique, which relies on the Fourier transform, is Filtered Back Projection (FBP); it is also computationally undemanding. In the early 1980s, new methods of image reconstruction in emission tomography emerged which could overcome some of the shortcomings of the traditional methods. Conventional reconstruction algorithms like FBP cannot reconstruct images from an incomplete dataset. Hence we work with iterative algorithms: Maximum Likelihood-Expectation Maximization (ML-EM), Median Root Prior (MRP), and Ordered Subsets Expectation Maximization (OSEM). In our project we have used the Shepp-Logan head phantom and two other phantoms as test cases.

These techniques prove advantageous because they use an internal model of the scanner's properties and of the physical laws of X-ray interactions. Earlier methods such as filtered back projection assumed that the scanners were perfect and did not consider that simple physics does not apply at the subatomic level, which led to numerous artifacts, high noise, and poor image resolution. Iterative techniques provide images with improved resolution, reduced noise, and fewer artifacts, and they can greatly reduce the radiation dose in certain circumstances. These methods can take the Poisson nature of the noise and many other such factors into account, leading to better image reconstruction. The advantages of the iterative approach include insensitivity to noise and the ability to reconstruct a favorable image in the case of incomplete data. We have used the Anisotropic Diffusion (AD) filter, a nonlinear filter that is able to obtain a smooth image from a noisy one without blurring edges. The AD filter is widely used for image de-noising, enhancement, segmentation, etc. We have also used filters such as Total Variation (TV), the Probabilistic Patch-Based (PPB) filter, BM3D, Huber, and QM, which likewise obtain a smooth image from a noisy one while preserving edges without blurring.

Tomographic Image Reconstruction

Tomography is a non-invasive imaging technique that allows visualization of the internal structures of an object without the superposition of over- and under-lying structures that usually plagues conventional projection images. For example, in a conventional chest radiograph the heart, lungs, thorax, and ribs are all superimposed on the same film, whereas a computed tomography (CT) slice captures each organ in its actual three-dimensional position.

Each tomographic modality measures a different physical quantity; tomograms can be created using a variety of physical mechanisms.

Physical mechanism                Modality
X-ray attenuation                 X-ray Computerized Tomography
Nuclear magnetic resonance        Magnetic Resonance Imaging
Positron-electron annihilation    Positron Emission Tomography
Ultrasound interaction            Ultrasonography

Medical imaging systems such as positron emission tomography (PET) and electronically collimated single photon emission computed tomography (SPECT) record particle emission events based on timing coincidences. These systems record accidental coincidence (AC) events simultaneously with the true coincidence events. Similarly, in low light-level imaging, thermoelectrons generated by the photodetector are indistinguishable from photoelectrons generated by photo-conversion, and their effect is similar to that of AC events.

Positron Emission Tomography (PET)

In positron emission tomography (PET) imaging, 2-D or 3-D tomographic images of the radioactivity distribution within the subject are generated by administering a proton-rich isotope to the subject and placing the patient in the field of view of a number of detectors. The radionuclide decays, producing a positron, a neutron, and a neutrino. The positron travels a short distance within the body and gives up its kinetic energy through interactions with the tissue. The positron then annihilates with an electron to produce two 511 keV photons traveling in opposite directions.

Basic Principle of PET

The photons are detected by the detectors surrounding the subject. The detectors are linked in order to detect pairs of photons arising from the same annihilation. When a photon is registered at a detector, it generates a time pulse. Two pulses are registered as coincident only if the two detection events fall within a small time window. These coincidence events are stored in arrays corresponding to projections through the patient, which later contribute to the image. The coincidence events fall into 4 categories:

a) True coincidence: both detected photons arise from the same annihilation and reach the detectors without interacting on the way.

b) Scattered coincidence: one of the detected photons has undergone a Compton scattering event, resulting in a wrong Line of Response.

c) Random coincidence: two photons not arising from the same annihilation are detected within the coincidence time window of the system.

d) Multiple coincidence: more than two photons are detected in different detectors within the coincidence resolving time.

Fig 1. Coincidences in Positron Emission Tomography


Note: Scattered and random coincidences contribute statistical noise to the image.

Single Photon Emission Computed Tomography (SPECT)

SPECT is used for any gamma imaging study where a true 3D representation is helpful, e.g., tumor imaging, infection imaging, and thyroid imaging. Because SPECT permits accurate localization in 3D space, it can be used to provide information about localized function in internal organs, such as functional cardiac or brain imaging. In this method, single photon emissions, rather than X-ray transmissions, are the source of the signal. The patient is injected with a harmless tracer chemical that emits gamma rays. Because the detectors are set up in a strip, a collimator is necessary to locate the site from which each photon was emitted.

Once a tracer is injected, the gamma rays need to be detected; the gamma camera is used for this. The components making up the gamma camera are the collimator, detector crystals, photomultiplier tube array, position logic circuits, and a data analysis computer. The collimator only allows rays traveling in a straight line to pass through; it is made of a gamma-ray-absorbing material such as lead or tungsten. The detector crystal registers the gamma rays incident on it: the gamma rays interact with the crystal by means of the photoelectric effect.

Radon Transform and Filtered Back Projection

The Radon transform and its inverse give the reconstruction of tomographic images. The Radon transform of a distribution $f(x, y)$ is given by

$$p(r, \theta) = \iint f(x, y)\,\delta(x\cos\theta + y\sin\theta - r)\,dx\,dy$$

where $\delta$ is the delta function, $(x, y)$ are the coordinates of a point in the spatial domain, and $(r, \theta)$ are the coordinates of a point in the polar domain.

Back projection is defined as

$$f(x, y) = \int_0^{\pi} p(x\cos\theta + y\sin\theta, \theta)\,d\theta$$

Due to the point spread function, the image reconstructed by plain back projection is blurred. FBP is the most widely used image reconstruction algorithm. It consists of two steps (a minimal sketch follows the list):

1. The filtering step: the projections (the line integrals that form the projection of the object) are filtered, typically with a ramp filter, to undo the blurring.

2. The back projection step: the filtered projections are smeared back across the image plane.
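
As an illustration, a minimal FBP sketch in MATLAB could look like the following. This is a demonstration of the two steps above on the Shepp-Logan phantom, not a listing of our program, and it assumes the Image Processing Toolbox (which provides phantom, radon, and iradon):

% Minimal FBP sketch (assumes the Image Processing Toolbox).
P = phantom(256);                        % Shepp-Logan head phantom
theta = 0:179;                           % projection angles in degrees
sino = radon(P, theta);                  % forward projection: the sinogram
rec = iradon(sino, theta, 'linear', 'Ram-Lak', 1, 256);  % filter + back project
figure, imshowpair(P, rec, 'montage')    % compare original and reconstruction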


Iterative Reconstruction Methods

We are interested in finding a vector f that is a solution of G = Af. The basic principle of the iterative algorithms is to find a solution by successive estimates. The projections corresponding to the current estimate are compared with the measured projections, and the result of the comparison is used to modify the current estimate, thereby creating a new estimate. The algorithms differ in the way the measured and estimated projections are compared and in the kind of correction applied to the current estimate. The process is initiated by arbitrarily creating a first estimate, for example a uniform image initialized to 0 or 1 (depending on whether the correction is carried out as an addition or a multiplication), possibly with some error added to the initial estimate.

In computed tomography, this approach was the one first used by Sir Godfrey Hounsfield. There are different varieties of algorithms, but each starts with an assumed or estimated image, computes projections from that image, compares them with the original projection data, and updates the image based upon the difference between the calculated and the actual projections.

The iterative image reconstruction algorithms mainly have 4 components:

1. An object model that expresses the unknown continuous-space function f(r) to be reconstructed in terms of a finite series with unknown coefficients, which are estimated from the data.

2. A system model that relates the unknown object to the "ideal" measurements that would be recorded in the absence of measurement noise. Often this is a linear model of the form Y = Ax + e.

3. A statistical model that describes how the noisy measurements vary around their ideal values. Often Gaussian or Poisson noise is assumed.

4. A cost function that is to be minimized to estimate the image coefficient vector, together with an algorithm that includes some initial estimate of the image and a stopping criterion for terminating the iterations.

Maximum Likelihood-Expectation Maximization (ML-EM)

The theoretical basis of the ML-EM algorithm is the Poisson nature of the emission process. The image reconstruction model is

$$Y = AX + e$$

where $Y = \{y_1, y_2, \dots, y_M\}$ is the measured projection data, $X = \{x_1, x_2, \dots, x_N\}$ is the image intensity data, $e = \{e_1, e_2, \dots, e_M\}$ is the noise, and A denotes the M×N projection matrix. Under the Poisson assumption, the ML-EM algorithm updates the value of pixel $x_j$ at iteration n according to the following multiplicative scheme:

$$x_j^{(n+1)} = \frac{x_j^{(n)}}{\sum_{i=1}^{M} p(i|j)} \sum_{i=1}^{M} \frac{y_i\, p(i|j)}{\sum_{j'=1}^{N} x_{j'}^{(n)}\, p(i|j')}$$

where $x_j^{(n)}$ and $x_j^{(n+1)}$ are the values of the pixel $x_j$ after iterations n and n+1, and $\bar{y}_i = \sum_{j=1}^{N} p(i|j)\, x_j^{(n)}$, $i = 1, 2, \dots, M$, is the conditional mathematical expectation of the counts from the pixels j hitting detector i.

A flowchart is shown below to explain the working of the ML-EM and MRP algorithms. For reconstruction, the image is digitized into a matrix; for computational purposes it is then represented in column-vector form. The system matrix ($G_{ij}$) represents the projection probabilities: $p(i|j)$ is the probability that an emission from pixel j is detected by detector i.
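
In MATLAB-like form, one ML-EM loop can be sketched as below. This is an illustrative sketch only (A is the system matrix, y the measured projection column vector, and the initial estimate is uniform), not a listing of our program:

function x = mlem(A, y, nIter)
% Illustrative ML-EM sketch; x is the image estimate as a column vector
x = ones(size(A, 2), 1);            % uniform initial estimate
sens = sum(A, 1)';                  % sensitivity image: sum over i of A(i,j)
for k = 1:nIter
    yhat = A * x;                   % forward projection of the current estimate
    r = y ./ max(yhat, eps);        % compare measured and estimated projections
    x = x .* (A' * r) ./ max(sens, eps);   % back project ratio and update
end
end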

Median Root Prior (MRP)

The median root prior (MRP) algorithm was developed from a general assumption about the unknown image: the desired image is locally monotonic, i.e., pixel values are spatially non-increasing or non-decreasing within a local neighborhood. This is enforced using median filtering. MRP is based on the one-step-late approach; it employs median-filter regularization and efficiently removes the noisy patterns, typical of images reconstructed by probability-based iterative algorithms after a large number of iterations, without blurring the locally monotonic structures.

The elements of the vector of MRP coefficients are calculated according to the following scheme:

$$M_i^{(k)} = \left(1 + \beta\, \frac{x_i^{(k)} - \mathrm{med}\!\left(x^{(k)}, i\right)}{\mathrm{med}\!\left(x^{(k)}, i\right)}\right)^{-1}$$

where $\mathrm{med}(x^{(k)}, i)$ is the median over a neighborhood of the voxel i.

The MRP algorithm is the same as the ML-EM algorithm, with one extra updating step:

$$\lambda_j'^{(k+1)} = \frac{\lambda_j^{(k+1)}}{1 + \beta\, \dfrac{x_j^{(k)} - \mathrm{med}\!\left(x^{(k)}, j\right)}{\mathrm{med}\!\left(x^{(k)}, j\right)}}$$

$\lambda_j'^{(k+1)}$ is the new image estimate; the rest of the steps are the same as in ML-EM.
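
The extra step amounts to dividing the ML-EM estimate by the MRP penalty term. A minimal one-step-late sketch of that correction (assuming the Image Processing Toolbox for medfilt2; the variable names are ours, not the program's, and the median here is taken on the freshly updated image):

% One-step-late MRP correction applied after the ML-EM update.
% lamImg: current estimate reshaped to the 2-D grid; beta: prior weight.
m = medfilt2(lamImg, [3 3], 'symmetric');          % local 3x3 median image
m = max(m, eps);                                   % guard against division by zero
lamImg = lamImg ./ (1 + beta * (lamImg - m) ./ m); % MRP penalty step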

Page 6: Minor_project

Advantages

1) MRP implicitly contains a general description of the unknown tracer concentration; no special knowledge of the appearance of the true image is required.

2) Edge preservation is a built-in feature of median filtering, independent of the height of the edge.

3) MRP is general, robust in use, and quantitatively accurate.

4) As the median is used as the estimator of the penalty reference for each pixel, MRP penalizes only details smaller than a certain spatial size: individual pixels, or too few pixels whose amplitude differs from their neighborhood, do not pass the median filter.

Disadvantages

1) It tends to generate sharp edges of small height in flat, noisy areas. Although the noise is effectively reduced, the remaining blocks may be disturbing to the human eye. This effect, called streaking, arises when the output of the median filter happens to be the same for adjacent window locations.

The algorithm of MRP used by our program

1. Calculate the probability matrix $G_{ij}$.

2. Start with an initial image estimate $\lambda^{(0)}(b)$.

3. Forward projection: $n_i = \sum_{j=1}^{B} G_{i,j}\, \lambda_j^{(k)}$.

4. Comparison: $n_i' = n_i^{*} / n_i$, where $n_i^{*}$ is the measured projection.

5. Back projection: $x_j = \sum_{i=1}^{D} G_{i,j}\, n_i'$.

6. Normalization: $x_j' = x_j \big/ \sum_{i=1}^{D} G_{i,j}$.

7. Update: $\lambda_j^{(k+1)} = \lambda_j^{(k)} \cdot x_j'$.

8. Extra MRP update: $\lambda_j'^{(k+1)} = \dfrac{\lambda_j^{(k+1)}}{1 + \beta\, \dfrac{x_j^{(k)} - \mathrm{med}(x^{(k)}, j)}{\mathrm{med}(x^{(k)}, j)}}$.

A SIMPLIFIED FLOWCHART OF ML-EM AND MRP


FILTERS USED

1. Anisotropic Diffusion Filter

The Anisotropic Diffusion (AD) filter was first introduced by Perona and Malik in 1990. It is a nonlinear filter based on a partial differential equation. It smooths the interior of regions without blurring the edges, adaptively choosing the diffusion coefficient in each diffusion iteration so that homogeneous regions become smooth while edges are preserved. Small variations in pixel intensity with low gradient values are easily smoothed, while edges with large intensity gradients are effectively retained.

In 1990, Perona and Malik first proposed the anisotropic diffusion equation:

$$\frac{\partial f}{\partial t} = \mathrm{div}\!\left(C_{P\text{-}M}(|\nabla f|)\, \nabla f\right), \qquad f(x, y, 0) = f_0(x, y)$$

where $f$ is the image at the current iteration, $\nabla f$ is the gradient of the image, div is the divergence operator, and $C_{P\text{-}M}$ denotes the diffusion coefficient, chosen as

$$C_{P\text{-}M}(|\nabla f|) = \frac{1}{1 + \left(|\nabla f| / K\right)^2} \qquad \text{or} \qquad C_{P\text{-}M}(|\nabla f|) = \exp\!\left(-\left(|\nabla f| / K\right)^2\right)$$
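
For illustration, a minimal explicit Perona-Malik iteration is sketched below. This is our own sketch in plain MATLAB, using 4-neighbour differences with wrap-around borders; dt should stay at or below 0.25 for stability of the explicit scheme:

function f = pm_diffuse(f, K, dt, nIter)
% Sketch of Perona-Malik anisotropic diffusion on a double grayscale image f
c = @(g) 1 ./ (1 + (g / K).^2);       % first Perona-Malik diffusion coefficient
for t = 1:nIter
    dN = circshift(f, [-1 0]) - f;    % differences toward the four neighbours
    dS = circshift(f, [1 0]) - f;
    dE = circshift(f, [0 -1]) - f;
    dW = circshift(f, [0 1]) - f;
    % explicit update: discrete divergence of c(|grad f|) * grad f
    f = f + dt * (c(abs(dN)).*dN + c(abs(dS)).*dS + c(abs(dE)).*dE + c(abs(dW)).*dW);
end
end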

2. BM3D:

The enhancement of sparsity is achieved by grouping similar 2D fragments of the image into 3D data arrays called "groups". Due to the similarity between the grouped blocks, the grouping enables a highly sparse representation in the 3D transform domain, so that the noise can be well separated by shrinking the transform coefficients. To the best of our knowledge, BM3D achieves the best performance for removing additive white Gaussian noise at a reasonable computational cost.

3. Huber:

The principal contribution of this work is the proposal of the Huber potential function for image restoration, whose performance is comparable to that of half-quadratic functionals. To fully ensure robustness in edge-preserving image filtering while at the same time reducing the convergence cost, the Huber potential function is used as a half-quadratic (HQ) function. Such functionals have been used in one-dimensional robust estimation, as described in the literature for the case of non-linear regression.

The Huber potential is quadratic for small residuals and linear for large ones:

$$H_d(t) = \begin{cases} t^2/2, & |t| \le d \\ d\,|t| - d^2/2, & |t| > d \end{cases}$$

or, in MATLAB form:

H = t.^2 / 2;                   % quadratic region, |t| <= d
ii = abs(t) > d;                % samples in the robust (linear) region
H(ii) = d*abs(t(ii)) - d^2/2;   % linear region, |t| > d

4. Total Variation:

Total variation denoising (TVD) is an approach for noise reduction developed so as to preserve sharp edges in the underlying signal. Unlike a conventional low-pass filter, TV denoising is defined in terms of an optimization problem: the output of the TV denoising 'filter' is obtained by minimizing a particular cost function, and any algorithm that solves this optimization problem can be used to implement TV denoising. This is not trivial, however, because the TVD cost function is non-differentiable; numerous algorithms have been developed to solve the TVD problem.

Total variation denoising assumes that the noisy data y(n) is of the form

$$y(n) = x(n) + w(n), \qquad n = 0, \dots, N - 1 \tag{1}$$

where x(n) is an (approximately) piecewise-constant signal and w(n) is white Gaussian noise. TV denoising estimates the signal x(n) by solving the optimization problem

$$\arg\min_x \left\{ F(x) = \frac{1}{2} \sum_{n=0}^{N-1} |y(n) - x(n)|^2 + \lambda \sum_{n=1}^{N-1} |x(n) - x(n-1)| \right\}$$
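
As a concrete illustration, F(x) can be minimized by a naive subgradient descent. This is a slow but simple sketch of ours, not a recommended solver; dedicated TV algorithms (e.g., majorization-minimization) converge far faster:

function x = tv_denoise(y, lam, nIter)
% Naive subgradient descent on the 1-D TV cost F(x); y is a column vector
x = y;                            % start from the noisy data
alpha = 1e-3;                     % small fixed step size
for k = 1:nIter
    s = sign(diff(x));            % sign of the first differences x(n) - x(n-1)
    g = (x - y) + lam * ([0; s] - [s; 0]);   % subgradient of F at x
    x = x - alpha * g;            % descent step
end
end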

5. Probabilistic Patch Based (PPB) Filter:

The NL-means filter denoises an image by computing, for each pixel s, the mean of similar pixel values $v_t$. The similarity is measured by the Euclidean distance between the neighborhoods of the pixels s and t; the Euclidean distance is well adapted to images corrupted by additive white Gaussian noise.

To extend the filter to non-Gaussian noise, we propose to estimate, for each pixel s, the underlying image parameter $\theta_s^{*}$ by computing the weighted maximum likelihood estimate from the pixel values $v_t$:

$$\hat{\theta}_s^{\mathrm{WMLE}} = \arg\max_{\theta_s} \sum_t w(s, t) \log p(v_t \mid \theta_s)$$

where the weight w(s, t) defines the similarity between the pixels s and t. Following the NL-means filter assumptions, we define this weight by the similarity between two patches $\Delta_s$ and $\Delta_t$ centred respectively around s and t. This similarity is assumed to be linked to the similarity probability:

$$w(s, t)_{\mathrm{PPB}} = p\!\left(\theta_{\Delta_s}^{*} = \theta_{\Delta_t}^{*} \mid v\right)^{1/h}$$
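
In the additive white Gaussian case that PPB generalizes, the weight reduces to the classic NL-means patch similarity. A small sketch of ours (border handling omitted; r is the patch half-width and h the filtering parameter):

function w = nlmeans_weight(v, s, t, r, h)
% NL-means similarity weight between the patches around pixels s and t
Ps = v(s(1)-r : s(1)+r, s(2)-r : s(2)+r);   % patch centred at s
Pt = v(t(1)-r : t(1)+r, t(2)-r : t(2)+r);   % patch centred at t
d2 = sum((Ps(:) - Pt(:)).^2);               % squared Euclidean patch distance
w = exp(-d2 / h^2);                         % large weight for similar patches
end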

OBSERVATION

Quantitative Analysis

1. Normalized mean-square error (NMSE):

$$\mathrm{NMSE} = \frac{\sum_{i=1}^{N} \sum_{j=1}^{M} \left[f(i,j) - f'(i,j)\right]^2}{\sum_{i=1}^{N} \sum_{j=1}^{M} \left[f(i,j)\right]^2}$$

where $f(i,j)$ and $f'(i,j)$ denote the gray value of the pixel $(i,j)$ in the original image and in the reconstructed image, respectively. The smaller the value of the NMSE, the better the performance of the algorithm.

Fig.1 The plot of NMSE along with the number of iterations for phantom 1


2. Signal-to-noise ratio (SNR):

$$\mathrm{SNR} = \frac{\sum_{i=1}^{N} \sum_{j=1}^{M} \left[f'(i,j) - \bar{f}\right]^2}{\sum_{i=1}^{N} \sum_{j=1}^{M} \left[f'(i,j) - f(i,j)\right]^2}$$

where $f'(i,j)$ and $f(i,j)$ are as explained above and $\bar{f}$ is the average gray level of all pixels in the reconstructed image. The numerator stands for the signal power and the denominator for the noise power; the larger the SNR, the better the algorithm performs.

Fig 2. The plot of SNR along with the number of iterations for phantom 1


3. Mean-square error (MSE):

$$\mathrm{MSE} = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left[f(i,j) - f'(i,j)\right]^2$$

The smaller the value of the MSE, the better the performance of the algorithm.

Fig 3. The plot of MSE along with the number of iterations for phantom 1


4. Peak signal-to-noise ratio (PSNR):

$$\mathrm{PSNR} = 10 \log_{10}\!\left(\frac{\mathrm{Max}_I^2}{\mathrm{MSE}}\right)$$

where $\mathrm{Max}_I$ is the maximum possible pixel value of the image. The larger the PSNR, the better the algorithm performs.

Fig.4 The plot of PSNR along with the number of iterations for phantom 1
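
For reference, these four scalar metrics follow directly from their definitions. A short sketch (f is the original image, g the reconstruction, both double arrays of equal size; here we take Max_I as the maximum of the original image, which is an assumption of ours):

e2 = (f - g).^2;                                   % per-pixel squared error
NMSE = sum(e2(:)) / sum(f(:).^2);                  % normalized mean-square error
SNR = sum((g(:) - mean(g(:))).^2) / sum(e2(:));    % signal power over noise power
MSE = mean(e2(:));                                 % mean-square error
PSNR = 10 * log10(max(f(:))^2 / MSE);              % peak SNR in dB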


5. Structural Similarity (SSIM) index:

$$\mathrm{SSIM}(x, y) = \frac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$$

with $\mu_x$ the average of x, $\mu_y$ the average of y, $\sigma_x^2$ the variance of x, $\sigma_y^2$ the variance of y, $\sigma_{xy}$ the covariance of x and y, and $c_1$, $c_2$ two constants that stabilize the division when the denominator is small.

Fig.5 The plot of MSSIM along with the number of iterations for phantom 1
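
A global, single-window version of the SSIM formula above can be sketched as follows (standard SSIM averages this over local windows; c1 and c2 below are the usual choices for images scaled to [0, 1], an assumption of ours):

mx = mean(f(:)); my = mean(g(:));              % means of the two images
vx = var(f(:)); vy = var(g(:));                % variances
cxy = mean((f(:) - mx) .* (g(:) - my));        % covariance of f and g
c1 = 0.01^2; c2 = 0.03^2;                      % stabilizing constants (L = 1)
SSIM = ((2*mx*my + c1) * (2*cxy + c2)) / ((mx^2 + my^2 + c1) * (vx + vy + c2));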


6. Correlation coefficient (CC):

$$\mathrm{CC} = \frac{\sum_{n=1}^{Q} (\mu_{r,n} - \bar{\mu}_r)(\mu_{0,n} - \bar{\mu}_0)}{\sqrt{\sum_{n=1}^{Q} (\mu_{r,n} - \bar{\mu}_r)^2 \sum_{n=1}^{Q} (\mu_{0,n} - \bar{\mu}_0)^2}}$$

Fig.6 The plot of CC along with the number of iterations for phantom-1

7. Correlation parameter (CP):

The correlation parameter should be close to unity for an optimal edge-preservation effect.

Fig.7 The plot of CP along with the number of iterations for phantom-1


A pixel-intensity versus location graph is used to compare the various reconstructed images.

Fig 8. Pixel intensity graph for phantom-1


Table 1

Phantom-1

Fig 9. Simulation and results for Phantom-1


Table 2

Phantom-2

Fig. 10 Simulation and results for Phantom-2


Table 3

Phantom 3

Fig. 11 Simulation and results for Phantom-3


Conclusion

In summary, medical imaging provides a non-invasive technique to look at the functional and structural information of the internal organs. Measuring the radioactivity distribution gives physiological information about the patient. The most widely used technique to reconstruct this image from the acquired data is known as Filtered Back Projection (FBP).

The main purpose of this study was to verify whether or not we can reconstruct images with a limited dataset, to compare different iterative algorithms, and to assess the effect of applying the AD filter to the reconstructed image. This approach provides a better estimate of the exact location of the radioactivity. Traditional reconstruction algorithms like FBP cannot reconstruct an image with a limited dataset, so we work with iterative methods like ML-EM, OSEM, and MRP by modifying the probability matrix (system matrix). The best results were produced by the MRP+AD method. The plots of the various comparison metrics, NMSE, MSE, SNR, and PSNR, are shown in the observations. Fig. 2 shows the curve of SNR versus iterations; from it we see that the SNR produced by MRP+AD is above the others.

For further work, we suggest implementing the same code in a lower-level language (e.g., C++ with OpenCV), which would decrease the reconstruction time. We have also used other filters, such as BM3D, Total Variation, Huber, QM, and the Probabilistic Patch-Based (PPB) filter, to obtain better results.

References

1. Qian He and Lihong Huan, "Penalized Maximum Likelihood Algorithm for Positron Emission Tomography by Using Anisotropic Median-Diffusion."

2. Rahul Patel, "Maximum Likelihood-Expectation Maximization Reconstruction with Limited Dataset for Emission Tomography."

3. Damien Farrell, "Investigation and Demonstration of a Technique in CT Image Reconstruction for Use with Truncated Data."

4. Jeffrey A. Fessler, "Penalized weighted least-squares image reconstruction for positron emission tomography," IEEE Trans. Med. Imaging 13(2) (1994) 290-300.

5. J. Zhou, L.M. Luo, "Sequential weighted least squares algorithm for PET image reconstruction," Digit. Signal Process. 16 (2006) 735-745.

6. Yan Jianhua, "Investigation of Positron Emission Tomography Image Reconstruction," Huazhong University of Science & Technology, Wuhan, 2007.

7. P. Perona, J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Trans. Pattern Analysis and Machine Intelligence 12(7) (1990) 629-639.

8. http://en.wikipedia.org/wiki/Iterative_reconstruction

9. http://depts.washington.edu/nucmed/IRL/pet_intro/