
Combined analysis of FDG-PET and

CT images with application to brain

cancer using Matlab-based program

Leon Walter Vink

10189912

University of Amsterdam

Master Thesis

Study: Physics: Life and Health

Period of writing: 1st of September – 26th of June 2015

Date of handing in: June 2016

Supervisor Holland: Gustav Strijkers

Supervisor Portugal: Nuno Matela

Second assessor: Ton van Leeuwen


Abbreviations

FDG Fluorodeoxyglucose

CT Computed Tomography

PET Positron Emission Tomography

FC Fundação Champalimaud

ITA Injection-to-Acquisition

AD Alzheimer Disease

DLB Dementia with Lewy Bodies

FTD Fronto-Temporal Dementia

PPA Primary Progressive Aphasia

NPH Normal Pressure Hydrocephalus

TSH Thyroid Stimulating Hormone

QOL Quality of Life

DOF Degrees of Freedom

LOR Line of Response

FT Fourier Transformation

HU Hounsfield Unit

TOF Time of Flight

BP Back Projection

BPF Back Projection Filtered

FBP Filtered Back Projection

MI Mutual Information

SSD Sum of Squared Differences

SAD Sum of Absolute Differences


Index

1. Introduction
1.1. Medical Imaging Modalities
1.1.1. Computed tomography
1.1.2. Filtered back projection
1.1.3. Hounsfield units
1.1.4. Fluorodeoxyglucose - positron emission tomography
1.1.5. Decay correction
1.1.6. Standard Uptake Value
1.2. Image registration
1.2.1. Intensity based image registration
1.2.2. Mutual information based image registration
1.2.3. Transformations
1.2.4. Nearest neighbor interpolation
2. Methods
2.1. Sample of patients
2.2. Loading images into the main program
2.3. Visualization of images
2.4. Standard Uptake Value
2.5. Injection-to-Acquisition time
2.6. Rescaling the images
2.7. Creation of the reference image and standard deviation image
2.8. Image registration
2.9. Abnormality detection
2.10. Region of interest
2.11. Grey- and white matter detection
2.12. Additional research
3. Results
4. Discussion
4.1. Data
4.2. Grey matter detection
4.3. Grey matter layer detection
4.4. Extra visualization tools
4.5. Injection-to-Acquisition time differences
4.6. Brain map
4.7. Additional research
4.8. Suggested future research
5. Conclusion
6. References


Abstract:

Background: Improvement of imaging diagnostics is always desired. Early and correct diagnosis of brain cancer is dearly needed in order to start treatment at an early stage and to increase the survival rate and quality of life (QOL) of patients.

Aim: This paper aims to improve imaging diagnostics by developing a program that assists during brain cancer diagnosis. The program should be able to analyze combined FDG-PET and CT images, which should improve the accuracy and reproducibility of the diagnosis. Combining both modalities provides the information needed to diagnose brain cancer and plan the treatment.

Method: For this paper, Matlab is used to develop such a program. The program should contain abnormality detection, image registration and an abnormality location comparison of both modalities. FDG-PET describes the metabolic activity in the brain and can be used to characterize the tumor, since a tumor shows a higher metabolic activity. The CT images are used as anatomical reference in order to determine the location and the size of the tumor.

Results: For optimal abnormality detection, a reference image and a standard deviation map can be produced from a maximum of 10 loaded images. These can be loaded into the program and used for the abnormality detection of either FDG-PET or CT, depending on which images were used to calculate the reference image and standard deviation image. Before detecting abnormalities in FDG-PET, the image needs to be corrected for the injection-to-acquisition (ITA) time. After the abnormality detection, the locations of the abnormalities in both modalities are compared; if they coincide, the location is marked as a region of interest. Semi-automatic grey matter detection was also developed.

Conclusion: The program works and is able to execute a combined analysis of FDG-PET and CT images. It might be used as a clinical research tool; if so, it will be used to detect differences in FDG uptake.

Keywords: Abnormalities, FDG-PET, CT, Injection-to-acquisition time, standard deviation, Matlab, program, Reference image, patient image


1. Introduction

For ages between 15 and 34, about 2% of the mortality rate in men and 1.4% of the mortality rate in women is due to brain cancer. In 39% of the cases the tumor is a glioblastoma multiforme (GBM), a grade IV tumor with a survival rate of only 6% two years after diagnosis [1]. Early and correct diagnosis is therefore critical to start treatment at an early stage and to increase the survival rate and quality of life (QOL) of patients [2]. In recent years, several methods have been developed to correctly classify brain cancer by imaging; however, improved imaging diagnostics are still dearly needed. To this end, the Fundação Champalimaud came up with the idea to create a Matlab-based computer program that assists in the analysis of fluorodeoxyglucose positron emission tomography (FDG-PET) and computed tomography (CT) brain images of patients with brain cancer. Radioactive fluorodeoxyglucose (FDG), a sugar analog, accumulates in cancerous tissue because of the increased metabolic activity in tumors and can therefore be used to diagnose, localize and characterize brain tumors [3][4][5]. CT images are used to provide an anatomical reference. When the tumor reaches a certain size, it also becomes visible on CT images, which provides additional confirmation of tumor location and size. Combining both modalities provides the information needed to diagnose brain cancer and plan the treatment [6][7]. Visual interpretation of FDG-PET and CT images is difficult and subjective. Objective quantification of FDG uptake requires co-registration of CT and PET images and segmentation of the brain, so that FDG-PET uptake in the tumor and various relevant regions of the brain can be quantified objectively. This approach improves accuracy and reproducibility. PET and CT images used in this report were acquired by the Fundação Champalimaud (FC).

The primary features of the developed program are:

(1) Creation of a reference image and a standard deviation map from loaded images. This is practical for research purposes.

(2) Image registration: registering the reference image with the patient image in order to prepare for analysis.

(3) Finding abnormalities in terms of standard deviation differences relative to a reference image and its standard deviation map, to see whether the patient image differs from an average normal brain image.

(4) Comparing abnormality locations in both modalities, to see whether the locations of the abnormalities in CT coincide with the ones from FDG-PET.

(5) Enlargement and 3D visualization for efficient diagnosis.

(6) Semi-automatic grey matter detection and volume calculation, as a basis for grey matter layer detection.

(7) Grey matter layer detection to exclude false positives in regions of increased metabolic activity. (Note: this was not achieved within the available time. Moreover, since the grey matter detection is itself an approximation, a grey matter layer detection built on it would also be an approximation and therefore unreliable; a better grey matter detection would be needed before the grey matter layer detection can be created.)

(8) A brain map, so that the differences can be precisely located.
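Feature (3) amounts to a voxel-wise z-score test against the reference image. A minimal sketch in Python (the thesis program itself is written in Matlab; the function name and the 2-standard-deviation threshold are illustrative assumptions, not the thesis implementation):

```python
import numpy as np

def abnormality_mask(patient, reference, std_map, threshold=2.0):
    """Mark voxels that deviate from the reference image by more than
    `threshold` standard deviations (voxel-wise z-score test)."""
    deviation = np.abs(patient - reference) / np.maximum(std_map, 1e-9)
    return deviation > threshold

# Toy 1-D "images": only the third voxel deviates strongly.
reference = np.array([10.0, 10.0, 10.0, 10.0])
std_map   = np.array([ 1.0,  1.0,  1.0,  1.0])
patient   = np.array([10.5,  9.0, 16.0, 10.2])

mask = abnormality_mask(patient, reference, std_map)
```

In practice the same comparison runs over full 3-D volumes, using the reference and standard deviation images produced by feature (1).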


The analysis part consists of a program based on semi-automatic grey- and white matter detection in the CT images using thresholds. A well-known confounding factor in FDG-PET images is that thick grey matter layers display increased metabolic activity [3]. The program will exclude these false positives from further analysis. Since grey- and white matter detection is difficult in CT, because Hounsfield Unit (HU) values can differ between patients, user interaction is required to set appropriate HU thresholds for correct segmentation [3].

The Matlab program can be applied beyond the analysis of brain cancer, e.g. for the characterization of depression, epilepsy and dementia, conditions which are known to display alterations in brain FDG uptake. After further validation studies, the developed program might develop into a clinical tool that assists the radiologist and oncologist in analyzing these types of images. Analysis of differences in FDG uptake, in terms of standard deviations, in normal-appearing grey matter can then be performed for the involved brain conditions, including epilepsy and dementia. Additionally, information can be extracted from FDG-PET image analysis of measurements with different injection-to-acquisition times.

At the FC, FDG-PET images are acquired using an injection-to-acquisition (ITA) time of either < 60 minutes or > 3 hours. If a patient is scanned with different ITA times, the differences can be revealed in standard deviations. This may help to determine which ITA time is preferred and how fast the contrast agent is distributed. Since after three hours some of the contrast agent has been filtered out of the vascular system by the kidneys, the amount of contrast agent is lower when the patient is scanned > 3 hours after injection than when scanned < 60 minutes after injection. This is, however, only true for longer ITA times; for short ITA times the uptake only changes with tumor grade [8].

Proposed additional research using the program:

(1) Analysis of differences in FDG uptake between normals and abnormals, in which normals are healthy patients with no abnormal FDG uptake and abnormals are patients with an abnormal FDG uptake. Using this analysis, new diagnostic parameters may be found.

(2) Analysis of the differences in FDG uptake between patients with an ITA time of < 60 minutes and patients with an ITA time of > 3 hours. This may give information about the optimal ITA time.

The layout of this report is as follows. In order to understand the underlying theory of the program, some basic background is explained first. The basics of computed tomography (CT) and fluorodeoxyglucose positron emission tomography (FDG-PET) are introduced, including x-ray interaction with and attenuation by the body, filtered back projection, and the transport and detection of FDG as contrast agent. Furthermore, a mathematical explanation of image registration is provided. Then follow the methods, where the content of the project is discussed and the sample of patients is described. Finally, the results show the completed program and the outcome of the additional research.


1.1. Medical Imaging Modalities

The program created during this project is called CompareMechanism and uses CT and FDG-PET images. Combining these two modalities for diagnosis creates a reliable set of parameters to diagnose brain cancer. FDG-PET shows the metabolic activity in the brain, which is particularly interesting since a cancerous region has a significantly higher metabolic activity, pointing to the location and characterization of the tumor [4]. CT is helpful because of its morphological images, which can confirm the localization and the size of the tumor [7]. At the FC, the PET-CT Philips Gemini TF TOF 16 is used at 120 kVp with time of flight (TOF). This scanner provides 128 x 128 matrix images with a field of view (FOV) of 256 mm and a slice thickness of 2 mm.

1.1.1. Computed tomography

Computed tomography (CT) is a medical imaging technique based on the attenuation of X-rays by tissues in the body. CT is superior in showing bony structures, in contrast to MR, which better shows soft tissue structures [9]. The word "tomography" refers to imaging in slices, which reflects the method of scanning: the CT scanner scans slice by slice, so that the reconstruction shows multiple slices representing the complete region of interest.

CT uses X-rays that penetrate the body. Due to the different absorption factors of the different types of tissue, the image can be reconstructed. There are several types of CT scanners, but the main idea is that the X-ray tube turns around the body, shooting X-ray beams at the body while the patient moves through the center of the machine [10].

Figure 1: The rotating X-ray tube sends X-rays through the body, which are detected by rotating X-ray detectors on the other side of the body. The patient lies on a bed that slides through the machine.


In the patient, the photons of the X-ray beam interact with molecules. Depending on the energy of the photon, the interaction is either photoelectric absorption or Compton (incoherent) scatter [10]. In photoelectric absorption, the photon is absorbed completely and its energy is used to eject an electron in a random direction, which can ionize neighboring atoms. There are, however, no scatter events [11].

Figure 2: Photoelectric absorption. The energy of the incident gamma ray is completely absorbed by the electron, and this energy absorption causes the electron to be ejected [11].

Compton scattering is an incoherent kind of scatter. As the photon interacts with the atom, part of the photon's energy is used to eject or ionize an electron. However, because the photon still has energy left, it scatters off and moves on in an altered direction [12]. The resulting wavelength shift can be calculated with the following formula:

λ′ − λ = (h / (m_e c)) (1 − cos θ) (1)

where λ′ is the wavelength of the altered photon in meters, λ the wavelength of the original photon in meters, h Planck's constant in J·s, m_e the mass of an electron in kilograms, c the speed of light in meters per second and θ the scattering angle of the altered photon in degrees [12]. In this formula, h/(m_e c) can be considered the Compton wavelength and is equal to 2.43·10⁻¹² meters.

Using the photon energy formula

E = hc / λ (2)

where E is the energy in joules, the energy of the photons can be calculated [12]. Substituting this into formula (1), the formula can be rewritten as

E_γ′ = E_γ / (1 + (E_γ / (m₀c²)) (1 − cos θ)) (3)

For annihilation photons, with E_γ = 511 keV = m₀c², this reduces to

E_γ′ = 511 keV / (2 − cos θ) (4)

where E_γ′ is the photon energy after the collision and E_γ the photon energy before the collision [14]. In turn, this scattered photon can either be absorbed or scattered again afterwards. The ionized electron can cause DNA damage [12].
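Equations (3) and (4) are easy to evaluate numerically. A short Python check (using the rest energy m₀c² = 511 keV; the thesis program itself is Matlab):

```python
import math

def compton_scattered_energy(E_keV, theta_rad):
    """Photon energy after Compton scattering, eq. (3), with m0*c^2 = 511 keV."""
    return E_keV / (1.0 + (E_keV / 511.0) * (1.0 - math.cos(theta_rad)))

# For a 511 keV annihilation photon this matches eq. (4), 511 / (2 - cos(theta)):
E_90  = compton_scattered_energy(511.0, math.pi / 2)  # 90-degree scatter -> 255.5 keV
E_180 = compton_scattered_energy(511.0, math.pi)      # full backscatter -> ~170.3 keV
```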

Figure 3: Compton scattering. The energy of the incident photon is partly absorbed by an electron, which is ejected due to this energy absorption. As a result, a lower-energy photon scatters off in an altered direction [12].

The probability of photoelectric absorption is approximately proportional to (Z/E)³, where Z is the atomic number of the impacted atom and E the photon energy. As the energy increases, the likelihood of interaction with atoms decreases. The effect is dominant for energies lower than 100 keV. Compton scattering occurs over a wider range of energies; however, at higher energies the number of scattering events decreases [13].
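The (Z/E)³ proportionality can be illustrated numerically. A Python sketch (the proportionality constant is omitted, so only ratios are meaningful, and Z = 20 is an arbitrary example value):

```python
def photoelectric_weight(Z, E_keV):
    """Relative probability of photoelectric absorption, proportional to (Z/E)^3."""
    return (Z / E_keV) ** 3

# Halving the photon energy makes photoelectric absorption 8x as likely
# for the same material (here Z = 20, an illustrative choice):
ratio = photoelectric_weight(20, 50) / photoelectric_weight(20, 100)
```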

The detectors on the opposite side of the X-ray tube record the amount of X-rays coming from the body. After the X-rays are recorded, the image can be reconstructed using filtered back projection [15].

The attenuation of the X-rays can be described by the Lambert-Beer law [15]:

I = I₀ e^{−µx} (5)

where I is the intensity of the X-ray beam after penetration of the body, I₀ the intensity of the beam entering the body, x the distance traveled and µ the attenuation coefficient, which varies per tissue type.


Figure 4: The simplest graphical explanation of the Lambert-Beer law [15].

However, the body is not homogeneous and contains multiple tissue types.

Figure 5: The Lambert-Beer law for multiple tissue types [15].

Therefore the Lambert-Beer law becomes:

I = I₀ e^{−(µ₁x₁ + µ₂x₂ + µ₃x₃ + ⋯)} (6)

I = I₀ e^{−Σᵢⁿ µᵢxᵢ} (7)

In this case every type of tissue i has its own attenuation coefficient µᵢ and thickness xᵢ, for i = 1, 2, 3, etc. This formula can be extended, because the X-rays produced by the CT scanner span a range of energies E:

I = I₀ ∫₀^{E_max} e^{−Σᵢⁿ µᵢ(E)xᵢ} dE (8)

For a single energy this can be rewritten as:

ln(I₀ / I) = ∫₀ˣ µ(x′) dx′ (9)

In the case of a brain tumor, the attenuation coefficient is higher than that of the surrounding brain matter due to its dense structure; it should therefore appear brighter on the image. However, as said before, CT is more precise for bony structures, so small brain tumors are not well visualized.
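The multi-layer attenuation of equation (7) can be sketched as follows (a Python sketch rather than Matlab; the attenuation coefficients and thicknesses are illustrative numbers, not measured values):

```python
import math

def transmitted_intensity(I0, layers):
    """Lambert-Beer law for multiple tissue types, eq. (7):
    I = I0 * exp(-sum(mu_i * x_i)) over all (mu_i, x_i) layers."""
    exponent = sum(mu * x for mu, x in layers)
    return I0 * math.exp(-exponent)

# Illustrative layers (mu in 1/cm, thickness in cm): soft tissue, bone, soft tissue.
layers = [(0.20, 5.0), (0.50, 1.0), (0.20, 4.0)]
I = transmitted_intensity(100.0, layers)   # intensity left after the beam exits
```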

1.1.2. Filtered back projection

As the detectors, which rotate at the same speed as the X-ray tube, detect the X-rays, several 1-D images are stored. These images are intensity plots, or projections, of the amount of X-rays per location: the y-axis holds the intensity of the X-rays coming through the object of interest and the x-axis holds the spatial information.

Figure 6: Through the rotating acquisition of the CT scanner, the projections show the 1-D representation of a 2-D object [16].

These projections can be described by the Radon transform. The Radon transform establishes a relationship between the projections of a two-dimensional object obtained by rotational scanning and the 2-dimensional object itself. Each single projection is obtained by scanning along the lines of response (LOR) at a fixed angle θ, which represents a projective transformation of the 2-dimensional function onto a polar coordinate space (x′, θ). The result can be shown in a sinogram [16]. An example is shown below in figure 7.

Figure 7: The sinogram collects the projections over all angles and represents the result in polar coordinates [16].

The Radon transform is defined as

p_θ(x′) = ∫∫_{−∞}^{+∞} f(x, y) δ(x cos θ + y sin θ − x′) dx dy (10)

where p_θ(x′) is the projection of the 2-dimensional function f(x, y) in the polar coordinate space (x′, θ), obtained by integrating along the line whose normal vector points in the θ direction, as shown in figure 8.


Figure 8: A single projection of a 2-D object [16].

Since p_θ(x′) is obtained by integrating over y′, and since the coordinate system (x′, y′) is obtained by rotating the (x, y) coordinate system over the angle θ, the relation between x, y, x′, y′ and θ can be expressed as:

x′ = x cos θ + y sin θ (11)

y′ = −x sin θ + y cos θ (12)

Therefore, since the delta function only fires when x′ = x cos θ + y sin θ, and since dx dy = dx′ dy′, p_θ(x′) can be written as:

p_θ(x′) = ∫_{−∞}^{+∞} f(x′, y′) dy′ (13)

Using the Fourier Slice Theorem, or Central Slice Theorem, it can be proven that the Fourier transformation (FT) of a projection p_θ(x′) is equal to the cross-section of the 2-D FT of the object perpendicular to the direction of the projection. This is important if we want to reconstruct the original image [16]. A visualization of the Central Slice Theorem is shown in the image below.

Figure 9: The 1-D Fourier transformation of the projection is equal to a slice of the 2-D Fourier transformation of the object [16].

Taking the 1-D FT of a single projection, P_θ(v) can be expressed as

P_θ(v) = ∫_{−∞}^{+∞} p_θ(x′) e^{−ivx′} dx′ (14)

Using formula (13) and substituting formulas (11) and (12), we get

P_θ(v) = ∫∫_{−∞}^{+∞} f(x′, y′) e^{−ivx′} dx′ dy′ (15)

P_θ(v) = ∫∫_{−∞}^{+∞} f(x, y) e^{−iv(x cos θ + y sin θ)} dx dy (16)

This expression for P_θ(v) should equal a slice of the 2-D FT of f(x, y). In order to calculate the 2-D FT of f(x, y), we transform to polar coordinates using

v_x = v cos θ (17)

v_y = v sin θ (18)

F(v_x, v_y) = ∫∫_{−∞}^{+∞} f(x, y) e^{−i(v_x x + v_y y)} dx dy (19)

Substituting formulas (17) and (18):

F(v cos θ, v sin θ) = ∫∫_{−∞}^{+∞} f(x, y) e^{−iv(x cos θ + y sin θ)} dx dy (20)


This shows that formulas (16) and (20) are equal, so that, as shown in figure 9, we can state:

P_θ(v) = F(v cos θ, v sin θ) = F(v_x, v_y) (21)

So far it has been shown that taking the 1-D Fourier transform of a single projection p_θ(x′) gives P_θ(v), which is equal to a slice of the 2-D FT F(v_x, v_y) of the object itself. In order to reconstruct the original image f(x, y), the 2-D inverse FT of F(v_x, v_y) needs to be taken [16].
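The Central Slice Theorem of equation (21) can be verified numerically for θ = 0, where the projection is a plain column sum and its 1-D FT must equal the v_y = 0 row of the 2-D FT. A numpy sketch (the discrete FT stands in for the continuous one, and the random array is an arbitrary test object):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random((64, 64))     # an arbitrary discrete 2-D object f(x, y)

p0 = f.sum(axis=0)           # projection at theta = 0: integrate over y
P0 = np.fft.fft(p0)          # 1-D FT of the projection, P_theta(v)
F = np.fft.fft2(f)           # 2-D FT of the object, F(v_x, v_y)

# Eq. (21): the 1-D FT of the projection equals the central slice F(v_x, 0).
slice_match = np.allclose(P0, F[0, :])
```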

The simplest reconstruction method is the back-projection (BP) method, which sums the projections over all angles θ. However, it will be shown that using only the simple BP method a blur appears [16]. Using formula (10), the BP can be expressed as

BP(x, y) = ∫₀^π p_θ(x′) dθ = ∫₀^π ∫∫_{−∞}^{+∞} f(x, y) δ(x cos θ + y sin θ − x′) dx dy dθ (22)

In order to avoid confusion, the integration variables x and y are replaced by q and r. Using formula (11) for x′ we get

BP(x, y) = ∫₀^π ∫∫_{−∞}^{+∞} f(q, r) δ(q cos θ + r sin θ − (x cos θ + y sin θ)) dq dr dθ (23)

= ∫∫_{−∞}^{+∞} f(q, r) [ ∫₀^π δ((q − x) cos θ + (r − y) sin θ) dθ ] dq dr

In order to proceed, the argument of the delta function is rewritten. We state that

(q − x) cos θ + (r − y) sin θ = √((q − x)² + (r − y)²) (sin φ · cos θ + cos φ · sin θ) (24)

where sin φ · cos θ + cos φ · sin θ can be replaced by sin(θ + φ). To simplify matters, the Dirac delta function is rewritten as

δ(g(θ)) = Σ_k (1 / |dg(θ)/dθ|_{θ=θ_k}) δ(θ − θ_k) (25)

Here the θ_k are the finite number of angles for which g(θ) = 0. In our case this requires sin(θ + φ) to be zero, which implies θ = π − φ. Substituting formulas (24) and (25) into formula (23) provides

BP(x, y) = ∫∫_{−∞}^{+∞} f(q, r) (1 / |√((q − x)² + (r − y)²) cos(π)|) δ(θ − (π − φ)) dq dr (26)

= ∫∫_{−∞}^{+∞} f(q, r) (1 / (√((q − x)² + (r − y)²) · 1)) dq dr


Using the definition of convolution, BP becomes

BP(x, y) = f(x, y) ∗ (1 / √(x² + y²)) = f(x, y) ∗ (1/s) (27)

This shows that when reconstructing the image with the simple BP method, the image is blurred by convolution with a factor 1/s. To de-blur the image, the back-projection filtering (BPF) method can be used. This has the disadvantage, however, that due to the convolution with the filter term, BP(x, y) is accentuated more than the original image f(x, y). The problem lies in the fact that the projection data is first back-projected, then filtered in Fourier space and finally inverse Fourier transformed. To apply this method, first consider how the blur behaves under the FT [16].

FT[1/s] = FT[1 / √(x² + y²)] = 1 / √(v_x² + v_y²) = 1/v (28)

This shows that

FT[f(x, y)] = F(v_x, v_y) = v · FT[BP(x, y)] (29)

In words: the 2-D FT of the original image, F(v_x, v_y), is equal to the un-blur factor v times the 2-D FT of the back-projected image. This operation is called inverse filtering or deconvolution. Now, in order to reconstruct the original image, the inverse FT of v · FT[BP(x, y)] needs to be taken [16]. This is the same as

f(x, y) = BP(x, y) ∗ FT⁻¹[v] (30)

However, in order to avoid accentuating BP(x, y) more than the original image f(x, y), another method, the Filtered Back-Projection (FBP) method, is used. This method applies the filter before performing the back projection, which avoids BP(x, y) being accentuated more than the original image f(x, y), as happens in the BPF method, where the filtering is applied after the back projection. In order to reconstruct the original image, the FBP method takes the inverse FT of the 2-D FT of the object F(v_x, v_y) [16]:

f(x, y) = ∫∫_{−∞}^{+∞} F(v_x, v_y) e^{2iπ(v_x x + v_y y)} dv_x dv_y (31)

Converting this back to polar coordinates provides

f(x, y) = ∫₀^{2π} ∫₀^{+∞} v F(v cos θ, v sin θ) e^{2πiv(x cos θ + y sin θ)} dv dθ (32)

The angular integral can be separated into the two subintervals (0, π) and (π, 2π). Also, F(v cos θ, v sin θ) can be substituted with P_θ(v), as proved before by the Central Slice Theorem and shown in formula (21):

f(x, y) = ∫₀^π ∫₀^{+∞} v P_θ(v) e^{2πiv(x cos θ + y sin θ)} dv dθ + ∫_π^{2π} ∫₀^{+∞} v P_θ(v) e^{2πiv(x cos θ + y sin θ)} dv dθ (33)


Of these two terms, the one over the subinterval (π, 2π) can be altered by substituting θ → θ + π and then v → −v:

f₂(x, y) = ∫₀^π ∫₀^{+∞} v P_{θ+π}(v) e^{2iπv(x cos(θ+π) + y sin(θ+π))} dv dθ (34)

f₂(x, y) = ∫₀^π ∫₀^{+∞} v P_{θ+π}(v) e^{−2iπv(x cos θ + y sin θ)} dv dθ (35)

f₂(x, y) = ∫₀^π ∫_{−∞}^{0} (−v) P_{θ+π}(−v) e^{2iπv(x cos θ + y sin θ)} dv dθ (36)

In frequency space the following symmetry of the projections can be stated:

P_θ(v) = P_{θ+π}(−v) (37)

This changes formula (33) to

f(x, y) = ∫₀^π ∫_{−∞}^{+∞} |v| P_θ(v) e^{2iπv(x cos θ + y sin θ)} dv dθ = ∫₀^π ∫_{−∞}^{+∞} |v| P_θ(v) e^{2iπvx′} dv dθ (38)

which is the final formula for the FBP. As stated before, the 1-D FTs of the projections, P_θ(v), are for every angle θ first multiplied by the 1-D ramp filter |v|, after which they are back-projected. In the image below an illustration is shown of the simple BP method without filtering and of the FBP [16].

Figure 10: The result of the back projection method and the filtered back projection method [16].

In this image it is obvious that without a filter the image is blurred by the convolution. Besides the halo around the white ball, there is also an intensity difference inside the ball: the inside appears whiter than the edges, because the simple back projection over-intensifies the middle of the ball. This blur was determined above to be 1/s. This so-called 1/s blur can be filtered out using the FBP [16]. Image reconstruction can, however, also be done with iterative algorithms; due to increased computer power, the use of iterative reconstruction methods is growing [17].
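The complete FBP recipe of equation (38), ramp-filtering each projection's 1-D FT by |v| and then back-projecting, fits in a few lines. A sketch in Python with numpy/scipy standing in for the thesis's Matlab environment (the disk phantom and angle set are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import rotate

def fbp(sinogram, angles):
    """Filtered back projection, eq. (38): ramp-filter every projection
    in Fourier space, then smear it back across the image plane."""
    n = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))                      # the |v| ramp filter
    recon = np.zeros((n, n))
    for proj, theta in zip(sinogram, angles):
        filtered = np.real(np.fft.ifft(np.fft.fft(proj) * ramp))
        smear = np.tile(filtered, (n, 1))                 # back-project along y'
        recon += rotate(smear, theta, reshape=False, order=1)
    return recon * np.pi / len(angles)

# Build a sinogram of a disk phantom by rotating and summing (cf. eq. 13).
n = 64
y, x = np.mgrid[:n, :n]
phantom = ((x - n / 2) ** 2 + (y - n / 2) ** 2 < (n / 6) ** 2).astype(float)
angles = np.arange(0.0, 180.0, 3.0)
sinogram = np.array([rotate(phantom, -a, reshape=False, order=1).sum(axis=0)
                     for a in angles])

recon = fbp(sinogram, angles)   # closely resembles the phantom, up to scaling
```

Leaving out the multiplication by the ramp filter reproduces the blurred simple-BP result of figure 10.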

1.1.3. Hounsfield units


Pixel intensity in CT images is expressed in terms of Hounsfield Units (HU). The HU is the unit of the quantitative measure of the linear attenuation coefficient [37]; it is a linear scale based on the attenuation coefficient [18]:

HU = 1000 · (µ − µ_water) / (µ_water − µ_air) (39)

As the tissue gets denser, the attenuation coefficient gets larger and so does the HU. In the image this appears brighter [19].
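Equation (39) is easy to sanity-check in code: water must come out at 0 HU and air at −1000 HU. A Python sketch (the attenuation coefficients are illustrative values, not measured ones):

```python
def hounsfield(mu, mu_water=0.19, mu_air=0.0002):
    """Hounsfield Units from a linear attenuation coefficient, eq. (39)."""
    return 1000.0 * (mu - mu_water) / (mu_water - mu_air)

hu_water = hounsfield(0.19)     # 0 HU by definition
hu_air   = hounsfield(0.0002)   # -1000 HU by definition
hu_bone  = hounsfield(0.38)     # denser tissue -> large positive HU (brighter)
```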

1.1.4. Fluorodeoxyglucose - Positron Emission Tomography

Positron emission tomography (PET) is a noninvasive medical imaging technique that assesses biochemical processes, such as the metabolic activity in, for instance, the brain. It images biological function through molecular imaging. To do so, a contrast agent is used. The main contrast agent is fluoro-deoxy-D-glucose (FDG), a close analog of glucose labeled with the isotope 18F. This has the great advantage that the contrast agent can pass the blood-brain barrier [20]. Like normal sugar, FDG is transported by the transporter GLUT1 (glucose transporter 1) [45]. Due to an increased glucose consumption and an increased hexokinase activity in cancerous cells, over-expression of metabolic activity can point to cancerous regions. However, in contrast to glucose, FDG cannot be metabolized further and therefore accumulates [21], [22]. 18F is produced in a cyclotron and has a half-life of 109.7 minutes [20]. Using the half-life, the activity of the contrast agent can be calculated with

A(t) = A(0) e^{−t · ln(2)/τ} (40)

where A(t) is the activity in becquerel at time t in minutes, A(0) the activity in Bq at t = 0 and τ the half-life in minutes [24]. The isotope decays by positron decay, in which a proton (p) transforms into a neutron (n), emitting a positron (e⁺) and a neutrino (ν) [23].

p → n + e⁺ + ν (41)

This positron travels several millimeters before it loses its kinetic energy and stops moving. At the halting position, the positron is captured in the electric field of an electron (e⁻) and the two annihilate into two photons (γ) [23]:

e⁺ + e⁻ → γ + γ (42)

This is a process where both the mass of the electron and the proton is converted into

energy with the following formula from Einstein:

𝐸 = 𝑚𝑐2 (43)

where E is the energy in joules, m the mass in kilograms, and c the speed of light in meters per second. Since the mass of a positron is the same as that of an electron, 9.10938356·10⁻³¹ kilograms (but with opposite charge), and the speed of light is 299 792 458 meters per second, the total energy can be calculated to be ~1022 keV.


This corresponds to ~511 keV per gamma photon, since the net momentum of the system is negligible.

Figure 11: Annihilation of a positronium into two photons with equal energy [23].

The emitted photons travel at an angle of 180 degrees from each other if the velocity of the positronium (the bound state of the electron and its counterpart, the positron) is neglected. Due to this phenomenon the line of response (LOR) can be determined by so-called coincidence detection [46]. Using timers that record the time of flight (TOF) of both photons, the position can be determined with more precision, so that the reconstruction becomes more accurate [23]. In the image below conventional and TOF PET are shown.

Figure 12: time of flight determines which of the two photons produced during one annihilation arrives earlier at the detectors, in order to pinpoint a more precise annihilation location [25].

Without TOF the annihilation could have happened anywhere on the LOR; there is no indication of a specific location. Using TOF, the arrival times of both photons at detectors on opposite sides give the distance of the annihilation point from the center point via the formula


Δx = c·Δt / 2 (44)

where Δt = t2 − t1 and Δx is the distance from the center point. This helps to improve the signal-to-noise ratio of the image and leads to a decreased radiation dose and a decreased scan time [26]. However, not all recorded events are true coincidence events. Some of them might be random or accidental, due to multiple events occurring at the same time, or due to a changed LOR caused by Compton scattering. These events are called random or scatter coincidences and are shown in the image below. All physical effects, namely Compton scattering and photoelectric absorption, exist in PET as well. Therefore PET also requires an image reconstruction procedure [25].

Figure 13: the three different coincidence types in PET: true, random, and scatter coincidences [25].

The CT part in PET-CT serves to generate anatomical images that allow attenuation correction and visualization of morphological and anatomical structures [20].
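As a numerical illustration of formula (44), a short Python sketch (Python is used here only for illustration; the thesis program itself is written in Matlab) converts an arrival-time difference into a distance from the center of the LOR:

```python
C_MM_PER_NS = 299.792458  # speed of light in millimetres per nanosecond

def tof_offset_mm(t1_ns, t2_ns):
    """Formula (44): distance of the annihilation point from the LOR centre,
    delta_x = c * delta_t / 2, with delta_t = t2 - t1."""
    return C_MM_PER_NS * (t2_ns - t1_ns) / 2.0

# An arrival-time difference of 0.5 ns corresponds to ~75 mm from the centre.
offset = tof_offset_mm(0.0, 0.5)
```

With sub-nanosecond timing resolution the annihilation point can thus be confined to a segment of the LOR rather than the whole line.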

1.1.5. Decay correction

Since the FDG-PET scan can be acquired with differing ITA times, the intensity of the voxels will differ per ITA time as well. As the contrast agent is filtered out of the blood by the kidneys, images acquired with a longer ITA time will have a lower contrast than images acquired with a shorter ITA time. In order to correct for this decay artifact, formula (40) is used:

A(t) = A(0)·e^(−t·ln(2)/τ) (45)

where A(t) is the activity in Bq at time t in minutes, A(0) the activity in Bq at t = 0, and τ the half-life in minutes [24]. In order to correct the image, formula (40) is rearranged to the form

Im(0) = Im(t)·e^(t·ln(2)/τ) (46)

This shows that A(t) is replaced by the obtained image Im(t) without any atomic decay correction, and A(0) by Im(0), which represents the decay-corrected image [24].
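A minimal Python sketch of the voxel-wise correction of formula (46), assuming a numpy image array (illustrative; the thesis implementation is in Matlab):

```python
import numpy as np

FDG_HALF_LIFE_MIN = 109.7  # half-life of 18F in minutes

def decay_correct(image, ita_minutes, half_life=FDG_HALF_LIFE_MIN):
    """Formula (46): rescale voxel intensities back to injection time,
    Im(0) = Im(t) * exp(t * ln(2) / tau)."""
    factor = np.exp(ita_minutes * np.log(2) / half_life)
    return image * factor

# After exactly one half-life the correction factor is 2:
img = np.array([10.0, 20.0])
corrected = decay_correct(img, 109.7)
```

The correction is a single scalar multiplication per image, so it can be applied to a whole 3D volume at once.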


1.1.6. Standard Uptake Value

The Standard Uptake Value (SUV) is a measure of FDG uptake and is commonly used in FDG-PET/CT oncology imaging and assessment. It can be used as a marker for assessing the patient's response to cancer therapy. Because the injected dose and the weight of the patient enter the calculation of the SUV, some of the variability associated with these two parameters is removed [43]. The SUV can be calculated using the formula

SUV = a / (D_inj / W) (47)

where a is the radioactivity concentration in kBq/mL, D_inj the decay-corrected injected dose in kBq, and W the weight of the patient in grams. This formula shows that a contrast agent distributed uniformly in the body will yield an SUV of 1 g/mL in every part of the body [43].
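The formula above can be sketched in a few lines of Python (illustrative only; the dose and weight below are hypothetical numbers, not patient data):

```python
def suv(activity_kbq_per_ml, injected_dose_kbq, weight_g):
    """SUV = a / (D_inj / W): activity concentration divided by
    injected dose per gram of body weight."""
    return activity_kbq_per_ml / (injected_dose_kbq / weight_g)

# Sanity check: a dose spread uniformly through the body yields SUV = 1,
# taking 1 g of tissue ~ 1 mL. Hypothetical numbers: 370,000 kBq injected
# into a 74,000 g patient gives 5 kBq/mL if distributed uniformly.
uniform_suv = suv(370000 / 74000, 370000, 74000)
```

A region accumulating twice the uniform concentration would accordingly score an SUV of 2.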

1.2. Image registration

Image registration is an iterative procedure that overlays two images and geometrically aligns them. These images can either be mono-modal or multi-modal, meaning that the so-called fixed and moving images come from the same or from different imaging modalities. First the moving image is transformed using an initial transformation matrix. Because the set of data points of the moving image has changed, it needs to be interpolated to match the set of data points of the fixed image. After that the moving and fixed images go through a similarity measurement, in which the images are analyzed by some registration method. If, according to this registration method, the images are considered registered, the image registration is done. If not, the cycle repeats itself until the similarity between both images is maximized and the images can be considered registered [27], [44].


Figure 14: the iterative procedure of image registration. The moving image gets transformed and interpolated before it is compared to the fixed image. This cycle repeats itself until maximal registration is reached and the image is registered [29].

There are several image registration types, such as intensity based, partition based, and mutual information based image registration [28]. The image registration tool used in CompareMechanism is an intensity based image registration.

1.2.1. Intensity based image registration

Intensity based registration is one of the best-known image registration methods and is used in the 'imregister' function of Matlab [29]. As the name suggests, the registration is based on the intensities of the voxels. The registration can be performed either mono-modally or multi-modally.

There are multiple mono-modal similarity measures, two of which are the sum of squared differences (SSD) [30] and the sum of absolute differences (SAD) [31]. The SSD is a simple measure, while the SAD is less sensitive to large intensity differences within the images [31]. Their formulas are shown below [31].

SSD = Σ_i (A_i − B_i)² (48)

SAD = Σ_i |A_i − B_i| (49)

Both SSD and SAD work the same way. They use two image blocks: one block of the fixed image and one block of the moving image. In order to compare these two blocks with each other, the blocks are laid over each other. Per overlaid pixel either the squared difference or the absolute difference is calculated, A being a pixel of the moving image block and B a pixel of the fixed image block. Summing the values provides either the SSD or the SAD, depending on whether the squared or the absolute differences are calculated. Although the blocks might differ in size, the block of the moving image must always be the smaller one. In that case the SSD or SAD is calculated for every location at which the moving image block fits inside the fixed image block. The lowest SSD or SAD marks the best match [30].
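The block-matching procedure of formulas (48) and (49) can be sketched in Python (an illustrative 2D sketch, not the thesis' Matlab code):

```python
import numpy as np

def ssd(a, b):
    """Formula (48): sum of squared differences."""
    return float(np.sum((a - b) ** 2))

def sad(a, b):
    """Formula (49): sum of absolute differences."""
    return float(np.sum(np.abs(a - b)))

def best_match(fixed_block, moving_block, metric=ssd):
    """Slide the (smaller) moving block over the fixed block;
    the location with the lowest score is the best match."""
    fh, fw = fixed_block.shape
    mh, mw = moving_block.shape
    scores = {}
    for i in range(fh - mh + 1):
        for j in range(fw - mw + 1):
            scores[(i, j)] = metric(fixed_block[i:i+mh, j:j+mw], moving_block)
    return min(scores, key=scores.get)

fixed = np.array([[0, 0, 0, 0],
                  [0, 5, 6, 0],
                  [0, 7, 8, 0],
                  [0, 0, 0, 0]], dtype=float)
moving = np.array([[5, 6], [7, 8]], dtype=float)
```

Here `best_match(fixed, moving)` finds the offset (1, 1), where the moving block matches the fixed image exactly (SSD = 0).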

1.2.2. Mutual information based image registration

The second way to perform multi-modal registration is the application of mutual information (MI). This method is also used in the 'imregtform' function of Matlab [29]. MI was introduced by Maes et al. (1997) and Viola and Wells (1997) for rigid image registration of multi-modal scans [31]. Since then it has been applied to a wide range of medical image registration tasks. Particularly the rigid-body registration of multi-modal images showed high registration accuracy; therefore it is used in many similarity measures for intensity based multi-modal image registration [32]. This approach looks at the mutual information in both images or, in simpler terms, at how well one image is able to explain the other [33].

Consider two partially overlapping blocks. The surface of the left one is H(X) and that of the right one is H(Y). Together they cover a total surface H(X,Y), so that the overlapping part of H(X) and H(Y) is the MI, denoted I(X;Y). This leaves only the non-overlapping parts, H(X|Y) and H(Y|X). The MI can therefore be defined as

MI = H(X) + H(Y) − H(X,Y) (50)

where H(X) and H(Y) are the marginal entropies of the images X and Y and H(X,Y) the joint entropy of both images [30]. However, it can also be defined as

I(X;Y) = H(X) − H(X|Y) (51)

I(X;Y) = H(Y) − H(Y|X) (52)

I(X;Y) = H(X,Y) − H(X|Y) − H(Y|X) (53)

where H(X|Y) and H(Y|X) are the conditional entropies (entropies that depend on certain conditions or variables). However, to keep it simple, formula (50) is used to calculate the MI. To do so, the Shannon entropies are used. The Shannon entropies in the formula of MI, H(X) and H(Y), can be defined as

H(X) = −K Σ_{i=1}^{n} p(x_i)·log p(x_i) (54)

H(Y) = −K Σ_{i=1}^{n} p(y_i)·log p(y_i) (55)


where X denotes the image, n the number of different intensities, p(x_i) the estimated intensity probabilities, and K a positive constant. H(X,Y) is the joint entropy and is defined as

H(X,Y) = −K Σ_{i=1}^{n} Σ_{j=1}^{m} p(x_i, y_j)·log p(x_i, y_j) (56)

where p(x_i, y_j) is the probability of the intensity pair (x_i, y_j) occurring in the image pair (X,Y) [30]. Using Bayes' classical theorem and formulas (54), (55), and (56), the mutual information can be calculated [34]. The result is

I(X;Y) = Σ_{i=1}^{n} Σ_{j=1}^{m} p(x_i, y_j)·log( p(x_i, y_j) / (p(x_i)·p(y_j)) ) (57)

MI has, however, the disadvantage of being a global measure. It ignores the spatial neighborhood of a chosen pixel within one image and therefore ignores the spatial information that is shared across the images. For this reason it can be difficult to estimate MI locally, which may lead to false local optima in non-rigid image registrations. In addition, MI is computationally complex and can therefore take more time to compute than a simple intensity metric such as the SSD [31]. In order to reduce the effect of a changing image overlap on MI and to improve alignment, Studholme et al. (1999) introduced the normalized mutual information (NMI) [33], defined as

NMI = (H(X) + H(Y)) / H(X,Y) (58)

which is the sum of the marginal entropies of the fixed and the moving image divided by their joint entropy [30], [33].
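Formulas (50) and (58) can be evaluated from a joint intensity histogram. A minimal Python sketch (illustrative, with K = 1 and natural logarithms; not the Matlab implementation):

```python
import numpy as np

def entropies(img_a, img_b, bins=8):
    """Marginal and joint Shannon entropies estimated from a joint histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()                   # joint probabilities p(x_i, y_j)
    p_a, p_b = p_ab.sum(axis=1), p_ab.sum(axis=0)  # marginals p(x_i), p(y_j)
    h = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))
    return h(p_a), h(p_b), h(p_ab)

def mutual_information(img_a, img_b, bins=8):
    ha, hb, hab = entropies(img_a, img_b, bins)
    return ha + hb - hab                         # formula (50)

def normalized_mi(img_a, img_b, bins=8):
    ha, hb, hab = entropies(img_a, img_b, bins)
    return (ha + hb) / hab                       # formula (58)

rng = np.random.default_rng(0)
img = rng.random((32, 32))
```

For a perfectly aligned pair (an image with itself) the joint histogram is diagonal, so H(X,Y) = H(X) and the NMI reaches its maximum of 2; shuffling one image destroys the shared information and drives the MI toward zero.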

1.2.3. Transformations

As the image registration takes place, the moving image gets transformed in order to match the fixed image. There are four types of transformations: rigid, similarity (linear), affine, and nonlinear. The rigid transformation is simply a rotation of the image or a shift of the image along one of its axes, defining six degrees of freedom (DOF): 3 translational and 3 rotational [33].

The similarity or linear transformation is similar to the rigid transformation but includes an isotropic scaling factor. It defines 9 DOF: the six DOF of the rigid-body transformation plus three scaling DOF. If this scaling factor is anisotropic, the transformation is called affine [33].

The affine transformation largely preserves parallelism. It defines 12 DOF: 3 shearing DOF plus the 9 DOF of the similarity or linear transformation; it is a general linear transformation. Parallelism is, however, not preserved in projective transformations, which exceed the 12 DOF of the affine transformation depending on the transformation models used in the image registration. The deformable transformation is therefore not linear: it changes lines and points [33].
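The DOF counting above can be made concrete with 4×4 homogeneous matrices. A minimal Python sketch (illustrative, following the DOF counting used in the text):

```python
import numpy as np

def rigid_3d(tx, ty, tz, rx, ry, rz):
    """4x4 homogeneous rigid transform: 3 translations + 3 rotations = 6 DOF."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [tx, ty, tz]
    return T

def similarity_3d(tx, ty, tz, rx, ry, rz, sx, sy, sz):
    """Rigid transform plus three scaling DOF (9 DOF in the text's counting);
    anisotropic scaling (sx != sy != sz) makes it affine."""
    T = rigid_3d(tx, ty, tz, rx, ry, rz)
    T[:3, :3] = T[:3, :3] @ np.diag([sx, sy, sz])
    return T
```

A rigid transform preserves distances between points, whereas an isotropic scaling of 2 doubles them; adding shear terms to the upper-left 3×3 block would complete the 12 DOF of the affine transformation.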

1.2.4. Nearest neighbor interpolation

After the transformation of the moving image, the coordinate systems of the moving and the fixed image do not coincide. In order to merge them, a simple interpolation called nearest neighbor interpolation can be used. This interpolation looks at a point in image A and checks which point in image B is closest; the voxel of image A then takes the value of the closest voxel in image B [35], as shown in figure 15. For greater precision, tri-linear interpolation can be used. This interpolation gives the voxel of image A a weighted average of its neighboring voxels in image B, depending on their closeness: the closer a voxel in image B is to the voxel in image A, the more its value contributes to the average. Different types of images call for different interpolation methods. The interpolation used by the image registration tool of CompareMechanism is called bilinear interpolation. It takes a weighted average of the four closest points; this average becomes the voxel value of that particular point [36].

Figure 15: graphical representation of nearest neighbor and bilinear interpolation. Nearest neighbor interpolation searches for the pixel closest to the pixel to be interpolated, which then receives the value of that closest pixel. In bilinear interpolation the interpolated pixel receives a weighted value of the four surrounding pixels.
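Both interpolation schemes can be sketched in a few lines of Python (a 2D illustration; the tri-linear case extends the same weighting to eight neighbors in 3D):

```python
import numpy as np

def nearest_neighbor(img, x, y):
    """Take the value of the single closest pixel."""
    return img[int(round(y)), int(round(x))]

def bilinear(img, x, y):
    """Weighted average of the four surrounding pixels, weighted by closeness."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * img[y0, x0] + wx * (1 - wy) * img[y0, x1]
            + (1 - wx) * wy * img[y1, x0] + wx * wy * img[y1, x1])

img = np.array([[0.0, 10.0],
                [20.0, 30.0]])
```

At the exact center (0.5, 0.5) of this 2×2 image, bilinear interpolation returns the mean of the four pixels, 15.0, while nearest neighbor snaps to whichever single pixel is closest.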


2. Methods

The program created in this paper is a multifunctional program. Images can be loaded into the program, altered, analyzed, and visualized. In this section the formulas behind the operations in Matlab are explained, along with some information about what the program can do. First, some information about the images used to create this program.

2.1. Sample of patients

The data acquired for this paper comes from the Champalimaud Foundation (FC). All data was selected and clinically classified by a nuclear medicine specialist from the Champalimaud Foundation. There are three groups of data: healthy subjects (9), patients with defects (12), and patients with metastases (9). The patients with defects can have degenerative dementia (DD), which includes:

- Alzheimer disease (AD). This is the most common neurodegenerative dementia [39]; it starts slowly and speeds up over time. A symptom is short-term memory loss [38].

- Dementia with Lewy bodies (DLB). This is the second most common dementia type. It is closely related to Parkinson's disease and can cause rapid cognitive decline, which might lead to hallucinations. Patients with DLB are unable to plan and lack analytical and abstract thinking [39].

- Fronto-temporal dementia (FTD). This is a progressive degeneration of the fronto-temporal lobe. Rapid decline in cognitive and motor functions are symptoms of the disease [40].

Furthermore, there are cases of primary progressive aphasia (PPA), which can pathologically overlap with fronto-temporal lobar degeneration, and of normal pressure hydrocephalus (NPH), a brain malfunction due to decreased absorption of cerebrospinal fluid. The latter is also one of the causes of dementia [41].

Patient | ITA time | Disease | Abnormality detected by nuclear medicine specialist
1 | 3 h 30 min | Normal | -
2 | 3 h 50 min / 30 min | Depression? / DFT | Hypometabolism back cingulate cortex, back temporal lobes
3 | | DFT | Hypermetabolism frontal dorso-lateral, back cingulate cortex, temporal cortex
4 | 3 h 30 min | DFT/APP | Hypometabolism left hemisphere (dorso-lateral frontal, back cingulate, temporal cortex (complete left temporal lobe), dorso-lateral parietal, insular cortex and front cingulate left)
5 | | NPH | Reduced global FDG uptake
6 | 4 h 30 min | Refractory epilepsy | Hypometabolism generalized in left hemisphere
7 | | DFT? | Fronto-temporal lobe
8 | 3 h / 60 min | Generalized idiopathic epilepsy | Hypometabolism in right front without abnormalities
9 | 3 h 30 min / 30 min | DFT/Bipolar disease | Areas with reduction in FDG uptake
10 | 4 h 30 min | DFT with 3 years of evolution | Hypometabolism in frontal, parietal, temporal, and subcortical bilateral
11 | 5 h 30 min | Depression/DFT | -
12 | 3 h 30 min | Normal / Refractory epilepsy | Left temporal lobe

2.2. Loading images into the main program

The Matlab program is made for 3D images of variable size. These images can be dicom, v3d, gipl, hdr, isi, nii, vmp, raw, xif, vtk, mha, vff, or par files. Using the uigetfile function the file can be selected so that it can be opened using the fopen function; dicom files, however, are opened using the dicomread function. The loaded images, which can be rescaled to Hounsfield units, are saved as a double in the workspace of the Matlab program and can therefore be called from the program. Pressing the info button, all information about the file is displayed in an extra window. The loading tool was developed by Dirk Jan Kroon (http://www.mathworks.com/matlabcentral/fileexchange/29344-read-medical-data-3d), [29].

2.3. Visualization of images

The images can be visualized using the enlargement tool. This tool was originally developed by Joshua Stough (http://www.mathworks.com/matlabcentral/fileexchange/37268-3d-volume-visualization) but has been somewhat improved [29]. The tool shows the chosen image in the sagittal, coronal, and axial planes and can change the brightness of the images using slide bars. Clicking on one of the images and sliding up and down changes the orientation in the other two images for better visualization. A line is shown in all planes so that the user can see what he/she is looking at and in what orientation. Another visualization tool is a three-dimensional visualization; this three-dimensional representation is smoothed.

2.4. Standard uptake value

When the FDG-PET patient image is loaded, the standard uptake value (SUV) map can be calculated. This value is often used to check the uptake in tissue. In order to calculate the SUV map, the activity concentration of the medical scanner needs to be known. For the Philips Gemini TF the activity concentration is 17.4 kBq/mL, which results in 125000 counts per second in the PET scanner [47]. This is used for the calculation of the SUV, which is the activity concentration per count times the weight and the counts per second in the patient image, divided by the injected dose at time zero. This is calculated per pixel so that a complete SUV map can be constructed. The maximum SUV is shown in an edit box beside the calculate button.

2.5. Injection-To-Acquisition correction

Before image registration, the images need to be scaled for the injection-to-acquisition (ITA) time. This is the time between contrast agent injection and acquisition. Over time the contrast agent is filtered out of the blood stream by the kidneys. This results in a different decay, because there is less contrast agent present in the vascular system for a longer ITA time. Therefore, the voxel intensities differ between images with different ITA times. In order to correct for the decay artifact, the images are rescaled by formula (46):

Im(0) = Im(t)·e^(t·ln(2)/τ) (59)

where t is the ITA time in seconds, τ the half-life of the contrast agent used in seconds, Im(0) the corrected image, and Im(t) the obtained image before correction. The program provides an edit box where the ITA time can be inserted; pressing the calculate button rescales the images.

2.6. Rescaling the images

Although the images are rescaled for the ITA time, they are not yet scaled with respect to each other. Because some voxel intensities are too low for correct abnormality detection, some detections cannot be performed. In order to execute the detection, the images need to be rescaled with respect to each other. Also, to make the comparison easier to interpret, the images are rescaled per modality between 0 and a chosen value. This value can be inserted into an edit box; pressing the rescale button rescales the images. When the reference and the patient image are rescaled and a new patient image is added, it is rescaled automatically.

The rescaling option is not only present in the main program. It is also used in the program that calculates the reference image and the standard deviation map, since when the patient image is rescaled, the reference image and its standard deviation map need to be rescaled as well for correct abnormality detection. Another part of the main program where the rescale option is added is the grey matter detection: to make the grey matter detection more user friendly, the CT image used for grey matter detection can be rescaled. The formula used to rescale is

Im_R = A · (Im − Im_min) / (Im_max − Im_min) (60)

where Im_R is the rescaled image, A the maximum value to which the image is rescaled, Im the image itself, Im_min the minimum pixel value in image Im, and Im_max the maximum pixel value in image Im.
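The min-max rescaling described above is a one-line operation per image. A Python sketch (illustrative; the thesis code is in Matlab):

```python
import numpy as np

def rescale(img, a):
    """Formula (60): map intensities linearly onto the range [0, A]."""
    imin, imax = img.min(), img.max()
    return a * (img - imin) / (imax - imin)

# Illustrative CT-like values (roughly air, water, dense bone in HU):
ct = np.array([-1000.0, 0.0, 1000.0])
scaled = rescale(ct, 100.0)  # -> [0, 50, 100]
```

Applying the same target value A to the patient image, the reference image, and the standard deviation map keeps the three on a common intensity scale.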


2.7. Creation and use of reference image and standard deviation map

In order to create a reference image, an averaging tool was developed. Images can be loaded using the same loading tool used in the main program and are displayed in a listbox. All images can be corrected for atomic decay by inserting the ITA time and the half-life of the contrast agent used: 109.7 minutes in the case of FDG. The ITA time can be inserted in an edit box, and the half-life can be adjusted when other contrast agents with different half-lives are used. Pressing the calculate button rescales the images for the inserted ITA time. The formula used is

Im(0) = Im(t)·e^(t·ln(2)/τ) (61)

where t is the ITA time in seconds, τ the half-life of the contrast agent used in seconds, Im(0) the corrected image, and Im(t) the obtained image before correction.

If necessary, all images can be intensity scaled as well. This rescaling is mainly performed when abnormality detection is not possible due to low voxel intensities. If so, the images should be scaled between the same intensity values as the patient image loaded in the main program. The formula used is

Im_R = A · (Im − Im_min) / (Im_max − Im_min) (62)

where Im_R is the rescaled image, A the maximum value to which the image is rescaled, Im the image itself, Im_min the minimum pixel value in image Im, and Im_max the maximum pixel value in image Im. In order to average the images, they first need to be registered. Clicking on one of the images opens the image registration tool (explained later on), which registers the chosen image with the first loaded image. When all images are loaded, corrected, scaled, and registered, the average can be calculated. This calculation is done per pixel using the formula

I_average = (Σ_{n=1}^{N} I_n) / N (63)

where I_average is the average pixel value at position (x,y,z) in the average image, I_n the pixel value at position (x,y,z) in image n, and N the number of images loaded in the program. The created reference image is automatically saved as a dicom file.

In order to detect abnormalities, a standard deviation map is needed as well. Using the average image, the standard deviation can be calculated per pixel using the formula

σ = √( (1/N) Σ_{n=1}^{N} (I_n − I_average)² ) (64)

where σ is the standard deviation pixel value at position (x,y,z) in the standard deviation image, N the number of images used to calculate the standard deviation, and I_n the pixel value at position (x,y,z) in image n. For research


purposes it is also possible to create only a standard deviation map for just one

subject. This can be selected in a popup menu.

In order to save the reference image and the standard deviation map correctly, the modality and, in the case of PET, the ITA time can be selected in a popup menu. This saves the images automatically under a different name. Using a popup menu in the main program, the saved reference image or standard deviation map can be opened in the main program.

This tool is also used to create a pre-made reference image and standard deviation map for two different ITA times: < 60 minutes and > 3 hours. These images are used for detecting differences in the patient image compared to the reference image. The reference image is built up from an average of the intensities per pixel of 10 healthy subjects.
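Formulas (63) and (64) amount to a per-voxel mean and standard deviation over the stack of registered scans. A Python sketch (illustrative; assumes the scans are already registered, decay-corrected, and equally sized):

```python
import numpy as np

def reference_and_sigma(images):
    """Per-voxel mean (formula 63) and standard deviation (formula 64)
    over a list of registered, same-shape 3D scans."""
    stack = np.stack(images)        # shape (N, x, y, z)
    reference = stack.mean(axis=0)  # formula (63)
    sigma = stack.std(axis=0)       # formula (64): sqrt of mean squared deviation
    return reference, sigma

# Three toy "scans" with constant intensities 8, 10, 12:
scans = [np.full((2, 2, 2), v) for v in (8.0, 10.0, 12.0)]
ref, sig = reference_and_sigma(scans)
```

Here every voxel of `ref` is 10 and every voxel of `sig` is √(8/3), the population standard deviation of {8, 10, 12}.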

2.8. Image registration

This image registration tool was developed by Brett Shoelson (http://www.mathworks.com/matlabcentral/fileexchange/34510-image-registration-app) and has been adapted to this program [29]. It can register mono-modal and multi-modal images using translational, rigid, similarity, or affine transformations. When a transform type is chosen, the image registration can be tuned using optimizer parameters. These parameters include the growth factor, maximal iterations, number of spatial samples, epsilon, initial radius, pyramid level, and the number of histogram bins. The growth factor is the rate at which the search radius grows per iteration; its default value in Matlab is 1.05. The search radius starts at the initial radius, set by Matlab to 6.25e-3 by default, and the search ends when the maximal number of iterations is reached. This maximum is the number of iterations done per pyramid level and is set by Matlab to 100. The number of spatial samples is the number of samples the image is divided into, which in Matlab's case is 500. Epsilon is a scalar that adjusts the minimal size of the search radius and therefore controls the accuracy of the convergence; Matlab's default epsilon is 1.5e-6. Finally, the number of histogram bins is the number of histogram towers of the pixel intensities [29]. In order to run the image registration, Matlab version 2012a or later needs to be used. In order to register two images, the tool uses the imregister and imregtform functions. Imregtform uses mutual information to estimate the geometric transformation that will be used during image registration. Imregister registers the moving image with the fixed image using the geometric transformation estimated by imregtform; imregister uses an intensity based image registration [29].
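How the growth factor, initial radius, epsilon, and maximal iterations interact can be illustrated with a toy sketch of a (1+1)-style evolutionary search in Python. This is NOT Matlab's actual optimizer implementation, only an illustration of the parameter roles on a one-dimensional cost function:

```python
import random

def one_plus_one(cost, x0, initial_radius=6.25e-3, growth_factor=1.05,
                 epsilon=1.5e-6, max_iterations=100):
    """Illustrative (1+1)-style search: the search radius grows by growth_factor
    after an improving step and shrinks otherwise; the search stops after
    max_iterations or once the radius falls below epsilon."""
    random.seed(0)                      # deterministic for the sake of the example
    x, radius = x0, initial_radius
    best = cost(x)
    for _ in range(max_iterations):
        if radius < epsilon:
            break
        candidate = x + random.uniform(-radius, radius)
        c = cost(candidate)
        if c < best:                    # success: keep the step, widen the search
            x, best = candidate, c
            radius *= growth_factor
        else:                           # failure: narrow the search
            radius /= growth_factor
    return x

# Toy one-dimensional "similarity metric" with its optimum at 0.3:
x_opt = one_plus_one(lambda x: (x - 0.3) ** 2, x0=0.0, initial_radius=0.5)
```

A larger growth factor speeds up the optimization at the risk of overshooting the optimum, while a smaller epsilon prolongs the search but yields a more accurate convergence.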

2.9. Abnormality detection


Now that the images are registered, the detection of abnormalities can start. This calculation is done per modality. Per pixel, the intensity at location (x,y,z) of the patient image is compared to the intensity at location (x,y,z) of its reference image plus A times the intensity at location (x,y,z) of the reference image's standard deviation map. Here A stands for the number of standard deviations: if you want to check whether the patient image differs from the reference image by 3 standard deviations, A will be +3. For the abnormality detection 7 different formulas are used. In order to speed up the comparison, a loop runs per slice that compares each pixel of the patient image with each pixel of the reference image and its standard deviation map. The comparison is based on the number of standard deviations per pixel by which the patient image differs from the reference image. This reference image is an average image of 10 healthy brains, from which the standard deviation is calculated, i.e. the variation of a pixel and its neighbors.

The standard deviation image of the reference image is used so that the differences between the reference and the patient image can be classified by

𝑃𝑖,𝑗,𝑘 < 𝑅𝑖,𝑗,𝑘 − 3𝜎 (65)

𝑅𝑖,𝑗,𝑘 − 3𝜎 ≤ 𝑃𝑖,𝑗,𝑘 < 𝑅𝑖,𝑗,𝑘 − 2𝜎 (66)

𝑅𝑖,𝑗,𝑘 − 2𝜎 ≤ 𝑃𝑖,𝑗,𝑘 < 𝑅𝑖,𝑗,𝑘 − 𝜎 (67)

𝑅𝑖,𝑗,𝑘 − 𝜎 ≤ 𝑃𝑖,𝑗,𝑘 ≤ 𝑅𝑖,𝑗,𝑘 + 𝜎 (68)

𝑅𝑖,𝑗,𝑘 + 𝜎 < 𝑃𝑖,𝑗,𝑘 ≤ 𝑅𝑖,𝑗,𝑘 + 2𝜎 (69)

𝑅𝑖,𝑗,𝑘 + 2𝜎 < 𝑃𝑖,𝑗,𝑘 ≤ 𝑅𝑖,𝑗,𝑘 + 3𝜎 (70)

𝑃𝑖,𝑗,𝑘 > 𝑅𝑖,𝑗,𝑘 + 3𝜎 (71)

where P_{i,j,k} is the patient image, i, j, and k the locations in x, y, and slice, R_{i,j,k} the reference image, σ the standard deviation image of the reference image at the same location, and 1σ, 2σ, 3σ the possible numbers of standard deviations. If one of these formulas applies to a certain pixel, the pixel in P_{i,j,k} shows an abnormality of the corresponding number of standard deviations at location (x, y, slice) = (i, j, k). Depending on which formula applies to the pixel, the number of standard deviations of difference can be determined: for example, if formula (65) applies, there is a standard deviation difference of −3σ at location (x, y, slice) = (i, j, k).

After running the code the abnormalities are expressed as a number of standard deviations of difference, ranging from −3 to 3. The abnormalities are shown against a morphological background. This part of the code is based on a previous master thesis of Carolina Nogueira Guedes of the University of Lisbon.
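The classification of formulas (65)-(71) can be sketched as a vectorized binning in Python (illustrative; the thesis implementation loops per slice in Matlab):

```python
import numpy as np

def abnormality_map(patient, reference, sigma):
    """Classify each voxel into -3..+3 standard deviations from the reference
    (formulas 65-71): e.g. -3 means P < R - 3*sigma, 0 means |P - R| <= sigma."""
    diff = patient - reference
    out = np.zeros(patient.shape, dtype=int)
    out[diff < -3 * sigma] = -3                                # formula (65)
    out[(diff >= -3 * sigma) & (diff < -2 * sigma)] = -2       # formula (66)
    out[(diff >= -2 * sigma) & (diff < -sigma)] = -1           # formula (67)
    out[(diff > sigma) & (diff <= 2 * sigma)] = 1              # formula (69)
    out[(diff > 2 * sigma) & (diff <= 3 * sigma)] = 2          # formula (70)
    out[diff > 3 * sigma] = 3                                  # formula (71)
    return out                  # untouched voxels stay 0, i.e. formula (68)

ref = np.zeros(5)
sig = np.ones(5)
pat = np.array([-3.5, -1.5, 0.0, 2.5, 4.0])
labels = abnormality_map(pat, ref, sig)  # -> [-3, -1, 0, 2, 3]
```

Each voxel receives exactly one label because the seven intervals partition the intensity axis.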


2.10. Regions of interest

Using the abnormalities of both the CT and the FDG-PET the location comparison is initiated. In

this step the abnormalities of both modalities (1) and (2) are compared with each other in

terms of location.

Figure 16: The main program. FDG-PET and CT images can be loaded into the program in order to detect abnormalities.

Abnormalities of both modalities can be compared in terms of location in order to pinpoint regions of interest that correspond to both modalities.

Again this is done per slice, per pixel. The program turns the fourth dimension of both abnormality images (one per modality) into a binary image, where a 1 stands for an abnormality and a 0 for normal. Adding the images of both modalities provides an image that contains 0's, 1's, and 2's. Extracting only the 2's provides the abnormalities that correspond in location in both the FDG-PET and the CT scan. If abnormalities match in location, they are marked as a region of interest (ROI). This region is a three-dimensional volume that includes one complete ROI including the tumor(s) (3). The coordinates of the eight corner points of the ROI are used to extract the ROI from the CT patient image in order to display it (5). This region of interest can be zoomed into (5). Using the popup menu (4), the number of standard deviations of difference can be selected so that only the ROIs with the selected difference are shown. The ROIs are extracted from the patient image using the fourth dimension of the abnormality image.
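The binarize-add-extract step can be sketched in Python (illustrative 2D version; the program works on 3D volumes):

```python
import numpy as np

def matching_abnormalities(abn_pet, abn_ct):
    """Binarize both abnormality maps, add them, and keep the voxels summing
    to 2, i.e. abnormal in BOTH modalities (candidate regions of interest)."""
    combined = (abn_pet != 0).astype(int) + (abn_ct != 0).astype(int)
    return combined == 2

def bounding_box(mask):
    """Corner coordinates of the box enclosing the matching abnormalities."""
    idx = np.argwhere(mask)
    return idx.min(axis=0), idx.max(axis=0)

pet = np.array([[0, 2, 3],
                [0, 0, -1]])
ct = np.array([[1, 2, 0],
               [0, 3, -2]])
roi = matching_abnormalities(pet, ct)
```

Only the two pixels abnormal in both maps survive; in 3D, the minimum and maximum indices per axis give the eight corner points used to cut the ROI out of the CT image.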


2.11. Grey matter detection

Having some regions of interest with abnormal activity from the FDG-PET and some abnormalities from the CT might not mean that there is something abnormal in that particular region. Some of the higher metabolic activity shown in the FDG-PET can be due to multiple grey matter layers amplifying the FDG-PET signal. Since grey matter emits more gamma rays than white matter, multiple grey matter layers will generate more activity than there actually is and will overshadow the activity of the small white matter regions in between. If the location of the high metabolic activity region in the FDG-PET matches a region with a thick grey matter layer in the grey matter detection image, the region can be ruled out of the ROIs. In order to extract the grey and white matter from the patient's image (1), a tool was developed, since grey and white matter detection in CT is not straightforward.

Figure 17: the semi-automatic grey matter detection. The CT image is loaded, and using a lower and upper threshold the grey and white matter can be detected. While the grey and white matter are detected, the volume is calculated as well.

Using Hounsfield Unit thresholds (5, 6), parts of the brain can be extracted (2, 3). Since the HU values of grey and white matter differ between CT images, these thresholds can be adjusted with a lower and an upper limit (5, 6). In this way the grey and white matter can be detected semi-automatically. Unfortunately, the grey matter layer detection was not developed; due to time and material constraints it is left as suggested research. Additionally, using the voxel size of the medical images, the volume of either the grey or the white matter is determined (7).
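The thresholding and volume calculation can be sketched as below. This is an illustrative Python/NumPy sketch, not the thesis's MATLAB implementation; the HU values in the toy array and the function names are assumptions for illustration only.

```python
import numpy as np

def segment_by_hu(ct, lower, upper):
    """Binary mask of voxels whose Hounsfield value lies inside [lower, upper]."""
    return (ct >= lower) & (ct <= upper)

def tissue_volume_ml(mask, voxel_size_mm):
    """Volume of the masked tissue in millilitres, from the voxel dimensions."""
    voxel_ml = float(np.prod(voxel_size_mm)) / 1000.0  # mm^3 -> mL
    return mask.sum() * voxel_ml

# toy CT slab; the adjustable lower/upper limits mirror the tool's sliders,
# since the grey/white matter HU range varies between scans
ct = np.array([[[25.0, 40.0], [42.0, 80.0]]])
grey = segment_by_hu(ct, 37, 45)
volume = tissue_volume_ml(grey, voxel_size_mm=(1.0, 1.0, 1.0))
```

Multiplying the voxel count of the mask by the physical voxel volume is exactly the step the tool performs with the image's voxel-size metadata (7).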

2.12. Additional research

Because the program is capable of detecting abnormalities, it can also be used for research purposes, assisting the radiologist and oncologist in analyzing FDG-PET and CT images. Besides analyzing differences in FDG uptake in brain cancer, FDG uptake in the brains of patients with epilepsy or dementia can also be analyzed and might provide valuable information in terms of new diagnostic parameters. Twelve cases were examined in order to check certain regions; however, because the images from the FC could not be opened, there were not enough cases to analyze. In addition to the abnormality differences, differences in FDG uptake due to a longer ITA time were studied: the FC provides FDG-PET images with two ITA times, > 3 hours and < 60 minutes. Twelve images of the same patients, acquired with the two different injection times, are compared in order to determine the differences. This might say something about the accuracy of FDG-PET images acquired with different injection times. Proposed additional research using the program:

(1) Analysis of differences in FDG uptake between normals and abnormals, where normals are healthy subjects with no abnormal FDG uptake and abnormals are patients with an abnormal FDG uptake. This analysis may reveal new diagnostic parameters.

(2) Analysis of differences in FDG uptake between patients with an ITA time of < 60 minutes and patients with an ITA time of > 3 hours. This may give information about the optimal ITA time.


3. Results

The main purpose of this paper was to create a program able to analyze CT and FDG-PET images in order to improve the diagnosis of brain cancer. Images are loaded and, per modality, compared to a reference image based on an average of 10 healthy subjects; the results of this comparison can then be analyzed. The program is created in MATLAB and has several functions.

Figure 18: the main program. FDG-PET and CT images can be loaded into the program in order to detect abnormalities. The abnormalities of both modalities can be compared in terms of location, in order to pinpoint regions of interest that coincide in both modalities.

The program can load several types of images: dicom, v3d, gipl, hdr, isi, nii, vmp, raw, xif, vtk, mha, vff and par files. When a patient image is loaded, its file name appears in a listbox (14), from which previously opened images can be opened again. After loading the FDG-PET and CT scans (2, 4), the comparison can be performed to obtain images of the standard deviation differences (6, 10) with respect to a reference image (1, 7); the reference image differs per modality. Before the comparison, however, the images need to be corrected for the radioactive decay of the FDG tracer by inserting the ITA time (3, 15). The images are also intensity scaled (4, 9) for optimal comparison.
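The thesis does not spell out the decay-correction formula, but a standard correction for the physical decay of 18F during the ITA time would look like the sketch below. This is an illustrative Python sketch under that assumption; the function name is hypothetical, and the 18F half-life of roughly 109.8 minutes is a known physical constant.

```python
import math

F18_HALF_LIFE_MIN = 109.77  # physical half-life of fluorine-18, in minutes

def decay_correct(measured_activity, ita_minutes):
    """Scale the measured activity back to the moment of injection,
    compensating for radioactive decay of 18F during the ITA time."""
    return measured_activity * 2.0 ** (ita_minutes / F18_HALF_LIFE_MIN)

# after exactly one half-life the signal has halved, so the correction doubles it
corrected = decay_correct(50.0, F18_HALF_LIFE_MIN)
```

Without such a correction, images acquired with an ITA time of > 3 hours would show systematically lower counts than images acquired within 60 minutes, purely because of physics rather than biology.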

Figure 19: the image loading tool used by both the main program and the reference image creator.


For the FDG-PET there is a choice of two reference images, since the patient images can be acquired with an ITA time of either < 60 minutes or > 3 hours. Each reference image is made from an average of 10 healthy subjects acquired with the corresponding ITA time.

In order to calculate the reference image, a separate tool was developed. In this program, images can be loaded using the same loading tool as in the main program, accepting dicom, v3d, gipl, hdr, isi, nii, vmp, raw, xif, vtk, mha, vff or par files (A). As the images are loaded, they are shown in the listbox. Before applying the image registration, the images can be corrected for radioactive decay by inserting the ITA time and pressing correct (B); this compensates for the activity of the tracer decreasing over time. Secondly, the images can be intensity scaled (4, 9), which is important when the reference and patient images are compared. Once the images are scaled, they can be registered: clicking on one of the images in the listbox opens the image registration tool, and the chosen image is registered to the first loaded image. When the images are scaled, corrected and registered, the reference image and its standard deviation map over all loaded images (C) can be calculated and saved as a dicom file. The standard deviation is calculated voxelwise with respect to the average of the 10 healthy subjects. The reference image and its standard deviation map can then be loaded into the main program, after which the abnormality detection can start.
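The core computation of this tool, a voxelwise average and standard deviation over the registered, scaled subject images, can be sketched as follows. This is an illustrative Python/NumPy sketch rather than the MATLAB implementation; the toy 2×2 "subjects" stand in for registered 3D volumes.

```python
import numpy as np

def reference_and_std(subject_images):
    """Voxelwise mean (the reference image) and standard deviation map
    over a stack of registered, intensity-scaled subject images."""
    stack = np.stack(subject_images, axis=0)  # shape: (n_subjects, ...)
    return stack.mean(axis=0), stack.std(axis=0)

# three toy 'subjects' with uniform intensities 8, 10 and 12
subjects = [np.full((2, 2), v, dtype=float) for v in (8.0, 10.0, 12.0)]
reference, std_map = reference_and_std(subjects)
```

The resulting `std_map` is exactly what the abnormality detection later uses to express a patient's deviation from the reference in units of standard deviations.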

A standard deviation map can also be made from a single image. This is useful for research purposes, for instance when determining the difference in FDG uptake between two ITA times: one of the images then serves as the reference image, and the standard deviation map is based on that image.

Figure 20: the reference image and standard deviation map creator. Images can be loaded into the program and intensity scaled for the ITA time. A general intensity scaling can also be applied.

For optimal abnormality detection, the patient and reference images need to be registered. There is a choice of rigid, translational, similarity and affine registration; the registration is intensity based. It is possible to register the reference image with the patient image per modality, as well as the FDG-PET patient image with the CT patient image. Image registration is also used in the calculation of the reference image, since the images need to be aligned before the average over the loaded images is calculated.
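To make the idea of intensity-based registration concrete, the sketch below shows its simplest form: a brute-force search for the translation minimising the sum of squared intensity differences (SSD). This is an illustrative Python/NumPy toy, not the registration tool the program actually uses (which offers translational, rigid, similarity and affine transformations and multimodal metrics); all names here are hypothetical.

```python
import numpy as np

def register_translation(fixed, moving, max_shift=3):
    """Brute-force translational registration: find the integer shift of
    'moving' that minimises the sum of squared differences to 'fixed'."""
    best, best_ssd = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            ssd = float(np.sum((fixed - shifted) ** 2))
            if ssd < best_ssd:
                best, best_ssd = (dy, dx), ssd
    return best

fixed = np.zeros((8, 8)); fixed[3:5, 3:5] = 1.0
moving = np.roll(np.roll(fixed, -1, axis=0), 2, axis=1)  # known misalignment
shift = register_translation(fixed, moving)
```

Real registration tools replace the exhaustive search with an optimiser and, for multimodal pairs such as FDG-PET to CT, replace SSD with a metric like mutual information, but the structure, transform, similarity measure and search, is the same.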

Figure 21: image registration tool by Bret Shoelsen. There are multiple options to register the image: the program can execute monomodal or multimodal registration with translational, rigid, similarity or affine transformations.

For FDG-PET, the weight of the patient and the amount of injected tracer can be entered to calculate the standardized uptake value (SUV) (5). Entering these values and pressing the calculate button shows the maximum SUV in the patient image. Selecting the SUV checkbox (5) shows the complete SUV image, calculated per pixel, in the field where normally the FDG-PET abnormalities are shown (6); unselecting the checkbox shows the FDG-PET abnormalities again.
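The per-pixel calculation follows the standard body-weight SUV definition (see [43]): tissue activity concentration divided by injected dose per gram of body weight. The sketch below is an illustrative Python version under that assumption; the function name and the example numbers are hypothetical.

```python
def suv(activity_concentration_bq_per_ml, injected_dose_bq, body_weight_g):
    """Body-weight standardized uptake value: measured activity concentration
    divided by injected dose per gram of body weight (tissue density ~1 g/mL)."""
    return activity_concentration_bq_per_ml / (injected_dose_bq / body_weight_g)

# e.g. a 70 kg patient injected with 350 MBq, voxel measuring 10 kBq/mL
value = suv(10_000.0, 350e6, 70_000.0)
```

Applying this to every voxel yields the SUV image shown in field (6); taking the maximum over all voxels gives the maximum SUV the program reports.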

When the standard deviation difference images of both modalities have been made, they can be compared. This results in a map that shows where the differences of the two modalities coincide (11). For an optimal comparison, both images are registered first. The coinciding standard deviation difference map is used to show the regions of interest.
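The comparison amounts to intersecting the two thresholded deviation maps. The sketch below illustrates this in Python/NumPy; the threshold of 2 standard deviations and the toy arrays are assumptions for illustration, not values taken from the thesis.

```python
import numpy as np

def coinciding_map(pet_diff, ct_diff, n_std=2.0):
    """Mask of voxels where BOTH registered modalities deviate from their
    reference by at least n_std standard deviations."""
    return (np.abs(pet_diff) >= n_std) & (np.abs(ct_diff) >= n_std)

# toy standard-deviation-difference images for the two modalities
pet = np.array([[3.0, 0.5], [2.5, 2.5]])
ct = np.array([[2.2, 2.8], [0.1, 3.1]])
rois = coinciding_map(pet, ct)
```

Only locations flagged by both FDG-PET and CT survive, which is what makes the coinciding map more specific than either modality's abnormality image alone.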


In order to exclude regions of interest that are created by multiple grey matter layers, which also produce a higher metabolic activity, a grey matter detection tool was developed. This semi-automatic tool also calculates the volume of the grey matter. It uses thresholds on the HU values to extract the grey matter from the original image. The intention was to extend this tool to detect multiple grey matter layers, so that their locations could be excluded from the coinciding standard deviation difference image; this extension has not been created yet.

There are additional functions in the program, such as three-planar and three-dimensional visualizations, included for optimal visualization. In the three-planar visualization the brightness can be adjusted. Clicking on an image and dragging the mouse down allows the user to move through the slices; a drawn line shows where in the volume the user is.

Figure 22: the enlargement tool showing the sagittal, axial and coronal orientations of the image. The brightness can be adjusted and the user can scroll through the slices by clicking on one of the three images and dragging the mouse over the screen. The three-dimensional visualization shows the image but uses a smoothing function, which prevents a fully correct visualization.


4. Discussion

4.1. Data

During the project there were some drawbacks in the data acquisition. The images provided by the FC were unreadable for Matlab; the FC could open them, but in several dicom viewers the files remained unreadable. Philips Portugal helped to find a solution: the images could be opened with Osirix and converted to readable dicom files. After this conversion the images could be loaded into MATLAB and were therefore usable for the additional research. However, all images were received only a month before the deadline of this thesis.

Because it took a lot of time to get readable images from the Champalimaud foundation, the amount of data is less than desired. Therefore, some of the additional research could not be performed, and the additional studies that require a large database were carried out with a limited amount of data. One such study shows the differences between normal and abnormal brains, with the abnormal brains divided into four categories: dementia, depression, brain cancer and epilepsy. For this study the data is limited and will therefore provide unreliable results; more images are needed per category to show whether certain regions differ significantly. At this moment the outcome might only show whether there is a difference in general, while the locations and the amount of the differences would be unreliable due to the small control group.

4.2. Grey matter detection

Good grey matter detection is hard to accomplish for CT. Many papers refer to specific HU levels for grey and white matter [42], but applying them to the images of the FC gave no usable result. In order to get usable results, the decision was made to build a semi-automatic grey matter detection: doctors can adjust the upper and lower HU thresholds in order to detect the grey matter. With this technique, the trained eye is able to detect the grey matter, which can then be used to calculate the volume of the grey and white matter.

4.3. Grey matter layer detection

In addition to the grey matter detection, a program was supposed to be developed that detects grey matter layers. Since multiple grey matter layers can produce a significantly higher metabolic activity, excluding the regions with multiple grey matter layers draws the attention only to the high metabolic activity regions that are possibly caused by cancer or other defects. However, such a program has not been developed: it is difficult to implement, and since the grey matter detection is not precise enough, the grey matter layer detection built on top of it would provide doubtful results.

4.4. Extra visualization options


Each image can be visualized using either the 3D visualization program or the three-planar visualization program. The 3D visualization uses a smoothing function that might corrupt some data; it can therefore only be used as a simple visualization tool, in which small un-metabolized regions will not be visible, only significantly large un-metabolized areas. The three-planar visualization has an execution issue: because Windows and Mac run different software, the enlargement program unfortunately works only on Windows. On Mac it produces an error in cell2mat, which ''does not support cell arrays containing other cell arrays''.

4.5. ITA time differences

In this paper the patient group with an ITA time > 3 hours contains ITA times ranging from 3 up to 5 hours. This 2-hour spread is probably too large to treat it as one patient group, since it comes close to the 2.5-hour difference between the two patient groups with ITA times of 3 hours and 30 minutes. This might affect the outcome, although the images are corrected for activity with a weighting function.

4.6. Brain map

Once the abnormalities are acquired, the results need to be analyzed. However, without a nuclear medicine specialist analyzing the images, good results will not be obtained. Therefore, a brain map should be included so that the location of the abnormalities can be reported. This helps during research, when fast analysis is welcome.

4.7. Additional Research

Since the brain map is not yet included in the main program, obtaining accurate and precise results for the additional research is difficult. Because the images arrived a month before the deadline of this thesis, there was also no time to have a nuclear medicine specialist analyze the results of the additional research. Therefore, the brain map will be added after the deadline, so that the additional research can still be performed.

4.8. Suggested Future Research

For this paper there were several ideas that could be added to the initial idea of creating a program that assists in analyzing FDG-PET and CT images with an application to brain cancer. Some ideas are in line with the initial idea, such as a brain map, so that the detected differences can be linked to a brain region. Another idea is grey matter layer detection. However, since the semi-automatic grey matter detection does not provide sufficiently accurate results, the grey matter layer detection, which is based on the results of the grey matter detection, would not be accurate either. Therefore, creating accurate grey matter detection for CT is a prerequisite and is suggested as future research; once such detection exists, the grey matter layer detection can be developed as well.

Since the program created in this paper can easily be used for (clinical) research, multiple ideas emerged. Besides the differences between healthy patients and patients with a lesion, and the differences in FDG uptake between data with an ITA time > 3 hours and < 60 minutes, there is the idea to check for differences in FDG uptake over longer periods of time: at several points in time, FDG-PET scans would be made of the same patient with a lesion, in order to see how the FDG uptake changes over time.

One idea for a small investigation concerned the removal of the thyroid. After removal, depression can occur, since the hormone serotonin is no longer produced by the thyroid. In order to avoid depression, certain patients receive Thyroid Stimulating Hormones (TSH). The investigation would examine whether there are differences over time in metabolic activity between the two patient groups, one with TSH treatment and one without. This is also a suggestion for further research using the program created in this paper.

Finally, there is a part of the program linked to the grey matter detection: using the detected grey matter, the program calculates the grey matter volume. This might be used for patients with Parkinson's disease, in which the grey matter volume enlarges while the white matter volume decreases. Using the program created in this paper, additional research could have analyzed these patients in terms of grey and white matter volume, with the goal of finding a threshold ratio between the grey and white matter volumes that indicates whether a patient is affected by the disease.

5. Conclusion

Although several parts can still be added to the program, such as a brain map, improved grey matter detection and grey matter layer detection, the program works well and is on its way to improving imaging diagnostics, accuracy and reproducibility. Almost all requirements have been implemented, and many more ideas are ready to be implemented. Due to its ability to detect differences between two images in terms of standard deviations, the program can be used for (clinical) research searching for differences in FDG uptake.


6. References

[1] Lucchiari C. Botturi A. Manzini L. Et al. Experiencing brain cancer: what physicians should

know about patients. Ecancer 2015; 9(591)

[2] Luo H. Hernandez R. Hong. H. Et al. Noninvasive brain cancer imaging with bispecific

antibody fragment generated via click chemistry. Proceedings of the National Academy of

Sciences 2015;112(41):12806-12811

[3] Basu S. Alavi A. Molecular imaging (PET) of brain tumors. Neuroimaging Clin. N. Am. 2009; 19(4): 625-646

[4] Herholz K. Langen K. Schiepers C. et al. Brain tumors. Semin Nucl. Med. 2012; 42(6): 356-

370

[5] Das K. Mittal B.R. Vasistha R.K. Role of 18F-fluorodeoxyglucose positron emission

tomography scan in differentiating enhancing brain tumors. Indian J. Nucl. Med. 2011; 26(4):

171-176

[6] Aydin F. Dertsiz L. Budak E.S. et al. Measurements of tumor size using CT and PET compared

to histopathological size in non-small cell lung cancer

[7] Cistaro A. Asabella A.N. Coppolino P. Diagnostic and prognostic value for 18F-FDG in

comparison with morphological imaging in primary adrenal gland malignancies a multicenter

experience. Hell J Nucl Med 2015; 18(2): 97-102

[8] Pöpperl G. Friedrich M.D. Kreth W. et al. Analysis of 18F-FDG PET for grading of recurrent

gliomas: Is evaluation of uptake kinetics superior to standard method? J Nucl. Med. 2006; 47:

393-403

[9] Jonsson J.H. Akhtari M.M. Karlsson M.G. et al. Accuracy of inverse treatment planning on

substitute CT images derived from MR data for brain lesions. Radiation Oncology 2015; 10(13)

[10] Fornaro J. Leschka S. Hibbeln D. Dual- and multi-energy CT: approach to functional imaging. Insights Imaging 2011; 2: 149-159

[11] Liu Y. Cheng Y. Impurity channels of the long-lived Mossbauer effect. Nature scientific

reports 2015

[12] Retif. P. Pinel S. Toussaint M. et al. Nanoparticles for radiation therapy enhancement: the

key parameters. Theranostics 2015; 5(9): 1030-1045

[13] Faiz M. Khan, The physics of radiation therapy 3rd ed. (Lippincott Williams & Wilkins, 2003)

[14] Bailey D.L. Townsend D.W. Valk P.E. et al. Positron Emission Tomography. ISBN 978-1-84628-007-8 2005

[15] Soleimani M. Pengpen T. Introduction: a brief overview of iterative algorithms in X-ray

computed tomography. Phil. Trans. R. Soc. 2015; 373


[16] Asl M.N. Sadremomtaz A. Analytical image reconstruction methods in emission

tomography. J. Biomedical Science and Engineering 2013; 6: 100-107

[17] Korn A. Fenchel M. Bender B. et al. Iterative reconstruction in head ct: image quality of

routine and low-dose protocols in comparison with standard filtered back-projection. AJNR

2012; 33: 218-224

[18] Yohannes I. Kolditz D. Langner O. et al. A formulation of tissue- and water-equivalent materials using the stoichiometric analysis method for CT-number calibration in radiotherapy treatment planning. Physics in Medicine and Biology 2012; 57: 1173-1190

[19] Reeves T.E Mah P. McDavid W.D. Deriving Hounsfield units using grey levels in cone beam

CT: a clinical application. Dentomaxillofacial Radiology 2012; 41: 500-508

[20] Boellaard R. Delgado-Bolton R. Oyen W.J.G. et al. FDG PET/CT: EANM procedure

guidelines for tumour imaging: version 2.0. Eur J. Nucl. Med. Mol. Imaging 2015; 42: 328-354

[21] Okazaki Y. Yamada S. Kitada S. et al. Significance of 18F-FDG PET and immunohistochemical GLUT-1 expression for cardiac myxoma. Diagnostic Pathology 2014; 9: 117

[22] Kostakoglu L. Agress H. Goldsmith S.J. Clinical Role of FDG PET in Evaluation of cancer

patients. RadioGraphics 2003; 23: 315-340

[23] Shukla A.K. Kumar U. Positron emission tomography: An overview. J Med Phys 2006;

31(1): 13-21

[24] Galbraith S.M. Maxwell R.J. Lodge M.A. et al. Combretastatin A4 phosphate has tumor

antivascular activity in rat and man as demonstrated by dynamic magnetic resonance imaging.

Journal of Clinical Oncology 2003; 21(15): 2831-2842

[25] Rafecas M. Torres I. Spanoudaki V. Estimating accidental coincidences for pixelated PET

detectors and singles list-mode acquisition. Nuclear Instruments and Methods in Physics

Research 2007; 571: 285-288

[26] Defrise M. Panin V.Y. Casey M.E. New consistency equation for time-of-flight PET. IEEE Transactions on Nuclear Science 2013; 60: 124-133

[27] Tagare H.D. Rao M. Why does mutual-information work for image registration? A

deterministic explanation. Transactions on pattern analysis and machine intelligence 2015;

37(6): 1286-1296

[28] Oliveira F.P.M. Tavares J.M.R.S. Medical image registration: a review. Computer Methods

in Biomechanics and Biomedical Engineering 2014; 17(2), 73-93

[29] MathWorks. MATLAB support. Retrieved 24th of June 2016 from http://www.mathworks.com/support/?s_tid=gn_supp

[30] Glodeck D. Hesser J. Zheng L. Distortion correction of EPI data using multimodal nonrigid registration with an anisotropic regularization. Elsevier 2016; 34: 127-136


[31] Heinrich M.P. Jenkinson M. Bhushan M. MIND: Modality independent neighbourhood

descriptor for multi-modal deformable registration. Elsevier 2012; 16: 1423-1435

[32] Heinrich M.P. Jenkinson M. Brady J.M. et al. Textural mutual information based on cluster

trees for multimodal deformable registration. IEEE International Symposium on

Biomedical Imaging (ISBI) 2012; 9: 1471-1474

[33] Shan Z.Y. Mateja S.J. Reddick W.E. et al. Retrospective evaluation of PET-MRI registration algorithms. Journal of Digital Imaging 2011; 24: 485-493

[34] Toews M. Wells III W. Bayesian registration via local image regions: information, selection and marginalization. Inf Process Med. Imaging 2009; 21: 435-446

[35] Yanalak M. Sibson (natural neighbor) and non-sibsonian interpolation for digital elevation

model (DEM). Survey review 2013; 37(291): 360-376

[36] Raveendran H. Thomas D. Image fusion using LEP filtering and bilinear interpolation. International Journal of Engineering Trends and Technology (IJETT) 2014; 12(9): 427-431

[37] Chan K. Jiao F. Mikulski M.A. et al. Novel logistic regression model of chest CT attenuation

coefficient distributions for the automated detection of abnormal (Emphysema or ILD) versus

normal lung. Acad. Radiol. 2016; 23: 304-314

[38] Villain N. Chételat G. Grassiot B. et al. Regional dynamics of amyloid-Beta deposition in

healthy elderly, mild cognitive impairment and Alzheimer´s disease: a voxelwise PiB-PET

longitudinal study. Brain 2012; 135: 2126-2139

[39] Geiger J.T. Ding J. Crain B. Next-generation sequencing reveals substantial genetic contribution to dementia with Lewy bodies. Neurobiology of Disease 2016; 94: 55-62

[40] Kumar V. Islam A. Hassan I. et al. Delineating the relationship between amyotrophic lateral sclerosis and frontotemporal dementia: sequence and structure-based predictions. BBA - Molecular Basis of Disease 2016 PII: S0925-4439(16)30153-3

[41] Vos S.J.B. Gordon B.A. Su Y. et al. NIA-AA staging of preclinical Alzheimer disease: discordance and concordance of CSF and imaging biomarkers. Neurobiology of Aging 2016; 44: 1-8

[42] Ben-David E. Cohen J.E. Goldberg S.N. et al. Significance of enhanced cerebral gray-white

matter contrast at 80kVp compared to conventional 120 kVp CT scan in the evaluation of acute

stroke. Journal of Clinical Neuroscience 2014; 21: 1591-1594

[43] Kinahan P.E. Fletcher J.W. PET/CT Standardized Uptake Values (SUVs) in Clinical Practice

and Assessing Response to Therapy. Semin Ultrasound CT MR 2010; 31(6): 496-505

[44] Zitová B. Flusser J. Image Registration methods: a survey. Elsevier image and vision

computing 2003; 21: 977-1000

[45] Mamede M. Higashi T. Kitaichi M. et al. 18F-FDG uptake and PCNA, Glut-1, and Hexokinase-II expression in cancers and inflammatory lesions of the lung. Neoplasia 2005; 7(4): 369-379


[46] Watanabe R. Yokoya A. Fuji K. et al. DNA strand breaks by direct energy deposition by Auger and photo-electrons ejected from DNA constituent atoms following K-shell photoabsorption. International Journal of Radiation Biology 2009; 80(11-12): 823-832

[47] Surti S. Kuhn A. Werner M.E. et al. Performance of Philips Gemini TF PET/CT scanner with special consideration for its time-of-flight imaging capabilities. J. Nucl. Med. 2007; 48: 471-480