
VU Research Portal

Third harmonic generation microscopy

Zhang, Z.

2017

Document version: Publisher's PDF, also known as Version of Record

Link to publication in VU Research Portal

Citation for published version (APA): Zhang, Z. (2017). Third harmonic generation microscopy: Towards automatic diagnosis of brain tumors.

General rights: Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners, and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights.

• Users may download and print one copy of any publication from the public portal for the purpose of private study or research.
• You may not further distribute the material or use it for any profit-making activity or commercial gain.
• You may freely distribute the URL identifying the publication in the public portal.

Take down policy: If you believe that this document breaches copyright, please contact us providing details, and we will remove access to the work immediately and investigate your claim.

E-mail address: [email protected]

Download date: 09. Feb. 2021


Third harmonic generation microscopy: towards automatic diagnosis of brain tumors


This thesis was reviewed by:

prof.dr. J. Hulshof, VU University Amsterdam
prof.dr. J. Popp, Jena University
prof.dr. A.G.J.M. van Leeuwen, Academic Medical Center
prof.dr. M. van Herk, The University of Manchester
dr. I.H.M. van Stokkum, VU University Amsterdam
dr. P. de Witt Hamer, VU University Medical Center

© Copyright Zhiqing Zhang, 2017 ISBN: 978-94-6295-704-6 Printed in the Netherlands by Proefschriftmaken.

The work presented in this thesis was performed at the Biophotonics & Medical Imaging group at the LaserLab of the Department of Physics and Astronomy of the VU University and at the Department of Radiology and Nuclear Medicine of the VU University Medical Center. This work was funded by the China Scholarship Council (CSC).


VRIJE UNIVERSITEIT

Third harmonic generation microscopy: towards automatic diagnosis of brain tumors

ACADEMIC DISSERTATION

to obtain the degree of Doctor at

the Vrije Universiteit Amsterdam,

by authority of the rector magnificus

prof.dr. V. Subramaniam,

to be defended in public

before the doctoral committee

of the Faculty of Science

on Friday 3 November 2017 at 9.45

in the aula of the university,

De Boelelaan 1105

by

Zhiqing Zhang

born in Qingliu, China


promotor: prof.dr. M.L. Groot

copromotor: dr. J.C. de Munck


Contents

Chapter 1 Introduction to quantitative third harmonic generation
1.1 Third harmonic generation microscopy
1.2 Potential clinical applications and brain tumor imaging
1.3 The importance of image quantification
1.4 Challenges of THG image quantification
1.5 Main image processing tools used
1.6 PDE-based denoising: a mathematical introduction
1.7 Active contour: a mathematical introduction
1.8 Thesis outline
References

Chapter 2 Extracting morphologies from third harmonic generation images of structurally normal human brain tissue
2.1 Abstract
2.2 Introduction
2.3 Methods and algorithms
2.3.1 Image sample and acquisition
2.3.2 Anisotropic diffusion driven by salient edges
2.3.3 Active contour weighted by prior extremes
2.3.4 Post-processing
2.3.5 SHG/AF segmentation and validation method
2.4 Results and validation
2.4.1 Parameter settings
2.4.2 Segmentation evaluation
2.5 Discussion
2.6 Conclusion and outlook
2.7 Supplementary data
2.7.1 Image sample and acquisition
2.7.2 Anisotropic diffusion driven by salient edges
2.7.3 Active contour weighted by prior extremes
2.7.4 Validation method
2.7.5 Validation results
References

Chapter 3 Quantitative comparison of 3D third harmonic generation and fluorescence microscopy images
3.1 Abstract
3.2 Introduction
3.3 Sample preparation and image acquisition
3.4 Image processing
3.4.1 THG images segmentation
3.4.2 Fluorescence images segmentation
3.4.3 Quantitative comparison
3.5 Results and discussion
3.5.1 Segmentation challenges
3.5.2 Segmentation results
3.5.3 Validation and quantitative comparison
3.6 Discussion
3.7 Conclusion
References

Chapter 4 Active contour models for microscopic images with global and local intensity inhomogeneities
4.1 Abstract
4.2 Introduction
4.3 Existing active contour models
4.3.1 Level set formulation of ACMs
4.3.2 CV model
4.3.3 LIC model
4.3.4 CVPE model
4.4 Three-phase active contours weighted by prior extremes
4.4.1 ACMs weighted by prior extremes
4.4.2 Three-phase CVPE model
4.4.3 Three-phase LBFPE model
4.4.4 Three-phase LICPE model
4.4.5 Three-phase RLSFPE model
4.4.6 Numerical implementation
4.5 Experimental results
4.5.1 Intensity inhomogeneities within microscopic images
4.5.2 Comparison on two-phase images
4.5.3 Robustness to initialization
4.5.4 Comparison on three-phase THG images
4.6 Conclusion
References

Chapter 5 Tensor regularized total variation for third harmonic generation brain images
5.1 Abstract
5.2 Introduction
5.3 Related works
5.3.1 The ADF model
5.3.2 Connection between the ADF and TV models
5.3.3 The adaptive TRTV model
5.4 The proposed method
5.4.1 Efficient estimation of the diffusion tensor
5.4.2 Robust anisotropic regularization
5.4.3 A robust TRTV model
5.5 Results
5.6 Conclusion
References

Chapter 6 Rich histopathological morphology revealed by quantitative third harmonic generation microscopy for detecting human brain tumors
6.1 Abstract
6.2 Introduction
6.3 Results
6.3.1 Quantitative THG microscopy
6.3.2 Quantification of histopathological morphology
6.3.3 Difference of feature density between normal and tumor tissues
6.3.4 Low-grade versus high-grade, WM versus GM
6.3.5 Quantification of infiltrative tumor boundary
6.3.6 H&E morphologies detected by quantitative THG
6.4 Discussion
6.5 Materials and methods
6.5.1 THG microscopy and tissue preparation
6.5.2 Quantification workflow
References

Chapter 7 Discussion and outlook
7.1 General discussion
7.1.1 Automatic diagnosis of human brain tumor
7.1.2 Quantitative comparison of THG and other imaging techniques
7.2 Pushing towards the future: Outlook
7.2.1 Applying deep learning to classify THG images directly
7.2.2 Applying the developed algorithms to THG images of other tissue types
7.2.3 Combination of THG with other imaging techniques
7.2.4 Studying the tumor ecosystem with quantitative THG
7.2.5 Towards super-resolution THG
References

Index of Abbreviation
Summary
Samenvatting
总结
List of Publications
Acknowledgement


Chapter 1

Introduction to quantitative third harmonic generation


1.1 Third harmonic generation microscopy

The main optical imaging technique used in this thesis is third harmonic generation (THG) microscopy [1-3]. THG is an important label-free imaging technique that enables in-vivo study of biological materials in their natural environment. THG signals are generated by a nonlinear optical process that depends on the third-order susceptibility χ(3) of the tissue (Fig. 1.1A) and on phase-matching conditions, which make it essentially an interface-sensitive technique. Three incident photons are converted into one photon with triple the energy and one third of the wavelength (Fig. 1.1B) [4]. Because of the long wavelength used, little or no photodamage is induced by this nonlinear optical process, which allows long-term observation of living tissue [5]. The first dynamic imaging of a living system with THG microscopy was reported in 1998, on plant rhizoids [2]. Since then, THG microscopy has been successfully applied to image unstained samples such as insect embryos, plant seeds and intact mammalian tissue [6], the zebrafish nervous system [7], zebrafish embryos [8], epithelial tissues [9, 10] and mouse brain [11].

Figure 1.1 Third harmonic generation. (A) Geometry of third harmonic generation. (B) Energy level diagram of the third harmonic generation process. (C) Schematic of the setup for label-free THG brain imaging. OPO: Optical parametric oscillator, GM: Galvo mirror, SL: Scan lens, TL: Tube lens, DM: Dichroic mirror, MO: Microscope objective, IF: Interference filter, and PMT: Photomultiplier tube.

In particular, the ability to visualize brain cells, e.g., neurons, inside living brain tissue provides important research opportunities in neuroscience and has potential clinical applications in neurosurgery. The first THG image of a live neuron was reported in 1999, in a cell culture [3]. In 2011, our group reported ex-vivo and in-vivo imaging of mouse brain, revealing key brain structures such as neurons, glial cells, blood cells and blood vessels [11]. The imaging setup used for THG microscopy is shown in Fig. 1.1C. It consists of a commercial two-photon laser-scanning microscope (TriMScope I, LaVision BioTec GmbH) and a femtosecond laser source. The laser source is an optical parametric oscillator (Mira-OPO, APE) pumped at 810 nm by a Ti:sapphire oscillator (Coherent Chameleon Ultra II). The OPO generates 200 fs pulses at 1200 nm with a repetition rate of 80 MHz. The OPO beam is focused on the sample using a 25×/1.10 (Nikon APO LWD) water-dipping microscope objective (MO). The focal spot size of the 1200 nm beam on the sample was d_lateral ~0.7 μm and d_axial ~4.1 μm. Measured with 0.175 μm fluorescent microspheres, this yields two- and three-photon resolution values of Δ_2P,lateral ~0.5 μm, Δ_2P,axial ~2.9 μm, Δ_3P,lateral ~0.4 μm, and Δ_3P,axial ~2.4 μm (2P: two-photon, 3P: three-photon). Two high-sensitivity GaAsP photomultiplier tubes (PMT, Hamamatsu H7422-40) equipped with narrowband filters at 400 nm and 600 nm are used to collect the THG and second harmonic generation (SHG) signals, respectively, as a function of the position of the


focus in the sample. The signals are filtered from the 1200 nm fundamental photons by a dichroic mirror (DM1, Chroma T800LPXRXT), split into SHG and THG channels by a second dichroic mirror (DM2, Chroma T425LPXR), and passed through narrow-band interference filters (IF) for SHG (Chroma D600/10X) and THG (Chroma Z400/10X) detection. The efficient back-scattering of the harmonic signals allows for backward (epi-)detection of the THG signals. The laser beam is transversely scanned over the sample by a pair of galvo mirrors (GM). The THG and SHG modalities are intrinsically confocal and therefore provide direct depth sectioning. A full 3D image of the tissue volume is obtained by scanning the microscope objective with a stepper motor in the vertical direction. Imaging data are acquired with the TriMScope I software (“Imspector Pro”), and image stacks are stored in 16-bit TIFF format.

1.2 Potential clinical applications and brain tumor imaging

Besides the study of intact tissues, THG microscopy is also establishing itself as an important clinical tool. It shows great potential for the diagnosis of skin cancer [12], breast tumors [13, 14], and brain tumors [15]. The THG signal generated in these tissues has been shown to arise from the cell membrane, cytoplasmic organelles, hemoglobin, elastic fibers, and lipid bodies [12]. In particular, the brain is very well suited to label-free THG imaging, because it consists to a large extent of lipid-rich axons and dendrites [11]. More recently, THG has been shown to yield label-free images of ex-vivo human tumor tissue of histopathological quality, in real time [15]. Increased cellularity, nuclear pleomorphism and rarefaction of neuropil were clearly recognized in THG tumor images of fresh, unstained human brain tissue. This was the first evidence that, applying the same microscopic criteria used by pathologists, ex-vivo THG microscopy can be used to recognize the presence of diffuse infiltrative glioma in fresh, unstained human brain tissue [15]. Moreover, an optical needle with a graded-index (GRIN) objective has been developed as a step toward in-situ THG microendoscopy of tumor boundaries [15].

1.3 The importance of image quantification

In this thesis we focus on processing THG images of brain tissue (THG brain images), especially for the automatic diagnosis of brain tumors. Several reasons make quantification of THG brain images important. First, large-scale statistical analysis of THG images of healthy and brain tumor tissues will reveal the histopathological differences between healthy and tumor tissues. This goal cannot be achieved without the help of automatic image processing tools. Second, image quantification will greatly facilitate the interpretation of the rich morphologies observed in THG brain images, i.e., it will elucidate what the observed features mean. The interpretation of THG images is usually linked to images from more standard imaging techniques, e.g., fluorescence microscopy. Visual inspection and comparison of THG and a standard technique can only verify that a limited number of structures is visible in both images; it does not guarantee that each observed object, or even the majority of them, indeed corresponds to, e.g., a brain cell. Image processing tools provide the possibility of large-scale quantitative comparison of THG images and images of a more standard type. Finally, automatic image processing tools are needed to quantify pathologically relevant features (cell size, cell types, cell density of each type, etc.), enabling proper classification in the operating theater, where no pathologist may be present to interpret the acquired histopathological/THG images.

1.4 Challenges of THG image quantification

Automatic image analysis of THG brain images can not only help us to better understand the features observed in THG images, but also generate a wealth of quantitative parameters relevant for the characterization of the pathological state of the tissue. However, due to the complexity of THG images,


quantification of THG images is challenging, even with modern image processing tools for denoising and segmentation.

Figure 1.2 Rich morphologies observed within THG brain images of healthy human and mouse brain tissues. (A) A THG image of human tissue. The imaged tissue has a rough tissue boundary on the right, which appears as a large dark shadow. (B) A THG image of mouse tissue. The image contrast and intensities in the middle are higher than in the corners, indicating intensity inhomogeneity. (C) A neuron with lipofuscin granules inside, observed in human tissue. (D) A typical brain cell observed in mouse tissue. (E) A microvessel. (F) Neuropil formed by cellular processes (a bright vertical axon can be seen in the middle).

To illustrate the challenges, THG brain images of mouse and human brain tissues are shown in Fig. 1.2. First, the observed features appear both as dark and as bright objects and pose a 3-phase segmentation problem: ‘dark’ objects, ‘bright’ objects and a background of intermediate intensity (Fig. 1.2A-B). Brain cells are visible as dark holes (Fig. 1.2C-D) and are the salient features of THG images of mouse and human brain tissues [11, 15]. Dark objects include neurons (Fig. 1.2C), glial cells, dark blood vessels (Fig. 1.2E), and the surrounding small cells. Bright objects mainly include lipofuscin granules inside the dark objects (Fig. 1.2C) and neuropil consisting of axons and dendrites (Fig. 1.2F). Second, the rich morphologies and the associated noise of THG brain images make image denoising challenging, because all morphological information, e.g. the neuropil, must be preserved. Third, THG brain images usually suffer from low contrast in the corners because of imperfections of the imaging system (Fig. 1.2A-B), and image segmentation algorithms should therefore account for a non-uniform background. Fourth, the imaged tissues often have a rough surface (the dark shadow on the right part of Fig. 1.2A), which not only results in a depth-varying intensity inhomogeneity but also poses challenges for the removal of dark shadows in the post-processing phase. Finally, due to the novelty and complexity of THG brain images, the validation of the segmentation is a challenge in its own right, because no ground truth is available in advance. In summary, automatic analysis of 2D/3D THG images is hampered by the 3-phase segmentation aspect, the low signal-to-noise ratio, the intensity inhomogeneity and low local contrast, the post-processing and the validation of results. These difficulties pose serious challenges in five different sub-domains of image processing: (1) contrast enhancement, aiming to enhance the global/local contrast of an image and attenuate the intensity inhomogeneity; (2) image denoising, aiming to remove the image noise and reconstruct objects of interest; (3) image segmentation, aiming to extract the targeted objects within a homogeneous or inhomogeneous background; (4) post-processing, aiming to


keep only the objects of interest; (5) validation, aiming to evaluate the accuracy of the segmentation results after post-processing.

At the onset of the PhD project reported here, no image processing tools were available that were specifically suited to THG brain images. Most of the existing image processing algorithms, e.g., spatial filtering [16], global and local intensity thresholding (such as Otsu's method and the Sauvola method) [16, 17] and the seeded watershed transform [18, 19], were not capable of addressing the above challenges to a satisfactory level. Therefore, new image processing tools needed to be developed in order to enable quantitative analysis of THG images and unlock their potential in various clinical applications.

1.5 Main image processing tools used

To address the main image processing challenges inherent to THG images, an integrated workflow should generally consist of four major steps (Fig. 1.3): preprocessing, denoising, segmentation and post-processing. The preprocessing step mainly includes histogram truncation to enhance the global image contrast, local histogram equalization to enhance the local image contrast, and intensity correction along the depth. Denoising and segmentation are the two main problems that will be addressed specifically in this work. The post-processing step addresses problems like object clump splitting and candidate selection.

Partially overlapped sub-block histogram equalization (POSHE) [20] is exploited to enhance the local contrast and attenuate intensity inhomogeneity. Partial differential equation (PDE) based methods are used for image denoising and segmentation. PDEs have led to an entirely new field in image processing, and hundreds of publications have appeared in the last decade. PDE-based image denoising is used to remove image noise while keeping the object edges sharp [21]. Another PDE-based method, the active contour [22], is used for segmentation. The methods involved in the post-processing include general filters like the watershed transform [18] and morphological filters [23] to split detected objects.
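To make the four-step workflow concrete, the sketch below assembles such a pipeline from off-the-shelf scikit-image/SciPy building blocks used as stand-ins for the methods developed in this thesis (CLAHE in place of POSHE, TV denoising in place of the ADF models of chapters 2 and 5, a global Otsu threshold in place of the active contour models of chapters 2 and 4, and a marker-based watershed for clump splitting). The file name and all parameter values are illustrative assumptions, not the settings used in this work.

```python
# Minimal sketch of the workflow of Fig. 1.3 with generic scikit-image/SciPy
# components; the file name and parameters are illustrative only.
import numpy as np
from scipy import ndimage as ndi
from skimage import exposure, feature, filters, io, restoration, segmentation

img = io.imread("thg_slice.tif").astype(float)      # hypothetical 2D THG slice
img = (img - img.min()) / (np.ptp(img) + 1e-12)     # rescale to [0, 1]

# 1. Pre-processing: local contrast enhancement (CLAHE as a POSHE stand-in)
pre = exposure.equalize_adapthist(img, clip_limit=0.02)

# 2. Denoising: total-variation regularization (cf. Section 1.6)
den = restoration.denoise_tv_chambolle(pre, weight=0.05)

# 3. Segmentation: global threshold for the bright phase (placeholder for
#    the three-phase active contours of Section 1.7)
bright = den > filters.threshold_otsu(den)

# 4. Post-processing: split touching objects with a marker-based watershed
distance = ndi.distance_transform_edt(bright)
coords = feature.peak_local_max(distance, min_distance=5, labels=bright)
markers = np.zeros(bright.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
labels = segmentation.watershed(-distance, markers, mask=bright)

print(f"{labels.max()} bright objects detected")
```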

To highlight the long history and the importance of the two PDE-based methods, a general introduction to PDE-based denoising and segmentation is given in Sections 1.6 and 1.7, respectively.

Figure 1.3 The general workflow for the THG image processing. Four major steps are involved.

1.6 PDE-based denoising: a mathematical introduction

Image denoising aims to restore a clean image from its noisy counterpart while preserving sharp edges. PDEs play an important role in image denoising, because PDE-based methods are among the mathematically best-founded techniques in image processing [24]. The PDE-based methods arose from successful attempts to overcome the blurring effect of simple Gaussian smoothing.


Let $f$ denote an $m$-D ($m = 2$ or $3$) image on the image domain $\Omega$. The Gaussian filter $K_\sigma$ with standard deviation $\sigma$ is equivalent to the linear diffusion process

$$\partial_t u = \mathrm{div}(\nabla u), \qquad (1.1)$$

$$u(\mathbf{x}, 0) = f(\mathbf{x}), \qquad (1.2)$$

stopped at time $t = \sigma^2/2$, because of the classical mathematical result that the linear diffusion possesses the following solution [25],

$$u(\mathbf{x}, t) = \begin{cases} f(\mathbf{x}) & (t = 0), \\ (K_{\sqrt{2t}} * f)(\mathbf{x}) & (t > 0). \end{cases} \qquad (1.3)$$
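The equivalence (1.1)-(1.3) is easy to verify numerically: running an explicit finite-difference scheme for the heat equation up to time $t = \sigma^2/2$ reproduces Gaussian filtering with standard deviation $\sigma$, up to discretization and boundary effects. A minimal sketch (unit grid spacing, time step chosen for stability of the explicit scheme; all values are illustrative):

```python
# Numerical check of Eqs. (1.1)-(1.3): explicit linear diffusion up to
# t = sigma^2 / 2 versus direct Gaussian filtering with the same sigma.
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

rng = np.random.default_rng(0)
f = rng.random((128, 128))

sigma = 2.0
t_stop = sigma ** 2 / 2.0      # stopping time of the diffusion, Eq. (1.3)
dt = 0.2                       # explicit 2D scheme is stable for dt <= 0.25
n_steps = int(round(t_stop / dt))

u = f.copy()
for _ in range(n_steps):
    u = u + dt * laplace(u)    # u_t = div(grad u) = Laplacian of u

u_gauss = gaussian_filter(f, sigma)
print("max |diffused - Gaussian| =", np.abs(u - u_gauss).max())
```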

The linear diffusion filter not only smooths noise, but also blurs edges [26]. The Perona-Malik (PM) model [26], proposed in 1990, was the first PDE-based model that attempted to overcome the drawbacks of the linear diffusion filter. The PM model replaces the constant coefficient in the linear diffusion equation (1.1) by a spatially varying one derived from an edge detector $g$,

$$\partial_t u = \mathrm{div}\!\left( g(|\nabla u|^2)\, \nabla u \right). \qquad (1.4)$$
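As a concrete illustration of (1.4), the sketch below runs an explicit-scheme Perona-Malik iteration on a 2D image with the common diffusivity choice $g(s^2) = \exp(-s^2/\kappa^2)$. The edge threshold $\kappa$, the time step and the periodic boundary handling via np.roll are simplifying assumptions, not the scheme used later in this thesis.

```python
# Explicit-scheme Perona-Malik diffusion, Eq. (1.4), with
# g(s^2) = exp(-s^2 / kappa^2). np.roll gives periodic boundaries here,
# a simplification; kappa acts as the edge threshold.
import numpy as np

def perona_malik(f, n_iter=50, kappa=0.1, dt=0.2):
    u = f.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        # differences to the four nearest neighbours
        d_n = np.roll(u, 1, axis=0) - u
        d_s = np.roll(u, -1, axis=0) - u
        d_w = np.roll(u, 1, axis=1) - u
        d_e = np.roll(u, -1, axis=1) - u
        # diffusion is inhibited (g small) across large differences, i.e. edges
        u = u + dt * (g(d_n) * d_n + g(d_s) * d_s + g(d_w) * d_w + g(d_e) * d_e)
    return u
```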

The modulus of the gradient is used to guide the diffusion process, inhibiting diffusion at locations where clear edges are present while encouraging diffusion elsewhere. In this way, noise in the background is well suppressed. The PM model (1.4) can be implemented with an explicit scheme, but to reach a more efficient algorithm, a semi-implicit scheme is usually exploited [27]. In this context, the terms implicit and explicit refer to the way the temporal derivative is discretized. Towards an even more efficient algorithm, a diffusion model of this kind can be linked to another well-known PDE-based denoising model [28], the total variation (TV) model,

$$\min_u \int_\Omega |\nabla u|\, d\mathbf{x} + \frac{\lambda}{2}\, \|u - f\|_2^2. \qquad (1.5)$$

λ is the coefficient used to control the smoothness of the minimizer. The TV model was studied by Rudin, Osher and Fatemi in 1992 [29], who used the gradient descent method to solve the minimization problem (1.5). It led to the Euler-Lagrange (EL) equation, as follows,

$$\partial_t u = \mathrm{div}\!\left( \frac{\nabla u}{|\nabla u|} \right) - \lambda\, (u - f). \qquad (1.6)$$

Therefore, the TV minimization gives the diffusion term $\mathrm{div}(|\nabla u|^{-1}\, \nabla u)$, which smooths the image by mean curvature flow, i.e., a non-linear diffusion with coefficient $|\nabla u|^{-1}$. The physical interpretation of this equation is that diffusion (smoothing) is inhibited close to image edges, where the image gradient $|\nabla u|$ is large, and encouraged in homogeneous areas with small variations.
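The gradient flow (1.6) can be implemented directly. The sketch below uses an ε-regularized gradient magnitude to avoid division by zero; it is meant only to illustrate the diffusion interpretation and is far slower than the primal-dual and split Bregman solvers discussed next. All parameter values are illustrative assumptions.

```python
# Gradient descent on the ROF functional via Eq. (1.6), with |grad u|
# replaced by sqrt(|grad u|^2 + eps^2) to avoid division by zero.
# Illustrative only: very small time steps are needed for stability.
import numpy as np

def tv_gradient_descent(f, lam=0.1, n_iter=500, dt=0.01, eps=0.1):
    u = f.astype(float).copy()
    for _ in range(n_iter):
        gy, gx = np.gradient(u)
        norm = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
        # curvature term div(grad u / |grad u|) of Eq. (1.6)
        curv = np.gradient(gx / norm, axis=1) + np.gradient(gy / norm, axis=0)
        u = u + dt * (curv - lam * (u - f))
    return u
```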

The convenience of connecting the diffusion approach (1.4) to the variational approach (1.5) is two-fold. On the one hand, the behavior of the two approaches is easier to analyze in terms of diffusion; on the other hand, the convexity of the variational approach makes both approaches easier to solve numerically


with the well-established theory of convex optimization. The algorithm obtained by applying gradient descent to the primal functional (1.5) is called the primal gradient algorithm. It runs into trouble where the gradient of the solution is zero, because the functional is not differentiable there. Chambolle's dual algorithm [30] was proposed to overcome this problem by solving the dual functional of (1.5), which expresses the TV term (the first term of (1.5)) as

$$\int_\Omega |\nabla u|\, d\mathbf{x} = \sup\left\{ \int_\Omega u(\mathbf{x})\, \mathrm{div}\, \xi(\mathbf{x})\, d\mathbf{x} \;:\; \xi \in C_c^1(\Omega; \mathbb{R}^m),\ |\xi(\mathbf{x})| \le 1\ \forall\, \mathbf{x} \in \Omega \right\}. \qquad (1.7)$$

Note that $\nabla u$ disappears from this expression. Although the resulting dual algorithm overcomes the slow convergence of the primal algorithm, the rank-deficient operator $\mathrm{div}$ in (1.7) makes the dual minimizers possibly non-unique. Therefore, primal-dual gradient hybrid algorithms [31-33] were proposed to benefit from both the primal and dual approaches. The split Bregman method [34] is another important approach to minimize the functional (1.5), but it has been shown to be less efficient than the primal-dual approach [33].

In practice, neither the diffusion model (1.4) nor the TV model (1.5) is able to eliminate noise at edges in all circumstances, because only the modulus of the edges is considered. Moreover, in certain applications it is desirable to bias the diffusion towards the orientation of interesting features, e.g., a flow structure. These requirements cannot be satisfied by a scalar diffusivity anymore, and a diffusion tensor, leading to anisotropic diffusion filters, has to be introduced.

Anisotropic diffusion (AD) takes into account not only the modulus of the edge, but also the diffusion directions [21]. AD acts like a Gaussian filter in the homogeneous background. At edge locations, AD inhibits diffusion across the edges of objects, and the noise on the edge is removed by allowing diffusion along the edges. The edge direction is usually indicated by the eigenvector direction with the smallest variation. The partial differential equation of AD is defined as follows,

$$\partial_t u = \mathrm{div}(D\, \nabla u), \qquad (1.8)$$

where $u$ denotes a 3D image and $D$ is the diffusion tensor, which depends on the gradient $\nabla u_\sigma$ of a Gaussian-smoothed version of the image. The diffusion tensor $D$ is constructed from the structure tensor, defined as

$$J_\rho(\nabla u_\sigma) = K_\rho * \left( \nabla u_\sigma \otimes \nabla u_\sigma \right), \qquad \rho \ge 0, \qquad (1.9)$$

where each component of the resulting matrix of the tensor product is convolved with a Gaussian kernel $K_\rho$ of standard deviation $\rho$. The standard deviation $\sigma$ denotes the noise scale, and $\rho$ is the integration scale, which reflects the characteristic size of the texture and is usually large compared to the noise scale $\sigma$ [21]. The structure tensor $J$ can be decomposed as the product of its eigenvectors and the diagonal matrix of its eigenvalues. We denote the eigenvectors of $J$ as $v_i$, $i = 1, 2, 3$, with corresponding eigenvalues $\mu_i$. The eigenvectors are ordered decreasingly according to their eigenvalues.

The information of eigenvectors and eigenvalues summarizes the distribution of the gradient directions within the neighborhood of a point. This information can be visualized as an ellipsoid whose semi-axes are equal to the eigenvalues and directed along their corresponding eigenvectors (Fig. 1.4A). In particular,


if $\mu_1$ is much larger than both $\mu_2$ and $\mu_3$, the ellipsoid is stretched along one axis only. The gradients in the neighborhood are then predominantly aligned with the direction $v_1$, which can occur when the point lies on a thin plate-like feature (Fig. 1.4B). If $\mu_3$ is much smaller than both $\mu_1$ and $\mu_2$, the ellipsoid is flattened in one direction only. The gradient directions are spread out but perpendicular to $v_3$, which can occur when the point lies on a thin line-like feature (Fig. 1.4C). If the ellipsoid is roughly spherical, i.e., $\mu_1 \approx \mu_2 \approx \mu_3$, the gradient directions are more or less evenly distributed, which happens when the neighborhood of the point has spherical symmetry (Fig. 1.4D). Finally, if the three eigenvalues are zero, the ellipsoid degenerates to a point, indicating that the point lies in the background.

Figure 1.4 The distribution of the gradient directions within the neighborhood of a point. (A) Ellipsoidal representation of the 3D structure tensor. (B) The structure tensor ellipsoid of a plate-like neighborhood. (C) The structure tensor of a line-like neighborhood. (D) The structure tensor of an isotropic neighborhood. Note that all four pictures were downloaded from Wikipedia and the numbers indicate eigenvector directions.

With this information on the distribution of the local gradients, one can design a new diffusion tensor according to the kind of structures one wants to reconstruct. This is done by constructing the new diffusion tensor $D$ from $J$ by replacing all eigenvalues $\mu_i$ by $\lambda_i$, as follows,

$$D = (v_1\ v_2\ v_3)\; \mathrm{diag}(\lambda_1, \lambda_2, \lambda_3)\; (v_1\ v_2\ v_3)^T. \qquad (1.10)$$

Here $\lambda_i$ represents the amount of desired diffusivity along the eigenvector direction $v_i$. Based on this construction procedure, various tensor-driven diffusion models have been developed in recent years to reconstruct objects such as vessels in macroscopic medical images [35], fiber-like structures [36] and membranes in 3D microscopic images [37], and 2D blobs and ridges in remote sensing images [38]. In these models, the third diffusivity is always set to 1 to restore fiber-like structures, and the second one approaches 1 when the structure tensor ellipsoid indicates a plate-like object. Both the explicit and the semi-implicit scheme have been widely used to implement AD models, but the explicit scheme requires a very small time step in order to be stable, resulting in a less efficient algorithm [21]. In this thesis the semi-implicit AOS-stabilized scheme proposed by Weickert [21] is used to implement the AD models. More recently, using the same link between the diffusion model (1.4) and the TV model (1.5), AD has been combined with the TV model to reach more robust and efficient denoising algorithms [39, 40].
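To make (1.8)-(1.10) concrete, the sketch below computes a 3D structure tensor with NumPy/SciPy, replaces its eigenvalues by a simple edge-dependent diffusivity, and takes one naive explicit (central-difference) diffusion step. The diffusivity choice and all parameters are illustrative assumptions; this is neither the salient-edge model of chapter 2 nor the semi-implicit AOS scheme actually used in this thesis.

```python
# Structure tensor (1.9), diffusion tensor (1.10) and one explicit step of the
# anisotropic diffusion equation (1.8) for a 3D image. Illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor(u, sigma=1.0, rho=2.0):
    """J_rho(grad u_sigma): sigma = noise scale, rho = integration scale."""
    u_s = gaussian_filter(u.astype(float), sigma)
    g = np.gradient(u_s)                         # gradient along the 3 axes
    J = np.empty(u.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = gaussian_filter(g[i] * g[j], rho)
    return J

def diffusion_tensor(J, k=1e-3):
    """Build D per Eq. (1.10): keep the eigenvectors of J, replace the
    eigenvalues mu_i by diffusivities lambda_i. Here diffusion is reduced
    only across the dominant-variation direction (a simple, assumed choice)."""
    mu, V = np.linalg.eigh(J)                    # mu ascending; columns of V are eigenvectors
    lam = np.ones_like(mu)
    lam[..., 2] = np.exp(-mu[..., 2] / (mu[..., 2].mean() + k))
    # D = V diag(lam) V^T, voxel-wise
    return np.einsum('...ij,...j,...kj->...ik', V, lam, V)

def ad_step(u, D, dt=0.1):
    """One naive explicit step of u_t = div(D grad u), central differences."""
    g = np.gradient(u.astype(float))
    flux = [sum(D[..., i, j] * g[j] for j in range(3)) for i in range(3)]
    return u + dt * sum(np.gradient(flux[i], axis=i) for i in range(3))
```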

1.7 Active contour: a mathematical introduction

The use of active contour models, or snakes, for image segmentation has a history of nearly 30 years, dating back to the late 1980s. In 1988, the original active contour model (ACM) was proposed by Kass et al. [41]. The basic idea was to start from a curve around the object to be detected and to evolve it


subject to constraints towards the boundary of the object of interest. In this first version of the ACM, the contour was explicitly given by parametric curves, and object detection was based on the minimization of a cost function that depended on these parameters. The parametric ACM is one of the edge-based ACMs and has been successful in several applications, but because of the requirement to start from an explicit contour parameterization it has some intrinsic drawbacks, such as its difficulty in handling topological changes during the evolution of the contour [42].

Different from the parametric ACM, the level set method is another approach to curve evolution, because it allows for automatic topological changes [42]. Within the level set scheme, the discretization of the curve evolution problem can be made on a fixed rectangular grid, in contrast to the parametric model. Based on this observation, the first region-based ACM that used the level set scheme was proposed by Mumford and Shah [43], widely known as the piecewise smooth (PS) or Mumford-Shah (MS) ACM. The region-based ACMs are able to detect objects whose edges are not well defined, which is impossible for the edge-based ACMs, which can detect only objects with edges defined by gradients.

Let $\Omega$ be the image domain and $I: \Omega \to \mathbb{R}$ a gray-level image. In [43], a segmentation of the image is achieved by finding a contour $C$, which separates the image domain $\Omega$ into disjoint regions $\Omega_1, \ldots, \Omega_N$, and a PS function $u$ that approximates the image $I$ and is smooth inside each region $\Omega_i$. Mumford and Shah formulated this segmentation problem as the minimization of the following functional,

$$E^{MS}(u, C) = \mu \cdot \mathrm{Length}(C) + \lambda \int_\Omega |I(\mathbf{x}) - u(\mathbf{x})|^2\, d\mathbf{x} + \int_{\Omega \setminus C} |\nabla u(\mathbf{x})|^2\, d\mathbf{x}. \qquad (1.11)$$

On the right hand side of (1.11), the first term is introduced to regularize the contour C. The second term is the data term, which forces u to be close to the image I. The third term is the smoothing term, which forces u to be smooth within each of the regions separated by the contour C. The PS model is able to extract objects of interest from images with or without intensity inhomogeneity, but it needs intensive computational effort to converge.

To reach a more economical model, Chan and Vese simplified the PS model in 2001 by assuming that the image $I$ can be approximated by a piecewise constant (PC) function $u$ [22]. This PC model, also called the CV model, is one of the state-of-the-art ACMs. It segments the image $I$ by finding a PC function that takes the value $c_1$ inside the foreground and $c_2$ outside. It is formulated as follows,

$$E^{CV}(c_1, c_2, C) = \mu \cdot \mathrm{Length}(C) + \int_{\Omega_1} |I(\mathbf{x}) - c_1|^2\, d\mathbf{x} + \int_{\Omega_2} |I(\mathbf{x}) - c_2|^2\, d\mathbf{x}. \qquad (1.12)$$

Figure 1.5 The zero level set of a Lipschitz function $\phi$ used to represent the curve $C$.


Within the level set scheme, the curve $C$ is represented by the zero level set of a Lipschitz function $\phi$, such that $\Omega_1 = \{\mathbf{x} \mid \phi(\mathbf{x}) > 0\}$ and $\Omega_2 = \{\mathbf{x} \mid \phi(\mathbf{x}) < 0\}$ (Fig. 1.5). This curve automatically partitions the image domain $\Omega$ into foreground $\Omega_1$ and background $\Omega_2$, and therefore defines the segmentation. Using the Heaviside function $H$ and the one-dimensional Dirac measure $\delta$, defined respectively by

$$H(z) = \begin{cases} 1, & \text{if } z \ge 0, \\ 0, & \text{if } z < 0, \end{cases} \qquad \delta(z) = \frac{d}{dz} H(z), \qquad (1.13)$$

the energy function (1.12) is expressed as follows,

$$E^{CV}(c_1, c_2, \phi) = \mu \int_\Omega |\nabla H(\phi(\mathbf{x}))|\, d\mathbf{x} + \int_\Omega |I(\mathbf{x}) - c_1|^2\, H(\phi(\mathbf{x}))\, d\mathbf{x} + \int_\Omega |I(\mathbf{x}) - c_2|^2\, \left(1 - H(\phi(\mathbf{x}))\right) d\mathbf{x}. \qquad (1.14)$$

To make the curve $C$ evolve to the object boundaries, we need to minimize the energy function (1.14). Since this minimization problem is not convex, the gradient descent method is the most commonly used method to minimize (1.14). First, keeping $c_1$ and $c_2$ fixed and minimizing (1.14) with respect to $\phi$, we deduce the associated Euler-Lagrange equation for $\phi$,

$$\frac{\partial \phi}{\partial t} = \delta_\varepsilon(\phi) \left[ \mu\, \nabla \cdot \frac{\nabla \phi}{|\nabla \phi|} - (I - c_1)^2 + (I - c_2)^2 \right]. \qquad (1.15)$$

Here $\delta_\varepsilon$ is the regularized Dirac function. The artificial time $t$ is used to parameterize the descent direction, with the initial level set function $\phi_0$ defining the initial contour. Second, keeping $\phi$ fixed, minimizing the energy function (1.14) with respect to $c_i$, $i = 1, 2$, shows that these variables are the means of the image over the foreground and the background, respectively,

$$c_1 = \frac{\int_\Omega I(\mathbf{x})\, H(\phi(\mathbf{x}))\, d\mathbf{x}}{\int_\Omega H(\phi(\mathbf{x}))\, d\mathbf{x}}, \qquad c_2 = \frac{\int_\Omega I(\mathbf{x})\, \left(1 - H(\phi(\mathbf{x}))\right) d\mathbf{x}}{\int_\Omega \left(1 - H(\phi(\mathbf{x}))\right) d\mathbf{x}}. \qquad (1.16)$$

Equations (1.15) and (1.16) are iterated until a steady state of $\phi$ or a fixed number of iterations is reached. Note that the central difference scheme is used to compute all spatial partial derivatives, and the forward difference scheme is used to compute the temporal partial derivative [44].
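A compact NumPy rendering of the update equations (1.15)-(1.16) for a 2D image is given below, using a smoothed Heaviside/Dirac pair in place of (1.13). The initialization, regularization, step size and stopping rule are illustrative assumptions, and this is the plain two-phase CV model rather than the weighted three-phase variants developed in chapters 2 and 4.

```python
# Two-phase Chan-Vese iteration implementing Eqs. (1.15)-(1.16) on a 2D image,
# with a smoothed Heaviside/Dirac pair. Parameters are illustrative.
import numpy as np

def heaviside(z, eps=1.0):
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(z / eps))

def dirac(z, eps=1.0):
    return (eps / np.pi) / (eps ** 2 + z ** 2)

def curvature(phi, tiny=1e-8):
    """div(grad phi / |grad phi|), central differences."""
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx ** 2 + gy ** 2) + tiny
    return np.gradient(gx / norm, axis=1) + np.gradient(gy / norm, axis=0)

def chan_vese(I, n_iter=200, mu=0.2, dt=0.5, eps=1.0):
    I = I.astype(float)
    I = (I - I.min()) / (np.ptp(I) + 1e-12)          # rescale to [0, 1]
    # simple initialization: a centred circle as the zero level set
    yy, xx = np.mgrid[:I.shape[0], :I.shape[1]]
    phi = min(I.shape) / 4.0 - np.hypot(yy - I.shape[0] / 2, xx - I.shape[1] / 2)
    for _ in range(n_iter):
        H = heaviside(phi, eps)
        c1 = (I * H).sum() / (H.sum() + 1e-8)            # Eq. (1.16), foreground mean
        c2 = (I * (1 - H)).sum() / ((1 - H).sum() + 1e-8)  # Eq. (1.16), background mean
        force = mu * curvature(phi) - (I - c1) ** 2 + (I - c2) ** 2
        phi = phi + dt * dirac(phi, eps) * force         # Eq. (1.15)
    return phi > 0                                       # foreground mask
```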

There are two issues that strongly influence the performance of ACMs, including the CV model. One issue is that all ACMs are initialization dependent. A good segmentation initialization will not only produce the correct segmentation result but also significantly decrease the computational effort. The non-PDE-based version of the CV model [45] provides a reasonable initialization. Another issue of the level set approach is the re-initialization of the level set function. To prevent the level set function from becoming too flat, it needs to be reinitialized to a signed distance function, which is very time-consuming. Some re-initialization-free frameworks [44, 46] have been proposed to overcome this issue. Another approach to overcome the re-initialization issue is to reformulate the minimization problem (1.12) as a convex minimization problem, which can be solved with well-established convex analysis theory [47, 48].


In the past decade, it has been demonstrated that the CV model is a powerful tool for cell/nuclei segmentation [49, 50], but it has also been shown that this model is not directly applicable to images with intensity inhomogeneity. Several modifications have been proposed to overcome the inhomogeneity, e.g., the multiphase CV model [51], the local binary fitting (LBF) model [52] and the local intensity clustering (LIC) model [53]. The multiphase CV model is the n-phase extension of the CV model. The LBF model and LIC model do not assume homogeneity of the background and foreground, and thus they are able to deal with specific types of intensity inhomogeneity.

The LBF model partitions the image $I$ into smooth regions represented by functions $g_i$, $i = 1, 2$, as follows,

$$E^{LBF}(g_1, g_2, C) = \mu \cdot \mathrm{Length}(C) + \int \left[ \int_{\Omega_1} K(\mathbf{y} - \mathbf{x})\, |I(\mathbf{x}) - g_1(\mathbf{y})|^2\, d\mathbf{x} \right] d\mathbf{y} + \int \left[ \int_{\Omega_2} K(\mathbf{y} - \mathbf{x})\, |I(\mathbf{x}) - g_2(\mathbf{y})|^2\, d\mathbf{x} \right] d\mathbf{y}. \qquad (1.17)$$

The truncated Gaussian kernel $K$ of standard deviation $\sigma$ is used to control the smoothness of the functions $g_i$ at a controllable scale $\sigma$. The LBF model outperforms the PS and PC models and is to some extent able to deal with intensity inhomogeneity, but it has also been shown that the LBF model attains better performance when it is combined with a global guidance, e.g., the CV model [54].

The LIC model [53] considers both the local intensity variation and a global intensity guidance. It assumes that the image $I$ can be modeled as the product of a bias field $b$ and a PC function $J$, $I(\mathbf{x}) = b(\mathbf{x})\, J(\mathbf{x})$. The bias field $b$ accounts for the intensity inhomogeneity; it varies slowly and is therefore locally approximately constant. With a Gaussian kernel $K$ of standard deviation $\sigma$, truncated to a local square window of width $\rho = 4\sigma + 1$, the energy function of the LIC model is formulated as follows,

$$E^{LIC} = \mu\, L(\phi) + \int \left[ \int_{\Omega_1} K(\mathbf{y} - \mathbf{x})\, |I(\mathbf{x}) - b(\mathbf{y})\, c_1|^2\, d\mathbf{x} \right] d\mathbf{y} + \int \left[ \int_{\Omega_2} K(\mathbf{y} - \mathbf{x})\, |I(\mathbf{x}) - b(\mathbf{y})\, c_2|^2\, d\mathbf{x} \right] d\mathbf{y}. \qquad (1.18)$$

It measures the sum of all local errors introduced by the multiplicative approximation. The LIC model outperforms most ACMs on images with intensity inhomogeneity. Nevertheless, it has been shown that the LIC model gives better segmentations when it is combined with the CV model [55].

1.8 Thesis outline

The work in this thesis is focused on developing new image processing tools for THG brain images, in particular algorithms for image denoising and segmentation. The other aforementioned challenges, e.g., contrast enhancement and validation, are also thoroughly addressed along with these two problems. The ultimate goal is to use the developed tools to quantify the pathological features present in THG images of human brain tumors and of structurally normal brain tissue, based on which we will be able to classify THG brain images and provide the surgeon with feedback on the nature of the imaged tissue.

In chapter 2 a novel 3D anisotropic diffusion filter (ADF) model, a new 3D active contour model and an integrated image segmentation workflow are presented to address the image denoising and the 3-phase segmentation problem. This ADF reconstructs noise-free THG images while retaining the salient objects. The proposed active contour model accurately extracts the key pathological features observed by THG. A watershed-based algorithm is also presented to remove the dark shadow caused by the rough surface of the imaged tissue. 3D THG images of structurally normal human brain tissue are used to test the proposed algorithms. Several THG images are manually segmented as ground truth to validate the segmentation results. Using


the same segmentation workflow, SHG/auto-fluorescence images acquired simultaneously from the same tissue areas are segmented and quantitatively compared to THG images, in order to confirm the correctness of the main THG features detected.

Chapter 3 describes one of the two main problems to be addressed in this thesis, i.e., how to interpret THG brain images. THG and fluorescence images are acquired simultaneously from the same mouse brain tissue area. Using the new models described in chapter 2, an integrated image segmentation workflow is proposed to segment both the THG and the corresponding fluorescence images. A watershed-based algorithm is presented to split cells that slightly touch each other. A human observer thoroughly validates the segmentation results via visual inspection of all the detected dark brain cells and nuclei in THG and fluorescence images. A quantitative comparison between THG and fluorescence images confirms the correctness of interpreting dark and bright objects as brain cells.

Chapter 4 generalizes and deepens the main idea presented in chapter 2, namely the use of a priori information to segment images with the intensity inhomogeneities observed in microscopic images, including THG, SHG and fluorescence brain images. Because the existing ACMs fail to segment these microscopic images with both global and local intensity inhomogeneities, a general form of their energy functions is formulated so that the prior information can be combined with a wide range of ACMs. Such a modification enables more accurate segmentation of THG images in the presence of intensity inhomogeneities.

Chapter 5 focuses on the computational aspects of the ADF models, in contrast to chapter 2 where a salient edge-enhancing model of ADF has been proposed. A novel framework of ADF is proposed to accelerate the existing ADF models, by allowing diffusion only in the non-flat areas. ADF is reformulated in terms of another classical PDE-based denoising model, the total variation model. The resulting convex minimization problem is solved by an efficient and easy-to-code primal-dual algorithm. Compared to the existing ADF, the new denoising model also significantly improves the denoising effect.

In chapter 6 the most important problem of this thesis is addressed, i.e., the application of THG images to the detection of brain tumor boundaries. The image processing tools developed in the previous chapters are applied to quantify THG brain images from 12 patients undergoing neurosurgery, of which 8 were diagnosed with low-grade glioma, 2 with high-grade glioma, and 2 with epilepsy (as structurally normal reference). Pathologically relevant features, i.e., brain cells, nuclei, neuropil and large bright cells, are detected with high accuracy. Statistical analysis of the density of the quantified features reveals the quantitative differences among THG brain images of low-grade tumor, high-grade tumor and structurally normal human brain tissue. The derived density thresholds of these features enable the detection of tumor infiltration, and thus of the tumor boundary, with high sensitivity and specificity. From these results we conclude that quantitative THG microscopy holds potential for improving the accuracy of brain tumor surgery, without the need for expert interpretation in the operating theater.

The scientific content of this thesis ends with chapter 7, which gives an overall discussion of the obtained results and provides an outlook on future research.


References

[1] Y. Barad, H. Eisenberg, M. Horowitz, and Y. Silberberg, "Nonlinear scanning laser microscopy by third harmonic generation," Applied Physics Letters, vol. 70, pp. 922-924, Feb 1997.
[2] J. A. Squier, M. Muller, G. J. Brakenhoff, and K. R. Wilson, "Third harmonic generation microscopy," Optics Express, vol. 3, pp. 315-324, Oct 1998.
[3] D. Yelin and Y. Silberberg, "Laser scanning third-harmonic-generation microscopy in biology," Optics Express, vol. 5, pp. 169-175, Oct 1999.
[4] R. W. Boyd, "Nonlinear optics," in Handbook of Laser Technology and Applications (Three-Volume Set), ed: Taylor & Francis, 2003, pp. 161-183.
[5] V. Andresen, S. Alexander, W. M. Heupel, M. Hirschberg, R. M. Hoffman, and P. Friedl, "Infrared multiphoton microscopy: subcellular-resolved deep tissue imaging," Current Opinion in Biotechnology, vol. 20, pp. 54-62, Feb 2009.
[6] D. Debarre, W. Supatto, A. M. Pena, A. Fabre, T. Tordjmann, L. Combettes, M. C. Schanne-Klein, and E. Beaurepaire, "Imaging lipid bodies in cells and tissues using third-harmonic generation microscopy," Nature Methods, vol. 3, pp. 47-53, Jan 2006.
[7] S. Y. Chen, C. S. Hsieh, S. W. Chu, C. Y. Lin, C. Y. Ko, Y. C. Chen, H. J. Tsai, C. H. Hu, and C. K. Sun, "Noninvasive harmonics optical microscopy for long-term observation of embryonic nervous system development in vivo," Journal of Biomedical Optics, vol. 11, Sep-Oct 2006.
[8] N. Olivier, M. A. Luengo-Oroz, L. Duloquin, E. Faure, T. Savy, I. Veilleux, X. Solinas, D. Debarre, P. Bourgine, A. Santos, N. Peyrieras, and E. Beaurepaire, "Cell Lineage Reconstruction of Early Zebrafish Embryos Using Label-Free Nonlinear Microscopy," Science, vol. 329, pp. 967-971, Aug 2010.
[9] J. Adur, V. B. Pelegati, A. A. de Thomaz, M. O. Baratti, D. B. Almeida, L. A. Andrade, F. Bottcher-Luiz, H. F. Carvalho, and C. L. Cesar, "Optical biomarkers of serous and mucinous human ovarian tumor assessed with nonlinear optics microscopies," PLoS One, vol. 7, p. e47007, 2012.
[10] P. C. Wu, T. Y. Hsieh, Z. U. Tsai, and T. M. Liu, "In vivo Quantification of the Structural Changes of Collagens in a Melanoma Microenvironment with Second and Third Harmonic Generation Microscopy," Scientific Reports, vol. 5, Mar 2015.
[11] S. Witte, A. Negrean, J. C. Lodder, C. P. De Kock, G. T. Silva, H. D. Mansvelder, and M. L. Groot, "Label-free live brain imaging and targeted patching with third-harmonic generation microscopy," Proceedings of the National Academy of Sciences, vol. 108, pp. 5970-5975, 2011.
[12] S. Y. Chen, S. U. Chen, H. Y. Wu, W. J. Lee, Y. H. Liao, and C. K. Sun, "In Vivo Virtual Biopsy of Human Skin by Using Noninvasive Higher Harmonic Generation Microscopy," IEEE Journal of Selected Topics in Quantum Electronics, vol. 16, pp. 478-492, May-Jun 2010.
[13] E. Gavgiotaki, G. Filippidis, H. Markomanolaki, G. Kenanakis, S. Agelaki, V. Georgoulias, and I. Athanassakis, "Distinction between breast cancer cell subtypes using third harmonic generation microscopy," Journal of Biophotonics, Nov 2016.
[14] W. Lee, M. M. Kabir, R. Emmadi, and K. C. Toussaint, Jr., "Third-harmonic generation imaging of breast tissue biopsies," Journal of Microscopy, vol. 264, pp. 175-181, Nov 2016.
[15] N. V. Kuzmin, P. Wesseling, P. C. Hamer, D. P. Noske, G. D. Galgano, H. D. Mansvelder, J. C. Baayen, and M. L. Groot, "Third harmonic generation imaging for fast, label-free pathology of human brain tumors," Biomedical Optics Express, vol. 7, pp. 1889-1904, May 2016.
[16] R. S. Gonzalez and P. Wintz, "Digital image processing," 1977.
[17] J. Sauvola and M. Pietikainen, "Adaptive document image binarization," Pattern Recognition, vol. 33, pp. 225-236, Feb 2000.
[18] L. Vincent and P. Soille, "Watersheds in Digital Spaces - an Efficient Algorithm Based on Immersion Simulations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, pp. 583-598, Jun 1991.
[19] C. Wahlby, I. M. Sintorn, F. Erlandsson, G. Borgefors, and E. Bengtsson, "Combining intensity, edge and shape information for 2D and 3D segmentation of cell nuclei in tissue sections," Journal of Microscopy, vol. 215, pp. 67-76, Jul 2004.
[20] J. Y. Kim, L. S. Kim, and S. H. Hwang, "An advanced contrast enhancement using partially overlapped sub-block histogram equalization," IEEE Transactions on Circuits and Systems for Video Technology, vol. 11, pp. 475-484, Apr 2001.
[21] J. Weickert, "Coherence-enhancing diffusion filtering," International Journal of Computer Vision, vol. 31, pp. 111-127, Apr 1999.
[22] T. F. Chan and L. A. Vese, "Active contours without edges," IEEE Transactions on Image Processing, vol. 10, pp. 266-277, Feb 2001.
[23] P. Soille, Morphological image analysis: principles and applications: Springer Science & Business Media, 2013.
[24] J. Weickert, Anisotropic diffusion in image processing, vol. 1: Teubner Stuttgart, 1998.
[25] G. Hellwig, "Partial differential equations," Blaisdell, New York, vol. 8, 1964.
[26] P. Perona and J. Malik, "Scale-Space and Edge-Detection Using Anisotropic Diffusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, pp. 629-639, Jul 1990.
[27] J. Weickert, B. M. T. Romeny, and M. A. Viergever, "Efficient and reliable schemes for nonlinear diffusion filtering," IEEE Transactions on Image Processing, vol. 7, pp. 398-410, Mar 1998.
[28] O. Scherzer and J. Weickert, "Relations between regularization and diffusion filtering," Journal of Mathematical Imaging and Vision, vol. 12, pp. 43-63, Feb 2000.
[29] L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear Total Variation Based Noise Removal Algorithms," Physica D, vol. 60, pp. 259-268, Nov 1992.
[30] A. Chambolle, "An algorithm for total variation minimization and applications," Journal of Mathematical Imaging and Vision, vol. 20, pp. 89-97, Jan-Mar 2004.
[31] M. Zhu and T. Chan, "An efficient primal-dual hybrid gradient algorithm for total variation image restoration," UCLA CAM Report, pp. 08-34, 2008.
[32] E. Esser, X. Q. Zhang, and T. F. Chan, "A General Framework for a Class of First Order Primal-Dual Algorithms for Convex Optimization in Imaging Science," SIAM Journal on Imaging Sciences, vol. 3, pp. 1015-1046, 2010.
[33] E. Esser, X. Zhang, and T. Chan, "A general framework for a class of first order primal-dual algorithms for TV minimization," UCLA CAM Report, pp. 09-67, 2009.
[34] T. Goldstein and S. Osher, "The Split Bregman Method for L1-Regularized Problems," SIAM Journal on Imaging Sciences, vol. 2, pp. 323-343, 2009.
[35] R. Manniesing, M. A. Viergever, and W. J. Niessen, "Vessel enhancing diffusion - A scale space representation of vessel structures," Medical Image Analysis, vol. 10, pp. 815-825, Dec 2006.
[36] M. Maska, O. Danek, S. Garasa, A. Rouzaut, A. Munoz-Barrutia, and C. Ortiz-de-Solorzano, "Segmentation and Shape Tracking of Whole Fluorescent Cells Based on the Chan-Vese Model," IEEE Transactions on Medical Imaging, vol. 32, pp. 995-1006, Jun 2013.
[37] S. Pop, A. C. Dufour, J. F. Le Garrec, C. V. Ragni, C. Cimper, S. M. Meilhac, and J. C. Olivo-Marin, "Extracting 3D cell parameters from dense tissue environments: application to the development of the mouse heart," Bioinformatics, vol. 29, pp. 772-779, Mar 2013.
[38] Z. Qiu, L. Yang, and W. P. Lu, "A new feature-preserving nonlinear anisotropic diffusion for denoising images containing blobs and ridges," Pattern Recognition Letters, vol. 33, pp. 319-330, Feb 2012.
[39] V. Estellers, S. Soatto, and X. Bresson, "Adaptive Regularization With the Structure Tensor," IEEE Transactions on Image Processing, vol. 24, pp. 1777-1790, Jun 2015.
[40] M. Grasmair and F. Lenzen, "Anisotropic Total Variation Filtering," Applied Mathematics and Optimization, vol. 62, pp. 323-339, Dec 2010.
[41] M. Kass, A. Witkin, and D. Terzopoulos, "Snakes - Active Contour Models," International Journal of Computer Vision, vol. 1, pp. 321-331, 1987.
[42] S. Osher and J. A. Sethian, "Fronts Propagating with Curvature-Dependent Speed - Algorithms Based on Hamilton-Jacobi Formulations," Journal of Computational Physics, vol. 79, pp. 12-49, Nov 1988.
[43] D. Mumford and J. Shah, "Optimal Approximations by Piecewise Smooth Functions and Associated Variational-Problems," Communications on Pure and Applied Mathematics, vol. 42, pp. 577-685, Jul 1989.
[44] K. H. Zhang, L. Zhang, H. H. Song, and D. Zhang, "Reinitialization-Free Level Set Evolution via Reaction Diffusion," IEEE Transactions on Image Processing, vol. 22, pp. 258-271, Jan 2013.
[45] B. Song, "Topics in variational PDE image segmentation, inpainting and denoising," University of California Los Angeles, 2003.
[46] C. M. Li, C. Y. Xu, C. F. Gui, and M. D. Fox, "Distance Regularized Level Set Evolution and Its Application to Image Segmentation," IEEE Transactions on Image Processing, vol. 19, pp. 3243-3254, Dec 2010.
[47] X. Bresson, S. Esedoglu, P. Vandergheynst, J. P. Thiran, and S. Osher, "Fast global minimization of the active Contour/Snake model," Journal of Mathematical Imaging and Vision, vol. 28, pp. 151-167, Jun 2007.
[48] H. L. Zhang, X. J. Ye, and Y. M. Chen, "An Efficient Algorithm for Multiphase Image Segmentation with Intensity Bias Correction," IEEE Transactions on Image Processing, vol. 22, pp. 3842-3851, Oct 2013.
[49] A. Dufour, V. Shinin, S. Tajbakhsh, N. Guillen-Aghion, J. C. Olivo-Marin, and C. Zimmer, "Segmenting and tracking fluorescent cells in dynamic 3-D microscopy with coupled active surfaces," IEEE Transactions on Image Processing, vol. 14, pp. 1396-1410, Sep 2005.
[50] O. Dzyubachyk, W. A. van Cappellen, J. Essers, W. J. Niessen, and E. Meijering, "Advanced Level-Set-Based Cell Tracking in Time-Lapse Fluorescence Microscopy (vol 29, pg 852, 2010)," IEEE Transactions on Medical Imaging, vol. 29, pp. 1331-1331, Jun 2010.
[51] L. A. Vese and T. F. Chan, "A multiphase level set framework for image segmentation using the Mumford and Shah model," International Journal of Computer Vision, vol. 50, pp. 271-293, Dec 2002.
[52] C. M. Li, C. Y. Kao, J. C. Gore, and Z. H. Ding, "Minimization of region-scalable fitting energy for image segmentation," IEEE Transactions on Image Processing, vol. 17, pp. 1940-1949, Oct 2008.
[53] C. M. Li, R. Huang, Z. H. Ding, J. C. Gatenby, D. N. Metaxas, and J. C. Gore, "A Level Set Method for Image Segmentation in the Presence of Intensity Inhomogeneities With Application to MRI," IEEE Transactions on Image Processing, vol. 20, pp. 2007-2016, Jul 2011.
[54] L. Wang, C. M. Li, Q. S. Sun, D. S. Xia, and C. Y. Kao, "Active contours driven by local and global intensity fitting energy with application to brain MR image segmentation," Computerized Medical Imaging and Graphics, vol. 33, pp. 520-531, Oct 2009.
[55] L. X. Liu, Q. Zhang, M. Wu, W. Li, and F. Shang, "Adaptive segmentation of magnetic resonance images with intensity inhomogeneity using level set method," Magnetic Resonance Imaging, vol. 31, pp. 567-574, May 2013.


Chapter 2

Extracting morphologies from third harmonic generation images of structurally normal human brain tissue

This chapter is based on: Z. Zhang, N. V. Kuzmin, M. L. Groot, and J. C. de Munck, Bioinformatics, doi: https://doi.org/10.1093/bioinformatics/btx035, 2017 Jan.


2.1 Abstract The morphologies contained in 3D third harmonic generation (THG) images of human brain tissue can report on the pathological state of the tissue. However, the complexity of THG brain images makes the use of modern image processing tools, especially those of image filtering, segmentation and validation, to extract this information challenging. We developed a salient edge-enhancing model of anisotropic diffusion for image filtering, based on higher order statistics. We split the intrinsic 3-phase segmentation problem into two 2-phase segmentation problems, each of which we solved with a dedicated model, active contour weighted by prior extremes. We applied the proposed algorithms to THG images of structurally normal ex-vivo human brain tissue, revealing key tissue components (brain cells, microvessels and neuropil) and enabling statistical characterization of these components. Comprehensive comparison to manually delineated ground truth validated the proposed algorithms. Quantitative comparison to second harmonic generation/auto-fluorescence images, acquired simultaneously from the same tissue area, confirmed the correctness of the main THG features detected.

2.2 Introduction Multi-photon microscopies, (a combination of) second and third harmonic generation microscopy, 2- and 3-photon excited auto-fluorescence microscopy, and coherent Raman scattering microscopies (CARS/SRS), show great potential as clinical tools for the assessment of the pathological state of tissue during surgery, as the relative speed of the imaging modalities approaches ‘real’ time, and no preparation steps of the tissue are required (see for example [1-7]). Third harmonic generation (THG) imaging [8-11] in particular is an emerging label-free microscopy technique, with strong potential. THG is a nonlinear optical process that depends on the third-order susceptibility χ(3) of the tissue and phase-matching conditions that make it essentially an interface sensitive technique [8]. Excellent agreement with standard histopathology has been demonstrated for THG in case of skin cancer diagnosis [12, 13] and for ex-vivo human brain tumor tissue [7].

The lipid-rich microstructure of the brain has been shown to be a major source of contrast in THG microscopy of brain tissues [11]. THG has recently been shown to yield label-free images of ex-vivo human brain tumor tissue of histo-pathological quality. The morphologies observed in THG tumor images, e.g. increased cellularity, nuclear pleomorphism and rarefaction of neuropil, are similar to the standard histo-pathological criteria currently used by pathologists, making the transition from the current practice to THG images relatively easy [7]. To exploit all these attractive features of THG, automatic image analysis tools need to be developed for better visualization of the rich morphologies observed in THG images and for accurate statistical analysis of the pathologically relevant features. To this end, we have combined and further developed several classical image processing tools to create an effective tool for the statistical analysis of THG images. THG images of structurally normal ex-vivo human brain tissue were used as test material.

The very rich morphological information contained in THG images of brain (tumor) tissue makes it challenging to extract all the features. The observed features appear both as dark and bright objects and thus pose a 3-phase segmentation problem: “dark” objects (DO), “bright” structures (BS) and a background of intermediate intensity (Fig. 2.1). Brain cells are visible as dark holes (Fig. 2.1E), and are the salient features of THG images of mouse brain and human brain tumor tissues [7, 11]. In images of normal human brain tissue, dark objects include neurons with bright lipofuscin granules inside (Fig. 2.1B), glial cells, dark blood vessels with bright red blood cells inside (Fig. 2.1A and C), and the surrounding small cells. Bright structures mainly include lipofuscin granules inside the dark objects, neuropil consisting of axons and dendrites (Fig. 2.1D), and red blood cells inside the vessels (Fig. 2.1C). Moreover, the rich morphologies and the associated noise of the THG images make image filtering challenging. Finally, the THG images usually suffer from low contrast in the corners, because of imperfections of the imaging system, and from intensity inhomogeneity caused by the rough surface of the imaged tissue (Fig. 2.1A). Therefore, automatic analysis of 3D THG images is hampered by a 3-phase segmentation problem, low signal-to-noise ratio, and intensity inhomogeneity together with low local contrast in the corners. These three difficulties correspond to three different image processing aspects: (1) image segmentation, aiming to extract the targeted objects from an image; (2) image filtering, aiming to remove the image artifacts and the noise; (3) contrast enhancement, aiming to enhance the global/local contrast of an image and attenuate the intensity inhomogeneity.

Figure 2.1 Rich morphological information of THG images of structurally normal gray matter of human brain tissue. (A-E) Typical examples of an image slice (A), a neuron with lipofuscin granules inside (B), a microvessel with red blood cells inside (C), neuropil formed by cellular processes (a bright vertical axon can be seen in the middle, D), and a brain cell with a dimly visible nucleus within (E).

In this paper we concentrate on image filtering and on solving the 3-phase segmentation problem. The problem of low contrast can be addressed by local histogram equalization [14]. To the best of our knowledge, image filtering for THG images has not been studied. Image filtering has been at the center of the image processing field for decades, in particular the anisotropic diffusion filter [15], which has been widely used for noise reduction because of its capability of keeping the object edges sharp and enhancing certain kinds of structures [15, 16]. Image segmentation is another central topic in image processing. Although large numbers of segmentation methods have been proposed in the past decades for microscopic images, they have only sparsely been applied to THG images. Watershed algorithms and manual delineations have so far taken center stage in THG image segmentation. The viscous watershed transform was used to delineate the THG cell membranes of zebrafish embryos [10, 17]. A watershed-based approach was used to extract the nuclear-to-cytoplasmic ratio of 2D THG images of human skin cancer [13, 18]. Manual delineation and intensity thresholding were combined to segment THG images of stem cells [19]. However, THG images of different tissues contain different structures, presenting different morphologies. The rich morphologies cause the watershed-based approaches to fail to accurately extract all the structures in THG images of brain tissue. Active contours [20-23] are another important image segmentation technique. Thanks to the level set representation, active contour models have received intensive attention in the past years [24-27], and they are widely used for cell/nuclei segmentation [24, 28, 29].

In this paper, a novel image filtering model, a combination of anisotropic diffusion and higher order statistics (HOS), is proposed and applied for image denoising and edge enhancement. The 3-phase segmentation problem is split into two 2-phase segmentation problems, each of which is addressed by a novel segmentation model, active contour weighted by prior extremes (ACPE). THG images of normal brain tissue are used to test the algorithms. Both manually delineated ground truth and quantitative comparison to second harmonic generation (SHG) images acquired simultaneously from the same tissue area, containing also auto-fluorescence (AF) from the lipofuscin granules in brain cells, are used for validation.

2.3 Methods and algorithms A general overview of the proposed processing procedure is illustrated in Fig. 2.2; a minimal sketch of this two-line workflow is given below. We split the procedure into two separate lines with the same input. The rationale is that local histogram equalization (LHE) is needed to enhance the local contrast for the dark objects, especially those in the corners, but LHE destroys the 1D line-like neuropil. Therefore, the 3-phase segmentation problem is split into two 2-phase segmentation problems, and the segmentation results are merged together after post-processing. The difference between the two lines is that partially overlapped sub-block histogram equalization (POSHE) [14] is applied to enhance the local contrast for further segmenting dark objects. An additional benefit of POSHE is that intensity inhomogeneity is attenuated. After filtering by our proposed anisotropic diffusion model, dark objects and bright structures are segmented by our proposed active contour model, respectively.
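To make the two-line workflow concrete, the following minimal Python sketch outlines the processing order. The helper functions `poshe`, `hos_adf`, `acpe_segment` and `postprocess` are hypothetical stand-ins for the routines described in Sections 2.3.2-2.3.4; the actual implementation of this thesis was written in C++.

```python
def segment_thg_volume(thg, poshe, hos_adf, acpe_segment, postprocess):
    """Two-line THG processing workflow of Fig. 2.2 (sketch only)."""
    # Line 1: bright structures (BS) -- detail-preserving filtering, then segmentation.
    bs_filtered = hos_adf(thg, n=1)                      # HOS1 model of ADF
    bs_mask = acpe_segment(bs_filtered, target="bright")

    # Line 2: dark objects (DO) -- local contrast enhancement (POSHE),
    # salient-edge-enhancing filtering, then segmentation.
    do_enhanced = poshe(thg)                             # also attenuates intensity inhomogeneity
    do_filtered = hos_adf(do_enhanced, n=2)              # HOS2 model of ADF
    do_mask = acpe_segment(do_filtered, target="dark")

    # Merge the two 2-phase results after post-processing (Section 2.3.4).
    return postprocess(do_mask, bs_mask)
```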

Figure 2.2 The flowchart of the processing procedure: the input THG image is processed along two lines, one applying anisotropic diffusion (BS filtering) followed by active contour (BS segmentation), the other applying histogram equalization (DO enhancement), anisotropic diffusion (DO filtering) and active contour (DO segmentation).


2.3.1 Image sample and acquisition The imaged normal brain samples were cut from the temporal cortex and subcortical white matter that had to be removed for the surgical treatment of deeper brain structures for epilepsy. For details of the imaging setup and the preparation of the samples, we refer to previous works [7, 11] and the supplementary material.

2.3.2 Anisotropic diffusion driven by salient edges Image filtering is crucial for processing THG images, and the filter used should meet the following demands: (1) the filter should be able to remove noise while keeping the edges sharp; (2) the filter should be able to restore line and flow-like structures in a cluttered background.

Anisotropic diffusion filtering (ADF) is central among the methods satisfying the above requirements, because it takes into account not only the modulus of the edge but also its direction. ADF has been one of the standard choices for noise reduction available from open source software like ImageJ [30] and Icy [31]. An edge-enhancing ADF model, known as the EED model, was first proposed by Weickert [32]. Later, in 1999 [15], the CED model was proposed to enhance flow-like structures, based on which various other ADF models have been developed to enhance line-like structures [29, 33] and 2D plate-like structures in 3D images [16]. However, the CED model performs poorly on the background, and it creates artifacts [16]. The EED model performs well on noise reduction, but it fails to restore line-like structures because it diffuses too much along the second direction. The membrane-enhancing diffusion (MED) model [16] performs very well on noise reduction and is able to enhance both line-like and plate-like structures, but it fails to work on highly noisy and cluttered images since it allows negative diffusivity along the first direction, which makes the algorithm unstable and creates false objects around the edges.

Motivated by the MED model, as well as by the model of Kim [34] in which higher order statistics (HOS) were combined with the isotropic diffusion (PM) model [35], we further combine HOS with ADF to provide an edge-enhancing framework that offers not only strong noise suppression but also the capability to enhance the salient edges of objects in a cluttered background.

The diffusion equation of ADF reads as follows:

\[
\partial_t u = \operatorname{div}\!\left(D \nabla u\right), \tag{2.1}
\]

where u denotes a 3D image and D is the diffusivity tensor, depending on the gradient ∇u_σ of a Gaussian smoothed version of the image. The diffusivity tensor D is constructed from the structure tensor defined as follows:

\[
J_\rho(\nabla u_\sigma) = K_\rho * \left(\nabla u_\sigma \otimes \nabla u_\sigma\right), \qquad \rho \ge 0, \tag{2.2}
\]

where each component of the resulting matrix of the tensor product is convolved with a Gaussian kernel K_ρ of standard deviation ρ. The standard deviation σ denotes the noise scale, and ρ is the integration scale that reflects the characteristic size of the texture; usually it is large compared to the noise scale σ [15]. Here we set σ = 1 and ρ = 2.


The structure tensor J can be decomposed as the product of its eigenvectors and the diagonal matrix of its eigenvalues. We denote the eigenvectors of J as v_i, i = 1, 2, 3, with corresponding eigenvalues μ_i. The eigenvectors are ordered decreasingly according to their eigenvalues. The diffusivity tensor D is then constructed from J by replacing all the eigenvalues μ_i by λ_i, which represent the amount of diffusivity along the corresponding eigenvector directions, as follows,

\[
D = \left(v_1\; v_2\; v_3\right) \operatorname{diag}\!\left(\lambda_1, \lambda_2, \lambda_3\right) \left(v_1\; v_2\; v_3\right)^{T}. \tag{2.3}
\]
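As an illustration of Eqs. (2.2) and (2.3), the sketch below computes the structure tensor of a 3D image and rebuilds a diffusivity tensor from its eigen-decomposition with NumPy/SciPy. It is a minimal, unoptimized sketch rather than the thesis implementation (which was written in C++); the eigenvalue mapping `eig_to_lambda` is left as a parameter and is specified later by the HOS-based rule of Eq. (2.5).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_3d(u, sigma=1.0, rho=2.0):
    """Structure tensor J_rho(grad u_sigma) of Eq. (2.2) for a 3D image u."""
    u_s = gaussian_filter(u.astype(np.float64), sigma)   # Gaussian-smoothed image u_sigma
    grads = np.gradient(u_s)                             # gradient along each axis
    J = np.empty(u.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            # component-wise smoothing with a Gaussian kernel K_rho
            J[..., i, j] = gaussian_filter(grads[i] * grads[j], rho)
    return J

def diffusivity_tensor(J, eig_to_lambda):
    """Diffusivity tensor D of Eq. (2.3): the eigenvalues mu_i of J (sorted in
    decreasing order) are replaced by lambda_i given by eig_to_lambda(mu)."""
    mu, V = np.linalg.eigh(J)                 # ascending eigenvalues per voxel
    mu, V = mu[..., ::-1], V[..., ::-1]       # reorder so that mu_1 >= mu_2 >= mu_3
    lam = eig_to_lambda(mu)                   # shape (..., 3), one lambda per eigenvector
    # D = (v1 v2 v3) diag(lambda) (v1 v2 v3)^T, evaluated voxel-wise
    return np.einsum('...ik,...k,...jk->...ij', V, lam, V)
```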

By exploiting some peculiar properties of higher order statistics (HOS), e.g., insensitivity to additive Gaussian noise, provision of high contrast at region boundaries, and the capability to better characterize non-Gaussian signals, it has been shown that HOS-based algorithms are able to solve problems involving non-linear, non-Gaussian and noisy signals even at very low signal-to-noise ratios [34, 36, 37]. Following the work of Kim [34], the salient edges are defined in terms of HOS as follows,

\[
\operatorname{HOS}_n(\mathbf{x}) = \frac{1}{N} \sum_{\mathbf{y} \in W(\mathbf{x})} \left| u(\mathbf{y}) - \mu(\mathbf{x}) \right|^{n}, \tag{2.4}
\]

where n denotes the order of the HOS and W(x) is the set of neighbor voxels of size N centered at the position x. μ(x) is the mean computed in the window W(x). In our implementation, we use an image window of 3×3×3 voxels for computing the HOS (i.e., N = 27). Note that the obtained HOS image is normalized to [0, 255] for grayscale representation.
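A direct (if slow) NumPy/SciPy sketch of this computation is given below. It assumes the absolute n-th order central moment over a 3×3×3 window, as reconstructed from Eq. (2.4), and the [0, 255] normalization mentioned above; it is an illustrative sketch rather than the thesis implementation.

```python
import numpy as np
from scipy.ndimage import generic_filter

def hos_map(u, n=2, size=3):
    """HOS_n(x): mean of |u(y) - mu(x)|**n over the size**3 window W(x),
    rescaled to [0, 255].  A straightforward but slow sketch of Eq. (2.4)."""
    def window_hos(w):
        # w contains the flattened window values; w.mean() plays the role of mu(x)
        return np.mean(np.abs(w - w.mean()) ** n)

    hos = generic_filter(u.astype(np.float64), window_hos, size=size, mode='nearest')
    hos -= hos.min()
    if hos.max() > 0:
        hos *= 255.0 / hos.max()
    return hos
```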

Then we combine the HOS and the MED model together, and propose the HOS model of ADF as follows,

\[
\begin{aligned}
\lambda_1 &= \exp\!\left(-\operatorname{HOS}_n(\mathbf{x})^2 / h^2\right); \\
\lambda_2 &= \lambda_1 - h_\tau(C_{plane})\left(\lambda_1 - \lambda_3\right); \\
\lambda_3 &= 1.
\end{aligned} \tag{2.5}
\]

The HOS is then incorporated into the diffusivity tensor D, which is updated at each iteration step. We refer readers to the supplementary material for the definitions of h_τ and C_plane, the discussion of the diffusion behavior along each eigenvector direction, and the details of the implementation.

The performance of our HOS model in comparison with the CED, MED and EED models is illustrated on a simulated image and a THG image. In Fig. 2.3A the simulated image contains a 3D ball, a 3D ring, a 3D line and 4 small gray balls to mimic a cell, a plate-like structure, a 1D line-like structure, and a cluttered background, respectively. All the objects have intensity 255 except the gray balls, whose intensity is 150. In Fig. 2.3B, we randomly remove 50% of the foreground voxels to simulate an inhomogeneous signal, and Gaussian noise of standard deviation 60 is added. From Fig. 2.3C-E, we clearly see that the noise cannot be well removed by the CED model. The MED model is not so stable, because the contrast is lowered and some artificial objects have been created around the edges and inside. The EED model restores the objects very well with high contrast, but it fails to restore the line-like structure. However, from Fig. 2.3F-H, we see that the HOS model with n = 1 (the HOS1 model) succeeds in restoring all the objects, while only objects with salient edges are kept by the HOS model with n = 2 (the HOS2 model). Fig. 2.3H shows the edge map of Fig. 2.3G, calculated from equation (2.4). The edges of all gray balls have disappeared, which means the cluttered background is removed by the HOS2 model.

In Fig. 2.4, the models are applied to a THG image of normal human brain tissue. Similarly, the noise cannot be well removed by the CED model (Fig. 2.4B). The noise is removed successfully with the MED model, but many artifacts have been created around the edges and the contrast has become lower (Fig. 2.4C). The EED model fails to restore the line-like neuropil (Fig. 2.4D). The HOS1 model succeeds in removing the noise and restores all the bright structures (Fig. 2.4E), while with the HOS2 model the line-like neuropil is blurred, as expected, and only salient objects are left (Fig. 2.4F).

Compared to the MED model, our improvements are two-fold: the HOS model is stable, with fewer artifacts created, and the cluttered background can be removed. With the first order HOS (n = 1) more details are preserved, while with higher order HOS only objects with salient edges are kept. Since the goal of ADF for filtering dark objects is edge enhancement with few details remaining, the HOS2 model is used, while for filtering bright structures the HOS1 model is used to keep as much neuropil as possible. Fig. 2.5 shows an example of the filtered images of the dark objects and bright structures, respectively. Note that the HOS model with n ≥ 3 gives results similar to Fig. 2.5D with even fewer details left.

Figure 2.3 The performance of the HOS model in comparison with the CED, MED and EED models on a simulated image, with all the parameter settings optimized. (A) The ground truth of the simulated image. (B) The simulated image with 50% of the foreground voxels removed and Gaussian additive noise of standard deviation 60 added. (C) The CED filtered image after 150 iteration steps. (D) The MED filtered image after 150 iteration steps. (E) The EED filtered image after 50 iteration steps. (F) The HOS1 filtered image after 80 iteration steps. (G) The HOS2 filtered image after 61 iteration steps. (H) The edge map of (G) calculated from equation (2.4).


Figure 2.4 The performance of the HOS model in comparison with the CED, MED and EED models on a THG image. (A) A slice of the raw image. (B-F) The CED, MED, EED, HOS1, and HOS2 filtered images of (A), respectively.

Figure 2.5 A THG image filtered by the proposed HOS model. In (A-B), we aim to restore bright structures while in (C-D) we aim to enhance edges of dark objects. (A) A slice of the raw image. (B) The HOS1 filtered image of (A). (C) The enhanced image of (A) by local histogram equalization [14]. (D) The HOS2 filtered image of (C).

Figure 2.6 A large shadow around a dark object, and the corresponding segmentation results of the ACPE and the CV model shown in red. The image was already filtered by the HOS model with n = 2. (A) The green curve is the intensity profile along the white line, which shows the intensity transition between the left dark hole and the background. (B) The dark holes detected by the ACPE model. (C) The dark holes detected by the classical CV model.


2.3.3 Active contour weighted by prior extremes The classical CV model [22] has been demonstrated to be a powerful tool for cell/nuclei segmentation in the past years [24, 28]. Let Ω be the image domain, and let I(x) be the image intensity at the location x = (x, y, z) ∈ Ω. A segmentation of the image I is achieved by finding a contour C which partitions the image domain Ω into the foreground and the background, and a piecewise constant function u which takes value c_1 inside the foreground and c_2 inside the background. This can be formulated as the problem of minimizing the energy function,

\[
E(c_1, c_2, \phi) = \mu \int_\Omega \left| \nabla H(\phi(\mathbf{x})) \right| d\mathbf{x}
+ \lambda_1 \int_\Omega \left| I(\mathbf{x}) - c_1 \right|^2 H(\phi(\mathbf{x}))\, d\mathbf{x}
+ \lambda_2 \int_\Omega \left| I(\mathbf{x}) - c_2 \right|^2 \left(1 - H(\phi(\mathbf{x}))\right) d\mathbf{x}. \tag{2.6}
\]

In this expression, H is the Heaviside function and φ is the zero level set function of the contour C, which partitions the image domain Ω into the foreground and background regions Ω_1 = {x | φ(x) > 0} and Ω_2 = {x | φ(x) < 0}.

The CV model works well on homogeneous images, and it segments an image using the means of the foreground and background. However, for THG images, due to the intensity inhomogeneity caused by the rough surface of the tissue and the intensity degeneration along the depth, the boundaries between some dark objects and the background are not clear, resulting in large “shadows” around them (Fig. 2.6A). This significantly affects the segmentation accuracy of the CV model. The bright structures suffer from a similar problem, but not as severely as the dark objects. To reduce the influence of the “shadows” associated with intermediate gray values, we propose a novel model, active contour weighted by prior extremes (ACPE), by adding a penalty term to the energy function of the classical CV model that forces the foreground towards the desired regions, as follows,

\[
\begin{aligned}
E(c_1, c_2, \phi) = {}& \mu \int_\Omega \left| \nabla H(\phi(\mathbf{x})) \right| d\mathbf{x}
+ \int_\Omega \left| I(\mathbf{x}) - c_2 \right|^2 \left(1 - H(\phi(\mathbf{x}))\right) d\mathbf{x}
+ w \int_\Omega \left| I(\mathbf{x}) - c_1 \right|^2 H(\phi(\mathbf{x}))\, d\mathbf{x} \\
& + (1 - w) \int_\Omega \left| I(\mathbf{x}) - c_{ext} \right|^2 H(\phi(\mathbf{x}))\, d\mathbf{x},
\end{aligned} \tag{2.7}
\]

where c_ext is the intensity extreme of the image and w ∈ [0, 1] is the weight between the mean and c_ext. The smaller w is, the smaller the foreground regions will be, and vice versa. Note that c_ext is assigned the lowest intensity of the image to segment the dark objects, and the highest intensity to segment the bright structures. We make use of the histogram to assign a value to c_ext to make it less noise sensitive.
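One simple histogram-based assignment of c_ext, sketched below, is to take a low or high intensity percentile instead of the raw minimum or maximum; the 1% tail used here is an illustrative choice, not necessarily the setting used in this thesis.

```python
import numpy as np

def prior_extreme(image, target="dark", tail=1.0):
    """Histogram-based estimate of c_ext: the `tail`-th percentile for dark
    objects, the (100 - tail)-th percentile for bright structures."""
    if target == "dark":
        return np.percentile(image, tail)
    return np.percentile(image, 100.0 - tail)
```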

Keeping φ fixed and minimizing the energy function E with respect to the constants c_1 and c_2, it immediately follows that they are the means of the foreground and background, respectively,

\[
c_1(\phi) = \frac{\int_\Omega I(\mathbf{x})\, H(\phi(\mathbf{x}))\, d\mathbf{x}}{\int_\Omega H(\phi(\mathbf{x}))\, d\mathbf{x}},
\qquad
c_2(\phi) = \frac{\int_\Omega I(\mathbf{x}) \left(1 - H(\phi(\mathbf{x}))\right) d\mathbf{x}}{\int_\Omega \left(1 - H(\phi(\mathbf{x}))\right) d\mathbf{x}}. \tag{2.8}
\]

Then, keeping c_1 and c_2 fixed and applying the Euler-Lagrange equation to (2.7), the evolution equation is obtained,


\[
\frac{\partial \phi}{\partial t} = \delta_\varepsilon(\phi) \left[ \mu\, \nabla \cdot \frac{\nabla \phi}{\left| \nabla \phi \right|}
- w \left( I - c_1 \right)^2 - (1 - w) \left( I - c_{ext} \right)^2 + \left( I - c_2 \right)^2 \right], \tag{2.9}
\]

where δ_ε is the regularized Dirac function. For the details of the implementation, we refer to the supplementary material.

A good segmentation initialization substantially reduces the computational effort. Here we adopt the non-PDE based method of the CV model [38] to reach a reasonable initialization. Another issue of the level set approach is the re-initialization of the level set function, which is very time-consuming. Some re-initialization free frameworks [39, 40] have been proposed to overcome this issue. However, applied to our THG images, the results are not as accurate as with the conventional re-initialization approach. Here we adopt the reaction diffusion method [40] to partly alleviate this issue, but the level set function is re-initialized every 10th iteration step by a signed distance function [41] to reach better segmentation results.
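For illustration, a minimal NumPy sketch of one ACPE iteration is given below, implementing Eqs. (2.8) and (2.9) with a regularized Heaviside/Dirac pair (the arctangent regularization commonly used for the CV model is assumed here). The non-PDE initialization, the reaction-diffusion regularization and the periodic re-initialization described above are omitted for brevity, so this is a sketch of the update step only, not the thesis implementation.

```python
import numpy as np

def heaviside(phi, eps=1.0):
    """Regularized Heaviside H_eps (arctangent regularization, assumed here)."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def dirac(phi, eps=1.0):
    """Regularized Dirac delta, the derivative of the Heaviside above."""
    return (eps / np.pi) / (eps ** 2 + phi ** 2)

def curvature(phi, tiny=1e-8):
    """div(grad phi / |grad phi|) with simple central differences."""
    grads = np.gradient(phi)
    norm = np.sqrt(sum(g ** 2 for g in grads)) + tiny
    return sum(np.gradient(g / norm, axis=i) for i, g in enumerate(grads))

def acpe_step(phi, I, c_ext, w=0.8, mu=0.001 * 255 ** 2, dt=0.1, eps=1.0):
    """One gradient-descent step of the ACPE model, Eqs. (2.8)-(2.9)."""
    H = heaviside(phi, eps)
    c1 = (I * H).sum() / (H.sum() + 1e-8)               # foreground mean, Eq. (2.8)
    c2 = (I * (1 - H)).sum() / ((1 - H).sum() + 1e-8)   # background mean, Eq. (2.8)
    force = (mu * curvature(phi)
             - w * (I - c1) ** 2
             - (1 - w) * (I - c_ext) ** 2
             + (I - c2) ** 2)                           # bracketed term of Eq. (2.9)
    return phi + dt * dirac(phi, eps) * force
```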

Figure 2.7 The segmentation results, with boundaries in red, of a THG image by the ACPE model, in comparison with the CV model. The first row shows the results for the bright structures, the second row those for the dark objects. (A) A slice of the raw image for bright structures. (B) The ACPE segmentation of (A) overlapped with (A). (C) The CV segmented image of (A) with the foreground in white. (D) The same slice as (A) enhanced by POSHE. (E) The ACPE segmentation of (D) overlapped with (D). (F) The CV segmented image of (D) with the foreground in white.


From Fig. 2.6B-C, we see the difference between the segmentation results of the ACPE and the CV model. The ACPE model succeeded in extracting the darkest hole from the image, whereas the CV model detected more regions than the darkest parts. Fig. 2.7 shows segmentation results of the dark objects and bright structures of a THG image obtained with the ACPE model, in comparison with the CV model. One observes that the standard 2-phase CV model fails to segment THG images. The failure to segment the bright structures is caused by intensity inhomogeneity: in Fig. 2.7A, the intensities in the bottom-right corner are higher than in the other corners. The failure to segment the dark objects is caused by the aforementioned large “shadows” (Fig. 2.6C), which lead to an overestimated mean of the foreground.

2.3.4 Post-processing Before we combine the segmentation results of the dark objects and bright structures, tiny objects (<100 voxels) are ignored and the large shadows in the top slices caused by the rough surfaces of the tissue are removed. To preserve the dark objects which are very close to the surface edges, we apply the following procedure to the segmented image of the dark objects,

(1) Apply a distance transform to the foreground regions, and invert the values of the foreground voxels.

(2) Use extended h-minima transformation [42] with threshold hExt to define watershed seeds.

(3) Apply the seeded watershed transform to the foreground regions of the image obtained from step (1), and remove components whose area in the first slice, a, is larger than 100×100 pixels.

Similarly, morphological erosion [42] and the seeded watershed transform are combined to separate cell clumps where cells are slightly touching each other; a sketch of such a splitting step is given below.
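The following is a minimal sketch of seeded-watershed splitting of touching cells, using the distance transform of the binary mask and its h-maxima as seeds (SciPy and scikit-image are assumed). It mirrors the spirit of the procedure above rather than reproducing the thesis implementation literally, and the h value is illustrative.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import h_maxima
from skimage.segmentation import watershed

def split_touching_cells(mask, h=1.0):
    """Split slightly touching cells in a binary 3D mask with a seeded watershed."""
    dist = ndi.distance_transform_edt(mask)     # distance to the background
    seeds = h_maxima(dist, h)                   # suppress shallow maxima of the distance map
    markers, _ = ndi.label(seeds)               # one marker per seed region
    # flood the inverted distance map, restricted to the original foreground
    return watershed(-dist, markers, mask=mask)
```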

2.3.5 SHG/AF segmentation and validation method A complete 3D ground truth for a THG image is impossible to obtain by manual delineation, due to the complexity and irregularity of the object shapes and the rich information contained. Meanwhile, SHG images contain 2- or 3-photon excited auto-fluorescence (AF) signals from lipofuscin granules in brain cells, and from microtubules or collagen in small blood vessel walls, and can be acquired simultaneously from the same tissue along with the THG images. Except for the small line-like neuropil and small “dark” cells, most of the large structures in a THG image agree with those of the corresponding SHG/AF image, whereas the SHG/AF images are easier to process since they are background free and standard methods can be used. Therefore, we can use the segmentation result of the corresponding SHG/AF image to confirm whether the main features of a THG image also appear in the SHG/AF image, and thus confirm the correctness of the main THG features detected. To avoid the possibility that correlated segmentation errors in the SHG/AF and THG treatment are interpreted as correct THG identification results, we adopt standard methods to segment the SHG/AF images: each image is first smoothed by a 3×3×3 Gaussian filter, and then segmented by the CV model. Fig. 2.8 shows a segmentation result of the corresponding SHG/AF image of a THG image.

After segmentation, we compare the THG and SHG/AF segmentations pixel by pixel (Fig. 2.8D). All THG detections that do not overlap with the SHG/AF segmentations are picked out as F-P candidates. F-P candidates of relatively large size are passed to a human observer for further confirmation. More precisely, the dark objects larger than 5000 voxels (900 μm³), which tend to represent brain cells, and the bright structures larger than 1000 voxels (180 μm³) are passed to the human observer.
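The per-object comparison can be sketched as follows: label the THG detections as connected components, flag those without any overlap with the SHG/AF foreground, and keep only the large ones for visual review. The size threshold corresponds to the voxel counts quoted above; the connected-component labeling is a generic choice made for this sketch, not necessarily the thesis implementation.

```python
import numpy as np
from scipy import ndimage as ndi

def false_positive_candidates(thg_mask, shg_af_mask, min_voxels=5000):
    """Labels of THG detections with no overlap with the SHG/AF foreground
    and at least `min_voxels` voxels (the F-P candidates passed to the observer)."""
    labels, n = ndi.label(thg_mask)                          # connected components
    index = np.arange(1, n + 1)
    overlap = ndi.sum(shg_af_mask > 0, labels, index=index)  # overlapping voxels per object
    sizes = np.bincount(labels.ravel())[1:]                  # object sizes in voxels
    return [lab for lab, ov, sz in zip(index, overlap, sizes)
            if ov == 0 and sz >= min_voxels]
```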


In addition, we also evaluate the precision of THG segmentation results both in 2D and 3D to reach a thorough validation. Since the active contour model gives closed and smooth contours of target objects, we use a contour-based metric [25] to measure the mean error of the resulting contours to the ground truth (see supplementary material). Several 2D slices of different depths are selected from the tested THG images and manually segmented by a human observer, as ground truth. 2D segmentation results of these slices are taken from our 3D segmentation results and compared to the ground truth using the contour-based metric. Note that the mean error obtained here from 2D slices is more critical than that of 3D images, and the under- and over-segmented objects also contribute to the mean error. In addition, one test image is randomly selected by the human observer for complete 3D visual inspection. Each THG detection is identified as true positive (T-P) or false positive (F-P).
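One plausible instantiation of such a contour-based mean error, sketched below, is the mean distance (in pixels) from the detected contour points to the nearest ground-truth contour point, computed with a distance transform; the exact metric of [25] is not reproduced here.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, binary_erosion

def contour(mask):
    """Binary contour of a segmentation mask (foreground minus its erosion)."""
    mask = mask.astype(bool)
    return mask & ~binary_erosion(mask)

def mean_contour_error(detected, ground_truth):
    """Mean distance from detected contour points to the ground-truth contour
    (a sketch of a contour-based metric, not the exact definition of [25])."""
    gt_contour = contour(ground_truth)
    det_contour = contour(detected)
    # distance of every pixel to the nearest ground-truth contour pixel
    dist_to_gt = distance_transform_edt(~gt_contour)
    return dist_to_gt[det_contour].mean()
```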

Figure 2.8 The segmentation result of the corresponding SHG/AF image of a THG image. (A) A slice of the raw SHG/AF image. (B) The segmentation result of (A) with the detected boundaries in red. (C) The overlap of the THG slice with the boundaries shown in (B). (D) The overlap of the THG, and segmentation results of THG and SHG/AF, with dark objects in brown, bright structures in green, and boundaries of SHG/AF in red.

Figure 2.9 Segmentation results of a volume of THG images by our proposed method, with the boundaries of the dark objects and bright structures shown in red and green respectively, overlapped with the raw images. (A) A slice of a THG image with a field of view of 240 μm × 240 μm. (B) 3D surface rendering of (A). (C) A typical neuron with lipofuscin granules inside. (D) A typical microvessel with red blood cells inside. (E-F) Typical neuropil.


2.4 Results and validation All the algorithms were implemented in Visual Studio C++ 2010 on a PC with a 3.40-GHz Intel(R) Core(TM) 64 processor and 8 GB of memory.

We applied our proposed algorithms to 11 pairs of THG and SHG/AF images of structurally normal ex-vivo human brain tissue. Each image has a size of around 1000×1000×50 voxels, which corresponds to a tissue volume of around 300 μm × 300 μm × 100 μm. One pair was used for training to obtain the optimized parameter settings, and the remaining 10 pairs were used to test the algorithms with the same parameter settings. Fig. 2.9 shows segmentation results of a typical THG image (Fig. 2.9A-B), with neurons (Fig. 2.9C), microvessels (Fig. 2.9D), and neuropil (Fig. 2.9E-F). A 3D surface rendering of the object in Fig. 2.9E made in ParaView is shown in Fig. 2.9F. A movie of the 3D surface rendering of the whole image (Fig. 2.9B) can be seen in movie S1 (see the supplementary data that can be downloaded using the link, https://academic.oup.com/bioinformatics/article-lookup/doi/10.1093/bioinformatics/btx035).

2.4.1 Parameter settings The parameters of the HOS model were set to n = 2, h = 10 and τ = 0.2 for filtering dark objects, and to n = 1, h = 20 and τ = 0.8 for filtering bright structures. The parameters of the ACPE model were fixed to μ = 0.001 × 255², with w = 0.8 for the dark objects segmentation and w = 0.98 for the bright structures segmentation. In the post-processing, the parameters used to remove the rough surfaces of the tissue were set to hExt = 1 and a = 100×100 pixels.

2.4.2 Segmentation evaluation We give the general segmentation information of the tested THG and SHG/AF images in Table 2.1. The 11 pairs of images resulted in a total of 20409 detected objects in the THG images and 5806 detected objects in the SHG/AF images. Among the detected objects of the THG images, dark objects (DO) contribute 4944 and bright structures (BS) make up the rest, 15465. The validation of large objects outlined in Section 2.3.5 yields 138 F-P candidates of DO and 736 F-P candidates of BS, of which 14 DO (14/854 = 1.6%) and no BS are confirmed as real false positives. Most of the real F-P DO are dark fragments in the top slices caused by the rough surfaces of the imaged tissues. Comparing the third (left) with the fifth (left) column of Table 2.1, we obtain the correspondence of the relatively large DO in the THG and SHG/AF images, 1 − 138/854 = 83.8%. From this we conclude that the correspondence between the THG and the SHG/AF images is high, and thus that the main feature of THG images, the brain cells, has been correctly detected.

In Fig. 2.10 we show the size distributions of the F-P candidates of the DO and BS, respectively. There are 3214 DO and 9293 BS that disagree with the SHG/AF images. Most of the candidates have a size smaller than 1000 voxels, and they should therefore be either (fragments of) neuropil or small dark cells, which are simply not detected in the SHG/AF images.

Moreover, ten 2D slices of different depths from different tested images were selected for 2D validation. The average mean error of the THG detections of all selected slices to the ground truth is 2.7 pixels (0.8 μm) for DO, and 1.4 pixels (0.4 μm) for BS. Image 3 was selected for 3D validation, via visual inspection, and the segmentation accuracy is above 99%. We refer to the supplementary material for details of the validation.


Cell density is an informative parameter of the nature and pathological state of the tissue, and we show the number of large dark objects in the third column (left) of Table 2.1. Most of these large dark objects represent neurons or glial cells. Moreover, since cell size is also an informative parameter, it is more convenient to consider the total number of voxels of the detected cells. If this total number is normalized by the image size, we obtain the percentage of space (PoS) taken by the detected cells. More precisely, the PoS of objects is the total number of voxels of the detected objects divided by the image size. We show the PoS of large and small dark objects of each tested THG image in Fig. 2.11.
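As a minimal sketch, the PoS of a binary detection mask is a one-line computation:

```python
import numpy as np

def percentage_of_space(mask):
    """PoS: number of voxels of detected objects divided by the total image size."""
    mask = np.asarray(mask, dtype=bool)
    return mask.sum() / mask.size
```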

Table 2.1. Comparison of THG and SHG/AF segmentation results. DO, BS and C denote dark objects, bright structures and candidate, respectively.

Image No. DO/BS Large DO/BS SHG/AF F-P C DO/BS F-P DO/BS

1(training) 369/1267 81/107 567 9/27 2/0

2 433/1206 65/157 328 10/56 0/0

3 441/1426 74/157 360 12/62 2/0

4 390/1731 68/226 296 11/121 3/0

5 514/1743 90/171 545 8/50 1/0

6 324/1189 65/121 552 8/37 1/0

7 477/1794 65/276 341 13/152 1/0

8 392/1282 68/173 713 15/47 2/0

9 474/985 84/130 764 23/50 1/0

10 428/1041 84/128 469 12/34 0/0

11 702/1801 110/299 871 17/100 1/0

Total 4944/15465 854/1945 5806 138/736 14/0

Figure 2.10 The size distributions of the F-P candidates of the DO (left) and BS (right), respectively.


Figure 2.11 The density of relatively large (blue) and small (red) dark cells in terms of PoS. The X-axis denotes the image No., and Y-axis denotes PoS.

2.5 Discussion Statistical analysis of THG images of human brain tissue can yield valuable information on the nature or pathological state of the tissue. The automatic extraction of the key features from these THG images for this statistical purpose is hampered by the rich morphological information they contain. We adapted two universally used models for image filtering and segmentation, respectively. Tests on THG images of normal human brain tissue showed that we were able to extract the rich morphological information contained in them. From these extractions, pattern descriptors such as eccentricity and sphericity can be applied to separate the pathologically relevant features (brain cells, blood vessels and neuropil), and then the basic parameters of each feature can be computed. For instance, for our test images, the radius of the brain cells was around 15 µm, and the thickness of the most prominent neuropil fibers was around 2 µm. Furthermore, one potential application of THG images and the proposed algorithms would be to provide pathologically relevant parameters in the operating room during brain tumor surgery to distinguish tumor and non-tumor areas, without the presence of a pathologist. Another potential application would be to use THG after surgery in the pathology department to select tissue areas for further research or clinical investigation, e.g. DNA sequencing.

Both object density and object size are informative parameters of the pathological state of the tissue. Towards the clinical application of THG images, the PoS, a combined parameter of the density and size, can be a better representation of each pathological feature. The underlying reason is that the number of cells or neuropil of tumor tissues can be extremely high [7], resulting in clusters of cells and nets of neuropil. For the same reason, we did not estimate the exact numbers of over- and under-segmented objects. Instead, the small mean error of the detected objects to the ground truth indicates that the boundaries of objects can be accurately located by the proposed algorithms, and thus the PoS can be precisely computed. Similarly, the small objects become less important because they are sparsely distributed in THG images (See Fig. 2.11).


The quantitative comparison of THG with SHG/AF images facilitates the estimation of F-P detections and confirms the interpretation of the large dark objects as brain cells. Complete validation of all the small neuropil is not easy, because the THG signals of the neuropil are sometimes weak, which causes the segmentation of the neuropil to be incomplete. However, for clinical applications an exact count of the neuropil is not strictly needed.

We think the proposed improvements on the two classical algorithms are applicable beyond THG images. For instance, the ACPE model can be extended to n-phase cell segmentation and tracking, where the extreme of each region can be incorporated.

2.6 Conclusion and outlook In this paper, we adapted two classical image processing tools to extract the rich morphological information contained in THG images of brain tissue. Tests on THG and SHG/AF images of normal human brain tissue validated the proposed algorithms. We will be able to segment THG images of human brain tumor tissue using these image processing tools and the same processing procedure as in Fig. 2.2. Key features such as cell density and neuropil density will be computed to classify normal and tumor brain tissues. Our future work also includes applying classical machine learning algorithms to classify cell types and using deep learning models to directly classify THG images of normal and tumor tissues.

2.7 Supplementary data Third harmonic generation (THG) microscopy has recently been shown to have great potential for brain tumor diagnosis and surgery. THG is a nonlinear optical process that depends on the third-order susceptibility χ(3) of the tissue and the phase-matching conditions. Three incident photons are converted into one photon with triple the energy and one third of the wavelength, which enables 3D high-resolution imaging of living tissues [8].

THG images of brain (tumor) tissue contain such rich morphological information that it is challenging to extract all the features from them. We provide here the details of the image acquisition, the implementation of the proposed algorithms, and the validation.

2.7.1 Image sample and acquisition Structurally normal brain samples were cut from the temporal cortex and subcortical white matter that had to be removed for the surgical treatment of deeper brain structures for epilepsy. After resection, the brain tissue samples were placed within 30s in ice-cold artificial cerebrospinal fluid (ACSF) at 4°C, and transported to the laboratory, located within 200m distance from the operating room. The transition time between resection of the tissue and the start of preparing slices was less than 15 min. We prepared a 300–350 μm thick coronal slice of the freshly-excised structurally normal tissue in ice-cold ACSF solution with a vibratome (Microm, HM 650V, Thermo Fisher Scientific), placed in a plastic Petri dish (diameter 50 mm) and covered with a 0.17 mm thick glass cover slip to provide a flat sample surface during multiphoton imaging. All procedures on human tissue were performed with the approval of the Medical Ethical Committee of the VU University Medical Center and in accordance with Dutch license procedures and the declaration of Helsinki. All patients gave a written informed consent for tissue biopsy collection and signed a declaration permitting the use of their biopsy specimens in scientific research.

The imaging setup to generate and collect these signals consisted of a commercial two-photon laser-scanning microscope (TriMScope I, LaVision BioTec GmbH) and a femtosecond laser source. The laser source was an optical parametric oscillator (Mira-OPO, APE) pumped at 810 nm by a Ti-sapphire oscillator (Coherent Chameleon Ultra II). The OPO generated 200 fs pulses at 1200 nm with a repetition rate of 80 MHz. We focused the OPO beam on the sample using a 25×/1.10 (Nikon APO LWD) water-dipping objective. Two high-sensitivity GaAsP photomultiplier tubes (PMT, Hamamatsu H7422-40) equipped with narrowband filters at 400 nm and 600 nm were used to collect the THG and SHG/AF signals, respectively, as a function of the position of the focus in the sample. The signals were filtered from the 1200 nm fundamental photons by a dichroic mirror (Chroma T800LPXRXT), split into SHG/AF and THG channels by a dichroic mirror (Chroma T425LPXR), and passed through narrow-band interference filters for SHG/AF (Chroma D600/10X) and THG (Chroma Z400/10X) detection. The efficient back-scattering of the harmonic signals allowed for their detection in the epi-direction. The laser beam was transversely scanned over the sample by a pair of galvo mirrors. THG and SHG/AF modalities are intrinsically confocal and therefore provide direct depth sectioning. We obtained a full 3D image of the tissue volume by scanning the microscope objective with a stepper motor in the vertical direction. The imaging data was acquired with the TriMScope I software (“Imspector Pro”), and the image stacks were stored in 16-bit tiff format.

2.7.2 Anisotropic diffusion driven by salient edges The diffusion equation of anisotropic diffusion filtering (ADF) reads as follows:

\[
\partial_t u = \operatorname{div}\!\left(D \nabla u\right) = \sum_{i,j=1}^{3} \partial_{x_i}\!\left( d_{ij}\, \partial_{x_j} u \right), \tag{2.10}
\]

where u denotes a 3D image and D is the diffusivity tensor, depending on the gradient ∇u_σ of a Gaussian smoothed version of the image. The diffusivity tensor D is constructed from the structure tensor J_ρ(∇u_σ). The structure tensor J can be decomposed as the product of its eigenvectors and the diagonal matrix of its eigenvalues. We denote the eigenvectors of J as v_i, i = 1, 2, 3, with corresponding eigenvalues μ_i. The eigenvectors are ordered decreasingly according to their eigenvalues. The diffusivity tensor D is then constructed from J by replacing all the eigenvalues μ_i by λ_i.

We combine the HOS measure [34] and the MED model [16], and propose the HOS model of ADF as follows:

\[ \begin{cases} \lambda_1 = \exp\!\left(-\dfrac{\mathrm{HOS}_n(\mathbf{x})^2}{h^2}\right); \\[4pt] \lambda_2 = \lambda_1 - (\lambda_1 - \lambda_3)\,\tau_h\!\left(C_{plane}\right); \\[4pt] \lambda_3 = 1. \end{cases} \tag{2.11} \]

Here $\tau_h(\cdot)$ is a fuzzy threshold function between 0 and 1 that allows a better control of the transition between 2D membrane structures and other regions [16, 43], as follows,

\[ \tau_h(x) = \frac{\tanh\!\left[\gamma(x - \tau)\right] + 1}{\tanh\!\left[\gamma(1.0 - \tau)\right] + 1}, \qquad x \in [0,1]. \tag{2.12} \]

γ is a scaling factor that controls the transition and we set it to 100.

$C_{plane}$ is the plane-confidence measure [16], defined as follows:


\[ C_{plane} = \frac{\mu_1 - \mu_2}{\mu_1 + \mu_2}. \tag{2.13} \]

The smoothing behavior of our proposed diffusion method differs between regions. In background regions, $\mathrm{HOS}_n(\mathbf{x})$ is almost 0 and $\lim_{|\mathrm{HOS}_n|\to 0}\lambda_i = 1,\ i = 1, 2$, so smoothing is encouraged in all directions at an equal level (isotropic smoothing). In the vicinity of salient edges, $\mathrm{HOS}_n(\mathbf{x})$ is large and $\lim_{|\mathrm{HOS}_n|\to\infty}\lambda_1 = 0$, so smoothing along the first direction is discouraged. In plate-like regions, the fuzzy function tends to 1, which gives $\lambda_2 = 1$, and smoothing along the second and third directions is allowed. In 1D structure regions, $\lambda_2$ tends to $\lambda_1$ and both are close to 0; smoothing is allowed along the third direction only.

The HOS measure is then incorporated into the diffusivity tensor D, which is updated at each iteration step. The number of iterations is crucial for ADF, and we use the stopping criterion proposed by Kim [34] to find the optimal number of iterations.
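For concreteness, the following is a minimal sketch (not the thesis implementation) of how equations (2.11)-(2.13) map the structure-tensor eigenvalues and the HOS measure of one voxel to the diffusivity eigenvalues $\lambda_i$; the function names are illustrative, and $h$ is assumed here to be the HOS contrast parameter of the model.

// Minimal sketch of the eigenvalue mapping of equations (2.11)-(2.13): the
// structure-tensor eigenvalues mu1 >= mu2 and the HOS measure of a voxel are
// turned into the diffusivity eigenvalues lambda_i. The parameter h is assumed
// to be the HOS contrast parameter of the model.
#include <array>
#include <cmath>

// Fuzzy threshold of equation (2.12): ~0 below tau, ~1 above tau.
double fuzzyThreshold(double x, double tau = 0.8, double gamma = 100.0) {
    return (std::tanh(gamma * (x - tau)) + 1.0) /
           (std::tanh(gamma * (1.0 - tau)) + 1.0);
}

// Plane-confidence measure of equation (2.13).
double planeConfidence(double mu1, double mu2) {
    return (mu1 + mu2 > 0.0) ? (mu1 - mu2) / (mu1 + mu2) : 0.0;
}

// Diffusivity eigenvalues of equation (2.11) for one voxel.
std::array<double, 3> diffusivityEigenvalues(double mu1, double mu2,
                                             double hos, double h) {
    const double lambda3 = 1.0;
    const double lambda1 = std::exp(-(hos * hos) / (h * h));
    const double lambda2 =
        lambda1 - (lambda1 - lambda3) * fuzzyThreshold(planeConfidence(mu1, mu2));
    return {lambda1, lambda2, lambda3};
}

The diffusivity tensor at that voxel is then $D = V\,\mathrm{diag}(\lambda_1, \lambda_2, \lambda_3)\,V^{\mathrm{T}}$, with $V$ the matrix of eigenvectors of $J$.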

The discretization framework of the anisotropic diffusion is important. We adopt a central difference scheme to approximate the diffusion equation and the structure tensor. More specifically, we adopt the framework proposed by Weickert [44] to approximate the spatial derivatives and use the standard central difference [32, 45] for the mixed derivatives, as follows:

\[ \partial_x\!\left(a\,\partial_x U\right)^n \approx \frac{a_{x+1,y,z} + a_{x,y,z}}{2}\left(U^n_{x+1,y,z} - U^n_{x,y,z}\right) - \frac{a_{x,y,z} + a_{x-1,y,z}}{2}\left(U^n_{x,y,z} - U^n_{x-1,y,z}\right); \tag{2.14} \]

\[ \partial_x\!\left(b\,\partial_y U\right)^n \approx \frac{1}{2}\left[\, b_{x+1,y,z}\,\frac{U^n_{x+1,y+1,z} - U^n_{x+1,y-1,z}}{2} - b_{x-1,y,z}\,\frac{U^n_{x-1,y+1,z} - U^n_{x-1,y-1,z}}{2} \right]; \tag{2.15} \]

\[ \partial_y\!\left(b\,\partial_x U\right)^n \approx \frac{1}{2}\left[\, b_{x,y+1,z}\,\frac{U^n_{x+1,y+1,z} - U^n_{x-1,y+1,z}}{2} - b_{x,y-1,z}\,\frac{U^n_{x+1,y-1,z} - U^n_{x-1,y-1,z}}{2} \right]. \tag{2.16} \]

For the implementation, we use the semi-implicit AOS-stabilized scheme proposed by Weickert [15]. The explicit discretization of (2.10) is given by the finite difference scheme,

\[ \frac{U^{n+1} - U^{n}}{\Delta t} = \sum_{i,j=1}^{3} L_{ij}\,U^{n}. \tag{2.17} \]

In this notation, $U$ describes a vector containing the values at each voxel of the 3D image $u$. The upper index denotes the time level and $L_{ij}$ is a central difference approximation to the operator $\partial_{x_i}\!\left(d_{ij}\,\partial_{x_j}\,\cdot\right)$. Unfortunately, such explicit schemes require a very small time step in order to be stable [15]. Combined with HOS, the algorithm of the HOS model implemented in the semi-implicit scheme is summarized as follows.

Algorithm HOS

while (the stopping criterion is not met and the predefined number of iterations is not reached)

do


1. Calculation of the structure tensor J at each voxel.

2. Calculation of HOS at each voxel.

3. Decomposition of the structure tensor J at each voxel, and calculation of the diffusivity tensor D at each voxel.

4. Calculation of

\[ V^{n} := \left(I + \Delta t \sum_{i=1}^{3}\sum_{j \neq i} L^{n}_{ij}\right) U^{n}. \tag{2.18} \]

5. Calculation of

\[ W^{n+1}_{l} := \left(I - 3\,\Delta t\, L^{n}_{ll}\right)^{-1} V^{n} \tag{2.19} \]

by means of the Thomas algorithm [44].

6. Calculation of

\[ U^{n+1} := \frac{1}{3}\sum_{l=1}^{3} W^{n+1}_{l}. \tag{2.20} \]

Remark: The semi-implicit AOS-stabilized scheme used here allows a larger time step $\Delta t$ than the explicit scheme; we fix $\Delta t = 0.2$. For the anisotropic diffusion models mentioned in this chapter, the common parameters are the characteristic size of the texture $\rho$, the noise scale $\sigma$, and the fuzzy threshold $\tau$ used to control the transition between 2D membrane structures and other regions; we set $\sigma = 1$, $\rho = 2$ and $\tau = 0.8$. For the CED model only one key parameter needs to be tuned, the coherence $\kappa$, which we set to 10. For the EED model only one key parameter needs to be tuned, the threshold $C$ that distinguishes flat from non-flat regions, which we set to 50. For the MED model, the threshold $K$ that distinguishes flat from non-flat regions is set to 10.
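The core of steps 5-6 is the solution of one tridiagonal system per image line. The following is a minimal one-dimensional sketch of that part (not the thesis implementation): it assembles $(I - 3\Delta t\, L_{ll})$ for a single row, column or depth line from the diffusivities sampled along that line, assuming homogeneous Neumann boundary conditions, and solves the system with the Thomas algorithm [44]. Variable names are illustrative.

// Minimal 1D sketch of the semi-implicit AOS step of equation (2.19): each
// row/column/depth line of the volume gives a tridiagonal system
// (I - 3*dt*L_ll) w = v, solved with the Thomas algorithm. The diffusivities d
// are assumed to come from the HOS model; boundaries are homogeneous Neumann.
#include <vector>

// Solve a tridiagonal system a_i*x_{i-1} + b_i*x_i + c_i*x_{i+1} = r_i.
std::vector<double> thomasSolve(std::vector<double> a, std::vector<double> b,
                                std::vector<double> c, std::vector<double> r) {
    const std::size_t n = b.size();
    for (std::size_t i = 1; i < n; ++i) {          // forward elimination
        const double m = a[i] / b[i - 1];
        b[i] -= m * c[i - 1];
        r[i] -= m * r[i - 1];
    }
    std::vector<double> x(n);
    x[n - 1] = r[n - 1] / b[n - 1];
    for (std::size_t i = n - 1; i-- > 0;)          // back substitution
        x[i] = (r[i] - c[i] * x[i + 1]) / b[i];
    return x;
}

// One AOS line solve along one axis: builds (I - 3*dt*L_ll) from the
// diffusivities d sampled along the line and applies the Thomas algorithm to v.
std::vector<double> aosLineSolve(const std::vector<double>& v,
                                 const std::vector<double>& d, double dt) {
    const std::size_t n = v.size();
    std::vector<double> a(n, 0.0), b(n, 0.0), c(n, 0.0);
    for (std::size_t i = 0; i < n; ++i) {
        const double dLeft  = (i > 0)     ? 0.5 * (d[i] + d[i - 1]) : 0.0;
        const double dRight = (i + 1 < n) ? 0.5 * (d[i] + d[i + 1]) : 0.0;
        a[i] = -3.0 * dt * dLeft;                  // coupling to voxel i-1
        c[i] = -3.0 * dt * dRight;                 // coupling to voxel i+1
        b[i] = 1.0 - a[i] - c[i];                  // diagonal of I - 3*dt*L_ll
    }
    return thomasSolve(a, b, c, v);
}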

2.7.3 Active contour weighted by prior extremes

We propose a novel model, active contour weighted by prior extremes (ACPE), by adding a penalty term to the energy function of the classical CV model that forces the foreground towards the desired regions, as follows:

\[ \begin{aligned} E(c_1, c_2, \phi) ={}& \mu\int_{\Omega} \left|\nabla H(\phi(\mathbf{x}))\right| d\mathbf{x} + \int_{\Omega} \left|I(\mathbf{x}) - c_2\right|^2 \left(1 - H(\phi(\mathbf{x}))\right) d\mathbf{x} + w\int_{\Omega} \left|I(\mathbf{x}) - c_1\right|^2 H(\phi(\mathbf{x}))\, d\mathbf{x} \\ &+ (1 - w)\int_{\Omega} \left|I(\mathbf{x}) - c_{ext}\right|^2 H(\phi(\mathbf{x}))\, d\mathbf{x} \\ ={}& w\,E_{CV}(c_1, c_2, \phi) + (1 - w)\,E_{CV}(c_{ext}, c_2, \phi). \end{aligned} \tag{2.21} \]

Remark: The last equality of (2.21) shows that the energy function of the ACPE model is actually a weighted sum of the CV model and a special case of the CV model. Since the theoretical background of the CV model has been established [21, 22, 46], we expect that no artifacts are introduced by the ACPE model. Although the selection of the weight w still depends on the dataset used, it provides the flexibility to control the output by controlling the mean intensity of the foreground. With this flexibility, the ACPE model can be used to segment different types of images.

For applications other than the THG images used in this article, it is possible to make the ACPE model even more independent of the tuning of the weight w. For instance, to segment dark objects, w could be defined at each voxel as follows,

\[ w(\mathbf{x}) = \frac{I_{max} - I(\mathbf{x})}{I_{max} - I_{min}}, \tag{2.22} \]

while to segment the bright features, we could define w as follows,

\[ w(\mathbf{x}) = \frac{I(\mathbf{x}) - I_{min}}{I_{max} - I_{min}}, \tag{2.23} \]

where $I_{max}$ and $I_{min}$ denote the highest and lowest intensity of the input image, respectively. The results obtained with this adaptive ACPE model on the THG images used here are similar to those of the ACPE model.

The Euler-Lagrange equation of (2.21) is,

\[ \frac{\partial\phi}{\partial t} = \delta_{\varepsilon}(\phi)\left[\mu\,\nabla\!\cdot\!\frac{\nabla\phi}{|\nabla\phi|} - w\,(I - c_1)^2 - (1 - w)(I - c_{ext})^2 + (I - c_2)^2\right]. \tag{2.24} \]

The central difference scheme is used to compute all the spatial partial derivatives, and the forward difference scheme is used to compute the temporal partial derivative [40]. Then the discretization of (2.24) is,

\[ \frac{\phi^{n+1} - \phi^{n}}{\Delta t} := L(\phi^{n}) = \delta_{\varepsilon}(\phi^{n})\left[\mu\,\nabla\!\cdot\!\frac{\nabla\phi^{n}}{|\nabla\phi^{n}|} - w\,(I - c_1)^2 - (1 - w)(I - c_{ext})^2 + (I - c_2)^2\right]. \tag{2.25} \]

The algorithm of the ACPE model is summarized as follows.

Algorithm ACPE

1. Initialization: $\phi^{n} = \phi_0$, $n = 0$. $\phi_0$ is the initial contour obtained from the non-PDE-based method [38], or simply one or several circles.

2. If $n \bmod 10$ is 0, reinitialize $\phi^{n}$ as a signed distance function.

3. Update the means $c_1, c_2$.

4. Compute $\phi^{n+1/2}$ as

\[ \phi^{n+1/2} = \phi^{n} + \Delta t_1\, L(\phi^{n}). \tag{2.26} \]


5. Compute $\phi^{n+1}$ as

\[ \phi^{n+1} = \phi^{n+1/2} + \Delta t_2\,\Delta\phi^{n+1/2}. \tag{2.27} \]

6. If $\phi^{n+1}$ satisfies the stationary condition or $n+1$ reaches the predefined number of iterations, stop; otherwise, set $n = n+1$ and return to Step 2.

Remark: The time steps $\Delta t_1$ and $\Delta t_2$ used here are fixed to $\Delta t_1 = 0.1$ and $\Delta t_2 = 0.1$, as recommended by Zhang [40].
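As an illustration of Steps 3-4, the following is a minimal sketch (not the thesis implementation) of the data-driven part of one ACPE update: the region means $c_1$ and $c_2$ are recomputed from the current sign of $\phi$, $c_{ext}$ is taken as the image minimum (dark objects) or maximum (bright objects), and every voxel of $\phi$ is pushed by the weighted data force of equations (2.24)-(2.26). The curvature (length) term and the reinitialization and diffusion steps of the full algorithm are omitted here; variable names are illustrative.

// Minimal sketch of the data-driven part of one ACPE update; the curvature
// term of equations (2.24)-(2.25) is omitted. Assumes a non-empty image.
#include <algorithm>
#include <cstddef>
#include <vector>

struct AcpeMeans { double c1, c2, cExt; };

// Step 3 of Algorithm ACPE: recompute the foreground/background means and the
// prior extreme; 'darkObjects' selects the image minimum as c_ext.
AcpeMeans updateMeans(const std::vector<double>& image,
                      const std::vector<double>& phi, bool darkObjects) {
    double sumFg = 0.0, sumBg = 0.0;
    std::size_t nFg = 0, nBg = 0;
    for (std::size_t i = 0; i < image.size(); ++i) {
        if (phi[i] > 0.0) { sumFg += image[i]; ++nFg; }
        else              { sumBg += image[i]; ++nBg; }
    }
    const auto [minIt, maxIt] = std::minmax_element(image.begin(), image.end());
    return { nFg ? sumFg / nFg : 0.0,
             nBg ? sumBg / nBg : 0.0,
             darkObjects ? *minIt : *maxIt };
}

// Smoothed Dirac delta used in equations (2.24)-(2.25).
double diracEps(double phi, double eps = 1.5) {
    const double pi = 3.14159265358979323846;
    return (eps / pi) / (eps * eps + phi * phi);
}

// Explicit data-force step phi += dt1 * L(phi), with the curvature term left
// out; w in [0,1] balances the CV term and the prior-extreme term.
void dataForceStep(const std::vector<double>& image, std::vector<double>& phi,
                   const AcpeMeans& m, double w, double dt1 = 0.1) {
    for (std::size_t i = 0; i < image.size(); ++i) {
        const double I = image[i];
        const double force = -w * (I - m.c1) * (I - m.c1)
                             - (1.0 - w) * (I - m.cExt) * (I - m.cExt)
                             + (I - m.c2) * (I - m.c2);
        phi[i] += dt1 * diracEps(phi[i]) * force;
    }
}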

2.7.4 Validation method

Validation of THG image segmentations is difficult due to the complexity and irregularity of the object shapes and the rich information they contain. We first use the following contour-based metric [25] for a precise evaluation of the segmentation results, because the active contour model provides closed and smooth contours of the target objects. We compute the distance of each point $\mathbf{x}_i,\ i = 1, \ldots, n$, on the resulting contour $C$ to the ground truth $S$, denoted $\mathrm{dist}(\mathbf{x}_i, S)$. The mean error of the resulting contour $C$ with respect to the ground truth $S$ is then

\[ \mathrm{ME} = \frac{1}{n}\sum_{i=1}^{n} \mathrm{dist}(\mathbf{x}_i, S). \tag{2.28} \]
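A minimal sketch of this metric (not the thesis code) is given below: for every point of the resulting contour, the distance to the nearest ground-truth contour point is taken and the distances are averaged. A brute-force nearest-neighbour search is used for clarity; a distance transform of the ground truth [41] would be the faster alternative. Names are illustrative.

// Minimal sketch of the contour-based mean error of equation (2.28), using a
// brute-force nearest-neighbour search over the ground-truth contour points.
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

struct Point { double x, y, z; };

// Distance from one contour point to the nearest ground-truth point.
double distToSet(const Point& p, const std::vector<Point>& groundTruth) {
    double best = std::numeric_limits<double>::max();
    for (const Point& q : groundTruth) {
        const double dx = p.x - q.x, dy = p.y - q.y, dz = p.z - q.z;
        best = std::min(best, std::sqrt(dx * dx + dy * dy + dz * dz));
    }
    return best;
}

// Mean error ME of equation (2.28).
double meanError(const std::vector<Point>& contour,
                 const std::vector<Point>& groundTruth) {
    double sum = 0.0;
    for (const Point& p : contour) sum += distToSet(p, groundTruth);
    return contour.empty() ? 0.0 : sum / contour.size();
}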

Several 2D slices at different depths are selected from the tested THG images and manually segmented by a human observer as ground truth. The 2D segmentations of these slices are taken from the 3D segmentation results and compared to the ground truth using the contour-based metric described above. Moreover, one test image is randomly selected by the human observer for visual inspection. Each THG detection is identified as true positive (T-P) or false positive (F-P).

Remark: Note that the mean error obtained here from 2D slices is more critical than that of a 3D evaluation, and the under- and over-segmented objects also contribute to the mean error. A 3D ground truth is normally delineated slice by slice, with the 2D delineations combined into the 3D ground truth. If we designate $S_j$ as the 2D ground truth of the $j$-th slice and $S_{3D}$ as the 3D ground truth, then the distance of each point computed with respect to the 2D ground truth is at least as large as that obtained with respect to the 3D ground truth,

\[ \mathrm{dist}(\mathbf{x}_i, S_j) \geq \mathrm{dist}(\mathbf{x}_i, S_{3D}). \tag{2.29} \]

Therefore, for all the points of the selected slices,

\[ \mathrm{ME} = \frac{1}{n}\sum_{i=1}^{n} \mathrm{dist}(\mathbf{x}_i, S_j) \geq \frac{1}{n}\sum_{i=1}^{n} \mathrm{dist}(\mathbf{x}_i, S_{3D}). \tag{2.30} \]

2.7.5 Validation results

We applied our proposed algorithms to 11 pairs of THG images of structurally normal ex-vivo human brain tissue. Each image has a size of around $1000 \times 1000 \times 50$ voxels, which corresponds to a tissue volume of around $300\,\mu\mathrm{m} \times 300\,\mu\mathrm{m} \times 100\,\mu\mathrm{m}$.


Table 2.2. The mean errors (ME, in pixels) of the selected 2D slices. DO denotes dark objects and BS denotes bright structures.

Slice No.   Depth   ME DO   ME BS
1           1       2.25    1.21
2           4       2.87    1.37
3           7       1.84    1.67
4           10      2.90    1.82
5           15      2.43    1.19
6           20      2.96    1.65
7           25      2.88    1.41
8           35      2.38    1.19
9           40      3.74    1.28
10          50      2.98    0.78
Average             2.7     1.4

Ten 2D slices at different depths from different tested THG images are selected and manually segmented by a human observer as ground truth. The 2D segmentation results of these slices are taken from our 3D segmentation results and compared to the ground truth using (2.28). The mean errors of the selected 2D slices are listed in Table 2.2. The average mean error of the THG detections of all selected slices with respect to the ground truth is 2.7 pixels (0.8 μm) for dark objects and 1.4 pixels (0.4 μm) for bright structures. In Fig. 2.12 we show one of the selected slices (Slice 5 in Table 2.2) used for manual delineation and validation. We see that most of the features have been manually delineated, especially the dark objects. Compared to the ground truth, our result is highly accurate and all the delineated objects in the ground truth have been detected by our proposed algorithms, which means that the percentage of false negative detections is quite low. Note that the object marked with the green arrow has a bright spot inside; the ground truth of dark objects did not exclude this spot, but our algorithm was still able to segment this dark object correctly. Complete validation of all the small neuropil is not easy because the THG signals of neuropil are sometimes not as strong as those of lipofuscin granules, rendering the segmentation of neuropil incomplete. However, for clinical applications, the exact amount of neuropil is not necessarily needed. The over-segmented (yellow arrow in Fig. 2.12) and under-segmented objects (red arrow in Fig. 2.12) also contribute to the mean error defined in (2.28). The mean error of the dark objects in the 9th slice is slightly larger than that of the others, for two reasons: this slice lies very deep in the image, and the average size of the cells in this slice is slightly larger than in the other slices, because the field of view of this slice is smaller and thus the cells appear larger (in voxels).

One image (Image 3) was randomly selected for a complete 3D visual inspection. In total, 441 dark objects and 1426 bright structures were detected by our proposed algorithms. Only 4 dark objects and no bright structures were identified as false positive detections. Hence, the segmentation accuracy for the selected THG image is above 99%.

Figure 2.12 One example of the 2D manual delineation. (A) The raw THG image. (B) The manually delineated ground truth of (A) for bright structures. (C) The 2D segmentations of (A) taken from the 3D segmentation result, for the bright structures. (D) The manually delineated ground truth of (A) for dark objects. (E) The 2D segmentations of (A) taken from the 3D segmentation result, for the dark objects. (F) The overlap of (D) and (E). The boundaries of the detected dark objects in (E) are shown in red.

References

[1] J. C. Jung and M. J. Schnitzer, "Multiphoton endoscopy," Optics Letters, vol. 28, pp. 902-904, Jun 1 2003.
[2] R. M. Williams, A. Flesken-Nikitin, L. H. Ellenson, D. C. Connolly, T. C. Hamilton, A. Y. Nikitin, and W. R. Zipfel, "Strategies for High-Resolution Imaging of Epithelial Ovarian Cancer by Laparoscopic Nonlinear Microscopy," Translational Oncology, vol. 3, pp. 181-194, Jun 2010.
[3] I. Pavlova, K. R. Hume, S. A. Yazinski, J. Flanders, T. L. Southard, R. S. Weiss, and W. W. Webb, "Multiphoton microscopy and microspectroscopy for diagnostics of inflammatory and neoplastic lung," Journal of Biomedical Optics, vol. 17, Mar 2012.
[4] M. B. Ji, D. A. Orringer, C. W. Freudiger, S. Ramkissoon, X. H. Liu, D. Lau, A. J. Golby, I. Norton, M. Hayashi, N. Y. R. Agar, G. S. Young, C. Spino, S. Santagata, S. Camelo-Piragua, K. L. Ligon, O. Sagher, and X. S. Xie, "Rapid, Label-Free Detection of Brain Tumors with Stimulated Raman Scattering Microscopy," Science Translational Medicine, vol. 5, Sep 4 2013.
[5] D. M. Huland, M. Jain, D. G. Ouzounov, B. D. Robinson, D. S. Harya, M. M. Shevchuk, P. Singhal, C. Xu, and A. K. Tewari, "Multiphoton gradient index endoscopy for evaluation of diseased human prostatic tissue ex vivo," Journal of Biomedical Optics, vol. 19, Nov 2014.
[6] M. Jain, N. Narula, A. Aggarwal, B. Stiles, M. M. Shevchuk, J. Sterling, B. Salamoon, V. Chandel, W. W. Webb, N. K. Altorki, and S. Mukherjee, "Multiphoton Microscopy: A Potential 'Optical Biopsy' Tool for Real-Time Evaluation of Lung Tumors Without the Need for Exogenous Contrast Agents," Archives of Pathology & Laboratory Medicine, vol. 138, pp. 1037-1047, Aug 2014.
[7] N. V. Kuzmin, P. Wesseling, P. C. Hamer, D. P. Noske, G. D. Galgano, H. D. Mansvelder, J. C. Baayen, and M. L. Groot, "Third harmonic generation imaging for fast, label-free pathology of human brain tumors," Biomedical Optics Express, vol. 7, pp. 1889-1904, May 1 2016.
[8] D. Debarre, W. Supatto, A. M. Pena, A. Fabre, T. Tordjmann, L. Combettes, M. C. Schanne-Klein, and E. Beaurepaire, "Imaging lipid bodies in cells and tissues using third-harmonic generation microscopy," Nature Methods, vol. 3, pp. 47-53, Jan 2006.
[9] S. Y. Chen, C. S. Hsieh, S. W. Chu, C. Y. Lin, C. Y. Ko, Y. C. Chen, H. J. Tsai, C. H. Hu, and C. K. Sun, "Noninvasive harmonics optical microscopy for long-term observation of embryonic nervous system development in vivo," Journal of Biomedical Optics, vol. 11, Sep-Oct 2006.
[10] N. Olivier, M. A. Luengo-Oroz, L. Duloquin, E. Faure, T. Savy, I. Veilleux, X. Solinas, D. Debarre, P. Bourgine, A. Santos, N. Peyrieras, and E. Beaurepaire, "Cell Lineage Reconstruction of Early Zebrafish Embryos Using Label-Free Nonlinear Microscopy," Science, vol. 329, pp. 967-971, Aug 20 2010.
[11] S. Witte, A. Negrean, J. C. Lodder, C. P. De Kock, G. T. Silva, H. D. Mansvelder, and M. L. Groot, "Label-free live brain imaging and targeted patching with third-harmonic generation microscopy," Proceedings of the National Academy of Sciences, vol. 108, pp. 5970-5975, 2011.
[12] S. Y. Chen, S. U. Chen, H. Y. Wu, W. J. Lee, Y. H. Liao, and C. K. Sun, "In Vivo Virtual Biopsy of Human Skin by Using Noninvasive Higher Harmonic Generation Microscopy," IEEE Journal of Selected Topics in Quantum Electronics, vol. 16, pp. 478-492, May-Jun 2010.
[13] G. G. Lee, H. H. Lin, M. R. Tsai, S. Y. Chou, W. J. Lee, Y. H. Liao, C. K. Sun, and C. F. Chen, "Automatic Cell Segmentation and Nuclear-to-Cytoplasmic Ratio Analysis for Third Harmonic Generated Microscopy Medical Images," IEEE Transactions on Biomedical Circuits and Systems, vol. 7, pp. 158-168, Apr 2013.
[14] J. Y. Kim, L. S. Kim, and S. H. Hwang, "An advanced contrast enhancement using partially overlapped sub-block histogram equalization," IEEE Transactions on Circuits and Systems for Video Technology, vol. 11, pp. 475-484, Apr 2001.
[15] J. Weickert, "Coherence-enhancing diffusion filtering," International Journal of Computer Vision, vol. 31, pp. 111-127, Apr 1999.
[16] S. Pop, A. C. Dufour, J. F. Le Garrec, C. V. Ragni, C. Cimper, S. M. Meilhac, and J. C. Olivo-Marin, "Extracting 3D cell parameters from dense tissue environments: application to the development of the mouse heart," Bioinformatics, vol. 29, pp. 772-779, Mar 15 2013.
[17] M. A. Luengo-Oroz, J. L. Rubio-Guivernau, E. Faure, T. Savy, L. Duloquin, N. Olivier, D. Pastor, M. Ledesma-Carbayo, D. Debarre, P. Bourgine, E. Beaurepaire, N. Peyrieras, and A. Santos, "Methodology for Reconstructing Early Zebrafish Development From In Vivo Multiphoton Microscopy," IEEE Transactions on Image Processing, vol. 21, pp. 2335-2340, Apr 2012.
[18] S. Lavanya, B. N. Kumar, R. Obuliraj, and S. Dhanalakshmi, "Gradient Watershed Transform Based Automated Cell Segmentation for THG Microscopy Medical Images to Detect Skin Cancer," The International Journal of Science and Technoledge, vol. 2, p. 98, Mar 2014.
[19] T. Chang, M. S. Zimmerley, K. P. Quinn, I. Lamarre-Jouenne, D. L. Kaplan, E. Beaurepaire, and I. Georgakoudi, "Non-invasive monitoring of cell metabolism and lipid production in 3D engineered human adipose tissues using label-free multiphoton microscopy," Biomaterials, vol. 34, pp. 8607-8616, Nov 2013.
[20] M. Kass, A. Witkin, and D. Terzopoulos, "Snakes: Active Contour Models," International Journal of Computer Vision, vol. 1, pp. 321-331, 1987.
[21] D. Mumford and J. Shah, "Optimal Approximations by Piecewise Smooth Functions and Associated Variational Problems," Communications on Pure and Applied Mathematics, vol. 42, pp. 577-685, Jul 1989.
[22] T. F. Chan and L. A. Vese, "Active contours without edges," IEEE Transactions on Image Processing, vol. 10, pp. 266-277, Feb 2001.
[23] L. Wang, L. He, A. Mishra, and C. M. Li, "Active contours driven by local Gaussian distribution fitting energy," Signal Processing, vol. 89, pp. 2435-2447, Dec 2009.
[24] A. Dufour, V. Shinin, S. Tajbakhsh, N. Guillen-Aghion, J. C. Olivo-Marin, and C. Zimmer, "Segmenting and tracking fluorescent cells in dynamic 3-D microscopy with coupled active surfaces," IEEE Transactions on Image Processing, vol. 14, pp. 1396-1410, Sep 2005.
[25] C. M. Li, R. Huang, Z. H. Ding, J. C. Gatenby, D. N. Metaxas, and J. C. Gore, "A Level Set Method for Image Segmentation in the Presence of Intensity Inhomogeneities With Application to MRI," IEEE Transactions on Image Processing, vol. 20, pp. 2007-2016, Jul 2011.
[26] X. H. Cai, R. Chan, and T. Y. Zeng, "A Two-Stage Image Segmentation Method Using a Convex Variant of the Mumford-Shah Model and Thresholding," SIAM Journal on Imaging Sciences, vol. 6, pp. 368-390, 2013.
[27] Y. P. Duan, H. B. Chang, W. M. Huang, J. Y. Zhou, Z. K. Lu, and C. L. Wu, "The L0 Regularized Mumford-Shah Model for Bias Correction and Segmentation of Medical Images," IEEE Transactions on Image Processing, vol. 24, pp. 3927-3938, Nov 2015.
[28] O. Dzyubachyk, W. A. van Cappellen, J. Essers, W. J. Niessen, and E. Meijering, "Advanced Level-Set-Based Cell Tracking in Time-Lapse Fluorescence Microscopy (vol 29, pg 852, 2010)," IEEE Transactions on Medical Imaging, vol. 29, pp. 1331-1331, Jun 2010.
[29] M. Maska, O. Danek, S. Garasa, A. Rouzaut, A. Munoz-Barrutia, and C. Ortiz-de-Solorzano, "Segmentation and Shape Tracking of Whole Fluorescent Cells Based on the Chan-Vese Model," IEEE Transactions on Medical Imaging, vol. 32, pp. 995-1006, Jun 2013.
[30] T. J. Collins, "ImageJ for microscopy," Biotechniques, vol. 43, Jul 2007.
[31] F. de Chaumont, S. Dallongeville, N. Chenouard, N. Herve, S. Pop, T. Provoost, V. Meas-Yedid, P. Pankajakshan, T. Lecomte, Y. Le Montagner, T. Lagache, A. Dufour, and J. C. Olivo-Marin, "Icy: an open bioimage informatics platform for extended reproducible research," Nature Methods, vol. 9, pp. 690-696, Jul 2012.
[32] J. Weickert, Anisotropic Diffusion in Image Processing, vol. 1. Stuttgart: Teubner, 1998.
[33] R. Manniesing, M. A. Viergever, and W. J. Niessen, "Vessel enhancing diffusion: A scale space representation of vessel structures," Medical Image Analysis, vol. 10, pp. 815-825, Dec 2006.
[34] W. Kim and C. Kim, "Active Contours Driven by the Salient Edge Energy Model," IEEE Transactions on Image Processing, vol. 22, pp. 1665-1671, Apr 2013.
[35] P. Perona and J. Malik, "Scale-Space and Edge Detection Using Anisotropic Diffusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, pp. 629-639, Jul 1990.
[36] M. K. Tsatsanis and G. B. Giannakis, "Object and Texture Classification Using Higher-Order Statistics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, pp. 733-750, Jul 1992.
[37] V. Murino, C. Ottonello, and S. Pagnan, "Noisy texture classification: A higher-order statistics approach," Pattern Recognition, vol. 31, pp. 383-393, Apr 1998.
[38] B. Song, "Topics in variational PDE image segmentation, inpainting and denoising," PhD thesis, University of California Los Angeles, 2003.
[39] C. M. Li, C. Y. Xu, C. F. Gui, and M. D. Fox, "Distance Regularized Level Set Evolution and Its Application to Image Segmentation," IEEE Transactions on Image Processing, vol. 19, pp. 3243-3254, Dec 2010.
[40] K. H. Zhang, L. Zhang, H. H. Song, and D. Zhang, "Reinitialization-Free Level Set Evolution via Reaction Diffusion," IEEE Transactions on Image Processing, vol. 22, pp. 258-271, Jan 2013.
[41] P. Felzenszwalb and D. Huttenlocher, "Distance transforms of sampled functions," Cornell University, Technical Report, 2004.
[42] P. Soille, Morphological Image Analysis: Principles and Applications. Springer Science & Business Media, 2013.
[43] T. Romulus, O. Lavialle, M. Borda, and P. Baylou, "Flow coherence diffusion: linear and nonlinear case," Advanced Concepts for Intelligent Vision Systems, Proceedings, vol. 3708, pp. 316-323, 2005.
[44] J. Weickert, B. M. T. Romeny, and M. A. Viergever, "Efficient and reliable schemes for nonlinear diffusion filtering," IEEE Transactions on Image Processing, vol. 7, pp. 398-410, Mar 1998.
[45] A. S. Frangakis and R. Hegerl, "Noise reduction in electron tomographic reconstructions using nonlinear anisotropic diffusion," Journal of Structural Biology, vol. 135, pp. 239-250, Sep 2001.
[46] S. Osher and J. A. Sethian, "Fronts Propagating with Curvature-Dependent Speed: Algorithms Based on Hamilton-Jacobi Formulations," Journal of Computational Physics, vol. 79, pp. 12-49, Nov 1988.


Chapter 3

Quantitative comparison of 3D third harmonic generation and fluorescence microscopy images

This chapter is based on: Z. Zhang, N. V. Kuzmin, M. L. Groot, and J. C. de Munck, Journal of Biophotonics, doi: https://doi.org/10.1002/jbio.201600256, 2017 May.


3.1 Abstract

Third harmonic generation (THG) microscopy is a label-free imaging technique that shows great potential for rapid pathology of brain tissue during brain tumor surgery. However, the interpretation of THG brain images should be quantitatively linked to images of more standard imaging techniques, which so far has been done only qualitatively. We establish here such a quantitative link between THG images of mouse brain tissue and all-nuclei-highlighted fluorescence images, acquired simultaneously from the same tissue area. For the quantitative comparison of a substantial number of image pairs, we present a segmentation workflow that is applicable to both THG and fluorescence images, with precisions of 91.3% and 95.8% achieved, respectively. We find that the correspondence between the main features of the two imaging modalities amounts to 88.9%, providing quantitative evidence for the interpretation of the dark holes as brain cells. Moreover, 80% of the bright objects in THG images overlap with nuclei highlighted in the fluorescence images, and they are 2 times smaller than the dark holes, showing that cells of different morphologies can be recognized in THG images. We expect that the described quantitative comparison is applicable to other types of brain tissue and to more specific staining experiments for cell type identification.

3.2 Introduction

Third harmonic generation (THG) microscopy [1-4] shows great potential for real-time pathology of brain tumor tissue [5, 6]. THG microscopy is also establishing itself as an important tool for studying intact tissue. It has been successfully applied to image unstained samples such as insect embryos, plant seeds and intact mammalian tissue [3], epithelial tissues [7-9], zebrafish embryos [10], the zebrafish nervous system [4], and mouse brain [5]. THG is a nonlinear optical process that depends on the third-order susceptibility χ(3) of the tissue and the phase-matching conditions. Three incident photons are converted into one photon with triple the energy and one third of the wavelength [1-6].

Like other multi-photon techniques [10-18], THG enables the recording of label-free images of unfixed 3D volumes of tissue [1-10], free from the spatial distortion artifacts inherent to histopathology. Because the speed of the imaging modality approaches real time and no tissue preparation steps are required, it also potentially allows feedback to the surgeon during surgery on the nature of the tissue, i.e. whether it is healthy or tumorous [5, 6, 19-23]. THG was recently shown to yield label-free images of ex-vivo human tumor tissue of histopathological quality, in real time [6]. Increased cellularity, nuclear pleomorphism and rarefaction of neuropil could be recognized clearly in THG tumor images of fresh, unstained human brain tissue. This provided the first evidence that, applying the same microscopic criteria that are used by the pathologist, THG ex-vivo microscopy can be used to recognize the presence of diffuse infiltrative glioma in fresh, unstained human brain tissue [6]. Therefore, 3D THG images of brain tissue (THG brain images) contain a wealth of quantitative parameters relevant for the characterization of the pathological state of the tissue: cell size, cell types, cell density of each type, etc.

The interpretation of the observed features in THG brain images is usually linked to more standard imaging techniques. For example, THG images of mouse brain have been compared one-to-one with two-photon fluorescence images where cell nuclei of interneurons had been labeled with Green Fluorescent Protein (GFP) [5]. THG images of structurally normal and tumor tissues have been compared, not one-to-one, with histopathological images of the same samples stained with hematoxylin and eosin (H&E) [6]. Similarly, the interpretation of other label free imaging techniques, such as stimulated Raman scattering (SRS) microscopy and optical coherence tomography (OCT), has been qualitatively linked to H&E stained images [20, 21, 24].


However, the link between THG brain images (or brain images of the aforementioned techniques) and fluorescence/H&E stained images has so far only been established qualitatively. Such a qualitative link may verify that certain structures can be detected in both images, but remains indirect because it does not guarantee that each observed object or even the major part of it, indeed corresponds to, e.g., a brain cell. If the quantitative link between these novel images and fluorescence/H&E stained images can be established, it will facilitate and verify the interpretation of these novel images, and further strengthen their potential for real-time pathology of human brain tumors [6, 21, 24, 25].

Automatic cell quantification is one of the most important steps in this process, and automatic cell segmentation has received increasing attention in the past years [26, 27]. Segmentation methods based on watersheds and active contours form the core of automatic cell segmentation procedures. Several methods based on morphological filters and the watershed algorithm have been proposed to detect cells or cell nuclei in fluorescence images [28-32]. The active contour models [33-43] became popular for automatic cell/nuclei segmentation at the beginning of this century, when the classical CV model [33] was proposed. THG images, together with Raman and other nonlinear microscopy images, differ from labeled fluorescence images in their complexity, inherent to their high information density [5, 6, 20, 21, 24, 25, 44-48]. A limited number of segmentation methods has been explored on THG images of several tissue types. A watershed-based approach combined with the convergence index filter was used to segment 2D THG images acquired from human skin tissue [46], where skin cells appear as dark nuclei surrounded by bright cytoplasm. The viscous watershed transform was used to delineate cell membranes in THG images of zebrafish embryos [10, 47]. THG images of stem cells have been segmented, but not automatically [49], probably because the stem cells appeared with very irregular shapes. THG images of different tissues present different morphologies, and a different segmentation strategy is needed to extract the cellular features of THG brain images. For automatic cell identification in THG brain images, we have developed novel segmentation methods, reported in an accompanying paper [48].

In this study, we will establish a quantitative link between THG images of mouse brain tissue and two-photon fluorescence images with all nuclei highlighted. We first describe the imaging setup, with which 3D THG and fluorescence images are acquired simultaneously from the same tissue area. For the quantitative comparison of a large number of THG and fluorescence images, we present an integrated segmentation workflow that is applicable for both THG and fluorescence images. Quantitative comparison is then performed to verify the interpretation of the main features in THG images as brain cells, from which the quantitative link between THG and fluorescence images can be established. Moreover, since both dark holes and bright objects appear in THG images, this quantitative link will reveal whether THG is able to infer their origins. Although we use ex-vivo mouse neocortex tissue stained with an unspecific nuclear counterstain to establish the link, we expect that the described imaging, image processing and quantitative comparison procedures are applicable with other types of brain tissue and more specific fluorescence staining experiments.

3.3 Sample preparation and image acquisition

The imaging setup was described before [5, 6]. Briefly, it consisted of a commercial two-photon laser-scanning microscope (2PLSM, TrimScope I, LaVision BioTec GmbH) and a femtosecond laser source (Fig. 3.1A). The laser source is an optical parametric oscillator (Mira-OPO, APE) pumped at 810 nm by a Ti:sapphire laser (Coherent Chameleon Ultra II). The OPO generated 200 fs pulses at 1200 nm with linear polarization and a repetition rate of 80 MHz. A 25×/1.10 water-dipping microscope objective (Nikon APO LWD) provided a transverse resolution of 0.5 µm and an axial resolution of 1 µm. The laser power on the sample was adjusted between 20 mW and 150 mW to attain a sufficient signal-to-noise ratio, i.e. to compensate for the increased scattering of the excitation beam at deep tissue layers (up to a few hundred µm). Depth scanning was accomplished by moving the objective with a stepper motor.

All experimental procedures with mouse brain tissue were carried out according to the animal welfare guidelines of the VU University Amsterdam, the Netherlands. Brains of C57/BL6 wild-type mice, postnatal day 13–15 (P13–P15), were rapidly removed after decapitation, dissected in ice-cold slicing solution into 300–350 μm thick coronal slices and kept in a holding chamber with carbogenated artificial cerebrospinal fluid (ACSF) at 20°C before imaging [50, 51]. During imaging, the brain slice was mechanically stabilized by a "harp" (Fig. 3.1B) and perfused with carbogenated ACSF at 37°C.

Hoechst-33342 dye (H-342) was used to label nuclear DNA of mouse brain cells [11, 14, 52]. H-342 is a cell-permeable nuclear counterstain for live and fixed cells and tissue sections. When bound to the AT regions of DNA it can be excited via a three-photon transition with 1200 nm photons, whereupon fluorescence with a maximum at 460 nm is emitted. During the staining procedure the ACSF perfusion of the slice was stopped and the ready-to-use liquid formulation of high-purity H-342 (“NucBlue Live ReadyProbes Reagent”, prod. nr. R37605, Molecular Probes) was applied from the dropper bottle on the brain tissue slice right in the imaging chamber, two drops per 1 ml of ACSF. The slice was then incubated for 15–20 minutes and ACSF perfusion was resumed.

The back-scattered THG and fluorescence (HOE) photons were filtered from the pump by a dichroic mirror (Chroma, T800lpxrxt), split into THG and HOE channels by a dichroic mirror (Chroma, T425lprx), passed through the interference filters (IF) for THG (Chroma, Z400/10X) and HOE (Chroma, HQ500/140M-2P), and focused (lens L, Thorlabs, LA1576-A, plano-convex, dia. 9 mm, f = 12 mm) on the photomultiplier tube cathodes (PMT, Hamamatsu, H7422-40). Data acquisition was performed with the TriMScope I software ("Imspector Pro"), and image stacks were stored in 16-bit TIFF format. Since we started THG and fluorescence imaging (which took five minutes per image pair) before the DNA staining was maximal, the acquired images were indexed according to the staining time and split into "early-stage" and "late-stage" datasets.

Figure 3.1 (A) The setup of the multi-photon laser-scanning microscope TriMScope I. See Fig. 1.1 also. (B) A photograph of a brain sample in the imaging environment: the slice was perfused by ACSF+CO2+O2 via pipes (on the left and right).


3.4 Image processing

3.4.1 THG image segmentation

Dark holes, interpreted as brain cells, that appear in a bright background of neuropil are the main features of THG images of mouse brain tissue [5, 6]. A complete cell identification strategy usually involves four steps: pre-processing, denoising, image segmentation, and post-processing. We propose a cell identification workflow that contains all four steps; Fig. 3.2 gives a detailed flowchart of the major sub-steps.

Figure 3.2 The proposed segmentation workflow.

3.4.1.1 Pre-processing

Three sub-steps are used in the preprocessing of THG images. First, partially overlapped sub-block histogram equalization (POSHE) [53] is applied to enhance the local contrast and attenuate the intensity inhomogeneity. Second, the THG signal decays as a function of imaging depth due to increased light scattering. To correct for this intensity loss, we calculate the mean intensity value of each slice, subtract the best-fitting 3rd-order polynomial of these slice means from the image, and shift the intensity of each voxel by the same offset to prevent negative values. Finally, the voxels of a 3D THG image are non-cubic, with 0.5 µm in-plane compared to 1 µm along the depth; a linear intensity interpolation is performed along the depth to obtain cubic voxels.
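The depth correction of the second sub-step can be sketched as follows; this is a minimal illustration (not the thesis code) under the assumptions stated in the comments, with the least-squares fit done via plain normal equations for brevity. All names are illustrative.

// Minimal sketch of the depth intensity correction: fit a 3rd-order polynomial
// to the per-slice mean intensities, subtract the fitted trend from every voxel
// of the corresponding slice, and shift the stack so no value becomes negative.
// Assumes at least four slices; the small normal-equation system is solved with
// Gaussian elimination without pivoting, adequate for this well-behaved case.
#include <algorithm>
#include <array>
#include <vector>

// Fit mean(z) ~ c0 + c1*z + c2*z^2 + c3*z^3 by least squares.
std::array<double, 4> fitCubic(const std::vector<double>& sliceMeans) {
    double A[4][4] = {}, b[4] = {};
    for (std::size_t z = 0; z < sliceMeans.size(); ++z) {
        const double p[4] = {1.0, double(z), double(z) * z, double(z) * z * z};
        for (int r = 0; r < 4; ++r) {
            b[r] += p[r] * sliceMeans[z];
            for (int c = 0; c < 4; ++c) A[r][c] += p[r] * p[c];
        }
    }
    for (int k = 0; k < 4; ++k)                       // forward elimination
        for (int r = k + 1; r < 4; ++r) {
            const double m = A[r][k] / A[k][k];
            for (int c = k; c < 4; ++c) A[r][c] -= m * A[k][c];
            b[r] -= m * b[k];
        }
    std::array<double, 4> coeff{};
    for (int r = 3; r >= 0; --r) {                    // back substitution
        double s = b[r];
        for (int c = r + 1; c < 4; ++c) s -= A[r][c] * coeff[c];
        coeff[r] = s / A[r][r];
    }
    return coeff;
}

// stack[z][i]: intensities of slice z; the depth trend is removed in place.
void correctDepthIntensity(std::vector<std::vector<double>>& stack) {
    std::vector<double> means;
    for (const auto& slice : stack) {
        double s = 0.0;
        for (double v : slice) s += v;
        means.push_back(s / slice.size());
    }
    const auto c = fitCubic(means);
    double lowest = 0.0;
    for (std::size_t z = 0; z < stack.size(); ++z) {
        const double trend = c[0] + c[1] * z + c[2] * z * z + c[3] * z * z * z;
        for (double& v : stack[z]) { v -= trend; lowest = std::min(lowest, v); }
    }
    for (auto& slice : stack)                          // shift so values stay >= 0
        for (double& v : slice) v -= lowest;
}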

3.4.1.2 Image denoising

The anisotropic diffusion filter (ADF) has been one of the standard choices for noise reduction [41, 54-57] because it is able to remove noise while keeping the object edges sharp. We use the edge-enhancing ADF proposed by Weickert in 1998 [55], known as the EED model, to remove image noise. The EED model acts like a Gaussian filter in flat regions, while at the locations of edges smoothing across the edges is forbidden but smoothing along the edge directions is allowed.


3.4.1.3 Image segmentation

We use a special variant of the active contour model, active contour weighted by prior extremes (ACPE), proposed in an accompanying paper [48], to binarize the denoised images. This model minimizes the following energy function,

\[ \begin{aligned} E(c_1, c_2, \phi) ={}& \mu\int_{\Omega} \left|\nabla H(\phi(\mathbf{x}))\right| d\mathbf{x} + \int_{\Omega} \left|I(\mathbf{x}) - c_2\right|^2 \left(1 - H(\phi(\mathbf{x}))\right) d\mathbf{x} \\ &+ w\int_{\Omega} \left|I(\mathbf{x}) - c_1\right|^2 H(\phi(\mathbf{x}))\, d\mathbf{x} + (1 - w)\int_{\Omega} \left|I(\mathbf{x}) - c_{ext}\right|^2 H(\phi(\mathbf{x}))\, d\mathbf{x}. \end{aligned} \tag{3.1} \]

The ACPE model is a novel extension of the classical active contour model, the CV model [33], obtained by adding the last term of equation (3.1). Let $\Omega$ be the image domain and $I(\mathbf{x})$ the image intensity at location $\mathbf{x} = (x, y, z) \in \Omega$. A segmentation of the image $I$ is achieved by finding a contour $C$ which partitions the image domain $\Omega$ into foreground and background, and a piecewise constant function $u$ which takes value $c_1$ inside the foreground and $c_2$ inside the background. $H$ is the Heaviside function and $\phi$ is the level set function whose zero level set is the contour $C$, partitioning the image domain $\Omega$ into the foreground and background regions $\Omega_1 = \{\mathbf{x} \mid \phi(\mathbf{x}) > 0\}$ and $\Omega_2 = \{\mathbf{x} \mid \phi(\mathbf{x}) < 0\}$. The ACPE model makes use of the extremes of the image to force the foreground towards the desired regions. In [48], it was used to segment both the dark objects and the bright objects in THG images of normal human brain tissue. $c_{ext}$ is the intensity extreme of the image and $w \in [0,1]$ is the weight between the mean $c_1$ and $c_{ext}$. $c_{ext}$ is assigned the lowest intensity of the image to segment dark holes, and the highest intensity to segment bright objects.

3.4.1.4 Post-processing

To complete the segmentation workflow, three problems are addressed in post-processing: removal of the large dark "shadows" in the top slices caused by the rough surfaces of the imaged tissue, cell clump splitting, and component validation. First, we use the strategy of [48] to remove the large dark shadows. Second, morphological erosion [58] and watersheds are combined to split the cell clumps. Morphological erosion is applied to generate markers for the seeded watershed transform (SWT) [30], and the background of the input is used as the mask. In more detail:

(1) Apply erosion with a structuring element of size s; the resulting connected components are used as seeds for the SWT. Tiny components caused by the irregular shapes of the objects are ignored.

(2) Apply the distance transform to compute the distance of the background voxels to the seeds.

(3) Combining the input mask and the markers generated in step (1) with the distance map of step (2), apply the SWT to split the cell clumps.

This splitting strategy can be applied several times with structuring elements of different sizes, because of the varying cell sizes and shapes. Finally, we ignore components smaller than 500 voxels and larger than 30000 voxels.
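The marker generation of step (1) can be sketched as follows; this is a minimal illustration (not the thesis code) of a 3D binary erosion with a cubic structuring element, whose surviving voxels serve as seeds for the SWT. The distance transform and the watershed itself (steps (2)-(3)) are left to a library implementation. Names are illustrative.

// Minimal sketch of step (1) of the clump splitting: binary 3D erosion of the
// segmentation mask with an s x s x s cube (s odd); the surviving voxels act as
// seed markers for the seeded watershed transform.
#include <cstddef>
#include <vector>

struct Volume {
    std::size_t nx, ny, nz;
    std::vector<unsigned char> v;   // 0 = background, 1 = foreground
    unsigned char at(std::size_t x, std::size_t y, std::size_t z) const {
        return v[(z * ny + y) * nx + x];
    }
};

// A voxel stays foreground only if the whole cubic neighbourhood fits inside
// the mask; voxels near the image border are eroded away.
Volume erodeCube(const Volume& in, int s) {
    const int r = s / 2;
    Volume out{in.nx, in.ny, in.nz,
               std::vector<unsigned char>(in.v.size(), 0)};
    for (std::size_t z = 0; z < in.nz; ++z)
        for (std::size_t y = 0; y < in.ny; ++y)
            for (std::size_t x = 0; x < in.nx; ++x) {
                if (!in.at(x, y, z)) continue;
                bool keep = true;
                for (int dz = -r; dz <= r && keep; ++dz)
                    for (int dy = -r; dy <= r && keep; ++dy)
                        for (int dx = -r; dx <= r && keep; ++dx) {
                            const long xx = long(x) + dx, yy = long(y) + dy,
                                       zz = long(z) + dz;
                            if (xx < 0 || yy < 0 || zz < 0 ||
                                xx >= long(in.nx) || yy >= long(in.ny) ||
                                zz >= long(in.nz) || !in.at(xx, yy, zz))
                                keep = false;
                        }
                out.v[(z * in.ny + y) * in.nx + x] = keep ? 1 : 0;
            }
    return out;
}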

3.4.2 Fluorescence image segmentation

DNA labeling was applied to stain cell nuclei before THG imaging. Along with each THG image, a two-photon excited fluorescence image of the same tissue area with nuclei highlighted was generated simultaneously by the fluorescence channel of our imaging system. The proposed workflow is slightly simplified for nuclei identification in the fluorescence images. Intensity correction along the depth is not needed in preprocessing. The denoising step is the same as for the THG images. In the segmentation step, $c_{ext}$ is assigned the highest intensity of the input to segment the nuclei. In post-processing, the sub-step that removes the dark shadows in the top slices is not required. We ignore components smaller than 400 voxels.

3.4.3 Quantitative comparison

We perform the following three-step comparison to quantify the similarity between the segmentation results of the THG images and the corresponding fluorescence images:

(1) Overlap the segmentation results of each pair of THG and fluorescence images.

(2) For each THG detection, determine by counting overlapping components whether one (Fig. 3.3A) or more nuclei (Fig. 3.3D) can be found in the fluorescence image. Note that the correspondence between the detected cells and nuclei may not be one-to-one; the over-segmented case (Fig. 3.3C), i.e., more than one THG detection overlapping with one nucleus, also occurs.

(3) THG detections that have no overlapping voxels with any of the fluorescence detections are marked as non-counterpart detections (Fig. 3.3B).

Then the correspondence between a pair of THG and fluorescence images, defined as,

\[ \mathrm{correspondence} = 1 - \frac{\text{number of non-counterpart detections}}{\text{total number of THG detections}}, \tag{3.2} \]

is computed to address the question of whether the detected dark holes are really brain cells.
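A minimal sketch of this measure (not the thesis code) is given below; it assumes that both segmentations are available as voxel-aligned labelled component maps (0 = background, 1..N = object labels), which is what the connected-component post-processing produces. Names are illustrative.

// Minimal sketch of the correspondence measure of equation (3.2): count how
// many THG detections have at least one voxel overlapping a fluorescence
// detection, and relate that to the total number of THG detections.
#include <cstddef>
#include <unordered_set>
#include <vector>

double correspondence(const std::vector<int>& thgLabels,
                      const std::vector<int>& fluoLabels) {
    std::unordered_set<int> allThg, overlappingThg;
    for (std::size_t i = 0; i < thgLabels.size(); ++i) {
        if (thgLabels[i] == 0) continue;
        allThg.insert(thgLabels[i]);
        if (fluoLabels[i] != 0) overlappingThg.insert(thgLabels[i]);
    }
    if (allThg.empty()) return 0.0;
    const double nonCounterpart =
        double(allThg.size()) - double(overlappingThg.size());
    return 1.0 - nonCounterpart / double(allThg.size());
}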

Figure 3.3 Typical cartoons for correspondence. The cells are shown in green, and the nuclei in red. (A) 1-1 correspondence. (B) No correspondence. (C) N-1 correspondence, over-segmentation. (D) 1-n correspondence, under-segmentation.

3.5 Results and discussion

The workflow was implemented in Visual Studio C++ 2010, and the computational time for segmenting one $1000 \times 1000 \times 21$ ($1000 \times 1000 \times 41$ after preprocessing) THG image was around 75 minutes on a PC with a 3.40-GHz Intel(R) Core(TM) 64 processor and 8 GB of memory.

We applied the proposed workflow to 22 pairs of THG and fluorescence images from ex-vivo mouse brain tissue to identify the main features, using the optimal parameter settings obtained from 2 pairs of training images. The parameters of ACPE were fixed to $\mu = 0.001 \times 255 \times 255$ and $w = 0.5$. We applied the splitting strategy twice to split the cell clumps in the THG images, once with a structuring element of size $7 \times 7 \times 7$ and once with a structuring element of size $17 \times 17 \times 17$, while we applied the splitting strategy only once to split the fluorescence detections, with a structuring element of size $7 \times 7 \times 7$. The remaining 20 pairs were split equally into the "early-stage" and "late-stage" datasets according to the imaging time of the tissue. The manual validation was done on the early-stage dataset, while the quantitative comparison was done on the late-stage dataset, where the staining had become complete and maximal.

A software tool was developed to help human experts validate all the THG detections one by one. The segmented fluorescence images were overlaid on their counterparts and served as an important reference for validating true positive (T-P), false positive (F-P), under-segmented (U-S), and over-segmented (O-S) detections.

Figure 3.4 A comparison of THG and fluorescence images acquired simultaneously from the same mouse brain tissue. (A) The top slice of a THG image. (B) Enlarged version of (A) by 5 times. (C) The corresponding fluorescence slice of (A). (D) The green curve shows the intensity profile taken over the horizontal white line, indicating intensity inhomogeneity.

3.5.1 Segmentation challenges

Several factors inherent to THG images make them different from fluorescence images and pose a number of additional challenges for automatic segmentation. Fig. 3.4 illustrates these differences. First, there are three classes of interest in each THG image, namely dark brain cells, bright objects and background with an intermediate intensity, compared to only two classes in a typical fluorescence image (Fig. 3.4A-C). Second, to avoid possible photodamage of the brain tissue, relatively low laser powers are used in THG imaging, leading to noise. Third, the THG intensity decreases along the depth direction, causing a strong depth-dependent contrast [5]. Finally, ex-vivo tissues may have a rough surface (the left part of Fig. 3.4A), resulting in intensity inhomogeneity in the slices at each depth (Fig. 3.4D).

These challenges have motivated some key steps of the proposed workflow. The intensity inhomogeneity and depth-dependent contrast call for intensity correction along the depth and the adoption of the local histogram equalization POSHE to enhance the local contrast. The relatively high noise level, which is even amplified by POSHE, calls for a strong denoising technique, the EED model. To solve the 3-phase segmentation problem, the ACPE model is applied [48].

Figure 3.5 Segmentation results of the selected THG image. (A) The 6th slice of the THG image; the contrast in the green square is relatively low. (B) The result enhanced by POSHE; note the improved contrast in the green square. (C) The result of the EED model. (D) The binary result segmented by the ACPE model, with the foreground in white; the red square shows an example of a cell clump. (E) The final result of our proposed workflow, with detected cells in green; the cell clump in the red square has been correctly split. (F) 3D surface rendering of the segmentation result.

3.5.2 Segmentation results

Here we use one pair of THG and fluorescence images to illustrate the performance of the proposed workflow. Fig. 3.5A shows the 6th slice of the THG image, whose contrast in the corners is very low. After enhancement by the local histogram equalization POSHE, the contrast in the corners has improved significantly and the image appears homogeneous (Fig. 3.5B), but the noise has been amplified as well. The EED model is then applied to remove the noise while keeping the cell boundaries sharp, as shown in Fig. 3.5C. The segmentation result of the ACPE model is shown in Fig. 3.5D, where dark shadows (the rough surfaces of the imaged tissue) can be seen on the left and upper right. Post-processing, including dark shadow removal, cell clump splitting and component validation, is applied to give the final output, shown in Fig. 3.5E, where the detected dark cells (in green) are overlaid on the raw input THG image. We see that most of the large shadows caused by the rough surfaces of the imaged tissue have been removed, and that some cell clumps have been correctly split. The 3D surface rendering of the final segmentation result is given in Fig. 3.5F, visualized with ParaView [59]. The segmentation result of the corresponding fluorescence image is shown in Fig. 3.6.

Figure 3.6 Segmentation results of the corresponding fluorescence image of Fig. 3.5. (A) The 6th slice of the raw image. (B) The denoised result of the EED model. (C) 3D surface rendering of the final segmentation result, with nuclei in red.

Figure 3.7 The motivation of the quantitative comparison. (A-B) An enlarged view of a pair of images. The two spots at the green cross lines show a one-to-one correspondence between a dark cell and its corresponding nucleus. (C) A typical detection of a brain cell (3D surface rendering in green) with its corresponding nucleus in red. Note that we make the right part of the detected cell transparent to indicate that the detected nucleus is completely inside the cell. (D) 3D surface rendering of the selected pair of images. The detected cells and nuclei are shown in green and red, respectively.


3.5.3 Validation and quantitative comparison

To further illustrate our approach for the quantitative comparison between the main features of the two modalities, we show in Fig. 3.7A-B the correlation between the spots detected in the two separate channels. Ideally, one dark cell detected in a THG image corresponds to one nucleus detected in the corresponding fluorescence image. In Fig. 3.7C, we show an ideal example of correspondence between a cell and a nucleus, while in Fig. 3.7D we overlap the 3D surface renderings of the selected pair of THG and fluorescence images (Fig. 3.5F and 3.6C) to show the correspondence between the two modalities and enable the verification of the interpretation of dark holes as brain cells.

Before quantitatively comparing the main features of the two modalities, the segmentation accuracy of each modality is determined. The manual validations of the main features observed by the two modalities were done on the early-stage dataset. The details of the accuracy analysis of THG images and the corresponding fluorescence images are shown in Table 3.1 and Table 3.2, respectively. The numbers of detected nuclei in the first two fluorescence images are smaller than those of detected dark cells in the first two THG images, while in the remaining pairs of images, the number of detected nuclei exceeds that of detected dark cells (Fig. 3.8). This indicates that more nuclei were highlighted after some delay during which the staining became maximal. In total, the average segmentation precision (T-P/Total) of the dark cells in THG images was 91.3%. An average 2.9% of the detections were incorrectly identified as brain cells, resulting in 2.9% false positive (F-P, Fig. 3.9A) detections. An average 5.6% of the detected dark cells were under-segmented (U-S, Fig. 3.9B). Only an average 0.2% of the detected dark cells were over-segmented (O-S) due to the markers generated by the morphological erosion. For the fluorescence images, the average segmentation precision was 95.8%. An average 1.3% of the detections were incorrectly or incompletely identified as nuclei. An average 1.7% of the detected nuclei were under-segmented, and an average 1.2% of the detected nuclei were over-segmented.

Table 3.1 Accuracy analysis of THG images

Image No. Total nr F-P U-S O-S T-P

1 653 25 (3.8%) 24 (3.7%) 0 (0.0%) 604 (92.5%)

2 571 11 (1.9%) 27 (4.7%) 0 (0.0%) 533 (93.3%)

3 644 9 (1.4%) 34 (5.3%) 2 (0.3%) 599 (93.0%)

4 686 17 (2.5%) 41 (6.0%) 0 (0.0%) 628 (91.5%)

5 528 8 (1.5%) 49 (9.3%) 0 (0.0%) 471 (89.2%)

6 360 6 (1.7%) 21 (5.8%) 0 (0.0%) 333 (92.5%)

7 463 15 (3.2%) 23 (5.0%) 0 (0.0%) 425 (91.8%)

8 559 21 (3.8%) 36 (6.4%) 0 (0.0%) 502 (89.8%)

9 512 11 (2.1%) 21 (4.1%) 6 (1.2%) 474 (92.6%)

10 415 29 (7%) 22 (5.3%) 2 (0.5%) 362 (87.2%)


Table 3.2 Accuracy analysis of fluorescence images

Image No. Total nr F-P U-S O-S T-P

1 508 9 (1.8%) 2 (0.4%) 7 (1.4%) 490 (96.4%)

2 427 8 (1.9%) 3 (0.7%) 1 (0.2%) 415 (97.2%)

3 659 8 (1.2%) 10 (1.5%) 5 (0.8%) 636 (96.5%)

4 835 8 (1.0%) 12 (1.4%) 5 (0.6%) 810 (97.0%)

5 787 9 (1.1%) 32 (4.1%) 11 (1.4%) 735 (93.4%)

6 739 9 (1.2%) 14 (1.9%) 9 (1.2%) 707 (95.7%)

7 888 5 (0.6%) 9 (1.0%) 7 (0.8%) 867 (97.6%)

8 925 11 (1.2%) 17 (1.8%) 10 (1.0%) 887 (96.0%)

9 803 8 (1.0%) 17 (2.1%) 14 (1.7%) 764 (95.2%)

10 774 18 (2.3%) 15 (2.0%) 26 (3.3%) 715 (92.4%)

Figure 3.8 Y-axis: The number of THG detections divided by the number of fluorescence detections, for each image pair. X-axis: Image index. The curve indicates the rate of staining as a function of time.


Quantitative comparison was done on the late-stage dataset, where the staining was maximal. In total, 6447 dark brain cells and 12616 nuclei were detected in THG and fluorescence images, respectively. The typical size of a THG detection with a corresponding detectable nucleus was around 6000 voxels. Therefore, to estimate the correspondence between the two modalities, we excluded all the small THG detections (<2000 voxels, Fig. 3.9C), because if the corresponding nucleus would scale proportionally, its size would be around 400 voxels and would be too small to detect in the fluorescence images. Then the correspondence was estimated on 4097 THG detections, and 3643 (88.9%) of them found their corresponding nuclei in the fluorescence images. Considering that in the “late-stage” dataset the staining affects the THG images in such a way that overlap is reduced (Fig. 3.9D-E), the actual correspondence between the two modalities is likely higher than the obtained 88.9%, providing further evidence that most of the dark holes represent brain cells.

We also investigated, in Fig. 3.10, the spatial distribution of the THG detections (in the late-stage dataset) without counterparts detected in the fluorescence images. The centers of all such detections were computed and their distribution was studied as a function of depth (Fig. 3.10A) and in plane (Fig. 3.10B). Fig. 3.10A shows that 73% of these detections occur in the deeper 20 slices, whereas 27% occur in the superficial slices. In Fig. 3.10B, we show the x-y distribution of these THG detections for the image shown in Fig. 3.5. The clusters in the green square were large fragments caused by the rough surfaces of the imaged tissue. So, the discrepancy in the number of THG and fluorescence detections appears to be caused by technical issues related to the degradation of image quality at larger depths and near the edges of the images. If we consider only the components lying in the superficial 20 slices, the correspondence between the two imaging modalities was 94.5%. Shape descriptors, such as sphericity, could be invoked to reduce the detections at rough tissue surfaces of the imaged samples [31].

Figure 3.9 Typical examples in the validation. The THG detections are shown in green and the nuclei in red. (A) Typical F-P cell detections, with large size; an enlarged version of the white square in Fig. 3.7D. (B) A typical U-S detection. (C) A typical small T-P detection, in the yellow circle, without overlap with a fluorescence counterpart. (D-E) Because of the staining, the interior of some cells becomes "bright", as shown in (D), making the THG detections incomplete. The resulting torus-shaped THG detections may not have immediate overlap with a fluorescence nucleus, as shown in (E).


Figure 3.10 The spatial distribution of the THG detections without detected nuclei counterparts. (A) The distribution over the top (<20) and deep (>=20) slices. (B) The x-y distribution of the THG detections for the image shown in Fig. 3.5. In the green square, we see clusters closely related to the rough surfaces of the imaged tissue.

Conversely, we investigated, for the superficial 20 slices of the fluorescence images, the size difference between nuclei with and without counterparts detected in the THG images. For each fluorescence image, we grouped the detected nuclei into two sets: S1 denotes the nuclei having counterparts detected in the THG images and S2 denotes the remaining ones. On average, for each fluorescence image of the late-stage dataset, the mean size of S1 was 460 voxels (29%) larger than that of S2. Application of the two-sided Student's t-test to the two sets yielded a p-value of 1.3e-05, confirming that these size differences were significant. This suggests that the dark holes in the THG images corresponding to S1 are neurons rather than glial cells, because neuronal nuclei are generally larger than those of glial cells [60].
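The size comparison can be illustrated with a short script; this is a hedged sketch rather than the actual analysis code, assuming the per-nucleus voxel counts of the two groups are given as arrays, and using scipy.stats.ttest_ind for the two-sided Student's t-test.

```python
import numpy as np
from scipy import stats

def compare_nucleus_sizes(sizes_s1, sizes_s2):
    """Relative mean size difference and two-sided t-test p-value.
    sizes_s1: nuclei with a THG counterpart; sizes_s2: the remaining nuclei."""
    s1 = np.asarray(sizes_s1, dtype=float)
    s2 = np.asarray(sizes_s2, dtype=float)
    rel_diff = (s1.mean() - s2.mean()) / s2.mean()
    t_stat, p_value = stats.ttest_ind(s1, s2)   # two-sided by default
    return rel_diff, p_value

# toy example with made-up sizes (voxels)
rng = np.random.default_rng(0)
rel, p = compare_nucleus_sizes(rng.normal(2050, 300, 200),
                               rng.normal(1590, 300, 150))
```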

As shown in Fig. 3.11, most nuclei highlighted in the fluorescence images coincide with dark holes, while the remaining nuclei partly coincide with bright objects in the THG images, suggesting that these bright objects also represent brain cells, but of a different origin, possibly glial cells or apoptotic neurons typical for the young mouse brain [61]. To provide further quantitative evidence for this suggestion, we applied the same segmentation procedure to identify the bright objects. The total number of dark holes and bright objects detected in the THG images was around 75% of the number of nuclei detected in the fluorescence images. Around 80% of the large bright objects (>2000 voxels) overlapped with detected nuclei. A comparison of the sizes of dark holes and bright objects with corresponding nuclei detected in the fluorescence images showed that the average size of these bright objects was about half that of the dark objects.

Finally, we investigated to which extent the depth-dependent signals of THG and fluorescence affect the quantitative comparison. We inspected the distribution of THG detections with nuclei highlighted in the fluorescence images as a function of depth, and we did not observe a decay of the correspondence between the THG and fluorescence signals. The reason is that intensity correction and local histogram equalization were applied during preprocessing to enhance the contrast. One example, obtained from an image pair of the late-stage dataset, is shown in Fig. 3.12.


Figure 3.11 An example of bright objects. The boundaries of detected bright objects (THG) and nuclei (fluorescence) are shown in green and red, respectively.

Figure 3.12 Each dot denotes the number of dark holes whose nuclei are highlighted in the fluorescence image, as a function of depth.

3.6 Discussion
New imaging techniques for visualizing tumor margins during surgery are needed to improve surgical outcomes. However, every such novel imaging technique, applied to information-rich brain samples, requires a proper interpretation of the observed objects.


Towards an objective interpretation of THG brain images, in this study we have established a quantitative link between THG images of ex-vivo mouse brain tissue and two-photon fluorescence images with all nuclei highlighted by HOE. Their quantitative comparison provides an independent validation of the interpretation of the dark holes revealed by THG as brain cells. Conversely, if the interpretation of the main features is not quantitatively validated, any clinical application built on such an interpretation becomes less reliable. In this sense, the quantitative comparison will facilitate the interpretation of pathological THG brain images and unlock potential clinical applications of unstained microscopic images for tumor surgery. Such a quantitative comparison can also help to establish the interpretation of other novel imaging modalities such as SRS microscopy [20, 21], for which such a comparison is currently lacking.

The comparison also shows that in the superficial slices of THG images of young mouse brain tissue, the cellular features resolved by the proposed workflow are highly reliable. This property is very significant for the potential clinical application to brain tumor surgery, because in current clinical practice the commonly used H&E-stained images are 2D, and therefore the top slices of THG images already provide reliable and sufficient cellular information. The depth limitation of THG images found in this study is caused by the circumstance that the imaged young mouse brain tissue is relatively opaque compared to human brain tissue. Lipid bodies have been demonstrated to be a major source of contrast in THG microscopy of brain tissues [3, 5], while the lipid-rich structures in the young mouse brain, such as axons and dendrites, are not yet fully developed. When applying THG to human brain tissue and older mouse brain tissue, the depth penetration can reach down to 150 μm [6] and 300 μm [5], respectively.

The described quantitative comparison also suggests that the dark holes represent neurons, via the converse analysis of the size differences between the nuclei with and without counterparts detected in the THG images. We also find that the average size of the dark holes with nuclei detected in the fluorescence images was 2 times larger than that of the bright objects. These bright objects may be either glial cells or apoptotic neurons typical for the young mouse brain. Because the used HOE staining is expected to also stain glial cells, assignment of the bright objects to glial cells seems most reasonable at this point. However, studies with more specific staining [62] for different types of glial cells or apoptotic neurons, combined with the quantitative comparison methods presented here, are necessary to shed light on this. There are also a few tiny bright objects without nuclei highlighted in the fluorescence images. So far, we regard them as artifacts created by the HOE staining, because their shape differs from that of the other bright objects.

We expect that the approach we have established to quantitatively link THG images of mouse brain tissue to HOE-stained fluorescence images can be generalized to human brain tissue and to other types of staining. First, the proposed segmentation workflow will be applicable to THG images of human brain tissue, due to the similarity between THG images of mouse brain tissue and human brain (tumor) tissue [5, 6], in the sense that all relevant structures appear as dark or bright. Note that the ACPE model is the core of the proposed segmentation workflow and that it has already been applied to THG images of structurally normal human brain tissue in [48]. Moreover, with the described imaging setup, THG brain images and two-photon fluorescence images of other fluorescent dyes can also be acquired simultaneously; for example, sulforhodamine-101 (SR-101) was employed to label astrocytes in an acute slice of mouse prefrontal cortex, where THG and SR-101 fluorescence images were acquired simultaneously [5]. Finally, two-photon fluorescence images using other fluorescent dyes will be similar to the images shown here, and we expect that the proposed segmentation workflow is also applicable to them.

We have resolved the quantitative comparison of the different modalities by the proposed integrated workflow. Although the segmentation of THG brain images has been addressed in the accompanying paper [48] and two-photon fluorescence images have been widely studied, this is the first time that both types of images have been segmented by the same workflow, with high precision achieved. Under-segmentation, as in Fig. 3.9B, is the main error of the proposed segmentation workflow, and more advanced cell clump splitting methods in post-processing may be helpful. A more practical solution is to combine THG microscopy with other label-free techniques that offer additional nuclei or cell boundary information. For clinical application, however, local cell density and the total number of voxels of detected cells can be used as quantitative features to distinguish tumor images from healthy cases. With such image quantification of the features, the under-segmentation problem becomes less relevant. Another important issue hampering the application of THG in a clinical context is that the current PDE-based workflow is time-consuming, see also the PDE approach used in [57] (~40 min for a 252 × 274 × 31 volume). Hardware implementation of the workflow could decrease the computation time [57]. The anisotropic diffusion was the most time-consuming step, taking up to ~80% of the total computation time. Our future work includes reducing the computational effort of the anisotropic diffusion, for instance by only performing the tensor decomposition in non-flat regions instead of over the whole domain; a sketch of this masking idea is given below.
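The masking idea mentioned above could, for instance, be realized by computing the smoothed gradient magnitude once and restricting the eigen-decomposition of the diffusion tensor to voxels above a small threshold. The sketch below only illustrates the mask itself; it is an assumption about how such a speed-up could look, not the workflow's actual implementation, and the threshold value is arbitrary.

```python
import numpy as np
from scipy import ndimage

def nonflat_mask(volume, sigma=1.0, rel_threshold=0.05):
    """True where the smoothed gradient magnitude is non-negligible.
    Tensor decomposition for the anisotropic diffusion would only be
    performed there; flat regions could keep an isotropic tensor."""
    smoothed = ndimage.gaussian_filter(volume.astype(float), sigma)
    gz, gy, gx = np.gradient(smoothed)
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
    return grad_mag > rel_threshold * grad_mag.max()

# toy volume standing in for a THG stack
mask = nonflat_mask(np.random.rand(16, 64, 64))
fraction_to_process = mask.mean()   # share of voxels needing decomposition
```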

3.7 Conclusion
In this paper, we quantitatively compared THG images of ex-vivo mouse brain tissue with fluorescence images acquired simultaneously from the same tissue area. The quantitative comparison not only confirmed the correctness of interpreting the main features as brain cells, but also suggested that the dark holes should be interpreted as neurons while the bright objects are either glial cells or apoptotic neurons. Our future work includes the application of specific fluorescence staining to highlight solely neurons or glial cells, to confirm the suggestions obtained in this paper. It also includes applying the proposed segmentation workflow to extract cell morphologies from THG images of healthy and diseased human brain tissue, and applying machine learning algorithms to classify the detected cells into different types and states. These techniques will aid in enabling studies of living brain tissue in its natural environment, and bring the clinical use of THG images during tumor surgery within reach.

References
[1] Y. Barad, H. Eisenberg, M. Horowitz, and Y. Silberberg, "Nonlinear scanning laser microscopy by third harmonic generation," Applied Physics Letters, vol. 70, pp. 922-924, Feb 24 1997.
[2] J. A. Squier, M. Muller, G. J. Brakenhoff, and K. R. Wilson, "Third harmonic generation microscopy," Optics Express, vol. 3, pp. 315-324, Oct 26 1998.
[3] D. Debarre, W. Supatto, A. M. Pena, A. Fabre, T. Tordjmann, L. Combettes, M. C. Schanne-Klein, and E. Beaurepaire, "Imaging lipid bodies in cells and tissues using third-harmonic generation microscopy," Nature Methods, vol. 3, pp. 47-53, Jan 2006.
[4] S. Y. Chen, C. S. Hsieh, S. W. Chu, C. Y. Lin, C. Y. Ko, Y. C. Chen, H. J. Tsai, C. H. Hu, and C. K. Sun, "Noninvasive harmonics optical microscopy for long-term observation of embryonic nervous system development in vivo," Journal of Biomedical Optics, vol. 11, Sep-Oct 2006.


[5] S. Witte, A. Negrean, J. C. Lodder, C. P. De Kock, G. T. Silva, H. D. Mansvelder, and M. L. Groot, "Label-free live brain imaging and targeted patching with third-harmonic generation microscopy," Proceedings of the National Academy of Sciences, vol. 108, pp. 5970-5975, 2011.

[6] N. V. Kuzmin, P. Wesseling, P. C. Hamer, D. P. Noske, G. D. Galgano, H. D. Mansvelder, J. C. Baayen, and M. L. Groot, "Third harmonic generation imaging for fast, label-free pathology of human brain tumors," Biomed Opt Express, vol. 7, pp. 1889-904, May 01 2016.

[7] B. Weigelin, G.-J. Bakker, and P. Friedl, "Intravital third harmonic generation microscopy of collective melanoma cell invasion: principles of interface guidance and microvesicle dynamics," IntraVital, vol. 1, pp. 32-43, Jul 2012.

[8] J. Adur, V. B. Pelegati, A. A. de Thomaz, M. O. Baratti, D. B. Almeida, L. A. Andrade, F. Bottcher-Luiz, H. F. Carvalho, and C. L. Cesar, "Optical biomarkers of serous and mucinous human ovarian tumor assessed with nonlinear optics microscopies," PLoS One, vol. 7, p. e47007, 2012.

[9] P. C. Wu, T. Y. Hsieh, Z. U. Tsai, and T. M. Liu, "In vivo Quantification of the Structural Changes of Collagens in a Melanoma Microenvironment with Second and Third Harmonic Generation Microscopy," Scientific Reports, vol. 5, Mar 9 2015.

[10] N. Olivier, M. A. Luengo-Oroz, L. Duloquin, E. Faure, T. Savy, I. Veilleux, X. Solinas, D. Debarre, P. Bourgine, A. Santos, N. Peyrieras, and E. Beaurepaire, "Cell Lineage Reconstruction of Early Zebrafish Embryos Using Label-Free Nonlinear Microscopy," Science, vol. 329, pp. 967-971, Aug 20 2010.

[11] I. Gryczynski, H. Malak, and J. R. Lakowicz, "Multiphoton excitation of the DNA stains DAPI and Hoechst," Bioimaging, vol. 4, pp. 138-148, Sep 1996.

[12] J. C. Jung and M. J. Schnitzer, "Multiphoton endoscopy," Optics Letters, vol. 28, pp. 902-904, Jun 1 2003.

[13] N. G. Horton, K. Wang, D. Kobat, C. G. Clark, F. W. Wise, C. B. Schaffer, and C. Xu, "In vivo three-photon microscopy of subcortical structures within an intact mouse brain," Nature Photonics, vol. 7, pp. 205-209, Mar 2013.

[14] S. W. Chu, S. P. Tai, C. L. Ho, C. H. Lin, and C. K. Sun, "High-resolution simultaneous three-photon fluorescence and third-harmonic-generation microscopy," Microsc Res Tech, vol. 66, pp. 193-7, Mar 01 2005.

[15] A. C. Kwan, K. Duff, G. K. Gouras, and W. W. Webb, "Optical visualization of Alzheimer's pathology via multiphoton-excited intrinsic fluorescence and second harmonic generation," Optics Express, vol. 17, pp. 3679-3689, Mar 2 2009.

[16] I. Paylova, K. R. Hume, S. A. Yazinski, J. Flanders, T. L. Southard, R. S. Weiss, and W. W. Webb, "Multiphoton microscopy and microspectroscopy for diagnostics of inflammatory and neoplastic lung," Journal of Biomedical Optics, vol. 17, Mar 2012.

[17] M. Jain, N. Narula, A. Aggarwal, B. Stiles, M. M. Shevchuk, J. Sterling, B. Salamoon, V. Chandel, W. W. Webb, N. K. Altorki, and S. Mukherjee, "Multiphoton Microscopy A Potential "Optical Biopsy'' Tool for Real-Time Evaluation of Lung Tumors Without the Need for Exogenous Contrast Agents," Archives of Pathology & Laboratory Medicine, vol. 138, pp. 1037-1047, Aug 2014.

[18] D. M. Huland, M. Jain, D. G. Ouzounov, B. D. Robinson, D. S. Harya, M. M. Shevchuk, P. Singhal, C. Xu, and A. K. Tewari, "Multiphoton gradient index endoscopy for evaluation of diseased human prostatic tissue ex vivo," Journal of Biomedical Optics, vol. 19, Nov 2014.

[19] C. L. Evans, E. O. Potma, M. Puoris'haag, D. Cote, C. P. Lin, and X. S. Xie, "Chemical imaging of tissue in vivo with video-rate coherent anti-Stokes Raman scattering microscopy," Proceedings of the National Academy of Sciences of the United States of America, vol. 102, pp. 16807-16812, Nov 15 2005.

[20] M. B. Ji, D. A. Orringer, C. W. Freudiger, S. Ramkissoon, X. H. Liu, D. Lau, A. J. Golby, I. Norton, M. Hayashi, N. Y. R. Agar, G. S. Young, C. Spino, S. Santagata, S. Camelo-Piragua, K. L. Ligon, O. Sagher, and X. S. Xie, "Rapid, Label-Free Detection of Brain Tumors with Stimulated Raman Scattering Microscopy," Science Translational Medicine, vol. 5, Sep 4 2013.

[21] M. Ji, S. Lewis, S. Camelo-Piragua, S. H. Ramkissoon, M. Snuderl, S. Venneti, A. Fisher-Hubbard, M. Garrard, D. Fu, A. C. Wang, J. A. Heth, C. O. Maher, N. Sanai, T. D. Johnson, C. W. Freudiger, O. Sagher, X. S. Xie, and D. A. Orringer, "Detection of human brain tumor infiltration with quantitative stimulated Raman scattering microscopy," Sci Transl Med, vol. 7, p. 309ra163, Oct 14 2015.

[22] R. M. Williams, A. Flesken-Nikitin, L. H. Ellenson, D. C. Connolly, T. C. Hamilton, A. Y. Nikitin, and W. R. Zipfel, "Strategies for High-Resolution Imaging of Epithelial Ovarian Cancer by Laparoscopic Nonlinear Microscopy," Translational Oncology, vol. 3, pp. 181-194, Jun 2010.

[23] A. D'Arco, N. Brancati, M. A. Ferrara, M. Indolfi, M. Frucci, and L. Sirleto, "Subcellular chemical and morphological analysis by stimulated Raman scattering microscopy and image analysis techniques," Biomedical Optics Express, vol. 7, pp. 1853-1864, May 1 2016.

[24] C. Kut, K. L. Chaichana, J. F. Xi, S. M. Raza, X. B. Ye, E. R. McVeigh, F. J. Rodriguez, A. Quinones-Hinojosa, and X. D. Li, "Detection of human brain cancer infiltration ex vivo and in vivo using quantitative optical coherence tomography," Science Translational Medicine, vol. 7, Jun 17 2015.

[25] H. J. Bohringer, D. Boller, J. Leppert, U. Knopp, E. Lankenau, E. Reusche, G. Huttmann, and A. Giese, "Time-domain and spectral-domain optical coherence tomography in the analysis of brain tumor tissue," Lasers Surg Med, vol. 38, pp. 588-97, Jul 2006.

[26] E. Bengtsson, C. Wahlby, and J. Lindblad, "Robust cell image segmentation methods," Pattern Recognition and Image Analysis C/c of Raspoznavaniye Obrazov i Analiz Izobrazhenii., vol. 14, pp. 157-167, May 2004.

[27] E. Meijering, "Cell Segmentation: 50 Years Down the Road," Ieee Signal Processing Magazine, vol. 29, pp. 140-145, Sep 2012.

[28] C. Wahlby, J. Lindblad, M. Vondrus, E. Bengtsson, and L. Bjorkesten, "Algorithms for cytoplasm segmentation of fluorescence labelled cells," Analytical Cellular Pathology, vol. 24, pp. 101-111, 2002.

[29] G. Lin, U. Adiga, K. Olson, J. F. Guzowski, C. A. Barnes, and B. Roysam, "A hybrid 3D watershed algorithm incorporating gradient cues and object models for automatic segmentation of nuclei in confocal image stacks," Cytometry Part A, vol. 56a, pp. 23-36, Nov 2003.

[30] C. Wahlby, I. M. Sintorn, F. Erlandsson, G. Borgefors, and E. Bengtsson, "Combining intensity, edge and shape information for 2D and 3D segmentation of cell nuclei in tissue sections," J Microsc, vol. 215, pp. 67-76, Jul 2004.

[31] X. W. Chen, X. B. Zhou, and S. T. C. Wong, "Automated segmentation, classification, and tracking of cancer cell nuclei in time-lapse microscopy," Ieee Transactions on Biomedical Engineering, vol. 53, pp. 762-766, Apr 2006.

[32] M. Wang, X. B. Zhou, F. H. Li, J. Huckins, R. W. King, and S. T. C. Wong, "Novel cell segmentation and online SVM for cell cycle phase identification in automated microscopy," Bioinformatics, vol. 24, pp. 94-101, Jan 1 2008.

[33] T. F. Chan and L. A. Vese, "Active contours without edges," Ieee Transactions on Image Processing, vol. 10, pp. 266-277, Feb 2001.

[34] L. A. Vese and T. F. Chan, "A multiphase level set framework for image segmentation using the Mumford and Shah model," International Journal of Computer Vision, vol. 50, pp. 271-293, Dec 2002.

[35] A. Dufour, V. Shinin, S. Tajbakhsh, N. Guillen-Aghion, J. C. Olivo-Marin, and C. Zimmer, "Segmenting and tracking fluorescent cells in dynamic 3-D microscopy with coupled active surfaces," Ieee Transactions on Image Processing, vol. 14, pp. 1396-1410, Sep 2005.


[36] O. Dzyubachyk, W. A. van Cappellen, J. Essers, W. J. Niessen, and E. Meijering, "Advanced Level-Set-Based Cell Tracking in Time-Lapse Fluorescence Microscopy (vol 29, pg 852, 2010)," Ieee Transactions on Medical Imaging, vol. 29, pp. 1331-1331, Jun 2010.

[37] D. R. Padfield, J. Rittscher, T. Sebastian, N. Thomas, and B. Roysam, "Spatio-temporal cell cycle analysis using 3D level set segmentation of unstained nuclei in line scan confocal fluorescence images," 3rd IEEE ISBI conference, pp. 1036-1039, Apr 2006.

[38] C. Zanella, M. Campana, B. Rizzi, C. Melani, G. Sanguinetti, P. Bourgine, K. Mikula, N. Peyrieras, and A. Sarti, "Cells Segmentation From 3-D Confocal Images of Early Zebrafish Embryogenesis," Ieee Transactions on Image Processing, vol. 19, pp. 770-781, Mar 2010.

[39] I. Ersoy, F. Bunyak, J. M. Higgins, and K. Palaniappan, "Coupled edge profile active contours for red blood cell flow analysis," 9th IEEE ISBI conference, pp. 748-751, May 2012.

[40] M. K. Bashar, K. Komatsu, T. Fujimori, and T. J. Kobayashi, "Automatic Extraction of Nuclei Centroids of Mouse Embryonic Cells from Fluorescence Microscopy Images," Plos One, vol. 7, May 8 2012.

[41] M. Maska, O. Danek, S. Garasa, A. Rouzaut, A. Munoz-Barrutia, and C. Ortiz-de-Solorzano, "Segmentation and Shape Tracking of Whole Fluorescent Cells Based on the Chan-Vese Model," Ieee Transactions on Medical Imaging, vol. 32, pp. 995-1006, Jun 2013.

[42] G. L. Xiong, X. B. Zhou, and L. Ji, "Automated segmentation of Drosophila RNAi fluorescence cellular images using deformable models," Ieee Transactions on Circuits and Systems I-Regular Papers, vol. 53, pp. 2415-2424, Nov 2006.

[43] R. S. Gonzalez and P. Wintz, "Digital image processing," 1977.
[44] A. Medyukhina, T. Meyer, M. Schmitt, B. F. M. Romeike, B. Dietzek, and J. Popp, "Towards automated segmentation of cells and cell nuclei in nonlinear optical microscopy," Journal of Biophotonics, vol. 5, pp. 878-888, Nov 2012.

[45] W. L. Wu, J. Y. Lin, S. Wang, Y. Li, M. Y. Liu, G. Q. Liu, J. Y. Cai, G. N. Chen, and R. Chen, "A novel multiphoton microscopy images segmentation method based on superpixel and watershed," Journal of Biophotonics, vol. 10, pp. 532-541, Apr 2017.

[46] G. G. Lee, H. H. Lin, M. R. Tsai, S. Y. Chou, W. J. Lee, Y. H. Liao, C. K. Sun, and C. F. Chen, "Automatic Cell Segmentation and Nuclear-to-Cytoplasmic Ratio Analysis for Third Harmonic Generated Microscopy Medical Images," Ieee Transactions on Biomedical Circuits and Systems, vol. 7, pp. 158-168, Apr 2013.

[47] M. A. Luengo-Oroz, J. L. Rubio-Guivernau, E. Faure, T. Savy, L. Duloquin, N. Olivier, D. Pastor, M. Ledesma-Carbayo, D. Debarre, P. Bourgine, E. Beaurepaire, N. Peyrieras, and A. Santos, "Methodology for Reconstructing Early Zebrafish Development From In Vivo Multiphoton Microscopy," Ieee Transactions on Image Processing, vol. 21, pp. 2335-2340, Apr 2012.

[48] Z. Zhang, N. V. Kuzmin, M. Louise Groot, and J. C. de Munck, "Extracting morphologies from third harmonic generation images of structurally normal human brain tissue," Bioinformatics, Jan 27 2017.

[49] T. Chang, M. S. Zimmerley, K. P. Quinn, I. Lamarre-Jouenne, D. L. Kaplan, E. Beaurepaire, and I. Georgakoudi, "Non-invasive monitoring of cell metabolism and lipid production in 3D engineered human adipose tissues using label-free multiphoton microscopy," Biomaterials, vol. 34, pp. 8607-8616, Nov 2013.

[50] G. Testa-Silva, M. B. Verhoog, N. A. Goriounova, A. Loebel, J. J. J. Hjorth, J. C. Baayen, C. P. J. De Kock, and H. D. Mansvelder, "Human synapses show a wide temporal window for spike-timing-dependent plasticity," Frontiers in synaptic neuroscience, vol. 2, 2010.

[51] I. Bureau, F. von Saint Paul, and K. Svoboda, "Interdigitated paralemniscal and lemniscal pathways in the mouse barrel cortex," Plos Biology, vol. 4, pp. 2361-2371, Dec 2006.


[52] F. Bestvater, E. Spiess, G. Stobrawa, M. Hacker, T. Feurer, T. Porwol, U. Berchner-Pfannschmidt, C. Wotzlaw, and H. Acker, "Two-photon fluorescence absorption and emission spectra of dyes relevant for cell imaging," J Microsc, vol. 208, pp. 108-15, Nov 2002.

[53] J. Y. Kim, L. S. Kim, and S. H. Hwang, "An advanced contrast enhancement using partially overlapped sub-block histogram equalization," Ieee Transactions on Circuits and Systems for Video Technology, vol. 11, pp. 475-484, Apr 2001.

[54] J. Weickert, "Coherence-enhancing diffusion filtering," International Journal of Computer Vision, vol. 31, pp. 111-127, Apr 1999.

[55] J. Weickert, Anisotropic diffusion in image processing, vol. 1: Teubner Stuttgart, 1998.
[56] T. J. Collins, "ImageJ for microscopy," Biotechniques, vol. 43, Jul 2007.
[57] S. Pop, A. C. Dufour, J. F. Le Garrec, C. V. Ragni, C. Cimper, S. M. Meilhac, and J. C. Olivo-Marin, "Extracting 3D cell parameters from dense tissue environments: application to the development of the mouse heart," Bioinformatics, vol. 29, pp. 772-779, Mar 15 2013.

[58] P. Soille, Morphological image analysis: principles and applications: Springer Science & Business Media, 2013.

[59] A. Henderson, J. Ahrens, and C. Law, "The ParaView Guide," Kitware Inc, Clifton Park, NY, 2004.
[60] D. Purves, G. J. Augustine, D. Fitzpatrick, L. C. Katz, A. S. LaMantia, J. O. McNamara, and S. M. Williams, Neuroscience, 2nd ed., Sunderland: Sinauer, 2001.
[61] P. S. Sastry and K. S. Rao, "Apoptosis and the nervous system," J Neurochem, vol. 74, pp. 1-20, Jan 2000.
[62] H. P. Du, L. W. Jiang, X. F. Wang, G. Q. Liu, S. Wang, L. Q. Zheng, L. H. Li, S. M. Zhuo, X. Q. Zhu, and J. X. Chen, "Label-free distinguishing between neurons and glial cells based on two-photon excited fluorescence signal of neuron perinuclear granules," Laser Physics Letters, vol. 13, Aug 2016.


Chapter 4

Active contour models for microscopic images with global and local intensity inhomogeneities

Submitted


4.1 Abstract
The presence of both global and local intensity inhomogeneities in microscopic images hampers existing active contour models (ACMs) in segmenting objects of interest. At the same time, microscopic images are generally two- or three-phase, making prior information on global intensity extremes (GIEs) important for segmentation. In this work, we demonstrate that GIEs can be easily incorporated into existing ACMs to segment microscopic images with intensity inhomogeneities. We first summarize a general form for the energy functions of ACMs. Then the foreground energy term is replaced by the weighted sum of this term and an additional penalty term carrying the GIE information. We show that this modification enables the new ACMs to adjust the intensity means of the resulting foreground and background, and to segment microscopic images for which previous ACMs failed. The implementation of the new ACMs only requires a small adaptation of the original Euler-Lagrange equations. The new ACMs become less sensitive to initialization and more robust with respect to re-initialization. Four new ACMs are tested on fluorescence and higher harmonic generation images, resulting in an improvement of segmentation accuracy of at least a factor of 20 compared to the original ACMs.

4.2 Introduction
To study animal and human tissues at the (sub-)cellular level, fluorescence imaging [1] or, more recently, label-free imaging such as higher harmonic generation is used [2-7]. Due to the complexity of the natural imaging environments, the acquired microscopic images are usually associated with both global and local (G&L) intensity inhomogeneities, making image segmentation challenging. The active contour model (ACM) [8-10] has been widely applied to microscopic images to extract cellular/nuclear features embedded in a homogeneous background [11, 12]. However, the varying foreground and background of these microscopic images hamper existing ACMs in extracting the objects of interest.

Basically, an ACM is formulated as an energy function that measures the error of approximating the original image by a simpler one, e.g., a piecewise constant (PC) image. Given an initial contour, the energy minimization iteratively drives the evolution of the contour towards the desired object boundaries and image partitioning. The energy function is usually minimized within the level set framework [13], so that the minimization problem can be solved with well-established mathematical theories. Level sets have remarkable advantages in handling complex topological changes during contour evolution, in providing closed and smooth contours, and in performing the numerical computations on a fixed Cartesian grid without having to parameterize the contours.

Existing ACMs can be broadly categorized into edge-based and region-based models. Edge-based models are applicable to images whose gradients are well defined, but they suffer from a serious boundary leakage problem at weak object boundaries and are in general quite sensitive to initialization [10, 14]. Region-based models use a certain region descriptor to define regions of interest, according to which the given contour can evolve to the object boundaries and an image can be partitioned into prescribed regions. The CV model [10] is a typical region-based model that assumes intensity homogeneity within each region of the image to be segmented, and therefore it cannot deal with intensity inhomogeneity. The CV model is a simplified version of the piecewise smooth (PS) model [15]. The PS model and its extensions [16, 17] do not assume region homogeneity, and they are able to handle some inhomogeneous images. However, their computational cost and sensitivity to initialization greatly limit their utility [14]. Recently, local intensity information has been demonstrated to be useful for designing ACMs that are highly accurate, computationally economic and robust to initialization in the presence of intensity inhomogeneity. The local binary fitting (LBF) model [18] outperforms the PS model because it only assumes intensity homogeneity in local regions defined by a truncated Gaussian kernel. It has also been shown that the LBF model attains better performance when it is combined with a global guidance, e.g., the CV model [19]. In this context, the local intensity clustering (LIC) model [14] considers both the local intensity variation and a global intensity guidance. It assumes that the target image is the multiplication of a bias field with a PC function, and that the bias field is locally constant. Benefiting from this assumption, the LIC model and its probabilistic extension [20] outperform most ACMs on images with intensity inhomogeneity. Nevertheless, it has been shown that the LIC model gives better segmentations when it is combined with the CV model [21]. Different from the LBF/LIC models, which minimize sums of local errors defined via a Gaussian kernel, the robust local similarity factor (RLSF) model [22] measures a local similarity defined via a distance function and pre-defined local windows. The RLSF model has been demonstrated to be robust to strong noise and intensity inhomogeneity.

Most of the aforementioned ACMs have been validated on MRI images, and we notice that the intensity inhomogeneity within these MRI images is large scale, e.g., see the bias fields estimated from the MRI images in [14]. However, we observe both global and local intensity inhomogeneity in some microscopic images, caused by uneven staining or uneven illumination. In particular, label-free second harmonic generation (SHG) and third harmonic generation (THG) images show great potential for the clinical application of brain tumor surgery [5], but they contain G&L intensity inhomogeneities caused by the rough surfaces of the imaged tissues. To extract objects of interest from these non-classical images, we have recently proposed a variant of the CV model that incorporates the global intensity extremes (GIEs) of the input image [6] in order to reduce the sensitivity to these intensity inhomogeneities. We added an extra foreground penalty term to guide the contours towards the darkest or brightest objects.

In this work, we further demonstrate that GIEs can be easily incorporated into other ACMs to segment microscopic images in the presence of G&L intensity inhomogeneities. To do so, we express the energy functions of ACMs as a sum of a length term, a foreground energy term and a background energy term, and we replace the foreground energy term by the weighted sum of this term and an extra penalty term carrying the GIE information. Our experiments show that these modifications enable the new ACMs to adjust the means of the foreground and background regions of the output, and thus to segment microscopic images for which this was previously impossible. Compared to the original ACMs, the new ACMs only need a small modification of the implementation, rendering them robust to initialization and re-initialization.

The rest of this paper is organized as follows: We review the level set formulation of several existing ACMs in Section 4.3. The proposed new ACMs are explained in detail in Section 4.4. Fluorescence, SHG, and THG images are tested to demonstrate the efficiency and robustness of the new ACMs in Section 4.5. Conclusions follow in Section 4.6.

4.3 Existing active contour models

4.3.1 Level set formulation of ACMs
Let $\Omega$ be the image domain and $I(\mathbf{x})$ the image intensity at location $\mathbf{x} \in \Omega$. Segmenting a two-phase image amounts to finding a contour $C$ that partitions the image domain into the foreground $\Omega_1$ and the background $\Omega_0$. Generally, the energy function of an ACM consists of two parts, a data fidelity term and a length regularization term. With the level set representation, the data fidelity term is usually split into a foreground and a background term. The energy function $E$ can then be expressed as follows,

$E(\phi) = \mu L(\phi) + F_0(\phi) + F_1(\phi).$ (4.1)

In this expression, $L$ is the length regularization term, $F_i$, $i = 0,1$, are the data fidelity terms of the background and foreground, respectively, and $\phi$ is the level set function. To minimize equation (4.1), one iteratively solves the time-dependent Euler-Lagrange equation,

$\frac{\partial \phi}{\partial t} = -\frac{\partial E}{\partial \phi} = \delta(\phi)\left[\mu\, \nabla \cdot \frac{\nabla \phi}{|\nabla \phi|} + f_0 - f_1\right],$ (4.2)

where the functions $f_i$, $i = 0,1$, satisfy

$F_1 = \int f_1\, H(\phi)\, d\mathbf{x}, \quad \text{and} \quad F_0 = \int f_0\, (1 - H(\phi))\, d\mathbf{x}.$ (4.3)

$H$ is the Heaviside function and $\delta$ is the Dirac function [10]. Note that in this work we omit the domain $\Omega$ in the subscript of the integral symbol if the integration is over the entire domain.
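As an illustration of equations (4.2)-(4.3), the sketch below performs one explicit gradient-descent step of a generic two-phase ACM on a 2D grid; the regularized Dirac function follows the standard form used with the CV model. It is a simplified sketch with central differences, not an exact reproduction of the schemes used later in this chapter.

```python
import numpy as np

def curvature(phi, tol=1e-8):
    """div(grad(phi) / |grad(phi)|) via central differences on a 2D grid."""
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx**2 + gy**2) + tol
    return np.gradient(gy / norm)[0] + np.gradient(gx / norm)[1]

def dirac(phi, eps=1.0):
    """Regularized Dirac delta (derivative of the smoothed Heaviside)."""
    return (eps / np.pi) / (eps**2 + phi**2)

def evolve_step(phi, f0, f1, mu=0.1, dt=0.1):
    """One explicit step of d(phi)/dt = delta(phi) * (mu*curvature + f0 - f1),
    cf. equation (4.2); f0 and f1 are the pointwise data terms."""
    return phi + dt * dirac(phi) * (mu * curvature(phi) + f0 - f1)
```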

4.3.2 CV model
The CV model [10] is one of the state-of-the-art ACMs. It segments the image $I$ by finding a PC function that takes the value $c_1$ inside the foreground and $c_0$ outside. It is obtained by substituting $f_i(\mathbf{x}) = |I(\mathbf{x}) - c_i|^2$, $i = 0,1$, into equation (4.1),

$E_{CV} = \mu L(\phi) + \sum_{i=0}^{1} \int_{\Omega_i} |I(\mathbf{x}) - c_i|^2\, d\mathbf{x}.$ (4.4)

Minimizing the energy function (4.4) with respect to $c_i$, $i = 0,1$, shows that they are the means of the foreground and background, respectively. If we ignore the length term, the CV model amounts to thresholding the image at $(c_1 + c_0)/2$, the midpoint of $c_1$ and $c_0$. It has been demonstrated that the CV model and its multi-phase extensions are powerful tools for cell/nuclei segmentation [11, 12], but they are not applicable to images with intensity inhomogeneity.
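For the CV model, the data terms only require the two region means, which leads to the thresholding interpretation mentioned above. A minimal sketch, assuming a 2D image array and a level set array phi of the same shape:

```python
import numpy as np

def cv_means(image, phi):
    """c1 = mean inside the contour (phi > 0), c0 = mean outside."""
    inside = phi > 0
    c1 = image[inside].mean() if inside.any() else 0.0
    c0 = image[~inside].mean() if (~inside).any() else 0.0
    return c1, c0

def cv_data_terms(image, c1, c0):
    """f1 and f0 of the CV model; without the length term the contour
    settles where f1 = f0, i.e. at the threshold (c1 + c0) / 2."""
    return (image - c1)**2, (image - c0)**2
```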

4.3.3 LIC model
To deal with intensity inhomogeneity, the LIC model [14] assumes that the image $I$ can be modeled as the multiplication of a bias field $b$ and a PC function $J$, $I(\mathbf{x}) = b(\mathbf{x})\,J(\mathbf{x})$. The bias field $b$ accounts for the intensity inhomogeneity, which varies slowly. With a Gaussian kernel $K$ of standard deviation $\sigma$, truncated to a local square window of width $\rho = 4\sigma + 1$, the energy of the LIC model is formulated as follows,

$E_{LIC} = \mu L(\phi) + \int \left[\sum_{i=0}^{1} \int_{\Omega_i} K(\mathbf{y} - \mathbf{x})\, |I(\mathbf{x}) - b(\mathbf{y})\,c_i|^2\, d\mathbf{x}\right] d\mathbf{y}.$ (4.5)

It measures the total of all local errors introduced by the multiplicative approximation. The corresponding functions $f_i$ of equation (4.3) are given by

$f_i(\mathbf{x}) = \int K(\mathbf{y} - \mathbf{x})\, |I(\mathbf{x}) - b(\mathbf{y})\,c_i|^2\, d\mathbf{y}.$ (4.6)

4.3.4 CVPE model
In our previous study [6], we proposed the CV model weighted by prior extremes (CVPE) to deal with global intensity inhomogeneity. Its energy is a weighted sum of $E_{CV}(c_1, c_0)$ and $E_{CV}(e, c_0)$,

$E_{CVPE}(c_1, c_0) = w\, E_{CV}(c_1, c_0) + (1 - w)\, E_{CV}(e, c_0) = \mu L(\phi) + \int_{\Omega_0} |I(\mathbf{x}) - c_0|^2\, d\mathbf{x} + w \int_{\Omega_1} |I(\mathbf{x}) - c_1|^2\, d\mathbf{x} + (1 - w) \int_{\Omega_1} |I(\mathbf{x}) - e|^2\, d\mathbf{x}.$ (4.7)

Here $e$ is a fixed global intensity extreme of the image $I$, and $w \in [0,1]$ is a fixed weight. $E_{CV}(e, c_0)$ is a special case of the CV model that roughly corresponds to thresholding the image at $(e + c_0)/2$. Depending on the value of $e$, it keeps only the darkest or brightest parts of the image $I$. Ignoring the length term, the CVPE model results in thresholding the image at $(w c_1 + (1 - w) e + c_0)/2$. The weight $w$ controls the trade-off between the darkest/brightest parts and the surrounding areas. Therefore, this model provides an approach to control the output foreground and background regions. It overcomes intensity inhomogeneity by sacrificing some foreground pixels, but it gives much better results in some cases where the current state-of-the-art fails.
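Following the thresholding interpretation given above, the effect of the weight w and of the prior extreme e can be illustrated without the length term. This is a sketch of the interpretation only (with an arbitrary initialization and iteration count), not the level set implementation of the CVPE model:

```python
import numpy as np

def cvpe_threshold(image, w, e, bright=True, n_iter=20):
    """Iterate the region means and cut at (w*c1 + (1-w)*e + c0) / 2.
    e is the chosen global intensity extreme (maximum for bright objects,
    minimum for dark ones); w trades foreground completeness against
    robustness to global intensity inhomogeneity."""
    thr = image.mean()
    fg = image > thr if bright else image < thr      # crude initialization
    for _ in range(n_iter):
        c1 = image[fg].mean() if fg.any() else e
        c0 = image[~fg].mean() if (~fg).any() else thr
        thr = (w * c1 + (1 - w) * e + c0) / 2.0
        fg = image > thr if bright else image < thr
    return fg, thr
```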

4.4 Three-phase active contours weighted by prior extremes
Here we investigate whether the GIEs can be incorporated into more advanced ACMs than the CV model. If so, this would provide an economic approach to extend the applicability of these ACMs to cases in which they fail to give the correct output. Such a minor modification is based on the observation that microscopic images are usually two- or three-phase and the target objects are bright and/or dark, making the prior information of the GIEs important.

4.4.1 ACMs weighted by prior extremes
For a three-phase segmentation problem, two contours partition the whole domain $\Omega$ into two foregrounds $\Omega_i$, $i = 1,2$, and one background $\Omega_0$, with intensity means $c_i$, $i = 1,2$, and $c_0$, respectively. Each foreground $\Omega_i$ is represented by a level set function $\phi_i$ with $\Omega_i = \{\mathbf{x} \mid \phi_i(\mathbf{x}) > 0\}$. A three-phase microscopic image usually consists of foreground objects that are relatively dark and bright, and a background with intermediate intensity. Therefore, the background provides a natural barrier between the dark and bright objects and ensures that the two zero level sets never intersect; no extra processing is needed to prevent such intersections. We represent the constants $c_i$, $i = 0,1,2$, by a vector $\mathbf{c} = (c_0, c_1, c_2)$, and the two level set functions by $\Phi$, such that $M_i(\Phi) = H(\phi_i)$, $i = 1,2$, and $M_0(\Phi) = (1 - H(\phi_1))(1 - H(\phi_2))$. We make a small modification to obtain the three-phase counterpart of equation (4.1) as follows,

$E(\Phi) = \mu L(\Phi) + F_0(\Phi) + \sum_{i=1}^{2} \big( w_i F_i(\Phi) + (1 - w_i) P_i(\Phi) \big),$ (4.8)

where the functions $F_i$, $i = 0,1,2$, and $P_i$, $i = 1,2$, are

$F_i = \int f_i\, M_i\, d\mathbf{x},$ (4.9)

$P_i = \int p_i\, M_i\, d\mathbf{x}, \quad p_i(\mathbf{x}) = |I(\mathbf{x}) - e_i|^2.$ (4.10)

We let each data fidelity term $F_i$ of the foreground be combined, via the weight $w_i$, with the penalty term $P_i$. Intuitively, the more the intensity of a foreground pixel differs from the GIE, the larger the error contributed to the total data fidelity term. This makes the foreground regions favor those pixels whose intensity is closer to the GIE. The Euler-Lagrange equation corresponding to (4.8) is the same as (4.2) apart from one additional term,

$\frac{\partial \phi_i}{\partial t} = \delta(\phi_i)\left[\mu\, \nabla \cdot \frac{\nabla \phi_i}{|\nabla \phi_i|} + (1 - H(\phi_j))\, f_0 - w_i f_i - (1 - w_i)\, p_i\right], \quad j \neq i.$ (4.11)

In the next sections we study similar adaptations of four existing ACMs: the three-phase CV, LBF, LIC and RLSF models. Note that these novel three-phase ACMs, named the CVPE, LBFPE, LICPE and RLSFPE models, are completely specified by giving the functions $f_i$, $i = 0,1,2$, and the involved variables.
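Equation (4.11) translates into a per-contour update in which the background force is gated by the other contour and the foreground force is the weighted mix of the data term and the GIE penalty. Below is a hedged 2D sketch (reusing simple central-difference curvature and a CV-style smoothed Heaviside); it only illustrates the structure of (4.11), not the exact scheme used in the experiments.

```python
import numpy as np

def heaviside(phi, eps=1.0):
    """Smoothed Heaviside H_eps (CV-style arctan regularization)."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def dirac(phi, eps=1.0):
    """Regularized Dirac delta, the derivative of the smoothed Heaviside."""
    return (eps / np.pi) / (eps**2 + phi**2)

def curvature(phi, tol=1e-8):
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx**2 + gy**2) + tol
    return np.gradient(gy / norm)[0] + np.gradient(gx / norm)[1]

def pe_step(phi_i, phi_j, f0, f_i, p_i, w_i, mu=0.1, dt=0.1):
    """One explicit step of (4.11) for one of the two level set functions:
    the background term f0 is gated by (1 - H(phi_j)) of the other contour,
    and the foreground force mixes the data term f_i with the GIE penalty
    p_i = |I - e_i|^2 via the weight w_i."""
    force = (mu * curvature(phi_i)
             + (1.0 - heaviside(phi_j)) * f0
             - w_i * f_i - (1.0 - w_i) * p_i)
    return phi_i + dt * dirac(phi_i) * force
```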

4.4.2 Three-phase CVPE model
With $\mathbf{c}$ containing the means of the foreground and background regions, it suffices to specify $f_i(\mathbf{x}) = |I(\mathbf{x}) - c_i|^2$, $i = 0,1,2$, for the CVPE model. Minimizing the energy function with respect to the vector $\mathbf{c}$, one obtains

$c_i = \frac{\int I\, M_i(\Phi)\, d\mathbf{x}}{\int M_i(\Phi)\, d\mathbf{x}}, \quad i = 0,1,2.$ (4.12)

4.4.3 Three-phase LBFPE model
The three-phase LBF model partitions the image $I$ into three smooth regions represented by the functions $g_i$, $i = 0,1,2$. For each fixed location $\mathbf{y} \in \Omega$, the functions $f_i$, $i = 0,1,2$, have the following form,

$f_i(\mathbf{x}) = \int K(\mathbf{y} - \mathbf{x})\, |I(\mathbf{x}) - g_i(\mathbf{y})|^2\, d\mathbf{y}.$ (4.13)

Minimizing the energy function with respect to $g_i$, one obtains

$g_i = \frac{K * [I\, M_i(\Phi)]}{K * [M_i(\Phi)]}, \quad i = 0,1,2.$ (4.14)
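The local fitting functions of equation (4.14) are ratios of Gaussian-smoothed images, which can be computed with standard filters. A minimal sketch, assuming the three membership maps M_0, M_1, M_2 are available as arrays in [0, 1] and using scipy's Gaussian filter as a stand-in for the truncated kernel K of the text:

```python
import numpy as np
from scipy import ndimage

def local_fitting_functions(image, memberships, sigma=4.0, tol=1e-8):
    """g_i = K*[I M_i] / K*[M_i] of equation (4.14).
    memberships: iterable of membership maps M_0, M_1, M_2 in [0, 1]."""
    g = []
    for m in memberships:
        num = ndimage.gaussian_filter(image * m, sigma)
        den = ndimage.gaussian_filter(m, sigma) + tol
        g.append(num / den)
    return g
```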

4.4.4 Three-phase LICPE model
The functions $f_i$ of the LICPE model are given in (4.6). Minimizing the energy function with respect to the vector $\mathbf{c}$ and the bias field $b$, we obtain

$c_i = \frac{\int (b * K)\, I\, M_i(\Phi)\, d\mathbf{x}}{\int (b^2 * K)\, M_i(\Phi)\, d\mathbf{x}}, \quad \text{and} \quad b = \frac{\big(I \sum_{i=0}^{2} c_i\, M_i(\Phi)\big) * K}{\big(\sum_{i=0}^{2} c_i^2\, M_i(\Phi)\big) * K}.$ (4.15)


The CVPE model can overcome global intensity inhomogeneity by sacrificing some foreground pixels, but it is not able to deal with local intensity inhomogeneity. The new LICPE model, in principle, overcomes both G&L intensity inhomogeneities: the bias field b accounts for the local intensity changes, whereas the global guidance introduced by the GIEs addresses the global intensity inhomogeneity.
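The alternating updates of equation (4.15) can likewise be written with Gaussian filters. The sketch below is an assumption-laden illustration (again using scipy's Gaussian filter in place of the truncated kernel, and illustrative names); in the actual model these updates alternate with the contour evolution.

```python
import numpy as np
from scipy import ndimage

def licpe_updates(image, memberships, b, sigma=4.0, tol=1e-8):
    """One alternating update of the constants c_i and the bias field b
    following equation (4.15)."""
    bK = ndimage.gaussian_filter(b, sigma)          # b * K
    b2K = ndimage.gaussian_filter(b * b, sigma)     # b^2 * K
    c = [float(np.sum(bK * image * m) / (np.sum(b2K * m) + tol))
         for m in memberships]
    J1 = sum(ci * m for ci, m in zip(c, memberships))      # sum_i c_i M_i
    J2 = sum(ci**2 * m for ci, m in zip(c, memberships))   # sum_i c_i^2 M_i
    b_new = ndimage.gaussian_filter(image * J1, sigma) / (
        ndimage.gaussian_filter(J2, sigma) + tol)
    return c, b_new
```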

4.4.5 Three-phase RLSFPE model
Different from the LBF/LIC models, the RLSF model measures the local similarity factor (LSF), defined as

$LSF_i(\mathbf{x}) = \int_{\mathbf{y} \in N_{\mathbf{x}},\, \mathbf{y} \neq \mathbf{x}} \frac{|I(\mathbf{y}) - lc_i(\mathbf{x})|}{d(\mathbf{x}, \mathbf{y})}\, d\mathbf{y}.$ (4.16)

In this expression, $N_{\mathbf{x}}$ is a local window defined as a 5×5 neighborhood of $\mathbf{x}$, and the function $d$ is the spatial Euclidean distance between two pixels. The function $lc_i$ is the local average intensity value, defined within a local region, as follows,

$lc_i(\mathbf{x}) = \frac{\int W(\mathbf{x}, \mathbf{y})\, I(\mathbf{y})\, M_i(\Phi(\mathbf{y}))\, d\mathbf{y}}{\int W(\mathbf{x}, \mathbf{y})\, M_i(\Phi(\mathbf{y}))\, d\mathbf{y}}, \quad i = 0,1,2.$ (4.17)

The local region is defined by the function $W$, which takes the value 1 for $|\mathbf{x} - \mathbf{y}| < 1$ and 0 otherwise. The RLSF model follows by substituting the LSF functions into the energy function (4.1), i.e.,

$f_i(\mathbf{x}) = LSF_i(\mathbf{x}), \quad i = 0,1,2.$ (4.18)

4.4.6 Numerical implementation
The implementation of the new ACMs is almost the same as that of the original ACMs, except for the evolution equations. The Euler-Lagrange equations can be implemented using the finite difference scheme of the DRLSE method [23] or the reaction-diffusion (RD) method [24]. The DRLSE and RD methods provide different re-initialization-free frameworks to circumvent the issue of re-initialization. However, applied to the microscopic images, the results of the original ACMs are then not as accurate as with the conventional re-initialization approach. Therefore, the level set functions of the original ACMs are re-initialized to a signed distance function [25] in every iteration. For the new ACMs, we use the RD method [24] to reduce the need for re-initialization, but the level set functions are still re-initialized every 10th iteration.

In the numerical implementation, the Heaviside function $H$ is replaced by the smoothed Heaviside function $H_\varepsilon$ [10]. Accordingly, the Dirac delta function $\delta$, which is the derivative of the Heaviside function $H$, is replaced by the regularized Dirac delta function $\delta_\varepsilon$ [10]. For all the ACMs, $\mu$ was set to $0.001 \times 255 \times 255$. The time step was fixed at $\Delta t = 0.1$. We set $\sigma = 4$ for all ACMs involving a Gaussian kernel. Other parameters will be specified for each dataset used. Note that for each dataset and each ACM, we use the same parameter settings for all test images in the dataset.
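The re-initialization to a signed distance function mentioned above can be approximated with two Euclidean distance transforms; the parameter values repeat those stated in the text. This is a sketch of that step, not the DRLSE/RD implementations referenced in [23, 24].

```python
import numpy as np
from scipy import ndimage

MU = 0.001 * 255 * 255   # weight of the length term (as stated above)
DT = 0.1                 # time step
SIGMA = 4                # Gaussian kernel width for the LBF/LIC-type models

def reinitialize_sdf(phi):
    """Approximate re-initialization of phi to a signed distance function:
    positive inside the contour (phi > 0), negative outside."""
    inside = phi > 0
    dist_out = ndimage.distance_transform_edt(~inside)  # outside -> contour
    dist_in = ndimage.distance_transform_edt(inside)    # inside -> contour
    return dist_in - dist_out
```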

4.5 Experimental results
Fluorescence imaging has been widely used to study biological processes [1, 11, 12, 26, 27]. More recently, label-free imaging modalities such as SHG and THG have shown great potential for the clinical application of brain tumor surgery [2, 4-7, 28]. We first use an example of a THG image to illustrate the intensity inhomogeneities that a microscopic image may have. Second, two-phase fluorescence images with global intensity inhomogeneity are used to show that the existing ACMs are not able to deal with such inhomogeneity, whereas the new ACMs are. Third, two-phase SHG images with local intensity inhomogeneity are used to demonstrate that optimal results can be obtained with the new ACMs, although the existing ACMs are also able to extract the objects of interest. Finally, we show that the existing ACMs fail to segment three-phase THG images, in which both G&L intensity inhomogeneities are present, while the new ACMs succeed. Note that the fluorescence images and THG images have been enhanced and denoised by POSHE histogram equalization [29] and EED anisotropic diffusion [30], respectively; the SHG images are smoothed by a Gaussian filter. The reason is that these microscopic images usually have low contrast in the corners and are so noisy that the ACMs fail without denoising.

4.5.1 Intensity inhomogeneities within microscopic images
We use an example of a three-phase THG image of mouse brain tissue (Fig. 4.1) to show that both global and local intensity inhomogeneities can be present in a microscopic image. Fig. 4.1A is the raw image, in which both dark and bright objects are visible; these have been interpreted as brain cells (neurons and glial cells) [4, 7]. The presence of local intensity inhomogeneity can be confirmed in Fig. 4.1B-C. Fig. 4.1D is the ground truth of the image, with the boundaries of dark and bright cells shown in red and green, respectively. Based on the ground truth, we plot the intensity histograms of the dark cells, the bright cells and the background. The histograms show a large overlap between the intensity range of the background and that of the dark and bright cells. Such an overlap clearly demonstrates the presence of global intensity inhomogeneity, which is also confirmed by the low contrast in the corners.

Figure 4.1 A THG image of mouse brain tissue with both G&L intensity inhomogeneities present. (A) A THG image. Both the dark and bright objects are brain cells. Note that the contrast in the corners is lower. (B) A dark brain cell; the bottom part of the cell is clearly darker than the top part. (C) A bright cell; the area indicated by the yellow arrow is darker. (D) The ground truth of the image. The boundaries of dark and bright cells are delineated in red and green, respectively. (E) The intensity histograms of the dark cells (red), bright cells (green) and background (black), indicating global intensity inhomogeneity. The x-axis and y-axis denote the intensity level and the number of pixels.



4.5.2 Comparison on two-phase images
Fluorescence images have been widely studied in the past decades using the two- and multi-phase CV models [11, 12]. We apply the original and new ACMs to a special set of fluorescence images acquired from ex-vivo mouse brain tissue, in which the staining is uneven and the imaged areas are very close to the tissue boundary. One example is shown in Fig. 4.2. Because of the uneven staining, the background contains "dragon"-like global intensity inhomogeneity. We see that the CV and LIC models are able to extract some nuclei, but they also detect parts of the varying background caused by the tissue boundary. The LBF and RLSF models completely fail to segment the nuclei because no global guidance for the level set evolution is present. With the flexibility to manipulate the means of the output foreground via the weight w, the new ACMs are able to correctly segment this unevenly stained fluorescence image.

Figure 4.2 From top-left to bottom-right: the fluorescence image, the segmentation results of the CV, LBF, LIC, RLSF (r=10), and CVPE (w=0.7), LBFPE (w=0.8), LICPE (w=0.98), RLSFPE (r=10, w=0.1) models.



Figure 4.3 From top-left to bottom-right: the SHG/MPAF image, the segmentation results of the CV, LBF, LIC, RLSF (r=30), and CVPE (w=0.9), LBFPE (w=0.9), LICPE (w=0.98), RLSFPE (r=30, w=0.1) models.

Besides the special kind of fluorescence images shown above, we compare the results of the original and new ACMs on SHG/multi-photon auto-fluorescence (SHG/MPAF) images collected from structurally normal human brain tissue, with one example shown in Fig. 4.3. The SHG/MPAF signals arise from lipofuscin granules in brain cells, from microtubules, or from collagen in small blood vessel walls. Thus most of the square and circular objects indicate brain cells, and line-like objects represent microtubules. The results of the LBF and RLSF models are similar to those for the fluorescence images, and they fail to extract the main features (brain cells) shown in the SHG/MPAF image. The CV and LIC models are able to correctly segment the main features shown in the SHG/MPAF image. However, the LIC model is so "robust" that quite some background pixels are incorrectly identified as foreground pixels because of the cluttered character of the background. The CV model fails to split the objects indicated by the green and yellow arrows, which should have been detected as separate objects. The microvessel indicated by the blue arrow is also incompletely detected by the CV model. The new ACMs capture the main features in the SHG/MPAF image better than the original ACMs. The CVPE model sacrifices an acceptable number of foreground pixels, resulting in the separation of the two objects indicated by the green arrow. The LBFPE model only detects the brightest pixels, but the objects indicated by the green and yellow arrows have been separated. Although the detection of the microvessel (blue arrow) is incomplete, the LBFPE model greatly improves on the output of the LBF model. The LICPE and RLSFPE models show the best balance between the cluttered background and the completeness of the detections: the objects marked with arrows have been correctly identified and separated. These examples show that the new ACMs provide enough flexibility to produce the required segmentation output, and that the local intensity inhomogeneity can only be overcome by the new ACMs.

Figure 4.4 From top-left to bottom-right: the fluorescence image, the number of nuclei detected by the ACMs as a function of the initial radius, the initialization, and the segmentation results of the RLSFPE model initialized with circles of radii 5 and 10 pixels. The segmentation results of the RLSFPE model initialized with circles of other radii are almost the same and are therefore omitted.



4.5.3 Robustness to initialization
We also tested the performance of the new ACMs with different initializations on an image from the fluorescence image dataset, shown in Fig. 4.4. The selected fluorescence image is a good test case for the influence of the initialization, because the nuclei are sparsely distributed. The parameter settings of each new ACM are the same as used for Fig. 4.2. We use the circles shown in Fig. 4.4 as initialization for the ACMs; the distance between two neighboring centers is set to 3 times the radius, except at the image boundaries (a sketch of this initialization is given below). We test the new ACMs with different radii, i.e., 5, 10, 15, 20 and 25 pixels. For each initialization, we count the number of nuclei with a size larger than 100 pixels detected by each new ACM. The resulting curves show that the number of detected nuclei does not substantially depend on the initialization, indicating that the new ACMs are robust to initialization. As an illustration, we show the segmentation results of the RLSFPE model initialized with circles of different radii (in the bottom-right panels of Fig. 4.4); the results are almost the same. We notice that the number of nuclei detected by the LICPE model slightly decreases as the radius increases from 15 to 25 pixels. As indicated by these curves, initialization with circles of radius 10 pixels is a good choice for all the new ACMs. Note that the methods give quite different numbers of nuclei, because some nuclei are only dimly visible and are therefore difficult to detect for some methods.
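The circle-grid initialization described above can be generated directly as an (approximate) signed distance function; the sketch below is illustrative, with the spacing of three radii taken from the text and the remaining details assumed.

```python
import numpy as np

def circle_grid_phi(shape, radius=10, spacing_factor=3):
    """Initial level set that is positive inside a regular grid of circles
    and negative outside; neighboring centers are spacing_factor * radius
    apart. Far from all circles phi is clipped at -radius, which is
    sufficient as an initialization."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    phi = np.full(shape, -float(radius))
    step = spacing_factor * radius
    for cy in range(step // 2, h, step):
        for cx in range(step // 2, w, step):
            d = np.sqrt((yy - cy)**2 + (xx - cx)**2)
            phi = np.maximum(phi, radius - d)   # signed distance to one circle
    return phi

phi0 = circle_grid_phi((256, 256), radius=10)
```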

Figure 4.5 From top-left to bottom-right: the THG image, the segmentation results of the CV, LBF, LIC, RLSF (r=10), and CVPE (w1=w2=0.5), LBFPE (w1=0.85, w2=0.9), LICPE (w1=w2=0.8), RLSFPE (r=10, w1=w2=0.1) models.



4.5.4 Comparison on three-phase THG images
We further compare the results of the original and new ACMs on THG images acquired from ex-vivo mouse and human brain tissue, in which both global and local intensity inhomogeneities are present. These THG images are intrinsically three-phase images consisting of dark objects, bright objects and an intermediate background. In Fig. 4.5, we show a typical THG image of mouse brain tissue, in which both the dark and bright objects represent brain cells. In the unsegmented THG image, the contrast in the corners is very low. The reasons include that the corners are imaged at a slightly different depth, and/or that the illumination is affected by the rough surfaces of the sample. Most dark brain cells have circular shapes, while the shapes of the bright brain cells are more complicated, as indicated in Fig. 4.1B-C. The segmentation result of the CV model confirms the existence of large-scale intensity inhomogeneity in the corners. All the original ACMs are able to segment the dark objects in the middle of the image, where there is enough contrast, but they fail to segment the dark objects in the corners. They also fail to segment the bright objects because of the local intensity inhomogeneity. Thanks to the prior information of the GIEs, the new ACMs correctly segment both the dark and the bright objects in the whole image.

Figure 4.6 From top-left to bottom-right: the THG image, the segmentation results of the CV, LBF, LIC, RLSF (r=10), and CVPE (w1=0.5, w2=0.9), LBFPE (w1=0.9, w2=0.99), LICPE (w1=0.8, w2=0.99), RLSFPE (r=10, w1=0.1, w2=0.99) models.


We finally compare the results of the original and new ACMs for THG images of normal human brain tissues. One example acquired from gray matter (GM) is shown in Fig. 4.6 and one acquired from white matter (WM) is shown in Fig. 4.7, using the same parameter settings. According to a previous study [6], the dark objects have been confirmed as brain cells, and the bright fiber-like objects are neuropil (including axons and dendrites). The nuclei of dark brain cells are sometimes dimly visible (the object indicated by the green arrow in Fig. 4.6, Raw). Brain cells and neuropil are sparsely located in THG images of GM (Fig. 4.6), while more neuropil is observed in those of WM (Fig. 4.7). The density of brain cells and neuropil highly correlates with the presence of a brain tumor [5], but these objects are difficult to detect because THG images exhibit both global and local intensity inhomogeneities. The original ACMs are able to segment the neuropil, but fail to segment the dark brain cells because of the varying background. In contrast, the new ACMs successfully detect both the dark and the bright objects. Note that the contrast of the nucleus (the dark object indicated by the green arrow in Fig. 4.6, Raw) is not high enough for it to be detected by any of the ACMs.

Figure 4.7 From top-left to bottom-right: the THG image, the segmentation results of the CV, LBF, LIC, RLSF, and CVPE, LBFPE, LICPE, RLSFPE models.


To quantitatively analyze the performance of the new ACMs, we manually segmented the dark brain cells of 10 THG images acquired from gray matter as shown in Fig. 4.6. One manually delineated ground truth is shown in Fig. 4.8. Due to the complexity of the objects, some of the salient dark brain cells have been delineated. For each segmentation result, we compute the distance of each point x_i on the resulting contour C to the ground truth S. Then the mean error of the resulting contour C to the ground truth S is,

\mathrm{ME} = \frac{1}{N} \sum_{i=1}^{N} \mathrm{dist}(\mathbf{x}_i, S). \qquad (4.19)

This contour-based metric can be used to evaluate the sub-pixel accuracy of a segmentation result [14, 31]. The mean errors of the original and new ACMs are summarized in Fig. 4.9 and Table 4.1. The errors of the new ACMs are significantly smaller than those of the original ACMs: the average error of each original ACM is higher than 20 pixels, whereas that of each new ACM is around 2 pixels. The accuracy of the new ACMs is therefore improved by at least a factor of 10.
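A minimal Python sketch of this contour-based metric, assuming that both the resulting contour C and the ground truth S are available as binary contour masks of the same size; the Euclidean distance transform from SciPy gives, for every pixel, the distance to the nearest ground-truth contour point, which is then averaged over the points of C as in equation (4.19).

import numpy as np
from scipy.ndimage import distance_transform_edt

def mean_contour_error(contour_mask, ground_truth_mask):
    """Mean distance (in pixels) from the points of contour C to the ground truth S.

    Both inputs are boolean 2D arrays marking contour pixels.
    """
    # distance_transform_edt measures the distance to the nearest zero entry,
    # so invert the ground-truth mask to measure distances to S.
    dist_to_gt = distance_transform_edt(~ground_truth_mask)
    return dist_to_gt[contour_mask].mean()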

Figure 4.8 An example of a manually delineated ground truth. The left one is the raw THG image and the right one is the ground truth.

Figure 4.9 Comparison of the original and new ACMs in terms of mean error.


TABLE 4.1 AVERAGED MEAN ERRORS ON 10 IMAGES

ACMs         CV     CVPE   LBF    LBFPE  LIC    LICPE  RLSF   RLSFPE
Mean Errors  44.81  2.09   28.93  2.14   29.23  2.10   81.69  3.66

4.6 Conclusion

In this work, we have demonstrated that prior information on global intensity extremes is valuable for segmenting several types of microscopic images. We have introduced a strategy by which the global intensity extremes can easily be incorporated into many of the well-known ACMs, such that they become less sensitive to both global and local intensity inhomogeneities. With the prior intensity extremes, the new ACMs are equipped with additional flexibility to control the segmentation results. Although the introduced weight parameter needs some tuning for each tissue type, we found that the outputs are very stable and highly repeatable over different samples. More importantly, the new ACMs are able to segment microscopic images for which the original ACMs gave unsatisfactory results because of the varying contrast. Experimental results on fluorescence images demonstrated that the new ACMs are robust to initialization, so much less re-initialization is needed. Quantitative analysis of the performance of the new ACMs on label-free images has demonstrated their superior accuracy. The new ACMs will play an important role in the analysis of label-free higher harmonic microscopy images, and our results will therefore help to unlock the clinical application potential of label-free higher harmonic microscopy.

References

[1] D. Dormann and C. J. Weijer, "Imaging of cell migration," Embo Journal, vol. 25, pp. 3480-3493, Aug 9 2006.

[2] D. Debarre, W. Supatto, A. M. Pena, A. Fabre, T. Tordjmann, L. Combettes, M. C. Schanne-Klein, and E. Beaurepaire, "Imaging lipid bodies in cells and tissues using third-harmonic generation microscopy," Nature Methods, vol. 3, pp. 47-53, Jan 2006.

[3] N. Olivier, M. A. Luengo-Oroz, L. Duloquin, E. Faure, T. Savy, I. Veilleux, X. Solinas, D. Debarre, P. Bourgine, A. Santos, N. Peyrieras, and E. Beaurepaire, "Cell Lineage Reconstruction of Early Zebrafish Embryos Using Label-Free Nonlinear Microscopy," Science, vol. 329, pp. 967-971, Aug 20 2010.

[4] S. Witte, A. Negrean, J. C. Lodder, C. P. De Kock, G. T. Silva, H. D. Mansvelder, and M. L. Groot, "Label-free live brain imaging and targeted patching with third-harmonic generation microscopy," Proceedings of the National Academy of Sciences, vol. 108, pp. 5970-5975, 2011.

[5] N. V. Kuzmin, P. Wesseling, P. C. Hamer, D. P. Noske, G. D. Galgano, H. D. Mansvelder, J. C. Baayen, and M. L. Groot, "Third harmonic generation imaging for fast, label-free pathology of human brain tumors," Biomed Opt Express, vol. 7, pp. 1889-904, May 01 2016.

[6] Z. Zhang, N. V. Kuzmin, M. Louise Groot, and J. C. de Munck, "Extracting morphologies from third harmonic generation images of structurally normal human brain tissue," Bioinformatics, Jan 27 2017.


[7] Z. Zhang, N. V. Kuzmin, M. L. Groot, and J. C. de Munck, "Quantitative comparison of 3D third harmonic generation and fluorescence microscopy images," J Biophotonics, May 02 2017.

[8] M. Kass, A. Witkin, and D. Terzopoulos, "Snakes - Active Contour Models," International Journal of Computer Vision, vol. 1, pp. 321-331, 1987.

[9] V. Caselles, R. Kimmel, and G. Sapiro, "Geodesic active contours," International Journal of Computer Vision, vol. 22, pp. 61-79, Feb-Mar 1997.

[10] T. F. Chan and L. A. Vese, "Active contours without edges," Ieee Transactions on Image Processing, vol. 10, pp. 266-277, Feb 2001.

[11] A. Dufour, V. Shinin, S. Tajbakhsh, N. Guillen-Aghion, J. C. Olivo-Marin, and C. Zimmer, "Segmenting and tracking fluorescent cells in dynamic 3-D microscopy with coupled active surfaces," Ieee Transactions on Image Processing, vol. 14, pp. 1396-1410, Sep 2005.

[12] O. Dzyubachyk, W. A. van Cappellen, J. Essers, W. J. Niessen, and E. Meijering, "Advanced Level-Set-Based Cell Tracking in Time-Lapse Fluorescence Microscopy (vol 29, pg 852, 2010)," Ieee Transactions on Medical Imaging, vol. 29, pp. 1331-1331, Jun 2010.

[13] S. Osher and J. A. Sethian, "Fronts Propagating with Curvature-Dependent Speed - Algorithms Based on Hamilton-Jacobi Formulations," Journal of Computational Physics, vol. 79, pp. 12-49, Nov 1988.

[14] C. M. Li, R. Huang, Z. H. Ding, J. C. Gatenby, D. N. Metaxas, and J. C. Gore, "A Level Set Method for Image Segmentation in the Presence of Intensity Inhomogeneities With Application to MRI," Ieee Transactions on Image Processing, vol. 20, pp. 2007-2016, Jul 2011.

[15] D. Mumford and J. Shah, "Optimal Approximations by Piecewise Smooth Functions and Associated Variational-Problems," Communications on Pure and Applied Mathematics, vol. 42, pp. 577-685, Jul 1989.

[16] A. Tsai, A. Yezzi, and A. S. Willsky, "Curve evolution implementation of the Mumford-Shah functional for image segmentation, denoising, interpolation, and magnification," Ieee Transactions on Image Processing, vol. 10, pp. 1169-1186, Aug 2001.

[17] L. A. Vese and T. F. Chan, "A multiphase level set framework for image segmentation using the Mumford and Shah model," International Journal of Computer Vision, vol. 50, pp. 271-293, Dec 2002.

[18] C. M. Li, C. Y. Kao, J. C. Gore, and Z. H. Ding, "Minimization of region-scalable fitting energy for image segmentation," Ieee Transactions on Image Processing, vol. 17, pp. 1940-1949, Oct 2008.

[19] L. Wang, C. M. Li, Q. S. Sun, D. S. Xia, and C. Y. Kao, "Active contours driven by local and global intensity fitting energy with application to brain MR image segmentation," Computerized Medical Imaging and Graphics, vol. 33, pp. 520-531, Oct 2009.

[20] H. L. Zhang, X. J. Ye, and Y. M. Chen, "An Efficient Algorithm for Multiphase Image Segmentation with Intensity Bias Correction," Ieee Transactions on Image Processing, vol. 22, pp. 3842-3851, Oct 2013.

[21] L. X. Liu, Q. Zhang, M. Wu, W. Li, and F. Shang, "Adaptive segmentation of magnetic resonance images with intensity inhomogeneity using level set method," Magnetic Resonance Imaging, vol. 31, pp. 567-574, May 2013.

[22] S. J. Niu, Q. Chen, L. de Sisternes, Z. X. Ji, Z. M. Zhou, and D. L. Rubin, "Robust noise region-based active contour model via local similarity factor for image segmentation," Pattern Recognition, vol. 61, pp. 104-119, Jan 2017.

[23] C. M. Li, C. Y. Xu, C. F. Gui, and M. D. Fox, "Distance Regularized Level Set Evolution and Its Application to Image Segmentation," Ieee Transactions on Image Processing, vol. 19, pp. 3243-3254, Dec 2010.

[24] K. H. Zhang, L. Zhang, H. H. Song, and D. Zhang, "Reinitialization-Free Level Set Evolution via Reaction Diffusion," Ieee Transactions on Image Processing, vol. 22, pp. 258-271, Jan 2013.


[25] P. Felzenszwalb and D. Huttenlocher, "Distance transforms of sampled functions," ed: Cornell University, 2004.

[26] M. Maska, O. Danek, S. Garasa, A. Rouzaut, A. Munoz-Barrutia, and C. Ortiz-de-Solorzano, "Segmentation and Shape Tracking of Whole Fluorescent Cells Based on the Chan-Vese Model," Ieee Transactions on Medical Imaging, vol. 32, pp. 995-1006, Jun 2013.

[27] S. Pop, A. C. Dufour, J. F. Le Garrec, C. V. Ragni, C. Cimper, S. M. Meilhac, and J. C. Olivo-Marin, "Extracting 3D cell parameters from dense tissue environments: application to the development of the mouse heart," Bioinformatics, vol. 29, pp. 772-779, Mar 15 2013.

[28] A. Medyukhina, T. Meyer, M. Schmitt, B. F. M. Romeike, B. Dietzek, and J. Popp, "Towards automated segmentation of cells and cell nuclei in nonlinear optical microscopy," Journal of Biophotonics, vol. 5, pp. 878-888, Nov 2012.

[29] J. Y. Kim, L. S. Kim, and S. H. Hwang, "An advanced contrast enhancement using partially overlapped sub-block histogram equalization," Ieee Transactions on Circuits and Systems for Video Technology, vol. 11, pp. 475-484, Apr 2001.

[30] J. Weickert, Anisotropic diffusion in image processing, vol. 1: Teubner Stuttgart, 1998.

[31] W. Kim and C. Kim, "Active Contours Driven by the Salient Edge Energy Model," Ieee Transactions on Image Processing, vol. 22, pp. 1665-1671, Apr 2013.


Chapter 5

Tensor regularized total variation for third harmonic generation brain images

This chapter is based on a conference paper accepted, after peer review, by the joint conference of the European Medical and Biological Engineering Conference (EMBEC) and the Nordic-Baltic Conference on Biomedical Engineering and Medical Physics (NBC), in Tampere, Finland, in June 2017. The full paper is in preparation.


5.1 Abstract

Third harmonic generation (THG) microscopy is a label-free imaging technique that shows great potential to visualize brain tumor margins during surgery. However, the complexity of THG brain images makes image denoising challenging. Anisotropic diffusion filtering (ADF) has recently been applied to reconstruct noise-free THG images, but the reconstructed edges are in fact smooth and the existing methods are time-consuming. In this work, we propose a robust and efficient scheme for ADF that overcomes these drawbacks, by expressing an ADF model as a tensor regularized total variation (TRTV) model. First, the gradient magnitude of the Gaussian-smoothed image at each point is used to estimate the first eigenvalue of the structure tensor, with which flat and non-flat areas can be distinguished. Second, the tensor decomposition is performed only in non-flat areas. Third, the outlier-robust Huber norm is used for the data fidelity term to maintain image contrast. Finally, a recently developed primal-dual algorithm is applied to efficiently solve the resulting convex problem. Several experiments on THG brain images show promising results.

5.2 Introduction

Third harmonic generation (THG) microscopy is a novel imaging technique for label-free 3D imaging of live tissues without exogenous contrast agents. Recently this technique was applied to image mouse brain tissues, revealing rich morphological information [1]. THG microscopy has great potential for clinical applications such as real-time in-vivo pathology of brain tumors during surgery [2]. Automatic quantification of THG images will reveal pathologically (tumor-)relevant features present in the imaged tissue. However, quantification is hampered by the low signal-to-noise ratio (SNR) of THG brain images, especially when the aim is to reconstruct the rich morphologies observed.

Anisotropic diffusion filtering (ADF) lies at the core of image denoising techniques that are able to remove image noise while maintaining the sharp edges of objects [3]. It has also been used to reconstruct certain kinds of structures, such as 1D flow-like [3] and 2D membrane-like structures [4], even if parts of these objects are missing. In a previous study we applied the classical edge-enhancing ADF model to restore the "dark" and "bright" brain cells contained in THG images of mouse brain tissue [5]. In [6] we further developed a salient edge-enhancing ADF model to reconstruct the rich morphologies contained in THG images of structurally normal human brain tissue. However, all ADF models have the drawback that the denoised images are in fact smooth [7]. So far, most ADF models have been implemented with either the explicit or the semi-implicit scheme [3], and each scheme has its own problems.

To overcome the drawbacks of ADF, it has been linked with the total variation (TV) model, which has been studied mathematically for decades [7]. For a general overview of TV models, we refer to [8-11]. It has been proven that an ADF model can be formulated as a tensor regularized total variation (TRTV) model [7]. In [11], a 2D adaptive TRTV model was proposed and solved with the primal-dual algorithm [10] that was originally developed for the convex optimization problem of the TV models. These first-order primal-dual algorithms [8-11] enable a fast and easy-to-code implementation of the TRTV model.

In this paper, we develop a robust and efficient TRTV model to denoise 2D and 3D THG brain images. The contributions of this paper are fourfold. First, we propose an efficient approach to design the diffusion tensor, which is applicable to both the ADF and the TRTV models. We use the gradient magnitude of the Gaussian-smoothed image at each point to estimate the first eigenvalue of the structure tensor and to distinguish flat and non-flat areas. In the flat areas, the identity matrix is used as the diffusion tensor, while in the non-flat regions the application-driven diffusion tensor is used. Second, the 2D adaptive TRTV model cannot be trivially extended to higher dimensions, but the proposed diffusion tensor regularizer can easily be generalized to higher-dimensional models. Third, we use the robust Huber norm for the data fidelity term instead of the L2 norm, to preserve image contrast. Finally, we solve the new TRTV model with the efficient primal-dual algorithm from [9].

5.3 Related works

5.3.1 The ADF model

Let u be an mD (m = 2 or 3) image. The partial differential equation (PDE) of an ADF model is defined as follows,

\partial_t u = \mathrm{div}(D \,\nabla u). \qquad (5.1)

Here D is the diffusion tensor. By taking the input image f as the initial condition and evolving equation (5.1) over some time, the image is smoothed in flat areas and along the object edges, whereas the edges themselves are maintained. Both the explicit and the semi-implicit scheme have been widely employed to implement ADF. The explicit scheme is easy to code yet time-consuming to execute, whereas the semi-implicit scheme is more efficient because larger time steps are allowed, but it is more difficult to code.
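For reference, a minimal explicit time-stepping sketch of equation (5.1) in 2D is given below, assuming the symmetric per-pixel diffusion tensor is stored as its components D11, D12 and D22; the finite differences via numpy.gradient, the time step tau and the (implicit) boundary handling are illustrative simplifications, not the scheme used in this chapter.

import numpy as np

def explicit_adf_step(u, D11, D12, D22, tau=0.1):
    """One explicit update of du/dt = div(D grad u) for a 2D image u.

    D11, D12, D22 hold the components of the symmetric diffusion tensor
    at every pixel (same shape as u).
    """
    gy, gx = np.gradient(u)                  # image gradient (rows, then cols)
    jx = D11 * gx + D12 * gy                 # flux components D * grad(u)
    jy = D12 * gx + D22 * gy
    div = np.gradient(jx, axis=1) + np.gradient(jy, axis=0)
    return u + tau * div

Iterating such a step with the input image f as the initial condition corresponds to the evolution described above; the small admissible time step is exactly why explicit schemes are time-consuming.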

Mathematically, equation (5.1) can be interpreted as the Euler-Lagrange (E-L) equation resulting from a functional that is designed to achieve a balance between a requested smoothness and a closeness to the input image,

\min_u \; R(u) + \lambda \,\| u - f \|. \qquad (5.2)

In this functional, the first term is the regularization term (regularizer) that depends on the diffusion tensor D. The second term is the data fidelity term that uses a norm ||.|| to measure the closeness of u to the input image f. The implementation of this functional therefore depends on the construction of the diffusion tensor D, the choices of the regularizer and the fidelity norm.

The diffusion tensor D is usually computed in three consecutive steps. First, the structure tensor J defined as follows is computed to estimate the distribution of local gradients,

J_\rho(\nabla u_\sigma) = K_\rho * (\nabla u_\sigma \otimes \nabla u_\sigma), \qquad \sigma, \rho \ge 0, \qquad (5.3)

where u_σ is the Gaussian-smoothed version of u, i.e., u convolved with a Gaussian kernel K_σ of standard deviation σ,

(K_\sigma * u)(\mathbf{x}) = \int_{\mathbb{R}^m} K_\sigma(\mathbf{x} - \mathbf{y})\, u(\mathbf{y})\, d\mathbf{y}. \qquad (5.4)

To study the distribution of local gradients, the outer product of ∇u_σ is computed and each component of the resulting matrix is convolved with a Gaussian of standard deviation ρ. The standard deviation σ denotes the noise scale and ρ is the integration scale that reflects the characteristic size of the texture [3]. Second, the structure tensor J is decomposed into the product of its eigenvalues and eigenvectors,


J = Q \,\mathrm{diag}(\mu_i)\, Q^T. \qquad (5.5)

The diagonal matrix diag(μ_i) is the eigenvalue matrix of all the eigenvalues μ_i, ordered descending, and all the corresponding eigenvectors q_i form the eigenvector matrix Q. Finally, the eigenvalue matrix in (5.5) is replaced by the application-driven diffusion matrix,

D = Q \,\mathrm{diag}(\lambda_i)\, Q^T. \qquad (5.6)

Here λ_i represents the desired amount of diffusivity along q_i.
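The three-step construction above can be sketched as follows for a 2D image, using Gaussian filtering from SciPy for u_σ and for the component-wise smoothing with K_ρ, and numpy.linalg.eigh for the eigendecomposition. The mapping from the eigenvalues μ_i to the diffusivities λ_i shown here is a simple edge-stopping choice for illustration only, since the actual diffusivities are application driven.

import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_2d(u, sigma=1.0, rho=2.0):
    """Structure tensor J_rho(grad u_sigma) per pixel, eqs. (5.3)-(5.4)."""
    u_sigma = gaussian_filter(u, sigma)
    gy, gx = np.gradient(u_sigma)
    J = np.empty(u.shape + (2, 2))
    J[..., 0, 0] = gaussian_filter(gx * gx, rho)
    J[..., 0, 1] = J[..., 1, 0] = gaussian_filter(gx * gy, rho)
    J[..., 1, 1] = gaussian_filter(gy * gy, rho)
    return J

def diffusion_tensor(J, lam=0.02):
    """Replace the eigenvalues of J by diffusivities, eqs. (5.5)-(5.6)."""
    mu, Q = np.linalg.eigh(J)                  # eigenvalues in ascending order
    diffusivity = np.empty_like(mu)
    diffusivity[..., 1] = np.exp(-mu[..., 1] / lam)   # little diffusion across edges
    diffusivity[..., 0] = 1.0                          # full diffusion along edges
    # D = Q diag(lambda_i) Q^T, assembled per pixel
    return np.einsum('...ik,...k,...jk->...ij', Q, diffusivity, Q)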

5.3.2 Connection between the ADF and TV models

The connection between the ADF model and the functional (5.2) was studied in [7], via the following model,

\min_u \int_\Omega |S \nabla u| + \frac{\lambda}{2} \| u - f \|_2^2. \qquad (5.7)

The matrix S satisfies D = S^T S, with D a given diffusion tensor. This TRTV model can be regarded as a generalized TV model with a diffusion tensor D in the L1 regularizer. The fidelity norm used is the L2 norm. Moreover, we note that the E-L equation of (5.7) is given by,

\partial_t u = \mathrm{div}\!\left( \frac{D}{|S \nabla u|} \nabla u \right) - \lambda (u - f). \qquad (5.8)

The first term on the right corresponds to ADF with diffusion tensor D / |S∇u|.

5.3.3 The adaptive TRTV model

In the adaptive TRTV model [11], the Huber penalty g_α is used to avoid the so-called staircase effect caused by the L1 regularizer, and an adaptive diffusion tensor has been designed,

\min_u \int_\Omega g_\alpha(S \nabla u) + \frac{\lambda}{2} \| u - f \|_2^2, \qquad (5.9)

g_\alpha(\nabla u) = \begin{cases} |\nabla u|^2 / (2\alpha) & \text{if } |\nabla u| < \alpha, \\ |\nabla u| - \alpha/2 & \text{if } |\nabla u| \ge \alpha. \end{cases} \qquad (5.10)

Here the matrix S is built from the eigenvalues μ_1, μ_2 and the eigenvector matrix Q of the structure tensor, together with the Huber parameter α (we refer to [11] for its exact definition), and is used to rotate and scale the axes of the local ellipse to coincide with the coordinate axes of the image domain [11]. Note that the designed matrix S is only applicable to 2D images, and the extension to 3D is not straightforward because it is ambiguous how the diffusion behavior along the second direction should be analyzed.

5.4 The proposed method

When applied to THG brain images, all the above methods have their own problems. The ADF models are either computationally expensive or hard to code, and often give unsatisfactory results. The TRTV model in [7] suffers from the staircase effect, and its implementation converges slowly. The asymmetric matrix S used in the adaptive TRTV model reduces the stability of the algorithm and results in artifacts. To deal with these drawbacks and to make the TRTV model applicable to THG images corrupted by strong noise, we present an efficient estimation of the diffusion tensor and we replace the L2 norm used in the data fidelity term by the robust Huber norm.

5.4.1 Efficient estimation of the diffusion tensor

According to the last section, the diffusion tensor D or the matrix S needs to be estimated at each point, which is time consuming but not of interest in flat areas. For 3D images, this tensor decomposition procedure takes half of the total computational time. If the tensor decomposition is only computed in non-flat areas, the procedure will be substantially accelerated. To do this, we exploit the fact that flat regions consist of points whose first (largest) eigenvalue is small, and that this eigenvalue can be roughly estimated by |∇u_σ|² [12]. This fact motivates the idea of thresholding |∇u_σ|² to distinguish flat and non-flat regions. Before thresholding, we use the following function g to normalize and exponentially rescale |∇u_σ|² to the range [0,1],

g(s) = \exp\!\left( \frac{-C_4}{(s/\lambda)^4} \right), \qquad s > 0. \qquad (5.11)

This function has been used in the edge-enhancing ADF model [12] to define the diffusivity along the first direction. Here C_4 = 3.31488, and λ is the threshold that controls the trend of the function [12]. We then regard the points with g(|∇u_σ|²) < h (here h is always set to 0.9) as flat regions and the other points as non-flat regions. In the flat regions, the diffusion tensor D reduces to the identity matrix I. In the non-flat regions, the diffusion tensor D is defined as a weighted sum of the identity matrix and the application-driven diffusion tensor. Note that most application-driven ADF models can be accelerated using the procedure described here without changing the results. When applied to 3D THG images, we use the following eigenvalue system to estimate the diffusivities λ_i,

\lambda_1 = 1 - g(|\nabla u_\sigma|^2); \qquad \lambda_2 = \lambda_1 - h_\tau(C_{\mathrm{plane}})\,(\lambda_1 - \lambda_3); \qquad \lambda_3 = 1, \qquad (5.12)

while for 2D THG images the second diffusivity λ_2 is ignored. We refer to [4, 6] for the definitions of h_τ and C_plane.
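A minimal sketch of the flat/non-flat decision described above, with g as in equation (5.11), C_4 = 3.31488 and h = 0.9; the Gaussian scale sigma and the contrast parameter lam are illustrative values, and the sketch is written for 2D images.

import numpy as np
from scipy.ndimage import gaussian_filter

C4 = 3.31488

def g(s, lam):
    """Eq. (5.11): exponential rescaling of s = |grad u_sigma|^2 to [0, 1]."""
    s = np.asarray(s, dtype=float)
    out = np.zeros_like(s)
    positive = s > 0
    out[positive] = np.exp(-C4 / (s[positive] / lam) ** 4)
    return out

def nonflat_mask(u, sigma=1.0, lam=0.02, h=0.9):
    """Boolean mask of the points where the tensor decomposition is computed."""
    u_sigma = gaussian_filter(u, sigma)
    gy, gx = np.gradient(u_sigma)
    grad_sq = gx ** 2 + gy ** 2
    return g(grad_sq, lam) >= h              # points with g < h are treated as flat

Restricting the eigendecomposition of the previous subsection to the pixels selected by such a mask is what yields the acceleration described above.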

5.4.2 Robust anisotropic regularization

Given a diffusion tensor D designed as above, we consider the same regularizer as in equation (5.9),

R(u) = \int_\Omega g_\alpha(S \nabla u), \qquad (5.13)

but we use a symmetric S, S = D. To analyze the behavior of this regularizer in terms of ADF, we note that the E-L equation that minimizes R(u) is,

\partial_t u = \mathrm{div}\!\left( \frac{1}{\max(\alpha, |S \nabla u|)}\, S^T S \,\nabla u \right). \qquad (5.14)

It is a scaled version of ADF with diffusion tensor S^T S, whose behavior is similar to that of ADF with diffusion tensor D.


5.4.3 A robust TRTV model

Different from the adaptive TRTV model [11], we consider the following robust minimization problem,

\min_u \int_\Omega g_\alpha(S \nabla u) + \frac{\lambda}{2} \| u - f \|_{\mathrm{Huber}}, \qquad (5.15)

where we use the Huber norm for the data fidelity term instead of the L2 norm. The Huber norm is a smoothed remedy for the L1 norm, which is contrast invariant; compared to the L2 norm, it is less sensitive to outliers [13]. To solve the minimization problem (5.15), we adapt the primal-dual method in [9] and arrive at an efficient and easy-to-code algorithm. We refer to [13] for the definition of the Huber norm and to [8-11] for a more detailed description of the primal-dual method.
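As a small illustration of this data fidelity term, the Huber norm of the residual u − f can be written as below, mirroring the Huber penalty in (5.10); the threshold eps is a hypothetical parameter name, and the primal-dual solver itself is not reproduced here.

import numpy as np

def huber_norm(residual, eps=0.05):
    """Huber norm of u - f: quadratic for small residuals, linear (L1-like) otherwise."""
    r = np.abs(residual)
    return np.sum(np.where(r < eps, r ** 2 / (2 * eps), r - eps / 2))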

Figure 5.1 A 2D THG example. From top-left to bottom-right: the raw image, the results of the ADF [6], Adaptive TRTV [11] and the proposed models.


5.5 Results

We applied the proposed TRTV model to 2D and 3D THG images of structurally normal human brain tissue in gray matter, which have been used before in [6]. One 2D example (273×273 µm², 1125×1125 pixels) is shown in Fig. 5.1, and one 3D example (320×320×80 µm³, 1000×1000×40 voxels) is shown in Fig. 5.2. Note that the field of view (FOV) of the 3D image is larger than that of the 2D image, so that fewer fibers are visible. We compare our result on the 2D THG image to the results of our previous ADF model [6] and the adaptive TRTV model [11], while for the 3D image we only compare our result to our previous result [6]. All parameter settings were optimized.

In Fig. 5.1, we see that the THG image is corrupted by strong noise and contains rich morphologies, e.g., dark brain cells and bright fibers. Our previous ADF model [6] already gives a very satisfying result: the noise has been properly removed, but the denoised image is still smoothed to some extent, e.g., the fiber indicated by the green arrow. The adaptive TRTV model [11] fails to remove the noise because of insufficient diffusion along all the eigenvector directions. The proposed TRTV model significantly improves our previous result while keeping all sharp edges, and it also has the best image contrast.

From the results of Fig. 5.2, we see that the noise has been removed by our previous ADF model [6] and the sharp edges of some salient objects have been restored, e.g., dark holes, but some fibers have been smoothed, e.g., the fiber indicated by the yellow arrow. The proposed TRTV model succeeds in reconstructing all the sharp edges. Moreover, we also find that the proposed TRTV model is about 50% more time efficient than the existing ADF models. The efficiency gain comes partially from computing the tensor decomposition only in the non-flat regions; the flat regions take up roughly 80% of the whole image domain. Finally, more small bright objects are reconstructed by the proposed method, most of which are cross-sections of fibers aligned in the z-direction.

Figure 5.2 A 3D THG example, with one slice shown. From left to right: the raw image, the results of the ADF [6] and the proposed models.


5.6 Conclusion

In this work, we have developed a robust and efficient TRTV model for THG brain images. The proposed method can be generalized to other application-driven projects. The efficient estimation of the diffusion tensor proposed here can be used to accelerate most of the existing ADF models, by computing the tensor decomposition only in the non-flat regions. Compared to the adaptive TRTV model, the approach in which we combine the diffusion tensor and total variation can easily be carried over from existing ADF models. The robust Huber norm makes the proposed TRTV model contrast invariant and less sensitive to noise. One limitation of this study is that only THG images of structurally normal human brain tissue were tested; future work will include testing the applicability of the proposed model to THG images of human brain tumor tissue.

References

[1] S. Witte, A. Negrean, J. C. Lodder, C. P. De Kock, G. T. Silva, H. D. Mansvelder, and M. L. Groot, "Label-free live brain imaging and targeted patching with third-harmonic generation microscopy," Proceedings of the National Academy of Sciences, vol. 108, pp. 5970-5975, 2011.

[2] N. V. Kuzmin, P. Wesseling, P. C. Hamer, D. P. Noske, G. D. Galgano, H. D. Mansvelder, J. C. Baayen, and M. L. Groot, "Third harmonic generation imaging for fast, label-free pathology of human brain tumors," Biomed Opt Express, vol. 7, pp. 1889-904, May 01 2016.

[3] J. Weickert, "Coherence-enhancing diffusion filtering," International Journal of Computer Vision, vol. 31, pp. 111-127, Apr 1999.

[4] S. Pop, A. C. Dufour, J. F. Le Garrec, C. V. Ragni, C. Cimper, S. M. Meilhac, and J. C. Olivo-Marin, "Extracting 3D cell parameters from dense tissue environments: application to the development of the mouse heart," Bioinformatics, vol. 29, pp. 772-779, Mar 15 2013.

[5] Z. Zhang, N. V. Kuzmin, M. L. Groot, and J. C. de Munck, "Quantitative comparison of 3D third harmonic generation and fluorescence microscopy images," J Biophotonics, May 02 2017.

[6] Z. Zhang, N. V. Kuzmin, M. Louise Groot, and J. C. de Munck, "Extracting morphologies from third harmonic generation images of structurally normal human brain tissue," Bioinformatics, Jan 27 2017.

[7] M. Grasmair and F. Lenzen, "Anisotropic Total Variation Filtering," Applied Mathematics and Optimization, vol. 62, pp. 323-339, Dec 2010.

[8] M. Zhu and T. Chan, "An efficient primal-dual hybrid gradient algorithm for total variation image restoration," UCLA CAM Report, pp. 08-34, 2008.

[9] E. Esser, X. Q. Zhang, and T. F. Chan, "A General Framework for a Class of First Order Primal-Dual Algorithms for Convex Optimization in Imaging Science," Siam Journal on Imaging Sciences, vol. 3, pp. 1015-1046, 2010.

[10] A. Chambolle and T. Pock, "A First-Order Primal-Dual Algorithm for Convex Problems with Applications to Imaging," Journal of Mathematical Imaging and Vision, vol. 40, pp. 120-145, Mar 2011.

[11] V. Estellers, S. Soatto, and X. Bresson, "Adaptive Regularization With the Structure Tensor," Ieee Transactions on Image Processing, vol. 24, pp. 1777-1790, Jun 2015.

[12] J. Weickert, Anisotropic diffusion in image processing, vol. 1: Teubner Stuttgart, 1998.

[13] P. J. Rousseeuw and A. M. Leroy, Robust regression and outlier detection, John Wiley & Sons, vol. 589, Feb 2005.


Chapter 6

Rich histopathological morphology revealed by quantitative third harmonic generation microscopy for detecting human brain tumors

To be submitted


6.1 Abstract

Neurosurgeons need real-time tissue diagnosis to determine brain tumor boundaries and improve surgical outcomes. A new imaging technique capable of directly revealing tumor boundaries with histopathological quality is highly desirable. We have recently reported on third harmonic generation (THG) microscopy as a fast, label-free imaging technique to visualize rich pathological morphology in fresh, unprocessed human brain tumor tissues. Here, we have quantified the morphology observed by THG in surgical tissues from 12 patients undergoing neurosurgery. Statistical analysis of the key features (brain cells, nuclei, neuropil and large bright cells) revealed the density differences of these features between tumor and normal brain tissues. Density thresholds of these features were generated to detect tumor infiltration with 98.1% sensitivity and 98.3% specificity. We therefore conclude that quantitative THG microscopy holds strong potential for improving the accuracy of brain tumor surgery, without the need for expert interpretation. This study also opens the door towards a more comprehensive study of the rich morphology observed by THG.

6.2 Introduction

Patients with diffuse glioma still have a very poor survival [1, 2]. According to the World Health Organization (WHO) grading system, diffuse gliomas are histologically classified into low-grade and high-grade tumors [3]. Because of their diffuse nature, these tumors extensively invade the surrounding normal brain [4, 5]. The prognosis and therapy of patients with diffuse gliomas usually correlate with the extent of resection [6-8]. However, the technologies currently used clinically, e.g., state-of-the-art bright-field neurosurgical microscopes, cannot visualize the boundary of this tumor type. Various approaches, including intraoperative magnetic resonance imaging [9], intraoperative computed tomography [10], ultrasound [11], and fluorescence-guided resection [12], have been developed to reveal the tumor boundary, but their effects on surgical outcomes have been questioned [13-15].

More recently, several label-free optical techniques, i.e., optical coherence tomography (OCT), Raman spectroscopy (RS), stimulated Raman scattering (SRS) microscopy and third harmonic generation (THG) microscopy, have emerged to establish tumor boundaries at the cellular level [5, 16-18]. High-speed 3D swept-source OCT (SS-OCT) uses optical attenuation differences between tumor and normal brain tissues to reflect the tissue state [16]. Raman spectroscopy [17] has been reported to reliably detect tumor tissue in patients' brains, and SRS microscopy has shown tumor boundaries ex-vivo in surgical specimens from neurosurgical patients [18]. In particular, THG microscopy [19-23] has been shown to provide real-time feedback on tumor boundaries in fresh, unprocessed human brain tissues [5]. These label-free techniques show great potential for clinical use because there is no need for tissue preparation and staining. On the other hand, Hematoxylin-Eosin (H&E) morphology has been used by pathologists to study tissue state for over a century, and there is a strong belief among many pathologists that H&E will continue to be the common practice over the next 50 years [24, 25] because it is the gold standard of current clinical practice.

Therefore, a technique that can directly visualize the same features as classical H&E morphology will have the maximal chance of being applied in the operating room, because the extra training needed to transfer from current knowledge to the new technique is minimized. The four aforementioned label-free techniques can to some extent reveal cellular morphology, but only THG has an excellent agreement with standard H&E morphology. Increased cellularity, nuclear pleomorphism, and rarefaction of neuropil have been clearly recognized in THG images of brain tumor tissues [5]. The color-coded attenuation map used by SS-OCT [16] provides little cellular morphology, and its resolution is currently insufficient to map H&E morphology. RS extensively compares experimental spectra against libraries of reference spectra to detect the subtle differences between the vibrational spectra of tumor tissue and healthy tissue [5, 17], but reveals no H&E morphology. Although SRS [18, 26] is able to visualize increased cellularity and rarefaction of neuropil, it still needs the protein-to-lipid ratio to discriminate tumor from non-tumor areas, whereas this ratio has no corresponding interpretation in H&E morphology.

THG microscopy has established itself as an important tool for studying intact tissues [20-22, 27, 28] and shows much promise for application in clinical practice. Excellent agreement with standard histopathology has been demonstrated for THG in the case of skin cancer diagnosis [29]. It shows great potential for breast tumor diagnosis [30, 31], and, as discussed above, THG yields high-quality images of brain tissue [5, 22]. Reliable automatic image processing tools that will facilitate future clinical application have been developed for THG images of structurally normal brain tissue [32]. THG images have also been quantitatively compared one-to-one to two-photon fluorescence and second harmonic generation (SHG)/auto-fluorescence images to verify the interpretation of the observed features [32, 33]. For other label-free techniques such a comparison is currently lacking.

To bridge the gap between THG research and its clinical use, we here evaluate the ability of quantitative THG microscopy to detect brain tumor infiltration in tissue samples from 12 neurosurgical patients: 8 diagnosed with low-grade glioma, 2 diagnosed with high-grade glioma, and 2 structurally normal references from patients undergoing epilepsy surgery1. We demonstrate that the key features of H&E morphology revealed by THG microscopy can be quantified with high accuracy in an automated manner. With the quantitative features, we statistically show that low-grade tumor, high-grade tumor and normal brain tissues can be pairwise differentiated from each other with excellent sensitivity and specificity. We conclude that, without the need for expert interpretation, quantitative THG microscopy is able to detect the tumor boundary of infiltrative tumors and that it holds potential for improving the accuracy of brain tumor surgery.

6.3 Results

6.3.1 Quantitative THG microscopy

The THG microscope used for imaging human brain tissues has been previously described (Fig. 6.1A) [22]. THG is a nonlinear optical process that depends on the third-order susceptibility χ(3) of the tissue and certain phase-matching conditions [19, 34]. The long laser wavelength used to generate the third harmonic signals (1200 nm) results in deep penetration and ensures a low risk of damaging the imaged tissue, while the short wavelength resulting from the THG process (400 nm) enables efficient back-scatter detection. THG signals were collected at 400 nm and are depicted in the images in green. Each THG image had a field of view (FOV) of around 300 µm × 300 µm. By setting the focal volume of the incident laser beam to several times the size of a typical dendrite (0.3–2 μm), the brain cells appeared as dark holes with lipofuscin granules inside, on a green carpet of neuropil (Fig. 6.1B). Nuclei were sometimes dimly seen inside the dark cell somata (Fig. 6.1C). Axons and dendrites, being lipid-rich, appeared as bright fibers (Fig. 6.1D). A rich morphology of H&E quality was observed in THG images of tumor tissue, e.g., a glial cell with an indented nucleus (Fig. 6.1E) and a glial cell with a large nuclear-to-cytoplasm (NC) ratio observed in low-grade tumor tissue (Fig. 6.1F). A clear transition from normal brain to low-grade tumor areas was observed in low-grade tumor tissue (Fig. 6.1G). High cell density areas with multiple pleomorphic tumor cell nuclei were observed in high-grade glioma of white matter (Fig. 6.1H), and brain cells with vacuolated cytoplasm were observed in the neocortex of the high-grade glioma tissue (Fig. 6.1I).

1 Part of the data was collected by N.V. Kuzmin and used for publication [5]: N. V. Kuzmin, P. Wesseling, P. C. Hamer, D. P. Noske, G. D. Galgano, H. D. Mansvelder, J. C. Baayen, and M. L. Groot, "Third harmonic generation imaging for fast, label-free pathology of human brain tumors," Biomed Opt Express, vol. 7, pp. 1889-904, May 01 2016.

Figure 6.1 Quantitative THG microscopy and a summary of the cell morphologies revealed by THG. (A) Setup of THG microscopy. See also Fig. 1.1. (B-I) Cellular morphologies observed by THG. (B) Brain cells appeared as dark holes with lipofuscin granules inside, in normal brain tissue. (C) Nuclei of some brain cells in normal GM are sometimes dimly visible. (D) Neuropil consisting of axons and dendrites appeared as linear, lipid-rich bright fibers. (E) A cell with an indented nucleus observed in low-grade tumor tissue. (F) A cell with a large nuclear/cytoplasm ratio observed in low-grade tumor tissue. (G) A very clear transition from normal brain (top, full of neuropil) to low-grade tumor areas (bottom, high nucleus density) was observed in low-grade tumor tissue. (H) High cell density area with multiple pleomorphic tumor cell nuclei was observed in high-grade glioma of white matter. (I) Brain cells with vacuolated cytoplasm were observed in the neocortex of the high-grade glioma tissue.

Figure 6.2 A general overview of the proposed quantification workflow. Image denoising, segmentation and feature extraction are involved.


Figure 6.3 Quantification of typical THG images of structurally normal, low-grade and high-grade tissues. Columns from left to right show normal GM tissue, normal WM tissue, low-grade tumor of WM, and high-grade tumor of WM, respectively. Rows from top to bottom show the raw images (row 1), the detected dark objects (row 2), the detected bright objects (row 3), and the corresponding histology (row 4), respectively. The histology images (A4) and (B4) were taken from the online library (http://141.214.65.171/Histology/Central%20Nervous%20System/view.apml.), and (C4) and (D4) were taken from a textbook on pathology [35], figures 43.3 and 23.2, respectively.


The interpretation of histopathological data depends highly on the expertise of a pathologist, but the interpretation may have a subjective component and it is time-intensive. Automated approaches have been increasingly used to extract histopathologically relevant features in human tumors [36, 37] to assist pathologists with the interpretation. To ensure that THG images can be interpreted automatically, we developed an integrated workflow that quantifies histopathologically relevant features in an automatic manner (Fig. 6.2). In brief, the proposed workflow included three major steps: image denoising, 3-phase segmentation, and feature extraction. The image denoising step reconstructed noise-free THG images. The 3-phase segmentation method segmented a THG image into dark objects, bright objects and a background. The last step quantified each histopathologically relevant feature.

6.3.2 Quantification of histopathological morphology

Dark objects mainly consisted of brain cells or cell cytoplasm and surrounding small cells (Fig. 6.3). Blood vessels with red blood cells inside were occasionally observed. Bright objects included neuropil, cell nuclei and large bright cells (Fig. 6.3). Dark objects and most of the bright objects were detected by the 3-phase segmentation algorithm, while some nuclei were not detected because the nuclear contrast was too weak or the nuclei were obscured by other histopathological features. A circle Hough transform (CHT) was used to quantify the nuclei remaining in the background (Fig. 6.2). We segmented 800 THG images (140 structurally normal, 414 low-grade tumor and 246 high-grade tumor images) obtained from 1 epilepsy, 7 low-grade tumor and 2 high-grade tumor patients. This segmentation procedure was able to detect dark and bright objects with a mean error of 2 pixels (0.6 µm) in terms of distance to the delineated ground truth [32, 38]. A similar workflow for 3D THG images of structurally normal human brain tissues was designed in our previous study, with the segmentation results comprehensively verified by comparison to manual delineation and SHG/auto-fluorescence images [32].
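A minimal sketch of how the remaining, roughly circular nuclei could be picked up in the residual background with a circle Hough transform, using scikit-image; the Canny edge-detection step, the radius range and the accumulator threshold are illustrative choices rather than the exact settings used for the 800 images.

import numpy as np
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

def detect_nuclei_cht(background, min_r=4, max_r=12, max_nuclei=200):
    """Detect roughly circular nuclei in the residual background image."""
    edges = canny(background, sigma=2.0)
    radii = np.arange(min_r, max_r + 1)
    hough_spaces = hough_circle(edges, radii)
    accums, cx, cy, r = hough_circle_peaks(hough_spaces, radii,
                                           total_num_peaks=max_nuclei)
    keep = accums > 0.4                      # discard weak circle responses
    return cx[keep], cy[keep], r[keep]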

Figure 6.4 Secondary effects of high-grade tumor in GM. The large bright cells may arise from tumor-induced edema of neuronal/glial cells. (A) Raw image. (B) Detected dark objects. (C) Detected bright objects. (D) Histology image taken from a textbook on pathology [35], figure 10.4.

Brain cells and neuropil were sparsely detected in gray matter (GM) areas of structurally normal tissue (Fig. 6.3, A1-A3). Bundles of neuropil appeared and were detected in white matter (WM) areas (Fig. 6.3, B1-B3). The low cell density observed in THG images of normal tissues (Fig. 6.3, A1 & B1) coincided with that observed in the online brain atlas images of histological sections of similar areas, with luxol fast blue (LFB) staining to demonstrate myelin (Fig. 6.3, A4 & B4). More brain cells and fewer yet thicker neuropil fibers were detected in low-grade tumors of white matter (Fig. 6.3, C1-C3). The dark holes appearing in low-grade tumor were interpreted as cell cytoplasm whose nuclei were detected as bright objects. The abnormally high cell density indicated that these brain cells should be tumor cells. In a high-grade tumor tissue from white matter, the axon matrix had totally disappeared and the whole area was filled with tumor nuclei (Fig. 6.3, D1-D3). Secondary effects of high-grade gliomas in neocortex tissue of GM were also observed (Fig. 6.4). The detected large bright cells may arise from tumor-induced edema of neuronal/glial cells [39, 40]. The information observed in THG images of tumor tissues coincided with histology images of similar brain areas and grades (Fig. 6.3, C4, D4 & Fig. 6.4D).

6.3.3 Difference of feature density between normal and tumor tissues

The densities of brain cells (including dark holes and bright nuclei) and neuropil are the first pathological features considered by pathologists. We statistically studied the density differences of brain cells, nuclei, neuropil and large bright cells appearing in 800 segmented THG images of normal brain and tumor tissues. We represented the density of a feature by the percentage of space (PoS) taken by that feature. For example, the PoS of nuclei is the total number of pixels of the detected nuclei divided by the image size in pixels. PoS is a combined parameter of object number and object size. It is an adequate representation of each pathological feature because the number of cells or neuropil fibers in brain tissue can be extremely high, resulting in clusters of cells and nets of neuropil [32].
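A minimal sketch of the PoS measure, assuming each detected feature is available as a binary mask over the image; the perimeter-based neuropil density used later in this section (total perimeter in pixels divided by 100) is sketched alongside, with skimage.measure.perimeter as one possible perimeter estimator.

import numpy as np
from skimage.measure import perimeter

def percentage_of_space(feature_mask):
    """PoS: pixels covered by the detected feature divided by the image size."""
    feature_mask = np.asarray(feature_mask, dtype=bool)
    return feature_mask.sum() / feature_mask.size

def neuropil_density(neuropil_mask):
    """Total perimeter (in pixels) of the detected neuropil, divided by 100."""
    return perimeter(np.asarray(neuropil_mask, dtype=bool)) / 100.0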

One criterion currently used by pathologists for low-grade tumor diagnosis is a high cell density. We therefore compared the density of the brain cells detected in normal brain tissue (GM & WM) with that of low-grade tumor tissues, in terms of PoS (Fig. 6.5A). 98% of the normal images had a PoS of brain cells smaller than 0.15, while 99% of the low-grade images had a PoS of brain cells larger than 0.2. The means of the two groups were 0.06 and 0.26, respectively. Note that the mean PoS of the THG images of normal brain tissue agreed with our previous result obtained from 3D THG images of normal brain tissue [32]. This density difference of brain cells fully agreed with the current criteria used by pathologists. From the obtained density difference, a suitable threshold on cell density that can differentiate THG images of normal brain and low-grade tumor tissues should lie within the interval [0.15, 0.2]; we used 0.19.

In pathology, high-grade tumor is characterized by a high nucleus density and rarefaction of neuropil. THG images of high-grade tumor were filled with tumor nuclei, and the cytoplasm around each nucleus was only dimly visible (Fig. 6.3, D1). In contrast, nuclei were sparsely seen in normal brain tissue because of the low cell density (Fig. 6.3, A1 & B1). In terms of PoS, almost all high-grade tumor images of white matter (187 images) had a nuclei density larger than 0.04, and all normal brain images had a nuclei density smaller than 0.03 (Fig. 6.5B). The means of the two groups were 0.086 and 0.01, respectively. A reasonable threshold on nuclei density to distinguish normal and high-grade tissues should therefore be chosen between 0.03 and 0.04; we used 0.04. The secondary effect of high-grade glioma in neocortex tissue of GM is the presence of tumor-induced edema. We therefore compared the density of large bright cells detected in THG images of normal brain tissue and high-grade tumor tissue of neocortex (59 images), in terms of PoS (Fig. 6.5C). All THG images of high-grade tumor had a PoS of large bright cells larger than 0, but only 4 images of normal brain had a PoS larger than 0. The PoS means of the two groups were 0.029 and 0.0001, respectively. The thresholds we used to distinguish high-grade tumor tissue of GM from normal tissue of GM and WM were 0.01 and 0.02, respectively. A more conservative threshold was used for WM to prevent axonal bundles from being detected as large bright cells.


Figure 6.5 Statistical study of the differences of the quantified features in normal and tumor tissues. The densities of features in normal, low-grade and high-grade tissues are shown in green, blue and red, respectively. The x-axis indicates the image index and the y-axis indicates the density (see text). (A) Most images of normal and low-grade tissues were separable by thresholding the density of brain cells at 0.19. (B) The images of normal and high-grade WM tissues were completely separated by thresholding the nuclei density at 0.04. (C) Normal and high-grade GM tissues were separable by thresholding the density of large bright cells at 0.01 for GM and 0.02 for WM. (D) The low-grade and high-grade tissues were distinguished by the neuropil density at a threshold of 100. (E) The high-grade images of WM and GM were separable at a threshold of 0.04, using the nuclei density. Note that the circular dots indicate high-grade GM while the square dots indicate high-grade WM. (F) The normal tissues from WM and GM were separable at a threshold of 50, using the neuropil density.


6.3.4 Low-grade versus high-grade, WM versus GM

The density of neuropil was used to distinguish low-grade tumor from high-grade tumor (GM & WM). The density of neuropil was quantified by the perimeter (in pixels) of all detected neuropil divided by 100. 97% of the THG images of low-grade tumor had a neuropil density larger than 150, and only 2 THG images of high-grade tumor had a neuropil density larger than 100 (Fig. 6.5D). The means of the two groups were 306 and 12, respectively. We used a threshold of 100 to distinguish low-grade tumor from high-grade tumor. Note that the THG images of high-grade tumor in WM and neocortex were combined here to enhance the statistics.

As indicated by Fig. 6.5E, a threshold of 0.04 of nuclei density was used to distinguish high-grade tumor in WM and GM. Since the normal brain tissue in WM is full of neuropil, a threshold of 50 of neuropil density was used to distinguish normal tissue in WM and GM (Fig. 6.5F).

6.3.5 Quantification of infiltrative tumor boundary

Using a total of 800 THG images from 10 patients, we demonstrated that low-grade tumor, high-grade tumor and normal brain tissue can be differentiated from each other by pair-wise comparison using one single feature. The feature used for each comparison is summarized in Table 6.1. Towards a clinical application, we integrated the quantified THG image attributes (density of brain cells, nuclei density, neuropil density and density of large bright cells) into one single metric: the trained thresholds were combined to classify the THG images (Table 6.2) and thus to detect tumor boundaries.

To estimate the sensitivity and specificity of the combined thresholds obtained from the statistical study, mosaic THG images (272 images, 58 healthy and 214 tumor) of 1 structurally normal, 1 low-grade tumor and 2 high-grade tumor patients were used. The healthy and low-grade infiltration tissues used were different from the tissues used in the statistical analysis, although the high-grade tissues were obtained from the same patients because of the limited number of patients that were available in this study. The sensitivity and specificity estimated from these additional data were 98.1% and 98.3%, respectively.
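The reported sensitivity and specificity follow the standard definitions; a minimal sketch, assuming per-image ground-truth labels (tumor versus normal) and the corresponding predictions of the threshold classifier.

def sensitivity_specificity(y_true, y_pred):
    """y_true, y_pred: sequences of 'tumor' / 'normal' labels, one per image."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == 'tumor' and p == 'tumor' for t, p in pairs)
    fn = sum(t == 'tumor' and p == 'normal' for t, p in pairs)
    tn = sum(t == 'normal' and p == 'normal' for t, p in pairs)
    fp = sum(t == 'normal' and p == 'tumor' for t, p in pairs)
    return tp / (tp + fn), tn / (tn + fp)    # (sensitivity, specificity)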

One difficulty for surgery of diffuse glioma is that no clear boundary is present between tumor-infiltrated and non-infiltrated areas. One low-grade tissue sample with both low-grade infiltrated and non-infiltrated areas was used to test the performance of quantitative THG on tumor boundary detection (Fig. 6.6). Quantitative THG was able to correctly identify the tumor boundary, which had been confirmed by our pathologist P.W. in our previous study [5]. The cell density increased from the normal brain area (left) to the tumor-infiltrated area (right), and the transition zone was correctly identified by quantitative THG according to Table 6.2.

6.3.6 H&E morphologies detected by quantitative THG

We used the densities of brain cells, nuclei, neuropil and large bright cells to detect tumor boundaries, but there were also a few H&E morphological parameters that were not used, e.g., the NC ratio. We summarized the H&E morphology detected by quantitative THG in Fig. 6.7. We were able to quantify the most common brain cells observed in normal tissue (Fig. 6.7, A-D), glial cells with an indented nucleus (Fig. 6.7, E-F), as well as large bright cells (Fig. 6.7, G-H) and multiple pleomorphic tumor cell nuclei observed in low-grade and high-grade glioma tissue (Fig. 6.7, I-L). Note that the nuclei detected in low-grade tumor were relatively rounder than the nuclei detected in high-grade glioma, reflecting pleomorphism. Such a difference could be used as an additional important parameter to distinguish low-grade from high-grade tumor.

Page 109: research.vu.nl dissertation.pdf · Chapter 1 Introduction to quantitative third harmonic generation 1 1.1 Third harmonic generation microscopy

Chapter 6

100

Table 6.1 Feature used for each pair-wise comparison. The density of brain cells was used to distinguish the normal tissue (WM & GM) from the low-grade tumor tissue. The density of nuclei was used to distinguish the normal tissue from the high-grade tumor tissue of WM. The density of large bright cells/neuropil was used to distinguish the normal tissue from the high-grade tumor tissue of GM. The density of neuropil was used to distinguish the low-grade tissue from the high-grade tissue, and to distinguish normal WM from normal GM.

            N WM   N GM       LG            HG WM      HG GM
N WM        N/A    Neuropil   Brain cells   Nuclei     Large bright cells/Neuropil
N GM        N/A    N/A        Brain cells   Nuclei     Large bright cells
LG          N/A    N/A        N/A           Neuropil   Neuropil
HG WM       N/A    N/A        N/A           N/A        Nuclei
HG GM       N/A    N/A        N/A           N/A        N/A

Table 6.2 Thresholds for classifying THG images. Cell density, nuclei density, neuropil density and the density of large bright cells are integrated into a single metric. If the attributes of an image do not match one of the cases listed in this table, the image is labeled "unknown".

                         Brain cells   Nuclei   Neuropil   Large bright cells
Normal WM                <0.19         <0.04    ≥50        <0.02
Normal GM                <0.19         <0.04    <50        <0.01
Low-grade                ≥0.19         ≥0       ≥100       ≥0
High-grade               ≥0            ≥0.04    <100       <0.02
High-grade (Secondary)   <0.19         <0.04    <50        ≥0.01
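
Read as a decision rule, Table 6.2 can be applied per THG image once the four feature densities have been computed with the workflow of Section 6.5.2. The sketch below is a minimal, illustrative encoding of the table; the function name and argument names are ours, and the feature values are assumed to be in the same units as the table.

```python
def classify_thg_image(brain_cells, nuclei, neuropil, large_bright):
    """Assign a tissue class to one THG image from its four feature densities,
    following the thresholds of Table 6.2. The five rules are mutually
    exclusive, so their order does not matter; 'unknown' is returned when no
    rule applies."""
    if brain_cells >= 0.19 and neuropil >= 100:
        return "low-grade"
    if nuclei >= 0.04 and neuropil < 100 and large_bright < 0.02:
        return "high-grade"
    if brain_cells < 0.19 and nuclei < 0.04 and neuropil < 50 and large_bright >= 0.01:
        return "high-grade (secondary)"
    if brain_cells < 0.19 and nuclei < 0.04 and neuropil >= 50 and large_bright < 0.02:
        return "normal WM"
    if brain_cells < 0.19 and nuclei < 0.04 and neuropil < 50 and large_bright < 0.01:
        return "normal GM"
    return "unknown"

# Example: high brain-cell density combined with abundant neuropil -> low-grade.
print(classify_thg_image(brain_cells=0.25, nuclei=0.02, neuropil=120, large_bright=0.0))
```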



Figure 6.6 Detecting the tumor boundary of an infiltrative low-grade tumor. (A) The 6×4 mosaic image of an infiltrative low-grade tumor, with a transition zone. (B) Classification result using the thresholds summarized in Table 6.2. The transition zone, from the healthy area to the tumor area, detected by the classification fully agrees with visual inspection. (C) Density of brain cells. The density increases from the top-left to the bottom-right. (D) Density of nuclei. (E) Density of neuropil. The neuropil density in the low-grade infiltrative area appears higher than in the normal brain area because, although more neuropil structures are detected in the normal brain area, the neuropil in the low-grade tumor is thicker. (F) Density of large bright cells. Scale bar: 300 μm.



Figure 6.7 Summary of cell morphologies revealed by quantitative THG microscopy. (A-B) The most common brain cells in the THG images (A); the detected cell soma is shown in red and the lipofuscin granules in green (B). (C-D) A brain cell in normal GM with the nucleus dimly visible (C); the detected cell soma is shown in red and the nucleus in green (D). (E-F) A glial cell with an indented nucleus observed in the low-grade tumor tissue (E); the detected cell soma is shown in red and the nucleus in green (F). (G-H) A large bright cell with vacuolated cytoplasm in the edematous peritumoral neocortex of the high-grade glioma tissue (G); the detected cell soma is shown in white (H). (I-J) High cell density area with multiple pleomorphic tumor cell nuclei observed in the high-grade glioma of white matter (I); the detected cell nuclei are shown in white (J). (K-L) High nucleus density area observed in low-grade glioma (K); the detected cell nuclei are shown in white (L). Note that the nuclei shown in (K) are relatively rounder than the nuclei shown in the high-grade glioma (I). Images were denoised according to the protocol summarized in Fig. 6.2.

6.4 Discussion

Resection serves as the first-line treatment of diffuse gliomas. Safely maximizing the extent of resection (EOR), i.e., removing tumor regions while sparing healthy brain, remains a challenge. Diffuse gliomas are highly invasive and invade beyond the visible MRI borders [41]. The lack of a defined biological interface between tumor and normal brain increases the operative risk. Residual tumor gives rise to recurrence; for example, more than 85% of glioblastoma (GBM) recurrences occur at the resection cavity margin [42]. Gross total resection, on the other hand, can incur surgery-related deficits, which in turn reduce the median survival of patients [7, 43, 44]. Towards optimal surgical outcomes, several label-free optical techniques have been proposed to enable real-time feedback on tissue state, but which one has the most potential for clinical use is still debated.

In this paper, we demonstrate how quantitative THG imaging can be used to detect brain tumors by revealing microscopic tumor infiltration in ex-vivo surgical tissue samples. We have developed an automated segmentation workflow to quantify the H&E-relevant features. Statistical analysis of the resulting features indicates an excellent separation of tumor infiltration in the given specimens, reducing reliance on the interpretation of histopathologic data. Based on the analysis of 800 THG images from 10 patients, we obtain density thresholds for four key histopathological features: the density of brain cells, the density of nuclei, the density of neuropil and the density of large bright cells. With the combined thresholds, our data suggest that quantitative THG microscopy can accurately quantify the transition area between tumor infiltration and normal brain.


Although OCT, RS and SRS have also been shown to differentiate tumor infiltration, THG has its own distinctive merits. The imaging speed of SS-OCT is considerably faster than that of THG, and its FOV is larger, but THG provides a higher resolution and a richer cellular morphology that is currently out of reach for OCT techniques. It has also been reported that SS-OCT is not able to distinguish low-grade infiltrated from high-grade infiltrated zones [16], even though their therapies differ [8]. Full-field OCT [45] has a higher resolution than SS-OCT, but this comes at the cost of a significant increase in imaging time, and the resolution achieved is still not sufficient to reveal tissue morphology in a way comparable to THG. Moreover, it has been reported that full-field OCT is not sensitive enough to detect tumor infiltration [45]. An important limitation of RS is the lack of tools to support histopathological interpretation based on Raman spectra. SRS microscopy is able to detect infiltrated tumor areas, but the required protein/lipid ratio has no direct equivalent in H&E-stained images, reducing the potential of SRS to replace the current clinical standard. The Raman technique has been reported to be especially sensitive in densely tumor-infiltrated areas, and it requires extensive comparison of experimental spectra against libraries of reference spectra to resolve the subtle differences between the vibrational spectra of tumor tissue and normal brain tissue [5, 17]. Compared with Raman microscopy, THG directly visualizes more of the histopathological features currently used by pathologists, which could be more reliable and make the transition from current practice to in-situ optical biopsy much easier. In addition, the implementation of THG microscopy is less complicated and less expensive than SRS microscopy, because SRS requires two laser sources that are overlapped spatially and temporally. Towards a low-cost setup, the current THG system can be further optimized: the OPO can be removed and replaced by a femtosecond laser at 1500 nm, reducing the setup to a reasonable size. As a three-photon technique, THG holds the potential for deep-tissue imaging down to 1.4 mm [5, 46], revealing the pathological information of a tumor in deep white matter.

The cellular and nuclear morphologies observed in normal brain, low-grade glioma and high-grade glioma were so different that quantitative THG opens the door to a more comprehensive study of the morphology observed by THG, e.g., a comparison of the morphological differences of nuclei in low-grade and high-grade glioma and a study of the NC ratio of tumor cells. Moreover, the quantitative, pathology-relevant features observed with THG are so rich that they allow studying tumor cells in their natural environment, e.g., studying the tumor ecosystem [47]. Tumors are evolving ecosystems in which cancer subclones and the microenvironment interact [48]. The spatial distribution of different kinds of tumor and healthy cells is often studied to understand a tumor ecosystem, which is so far mainly achieved by analyzing H&E images. Because no staining is needed while the same rich H&E morphology can still be observed, label-free THG is an excellent and unique tool for long-term studies of the tumor ecosystem in its natural environment. The proposed image quantification procedure will enable the identification of tumor and healthy cells of different kinds, and the study of their spatial distribution and interactions.

To translate THG microscopy into clinical use, a proper selection of sampling locations and an independent experiment demonstrating the robustness of THG in a real surgical environment are still needed. As with any microscopic imaging modality, THG microscopy generates FOVs much smaller than a typical tumor cavity, so a sampling protocol needs to be developed to ensure representative imaging of the tumor cavity. Also, we will apply the developed quantitative ex-vivo THG to an in-vivo murine model harboring a human brain tumor, to simulate surgical conditions where blood, dissected and/or coagulated tissue, and movement associated with respiratory and cardiac cycles are present.


In summary, we have demonstrated the power of ex-vivo quantitative THG microscopy for detecting tumor infiltration in the human brain. THG microscopy holds the potential to improve the accuracy of brain tumor surgery. Our data provide strong justification that THG microscopy is a top choice among the current label-free imaging techniques targeting the same clinical application.

6.5 Materials and methods

6.5.1 THG microscopy and tissue preparation

The setup of the THG microscope used here has been described in detail previously [22] (Fig. 6.1A). In brief, the imaging setup consisted of a commercial two-photon laser-scanning microscope and a femtosecond laser source. The laser source was an optical parametric oscillator (OPO) pumped at 810 nm by a Ti:sapphire oscillator. The OPO generated 200 fs pulses at 1200 nm with a repetition rate of 80 MHz. We focused the OPO beam on the sample using a 25×/1.10 water-dipping objective. Two high-sensitivity GaAsP photomultiplier tubes (PMTs) equipped with narrowband filters at 400 nm and 600 nm were used to collect the THG and SHG signals, respectively. The signals were separated from the 1200 nm fundamental photons by a dichroic mirror (DM), split into SHG and THG channels by a DM, and passed through narrow-band interference filters for SHG and THG detection. The efficient back-scattering of the harmonic signals allowed for their detection in the epi-direction. The laser beam was transversely scanned over the sample by a pair of galvo mirrors. Imaging data were stored in 16-bit TIFF format.

All procedures on human tissue were performed with the approval of the Medical Ethical Committee of the VU University Medical Center and in accordance with Dutch license procedures and the Declaration of Helsinki. All patients gave written informed consent for tissue biopsy collection and signed a declaration permitting the use of their biopsy specimens in scientific research. We imaged brain tissue samples from eight patients diagnosed with low-grade glioma and two patients diagnosed with high-grade glioma (one from WM and one from GM), as well as two structurally normal reference samples, with THG microscopy. Structurally normal brain samples were cut from the temporal cortex and subcortical white matter that had to be removed for the surgical treatment of deeper brain structures affected by epilepsy. Tumor brain samples were cut from tumor margin areas (especially in low-grade diffuse glioma cases) and from the tumor core and peritumoral areas (in high-grade glioma cases).

After tissue resection, the brain tissue samples were placed within 30 s in ice-cold artificial cerebrospinal fluid (ACSF) at 4°C containing (in mM): NaCl (125); NaH2PO4 (1.25); NaHCO3 (26); KCl (3); MgSO4 (2); CaCl2 (2); glucose (10); with an osmolarity of 300 mosmol/kg. They were transported to the laboratory, located within 250 m of the operating room, within 15 min. We prepared a ~300 μm thick coronal slice of the freshly excised normal tissue in ice-cold ACSF solution with a vibratome (Microm HM 650V, Thermo Fisher Scientific). The slice was then placed in a plastic Petri dish (diameter 50 mm) and covered with a 0.17 mm thick glass cover slip to provide a flat sample surface during THG imaging. Freshly excised tumor tissue samples were cut with a surgical scalpel into several individual slices to generate flat surfaces, rinsed with ACSF to remove blood contamination, embedded in agar blocks and flattened with thin glass cover slips. After THG imaging, tumor samples were fixated in 4% formaldehyde, sliced into 5-μm-thick histological sections and routinely stained with hematoxylin and eosin (H&E) and luxol fast blue (LFB) for microscopic examination.


6.5.2 Quantification workflow

The segmentation of THG brain images of mouse and structurally normal human brain tissues has been discussed in our previous papers [32, 33]. Here we propose an integrated workflow (Fig. 6.2) to quantify all the pathologically relevant features in THG images of normal brain and tumor tissues. THG images were first preprocessed by histogram truncation and local histogram equalization [49] to enhance the global and local contrast. The local histogram equalization removed the local intensity inhomogeneity caused by the rough surfaces of the imaged tissue. The enhanced images were denoised by salient edge-enhancing anisotropic diffusion [32], which removes noise while keeping edges sharp. We applied active contour weighted by prior extreme (ACPE) [32] for segmenting dark and bright objects. More precisely, we addressed the 3-phase segmentation problem (dark objects, bright objects and an intermediate background) by splitting it into two separate problems, each of which was solved by ACPE. The background was further processed by a slightly modified version of the circular Hough transform (CHT) [50] to extract round or elliptical nuclei that were not detected by the 3-phase segmentation. Our CHT implementation contains a voting process that maps image edge points into an accumulation matrix; peaks in the accumulation matrix correspond to circle centers. We only allowed the edge pixels to vote along the gradient direction.
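
To make the gradient-restricted voting concrete, the sketch below implements a minimal circular Hough transform in which each edge pixel votes only along its own gradient direction, at a set of candidate radii. It is an illustration of the voting idea rather than the exact implementation of [50]; the edge-selection percentile and the function name are our own choices.

```python
import numpy as np
from scipy import ndimage

def gradient_directed_cht(image, radii, edge_percentile=95):
    """Circular Hough transform with gradient-directed voting (illustrative).

    image : 2D float array; radii : sequence of candidate radii in pixels.
    Returns an accumulator of shape (len(radii), H, W); peaks mark circle centers.
    """
    gy = ndimage.sobel(image, axis=0, mode="nearest")
    gx = ndimage.sobel(image, axis=1, mode="nearest")
    mag = np.hypot(gx, gy)
    edges = mag > np.percentile(mag, edge_percentile)   # keep the strongest edges
    ys, xs = np.nonzero(edges)
    uy = gy[edges] / (mag[edges] + 1e-12)               # unit gradient components
    ux = gx[edges] / (mag[edges] + 1e-12)

    H, W = image.shape
    acc = np.zeros((len(radii), H, W))
    for k, r in enumerate(radii):
        # Each edge pixel votes on both sides along its gradient direction,
        # since a nucleus may be brighter or darker than its surroundings.
        for sign in (+1, -1):
            cy = np.round(ys + sign * r * uy).astype(int)
            cx = np.round(xs + sign * r * ux).astype(int)
            inside = (cy >= 0) & (cy < H) & (cx >= 0) & (cx < W)
            np.add.at(acc[k], (cy[inside], cx[inside]), 1)
    return acc
```

Peaks in the accumulator (e.g., local maxima above a count threshold) give candidate nucleus centers and radii; the edge percentile of 95 is arbitrary and would need tuning per dataset.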

To quantify the dark objects, a hole-filling algorithm was applied to all dark objects and an object-splitting algorithm [33] was applied to separate slightly touching objects. Dark objects smaller than 1000 pixels (~6 µm × 6 µm) were ignored. For the quantification of neuropil, large bright cells and nuclei, all bright objects smaller than 500 pixels were ignored. The remaining bright objects were sorted by sphericity/compactness [51]. Bright objects larger than 5000 pixels with relatively high sphericity (>0.3) were regarded as large bright cells, and bright objects with sphericity <0.1 were considered neuropil. The object-splitting algorithm [33] was applied to the remaining bright objects to separate slightly touching nuclei. The bright objects with sphericity larger than 0.5 were combined with the nuclei detected in the background to form the set of nuclei in the whole image. The dark objects and nuclei together constituted the detected brain cells.
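
The bright-object rules above can be written down compactly. The sketch below classifies labeled bright objects by size and a sphericity proxy; since the exact sphericity/compactness definition follows [51] and is not reproduced here, the 2D compactness 4π·area/perimeter² used below is an assumption, as is the use of scikit-image for labeling.

```python
import numpy as np
from skimage import measure

def classify_bright_objects(bright_mask):
    """Split segmented bright objects into large bright cells, neuropil and
    nucleus candidates using the size/sphericity rules of this section.
    Objects with intermediate sphericity (0.1-0.5) would be passed on to the
    splitting step of [33] before re-evaluation."""
    labels = measure.label(bright_mask)
    large_bright, neuropil, nucleus_candidates = [], [], []
    for region in measure.regionprops(labels):
        if region.area < 500:                       # too small: ignored
            continue
        perimeter = max(region.perimeter, 1.0)
        compactness = 4.0 * np.pi * region.area / perimeter ** 2
        if region.area > 5000 and compactness > 0.3:
            large_bright.append(region.label)       # large, fairly round
        elif compactness < 0.1:
            neuropil.append(region.label)           # elongated, fiber-like
        elif compactness > 0.5:
            nucleus_candidates.append(region.label) # round: nucleus candidate
    return large_bright, neuropil, nucleus_candidates
```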

References

[1] J. A. Schwartzbaum, J. L. Fisher, K. D. Aldape, and M. Wrensch, "Epidemiology and molecular pathology of glioma," Nat Clin Pract Neurol, vol. 2, pp. 494-503; quiz 1 p following 516, Sep 2006.

[2] D. Orringer, D. Lau, S. Khatri, G. J. Zamora-Berridi, K. Zhang, C. Wu, N. Chaudhary, and O. Sagher, "Extent of resection in patients with glioblastoma: limiting factors, perception of resectability, and effect on survival," J Neurosurg, vol. 117, pp. 851-9, Nov 2012.

[3] L. Chin, M. Meyerson, K. Aldape, D. Bigner, T. Mikkelsen, S. VandenBerg, A. Kahn, R. Penny, M. L. Ferguson, D. S. Gerhard, G. Getz, C. Brennan, B. S. Taylor, W. Winckler, P. Park, M. Ladanyi, K. A. Hoadley, R. G. W. Verhaak, D. N. Hayes, P. T. Spellman, D. Absher, B. A. Weir, L. Ding, D. Wheeler, M. S. Lawrence, K. Cibulskis, E. Mardis, J. H. Zhang, R. K. Wilson, L. Donehower, D. A. Wheeler, E. Purdom, J. Wallis, P. W. Laird, J. G. Herman, K. E. Schuebel, D. J. Weisenberger, S. B. Baylin, N. Schultz, J. Yao, R. Wiedemeyer, J. Weinstein, C. Sander, R. A. Gibbs, J. Gray, R. Kucherlapati, E. S. Lander, R. M. Myers, C. M. Perou, R. McLendon, A. Friedman, E. G. Van Meir, D. J. Brat, G. M. Mastrogianakis, J. J. Olson, N. Lehman, W. K. A. Yung, O. Bogler, M. Berger, M. Prados, D. Muzny, M. Morgan, S. Scherer, A. Sabo, L. Nazareth, L. Lewis, O. Hall, Y. M. Zhu, Y. R. Ren, O. Alvi, J. Q. Yao, A. Hawes, S. Jhangiani, G. Fowler, A. San Lucas, C. Kovar, A. Cree, H. Dinh, J. Santibanez, V.


Joshi, M. L. Gonzalez-Garay, C. A. Miller, A. Milosavljevic, C. Sougnez, T. Fennell, S. Mahan, J. Wilkinson, L. Ziaugra, R. Onofrio, T. Bloom, R. Nicol, K. Ardlie, J. Baldwin, S. Gabriel, R. S. Fulton, M. D. McLellan, D. E. Larson, X. Q. Shi, R. Abbott, L. Fulton, K. Chen, D. C. Koboldt, M. C. Wendl, R. Meyer, Y. Z. Tang, L. Lin, J. R. Osborne, B. H. Dunford-Shore, T. L. Miner, K. Delehaunty, C. Markovic, G. Swift, W. Courtney, C. Pohl, S. Abbott, A. Hawkins, S. Leong, C. Haipek, H. Schmidt, M. Wiechert, T. Vickery, S. Scott, D. J. Dooling, A. Chinwalla, G. M. Weinstock, M. O'Kelly, J. Robinson, G. Alexe, R. Beroukhim, S. Carter, D. Chiang, J. Gould, S. Gupta, J. Korn, C. Mermel, J. Mesirov, S. Monti, H. Nguyen, M. Parkin, M. Reich, N. Stransky, L. Garraway, T. Golub, A. Protopopov, I. Perna, S. Aronson, N. Sathiamoorthy, G. Ren, H. Kim, S. K. Kong, Y. H. Xiao, I. S. Kohane, J. Seidman, L. Cope, F. Pan, D. Van Den Berg, L. Van Neste, J. M. Yi, J. Z. Li, A. Southwick, S. Brady, A. Aggarwal, T. Chung, G. Sherlock, J. D. Brooks, L. R. Jakkula, A. V. Lapuk, H. Marr, S. Dorton, Y. G. Choi, J. Han, A. Ray, V. Wang, S. Durinck, M. Robinson, N. J. Wang, K. Vranizan, V. Peng, E. Van Name, G. V. Fontenay, J. Ngai, J. G. Conboy, B. Parvin, H. S. Feiler, T. P. Speed, N. D. Socci, A. Olshen, A. Lash, B. Reva, Y. Antipin, A. Stukalov, B. Gross, E. Cerami, W. Q. Wang, L. X. Qin, V. E. Seshan, L. Villafania, M. Cavatore, L. Borsu, A. Viale, W. Gerald, M. D. Topal, Y. Qi, S. Balu, Y. Shi, G. Wu, M. Bittner, T. Shelton, E. Lenkiewicz, S. Morris, D. Beasley, S. Sanders, R. Sfeir, J. Chen, D. Nassau, L. Feng, E. Hickey, C. Schaefer, S. Madhavan, K. Buetow, A. Barker, J. Vockley, C. Compton, J. Vaught, P. Fielding, F. Collins, P. Good, M. Guyer, B. Ozenberger, J. Peterson, E. Thomson, C. G. A. R. Network, T. S. Sites, G. S. Ctr, C. G. C. Ctr and P. Teams, "Comprehensive genomic characterization defines human glioblastoma genes and core pathways," Nature, vol. 455, pp. 1061-1068, Oct 23 2008.

[4] F. K. Albert, M. Forsting, K. Sartor, H. P. Adams, and S. Kunze, "Early postoperative magnetic resonance imaging after resection of malignant glioma: objective evaluation of residual tumor and its influence on regrowth and prognosis," Neurosurgery, vol. 34, pp. 45-60; discussion 60-1, Jan 1994.

[5] N. V. Kuzmin, P. Wesseling, P. C. Hamer, D. P. Noske, G. D. Galgano, H. D. Mansvelder, J. C. Baayen, and M. L. Groot, "Third harmonic generation imaging for fast, label-free pathology of human brain tumors," Biomed Opt Express, vol. 7, pp. 1889-904, May 01 2016.

[6] N. Sanai and M. S. Berger, "Glioma extent of resection and its impact on patient outcome," Neurosurgery, vol. 62, pp. 753-64; discussion 264-6, Apr 2008.

[7] N. Sanai, M. Y. Polley, M. W. McDermott, A. T. Parsa, and M. S. Berger, "An extent of resection threshold for newly diagnosed glioblastomas," J Neurosurg, vol. 115, pp. 3-8, Jul 2011.

[8] I. Y. Eyupoglu, M. Buchfelder, and N. E. Savaskan, "Surgical resection of malignant gliomas-role in optimizing patient outcome," Nat Rev Neurol, vol. 9, pp. 141-51, Mar 2013.

[9] P. M. Black, T. Moriarty, E. Alexander, 3rd, P. Stieg, E. J. Woodard, P. L. Gleason, C. H. Martin, R. Kikinis, R. B. Schwartz, and F. A. Jolesz, "Development and implementation of intraoperative magnetic resonance imaging and its neurosurgical applications," Neurosurgery, vol. 41, pp. 831-42; discussion 842-5, Oct 1997.

[10] E. Uhl, S. Zausinger, D. Morhard, T. Heigl, B. Scheder, W. Rachinger, C. Schichor, and J. C. Tonn, "Intraoperative computed tomography with integrated navigation system in a multidisciplinary operating suite," Neurosurgery, vol. 64, pp. 231-9; discussion 239-40, May 2009.

[11] R. M. Comeau, A. F. Sadikot, A. Fenster, and T. M. Peters, "Intraoperative ultrasound for guidance and tissue shift correction in image-guided neurosurgery," Med Phys, vol. 27, pp. 787-800, Apr 2000.

[12] W. Stummer, F. Rodrigues, P. Schucht, M. Preuss, D. Wiewrodt, U. Nestler, M. Stein, J. M. Artero, N. Platania, J. Skjoth-Rasmussen, A. Della Puppa, J. Caird, S. Cortnum, S. Eljamel, C. Ewald, L. Gonzalez-Garcia, A. J. Martin, A. Melada, A. Peraud, A. Brentrup, T. Santarius, H. H. Steiner, and A. L. A. P. B. T. S. G. European, "Predicting the "usefulness" of 5-ALA-derived tumor fluorescence


for fluorescence-guided resections in pediatric brain tumors: a European survey," Acta Neurochir (Wien), vol. 156, pp. 2315-24, Dec 2014.

[13] O. M. Rygh, T. Selbekk, S. H. Torp, S. Lydersen, T. A. Hernes, and G. Unsgaard, "Comparison of navigated 3D ultrasound findings with histopathology in subsequent phases of glioblastoma resection," Acta Neurochir (Wien), vol. 150, pp. 1033-41; discussion 1042, Oct 2008.

[14] D. G. Barone, T. A. Lawrie, and M. G. Hart, "Image guided surgery for the resection of brain tumours," Cochrane Database Syst Rev, p. CD009685, Jan 28 2014.

[15] Y. Li, R. Rey-Dios, D. W. Roberts, P. A. Valdés, and A. A. Cohen-Gadol, "Intraoperative fluorescence-guided resection of high-grade gliomas: a comparison of the present techniques and evolution of future strategies," World neurosurgery, vol. 82, pp. 175-185, Aug 2014.

[16] C. Kut, K. L. Chaichana, J. F. Xi, S. M. Raza, X. B. Ye, E. R. McVeigh, F. J. Rodriguez, A. Quinones-Hinojosa, and X. D. Li, "Detection of human brain cancer infiltration ex vivo and in vivo using quantitative optical coherence tomography," Science Translational Medicine, vol. 7, Jun 17 2015.

[17] M. Jermyn, K. Mok, J. Mercier, J. Desroches, J. Pichette, K. Saint-Arnaud, L. Bernstein, M. C. Guiot, K. Petrecca, and F. Leblond, "Intraoperative brain cancer detection with Raman spectroscopy in humans," Sci Transl Med, vol. 7, p. 274ra19, Feb 11 2015.

[18] M. Ji, S. Lewis, S. Camelo-Piragua, S. H. Ramkissoon, M. Snuderl, S. Venneti, A. Fisher-Hubbard, M. Garrard, D. Fu, A. C. Wang, J. A. Heth, C. O. Maher, N. Sanai, T. D. Johnson, C. W. Freudiger, O. Sagher, X. S. Xie, and D. A. Orringer, "Detection of human brain tumor infiltration with quantitative stimulated Raman scattering microscopy," Sci Transl Med, vol. 7, p. 309ra163, Oct 14 2015.

[19] Y. Barad, H. Eisenberg, M. Horowitz, and Y. Silberberg, "Nonlinear scanning laser microscopy by third harmonic generation," Applied Physics Letters, vol. 70, pp. 922-924, Feb 24 1997.

[20] D. Debarre, W. Supatto, A. M. Pena, A. Fabre, T. Tordjmann, L. Combettes, M. C. Schanne-Klein, and E. Beaurepaire, "Imaging lipid bodies in cells and tissues using third-harmonic generation microscopy," Nature Methods, vol. 3, pp. 47-53, Jan 2006.

[21] N. Olivier, M. A. Luengo-Oroz, L. Duloquin, E. Faure, T. Savy, I. Veilleux, X. Solinas, D. Debarre, P. Bourgine, A. Santos, N. Peyrieras, and E. Beaurepaire, "Cell Lineage Reconstruction of Early Zebrafish Embryos Using Label-Free Nonlinear Microscopy," Science, vol. 329, pp. 967-971, Aug 20 2010.

[22] S. Witte, A. Negrean, J. C. Lodder, C. P. De Kock, G. T. Silva, H. D. Mansvelder, and M. L. Groot, "Label-free live brain imaging and targeted patching with third-harmonic generation microscopy," Proceedings of the National Academy of Sciences, vol. 108, pp. 5970-5975, 2011.

[23] H. Lim, D. Sharoukhov, I. Kassim, Y. Q. Zhang, J. L. Salzer, and C. V. Melendez-Vasquez, "Label-free imaging of Schwann cell myelination by third harmonic generation microscopy," Proceedings of the National Academy of Sciences of the United States of America, vol. 111, pp. 18025-18030, Dec 16 2014.

[24] H. Fox, "Is H&E morphology coming to an end?," Journal of Clinical Pathology, vol. 53, pp. 38-40, Jan 2000.

[25] M. N. Gurcan, L. E. Boucheron, A. Can, A. Madabhushi, N. M. Rajpoot, and B. Yener, "Histopathological image analysis: a review," IEEE Rev Biomed Eng, vol. 2, pp. 147-71, 2009.

[26] M. B. Ji, D. A. Orringer, C. W. Freudiger, S. Ramkissoon, X. H. Liu, D. Lau, A. J. Golby, I. Norton, M. Hayashi, N. Y. R. Agar, G. S. Young, C. Spino, S. Santagata, S. Camelo-Piragua, K. L. Ligon, O. Sagher, and X. S. Xie, "Rapid, Label-Free Detection of Brain Tumors with Stimulated Raman Scattering Microscopy," Science Translational Medicine, vol. 5, Sep 4 2013.

[27] S. Y. Chen, C. S. Hsieh, S. W. Chu, C. Y. Lin, C. Y. Ko, Y. C. Chen, H. J. Tsai, C. H. Hu, and C. K. Sun, "Noninvasive harmonics optical microscopy for long-term observation of embryonic nervous system development in vivo," Journal of Biomedical Optics, vol. 11, Sep-Oct 2006.


[28] P. C. Wu, T. Y. Hsieh, Z. U. Tsai, and T. M. Liu, "In vivo Quantification of the Structural Changes of Collagens in a Melanoma Microenvironment with Second and Third Harmonic Generation Microscopy," Scientific Reports, vol. 5, Mar 9 2015.

[29] S. Y. Chen, S. U. Chen, H. Y. Wu, W. J. Lee, Y. H. Liao, and C. K. Sun, "In Vivo Virtual Biopsy of Human Skin by Using Noninvasive Higher Harmonic Generation Microscopy," IEEE Journal of Selected Topics in Quantum Electronics, vol. 16, pp. 478-492, May-Jun 2010.

[30] E. Gavgiotaki, G. Filippidis, H. Markomanolaki, G. Kenanakis, S. Agelaki, V. Georgoulias, and I. Athanassakis, "Distinction between breast cancer cell subtypes using third harmonic generation microscopy," Journal of Biophotonics, Nov 2016.

[31] W. Lee, M. M. Kabir, R. Emmadi, and K. C. Toussaint, Jr., "Third-harmonic generation imaging of breast tissue biopsies," J Microsc, vol. 264, pp. 175-181, Nov 2016.

[32] Z. Zhang, N. V. Kuzmin, M. Louise Groot, and J. C. de Munck, "Extracting morphologies from third harmonic generation images of structurally normal human brain tissue," Bioinformatics, Jan 27 2017.

[33] Z. Zhang, N. V. Kuzmin, M. L. Groot, and J. C. de Munck, "Quantitative comparison of 3D third harmonic generation and fluorescence microscopy images," J Biophotonics, May 02 2017.

[34] D. Yelin and Y. Silberberg, "Laser scanning third-harmonic-generation microscopy in biology," Opt Express, vol. 5, pp. 169-75, Oct 11 1999.

[35] R. Prayson, B. K. Kleinschmidt-DeMasters, and M. Cohen, Brain Tumors. Springer Publishing Company, 2009.

[36] D. C. Ciresan, A. Giusti, L. M. Gambardella, and J. Schmidhuber, "Mitosis detection in breast cancer histology images with deep neural networks," Med Image Comput Comput Assist Interv, vol. 16, pp. 411-8, 2013.

[37] M. Veta, J. P. W. Pluim, P. J. van Diest, and M. A. Viergever, "Breast Cancer Histopathology Image Analysis: A Review (vol 61, pg 1400, 2014)," IEEE Transactions on Biomedical Engineering, vol. 61, pp. 2819-2819, Nov 2014.

[38] C. M. Li, R. Huang, Z. H. Ding, J. C. Gatenby, D. N. Metaxas, and J. C. Gore, "A Level Set Method for Image Segmentation in the Presence of Intensity Inhomogeneities With Application to MRI," IEEE Transactions on Image Processing, vol. 20, pp. 2007-2016, Jul 2011.

[39] Z. X. Lin, "Glioma-related edema: new insight into molecular mechanisms and their clinical implications," Chinese Journal of Cancer, vol. 32, pp. 49-52, Jan 2013.

[40] A. Khanna, K. T. Kahle, B. P. Walcott, V. Gerzanich, and J. M. Simard, "Disruption of Ion Homeostasis in the Neurogliovascular Unit Underlies the Pathogenesis of Ischemic Cerebral Edema," Translational Stroke Research, vol. 5, pp. 3-16, Feb 2014.

[41] A. Pirzkall, S. J. Nelson, T. R. McKnight, M. M. Takahashi, X. J. Li, E. E. Graves, L. J. Verhey, W. W. Wara, D. A. Larson, and P. K. Sneed, "Metabolic imaging of low-grade gliomas with three-dimensional magnetic resonance spectroscopy," International Journal of Radiation Oncology Biology Physics, vol. 53, pp. 1254-1264, Aug 1 2002.

[42] K. Petrecca, M. C. Guiot, V. Panet-Raymond, and L. Souhami, "Failure pattern following complete resection plus radiotherapy and temozolomide is at the resection margin in patients with glioblastoma," Journal of Neuro-Oncology, vol. 111, pp. 19-23, Jan 2013.

[43] J. S. Smith, E. F. Chang, K. R. Lamborn, S. M. Chang, M. D. Prados, S. Cha, T. Tihan, S. VandenBerg, M. W. McDermott, and M. S. Berger, "Role of extent of resection in the long-term outcome of low-grade hemispheric gliomas," Journal of Clinical Oncology, vol. 26, pp. 1338-1345 %@ 0732-183X, 2008.

[44] M. J. McGirt, D. Mukherjee, K. L. Chaichana, K. D. Than, J. D. Weingart, and A. Quinones-Hinojosa, "Association of Surgically Acquired Motor and Language Deficits on Overall Survival after Resection of Glioblastoma Multiforme," Neurosurgery, vol. 65, pp. 463-470, Sep 2009.


[45] O. Assayag, K. Grieve, B. Devaux, F. Harms, J. Pallud, F. Chretien, C. Boccara, and P. Varlet, "Imaging of non-tumorous and tumorous human brain tissues with full-field optical coherence tomography," Neuroimage-Clinical, vol. 2, pp. 549-557, 2013.

[46] N. G. Horton, K. Wang, D. Kobat, C. G. Clark, F. W. Wise, C. B. Schaffer, and C. Xu, "In vivo three-photon microscopy of subcortical structures within an intact mouse brain," Nature Photonics, vol. 7, pp. 205-209, Mar 2013.

[47] K. S. Korolev, J. B. Xavier, and J. Gore, "Turning ecology and evolution against cancer," Nature Reviews Cancer, vol. 14, pp. 371-380, May 2014.

[48] S. Nawaz and Y. Y. Yuan, "Computational pathology: Exploring the spatial dimension of tumor ecology," Cancer Letters, vol. 380, pp. 296-303, Sep 28 2016.

[49] J. Y. Kim, L. S. Kim, and S. H. Hwang, "An advanced contrast enhancement using partially overlapped sub-block histogram equalization," IEEE Transactions on Circuits and Systems for Video Technology, vol. 11, pp. 475-484, Apr 2001.

[50] Y. F. Wang and G. X. Cheng, "Application of gradient-based Hough transform to the detection of corrosion pits in optical images," Applied Surface Science, vol. 366, pp. 9-18, Mar 15 2016.

[51] X. W. Chen, X. B. Zhou, and S. T. C. Wong, "Automated segmentation, classification, and tracking of cancer cell nuclei in time-lapse microscopy," IEEE Transactions on Biomedical Engineering, vol. 53, pp. 762-766, Apr 2006.


Chapter 7

Discussion and outlook


7.1 General discussion

In this thesis, THG images of ex-vivo brain tissues have been comprehensively studied in a quantitative manner. For this purpose, new image processing tools, i.e., a new ADF for image denoising (chapters 2 and 5) and new ACMs (chapters 2 and 4), have been developed. The resulting image quantification pipelines enable quantitative comparison of THG and fluorescence/SHG/auto-fluorescence images of brain tissues, which facilitates the interpretation of the acquired THG data (chapters 2 and 3). Both dark and bright objects were confirmed to be brain cells in healthy human brain tissue (chapter 2) and mouse brain tissue (chapter 3). The developed image processing tools were applied to detect key pathologically relevant features observed with THG microscopy in healthy and tumor human brain tissues, after which a statistical study revealed the quantitative differences between healthy and tumor tissues (chapter 6). This thesis has therefore significantly strengthened the clinical potential of THG microscopy as a tool for improving the surgical outcome of brain tumor resection in situ.

In this chapter these results are briefly discussed in a broader perspective, and an outlook is given on potential future research of THG microscopy and the further applications of the developed image processing tools.

7.1.1 Automatic diagnosis of human brain tumor

Hardware developments in THG microscopy have enabled its application to brain tumor imaging. However, this is only half the job. The other half, equally important, is to develop suitable image processing tools to quantify the histopathological features observed by THG microscopy. The interplay between chapters 2, 3 and 6 of this thesis illustrates this nicely. The THG instrument described in chapter 1 formed the basis of the image analysis in this thesis. It enabled high-contrast imaging of live brain tissue at cellular resolution, without the need for fluorescent probes. On the other hand, the automatic analysis of the acquired THG brain images had not been studied before and is challenging because of the 3-phase segmentation problem, low signal-to-noise ratio, intensity inhomogeneity, low local contrast, post-processing and validation. These challenges were first addressed together in chapter 2, where THG images of structurally normal human brain tissue were used to test the developed algorithms and pipeline. The salient edge-enhancing model of anisotropic diffusion was able to reconstruct all salient dark and bright objects while keeping their edges sharp, but the anisotropic diffusion approach was too computationally expensive for routine clinical application in the operating theater. To accelerate the computations, a new scheme for anisotropic diffusion was presented in chapter 5, which allows diffusion only in the non-flat areas and reformulates anisotropic diffusion as a convex minimization problem. The resulting model was 50% more time-efficient than existing anisotropic diffusion models; it took 30 minutes to process a 1000×1000×50-voxel 3D image and around 10 seconds to process a 1000×1000-pixel 2D image. Considering that the H&E-stained images used in current clinical practice are commonly 2D, the processing time approaches real time.

The active contour model proposed in chapter 2 was able to detect both the dark and bright objects. The global intensity extremes (GIEs) were incorporated into the CV model, and the resulting model was able to overcome the problem of intensity inhomogeneity within THG images by sacrificing some foreground pixels/voxels. To further deal with the global and local intensity inhomogeneities, it was demonstrated in chapter 4 that GIEs can easily be incorporated into more recent active contour models to solve the intensity inhomogeneity problem completely. One limitation of the new active contour models is that two


new parameters need to be tuned for each type of microscopy image, but we also found that, once tuned, the segmentation output was quite predictable and stable.

The image denoising and segmentation algorithms developed in this thesis are improvements of classical models that have been widely applied. Therefore, the resulting image quantification workflow was able to accurately detect the brain cells, nuclei, neuropil and large bright cells in human brain tumor tissues. In chapter 6, the workflow was applied to THG images from 12 patients undergoing neurosurgery. Statistical analysis of the densities of the quantified features revealed the quantitative differences between tumor and healthy tissue. The density thresholds derived from these features enabled detection of tumor infiltration, and thus of the tumor boundary, with high sensitivity and specificity. With these results we have demonstrated the feasibility of automatic diagnosis of human brain tumors.

Before the goal of real-time imaging and automatic diagnosis of brain tumors, whether imaged ex-vivo or in-situ, can be reached, several issues still need to be addressed. One question is how fast the algorithms for automatic diagnosis should be. The image acquisition time of THG is around 2 minutes for a 1000×1000×50-voxel 3D image and around 1 second for a 1000×1000-pixel 2D image, while the corresponding image processing time is around 40 minutes for 3D and 20 seconds for 2D. The most time-consuming steps are the anisotropic diffusion and the active contour. To make the image processing time comparable to the image acquisition time, a GPU implementation of the anisotropic diffusion model and the active contour model is a good candidate. In future work we will explore to what extent the algorithms can benefit from a GPU implementation, and to what extent the accuracy of the image analysis is affected when these PDE-based models are replaced by more efficient yet less accurate methods, e.g., replacing the anisotropic diffusion by Gaussian smoothing and the active contour by fixed thresholds (see the sketch below). Another question is whether 3D is necessary for in-situ brain tumor diagnosis. The maximal penetration depth of THG is around 300 µm, which only reaches the superficial layers of the brain. Considering that tumors lie in deeper white matter, a handheld probe could be an essential component of a clinical THG setting. Complete 3D imaging with such a handheld probe would be infeasible, because the image acquisition time needed would be too long to avoid spatial distortion. Hence 2D THG imaging is more feasible for now, and 3D will become the better choice once image acquisition and processing approach real time. Furthermore, more THG data need to be collected as an independent validation of the conclusions obtained in chapter 6, and a critical evaluation of the degree of agreement between THG and H&E images for detecting the presence of tumor infiltration is also required. Finally, more independent experiments are needed, e.g., using an in-vivo mouse model with a patient-derived brain tumor (tumor cells injected into a mouse brain to induce a brain tumor) to simulate surgical conditions where blood, dissected and/or coagulated tissue, and movement associated with respiratory and cardiac cycles are present.
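
As a point of reference for such a speed-oriented simplification, the sketch below replaces anisotropic diffusion with Gaussian smoothing and the active contours with fixed intensity thresholds. It is a baseline for timing and accuracy comparisons only; the threshold values are illustrative assumptions, not values used in this thesis.

```python
import numpy as np
from scipy import ndimage

def fast_baseline_segmentation(image, sigma=1.5, dark_thresh=0.2, bright_thresh=0.8):
    """Fast approximation of the denoising + segmentation steps:
    Gaussian smoothing instead of anisotropic diffusion, and fixed
    thresholds instead of active contours."""
    img = image.astype(float)
    img = (img - img.min()) / (np.ptp(img) + 1e-12)   # normalize to [0, 1]
    smoothed = ndimage.gaussian_filter(img, sigma=sigma)
    dark_objects = smoothed < dark_thresh             # candidate dark cell bodies
    bright_objects = smoothed > bright_thresh         # candidate bright objects
    return dark_objects, bright_objects
```

Comparing the detections of such a baseline with those of the PDE-based pipeline on the same images would quantify how much accuracy is traded for speed.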

7.1.2 Quantitative comparison of THG and other imaging techniques

Next to automatic image classification, the other main problem addressed in this thesis is how to interpret the features observed by THG. The interpretation of the observed features is usually established by linking them to more standard imaging techniques. So far, however, the link between THG and other imaging techniques had only been established qualitatively. THG images of mouse brain have been compared one-to-one with two-photon fluorescence images in which the cell nuclei of interneurons were labeled with Green Fluorescent Protein [1]. THG images of structurally normal and tumor tissues have been compared, not one-to-one,


with histopathological images of the same samples stained with hematoxylin and eosin [2]. Such a qualitative link does not guarantee that each observed object indeed corresponds to the structure it is assumed to represent.

This thesis provides quantitative evidence confirming the interpretation of dark and bright objects as brain cells. The two light-collecting channels of our THG setup enabled simultaneous collection of THG and SHG/auto-fluorescence/fluorescence signals from the same tissue area. In chapter 2, THG and SHG/auto-fluorescence signals were collected at the same time from healthy human brain tissue. The same image segmentation workflow was used to quantify the THG and SHG/auto-fluorescence images, and high accuracy was achieved. The quantitative comparison between these two types of images confirmed that the dark holes appearing in healthy human brain tissue are brain cells (neurons or glial cells). In chapter 3, a more comprehensive comparison was carried out on young mouse brain tissue, in which dark holes and bright objects were nicely and sparsely distributed in the acquired THG images. Before imaging, the mouse brain tissue was stained with the Hoechst-33342 dye to highlight all the nuclei of mouse brain cells. THG and two-photon fluorescence images were collected simultaneously and segmented for quantitative comparison. The high correspondence between the features observed by THG and the stained nuclei confirmed the interpretation of the dark and bright objects as brain cells. Moreover, we found that the bright objects were two times smaller than the dark objects, suggesting that the dark holes represent neurons while the bright objects may be either glial cells or apoptotic neurons typical for the young mouse brain. The quantitative evidence obtained here is not strong enough to conclude that THG microscopy is able to distinguish neurons from glial cells. To reach such a conclusion, more specific staining experiments are needed, e.g., β-tubulin (TUJ-1) staining of neurons and glial fibrillary acidic protein (GFAP) immunostaining to highlight glial cells. However, we expect that the algorithms, workflows and quantitative comparison developed in this thesis can be generalized to analyze such experimental data when they become available.
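
The core of such a comparison is an overlap measure between objects segmented in the two co-registered channels. As a simple illustration (not necessarily the exact measure used in chapter 3), the Dice coefficient between two binary masks could be computed as follows:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two binary segmentation masks, e.g. cells detected
    in the THG channel versus nuclei detected in the fluorescence channel of
    the same field of view."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum() + 1e-12)
```

A value close to 1 indicates that the two channels delineate the same structures; object-level matching (counting how many THG objects contain a stained nucleus) is an obvious refinement.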

The quantitative comparison of THG images of human brain tumor tissues with other imaging techniques, e.g., fluorescence imaging, will also be very valuable for further developing clinical applications of THG, and it can readily be attained using the strategy described in this thesis. We also expect that the described quantitative comparison can be generalized to help the interpretation of other label-free imaging techniques, such as stimulated Raman scattering (SRS) microscopy and optical coherence tomography (OCT), where such a quantitative comparison is currently lacking [3-6].

7.2 Pushing towards the future: Outlook

The research in this thesis demonstrates the feasibility of automatic diagnosis of ex-vivo human brain tumor tissue. The potential of research with THG microscopy stretches further. Here, we first present an alternative way to classify THG brain images and briefly discuss the merits and drawbacks of the approach developed in this thesis. Second, we discuss how the methods in this thesis could be applied to THG images of other tissue types. Third, we look further ahead and ask what the best solution for in-situ brain tumor pathology is from the perspective of optical imaging instruments. Fourth, beyond the automatic diagnosis of brain tumors, we show how the methods in this thesis can be used to study the tumor ecosystem with label-free THG microscopy. Finally, we ask what the next step for THG microscopy is and discuss approaches that could push THG towards super-resolution. This section is not intended to be conclusive; instead it explores potential applications that are relevant to THG imaging.


7.2.1 Applying deep learning to classify THG images directly

We have built a complete pipeline to standardize the image processing of THG brain images, towards clinical diagnosis of brain tumor tissue in an automatic manner. This image quantification pipeline mainly includes image denoising, segmentation, pattern recognition and classification steps. As an alternative, deep learning [7] has recently been applied increasingly to (bio)medical data because of its capability to learn and classify directly from the data [8, 9]. A deep learning network needs to be trained on a large number of images to recognize the most prominent shapes in the image dataset. When applied to THG images, one would not have to be concerned about the proper interpretation of the features observed in a THG brain image once the deep learning network has been fully trained. The network will learn the difference between tumor and healthy images automatically, and the cumbersome statistical analysis of chapter 6 can be skipped. If such a deep learning network can be trained for THG datasets of brain tissue, it will greatly facilitate the clinical application of THG imaging. However, (bio)medical data related to a certain disease, including THG data, might be too scarce to allow sufficient training of a deep learning network, possibly leading to misdiagnosis or no diagnosis at all.

A solution to the problem of data scarcity is so-called transfer learning [10]. Transfer learning uses a pre-trained neural network and transfers the features of this network to the network designed for a specific application. The parameters of the neural network are first trained on another dataset for which more data are available and which is either closely or distantly related to the task at hand. For specific deep learning networks, e.g., convolutional neural networks (CNNs) trained on natural images (ships, dogs, cars, houses, etc.), the learned features have been used to classify cell nuclei in histopathological images [11]. Similarly, we can evaluate the transferability of CNN architectures trained on natural images or histopathological images to THG brain datasets.
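
A minimal transfer-learning sketch along these lines is shown below: an ImageNet-pretrained CNN is reused, its backbone frozen, and only a new final layer is trained on THG patches. The choice of ResNet-18, the three-class output (normal / low-grade / high-grade) and all training details are illustrative assumptions, not a design from this thesis.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_thg_classifier(num_classes=3):
    """ImageNet-pretrained ResNet-18 with a new, trainable classification head."""
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for param in model.parameters():      # freeze the pretrained backbone
        param.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new head
    return model

model = build_thg_classifier()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# Training would loop over mini-batches of (patch, label) pairs, with grayscale
# THG patches replicated to three channels to match the pretrained input:
#   logits = model(patches); loss = criterion(logits, labels)
#   loss.backward(); optimizer.step(); optimizer.zero_grad()
```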

Compared to deep learning, the approach taken to process THG data in chapters 2-6 is more classical, but that does not mean our approach is out of date. First, for a new image type that is on its way towards a clinical device, our approach offers a way to better understand what we see in the images: the features observed by THG need to be quantitatively compared with more classical imaging techniques, and this cannot be replaced by deep learning. Second, the statistical study of chapter 6 tells exactly which features observed by THG can be used for tissue classification, which may in turn benefit the future design of a deep learning architecture. Moreover, the performance of deep learning on classifying THG images still needs to be evaluated, and the THG images labeled by our approach will facilitate the training of a deep learning network. Finally, at present it is still unknown to what extent THG images need to be pre-processed to obtain the best classification results with the least amount of training in the deep learning approach.

7.2.2 Applying the developed algorithms to THG images of other tissue types

This thesis focused on processing THG images of brain tissue. In the future we would like to apply the developed tools to THG images of other tissue types. Besides brain tissue, THG has been successfully applied to image intact tissues such as insect embryos, plant seeds and intact mammalian tissue [12], epithelial tissues [13-15], zebrafish embryos [16, 17] and the zebrafish nervous system [16]. THG microscopy also shows great potential for the clinical diagnosis of skin cancer [18] and breast tumors [19, 20]. To illustrate how the developed image processing tools can be generalized to THG images of other tissue types, Fig. 7.1 shows a THG image of fat tissue, which is a major component of the breast. The main features present in the image are the boundaries of fat cells. The anisotropic diffusion models


presented in chapters 2 and 5 take into account diffusion along the second diffusion direction, and therefore these plate-like structures can be enhanced in the filtered image. The active contour model presented in chapter 2 may only detect the brightest fat cells, but the modified model of chapter 4 will also detect the fat cells with lower contrast (e.g., the cell indicated by the yellow arrow in Fig. 7.1B).

Figure 7.1 A slice of a 3D SHG & THG image of fat tissue. (A) The SHG channel. (B) The THG channel. (C) The combined image of the SHG and THG channels. The data shown were collected by Laura van Huizen.

7.2.3 Combination of THG with other imaging techniques

Besides THG microscopy, several other label-free optical techniques, i.e., optical coherence tomography (OCT), Raman spectroscopy (RS) and stimulated Raman scattering (SRS) microscopy, have emerged to establish tumor boundaries at the cellular level [4, 5, 21]. High-speed 3D swept-source OCT (SS-OCT) uses optical attenuation differences between tumor and normal brain tissue to reflect the tissue state [5]. Raman spectroscopy [21] and SRS microscopy [4] have been reported to reliably detect tumor tissue in patients' brains or to show tumor boundaries ex-vivo. Compared to the Raman techniques, THG microscopy can directly visualize the classical H&E morphology, and the implementation of THG microscopy is technically less complicated and less expensive than SRS microscopy. SS-OCT has a scanning speed and FOV superior to THG microscopy, while THG provides a much higher image resolution than current OCT techniques. Therefore, the combination of SS-OCT and THG may be a better solution for in-situ brain tumor pathology than either technique alone.

From the perspective of hardware compatibility, the combination of SS-OCT and THG is feasible. A single laser source can serve both OCT and THG. The wavelength used for SS-OCT was 1310 nm [5] while that used for THG was 1200 nm [2], both of which lie in the near-infrared range. Considering that wavelengths in the range of 1200–1350 nm provide optimum contrast for THG microscopy [1], THG and OCT can share the same laser source at the same wavelength. Moreover, as the generated THG signal lies at exactly one third of the excitation wavelength, the OCT and THG signals can be perfectly separated; the SHG signal can be well separated from the THG and OCT signals as well. Another consideration is the power needed for OCT and THG. It has been reported in [5] and [2] that the power needed to generate the THG signal is higher than that needed for the OCT signal, but the THG setup can still be optimized [2] so that lower powers may suffice. Before integrating the OCT and THG systems, one could separately investigate whether the two modalities agree with each other on the pathology of the same tissue.
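
For completeness, the spectral separation follows directly from the harmonic relations, assuming the 1200 nm THG excitation of [2] and the 1310 nm SS-OCT source of [5]:

\[
\lambda_{\mathrm{THG}} = \frac{\lambda_{\mathrm{exc}}}{3} = \frac{1200~\mathrm{nm}}{3} = 400~\mathrm{nm},
\qquad
\lambda_{\mathrm{SHG}} = \frac{\lambda_{\mathrm{exc}}}{2} = 600~\mathrm{nm},
\]

both far below the 1200–1310 nm band used for excitation and OCT detection, so the three signals can be separated with dichroic mirrors.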


7.2.4 Studying the tumor ecosystem with quantitative THG

It has been shown in chapter 6 that quantitative THG is able to visualize and detect pathologically relevant features that identify the tumor boundary. Another application of quantitative THG in which the absence of staining is exploited is the long-term study of tumor evolution in its natural environment, i.e., the study of the tumor ecosystem. There is increasing interest in studying the genetic changes of tumor cells during their evolution from a novel perspective: ecology [22, 23]. In this perspective, tumors are considered evolving ecosystems in which cancer subclones and their microenvironment interact [23]. Computational pathology borrows the idea of spatial statistics from ecological studies: the spatial distribution of different kinds of cells in the tumor microenvironment is analyzed to infer complex relations including predator–prey interactions, resource dependency and co-evolution. However, these spatial distributions are so far only studied by analyzing H&E images. As a result, the obtained spatial distribution captures only one instantaneous state of the studied tumor ecosystem, because the tumor tissue dies quickly after H&E staining.

THG microscopy is a perfect imaging tool for long-term studies of the tumor ecosystem in its natural environment, because no staining is needed to provide contrast. The rich H&E morphology observed with label-free THG makes it unique for this purpose. The work in this thesis indicates that tumor and healthy areas can be distinguished by the proposed image quantification methods, and thus that tumor cells and healthy cells can be spatially located. Moreover, several papers in the literature show that THG microscopy is able to visualize immune cells, e.g., white blood cells [1, 2, 24-26]. Blood cells appear as bright objects in THG brain images, while other types of cells appear as dark objects. Because the algorithms developed in chapters 2, 4 and 5 can accurately detect both dark and bright cells, they hold the potential to classify these cells into different types in terms of intensity, size and shape. Therefore we expect that quantitative THG imaging of tumor ecosystems will enable the study of predation, mutualism, commensalism and parasitism within the tumor ecosystem with an unprecedented time resolution.
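
Once cell centroids and (putative) cell types are extracted by the quantification pipeline, standard spatial statistics can be applied to them. The sketch below computes a basic one, nearest-neighbor distances between two cell populations; the cell-type labels and the synthetic coordinates are purely illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbor_distances(tumor_xy, immune_xy):
    """Distance from each tumor-cell centroid to the closest immune-cell
    centroid (in the units of the input coordinates)."""
    tree = cKDTree(np.asarray(immune_xy))
    distances, _ = tree.query(np.asarray(tumor_xy), k=1)
    return distances

# Synthetic centroids in micrometres, standing in for pipeline output.
rng = np.random.default_rng(1)
tumor_cells = rng.uniform(0, 300, size=(50, 2))
immune_cells = rng.uniform(0, 300, size=(20, 2))
print(nearest_neighbor_distances(tumor_cells, immune_cells).mean())
```

Tracking how such statistics change over hours of label-free imaging of the same tissue would give the time-resolved view of the ecosystem discussed above.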

7.2.5 Towards super-resolution THG

THG microscopy has proven to be a high-contrast imaging technique for live tissue at cellular resolution, without the need for fluorescent probes. The resolution achieved was approximately 0.4 μm laterally and 2.4 μm axially. The question is whether we can push THG microscopy towards super-resolution. Nearly all reported super-resolution imaging techniques rely on fluorescent molecules to break the diffraction limit. These techniques exploit real energy states of fluorescent molecules to manipulate the fluorescent light emitted from probe molecules, which reduces the effective size of the point spread function (PSF) and thus enhances the spatial resolution of the microscope. Such fluorescent super-resolution techniques include stimulated emission depletion (STED) microscopy [27, 28], ground state depletion (GSD) microscopy [29, 30], saturated structured-illumination microscopy (SSIM) [31], photoactivated localization microscopy (PALM) [32, 33], stochastic optical reconstruction microscopy (STORM) [34], saturation of transient absorption in electronic states [35], and saturation of scattering due to surface plasmon resonance [36, 37].

In contrast, the contrast mechanism of harmonic generation relies on virtual energy states, to which most super-resolution methods cannot be applied. Several examples of harmonic generation microscopy below the diffraction limit have nevertheless been reported [38-40]. In [38, 39], the polarization state of the illuminating laser pulse was manipulated to reduce the size of the effective PSF, but it was reported in [40] that both methods are not robust for imaging at depth in tissue.


To provide super-resolved imaging in complex media, multiphoton spatial frequency-modulated imaging (MP-SPIFI) [41, 42] has been used for super-resolved imaging of second-harmonic generation (SHG) and two-photon excited fluorescence (TPEF) [40]. However, until now no super-resolved third harmonic generation has been reported on real tissue, and we will explore in the future the feasibility of using MP-SPIFI for this purpose.

References

[1] S. Witte, A. Negrean, J. C. Lodder, C. P. De Kock, G. T. Silva, H. D. Mansvelder, and M. L. Groot, "Label-free live brain imaging and targeted patching with third-harmonic generation microscopy," Proceedings of the National Academy of Sciences, vol. 108, pp. 5970-5975, 2011.

[2] N. V. Kuzmin, P. Wesseling, P. C. Hamer, D. P. Noske, G. D. Galgano, H. D. Mansvelder, J. C. Baayen, and M. L. Groot, "Third harmonic generation imaging for fast, label-free pathology of human brain tumors," Biomed Opt Express, vol. 7, pp. 1889-904, May 01 2016.

[3] M. B. Ji, D. A. Orringer, C. W. Freudiger, S. Ramkissoon, X. H. Liu, D. Lau, A. J. Golby, I. Norton, M. Hayashi, N. Y. R. Agar, G. S. Young, C. Spino, S. Santagata, S. Camelo-Piragua, K. L. Ligon, O. Sagher, and X. S. Xie, "Rapid, Label-Free Detection of Brain Tumors with Stimulated Raman Scattering Microscopy," Science Translational Medicine, vol. 5, Sep 4 2013.

[4] M. Ji, S. Lewis, S. Camelo-Piragua, S. H. Ramkissoon, M. Snuderl, S. Venneti, A. Fisher-Hubbard, M. Garrard, D. Fu, A. C. Wang, J. A. Heth, C. O. Maher, N. Sanai, T. D. Johnson, C. W. Freudiger, O. Sagher, X. S. Xie, and D. A. Orringer, "Detection of human brain tumor infiltration with quantitative stimulated Raman scattering microscopy," Sci Transl Med, vol. 7, p. 309ra163, Oct 14 2015.

[5] C. Kut, K. L. Chaichana, J. F. Xi, S. M. Raza, X. B. Ye, E. R. McVeigh, F. J. Rodriguez, A. Quinones-Hinojosa, and X. D. Li, "Detection of human brain cancer infiltration ex vivo and in vivo using quantitative optical coherence tomography," Science Translational Medicine, vol. 7, Jun 17 2015.

[6] R. Galli, O. Uckermann, A. Temme, E. Leipnitz, M. Meinhardt, E. Koch, G. Schackert, G. Steiner, and M. Kirsch, "Assessing the efficacy of coherent anti-Stokes Raman scattering microscopy for the detection of infiltrating glioblastoma in fresh brain samples," Journal of Biophotonics, vol. 10, pp. 404-414, Mar 2017.

[7] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, pp. 436-444, May 28 2015.

[8] G. Litjens, C. I. Sanchez, N. Timofeeva, M. Hermsen, I. Nagtegaal, I. Kovacs, C. Hulsbergen-van de Kaa, P. Bult, B. van Ginneken, and J. van der Laak, "Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis," Scientific Reports, vol. 6, May 23 2016.

[9] K. Sirinukunwattana, S. E. A. Raza, Y. W. Tsang, D. R. J. Snead, I. A. Cree, and N. M. Rajpoot, "Locality Sensitive Deep Learning for Detection and Classification of Nuclei in Routine Colon Cancer Histology Images," IEEE Transactions on Medical Imaging, vol. 35, pp. 1196-1206, May 2016.

[10] S. J. Pan and Q. A. Yang, "A Survey on Transfer Learning," IEEE Transactions on Knowledge and Data Engineering, vol. 22, pp. 1345-1359, Oct 2010.

[11] N. Bayramoglu and J. Heikkilä, "Transfer Learning for Cell Nuclei Classification in Histopathology Images," 2016, pp. 532-539.

[12] D. Debarre, W. Supatto, A. M. Pena, A. Fabre, T. Tordjmann, L. Combettes, M. C. Schanne-Klein, and E. Beaurepaire, "Imaging lipid bodies in cells and tissues using third-harmonic generation microscopy," Nature Methods, vol. 3, pp. 47-53, Jan 2006.

[13] B. Weigelin, G.-J. Bakker, and P. Friedl, "Intravital third harmonic generation microscopy of collective melanoma cell invasion: principles of interface guidance and microvesicle dynamics," IntraVital, vol. 1, pp. 32-43, Jul 2012.

[14] J. Adur, V. B. Pelegati, A. A. de Thomaz, M. O. Baratti, D. B. Almeida, L. A. Andrade, F. Bottcher-Luiz, H. F. Carvalho, and C. L. Cesar, "Optical biomarkers of serous and mucinous human ovarian tumor assessed with nonlinear optics microscopies," PLoS One, vol. 7, p. e47007, 2012.

[15] P. C. Wu, T. Y. Hsieh, Z. U. Tsai, and T. M. Liu, "In vivo Quantification of the Structural Changes of Collagens in a Melanoma Microenvironment with Second and Third Harmonic Generation Microscopy," Scientific Reports, vol. 5, Mar 9 2015.

[16] S. Y. Chen, C. S. Hsieh, S. W. Chu, C. Y. Lin, C. Y. Ko, Y. C. Chen, H. J. Tsai, C. H. Hu, and C. K. Sun, "Noninvasive harmonics optical microscopy for long-term observation of embryonic nervous system development in vivo," Journal of Biomedical Optics, vol. 11, Sep-Oct 2006.

[17] N. Olivier, M. A. Luengo-Oroz, L. Duloquin, E. Faure, T. Savy, I. Veilleux, X. Solinas, D. Debarre, P. Bourgine, A. Santos, N. Peyrieras, and E. Beaurepaire, "Cell Lineage Reconstruction of Early Zebrafish Embryos Using Label-Free Nonlinear Microscopy," Science, vol. 329, pp. 967-971, Aug 20 2010.

[18] S. Y. Chen, S. U. Chen, H. Y. Wu, W. J. Lee, Y. H. Liao, and C. K. Sun, "In Vivo Virtual Biopsy of Human Skin by Using Noninvasive Higher Harmonic Generation Microscopy," IEEE Journal of Selected Topics in Quantum Electronics, vol. 16, pp. 478-492, May-Jun 2010.

[19] E. Gavgiotaki, G. Filippidis, H. Markomanolaki, G. Kenanakis, S. Agelaki, V. Georgoulias, and I. Athanassakis, "Distinction between breast cancer cell subtypes using third harmonic generation microscopy," Journal of Biophotonics, Nov 2016.

[20] W. Lee, M. M. Kabir, R. Emmadi, and K. C. Toussaint, Jr., "Third-harmonic generation imaging of breast tissue biopsies," J Microsc, vol. 264, pp. 175-181, Nov 2016.

[21] M. Jermyn, K. Mok, J. Mercier, J. Desroches, J. Pichette, K. Saint-Arnaud, L. Bernstein, M. C. Guiot, K. Petrecca, and F. Leblond, "Intraoperative brain cancer detection with Raman spectroscopy in humans," Sci Transl Med, vol. 7, p. 274ra19, Feb 11 2015.

[22] S. Nawaz and Y. Y. Yuan, "Computational pathology: Exploring the spatial dimension of tumor ecology," Cancer Letters, vol. 380, pp. 296-303, Sep 28 2016.

[23] K. S. Korolev, J. B. Xavier, and J. Gore, "Turning ecology and evolution against cancer," Nature Reviews Cancer, vol. 14, pp. 371-380, May 2014.

[24] C. K. Chen and T. M. Liu, "Imaging morphodynamics of human blood cells in vivo with video-rate third harmonic generation microscopy," Biomedical Optics Express, vol. 3, pp. 2860-2865, Nov 1 2012.

[25] Z. Zhang, N. V. Kuzmin, M. Louise Groot, and J. C. de Munck, "Extracting morphologies from third harmonic generation images of structurally normal human brain tissue," Bioinformatics, Jan 27 2017.

[26] M. Rehberg, F. Krombach, U. Pohl, and S. Dietzel, "Label-Free 3D Visualization of Cellular and Tissue Structures in Intact Muscle with Second and Third Harmonic Generation Microscopy," PLoS One, vol. 6, Nov 28 2011.

[27] S. W. Hell and J. Wichmann, "Breaking the Diffraction Resolution Limit by Stimulated-Emission - Stimulated-Emission-Depletion Fluorescence Microscopy," Optics Letters, vol. 19, pp. 780-782, Jun 1 1994.

[28] S. W. Hell, "Toward fluorescence nanoscopy," Nature Biotechnology, vol. 21, pp. 1347-1355, Nov 2003.

[29] S. W. Hell and M. Kroug, "Ground-State-Depletion Fluorescence Microscopy - a Concept for Breaking the Diffraction Resolution Limit," Applied Physics B-Lasers and Optics, vol. 60, pp. 495-497, May 1995.

[30] S. Bretschneider, C. Eggeling, and S. W. Hell, "Breaking the diffraction barrier in fluorescence microscopy by optical shelving," Physical Review Letters, vol. 98, May 25 2007.

[31] M. G. L. Gustafsson, "Nonlinear structured-illumination microscopy: Wide-field fluorescence imaging with theoretically unlimited resolution," Proceedings of the National Academy of Sciences of the United States of America, vol. 102, pp. 13081-13086, Sep 13 2005.

[32] E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, "Imaging intracellular fluorescent proteins at nanometer resolution," Science, vol. 313, pp. 1642-1645, Sep 15 2006.

[33] E. Betzig, "Proposed Method for Molecular Optical Imaging," Optics Letters, vol. 20, pp. 237-239, Feb 1 1995.

[34] M. J. Rust, M. Bates, and X. W. Zhuang, "Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM)," Nature Methods, vol. 3, pp. 793-795, Oct 2006.

[35] P. Wang, M. N. Slipchenko, J. Mitchell, C. Yang, E. O. Potma, X. F. Xu, and J. X. Cheng, "Far-field imaging of non-fluorescent species with subdiffraction resolution," Nature Photonics, vol. 7, pp. 450-454, Jun 2013.

[36] S. W. Chu, T. Y. Su, R. Oketani, Y. T. Huang, H. Y. Wu, Y. Yonemaru, M. Yamanaka, H. Lee, G. Y. Zhuo, M. Y. Lee, S. Kawata, and K. Fujita, "Measurement of a Saturated Emission of Optical Radiation from Gold Nanoparticles: Application to an Ultrahigh Resolution Microscope," Physical Review Letters, vol. 112, Jan 7 2014.

[37] S. W. Chu, H. Y. Wu, Y. T. Huang, T. Su, H. Lee, Y. Yonemaru, M. Yamanaka, R. Oketani, S. Kawata, S. Shoji, and K. Fujita, "Saturation and Reverse Saturation of Scattering in a Single Plasmonic Nanoparticle," ACS Photonics, vol. 1, pp. 32-37, Jan 2014.

[38] O. Masihzadeh, P. Schlup, and R. A. Bartels, "Enhanced spatial resolution in third-harmonic microscopy through polarization switching," Optics Letters, vol. 34, pp. 1240-1242, Apr 15 2009.

[39] J. Liu, I. H. Cho, Y. Cui, and J. Irudayaraj, "Second Harmonic Super-resolution Microscopy for Quantification of mRNA at Single Copy Sensitivity," ACS Nano, vol. 8, pp. 12418-12427, Dec 2014.

[40] J. J. Field, K. A. Wernsing, S. R. Domingue, A. M. A. Motz, K. F. DeLuca, D. H. Levi, J. G. DeLuca, M. D. Young, J. A. Squier, and R. A. Bartels, "Superresolved multiphoton microscopy with spatial frequency-modulated imaging," Proceedings of the National Academy of Sciences of the United States of America, vol. 113, pp. 6605-6610, Jun 14 2016.

[41] G. Futia, P. Schlup, D. G. Winters, and R. A. Bartels, "Spatially-chirped modulation imaging of absorbtion and fluorescent objects on single-element optical detector," Optics express, vol. 19, pp. 1626-1640, Jan 2011.

[42] E. E. Hoover, J. J. Field, D. G. Winters, M. D. Young, E. V. Chandler, J. C. Speirs, J. T. Lapenna, S. M. Kim, S. Y. Ding, R. A. Bartels, J. W. Wang, and J. A. Squier, "Eliminating the scattering ambiguity in multifocal, multimodal, multiphoton imaging systems," Journal of Biophotonics, vol. 5, pp. 425-436, May 2012.

Index of Abbreviation

Index Abbreviation Full name Page first introduced

1 ACM active contour model 8

2 ACPE active contour weighted by prior extremes 25

3 AD Anisotropic diffusion 7

4 ADF Anisotropic diffusion filtering (=Anisotropic diffusion) 7

5 AF auto-fluorescence 20

6 BS “bright” structures 18

7 CED model coherence-enhancing diffusion model 21

8 CV model Chan and Vese model 9,25

9 CVPE the CV model weighted by prior extremes (=ACPE) 69

10 DM Dichroic mirror 2

11 DO “dark” objects 18

12 EED model edge-enhancing diffusion model 21

13 EL Euler-Lagrange 6

14 FOV field of view 89

15 F-P false positive 28

16 GBM Glioblastoma (grade 4 brain tumor) 102

17 GFP Green Fluorescent Protein 44

18 GIEs global intensity extremes (minima and maxima) 66

19 GM gray matter 78

20 GRIN optic needle with a graded index 3

21 G&L global and local 66

22 H&E hematoxylin and eosin 44

23 HOE Hoechst dye (a fluorescent dye) 46

24 HOS higher order statistics 22

25 IF Interference filter 2

26 LBF local binary fitting 11

27 LBFPE LBF weighted by prior extremes 70

28 LHE local histogram equalization 20

29 LIC local intensity clustering 11

30 LICPE LIC weighted by prior extremes 70

31 MED membrane-enhancing diffusion 21

32 MO Microscope objective 2

33 MPAF multi-photon auto-fluorescence 74

34 MS Mumford-Shah 9

35 NC nuclear-to-cytoplasm 94

36 OCT optical coherence tomography 44

37 OPO Optical parametric oscillator 2

38 O-S over-segmented 49 (figure 3.3, C), 50

39 PDE Partial differential equation 5

40 PM model Perona-Malik model 6

41 PMT photomultiplier tubes 2

42 POSHE Partially overlapped sub-block histogram equalization 5,20

43 PC piecewise constant 9

44 PoS percentage of space 30

45 PS piecewise smooth 9

46 RLSF robust local similarity factor 67

47 RLSFPE RLSF weighted by prior extremes 70

48 RS Raman spectroscopy 92

49 SHG second harmonic generation 2

50 SL scan lens 2

51 SNR signal-to-noise ratio 84

52 SRS stimulated Raman scattering 44

53 SS-OCT swept-source optical coherence tomography 92

54 THG third harmonic generation 2

55 TL Tube lens 2

56 T-P true positive 28

57 TRTV tensor regularized total variation 84,86

58 TV total variation 6

59 U-S under-segmented 49 (figure 3.3, D),50

60 WM white matter 78

Summary

Patients with diffuse glioma still face very poor survival. The diffuse nature of these gliomas makes them invade extensively into the surrounding normal brain. The prognosis and therapy of patients with diffuse gliomas usually correlate with the extent of resection. The technologies currently used clinically cannot visualize the boundaries of this tumor type. Therefore, a new imaging technique capable of directly revealing tumor boundaries with histopathological quality is highly desirable. Third harmonic generation (THG) is a label-free technique that shows great potential for this purpose. THG microscopy has been shown to provide real-time feedback on the tumor boundary in fresh, unprocessed human brain tissue. The morphology observed by THG shows excellent agreement with standard H&E morphology. However, the hardware development is only half the job; the other, equally important, half is to develop suitable image processing tools to quantify the morphology observed by THG.

In this thesis we focus on processing THG images of brain tissue, in particular on the automatic diagnosis of brain tumors. The automatic analysis of the acquired THG brain images has not been studied before and is challenging because of the three-phase segmentation problem, low signal-to-noise ratio, intensity inhomogeneity, low local contrast, and the demands of post-processing and validation. In chapter 2, all of these challenges were addressed for the first time. A salient-edge-enhancing anisotropic diffusion model was developed to reconstruct all salient dark and bright objects. A novel active contour model was proposed to detect both the dark and the bright objects, by introducing the global intensity extremes into the CV model (the unmodified CV energy is recalled below for reference). The resulting model overcomes the problem of intensity inhomogeneity at the cost of sacrificing some foreground pixels/voxels. THG images of structurally normal human brain tissue were used to test the developed algorithms and pipeline. Chapters 4 and 5 generalized and deepened the main ideas of active contours and anisotropic diffusion presented in chapter 2. Global intensity extremes were incorporated into more recent active contour models to deal with the intensity inhomogeneity present in THG images. A novel framework was proposed to accelerate existing anisotropic diffusion models (including the model developed in chapter 2). In addition, anisotropic diffusion was reformulated as a convex model, resulting in an efficient and easy-to-code algorithm. In chapter 6, the developed image processing tools were applied to detect key pathologically relevant features observed with THG microscopy in healthy and tumorous human brain tissue. Statistical analysis of the densities of the quantified features revealed the quantitative differences between tumor and healthy tissue. The resulting density thresholds of these features enabled detection of tumor infiltration and tumor boundaries with high sensitivity and specificity.
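
For reference, the starting point of the active contour work is the standard Chan–Vese (CV) energy; writing it out makes explicit where the modification of this thesis enters. For an image I on a domain Ω, a level-set function φ and the (regularized) Heaviside function H, the CV model minimizes

\[
E(c_1, c_2, \phi) \;=\; \mu \int_{\Omega} \lvert \nabla H(\phi) \rvert \, d\mathbf{x}
\;+\; \lambda_1 \int_{\Omega} \lvert I - c_1 \rvert^{2} \, H(\phi) \, d\mathbf{x}
\;+\; \lambda_2 \int_{\Omega} \lvert I - c_2 \rvert^{2} \, \bigl(1 - H(\phi)\bigr) \, d\mathbf{x},
\]

where c1 and c2 are the mean intensities inside and outside the contour. The prior-extreme variants developed in this thesis (ACPE/CVPE and their successors) keep this structure but reweight the two data terms using the global intensity minima and maxima, so that dark and bright objects are both driven into the foreground; the exact weighted functionals are those derived in chapters 2 and 4, and the equation above is only the unmodified textbook baseline.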

The interpretation of the features is another important issue for brain tumor diagnosis, and is usually established by comparison with more standard imaging techniques. The link between THG and fluorescence/H&E had so far been established only qualitatively. In chapters 2 and 3 of this thesis, we quantitatively compared THG brain images with fluorescence/SHG/auto-fluorescence images acquired simultaneously from the same tissue area. This comparison provided quantitative evidence confirming the interpretation of the dark and bright objects as brain cells.

In summary, by developing the appropriate image analysis tools, this thesis has significantly strengthened the clinical potential of THG microscopy as a tool for brain tumor diagnosis and surgery.

Samenvatting

Patients with a diffuse glioma still have a very poor chance of survival. The diffuse character of this tumor type causes it to invade strongly into the surrounding healthy brain tissue. The prognosis and therapy of patients with diffuse gliomas normally correlate with the extent of resection. The technologies currently used clinically cannot visualize the tumor margins of this tumor type. A new imaging technique that can directly reveal tumor margins with histopathological quality is therefore highly desirable. Third harmonic generation (THG) is a label-free technique with great potential to achieve this goal. It has been shown that THG microscopy can give real-time feedback on the tumor margins in fresh, unprocessed human brain tissue. The morphology observed with THG agrees excellently with standard H&E morphology. However, the hardware development for this technique is only half of the work; the other, equally important, half is the development of suitable image processing techniques to quantify the morphology observed with THG.

In this thesis we focus on the processing of THG images of brain tissue, in particular on the automatic diagnosis of brain tumors. The automatic analysis of the acquired THG brain images has not been studied before and is challenging because of the three-phase segmentation problem, the low signal-to-noise ratio, the intensity inhomogeneity, the low local contrast, the post-processing and the validation. In chapter 2 all of these challenges were addressed for the first time. A salient-edge-enhancing anisotropic diffusion model was developed to reconstruct all clearly present dark and bright objects. A new active contour model was proposed to detect both the dark and the bright objects, by introducing the global intensity extremes into the CV model. The resulting model overcomes the problem of intensity inhomogeneity by sacrificing some foreground pixels/voxels. THG images of structurally normal human brain tissue were used to test the developed algorithms and pipeline. Chapters 4 and 5 generalized and deepened the main ideas of active contours and anisotropic diffusion presented in chapter 2. Global intensity extremes were incorporated into more recent active contour models to deal with the intensity inhomogeneity present in THG images. A new framework was proposed to accelerate the existing anisotropic diffusion models (including the model developed in chapter 2). Anisotropic diffusion was also reformulated as a convex model, which resulted in an efficient and easy-to-code algorithm. In chapter 6 the developed image processing techniques were applied to detect important pathologically relevant features in THG images of healthy and tumorous human brain tissue. A statistical analysis of the densities of the quantified features made it possible to distinguish quantitatively between tumor and healthy tissue. The resulting density thresholds of these features made it possible to detect tumor infiltration, and thus the tumor boundary, with high sensitivity and specificity.

The interpretation of the features is another important topic relevant to the diagnosis of brain tumors, which is usually established by comparison with more standard imaging techniques. The relation between THG and fluorescence/H&E had only been established qualitatively. In chapters 2 and 3 of this thesis the THG brain images were quantitatively compared with fluorescence/SHG/auto-fluorescence images acquired simultaneously from the same tissue area. This comparison provided quantitative evidence that confirmed the interpretation of the dark and bright objects as brain cells.

In summary, this thesis has considerably strengthened the clinical potential of THG microscopy as a tool for the diagnosis and surgery of brain tumors, by developing applicable image analysis techniques.

总结

The survival rate of patients with diffuse glioma is still very low. These malignant tumors typically invade extensively into the surrounding normal brain tissue. The prognosis and therapy of patients with diffuse glioma usually correlate closely with the extent of resection. The imaging technologies currently used clinically cannot detect the boundary between this type of brain tumor and normal tissue. A new imaging technique that can directly reveal tumor pathology and tumor boundaries at the cellular scale is therefore urgently needed. Third harmonic generation (THG) microscopy, a label-free technique, has shown great potential to solve this problem. We have recently demonstrated that THG microscopy can provide real-time feedback on tumor boundaries in fresh, unprocessed human brain tissue. The cellular morphology observed with THG agrees well with the H&E gold standard. However, the instrument development of THG microscopy is only half of the road towards automated tumor diagnosis; the other, equally important, half is the development of suitable image processing tools to quantify the morphology observed with THG.

This thesis focuses on the processing of THG brain images, in particular the automated diagnosis of brain tumors. The automated analysis of THG brain images had not been studied before; it is challenging because of the three-phase segmentation problem, the low signal-to-noise ratio, the intensity inhomogeneity, the low local contrast, and the difficulties of post-processing and validation. In chapter 2, all of these challenges were addressed for the first time. We developed a salient-edge-enhancing anisotropic diffusion model to reconstruct all salient dark and bright objects. By introducing global intensity extremes into the CV model, we proposed a novel active contour model to detect both dark and bright objects. The proposed model solves the problem of intensity inhomogeneity, at the cost of incomplete detection of some objects. THG images of structurally normal human brain tissue were used to test the developed algorithms. Chapters 4 and 5 deepened the active contour and anisotropic diffusion models proposed in chapter 2. Global intensity extremes were incorporated into more recent active contour models to further improve the handling of intensity inhomogeneity in THG images. We proposed a new algorithmic framework to accelerate existing anisotropic diffusion models (including the model proposed in chapter 2). In addition, by recasting anisotropic diffusion as a convex variational model, we obtained an efficient and easy-to-code algorithm. In chapter 6, we applied the developed THG image processing tools to detect key pathologically relevant features in healthy and tumorous brain tissue. Statistical analysis of the densities of the extracted features revealed quantitative differences between tumor and healthy tissue. The resulting density thresholds enable detection of tumor infiltration and tumor boundaries with high sensitivity and specificity.

How to interpret the morphology in THG images is another important question for brain tumor diagnosis. The usual solution is to relate it to better-known imaging techniques. Previously, the link between THG and fluorescence labeling or H&E had only been established qualitatively. In chapters 2 and 3 of this thesis, we quantitatively compared THG brain images with fluorescence/second harmonic generation/auto-fluorescence images acquired simultaneously from the same tissue area. This comparison provided quantitative evidence that justifies interpreting the dark and bright objects in THG images as brain cells.

In summary, by developing appropriate THG image analysis tools, this thesis has greatly strengthened the potential of THG microscopy for clinical applications such as brain tumor diagnosis and surgery.

List of Publications

Publications (partially) included in this thesis:

1. Zhiqing Zhang, Nikolay V. Kuzmin, Marie Louise Groot, and Jan C. de Munck, “Extracting morphologies from third harmonic generation images of structurally normal human brain tissue,” Bioinformatics 33 (11), 1712-1720 (2017). Chapter 2.

2. Zhiqing Zhang, Nikolay V. Kuzmin, Marie Louise Groot, and Jan C. de Munck, “Quantitative comparison of 3D third harmonic generation and fluorescence microscopy images,” Journal of Biophotonics, DOI: 10.1002/jbio.201600256 (2017). Chapter 3.

3. Zhiqing Zhang, Marie Louise Groot, and Jan C. de Munck, “Tensor regularized total variation for third harmonic generation brain images,” Proceedings of the European Medical and Biological Engineering Conference (EMBEC) and the Nordic-Baltic Conference on Biomedical Engineering and Medical Physics (NBC), pp. 129-132, Springer (2017). Chapter 5.

4. Zhiqing Zhang, Marie Louise Groot, and Jan C. de Munck, “Active contour models for microscopic images with global and local intensity inhomogeneities,” submitted. Chapter 4.

5. Zhiqing Zhang, Marie Louise Groot, and Jan C. de Munck, “Tensor regularized total variation for third harmonic generation images of brain tumors,” submitted. The full paper of Chapter 5.

6. Zhiqing Zhang, Jan C. de Munck and Marie Louise Groot, “Quantitative third harmonic generation microscopy for human brain tumor infiltration detection,” to be submitted. The full paper of Chapter 6.

Acknowledgement

Daily work without science, especially the part without math, was boring to me. In 2011, I finished my master's in theoretical mathematics, and after that I worked for two high-tech companies as a programmer and a team leader until I came to the Netherlands at the end of 2013. It seemed that everything was on the right track at that time, but one day the idea of going back to science rushed into my mind. I then turned down a promotion, looked for labs and applied for scholarships. On the first day of November 2013, I arrived in Amsterdam and started this new journey of learning. In the end, it turned out to be a wise decision. Doing my PhD here with Marloes and Jan is one of the most important decisions I have ever made, not only because of the scientific training I received but also because of the way of thinking I learnt from all of you. I would like to take this opportunity to express my gratitude to everyone who shared important moments with me during the past four years.

I am indebted to my supervisor Marloes Groot. Thank you for offering me the chance to work in this multidisciplinary group when I had a limited background in microscopy and image processing at the beginning. Your confidence and insight in scientific research not only greatly increased my knowledge of bio-imaging but also pointed out a clear direction that I can explore in the future. Our regular group meetings were helpful for the scientific discussions and served as great venues for improving my presentation skills. Although it took me some time to realize the importance and the full picture of our project, you were always there encouraging me and reminding me of its importance. Thank you for your great supervision; it was essential for the completion of my PhD study and this thesis. Thanks also for your housewarming party and the BBQ from Frank.

To my co-supervisor Jan C. de Munck, who is 1/32 Chinese. As I always mentioned to you, I am a lucky boy because you were always there helping me with my scientific problems as well as the problems of my daily life. You paid in advance for me for almost all the conferences we attended because you knew I was a poor boy. Working with you was full of happiness and pleasure. Your knowledge of image processing, physics, math and medical imaging has significantly broadened my research horizons. Thank you for the daily supervision as well as the valuable comments and suggestions on our published papers, my presentations and this thesis. You invited me twice to your lovely home in Leiden. The first time was on our way back from the conference in Delft in 2015, and the second time was Christmas Eve 2016. The Christmas dinner was the most special western food I have ever had. The chats with you, your wife, your daughters and your little son were among my most precious moments in the Netherlands.

To dr. Peter Peverelli, who recruited me from China to the VU. I was one of the three PhD students you recruited at the PhD Workshop China 2013. Without your help and guidance, I could not have found such a good project and such nice supervisors. I would also like to thank the China Scholarship Council (CSC) for the financial support.

Thanks to the reading committee, prof.dr. J. Hulshof, J. Popp, A.G.J.M. van Leeuwen, and M. van Herk, dr. I.H.M. van Stokkum and P. de Witt Hamer for devoting your precious time to reviewing my thesis.

Many thanks to all my dear colleagues for the fantastic time here with you. Dear Nikolay, thanks for collecting data for me at the beginning of my PhD. Dear Laura, thanks for teaching me how to do THG measurements and for helping with the Dutch summary of this thesis. Thanks to Enis for the Turkish candy. Best wishes to our new PhD student Ludo. Thanks to my current roommates Judith, Max, Fabio, Mathi and Kari for answering my questions and offering beers. Good luck to Judith with her new job. I hope Max, Fabio and Mathi can finish their PhDs smoothly, and that Kari is happy with her new boss. Best wishes to Sheng (Kingson) Zhou, Steven, Luca, Marica, Nelda, Rene and Morris Cui from David's group. Thanks to everybody in our section for the valuable comments in our section meetings. Thanks to Sining and Yan Liang for your support at the beginning of my PhD. Thanks to my colleagues Martin and Fabio from VUMC for sharing your clinical knowledge; without you, the conference in Finland would have been boring. Thanks to Domenique for organizing the PhD colloquium. Thanks to Max Blokker for the great effort on the deep learning project. Thanks to the neurosurgeon Philip de Witt Hamer and his colleagues for providing the human brain tumor samples, and to the pathologist Pieter Wesseling and his colleague Petra Scholten for the preparation of the histology samples.

Life would be more difficult without the company of good friends. Dear Yongjie and Meichen, thank you for picking me up at the airport when I arrived. You are my little brothers in Amsterdam and I will miss the days we lived together. Dear Ting Zhou, thank you for living with me and Yongjie for the past three years; our life in Amsterdam would have been less colorful without your company. Dear Shanliang, thank you for sharing new ways to play cards; I also enjoyed some of our serious discussions on science, faith and religion. Dear Liang Yu, we were both recruited by Peter and we came to Amsterdam together with Ting Zhou on the same flight. Thank you for the spicy hotpot you made us, and good luck with your future research. Thanks to Wenjing Cai and your family (Zhi Zhang & Elisa) for the frequent invitations to your lovely home. Best wishes to Li Li and your family (Hong Wu and Sunny); we were all bnuers and I am sure we will see each other at bnu. Best wishes to the friends Shu Gao, Huirong Yu, Gaofeng Wu and Hechen Zhang, who came to Amsterdam in the same year but left before me; I really appreciated the talks we had during lunch and dinner. Many thanks to the friends at Uilenstede and ACTA: Jin He, Tingting Ji and your partner (Yi Ding), Ronghua Wang, Yu Mu, Xuguang Song, Gongxing Guo, Chun Yang, Yin Yu, Yuan Shang, Fei Peng, Huan Li, Enqi He, Yixuan Cao and Dongdong Zheng, Xingnan Lin and Ying Liao, Xiaobo Sun, Gongjin Lan, Guangwen Song, Qing Wang and Lulu Zhang. Thanks to Junsheng Wang for your kindness. Dear Shiyao, Lulu and Ella, it was nice to meet you in Amsterdam and I look forward to our next party. Thanks to Jianan Li, Lijin Tian and Pengqi Xu for the talks we had in the corridors of the Physics department. Thanks to Adonis, Edcel and Meissa for the wonderful lunch times. I really appreciate Tianlong, Cunfeng and Yuanqing for the regular coffee time every morning. Special thanks to prof. Kun Liang, Xinyu Mao and Xinli Ke for sharing your scientific experience. In particular, I want to thank my dear friend Yang Pang, who has always been there supporting me over the past ten years.

In the end, I want to thank my family. Dear Ting, I am lucky to have you. Thank you and your family for supporting me and our long-distance love in the past years. Dear Liang Xu, Li Huang and Xiaoxiao, thanks for being part of my life in the Netherlands. Our thirteen years of friendship makes us not just friends and classmates but also a family. The luckiest thing in my life is to have been born into this big family full of love, where everyone supports one another through thick and thin. Even in the most difficult times, you fully supported my studies, so that I could pursue my ideals without distraction. You are the most important part of my life and will forever be the force that drives me forward.