
A Novel Hybrid Machine Learning Model for Auto-Classification of Retinal Diseases

C.-H. Huck Yang*; Jia-Hong Huang*; Fangyu Liu; Fang-Yi Chiu; Mengya Gao; Weifeng Lyu; I-Hung Lin M.D.; Jesper Tegner
[email protected]*; [email protected]*; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]

Living Systems Laboratory, BESE, CEMSE, King Abdullah University of Science and Technology, Thuwal, KSA*; National Taiwan University, Taipei City, Taiwan*; University of Waterloo, ON, Canada; University of California San Diego, CA, USA; Georgia Institute of Technology, GA, USA; Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taiwan

Abstract

Automatic clinical diagnosis of retinal diseases has emerged as a promising approach to facilitate discovery in areas with limited access to specialists. We propose a novel visual-assisted diagnosis hybrid model based on the support vector machine (SVM) and deep neural networks (DNNs). The model incorporates complementary strengths of DNNs and SVM. Furthermore, we present a new clinical retina label collection for ophthalmology, EyeNet, incorporating 32 retina disease classes. Using EyeNet, our model achieves 89.73% diagnosis accuracy, and its performance is comparable to that of professional ophthalmologists.

1. Introduction

Computational retinal disease methods (Tan et al., 2009; Lalezary et al., 2006) have been investigated extensively through different signal processing techniques. Retinal diseases are accessible to machine-driven techniques due to their visual nature, in contrast to other common human diseases that require invasive techniques for diagnosis or treatment. Typically, the diagnostic accuracy for retinal diseases based on clinical retinal images is highly dependent on the practical experience of the physician or ophthalmologist. However, not every doctor has sufficient practical experience. Therefore, developing an automatic retinal disease detection system is important, and it will broadly improve the diagnostic accuracy of retinal diseases.

Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 2018. JMLR: W&CP volume 28. Copyright 2018 by the author(s).


Figure 1. Our proposed hybrid model. A raw retinal image is provided as input to the DNN, U-Net; the output of U-Net is then passed to the dimension reduction module, PCA. Finally, the output of the PCA module is sent to the retinal disease classifier, SVM, which outputs the name of the predicted retinal disease.

For remote rural areas, where there are no ophthalmologists locally to screen for retinal disease, an automatic retinal disease detection system can also help non-ophthalmologists identify patients with retinal disease and refer them to a medical center for further treatment.

The development of automatic disease detection (ADD) (Sharifi et al., 2002) alleviates enormous pressure on social healthcare systems. Retinal symptom analysis (Abramoff et al., 2010) is one of the important ADD applications. Moreover, the globally increasing number of cases of diabetic retinopathy requires extended efforts in developing visual tools to assist in the analysis of retinal diseases. Decision support systems for retinal ADD, such as (Bhattacharya, 2014) for non-proliferative diabetic retinopathy, have improved with recent machine learning successes in high-dimensional image processing by extracting detailed blood vessel features. (Lin et al., 2000) demonstrated an automated technique for the segmentation of blood vessels by tracking the center of the vessels with a Kalman filter. However, these pattern-recognition-based classifiers still rely on hand-crafted features and are specified only for evaluating a single retinal symptom.




Figure 2. The result of U-Net tested on (a), an unseen clinical eyeball image. (b) is the ground truth and (c) is the result generated by U-Net. Comparing (b) and (c), the generated result is highly similar to the ground truth.

Despite extensive efforts using wavelet signal processing, retinal ADD remains a viable target for improved machine learning techniques applicable to point-of-care (POC) medical diagnosis and treatment in an aging society (Cochocki & Unbehauen, 1993).

To the best of our knowledge, the amount of clinical retinal image data is small compared to other cell imaging data, such as blood cells and cancer cells. Yet a vanilla deep-learning-based disease diagnosis system requires large amounts of data. We therefore propose a novel visual-assisted diagnosis algorithm based on an integration of support vector machines and deep neural networks. The primary goal of this work is to automatically classify 32 specific retinal diseases with reliable clinical-assistance ability for intelligent medicine approaches. To foster long-term visual analytics research, we also present a visual clinical label collection, EyeNet, including several crucial symptoms such as AMN (macular neuroretinopathy) and Bull's Eye Maculopathy (Chloroquine).

Contributions.

• We design a novel visual-assisted diagnosis algorithm based on the support vector machine and deep neural networks to facilitate medical diagnosis of retinal diseases.

• We present a new clinical label collection, EyeNet, for ophthalmology with 32 retinal disease classes.

• Finally, we train a model based on the proposed EyeNet. The consistent diagnostic accuracy of our model would be a crucial aid to ophthalmologists, and effective in a point-of-care scenario.

2. Methodology

In this section, we present the workflow of our proposed model, referring to Figure 1.


Figure 3. Qualitative results of U-Net tested on color images of (a) myelinated retinal nerve fiber layer, (b) age-related macular degeneration, and (c) disc hemorrhage.

2.1. U-Net

DNNs have greatly boosted the performance of image classification due to their power of image feature learning (Simonyan & Zisserman, 2014). Active retinal disease is characterized by exudates around retinal vessels, resulting in cuffing of the affected vessels (Khurana, 2007). However, ophthalmology images from clinical microscopy are often overlaid with white sheathing and minor features. Segmentation of retinal images has been investigated as a critical visual-aid technique for ophthalmologists (Rezaee et al., 2017). U-Net (Ronneberger et al., 2015) is a DNN architecture designed especially for segmentation. Here, we propose a modified version of U-Net that reduces the copy-and-crop processes by a factor of two. The adjustment speeds up the training process and has been verified to retain an adequate semantic effect on small images. We use cross-entropy to evaluate the training process:

E = \sum_{x \in \Omega} w(x) \log\big(p_{\ell(x)}(x)\big)    (1)

where p_{\ell(x)} is the approximated maximum (softmax) function, and the weight map is then computed as:

w(x) = w_c(x) + w_0 \cdot \exp\!\left(-\frac{\big(d_1(x) + d_2(x)\big)^2}{2\sigma^2}\right)    (2)

where d_1(x) designates the distance to the border of the nearest edge and d_2(x) the distance to the border of the second nearest edge. The LB score is given as in (Cochocki & Unbehauen, 1993). We use a deep convolutional neural network (CNN) with two 3×3 convolutions; each step is followed by a rectified linear unit (ReLU) and a 2×2 max pooling operation with stride 2 for downsampling, and a layer with an even x- and y-size is selected for each operation. Our proposed model converges at the 44th epoch, when the error rate of the model drops below 0.001.


The accuracy of our U-Net model is 95.27%, validated on a 20% test set of EyeNet, as shown in Figure 2. The model is robust and feasible for different retinal symptoms, as illustrated in Figure 3.
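As an illustrative sketch (not the authors' released code), the repeated encoder step described above can be written as a small PyTorch module: two unpadded 3×3 convolutions, each followed by ReLU, then a 2×2 max pooling with stride 2. The channel widths, input size, and module name are assumptions for demonstration only.

import torch
import torch.nn as nn

class DownBlock(nn.Module):
    """One encoder step: two 3x3 convs + ReLU, then 2x2 max pool (stride 2)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3),
            nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)

    def forward(self, x):
        feat = self.conv(x)            # feature map kept for the copy-and-crop skip connection
        return self.pool(feat), feat

# Example: a 3-channel retinal image patch with even spatial size
x = torch.randn(1, 3, 572, 572)
down, skip = DownBlock(3, 64)(x)
print(down.shape, skip.shape)  # torch.Size([1, 64, 284, 284]) torch.Size([1, 64, 568, 568])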

2.2. Principal Component Analysis

Principal component analysis (PCA) is a statistical, matrix-based method using orthogonal transformations. We use PCA combined with the SVM classifier to lower the computational complexity and avoid overfitting on the decision boundary. We optimize the SVM classifier with PCA at the 62nd principal component.
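A minimal sketch of this reduction step, assuming the U-Net outputs are flattened into feature vectors and scikit-learn is used (the paper's SVM pipeline is implemented in Matlab/libsvm, so the library choice and array shapes below are assumptions):

import numpy as np
from sklearn.decomposition import PCA

# X: flattened U-Net segmentation outputs, one row per retinal image (illustrative shape)
X = np.random.rand(200, 128 * 128)

# Keep the first 62 principal components, the operating point reported above
pca = PCA(n_components=62)
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)                      # (200, 62)
print(pca.explained_variance_ratio_.sum())  # fraction of variance retained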

2.3. Support Vector Machine

The Support Vector Machine is a machine learning technique for classification, regression, and other learning tasks. Support vector classification (SVC) maps data from an input space to a high-dimensional feature space, in which an optimal separating hyperplane that maximizes the margin between the two classes is established. The hinge loss function is:

\frac{1}{n}\left[\sum_{i=1}^{n} \max\big(0,\; 1 - y_i(\vec{w} \cdot \vec{x}_i)\big)\right] + \lambda \lVert \vec{w} \rVert^2    (3)

where the parameter λ determines the trade-off between increasing the margin size and ensuring that each \vec{x}_i lies on the correct side of the margin. Parameters are critical for the training time and performance of machine learning algorithms. We set the cost parameter C to 128 and gamma to 0.0078; the SVM reaches comparably high performance when the cost coefficient is higher than 132. We use radial basis function (RBF) and polynomial kernels for SVC.
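The following is a minimal scikit-learn sketch of an SVC with these kernels and the reported hyperparameters. The paper's SVMs are implemented in Matlab with libsvm, so this Python translation, the toy data shapes, and the polynomial degree are assumptions.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Toy stand-in for the PCA-reduced features and 32-class disease labels
X = np.random.rand(300, 62)
y = np.random.randint(0, 32, size=300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# RBF kernel with the cost and gamma values reported above
svc_rbf = SVC(kernel="rbf", C=128, gamma=0.0078, probability=True).fit(X_tr, y_tr)

# Polynomial kernel alternative (degree is an assumption)
svc_poly = SVC(kernel="poly", degree=3, C=128, probability=True).fit(X_tr, y_tr)

print(svc_rbf.score(X_te, y_te), svc_poly.score(X_te, y_te))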

3. Efforts on Retinal Label Collection

Retina Image Bank (RIB) is an international clinical project launched by the American Society of Retina Specialists in 2012, which allows ophthalmologists around the world to share existing clinical cases online for medical-educational purposes. Here we present EyeNet, which is mainly based on the RIB. To this end, we manually collected the 32 symptoms from RIB, focusing on retina-related diseases. Different from traditional retina datasets (Staal et al., 2004) focused on morphology analysis, our open-source dataset labeled from the RIB project concentrates on the differences between diseases for feasible aid-diagnosis and medical applications. Following the recent success of high-quality dataset collection, such as ImageNet (Krizhevsky et al., 2012), we believe that collecting and mining RIB into a more developer-friendly data pipeline is valuable for both the Ophthalmology and Computer Vision communities, enabling the development of advanced analytical techniques.


Figure 4. Training loss of AlexNet initialized with ImageNet pretrained weights (orange) and with random weights (blue).


4. Experiments

In this section, we describe the implementation details and the experiments we conducted to validate our proposed method.

4.1. Dataset

For the experiments, the original EyeNet is randomly divided into three parts: 70% for training, 10% for validation, and 20% for testing. All training data pass through PCA before the SVM. All classification experiments are trained and tested on the same dataset splits.
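A minimal sketch of such a 70/10/20 split, assuming scikit-learn and illustrative sample indices (the exact splitting code is not given in the paper):

import numpy as np
from sklearn.model_selection import train_test_split

# Indices stand in for the EyeNet samples
indices = np.arange(1000)

# First carve out the 20% test set, then split the rest into 70% train / 10% validation
train_val_idx, test_idx = train_test_split(indices, test_size=0.20, random_state=42)
train_idx, val_idx = train_test_split(train_val_idx, test_size=0.10 / 0.80, random_state=42)

print(len(train_idx), len(val_idx), len(test_idx))  # 700 100 200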

4.2. Setup

EyeNet has been processed with U-Net to generate a subset with semantic blood-vessel features. For the DNN and transfer learning models, we directly use the RGB images from the retinal label collection. EyeNet is published online: github.com/huckiyang/EyeNet
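As a hedged illustration of loading the RGB images for the DNN experiments, assuming the collection is arranged as one folder per disease class; the folder name, image size, and ImageNet normalization below are assumptions chosen to match the pretrained models used later, not details taken from the released repository.

import torch
from torchvision import datasets, transforms

# Resize to a fixed input size and normalize with ImageNet statistics (assumed preprocessing)
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Assumes a layout like eyenet/<disease_class_name>/<image>.jpg
dataset = datasets.ImageFolder("eyenet", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
print(len(dataset.classes))  # should report the 32 disease classes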

4.3. Deep Convolutional Neural Networks

CNNs have demonstrated extraordinary performance in visual recognition tasks (Krizhevsky et al., 2012) and hold the state of the art in a great many vision-related benchmarks and challenges (Xie et al., 2017). With little or no prior knowledge and human effort in feature design, they provide a general and effective method for solving varied vision tasks in varied domains. This development in computer vision has also shown great potential to assist or replace human judgment in vision problems such as medical imaging (Esteva et al., 2017), which is the topic we address in this paper.


In this section, we introduce several baselines for multi-class image recognition and compare their results on EyeNet.

4.3.1. Baseline 1: AlexNet

AlexNet (Krizhevsky et al., 2012) verified the feasibility of applying deep neural networks to large-scale image recognition problems with the help of GPUs. It introduced a succinct network architecture, with 5 convolutional layers and 3 fully-connected layers, adopting ReLU (Nair & Hinton, 2010) as the activation function.

4.3.2. Baseline 2: VGG11

VGG (Simonyan & Zisserman, 2014) repeatedly uses very small filters (3×3) to replace the large filters (5×5, 7×7) of traditional architectures. By increasing the depth of the network, it achieved state-of-the-art results on ImageNet with fewer parameters.

4.3.3. Baseline 3: SqueezeNet

Real-world medical imaging tasks may require a small yet effective model to adapt to limited hardware resources. As some very deep neural networks can cost several hundred megabytes to store, SqueezeNet (Iandola et al., 2016), adopting model compression techniques, achieves AlexNet-level accuracy with ~500x smaller models.

4.4. Transfer Learning

We exploit a transfer learning framework from normalized ImageNet (Krizhevsky et al., 2012) to EyeNet to address the small-sample issue in computational retinal visual analytics. Given a sufficiently trained classification model, transfer learning can resolve the challenge of learning from a minimal amount of training labels, drastically reducing the data requirements. The first few layers of DNNs learn features similar to Gabor filters and color blobs; these features appear not to be specific to any particular task or dataset and are thus applicable to other datasets and tasks (Yosinski et al., 2014). Experiments show significant improvement after applying pretrained parameters to our deep learning models, referring to Table 1 and Table 2.
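A minimal PyTorch/torchvision sketch of this transfer learning setup, replacing the final layer with a 32-way classifier for EyeNet. Showing AlexNet and using torchvision's older pretrained flag are illustrative choices; the same idea applies to the other baselines.

import torch.nn as nn
from torchvision import models

NUM_CLASSES = 32

def build_finetune_model(pretrained=True):
    """AlexNet with ImageNet weights (or random init) and a new 32-way classifier head."""
    model = models.alexnet(pretrained=pretrained)
    in_features = model.classifier[6].in_features
    model.classifier[6] = nn.Linear(in_features, NUM_CLASSES)  # replace the final layer
    return model

pretrained_model = build_finetune_model(pretrained=True)   # ImageNet initialization
random_model = build_finetune_model(pretrained=False)      # random-initialization baseline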

4.5. Hybrid-SVMs Results

All SVMs are implemented in Matlab with the libsvm (Chang & Lin, 2011) module. We separate both the original retinal dataset and the subset into three parts: a 70% training set, a 20% test set, and a 10% validation set. By training two multi-class SVM models, on the original EyeNet and on the subset, we implement a weighted voting method to identify the candidate retinal symptom.

Hybrid-Ratio    RBF kernel    Polynomial kernel
0% : 100%       0.8203        0.8439
40% : 60%       0.8371        0.8381
47% : 53%       0.8973        0.8781
61% : 39%       0.8903        0.8940
100% : 0%       0.8626        0.8733

Table 1. Accuracy comparison of the Hybrid-SVM with RBF and polynomial kernels. We introduce a hybrid ratio for the mixed weighted voting between two multi-class SVCs trained on EyeNet and on the U-Net subset.

Model        Pretrained    Random Init.
AlexNet      0.7903        0.4839
VGG11        0.8871        0.7581
SqueezeNet   0.8226        0.5633

Table 2. Accuracy comparison of the three DNN baselines.

We have tested different weight ratios (the Hybrid-Ratio, SVM model with RGB images : SVM model with the U-Net subset) between EyeNet and the vessel-feature subset to reach higher accuracy, as shown in Table 1. We verified that the model does not overfit by checking normalized accuracy on the validation set, with a 2.31% difference.
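One plausible reading of this weighted voting step, sketched in Python: the paper does not give the exact combination rule, so the probability-averaging form, function name, and array shapes below are assumptions.

import numpy as np

def hybrid_vote(prob_rgb, prob_vessel, ratio_rgb=0.47):
    """Weighted vote between the RGB-image SVM and the U-Net-subset SVM.

    prob_rgb, prob_vessel: (n_samples, n_classes) class-probability outputs
    ratio_rgb: weight on the RGB model (47% : 53% gave the best accuracy in Table 1)
    """
    combined = ratio_rgb * prob_rgb + (1.0 - ratio_rgb) * prob_vessel
    return np.argmax(combined, axis=1)

# e.g. probabilities from svc_rgb.predict_proba(X_rgb) and svc_vessel.predict_proba(X_vessel)
p1 = np.random.dirichlet(np.ones(32), size=5)
p2 = np.random.dirichlet(np.ones(32), size=5)
print(hybrid_vote(p1, p2))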

4.6. Deep Neural Networks Results

All DNNs are implemented in PyTorch. We use identical hyperparameters for all models. Training lasts 400 epochs; the first 200 epochs use a learning rate of 1e-4 and the second 200 use 1e-5. We also apply random data augmentation during training: in every epoch, each training sample has a 70% probability of being affinely transformed by one of the operations flip, rotate, transpose, or random crop. Although ImageNet and our retinal label collection are very different, using weights pretrained on ImageNet rather than random ones boosts the test accuracy of every model by 5 to 15 percentage points, referring to Table 2. Pretrained models also converge much faster than randomly initialized ones, as suggested in Figure 4. The performance of DNNs on our retinal dataset can greatly benefit from knowledge of other domains.
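A hedged PyTorch-style sketch of the augmentation policy and two-stage learning-rate schedule described above. Only the 70% probability, the set of operations, the epoch counts, and the learning rates come from the text; the rotation range, crop size, optimizer choice, and helper names are assumptions.

import random
from PIL import Image
import torch
import torchvision.transforms.functional as TF
from torchvision import transforms

def random_retina_augment(img, p=0.7):
    """With 70% probability, apply one randomly chosen transform (flip / rotate / transpose / crop)."""
    if random.random() > p:
        return img
    op = random.choice(["flip", "rotate", "transpose", "crop"])
    if op == "flip":
        return TF.hflip(img)
    if op == "rotate":
        return TF.rotate(img, angle=random.uniform(-30, 30))  # rotation range is an assumption
    if op == "transpose":
        return img.transpose(Image.Transpose.TRANSPOSE)
    return transforms.RandomResizedCrop(224)(img)             # crop size is an assumption

# Two-stage learning-rate schedule: 1e-4 for the first 200 epochs, 1e-5 afterwards
model = torch.nn.Linear(10, 32)  # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
for epoch in range(400):
    if epoch == 200:
        for group in optimizer.param_groups:
            group["lr"] = 1e-5
    # ... training loop for one epoch goes here ...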

5. Conclusion and Future Work

In this work, we have designed a novel hybrid model for visual-assisted diagnosis based on SVM and U-Net. The model achieves higher accuracy, 89.73%, than the pre-trained DNN models, serving as an aid for ophthalmologists. We also propose EyeNet to benefit the medical informatics research community.


Finally, since our label collection contains not only images but also textual information about the images, Visual Question Answering (Huang et al., 2017b;c;a) on retinal images is one of the interesting future directions. Our work may also help remote rural areas, where no ophthalmologists are available locally, to screen for retinal disease in the future.

Acknowledgement

This work is supported by competitive research funding from King Abdullah University of Science and Technology (KAUST). We would also like to acknowledge Google Cloud Platform and Retina Image Bank, a project of the American Society of Retina Specialists.

References

Abramoff, Michael D., Garvin, Mona K., and Sonka, Milan. Retinal imaging and image analysis. Reviews in Biomedical Engineering, 3:169–208, 2010.

Bhattacharya, Sharbani. Watermarking digital images using fuzzy matrix compositions and (α, β)-cut of fuzzy set. International Journal of Advanced Computing, (2051-0845), 2014.

Chang, Chih-Chung and Lin, Chih-Jen. LIBSVM: a library for support vector machines. TIST, 2(3):27, 2011.

Cochocki, A. and Unbehauen, Rolf. Neural Networks for Optimization and Signal Processing. John Wiley & Sons, Inc., 1993.

Esteva, Andre, Kuprel, Brett, Novoa, Roberto A., Ko, Justin, Swetter, Susan M., Blau, Helen M., and Thrun, Sebastian. Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639):115, 2017.

Huang, Jia-Hong, Alfadly, Modar, and Ghanem, Bernard. Robustness analysis of visual QA models by basic questions. arXiv:1709.04625, 2017a.

Huang, Jia-Hong, Alfadly, Modar, and Ghanem, Bernard. VQABQ: visual question answering by basic questions. arXiv:1703.06492, 2017b.

Huang, Jia-Hong, Dao, Cuong Duc, Alfadly, Modar, and Ghanem, Bernard. A novel framework for robustness analysis of visual QA models. arXiv:1711.06232, 2017c.

Iandola, Forrest N., Han, Song, Moskewicz, Matthew W., Ashraf, Khalid, Dally, William J., and Keutzer, Kurt. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv:1602.07360, 2016.

Khurana, A. K. Comprehensive Ophthalmology. New Age International Ltd, 2007.

Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. ImageNet classification with deep convolutional neural networks. In Advances in NIPS, pp. 1097–1105, 2012.

Lalezary, Maziar, Medeiros, Felipe A., Weinreb, Robert N., Bowd, Christopher, Sample, Pamela A., Tavares, Ivan M., Tafreshi, Ali, and Zangwill, Linda M. Baseline optical coherence tomography predicts the development of glaucomatous change in glaucoma suspects. American Journal of Ophthalmology, 142(4):576–582, 2006.

Lin, Ching-Yung, Wu, Min, Bloom, Jeffrey A., Cox, Ingemar J., Miller, Matt L., and Lui, Yui Man. Rotation-, scale-, and translation-resilient public watermarking for images. In Security and Watermarking of Multimedia Contents II, volume 3971, pp. 90–99. International Society for Optics and Photonics, 2000.

Nair, Vinod and Hinton, Geoffrey E. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 807–814, 2010.

Rezaee, Khosro, Haddadnia, Javad, and Tashk, Ashkan. Optimized clinical segmentation of retinal blood vessels by using combination of adaptive filtering, fuzzy entropy and skeletonization. Applied Soft Computing, 52:937–951, 2017.

Ronneberger, Olaf, Fischer, Philipp, and Brox, Thomas. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on MICCAI, pp. 234–241. Springer, 2015.

Sharifi, Mohsen, Fathy, Mahmood, and Mahmoudi, Maryam Tayefeh. A classified and comparative study of edge detection algorithms. In Information Technology: Coding and Computing, 2002. Proceedings. International Conference on, pp. 117–120. IEEE, 2002.

Simonyan, Karen and Zisserman, Andrew. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556, 2014.

Staal, Joes, Abramoff, Michael D., Niemeijer, Meindert, Viergever, Max A., and Van Ginneken, Bram. Ridge-based vessel segmentation in color images of the retina. TMI, 23(4):501–509, 2004.


Tan, Ou, Chopra, Vikas, Lu, Ake Tzu-Hui, Schuman, Joel S., Ishikawa, Hiroshi, Wollstein, Gadi, Varma, Rohit, and Huang, David. Detection of macular ganglion cell loss in glaucoma by Fourier-domain optical coherence tomography. Ophthalmology, 116(12):2305–2314, 2009.

Xie, Saining, Girshick, Ross, Dollár, Piotr, Tu, Zhuowen, and He, Kaiming. Aggregated residual transformations for deep neural networks. In CVPR, pp. 5987–5995. IEEE, 2017.

Yosinski, Jason, Clune, Jeff, Bengio, Yoshua, and Lipson, Hod. How transferable are features in deep neural networks? In NIPS, pp. 3320–3328, 2014.