ORIGINAL ARTICLE
Integrating adaptive neuro-fuzzy inference system and local binary pattern operator for robust retinal blood vessels segmentation
Abdolhossein Fathi • Ahmad Reza Naghsh-Nilchi
Received: 12 June 2011 / Accepted: 25 July 2012 / Published online: 9 August 2012
© Springer-Verlag London Limited 2012
Abstract Automatic extraction of blood vessels is an
important step in computer-aided diagnosis in ophthal-
mology. The blood vessels have different widths, orienta-
tions, and structures. Therefore, the extracting of the proper
feature vector is a critical step especially in the classifier-
based vessel segmentation methods. In this paper, a new
multi-scale rotation-invariant local binary pattern operator
is employed to extract efficient feature vector for different
types of vessels in the retinal images. To estimate the
vesselness value of each pixel, the obtained multi-scale
feature vector is applied to an adaptive neuro-fuzzy infer-
ence system. Then by applying proper top-hat transform,
thresholding, and length filtering, the thick and thin vessels
are highlighted separately. The performance of the pro-
posed method is measured on the publicly available
DRIVE and STARE databases. The average accuracy
0.942 along with true positive rate (TPR) 0.752 and false
positive rate (FPR) 0.041 is very close to the manual seg-
mentation rates obtained by the second observer. The
proposed method is also compared with several state-
of-the-art methods. The proposed method shows higher
average TPR in the same range of FPR and accuracy.
Keywords Retinal image segmentation · Blood vessel detection · Local binary pattern · Adaptive neuro-fuzzy inference system
1 Introduction
The detection and quantitative measurement of variations
in the retinal blood vessels can be used for diagnosis of
several diseases such as diabetic retinopathy, hypertension,
occlusion, glaucoma, and obesity. For example, vessel
occlusion makes vessels longer, hypertension narrows
arteries, and diabetes creates new blood vessels. There-
fore, several blood vessel detection methods can be found
in the literature for the diagnosis of such diseases [1–7]. Also,
the retinal blood vessel distribution is unique for each
person, and therefore, it could be used for personal iden-
tification [8].
Advances in acquisition equipment make it possible to
capture high-resolution images of the retina. Manual or
semiautomatic blood vessel extraction techniques are
therefore labor intensive and time consuming, especially
for large databases of retinal images. Thus, the development
of automatic methods for robust blood vessel extraction is
valuable. In the literature, several techniques have been
reported for blood vessel segmentation. These methods can
generally be categorized into three classes: (1) kernel-based,
(2) tracking-based, and (3) classifier-based.
In the kernel-based methods, the retinal images are fil-
tered by various vessel-like kernels. The blood vessel
structures are detected by maximizing the responses of
applied kernels. The mathematical morphology operators
[6, 9] and matched filters [1, 2, 10, 11] are two examples of
this category. In the matched filters, a series of different
Gaussian-shaped filters like simple Gaussian model [1, 2],
dual-Gaussian model [10], or derivative of Gaussian
function [11] are used to detect the blood vessels. How-
ever, the matched filters have strong responses not only to
blood vessels but also to non-vessel edges like bright blobs.
A. Fathi (&) � A. R. Naghsh-Nilchi
Department of Computer Engineering,
The University of Isfahan, Isfahan, Iran
e-mail: [email protected]
A. R. Naghsh-Nilchi
e-mail: [email protected]
123
Neural Comput & Applic (2013) 22 (Suppl 1):S163–S174
DOI 10.1007/s00521-012-1118-8
They also have to use several kernels to detect vessels with
different thicknesses and orientations.
In the tracking-based methods, the vessel is modeled as a
line, and the methods follow vessel edges by exploiting local
information. In these methods, various vessel profile
models such as the Gaussian profile [12], generic parametric
model [13], Bayesian probabilistic model [14], and multi-
scale profile [15] are used to find the path that best
matches the vessel profile model. Although these
methods perform well in detecting blood vessels,
they usually have two weak spots: limited handling
of bifurcations, especially in thin vessels, and the need for
manual seed points.
The classifier-based methods are divided into two sub-
classes: supervised and unsupervised. In the supervised
methods [16–18], some prior information on the labeled
vessels is exploited to decide whether a pixel belongs to
vessel or non-vessel. For this purpose, different classifiers
such as the artificial neural network [16], Gaussian mixture
model classifier [17], and KNN classifier [18] have been used. In
the unsupervised methods, the vessel segmentation is done
without any prior labeling knowledge [19, 20]. In the
classifier-based methods, the performance of detected
vessels heavily depends on the features that are extracted
from retinal images. Various types of feature extraction
methods such as Gabor wavelet transform [17], ridge
detection [18], matched filters [19], and trench detection
[20] were reported in the literature.
Other techniques tried to combine these methods and
improve the performance [21–23]. Mendonca et al. [21]
used morphological operators and region growing algo-
rithm, while Palomera-Perez et al. [22] and Martinez-Perez
et al. [23] employed Hessian-based vesselness and region
growing techniques to extract blood vessels.
In this paper, an efficient and easy-to-implement clas-
sifier-based method is presented for automatically extract-
ing blood vessels. An adaptive neuro-fuzzy inference
system (ANFIS) is used as classifier, and a proper exten-
sion of local binary pattern (LBP) operator is employed to
extract multi-scale statistical and structural features of
blood vessels. The combination of ANFIS and LBP is used
to calculate the vesselness measure of each pixel in retinal
images. A proper and simple procedure is applied in the
postprocessing phase to extract the thin and thick vessels
separately. By applying length filter on the thin and thick
vessels and integrating them, the retinal blood vessel net-
work is detected.
The rest of this paper is organized as follows: a brief
review of adaptive neuro-fuzzy inference system and local
binary patterns is presented in Sects. 2 and 3, respectively.
The proposed method for robust blood vessel detection is
presented in Sect. 4. Experimental results are reported in
Sect. 5. Finally, the conclusion is given in Sect. 6.
2 Adaptive neuro-fuzzy inference system
Fuzzy logic, proposed by Zadeh [24], can be used not only
as a control methodology but also as a data processing
tool. Unlike binary logic, which is based on crisp values of
0 ("false") and 1 ("true"), fuzzy logic uses degrees of truth
expressed through membership functions, rules, and fuzzy
logic operators. Membership functions make it possible to
determine the weight of each input in defining the final
output. The final output is obtained using fuzzy "if–then"
rules, which combine the various dependencies between
input variables using fuzzy logic operators.
The most critical issue in fuzzy systems is appropriately
determining their parameters, such as the shape and
location of the membership functions and the composition
of the fuzzy rules. In addition to trial and error, one can
use learning methods such as artificial neural networks to
obtain optimal fuzzy logic parameters from training data.
The adaptive neuro-fuzzy inference system (ANFIS) was
obtained by combining a neural network and a fuzzy
inference system [25]. In ANFIS, either backpropagation
or a combination of least-squares estimation and
backpropagation may be used to estimate the membership
function parameters. Although in a fuzzy inference system
both the premise (if part) and consequence (then part)
of a fuzzy if–then rule can be fuzzy propositions, in
ANFIS the consequence part is a zero- or first-order
polynomial. Such models are called Sugeno-type
fuzzy models [26]. For a first-order Sugeno fuzzy model, a
common rule set with two fuzzy if–then rules is as follows:
Rule 1: if x is A1 and y is B1, then f1 = p1x + q1y + r1    (1)
Rule 2: if x is A2 and y is B2, then f2 = p2x + q2y + r2    (2)
The corresponding equivalent ANFIS structure and its
reasoning mechanism are shown in Fig. 1. This network
has two kinds of nodes: fixed nodes and adaptive nodes.
The adaptive nodes, which are depicted by rectangles,
contain parameters that may be trained using learning
algorithm, while fixed nodes, which are depicted by circles,
are constant and do not contain any parameters.
In Fig. 1, the first layer consists of adaptive neurons
used to determine the degree of membership of each input
in each fuzzy set.
The second layer consists of fixed nodes that simply
perform a multiplication operation:
O2,i = wi = μAi(x) · μBi(y)   for i = 1, 2    (3)
where O2,i is the output of the ith node in layer 2 and μS
is an appropriate parameterized membership function for
fuzzy set S. μAi(x) and μBi(y) are the outputs of the first-
layer nodes that specify the membership value of each
input x or y in its corresponding fuzzy set (Ai for x and Bi
for y), respectively.
In the third layer, fixed nodes are used to normalize
the outputs of the previous layer's nodes as below:
O3,i = w̄i = wi / Σk wk   for i = 1, 2    (4)
The fourth layer consists of adaptive neurons that
compute a weighted first-order polynomial function as
below:
O4,i = w̄i fi = w̄i (pi x + qi y + ri)   for i = 1, 2    (5)
where pi, qi, and ri are parameters obtained during the
training process.
The last layer includes a single fixed neuron that
collects the outputs of the nodes in the previous layer:
Overall output = O5,1 = Σi w̄i fi    (6)
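The forward pass through the five layers (Eqs. 3–6) can be sketched in a few lines. The Gaussian membership functions and all parameter values below are illustrative assumptions, not trained values from the paper:

```python
import numpy as np

def gauss_mf(x, c, sigma):
    """Gaussian membership function (an illustrative choice of mu)."""
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

def anfis_forward(x, y, premise, consequent):
    """Forward pass of the two-rule first-order Sugeno ANFIS (Eqs. 3-6)."""
    # Layers 1-2: firing strengths w_i = mu_Ai(x) * mu_Bi(y)   (Eq. 3)
    w = np.array([gauss_mf(x, cA, sA) * gauss_mf(y, cB, sB)
                  for (cA, sA), (cB, sB) in premise])
    # Layer 3: normalized firing strengths w_bar_i              (Eq. 4)
    w_bar = w / w.sum()
    # Layer 4: rule outputs f_i, weighted by w_bar_i            (Eq. 5)
    f = np.array([p * x + q * y + r for p, q, r in consequent])
    # Layer 5: overall output = sum_i w_bar_i * f_i             (Eq. 6)
    return float(np.dot(w_bar, f))

# Illustrative parameters, not values from the paper
premise = [((0.0, 1.0), (0.0, 1.0)),   # rule 1: (cA, sigmaA), (cB, sigmaB)
           ((1.0, 1.0), (1.0, 1.0))]   # rule 2
consequent = [(1.0, 1.0, 0.0),         # f1 = x + y
              (2.0, 0.0, 1.0)]         # f2 = 2x + 1
print(anfis_forward(0.5, 0.5, premise, consequent))  # → 1.5 (both rules fire equally)
```

At x = y = 0.5 the two symmetric premises fire equally, so the output is the plain average of the two rule outputs, illustrating that the final value is always a convex combination of the fi.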
In this paper, a hybrid learning method is used to
estimate the parameters of the adaptive neurons. In this type of
learning, the parameters of the neurons in the first layer
are first set to fixed random values. Then the parameters
of the neurons in the fourth layer are trained with the
least-squares error method. In the next step, these trained
parameters are treated as constants, and the neurons in the
first layer are trained with the error backpropagation
gradient descent algorithm. These steps are iterated until
the stopping condition is satisfied.
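The least-squares half of this hybrid rule has a closed form: once the first-layer (premise) parameters are frozen, the overall output Σi w̄i(pi x + qi y + ri) is linear in the consequent parameters. A numpy sketch under that assumption (the synthetic data and rule count are illustrative):

```python
import numpy as np

def lse_consequent(X, Y, targets, w_bar):
    """Least-squares step of the hybrid learning rule (a sketch).

    With the premise parameters frozen, the ANFIS output is linear in
    (p_i, q_i, r_i), so the fourth-layer parameters can be solved
    in closed form.
    """
    N, R = w_bar.shape
    # Design matrix columns: [w_bar_i*x, w_bar_i*y, w_bar_i] per rule
    A = np.hstack([np.column_stack((w_bar[:, i] * X,
                                    w_bar[:, i] * Y,
                                    w_bar[:, i]))
                   for i in range(R)])
    theta, *_ = np.linalg.lstsq(A, targets, rcond=None)
    return theta.reshape(R, 3)   # rows of (p_i, q_i, r_i)

# Synthetic check: recover known consequent parameters
rng = np.random.default_rng(0)
N = 50
X, Y = rng.normal(size=N), rng.normal(size=N)
w = rng.uniform(0.1, 1.0, size=(N, 2))
w_bar = w / w.sum(axis=1, keepdims=True)        # normalized firing strengths
true = np.array([[1.0, -2.0, 0.5], [0.3, 0.7, -1.0]])
t = sum(w_bar[:, i] * (true[i, 0] * X + true[i, 1] * Y + true[i, 2])
        for i in range(2))
est = lse_consequent(X, Y, t, w_bar)
```

Because the targets are generated exactly by the linear model, the least-squares step recovers the true parameters up to floating-point error.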
3 Local binary pattern operator
Local binary pattern, which was proposed by Ojala et al.
[27, 28], is a very effective multi-resolution statistical
and structural texture primitives descriptor that can be
applied in many applications such as face recognition
[29], fingerprint classification [30], and remote sensing
analysis [31]. In the LBP operator, the primitive patterns
are extracted by comparing the values of P equally spaced
neighborhood points (gi, i = 0 to P−1) on a circle of
radius R with the value of the central pixel (gc). The
primitive patterns are represented with binary codes
(BCP,R):
BCP,R(i) = 1 if gi ≥ gc; 0 if gi < gc;   i = 0, 1, …, P−1    (7)
If the position of a neighborhood point (gi) does not
fall at the center of a pixel, we round it to the center of
the nearest pixel. Alternatively, one can obtain the value of gi
by interpolating the corresponding pixels. The
rounding is used to speed up the calculation of the
proposed LBP. In the classical LBP (LBPriu2) [28], only
uniform patterns are selected as local texture features.
The uniform patterns contain at most two bitwise
transitions from 0 to 1 or vice versa in the obtained
binary code (T(BCP,R)) when it is considered as a circular
structure:
LBPriu2(P,R) = Σ(i=0..P−1) BCP,R(i)   if T(BCP,R) ≤ 2;
               P + 1                   otherwise    (8)
where
T(BCP,R) = |BCP,R(P−1) − BCP,R(0)| + Σ(i=0..P−2) |BCP,R(i) − BCP,R(i+1)|    (9)
The uniformity measure T corresponds to the number of
transitions from 0 to 1 or from 1 to 0 between successive
bits in circular representation of the obtained binary code
(BCP,R). The superscript ‘‘riu2’’ refers to the use of rota-
tion-invariant uniform patterns that have a T value of at
most two. The classical LBP is rotation invariant, because
it assigns a unique label to each pattern based on the
number of its ‘‘1’’ bits, and the placement of ‘‘1’’ bits does
Fig. 1 Two ANFIS fuzzy rules. a ANFIS structure. b Reasoning or membership functions
not have any effect on the LBP outputs. An example of
calculating the LBP value is shown in Fig. 2.
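Given the P neighbor values of Eq. 7, the classical operator of Eqs. 8–9 is straightforward to transcribe. The sketch below works on precomputed neighbor samples and omits the circle sampling and rounding discussed above:

```python
def binary_code(neighbors, center):
    """BC_{P,R}: threshold the P neighbor values at the center value (Eq. 7)."""
    return [1 if g >= center else 0 for g in neighbors]

def transitions(bc):
    """Uniformity measure T: 0/1 changes in the circular code (Eq. 9)."""
    return sum(abs(bc[i] - bc[(i + 1) % len(bc)]) for i in range(len(bc)))

def lbp_riu2(neighbors, center):
    """Classical rotation-invariant uniform LBP (Eq. 8)."""
    bc = binary_code(neighbors, center)
    return sum(bc) if transitions(bc) <= 2 else len(bc) + 1

# An edge pattern (uniform, T = 2) is labeled by its number of '1' bits
print(lbp_riu2([9, 9, 9, 9, 1, 1, 1, 1], 5))  # → 4
# An alternating pattern (T = 8) collapses to the single label P + 1
print(lbp_riu2([9, 1, 9, 1, 9, 1, 9, 1], 5))  # → 9
```

Rotating the neighbor list does not change the label, which is exactly the rotation invariance described above.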
By applying this operator, only uniform patterns such as
flat areas, spots, corners, line-ends, and edges, which are
shown in Fig. 3, can be extracted; all non-uniform
patterns are merged into a single pattern with label P + 1.
Since the blood vessel structure in retinal images is a
line pattern with a T value greater than 2, the classical
LBPriu2 cannot describe it efficiently. Therefore, we used
an extension of LBP (LBPNE), proposed by the authors [32],
which can describe line patterns efficiently. The
formulation of this version of LBP is given below:
LBPNE(P,R) = Σ(i=0..P−1) BCP,R(i)           if T(BCP,R) < 4;
             P − 1 + Σ(i=0..P−1) BCP,R(i)   if T(BCP,R) = 4;
             2P − 5 + T(BCP,R)/2            if T(BCP,R) > 4    (10)
Since patterns with line-shaped structures have four
bitwise transitions in their binary code (T(BCP,R) = 4), as
shown in Fig. 4, this version of LBP treats line patterns
separately. Also, instead of assigning one label to all of
the other non-uniform patterns, we use one label for each
group of them that has the same bitwise transition (T) value.
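Eq. 10 can be transcribed directly. The sketch below again assumes the P neighbor values are already sampled; note how a line-shaped pattern (T = 4) receives its own label range instead of the shared P + 1 label:

```python
def lbp_ne(neighbors, center):
    """The extended LBP^NE operator of Eq. 10.

    T < 4 : uniform patterns, labeled by their number of '1' bits
    T = 4 : line-shaped patterns, labeled P - 1 + number of '1' bits
    T > 4 : remaining non-uniform patterns, one label per T value
    """
    P = len(neighbors)
    bc = [1 if g >= center else 0 for g in neighbors]
    T = sum(abs(bc[i] - bc[(i + 1) % P]) for i in range(P))
    ones = sum(bc)
    if T < 4:
        return ones                   # uniform pattern
    if T == 4:
        return P - 1 + ones           # line-shaped pattern
    return 2 * P - 5 + T // 2         # grouped by transition count

# A line through the center (two opposite bright neighbors): T = 4
print(lbp_ne([9, 1, 1, 1, 9, 1, 1, 1], 5))  # → 9  (P - 1 + 2 for P = 8)
# An edge pattern stays a uniform label, as in the classical LBP
print(lbp_ne([9, 9, 9, 9, 1, 1, 1, 1], 5))  # → 4
```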
By employing different values for P and R, we can
extract multi-resolution patterns as shown in Fig. 4. The
value of R (R > 0) is the radius of the circle on which P
(P > 1) equally spaced neighbor points are considered
to extract the LBP values. Although any value of P can
be used, the best value for P equals the number of
pixels on the perimeter of the corresponding
circle, so that all vessel points are utilized for extracting
the LBP values. By detecting multi-resolution patterns in
the retinal images, efficient feature vectors for blood
vessels with different diameters can be extracted easily.
4 The proposed blood vessel detection method
In this paper, a robust method for automatic blood vessel
extraction is introduced. In the first step, a new and
efficient rotation-invariant LBP operator (LBPNE(P,R)) with
different values of P and R is applied to extract a multi-scale
feature vector for every pixel in the retinal image. Next, the
obtained feature vectors are fed to the trained ANFIS
to indicate the vesselness value of each pixel. Then thin
and thick vessels are separately extracted by applying
simple and proper postprocessing procedures. Finally, the
blood vessel network is obtained by applying a simple
logical OR operation on the detected thin and thick vessels.
The details of these steps are given in the remainder of this
section. The flowchart of the proposed system is
depicted in Fig. 5.
4.1 LBP feature extraction
When the colored retinal images and their red,
green, and blue channels are visualized separately, as
shown in Fig. 6, the green channel shows the best vessel/
background contrast. Therefore, this channel is selected to
be processed by the LBPNE operators. Since the width of
blood vessels in retinal images of about
700 × 600 pixels is usually in the range of 2 to 10 pixels,
the LBPNE operators at three scales, LBPNE(18,3), LBPNE(32,5), and
LBPNE(48,9), are applied to each pixel to cover all vessel
widths. For other data sets whose maximum vessel
width is greater than 10 pixels, these parameters should
be set based on the maximum vessel width so as to
span all vessel widths. Another choice is to employ a
resizing algorithm to resize the images to about
700 × 605 pixels. The obtained values of these LBP
operators and their corresponding bitwise transition
(T) values are used as the multi-scale feature vector:
LBP Feature Vector = {LBPNE(18,3), T(BC18,3), LBPNE(32,5), T(BC32,5), LBPNE(48,9), T(BC48,9)}    (11)
This feature vector, which can reflect the characteristics
of different vessels, is extracted for all pixels in the retinal
images and then applied to the trained ANFIS (see Sect. 4.2)
to estimate their vesselness values.
4.2 Vesselness degree measurement using ANFIS
To estimate the vesselness value of each pixel, an adap-
tive neuro-fuzzy inference system (ANFIS) is employed.
The architecture of the used ANFIS is shown in Fig. 7.
The training data set is directly extracted from real
retinal images. To this end, we selected five images from
the training set of the DRIVE data set [34] and randomly
selected 100,000 vessel and non-vessel points
from the selected training images. For each point, a fea-
ture vector as explained in Eq. 11 was extracted.
Fig. 2 The details of obtaining the LBP value
The output of the ANFIS is set in the range of −1 to 1: the
value 1 for vessel and −1 for background. The ANFIS is
trained using a combination of the least-squares and the
backpropagation gradient descent methods to emulate the
training data set. In this type of learning, the parameters
of the input membership functions (IMF neurons) are first
set to fixed random values. Then the parameters of the
output membership functions (OMF neurons) are trained
with the least-squares error method. In the next step, these
trained parameters are treated as constants, and the
neurons in the IMF layer are trained with the error backprop-
agation gradient descent algorithm. These steps are iterated
until the stopping condition is satisfied.
In the test phase, the LBP feature vectors are
extracted for all pixels in the input retinal images and
applied to the trained ANFIS to indicate their vesselness
degrees. To reduce the effect of noise, a simple
uniform averaging filter with a 5 × 5 kernel is
applied to the obtained vesselness values. The result of
this step, shown in Fig. 6e, is used to enhance and detect
thin and thick vessels separately.
4.3 Thin vessel enhancement
To extract thin vessels, the morphological top-hat operator
with suitable circular structuring elements is employed.
Circular structuring elements of radii 2 and 4 are applied to
the obtained vesselness values to highlight the thin vessels
within a specific range of widths. The final thin vessels are
extracted by applying a global threshold, proposed by
Otsu [33], which selects the threshold value that
minimizes the intraclass variance in the output binary
images. Since several small non-vessel regions may be
extracted, a proper length filter is also applied to
eliminate them. The result of this phase is shown
in Fig. 6f.
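Otsu's rule minimizes the intraclass variance, which is equivalent to maximizing the between-class variance over the histogram. A minimal numpy sketch of the thresholding step (the morphological top-hat is omitted, and the bimodal sample data are synthetic):

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Otsu's global threshold: minimize the intraclass variance,
    equivalently maximize the between-class variance over the histogram."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    omega = np.cumsum(p)                # probability of the low class
    mu = np.cumsum(p * centers)         # cumulative mean of the low class
    mu_t = mu[-1]                       # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)    # degenerate splits contribute nothing
    return centers[np.argmax(sigma_b)]

# Synthetic bimodal "vesselness" sample: the threshold separates the modes
rng = np.random.default_rng(1)
vals = np.concatenate([rng.normal(0.2, 0.03, 500),   # background-like values
                       rng.normal(0.8, 0.03, 500)])  # vessel-like values
thr = otsu_threshold(vals)
```

On this well-separated sample the returned threshold falls between the two clusters, so thresholding at it splits the values into exactly the two generating groups.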
4.4 Thick vessel enhancement
The thick vessels are extracted by applying a proper
thresholding process followed by a simple length filtering.
Since the ratio of blood vessel pixels in retinal images
is less than 15 %, the threshold value (TV) is chosen to
be greater than 85 % of the existing vesselness
values. For this purpose, we use the cumulative density
Fig. 3 The uniform structures: flat area, spot, corner, line-end, and edge patterns for P = 8. Dark circles indicate 0 and white circles indicate 1
Fig. 4 Multi-resolution line patterns for (P = 16, R = 3), (P = 12, R = 2), and (P = 8, R = 1), from left to right. Dark circles indicate 0 and white circles indicate 1
Fig. 5 The flowchart of the proposed method
function (CDF) of the obtained vesselness values to obtain
the threshold value as below:
TV = arg_k {CDF(k) = 0.85}    (12)
where k is the quantized vesselness value. After applying
the obtained threshold value, a proper length filter is also
applied to eliminate small regions. The obtained results for
thick vessels are shown in Fig. 6g.
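Eq. 12 amounts to reading the 85th percentile off the quantized CDF. A numpy sketch under that reading (the bin count and test data are illustrative assumptions):

```python
import numpy as np

def cdf_threshold(vesselness, fraction=0.85, nbins=256):
    """Eq. 12: pick TV so that CDF(TV) reaches `fraction` over the
    quantized vesselness values (a sketch)."""
    hist, edges = np.histogram(vesselness, bins=nbins)
    cdf = np.cumsum(hist) / hist.sum()
    k = np.searchsorted(cdf, fraction)   # first bin where CDF >= fraction
    return edges[k + 1]                  # upper edge of that bin

# On uniformly spread values the threshold sits near the 85th percentile
vals = np.linspace(0.0, 1.0, 10001)
tv = cdf_threshold(vals)
```

About 15 % of the values then exceed TV, matching the assumption that vessel pixels make up less than 15 % of the image.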
4.5 Label filtering for small region removal
To eliminate small regions, connected component labeling
is used to identify individual objects in the thin and thick
vessel images. Connected component labeling is a simple
image analysis technique that scans an image pixel by
pixel and groups pixels into components based on
pixel connectivity. The label filtering is employed to isolate
the individual objects by using the 4-connected neighborhood
Fig. 6 The obtained results for different steps of the proposed method. a Color image. b Red channel of image. c Green channel of image.
d Blue channel of image. e Smoothed vesselness result. f Detected thin vessels. g Detected thick vessels. h Final detected vessel network
Fig. 7 The architecture of the
trained ANFIS
and label propagation. The number of pixels in a labeled
component is used as a measure of the length of the
region. If the area of a region is smaller than a certain
value, that region is removed. We experimentally
tried different values for eliminating small regions from the
thin and thick vessels and found that the best limits
for thin vessels are 60 and 150 pixels (for radii 2 and 4,
respectively) and for thick vessels 300 pixels. These values
were obtained for retinal images of about 700 × 600 pixels.
The details of these experiments are given in Sect. 5.2.
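The label filtering step can be sketched with a breadth-first search over the 4-connected neighborhood; the toy image and area limit below are illustrative, not the 60/150/300-pixel limits used for the real images:

```python
from collections import deque

def length_filter(binary, min_area):
    """Remove 4-connected components smaller than min_area pixels
    (a plain-Python sketch of the label filtering step)."""
    h, w = len(binary), len(binary[0])
    out = [[0] * w for _ in range(h)]
    seen = [[False] * w for _ in range(h)]
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and not seen[sy][sx]:
                # Breadth-first label propagation over one component
                comp, queue = [], deque([(sy, sx)])
                seen[sy][sx] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) >= min_area:        # keep only long-enough regions
                    for y, x in comp:
                        out[y][x] = 1
    return out

# A 5-pixel "vessel" survives, a 2-pixel speckle is removed (min_area = 3)
img = [[1, 1, 1, 1, 1, 0],
       [0, 0, 0, 0, 0, 0],
       [0, 1, 1, 0, 0, 0]]
res = length_filter(img, 3)
```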
4.6 Final vessel network detection
The final blood vessels are obtained by integrating the
thin and thick networks using the logical OR function. The
final blood vessel network is shown in Fig. 6h.
5 Experimental results
In the first section of our experiments, the effect of
different parameters of the proposed method was evaluated on
images from the publicly available DRIVE database [34]. The
DRIVE database consists of 40 images along with manual
segmentations of the vessels. It has been divided into training and
test sets, each of which contains 20 images. These images were
captured in digital form using a Canon CR5 3CCD camera at a
45° field of view (FOV). The images are 565 × 584 pixels
with 8 bits per color channel.
To evaluate the proposed method, we used detection
accuracy (ACC), true positive rate (TPR), and false posi-
tive rate (FPR) as performance measures. The ACC is
defined as the ratio of the number of correctly classified
pixels to the total number of existing pixels. The TPR is
defined as the ratio of the number of correctly detected
vessel pixels to the total number of vessel pixels that exist
in the ground truth images. The FPR is defined as the
ratio of the number of non-vessel pixels that were classified
as vessels to the total number of non-vessel pixels.
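The three measures defined above can be computed directly from the binary prediction and ground-truth masks; the tiny example vectors are illustrative:

```python
import numpy as np

def segmentation_rates(pred, truth):
    """ACC, TPR, and FPR exactly as defined above (boolean masks)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.count_nonzero(pred & truth)     # correctly detected vessel pixels
    fp = np.count_nonzero(pred & ~truth)    # non-vessel pixels marked as vessel
    acc = np.count_nonzero(pred == truth) / pred.size
    tpr = tp / np.count_nonzero(truth)
    fpr = fp / np.count_nonzero(~truth)
    return acc, tpr, fpr

# Toy 5-pixel example: 1 hit, 1 miss, 1 false alarm, 2 correct rejections
truth = [1, 1, 0, 0, 0]
pred = [1, 0, 1, 0, 0]
acc, tpr, fpr = segmentation_rates(pred, truth)
```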
The training of the ANFIS was done using images 1–5 of
the training set of the DRIVE database. A combination of
the least-squares method and the backpropagation gradient
descent method for training the FIS membership function
parameters was applied to emulate the given training data
set, as explained in Sect. 4.2. The images hand-labeled by
the first human expert were used as ground truth.
5.1 Experiment on CDF-based thresholding
This experiment evaluates the effect of the CDF-based
threshold on the proposed method. This threshold was
applied to the vesselness values to extract thick vessels.
We evaluated the TPR and FPR of the proposed method
when different values of the CDF-based threshold were used.
We used images 6–20 from the training set of the
DRIVE database in this experiment. Figure 8 illustrates the
obtained results. As the figure shows, when the value of K was
reduced from 1 to 0.85, the variation in TPR was larger than
that in FPR; from 0.85 to 0.7, the TPR remained fixed and
only the FPR increased. Therefore, a good trade-off between
the TPR and FPR values is obtained when the CDF equals
0.85. Based on this experiment, we used the vesselness value
(k) of this point as the threshold for detecting thick
vessels. The ROC curve of the proposed method, obtained
from this figure by varying only the CDF threshold, is
shown in Fig. 9 for better understanding of
the effect of the CDF threshold.
5.2 Experiment on the size of length filters
To evaluate the effect of length filtering on the proposed
method, we applied different values from 0 to 500 pixels
for thin and thick vessels. We separately calculated the
accuracy (ACC) of thin and thick vessels for the different
length-filter values. In this experiment, images 6–20 from
the training set of the DRIVE database were used.
Figure 10 illustrates the obtained results. As the figure
shows, the best accuracy for thin vessels with radii 2
and 4 is obtained when the size of the length filter equals
60 and 150 pixels, respectively. The best accuracy for
thick vessels is obtained when the size of the length filter
equals 300 pixels.
Fig. 8 The effect of the CDF thresholding parameter on the TPR and FPR values of the proposed method. The proper value is emphasized with a circle
5.3 Experiment on feature vector
To evaluate the benefit of the proposed LBP operator and
the selected feature vector, we implemented the proposed
method using different feature vectors obtained from RGB
values, LBPriu2 values, and the proposed LBP (LBPNE)
values. For both LBP operators, not only the obtained LBP
values but also the combination of the LBP values and their
transition values (T) were used as feature vectors. In these
experiments, images 1–5 from the training set of the DRIVE
database were used to train the ANFIS, and all images in the
test set of the DRIVE database were used as test samples.
The obtained results are shown in Table 1. From these
results, it is clear that using the LBP operator is superior to
using RGB values. The highest performance was achieved
when the combination of the proposed LBP values (LBPNE)
and their transition values (T) was used. It is better in all
performance measures, and its TPR is 2 % higher than that
of the classical LBP (LBPriu2).
5.4 Comparison with other methods using the DRIVE
database
To emphasize the ability of the proposed method, we
compared it with some state-of-the-art blood vessel detection
methods on all images in the test set of the DRIVE data-
base [34]. For this purpose, the methods proposed by
Chaudhuri et al. [1], Niemeijer et al. [3], Jiang et al. [9],
Zhang et al. [10], Delibasis et al. [13], Soares et al. [17],
Stall et al. [18], Mendonca et al. [21], Palomera-Perez et al.
[22], and Martinez-Perez et al. [23] were used for com-
parison. The results of the other methods can be obtained from
the DRIVE database web site [34] or from their original
papers. These results are summarized in Table 2.
The TPR of the proposed method is higher than that of the
others, while its FPR does not exceed 3.91 %. For better
comparison, the results obtained on image 16 of the DRIVE
database by the proposed method and some state-of-the-art
methods are shown in Fig. 11.
5.5 Comparison with other methods using the STARE
database
The proposed method was also compared with some
state-of-the-art methods on the STARE database [2]. We
selected 20 images from the STARE database, ten of
which contain pathology. These images were captured in
digital form using a TopCon TRV-50 fundus camera at a 35°
field of view (FOV). The images are 700 × 605 pixels
with 8 bits per color channel. Two
observers manually segmented all images. The perfor-
mance of all methods is compared with the first observer as
ground truth. The previously trained ANFIS, which was
trained using DRIVE images, was used again to assess the
robustness of the proposed method. In this experiment, the
methods proposed by Chaudhuri et al. [1], Hoover et al.
[2], Stall et al. [18], Soares et al. [17], Martinez-Perez et al.
[23], Mendonca et al. [21], Palomera-Perez et al. [22], and
Zhang et al. [10] were used for comparison. The results of
the other methods are extracted from their original papers. The
obtained results are presented in Table 3.
In these results, the TPR of the proposed method is
75.9 %, higher than that of the others, while its
FPR does not exceed 4.4 %. The accuracy
of the proposed method is similar to that of the others.
Fig. 9 The obtained ROC curve of the proposed method on images 6–20 of the training set of the DRIVE database, obtained by changing only the value of CDF(K) from 1 to 0.7. The true and false positive rates of the second human observer are indicated with a star
Fig. 10 The effect of the length filtering parameter on the accuracy values of the proposed method. The proper values are emphasized with circles
Table 1 The obtained results of the proposed method for different setups on the DRIVE database
Method True positive rate (%) False positive rate (%) Accuracy (%)
ANFIS + RGB 61.3 7.6 88.4
ANFIS + LBPriu2 69.4 4.8 92.1
ANFIS + LBPriu2 + T 72.1 4.1 93.2
ANFIS + LBPNE 70.2 4.4 92.7
ANFIS + LBPNE + T 74.4 3.9 94.2
Table 2 The obtained vessel extraction performance of all methods on the DRIVE database
Method True positive rate False positive rate Accuracy
Non-expert human 0.7761 0.0275 0.9473
Chaudhuri et al. [1] 0.6168 0.0259 0.9284
Niemeijer et al. [3] 0.6793 0.0199 0.9416
Jiang et al. [9] 0.6478 0.0375 0.9222
Zhang et al. [10] 0.7120 0.0276 0.9382
Delibasis et al. [13] 0.6731 0.0241 0.9377
Soares et al. [17] 0.7283 0.0212 0.9466
Stall et al. [18] 0.7192 0.0227 0.9442
Mendonca et al. [21] 0.7344 0.0236 0.9452
Palomera-Perez et al. [22] 0.6600 0.0380 0.9220
Martinez-Perez et al. [23] 0.7246 0.0345 0.9344
Proposed method 0.7442 0.0391 0.9418
Fig. 11 The results of different methods on the image 16 from
the database DRIVE. a Original image. b Reference vessel image.
c The result of Chaudhuri et al. [1] method. d The result of Niemeijer
et al. [3] method. e The result of Jiang et al. [9] method. f The result
of Stall et al. [18] method. g The result of Martinez-Perez et al. [23]
method. h The result of the proposed method
The results obtained by the proposed method on four
images of the STARE database are also shown in Fig. 12.
Since in this experiment the test set and training set are
completely independent, the obtained results demonstrate
the robustness of the proposed method.
To perform a fair comparison, the TPR values of the
proposed method and some state-of-the-art methods at the
same FPR values on both the DRIVE and STARE
databases are presented in Tables 4 and 5. The methods
proposed by Chaudhuri et al. [1], Hoover et al. [2],
Niemeijer et al. [3], Zana et al. [6], Jiang et al. [9], Zhang
et al. [10], Delibasis et al. [13], Soares et al. [17], Stall et al.
[18], Palomera-Perez et al. [22], and Martinez-Perez et al.
[23] were used. For each method, the TPR value was
extracted directly from its ROC curve. From these tables, the
proposed method has high TPR values compared to most
existing methods and competes with the best existing
method on both the DRIVE and STARE databases. Its
average TPR value is greater than 75 %.
Furthermore, the proposed method has a low com-
putational cost and competes with existing fast methods,
as shown in Table 6. Without optimization of its MATLAB code,
Table 3 The obtained vessel extraction performance of all methods on the STARE database
Method True positive rate False positive rate Accuracy
Second observer 0.8949 0.0610 0.9354
Chaudhuri et al. [1] 0.6134 0.0245 0.9384
Hoover et al. [2] 0.6751 0.0433 0.9267
Stall et al. [18] 0.6970 0.0190 0.9516
Soares et al. [17] 0.7165 0.0252 0.9480
Martinez-Perez et al. [23] 0.7506 0.0431 0.9410
Mendonca et al. [21] 0.6996 0.0270 0.9440
Palomera-Perez et al. [22] 0.7790 0.0551 0.9240
Zhang et al. [10] 0.7177 0.0247 0.9484
Proposed method 0.7588 0.0435 0.9414
Fig. 12 The obtained results on four images of the STARE database. Top the original images. Middle the reference images. Bottom the obtained
results of the proposed method
S172 Neural Comput & Applic (2013) 22 (Suppl 1):S163–S174
123
it will take about 3.7 min to process one image in the
DRIVE database and 4.3 min to process one image in the
STARE database on a PC with a Pentium-IV 3.2 GHz CPU
and 2.0 GB RAM. These running times are obtained by
averaging the running times of all images of the DRIVE
and STARE databases. In real applications, the computa-
tion time can be significantly reduced by implementing the
algorithm in C/C?? programming.
6 Conclusion
In this paper, we proposed a novel and easy-to-implement
algorithm for automatic blood vessel extraction, which
combines a multi-resolution LBP operator with an adaptive
neuro-fuzzy inference system. Since it uses multi-scale
features obtained with LBP, vessels of different thicknesses
and orientations can be detected efficiently. In the proposed
method, the thin and thick blood vessels are extracted
separately by applying a top-hat transform, simple
thresholding, and length filtering. The final vessel map is
obtained by combining the thin and thick vessels with a
logical OR.
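As a concrete illustration of the postprocessing just described, the thin/thick separation can be sketched as follows. This is not the authors' code: the structuring-element sizes, the use of Otsu's method [33] for the simple thresholding, and the component-pixel-count proxy for length filtering are assumptions chosen for the sketch.

```python
import numpy as np
from scipy import ndimage as ndi

def otsu_threshold(img, bins=256):
    # Otsu's method [33]: pick the threshold maximising the
    # between-class variance of the grey-level histogram.
    hist, edges = np.histogram(img, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                 # class-0 probability
    mu = np.cumsum(p * centers)       # class-0 cumulative mean
    mu_t = mu[-1]                     # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    return centers[np.nanargmax(sigma_b)]

def extract_vessels(vesselness, thin_size=3, thick_size=9, min_len=30):
    # Top-hat transforms with a small and a large structuring element
    # highlight thin and thick vessels separately.
    maps = [ndi.white_tophat(vesselness, size=thin_size),
            ndi.white_tophat(vesselness, size=thick_size)]
    masks = []
    for m in maps:
        bw = m > otsu_threshold(m)                 # simple thresholding
        labels, n = ndi.label(bw)                  # connected components
        sizes = np.asarray(ndi.sum(bw, labels, range(1, n + 1)))
        # Length filtering: drop segments shorter than min_len pixels.
        keep_ids = 1 + np.flatnonzero(sizes >= min_len)
        masks.append(np.isin(labels, keep_ids))
    return masks[0] | masks[1]                     # logical OR of thin/thick
```

The final OR corresponds to the combination step in the text; in the actual method the vesselness input would be the ANFIS output rather than a raw image.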
Experiments on test images from the DRIVE and STARE
databases were conducted to assess the performance of the
proposed method in comparison with some of the best
state-of-the-art methods. The proposed method is
competitive with or better than these methods. On the
DRIVE and STARE databases, its TPR is 74.4 and 75.9 %,
respectively, at FPR values of 3.9 and 4.3 %. The overall
accuracy of the proposed method is greater than 94 %. Its
running time also competes with existing fast methods: it
can process one image in about 3.7 min.
To improve the performance of the proposed method and
reduce its FPR, a more sophisticated postprocessing
procedure is needed, along with a more efficient LBP
operator that can capture line, junction, and bifurcation
patterns. We will investigate these aspects in future work.
References
1. Chaudhuri S, Chatterjee S, Katz N, Nelson M, Goldbaum M
(1989) Detection of blood vessels in retinal images using two-
dimensional matched filters. IEEE Trans Med Imaging 8(3):
263–269
2. Hoover A, Kouznetsova V, Goldbaum M (2000) Locating blood
vessels in retinal images by piecewise threshold probing of a
matched filter response. IEEE Trans Med Imaging 19(3):203–210
3. Niemeijer M, Staal JJ, VanGinneken B, Loog M, Abramoff MD
(2004) Comparative study of retinal vessel segmentation methods
on a new publicly available database. SPIE Med Imaging 5370:
648–656
4. Sopharak A, Uyyanonvara B, Barman S, Williamson TH (2008)
Automatic detection of diabetic retinopathy exudates from non-
dilated retinal images using mathematical morphology methods.
Comput Med Imaging Graph 32(8):720–727
Table 4 A comparison of the proposed method with some state-of-
the-art methods at the same FPR value on the DRIVE database

Method  TPR (at FPR = 3.9 %)
Chaudhuri et al. [1]  67.3 %
Niemeijer et al. [3]  74.0 %
Zana et al. [6]  69.7 %
Jiang et al. [9]  65.5 %
Zhang et al. [10]  73.6 %
Delibasis et al. [13]  72.8 %
Soares et al. [17]  79.8 %
Staal et al. [18]  78.3 %
Palomera-Perez et al. [22]  66.2 %
Proposed method  74.4 %
Table 5 A comparison of the proposed method with some state-of-
the-art methods at the same FPR value on the STARE database

Method  TPR (at FPR = 4.3 %)
Chaudhuri et al. [1]  67.1 %
Hoover et al. [2]  67.5 %
Zhang et al. [10]  74.3 %
Soares et al. [17]  78.1 %
Staal et al. [18]  80.3 %
Martinez-Perez et al. [23]  75.0 %
Proposed method  75.9 %
Table 6 The average running time of different methods measured for
an image from the DRIVE database

Method | Time (min) | PC | Software
Mendonca et al. [21] | 3 | – | MATLAB
Palomera-Perez et al. [22] | 11.5 | Pentium-IV PC 2.6 GHz, 500 MB RAM | C++
Soares et al. [17] | 3.2 | PC 2167 MHz, 1.0 GB RAM | MATLAB
Staal et al. [18] | 15 | Pentium-III PC 1.0 GHz, 1.0 GB RAM | MATLAB
Proposed method | 3.7 | Pentium-IV PC 3.2 GHz, 2.0 GB RAM | MATLAB
5. Doi K (2007) Computer-aided diagnosis in medical imaging:
historical review, current status and future potential. Comput Med
Imaging Graph 31(4–5):198–211
6. Zana F, Klein JC (2001) Segmentation of vessel-like patterns
using mathematical morphology and curvature evaluation. IEEE
Trans Image Process 10(7):1010–1019
7. Matsopoulos GK, Asvestas PA, Delibasis KK, Mouravliansky NA,
Zeyen TG (2008) Detection of glaucomatous change based on vessel
shape analysis. Comput Med Imaging Graph 32(3):183–192
8. Lin T, Zheng Y (2003) Node-matching-based pattern recognition
method for retinal blood vessel images. Opt Eng 42(11):3302–3306
9. Jiang X, Mojon D (2003) Adaptive local thresholding by verifi-
cation based multi threshold probing with application to vessel
detection in retinal images. IEEE Trans Pattern Anal Mach Intell
25(1):131–137
10. Zhang B, Zhang L, Zhang L, Karray F (2010) Retinal vessel
extraction by matched filter with first-order derivative of
Gaussian. Comput Biol Med 40:438–445
11. Narasimha-Iyer H, Mahadevan V, Beach JM, Roysam B (2008)
Improved detection of the central reflex in retinal vessels using a
generalized dual-Gaussian model and robust hypothesis testing.
IEEE Trans Inf Technol Biomed 12(3):406–410
12. Zhou L, Rzeszotarsk MS, Singerman LJ, Chokreff JM (1994) The
detection and quantification of retinopathy using digital angio-
grams. IEEE Trans Med Imaging 13(4):619–626
13. Delibasis KK, Kechriniotis AI, Tsonos C, Assimakis N (2010)
Automatic model-based tracing algorithm for vessel segmenta-
tion and diameter estimation. Comput Methods Programs Bio-
med. doi:10.1016/j.cmpb.2010.03.004
14. Adel M, Moussaoui A, Rasigni M, Bourennane S, Hamami L
(2010) Statistical-based tracking technique for linear structures
detection: application to vessel segmentation in medical images.
IEEE Signal Process Lett 17(6):555–558
15. Vlachos M, Dermatas E (2010) Multi-scale retinal vessel seg-
mentation using line tracking. Comput Med Imaging Graph
34(3):213–227
16. Perfetti R, Ricci E, Casali D, Costantini G (2007) Cellular neural
networks with virtual template expansion for retinal vessel seg-
mentation. IEEE Trans Circuits Syst II 54:141–145
17. Soares JVB, Leandro JJG, Cesar Jr RM, Jelinek HF, Cree MJ
(2006) Retinal vessel segmentation using the 2-D Gabor wavelet
and supervised classification. IEEE Trans Med Imaging
25:1214–1222
18. Staal JJ, Abramoff MD, Niemeijer M, Viergever MA, Van-
Ginneken B (2004) Ridge based vessel segmentation in color
images of the retina. IEEE Trans Med Imaging 23(4):501–509
19. Supot S, Thanapong C, Chuchart P, Manas S (2007) Automatic
segmentation of blood vessels in retinal images based on fuzzy
K-median clustering. In: Proceedings of the IEEE international
conference on integration technology, Shenzhen, China, pp
584–588
20. Garg S, Sivaswamy J, Chandra S (2007) Unsupervised curvature-
based retinal vessel segmentation. In: Proceedings of the IEEE
international symposium on bio-medical imaging pp 344–347
21. Mendonca AM, Campilho A (2006) Segmentation of retinal
blood vessels by combining the detection of centerlines and
morphological reconstruction. IEEE Trans Med Imaging 25(9):
1200–1213
22. Palomera-Perez MA, Martinez-Perez ME, Benitez-Perez H,
Ortega-Arjona JL (2010) Parallel Multiscale feature extraction
and region growing: application in retinal blood vessel detection.
IEEE Trans Inf Technol Biomed 14(2):500–506
23. Martinez-Perez ME, Hughes AD, Thom SA, Bharath AA, Parker
KH (2007) Segmentation of blood vessels from red-free and
fluorescein retinal images. Med Image Anal 11(1):47–61
24. Zadeh LA (1965) Fuzzy sets. Inf Control 8:338–353
25. Jang JSR, Sun CT, Mizutani E (1997) Neuro-fuzzy and soft
computing: a computational approach to learning and machine
intelligence. Upper Saddle River, Prentice Hall
26. Jang JSR (1993) ANFIS: adaptive-network-based fuzzy inference
systems. IEEE Trans Syst Man Cybern 23(3):665–685
27. Ojala T, Pietikainen M, Harwood D (1996) A comparative study
of texture measures with classification based on feature distri-
bution. Pattern Recogn 29:51–59
28. Ojala T, Pietikainen M, Maenpaa T (2002) Multiresolution gray-
scale and rotation invariant texture classification with local binary
patterns. IEEE Trans Pattern Anal Mach Intell 24(7):971–987
29. Ahonen T, Hadid A, Pietikainen M (2006) Face description with
local binary patterns: application to face recognition. IEEE Trans
Pattern Anal Mach Intell 28(12):2037–2041
30. Nanni L, Lumini A (2008) Local binary patterns for a hybrid
fingerprint matcher. Pattern Recogn 41:3461–3466
31. Lucieer A, Stein A, Fisher P (2005) Multivariate texture-based
segmentation of remotely sensed imagery for extraction of
objects and their uncertainty. Int J Remote Sens 26(14):2917–
2936
32. Fathi A, Naghsh-Nilchi AR (2012) Noise tolerant local binary
pattern operator for efficient texture analysis. Pattern Recognit
Lett 33:1093–1100
33. Otsu N (1979) A threshold selection method from gray-level
histograms. IEEE Trans Syst Man Cybern 9(1):62–66
34. http://www.isi.uu.nl/Research/Databases/DRIVE/results.php