Multi-focus Image Fusion: A Benchmark
Xingchen Zhang
Abstract—Multi-focus image fusion (MFIF) has attracted considerable interest due to its numerous applications. While much progress has been made in recent years with efforts on developing various MFIF algorithms, some issues significantly hinder the fair and comprehensive performance comparison of MFIF methods, such as the lack of a large-scale test set and the random choice of objective evaluation metrics in the literature. To solve these issues, this paper presents a multi-focus image fusion benchmark (MFIFB), which consists of a test set of 105 image pairs, a code library of 30 MFIF algorithms, and 20 evaluation metrics. MFIFB is the first benchmark in the field of MFIF and provides the community with a platform to compare MFIF algorithms fairly and comprehensively. Extensive experiments have been conducted using the proposed MFIFB to understand the performance of these algorithms. By analyzing the experimental results, effective MFIF algorithms are identified. More importantly, some observations on the status of the MFIF field are given, which can help to understand this field better.
Index Terms—multi-focus image fusion, image fusion, benchmark, image processing, deep learning
I. INTRODUCTION
Clear images are desirable in computer vision applications. However, it is difficult to have all objects in focus in an image since most imaging systems have a limited depth-of-field (DOF). To be more specific, scene contents within the DOF remain clear while objects outside that area appear blurred. Multi-focus image fusion (MFIF) aims to combine multiple images with different focused areas into a single image in which everything is in focus, as shown in Fig. 1.
MFIF has attracted considerable interest recently, and various MFIF algorithms have been proposed. They can generally be divided into spatial domain-based methods and transform domain-based methods. Spatial domain-based methods operate directly in the spatial domain and can be roughly divided into three categories: pixel-based [1], block-based [2] and region-based [3]. In contrast, transform domain-based methods first transform images into another domain and then perform fusion in that transformed domain; the fused image is then obtained via the inverse transformation. Representative transform domain-based methods are sparse representation (SR) methods [4, 5] and multi-scale methods [6, 7].
In recent years, with the development of deep learning, researchers have begun to solve the MFIF problem with deep learning techniques. Both supervised [9–12] and unsupervised [13–16] MFIF algorithms have been proposed. To be more specific, various deep learning models and methods have been employed, such as CNNs [17, 18], GANs [19] and ensemble learning [20].
X. Zhang is with the Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom. E-mail: [email protected]
Fig. 1. The benefit of multi-focus image fusion. In image 1 the background is not clear, while in image 2 the foreground is not clear. After fusion, both the background and the foreground in the fused image are clear. The fused image is produced by CBF [8].
However, current research on MFIF suffers from several issues, which severely hinder the development of this field. First, there is no well-recognized MFIF benchmark which can be used to compare performance under the same standard. Therefore, it is quite common that different images are utilized in the experiments in the literature, which makes it difficult to fairly compare the performance of various algorithms. Although the Lytro dataset [33] is used frequently, many researchers only choose several image pairs from it in their experiments, resulting in biased results. This is very different from other image processing-related areas like visual object tracking, where several benchmarks [34, 35] are available and every paper has to show results on some of them. Second, as the most widely used dataset, the Lytro dataset only consists of 20 pairs of multi-focus images, which is not enough for a large-scale comparison. Also, Xu et al. [36] showed that the defocus spread effect (DSE) is not obvious in the Lytro dataset, and thus popular methods perform very similarly on it. Third, although many evaluation metrics have been proposed to evaluate image fusion algorithms, none of them is better than all other metrics. As a result, researchers normally choose the several metrics which support their methods in the literature. This makes it non-trivial to compare performances objectively. Table I lists some algorithms published in top journals and conferences along with the number of image pairs, compared algorithms, and evaluation metrics. As can be seen, these works present results of different evaluation metrics on different numbers of image pairs, making it quite difficult to ensure a fair and comprehensive performance comparison.
arXiv:2005.01116v1 [cs.CV] 3 May 2020
TABLE I
SOME MFIF ALGORITHMS PUBLISHED IN TOP JOURNALS AND CONFERENCES. THE NUMBER OF TESTED IMAGE PAIRS, THE NUMBER OF COMPARED ALGORITHMS, AND THE NUMBER OF UTILIZED EVALUATION METRICS ARE ALSO GIVEN. THE DETAILS OF THE PROPOSED MFIFB ARE ALSO SHOWN.
Reference Year Journal Image pairs Algorithms Metrics
GFF [21] 2013 IEEE Transactions on Image Processing 10 7 5 (MI, QY, QC, QG, QP)
RP SR [22] 2015 Information Fusion 10 8 5 (SD, EN, QG, QP, QW)
MST SR [22] 2015 Information Fusion 10 8 5 (SD, EN, QG, QP, QW)
NSCT SR [22] 2015 Information Fusion 10 8 5 (SD, EN, QG, QP, QW)
QB [23] 2015 Information Fusion 6 N/A 3 (QGM, QAB/F, NMI)
DSIFT [1] 2015 Information Fusion 12 9 6 (NMI, QNCIE, QG, PC, QY, QCB)
MRSR [24] 2016 IEEE Transactions on Image Processing 7 9 4 (MI, QG, ZNCC PC, QPC)
CNN [25] 2017 Information Fusion 40 6 4 (NMI, QAB/F, QY, QCB)
BFMF [26] 2017 Information Fusion 18 6 4 (NMI, PC, QMSSI, QC)
[5] 2018 Information Fusion 10 9 5 (MI, QG, QS, QZP, QPC)
p-CNN [27] 2018 Information Sciences 12 5 4 (NMI, QPC, QW, QCB)
CAB [28] 2019 Information Fusion 34 15 5 (QAB/F, NMI, FMI, QY, QNCIE)
mf-CRF [29] 2019 IEEE Transactions on Image Processing 52 11 4 (MI, QG, QY, QCB)
DIF-Net [16] 2020 IEEE Transactions on Image Processing 20 9 7 (MI, FMI, QX, QSCD, QH, QP, QM)
DRPL [30] 2020 IEEE Transactions on Image Processing 20 7 5 (MI, QAB/F, AG, VIF, EI)
IFCNN [10] 2020 Information Fusion 20 4 5 (VIFF, ISSIM, NMI, SF, AG)
FusionDN [31] 2020 AAAI 10 5 4 (SD, EN, VIF, SCD)
PMGI [32] 2020 AAAI 18 5 6 (SSIM, QAB/F, EN, FMI, SCD, CC)
MFIFB 2020 105 30 20 (CE, EN, FMI, NMI, PSNR, QNCIE, TE, AG, EI, QAB/F, QP, SD, SF, QC, QW, QY, SSIM, QCB, QCV, VIF)
Besides, many researchers only choose several algorithms, which may be outdated, to compare with their own algorithms, making it more difficult to know the real performance of these algorithms. More importantly, methods are frequently compared with methods that are not designed for this task [37]. For example, the performance of an MFIF algorithm may be compared with a method designed for visible-infrared image fusion. Finally, although the source codes of some MFIF algorithms have been made publicly available, the usage of these codes differs. For example, different codes have different interfaces to read and write images, and may have various dependencies to install. Therefore, it is inconvenient and time-consuming to conduct large-scale performance evaluation. It is thus desirable that results on public datasets be available and that a consistent interface be available to integrate new algorithms conveniently for performance comparison.
To solve these issues, in this paper a multi-focus image fusion benchmark (MFIFB) is created, which includes 105 pairs of multi-focus images, 30 publicly available fusion algorithms, 20 evaluation metrics, and an interface to facilitate algorithm running and performance evaluation. The main contributions of this paper lie in the following aspects:
• Dataset. A test set containing 105 pairs of multi-focus images is created. These image pairs cover a wide range of environments and conditions. Therefore, the test set is able to test the generalization ability of fusion algorithms.
• Code library. 30 recent MFIF algorithms are collected and integrated into a code library, which can be easily utilized to run algorithms and compare performances. An interface is designed to integrate new image fusion algorithms into MFIFB. It is also convenient to compare performances using fused images produced by other algorithms with those available in MFIFB.
• Comprehensive performance evaluation. 20 evaluation metrics are implemented in MFIFB to comprehensively compare fusion performance. This is many more than are typically utilized in the MFIF literature, as shown in Table I. Extensive experiments have been conducted using MFIFB, and a comprehensive comparison of the integrated algorithms is performed.
The rest of this paper is organized as follows. Section II gives some background information about MFIF. Then, the proposed multi-focus image fusion benchmark is introduced in detail in Section III, followed by experiments and analysis in Section IV. Finally, Section V concludes the paper.
II. MULTI-FOCUS IMAGE FUSION METHODS
A. The background of multi-focus image fusion
MFIF aims to produce an all-in-focus image by fusing multiple partially focused images of the same scene [38]. Normally, MFIF is solved by combining the focused regions with some fusion rules. The key task in MFIF is thus the identification of focused and defocused areas, which is normally formulated as a classification problem.
Various focus measures (FM) were designed to classify whether a pixel is focused or defocused. For example, Zhai et al. [9] used the energy of Laplacian to detect the focus level of source images. Tang et al. [27] proposed a pixel-wise convolutional neural network (p-CNN), which was a learned FM that can recognize focused and defocused pixels.
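To make the role of an FM concrete, below is a minimal NumPy/SciPy sketch of an energy-of-Laplacian focus measure and the pixel-wise focused/defocused decision it supports. The 3x3 Laplacian kernel, the 9-pixel averaging window and the >= tie-break are illustrative assumptions, not the exact choices of [9] or [27].

import numpy as np
from scipy.ndimage import convolve, uniform_filter

def energy_of_laplacian(gray, window=9):
    # Per-pixel energy-of-Laplacian focus measure on a grayscale image.
    lap_kernel = np.array([[0, 1, 0],
                           [1, -4, 1],
                           [0, 1, 0]], dtype=np.float64)
    lap = convolve(gray.astype(np.float64), lap_kernel, mode="reflect")
    # Local focus level: mean of squared Laplacian responses over a window.
    return uniform_filter(lap ** 2, size=window)

def focus_decision_map(src_a, src_b):
    # 1 where source A is judged in focus, 0 where source B is sharper.
    return (energy_of_laplacian(src_a) >= energy_of_laplacian(src_b)).astype(np.uint8)

A fused image can then be assembled by taking each pixel from the source the map selects, usually after smoothing or consistency verification of the map.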
B. Conventional multi-focus image fusion methods
Generally speaking, conventional MFIF algorithms can be divided into spatial domain-based methods and transform domain-based methods. Spatial domain-based methods operate directly in the spatial domain. According to the adopted strategy, these methods can be classified as pixel-based [1], block-based [2], or region-based [3]. In pixel-based methods, the FM is applied at the pixel level to judge whether a pixel is focused or defocused. In block-based methods, the source images are first divided into blocks of fixed size, and the FM is then applied to these patches to decide their blurring levels. However, the performance of block-based methods depends heavily on the division of blocks, and such methods may easily induce artifacts. In region-based methods, the source images are first segmented into different regions using segmentation techniques, and the blurring levels of these regions are then calculated based on the FM.
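As an illustration of the block-based scheme, the following sketch fuses two grayscale arrays block by block; local variance as the FM and the 16-pixel block size are arbitrary assumptions, and practical methods add consistency verification to suppress the block artifacts mentioned above.

import numpy as np

def block_fuse(src_a, src_b, block=16):
    # Keep, for every fixed-size block, the source whose focus measure
    # (here simply the local variance) is larger.
    fused = src_b.astype(np.float64).copy()
    h, w = src_a.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            pa = src_a[y:y + block, x:x + block]
            pb = src_b[y:y + block, x:x + block]
            if pa.var() >= pb.var():
                fused[y:y + block, x:x + block] = pa
    return fused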
Transform domain-based methods normally consist of three steps. First, the source images are transformed into another domain using some transformation, such as a wavelet transform or sparse representation; in this way, the source images can be represented by a set of coefficients. Second, the coefficients of the source images are fused using designed fusion rules. Finally, the fused image is obtained by applying the inverse transformation to the fused coefficients. Transform domain-based algorithms mainly comprise sparse representation-based [39, 40], multi-scale-based [41, 42], subspace-based [43], edge-preserving-based [44, 45] and other methods.
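The three steps can be made concrete with a short PyWavelets sketch; averaging the approximation band and taking the maximum-absolute detail coefficient are the classic textbook fusion rules, not the rules of any particular method discussed here.

import numpy as np
import pywt

def dwt_fuse(src_a, src_b, wavelet="db2", level=3):
    # Step 1: transform both sources into the wavelet domain.
    ca = pywt.wavedec2(src_a.astype(np.float64), wavelet, level=level)
    cb = pywt.wavedec2(src_b.astype(np.float64), wavelet, level=level)
    # Step 2: fuse the coefficients (average approximation, max-abs details).
    fused = [(ca[0] + cb[0]) / 2.0]
    for da, db in zip(ca[1:], cb[1:]):  # each entry is a (cH, cV, cD) tuple
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    # Step 3: invert the transform to obtain the fused image
    # (for odd image sizes the reconstruction may be one pixel larger).
    return pywt.waverec2(fused, wavelet)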
C. Deep learning-based methods
In recent years, deep learning has been applied to MFIF, and an increasing number of deep learning-based methods emerge every year. Liu et al. [25] proposed the first CNN-based method, which utilized a CNN to learn a mapping from the source images to the focus map. Since then, more than 40 deep learning-based MFIF algorithms have been proposed.
The majority of deep learning-based MFIF algorithms are supervised algorithms, which need a large amount of training data with ground truth for training. For instance, Zhao et al. [46] developed an MFIF algorithm based on a multi-level deeply supervised CNN (MLCNN). Tang et al. [27] proposed a pixel-wise convolutional neural network (p-CNN), which was a learned FM that can recognize the focused and defocused pixels in source images. Yang et al. [47] proposed a multi-level features convolutional neural network (MLFCNN) architecture for MFIF. Li et al. [30] proposed DRPL, which directly converts the whole image into a binary mask without any patch operation. Zhang et al. [10] proposed a general image fusion framework based on CNN (IFCNN).
In supervised learning-based methods, a large amount of labeled training data is needed, which is labor-intensive and time-consuming to collect. To solve this issue, researchers have begun to develop unsupervised MFIF algorithms. For example, Yan et al. [13] proposed the first unsupervised MFIF algorithm based on a CNN, namely MFNet. The key to achieving unsupervised training in that work was the usage of a loss function based on SSIM, a widely used image fusion evaluation metric that measures the structural similarity between the source images and the fused image. Ma et al. [14] proposed an unsupervised MFIF algorithm based on an encoder-decoder network (SESF), which also utilized SSIM as part of the loss function. Other unsupervised methods include DIF-Net [16] and FusionDN [31].
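The SSIM-driven unsupervised objective can be sketched in PyTorch as follows. For brevity this uses a global (unwindowed) SSIM over each image instead of the local windowed SSIM used in [13, 14], so it illustrates the idea rather than reimplementing either loss; inputs are assumed to be batches scaled to [0, 1].

import torch

def global_ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # Global SSIM between two (B, C, H, W) batches, computed per channel.
    mx, my = x.mean(dim=(-2, -1)), y.mean(dim=(-2, -1))
    vx = x.var(dim=(-2, -1), unbiased=False)
    vy = y.var(dim=(-2, -1), unbiased=False)
    cov = ((x - mx[..., None, None]) * (y - my[..., None, None])).mean(dim=(-2, -1))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def fusion_loss(fused, src_a, src_b):
    # The fused image should be structurally similar to both sources.
    return (1 - global_ssim(fused, src_a)).mean() + (1 - global_ssim(fused, src_b)).mean()

Minimizing this loss requires no ground-truth all-in-focus image, which is exactly what makes such training unsupervised.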
Apart from CNNs, some other deep learning models have also been utilized to perform MFIF. For example, Guo et al. [19] presented the first GAN-based MFIF algorithm (FuseGAN). Deshmukh et al. [48] proposed to use a deep belief network (DBN) to calculate weights indicating the sharp regions of the input images. Unlike the above-mentioned methods, which use only one model, Amin-Naji et al. [20] proposed an MFIF algorithm based on an ensemble of three CNNs.
III. MULTI-FOCUS IMAGE FUSION BENCHMARK
As presented previously, in most MFIF works the algorithms were tested on a small number of images and compared with a very limited number of algorithms using just several evaluation metrics. This makes it difficult to comprehensively evaluate the real performance of these algorithms. This section presents a multi-focus image fusion benchmark (MFIFB), including the dataset, baseline algorithms, and evaluation metrics.
A. Dataset
The dataset in MFIFB is a test set including 105 pairs of multi-focus images. Each pair consists of two images with different focus areas. Because most research in MFIF is about fusing two images, at the moment only image pairs consisting of two images are collected in MFIFB. Since this paper aims to create a benchmark in the field of MFIF, to maximize its value the test set consists of existing datasets which do not have a code library and results. Specifically, the test set is collected from Lytro [33], MFFW [36], the dataset of Savic et al. [49], Aymaz et al. [50], and Tsai et al.1. By doing this, we not only provide benchmark results on the whole dataset, but also give benchmark results for these existing datasets, which will make it more convenient for researchers who are familiar with these datasets to compare results.
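For scripting over such a test set, a loader might look like the sketch below; the pair-per-folder layout with files A.jpg and B.jpg is a hypothetical convention for illustration, not the actual organization of MFIFB.

from pathlib import Path
from PIL import Image

def load_pairs(root):
    # Yield (pair_name, image_A, image_B) from a pair-per-folder layout.
    for pair_dir in sorted(Path(root).iterdir()):
        if pair_dir.is_dir():
            yield (pair_dir.name,
                   Image.open(pair_dir / "A.jpg"),
                   Image.open(pair_dir / "B.jpg"))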
The images included in MFIFB were captured with various cameras at various places, and they cover a wide range of environments and working conditions. The resolutions of the images vary from 178×134 to 1024×768. Therefore, these images can be used to comprehensively test the performance of MFIF algorithms. Table II lists more details about the different kinds of images included in MFIFB.
TABLE II
THE NUMBER OF DIFFERENT KINDS OF IMAGES IN MFIFB.

Category  Color/gray  Real/simulated  Registered/not well registered
Number    71/34       64/41           98/7
1https://www.mathworks.com/matlabcentral/fileexchange/45992-standard-images-for-multifocus-image-fusion
Fig. 2. A part of the dataset in MFIFB.
B. Baseline algorithms
MFIFB currently contains 30 recently published multi-focus image fusion algorithms: ASR [40], BFMF [26], BGSC [51], CBF [8], CNN [25], CSR [52], DCT Corr [6], DCT EOL [6], DRPL [30], DSIFT [1], DWTDE [53], ECNN [20], GD [54], GFDF [55], GFF [21], IFCNN [10], IFM [56], MFM [57], MGFF [58], MST SR [22], MSVD [59], MWGF [60], NSCT SR [22], PCANet [61], QB [23], RP SR [22], SESF [14], SFMD [62], SVDDCT [63], and TF [64]. Among these algorithms, some were specifically designed for multi-focus image fusion, such as ASR and BADNN, while others were designed for general image fusion including multi-focus image fusion, such as CBF and GFF. It should be noted that some algorithms were originally developed for fusing grayscale images, e.g., BFMF and CBF. These algorithms were adapted to fuse color images in this study by fusing the R, G and B channels separately. More details about the categories of the integrated algorithms can be found in Table III.
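This channel-wise adaptation can be expressed in a few lines; fuse_gray below stands for any grayscale fusion routine and is a placeholder, not a function from the code library.

import numpy as np

def fuse_color(fuse_gray, img_a, img_b):
    # Apply a grayscale-only fusion routine to each of the R, G and B
    # channels independently and restack the results.
    return np.stack([fuse_gray(img_a[..., c], img_b[..., c])
                     for c in range(3)], axis=-1)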
The algorithms in MFIFB cover almost every kind of MFIF algorithm and thus can represent the development of the field to some extent. However, it should be noted that only a part of the published MFIF methods provide source code; thus MFIFB cannot cover all published MFIF algorithms.
The source codes of various methods have different input and output interfaces, and they may require different running environments. These factors hinder the usage of these codes
TABLE III
MULTI-FOCUS IMAGE FUSION ALGORITHMS THAT HAVE BEEN INTEGRATED IN MFIFB.

Spatial domain-based: BFMF [26], BGSC [51], DSIFT [1], IFM [56], QB [23], TF [64]
Transform domain-based (SR-based): ASR [40], CSR [52]
Transform domain-based (multi-scale-based): CBF [8], DCT Corr [6], DCT EOL [6], DWTDE [53], GD [54], MSVD [59], MWGF [60], SVDDCT [63]
Transform domain-based (edge-preserving-based): GFDF [55], GFF [21], MFM [57], MGFF [58]
Transform domain-based (subspace-based): SFMD [62]
Transform domain-based (hybrid): MST SR [22], NSCT SR [22], RP SR [22]
Deep learning-based (supervised): CNN [25], DRPL [30], ECNN [20], IFCNN [10], PCANet [61]
Deep learning-based (unsupervised): SESF [14]
Fig. 3. The source images and fused images of the Lytro19 image pair. (a) and (b) are the source images. From (c) to (ff) are the fused images produced by the 30 integrated MFIF algorithms in MFIFB. The magnified plot of the area within the red box near the focused/defocused boundary is given at the top right corner of each fused image. The magnified plot of the area within the green box near the focused/defocused boundary is given at the bottom right corner of each fused image.
to produce results and compare performances. To integrate algorithms into MFIFB and for the convenience of users, an interface was designed through which more algorithms can be integrated into MFIFB. Besides, researchers who do not want to make their codes publicly available can simply put their fused images into MFIFB, and then their algorithms can easily be compared with those integrated in MFIFB.
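To convey what such a uniform interface buys, here is a hypothetical Python sketch in which every integrated method exposes a single fuse(img_a, img_b) call; the actual MFIFB interface may differ in detail.

def run_benchmark(methods, pairs):
    # methods: dict mapping a method name to a fuse(img_a, img_b) callable.
    # pairs: iterable of (pair_name, img_a, img_b) tuples.
    # Returns the fused images keyed by (pair_name, method_name).
    results = {}
    for pair_name, img_a, img_b in pairs:
        for method_name, fuse in methods.items():
            results[(pair_name, method_name)] = fuse(img_a, img_b)
    return results

With this contract, adding a new algorithm to the comparison only requires wrapping it in one function, with no per-method glue code.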
C. Evaluation metrics
The assessment of MFIF algorithms is not a trivial task since the ground-truth images are normally not available. Generally, there are two ways to evaluate MFIF algorithms, namely the subjective or qualitative method and the objective or quantitative method.
Subjective evaluation means that the fusion performance is evaluated by human observers. This is very useful in MFIF research since a good fused image should be friendly to the human visual system. However, it is time-consuming and labor-intensive to observe each fused image in practice. Besides, because each observer has a different standard when observing fused images, biased evaluations may easily be produced. Therefore, qualitative evaluation alone is not enough, and objective evaluation metrics are needed for quantitative comparison.
As introduced in [65], image fusion evaluation metrics can be classified into four types:
• Information theory-based
• Image feature-based
• Image structural similarity-based
• Human perception inspired

Fig. 4. The source images and fused images of the MMFW12 image pair. (a) and (b) are the source images. From (c) to (ff) are the fused images produced by the 30 integrated MFIF algorithms in MFIFB. The magnified plot of the area within the red box near the focused/defocused boundary is given in the red dashed box. The magnified plot of the area within the green box near the focused/defocused boundary is given in the green dashed box.
Numerous evaluation metrics for image fusion have been proposed. However, none of them is better than all other metrics. To enable a comprehensive and objective performance comparison, 20 evaluation metrics were implemented in MFIFB2. The evaluation metrics integrated in MFIFB cover all four categories of metrics, and thus are capable of quantitatively showing the quality of a fused image. Specifically, the implemented information theory-based metrics include cross entropy (CE) [66], entropy (EN) [67], feature mutual information (FMI), normalized mutual information (NMI) [68], peak signal-to-noise ratio (PSNR) [69], nonlinear correlation information entropy (QNCIE) [70, 71], and Tsallis entropy (TE) [72]. The implemented image feature-based metrics include average gradient (AG) [73], edge intensity (EI) [74], gradient-based similarity measurement (QAB/F) [75], phase congruency (QP) [76], standard deviation (SD) [77], and spatial frequency (SF) [78]. The implemented image structural similarity-based metrics include Cvejic's metric (QC) [79], Piella's metric (QW) [80], Yang's metric (QY) [81], and the structural similarity index measure (SSIM) [82]. The implemented human perception
2The implementations of some metrics are kindly provided by Zheng Liu at https://github.com/zhengliu6699/imageFusionMetrics
inspired fusion metrics are human visual perception (QCB)[83], QCV [84] and VIF [85].
Due to page limitations, the mathematical expressions of these metrics are not given here. For all metrics except CE and QCV, a larger value indicates better fusion performance. In MFIFB, it is convenient to compute all these metrics for each method, making it easy to compare performances. Note that many metrics are designed for gray images. In this work, each metric was computed for every channel of the RGB images and then the average value was taken, as sketched below. More information about evaluation metrics can be found in [65, 69, 86].
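As one concrete instance of this protocol, the sketch below evaluates the entropy (EN) metric on each RGB channel of a fused image and averages the three values; the same wrapper applies to any grayscale-only metric.

import numpy as np

def entropy(gray):
    # Shannon entropy (EN) of an 8-bit grayscale image.
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def color_metric(metric, fused_rgb):
    # Evaluate a grayscale metric per RGB channel and average the results.
    return float(np.mean([metric(fused_rgb[..., c]) for c in range(3)]))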
IV. EXPERIMENTS AND ANALYSIS
This section presents the experimental results obtained with MFIFB. All experiments were performed using a computer equipped with an NVIDIA RTX 2070 GPU and an i7-9750H CPU. The default parameters reported by the corresponding authors of each algorithm were employed. Note that the pre-trained models of each deep learning algorithm were provided by its corresponding authors. The dataset in MFIFB is only used for the performance evaluation of these algorithms, not for training.
A. Results on the Lytro dataset
Many papers utilize Lytro in their experiments; thus the Lytro dataset is collected as a subset of MFIFB, and in this section
TABLE IV
AVERAGE EVALUATION METRIC VALUES OF ALL METHODS ON THE LYTRO DATASET (20 IMAGE PAIRS). THE BEST THREE VALUES IN EACH METRIC ARE DENOTED IN RED, GREEN AND BLUE, RESPECTIVELY. THE THREE NUMBERS AFTER THE NAME OF EACH METHOD DENOTE THE NUMBER OF BEST VALUES, SECOND BEST VALUES AND THIRD BEST VALUES, RESPECTIVELY. BEST VIEWED IN COLOR.

Columns: Method, CE, EN, FMI, NMI, PSNR, QNCIE, TE, AG, EI, QAB/F, QP, SD, SF, QC, QW, QY, SSIM, QCB, QCV, VIF.
Methods and their (best, second best, third best) counts: BFMF (0,1,0), BGSC (1,0,0), DSIFT (0,2,2), IFM (0,0,0), QB (1,0,3), TF (2,3,0), ASR (0,0,2), CSR (0,1,0), CBF (0,2,0), DCT Corr (0,0,0), DCT EOL (0,0,0), DWTDE (0,1,0), GD (3,0,0), MSVD (2,0,0), MWGF (1,0,0), SVDDCT (0,0,0), GFDF (2,2,3), GFF (0,0,0), MFM (0,1,0), MGFF (1,1,1), SFMD (3,2,1), MST SR (1,0,1), NSCT SR (0,0,0), RP SR (0,1,0), CNN (0,0,3), DRPL (0,1,2), ECNN (3,2,0), IFCNN (0,0,0), PCANet (0,0,1), SESF (0,0,0).
[The full grid of per-method metric values is not reproduced here.]
TABLE V
AVERAGE EVALUATION METRIC VALUES OF ALL METHODS ON THE WHOLE MFIFB DATASET (105 IMAGE PAIRS). THE BEST THREE VALUES IN EACH METRIC ARE DENOTED IN RED, GREEN AND BLUE, RESPECTIVELY. THE THREE NUMBERS AFTER THE NAME OF EACH METHOD DENOTE THE NUMBER OF BEST VALUES, SECOND BEST VALUES AND THIRD BEST VALUES, RESPECTIVELY. BEST VIEWED IN COLOR.

Columns: Method, CE, EN, FMI, NMI, PSNR, QNCIE, TE, AG, EI, QAB/F, QP, SD, SF, QC, QW, QY, SSIM, QCB, QCV, VIF.
Methods and their (best, second best, third best) counts: BFMF (0,1,2), BGSC (1,0,1), DSIFT (0,0,0), IFM (1,0,0), QB (4,1,1), TF (0,0,1), ASR (1,1,0), CSR (0,0,0), CBF (1,1,1), DCT Corr (0,0,0), DCT EOL (0,1,0), DWTDE (0,0,0), GD (2,2,0), MSVD (0,0,0), MWGF (0,1,0), SVDDCT (0,0,0), GFDF (3,2,1), GFF (0,0,0), MFM (0,0,0), MGFF (0,2,1), SFMD (4,1,1), MST SR (0,1,2), NSCT SR (1,1,1), RP SR (0,1,1), CNN (0,3,0), DRPL (2,0,2), ECNN (0,0,1), IFCNN (0,0,1), PCANet (0,1,2), SESF (0,1,0).
[The full grid of per-method metric values is not reproduced here.]
the results on the Lytro dataset are presented.
1) Qualitative performance comparison: Figure 3 illustrates the fused images of all integrated algorithms in MFIFB on the Lytro19 image pair. As can be seen, most algorithms can produce a clear image in this case, while BGSC and MSVD give blurry ones. To further investigate the focused/defocused boundary in the fused images, two magnified plots are given in Fig. 3 for each image. As can be seen, many algorithms cannot handle the boundary area contained in the red box well, including ASR, BGSC, CBF, DWTDE, GD, GFF, IFCNN, IFM, MFM, MGFF, MSVD, NSCT SR, RP SR, and SFMD. Besides, some algorithms cannot fuse the boundary area contained in the green box well, including BFMF, BGSC, CBF, DCT Corr, DCT EOL, DWTDE, GD, IFCNN, IFM, MSVD, SFMD, and SVDDCT. To sum up, the remaining methods, namely CNN, CSR, DRPL, DSIFT, ECNN, GFDF, MST SR, MWGF, PCANet, QB, SESF, and TF, have similar visual performances on the Lytro19 image pair. Among these methods, five are deep learning-based methods, DSIFT, QB and TF are spatial domain-based methods, while the rest are transform domain-based ones.
2) Quantitative performance comparison: Table IV lists the average values of the 20 evaluation metrics for all methods on the Lytro dataset. As can be seen, the top three methods are SFMD, ECNN and GD, respectively. Specifically, GD and SFMD are transform domain-based methods, while ECNN is a deep learning-based approach. This means that the transform domain-based methods achieve the best results on the Lytro dataset, and deep learning-based methods also obtain competitive results. Note that although these three methods each have the best value in three evaluation metrics, they show different characteristics. To be more specific, SFMD only performs well in image feature-based metrics, while ECNN and GD exhibit good performances in both information theory-based and human perception inspired metrics.
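The (best, second best, third best) counts reported after each method name in Tables IV and V can be tallied as in the following sketch; the tie handling is an assumption, since the exact procedure is not specified.

import numpy as np

def top3_counts(scores, larger_better):
    # scores: (n_methods, n_metrics) array of average metric values.
    # larger_better: one bool per metric (False for CE and QCV).
    # Returns, per method, how often it places first, second and third.
    counts = np.zeros((scores.shape[0], 3), dtype=int)
    for j in range(scores.shape[1]):
        order = np.argsort(scores[:, j])
        if larger_better[j]:
            order = order[::-1]
        for rank, method_idx in enumerate(order[:3]):
            counts[method_idx, rank] += 1
    return counts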
In fact, one can learn more from the table about the performance of each kind of method. First, the spatial domain-based approaches do not show competitive performances, except TF. Among transform domain-based methods, the SR-based ones perform poorly in most metrics. Multi-scale-based approaches have better performance in information theory-based metrics, while edge-preserving-based algorithms generally perform better in image feature-based metrics. Similar to the SR-based ones, the hybrid methods, which combine multi-scale and SR approaches, do not show good performance in most metrics. Regarding deep learning-based methods, although ECNN ranks second among all 30 algorithms, the other deep learning-based methods do not perform well. This is surprising because deep learning can provide good features and can learn fusion rules automatically. This may be because most deep learning-based algorithms were trained using simulated multi-focus images, which are different from real-world multi-focus images; thus the generalization abilities of these algorithms are not good.
The results on the Lytro dataset indicate that various MFIF algorithms may have very different performances on different evaluation metrics; therefore it is necessary to use different kinds of metrics when evaluating MFIF approaches. Besides, although the qualitative performances are not fully consistent with the overall quantitative performances, they are consistent with the human perception inspired metrics to some extent, especially QCB and QCV.
Xu et al. [36] pointed out that the defocus spread effect (DSE) is not obvious in the images of the Lytro dataset; thus the fused images produced by many algorithms show no significant visual differences. To further compare MFIF algorithms, the comparison of fusion results on the whole MFIFB dataset is presented in the following section.
B. Results on the whole MFIFB dataset
1) Qualitative performance comparison: Figure 4 presents the qualitative (visual) performance comparison of the 30 fusion methods on the MMFW12 image pair. One can see that this case is more difficult than the Lytro19 case, since many algorithms do not produce a satisfactory fused image on this image pair. To be more specific, some methods, including BGSC, CBF, CSR, DCT Corr, DCT EOL, DRPL, DSIFT, DWTDE, ECNN, NSCT SR, QB, SESF, SVDDCT, and TF, show obvious visual artifacts. Besides, some algorithms show obvious color distortion, namely MGFF, MST SR, and RP SR. Because the rest of the algorithms do not show obvious visual artifacts or color distortion, two focused/defocused boundary areas are illustrated in the magnified plots to show more details. As can be seen, BFMF, GFDF, IFM, MFM, and PCANet do not handle the boundary area contained in the red box well. Besides, BFMF, CNN, GD, GFDF, IFCNN, IFM, MSVD, MFM, PCANet and QB cannot deal with the DSE in the boundary area contained in the green box, as can be seen from the magnified plots. Overall, ASR, GFF and MWGF show good performance on the MMFW12 image pair.
2) Quantitative performance comparison: Table V presents the average values of the 20 evaluation metrics for all methods on the whole MFIFB dataset. From the table one can see that the top three methods on the whole MFIFB dataset are QB, SFMD and GFDF, respectively. Although SFMD and QB have the same number of top-three metric values, QB is ranked first while SFMD is second. This is because QB performs well in information theory-based, image structural similarity-based and human perception inspired metrics. In contrast, SFMD only shows good performances in image feature-based metrics but performs poorly in the other kinds of metrics.
The performances of the MFIF algorithms on the whole MFIFB dataset are very different from those on the Lytro subset. First, the spatial domain-based approaches perform better than the transform domain-based ones. Second, deep learning-based methods have worse performances on the whole MFIFB dataset than on the Lytro dataset. Specifically, the best deep learning-based method, namely DRPL, only ranks fifth on the whole dataset. Apart from Lytro, the MFIFB dataset also contains other subsets such as MFFW and those proposed by Savic et al. [49] and Aymaz et al. [50]. In other words, the whole MFIFB dataset is more challenging than the Lytro dataset. The reason why the performances of the deep learning-based approaches degrade is that they do not perform well on the subsets of MFIFB other than Lytro. For instance,
TABLE VI
RUNNING TIME OF ALGORITHMS IN MFIFB (SECONDS PER IMAGE PAIR).

Method        Average running time  Category
ASR [40]      549.95  SR-based
CSR [52]      466.27  SR-based
CNN [25]      184.78  DL-based
DRPL [30]     0.17    DL-based
ECNN [20]     62.92   DL-based
IFCNN [10]    0.03    DL-based
PCANet [61]   20.77   DL-based
SESF [14]     0.16    DL-based
BGSC [51]     6.52    Spatial domain-based
BFMF [26]     1.36    Spatial domain-based
DSIFT [1]     7.53    Spatial domain-based
IFM [56]      2.18    Spatial domain-based
QB [23]       1.07    Spatial domain-based
TF [64]       0.48    Spatial domain-based
SFMD [62]     0.81    Subspace-based
CBF [8]       21.11   Multi-scale-based
DCT Corr [6]  0.34    Multi-scale-based
DCT EOL [6]   0.24    Multi-scale-based
DWTDE [53]    7.84    Multi-scale-based
GD [54]       0.55    Multi-scale-based
MSVD [59]     0.92    Multi-scale-based
MWGF [60]     2.76    Multi-scale-based
SVDDCT [63]   1.09    Multi-scale-based
GFDF [55]     0.23    Edge-preserving-based
GFF [21]      0.42    Edge-preserving-based
MFM [57]      1.45    Edge-preserving-based
MGFF [58]     1.17    Edge-preserving-based
MST SR [22]   0.75    Hybrid
NSCT SR [22]  91.95   Hybrid
RP SR [22]    0.81    Hybrid
the MFFW dataset has a strong defocus spread effect, but the simulated training data of deep learning-based methods does not, so these methods cannot learn how to handle the defocus spread effect.
C. Running time comparison
Table VI lists the average running time of all algorithms integrated in MFIFB. As can be seen, the running time of image fusion methods varies significantly from one method to another. Generally speaking, SR-based methods are the most computationally expensive, taking more than 7 minutes to fuse an image pair. Besides, transform domain-based methods are generally faster than their spatial domain-based counterparts, with some exceptions such as CBF. Among the deep learning-based algorithms, the computational efficiency also varies greatly. For example, the running time of CNN is more than 6000 times that of IFCNN. Note that none of the deep learning-based algorithms in MFIFB updates its model online.
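A plausible way to obtain such seconds-per-pair figures is the wall-clock sketch below; the warm-up pass that excludes one-off model loading from the average is an assumption, not a documented part of the MFIFB protocol.

import time

def average_runtime(fuse, pairs, warmup=1):
    # pairs: list of (pair_name, img_a, img_b); returns seconds per pair.
    for _, a, b in pairs[:warmup]:  # warm-up, e.g. to load pre-trained models
        fuse(a, b)
    start = time.perf_counter()
    for _, a, b in pairs:
        fuse(a, b)
    return (time.perf_counter() - start) / len(pairs)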
V. CONCLUDING REMARKS
In this paper, a multi-focus image fusion benchmark (MFIFB), which includes a dataset of 105 image pairs, a code library of 30 algorithms, 20 evaluation metrics and all results, is proposed. To the best of our knowledge, this is the first multi-focus image fusion benchmark to date. This benchmark facilitates a better understanding of the state-of-the-art MFIF approaches and provides a platform for comparing algorithms fairly and comprehensively. It should be noted that the proposed MFIFB can be easily extended to contain more images, more fusion algorithms and more evaluation metrics.
In the literature, MFIF algorithms are usually tested on a small number of images and compared with a very limited number of algorithms using just several evaluation metrics; therefore, the performance comparison may not be fair and comprehensive. This makes it difficult to understand the state of the art of the MFIF field and hinders the future development of new algorithms. To solve this issue, in this study extensive experiments have been carried out based on MFIFB to evaluate the performance of all integrated fusion algorithms.
We have several observations on the status of MFIF based on the experimental results. First, unlike other fields in computer vision where deep learning is almost the dominant method, deep learning methods do not provide very competitive performances on challenging MFIF datasets at the moment. Conventional methods, namely spatial domain-based and transform domain-based ones, still show good performance. This is very surprising because many deep learning-based MFIF methods have been claimed to provide state-of-the-art performance. However, this is not really true on challenging MFIF datasets according to our experiments using the proposed MFIFB. The possible reason is that most deep learning-based MFIF algorithms were trained on simulated MFIF data which does not show much defocus spread effect; thus these algorithms cannot generalize well to other real-world MFIF datasets. Besides, those methods were only compared with a small number of methods using several evaluation metrics on a small dataset which does not have much defocus spread effect; thus their performances were not fully evaluated. However, due to the strong representation ability and end-to-end property of deep learning, we believe that deep learning-based approaches will be an important research direction in the future. Second, an MFIF algorithm usually cannot have good performance in all aspects in terms of evaluation metrics. Some algorithms may achieve good values in information theory-based metrics while others may perform well in other kinds of metrics. Therefore, it is very important to use several kinds of evaluation metrics when conducting quantitative performance comparisons for MFIF algorithms. Finally, the results of qualitative and quantitative comparisons may not be consistent for an MFIF algorithm; therefore both are crucial when evaluating an MFIF method.
We will continue extending MFIFB by including more image pairs, algorithms and metrics. We hope that MFIFB can serve as a good starting point for researchers who are interested in multi-focus image fusion.
ACKNOWLEDGMENT
The author would like to thank Mr. Shuang Xu from Xi'an Jiaotong University for providing the MFFW dataset.
REFERENCES
[1] Y. Liu, S. Liu, and Z. Wang, “Multi-focus image fusion with dense SIFT,” Information Fusion, vol. 23, pp. 139–155, 2015.
[2] I. De and B. Chanda, “Multi-focus image fusion using a morphology-based focus measure in a quad-tree structure,” Information Fusion, vol. 14, no. 2, pp. 136–146, 2013.
[3] M. Li, W. Cai, and Z. Tan, “A region-based multi-sensor image fusion scheme using pulse-coupled neural network,” Pattern Recognition Letters, vol. 27, no. 16, pp. 1948–1956, 2006.
[4] Q. Zhang, T. Shi, F. Wang, R. S. Blum, and J. Han, “Robust sparse representation based multi-focus image fusion with dictionary construction and local spatial consistency,” Pattern Recognition, vol. 83, pp. 299–313, 2018.
[5] Q. Zhang, Y. Liu, R. S. Blum, J. Han, and D. Tao, “Sparse representation based multi-sensor image fusion for multi-focus and multi-modality images: A review,” Information Fusion, vol. 40, pp. 57–75, 2018.
[6] M. Amin-Naji and A. Aghagolzadeh, “Multi-focus image fusion in DCT domain using variance and energy of Laplacian and correlation coefficient for visual sensor networks,” Journal of AI and Data Mining, vol. 6, no. 2, pp. 233–250, 2018.
[7] L. Kou, L. Zhang, K. Zhang, J. Sun, Q. Han, and Z. Jin, “A multi-focus image fusion method via region mosaicking on Laplacian pyramids,” PLoS ONE, vol. 13, no. 5, 2018.
[8] B. K. Shreyamsha Kumar, “Image fusion based on pixel significance using cross bilateral filter,” Signal, Image and Video Processing, vol. 9, no. 5, pp. 1193–1204, Jul 2015.
[9] H. Zhai and Y. Zhuang, “Multi-focus image fusion method using energy of Laplacian and a deep neural network,” Applied Optics, vol. 59, no. 6, pp. 1684–1694, 2020.
[10] Y. Zhang, Y. Liu, P. Sun, H. Yan, X. Zhao, and L. Zhang, “IFCNN: A general image fusion framework based on convolutional neural network,” Information Fusion, vol. 54, pp. 99–118, 2020.
[11] C. Wang, Z. Zhao, Q. Ren, Y. Xu, and Y. Yu, “A novel multi-focus image fusion by combining simplified very deep convolutional networks and patch-based sequential reconstruction strategy,” Applied Soft Computing, p. 106253, 2020.
[12] H. Xu, F. Fan, H. Zhang, Z. Le, and J. Huang, “A deep model for multi-focus image fusion based on gradients and connected regions,” IEEE Access, vol. 8, pp. 26316–26327, 2020.
[13] X. Yan, S. Z. Gilani, H. Qin, and A. Mian, “Unsupervised deep multi-focus image fusion,” arXiv preprint arXiv:1806.07272, 2018.
[14] B. Ma, X. Ban, H. Huang, and Y. Zhu, “SESF-Fuse: An unsupervised deep model for multi-focus image fusion,” arXiv preprint arXiv:1908.01703, 2019.
[15] H. T. Mustafa, F. Liu, J. Yang, Z. Khan, and Q. Huang, “Dense multi-focus fusion net: A deep unsupervised convolutional network for multi-focus image fusion,” in International Conference on Artificial Intelligence and Soft Computing. Springer, 2019, pp. 153–163.
[16] H. Jung, Y. Kim, H. Jang, N. Ha, and K. Sohn, “Unsupervised deep image fusion with structure tensor representations,” IEEE Transactions on Image Processing, vol. 29, pp. 3845–3858, 2020.
[17] H. Li, R. Nie, D. Zhou, and X. Gou, “Convolutional neural network based multi-focus image fusion,” in Proceedings of the 2018 2nd International Conference on Algorithms, Computing and Systems, 2018, pp. 148–154.
[18] C. Du and S. Gao, “Image segmentation-based multi-focus image fusion through multi-scale convolutional neural network,” IEEE Access, vol. 5, pp. 15750–15761, 2017.
[19] X. Guo, R. Nie, J. Cao, D. Zhou, L. Mei, and K. He, “FuseGAN: Learning to fuse multi-focus image via conditional generative adversarial network,” IEEE Transactions on Multimedia, vol. 21, no. 8, 2019.
[20] M. Amin-Naji, A. Aghagolzadeh, and M. Ezoji, “Ensemble of CNN for multi-focus image fusion,” Information Fusion, vol. 51, pp. 201–214, 2019.
[21] S. Li, X. Kang, and J. Hu, “Image fusion with guided filtering,” IEEE Transactions on Image Processing, vol. 22, no. 7, pp. 2864–2875, 2013.
[22] Y. Liu, S. Liu, and Z. Wang, “A general framework for image fusion based on multi-scale transform and sparse representation,” Information Fusion, vol. 24, pp. 147–164, 2015.
[23] X. Bai, Y. Zhang, F. Zhou, and B. Xue, “Quadtree-based multi-focus image fusion using a weighted focus-measure,” Information Fusion, vol. 22, pp. 105–118, 2015.
[24] Q. Zhang and M. D. Levine, “Robust multi-focus image fusion using multi-task sparse representation and spatial context,” IEEE Transactions on Image Processing, vol. 25, no. 5, pp. 2045–2058, 2016.
[25] Y. Liu, X. Chen, H. Peng, and Z. Wang, “Multi-focus image fusion with a deep convolutional neural network,” Information Fusion, vol. 36, pp. 191–207, 2017.
[26] Y. Zhang, X. Bai, and T. Wang, “Boundary finding based multi-focus image fusion through multi-scale morphological focus-measure,” Information Fusion, vol. 35, pp. 81–101, 2017.
[27] H. Tang, B. Xiao, W. Li, and G. Wang, “Pixel convolutional neural network for multi-focus image fusion,” Information Sciences, vol. 433, pp. 125–141, 2018.
[28] M. S. Farid, A. Mahmood, and S. A. Al-Maadeed, “Multi-focus image fusion using content adaptive blurring,” Information Fusion, vol. 45, pp. 96–112, 2019.
[29] O. Bouzos, I. Andreadis, and N. Mitianoudis, “Conditional random field model for robust multi-focus image fusion,” IEEE Transactions on Image Processing, vol. 28, no. 11, pp. 5636–5648, 2019.
[30] J. Li, X. Guo, G. Lu, B. Zhang, Y. Xu, F. Wu, and D. Zhang, “DRPL: Deep regression pair learning for multi-focus image fusion,” IEEE Transactions on Image Processing, vol. 29, pp. 4816–4831, 2020.
[31] H. Xu, J. Ma, Z. Le, J. Jiang, and X. Guo, “FusionDN: A unified densely connected network for image fusion,” in Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020.
[32] H. Zhang, H. Xu, Y. Xiao, X. Guo, and J. Ma, “Rethinking the image fusion: A fast unified image fusion network based on proportional maintenance of gradient and intensity,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2020.
[33] M. Nejati, S. Samavi, and S. Shirani, “Multi-focus image fusion using dictionary-based sparse representation,” Information Fusion, vol. 25, pp. 72–84, 2015.
[34] Y. Wu, J. Lim, and M.-H. Yang, “Object tracking benchmark,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1834–1848, 2015.
[35] M. Kristan, J. Matas, A. Leonardis, T. Vojir, R. Pflugfelder, G. Fernandez, G. Nebehay, F. Porikli, and L. Cehovin, “A novel performance evaluation methodology for single-target trackers,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 11, pp. 2137–2155, Nov 2016.
[36] S. Xu, X. Wei, C. Zhang, J. Liu, and J. Zhang, “MFFW: A new dataset for multi-focus image fusion,” arXiv preprint arXiv:2002.04780, 2020.
[37] X. Deng and P. L. Dragotti, “Deep convolutional neural network for multi-modal image restoration and fusion,” pp. 1–15, 2019.
[38] G.-P. Fu, S.-H. Hong, F.-L. Li, and L. Wang, “A novel multi-focus image fusion method based on distributed compressed sensing,” Journal of Visual Communication and Image Representation, vol. 67, p. 102760, 2020.
[39] B. Yang and S. Li, “Multifocus image fusion and restoration with sparse representation,” IEEE Transactions on Instrumentation and Measurement, vol. 59, no. 4, pp. 884–892, 2010.
[40] Y. Liu and Z. Wang, “Simultaneous image fusion and denoising with adaptive sparse representation,” IET Image Processing, vol. 9, no. 5, pp. 347–357, 2014.
[41] W.-W. Wang, P.-L. Shui, and G.-X. Song, “Multifocus image fusion in wavelet domain,” in Proceedings of the 2003 International Conference on Machine Learning and Cybernetics, vol. 5. IEEE, 2003, pp. 2887–2890.
[42] J. Zhi-guo, H. Dong-bing, C. Jin, and Z. Xiao-kuan, “A wavelet based algorithm for multi-focus micro-image fusion,” in Third International Conference on Image and Graphics (ICIG’04). IEEE, 2004, pp. 176–179.
[43] D. P. Bavirisetti, G. Xiao, and G. Liu, “Multi-sensor image fusion based on fourth order partial differential equations,” in 2017 20th International Conference on Information Fusion (Fusion). IEEE, 2017, pp. 1–9.
[44] Y. Chen, J. Guan, and W.-K. Cham, “Robust multi-focus image fusion using edge model and multi-matting,” IEEE Transactions on Image Processing, vol. 27, no. 3, pp. 1526–1541, 2017.
[45] W. Zhao, H. Lu, and D. Wang, “Multisensor image fusion and enhancement in spectral total variation domain,” IEEE Transactions on Multimedia, vol. 20, no. 4, pp. 866–879, 2017.
[46] W. Zhao, D. Wang, and H. Lu, “Multi-focus image fusion with a natural enhancement via a joint multi-level deeply supervised convolutional neural network,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 29, no. 4, pp. 1102–1115, 2018.
[47] Y. Yang, Z. Nie, S. Huang, P. Lin, and J. Wu, “Multilevel features convolutional neural network for multifocus image fusion,” IEEE Transactions on Computational Imaging, vol. 5, no. 2, pp. 262–273, 2019.
[48] V. Deshmukh, A. Khaparde, and S. Shaikh, “Multi-focus image fusion using deep belief network,” in International Conference on Information and Communication Technology for Intelligent Systems. Springer, 2017, pp. 233–241.
[49] S. Savic and Z. Babic, “Multifocus image fusion based on empirical mode decomposition,” in 19th IEEE International Conference on Systems, Signals and Image Processing (IWSSIP), 2012.
[50] S. Aymaz, C. Kose, and S. Aymaz, “Multi-focus image fusion for different datasets with super-resolution using gradient-based new fusion rule,” Multimedia Tools and Applications, pp. 1–40, 2020.
[51] J. Tian, L. Chen, L. Ma, and W. Yu, “Multi-focus image fusion using a bilateral gradient-based sharpness criterion,” Optics Communications, vol. 284, no. 1, pp. 80–87, 2011.
[52] Y. Liu, X. Chen, R. K. Ward, and Z. J. Wang, “Image fusion with convolutional sparse representation,” IEEE Signal Processing Letters, vol. 23, no. 12, pp. 1882–1886, 2016.
[53] Y. Liu and Z. Wang, “Multi-focus image fusion based on wavelet transform and adaptive block,” Journal of Image and Graphics, vol. 18, no. 11, pp. 1435–1444, 2013.
[54] S. Paul, I. S. Sevcenco, and P. Agathoklis, “Multi-exposure and multi-focus image fusion in gradient domain,” Journal of Circuits, Systems and Computers, vol. 25, no. 10, p. 1650123, 2016.
[55] X. Qiu, M. Li, L. Zhang, and X. Yuan, “Guided filter-based multi-focus image fusion through focus region detection,” Signal Processing: Image Communication, vol. 72, pp. 35–46, 2019.
[56] S. Li, X. Kang, J. Hu, and B. Yang, “Image matting for fusion of multi-focus images in dynamic scenes,” Information Fusion, vol. 14, no. 2, pp. 147–162, 2013.
[57] J. Ma, Z. Zhou, B. Wang, and M. Dong, “Multi-focus image fusion based on multi-scale focus measures and generalized random walk,” in 2017 36th Chinese Control Conference (CCC). IEEE, 2017, pp. 5464–5468.
[58] D. P. Bavirisetti, G. Xiao, J. Zhao, R. Dhuli, and G. Liu, “Multi-scale guided image and video fusion: A fast and efficient approach,” Circuits, Systems, and Signal Processing, vol. 38, no. 12, pp. 5576–5605, Dec 2019.
[59] V. Naidu, “Image fusion technique using multi-resolution singular value decomposition,” Defence Science Journal,
vol. 61, no. 5, pp. 479–484, 2011.
[60] Z. Zhou, S. Li, and B. Wang, “Multi-scale weighted gradient-based fusion for multi-focus images,” Information Fusion, vol. 20, pp. 60–72, 2014.
[61] X. Song and X.-J. Wu, “Multi-focus image fusion with PCA filters of PCANet,” in IAPR Workshop on Multimodal Pattern Recognition of Social Signals in Human-Computer Interaction. Springer, 2018, pp. 1–17.
[62] H. Li, L. Li, and J. Zhang, “Multi-focus image fusion based on sparse feature matrix decomposition and morphological filtering,” Optics Communications, vol. 342, pp. 1–11, 2015.
[63] M. Amin-Naji, P. Ranjbar-Noiey, and A. Aghagolzadeh, “Multi-focus image fusion using singular value decomposition in DCT domain,” in 2017 10th Iranian Conference on Machine Vision and Image Processing (MVIP). IEEE, 2017, pp. 45–51.
[64] J. Ma, Z. Zhou, B. Wang, L. Miao, and H. Zong, “Multi-focus image fusion using boosted random walks-based algorithm with two-scale focus maps,” Neurocomputing, vol. 335, pp. 9–20, 2019.
[65] Z. Liu, E. Blasch, Z. Xue, J. Zhao, R. Laganiere, and W. Wu, “Objective assessment of multiresolution image fusion algorithms for context enhancement in night vision: A comparative study,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, pp. 94–109, 2012.
[66] D. M. Bulanon, T. Burks, and V. Alchanatis, “Image fusion of visible and thermal images for fruit detection,” Biosystems Engineering, vol. 103, no. 1, pp. 12–22, 2009.
[67] J. van Aardt, “Assessment of image fusion procedures using entropy, image quality, and multispectral classification,” Journal of Applied Remote Sensing, vol. 2, no. 1, p. 023522, 2008.
[68] M. Hossny, S. Nahavandi, and D. Creighton, “Comments on ‘Information measure for performance of image fusion’,” Electronics Letters, vol. 44, no. 18, pp. 1066–1067, 2008.
[69] P. Jagalingam and A. V. Hegde, “A review of quality metrics for fused image,” Aquatic Procedia, vol. 4, pp. 133–142, 2015.
[70] Q. Wang, Y. Shen, and J. Q. Zhang, “A nonlinear correlation measure for multivariable data set,” Physica D: Nonlinear Phenomena, vol. 200, no. 3–4, pp. 287–295, 2005.
[71] Q. Wang, Y. Shen, and J. Jin, “Performance evaluation of image fusion techniques,” Image Fusion: Algorithms and Applications, vol. 19, pp. 469–492, 2008.
[72] N. Cvejic, C. Canagarajah, and D. Bull, “Image fusion metric based on mutual information and Tsallis entropy,” Electronics Letters, vol. 42, no. 11, pp. 626–627, 2006.
[73] G. Cui, H. Feng, Z. Xu, Q. Li, and Y. Chen, “Detail preserved fusion of visible and infrared images using regional saliency extraction and multi-scale image decomposition,” Optics Communications, vol. 341, pp. 199–209, 2015.
[74] B. Rajalingam and R. Priya, “Hybrid multimodality medical image fusion technique for feature enhancement in medical diagnosis,” International Journal of Engineering Science Invention, 2018.
[75] C. S. Xydeas and V. Petrović, “Objective image fusion performance measure,” Electronics Letters, vol. 36, no. 4, pp. 308–309, 2000.
[76] J. Zhao, R. Laganiere, and Z. Liu, “Performance assessment of combinative pixel-level image fusion based on an absolute feature measurement,” International Journal of Innovative Computing, Information and Control, vol. 3, no. 6, pp. 1433–1447, 2007.
[77] Y.-J. Rao, “In-fibre Bragg grating sensors,” Measurement Science and Technology, vol. 8, no. 4, p. 355, 1997.
[78] A. M. Eskicioglu and P. S. Fisher, “Image quality measures and their performance,” IEEE Transactions on Communications, vol. 43, no. 12, pp. 2959–2965, 1995.
[79] N. Cvejic, A. Loza, D. Bull, and N. Canagarajah, “A similarity metric for assessment of image fusion algorithms,” International Journal of Signal Processing, vol. 2, no. 3, pp. 178–182, 2005.
[80] G. Piella and H. Heijmans, “A new quality metric for image fusion,” in Proceedings 2003 International Conference on Image Processing (Cat. No. 03CH37429), vol. 3. IEEE, 2003, pp. III–173.
[81] C. Yang, J.-Q. Zhang, X.-R. Wang, and X. Liu, “A novel similarity based quality metric for image fusion,” Information Fusion, vol. 9, no. 2, pp. 156–160, 2008.
[82] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
[83] Y. Chen and R. S. Blum, “A new automated quality assessment algorithm for image fusion,” Image and Vision Computing, vol. 27, no. 10, pp. 1421–1432, 2009.
[84] H. Chen and P. K. Varshney, “A human perception inspired quality metric for image fusion based on regional information,” Information Fusion, vol. 8, no. 2, pp. 193–207, 2007.
[85] Y. Han, Y. Cai, Y. Cao, and X. Xu, “A new image fusion performance metric based on visual information fidelity,” Information Fusion, vol. 14, no. 2, pp. 127–135, 2013.
[86] J. Ma, Y. Ma, and C. Li, “Infrared and visible image fusion methods and applications: A survey,” Information Fusion, vol. 45, pp. 153–178, 2019.