Source: fusion.isif.org/proceedings/fusion12CD/html/pdf/088_223.pdf

Fast and Adaptive Bidimensional Empirical Mode Decomposition for the Real-time Video Fusion

Maciej Wielgus
Institute of Micromechanics and Photonics, Warsaw University of Technology, Warsaw, Poland
[email protected]

Adrian Antoniewicz, Michał Bartyś, Barbara Putz
Institute of Automatic Control and Robotics, Warsaw University of Technology, Warsaw, Poland
[email protected], [email protected], [email protected]

Abstract — The Bidimensional Empirical Mode Decomposition (BEMD) method has proved capable of producing high-quality fusion of infrared (IR) and visible (VIS) images. However, the high computational complexity of this algorithm currently prevents real-time implementation, which is necessary in many typical applications of VIS-IR fusion, e.g., in environment monitoring. In contrast, the Fast and Adaptive Bidimensional Empirical Mode Decomposition (FABEMD), a variant of BEMD in which the signal envelope is estimated by means of statistical filters rather than 2D spline interpolation, is able to overcome this shortcoming. We evaluate FABEMD outputs in the context of VIS-IR fusion and present a real-time VIS-IR video fusion system based on a single-chip Field Programmable Gate Array.

Keywords - real-time image fusion, multimodal image fusion, infrared (IR) image, Bidimensional Empirical Mode Decomposition (BEMD), Field Programmable Gate Array (FPGA)

I. INTRODUCTION

The method of Empirical Mode Decomposition (EMD) was introduced in [1] as a preprocessing step for Hilbert Spectral Analysis (HSA). With EMD, a signal is decomposed into a series of zero-mean, oscillatory subsignals, so-called Intrinsic Mode Functions (IMFs). As an adaptive and data-driven technique able to deal with nonstationary and nonlinear input, EMD rapidly became a widely recognized signal analysis tool. It was soon introduced to the field of image processing [2], where the two-dimensional generalization of EMD is usually referred to as Bidimensional EMD (BEMD), while the IMFs are typically called Bidimensional IMFs (BIMFs).

In [3] the EMD algorithm was proposed as an image fusion technique. However, being a 1D method applied row by row, EMD ignores the correlation between image rows and is more sensitive to noise than 2D methods. Further variants of EMD and BEMD were discussed in this context in [4-6]. In particular, the application of BEMD to infrared (IR) and visible (VIS) image fusion presented in [6] demonstrated the high quality of the obtained results. In the comparison given in [7], BEMD clearly outperformed the often-favored discrete wavelet transform (DWT) for VIS-IR fusion. Unfortunately, the heavy computational load of this method gave little hope of real-time image fusion applications.

Recently, the Fast and Adaptive Bidimensional Empirical Mode Decomposition (FABEMD) algorithm was introduced in [8] and further developed in [9]. In [10] the good quality of FABEMD-based fusion was demonstrated, although not in the context of multimodal image fusion but rather for multi-focus images. Among other advantages that will be discussed further, FABEMD offers a significant reduction of computation time compared to BEMD. Together with a low-level implementation on a Field Programmable Gate Array (FPGA), this allowed us to develop a real-time video fusion system with VIS and IR inputs, intended for environment monitoring purposes.

This paper is organized as follows. Section II gives a brief description of the EMD algorithm. Section III describes the FABEMD algorithm, contrasting it with the regular, interpolation-based EMD. Section IV discusses the fusion algorithm. Section V presents the details of the fusion implementation on an FPGA circuit. Finally, results are discussed in Section VI and conclusions are given in Section VII.

II. EMD OVERVIEW

EMD consists of the so-called sifting procedure, during which signal maxima and minima are identified and serve as nodes for the interpolation of the upper and lower signal envelopes, respectively. In the 1D case the interpolation is mainly based on cubic splines [1]. In image processing with BEMD, radial basis functions [11] or bicubic splines, possibly coupled with domain triangulation [12], are utilized. The mean envelope Em is then calculated as the average of the upper and lower envelopes and subtracted from the data. If a certain condition is fulfilled, the result becomes the first decomposed element, IMF1; otherwise the procedure is repeated until the condition is met (the internal loop of EMD). The purpose of these iterations is to improve the symmetry between the upper and lower envelopes and to ensure that there is a zero-crossing between every two extrema, so that the extracted IMFs are truly zero-mean and oscillatory. The stop condition commonly takes the form of a small enough normalized difference between the results of consecutive iterations [1], as a perfect IMF would be a fixed point of this loop. Finally, the obtained IMF is subtracted from the initial data and the algorithm is iterated on the result (the external loop of EMD). The number of extrema diminishes as the algorithm progresses, eventually leading to the

This work has been supported in part by the research project No. O R00 0019 07 of the National Centre for Research and Development in Poland.


monotonic residual signal r_N(x). The initial signal s(x) can therefore be represented as:

    s(x) = Σ_{i=1}^{N} IMF_i(x) + r_N(x).    (1)

The typically small number of decomposition levels N, and the frequent physical meaningfulness of the levels for real data, constitute additional advantages of the EMD algorithm.
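The sifting loop described in this section can be illustrated with a short 1D sketch. This is our own minimal illustration, not the authors' implementation: the function names, the test signal, and the tolerance value are assumptions, while the envelope interpolation (cubic splines through the extrema) and the stop condition (small normalized difference between consecutive iterations) follow the description above.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_once(s):
    """One sifting pass: subtract the mean of the cubic-spline envelopes
    through the maxima and minima of s."""
    x = np.arange(len(s))
    imax = argrelextrema(s, np.greater)[0]
    imin = argrelextrema(s, np.less)[0]
    if len(imax) < 2 or len(imin) < 2:
        return None  # too few extrema: s is (close to) a residual
    upper = CubicSpline(imax, s[imax])(x)
    lower = CubicSpline(imin, s[imin])(x)
    return s - (upper + lower) / 2.0

def extract_imf(s, tol=0.05, max_iter=20):
    """Internal loop of EMD: iterate sifting until the normalized
    difference between consecutive results is small enough."""
    h = s
    for _ in range(max_iter):
        h_new = sift_once(h)
        if h_new is None:
            break
        sd = np.sum((h - h_new) ** 2) / (np.sum(h ** 2) + 1e-12)
        h = h_new
        if sd < tol:
            break
    return h

# A fast oscillation on top of a slow one; IMF1 should capture the former.
t = np.linspace(0.0, 1.0, 500)
s = np.sin(2 * np.pi * 25 * t) + 0.5 * np.sin(2 * np.pi * 3 * t)
imf1 = extract_imf(s)
residual = s - imf1  # s == imf1 + residual by construction, as in (1)
```

The external loop would repeat `extract_imf` on `residual` until it becomes monotonic, yielding the representation (1).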

III. FABEMD METHOD DETAILS

A. Algorithm overview

The most significant difference between FABEMD and regular BEMD is that the former utilizes statistical MAX/MIN filters, with additional smoothing by averaging, to estimate the envelopes and, as argued in [8], does not require additional iterations for a single IMF extraction (there is no internal loop of EMD). The FABEMD algorithm can therefore be summarized by the flowchart in Fig. 1.

The order-statistics (MAX/MIN) and smoothing window size is calculated from the distribution of signal extrema and the distances between neighboring maxima (minima). Several methods of smoothing-window size selection were proposed and discussed in [8-9]; for example, it can be the rounded lowest Euclidean distance between extrema of the same type (Lowest Distance Order Statistics Filter Width, LD-OSFW). A simple non-weighted average is typically used for the smoothing.
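This envelope estimation can be sketched as follows. The sketch makes one simplifying assumption: the window size `w` is fixed by hand, whereas FABEMD derives it from the extrema distances (e.g., LD-OSFW); the function names are ours.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter, uniform_filter

def envelopes(img, w):
    """Upper/lower envelope estimates: order-statistics (MAX/MIN)
    filtering over a w-by-w window, then smoothing by a plain
    (non-weighted) w-by-w moving average."""
    upper = uniform_filter(maximum_filter(img, size=w), size=w)
    lower = uniform_filter(minimum_filter(img, size=w), size=w)
    return upper, lower

def extract_bimf(img, w):
    """One FABEMD level: subtract the mean envelope (no internal loop)."""
    upper, lower = envelopes(img, w)
    mean_env = (upper + lower) / 2.0
    return img - mean_env, mean_env

rng = np.random.default_rng(0)
img = rng.random((64, 64))          # stand-in for a luminance frame
bimf, mean_env = extract_bimf(img, w=7)
```

Both filtering stages are separable window operations, which is what makes the method amenable to a streaming FPGA datapath.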

B. Comparison with BEMD

There is no 2D spline interpolation on an irregular grid in the FABEMD method, which is the main reason for the improvement in time efficiency. Secondly, the method does not introduce overshooting and undershooting errors into the envelope estimation, which is the case with BEMD. Moreover, it reduces the typical problem of BEMD with interpolation at the image border. This was particularly troublesome for methods involving triangulation, in which effective interpolation could only be obtained inside the convex hull of the extrema set. Only relatively basic operations, such as 2D convolution, are required to perform FABEMD. This is why the method is preferred for a low-level FPGA implementation.

IV. IMAGE FUSION WITH FABEMD

The EMD-based approach to image fusion benefits from the algorithm's adaptivity, which enables it to recognize characteristic scales and image features better than linear methods. The idea of EMD-based fusion is to decompose the images to be fused and, on each decomposition level, locally select (based on a certain decision rule) the signal that contains the more valuable information. In [6-7] the high quality of VIS-IR image fusion results with BEMD was noted, favoring BEMD over more common methods such as the contrast pyramid or the discrete

Figure 1. Flowchart of the FABEMD algorithm.

wavelet transform. However, with its computationally expensive radial basis function interpolation and rather sophisticated decision rule [6], that method could not be used in real-time applications. Such a possibility emerged with the introduction of FABEMD. Fusion based on FABEMD can be summarized in the following steps:

1. Perform FABEMD of both initial images.
2. For each decomposition level, combine the values of the two respective BIMFs.
3. Combine the two residues.
4. Sum up all combined components to obtain the result of the fusion.

Clearly, for a meaningful comparison between the respective BIMFs extracted from different images, their scales should be matched. This is why the size of the statistical MAX/MIN filter used in step (1) has to be agreed for both images on every level of the decomposition. Some assumptions on the value of the filter window size can be used to avoid redundant computations; e.g., with the LD-OSFW choice the first BIMF is, for almost any real data, calculated with the smallest possible window size. In Fig. 2 (c-h), exemplary BIMFs are shown, along with the residual images after extraction of 3 BIMFs in Fig. 2 (i-j).
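A minimal sketch of step (1) with matched scales could look as follows. The fixed window schedule `[3, 7, 15]` and the random test arrays are illustrative stand-ins (assumptions of ours) for the LD-OSFW-derived sizes and the real VIS/IR frames; the point is that both images are decomposed with the same window at every level, so their i-th BIMFs are comparable.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter, uniform_filter

def fabemd(img, windows):
    """Decompose img into one BIMF per window size, plus a residue.
    MAX/MIN order-statistics filtering with averaging estimates the
    envelopes; the mean envelope becomes the input of the next level."""
    bimfs, rest = [], img.astype(float)
    for w in windows:
        upper = uniform_filter(maximum_filter(rest, size=w), size=w)
        lower = uniform_filter(minimum_filter(rest, size=w), size=w)
        bimf = rest - (upper + lower) / 2.0
        bimfs.append(bimf)
        rest = rest - bimf
    return bimfs, rest

windows = [3, 7, 15]                 # shared schedule for both inputs
rng = np.random.default_rng(0)
vis = rng.random((64, 64))           # stand-in for the VIS frame
ir = rng.random((64, 64))            # stand-in for the IR frame
bimfs_vis, res_vis = fabemd(vis, windows)
bimfs_ir, res_ir = fabemd(ir, windows)
```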


Figure 2. Image "Street": VIS input (a), IR input (b); BIMF1 for VIS and IR (c-d); BIMF2 for VIS and IR (e-f); BIMF3 for VIS and IR (g-h); residues for VIS and IR after subtraction of 3 BIMFs (i-j); results of combining BIMFs at 3 decomposition levels (k-m); result of combining residues (n) and the final result of FABEMD fusion (o).

The method of combining two BIMFs in step (2) has a significant influence on the final result. For multi-focus image fusion ([5], [10]) a reasonable choice is to select the BIMF with the locally larger variance (or another measure of spatial activity), which is expected to be larger for the in-focus image. In the VIS-IR fusion case, however, the local variance method did not perform significantly better than the much less computationally expensive MAX(ABS) criterion, given in (2):

    C_i(x) = BIMF_i^VIS(x)  if |BIMF_i^VIS(x)| ≥ |BIMF_i^IR(x)|,
    C_i(x) = BIMF_i^IR(x)   otherwise,    (2)

where C_i(x) represents the combination of the IR and VIS image BIMFs on the i-th decomposition level. Examples of such combinations are given in Fig. 2 (k-m).

The residues should be combined as well, which is particularly important for multimodal image fusion, where the residues may vary significantly, unlike in multi-focus fusion.


The MAX(ABS) criterion has no motivation in this case, as the residues are not oscillatory in nature. Therefore, in step (3) we adopted a simple arithmetic average of the residues as the combination rule.
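Steps (2)-(4) reduce to simple elementwise rules, sketched below. This is an illustration under our own naming; the arrays stand in for the BIMFs and residues produced by the decomposition of the two inputs.

```python
import numpy as np

def combine_bimfs(b_vis, b_ir):
    """MAX(ABS) rule of (2): per pixel, keep the BIMF value with the
    larger magnitude, whichever input it comes from."""
    return np.where(np.abs(b_vis) >= np.abs(b_ir), b_vis, b_ir)

def fuse(bimfs_vis, bimfs_ir, res_vis, res_ir):
    """Steps (2)-(4): combine BIMFs level by level with MAX(ABS),
    average the (non-oscillatory) residues, and sum everything."""
    combined = [combine_bimfs(a, b) for a, b in zip(bimfs_vis, bimfs_ir)]
    res = (res_vis + res_ir) / 2.0
    return sum(combined) + res
```

Because both rules are purely pointwise, they map directly onto a pixel-per-clock pipeline in hardware.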

Experiments revealed that a full decomposition is not required for highly satisfactory fusion results. In fact, it is crucial to transfer to the fused image the small but palpable details present in only one of the inputs. Fusion at the larger scales can be performed by taking the mean value without significant quality loss - see Fig. 2n. This is why we select only the first 3-5 BIMFs, treating the remaining parts as residues.

The final result in step (4) is obtained as the sum of the Ci(x) components and the combination of residues; see Fig. 2o.

An issue that has a huge impact on the fusion quality is image alignment. The presented system is designed to work in a dynamic environment, potentially outdoors or on a moving vehicle, possibly tracking objects at varying distances. It is therefore exposed to disturbances such as vibrations and varying temperature influencing the cameras' properties. These effects demand real-time software correction of image misalignment, in addition to the correction of the fixed relative positions of the cameras. The computational load of image alignment diminishes the resources available for the fusion and must be taken into account when designing the real-time implementation.

V. REAL-TIME FUSION IMPLEMENTATION

A. Hardware

Implementations of real-time image registration and fusion have been announced in [13-15]. In [13] an FPGA-based system called Ad-FIRE is described. It is intended for military purposes, and no detailed information about the applied image fusion approaches has been given. Reference [14] presents a bulky real-time image registration and fusion prototype system implemented on an embedded PC hardware platform composed of off-the-shelf components. Reference [15] demonstrates a real-time implementation of an image fusion system on the Octec ADEPT60 VME card. This card is typically used for video-tracker-based appliances. The authors announce that the design of a tailor-made image fusion card is underway.

Implementing multispectral fusion at video frame rates requires high computational power. Taking into account portable and outdoor applications, principally with high data stream rates, either Field Programmable Gate Arrays (FPGAs) or CPU-based architectures are to be considered.

Using high-power FPGAs instead of CPU-based processors takes advantage of a low power/heat dissipation ratio [15] combined with massive processing throughput. As our experiments show, the power dissipation of the FPGA image fusion system is rated under the 4 W level. While running real-time FABEMD-based image fusion, the FPGA itself dissipates less than 1 W of electrical power. This allows FPGA-based fusion systems to be deployed in battery-powered and/or passively cooled cabinets. The flexible architecture of the FPGA also makes it possible to keep the overall electronic system extremely compact. Due to its programmable and flexible architecture, it allows the replacement of a large number of external components and peripherals (glue logic) needed in processor-based systems.

Off-the-shelf hardware video development boards are commercially available. These boards, however, may be useful only in the early development phases of image registration and fusion approaches. Typically they do not comply with electromagnetic compatibility (EMC) requirements and may not be used over an extended ambient temperature range. Therefore, we decided to build a custom FPGA-based system called UFO. UFO was principally intended to manage real-time image alignment and fusion operating at 50 Hz and above. The structure of the UFO system is presented in Fig. 3.

Different-modality analogue CCIR-coded video streams from visible- and infrared-spectrum cameras are fed into a low-power multi-channel NTSC/PAL integrated video decoder via appropriate external passive filters. The external filters are built from a few tiny passive discrete elements. Video parameters such as hue, contrast, brightness, saturation and sharpness are programmed for each channel by means of an IIC serial interface. The video decoder generates digital video outputs and provides synchronization, blanking, lock and clock signals for the FPGA. Both luminance and chrominance 8-bit parallel standard-coded (ITU-R BT.656) digital video outputs are presented to the FPGA, but only the luminance data streams are used for further processing.

The FPGA chosen for the UFO prototype board is a low-cost, low-power device providing 150K logic elements, 6.48 Mb of embedded memory, 360 18 × 18 multipliers, and 475 user I/O pins arranged in 11 banks. The FPGA runs at a 150 MHz clock frequency. Two banks of external DDRAM2 memories extend the available memory space up to 2 x 64M x 32b. Additionally, a synchronous burst static RAM (2M x 18b) completes the memory resources of the system. It should be mentioned that the acceleration of image fusion processing was accomplished in a way that minimizes the total number of necessary memory transfers. Additionally, to speed up calculations, the FPGA embedded memory is used intensively.

Figure 3. Simplified block schematics of the prototype UFO board.


The fused image is fed to an external video encoder in the form of a digital graphics output signal and is then displayed by means of the integrated video encoder. The encoder accepts a digital graphics signal and transmits the video signal stream through a TV output (s-video). The device accepts data from the FPGA over a 12-bit-wide data port and outputs a TV-standard signal by means of 10-bit video Digital-to-Analogue Converters (DACs).

The encoder's TV processor performs non-interlace-to-interlace conversion with scaling and flicker filters and encodes the data into any of the NTSC or PAL video standards. It supports 8 graphics resolutions up to 1024 by 768 pixels.

B. Software

Early implementations of FABEMD image fusion were carried out by means of the commercially available Altera Cyclone III Video Development Kit. First, a NIOS II processor was embedded in the FPGA of the kit. The NIOS II processor was foreseen to manage the other computational blocks developed for fusion processing. After power-up, the processor initializes the video inputs for acquisition of the video data streams.

A block diagram of the FABEMD fusion implementation in the FPGA is shown in Fig. 4, where the "Window" blocks denote window-size calculation operations and the "Filter" blocks represent MAX/MIN filtration and smoothing (Fig. 1). The fusion processor performs steps 2-4 of the fusion algorithm summary (Section IV).

Figure 4. Block diagram of FABEMD fusion implementation in FPGA.

TABLE I. FABEMD FUSION IMPLEMENTATION – ALTERA CYCLONE III REQUIREMENTS

Resource type    Used      Available   Used [%]
Logic cells      17860     119088      15
Registers        7862      119088      7
Memory bits      617472    3981312     15
DSP elements     0         576         0

Buffered image frames from both video data streams are transferred by Direct Memory Access (DMA) channels directly to the FABEMD fusion processing block and then to the frame buffer of the digital graphics output channel. Each FABEMD decomposition level is performed in parallel in order to increase overall system throughput. This was accomplished by the use of special input image buffers. The data from the DMA channels overwrite the image buffers while simultaneously providing immediate access to each input data sample. The output coefficients calculated on each decomposition level (BIMFs) are processed by the fusion processor, which generates the output image. The implemented fusion processing structure, with its parallelized and pipelined processing flow, is capable of computing one pixel of the fused image per FPGA system clock cycle. The output data resulting from the fusion process are saved in the fusion output frame buffer and displayed on the video output device.

The resource requirements of the FABEMD image fusion implementation are shown in Table I. In comparison to other FPGA multilevel fusion implementations [16], the presented system structure requires a small number of RAM blocks and achieves a higher clock speed (tested up to 150 MHz). This opens the possibility of fusing high-resolution images.

VI. DISCUSSION OF RESULTS

In Fig. 2 an exemplary FABEMD decomposition into 3 BIMFs and a residue is presented, together with partial combination results and the final fusion result (Fig. 2o). For quantitative evaluation of fusion quality, the values of the objective image fusion performance measure (OIFPM) [17] were used. OIFPM reflects how exactly the information about the image gradient magnitude and orientation is transferred to the fused image. Results of the OIFPM evaluation for the two exemplary images presented in Fig. 2 and Fig. 5 are given in Table II. Comparison with several popular fusion methods (similar to [6]) indicates FABEMD's superiority.

TABLE II. OIFPM FUSION QUALITY MEASURE RESULTS

Image             Mean     Contrast pyramid   DWT (DBS(2,2))   FABEMD
Plane and trees   0.4535   0.6235             0.6393           0.6918
Street            0.4454   0.4831             0.5724           0.5986


Figure 5. Image "Plane and trees": VIS input (a), IR input (b), fusion by mean value (c), fusion by contrast pyramid (d), fusion by discrete wavelet transform (e), fusion by FABEMD (f).

In Fig. 5 the resulting images of fusion with four different algorithms are shown for the image "Plane and trees". Note how the hot plane engines, present exclusively in the IR image, are transferred to the fusion result, while the trees, which are out of focus in the IR image, remain unblurred. In the presented example the FABEMD image (Fig. 5f) shows the best fusion quality, providing better contrast than DWT (Fig. 5e) while not introducing the smoothing of averaging (Fig. 5c) or the unnatural background inhomogeneity of the contrast pyramid method (Fig. 5d).

VII. CONCLUSIONS

We have presented an FPGA implementation of FABEMD-based image fusion for real-time video fusion applications. The proposed solution takes advantage of the FABEMD properties valuable for image fusion while ensuring the high processing speed demanded for real-time processing. In the performed tests, FABEMD proved more efficient than several popular fusion methods for multimodal (VIS and IR) image fusion. The developed prototype of the real-time fusion system is able to process 25 pairs of image frames per second at resolutions up to 640x480 pixels. This is a highly satisfactory result considering the typically low resolution of IR cameras.

ACKNOWLEDGMENT

M. Wielgus thanks Professor K. Patorski for the valuable introduction to the methods of BEMD and FABEMD.

REFERENCES

[1] N. E. Huang, Z. Shen, S. R. Long, M. C. Wu, H. H. Shih, Q. Zheng, N. C. Yen, C. C. Tung, and H. H. Liu, "The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis," Proc. Roy. Soc. Lond. A, vol. 454, pp. 903-995, 1998.

[2] A. Linderhed, “2-D empirical mode decompositions in the spirit of image compression”, Proc. SPIE, Wavelet and Independent Component Analysis Applications IXI, vol. 4738, pp. 1-8, 2002.

[3] H. Hariharan, A. Gribok, M. Abidi, and A. Koschan, “Image Fusion and Enhancement via Empirical Mode Decomposition”, Journal of Pattern Recognition Research, Vol. 1, No. 1, pp. 16-32, 2006.

[4] D. P. Mandic, M. Golz, A. Kuh, D. Obradovic, and T. Tanaka, Signal Processing Techniques for Knowledge Extraction and Information Fusion, Springer, New York, 2008.

[5] D. Looney and D. P. Mandic, “Multiscale Image Fusion Using Complex Extensions of EMD”, IEEE Transactions on Signal Processing, Vol 57, No.4, pp. 1626-1630, 2009.

[6] W. Liang, Z. Liu, “Region-based fusion of infrared and visible images using Bidimensional Empirical Mode Decomposition”, Int. Conf. on Educ. and Inf. Techn. ICEIT, pp. V3-358 – V3-363, 2010.

[7] X. Zhang, Q. Chen, and T. Men, "Comparison of fusion methods for the infrared and color visible images", Int. Conf. on Comp. Science and Inf. Techn. ICCSIT 2009, pp. 421-424.

[8] S. M. A. Bhuiyan, R. R. Adhami, and J. F. Khan, “A novel approach of fast and adaptive bidimensional empirical mode decomposition,” IEEE Int. Conf. on Acoustics, Speech and Signal Processing, pp. 1313-1316, 2008.

[9] S. M. A. Bhuiyan, R. R. Adhami, and J. F. Khan, “Fast and adaptive bidimensional empirical mode decomposition using order-statistics filter based envelope estimation,” EURASIP J. Adv. Signal Proc., ID728356, pp. 1-18, 2008.

[10] M. U. Ahmed, D. P. Mandic, “Image fusion based on Fast and Adaptive Bidimensional Empirical Mode Decomposition”, IEEE 13th Conference on Information Fusion, Fusion 2010, pp. 1-6.

[11] J. C. Nunes, Y. Bouaoune, E. Delechelle, O. Niang, Ph. Bunel, “Image analysis by bidimensional empirical mode decomposition”, Image Vis. Comput., vol. 21, pp. 1019–1026, 2003.

[12] C. Damerval, S. Meignen and V. Perrier, “A fast algorithm for bidimensional EMD”, IEEE Signal Process. Lett., vol. 12, pp. 701–704, 2005.

[13] T. Waters, L. Swan, R. Rickman “Real-time Image Registration and Fusion in a FPGA Architecture (Ad-FIRE)”, Proc. SPIE 8042, 80420Y (2011); http://dx.doi.org/10.1117/12.883807.

[14] J. P. Heather, M.I. Smith, J. Sadler, D. Hickman “Issues and challenges in the development of a commercial image fusion system” , Proc. SPIE 7701, 77010A (2010), http://dx.doi.org/10.1117/12.850018.

[15] D. Dwyer, M. Smith, J. Dale, J. Heather “Real time implementation of image alignment and fusion” In: Electro-Optical and Infrared Systems: Technology and Applications; R.G. Driggers, D.A. Huckridge, Editors, Proc. SPIE Vol. 5612 (2004), pp. 85-93.

[16] O. Sims, J. Irvine, “An FPGA implementation of pattern-selective pyramidal image fusion”, IEEE Proceedings of FPL’2006, pp 1-4.

[17] C. S. Xydeas and V. Petrović, "Objective image fusion performance measure," Electronics Letters, vol. 36, no. 4, pp. 308-309, 2000.
