
AN IMPLEMENTATION OF UNDERWATER IMAGE ENHANCEMENT

TECHNIQUE USING FPGA AND MATLAB

Dr.V.Latha, Deebadarsene.S, Badmaja.RS, Amritha.S, Sahana.B

Department of Electronics and Communication Engineering, Velammal Engineering college, Chennai.

Abstract—This study enhances images that are captured underwater and degraded. A single-image approach is used, so neither specialized hardware nor information about the underwater scene structure and conditions is required. Color-compensated and white-balanced versions of the original degraded image are blended to obtain the enhanced image, using a multi-scale fusion strategy for the blending process. The enhanced images and videos are characterized by better exposedness of dark regions, improved global contrast, and sharper edges. The algorithm used in this approach is reasonably independent of the camera settings.

Index Terms—underwater image, white balancing, color compensation, discrete wavelet transform

I. INTRODUCTION

The underwater environment offers many rare attractions such as marine animals, fish, amazing landscapes, and shipwrecks. Underwater imaging has also been an important source for many branches of technology and scientific research, such as the inspection of underwater infrastructure, detection of man-made objects, control of underwater vehicles, marine biology research, and archeology. Unlike common images, underwater images suffer from poor visibility due to attenuation of the propagated light, mainly because of absorption and scattering. Absorption reduces the light energy, while scattering changes the direction of light propagation; together they produce a foggy appearance, degraded contrast, and misty distant objects. In practice, objects in sea water at a distance of more than 10 meters are almost indiscernible, and colors are faded because their composing wavelengths are cut off according to the depth of the water. Several attempts to restore and enhance the visibility of such degraded images have been introduced. Traditional enhancement techniques such as gamma correction and histogram equalization are strongly limited for underwater scenes, whose deterioration results from both multiplicative and additive processes. Hence, adaptive histogram equalization is used.

II. LITERATURE SURVEY

The existing underwater dehazing techniques can be grouped into several classes. An important class corresponds to methods that use specialized hardware [1], [10], [15]. The divergent-beam underwater LiDAR imaging (UWLI) system employs an optical/laser-sensing technique to capture turbid underwater images. However, such systems are expensive and consume considerable power. A second class involves polarization-based methods. These approaches use several images of the same scene captured with different degrees of polarization, obtained by rotating a polarizing filter fixed to the camera. For instance, Schechner and Averbuch [11] exploit the polarization associated with back-scattered light to estimate the transmission map. Though effective in recovering distant regions, these methods are not applicable to video acquisition and hence are of limited help with dynamic scenes. A third class employs multiple images [9], [16] or a rough approximation of the scene model [17]. Narasimhan and Nayar [9] used changes in the intensities of scene points under different weather conditions to detect depth discontinuities in the scene. Because the required additional information (images and depth approximations) is generally not available, these methods are impractical for common users.

A fourth class of methods exploits the similarities between light propagating in fog and under water. Recently, several single-image dehazing techniques have been introduced to restore images of outdoor foggy scenes [18]–[23]. These techniques reconstruct the intrinsic brightness of objects by inverting Koschmieder's visibility model [14]. Although the model was initially formulated under strict assumptions (e.g., homogeneous atmospheric illumination, a unique extinction coefficient regardless of the light wavelength, and a spatially uniform scattering process) [24], several works have relaxed those constraints. Algorithms based on the Dark Channel Prior (DCP) [19], [26] have also been applied to restoring underwater images.
This algorithm was initially proposed for dehazing outdoor scenes. It defines regions of small transmission as those with a large minimal color value, assuming that the radiance of objects in natural scenes is small. In the underwater context, Chiang and Chen [27] segment foreground and background regions based on the DCP and apply color compensation to remove haze and color variations. Drews, Jr., et al. [28] assume that the predominant source of visual information under water lies in the blue and green color channels. Their Underwater Dark Channel Prior (UDCP) estimates the transmission of underwater scenes better than the conventional DCP. Galdran et al. [29] observe that, under water, the reciprocal of the red component increases with the distance to the camera, and hence introduce the Red Channel prior to recover colors associated with short wavelengths in

underwater scenes. Emberton et al. [30] designed a hierarchical rank-based method that uses a set of features to find the most haze-opaque image regions and refines the back-scattered light estimation, which improves the inversion of the light transmission model. Lu et al. [31] employ color lines, as in [32], to estimate the ambient light, and implement a variant of the DCP to estimate the transmission.

International Journal of Applied Engineering Research ISSN 0973-4562 Volume 14, Number 6, 2019 (Special Issue) © Research India Publications. http://www.ripublication.com

III. PROPOSED SYSTEM

Our approach removes the haze in underwater images based on a single image captured with a conventional camera. Attenuation of the propagated light occurs mainly due to absorption and scattering effects under water: absorption reduces the light energy, while scattering changes the light propagation direction. The approach fuses multiple inputs, deriving two inputs from a single native input image by correcting the contrast and by sharpening a white-balanced version of that image. White balancing the input image removes the color cast induced by underwater light scattering, giving the sub-sea images a more natural appearance. The multi-scale fusion process then yields an artifact-free blend. The Gray World method is effective at removing the bluish tone, but it suffers from severe red artifacts. These artifacts are caused by a very small mean value of the red channel, which leads to overcompensation of that channel in locations where red is present.

A. FLOW CHART

Fig. 1: Block diagram of the implementation

B. WHITE BALANCING

White balancing aims to improve the image's appearance, primarily by removing the undesired color casts caused by the illumination or by the attenuation properties of the medium. Under water, color perception is highly correlated with depth, and the green-bluish appearance is an important problem that needs to be corrected. As light penetrates the water, attenuation selectively affects the wavelength spectrum, altering both the intensity and the appearance of a colored surface.
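As a concrete illustration of the Gray World idea that the white-balancing step builds on, the following is a minimal NumPy sketch. It is not the paper's MATLAB or FPGA code, and the function name is ours; it simply scales each channel so its mean matches the global mean intensity.

```python
import numpy as np

def gray_world_white_balance(img):
    """Gray World white balance: scale each channel so that its mean
    equals the global mean intensity. `img` is an HxWx3 float array
    with values in [0, 1]."""
    channel_means = img.reshape(-1, 3).mean(axis=0)  # per-channel means
    gray_mean = channel_means.mean()                 # target gray level
    gains = gray_mean / channel_means                # per-channel gains
    return np.clip(img * gains, 0.0, 1.0)
```

Applied to a greenish underwater frame, this equalizes the channel means; as noted above, on its own it overcompensates the nearly absent red channel, which motivates the compensation step of Section D.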

C. COLORSPACE CONVERSION

The RGB underwater image is converted into a different colorspace representation. The purpose of a color space is to facilitate the specification of colors in a standard, generally accepted way. The colorspace conversion used here is RGB to HSV (hue, saturation, value).
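The RGB-to-HSV conversion can be sketched with the standard library's colorsys module; this per-pixel loop is illustrative only (a real pipeline would vectorize it), and the function name is ours.

```python
import colorsys
import numpy as np

def rgb_to_hsv_image(img):
    """Convert an HxWx3 RGB image (floats in [0, 1]) to HSV,
    pixel by pixel, using the standard library's colorsys."""
    h, w, _ = img.shape
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = colorsys.rgb_to_hsv(*img[i, j])
    return out
```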

D. COMPENSATION

Compensation for the loss of the red channel is built on the following four observations/principles:

1. The green channel is relatively well preserved under water, compared with the red and blue channels. When traveling in clear water, the light with the longest wavelength, i.e. red light, is lost first.

2. Compared to the red channel, the green channel contains opponent color information, and it is essential to compensate for the stronger attenuation induced on red compared to green. Therefore, the red attenuation is compensated by adding a fraction of the green channel to the red one. This allows better recovery of the entire color spectrum while maintaining a natural appearance of the background.

3. The compensation should be proportional to the difference between the mean green and mean red values because, under the Gray World assumption (all channels have the same mean value before attenuation), this difference reflects the unbalance between the attenuation of the red and green color channels.

4. To avoid saturating the red channel during the Gray World step that follows the red-loss compensation, primarily the pixels with small red channel values should be affected, and pixels that already include a significant red component should not be changed. In other words, in regions where the red channel information is still significant, green channel information should not be transferred. Mathematically, the compensated red channel Irc at every pixel location x is expressed as shown below.

Irc(x) = Ir(x) + α (Īg − Īr) · (1 − Ir(x)) · Ig(x)        Eq. (1)

where Ir and Ig represent the red and green color channels of image I, each channel being normalized to the interval [0, 1], Īr and Īg are their mean values, and α is set to one. When blue is strongly attenuated and compensating the red channel alone appears insufficient, the blue channel attenuation is compensated analogously:

Ibc(x) = Ib(x) + α (Īg − Īb) · (1 − Ib(x)) · Ig(x)        Eq. (2)

where 𝐼𝑏, 𝐼𝑔 represent the blue and green color channels of image I, and α is also set to one.
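Equations (1) and (2) can be sketched directly in NumPy (a software counterpart of the FPGA datapath described next; the function name is ours):

```python
import numpy as np

def compensate_channels(img, alpha=1.0):
    """Red/blue channel compensation of Eq. (1) and Eq. (2):
    Irc(x) = Ir(x) + alpha*(mean(Ig) - mean(Ir))*(1 - Ir(x))*Ig(x),
    and analogously for blue. `img` is an HxWx3 RGB array in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    rc = r + alpha * (g.mean() - r.mean()) * (1 - r) * g  # Eq. (1)
    bc = b + alpha * (g.mean() - b.mean()) * (1 - b) * g  # Eq. (2)
    return np.clip(np.stack([rc, g, bc], axis=-1), 0.0, 1.0)
```

Note how the factor (1 − Ir(x)) realizes observation 4: pixels that already have a strong red component receive almost no correction.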

E. WHITE BALANCING USING FPGA

The process of white balancing can be implemented either in hardware or in software. The hardware implementation uses an FPGA (Field Programmable Gate Array); a Spartan-6 kit is used in this work. Equations (1) and (2) are used for this task: compensating the red and blue channels with the equations above yields the enhanced image.

Fig. 2: Hardware implementation of white balancing

Here, I represents the image, and Ib and Ig represent the blue and green color channels of the image.

F. HISTOGRAM EQUALIZATION

Histogram equalization is an image processing method that adjusts contrast using the image's histogram. It increases the global contrast of many images, especially when the usable data of the image is represented by close contrast values, and it distributes the intensities more evenly over the histogram. Areas of lower local contrast thereby gain a higher contrast; this is accomplished by effectively spreading out the most frequent intensity values. The technique is straightforward: in theory, if the histogram equalization function is known, the original histogram can be recovered, and the calculation is not computationally intensive. A major disadvantage of the method is that it is indiscriminate: while increasing the contrast of background noise, it may decrease the usable signal. Let f be a given image represented as integer pixel intensities ranging from 0 to L − 1, where L is the number of possible intensity values, often 256. Let p denote the normalized histogram of f, with a bin for each possible intensity:

p(n) = (number of pixels with intensity n) / (total number of pixels),  n = 0, 1, …, L − 1.        Eq. (3)

However, this global equalization technique does not provide a high-quality image.
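Global histogram equalization maps each intensity through the cumulative sum of the normalized histogram p(n) of Eq. (3). A minimal NumPy sketch (our own illustrative code, not the paper's implementation):

```python
import numpy as np

def histogram_equalize(img):
    """Global histogram equalization of an 8-bit grayscale image:
    each intensity is mapped through the normalized cumulative
    histogram (Eq. (3) gives the normalized histogram p(n))."""
    L = 256
    hist = np.bincount(img.ravel(), minlength=L)    # counts per intensity n
    p = hist / img.size                             # Eq. (3): normalized histogram
    cdf = np.cumsum(p)                              # cumulative distribution
    lut = np.round((L - 1) * cdf).astype(np.uint8)  # intensity mapping
    return lut[img]
```

A narrow band of input intensities is stretched across the full 0–255 range, which is exactly the contrast gain (and the noise-amplification risk) described above.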

G. ADAPTIVE HISTOGRAM EQUALIZATION

Adaptive histogram equalization (AHE) is an image processing technique used to improve contrast in images. The adaptive method computes several histograms, each corresponding to a distinct section of the image, and then uses them to redistribute the lightness values of the image. Local contrast and the definition of edges in each region of the image can thereby be improved significantly. Due to the nature of histogram equalization, the resulting value of a pixel under AHE is proportional to its rank among the pixels in its neighborhood. Though this method is more effective than global histogram equalization, it still yields a lower-quality image than the fusion approach.
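The "several local histograms" idea can be sketched with a simple tiled variant; this is our own bare-bones illustration, not the paper's implementation. Real AHE (and CLAHE) interpolates between neighboring tile mappings and clips the histogram, whereas this sketch equalizes each tile independently and therefore shows visible tile seams.

```python
import numpy as np

def tiled_ahe(img, tiles=4):
    """Minimal adaptive histogram equalization sketch: split an 8-bit
    grayscale image into a tiles x tiles grid and equalize each tile
    with its own local histogram (no inter-tile interpolation)."""
    out = np.empty_like(img)
    h, w = img.shape
    for ti in range(tiles):
        for tj in range(tiles):
            ys = slice(ti * h // tiles, (ti + 1) * h // tiles)
            xs = slice(tj * w // tiles, (tj + 1) * w // tiles)
            tile = img[ys, xs]
            hist = np.bincount(tile.ravel(), minlength=256)
            cdf = np.cumsum(hist) / tile.size                  # local CDF
            out[ys, xs] = np.round(255 * cdf).astype(np.uint8)[tile]
    return out
```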

H. FUSION USING DISCRETE WAVELET TRANSFORM

To avoid the disadvantages of the histogram and adaptive histogram equalization techniques, we introduce fusion using the Discrete Wavelet Transform (DWT). The wavelet transform is a signal analysis method similar to image pyramids; however, while image pyramids produce an overcomplete set of transform coefficients, the wavelet transform produces a nonredundant image representation. The discrete two-dimensional wavelet transform is computed by recursively applying low-pass and high-pass filters along each direction of the input image (rows and columns), followed by subsampling.

One major drawback of the wavelet transform is its shift dependency: a simple shift of the input signal may lead to completely different transform coefficients. To overcome this, the input images must be decomposed into a shift-invariant representation. There are several ways to achieve this. One is to compute the wavelet transform for all possible circular shifts of the input signal. Another is to drop the subsampling during decomposition and modify the filters at each decomposition level, which results in a highly redundant signal representation.

The basic idea of the wavelet transform is to represent an arbitrary function as a superposition of a set of wavelets, or basis functions. These basis functions, also known as baby wavelets, are obtained from a single prototype called the mother wavelet by dilations or contractions (scaling) and translations (shifts). They are advantageous over traditional Fourier methods when analyzing signals that contain discontinuities and sharp spikes. Wavelets tend to be irregular and asymmetric; generally, all wavelet functions w(2^k t − m) are derived from a single mother wavelet w(t).
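The paper does not state which wavelet or fusion rule is used, so the following sketch makes common assumptions: a single-level Haar transform, with the two images' approximation subbands averaged and the detail subbands fused by maximum absolute value. All names are ours.

```python
import numpy as np

def haar_fwd(x):
    """One level of the 2-D Haar DWT (rows then columns) of an
    even-sized grayscale image; returns (LL, LH, HL, HH) subbands."""
    lo = (x[:, 0::2] + x[:, 1::2]) / 2            # row low-pass
    hi = (x[:, 0::2] - x[:, 1::2]) / 2            # row high-pass
    ll = (lo[0::2] + lo[1::2]) / 2
    lh = (lo[0::2] - lo[1::2]) / 2
    hl = (hi[0::2] + hi[1::2]) / 2
    hh = (hi[0::2] - hi[1::2]) / 2
    return ll, lh, hl, hh

def haar_inv(ll, lh, hl, hh):
    """Exact inverse of haar_fwd."""
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    lo[0::2], lo[1::2] = ll + lh, ll - lh
    hi = np.empty_like(lo)
    hi[0::2], hi[1::2] = hl + hh, hl - hh
    x = np.empty((lo.shape[0], lo.shape[1] * 2))
    x[:, 0::2], x[:, 1::2] = lo + hi, lo - hi
    return x

def dwt_fuse(a, b):
    """Fuse two grayscale images: average the approximation (LL)
    subbands, keep the larger-magnitude detail coefficients."""
    A, B = haar_fwd(a), haar_fwd(b)
    ll = (A[0] + B[0]) / 2
    details = [np.where(np.abs(x) >= np.abs(y), x, y)
               for x, y in zip(A[1:], B[1:])]
    return haar_inv(ll, *details)
```

The max-abs rule on detail subbands keeps the sharpest edges from either input, which is why DWT fusion preserves edge sharpness better than either equalization technique alone.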

IV. RESULTS AND DISCUSSIONS

Based on the approach mentioned above, various underwater images are captured and analyzed. MSE (Mean Square Error), RMSE (Root Mean Square Error) and PSNR (Peak Signal to Noise Ratio) are calculated for the images involved in the process. The images involved include input image, white balanced image, histogram equalization image, adaptive histogram equalization image, fusion image. Their corresponding PSNR, MSE and RMSE values show that the fusion output obtained using discrete wavelet transform has reduced RMSE and increased PSNR values compared to the images enhanced using histogram and adaptive histogram equalization methods.
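The three metrics are standard; for 8-bit images they can be computed as follows (illustrative code, with PSNR referenced to a peak value of 255):

```python
import numpy as np

def mse(a, b):
    """Mean square error between two images."""
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def rmse(a, b):
    """Root mean square error."""
    return np.sqrt(mse(a, b))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10 * np.log10(peak ** 2 / m)
```

Since PSNR is inversely related to MSE, the lower RMSE of the fusion output in Table I necessarily corresponds to its higher PSNR.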



Fig. A: Input image; Fig. B: White-balanced image; Fig. C: AHE image; Fig. D: Fusion image

Table I
Comparison of results of different image processes

Process                          | Mean Square Error | Root Mean Square Error | Peak Signal to Noise Ratio
---------------------------------|-------------------|------------------------|---------------------------
Histogram equalization           | 1.1201e+04        | 105.8337               |  7.6383
Adaptive histogram equalization  | 1.0535e+04        | 102.6396               |  7.9045
Discrete Wavelet Transform       | 281.7371          | 16.7850                | 23.6324

From the table above, we find that the image resulting from the Discrete Wavelet Transform fusion has a much higher PSNR value (and lower RMSE) than the other methods, indicating very high quality.

V. CONCLUSION

In this paper, we have presented an alternative approach to enhance underwater videos and images. The image captured by a conventional camera can be used directly as the input, which makes the approach cost-effective. The approach provides high accuracy and can recover important faded features and edges. From our results, it can be inferred that the fusion process yields a high PSNR value, indicating good image quality.

REFERENCES

[1] M. D. Kocak, F. R. Dalgleish, M. F. Caimi, and Y. Y. Schechner, “A focus on recent developments and trends in underwater imaging,” Marine Technol. Soc. J., vol. 42, no. 1, pp. 52–67, 2008.

[2] G. L. Foresti, “Visual inspection of sea bottom structures by an autonomous underwater vehicle,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 31, no. 5, pp. 691–705, Oct. 2001.

[3] A. Ortiz, M. Simó, and G. Oliver, “A vision system for an underwater cable tracker,” Mach. Vis. Appl., vol. 13, pp. 129–140, Jul. 2002.

[4] A. Olmos and E. Trucco, “Detecting man-made objects in unconstrained subsea videos,” in Proc. BMVC, Sep. 2002, pp. 1–10.

[5] B. A. Levedahl and L. Silverberg, “Control of underwater vehicles in full unsteady flow,” IEEE J. Ocean. Eng., vol. 34, no. 4, pp. 656–668, Oct. 2009.

[6] C. H. Mazel, “In situ measurement of reflectance and fluorescence spectra to support hyperspectral remote sensing and marine biology research,” in Proc. IEEE OCEANS, Sep. 2006, pp. 1–4.

[7] Y. Kahanov and J. G. Royal, “Analysis of hull remains of the Dor D Vessel, Tantura Lagoon, Israel,” Int. J. Nautical Archeol., vol. 30, pp. 257–265, Oct. 2001.

[8] R. Schettini and S. Corchs, “Underwater image processing: state of the art of restoration and image enhancement methods,” EURASIP J. Adv. Signal Process., vol. 2010, Dec. 2010, Art. no. 746052.

[9] S. G. Narasimhan and S. K. Nayar, “Contrast restoration of weather degraded images,” IEEE Trans. Pattern Anal. Mach. Learn., vol. 25, no. 6, pp. 713–724, Jun. 2003.

[10] D.-M. He and G. G. L. Seet, “Divergent-beam LiDAR imaging in turbid water,” Opt. Lasers Eng., vol. 41, pp. 217–231, Jan. 2004.

[11] Y. Y. Schechner and Y. Averbuch, “Regularized image recovery in scattering media,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 9, pp. 1655–1660, Sep. 2007.

[12] B. L. McGlamery, “A computer model for underwater camera systems,” Proc. SPIE, vol. 208, pp. 221–231, Oct. 1979.

[13] J. S. Jaffe, “Computer modeling and the design of optimal underwater imaging systems,” IEEE J. Ocean. Eng., vol. 15, no. 2, pp. 101–111, Apr. 1990.

[14] H. Koschmieder, “Theorie der horizontalen sichtweite,” Beitrage Phys. Freien Atmos., vol. 12, pp. 171–181, 1924.

[15] M. Levoy, B. Chen, V. Vaish, M. Horowitz, I. McDowall, and M. Bolas, “Synthetic aperture confocal imaging,” in Proc. ACM SIGGRAPH, Aug. 2004, pp. 825–834.

[16] S. K. Nayar and S. G. Narasimhan, “Vision in bad weather,” in Proc. IEEE ICCV, Sep. 1999, pp. 820–827.

[17] J. Kopf et al., “Deep photo: Model-based photograph enhancement and viewing,” ACM Trans. Graph., vol. 27, Dec. 2008, Art. no. 116.

[18] R. Fattal, “Single image dehazing,” in Proc. ACM SIGGRAPH, Aug. 2008, Art. no. 72.

[19] K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” in Proc. IEEE CVPR, Jun. 2009, pp. 1956–1963.

[20] J.-P. Tarel and N. Hautiere, “Fast visibility restoration from a single color or gray level image,” in Proc. IEEE ICCV, Sep. 2009, pp. 2201–2208.

[21] J.-P. Tarel, N. Hautiere, L. Caraffa, A. Cord, H. Halmaoui, and D. Gruyer, “Vision enhancement in homogeneous and heterogeneous fog,” IEEE Intell. Transp. Syst. Mag., vol. 4, no. 2, pp. 6–20, Aug. 2012.

[22] L. Kratz and K. Nishino, “Factorizing scene albedo and depth from a single foggy image,” in Proc. IEEE ICCV, Sep. 2009, pp. 1701–1708.

[23] C. O. Ancuti and C. Ancuti, “Single image dehazing by multi-scale fusion,” IEEE Trans. Image Process., vol. 22, no. 8, pp. 3271–3282, Aug. 2013.

[24] H. Horvath, “On the applicability of the Koschmieder visibility formula,” Atmos. Environ., vol. 5, no. 3, pp. 177–184, Mar. 1971.

[25] C. Ancuti, C. O. Ancuti, C. De Vleeschouwer, and A. Bovik, “Night-time dehazing by fusion,” in Proc. IEEE ICIP, Sep. 2016, pp. 2256–2260.

[26] K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 12, pp. 2341–2353, Dec. 2011.

[27] J. Y. Chiang and Y.-C. Chen, “Underwater image enhancement by wavelength compensation and dehazing,” IEEE Trans. Image Process., vol. 21, no. 4, pp. 1756–1769, Apr. 2012.

[28] P. Drews, Jr., E. Nascimento, F. Moraes, S. Botelho, M. Campos, and R. Grande-Brazil, “Transmission estimation in underwater single images,” in Proc. IEEE ICCV, Dec. 2013, pp. 825–830.
