
International Journal of Advances in Applied Sciences (IJAAS)

Vol. 7, No. 1, March 2018, pp. 38-45, ISSN: 2252-8814, DOI: 10.11591/ijaas.v7.i1.pp38-45


A Fusion Based Visibility Enhancement of Single Underwater Hazy Image

Samarth Borkar, Sanjiv V. Bonde

Department of Electronics and Telecommunication Engineering, Shri. Guru Gobind Singhji Institute of Engineering and Technology, SRTMUN University, India

Article Info

Article history:

Received May 21, 2017; Revised Jan 23, 2017; Accepted Feb 11, 2018

Keyword:

Color constancy, Contrast enhancement, Image dehazing, Image fusion, Underwater image restoration

ABSTRACT

Underwater images are prone to contrast loss, limited visibility, and undesirable color cast. For underwater computer vision and pattern recognition algorithms, these images need to be pre-processed. We address this problem by proposing a fully automated underwater image dehazing method based on multimodal DWT fusion. The inputs to the combinational image fusion scheme are derived from contrast enhancement in HSV color space using Singular Value Decomposition (SVD) and the Discrete Wavelet Transform (DWT), and from color constancy using the Shades of Gray algorithm. To appraise the work, visual and quantitative analyses are performed. The restored images demonstrate improved contrast and effective enhancement of overall image quality and visibility. The proposed algorithm performs on par with recent underwater dehazing techniques.

Copyright © 2018 Institute of Advanced Engineering and Science. All rights reserved.

Corresponding Author:

Samarth Borkar, Computer Vision and Pattern Recognition lab. Department of Electronics and Telecommunication Engineering, Shri. Guru Gobind Singhji Institute of Engineering and Technology, Vishnupuri - Nanded, Maharashtra, 431606, India. Email: borkarsamarth@sggs.ac.in

1. INTRODUCTION

Underwater images are inherently dark and are also plagued by small suspended particles and marine snow in the aqueous medium. To increase the visibility range and vision depth, artificial light is utilized. The rays of light are scattered by particles in the underwater medium, and together with color attenuation this results in problems such as contrast reduction, blurring, and color loss, driving the images beyond recognition. Underwater applications such as observation of the oceanic floor, monitoring of fish, and study of coral reefs demand dehazing of images so as to recover color, enhance visibility, and increase the visual details present in the degraded image for computer vision and object recognition. In the absence of any dehazing technique, a standard enhancement algorithm may fail to produce desirable results. Basically, dehazing is a process to restore the contrast of an image. Traditional approaches like histogram equalization, histogram specification, and various other contrast enhancement techniques do not deliver the desired output images. Over the last few years, diverse techniques have been proposed to restore underwater hazy images. The dehazing approaches can be grouped into software based and hardware based. Hardware based techniques refer to the utilization of polarization filters [1], range gated imaging [2], and multiple underwater images [3], whereas software based techniques are further grouped into physical model based and non-physical model based. In physical model based underwater image processing, the parameters of the model are estimated and then restoration is achieved. Estimating the depth of the underwater haze is the major hindrance in such models.

Recently, physical model-based techniques have gained wide attention. A pioneering work in this direction by He et al., based on the dark channel prior (DCP), which uses the minimum pixel intensity of the three channels in local patches to obtain the depth of haze, changed the future course of research in image dehazing [4]. Carlevaris-Bianco et al. proposed dehazing based on the attenuation difference between the RGB color channels [5]. Chiang and Chen presented wavelength compensation and dehazing based on a modified DCP [6]. Following this method, underwater image restoration using a joint trilateral filter [7], restoration based on the least attenuating color channel [8], and contrast enhancement for turbid underwater images [9] were presented. Still, underwater dehazing built on a physical foundation from a single original image remains a challenging problem owing to its statistical prior assumptions. Methods based on non-physical models have been presented by various researchers. Integrated color model based underwater image enhancement is presented in [10], and an unsupervised color correction technique based on histogram stretching and color balance was proposed in [11]. Bazeille et al. proposed a series of filters to enhance contrast, adjust colors, and suppress noise [12]. Ancuti et al. presented inspiring work in the field of underwater image enhancement using Laplacian pyramid decomposition, thereby increasing contrast [13]. In this method, the two inputs to the fusion framework are derived from the original underwater image using a white balancing technique and a color correction technique applied to the single hazy underwater image. The weights for the fusion process include Laplacian contrast, local contrast, saliency, and degree of exposedness. A method based on Retinex was proposed by Fu et al. to handle blurring and underexposure of underwater images [14]. Sheng et al. restored blurred and defocused underwater images using the biorthogonal wavelet transform [15]. Amidst all these underwater dehazing algorithms, it is arduous to pick the best available algorithm on account of the absence of ground truth images; there is an inherent need for the creation of a standard database in underwater imaging science.

Our strategy is based on a multi-resolution DWT fusion framework. The two inputs to the fusion are derived using an effective contrast enhancement algorithm and a robust color constancy algorithm. The fused image is then subjected to a contrast stretching operation to improve the global contrast and the visibility of dark regions. The remainder of this article is organized as follows. In Section 2, we describe the proposed underwater dehazing algorithm in detail regarding the choice of color constancy algorithm, discrete wavelet transform (DWT) fusion, and color enhancement. In Section 3, we report the outcomes of the qualitative and quantitative analysis, and lastly, conclusions are outlined in Section 4.

2. PROPOSED RESEARCH METHOD

In this paper, we propose the enhancement of single underwater images based on multiresolution DWT fusion. We generate the two inputs for the fusion framework from the original input image using a contrast enhancement algorithm and a color constancy algorithm. The restoration of the hazy image is strongly dependent on the selection of these inputs; the rationale behind their selection is explained in the following subsections. The block diagram of the proposed algorithm is shown in Figure 1.

Figure 1. Schematic Diagram of Proposed Algorithm for Single Underwater Image Enhancement

2.1. Obtaining Contrast Enhanced Image

Underwater images suffer from low contrast due to diminished illumination, and traditional contrast enhancement techniques exhibit severe limitations in underwater imaging. So in this work we adopted and modified the SVD and DWT based technique presented in [16]. We target the illumination component of the underwater image and apply contrast enhancement to the DWT LL sub-band of the V plane using the combined SVD and DWT method to solve the problem of low contrast. The illumination information is exhibited in the singular value matrix of the SVD; we modify the coefficients of the singular value matrix to obtain the desired enhancement in the illumination component, while the other SVD factors are not altered. Also, in the underwater scenario, the formation of haze is a uniform intensity function, and the haze affected areas have comparatively more brightness. This makes the wavelet, or scale space, representation provided by the DWT a useful tool to model hazy regions. The approximation coefficients (LL sub-band) correlate with localized haze information, whereas the detail coefficients (LH, HL, and HH) embed edge information, so applying the enhancement operation only on the LL sub-band results in an enhanced image with sharp edges. In our presented work, we apply the contrast enhancement technique using SVD and DWT on the V channel of the HSV color space, thereby not affecting the color composition of the original input hazy image. The V channel is processed by histogram equalization to obtain $V_{HE}$. The two images $V$ and $V_{HE}$ are decomposed by the DWT into four sub-bands each. The singular value matrix correction coefficient is obtained using the equation:

$$\xi = \frac{\max(\Sigma_{V_{HE}})}{\max(\Sigma_{V})} \qquad (1)$$

where $\Sigma_{V}$ and $\Sigma_{V_{HE}}$ are the singular value matrices of the LL sub-band of the input V channel and of its histogram equalized version, respectively. The modified LL band is given by:

$$\bar{\Sigma}_{V} = \xi\,\Sigma_{V} \qquad (2)$$

$$\overline{LL}_{V} = U_{V}\,\bar{\Sigma}_{V}\,V_{V}^{T} \qquad (3)$$

By applying the IDWT to the $\overline{LL}_{V}$, $LH_{V}$, $HL_{V}$, and $HH_{V}$ sub-bands, the new equalized V channel image is generated:

$$\bar{V} = \mathrm{IDWT}(\overline{LL}_{V}, LH_{V}, HL_{V}, HH_{V}) \qquad (4)$$

The $\bar{V}$ component is then combined with the H and S components in HSV color space to obtain the contrast enhanced RGB image.
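The contrast enhancement step of Eqs. (1)-(4) can be prototyped in a few lines. The following Python sketch assumes an 8-bit BGR input, a single-level Haar DWT via PyWavelets, and OpenCV for color conversion and histogram equalization; the function name and these library choices are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np
import pywt

def enhance_v_channel(bgr):
    """Sketch of the SVD+DWT contrast step of Section 2.1 (Eqs. 1-4).

    Assumes an 8-bit BGR image, a single-level Haar DWT, and that only the
    singular values of the LL sub-band of V are rescaled.
    """
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    v_he = cv2.equalizeHist(v)                       # histogram-equalized V

    # Single-level DWT of the original and equalized V channels
    LL, (LH, HL, HH) = pywt.dwt2(v.astype(np.float64), 'haar')
    LLe, _ = pywt.dwt2(v_he.astype(np.float64), 'haar')

    # SVD of both LL sub-bands; only the singular values are modified
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)
    Se = np.linalg.svd(LLe, compute_uv=False)

    xi = Se.max() / S.max()                          # Eq. (1)
    LL_bar = U @ np.diag(xi * S) @ Vt                # Eqs. (2)-(3)

    # Detail sub-bands of the original V are kept unchanged, Eq. (4)
    v_bar = pywt.idwt2((LL_bar, (LH, HL, HH)), 'haar')
    v_bar = np.clip(v_bar, 0, 255).astype(np.uint8)
    v_bar = v_bar[:v.shape[0], :v.shape[1]]          # trim odd-size padding

    return cv2.cvtColor(cv2.merge([h, s, v_bar]), cv2.COLOR_HSV2BGR)
```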

2.2. Obtaining White Balanced Image

Apart from diminished contrast, underwater images are prone to color loss attributed to wavelength-dependent attenuation [17]. Also, due to the artificial source of light required at appreciable depth, unwanted color casts arise. Turbidity is less of a problem at greater depths than in shallow waters, owing to the absence of marine snow and of appreciable underwater flow; a white balancing technique is nevertheless a necessity. A large number of white balancing techniques are available in the literature [18]. We tried the white patch algorithm, the MaxRGB algorithm, the Gray-Edge algorithm [19], and the Shades of Gray algorithm [20], and obtained the best results with the Shades of Gray algorithm. As noted in [13], the probable reason for the failure of the white patch algorithm is limited specular reflection, and the failure of the Gray-Edge algorithm is on account of low contrast and diminished edges as compared to outdoor images. Finlayson and Trezzi presented the Shades of Gray algorithm based on the assumption that the scene average is some shade of gray. It calculates a weighted average of the pixel intensities, assigning higher weight to pixels with higher intensity, based on the Minkowski norm p, and is given as:

$$\left(\frac{\int f^{p}(x)\,dx}{\int dx}\right)^{1/p} = k\,e \qquad (5)$$

where $f(x)$ is the input image, $x$ is the spatial coordinate in the image, $e$ is the estimated illuminant color, and $k$ is a constant. The Shades of Gray algorithm is a trade-off between the Gray World ($p = 1$) and MaxRGB ($p = \infty$) algorithms.
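As a concrete illustration of Eq. (5), a compact Python sketch of the Shades of Gray correction is given below. It assumes a floating-point RGB image scaled to [0, 1]; the default p = 6 is a commonly used value, not one prescribed in the paper.

```python
import numpy as np

def shades_of_gray(img, p=6):
    """Shades of Gray white balancing [20]: estimate the illuminant with a
    per-channel Minkowski p-norm mean (Eq. 5) and divide it out.

    p = 1 reduces to Gray World and a very large p approaches MaxRGB.
    Assumes a float RGB image in [0, 1]; p = 6 is an illustrative choice.
    """
    img = img.astype(np.float64)
    # Per-channel Minkowski mean over all pixels
    illum = np.power(np.mean(np.power(img, p), axis=(0, 1)), 1.0 / p)
    # Normalize so that a gray illuminant leaves the image unchanged
    illum = illum / illum.mean()
    return np.clip(img / illum, 0.0, 1.0)
```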

2.3. Multimodal Image Fusion Using DWT

The wavelet based fusion techniques are widely used in medical image restoration [21], satellite imaging [22], outdoor scene imaging [23], etc. The transform coefficients of the DWT are representative of the image pixels. The DWT transforms the input series $x_0, x_1, x_2, \ldots, x_n$ into a series of high pass wavelet coefficients and a series of low pass coefficients, each of length $n/2$, given by (6) and (7):

$$H_i = \sum_{k=0}^{K-1} x_{2i-k}\, g(k) \qquad (6)$$

$$L_i = \sum_{k=0}^{K-1} x_{2i-k}\, h(k) \qquad (7)$$

where $g(k)$ and $h(k)$ are called the wavelet filters, $i = 0, \ldots, (n/2 - 1)$, and $K$ is the length of the filter [24][29]. As discussed earlier, there are two kinds of components in the DWT: the approximation and the detail sub-bands. In our application, we propose to fuse the contrast enhanced and the white balanced image in order to obtain a dehazed underwater single image. For the approximation sub-bands we use the mean of the corresponding coefficients, and for retaining edges we use the maximum coefficients of the two (contrast enhanced and white balanced) detail sub-bands. We decompose the first and second input into the approximation sub-bands $A_1^a$ and $A_2^a$, where the subscripts 1 and 2 correspond to the first and second input and $a$ signifies the approximation sub-band. Similarly, we decompose the two input images into $D_1^d$ and $D_2^d$, where $d$ signifies the corresponding detail sub-bands (LH, HL, and HH). The algorithm for DWT image fusion is as follows:

Step 1: Compute the approximation coefficients of the fused image, $F^a$, using the fusion rule:

$$F^a = \tfrac{1}{2}\left(A_1^a + A_2^a\right) \qquad (8)$$

Step 2: Compute the detail coefficients of the fused image, $F^d$, using the fusion rule:

$$F^d = \max\left(D_1^d, D_2^d\right) \qquad (9)$$

Step 3: Follow Step 1 and Step 2 for the desired level of resolution.

Step 4: Reconstruct the enhanced image using the inverse discrete wavelet transform:

$$F = \mathrm{IDWT}\left(F^a, F^d\right) \qquad (10)$$

This process preserves the dominant features without introducing any artifacts.
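A minimal per-channel realization of Steps 1-4 using PyWavelets is sketched below; the Haar wavelet and a two-level decomposition are illustrative assumptions, and the plain element-wise maximum is used for the detail coefficients as stated in Eq. (9).

```python
import numpy as np
import pywt

def dwt_fuse(chan1, chan2, wavelet='haar', levels=2):
    """Fuse one channel of the contrast enhanced and white balanced images
    (Section 2.3): average the approximation coefficients (Eq. 8), take the
    element-wise maximum of the detail coefficients (Eq. 9), and invert the
    transform (Eq. 10). Wavelet and level count are illustrative choices.
    """
    c1 = pywt.wavedec2(chan1.astype(np.float64), wavelet, level=levels)
    c2 = pywt.wavedec2(chan2.astype(np.float64), wavelet, level=levels)

    fused = [0.5 * (c1[0] + c2[0])]                        # Eq. (8)
    for d1, d2 in zip(c1[1:], c2[1:]):                     # (LH, HL, HH) tuples
        fused.append(tuple(np.maximum(a, b) for a, b in zip(d1, d2)))  # Eq. (9)
    return pywt.waverec2(fused, wavelet)                   # Eq. (10)
```

For an RGB image, the routine would be applied to each color channel separately; selecting detail coefficients by magnitude rather than signed value is a common alternative to the plain maximum.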

2.4. Post-Processing Using Contrast Stretching

The image after fusion needs to be post-processed using contrast stretching so as to increase the dynamic range of the pixels and enhance the contrast. A simplified contrast stretching algorithm [11][30] uses a linear scaling function of the normalized pixel value. The restored image $P_o(i,j)$ is given as:

$$P_o(i,j) = \left(P_f(i,j) - i_{min}\right)\frac{d_{max} - d_{min}}{i_{max} - i_{min}} + d_{min} \qquad (11)$$

where $P_o(i,j)$ is the normalized pixel intensity after contrast stretching, $P_f(i,j)$ is the pixel intensity of the fused image, $i_{min}$ is the lowest intensity in the existing image, $i_{max}$ is the highest intensity in the existing image, $d_{min}$ is the minimum pixel intensity in the desired image, and $d_{max}$ is the maximum pixel intensity in the desired image.
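Eq. (11) amounts to a global linear remapping of intensities. A short sketch, assuming a floating-point fused image and a desired output range of [0, 1], is:

```python
import numpy as np

def contrast_stretch(fused, d_min=0.0, d_max=1.0):
    """Linear contrast stretching of Eq. (11).

    Assumes a floating-point image; the desired output range [d_min, d_max]
    defaults to [0, 1] as an illustrative choice.
    """
    i_min, i_max = float(fused.min()), float(fused.max())
    if i_max == i_min:                    # flat image: nothing to stretch
        return np.full_like(fused, d_min, dtype=np.float64)
    return (fused - i_min) * (d_max - d_min) / (i_max - i_min) + d_min
```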

3. EXPERIMENTAL RESULTS AND ANALYSIS

As we do not have a ground truth or separate reference image, the choice of image assessment measures for quantitative as well as qualitative analysis becomes a difficult task. Figure 2 shows the histograms of images before and after restoration; it is seen that our proposed method is able to enhance the contrast range. We have compared our proposed method with contemporary methods such as Ancuti et al. [13], Bazeille et al. [12], Carlevaris et al. [5], Chiang and Chen [6], Galdran et al. [8], and Serikawa and Lu [25]. Of these methods, [5-6] and [25] are based on the image restoration model and employ the modified dark channel prior proposed in [4], while [12-13] are representative of enhancement methods. Our work is based on a non-physical model, but we have compared it with both restoration and enhancement techniques.

3.1. Qualitative Evaluation

As shown in Figures 2-5, we have compared our results with the standard methods. It can be seen that our method is able to restore visibility and remove the color cast, as are the methods of [12], [13], and [8], whereas in the results of [5], [6], and [25] the color and details are not improved. Ancuti et al.'s method removes the haze completely, exhibiting optimum visibility but rendering an oversaturated appearance, whereas in Bazeille et al.'s method the loss of color fidelity is prominent. The method of Galdran et al. is able to recover and restore the natural appearance of the underwater scene, but with less brightness as compared to our work. The method of Serikawa and Lu also fails to produce enhanced visibility, as is the case with the Chiang and Chen method.

Figure 2. Visual comparison on image Fish with size 512x384: (a) input image; (b) by [13]; (c) by [12]; (d) by [5]; (e) by [6]; (f) by [25]; (g) by [8]; and (h) with proposed method

Figure 3. Visual comparison on image Coral with size 512x384: (a) input image; (b) by [13]; (c) by [12]; (d) by [5]; (e) by [6]; (f) by [25]; (g) by [8]; and (h) with proposed method

Figure 4. Visual comparison on image Shipwreck with size 512x384: (a) input image; (b) by [13]; (c) by [12]; (d) by [5]; (e) by [6]; (f) by [25]; (g) by [8]; and (h) with proposed method

Figure 5. Visual comparison on image Diver with size 512x384: (a) input image; (b) by [13]; (c) by [12]; (d) by [5]; (e) by [6]; (f) by [25]; (g) by [8]; and (h) with proposed method

3.2. Quantitative Evaluation

For the quantitative analysis, we referred to earlier works to select the assessment parameters. We have used quality metrics such as the measure of the ability to restore edges and the gradient mean ratio of the edges [26], image entropy [21], the structural similarity index [27], and PSNR [28]. Table 1 shows the visibility recovery in terms of the measure of the ability to restore edges, 'e', and Table 2 shows the gradient mean ratio of the edges, 'r'.

Table 1: Measure of Ability to Restore Edges 'e'

Methods                  Fish    Coral   Shipwreck  Diver
Ancuti et al. [13]       1.723   -0.05   0.462      10.34
Bazeille et al. [12]     0.928   -0.11   0.228      3.728
Carlevaris et al. [5]    0.147   -0.13   0.326      0.486
Chiang and Chen [6]      0.275   0.025   0.070      0.839
Galdran et al. [8]       1.276   0.163   0.458      6.648
Serikawa and Lu [25]     0.621   -0.01   -0.02      1.414
Proposed Method          1.108   0.100   0.457      5.483

Table 2: Measure of Gradient Mean Ratio of the Edges 'r'

Methods                  Fish    Coral   Shipwreck  Diver
Ancuti et al. [13]       4.624   1.602   2.972      4.103
Bazeille et al. [12]     5.939   3.575   5.196      5.264
Carlevaris et al. [5]    1.124   2.872   2.568      1.310
Chiang and Chen [6]      1.405   1.385   1.992      1.352
Galdran et al. [8]       2.202   1.152   2.343      2.081
Serikawa and Lu [25]     1.861   1.662   2.033      1.715
Proposed Method          2.419   1.499   1.401      1.942

As can be seen in Table 3, the proposed method surpasses the other methods in terms of entropy. Furthermore, our method performs on par with Carlevaris et al., Chiang and Chen, and Serikawa and Lu in terms of the structural similarity index (SSIM). The peak signal-to-noise ratio, although not the best in the list, still exhibits a comparatively high value. The results overall demonstrate that our method works efficiently to remove the underwater haze.

Table 3: Average values (over the images of Figures 2 to 5) of entropy, SSIM, and PSNR

Methods                  Entropy  SSIM   PSNR
Ancuti et al. [13]       7.65     0.50   21.29
Bazeille et al. [12]     7.50     0.19   18.72
Carlevaris et al. [5]    7.41     0.89   29.29
Chiang and Chen [6]      7.63     0.86   27.93
Galdran et al. [8]       7.57     0.29   19.40
Serikawa and Lu [25]     7.51     0.83   25.87
Proposed Method          7.75     0.82   24.33
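The entropy, SSIM, and PSNR figures in Table 3 rely on standard definitions. The following is a hedged sketch of how such values can be computed with scikit-image, assuming float RGB images in [0, 1] and using the input hazy image as the SSIM/PSNR reference (the paper provides no ground truth; the edge measures 'e' and 'r' of [26] are not reproduced here).

```python
from skimage.color import rgb2gray
from skimage.measure import shannon_entropy
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def quality_report(hazy, restored):
    """Entropy of the restored image, plus SSIM and PSNR against the hazy
    input. Assumes float RGB images in [0, 1]; using the hazy input as the
    reference is an assumption, since no ground truth exists.
    """
    gray_hazy, gray_rest = rgb2gray(hazy), rgb2gray(restored)
    return {
        'entropy': shannon_entropy(gray_rest),
        'ssim': structural_similarity(gray_hazy, gray_rest, data_range=1.0),
        'psnr': peak_signal_noise_ratio(hazy, restored, data_range=1.0),
    }
```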

4. CONCLUSION

In this work we have implemented a simple yet effective underwater single image enhancement technique to address the problems of visibility restoration and unwanted color cast. The proposed technique has been evaluated through visual and quantitative analysis. The experimental outcomes reveal that our results are comparable to, and performance-wise on par with, other recent techniques. The proposed work greatly enhances the visual details in an image, improves the clarity from a contrast point of view, and preserves the natural colors without the loss of image information observed in some state of the art techniques. Also, using the proposed method we are able to overcome limitations encountered in patch based underwater image dehazing, such as the computation of the atmospheric light value, difficulty with large objects similar in color to the haze, and the criteria for selecting edge preserving smoothing operators.

ACKNOWLEDGEMENTS

The authors would like to thank A. Galdran for setting up an online repository, which helped us improve the evaluation and comparison, and Nicholas Carlevaris-Bianco for supporting us by providing algorithm functions.

REFERENCES

[1] Y. Y. Schechner and Y. Averbuch, "Regularized image recovery in scattering media," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 9, pp. 1655-1660, 2007.
[2] D. M. He and G. L. Seet, "Divergent beam lidar imaging in turbid water," Optics and Lasers in Engineering, 2004.
[3] S. Narasimhan and S. Nayar, "Contrast restoration of weather degraded images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 6, pp. 713-724, 2003.
[4] K. He, et al., "Single image haze removal using dark channel prior," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2341-2353, 2011.
[5] N. Carlevaris-Bianco, et al., "Initial results in underwater single image dehazing," in IEEE International Conference on Oceans, pp. 1-8, 2010.
[6] J. Y. Chiang and Y. C. Chen, "Underwater image enhancement by wavelength compensation and dehazing," IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 1756-1769, 2012.
[7] H. Lu, et al., "Underwater image enhancement using guided trigonometric bilateral filter and fast automatic color correction," in IEEE International Conference on Image Processing, Melbourne, pp. 3412-3416, 2013.
[8] A. Galdran, et al., "Automatic red-channel underwater image restoration," Elsevier Journal of Visual Communication and Image Representation, vol. 26, pp. 132-145, 2015.
[9] H. Lu, et al., "Contrast enhancement for images in turbid water," Journal of the Optical Society of America A, vol. 32, no. 5, pp. 886-893, 2015.
[10] K. Iqbal, et al., "Underwater image enhancement using an integrated color model," IAENG International Journal of Computer Science, vol. 34, no. 2, pp. 239-244, 2007.
[11] K. Iqbal, et al., "Enhancing the low quality images using unsupervised color correction methods," in IEEE International Conference on Systems Man and Cybernetics (SMC), Istanbul, pp. 1703-1709, 2010.
[12] S. Bazeille, et al., "Automatic underwater image processing," in Proceedings of Caracterisation Du Milieu Marin, 2006.
[13] C. Ancuti, et al., "Enhancing underwater images and videos by fusion," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 81-88, 2012.
[14] X. Fu, et al., "A retinex based enhancing approach for single underwater image," in International Conference on Image Processing (ICIP), Paris, pp. 4572-4576, 2014.
[15] M. Sheng, Y. Pang, L. Wan, and H. Huang, "Underwater image enhancement using multi-wavelet transform and median filter," TELKOMNIKA Indonesian Journal of Electrical Engineering, vol. 12, no. 3, pp. 2306-2313, 2014.
[16] H. Demirel, et al., "Satellite image contrast enhancement using discrete wavelet transform and singular value decomposition," IEEE Geoscience and Remote Sensing Letters, vol. 7, no. 2, pp. 333-337, 2010.
[17] R. Schettini and S. Corchs, "Underwater image processing: state of the art of restoration and image enhancement methods," EURASIP Journal on Advances in Signal Processing - Special Issue on Advances in Signal Processing for Maritime Applications, vol. 14, pp. 1-14, 2010.
[18] M. Ebner, Color Constancy, Wiley, 1st edition, Hoboken, NJ, 2007.
[19] J. van de Weijer, et al., "Edge based color constancy," IEEE Transactions on Image Processing, vol. 16, no. 9, pp. 2207-2214, 2007.
[20] G. D. Finlayson and E. Trezzi, "Shades of gray and color constancy," in IS&T/SID Twelfth Color Imaging Conference: Color Science, Systems and Applications, Society for Imaging Science and Technology, pp. 37-41, 2004.
[21] Y. Yang, et al., "Medical image fusion via an effective wavelet-based approach," EURASIP Journal on Advances in Signal Processing, vol. 44, pp. 1-13, 2010.
[22] Y. Du, et al., "Haze detection and removal in high resolution satellite image with wavelet analysis," IEEE Transactions on Geoscience and Remote Sensing, vol. 40, no. 1, pp. 210-216, 2002.
[23] W. Wang, et al., "Multiscale single image dehazing based on adaptive wavelet fusion," Hindawi Mathematical Problems in Engineering, vol. 1, pp. 1-14, 2015.
[24] B. Mamatha and V. V. Kumar, "ISAR image classification with wavelets and watershed transforms," International Journal of Electrical and Computer Engineering (IJECE), vol. 6, no. 6, pp. 3087-3093, 2016.
[25] S. Serikawa and H. Lu, "Underwater image dehazing using joint trilateral filter," Elsevier Computers and Electrical Engineering, vol. 40, no. 1, pp. 41-50, 2014.
[26] N. Hautiere, et al., "Blind contrast enhancement assessment by gradient ratioing at visible edges," Image Analysis and Stereology, vol. 27, no. 2, pp. 87-95, 2011.
[27] Z. Wang, et al., "Image quality assessment: from error measurement to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 1, pp. 600-612, 2004.
[28] S. Sahu, et al., "Comparative analysis of image enhancement techniques for ultrasound liver image," International Journal of Electrical and Computer Engineering (IJECE), vol. 2, no. 6, pp. 792-797, 2016.
[29] L. Chiroen, "Medical image fusion based on discrete wavelet transform using JAVA technology," in Proceedings of the ITI 2009 International Conference on Information Technology Interfaces, 2009.
[30] J. Banerjee, et al., "Real time underwater image enhancement: An improved approach for imaging with AUV150," Sadhana, vol. 41, 2016.
