Received June 9, 2020, accepted June 24, 2020, date of publication June 26, 2020, date of current version July 7, 2020.
Digital Object Identifier 10.1109/ACCESS.2020.3005249
Low-Light Image Enhancement Using Volume-Based Subspace Analysis

WONJUN KIM1, (Member, IEEE), RYONG LEE2, MINWOO PARK2, SANG-HWAN LEE2, AND MYUNG-SEOK CHOI2

1 Department of Electrical and Electronics Engineering, Konkuk University, Seoul 05029, South Korea
2 Research Data Sharing Center, Korea Institute of Science and Technology Information, Daejeon 34141, South Korea
Corresponding author: Ryong Lee (ryonglee@kisti.re.kr)

This work was supported by a Research and Development project, ‘‘Enabling a System for Sharing and Disseminating Research Data,’’ of the Korea Institute of Science and Technology Information (KISTI), South Korea, under Grant K-20-L01-C04-S01.
ABSTRACT Low-light image enhancement is a key technique for overcoming the quality degradation of photos taken under challenging illumination conditions. Even though significant progress has been made in enhancing poor visibility, the intrinsic noise amplified in low-light areas still remains an obstacle to further improvement in visual quality. In this paper, a novel and simple method for low-light image enhancement is proposed. Specifically, a subspace that can separately reveal illumination and noise is constructed from a group of similar image patches, the so-called volume, at each pixel position. Based on the principal energy analysis of this volume-based subspace, the illumination component is accurately inferred from a given image while the unnecessary noise is simultaneously suppressed. This clearly unveils the underlying structure in low-light areas without loss of details. Experimental results show the efficiency and robustness of the proposed method for low-light image enhancement compared to state-of-the-art methods.

INDEX TERMS Low-light image enhancement, quality degradation, subspace, volume-based principal energy analysis, illumination component.
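To make the abstract's pipeline concrete, the sketch below illustrates the general idea of grouping patches into a per-pixel volume and retaining only the principal-energy components as a noise-suppressed illumination estimate. It is a minimal illustration under stated assumptions, not the authors' implementation: the grouping rule (spatial neighbours instead of a similarity search), the patch size, stride, and the number of retained components are all illustrative choices.

import numpy as np

def estimate_illumination(gray, patch=8, stride=4, n_keep=1):
    # Illustrative sketch of volume-based principal energy analysis:
    # stack each reference patch with its spatial neighbours into a "volume",
    # decompose it with an SVD, and reconstruct from only the top n_keep
    # components so that low-energy (noisy) directions are suppressed.
    H, W = gray.shape
    illum = np.zeros((H, W), dtype=np.float64)
    weight = np.zeros((H, W), dtype=np.float64)
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            vol = []
            for dy in (-patch, 0, patch):
                for dx in (-patch, 0, patch):
                    yy = int(np.clip(y + dy, 0, H - patch))
                    xx = int(np.clip(x + dx, 0, W - patch))
                    vol.append(gray[yy:yy + patch, xx:xx + patch].ravel())
            V = np.stack(vol).astype(np.float64)          # (9, patch*patch)
            mean = V.mean(axis=0, keepdims=True)
            U, S, Vt = np.linalg.svd(V - mean, full_matrices=False)
            recon = (U[:, :n_keep] * S[:n_keep]) @ Vt[:n_keep] + mean
            center = recon[len(vol) // 2].reshape(patch, patch)
            illum[y:y + patch, x:x + patch] += center     # aggregate overlaps
            weight[y:y + patch, x:x + patch] += 1.0
    return illum / np.maximum(weight, 1e-8)

In a Retinex-style pipeline, the illumination map returned by this sketch could then be brightness-adjusted and recombined with the reflectance layer; that step is omitted here, as it is not what this front matter describes in detail.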
The associate editor coordinating the review of this manuscript and approving it for publication was Guitao Cao.

I. INTRODUCTION
With the rapid development of mobile devices equipped with cameras, especially smartphones, a vast number of photos are taken and shared every day. However, due to complicated lighting conditions in real-world environments, e.g., uneven illumination, backlight, and cast shadows, acquired images are often underexposed and poorly visible, severely degrading the user's viewing experience. Furthermore, loss of details and color distortions in such degraded images lead to a significant performance drop in downstream computer vision applications such as object detection, tracking, and segmentation, which demand high-quality inputs for precise results.

To tackle this problem, diverse methods for low-light image enhancement have been introduced over the last decades, which can be mainly categorized into two groups: the statistical information-based approach and the decomposition-based approach. The former stretches the dynamic range of the low-light image by modifying the distribution of intensity values, whereas the latter utilizes the physical model of light reflection to adjust illumination components independently of scene structures. In the early stage, statistical information-based methods, such as histogram equalization (HE) and its variants [1], [2], were widely adopted due to their simplicity and effectiveness in enhancing low contrast. However, images restored via these methods are likely to be partially exaggerated or weakly enhanced under uneven lighting conditions. On the other hand, inspired by the Retinex theory [3], which assumes that a given scene can be regarded as the product of illumination and reflectance, decomposition-based methods have been actively studied. Most algorithms belonging to this category attempted to separate lighting components from a given scene and subsequently combine the adjusted illumination back with the reflectance layer to generate enhanced results. To do this, various optimization techniques, e.g., the variational framework [4], direction minimization [5], etc., have been adopted to accurately estimate the illumination component by minimizing the difference between target and estimated results with