This post introduces the ECCV 2022 paper "Unsupervised Night Image Enhancement: When Layer Decomposition Meets Light-Effects Suppression" from the National University of Singapore (NUS). The paper proposes a night image enhancement algorithm that suppresses light-effects and enhances low-light regions, and the code has been open-sourced!
Link to the paper: https://arxiv.org/pdf/2207.10564.pdf
Link to the code: https://github.com/jinyeying/night-enhancement
Night image enhancement faces two problems: (1) the low-light problem and (2) the light-effects (glare/flood/glow) problem.
Fig. 1: Problems faced by nighttime images.
Existing nighttime visibility enhancement algorithms focus on enhancing low-light regions (purple arrows). This inevitably leads to over-enhancement/overexposure of bright areas, e.g., areas affected by light-effects (red arrows).
Existing nighttime defogging algorithms can suppress glow on foggy days but cannot enhance low-light regions or suppress light-effects on clear nights.
Fig. 2: Research Motivation
Can we 1. enhance light intensity in dark regions while 2. suppressing light-effects in bright regions? This would improve the visibility of nighttime images in a more comprehensive way.
Fig. 3: Our Task
02 Challenges and Key Ideas
The challenges of nighttime light-effect suppression include:
- a lack of paired training data.
- generating physically realistic night light-effect images is challenging.
Therefore, in this paper, an unsupervised nighttime image enhancement algorithm is designed with the key ideas:
- using layer decomposition.
- using unpaired training data to design a light-effects suppression network.
Fig. 4: Layer Decomposition
If we regard the night image with light-effects as a blended image, layer decomposition separates it into a light-effects layer and a background layer. Once the light-effects layer is successfully decomposed out, the light-effects in the background layer are suppressed.
Fig. 5: Unpaired Data
While paired training data is difficult to collect, unpaired light-effects data is readily available. With this data-driven approach, light-effects can be further suppressed and low-light regions enhanced.
03 Introduction to the paper
Fig. 6: Framework
The input night image is fed into the layer decomposition module, whose goal is to obtain a background layer unaffected by light-effects. The background layer is then fed into a light-effects suppression network to obtain the final output: a night image with low-light regions enhanced and overexposed regions suppressed.
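The two-stage data flow above can be sketched as follows. This is a toy illustration only: the function names are ours, and the bodies are trivial placeholders standing in for the paper's learned networks.

```python
import numpy as np

rng = np.random.default_rng(0)
night = rng.uniform(0.0, 1.0, size=(8, 8, 3))  # toy night image in [0, 1]

def layer_decomposition(img):
    # Placeholder: the learned network predicts the additive light-effects
    # layer G; here a constant "glow" stands in for illustration only.
    G = np.full_like(img, 0.2)
    background = np.clip(img - G, 0.0, 1.0)  # layer unaffected by light-effects
    return G, background

def light_effects_suppression(background):
    # Placeholder: the real network brightens low-light regions and further
    # suppresses residual light-effects; a simple gamma lift stands in here.
    return background ** 0.7

G, background = layer_decomposition(night)
enhanced = light_effects_suppression(background)
```

The point is the composition: decomposition first removes the light-effects layer, and suppression then refines the resulting background layer.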
3.1 Layer Decomposition Network
Fig. 7: Layer Decomposition Network
The network takes (a) a nighttime light-effects image as input and outputs three separate layers: (b) the light-effects layer G, (c) the shading layer L, and (d) the reflectance layer R.
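Assuming the common intrinsic-decomposition-plus-glow model (our reading of the three layers, not a formula quoted from the post), the observed night image is composed as I = R ⊙ L + G, so subtracting a correctly estimated G exactly recovers the light-effects-free background R ⊙ L. A minimal numerical check with toy arrays:

```python
import numpy as np

rng = np.random.default_rng(1)
H, W = 4, 4
R = rng.uniform(0.2, 1.0, size=(H, W, 3))  # reflectance (scene colors)
L = rng.uniform(0.1, 0.8, size=(H, W, 1))  # shading (per-pixel illumination)
G = rng.uniform(0.0, 0.3, size=(H, W, 3))  # additive light-effects layer

I = R * L + G                 # composed night image: I = R ⊙ L + G
J = I - G                     # background layer, free of light-effects
assert np.allclose(J, R * L)  # removing G recovers the background exactly
```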
Fig. 8: light-effects layer G
Because the light-effects layer G is relatively smooth, its gradients follow a short-tailed distribution, as shown in (c). The paper therefore assumes the second-order derivative of the light-effects layer is close to zero. Using a Laplacian filter together with gradient-exclusion, color-constancy, and other loss functions, the light-effects layer can be separated from the background layer.
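The second-order smoothness prior can be sketched as a Laplacian penalty. This is our own minimal NumPy illustration of the idea (the paper combines it with the gradient-exclusion and color-constancy losses mentioned above):

```python
import numpy as np

def laplacian(x):
    # Discrete second-order derivative via the 5-point Laplacian stencil.
    out = np.zeros_like(x)
    out[1:-1, 1:-1] = (x[:-2, 1:-1] + x[2:, 1:-1] +
                       x[1:-1, :-2] + x[1:-1, 2:] - 4.0 * x[1:-1, 1:-1])
    return out

def smoothness_loss(layer):
    # Penalizes second-order variation; near zero for a smooth glow layer.
    return float(np.abs(laplacian(layer)).mean())

yy, xx = np.mgrid[0:32, 0:32]
smooth_glow = np.exp(-((xx - 16) ** 2 + (yy - 16) ** 2) / 200.0)  # smooth blob
rng = np.random.default_rng(0)
textured = rng.uniform(size=(32, 32))  # noisy, texture-like background

# The smooth glow incurs a much smaller penalty than the textured layer,
# which is what drives glow into G rather than into the background.
assert smoothness_loss(smooth_glow) < smoothness_loss(textured)
```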
3.2 Light-Effects Suppression Network
Fig. 9: Light-Effects Suppression Network
The light-effects suppression network is data-driven, trained on unpaired data labelled as with light-effects (class label 1) and without light-effects (class label 0). From this binary classification, CAM (class activation map) weights are obtained. The feature map is multiplied by the CAM weights to produce an attention feature map, which shows that the network focuses on light-effects regions and can therefore further suppress the light-effects there.
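The CAM-weighted attention step can be sketched as below. The classifier weights are random placeholders here; in the paper they come from training the light-effects/no-light-effects binary classifier.

```python
import numpy as np

rng = np.random.default_rng(2)
C, H, W = 8, 16, 16
features = rng.uniform(size=(C, H, W))  # last conv feature maps
# Per-channel weights of the binary classifier (class 1 = "has light-effects");
# random stand-ins for the trained weights.
w = rng.uniform(size=(C,))

cam = np.tensordot(w, features, axes=([0], [0]))   # (H, W) class activation map
cam = (cam - cam.min()) / (cam.max() - cam.min())  # normalize to [0, 1]

# Attention feature map: each channel reweighted by where the CAM fires,
# concentrating the network's capacity on light-effects regions.
attended = features * cam[None, :, :]
```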
04 Experiments and Results
4.1 Light-Effects Suppression Results
Fig. 10: Input and model output
4.2 Low-light Enhancement Results
Fig. 11: Input, ground truth and model output
05 Conclusion
This paper presents an unsupervised learning framework for nighttime image enhancement that simultaneously enhances dark regions and suppresses light-effects regions. Guided by the decomposed light-effects layer, the method separates light-effects more accurately.