Underwater images often suffer from severe quality degradation due to wavelength-dependent light absorption, scattering, and non-uniform illumination conditions present in aquatic environments. These effects lead to low contrast, color distortion, poor visibility, and loss of fine details, which significantly impact downstream applications such as marine exploration, underwater robotics, and visual inspection. This paper presents a perception-driven deep underwater image enhancement approach that integrates Retinex-based illumination modeling with a Generative Adversarial Network (GAN). The proposed method estimates illumination maps to guide the enhancement process, enabling effective separation of illumination and reflectance components. A GAN framework is employed to learn perceptually enhanced illumination correction while preserving structural details and natural color appearance. Post-processing techniques such as gamma correction and edge-preserving refinement are applied to further improve visual quality. Experimental results demonstrate improved brightness, contrast, and color balance compared to raw underwater images. Quantitative evaluation using PSNR, SSIM, UIQM, and UCIQE metrics confirms the effectiveness of the proposed method in enhancing underwater imagery while maintaining perceptual realism.
Introduction
This paper addresses underwater image enhancement, targeting the severe visual degradation caused by light absorption, scattering, color cast, and uneven illumination in underwater environments. These degradations limit both human visual perception and the performance of automated underwater vision systems used in applications such as marine exploration, robotics, environmental monitoring, and search-and-rescue.
Traditional enhancement methods (e.g., histogram equalization, white balancing, and physics-based models) offer limited improvements and often introduce artifacts or rely on hard-to-estimate physical parameters. Recent supervised deep learning approaches, particularly CNNs and GANs, have achieved better results by learning direct mappings from degraded images to high-quality references. GAN-based methods are especially effective in producing perceptually realistic images, but they often struggle with uneven illumination and color consistency.
To address these limitations, this paper proposes a supervised perception-driven underwater image enhancement framework that integrates Retinex-based illumination modeling with a GAN architecture. Retinex theory decomposes an image into illumination and reflectance components, enabling correction of non-uniform lighting while preserving structural details. Illumination maps estimated through Retinex modeling guide the GAN generator to enhance brightness, contrast, and color balance without amplifying noise.
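The Retinex decomposition described above can be sketched in a few lines. The version below is a minimal illustration, assuming the illumination map is approximated by Gaussian smoothing of the per-pixel maximum channel; the function names and the choice of smoothing are assumptions for illustration, not the paper's exact implementation:

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian kernel with radius ~3*sigma."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def estimate_illumination(img, sigma=15.0):
    """Smooth illumination map: Gaussian blur of the per-pixel max channel.

    img: float array in [0, 1] with shape (H, W, 3).
    """
    v = img.max(axis=2)                              # rough illumination proxy
    k = gaussian_kernel(sigma)
    blur = np.apply_along_axis(np.convolve, 0, v, k, mode="same")
    blur = np.apply_along_axis(np.convolve, 1, blur, k, mode="same")
    return np.clip(blur, 1e-3, 1.0)                  # keep strictly positive

def retinex_decompose(img, sigma=15.0):
    """Split an image into (illumination L, reflectance R) with img ≈ L * R."""
    L = estimate_illumination(img, sigma)
    R = img / L[..., None]
    return L, R
```

The smoothed illumination map is the component the framework corrects; the reflectance ratio retains scene structure, which is why non-uniform lighting can be adjusted without destroying detail.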
The methodology includes dataset preparation using public benchmarks (UIEB, EUVP, U45), preprocessing, illumination map estimation, supervised GAN-based enhancement, post-processing refinement, and training stabilization strategies. The system is modular, consisting of input acquisition, preprocessing, illumination estimation, GAN enhancement, post-processing, and output modules, making it scalable and suitable for real-world applications.
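As one concrete example of the post-processing stage, gamma correction is a simple power-law brightness adjustment; the exponent below is illustrative, and the edge-preserving refinement step (e.g., guided filtering) is not reproduced here:

```python
import numpy as np

def gamma_correct(img, gamma=0.8):
    """Power-law adjustment on [0, 1] images: gamma < 1 brightens, gamma > 1 darkens."""
    return np.clip(img, 0.0, 1.0) ** gamma
```

For instance, a mid-dark pixel value of 0.25 maps to 0.5 under `gamma=0.5`, lifting shadows while leaving black (0) and white (1) fixed.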
Performance is evaluated using both full-reference and no-reference metrics (PSNR, SSIM, UIQM, and UCIQE), along with qualitative visual assessment. Experimental results show that the proposed Retinex-guided GAN framework consistently outperforms traditional and baseline deep learning methods, delivering improved illumination uniformity, contrast, color fidelity, and perceptual realism. While minor limitations remain under extreme lighting or highly turbid conditions, the approach demonstrates strong robustness and effectiveness for practical underwater imaging tasks.
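Of the reported metrics, PSNR and a simplified SSIM can be sketched directly (standard SSIM uses a sliding Gaussian window rather than the global statistics used here, and the no-reference UIQM/UCIQE scores follow published formulas not reproduced in this sketch):

```python
import numpy as np

def psnr(reference, enhanced, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(enhanced, float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(x, y, max_val=1.0):
    """SSIM computed from global image statistics (single-window simplification)."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

PSNR and SSIM require the paired reference images available in UIEB and EUVP, which is why the no-reference UIQM and UCIQE scores complement them on unpaired data such as U45.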
Conclusion
This paper presented a perception-driven supervised underwater image enhancement framework that integrates Retinex-based illumination modeling with a Generative Adversarial Network (GAN). The proposed approach effectively addresses common underwater image degradation issues such as uneven illumination, low contrast, and color distortion by guiding the enhancement process using illumination maps derived from Retinex theory. The supervised GAN architecture further improves perceptual quality by learning realistic enhancement patterns from paired training data. Experimental results demonstrate that the proposed method significantly improves visual clarity, brightness uniformity, and color fidelity compared to traditional image enhancement techniques and baseline deep learning approaches. Quantitative evaluation using PSNR, SSIM, UIQM, and UCIQE metrics confirms the effectiveness of the model in enhancing both structural accuracy and perceptual quality. The integration of illumination guidance and supervised adversarial learning enables stable training and consistent enhancement across diverse underwater scenes.
Although the proposed framework produces promising results, minor limitations such as slight over-brightness and residual color imbalance in highly challenging underwater conditions remain. Future work will focus on improving adaptive color correction, incorporating advanced perceptual loss functions, and extending the framework to real-time and video-based underwater enhancement applications. Additionally, expanding the training dataset and exploring lightweight architectures will further enhance generalization and computational efficiency. Overall, the proposed supervised Retinex-guided GAN framework offers an effective and robust solution for underwater image enhancement, with strong potential for practical deployment in marine exploration and underwater vision systems.
References
[1] J. Y. Chiang and Y. C. Chen, “Underwater image enhancement by wavelength compensation and dehazing,” IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 1756–1769, Apr. 2012.
[2] C. Li, J. Guo, C. Guo, R. Cong, and J. Gong, “Emerging from water: Underwater image enhancement via illumination-aware networks,” IEEE Transactions on Image Processing, vol. 28, no. 6, pp. 2866–2881, Jun. 2019.
[3] X. Fu, Z. Fan, M. Ling, Y. Huang, and X. Ding, “Two-step underwater image enhancement based on gray world assumption and relative global histogram stretching,” Signal Processing: Image Communication, vol. 38, pp. 1–11, 2015.
[4] E. H. Land and J. J. McCann, “Lightness and Retinex theory,” Journal of the Optical Society of America, vol. 61, no. 1, pp. 1–11, 1971.
[5] D. Akkaynak and T. Treibitz, “A revised underwater image formation model,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 6723–6732.
[6] I. Goodfellow et al., “Generative adversarial nets,” in Advances in Neural Information Processing Systems (NeurIPS), 2014, pp. 2672–2680.
[7] C. Li, J. Guo, and C. Guo, “Underwater image enhancement by dehazing with minimum information loss and histogram distribution prior,” IEEE Transactions on Image Processing, vol. 25, no. 12, pp. 5664–5677, Dec. 2016.
[8] K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2341–2353, Dec. 2011.
[9] C. Li, C. Guo, W. Ren, R. Cong, J. Hou, S. Kwong, and D. Tao, “An underwater image enhancement benchmark dataset and beyond,” IEEE Transactions on Image Processing, vol. 29, pp. 4376–4389, 2020.
[10] Y. Wang, J. Zhang, Y. Cao, and Z. Wang, “A deep CNN method for underwater image enhancement,” in Proceedings of the IEEE International Conference on Image Processing (ICIP), 2017, pp. 1382–1386.