Images captured in foggy weather often suffer from poor visibility, which degrades outdoor computer vision systems such as video surveillance, intelligent transportation assistance, and remote sensing cameras. In such conditions, traditional visibility restoration approaches usually cannot adequately restore images, owing to poor estimation of haze thickness and persistent color cast. In this work, we propose a visibility restoration approach that addresses inadequate haze-thickness estimation and alleviates color cast. As a result, a high-quality image with clear visibility and vivid color can be generated, improving both the overall visibility and the fine details of a single foggy or hazy input image. Our approach stems from two important statistical observations about haze-free images and about the haze itself. First, wavelet decomposition is applied to the input; a LUM filter is then applied to the decomposed subbands, and finally the dehazed output is reconstructed.
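The LUM filtering step above can be sketched as follows. This is a minimal illustration of a standard LUM (lower-upper-middle) smoother, which clamps each pixel between the k-th lowest and k-th highest order statistics of its neighbourhood; the window size and rank parameter are illustrative assumptions, not values specified in this paper.

```python
import numpy as np

def lum_smooth(img, size=3, k=2):
    """LUM smoother: replace each pixel by the median of the k-th
    lowest window sample, the pixel itself, and the k-th highest
    window sample. This clamps outliers while preserving edges."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            win = np.sort(padded[i:i + size, j:j + size].ravel())
            lo, hi = win[k - 1], win[-k]   # k-th order statistics
            out[i, j] = np.median([lo, img[i, j], hi])
    return out
```

In the pipeline described above, such a smoother would be applied to the wavelet subbands before reconstruction, suppressing impulsive noise without blurring structural detail.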
Because disasters caused by hazy environments are increasing, the development of real-time dehazing techniques requires immediate attention. Intelligent vision systems such as Advanced Driver Assistance Systems (ADAS) and surveillance and monitoring systems rely fundamentally on the quality of their input images, which strongly affects the precision of object recognition. The absorption and scattering of natural light by atmospheric particles (arising from air pollution, varied weather, and the water droplets that cause mist, fog, and haze in many parts of the world) lower the contrast of outdoor images and make them appear dimmer. With rapid technological advancement, surveillance and monitoring systems, remote sensing systems, ADAS, and similar technologies are deployed almost everywhere, and the accuracy of these systems largely depends on input image quality. The demand for improved input images is therefore growing and is urgent. There have already been haze-related disasters in modern life: a Tesla self-driving car crashed in China in hazy weather, costing the driver's life. Foggy scenarios also arise during surgery, in surveillance under foggy or hazy weather, and when using ADAS on hazy roads.
Since these circumstances can be a matter of life and death, it is critical to develop a real-time video dehazing technique that can be employed in every application requiring dehazed input images.
Several experiments have been carried out over the years by different groups of researchers; some representative works are summarized below.
Yang, Y., Zhang, J.L., Liu, C., Zhang, H.W., and Li, X. proposed "Visibility restoration of haze and dust image using color correction and composite channel prior." Visibility restoration of images captured in haze and dust weather is essential for computer vision tasks. Their algorithm restores visibility using color correction and a composite channel prior (CCP). First, the color of a dust image is corrected by color compensation and white balancing of the blue and red channels. Second, composite channels are defined by simple multiplication and subtraction; the composite channels of a hazy image and a clear image have very similar pixel distributions. To eliminate the remaining brightness difference, an adaptive gamma correction function based on haze density is applied.
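The adaptive gamma correction step can be illustrated as follows. The exact mapping from haze density to gamma used in the CCP work is not reproduced here; the sketch assumes a simple hypothetical mapping in which denser haze yields a smaller gamma (and thus a brighter corrected image), purely for illustration.

```python
import numpy as np

def adaptive_gamma(img, haze_density, alpha=0.8):
    """Gamma correction driven by an estimated haze density in [0, 1].
    The mapping gamma = 1 / (1 + alpha * density) is a hypothetical
    illustrative choice: denser haze -> smaller gamma -> brighter output.
    `img` is assumed to be normalized to [0, 1]."""
    gamma = 1.0 / (1.0 + alpha * haze_density)
    return np.clip(img, 0.0, 1.0) ** gamma
```

With `haze_density = 0` the transform is the identity; as the estimated density grows, mid-tones are lifted to compensate for the darkening introduced by the composite-channel operations.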
Minaee, S., Boykov, Y.Y., Porikli, F., Plaza, A.J., Kehtarnavaz, N., and Terzopoulos, D. proposed "Image segmentation using deep learning: A survey." Image segmentation is a key topic in image processing and computer vision, with applications such as scene understanding, medical image analysis, robotic perception, video surveillance, augmented reality, and image compression, among many others. Various segmentation algorithms have been developed in the literature. The survey investigates the similarities, strengths, and challenges of deep learning segmentation models, examines the most widely used datasets, reports performance, and discusses promising future research directions in this area.
Peng, Y.-T., Lu, Z., Cheng, F.-C., Zheng, Y., and Huang, S.-C. proposed "Image haze removal using airlight white correction, local light filter and aerial perspective prior." Light is scattered and absorbed when travelling through atmospheric particles, attenuating the visibility of captured images, especially in hazy scenes. Their experimental results demonstrate that the method outperforms other state-of-the-art dehazing methods in three ways: 1) better visual quality; 2) the best performance in terms of color restoration; and 3) high efficiency at removing haze and color casts.
III. PROPOSED SYSTEM
A large number of images are available on the web, and image retrieval methods make it possible to use these web images as external information to improve dehazing. In this paper we propose a method that uses a haze-free image as a reference for dehazing. The main difference between existing systems and ours is the use of a haze-free reference image instead of additional metadata about the input image. Our proposed system includes the following methods.
A. Advantages of Proposed Method
Superior resolution compared to existing techniques.
Greater performance ratio.
In the proposed model, the main aim is to remove haze from a hazy image using a haze-removal step with filters. An image captured in a hazy environment is blurred or unclear due to fog and weather conditions. Defects in satellite images are identified with the help of image processing, and the same concept is used here to remove haze. The test images are first degraded by adding noise. The DWT is then applied to the image, the resulting coefficients are processed by soft thresholding, and the IDWT is applied to reconstruct the image with the haze removed. To evaluate the result, clustering followed by segmentation is used; based on this overall process, detection is achieved.
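The DWT / soft-thresholding / IDWT pipeline described above can be sketched as follows. The wavelet basis is not specified in the text, so this minimal illustration assumes a single-level Haar transform and a fixed threshold; both are assumptions for demonstration, not parameters from the proposed system.

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2-D Haar DWT. Returns the approximation band LL
    and the detail bands (LH, HL, HH). Image sides must be even."""
    a = (x[0::2, :] + x[1::2, :]) / 2   # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, (lh, hl, hh)

def haar_idwt2(ll, bands):
    """Inverse of haar_dwt2: perfect reconstruction."""
    lh, hl, hh = bands
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2, :], x[1::2, :] = a + d, a - d
    return x

def soft_threshold(c, t):
    """Shrink coefficients toward zero; kill those below |t|."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def dwt_denoise(img, t=0.05):
    """Pipeline from the text: DWT, soft-threshold the detail bands
    (suppressing noise and fine haze texture), then IDWT."""
    ll, (lh, hl, hh) = haar_dwt2(img)
    bands = tuple(soft_threshold(b, t) for b in (lh, hl, hh))
    return haar_idwt2(ll, bands)
```

With the threshold set to zero the forward and inverse transforms reconstruct the input exactly; raising the threshold trades fine detail for noise suppression in the reconstructed image.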
REFERENCES
Y. Yang, J.L. Zhang, C. Liu, H.W. Zhang, and X. Li, "Visibility restoration of haze and dust image using color correction and composite channel prior," IEEE Trans. Image Process., vol. 1, pp. 1-15, May 2022.
S. Minaee, Y.Y. Boykov, F. Porikli, A.J. Plaza, N. Kehtarnavaz, and D. Terzopoulos, "Image segmentation using deep learning: A survey," IEEE Trans. Pattern Anal. Mach. Intell., vol. 2, pp. 22-39, June 2021.
Y.-T. Peng, Z. Lu, F.-C. Cheng, Y. Zheng, and S.-C. Huang, "Image haze removal using airlight white correction local light filter and aerial perspective prior," IEEE Trans. Circuits Syst. Video Technol., vol. 30, no. 5, pp. 1385-1395, May 2020.
Y. Pang, J. Xie, and X. Li, "Visual haze removal by a unified generative adversarial network," IEEE Trans. Circuits Syst. Video Technol., vol. 29, no. 11, pp. 3211-3221, November 2019.
B. Li et al., "Benchmarking single-image dehazing and beyond," IEEE Trans. Image Process., vol. 28, no. 1, pp. 492-505, January 2019.
Y.-T. Peng, K. Cao, and P.C. Cosman, "Generalization of the dark channel prior for single image restoration," IEEE Trans. Image Process., vol. 27, no. 6, pp. 2856-2868, June 2018.
D. Engin, A. Genç, and H. Kemal Ekenel, "Cycle-Dehaze: Enhanced CycleGAN for single image dehazing," in Proc. IEEE Conf. Computer Vision and Pattern Recognition Workshops, vol. 1, pp. 825-833, March 2018.
A. Galdran, J. Vazquez-Corral, D. Pardo, and M. Bertalmío, "Fusion-based variational image dehazing," IEEE Signal Process. Lett., vol. 24, no. 2, pp. 151-155, April 2017.
M.Z. Zhu, B.W. He, and L.W. Zhang, "Atmospheric light estimation in hazy images based on color-plane model," Computer Vision and Image Understanding, vol. 165, pp. 33-42, June 2017.
D. Singh, D. Garg, and H. Singh Pannu, "Efficient landsat image fusion using fuzzy and stationary discrete wavelet transform," Imaging Sci. J., vol. 65, no. 2, pp. 108-114, May 2017.