
CDRSHNET: VARIANCE-GUIDED MULTISCALE AND SELF-ATTENTION FUSION WITH HYBRID LOSS FUNCTION TO RESTORE TRAFFIC-SIGN IMAGES CAPTURED IN ADVERSE CONDITIONS


(Received: 10-Nov.-2023, Revised: 2-Jan.-2024 and 23-Jan.-2024, Accepted: 25-Jan.-2024)
Under adverse conditions, visual impediments such as raindrops, shadows, haze and distortions caused by dirty camera lenses and codec errors degrade the quality of traffic-sign images. Existing methods struggle to address these issues comprehensively, necessitating an innovative approach to restoration. This paper introduces the Codec Dirty Rainy Shadow Haze Network (CDRSHNet) architecture, which integrates self-attention (SA) and variance-guided multiscale attention (VGMA) mechanisms. SA captures global dependencies, enabling focused processing of relevant image regions, while VGMA emphasizes informative channels and spatial locations for enhanced representation. A hybrid loss function, combining Gradient Magnitude Similarity Deviation (GMSD) and Charbonnier loss, further improves restoration quality. When trained on a diverse dataset, CDRSHNet attains a remarkable 99.3% restoration accuracy, yielding an average SSIM of 0.978 and an average PSNR of 39.58 dB on the Real Image Dataset (RID). On the Synthetic Image Dataset (SID), the average SSIM is 0.963 and the average PSNR is 39.46 dB. The proposed model significantly improves image clarity and facilitates precise interpretation of traffic signs.
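As context for the hybrid loss named in the abstract, the sketch below shows one plausible way to combine a Charbonnier term with a GMSD term in PyTorch. It is an illustrative assumption rather than the paper's implementation: the function names, the weighting factor alpha, the Charbonnier epsilon and the GMSD constant c are all chosen here for demonstration, and inputs are assumed to be (N, C, H, W) tensors scaled to [0, 1].

```python
import torch
import torch.nn.functional as F

def charbonnier_loss(pred, target, eps=1e-3):
    # Charbonnier penalty: a smooth, differentiable variant of the L1 loss.
    return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()

def gmsd_loss(pred, target, c=0.0026):
    # Gradient Magnitude Similarity Deviation (Xue et al. [31]): the standard
    # deviation of the gradient-magnitude similarity (GMS) map.
    prewitt_x = torch.tensor([[1., 0., -1.],
                              [1., 0., -1.],
                              [1., 0., -1.]]) / 3.0
    kernels = torch.stack([prewitt_x, prewitt_x.t()]).unsqueeze(1).to(pred)  # (2, 1, 3, 3)

    def grad_mag(img):
        gray = img.mean(dim=1, keepdim=True)        # operate on luminance
        g = F.conv2d(gray, kernels, padding=1)      # horizontal and vertical gradients
        return torch.sqrt(g[:, :1] ** 2 + g[:, 1:] ** 2 + 1e-12)

    m_pred, m_target = grad_mag(pred), grad_mag(target)
    gms = (2 * m_pred * m_target + c) / (m_pred ** 2 + m_target ** 2 + c)
    return gms.flatten(1).std(dim=1).mean()         # deviation of the GMS map

def hybrid_loss(pred, target, alpha=0.5):
    # alpha is an illustrative weight; the abstract does not specify how the
    # two terms are balanced.
    return charbonnier_loss(pred, target) + alpha * gmsd_loss(pred, target)

# Example usage with random restored/ground-truth batches.
restored = torch.rand(4, 3, 64, 64, requires_grad=True)
ground_truth = torch.rand(4, 3, 64, 64)
loss = hybrid_loss(restored, ground_truth)
loss.backward()
```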

[1] D. Temel, G. Kwon, M. Prabhushankar and G. AlRegib, "CURE-TSR: Challenging Unreal and Real Environments for Traffic Sign Recognition," arXiv (Cornell University), DOI: 10.48550/arxiv.1712.02463, Dec. 2017.

[2] J. Su, B. Xu and H. Yin, "A Survey of Deep Learning Approaches to Image Restoration," Neurocomputing, vol. 487, pp. 46–65, DOI: 10.1016/j.neucom.2022.02.046, May 2022.

[3] Z. Shen and D. Dang, "Mixed Hierarchy Network for Image Restoration," arXiv (Cornell University), DOI: 10.48550/arxiv.2302.09554, Feb. 2023.

[4] M. Maru and M. C. Parikh, "Image Restoration Techniques: A Survey," Int. Journal of Computer Applications, vol. 160, no. 6, pp. 15–19, DOI: 10.5120/ijca2017913060, Feb. 2017.

[5] L.-Y. Chang and A. I. Kirkland, "Comparisons of Linear and Nonlinear Image Restoration," Microscopy and Microanalysis, vol. 12, no. 6, pp. 469–475, DOI: 10.1017/s1431927606060582, Oct. 2006.

[6] Z. Liu, "Literature Review on Image Restoration," Journal of Physics: Conference Series, vol. 2386, no. 1, p. 012041, DOI: 10.1088/1742-6596/2386/1/012041, Dec. 2022.

[7] L. Yu, J. Guo and Y. Chen, "Research Status and Development Trend of Image Restoration Technology," Journal of Physics: Conference Series, vol. 2303, no. 1, DOI: 10.1088/1742-6596/2303/1/012081, 2022.

[8] C. Zhang, F. Du and Y. Zhang, "A Brief Review of Image Restoration Techniques Based on Generative Adversarial Models," Lecture Notes in Electrical Engineering, pp. 169–175, DOI: 10.1007/978-981-32-9244-4_24, 2019.

[9] S. Ahmed, U. Kamal and Md. K. Hasan, "DFR-TSD: A Deep Learning Based Framework for Robust Traffic Sign Detection under Challenging Weather Conditions," IEEE Transactions on Intelligent Transportation Systems, pp. 1–13, DOI: 10.1109/tits.2020.3048878, 2021.

[10] R. Huang, Y. Zhang and Z. Luo, "Inpainting of Compressed Images with Autoencoder-based Prior Learning," Proc. of the 26th ACM Int. Conf. on Multimedia, pp. 236-244.

[11] S. Jeon, H. Kim and H. Kwon, "Compressed Image Restoration Using Autoencoder Regularization," Journal of Imaging Science and Technology, vol. 63, no. 6, pp. 060403-1 - 060403-11, DOI: 10.2352/J.ImagingSci.Technol.2019.63.6.060403, 2019.

[12] K. Zhang, Y. Li and Y. Wang, "A Two-stage Method for Video Codec Error Concealment Using Deep Learning," IEEE Transactions on Circuits and Systems for Video Technology, vol. 30, no. 7, pp. 2102-2115, DOI: 10.1109/TCSVT.2020.2977168, 2020.

[13] M. Uricar et al., "Let’s Get Dirty: GAN Based Data Augmentation for Camera Lens Soiling Detection in Autonomous Driving," Proc. of the IEEE/CVF Winter Conf. on Applications of Computer Vision, pp. 766-775, [Online], Available: http://arxiv.org/pdf/1912.02249.pdf, Dec. 2021.

[14] X. Li, B. Zhang, J. Liao and P. V. Sander, "Let’s See Clearly: Contaminant Artifact Removal for Moving Cameras," Proc. of the Int. Conf. on Computer Vision, pp. 2011–2020, Montreal, Canada, Oct. 2021.

[15] J. Mohd, S. M. Reyes and J. Xiao, "Camera Lens Dust Detection and Dust Removal for Mobile Robots in Dusty Fields," Proc. of the 2021 IEEE Int. Conf. on Robotics and Biomimetics (ROBIO), DOI: 10.1109/robio54168.2021.9739233, Dec. 2021.

[16] H. Wang, Y. Wu, Q. Xie, Q. Zhao, Y. Liang et al., "Structural Residual Learning for Single Image Rain Removal," Knowledge-based Systems, vol. 213, p. 106595, Feb. 2021.

[17] S. Li, W. Ren, J. Zhang, J. Yu and X. Guo, "Single Image Rain Removal via a Deep Decomposition–Composition Network," Computer Vision and Image Understanding, vol. 186, pp. 48–57, Sep. 2019.

[18] M. Umair Arif, M. U. Farooq, R. H. Raza, Z. U. A. Lodhi and M. A. R. Hashmi, "A Comprehensive Review of Vehicle Detection Techniques under Varying Moving Cast Shadow Conditions Using Computer Vision and Deep Learning," IEEE Access, vol. 10, pp. 104863–104886, 2022.

[19] Z. Liu, H. Yin, Y. Mi, M. Pu and S. Wang, "Shadow Removal by a Lightness-guided Network with Training on Unpaired Data," IEEE Transactions on Image Processing, vol. 30, pp. 1853–1865, Jan. 2021.

[20] H. van Le and D. Samaras, "From Shadow Segmentation to Shadow Removal," arXiv (Cornell University), DOI: 10.48550/arxiv.2008.00267, Aug. 2020.

[21] H. Fan, M. Han and J. Li, "Image Shadow Removal Using End-to-End Deep Convolutional Neural Networks," Applied Sciences, vol. 9, no. 5, p. 1009, DOI: 10.3390/app9051009, Mar. 2019.

[22] X. Hu, Y. Jiang, C.-W. Fu and P.-A. Heng, "Mask-ShadowGAN: Learning to Remove Shadows from Unpaired Data," Proc. of the 2019 IEEE/CVF Int. Conf. on Computer Vision (ICCV), pp. 2472–2481, Seoul, S. Korea, Oct. 2019.

[23] L. Ren, Z. Pan, J. Cao, J. Liao and Y. Wang, "Infrared and Visible Image Fusion Based on Weighted Variance Guided Filter and Image Contrast Enhancement," Infrared Physics & Technology, vol. 114, p. 103662, DOI: 10.1016/j.infrared.2021.103662, May 2021.

[24] Q. Yang, C. Zhang, H. Wang, Q. He and L. Huo, "SV-FPN: Small Object Feature Enhancement and Variance-guided RoI Fusion for Feature Pyramid Networks," Electronics, vol. 11, no. 13, p. 2028, DOI: 10.3390/electronics11132028, Jun. 2022.

[25] X. Yang, "An Overview of the Attention Mechanisms in Computer Vision," Journal of Physics: Conference Series, vol. 1693, p. 012173, DOI: 10.1088/1742-6596/1693/1/012173, Dec. 2020.

[26] D. Zhao, L. Xu, Y. Yan, J. Chen and L.-Y. Duan, "Multi-scale Optimal Fusion Model for Single Image Dehazing," Signal Processing: Image Communication, vol. 74, pp. 253–265, DOI: 10.1016/j.image.2019.02.004, May 2019.

[27] W. Yi et al., "Towards Compact Single Image Dehazing via Task-related Contrastive Network," Expert Systems with Applications, vol. 235, p. 121130, 2024.

[28] X. Zhu et al., "GAN-based Image Super-resolution with a Novel Quality Loss," Mathematical Problems in Engineering, vol. 2020, p. e5217429, DOI: 10.1155/2020/5217429, Feb. 2020.

[29] B. Wu, H. Duan, Z. Liu and G. Sun, "SRPGAN: Perceptual Generative Adversarial Network for Single Image Super Resolution," arXiv (Cornell University), DOI: 10.48550/arXiv.1712.05927, Dec. 2017.

[30] B. V. Gajera, S. R. Kapil, D. Ziaei, J. Mangalagiri, E. L. Siegel and D. Chapman, "CT-Scan Denoising Using a Charbonnier Loss Generative Adversarial Network," IEEE Access, vol. 9, pp. 84093–84109, DOI: 10.1109/access.2021.3087424, Jun. 2021.

[31] W. Xue, L. Zhang, X. Mou and A. C. Bovik, "Gradient Magnitude Similarity Deviation: A Highly Efficient Perceptual Image Quality Index," IEEE Transactions on Image Processing, vol. 23, no. 2, pp. 684–695, DOI: 10.1109/tip.2013.2293423, Feb. 2014.

[32] D. Ren, W. Zuo, Q. Hu, P. Zhu and D. Meng, "Progressive Image Deraining Networks: A Better and Simpler Baseline," Proc. of the 2019 IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 3937-3946, DOI: 10.1109/cvpr.2019.00406, Long Beach, USA, Jun. 2019.

[33] Y. Wang, R. Wan, W. Yang, B. Wen, L.-P. Chau and A. C. Kot, "Removing Image Artifacts from Scratched Lens Protectors," arXiv (Cornell University), DOI: 10.48550/arxiv.2302.05746, Feb. 2023.

[34] W. Yi, M. Liu, L. Dong, Y. Zhao, X. Liu and M. Hui, "Restoration of Haze-free Images Using Generative Adversarial Network," Proc. of SPIE, vol. 11432, DOI: 10.1117/12.2541893, Feb. 2020.