ISSN: 2321-9653
Estd : 2013

International Journal for Research in Applied Science and Engineering Technology (IJRASET)


Real Image Restoration Using VAEs

Authors: Rakshitha Reddy Potu, Naalla Sushma, Baru Shiva Kumar, Aruna Kumari Kumbhagiri

DOI Link: https://doi.org/10.22214/ijraset.2022.43964


Abstract

Old photos are an integral part of everybody’s life; they remind us of how a person has spent their life. Because people used hard copies of photos in the past, those photos have suffered severe degradation. Degradation in real images is intricate, so typical restoration approaches based on supervised learning fail to generalize due to the domain gap between synthetic and real images. This method therefore uses variational autoencoders to restore and colourize old images. Furthermore, the model uses a novel triplet domain translation network trained on real images and synthetic photo pairs. Specifically, two variational autoencoders (VAEs) are trained to transform old pictures and clean pictures into two latent spaces, and the translation between these two latent spaces is learned with synthetic paired data. This translation generalizes well to authentic images because the domain gap is closed in the compact latent space. Moreover, to handle the multiple degradations present in one old picture, the model designs a global branch with a partial non-local block targeting structured defects, such as scratches and dirt marks, and a local branch targeting unstructured defects, such as noise and blurriness. The two branches are blended within the latent space, improving the ability to restore old pictures from numerous defects. Additionally, a face refinement network is applied to recover fine facial details within old pictures, thus generating photos of enhanced quality. A separate autoencoder is trained on colour images: the decoder reconstructs colours from the features extracted by the encoder. Once the model is trained, testing is performed to colourize the photographs.


I. INTRODUCTION

Photos are taken to capture happy memories that would otherwise be gone. Although time flies by, one can still invoke moments of the past by looking at them. However, old photo prints disintegrate when kept in poor environments, permanently damaging the content of the photo. Luckily, as mobile cameras and scanners have become more convenient, people can now digitize their photos and invite a skilled specialist to reconstruct them. However, manual retouching is typically burdensome and time-consuming, which leaves stacks of old photos impossible to restore. Hence, it is appealing to design automatic algorithms that can instantly repair old photos for people who wish to bring them back to life. Before the deep learning era, there were some attempts to restore photos by automatically detecting localized defects, such as scratches and freckles, and filling in the damaged areas with an inpainting process.

II. LITERATURE SURVEY

Siqi Zhang et al. [13] proposed a “unique Consecutive Context Perceive Generative Adversarial Networks (CCPGAN) for serial sections inpainting that can learn semantic information from its neighboring image and reinstate the damaged regions of serial sectioning images to the maximum extent.”

Lingbo Yang et al. [12] proposed “HiFaceGAN, a collaborative suppression, and replenishment framework that works in a dual-blind fashion, reducing dependence on degradation prior or structural guidance for training.”

X. Lu et al. [10] proposed a feed-forward CNN for image inpainting that can handle multiple arbitrary holes of various sizes at test time.

III. METHODOLOGY

In this paper, we propose a novel network to address the problem of old photo restoration via deep latent space translation. We use deep learning to restore old photos that have suffered severe degradation. Many approaches are currently available, but the main problem with previous conventional restoration techniques is that they do not generalize: they rely on supervised learning, which suffers from the domain gap between real old pictures and those synthesized for training. There is a large difference between synthesized old images and real old ones. A synthesized image is already in high definition, even with fake scratches and colour changes, whereas a real old photo contains far less detail. We address this issue by creating a new network specifically for this task, built around two variational autoencoders (VAEs).
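The core of the two-VAE design is the mapping of images into compact latent codes. The following is a minimal NumPy sketch of that encoding step via the standard VAE reparameterization trick; the linear "encoder", the weights, and all dimensions are hypothetical illustrations, not the authors' actual networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w_mu, w_logvar):
    # Toy linear "encoder": map flattened image vectors to a latent
    # mean and log-variance (a real VAE would use a deep conv encoder).
    return x @ w_mu, x @ w_logvar

def reparameterize(mu, logvar, rng):
    # VAE reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I),
    # which keeps sampling differentiable during training.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

# Hypothetical sizes: 64-dimensional flattened "images", 8-dim latent space.
x_old = rng.standard_normal((4, 64))        # a batch of old photos R
w_mu = rng.standard_normal((64, 8)) * 0.1
w_logvar = rng.standard_normal((64, 8)) * 0.01

mu, logvar = encode(x_old, w_mu, w_logvar)
z_r = reparameterize(mu, logvar, rng)       # latent codes in Z_R
print(z_r.shape)                            # (4, 8)
```

In the full method, a second VAE encodes clean/synthetic images into its own latent space, and the restoration mapping is then learned between the two latent spaces rather than between raw images.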

IV. PROPOSED SYSTEM

The architecture of the image restoration system is shown in Fig 2. Before the image is restored, it passes through several stages. To decrease the domain gap, we formulate the old photo restoration problem as learning the mapping between clean images and old photos, which are drawn from distinct domains.

V. IMPLEMENTATION

The issue with conventional restoration techniques is addressed by creating a new network specifically for this task, built around two variational autoencoders (VAEs).

The translation between latent spaces, “Tz” (Fig 4), is learned from synthetic paired data, but it generalizes well to real photos because the domain gap is much smaller in these compact latent spaces.

The domain gap between the two latent spaces produced by the VAEs is closed by training an adversarial discriminator.
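One common way to train such a discriminator is with a least-squares GAN objective; the exact adversarial loss used in this paper is not stated here, so the sketch below is an assumption, shown on scalar discriminator scores:

```python
import numpy as np

def lsgan_losses(d_on_real, d_on_synth):
    # Least-squares GAN objective (an assumption, one common choice).
    # The discriminator pushes scores on real-photo latents toward 1 and
    # on synthetic-photo latents toward 0; the encoder being trained
    # adversarially pushes synthetic scores toward 1, which makes the two
    # latent distributions indistinguishable.
    d_loss = 0.5 * np.mean((d_on_real - 1.0) ** 2) + 0.5 * np.mean(d_on_synth ** 2)
    g_loss = 0.5 * np.mean((d_on_synth - 1.0) ** 2)
    return d_loss, g_loss

# A fully confused discriminator scores both domains identically at 0.5.
d_loss, g_loss = lsgan_losses(np.full(4, 0.5), np.full(4, 0.5))
print(d_loss, g_loss)  # 0.25 0.125
```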

From Fig 4 we can observe that the latent-space domains “Zx” and “Zr” are much closer to each other than the original real pictures “R” and synthetic old pictures “X”.

The mapping that restores the degraded photos is done in this latent space.

In order to recover the fine details of faces in old photos from the latent representation “z” (Fig 7), we add a face refinement network that processes the degraded face over multiple regions of the network. This greatly enhances the perceptual quality of the faces.

The colorization model is trained and fitted with the following settings: number of epochs: 300; batch size: 16. Output: a coloured image.

VII. RESULTS

This model is mainly categorized into two parts: i) image restoration, comprising a) enhancing unscratched images and b) removing scratches and folds from photos; and ii) image colorization. The model takes an image as input and outputs an image with no scratches and higher resolution, or with improved colouring. The model is trained for 300 epochs and achieves 86% accuracy.
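The two-part pipeline described above can be sketched as a simple composition of the two stages; `restore_and_colorize`, `restorer`, and `colorizer` are hypothetical stand-ins for the trained models, not names from the paper:

```python
def restore_and_colorize(image, restorer, colorizer):
    # Stage i: image restoration (scratch/fold removal, enhancement).
    restored = restorer(image)
    # Stage ii: colorization of the restored result.
    return colorizer(restored)

# Illustration with dummy stages that just tag the input with what ran.
out = restore_and_colorize("photo",
                           lambda im: im + "+restored",
                           lambda im: im + "+coloured")
print(out)  # photo+restored+coloured
```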

A. Image Colorization

1. Accuracy: The model in this project is evaluated against the accuracy measure, attaining 86% accuracy after 300 epochs.

Fig 17 shows that the number of correctly coloured images increases with the number of epochs; with 300 epochs, the model achieves 86% accuracy. As the number of epochs increases, the loss is reduced.

VIII. FUTURE ENHANCEMENTS

Nonetheless, because the dataset used contains only a few old photos with defects, this method cannot handle complex shading. This could be addressed by including more such photographs in the training data or by explicitly modelling the shading effects during synthesis.

Conclusion

This project concludes that the domain gap between synthetic photos and authentic old images is reduced, and that the latent space is used to translate to clean images. Compared with prior methods, this method suffers from fewer generalization problems. Using this method, scratches can be inpainted with better structural consistency. To reconstruct the face areas of old images, a coarse-to-fine generator with a spatially adaptive condition is proposed. Black-and-white images are colorized with 86% accuracy. This method shows good performance in restoring severely damaged old photographs.

References

[1] H. Liu, B. Jiang, Y. Xiao, and C. Yang, “Coherent semantic attention for image inpainting,” arXiv preprint arXiv:1905.12384, 2019.
[2] L. Xu, J. S. Ren, C. Liu, and J. Jia, “Deep convolutional neural network for image deconvolution,” in Advances in Neural Information Processing Systems, 2014, pp. 1790–1798.
[3] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter, “GANs trained by a two time-scale update rule converge to a local Nash equilibrium,” in Advances in Neural Information Processing Systems, 2017, pp. 6626–6637.
[4] W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, and M.-H. Yang, “Single image dehazing via multi-scale convolutional neural networks,” in European Conference on Computer Vision. Springer, 2016, pp. 154–169.
[5] Y. Ren, X. Yu, R. Zhang, T. H. Li, S. Liu, and G. Li, “StructureFlow: Image inpainting via structure-aware appearance flow,” arXiv preprint arXiv:1908.03852, 2019.
[6] F. Stanco, G. Ramponi, and A. De Polo, “Towards the automated restoration of old photographic prints: a survey,” in The IEEE Region 8 EUROCON 2003. Computer as a Tool., vol. 2. IEEE, 2003, pp. 370–374.
[7] G. Liu, F. A. Reda, K. J. Shih, T.-C. Wang, A. Tao, and B. Catanzaro, “Image inpainting for irregular holes using partial convolutions,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 85–100.
[8] I. Giakoumis, N. Nikolaidis, and I. Pitas, “Digital image processing techniques for the detection and removal of cracks in digitized paintings,” IEEE Transactions on Image Processing, vol. 15, no. 1, pp. 178–188, 2005.
[9] J. Sun, W. Cao, Z. Xu, and J. Ponce, “Learning a convolutional neural network for non-uniform motion blur removal,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 769–777.
[10] J. Yu, Z. Lin, J. Yang, X. Shen, X. Lu, and T. S. Huang, “Generative image inpainting with contextual attention,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 5505–5514.
[11] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising,” IEEE Transactions on Image Processing, vol. 26, no. 7, pp. 3113–3155, 2017.
[12] L. Yang, S. Wang, S. Ma, C. Liu, and P. Wang, “HiFaceGAN: Face renovation via collaborative suppression and replenishment,” in Proceedings of the IEEE Conference on Computer Vision, 2021.
[13] S. Zhang, L. Wang, J. Zhang, L. Gu, X. Zhai, X. Zhai, X. Sha, and S. Chang, “Consecutive context perceive generative adversarial networks for serial sections inpainting,” in Proceedings of the IEEE Conference on Big Data Research, 2020.
[14] X. Li, X. Zhu, Z. Zhou, Q. Sun, and Q. Liu, “Focusing on persons: Colorizing old images learning from modern historical movies,” in Proceedings of the IEEE Conference on Computer Vision, 2021.
[15] “Meitu,” https://www.meitu.com/en
[16] “Remini photo enhancer,” https://www.bigwinepot.com/index en.htm

Copyright

Copyright © 2022 Rakshitha Reddy Potu, Naalla Sushma, Baru Shiva Kumar, Aruna Kumari Kumbhagiri. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Authors : Rakshitha Reddy Potu

Paper Id : IJRASET43964

Publish Date : 2022-06-08

ISSN : 2321-9653

Publisher Name : IJRASET

DOI Link : https://doi.org/10.22214/ijraset.2022.43964

About Us

International Journal for Research in Applied Science and Engineering Technology (IJRASET) is an international peer reviewed, online journal published for the enhancement of research in various disciplines of Applied Science & Engineering Technologies.

