

Domain

DEEP LEARNING

Title

Deep Learning-Based Image Inpainting for Object Removal and Completion

Abstract

Image inpainting (or image completion) is the process of reconstructing lost or corrupted parts of an image. It is used to fill in missing or damaged regions, for example to remove an object from a photograph, suppress image noise, or restore an old photograph. The goal is to generate new pixels that are consistent with the surrounding area so that the image looks as if the missing or corrupted parts were never there. Image inpainting can be performed with a variety of techniques, such as texture synthesis, patch-based methods, and deep learning models. Deep learning-based image inpainting typically uses a neural network to generate the new pixels that fill the missing parts of an image. Different network architectures can be used for this purpose, including Convolutional Neural Networks (CNNs), Generative Adversarial Networks (GANs), Transformer-based models, Flow-based models, and Diffusion models. In this work, we focus on image inpainting with Diffusion models, whose task is to produce a set of diverse and realistic inpainted images for a given deteriorated image. Diffusion models fill in missing pixels through an iterative diffusion process in which the missing pixels are progressively updated based on the surrounding context. The process is controlled by a set of parameters that can be learned from data. The advantage of diffusion models is that they can handle large missing regions while still producing visually plausible results. The challenges involved in training these models will also be discussed.
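As a rough illustration of the iterative update described above, the following minimal sketch runs a DDPM-style reverse diffusion loop and, at every step, re-imposes the known pixels at the matching noise level so that only the masked region is synthesized from the surrounding context (in the spirit of RePaint-like mask-guided sampling). The linear noise schedule, the placeholder `eps_model` denoiser, and all parameter names are illustrative assumptions for this sketch, not the specific model or training setup of this work.

```python
# Minimal sketch of mask-guided diffusion inpainting (assumed setup, not the
# exact method of this work). A trained noise-prediction network would replace
# the placeholder `eps_model` below.
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # assumed linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def eps_model(x_t, t):
    # Placeholder for a trained denoiser (e.g. a U-Net) that predicts the
    # noise in x_t. Returning zeros keeps the sketch runnable without weights.
    return np.zeros_like(x_t)

def inpaint(image, mask, rng=np.random.default_rng(0)):
    """image: clean pixels (valid where mask == 1); mask: 1 = known, 0 = missing."""
    x = rng.standard_normal(image.shape)          # start from pure noise
    for t in reversed(range(T)):
        # One reverse diffusion step over the whole image.
        eps = eps_model(x, t)
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(x.shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
        # Re-impose the known pixels, noised to the level of step t-1, so only
        # the masked region is generated from the surrounding context.
        ab_prev = alpha_bars[t - 1] if t > 0 else 1.0
        known = (np.sqrt(ab_prev) * image
                 + np.sqrt(1.0 - ab_prev) * rng.standard_normal(image.shape))
        x = mask * known + (1.0 - mask) * x
    return x
```

The key design choice this sketch highlights is that the network itself is unconditional: the mask only enters through the per-step compositing of known and generated pixels, which is one of the reasons diffusion-based inpainting can produce several diverse completions for the same deteriorated image.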