AI-Powered Face Reconstruction from Sketches Using GANs & RNNs
DOI: https://doi.org/10.65138/ijmdes.2025.v4i12.289

Abstract
Converting a simple pencil sketch into a realistic, full-color image is still a difficult challenge in the field of computer vision and artificial intelligence. In this work, we design a system that brings together Generative Adversarial Networks (GANs) [1] and Recurrent Neural Networks (RNNs) to generate realistic facial images from grayscale sketches. The generator starts with a convolution-based encoder, then uses a bidirectional LSTM to understand how different parts of the image relate to each other, before reconstructing the output in color through a decoder. To judge image quality, we apply a PatchGAN discriminator [2] that examines small areas, encouraging detailed textures and sharp edges. Training uses paired sketches and photographs, combining adversarial, perceptual, and reconstruction losses so that the output retains both realistic features and correct structure. For ease of use, we developed a lightweight Flask web tool [12] that lets a user upload a sketch and instantly see the generated result. Our experiments show that this approach produces faces that are smoother, more coherent, and visually closer to real images than those from baseline methods.
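The abstract's generator pipeline (convolutional encoder, bidirectional LSTM over the feature map, colour decoder) can be sketched as a minimal PyTorch module. This is an illustrative reconstruction only: the layer widths, kernel sizes, and the choice to run the LSTM over each row of the feature map are assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class SketchToFaceGenerator(nn.Module):
    """Hypothetical encoder -> BiLSTM -> decoder generator, after the abstract."""

    def __init__(self, feat=64, lstm_hidden=128):
        super().__init__()
        # Convolutional encoder: 1-channel grayscale sketch -> feature map (4x downsampled)
        self.encoder = nn.Sequential(
            nn.Conv2d(1, feat, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(feat, feat * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        # Bidirectional LSTM scans each row of the feature map so distant
        # regions of the face can influence one another
        self.lstm = nn.LSTM(feat * 2, lstm_hidden,
                            batch_first=True, bidirectional=True)
        self.project = nn.Conv2d(2 * lstm_hidden, feat * 2, 1)
        # Transposed-convolution decoder: features -> 3-channel colour image in [-1, 1]
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat * 2, feat, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        f = self.encoder(x)                               # (B, C, H, W)
        b, c, h, w = f.shape
        seq = f.permute(0, 2, 3, 1).reshape(b * h, w, c)  # each row as a sequence
        out, _ = self.lstm(seq)                           # (B*H, W, 2*hidden)
        out = out.reshape(b, h, w, -1).permute(0, 3, 1, 2)
        return self.decoder(self.project(out))
```

In a training loop following the abstract, this generator would be paired with a PatchGAN discriminator and optimised with a weighted sum of adversarial, perceptual, and pixel-wise reconstruction losses; the weights are not given in the abstract.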
License
Copyright (c) 2025 M. Deepak Rao, Akshatha Kamath, Deeksha Patkar, Akshatha, Adithi Nayak

This work is licensed under a Creative Commons Attribution 4.0 International License.