A research team led by the Chinese Academy of Sciences and the City University of Hong Kong has developed an AI-powered machine learning approach capable of generating lifelike human portraits from simple sketches. Current deep image-to-image translation techniques can generate human face images from sketches quickly, but they tend to overfit to their input sketches; in other words, they require professional-quality sketches, or even edge maps, to produce good results. Read more for a video and additional information.
Other deep-learning-based solutions for sketch-to-image translation interpret input sketches as fixed constraints and then attempt to recreate the missing texture information between strokes. This new approach instead learns the space of plausible face sketches from sketches of real faces and finds the point in this space that best approximates the input sketch, treating it as a 'soft' constraint used to guide image synthesis. It consists of three modules: CE (Component Embedding), FM (Feature Mapping), and IS (Image Synthesis).
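To illustrate the idea, here is a minimal, hypothetical sketch of that three-stage pipeline. All names are illustrative, and the learned deep networks of the actual system are replaced by simple stand-ins: CE is approximated as projection onto a linear subspace (the "space of plausible sketches"), while FM and IS are placeholders.

```python
import numpy as np

# Illustrative only -- not the authors' code. The real CE, FM, and IS
# modules are learned neural networks; here CE is a linear projection.

def component_embedding(sketch_vec, basis):
    """CE stand-in: project a component sketch onto the learned space of
    plausible sketches (approximated by an orthonormal row basis).
    The result is the nearest plausible sketch -- a 'soft' constraint."""
    return basis.T @ (basis @ sketch_vec)

def feature_mapping(embedded):
    """FM stand-in: map refined component features to feature maps
    (identity here; a learned decoder in the paper)."""
    return embedded

def image_synthesis(feature_maps):
    """IS stand-in: fuse per-component feature maps into one image
    (a simple sum here; a conditional generator in the paper)."""
    return sum(feature_maps.values())

def sketch_to_face(component_sketches, bases):
    """Run each face component through CE, then FM, then fuse with IS."""
    refined = {name: component_embedding(vec, bases[name])
               for name, vec in component_sketches.items()}
    maps = {name: feature_mapping(vec) for name, vec in refined.items()}
    return image_synthesis(maps)
```

The key point the toy projection captures is that an imperfect input sketch is pulled toward the nearest plausible sketch rather than being followed exactly, which is what lets the system tolerate amateur input.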
"Recent deep image-to-image translation techniques allow fast generation of face images from freehand sketches. However, existing solutions tend to overfit to sketches, thus requiring professional sketches or even edge maps as input. To address this issue, our key idea is to implicitly model the shape space of plausible face images and synthesize a face image in this space to approximate an input sketch," the paper states.