NVIDIA has developed a deep learning model that transforms rough sketches into photorealistic images, using generative adversarial networks (GANs) to convert segmentation maps into lifelike scenes. GauGAN, a tribute to post-Impressionist painter Paul Gauguin, is the interactive app built on the model. “It’s much easier to brainstorm designs with simple sketches, and this technology is able to convert sketches into highly realistic images,” said Bryan Catanzaro, vice president of applied deep learning research at NVIDIA.
Simply put, GauGAN lets users draw their own segmentation maps and manipulate the scene, assigning each segment a label such as sand, sky, sea or snow. The deep learning model, trained on a million images, then fills in the landscape with remarkable results. For example, if you draw in a pond, nearby elements like trees and rocks will appear as reflections in the water. Or simply switch a segment label from “grass” to “snow” and the entire image changes to a winter scene.
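To make the idea concrete, a segmentation map is just a grid of class labels, one per pixel. The sketch below (a toy illustration, not GauGAN's actual label scheme or API) shows how relabeling one segment, such as swapping "grass" for "snow", re-themes the whole scene description that a model like GauGAN would render:

```python
import numpy as np

# Hypothetical label IDs for illustration only.
LABELS = {"sky": 0, "sea": 1, "sand": 2, "grass": 3, "snow": 4}

# A tiny 4x6 "scene": sky on top, sea in the middle, grass at the bottom.
seg_map = np.array([
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [3, 3, 3, 3, 3, 3],
])

# Switching a segment's label is a single relabeling pass: every "grass"
# pixel becomes "snow", and the rendered image would turn wintry.
seg_map[seg_map == LABELS["grass"]] = LABELS["snow"]

print(np.unique(seg_map))  # label IDs now present in the map
```

The generator network consumes such a label grid and synthesizes matching textures and lighting for each region; the user only ever edits the labels.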