Turning Your Doodles Into Reality
NVIDIA has already demonstrated its AI expertise before; for example, it previously built a model that could generate (in most cases) convincingly realistic human faces. Determined to keep showcasing its AI capabilities, NVIDIA has now released a deep learning model that can turn drawings into full-blown paintings.
Using a network called GauGAN (the “Gau” being a reference to the famous French painter Gauguin), you can turn a simple doodle into a painting complete with details such as shadows and textures. This is done using the concept of adversarial networks: the system doesn’t need to know what your doodle actually depicts. Instead, one network (the generator) tries to make your image as life-like as possible, while the other (the discriminator) tries to guess whether an image is real or generated, and the two compete, each forcing the other to improve. This leads to some great results, albeit trained mostly on natural and urban imagery.
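To make the generator-versus-discriminator idea concrete, here is a deliberately tiny sketch of adversarial training, not GauGAN itself (which works on images and uses far larger networks): the “data” is just numbers drawn from a Gaussian, the generator is an affine map of noise, and the discriminator is logistic regression, with gradients worked out by hand. All names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" data: a 1-D Gaussian the generator must learn to imitate.
REAL_MEAN, REAL_STD = 4.0, 0.5

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: affine map of noise, g(z) = a*z + b
a, b = 1.0, 0.0
# Discriminator: logistic regression, D(x) = sigmoid(w*x + c)
w, c = 0.0, 0.0

lr, batch, steps = 0.05, 64, 2000
for _ in range(steps):
    real = rng.normal(REAL_MEAN, REAL_STD, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator step: ascend log D(real) + log(1 - D(fake)),
    # i.e. learn to score real samples high and fakes low.
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascend log D(fake) (non-saturating loss),
    # i.e. nudge the fakes toward where the discriminator scores high.
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(f"generator mean ~ {samples.mean():.2f} (real mean = {REAL_MEAN})")
```

Neither network is ever told what the real distribution is; the generator only ever sees the discriminator’s score, yet its output drifts toward the real data, which is the same competition that lets GauGAN fill a doodle with plausible texture.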
The technology doesn’t aim to replace your drawing or guess what you meant; it simply extends it to what it would look like if it were real. For example, if a kid handed you a doodle of a mountain (an upside-down V with a line down the middle), you could imagine a massive mountain with detail right down to the snow on its caps. The program aims to do something similar with the doodles passed into it.
This process of imagining detail from a simple concept is essentially how most artists approach realistic drawing. If an artist wanted to draw a mountain, they would first picture the detail in their head, then sketch simple outlines, and fill in the details and shading later to make the result as realistic as possible.
All of this goes to show that computers being bad at creativity was never something inherent to computers and AI, but rather a lack of suitable techniques, one that can potentially be overcome in the future.