Fake(r) Models

A new deep learning algorithm can generate high-resolution, photorealistic images of people — faces, hair, outfits, and all — from scratch.

The AI-generated models are the most realistic we’ve encountered, and the tech will soon be licensed out to clothing companies and advertising agencies interested in whipping up photogenic models without paying for lights or a catering budget. At the same time, similar algorithms could be misused to undermine public trust in digital media.

All of these models are purely AI-generated; none of them are real people.

The algorithm was developed by DataGrid, a tech company housed on the campus of Japan’s Kyoto University, according to a press release.

In a video showing off the tech, the AI morphs and poses model after model as their outfits transform, bomber jackets turning into winter coats and dresses melting into graphic tees.

Specifically, the new algorithm is a Generative Adversarial Network (GAN). That’s the kind of AI typically used to churn out new imitations of something that exists in the real world, whether they be video game levels or images that look like hand-drawn caricatures.
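A GAN pits two networks against each other: a generator that fabricates samples and a discriminator that tries to tell fakes from real data, each improving in response to the other. As a loose illustration of that adversarial loop (not DataGrid's actual system, which is unpublished), here is a toy one-dimensional GAN in plain NumPy that learns to mimic samples from a Gaussian "real data" distribution; the model sizes, learning rate, and step count are all illustrative choices.

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def train_toy_gan(steps=3000, batch=64, lr=0.05, seed=0):
    """Tiny 1-D GAN: generator g(z) = a*z + b vs. discriminator D(x) = sigmoid(w*x + c)."""
    rng = np.random.default_rng(seed)
    a, b = 0.1, 0.0   # generator parameters (starts far from the real data)
    w, c = 0.0, 0.0   # discriminator parameters
    for _ in range(steps):
        real = rng.normal(4.0, 1.0, batch)   # "real" data: N(4, 1)
        z = rng.normal(0.0, 1.0, batch)      # generator noise input
        fake = a * z + b

        # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake))
        dr = sigmoid(w * real + c)
        df = sigmoid(w * fake + c)
        w += lr * (np.mean((1 - dr) * real) - np.mean(df * fake))
        c += lr * (np.mean(1 - dr) - np.mean(df))

        # Generator step: gradient ascent on the non-saturating objective log D(fake)
        z = rng.normal(0.0, 1.0, batch)
        fake = a * z + b
        df = sigmoid(w * fake + c)
        a += lr * np.mean((1 - df) * w * z)
        b += lr * np.mean((1 - df) * w)

    # Draw samples from the trained generator
    z = rng.normal(0.0, 1.0, 1000)
    return a * z + b

samples = train_toy_gan()
print(float(np.mean(samples)))  # should land near the real-data mean of 4.0
```

The same adversarial recipe, scaled up to deep convolutional networks and trained on photographs, is what lets systems like DataGrid's produce full-body images rather than one-dimensional numbers.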

Photorealistic Media

Past attempts to create photorealistic portraits with GANs focused just on generating faces. Those faces had flaws like asymmetrical ears, mismatched jewelry, bizarre teeth, and glitchy blotches of color bleeding in from the background.

DataGrid’s system does away with the extraneous detail that can trip up such algorithms, instead posing its AI models against a nondescript white background under realistic-looking lighting.

Each time scientists build a new algorithm that can generate realistic images, or deepfakes indistinguishable from real photos, it serves as a fresh warning that AI-generated media could be misused to create manipulative propaganda. Here’s hoping this algorithm stays confined to the realm of fashion catalogs.