The good and the bad of Lensa’s AI portraits

Lensa can create dozens of personalized images in an assortment of artistic styles.
[Image: a collage of Lensa’s AI-generated portraits]
Here are some of the portraits Lensa came up with for me. Harry Guinness / Lensa

Lensa is an AI-powered photo editing app that has risen to the top of app stores around the world. Although it has been available since 2018, it’s only with the release of its Magic Avatars feature last month that it became a worldwide social media hit. If you’ve been on Twitter, Instagram, or TikTok in the last few weeks, you’ve almost certainly seen some of its AI-generated images in a variety of styles.

Lensa relies on Stable Diffusion (which we’ve covered before) to make its Magic Avatars. Users upload between 10 and 20 headshots through the iOS or Android app, and Lensa uses them to train a personalized version of Stable Diffusion’s image generation model. Because the model is tuned to one specific person, Lensa can create dozens of images in an assortment of artistic styles that actually resemble that person instead of the abstract idea of one. Or at least, it can do so often enough to be impressive. There is a reason Magic Avatars are only available in packs of 50, 100, and 200 for $3.99, $5.99, and $7.99, respectively.
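
Prisma hasn’t published its training code, but the process described above closely resembles DreamBooth-style personalization, which can be sketched with Hugging Face’s diffusers library. Everything below (the base model ID, the “sks” placeholder token, the selfie file names, and the hyperparameters) is illustrative rather than Lensa’s actual setup:

```python
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import transforms
from diffusers import DDPMScheduler, StableDiffusionPipeline

device = "cuda"
model_id = "runwayml/stable-diffusion-v1-5"  # stand-in; Lensa's base model isn't public

pipe = StableDiffusionPipeline.from_pretrained(model_id).to(device)
unet, vae, text_encoder, tokenizer = pipe.unet, pipe.vae, pipe.text_encoder, pipe.tokenizer
noise_scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

# Freeze everything except the UNet, which learns the new identity.
vae.requires_grad_(False)
text_encoder.requires_grad_(False)
unet.train()
optimizer = torch.optim.AdamW(unet.parameters(), lr=2e-6)

preprocess = transforms.Compose([
    transforms.Resize(512),
    transforms.CenterCrop(512),
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5]),
])

# A rare placeholder token ("sks") gets bound to the subject; the selfie
# file names are hypothetical stand-ins for the 10-20 uploaded headshots.
images = torch.stack([
    preprocess(Image.open(p).convert("RGB"))
    for p in ["selfie_01.jpg", "selfie_02.jpg", "selfie_03.jpg"]
]).to(device)
prompt_ids = tokenizer(
    "a photo of sks person",
    padding="max_length",
    max_length=tokenizer.model_max_length,
    return_tensors="pt",
).input_ids.to(device)

with torch.no_grad():
    # Compress the photos into the latent space the UNet operates in.
    latents = vae.encode(images).latent_dist.sample() * 0.18215
    text_emb = text_encoder(prompt_ids)[0].repeat(latents.shape[0], 1, 1)

for step in range(400):  # a few hundred steps is typical for this technique
    noise = torch.randn_like(latents)
    t = torch.randint(
        0, noise_scheduler.config.num_train_timesteps,
        (latents.shape[0],), device=device,
    )
    noisy_latents = noise_scheduler.add_noise(latents, noise, t)
    pred = unet(noisy_latents, t, encoder_hidden_states=text_emb).sample
    loss = F.mse_loss(pred, noise)  # the UNet learns to predict the added noise
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Afterward, prompts like "portrait of sks person, fantasy art" render
# stylized images of that specific face.
```

Fine-tuning like this burns real GPU time for every single customer, which is likely part of why Magic Avatars are sold in bulk packs rather than one image at a time.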

Of course, Lensa’s Magic Avatars aren’t free from artifacts. AI models can generate some incredibly weird images that resemble monsters or abstract art instead of a person. The shapes of eyes, fingers, and other smaller details are more likely to be imperfect than, say, the position of someone’s mouth or nose. 

And like most AI generators, Lensa’s creations aren’t free from gender, racial, and other biases. In an article in The Cut called “Why Do All My AI Avatars Have Huge Boobs?” Mia Mercado (who is half white, half Filipina) wrote that her avatars were “underwhelming.” According to Mercado, “the best ones looked like fairly accurate illustrations.” Most, though, “showed an ambiguously Asian woman,” often with “a chest that can only be described as ample.”

[Related: Shutterstock and OpenAI have come up with one possible solution to the ownership problem in AI art]

Writing for MIT Technology Review, Melissa Heikkilä (who is similarly of Asian heritage) calls her avatars “cartoonishly pornified.” Out of 100 portraits that she generated, 16 were topless and another 14 had her “in extremely skimpy clothes and overtly sexualized poses.” And this problem isn’t limited to Lensa. Other AI image generators that use Stable Diffusion have also created some incredibly questionable images of people of color.

The issue is widespread enough that Prisma Labs, the company behind Lensa, addresses it in an FAQ on its website, under the question: “Why do female users tend to get results featuring an over sexualised look?” The short answer: “Occasional sexualization is observed across all gender categories, although in different ways.”

Per the FAQ, the problem can be traced back to the dataset that Stable Diffusion was initially trained on: LAION-5B, which contains almost 6 billion unfiltered image-text pairs scraped from around the internet. Stability AI (the makers of Stable Diffusion) has openly acknowledged that “the model may reproduce some societal biases and produce unsafe content.” That includes sexualized images of women and generic, stereotypical, and racist images of people of color.
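
To give a sense of what cleaning that data up looks like: LAION publishes its metadata with model-predicted scores, including a punsafe value estimating the probability that an image is NSFW, and Stability AI has said later Stable Diffusion versions were trained on data filtered by that score. Here is a minimal sketch of that kind of filtering, assuming a hypothetical local metadata shard (the file name and cutoff are illustrative, and the thresholds used in practice have varied between releases):

```python
import pandas as pd

# Hypothetical local shard of LAION metadata (the real dataset is split
# across many Parquet files). "punsafe" is LAION's predicted probability
# that an image is NSFW; the 0.1 cutoff here is illustrative.
meta = pd.read_parquet("laion_metadata_shard.parquet")
safe = meta[meta["punsafe"] < 0.1]
print(f"Kept {len(safe):,} of {len(meta):,} image-text pairs")
safe.to_parquet("laion_metadata_shard.filtered.parquet")
```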

Both Stability AI and Prisma claim to have taken steps to minimize the prevalence of NSFW outputs, but these AI models are effectively black boxes: not even their developers fully know every association a model has learned. Short of creating a bias-free image database to train on, some societal biases will probably always be present in AI generators’ outputs.
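
One concrete, inspectable example of such a step: the open-source Stable Diffusion release bundles a CLIP-based safety checker that screens images after generation. Whether Lensa runs this exact component isn’t public, but the mechanism is easy to see in the diffusers pipeline:

```python
import torch
from diffusers import StableDiffusionPipeline

# The stock pipeline runs every output image through a CLIP-based safety
# checker and replaces anything it flags with a blank black image.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

result = pipe("portrait of a person, digital art, detailed face")
print(result.nsfw_content_detected)  # one boolean per generated image
result.images[0].save("portrait.png")
```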

And that’s if everyone is operating in good faith. TechCrunch was able to create new NSFW images of a famous actor using Lensa by uploading a mixture of genuine SFW images of the actor and photoshopped images of the actor’s face on a topless model’s body. Of the 100 images created, 11 were “topless photos of higher quality (or, at least with higher stylistic consistency) than the poorly done edited topless photos the AI was given as input.” Of course, this is against Lensa’s terms of service, but that hasn’t exactly stopped people in the past.

The most promising feature of these AI generators, though, is how fast they are improving. While it’s undeniable that marginalized groups are seeing societal biases reflected in their outputs right now, if these models continue to evolve, and if their developers remain receptive to feedback, then there is reason to be optimistic that they can do more than just reflect back the worst of the internet.