Adobe built its Firefly AI art generator to avoid bias and copyright issues

The goal of the new AI image generator is to be as user-friendly as possible. Here's how it will work.
Firefly is currently in beta. Image: Adobe


Artificial intelligence systems that can generate images have been big news for the past year. OpenAI’s DALL-E 2 and Stable Diffusion have dominated the headlines, and Google, Meta, and Microsoft have all announced features they are working on. But one huge name has been conspicuously absent: Adobe. Today, that changes with the announcement of Firefly, a family of generative AI models.

For more than two decades, Adobe has led the digital image-making and manipulation industry. Its flagship product, Adobe Photoshop, has become a verb, against its will. And while its products have long had AI-powered features, like Content-Aware Fill and Neural Filters, Firefly is Adobe’s first publicly announced image-generating AI. Initially, the beta will integrate with Express, Photoshop, Illustrator, and the marketing-focused Adobe Experience Manager.

What Adobe’s Firefly will do 

Like DALL-E 2 and Stable Diffusion, Firefly can take a text prompt and turn it into an image. Unlike those two tools, however, Firefly is designed to give more consistent results. On a video call with PopSci, Alexandru Costin, Adobe’s vice president of Generative AI and Sensei, described the prompts most people use as “word soup.” To get great results with Stable Diffusion, for example, you often need to pad your prompt with buzzwords like “4K,” “trending on artstation,” “hyper-realistic,” “digital art,” and “super detailed.” 

So, instead of saying something like “batman riding a scooter,” you say “batman riding a scooter, cinematic lighting, movie still, directed by Christopher Nolan.” It’s hacky, but for most generative AIs, it’s the best way to get good results. 
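To make the hack concrete, here is a minimal, hypothetical sketch of this kind of prompt padding in Python. The subject and buzzwords come from the examples above; nothing here is a documented API, just string assembly:

```python
# Hypothetical sketch of "word soup" prompting: style and quality buzzwords
# are tacked onto the subject to coax better output from a text-to-image model.
subject = "batman riding a scooter"
buzzwords = ["cinematic lighting", "movie still", "4K", "hyper-realistic"]

prompt = ", ".join([subject, *buzzwords])
print(prompt)
# batman riding a scooter, cinematic lighting, movie still, 4K, hyper-realistic
```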

Firefly is taking a different approach. The overall look and feel of a generated image is determined by drop-downs and buttons. You can type “batman riding a scooter” and then select from the various options to dial in the look you want. Costin also explained that the image doesn’t regenerate each time you select a new style, so if you’re happy with the content of the image, you don’t have to worry that changing the style will create something completely different. The whole approach aims to be a lot more user-friendly. 
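Adobe hasn’t published how Firefly keeps the content fixed while the style changes, but you can approximate the behavior Costin describes with open models by reusing the same random seed on every generation and varying only the style text. A rough sketch using Hugging Face’s diffusers library and Stable Diffusion (the model ID and style list are illustrative stand-ins for Firefly’s drop-downs, not Adobe’s implementation):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load an open text-to-image model (requires a CUDA GPU and the
# diffusers package installed).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

subject = "batman riding a scooter"
styles = ["watercolor", "neon synthwave", "pencil sketch"]

for style in styles:
    # Re-seeding with the same value before each call keeps the underlying
    # composition roughly stable, so only the style changes between images.
    generator = torch.Generator("cuda").manual_seed(42)
    image = pipe(f"{subject}, {style} style", generator=generator).images[0]
    image.save(f"batman_{style.replace(' ', '_')}.png")
```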

“many fireflies in the night” Image: Adobe

As well as creating new images from text prompts, Firefly will be able to generate text effects. The example Costin showed (above) rendered the word “Firefly” with the prompt “many fireflies in the night, bokeh effect.” It looks impressive, and it shows how generative AIs can integrate with other forms of art and design. 

What Firefly aims not to do

According to Costin, Adobe wants to employ AI responsibly, and in his presentation he directly addressed two of the most significant issues with generative AI: copyright concerns and biases. 

Copyright is a particularly thorny issue for generative AIs. Stability AI, the maker of Stable Diffusion, is currently being sued by a group of artists and, separately, by the stock image service Getty Images for allegedly training Stable Diffusion on their images without a license. The example images in which Stable Diffusion reproduces a blurry imitation of the Getty Images watermark are particularly damning. 

Adobe has sidestepped these kinds of copyright problems by training Firefly on hundreds of millions of Adobe Stock images, along with openly licensed and public-domain content. This should protect users from potential copyright claims, especially if they intend to use generated content for commercial purposes. 

This llama is stylish. Image: Adobe

Similarly, Costin says Adobe has dealt with potential biases in its training data by designing Firefly to deliberately generate diverse images of people of different ages, genders, and ethnicities. “We don’t want to carry over the biases in the data,” he says, adding that Adobe has proactively addressed the issue. Of course, you can still prompt the AI to render something specific, but when left to its own devices, it should avoid producing biased results. 
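Adobe hasn’t detailed the mechanism, but one widely used mitigation is to diversify underspecified prompts at generation time: when a prompt mentions a person without demographic details, the system samples a descriptor so that, across many generations, the outputs span different ages, genders, and ethnicities. A hypothetical Python sketch of that idea (the function and descriptor lists are illustrative, not anything Adobe has described):

```python
import random

# Illustrative descriptor pools; a real system would need far more care about
# coverage, wording, and when rewriting a prompt is appropriate at all.
AGES = ["young", "middle-aged", "elderly"]
ETHNICITIES = ["Black", "East Asian", "South Asian", "Hispanic", "white"]

def diversify(prompt: str) -> str:
    """Prepend sampled demographic descriptors to the first generic
    mention of 'person' in the prompt."""
    if "person" in prompt:
        descriptor = f"{random.choice(AGES)} {random.choice(ETHNICITIES)}"
        return prompt.replace("person", f"{descriptor} person", 1)
    return prompt

print(diversify("a person riding a scooter"))
# e.g. "a middle-aged East Asian person riding a scooter"
```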

While Firefly is launching in beta, Adobe has big plans. “The world is going to be transformed by AI,” says Costin, and Adobe intends to be part of it. 

Going forward, Adobe envisions a future where creators can train their own AI models on their own work, and where generative AI integrates seamlessly across its full range of products. In theory, this would let artists generate whatever assets they need right in Photoshop or Illustrator, and treat them as they would any other image or block of text. 

If you want to check Firefly out, you can apply to join the beta now.