
Adobe brings Firefly’s generative AI to Photoshop

Photoshop is getting an infusion of generative AI today with a number of new Firefly-based features. They let users extend images beyond their borders with Firefly-generated backgrounds, add objects to images from a text prompt, and remove objects with a new Generative Fill feature that is far more precise than the existing Content-Aware Fill.

For now, these features will only be available in the beta version of Photoshop. Adobe is also making some of these capabilities available to Firefly beta users on the web (Firefly users, by the way, have now created over 100 million images on the service).


The neat thing here is that the integration lets Photoshop users describe the image or object they want Firefly to create in a natural-language text prompt. As with all generative AI tools, the results can occasionally be unpredictable. By default, Adobe provides three variations for every prompt, though unlike in the Firefly web app, there is currently no option to iterate on one of them to see similar variations of a given result.
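Adobe hasn't published a developer API for this integration, but as a mental model, the prompt-to-variations round trip might look something like the hypothetical sketch below, where the endpoint, request fields, and response shape are all assumptions rather than documented Firefly behavior:

```python
import requests

# Illustration only: Adobe has not documented a public endpoint for this
# integration, so the URL, auth scheme, and parameters below are assumptions.
FIREFLY_ENDPOINT = "https://firefly.example.invalid/v1/generate"

def generate_variations(prompt: str, api_key: str, n: int = 3) -> list[bytes]:
    """Request several candidate images for one natural-language prompt.

    Mirrors the behavior described above: one prompt in, three
    variations back for the user to pick from.
    """
    response = requests.post(
        FIREFLY_ENDPOINT,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"prompt": prompt, "num_variations": n},
        timeout=60,
    )
    response.raise_for_status()
    # Assumed response shape: {"images": ["https://...", ...]}
    return [requests.get(url, timeout=60).content for url in response.json()["images"]]
```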

To do all of this, Photoshop sends only parts of a given image to Firefly (not the entire image, though the company is also experimenting with that) and places the results on a new layer.
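That description suggests a crop-generate-composite loop. Here is a minimal sketch of that pattern with Pillow, assuming a hypothetical `call_firefly` stand-in for the actual network round trip, which Adobe hasn't documented:

```python
from PIL import Image, ImageFilter

def call_firefly(region: Image.Image) -> Image.Image:
    """Placeholder for the Firefly round trip; a real implementation would
    upload `region` and download generated pixels. A blur stands in here
    so the sketch runs end to end."""
    return region.filter(ImageFilter.GaussianBlur(4))

def generative_fill(image: Image.Image, mask: Image.Image) -> Image.Image:
    """Send only the selected part of the image out for generation, then
    composite the result back on its own layer, as the article describes."""
    box = mask.getbbox()  # bounding box of the user's selection
    generated = call_firefly(image.crop(box))

    # Keep the result on a separate layer so the edit stays non-destructive.
    layer = Image.new("RGBA", image.size, (0, 0, 0, 0))
    layer.paste(generated.convert("RGBA"), box)
    return Image.alpha_composite(image.convert("RGBA"), layer)
```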

Maria Yap, the vice president of Digital Imaging at Adobe, gave me a demo of these new features ahead of today's announcement. As with all things generative AI, it's often hard to predict what the model will return, but some of the results were surprisingly good. For instance, when asked to generate a puddle beneath a running corgi, Firefly appeared to take the overall lighting of the image into account, even generating a realistic reflection. Not every result worked quite as well (a bright purple puddle was also an option), but the model does seem to do a pretty good job at adding objects and especially at extending existing images beyond their frame.

Given that Firefly was trained on the photos available in Adobe Stock (as well as other commercially safe images), it's perhaps no surprise that it does especially well with landscapes. Like most generative image models, though, Firefly struggles with rendering text.

Adobe also ensured that the model returns safe results. This is partly due to the training set used, but Adobe has also implemented additional safeguards. “We married that with a series of prompt engineering things that we know,” explained Yap. “We exclude certain terms, certain words that we feel aren’t safe. And then we’re even looking into another hierarchy of ‘if Maria selects an area that has a lot of skin in it, maybe right now — and you’ll actually see warning messages at times — we won’t expand a prompt on that one, just because it’s unpredictable. We just don’t want to go into a place that doesn’t feel comfortable for us.”

As with all Firefly images, Adobe will automatically apply its Content Credentials to any image that uses these AI-based features.
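Content Credentials are Adobe's implementation of the open C2PA provenance standard, so the attached manifest can be inspected outside Photoshop as well. As a hedged sketch, here is one way to dump it with the open-source c2patool CLI (invocation and output shape may vary by tool version):

```python
import json
import subprocess

def read_content_credentials(path: str) -> dict | None:
    """Return a file's C2PA manifest store (Content Credentials) as a dict,
    or None if the file carries none.

    Assumes the open-source c2patool CLI is installed and prints the
    manifest store as JSON when pointed at a file; check this against
    your installed version.
    """
    result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if result.returncode != 0:  # no manifest, or tool error
        return None
    return json.loads(result.stdout)

# Hypothetical usage:
# manifest = read_content_credentials("firefly-output.png")
```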

A lot of these features would also be quite useful in Lightroom. Yap agreed, and while she wouldn't commit to a timeline, she confirmed that the company plans to bring Firefly to its photo management tool as well.

