Review: Adobe Firefly Integration Into Creative Cloud
While there are strong use cases, the beta launch exposes Firefly’s deficiencies.
Adobe’s recent announcement of Firefly integration into Creative Cloud apps unleashed Twitter memes and hype galore. The power of visual generative AI is now available in primary creative industry software like Premiere, Photoshop, Illustrator, and After Effects.
Most notably, Adobe offers a meaningful user interface for generative AI, a critical deficiency in other visual generative AI tools like Midjourney, Stable Diffusion, and DALL-E. The additions empower new AI-fueled creative executions within traditional media, as well as stronger manipulation of generative AI output.
The tools have already found immediate use. Social media has been ablaze with extensions of famous album covers, such as the Abbey Road image above, featured in Ars Technica last week. But beyond neat tricks like that, how useful are the Firefly integrations for marketers?
How Firefly Works Inside Adobe
Well, there is good news and bad news. The good news is that the interface is familiar: you can use layers to protect existing work, and the magic wand and other selection tools to access generative fill. Images and videos become a creative canvas to “paint” AI upon.
Suggested uses include adding to an image, removing or replacing parts of it, or extending it. Adobe promises that the AI takes the original image’s lighting, color, and tone into account and applies them to the additions. Experienced users will see this as an awesome evolution of Adobe’s Content-Aware Fill tool.
The bad news is that Firefly is still Firefly, which means it can be severely limited. Per my review earlier this spring, Firefly seems to be about six to nine months behind Midjourney, and Adobe’s late start will slow widespread adoption.
Experienced generative AI users may get frustrated with some of the clunkiness. Professional graphic artists who demand high-quality work will also tire of wonky images and will use the tool for selective purposes. Twitter users, well, that’s a different story!
Extending Images
As a photographer, I used a few recent images to test Adobe’s capabilities. Let’s start with meme creators’ favorite generative fill use case: extending images.
I loved the above Pride image, shot at CityCenter DC. However, it is a tad off-center, with the central concrete line running a little to the left, and it could use a bit more negative space. Let’s see how Adobe’s generative fill tool handled extending the frame to the left and adding a little more negative space below.
The image was successfully edited, with about a centimeter added to the left and bottom. What you don’t see in the finished image are the variants where the generative AI missed the mark; it only took two tries and six iterations to get a workable image.
Then I needed to perform some additional retouching to finish the image. But all in all, my vision was met, and the edit took me less than 10 minutes.
Adding Elements to an Image
I also took photographs of CityCenter DC adorned in Pride umbrellas. It would make sense to post an image of an LGBTQ+ couple in the walkway. However, it was a street photo and I did not have models with me. Could Adobe Firefly save the day?
I tried four different times to insert a gay couple in the walkway, and this was as close to my vision as I could get. It seems that when humans are not the sole subject of the photograph, Firefly cannot render them very well.
A second attempt to insert a lesbian couple into the image also failed. The above couple looked good at first, but the woman on the left has a missing arm and two legs merged into one foot. Nope.
Worse were the filters that prevented me from generating subjects like drag queens and gay couples. To get the above image, I had to work around the block and prompt “two men walking together holding hands.” Certainly, these words are blocked to prevent hateful image generation. However, this image is nothing of the sort; on the contrary, the intent was to celebrate Pride.
If I needed the image for commercial use, I would have to license a photo with an LGBTQ+ couple and create a composite with them. However, Firefly did approximate the lighting correctly, including the shadow work.
How about a simpler image?
I decided to go with a more meme-ish picture, since that’s how most people seem to be using Adobe’s generative AI. Using the above photo of my 12-year-old rescue dog Michelle begging, I inserted images of what she was probably hoping for.
And there you have it: visual generative AI at work. It’s kind of like we’re back in the 2000s when Twitter first launched.
Unfortunately, for those looking to make complex photorealistic composites, Firefly offers a painful experience. In the short term, it probably makes sense to source your own image elements and blend them as you normally would.
Removing Items, Cleaning Up Images
Perhaps the use case I liked most was clean-up. I captured a sunrise at National Harbor last week that had some small power wires on the left side and a weird cloud on the right side of the frame.
Normally, I might not have edited the image at all: while it’s dramatic, cleaning up power lines can be a pain in the butt, even with content-aware tools. That may not seem like much work for a photo like this, but photography is currently a hobby for me. So, if it’s not fun…
Still, this seemed like the perfect image for some simple generative fill edits, to see if the tool could simplify matters. Within five minutes, I had the following image.
Not only did I clean up the power lines and the cloud, but I also added a silhouetted boat in the foreground to offer a little perspective. Not bad.
How about a more complex image? I love the below image I shot in Portugal, but removing the two people and the complex background behind them surpasses the capabilities of Photoshop’s Content-Aware Fill and would take a significant amount of manual editing.
Generative fill did an ace job, replacing the arm on the right and the young woman on the lower left within two minutes; the only thing preventing it from moving even faster was my input process. The black barrier, street lamp, and yellow street barrier took a little longer, but still came out well.
I also slightly extended the photo on the bottom and to the right with generative fill. Again, this was almost effortless with a few touch-ups and some burning to help blend the image. Then I cropped it down for composition. Overall, I am much happier with this image!
The Final Verdict
The Firefly integration into Photoshop, and by extension other Creative Cloud apps, offers immediate and helpful use cases for professional creatives. It can help social media communicators develop and clean up content, too. Overall, I would give it 3 1/2 out of 5 stars.
However, Firefly’s core image generation capabilities lag behind the competition, limiting generative fill as a primary source for professional-grade composites and other imagery. Future versions need to achieve more photorealistic imagery, address the Achilles’ heel of generative AI (body parts and perspective), and continue evolving composite blending.
There are other visual tools beyond Creative Cloud. Eventually, partnerships and acquisitions will occur among the Apples, Capture Ones, Luminars, Stable Diffusions, and Midjourneys of the world. Of course, Microsoft and OpenAI have their partnership, but Microsoft’s graphic design apps and DALL-E 2 are not market-leading products… yet.
Can Adobe establish enough of a lead to fend off the inevitable competition? It certainly has the resources to do so. Adobe may make an acquisition of its own, too, depending on how long that takes. Stay tuned for more developments.