To date, Firefly has been used by numerous Adobe enterprise customers, including PepsiCo/Gatorade, IBM, Mattel, and others, to optimize workflows and scale content creation. Alongside that adoption, a concern has taken hold: eventually, AI-generated content will make up a large portion of training data, and the results will be AI slop, that is, wonky, erroneous, or unusable images. This self-perpetuating cycle could degrade output quality until the tools become useless. It is especially worrisome for artists who feel their unique styles are already being co-opted by generators, a grievance at the heart of ongoing copyright-infringement lawsuits.
- The samples shared in the announcement show a powerful model, capable of understanding context and producing coherent generations.
- IBM is experimenting with Adobe Firefly to optimize workflows across its marketing and consulting teams, focusing on developing reliable AI-powered creative and design outputs.
- Adobe has also improved its existing Firefly Image 3 Model, claiming it can now generate images four times faster than previous versions.
- It also emerged that Canon, Nikon, and Leica will support Adobe's Camera to Cloud (C2C) feature, which allows photos and videos to be uploaded directly to Frame.io.
But as the Lenovo example shows, there is a lot of careful groundwork required to safely harness the potential of this new technology. If you look at the amount of content we need to achieve end-to-end personalization, it's astronomical: to give one example, we just launched a campaign covering four products across eight marketing channels, four languages, and three variations. Speeding up content delivery in this way means teams can adjust and fine-tune the experience in real time as trends or needs change.
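To make that scale concrete, here is a minimal back-of-the-envelope sketch based on the campaign figures quoted above. It assumes every combination of product, channel, language, and variation needs its own asset, which is an assumption for illustration rather than something stated in the campaign details.

```python
# Back-of-the-envelope content-volume estimate for the campaign described above.
# Assumption (not from the source): each combination needs a distinct asset.
products = 4
channels = 8
languages = 4
variations = 3

total_assets = products * channels * languages * variations
print(f"Distinct assets to produce: {total_assets}")  # 4 * 8 * 4 * 3 = 384
```

Even under those modest numbers, a single campaign implies hundreds of distinct deliverables, which is the gap that generative tooling is being asked to close.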
However, at the moment, these latest generative AI tools, many of which were speeding up workflows in recent months, are now slowing users down with strange, mismatched, and sometimes baffling results. "The generative fill was almost perfect in the previous version of Photoshop to complete this task. Since I updated to the newest version (26.0.0), I get very absurd results," one user explains. Since the update, generative fill has been adding objects to a person in the image, including a rabbit and letters on the person's face. Illustrator and Photoshop have received GenAI tools with the goal of improving the user experience and giving users more freedom to express their creativity and skills. Adobe, for its part, says its commitment to evolving its assessment approach as technology advances is what helps it balance innovation with ethical responsibility.
