Adobe Summit - with Firefly, Adobe adds accountability to generative AI
- Summary:
- Adobe Firefly isn't just another generative AI tool for image creation. Alongside it, Adobe is introducing a whole new infrastructure of digital content attribution and creator compensation.
Adobe wants to bring accountability to the emerging technology of generative AI by building credentials into images created with Firefly, its new generative AI imaging tool announced yesterday at Adobe Summit. Firefly has been trained on the company's Adobe Stock image library, along with openly licensed and public domain content. It was telling that the sole moment of spontaneous applause during the opening keynote came when David Wadhwani, President of the Digital Media Business, said that when Firefly reaches its commercial launch, Adobe aims to make sure that content "contributors are compensated for their effort."
The move recognizes the derivative nature of anything produced by generative AI, which is trained on large volumes of existing content. This adds a new worry for businesses that use the technology, as there's a risk that the owners of that existing content may pursue a claim if their IP rights are infringed. Adobe is therefore promoting the inclusion of a 'content credentials' tag in images as an industry-wide open standard that permanently encodes information about how an image was created. Anyone can use the information in the tag to discover whether the image has been altered, or whether it is AI-generated. Adobe also wants to give content creators the option of adding a universally recognized 'Do not train' tag to their images if they want to opt out of being included in an AI training set.
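To make that concrete, here is a minimal Python sketch of how an application might check an image's content credentials for those two signals, assuming the manifest has already been extracted as JSON with the CAI's open-source tooling (for example, the c2patool command-line utility). The assertion and field names follow the public C2PA specification as I understand it, and the file name is hypothetical, so treat this as illustrative rather than a definitive implementation.

```python
import json

# Load a content credentials manifest that has already been extracted from an
# image, e.g. with the CAI's open-source c2patool, which prints the manifest
# store as JSON. The file name here is hypothetical.
with open("manifest_store.json") as f:
    store = json.load(f)

# The active manifest describes the most recent provenance record for the asset.
active = store["manifests"][store["active_manifest"]]

# Walk the manifest's assertions. Labels such as "c2pa.actions" and
# "c2pa.training-mining" follow the public C2PA specification; exact field
# names may differ between spec versions, so treat this as a sketch.
ai_generated = False
do_not_train = False
for assertion in active.get("assertions", []):
    label, data = assertion.get("label", ""), assertion.get("data", {})
    if label.startswith("c2pa.actions"):
        for action in data.get("actions", []):
            # "trainedAlgorithmicMedia" is the IPTC digital source type
            # used to flag generative-AI output.
            if "trainedAlgorithmicMedia" in str(action.get("digitalSourceType", "")):
                ai_generated = True
    elif label.startswith("c2pa.training-mining"):
        # A creator opting out of AI training would mark this use "notAllowed".
        entries = data.get("entries", {})
        if entries.get("c2pa.ai_training", {}).get("use") == "notAllowed":
            do_not_train = True

print(f"AI-generated: {ai_generated}, opted out of AI training: {do_not_train}")
```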
These moves are being pursued through the Content Authenticity Initiative (CAI), an industry body with more than 900 members worldwide, which Adobe founded several years ago to create a global standard for trusted digital content attribution. The CAI's open-source tools are developed in collaboration with the non-profit Coalition for Content Provenance and Authenticity (C2PA). Given Adobe's strong presence in the content creator community through its Digital Media product family and the support of a large number of other companies, the initiative has a good opportunity to gain momentum. Adobe believes it's an important initiative, not only to protect the IP rights of content creators, but also to restore confidence in the veracity of images, video and other content distributed on the Web. Dana Rao, EVP, General Counsel and Chief Trust Officer at Adobe, explains:
Once you realize you're always being deceived, because you can't tell what's true or not, you're not going to believe anything. So if the President of your country gets up and says there's a national emergency and you're, 'Is that really the President?' ...
So if you want to convince people that this is really you, you upload that image, you click on the content credentials, you make your changes and then publish it, and people can see it.
Content creators
Along with this concern to shore up trust and verification, Adobe also aims to allay the fears of content creators about the impact of generative AI technologies on their livelihoods, which, as the applause in the hall yesterday showed, are keenly felt. It says it will share the details of how it plans to compensate Adobe Stock content creators for the use of their material in training sets once Firefly moves out of its current beta phase into general availability.
Firefly is initially being built into Adobe's Express, Experience Manager, Photoshop and Illustrator products. The first iteration focuses on the automated creation of images and text effects from text prompts entered by users. In practice, users will be able to describe the images and effects they want in their own words, and Firefly will automatically carry out those instructions. For example, the user could enter a short description of what they want the image to contain, and then select attributes from a sidebar such as color and tone, lighting, composition and so on, to modify the image. For text effects, they might describe the imagery they want applied to the text, and again select attributes such as the font, fit, and so on. The image shown above was created from the text description "many fireflies in the night, bokeh light." The content creator can then edit the results into a final artwork.
Ultimately this will encompass images, audio, vectors, videos and 3D, as well as ingredients such as brushes, color gradients and video transformations. Future integrations will bring Firefly into relevant workflows across all Adobe products, and there will also be APIs to make it available for integration into custom workflows and automations.
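Those APIs aren't available yet, so any integration code is necessarily speculative. Purely as an illustration of what a custom-workflow call might look like, here is a short Python sketch that sends a text prompt plus the kind of attributes described above to a hypothetical endpoint; the URL, parameter names, authentication and response shape are all assumptions, not a published Adobe interface.

```python
import os
import requests

# Hypothetical endpoint, used only to illustrate the prompt-plus-attributes
# workflow described above; Adobe has not yet published the Firefly APIs.
API_ENDPOINT = "https://example.adobe.io/firefly/v1/text-to-image"

payload = {
    "prompt": "many fireflies in the night, bokeh light",
    "attributes": {                      # hypothetical attribute names
        "lighting": "low light",
        "color_and_tone": "vibrant",
        "composition": "shallow depth of field",
    },
    "output": {"width": 1024, "height": 1024, "format": "jpeg"},
}

response = requests.post(
    API_ENDPOINT,
    json=payload,
    headers={"Authorization": f"Bearer {os.environ['ADOBE_API_TOKEN']}"},  # hypothetical auth
    timeout=60,
)
response.raise_for_status()

# Save the generated image; the response field name is again an assumption.
image_url = response.json()["image_url"]
with open("fireflies.jpg", "wb") as f:
    f.write(requests.get(image_url, timeout=60).content)
```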
Adobe also has plans to introduce the ability for customers to train Firefly with their own source material, which will allow them to generate content that follows their own personal style or brand language. This could open up new potential revenue streams for contributors, for example by marketing a particular look or style. Ely Greenfield, Chief Technology Officer for Digital Media at Adobe, says:
We think there's really interesting opportunities here for artists to be able to leverage Firefly and the ability to custom train, to then merchandize their style, or their content, to potential customers new and existing. So if people like my style, rather than going in and [working with] my style without my involvement, we'd love to see it where, as an artist, I license my style for people to use, together with Firefly, for generating images.
My take
Earlier this week, I compared the current status of generative AI to the early days of the Web. What I forgot to mention was that people often likened the Web back then to the Wild West, and that's an analogy that came to mind these past two days listening to Adobe explaining its approach, because it's rather like the Sheriff just walked into the room. Adobe is definitely the good guy in this analogy, affirming that it wants to bring "accountability, responsibility and transparency" to bear in its use of generative AI. That will come as a relief to enterprises surveying the current landscape and wondering how to harness the potential of this new technology without exposing themselves to unknown pitfalls and risks.
Adobe's commitment to ensuring that intellectual property rights are safeguarded, its concern for the veracity and trustworthiness of digital content, and its goal of setting up mechanisms to commercially compensate content creators are all worthy initiatives. Putting these measures in place inevitably delays the pace at which it can bring its solutions to market versus less scrupulous competitors, but these are not things that are easily retrofitted after the fact. It also has to be said that, if they prove successful, they will provide a competitive moat, because they all require an infrastructure that will act as a barrier to entry for smaller players.
None of this, however, protects the content creator community from the disruptive impact of this new technology, even if it builds in some welcome safeguards. As Ashley Still, SVP & GM, Digital Media at Adobe, commented today:
Jobs will change. It will go from being a production assistant to being a creative director, because absolutely as technology advances, some of the more manual tasks, of course, will get replaced by less manual tasks.
Adobe understands that there's a role it can play in helping this community to adjust and evolve, but it too has to adapt to survive the inevitable disruption to how this industry has operated in the past.