The annual Adobe MAX conference has traditionally focused on the creative community, and this year had plenty to please that audience. But the event in recent years has increasingly spoken to enterprise leaders too, and this week it had much for them to mull over, in particular the launch of GenStudio, which promises to harness all the new capabilities that generative AI is bringing to content creation for enterprise workflows. Other announcements reinforced the dramatic impact this new AI technology is having on the creative world.
Our normal practice at diginomica is to save the author's take for the end of the article — and I'll have more to say there — but on this occasion I'm going to insert a personal observation here at the start. The huge advances in capability announced this week, just six months since March's Adobe Summit, are truly astonishing, and what's promised to follow is equally mind-blowing. Adobe has been able to race ahead in part because creative content doesn't face the same challenges with generative AI as more transactional and factual content. While vendors in other application areas are having to advance cautiously because of the risk of hallucinations introducing inaccurate data and information into enterprise workflows, hallucination in the creative field is more of a feature than a bug. There are aspects where Adobe must tread carefully, and is doing so, particularly around proper accreditation and compensation of source material. But overall the rate of progress is remarkable.
Before I come back to GenStudio, let's start by looking at the three new generative AI models that Adobe unveiled at MAX, which provide the building blocks for many of the new capabilities. These three models are all incorporated into its Adobe Firefly generative AI imaging tool, first launched barely six months ago, and still, as Adobe is keen to re-emphasize, based on licensed and public domain content that is safe for commercial use. This is important for enterprise confidence and Adobe has made sure that any output from Firefly automatically has content credentials attached, which show information such as the creator’s name, date, edits made and tools used.
The three new models are:
- Adobe Firefly Image 2 Model — the next iteration of the core imaging model in Firefly, brings new Text-to-Image capabilities such as Generative Match, which applies the style of a reference image to generate new images at scale. This is especially useful in enterprise scenarios for adhering to brand guidelines and maintaining consistency across various assets, at the same time as catering for regional variations or sub-brands. New photo settings bring significant advancements in the model's ability to generate high-quality imagery, including more accurate rendering of details such as skin pores and foliage, and give users additional controls over image generation such as depth of field, motion blur, and field of view. It also offers more accurate understanding of text prompts, recognizing more landmarks and cultural symbols than the previous model, and includes improved guidance for prompt building.
- Firefly Vector Model — brings generative AI to vector graphics, introducing sophisticated new capabilities to the creation of logos, website graphics, product packaging, icons and other images. The model will power a new Text-to-Vector Graphic capability in Illustrator, now available in beta. The model's capabilities include gradients, organized grouping and layering of elements, seamlessly tileable patterns and precise geometry. It also supports Generative Match.
- Firefly Design Model — enables the instant generation of template designs such as flyers, posters and invites. This powers a new Text-to-Template capability, now available in beta, in Adobe Express.
In total, over 100 new features have been released across the flagship Adobe Creative Cloud applications, with the spotlight on new Firefly-powered features including those mentioned above. Adobe Express also gains generative fill capabilities and translation of text into 45 different languages, while Lightroom gains AI-powered tools such as Lens Blur. In addition, web versions of Illustrator and Photoshop are now available.
Of interest to larger enterprises, a new integration brings Adobe Express, the entry-level, general-purpose content creation tool, into AEM Assets, Adobe's enterprise digital asset manager. This allows employees across an organization to access enterprise content assets directly within Express, or to share content generated within Express across the organization.
GenStudio and the content supply chain
It is GenStudio that links together all of these various capabilities into enterprise creative and marketing workflows — or as Adobe prefers to call it, the content supply chain. It connects Adobe Express and other Creative Cloud products together with enterprise content creation and delivery products in Adobe Experience Cloud, plus collaboration and workflows in Frame.io and Workfront.
In addition, GenStudio allows customers to use Firefly's generative AI capabilities, including Generative Match, to generate content that is tailored to the brand’s own style, characters and objects. It also includes access to Firefly APIs, so that enterprises can integrate these customized Firefly models into their Creative Cloud workflows and automations. The goal is to enable the scalable creation of on-brand content across enterprise marketing and creative workflows. Teams can view the results in Adobe Analytics to gain insights into what content resonates and make iterative improvements in real time.
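Adobe doesn't detail the Firefly APIs here beyond saying they exist, so the following is a purely hypothetical sketch of what an enterprise automation calling a Firefly-style text-to-image endpoint with a brand style reference (the Generative Match idea) might look like. The endpoint, field names (`styleReference`, `numVariations`) and helper functions are illustrative assumptions, not Adobe's documented contract.

```python
# Hypothetical sketch: assembling a request for a Firefly-style
# text-to-image API that supports a brand style reference.
# All field names and the endpoint are assumptions for illustration only.
import json

def build_generate_request(prompt, style_ref_url=None, n_variations=1):
    """Assemble a JSON payload for a hypothetical image-generation endpoint."""
    payload = {
        "prompt": prompt,
        "numVariations": n_variations,  # ask for several on-brand variants to test
    }
    if style_ref_url:
        # Generative Match-style steering: bias output toward the look of a
        # reference image, e.g. an approved brand asset.
        payload["styleReference"] = {"source": {"url": style_ref_url}}
    return payload

def serialize_request(payload):
    # In a real integration this payload would be POSTed to the vendor's
    # endpoint with an auth header; the HTTP call is omitted to keep the
    # sketch self-contained and offline.
    return json.dumps(payload, indent=2)

payload = build_generate_request(
    "product hero shot on a plain studio background",
    style_ref_url="https://example.com/brand-style.jpg",
    n_variations=4,
)
print(serialize_request(payload))
```

The point of the sketch is the workflow shape GenStudio implies: one prompt, a brand style reference, and multiple variations generated in a single call, which can then be routed through approval and measured in Adobe Analytics.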
Adobe says it has been using GenStudio for its own digital marketing and in producing the Adobe MAX event. The use of generative AI tools and automated workflows has allowed it to double the volume of social media content produced, while using Firefly to generate different image variations for a test email marketing campaign resulted in a 12% lift in the average clickthrough rate, a foretaste of the potential the technology opens up for greater experimentation and testing.
In its traditional Adobe Sneaks session, which previews capabilities still in development, the vendor showed off Project Stardust, a generative AI-powered object-aware editing engine that makes it possible to simply select an object within an image and move or delete it, along with Project See Through, an AI-powered tool that makes it easy to remove reflections from photos. Also on show in prototype were generative fill and automated voice dubbing for video, and various 3D image generation and editing capabilities.
There's been a lot to take in this week, but what I find most striking about the ambition outlined at MAX is its breadth and scope. Adobe has really branched out from its earlier focus on professional content creators to appeal to a broad mass of non-specialist users with its AI-enhanced Express tool. Yet at the same time, it has clearly been thinking through the enterprise use cases, with GenStudio coming in to act as a content hub that helps organizations ensure this explosion of creativity still adheres to brand guidelines and remains respectful of intellectual property rights. For a vendor of this size to move at such speed on such a broad front is a huge achievement — and that's even without starting to talk about its plans for Document Cloud, which I understand we're destined to hear more about later in the year.