CCE 2023 - the great AI debate shifts from content creation to...data governance?

Jon Reed - October 31, 2023
Summary:
Constellation's Connected Enterprise event was bursting with AI debates. A shakeup in the event format surfaced divergent views - and brought an essential discussion to the forefront. Here's my boil-down of the enterprise AI implications.

(CCE 2023 - AI + data privacy session - via Heather Willems)

AI fatigue is a real thing. After a couple of days of AI-packed discussions at Constellation Connected Enterprise, moderator Paul Greenberg told attendees his panel would not be talking about AI - at all. His vow was well received.

And yet, the AI discussions needed to happen. I moderated one of them, on AI and data privacy (illustration above).

This year, Ray Wang and the Constellation Research team decided to open up the panels to greater audience participation. A selection of so-called "super moderators," including yours truly, were given the green light to shake up the format - which we did.

When you consider the urgency of AI-related issues - not to mention the notable gap between gen AI marketing brandgasms and referenceable enterprise projects - this was the perfect time to switch up the format a bit, and go for the jugular.

Before I dig in, a note on the audience makeup: Wang told me the 2023 attendees ran 2:1 CXOs to vendors/service providers. The remaining ten percent: Constellation analysts, book authors, and various independent analysts and traveling miscreants like myself. This show has just one session track in one large room; the "one conversation" setup proves advantageous when you want to take a collective pulse. Alas, I can't boil the disparate AI views down to one collective agreement, or any kind of consensus for that matter. Instead, I'll give you my subjective roundup.

Humans in the loop - the one collective agreement

There was one AI topic with near-universal agreement: the need for "human in the loop" AI design principles. But, on a concerning-but-not-surprising note, we did not necessarily agree on how AI should be regulated. Most believed firm regulation was needed, though attendees raised concerns about getting regulations right (and not stifling innovation) - and about black hats and rogue states plowing ahead on AI while the unavoidable hand-wringing over appropriate regulation plays out.

The necessity of human-in-the-loop design to curb the limitations of today's generative AI tech was the first topic cited in the AI Today podcast, How to Avoid Getting Screwed with Generative AI. But the hosts brought out a key angle: it's not just about humans-in-the-loop. It's the right humans - the ones with the domain expertise to spot the problems in the (over)confident output from GPT-type systems. As co-host Ronald Schmelzer put it:

One way that we see people getting screwed is when they don't have any of that knowledge themselves. Let's say you're in the construction industry, and you want GPT to create something like a construction proposal, and you're not a construction person. How are you going to even know that something's in there that isn't right?

At CCE, human-in-the-loop - and the costs of neglecting human input - came up frequently, exemplified by a panel on automation moderated by Constellation's Larry Dignan. Visual Strategist Heather Willems of Two Line Studios captured this well in her visual summary:

(Automation session - Constellation Connected Enterprise, via Heather Willems)

"Listen to honest feedback - it's crap!" "Thanks!" Yes, that's part of AI too. Put AI on top of bad corporate cultures where bottom-up input is ignored, and I don't like your chances.

Want a successful AI project? Get data governance right

One strength of CCE? The attendees are not solely focused on enterprise topics. The AI issues raised by the audience spilled over into educational disruption, the impact of gen AI on children's intellectual development, and the future of human creativity.

On the enterprise side, audience questions about data governance were a surprisingly common thread. Given generative AI's current sex appeal, I was struck by the contrast with the practical-but-essential data questions these CXOs emphasized. Example: "Who owns data governance?" How's that for a concise can of worms?

I openly question whether today's level of AI can be called "intelligent," but I did see agreement on the inverse: bad, inadequate or poorly managed data results in dumb AI.

A detailed guide to "best practices" in AI data governance - if such a thing even exists - was beyond the scope of this event. But field lessons were shared:

  • Bear down on the current state of your data governance, prior to ambitious AI projects - an obvious but important first step.
  • Generative AI projects will often require some form of internal foundational model to interface with external LLMs. Why? Because gen AI without your own enterprise data is going to yield generic/underwhelming results. Oh, and you'll need data governance in place before that foundational model is built (a rough sketch of that ordering follows this list).
  • Your data governance team should not be tied to product. It has a broader - and now mission-critical - purview.
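
To make that ordering concrete, here's a minimal, hypothetical sketch (in Python) of what "governance before the model" can look like: enterprise documents pass a classification-and-ownership check before they're allowed into a prompt bound for an external LLM. Every name here - Document, ALLOWED_CLASSIFICATIONS, the prompt flow - is an illustrative assumption, not any vendor's actual API.

```python
# Hypothetical sketch: grounding an external LLM call in governed
# enterprise data. All names are illustrative, not a vendor API.
from dataclasses import dataclass
from typing import List

@dataclass
class Document:
    text: str
    classification: str  # e.g. "public", "internal", "restricted"
    owner: str           # accountable data owner - "who owns data governance?"

# Governance policy: only these classifications may reach an external model
ALLOWED_CLASSIFICATIONS = {"public", "internal"}

def governed_context(docs: List[Document]) -> str:
    """Filter documents through governance checks before prompt assembly."""
    usable = [d for d in docs
              if d.classification in ALLOWED_CLASSIFICATIONS and d.owner]
    return "\n".join(d.text for d in usable)

def build_prompt(question: str, docs: List[Document]) -> str:
    # The enterprise context is what keeps answers from being generic;
    # the governance filter is what keeps restricted data out of them.
    return f"Context:\n{governed_context(docs)}\n\nQuestion: {question}"

if __name__ == "__main__":
    docs = [
        Document("Q3 revenue grew 12% year over year.", "internal", "finance"),
        Document("Confidential M&A target list.", "restricted", "corp-dev"),
    ]
    print(build_prompt("Summarize our Q3 performance.", docs))
    # The restricted document never reaches the external model.
```

The point of the sketch isn't the few lines of filtering - it's that the classification and ownership fields have to exist, and be trustworthy, before any of this works. That's the governance-first ordering these attendees were describing.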

Prior to the event, Vijay Vijayasankar issued a relevant blog, GenAI will need a whole new look at Data Governance! Vijayasankar concluded:

There are two things to think about carefully here – the process of data governance itself, and the tooling and automation of it. I am less worried about the tooling part in relative terms – I am just not sure yet if enterprises have thought through all these “fringe” aspects of GenAI compared to all the cool applications they are excited about. If they don’t find the time and budget to get it done right – it will be a lot of grief to deal with.

That's a good summary of where CCE 2023 attendees were coming from on AI. That's encouraging - because it means these CXOs aren't taking the "gen AI revolution" cheese without a smell test. On the other hand, we don't have complete answers to many of these questions yet - though the project examples from Vijayasankar, as well as from CCE speakers like Vijay Tella, CEO of Workato, are better than a blank slate. Tella's advice to attendees put AI in the context of ongoing transformation projects. But he warned us: to get governance right, technology leaders must change.

Transformation is fundamentally going to be a team sport. You want to put the power of change in the [hands of business users]. At the same time, those people don't understand the concept of security, scale, technical debt. The role of IT and CIOs is going to be very important. But it's a very different kind of role than they've had in the past. So when I say democratization, it is enablement, with a system of governance... Democratization in the fullest sense of the word is empowerment, doing it as a team, but with a system of governance and guardrails.

The gen AI creativity debate - potent and unresolved

The role of generative AI in content creation was the most divisive AI debate at CCE. This debate has implications far beyond content creation: it comes down to what type of AI we want to have. Do we want AI that supports and enhances the realization of human potential, or do we want to celebrate a machine's supposed ability to replace us? Automating the administrivia is one thing. But when you claim machines can create human-level content, you and I are going to have problems.

One thing this humans vs. machines debate sorely needs? The imagination to look beyond obvious AI use cases, into a more creative future:

This fall, I got into a special panel with AI startups and customers - no media allowed (but I snuck in). All the attendees - and even the startups - wanted to talk about how AI could soon write a movie as good as a Hollywood screenwriter. That's an enormous insult to the best screenwriters, and a fantastical overstatement of what's possible with today's AI (though AI can be a tremendous help with the elaborate syntax of script construction).

But one startup founder opened my eyes: he talked about the potential for interactive entertainment, where you could pause and interact directly with an AI-infused character - a character trained with all the knowledge of the fictional worlds that character inhabits (think of a deep world with rich character backstories, like "Lord of the Rings"). I left that panel wondering, "Why are we so limited in our thinking about the possibilities of machine-human interaction, and so damn obsessed with downgrading human creativity?"

To expose the big hole in my argument: this all comes down to instant mediocrity. When mediocre content is good enough - and in some cases it is - then there is indeed a role for machines in enterprise content creation (I find that gen AI is better than mediocre at summarizing deeper content, but harnessing vast content plays to the machines' strength - exactly my point).

AI-generated job descriptions are a perfect AI content use case, where mediocre-but-well-trained content is just fine - and will alleviate a time-consuming human chore. Black hat SEO - filling garbage websites with topical content - is an obvious, though undesirable, case for instant mediocrity. The same goes for product FAQ pages, which can then become generative AI service bots. All good for AI there.

But when I hear, as I did at CCE (and in my inbox every day), that humans can now step back from content creation, let the machines take care of that, and humans can cherry pick from the content analytics, then the plot is lost. Cold shower time: in the attention economy, you still have to earn the attention of buyers. Have fun trying to buy attention through ads alone. Attention is not a given - even if your nifty gen AI can spray the known universe with spiffy-looking Instagram posts.

In B2B, that attention is earned through exceptional content, created by talented experts who paid their dues in road miles (ultimately, that content should stem from lively communities, such as the one Constellation has built around CCE). Marketers don't like that enduring truth, mostly because they struggle to create that type of content, and build that type of opt-in community. But I've addressed how to avoid that conundrum in my Reaching the Informed B2B Buyer dbook.

On the B2C side, I do see non-exceptional content earning attention, but it's a treacherous path with two forks: celebrity-driven content (still a mostly-human attribute for now), and dangerously viral/misinformed crud (machines can do that, alas, which isn't a huge help to our already-fractured political discourse, but let's put that aside for now, shall we?)

Yes, machines will shake up the marketing mix. Machines can do a standup job of personalizing content at scale across devices, personalizing the rhythm of content distribution, and surfacing insights from the analytics on that content - albeit with adult supervision when it comes to the all-important step of turning those insights into actions that actually add up to something resembling ROI.

Granted, the notion that my ability to create content - the skill I have fought the most for since birth - is now up for grabs with AI is something I take a bit personally. But: I also see this line of thinking as a failure of AI imagination.

During his CCE interview, Tella hit on a similar point:

When McKinsey did their big AI impact report, almost all of that was based on what jobs can AI replace? What tasks can AI replace? I think that is a little depressing, and also a very limited view of what AI can do...

I think that's going to change. I think we're going to become smarter companies. In the long term, it is the strategic re-thinking: what's our core competence as a company? And what role can [AI] play in business models, channels, and in products, and so on? So I think the real impact of AI is going to be in the second and third branches. What we're seeing happening now is 'Okay, let's replace that with AI.' But I think the real opportunity is what happens after that.

My take - where do we go from here?

I wasn't expecting the unsexy topic of data governance to jump out so frequently at CCE. I agree with Constellation's Steve Wilson, who asserted, in his event wrap:

Nevertheless, I feel CCE 2023 was actually all about data!

You may be wondering:

If the CXOs at CCE were so sensible about data, bias and governance, why does AI seem like a runaway freight train elsewhere, with even the smartest people issuing ludicrous hype-manifestos?

Good question. I believe the answer comes down to the pressure CXOs face to act on AI now, whether there is a business case ready or not, whether the proper guardrails are in place or not.

That pressure can come from an excitable CEO/board ("What are we doing about AI?"), and also from the already-rampant use of "shadow AI" in a corporate context - usage that can obviously put your own IP at risk. Issue all the bans you want: avid ChatGPT users inside your organization aren't going to stop until there is an enterprise-grade alternative.

That's why customers need to learn how to push vendors on their AI offerings - before 'move fast and break things with AI' kicks in. If you want more tips on that, Constellation's Holger Mueller recently issued a notable post on this topic: Debunking your vendor's Generative AI Hype. I hit on this from a different angle in Enterprises will test the limits of LLMs - and ChatGPT is just the beginning.

During his CCE interview, Salesforce's John Taschek said that "AI makes good people better, and bad people worse." It was one of the best lines of the show, but is it true? AI can definitely make bad people worse - that is a (deepfake) given. Whether we can do good with AI is really our biggest collective question. Yes, the horse has left the barn; Pandora's box is thoroughly opened. But whether we allow AI to diminish our humanity is really our decision. Our collective responsibility. Some may disagree. I would say to you: let's have that urgent debate. Now, next week, and next year at CCE 2024 (that debate continues this week on diginomica, via our UK AI Safety Summit coverage).

End note: to get that done, we need discussions with an incisive but inclusive spirit. No panel captured that at CCE better than super-duper moderator Tricia Wang's - so Heather Willems' illustration of that panel is a good note to end on here:

(Future of AI and Humanity panel - illustration by Heather Willems)
