Main content

What employees think their managers think about gen AI and vice versa - and can it come with an off switch please?

Barb Mosher Zinck, April 16, 2024
Summary:
A new Contentful study finds employees onboard with gen AI, but there are still a lot of questions to be asked and answered.


We're getting deeper into the use of generative AI in both content and code development, but how excited are employees about using this new technology? And is management on board? Contentful's new report, Generative AI Professional Usage and Perception Survey, asked both those questions and more. 

The goal was to understand the context and attitudes that shape Contentful customers’ priorities and the usage of generative AI. The responses are from 820 technical and non-technical professionals worldwide, including marketing, designers, developers, engineers, and digital strategists. 

Here’s what stood out to me, as well as some insights from Contentful’s Chief Evangelist, Nicole France.

There’s a big gap between those who know gen AI and those who don’t

There was a significant gap between those who said they were highly knowledgeable about gen AI and everyone else. These professionals work with it regularly, experimenting to determine where it can have the most impact. It may not be surprising that technical professionals were likelier to rate their knowledge as high. They also used generative AI more in their personal lives (26% use it daily, 32% use it weekly).

People who use generative AI tools like them so much that they appear to be willing to pay for them whether their company provides tools or not. In this study, 18% pay out of pocket for tools and don't expense them, and another 5% pay for additional professional use beyond what their company funds. Why? It might be because 38% reckon they save almost 5 hours a week, another 37% save 5-10 hours a week, and 11% save over 10 hours a week. In all cases, the technical professionals say they get the most time savings (although the non-technical aren't far behind).

The reasons some professionals don't use gen AI at all are expected, including lack of knowledge and fear or concern. However, 29% said they had no interest in it or no need to use it. This is interesting because, in the context of creating digital experiences, generative AI has many use cases. Anyone who doesn't at least start exploring what it can do to help them in their jobs runs the risk of falling behind and struggling to learn these tools when others around them already know them well.

Management and their teams are on the same page

When asked how employees rated their manager's enthusiasm for generative AI, most said that management was in agreement and enthusiastic about using it. The report noted some surprise at this agreement, so I asked France why. She said: 

Data from past surveys suggested that managers and business leaders were far more bullish on gen AI than rank-and-file employees. We wanted to test that idea among our more focused group of respondents. We had also observed anecdotally that people who were more exposed to gen AI typically had a far more nuanced view of what was possible and what might prove useful for doing work better and/or faster.

Contentful hypothesized that there would be a gap between what employees think and what they think their managers think about gen AI. France said: 

Would people feel their companies were pushing them to use something they didn’t feel comfortable using? Would they see leadership as holding them back from being able to realize the full potential of these new tools? We weren’t sure. 

As it turns out, in general, neither of those seems to be a predominant feeling. Of course, there’s far more interesting nuance when we drill into how knowledge of gen AI influences this perspective. Here, we see that there is indeed a difference in perception, with the “knows” being more positive and the “know-nots” far less so, with the latter group feeling less enthusiastic than they believe leadership to be.

Use cases for generative AI

There are many gen AI tools available, including the most well-known, ChatGPT. Depending on your needs, there is likely a tool to help you. Then, there are the tools that build generative AI capabilities into their products. For example, Contentful and other CMS providers are adding capabilities to support writing, SEO, and image creation.

There are many use cases for generative AI. Because Contentful is a content management platform built for both marketing and developers, this study's most common use cases support both groups. They included:

  • Technical documentation and product descriptions
  • Graphics and charts
  • Code
  • Audio and video
  • SEO

What was clear, though, is that most are looking for more guidance on how to use these tools, with over half (51%) wanting more guidance, or any guidance at all. France noted: 

For the most part, people want to know that they aren’t going to inadvertently break something, violate a policy, or do something that might be harmful. They also want reassurance that experiments that don’t succeed won’t be held against them.

France explained what types of guidance employees are looking for, including:

  • What tools they have access to;
  • How to use them (e.g., what kinds of data or content should or should not be shared in a given tool?);
  • Where outputs should be used (e.g., as part of a working process but not directly with customers, or directly with customers in a given context after a specified validation/vetting process); and
  • Expectations for sharing experiences or results of tests/experiments across the organization.

She explained: 

Effective guidance, like gen AI capabilities, continues to evolve—and that’s probably the main point. This shouldn’t be a “set it and forget it” kind of thing. From our open-ended responses and anecdotal observations among our customers, our own organization, and elsewhere in the market, it's clear that just saying “Go out and experiment” or “No, don’t use it” isn’t good enough.

Another interesting finding is that while 56% want generative AI integrated with their existing toolsets, they also want the ability to turn it on and off. Professor Sam Maglio at the University of Toronto's Rotman School of Business analyzed the survey data and had this to say about the desire to turn gen AI on and off:

This makes it sound like 'algorithm aversion' is alive and well. People can be hesitant when it comes to taking advice from an algorithm, especially in certain domains. They’ll let an algorithm tell them which tax prep software to buy but not what kind of clothes to buy with their refund. They’ll trust an algorithm on how best to drive to a movie theater but not what to see once they get there. So, of course, despite all the enthusiasm for this new technology, users still have reasonable reservations about it. The fact that everyone still wants the option to pump the brakes on gen AI tells me that people will always want a blend of human and machine.

To disclose or not disclose gen AI use

What is clear from this study is that people think the use of generative AI tools should be disclosed (76%). Sometimes, that only means internal disclosure, but other times, it means letting the customer or audience know it's being used.

Those who said it should be disclosed gave reasons such as:

  • Improving work/operations.
  • Improving awareness/understanding.
  • Evolving capabilities.
  • Ethical obligation/legal disclosure.
  • Regulations and policy.
  • Fear of what gen AI could do.

That last one gives the impression that generative AI isn't monitored by humans and can potentially provide wrong information that causes problems (which isn't untrue).

The risk of divulging a business advantage was one reason some professionals didn't think gen AI's use should be disclosed. Another primary reason was that there is still a human control element to its use.

Maglio said that disclosure around generative AI is the “new frontier in business ethics and corporate social responsibility.”

The question is, is this desire for disclosure the same for written content and images as it is for code? France suggested that there is a difference: 

In short, yes, disclosure is more important with content (whether words or images) than with code, though there’s definitely some important nuance here. With code, the idea of open-source libraries and other repositories for reusing existing code is a long-extant best practice. The main issue in using gen AI to do the same thing is whether the output is going to work as anticipated.

Though humans have long produced derivative work—and indeed, almost all creative work is influenced by what came before—there are many important additional considerations in content. One is appropriately crediting original work. Another is vetting information to ensure it is accurate and up to date. Yet another is verifying that what is communicated is clear and makes sense. These seem to be some fairly straightforward and widely understood cases for disclosing the use of gen AI.

France said that when gen AI is used as part of the process, rather than in the end output or result, public disclosure matters less; what matters more, in a work context, is sharing experiences. She did note that this is still a subject of debate.

Key takeaways

I asked France what the key takeaways from this study were for Contentful and content management: 

There is a lot in here that we’re continuing to discuss and work through, but we got, among other things, clear validation that people want gen AI to be integrated into the tools (and, presumably, the ways of working) that they already use. Contextualizing gen AI (and other types of AI) to help support an output or to accelerate a process seems to be the big priority — something we’ve already been working on in a number of ways. 

We also got a very strong signal that most businesses want to have gen AI tools that support their unique requirements and incorporate their valuable, proprietary content. Given the wealth of content that our customers already manage in and through Contentful, that’s important validation that what’s already in our customers’ systems will be a powerful asset in applying gen AI, regardless of how or where it’s used.

My take

It's an interesting report when you consider the value generative AI can bring to a company working on digital experiences. Although most employees in this study were on board with gen AI and felt fairly knowledgeable, many still don't understand what it is and how best to use it. And they are looking to their companies to help them understand.

We know that gen AI is getting integrated into more tools and that employees will need to learn how to use it. They need proper guidance from their companies on how to do so and some training to use it effectively (that training will depend on the tool and the use case). 

What was the biggest takeaway for me? There is still a huge gap between the people who have adopted generative AI and those who haven't, and that gap has the potential to create serious challenges for companies that don't plan for it properly. 
