
How generative AI has changed the face of digital teamwork

Phil Wainewright, January 30, 2024
The advent of generative AI has brought a swathe of new capabilities to digital teamwork and work management - but as well as the productivity gains there are potential downsides, too.

AI and connected business tech (© Funtap)

Just when we thought we'd all got the hang of digital teamwork tools from the likes of Zoom, Microsoft, Slack, Asana, Atlassian and others, the sudden rise of generative AI a year ago has added a whole new learning curve — both for vendors and for their customers. Coming out of the pandemic, these platforms were well on the way to becoming constant companions to knowledge workers, helping them harness the flow of work. But now they're competing for attention with a whole new generation of AI-powered intelligent assistants. How are the teamwork vendors — and we their users — responding?

The upside is that adding AI supercharges teamwork, bringing new superpowers to users that help them create new content, find answers faster and collaborate more effectively. In the past year, every digital teamwork vendor has added generative AI-based tooling that helps users draft content, summarize information — especially key points from meetings or discussion threads — find answers from content stores and knowledge bases, or prioritize actions.

Anu Bharadwaj, President of Atlassian, believes the impact of AI may be as huge as bringing collaboration online in the first place. She says:

Just doing teamwork with an Internet-enabled system versus without looks so different. You can now form teams across people that was not possible before, which gives rise to diverse teams, teams that can build things that they couldn't build before. It's going to be the same with AI ...

The aggregate impact is going to be your team got so much better. Now your team is able to go forth and achieve things that they were never able to in the past, because the bandwidth has been now freed up by domain-specific agents and AI-powered agents that are able to do some parts of your job a lot easier.

For example, Atlassian recently released AI capabilities in its Loom video recording tool that can instantly generate a digest of the key points in a video. She explains:

For distributed teams, the ability of generative AI to process so much information and give it in a little capsule that's digestible, is super helpful ... Loom AI, when you're creating asynchronous videos, or you've recorded a meeting, it's able to basically say, 'Here's a summary of everything that happened, and here are the key moments.' So it can reduce a one-hour video to five key moments, and all you have to do is watch five minutes of it. So you are also able to get the richness of video but you are very productive and connected into the team.

But there are potential downsides too with generative AI — inaccurate information being surfaced as fact; confidential data being shown to the wrong people; contamination of the dataset by AI-generated content; or failure to train users to properly instruct their AI assistants and vet their answers. And that's just for users.

Vendors who have specialized in digital teamwork face what could become an existential threat, in which their specialism becomes simply a feature of other vendors' products rather than a standalone category that customers still want to buy. This would be quite a turnaround from just a couple of years ago, when what we at diginomica call the Collaborative Canvas of enterprise digital teamwork seemed poised to become the entry point to every enterprise application. These messaging, content management and workflow automation players stood on the brink of becoming the conversational interface through which all users connect to enterprise information and functionality. Now generative AI offers a new, even more flexible and accommodating means of accessing digital resources, relegating teamwork apps to a subsidiary role where they have to work even harder to prove their worth.

Lost competitive advantage

Before this latest technology came along, the hard work required to build a useful AI model meant that investments in AI gave vendors a proprietary advantage. While those earlier investments still have some relevance, the arrival of generative AI's more general-purpose Large Language Models (LLMs) has swept away much of that earlier proprietary edge. Ben Kus, CTO of content management platform Box, says:

Not only are these new models just way better, but also there's a lot of models, and they're all very good, and everybody has access to them, and they're really easy to integrate. So it has stopped becoming a competitive advantage which models that you have ... To me, the competitive advantage is how well you use it.

Existing expertise in deploying AI carries across in areas such as setting up permissions properly so that users aren't served up answers that contain privileged information they shouldn't have access to. In Box's domain, Kus explains that an important capability is to ensure that the AI is able to work with all the various types of content, including audio and video as well as mixed-mode documents such as a PDF which might contain graphs and images alongside text. Another area where vendors can differentiate is in how they deploy a technique known as Retrieval-Augmented Generation (RAG), which connects the LLM into external data sources to help it produce more accurate results and reduce the risk of introducing hallucinations, where the AI simply makes up answers that sound good but have no factual basis.
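The RAG technique described above can be sketched in a few lines: retrieve the documents most relevant to a query, then ground the LLM's prompt in only that retrieved text rather than the whole corpus. This is an illustrative sketch, not any vendor's implementation — the keyword-overlap retriever, the sample corpus and the prompt format are all invented stand-ins (a real system would use vector embeddings and an actual LLM call).

```python
import re

# Minimal Retrieval-Augmented Generation (RAG) sketch: rank documents by
# relevance to the query, then build a prompt grounded in the top results.

def score(query: str, doc: str) -> int:
    """Toy relevance score: how many query terms appear in the document."""
    terms = set(re.findall(r"\w+", query.lower()))
    return sum(t in doc.lower() for t in terms)

def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Return the ids of the k highest-scoring documents."""
    ranked = sorted(corpus, key=lambda d: score(query, corpus[d]), reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: dict[str, str]) -> str:
    """Assemble a grounded prompt: retrieved passages first, then the question."""
    context = "\n".join(f"[{d}] {corpus[d]}" for d in retrieve(query, corpus))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

# Hypothetical knowledge base, echoing the Confluence example below.
corpus = {
    "procurement-faq": "Purchase orders over $10k need VP approval.",
    "it-kb": "Reset your password via the self-service portal.",
    "holiday-policy": "Employees accrue 25 days of annual leave.",
}

prompt = build_prompt("How do I reset my password?", corpus)
```

Because only the retrieved passages reach the model, the answer can cite its sources and is far less likely to be invented from thin air.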

RAG is particularly valuable in enterprise use cases, where users typically want to find answers from an existing body of corporate knowledge. Sherif Mansour, Head of Product for Atlassian Intelligence, gives an example based on the company's Confluence content database that illustrates the type of highly specific answers AI-powered virtual agents can deliver using this kind of technology:

You can now connect these virtual agents to your company's knowledge base. So if you're using Confluence, say, 'For this agent, only source your information from the procurement and legal department FAQ documents inside my Confluence.' 'For this agent, source your knowledge from my company's IT knowledge base' — or maybe 'the whole of your intranet.' It's totally up to you to point it to the different sources of information in Confluence.

Now, when someone asks a question, in your virtual agent, you get an answer based on the knowledge that's available there. The awesome thing about this, it's in the context of your chat workflow. As we've learned, customers, employees, just want to ask questions and get help from chat. It's what they're used to. And so they can ask it a question and get an answer.

Vetting AI's answers

The reliability of these answers, however, depends on the quality of the source data. "AI capabilities are only as good as the data that they have," says Mansour. How does Atlassian vet the information the AI is finding? He says:

Assuming this content is there, we use signals such as date to indicate how stale the documentation might be, recency of editing and views. All those signals we use to try to understand how potentially accurate or stale this information is.

Now you will get organizations where their content is fairly old and outdated. The challenge there is they're getting old and outdated information as well. In those scenarios, also, we try to call out things like, 'Hey, this data might be inaccurate or not correct.' We also cite the sources for everything so that people are aware and can check the sources. And there's a feedback loop mechanism as well, where they can tell us when things have gone wrong, or things that need to be improved.
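The signals Mansour mentions — edit date, recency and views — can be combined into a simple freshness score that decides when to attach a staleness warning to a cited source. The weights, thresholds and one-year cutoff below are invented for illustration; they are not Atlassian's actual algorithm.

```python
from datetime import datetime, timedelta

# Illustrative freshness heuristic: recently edited, frequently viewed
# documents score high; low scorers get a "may be outdated" flag.

STALE_AFTER = timedelta(days=365)  # assumed cutoff: a year-old doc scores 0 on recency

def freshness_score(last_edited: datetime, views_last_quarter: int,
                    now: datetime) -> float:
    """Score in [0, 1] combining edit recency and view popularity."""
    age = now - last_edited
    recency = max(0.0, 1.0 - age / STALE_AFTER)      # 1.0 fresh, 0.0 at a year
    popularity = min(1.0, views_last_quarter / 100)  # saturates at 100 views
    return 0.7 * recency + 0.3 * popularity          # invented weights

def needs_warning(score: float, threshold: float = 0.3) -> bool:
    """Whether to surface a 'this may be inaccurate' note with the answer."""
    return score < threshold

now = datetime(2024, 1, 30)
fresh = freshness_score(datetime(2024, 1, 1), 80, now)  # edited this month
stale = freshness_score(datetime(2021, 6, 1), 2, now)   # untouched for years
```

The point is not the specific weights but the shape of the mechanism: the AI still answers from stale content, it just tells the user to treat the answer with care.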

Box recently introduced a new capability called Box Hubs, which lets teams curate a subset of data for a specific purpose. This is another approach to guiding the AI to work with the right data, as Kus explains:

If you're not careful, that AI is going to find incorrect data, it's going to find old data, it's going to find things that aren't up to date, and it's just going to struggle with knowing what's authoritative ...

We recommend that you start out with using AI on the kind of documents that you're able to say, 'Here's a corpus of data, start here,' as opposed to, 'Let's just unleash AI constantly across everything on the enterprise, and then see what happens.' You get an awful lot of potentially problematic results, if you're not careful about the kind of things that you unleash it on.

Like many other teamwork vendors, Box has its own graph database, which also helps filter what the AI should look at. This is important to ensure the LLM isn't overwhelmed by too much data. He explains:

A large language model, there's only so many tokens you can give it. You can't feed it your whole corpus of data in real time, you have to pre-filter it ... That's why some of these challenges of AI are dominated by how good you are at the rest of it — permissions, and the ability to do groups, and all these different pieces.
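The pre-filtering Kus describes can be sketched as a two-stage pipeline: first drop documents the user is not permitted to see, then trim the remainder to a fixed token budget, most relevant first. The four-characters-per-token estimate, the budget and the document fields are rough illustrative assumptions, not any vendor's real limits.

```python
# Sketch of permission-aware pre-filtering before anything reaches the LLM.

def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly one token per four characters."""
    return max(1, len(text) // 4)

def prefilter(docs: list[dict], user_groups: set[str],
              token_budget: int = 200) -> list[dict]:
    """Keep only permitted docs, most relevant first, within the token budget."""
    # Stage 1: permissions — the user must share a group with the document.
    permitted = [d for d in docs if d["allowed_groups"] & user_groups]
    # Stage 2: budget — pack the highest-relevance docs that still fit.
    permitted.sort(key=lambda d: d["relevance"], reverse=True)
    selected, used = [], 0
    for d in permitted:
        cost = estimate_tokens(d["text"])
        if used + cost <= token_budget:
            selected.append(d)
            used += cost
    return selected

# Hypothetical documents with invented relevance scores and groups.
docs = [
    {"text": "Q3 revenue was up 12%.", "relevance": 0.9,
     "allowed_groups": {"finance"}},
    {"text": "The office closes at 6pm on Fridays.", "relevance": 0.4,
     "allowed_groups": {"everyone"}},
]
context = prefilter(docs, user_groups={"everyone"})
```

Note that the permissions check runs before relevance ranking: a highly relevant but privileged document should never be a candidate for the prompt in the first place.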

All of these caveats mean that users still need to be wary of the help that AI is offering. Organizations have to take care to ensure that the data the AI is working with is properly catalogued and filtered, and users relying on AI to make them more productive still have to take responsibility for the final outcomes. As Mike Berger, VP of Product Marketing at ClickUp, says, it's important to be aware of the limits of the technology:

If you're the person driving the project, you shouldn't be relying on AI to create your project plan and set up your tasks and so forth. But if you're an executive that wants a quick preview of what will happen in the meeting, or if you're someone who sits on the periphery, or just jumping into a project, it's a great way for other people to come up to speed.

Because of the support that organizations and their users will need to make best use of the technology, the advent of generative AI probably hasn't made these vendors' products redundant, but it will require them to add a new layer of capability to provide that needed support. Box's Kus believes that AI won't change the current pattern of enterprises relying on a selection of different tools to perform various functions. He says:

The thing that we're seeing evolve is that people are making AI platform choices for subsets of their overall data. For this next year, it's going to be about the models, but it's also going to be about where you specialize the model in combination with all the other things you need to make that particular functionality handle the challenges of that particular dataset. So we're big into the idea that AI platform for content is our focus area.

My take

The one certainty is that generative AI will dramatically increase the speed at which people can create content, find answers, digest information and structure their work. Since all of these activities are core to digital teamwork and work management, it's clearly going to have a big impact on how work gets done across the Collaborative Canvas of those digital tools and connections. While the basic ingredients haven't changed, how they're mixed together will get a turbo boost.

But big questions remain about the quality of the output. Vendors are clearly alert to the risks around data confidentiality and are mostly able to reuse existing frameworks that they've already put in place to ensure people don't see things that they don't have permission to view. But in researching this article, I didn't receive satisfactory answers about how vendors will validate whether the information being accessed is accurate or authoritative — mainly because I suspect they haven't yet worked out how they'll do that. There's a big load being left on the shoulders of users to perform that validation. Similarly, little thought currently seems to be going into labelling content that has been generated with the help of AI, even though some research suggests it may be wise to keep such content out of future training sets. There's a risk that seeding an enterprise knowledgebase with AI-generated content may lead to inferior or even unusable results when training future models.

These questions emphasize that it's still very early days for this technology and that, while it will certainly help to boost productivity in the short term, there are important considerations to bear in mind when rolling it out. Sharing those learnings between vendors and their customers will be crucial as adoption spreads.
