The current excitement about generative AI reminds me of the early days of the Web, before Google came along. The first generation of search engines could deliver results, but there was no way of knowing how authoritative they were. Then Google introduced PageRank, which weighted its results based on who had linked to them, adding a crucial element of validation. Before generative AI can be used to deliver trustworthy answers in the business world, its results need similar validation. Knowledge management vendor Guru yesterday launched a first step towards that goal.
Answers, initially rolled out in a limited private beta, uses generative AI to source answers from enterprise content connected to Guru. Those answers can then be passed through Guru's verification process, where subject-matter experts check and regularly update them. The breakthrough for Guru is using AI to make it easier to surface knowledge from enterprise content, while still ensuring robust verification of the answers produced. Rick Nucci, CEO of Guru, tells me:
“We see this as basically a virtuous cycle — the idea that employees will ask questions, get their answers faster, and use those answers as future sources of knowledge ... Our point of view is, let's bring together the humans, what they're experts in, [with] the answers, and use that to create this ongoing base of knowledge that gets better and better over time. That's the loop we want to enable with this.”
In an enterprise setting, the accuracy of those answers is crucial. Guru's core offering provides verified knowledge to customer service agents, and has more recently branched out to include information that's traditionally been found in a company's intranet. Giving wrong advice to a customer, or to an employee looking up HR processes, isn't acceptable, so Guru has built in a series of guardrails to limit the risk of generative AI introducing spurious or incorrect answers into the verified knowledgebase. These include restricting the model to approved sources, and providing citations alongside the answer that link to the original source so that the user can evaluate its accuracy. Nucci says:
The sources of the answers only come from what the customer explicitly configures in Guru to be available. In the answer itself, you will see citations around where the answer was derived from. The citations will come obviously from trusted Guru cards, and additionally will come from sources that the customer deems appropriate to be leveraged in the answer itself. But in no way does the answer come from the predictive abilities of the large language model. What the large language models are being used for is helping to understand the intent of the question, and then helping to take the answers from the approved sources to create a concise, summarized, cohesive answer.
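What Nucci describes matches the retrieval-augmented generation pattern: the model interprets the question and summarizes passages retrieved from a configured corpus, but never answers from its own training data. Here is a minimal sketch of that constraint — every name here (`Card`, `retrieve`, `answer`) is illustrative, not Guru's actual API, and the retrieval step is deliberately naive:

```python
from dataclasses import dataclass

@dataclass
class Card:
    """An approved knowledge source (e.g. a verified knowledge card)."""
    card_id: str
    title: str
    text: str

def retrieve(question: str, sources: list[Card], top_k: int = 2) -> list[Card]:
    """Keyword-overlap retrieval over approved sources only. A real system
    would use embeddings, but the constraint is the same: candidates come
    solely from the explicitly configured corpus."""
    q_terms = set(question.lower().split())
    scored = [(len(q_terms & set(c.text.lower().split())), c) for c in sources]
    scored = [(s, c) for s, c in scored if s > 0]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in scored[:top_k]]

def answer(question: str, sources: list[Card]) -> dict:
    """Build a grounded prompt and return it with citations. The LLM call
    itself is elided; the key point is that the prompt contains every fact
    the model is allowed to use."""
    cards = retrieve(question, sources)
    if not cards:
        return {"prompt": None, "citations": []}
    context = "\n\n".join(f"[{c.card_id}] {c.text}" for c in cards)
    prompt = (
        "Answer using ONLY the sources below, and cite their IDs. "
        "If the sources do not contain the answer, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return {"prompt": prompt, "citations": [c.card_id for c in cards]}
```

The design choice worth noting is the empty-retrieval branch: when no approved source matches, the system declines to answer rather than letting the model improvise.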
A broader enterprise knowledgebase
Protecting the customer's information is equally important. Guru's contract with OpenAI, which provides the LLM, forbids any storage of or training on its customers' information. Its permissioning processes will apply when users ask questions, so that the answers that come back will only include information the user is authorized to see. This will become increasingly important as Guru customers expand the information sources they connect into the Answers product. Nucci sees an unprecedented opportunity to build out a much broader enterprise knowledgebase than has previously been possible. He explains:
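The ordering implied here matters: permissions are applied to the candidate sources before any retrieval or summarization, so content the asker cannot see never reaches the model at all. A sketch of that gate, under assumed illustrative field names (`allowed_groups`, `answer_for_user` are not Guru's API):

```python
def visible_to(user_groups: set[str], sources: list[dict]) -> list[dict]:
    """Keep only sources that share at least one group with the asker."""
    return [s for s in sources if set(s["allowed_groups"]) & user_groups]

def answer_for_user(question: str, user_groups: set[str],
                    sources: list[dict]) -> list[str]:
    """The permission filter runs first; retrieval and summarization
    would only ever see the user's visible subset."""
    allowed = visible_to(user_groups, sources)
    # (retrieval and LLM summarization over `allowed` would follow here)
    return [s["id"] for s in allowed]
```

Filtering upstream of the model, rather than redacting its output afterwards, is what prevents restricted information from leaking into an answer in the first place.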
We really want to give the idea of the source of truth to the employee, in order to say, 'Ask questions more broadly.' Frankly, until this current era of AI, I don't think that would have been viable. But we now think it is ... We can create this flywheel of accurate knowledge, in tandem with these other knowledge sources that exist in a company.
Alongside Answers, two other Guru offerings based on generative AI contribute to this flywheel. Guru Assist, currently in a restricted beta, summarizes long-form content into bite-sized knowledge highlights, helping to reduce the time it takes for subject-matter experts to compose a Guru card. A function introduced earlier this year, Slack Trending Topics automatically scans nominated Slack channels to surface recurring questions whose answers could be added to the Guru knowledgebase. Nucci gives an example:
Imagine a topic that says the difference between our health insurance plans. That'll be an example of a topic that Guru will find and surface. It will associate all the Slack threads where that's coming up and allow you to say, 'Turn this into a Guru card,' or 'Delegate it from Slack to someone who should turn this into a Guru card.' It will give the author the conversations that are prompting all of this, it'll generate it in a draft for the appropriate person to capture and turn into knowledge.
Adding generative AI into knowledge management processes in this way has significantly expanded Guru's reach, he believes. He elaborates:
I don't know that there would have been a viable approach to do what we're now announcing before the LLMs reached the maturity they have. But now that they have, I think there's this sudden explosion of opportunity ...
There is always going to be knowledge that has to be written down somewhere, because it's just in the brain of the human or it's in a messaging tool in a very unusable format. That can be, using GenAI, converted, captured, organized, curated for discovery in a system.
We think of it as an extension. We think that this is the combination of things that companies are going to really need to broadly address knowledge management and broadly address the information overload burden that is, frankly, only getting worse. But we think we can wrangle it now in an even better way ...
We want to be in all the disparate places where people are working, and we want to respect that there's valuable data that can be made more valuable when you bring it together in ... other knowledgebases, other wikis, in addition to just Guru.
By layering Guru's verification processes on top of this more expansive knowledge sourcing, the LLM technology becomes part of a process that builds up a catalog of the most valuable and useful information. He goes on:
Every interaction in what we do is going to level up the accuracy for every other participant in the Q&A workflow. That is how we can leverage the LLMs for what they're good at, but ensure that the outcome, the output — accurate answers in this case — is in fact improved, as more people are using it. That the learning is happening.
Rather than replacing humans, the AI in this case is augmenting their productivity by helping them focus on the decisions where their expertise can add value. He concludes:
Everyone's thinking about efficiency, productivity, how to do more with less. AI is a great example of that. But to me, that is in service of how do you make the employee be able to have 30% of their time to do other things? Versus 'Oh, yeah, we'll just go and send machines off to do what humans used to do.'
The genius of Google PageRank was that it found a way of programmatically capturing the choices humans were making when adding links to their web pages, and paired that with the technology of keyword matching to produce a more useful search engine. Generative AI needs a similar pairing to keep it on track — not just collating existing content into a plausible form, but also having some kind of human judgement that rates its output for accuracy and pertinence. Guru's approach to keeping humans in the loop is a good first step along these lines and looks promising as a way to harness generative AI to help enterprises build out and maintain a growing library of validated knowledge.