Main content

"Zero tolerance" for hallucinations - Dr. Vishal Sikka on how Vianai builds AI applications, and the mixed emotions of the AI hype cycle

Jon Reed, May 16, 2023
Summary:
Dr. Vishal Sikka was knee deep in AI long before generative AI was a thing. So what does he make of the vortex of AI hype and possibility? Where does he stand on the ethical implications? And what breakthroughs are Vianai, his AI company, achieving?

(Dr. Vishal Sikka, CEO and Founder, Vianai)

Dr. Vishal Sikka is no stranger to AI. He might be the founder and CEO of a trending AI startup (Vianai), but he's been at the AI game a long time. I often met with Sikka when he was CTO of SAP; even back then he would extol the potential of enterprise AI - outside of any frenzied hype cycle.

So, during our recent video call, I asked Sikka: surely he must have conflicting emotions? Something Sikka has worked on for so long is now a hype-drenched tech bandwagon, full of opportunists and techno-gimmickry. "Mixed emotions," admits Sikka. He remembers the roots of all this:

It's different emotions every day. It's like a barrage. I started in AI with natural language - NLU, Natural Language Understanding, was the phrase we used back then. I was 17. I started working on some basic techniques of natural language understanding. I had an idea that I wrote to a professor at MIT about; I was still in India at that time. That was Marvin Minsky, who was one of the fathers of AI. I got a chance to hang out with him when I was still an undergraduate student. He wrote my recommendation letter for my admission to Stanford for my PhD.

The highs and lows of the AI rollercoaster

Fast forward to today: AI suddenly has massive technical (and cultural) momentum. Sikka found himself signing on to an important, controversial - and, I would argue, widely misunderstood - letter warning of the dangers of AI, and urging that infamous pause. No, not a pause in AI innovation, but a pause in Large Language Model (LLM) expansion. As Sikka told me:

I'm one of the signers of that letter. Stuart Russell was one of my academic siblings. We have the same PhD advisor; he was one of the main authors. He asked me to sign it. We were not advocating stopping AI research. This is critically important. We were talking about putting a pause on models bigger than GPT-4 for six months, so that people who regulate have an opportunity to catch up to what is going on. The risks of AI today are, to me, very worrisome.

Cue Sikka's mixed emotions. The risks of AI are significant, but so is the potential for life-changing applications. Via his work with Vianai, Sikka lives that every day:

On the positive side, it's an incredibly powerful technology; there is so much that can be done with it. It can be really transformational in how we work, and what we work on. Just this morning, somebody released a thing called RedPajama. It's a large language model that was released in open source, with an open data set and things like that. So the openness that people have to experiment, to tinker and to try things, the speed at which this is happening - there are incredibly exciting applications you can build with it.

hila - a new generative AI solution

But the enterprise is a different matter - with a much more discerning risk profile. Enterprise leaders are understandably wary of the risks of this particular AI hype cycle, just as they were with the Metaverse and blockchain before that. But they also want to study the use cases. They want a much better handle on what's possible today, and what's been overheated by marketing departments. LLM adoption is absolutely forcing the question.

So, what's possible today? Let's start with hila, Vianai's newly-announced generative AI solution, billed as "Your new financial research assistant. hila can help you quickly find answers in earnings transcripts, and we're adding new data all the time." Sikka:

hila is a tool for investment. It has three components, and you should try it yourself. Just go to hila.ai and sign up. [hila has] zero tolerance for hallucination. There are plenty of tools like this in light of ChatGPT. But what we worked really hard on is to make sure that these things don't hallucinate.

There are a few scenarios within hila. There is an ability to ask questions to earnings calls, and to 10-Ks and 10-Qs, documents like that, for public companies. It also has a data set that you can query. This is SQL-type querying of structured data using natural language.
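The "SQL-type querying of structured data using natural language" pattern Sikka describes is typically built by having a language model translate the user's question into SQL, then running that SQL against the actual data - so the numbers come from the database rather than the model's memory. Here is a minimal sketch of the execution side; the table, columns, and question are all hypothetical, and a canned mapping stands in for the model call (this is not Vianai's implementation):

```python
import sqlite3

# Toy earnings dataset standing in for real structured financial data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE earnings (company TEXT, quarter TEXT, revenue_m REAL)")
conn.executemany(
    "INSERT INTO earnings VALUES (?, ?, ?)",
    [("Acme", "Q1-2023", 120.0), ("Acme", "Q2-2023", 135.5), ("Globex", "Q1-2023", 98.2)],
)

def answer(question: str) -> list:
    # In a real system, an LLM would translate `question` into SQL;
    # here a hard-coded mapping stands in for that model call.
    canned = {
        "What was Acme's revenue in Q2 2023?":
            "SELECT revenue_m FROM earnings WHERE company='Acme' AND quarter='Q2-2023'",
    }
    sql = canned[question]
    # The answer comes from executing SQL against the data, not from
    # model output - which is what keeps the numbers grounded.
    return conn.execute(sql).fetchall()

print(answer("What was Acme's revenue in Q2 2023?"))  # [(135.5,)]
```

The key design point is that the model only produces the query; the answer itself is read from the database, which is one way to enforce "zero tolerance" on numeric hallucination.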

Jake Klein - CEO of Dealtale, a Vianai company - was helping Sikka pursue this type of functionality 18 years ago, when they were both at SAP. "But today, we can finally do this kind of thing," says Sikka. And the second scenario?

We also have a [feature] where you can upload your own document, and ask questions to it. But in all cases, the safety, the correctness, the zero tolerance for nonsense is the hallmark. As for Dealtale, our team did a lot of work in a particular area of AI called causality, which is around cause and effect relationships.

Causation versus correlation has given data scientists headaches for a long time:

Basically, there is causation, but there is also correlation. Generally, when we build a model from data, it is difficult to make a distinction between these. Sometimes people confuse correlation with causation.

Our team did a lot of pioneering work in that area, where you could say, 'Hey, I put this offer in front of John - based on the behavior of people in his cohort, is he likely to click on this or not?' Coming up with a causal understanding of that, and of the phenomena that caused that behavior, is what we were working on. So we acquired Dealtale as a way to collect the data on which we could build these causal models, and causal graphs.
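The correlation-versus-causation trap Sikka describes is easy to reproduce in a few lines. In this hypothetical simulation (the variable names and the "offer/click" framing are illustrative, not Dealtale's model), a hidden confounder - customer engagement - drives both whether someone sees an offer and whether they click. The two observed variables correlate strongly, yet neither causes the other:

```python
import random

random.seed(0)

# Hypothetical confounder: latent "engagement" (z) drives both
# seeing an offer (x) and clicking (y). x and y will correlate,
# but x does not cause y - z causes both.
n = 5000
z = [random.random() for _ in range(n)]
x = [1 if zi + random.gauss(0, 0.2) > 0.5 else 0 for zi in z]  # saw offer
y = [1 if zi + random.gauss(0, 0.2) > 0.5 else 0 for zi in z]  # clicked

def corr(a, b):
    # Pearson correlation, computed by hand to stay dependency-free.
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

print(round(corr(x, y), 2))  # strongly positive, despite no causal link
```

A model trained only on (x, y) pairs would happily predict clicks from offers; a causal graph that includes z reveals that intervening on x would change nothing - which is why causal modeling needs more than correlational data.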

Then OpenAI released a little thing called ChatGPT, and inspired the Vianai team. Thus the release of another app on the Vianai platform, Dealtale IQ. Sikka:

This product, Dealtale IQ, that we launched a couple months ago, basically gives marketing analysts the ability to ask any question they can think of - any question, not only the four or five scenarios that we had around causality.

As always with AI, the caliber and comprehensiveness of your data makes the difference:

On top of this data, [we pull in data] from systems like Salesforce, Marketo, Microsoft Dynamics, Google Ads, Facebook Ads - there are 19 or 20 systems like that. We built this single view of the customer from their customer engagements.

It's generally for smaller companies - companies that are 100% digital in nature. HubSpot is one of these systems that we read from. And now marketing analysts have the ability to ask any question they can think of. Customers just absolutely love it.

We couldn't get into this without getting into coding. Sikka has managed some pretty big development organizations over the years. Like several other noted enterprise technologists I've spoken with, Sikka sees generative AI as particularly well-suited - and perhaps disruptive - for programming:

One interesting consequence of this large language model technology is that it is particularly effective at coding. If you have the right brackets in place, the right safeguards, guardrails, etc., you can actually do a very good job of generating SQL, generating JSON, and even just generating code.

Where we are headed is: we can make the entire application dynamic; we can make the entire application virtual, and replace that with human language. You will see us make announcements on that front in the next weeks and months.
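The "safeguards, guardrails" Sikka mentions around generated SQL usually mean validating the model's output before it ever executes. Here is one common guardrail pattern, sketched under stated assumptions: the allowlisted table names are hypothetical, and a production system would use a real SQL parser rather than regular expressions. The check accepts only a single read-only SELECT over known tables:

```python
import re

ALLOWED_TABLES = {"earnings", "customers"}  # hypothetical allowlist
FORBIDDEN = re.compile(r"\b(INSERT|UPDATE|DELETE|DROP|ALTER|ATTACH|PRAGMA)\b", re.I)

def is_safe_select(sql: str) -> bool:
    """Guardrail: accept only a single read-only SELECT over known tables."""
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:                       # reject stacked statements
        return False
    if not stripped.upper().startswith("SELECT"):
        return False                          # reject anything but SELECT
    if FORBIDDEN.search(stripped):            # reject write/DDL keywords
        return False
    tables = re.findall(r"\bFROM\s+(\w+)", stripped, re.I)
    return bool(tables) and all(t.lower() in ALLOWED_TABLES for t in tables)

print(is_safe_select("SELECT revenue_m FROM earnings WHERE company='Acme'"))  # True
print(is_safe_select("DROP TABLE earnings"))                                   # False
```

The point of the pattern: model output is treated as untrusted input, the same way user input is, so a hallucinated or malicious query is rejected before it touches the data.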

My take

I have an axe to grind over the technical limitations of LLMs, which I believe are being overlooked. I got Sikka's take on that; I'll share that in my next installment. But there's no denying the level of adoption these tools have achieved.

Some assumed that the overriding concern in the aforementioned "AI pause" letter was about the looming risk of Artificial General Intelligence (we are nowhere near that), and the sensationalized risk of mass unemployment - if/when AI becomes sophisticated at human-level problem solving and cognitive functions. I don't believe we're anywhere near that either, due to the technical limitations of generative AI. So what was the purpose of the letter, then?

It really depends on the person - there is no one correct view on this. Many who signed put aside big differences - both with each other, and on the future of AI. But some of those who signed, such as my former college classmate Gary Marcus, an AI expert in his own right, did so because they believe even with generative AI's current limitations, it's still an incredibly powerful tool - wide open to misuse, unintended consequences and/or exploitation by bad actors. As Sikka told me:

You probably saw the news of the 40,000 chemicals that were composed using these things. Each of those 40,000 is at the same level of lethality as VX, one of the most lethal chemical compounds known to humanity... And that's just one example.

The regulatory environment is too far behind to hedge this properly. Sikka, with his firsthand role in how AI has evolved, is exactly the type of voice this conversation needs.

But there is also the fascinating upside, as people experiment with ChatGPT across a range of use cases, including their own productivity. Sikka has already integrated GPT into his personal workflows:

One great use of this is to prompt you, and get you out of a rut... I had to write something last night, and I was just too tired. I went to ChatGPT and I said, 'Hey, write me a draft of a letter.' Now when I finished writing that letter, not one sentence in that letter was from ChatGPT. But it got me started. And it gave me ideas. And it gave me a frame and it got me going in the end.

Given Sikka's passion for education, I know he is concerned about the impact of this technology on junior roles, now that AI can handle a bigger chunk of routine tasks: whether it's junior programmers, junior contract writers, or junior resource analysts, Sikka sees big changes coming.

You and I have talked about this before: the burden to educate. It is now even stronger than it has ever been.

Aspiring professionals need a viable path forward. I see that as a major shift in both educational requirements - and our approach to professional mentorship. But as Sikka points out, attitudes have to change also:

If those junior analysts don't want to learn what these things can do; if they don't want to use these in their world, then they will be disrupted. If they do use these things, they become far more productive, and they become much more effective as employees.

Plenty to think about - and new apps to try. Sikka also has advice for enterprises looking hard at AI apps and opportunities. I'll get into that in part two next week.
