Main content

Can enterprise tech redeem itself with generative AI? Vishal Sikka on doing AI right, and avoiding generative AI snake oil vendors

Jon Reed Profile picture for user jreed August 31, 2023
In the second part of my interview with Dr. Vishal Sikka, founder and CEO of Vianai Systems, we discuss a novel question: can enterprise tech redeem itself with generative AI? Sikka also shares the top questions customers should ask - to separate solid generative AI vendors from pretenders.

Vishal Sikka of Vianai Systems
(Vishal Sikka of Vianai Systems)

In my prior discussion with Dr. Vishal Sikka, founder and CEO of trending AI startup Vianai Systems, we talked about the highs and lows of the AI rollercoaster - and Sikka has seen plenty.

We got into why Sikka signed that widely misunderstood letter which warned of the dangers of AI, and urged that infamous pause. 

Of course, we talked about the problems of LLMs, including hallucination and explainability: Zero tolerance" for hallucinations - Dr. Vishal Sikka on how Vianai builds AI applications, and the mixed emotions of the AI hype cycle. We covered how Vianai addresses that in the context of their next-gen AI applications, such as:

  • hila, Vianai's recently-announced generative AI solution, billed as "Your new financial research assistant." As Sikka told me: "hila can help you quickly find answers in earnings transcripts, and we're adding new data all the time." (You can try free with sign up).

Generative AI - enterprise tech's chance for redemption?

But there is more. What is Sikka's advice for customers assessing generative AI vendors? After all, enterprise success with generative AI is a totally different set of hoop jumps than experimenting with ChatGPT on your own time.

My partial list of generative AI enterprise obstacles: managing risk, problems of customer data/pricing, black box/explainability, mitigating the technical limitations of LLMs, difficulties with using third party LLMs and customizing them with customer training data while honoring data privacy and opt-outs, implications for customer pricing, use case pros/cons etc.

That's an imposing list of obstacles, and it's hardly a complete list. And yet, as I told Sikka, I see generative AI as the chance for enterprise tech redemption. It seems like consumer tech has been out in front of enterprise innovation for decades now, with smart phone app culture as exhibit A. But generative AI sorely needs responsible guardrails, and all the factors I cited above - isn't that what enterprises excel at?

As I told Sikka:

What the enterprise imposes upon AI, in my view, is exactly what AI needs right now, which is things like security, legal oversight, ethical oversight, proper use of data. So for example, one of the things Vianai addresses that ChatGPT doesn't is: different types of data sources that are going to give you a cleaner result.  I think you also, in these tools, address some of the explainability problems, at least in terms of where the information was sourced.

Sikka agrees: "This is an opportunity for enterprises to show the lead to trustworthy AI, responsible AI." Another potential component in better AI? Some form of reinforcement learning. OpenAI did a brute force version of this as well, to put some bias/bigotry "guardrails" on ChatGPT, though not without labor sourcing controversies. But enterprises, in theory, could use iterative model training to enable domain experts to fine tune the desired results. As Sikka explains, these approaches can also build user trust:

Reinforcement learning is one of the ways to do that. The other part is simply conversational. My mentor used to have this wonderful trick where he would ask you, 'Let me play this back.' So let's say you asked a complicated question. He would say, 'Let me play this back; did you mean to ask this question?' You  might say, 'Make a correction to it.' And then he would answer the question. Of course, it was also a kind of a trick, because it gave him time to think.

We do that [at Vianai] when we are not sure of the user's intent. So if you ask a question, which involves joins, or complex inner joins, or something complicated across multiple tables, we will then put it back in front of you saying, 'Hey, did you mean this?' And the user will say, 'Yes, this is what I meant,' or they will correct it. So that is a very simple way we have of disambiguating, or clarifying the intent of the user.

Similarly, when we provide an answer, whether it is on text-based data or on structured tabular data, we put it in front of the users saying, 'Here is where we got the data from; this was the query that was run, and this is what the answer is.' It's not easy to do that. But we do that  - and that is necessary for enterprises, in order for them to trust the results that you are putting in front of them.

Advice to enterprises - how to avoid generative AI snake oilers

Which leads us to enterprise customers - what is Sikka's advice? Judging from my inbox, every software vendor known to humankind now has a generative AI tool. Every startup, every incumbent vendor is supposedly right on the cutting edge.

So how can a customer better understand where the real stuff is, and discern the heavyweights from the pretenders? Obviously, part of the answer is  hands-on demos. But how does Sikks advise customers who want to avoid the generative AI snake oil factor? As he told me:

Last week alone, I spoke to probably 50 or so CEOs. On Friday I gave a speech; I always give them a checklist of things that they can ask vendors: 

  • What data is your language model trained on?
  • Can you ensure that there is no toxic data in there?
  • What is your position on hallucination?
  • Do you have the ability to explain the answers, or at least cite the references to the answers that you provide?
  • Does my data leave the enterprise?
  • Can my enterprise data be used for training other models?

 On the security side, a 'prompt injection attack' is now on the threats list:

There is a new emerging thing called a prompt injection attack, where the kinds of prompts that you pose to the system can be hacked into, and some nefarious element could put their prompts in the middle of your prompts. Then the answers you are getting include these other things that people injected [into your data]. So there are basic safeguards like that - without them, enterprises should not use these products.

My take - on job impact, AI education, and what's next for Vianai Systems

One of the best things about talking to Sikka? Unlike some AI execs, Sikka can shift on a dime from the business potentials of AI, right into cultural concerns. In our last installment, Sikka shared his conviction on the need for global AI education, and accounting for AI's jobs impact. Sikka has advised companies on how to frame ethical AI policies. Though such policies can admittedly be more symbolic than real at times, we have to start somewhere.

AI can be a democratizing force, by providing global access to valuable data such as medical info. But: Sikka is equally concerned about curbing AI's potential for inequality. He cites his wife Vandana Sikka's work:

My wife is on the board of for the last nine years, and they teach people computing and coding and all that. She has this statistic: basically half a percent of the world's population can program. If you're generous, it comes close to 1%...  If you think about AI, the situation is dramatically worse than that. We basically have less than 2 million people who could build you a model or something like that. And the number of people who could run an AI system, or who could operate a machine learning platform is less than 100,000 - 100,000 out of 8 billion people in the world.

Against that statistical backdrop, it's impossible to argue against the need for comprehensive education (I have pestered my alma mater about this so often, they probably have a filter for my emails). But the issue of job loss is more complicated - and loaded with counter-productive hyperbole. To wrap our heads around the AI job loss factor, we must first agree on the power of generative AI - but also its limitations. This generation of LLMs are not cognitive systems. Sikka, who has a far deeper grounding in the history of AI, sums it up:

No, [LLMs] cannot reason; it's nowhere close to that... They are incredibly good rote learners, learners of what has been seen before - and of repurposing that.

Limitations aside, these are still potent tools. Although I will need some convincing that "zero hallucations" with LLMs are possible, as I told Sikka, that's a welcome/bold claim, because if customers have any such issues, Sikka's team will be hearing about it.

I won't rehash Sikka's views on AI's threat to junior-level employees across industries, but Sikka sees vast potential for generative AI in areas such as content generation, support/digital assistance, and, especially in computer programming. And yet, today's generative AI impact is more about an imperfect-but-very-useful assistant, than it is about massive job loss. As per Sikka:

It's a very powerful tool. Now, does it replace jobs? There are two ways to look at it. AI does a good enough job of writing code, for example, or writing initial drafts of research reports, essays, contracts, technical documents - any of these kinds of things. However, in almost 100% of the cases, you need to provide scrutiny on what it does, because you carry the accountability. You can look at it and say, 'Okay, these parts are correct. These are not correct. Let me fix that, and then I'm done.' So if you are lucky, you have saved 30/40/50 percent of your time.

How far can we push the boundaries? Time will tell, but Sikka found his rhythm with Vianai. Having seen Sikka work in large enterprise roles, it's clear that startup life suits his energy: make ideas into products - fast. In the months to come, we can expect more product news from Vianai - and more conversational AI applications.

Traction is coming. Right before presstime, I heard from Vianai's PR firm that "Vianai announced that the company closed its first hila Enterprise deal with one of the largest and most respected banks in Asia." We need more field lessons from large scale LLM deployments; I look forward to the project updates.

A grey colored placeholder image