Generative AI has become the talk of the tech town, with companies going all out to up their gen AI game.
That’s because as generative AI becomes more advanced, it can unlock endless possibilities for businesses — including hyper-personalization, data monetization, and seamless customer experiences.
A master learner, generative AI is powered by large language models (LLMs): it decodes patterns from the data it's trained on and produces content or predictions that mirror that data.
And gen AI is driving a paradigm shift across our economies, so much so that McKinsey predicts generative AI applications could deliver as much as $2.6 trillion to $4.4 trillion in value to the global economy annually.
But potential and actualization are very different things. What matters is how you actually bring both LLMs and the right data to power them to the right place at the right time, in a way that makes them truly useful. Done right, this can unlock new business capabilities and human potential; done wrong, it can produce frustrating, broken experiences.
So how do you make this technology truly work for your business? First, you need to bring together the capabilities of generative AI with your company’s unique data and needs.
Why you need domain-specific and real-time data
Now that gen AI is here, the question is, what do you build with it?
The obvious things to build with gen AI are implementations related to knowledge work and information retrieval. For example, a salesperson might leverage gen AI to ingest their company’s product documentation, gather insights into the company they’re engaging with, and strategize how their technology can provide tailored solutions to better align products with customer needs.
It’s truly a paradigm shift in terms of how applicable it is to almost any form of knowledge work you can think of. In fact, McKinsey expects generative AI to automate tasks that take up 60%–70% of people’s working hours.
But here’s what businesses should be mindful of. While LLMs are helping bring AI to the mainstream for many businesses (because of their reusable nature), for generative AI to be truly enterprise-ready and unlock competitive advantage, it needs the data to be contextualized, trusted, and current — without which it can’t drive meaningful value.
Today, companies are grappling with how to bring domain-specific knowledge to LLMs. There are two main approaches: prompt engineering (also known as in-context learning) and fine-tuning.
Prompt engineering, which entails optimizing the textual input to effectively communicate with large language models, is one of the most effective ways your business can ground a generative AI application in its domain-specific context (on top of the model's pre-trained general knowledge) and improve accuracy. It helps scope the spectrum of semantic meaning down to your own domain.
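To make this concrete, here is a minimal sketch of in-context grounding: domain-specific snippets are assembled into the prompt so the model answers from your context rather than its general training data. The helper name, template wording, and airline snippets are illustrative assumptions, not any particular product's API.

```python
# Minimal in-context learning sketch: ground a general-purpose LLM in
# domain-specific context by building that context into the prompt itself.
# All names and snippets below are hypothetical, for illustration only.

def build_grounded_prompt(question: str, domain_snippets: list) -> str:
    """Assemble a prompt that scopes the model's answer to the given context."""
    context = "\n".join(f"- {snippet}" for snippet in domain_snippets)
    return (
        "You are a support assistant for an airline. Answer using only the "
        "context below; say 'I don't know' if the context is insufficient.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

prompt = build_grounded_prompt(
    "Can I change my flight for free?",
    ["Economy tickets can be changed once without a fee.",
     "Same-day changes cost $75 on Basic fares."],
)
print(prompt)
```

The resulting string would then be sent to whichever LLM you use; the grounding lives entirely in the prompt, so no model retraining is required.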
Another way to augment LLMs with contextualized, domain-specific data is fine-tuning. This involves re-training pre-trained models on specific datasets, allowing the model to adapt to the specific context of your business needs. But when fine-tuning on a new task, the model's previous knowledge may be overwritten or forgotten, leading to a loss of performance on previously learned tasks.
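The forgetting risk mentioned above can be seen even in a toy model. In this deliberately simplified sketch (a one-parameter linear model, not a real LLM training loop), a model is "pre-trained" on task A and then fine-tuned on task B, after which its error on task A climbs back up:

```python
# Toy illustration of catastrophic forgetting: pre-train y = w*x on task A,
# fine-tune on task B, and watch task-A error rise again. Purely pedagogical;
# real fine-tuning involves far larger models and mitigation techniques.

def train(w, data, lr=0.1, steps=200):
    """Plain SGD on squared error for the model y = w * x."""
    for _ in range(steps):
        for x, y in data:
            grad = 2 * (w * x - y) * x
            w -= lr * grad
    return w

def loss(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0)]   # task A: y = 2x
task_b = [(x, -1.0 * x) for x in (1.0, 2.0, 3.0)]  # task B: y = -x

w = train(0.0, task_a)
loss_a_before = loss(w, task_a)   # near zero: task A has been learned
w = train(w, task_b)              # fine-tune on task B
loss_a_after = loss(w, task_a)    # large again: task A has been "forgotten"

print(loss_a_before, loss_a_after)
```

The single weight can only encode one task at a time, which is an extreme version of the trade-off fine-tuning navigates in practice.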
And it’s not just bringing domain-specific data that companies need to worry about — it’s how they bring that knowledge in real time. Here’s why.
Let's say you work in customer service for an airline. First, your problem domain is airline-related issues, so you want the LLM to answer questions pertaining to the airline industry, not, say, the automotive industry.

Second, how do you make it answer questions specific to your business? If you're United Airlines, you want it to answer questions about United's flights and the way United does business.
You have to figure out how to bring that type of information together. And it only gets harder from there.
For example, if you are using a gen AI-powered assistant that helps with flight delays, it's not enough for it to know information that's just broadly applicable to United — it needs to know United right now.
You need to bring all of this real-time information to the LLM at the moment of invocation, or inference, to make it smart. Because real-time data is what truly unlocks the intelligence of these automated systems.
How data streaming platforms can help
I see data streaming as the fuel that powers the LLM flux capacitor, because no LLM is smart without real-time, business-specific, highly contextual information — and that's exactly what data streams are.
One of the core value propositions of data streaming when it comes to real-time gen AI-enabled applications is that you are not constrained by where your data lives. Data streaming frees your data from the silos it lives in — making data easily available and accessible to gen AI-enabled applications.
Data streaming helps businesses bring the right data, at the right place, at the right time, by routing relevant data streams anywhere they're needed in the business — and all in real time.
Data streaming platforms enable real-time generative applications at scale by:
- Integrating disparate operational data across the enterprise in real time for reliable, trustworthy use.
- Organizing unstructured enterprise data with embeddings into vector stores that can then help engineer prompts.
- Decoupling customer-facing applications from LLM call management to provide reliable, reactive experiences that scale horizontally.
- Enabling LLMs, vector stores, and embedding models to be treated as modular components that can be substituted as technology improves.
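The bullets above can be sketched end to end in a few lines. This is a conceptual stand-in, not a production pipeline: the "stream" is a plain list, the "embedding model" is a toy bag-of-words function, and the "vector store" is an in-memory list (a real system would use something like a Kafka consumer, a hosted embedding model, and a managed vector database; every name here is an assumption for illustration).

```python
# Conceptual sketch: consume events from a stream, index them in a vector
# store as they arrive, then retrieve the freshest relevant context at
# inference time to build the LLM prompt. All components are toy stand-ins.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real system calls an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

vector_store = []  # list of (embedding, original event) pairs

# 1. Consume operational events from the stream and index them on arrival.
event_stream = [
    "flight UA123 delayed 90 minutes due to weather",
    "gate change for UA123 now departing gate B7",
    "catering loaded for UA456",
]
for event in event_stream:
    vector_store.append((embed(event), event))

# 2. At inference time, retrieve the most relevant context for the prompt.
question = "what is the status of flight UA123"
q = embed(question)
top = sorted(vector_store, key=lambda p: cosine(q, p[0]), reverse=True)[:2]
context = "\n".join(event for _, event in top)
prompt = f"Context:\n{context}\n\nQuestion: {question}"
print(prompt)
```

Because indexing happens as events arrive, the retrieval step always sees the business "right now", which is the real-time property the list above describes.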
This allows businesses to quickly deliver the reactive, sophisticated experiences that consumers have come to expect.
But access to data also has to be secure and trusted. Data streaming ensures you have good governance around data, so you know who created it and what its lineage is. Once you have that, you have a reliable data asset, or data product, that teams can consume with trust and secure access.
The best bit about data streaming is once that work is done, taking it somewhere new in real time is trivial because that's what streams do.
Building gen AI apps isn’t just another project
No matter what gen AI applications you build, don't treat them as just another project.
Don't amass diverse data feeds for the LLM by constructing custom technology or resorting to ETL and conventional batch integration methods. Don't fall into the trap of thinking, "How complex could this be? Let's periodically extract data and place it here."

Approaching it project by project is suboptimal and leads to issues as people start demanding fresher data.
The preferable approach is to enable data flow throughout the organization. Then, you can selectively incorporate valuable data as needed, experiment, adapt — treating it like modular building blocks.
True success comes from tackling the challenge from the opposite end — starting from the ultimate goal and working backward.
According to a recent report published by Gartner, product leaders evaluating the value of GenAI should:
- Develop large language model (LLM)-enabled features to add to your existing software solutions by identifying text- and data-heavy tasks where GenAI can augment human performance and reduce operational inefficiency.
- Avoid GenAI-washing by developing scalable, sustainable use cases that deliver on key performance indicators (such as time to market, cost savings and operational efficiency).
- Add GenAI to your product roadmap now, so as not to risk falling behind peers in the conversational AI and data & analytics markets.
- Plan for future GenAI opportunities in simulation-related use cases (beyond the current activity in drug discovery) by assessing GenAI’s unique ability to deliver business value in generating designs, predictions, digital twins and more.
Want to learn more about how data streaming can move your business forward? Read the “2023 Data Streaming Report: Moving Up the Maturity Curve” to see how data streaming is delivering massive real-world benefits.