The insurance industry — just like every other traditional business sector — is no stranger to technological innovation.
Car insurers are exploring ways to provide a more personalized service by monitoring individual drivers with the help of smartphone apps, black boxes and other Internet of Things (IoT) devices.
Health insurance providers are actively monitoring their customers through wearable tracking devices that keep a watchful eye on diet and exercise, in return for preferential premiums and offers.
Meanwhile, chatbots and information retrieval systems powered by Artificial Intelligence (AI) are being developed to improve and personalize the customer experience, in some cases with the aim of creating a claims process that can be completed without customers ever having to pick up a phone.
Behind the scenes, though, it’s a different matter. While there can be little doubt that technology is changing the insurance experience for end users, it’s another story among the legacy systems of the back office.
Information is the lifeblood of underwriting
With its roots firmly planted in a more traditional approach to business, underwriting continues to be stifled by its past. Like so many sectors in the finance world, underwriting has been held back by an overreliance on manual processes across multiple disconnected legacy systems.
Over the years, as new technologies have emerged, businesses have tended to hold on to their systems — customizing, integrating and patching to keep things moving.
This ‘make do and amend’ approach is starting to show its age. The situation looks worse still when set against the flexibility and speed of modern fintech challengers, which are able to operate without being weighed down by legacy systems.
The concern is that if established businesses fail to embrace data in a meaningful and modern way — if they’re not engaged in digital transformation — then they are simply not going to survive.
Data in motion is a driver for change
What’s required is a break from the past and a shift to real-time data streams, also known as ‘data in motion’: continuous flows of information that are processed and delivered as they are collected.
It’s an approach that takes the computer science underlying traditional databases and turns it on its head. Instead of passively storing data and only accessing it periodically, data in motion connects all the different data stores into a coherent whole, enabling real-time, always-on, available-now access to data in any system as it's generated.
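To make that inversion concrete, here is a minimal sketch in plain Python. It is not a real streaming platform, and all the names and numbers are invented for illustration; the point is simply that subscribers react to each event the moment it is published, rather than querying a store later.

```python
# Toy illustration of "data in motion": handlers see each event as it
# is generated, instead of a batch job polling a database afterwards.
# All names and figures are invented for illustration.

from typing import Callable, Dict, List


class StreamProcessor:
    """A toy event stream: subscribers react to every event as it arrives."""

    def __init__(self) -> None:
        self.subscribers: List[Callable[[Dict], None]] = []

    def subscribe(self, handler: Callable[[Dict], None]) -> None:
        self.subscribers.append(handler)

    def publish(self, event: Dict) -> None:
        # Each subscriber is notified immediately -- no polling, no batch window.
        for handler in self.subscribers:
            handler(event)


quotes: List[Dict] = []


def quote_engine(event: Dict) -> None:
    # React in real time: price a quote as soon as an application arrives.
    if event["type"] == "application":
        quotes.append(
            {"applicant": event["applicant"], "premium": 100 + event["risk_score"]}
        )


stream = StreamProcessor()
stream.subscribe(quote_engine)

# Events are processed as they are produced, not stored for later querying.
stream.publish({"type": "application", "applicant": "A-1001", "risk_score": 25})
stream.publish({"type": "application", "applicant": "A-1002", "risk_score": 40})

print(quotes)
```

In a production system the `StreamProcessor` role would be played by a platform such as Apache Kafka, with topics replacing the in-memory subscriber list, but the publish-and-react shape is the same.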
For instance, a potential client looking for insurance solutions can now visit a marketplace website, fill out a simple form about what they need, and immediately receive an array of current provider options and quotes, ready to compare.
Then, when the prospective client is ready to purchase, they can buy the policy through the website even though the policy may be underwritten by a separate insurance provider.
It’s an approach that can fundamentally reshape a customer’s entire journey. Whether it’s retaining critical data from a previous application, contextualizing a new complaint, or engaging on a customer’s favorite channel, this sort of ‘hyper-personalization’ can make a customer feel recognized and valued by an insurance provider.
Compared to the approach of previous decades — in which customers could be bumped from person to person, often repeating the same information — this is transformative.
Take Ladder for example, a life insurance company which designed its architecture around Apache Kafka. To improve the customer experience, make life insurance easier to get, and streamline the underwriting process, Ladder relies on a continuous flow of data from third-party providers to its AI underwriting engine.
As the company continued to grow, it wanted to improve scalability and reliability while reducing administrative overhead, so Ladder transitioned from self-managed architecture to a data streaming platform. This has allowed the engineering team to focus on adding value for customers, improve the underwriting process and achieve greater ROI.
Nick Hansen, Software Engineer and Platform Team Lead, explains:
“Our team mostly takes Kafka for granted these days. We’re not worried about outages, and we know we have an architecture that is easy to work with and will scale as Ladder continues to grow.”
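Ladder’s internals aren’t public beyond the Kafka detail above, but the underlying pattern, several third-party feeds merged into one time-ordered stream that an automated underwriting engine consumes, can be sketched in a few lines of plain Python. The providers, fields and rules below are invented for illustration; in production each feed would be a Kafka topic rather than an in-memory list.

```python
# Illustrative sketch of the pattern described above: third-party data
# feeds flow into a single time-ordered stream consumed by an automated
# underwriting engine. Sources, fields and thresholds are invented.

import heapq
from typing import Dict, Iterator, List


def merge_feeds(*feeds: List[Dict]) -> Iterator[Dict]:
    """Merge several time-ordered feeds into one stream, ordered by timestamp."""
    yield from heapq.merge(*feeds, key=lambda event: event["ts"])


def underwrite(events: Iterator[Dict]) -> str:
    """Toy rules engine: accumulate risk signals, then return a decision."""
    risk = 0
    for event in events:
        if event["source"] == "motor_vehicle_records":
            risk += 10 * event["violations"]
        elif event["source"] == "prescription_history":
            risk += 5 * event["flags"]
    # Below the threshold, approve automatically; otherwise refer to a human.
    return "approve" if risk < 30 else "refer_to_human"


mvr_feed = [{"ts": 1, "source": "motor_vehicle_records", "violations": 1}]
rx_feed = [{"ts": 2, "source": "prescription_history", "flags": 1}]

decision = underwrite(merge_feeds(mvr_feed, rx_feed))
print(decision)
```

The design point mirrors the one in the article: because the engine consumes a continuous merged stream, a decision can be produced as soon as the relevant third-party data arrives, rather than waiting for overnight batch loads.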
Being risk-averse should not hold back progress
The issue is, not everyone has bought into this new way of working.
This kind of customer experience can only happen if data is allowed to move freely and at speed. And it goes without saying that security has to meet all the standards of a highly regulated insurance industry.
Many insurers are, quite rightly, risk-averse.
For digital transformation to work — for data to be allowed to ‘do its stuff’ — it means letting go of established systems and processes. It means acknowledging that the systems that have worked so well in the past simply aren’t fit for purpose in the modern day.
In truth, people are only just beginning to understand what data streaming is and the impact it can have on their business.
It is happening, though, as organizations begin to realize that new systems don’t need to be the huge logistical challenge they once were, and that a complete rip and replace isn’t required.
In the case of data streaming, it’s not about replacing data silos, but instead breaking these down with real-time connectivity — tapping into enterprise applications and systems to access the right data at the right time.
Crucially, digital transformation in underwriting isn’t just about data; it’s about making that data available in real time. Both should be an absolute priority for underwriters everywhere.