To tackle cloud integration, get a handle on data gravity
Jon Reed: Tell us about "data gravity," and why that concept is central to your approach.
Gaurav Dhillon: There is a big change afoot, which we address by asking, "Where does your data have gravity?" There are two things going on. First, the application portfolio of the modern enterprise is hybrid: in most large enterprises, there's a combination of on-premise systems and cloud services. That mix causes "data gravity" to be split, and because of the amount of data now available from website exhaust, there is an imperative to tackle that cloud data with analytics - using modern scientific techniques, what we now call data science - to see the future first.
That leads us to the second point: Companies are saying to us, "Well, it's all good that I have business intelligence, and ETL, but I really want to be able to also look at the future. Why can't I have some of that?" The answer is that they can, but that's where data gravity comes into play.
Reed: Why is that?
Dhillon: For most enterprises, your data is likely to be hybrid. If you are focused on customer-facing functions, then you have a lot of exhaust data in the cloud. That means some of your analytics probably runs in the cloud, and some runs on-premise. If you are a company that collects information from sensors, or you handle sensitive information - fraud protection, say - with heavy regulatory requirements, then chances are your data lake is going to be on-premise. To do predictive analytics right, you need to figure out where your data gravity lies, and integrate your cloud and on-premise systems accordingly.
Reed: But SnapLogic doesn't care where your "data gravity" is.
Dhillon: That's true. Our control plane is in the cloud, but as for the data plane, it depends on the customer. For example, if you are doing integration between, say, your Hadoop-based data lake and your financial systems or other behind-the-firewall data, then you probably want to run those integrations on-premise, in your data center.
Because data has gravity, you should be free to choose the deployment you want. The way we approach it, people aren't locked in. They can add more nodes where they need them. They can rebalance their integration points as the data gravity changes. Factors include the volume of data, the number of data sources, and how fast you need the output. Then you do the math on the data momentum, and typically, most enterprises divide it down the middle - half in the cloud, half on-premise.
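The factors Dhillon lists - data volume on each side, regulatory constraints, and where the bulk of the data already lives - can be thought of as a simple scoring exercise. As a purely illustrative sketch (this is not SnapLogic's logic; the weights and thresholds below are invented for the example):

```python
def placement(on_prem_gb: float, cloud_gb: float, regulated: bool) -> str:
    """Toy heuristic: suggest where integration nodes should run,
    based on where the bulk of the data already lives ("gravity")
    and whether regulatory constraints pin data on-premise."""
    if regulated:
        return "on-premise"          # compliance overrides volume
    total = on_prem_gb + cloud_gb
    if total == 0:
        return "hybrid"
    cloud_share = cloud_gb / total
    if cloud_share > 0.75:
        return "cloud"               # gravity is clearly in the cloud
    if cloud_share < 0.25:
        return "on-premise"          # gravity is clearly in the data center
    return "hybrid"                  # split down the middle


print(placement(on_prem_gb=50_000, cloud_gb=4_000, regulated=False))   # on-premise
print(placement(on_prem_gb=2_000, cloud_gb=30_000, regulated=False))   # cloud
print(placement(on_prem_gb=10_000, cloud_gb=12_000, regulated=False))  # hybrid
```

In practice the decision also weighs data source count and latency requirements, and - as Dhillon notes - it should be revisited as data gravity shifts over time.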
iPaaS - do we really need another acronym?
Reed: In our talk last week, you brought up the acronym iPaaS to emphasize the need for a cloud integration platform ("integration platform as a service"). What kind of acceptance are you getting for your new acronym? Do you think iPaaS will stick to the wall?
Dhillon: (laughs) It's really not about the acronym as much as the awareness. The term is still not as well understood as it should be. iPaaS still has the feel of a successor term to ESB, rather than a successor to all integration.
We believe this is really a "connective tissue" problem. You shouldn't have to change how you load data, or use multiple integration platforms, whether you are connecting SaaS apps or connecting an analytics subsystem. It's just data momentum. You have larger, massive data containers, sometimes moving more slowly into the data lake. In the cloud connection scenario, you have lots of small containers coming in very quickly. The right product should let you do both. That's where iPaaS comes in.
Reed: So your agenda behind pushing iPaaS is really to bring integration under one umbrella.
Dhillon: Right. We think it's false reasoning to say, "You have to pick an ESB for this, and ETL for that." We politely disagree. Hopefully the iPaaS term continues to grow. In the end, whether we end up calling it integration, or data integration, time will tell.
Easier integrations require a better UX
Reed: We should touch on UX, because your view of integration as a technical AND business problem means you need a UI that doesn't intimidate people.
Dhillon: Right. The UX that we provide opens up a self-service model. If you can use Visio, chances are you can manage SnapLogic integrations. This is why the product is so attractive to enterprises that want to get out of manual labor and get into architecture. You can elastically scale the integration platform up or down. We have 300-plus "Snaps," so you are covered on the vast majority of endpoints you are likely to encounter.
Jon Reed notes: I challenged SnapLogic to provide an example of an integration-made-easy UI from their product. Here's the screen shot:
Craig Stewart, Sr. Director Product Management at SnapLogic provided the following narrative on the screen shot:
This is a view of the SnapLogic pipeline - in this case integrating data from Twitter with Salesforce accounts and then loading filtered data to Amazon Redshift for analytics. Pipelines are easy to build in the cloud-based Designer, and can synchronize cloud and on-premise applications in a multi-step workflow as well as perform big data integration tasks to power modern analytics. Pipelines can be scheduled, run on events or called programmatically as REST-based APIs.
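Stewart mentions that pipelines can be called programmatically as REST-based APIs. As a hedged illustration of what invoking such an endpoint might look like from client code - the host, URL path, auth header, and payload below are hypothetical placeholders, not SnapLogic's documented API:

```python
import json
import urllib.request


def build_trigger_request(base_url: str, task_path: str, token: str,
                          payload: dict) -> urllib.request.Request:
    """Prepare (but do not send) an HTTP POST that would trigger a
    pipeline exposed as a REST endpoint. All names used here are
    illustrative placeholders, not SnapLogic's documented API."""
    url = f"{base_url.rstrip('/')}/{task_path.lstrip('/')}"
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",  # hypothetical auth scheme
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_trigger_request(
    "https://example.invalid",                    # placeholder host
    "/pipelines/twitter-to-redshift/run",         # placeholder task path
    token="MY_TOKEN",
    payload={"hashtag": "#cloud"},
)
print(req.full_url)
# urllib.request.urlopen(req) would actually send the request
```

The point of the REST option is that a scheduled or event-driven pipeline can also be woven into other systems as an ordinary web service call.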
Wrap - what lies ahead for cloud integration?
Reed: Last week, we talked about your Adobe project and why they chose SnapLogic. But what benefits are your customers seeing?
Dhillon: There are two buckets of benefits our customers experience. The first bucket is the strategic bucket. You get to your SaaS benefits faster - i.e., your time to solution picks up. You're more agile. And agility pays. Whether it's email, calendaring, or full-blown CRM, the reason to have a cloud deployment is time-to-solution. Faster time-to-solution is a big payback for any company, particularly one that is growing quickly and has no time to waste. They just want to get it done fast. They don't want to slow down their business people.
I think the other big payback for our customers is in buying and installing the solution, because we get it done so much faster - through a combination of a really simple, strong user experience, and a deployment model that works in multiple ways: a hybrid model for wherever your data resides. Then, the end point is having a choice of hundreds of Snaps that you can connect to this, and connect to that. What is a one percent improvement in agility worth to a 10 billion dollar business? It's probably worth a hundred million dollars. So, the payback gets to a pretty amazing scale.
Reed: It's your job to see around a few corners. Where do you see cloud integration going next?
Dhillon: Going from a human-intensive IT integration model to a modern deployment model has very distinct elements to it: you simply don't need as much manual labor. You are looking at leverage from a platform that gives you a lot of payback. I've been doing this for a long, long time, all the way back to the early 2000s, when I used to draw these pictures of a data tsunami, and the cloud, and so on. What I can say now is - stay tuned. And pay attention to the singularity.
The thing with singularity is that it sneaks up on you. What those of us in this industry should be noticing is that the singularity has a breathtaking aspect to it when the numbers get big. When you look at an enterprise where cloud, SaaS, mobile and social activity doubles, amazing things happen.
When the amount of data we have under management goes from terabyte to petabyte, amazing things happen. Singularity is something we should wake up to, because these data shifts are taking us from "nice to have," to "must have." This is the thing I would keep your eye on.
Image credit: Screen shot provided by SnapLogic and used with permission - all rights reserved. Feature image: detective on white © themanofsteel - Fotolia.com
Disclosure: Diginomica has no financial relationship with SnapLogic; I was approached by SnapLogic PR and thought the topic was compelling. Salesforce, SAP, Workday, Oracle and NetSuite are all diginomica premier partners as of this writing.