Multi-tenant or multi-instance? ServiceNow exposes the debate

Brian Sommer, May 26, 2016
Multi-tenancy and multi-instance emerged as a discussion topic at a recent ServiceNow event. Brian Sommer unpicks the arguments.

During a keynote at ServiceNow’s recent Knowledge 16 user conference, COO Dan McGee discussed how ServiceNow offers a multi-instance solution instead of a multi-tenant cloud solution. I don’t know how much of that went over the heads of the audience but it did catch my attention. And, interestingly, since both approaches have their pros and cons, application software buyers should pay attention to the differences. It could matter to your firm.


In a multi-tenant cloud application, essentially all of the customers share the same copy of the application code. They also have their data stored in a single, shared (and often encrypted) database. When the vendor makes a new release available, there’s only one copy of the code to update and all customers get migrated to the new release simultaneously. Think of how Google changes its search screen imagery on Father’s Day and many other holidays. Every Google customer sees the new version of the screen the next time they access that website. No upgrade CDs/downloads are required. The new version just appears for all users.
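The shared-database model described above can be sketched in a few lines. Everything here (the `incidents` table, the tenant names) is hypothetical and not any vendor's actual schema; the point is simply that one table holds every customer's rows, and a `tenant_id` filter provides the only separation:

```python
# Illustrative multi-tenant sketch: all customers share one table,
# separated logically by a tenant_id column (hypothetical schema).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE incidents (tenant_id TEXT, title TEXT)")
conn.executemany(
    "INSERT INTO incidents VALUES (?, ?)",
    [("acme", "VPN down"), ("acme", "Email slow"), ("globex", "Printer jam")],
)

# Every query must be scoped to the calling tenant; the only thing
# keeping acme from seeing globex's rows is this WHERE clause.
rows = conn.execute(
    "SELECT title FROM incidents WHERE tenant_id = ?", ("acme",)
).fetchall()
print([t for (t,) in rows])  # ['VPN down', 'Email slow']
```

Note that the isolation here is purely logical: drop the WHERE clause and one tenant's query would return another tenant's rows. That is exactly the kind of exposure that physical separation is meant to rule out.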

There are a lot of advantages to the multi-tenant customer and to the vendor that licenses these applications. For the customer, fewer IT resources are needed to patch, maintain and upgrade the application (reduced TCO). End users get upgraded functionality regardless of what the IT backlog looks like (no delays in receiving new functionality). And, there are no additional hardware and systems software components that must be ordered, installed and maintained (lower TCO, lower capital consumption). For the vendors, there are markedly fewer versions of the software to support (often just the current version and maybe one prior) and far fewer technical architectures to support (often just one). Bottom line: vendors don’t need to charge as much to provide equal or better service to customers and customers can re-deploy a number of IT resources (people, capital, hardware) to more strategic initiatives.

In the private cloud and on-premises worlds, users can have their own copy of the application and their data is stored locally. When a new application version/release is available, the user (not the vendor) is on the hook to upgrade the software, do any database manipulations, convert files, etc.  This environment is great for those companies that want to: heavily customize a packaged application; maintain control over when (or if) the application is upgraded; or, keep their data and software under their total control.

This world is populated with on-premises users, private cloud implementations of cloud software and single-instance cloud solutions. Regardless of the exact deployment, the customers have a unique instance of the software and their data is physically (not just logically) separated from other customers’ data.

This world is expensive for all parties. Whenever a customer has an issue with the software, the vendor may need to recreate the environment that a given customer has. That could mean running an older version of the code on a non-standard relational database, on an operating system that may or may not have recent patches, with a unique combination of systems and security software all on an older piece of computing hardware. Of course, this assumes a vendor can actually create this configuration, has subject matter/technical experts in all of these disciplines, and, can emulate the data that is getting borked. This support environment creates long resolution times and frustrated customers.

Vendors are not too fond of this world anymore. All of those added costs slow their innovation efforts and suck away scarce capital. Just last week, one software executive shared with me that his last employer had to maintain 54 different financial accounting application suites that the company had acquired (via mergers and acquisitions) over the years. Worse, that vendor was supporting multiple versions and technical architectures for each of these. This support cost is a major drain on its profitability. It is also siphoning capital away from its innovation/R&D budgets, which makes it hard to compete with pure-cloud competitors. This sharp executive now works for a pure-play cloud application vendor, and he's loving his job again.


What ServiceNow is calling “multi-instance” is a variant of multi-tenancy but with some important twists. What is multi-instance (MI)?

MI is a public cloud deployment of an application where each customer runs their own copy of the software and has their data stored in their own dedicated RDBMS. It kind of sounds like a hosted, single-tenant solution, but ServiceNow, not the customer or a reseller, is on the hook to do software updates, file conversions, etc.
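To make the contrast concrete, here is a minimal sketch of the multi-instance idea. The names are hypothetical, and SQLite in-memory databases stand in for what would really be dedicated database/server combinations; the point is that each customer gets a wholly separate database, so there is no `tenant_id` column and no cross-customer WHERE clause to get wrong:

```python
# Illustrative multi-instance sketch (not ServiceNow's implementation):
# one entirely separate database per customer.
import sqlite3

def customer_db(name: str) -> sqlite3.Connection:
    # In a real deployment, `name` would map to a dedicated
    # database/server, not an in-memory SQLite database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE incidents (title TEXT)")
    return conn

acme = customer_db("acme")
globex = customer_db("globex")
acme.execute("INSERT INTO incidents VALUES ('VPN down')")

# globex's instance simply contains no acme data at all.
print(globex.execute("SELECT COUNT(*) FROM incidents").fetchone()[0])  # 0
```

Isolation here is structural rather than query-level: there is no single statement a developer could mistype that would leak one customer's rows to another.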


One very, very large ServiceNow customer I interviewed has all of their data on a separate server that is not shared with any other ServiceNow customers. Furthermore, the customer controls exactly when they will upgrade to the next version of ServiceNow.  Poke any big cloud vendor hard and long enough and you’ll often find that they make a couple of architectural concessions to some mega-huge customers. So, I wasn’t surprised to hear of this customer’s configuration of ServiceNow.

That said, ServiceNow does park each customer’s data in a separate database/server combination on their cloud. This has several implications (see below).

What ServiceNow is giving customers via this architecture is:

  • Flexibility in choosing when they will upgrade to new releases. When I pressed them about how much flexibility is really available to customers, their CTO indicated that virtually all customers are kept within a release or two of the current version. As a result, ServiceNow’s customers cannot fall behind materially on new functionality.
  • Physical (not just logical) separation of their data from that of other customers. Isolating a customer’s data to a specific device creates an additional barrier to prying eyes. While encryption technology can achieve much of the same result, this is a physical (not just programmatic) solution.
  • Reduced opportunities for large numbers of customers to be affected by a key system component outage. Readers should see the recent Salesforce.com incident report that shows how a faulty electric switch created havoc for a large number of their public cloud customers. By keeping customer data confined to discrete physical devices, only the customers on that one device are inconvenienced when a fault occurs. In many multi-tenant environments, a single customer’s data could be spread around on several devices and numerous customers can be sharing a single device.

Multi-instance tradeoffs

But this solution presents some tradeoffs, too.

For example, when ServiceNow does an upgrade to its applications, it will need to work through each customer’s unique data store and copy of the software. It’s not applying a single master change to one database; it has to repeatedly apply the same changes to each customer’s database instance. Likewise, it may have to selectively apply the software upgrade to specific customers and bypass others. Multi-tenant vendors don’t always migrate all customers at once either, but they tend to move large groups of customers to staging environments based on which upgrade weekend a customer has chosen.
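The per-instance upgrade chore described above can be sketched as a simple loop. The data model is hypothetical and real migration tooling is far more involved, but it shows the core tradeoff: the same schema change must be applied N times, and customers who have deferred the release must be skipped and tracked for later:

```python
# Hypothetical sketch of fleet upgrades in a multi-instance world:
# one migration, applied repeatedly, with per-customer deferrals.
def upgrade_fleet(instances, migration, target_version):
    upgraded, deferred = [], []
    for inst in instances:
        if inst["defer"]:
            # Customer has opted to stay on their current release for now.
            deferred.append(inst["customer"])
            continue
        migration(inst)                 # same schema change, applied N times
        inst["version"] = target_version
        upgraded.append(inst["customer"])
    return upgraded, deferred

fleet = [
    {"customer": "acme", "version": 3, "defer": False},
    {"customer": "globex", "version": 3, "defer": True},  # chose to wait
]
upgraded, deferred = upgrade_fleet(fleet, lambda inst: None, 4)
print(upgraded, deferred)  # ['acme'] ['globex']
```

The deferred list is what creates the support burden discussed next: every version still present in the fleet must remain supported, patched and testable.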

It gets more interesting when you realize that ServiceNow will have customers on a couple of versions of its application software (and possibly database schema). ServiceNow will have to support each of these active versions of the software regardless if the solution has one or a thousand customers running that configuration. That gets expensive and it gets tricky, too, when there are, for example, security patches that must be implemented into all of the different versions.

One big issue I see with this architecture is that it makes the incorporation of big data into an application software suite appear more challenging. All of this choice and database separation is great for traditional transactions, but how does it work when different customers want to supplement transaction data with their own specific IoT (internet of things), social sentiment or other big data? This big data is rarely predictable in size, doesn’t always fit well in an RDBMS and varies immensely among customers. To support big data, these cloud vendors will likely need to create a big-data-instance strategy that shows how alternate data stores/utilities (e.g., Hadoop, in-memory, etc.) will complement the existing ones. I believe it’s solvable, but as to how elegant it will be, I’m still wondering. Workday may be further along on this issue than other vendors. Salesforce is already dealing with massive datasets, as social sentiment information is critical to sales and marketing solutions. Other vendors are trying to bolt on a big data capability to their existing solution without any fundamental rethinking of their technical architecture or customer needs.

Another issue with this architecture is that it could hamper ServiceNow’s ability to provide other aggregated/anonymized benchmarks in the future. For now, the benchmarks they offer cover more systemic metrics (e.g., uptime) and not items like key process times per customer. How they’d easily collect these other data points when customer data is stored in thousands of discrete databases with potentially different data models over multiple servers seems challenging.

The best part of multi-instance

No one seems to write about the challenges present in doing backup/recovery in cloud applications. I bring it up all of the time in software selections I do.

In a nutshell, when you use a cloud application solution, your data is protected by the vendor via a series of tools. They can detect a problem, flip the service to a failover center (that was mirroring your transactions), and, in the event of a catastrophic failure, even recover your (and every other customer’s) data. But what they don’t offer is the ability for just your firm to roll back and recover your information easily.

Why isn’t that offered? To do that in a multi-tenant cloud world, the vendor would have to take down the server/database with your data. Because that server contains your firm’s data as well as that of other customers, ALL customers would be forced offline while your company did a restore. This is not a desirable thing as inconveniencing potentially thousands of customers could hurt the cloud vendor’s image. In some cases, the other affected customers might have to reapply their transactions, too.

Recovery is also tough to do as some solutions run in-memory rather than off an RDBMS. And, just to make it even tougher, you (the customer) only get to back up a flat file of transactions, not an image copy of the production database. Without that database backup, you would have to create a new instance of the software and re-apply every single transaction your firm has made since it moved to the cloud service. That’s right: the recovery would span inception-to-date, not a restore from the last known clean version of the data.

If some malcontent in your firm blows away your cloud CRM’s customer database (or some less-than-brilliant clerk accidentally deletes a key accounting table), your options to recover are few, expensive and time-consuming. Since every solution is different, be sure to check this issue out in detail with your cloud software vendor.

In ServiceNow’s case, individual customers can halt their instance, do backups, and recover their data to a known point in time. Their actions would not affect other customers. Exactly what tools a customer (or ServiceNow) uses to support this, I do not know.
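Assuming per-customer instances and per-customer snapshots (the mechanics below are illustrative, not ServiceNow's actual tooling), the recovery advantage looks like this: a point-in-time restore takes only the affected customer's instance offline, while every other instance keeps serving traffic:

```python
# Hypothetical sketch: restoring one customer's instance to a
# known-good snapshot without touching any other customer.
import copy

instances = {
    "acme":   {"online": True, "data": ["t1", "t2", "t3"]},  # t3 is bad
    "globex": {"online": True, "data": ["t1"]},
}
backups = {"acme": ["t1", "t2"]}  # acme's last known-good snapshot

def restore(name):
    inst = instances[name]
    inst["online"] = False                    # only this customer goes down
    inst["data"] = copy.deepcopy(backups[name])
    inst["online"] = True

restore("acme")
print(instances["acme"]["data"])      # ['t1', 't2']
print(instances["globex"]["online"])  # True, never interrupted
```

Contrast this with the multi-tenant case described above, where a restore of the shared database would take every co-located customer offline with you.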

My take

Years ago, the debate would have been different. The on-premises fan boys and girls would have been attacking the multi-tenant cloud solutions. Those were interesting times when emotion and hyperbole were over-running the facts. But now, most businesses have something in the cloud and even financial institutions are starting to go that way, too. It looks like the cloud will be with us a long time after all.

But which style of cloud architecture will rule the day?

I’ve heard variants of the multi-instance approach before. Unit4 Agresso has had a similar capability for ages. At one time, SAP announced Mega-Tenancy when it launched its Business ByDesign product line. The line between cloud single-tenancy and multi-tenancy is indeed a blurry one. But at least it is all cloud. [Editor's note: see Phil Wainewright's discussion of Multi-tenant, multi-instance: the SaaS spectrum.]

That said, I think different approaches to cloud architectures are a good thing as they expose new ideas and options to buyers. They also force vendors to continue their innovation efforts.

Is any individual approach right? No. While I believe that most customers (and vendors) will find the economics behind a real multi-tenant solution to be the best overall (assuming all other factors are constant or equal), I’ve also been unimpressed with the backup/recovery options that some vendors provide here. So, if multi-instance brings some of the best of both worlds together, I’m okay with that.

What I don’t find acceptable are cloud environments where a vendor is supporting too many versions of its product and shows no backbone in standing up to large customers. Too many concessions to larger customers mean that the vendor’s cost profile goes up, innovation slows, customer service suffers, etc. Balance is the key here, as the advantages of public cloud solutions go away when it’s no different from a bunch of single-tenant implementations of various versions of a vendor’s solution.

In the end, the debate is not for me to decide (or referee). It is an issue that prospective customers will decide in the open market.
