At the recent Sapphire conference in Orlando, one of the main points of interest was the support of multi-cloud functionality in the SAP Cloud Platform (SCP). As I followed the coverage of the announcement, I got the feeling that SCP’s support of this functionality was often depicted in a superficial fashion as some sort of stamp of approval, a checkbox in a list of mandatory capabilities for a modern Platform as a Service (PaaS). What I missed was deeper coverage of the topic. What exactly does this functionality bring to SAP customers? This blog will explore “multi-cloud” support in general as well as how it specifically relates to SAP’s cloud strategy.
What is a “multi-cloud” strategy and why is it important?
Let’s start with a quick definition of “multi-cloud” from Rackspace:
You rely on multiple cloud providers — such as AWS, Microsoft, OpenStack and VMware — for multiple applications. You may also rely on homegrown clouds, telecom clouds and other third-party clouds.
What is the difference between multi-cloud and hybrid cloud strategies?
Remember, a Multi Cloud solution is one where different Clouds from different providers are used for separate tasks. Hybrid Cloud is more like creating a solution that uses more than one Cloud or server option to perform a task that accesses both. One could say that in a Hybrid environment, the data gets intermingled between two different hosting infrastructure types, while in Multi Cloud, you are simply using multiple Clouds. [totalproductmarketing.com]
What are the benefits of this strategy?
Multi-cloud platforms provide a number of advantages over traditional single-vendor strategies, and chief among them is the ability to leverage the most appropriate unique cloud-services from multiple different providers at any given time. Not only does this allow enterprises to remain dynamic, but enables them to reduce cost and stream each cloud to one that best fits the business. The ability to prioritize different business areas in accordance with their broader strategy is a benefit unique to multi-cloud solutions. [betanews.com]
One of the important benefits of multi-cloud functionality is that, in theory, developers can move quickly to a different infrastructure with limited effort. Yet this strategy is often difficult to achieve and requires a high degree of discipline as well as an excellent governance model. Using provider-specific functionality is always a temptation that reduces the desired portability. Mario Szpuszta (Principal Software Development Engineer for Technical Evangelism, Microsoft HQ DX) describes this dilemma in his blog about SCP running on Azure.
When you’re running on Azure, it makes sense to use Azure-native services if you want to. At the end of the day, that’s when you can unleash the full power of running on a specific cloud platform, right:)? And… you still could implement your “portability” through IoC/DI at the application level to have a more effective/efficient integration into native cloud services but still stay portable to a certain extent.
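The IoC/DI approach Szpuszta describes can be sketched in a few lines. Everything below is illustrative, not taken from any SAP or Azure SDK: the application codes against a small provider-neutral interface, and the concrete cloud-specific implementation is injected in one place at startup.

```python
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """Provider-neutral interface the application codes against."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryBlobStore(BlobStore):
    """Portable stand-in; a hypothetical AzureBlobStore or S3BlobStore
    would implement the same interface using the provider's SDK."""
    def __init__(self):
        self._data = {}
    def put(self, key: str, data: bytes) -> None:
        self._data[key] = data
    def get(self, key: str) -> bytes:
        return self._data[key]

class InvoiceArchive:
    """Application code receives the store via constructor injection,
    so swapping cloud providers only touches the wiring, not the app."""
    def __init__(self, store: BlobStore):
        self._store = store
    def archive(self, invoice_id: str, pdf: bytes) -> None:
        self._store.put(f"invoices/{invoice_id}", pdf)

# Wiring happens in exactly one place; this line is all that changes
# when moving from one provider's storage service to another's.
archive = InvoiceArchive(InMemoryBlobStore())
archive.archive("4711", b"%PDF-...")
```

The trade-off Szpuszta mentions is visible here: the richer the provider-specific features you surface through the interface, the harder the interface is to keep neutral.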
The fact that SAP provides SCP as a Managed PaaS gives it distinct advantages in this regard.
When using any of the backing services from the SCP market place, you indeed use SAP-deployed backing services. That gives you a higher level of portability since SAP and not Azure controls the versions and configurations used for the technologies that are the foundation for those backing services. [mszCool.com]
This issue isn’t specific to the multi-cloud scenarios for SCP but is also evident in the underlying Cloud Foundry (CF) platform. In a video with Pivotal and Google Cloud Platform engineers, there is a description (I’m paraphrasing a bit) that CF runs great on many cloud platforms while also exposing what makes each platform unique.
There is often a fundamental conflict between multi-cloud strategy’s focus on avoiding single-vendor lock-in and the desire to exploit the unique characteristics of the used providers. Once you start using the unique characteristics of a platform, then portability inevitably decreases. Therefore, the governance model of such scenarios is critical to avoid losing the benefits of a multi-cloud strategy.
Even though developers often desire portability between IaaS providers, what factors influence their selection of a specific provider? Pivotal Cloud Foundry (PCF), Pivotal’s commercial CF offering, runs on all three main IaaS providers, and its published list of benefits for selecting Google Cloud Platform (GCP) is a good example of the decision support available for such choices.
- Rapid VM provisioning for scaling the platform to help meet your developer and user needs.
- Use Google's load balancer to help scale your apps to 1M+ requests in seconds without the need for pre-warming.
- Create multi-region global PCF deployments by tying them together with Google's HTTP(S) load balancer.
- Save up to 30% with sustained-use discounts for virtual machines that run for a full month.
- Save up to 80% compared to regular instances with preemptible VMs for applications that can handle virtual machine restarts.
- Fully tailor your PCF deployment’s VMs with Google Compute Engine custom machine types.
This list is heavily focused on infrastructure-related aspects. Since SCP is a Managed PaaS, these advantages benefit SAP directly: its operations teams can exploit them to provide a more efficient service offering for customers. In a similar fashion, a recent presentation from Dr. Andreas Binder (Chief Cloud, Head of Development SAP Hybris) provides operational details about how Hybris views the benefits of using AWS for its YaaS offering.
These infrastructure-focused benefits of one platform over another may be less relevant for the individual SCP developer, who is only indirectly impacted since SAP remains responsible for the underlying infrastructure.
Cloud Foundry service brokers
In CF-based architectures, once a decision is made to exploit a platform’s unique characteristics, then the usual manner to meet such requirements is via service brokers (The CF-supported Open Service Broker API provides a generic framework to support such scenarios).
A service broker is the part of Cloud Foundry that connects platform applications to infrastructure services. Infrastructure services are typically things like databases, message queues, storage systems, and—now!—an enterprise knowledge graph.
A service broker does not run the service itself; rather it gathers resources on behalf of its clients. Among other things, its duties include creating services, managing credentials to access those services, tracking the services, and disposing of them when they are no longer needed. [Stardog.com]
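To make the broker’s role concrete, here is a minimal sketch of the provisioning call a platform makes to a broker under the Open Service Broker API. The broker URL, service and plan IDs, and GUIDs below are placeholders; the request is only constructed, not actually sent.

```python
import json
import uuid

def build_provision_request(broker_url, service_id, plan_id, params=None):
    """Build the PUT request a CF platform would send to a service
    broker to provision a new service instance (OSB API v2 style)."""
    instance_id = str(uuid.uuid4())  # instance GUID chosen by the platform
    url = f"{broker_url}/v2/service_instances/{instance_id}"
    body = {
        "service_id": service_id,         # from the broker's catalog
        "plan_id": plan_id,               # e.g. a free or standard plan
        "organization_guid": "org-guid",  # placeholder CF org
        "space_guid": "space-guid",       # placeholder CF space
        "parameters": params or {},       # broker-specific options
    }
    return url, json.dumps(body)

# Hypothetical IDs that would come from a broker's catalog endpoint
url, body = build_provision_request(
    "https://broker.example.com", "postgres-svc", "small-plan")
```

Because every broker speaks this same protocol, the platform (and the developer issuing a `cf create-service`) interacts identically with an AWS, GCP, or Azure broker; only the catalog contents differ.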
An example demonstrating the ease of using service brokers in the SCP multi-cloud offering is provided by Szpuszta in his blog. Another description of using GCP’s Service Broker in PCF shows the power of having a provider-independent API to easily access such services.
The big three IaaS providers all have service brokers for CF that expose platform-specific functionality.
- Amazon RDS for PostgreSQL
- Amazon S3
- Amazon RDS for MySQL
- Amazon Aurora
- Amazon RDS for SQL Server
- Amazon DynamoDB
- Amazon RDS for Oracle Database
- Amazon RDS for MariaDB
- Amazon SQS
- Google Cloud Storage
- Google BigQuery
- Google PubSub
- Google Cloud SQL
- Google Machine Learning APIs
- Google Bigtable
- Google Spanner
- Stackdriver Debugger
- Stackdriver Trace
- Azure Storage Service
- Azure Redis Cache Service
- Azure DocumentDB Service
- Azure Service Bus and Event Hub Service
- Azure SQL Database Service
- Azure Key Vault Service
Note: Some of the service broker lists originate from PCF, but the source code should be present on GitHub and thus available to others as well.
The use of service brokers in such CF environments also provides advantages to the IaaS providers themselves and increases the “stickiness” of their platforms.
Additionally, for the service provider this gives them an on-ramp to new customers. People are much more likely to try something out if the pain of installing it and managing it is removed from the picture. Many of the CF services that are offered in publicly hosted CF offerings (such as Bluemix) offer a “free tier”. This means that with minimal effort people can provision and “kick the tires” of a large variety of services. If they like them and want more than what the free tier offers, then money will be involved. The service provider has just increased the chances of gaining a potential new customer. [IBM.com]
Although it might seem counterintuitive, the use of such service brokers (if used correctly) may also improve portability.
For the bulk services (blobstores, VMs, networking, queues, databases etc) the big 3 support, you can usually have some degree of migration without too much pain, if you're using one of the platforms correctly.
For example, on migrating a Cloud Foundry platform from AWS to GCP, you can switch your service brokers from AWS to GCP. Apps that ask for a database won't, in general, know or care who's providing it. [Hackernews.com]
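The provider-agnostic behaviour described above works because CF hands every bound service to the app through the VCAP_SERVICES environment variable in the same JSON shape, regardless of which broker provisioned it. A minimal sketch follows; the service label “postgres” and the credential fields are assumptions for illustration, since real brokers vary in what they expose.

```python
import json
import os

def database_uri(label: str = "postgres") -> str:
    """Read the bound database's connection URI from VCAP_SERVICES.
    The app neither knows nor cares whether AWS RDS or GCP Cloud SQL
    ends up backing the binding after a broker switch."""
    services = json.loads(os.environ["VCAP_SERVICES"])
    binding = services[label][0]  # first instance bound under that label
    return binding["credentials"]["uri"]

# Simulate the JSON that CF would inject into the app container
os.environ["VCAP_SERVICES"] = json.dumps({
    "postgres": [{"credentials": {"uri": "postgres://user:pw@host:5432/db"}}]
})
```

As long as the replacement broker publishes its credentials under the same label and fields, migrating the platform between IaaS providers requires no change to code like this.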
If you look at the services offered via the brokers, the large majority are related to databases (SAP also provides a CF-based service broker for HANA), storage, and other low-level infrastructure. There are some interesting exceptions to this infrastructure-centric focus: the list of GCP services includes a number of services (for example, “Google Machine Learning APIs”) that are less associated with low-level infrastructure aspects. I think we will see other IaaS providers start to include more PaaS-associated services in their CF service brokers.
Beyond SCP – multi-cloud support in other SAP applications
One of the benefits involved in the multi-cloud strategy concerns data:
Data sovereignty and compliance issues are also leading to a surge in multi-cloud as organizations, particularly in Europe, worry about how to comply with current rules and their exposure if they operate in areas where no rules governing cloud services yet exist. Storing data locally minimizes issues over data sovereignty whilst directing traffic to data centers closest to users based on their location is vital for latency-sensitive applications. [NetworkWorld.com]
If you consider the decision to provide SCP on Azure, one reason is the ability of individual developers to exploit the specific services that the underlying platform offers. An announcement last year between SAP and Microsoft concerning SuccessFactors (SF), SAP’s cloud-based HR application, adds a new angle to this discussion.
The agreement with Microsoft marks the "first time we'll be moving into a public cloud provider," [Mike] Ettling said. But SAP plans to continue to make SuccessFactors available across a mix of its own and Azure public cloud environments. This isn't a move away from its own datacenters and towards Azure; it's an expansion of SAP's datacenters with Azure capacity across the globe, he explained.
It won't be up to SuccessFactors customers as to whether their workloads run in SAP datacenters or in Azure ones. The agreement with Microsoft is about expanding SuccessFactors' overall capacity, Ettling noted.
What isn’t totally clear in this announcement is the associated hosting model, but I can imagine that SAP would continue to manage the SuccessFactors solutions running in the Azure data centers, a model similar to the Managed PaaS architecture of SCP. A recent SuccessFactors job posting for a Microsoft Azure Architect provides more detail on this move and appears to show that SAP will be managing the instances.
Deploy, operate, and maintain SuccessFactors workloads running in Azure
Helping SuccessFactors understand and configure the network aspect of Azure and to integrate with their local network (on-premises). Help to resolve issues related to Virtual Private Network in Azure Virtual Networks (VNET) in connections such as Site to Site, Virtual Network to Virtual Network, Point to Site (Client to Azure VPN connection), Virtual Machine to Virtual Machine network traffic performance, Azure DNS, VNet Peering.
The availability of SuccessFactors on Azure would mean that HR-data originating from this application would also be available in Azure via the normal APIs and integration points.
For those interested in HCM Extensions running on SCP and accessing SuccessFactors data, the SCP-based extensions could be located very close to where the associated data is generated. Yes, the same thing could be said for this scenario when both applications are located in SAP data centers. Bringing Azure into the picture, however, immediately increases the range of regions which can be supported: Microsoft Azure has over 140 data centers, many more than SAP currently supports.
One challenge here will be that such Azure-based scenarios using SCP and SF extensions would have to run on Cloud Foundry rather than Neo. At the moment, there is a tendency to associate Cloud Foundry with different scenarios than those required to exploit SF’s presence in Azure DCs. As Mark Williment (Head of Technology) at Keytree suggests:
There’s overlap, but I would broadly categorize them as: Neo provides services to extend and integrate business systems (mobile services, API Gateway, cloud integration and so on), while “Cloud Foundry” provides services for more exotic development such as IoT (MongoDB, Redis, RabbitMQ).
SAP must work to move the extension-related SCP functionality to other IaaS providers to expand the scenarios associated with a multi-cloud strategy. If the bulk of SCP-related work from partners remains primarily Neo-based, then much of the momentum provided by CF may be lost.
Multi-cloud support in SCP provides definite advantages for customers but there are various aspects that must still be considered before actively moving in this direction:
Which neck to choke?
Multi-cloud functionality increases flexibility while also increasing complexity. Yes, SCP is a Managed PaaS, which means that customers have one neck to choke, but if they take advantage of platform-specific services from other providers, then more parties become involved in such scenarios. For example, SAP has done a great job of synchronizing the maintenance windows for its cloud offerings, but multi-cloud scenarios that exploit platform-specific functionality might force customers to deal with different maintenance windows for the same fundamental solution.
Customers with existing public cloud assets
Much of the attention regarding multi-cloud support describes an almost ideal world in which customers are just starting their cloud journey and have no non-SCP-based cloud activities. As Mark Williment (Head of Technology) at Keytree suggests, there are various aspects which must be considered for customers who are further along in their cloud maturity.
Will it be easy for organisations to leverage existing investment in those platforms? For example, if an organisation has invested in AWS Direct Connect for dedicated bandwidth between AWS and their on-premise network, will they be able to utilise that for fast connections to SAP Cloud Platform services on AWS?
Don’t forget microservices
Some customers/partners might use multi-cloud functionality in SCP to develop microservices on different IaaS platforms. As discussed above, there is no one-size-fits-all recipe for such scenarios. Some services could focus on high portability and the ability to migrate easily between providers so that cost or scalability advantages can be better exploited. Other services may focus on using provider-specific functions at the expense of portability. Meeting business requirements will often require a mixture of both types of microservices, as well as services that fall somewhere in between. SAP must support customers in designing their multi-cloud applications so that the right design decisions are made.