Oracle CTO Larry Ellison took to the stage yesterday for his first keynote at Oracle OpenWorld in San Francisco to declare that the vendor is ‘marching towards the world’s first completely and truly autonomous cloud’. Central to Ellison’s argument was that autonomous environments will allow customers to “eliminate human labour” and so avoid ‘pilot error’.
In other words, if you don’t want any security mishaps, don’t rely on people to prevent them. Ellison used the recent Capital One data breach as an example, which saw a hacker gain access to 100 million credit card applications and accounts. Capital One is hosted on AWS - but Amazon refused to take responsibility, stating that its infrastructure “functioned as designed” and that the hacker gained access through a misconfiguration on Capital One’s side.
Simply put, the catastrophic breach was down to human error, and responsibility lies with the customer. Ellison and Oracle are of the belief that if you instead put faith in its Autonomous Database - or, down the line, its Autonomous Cloud - then those mistakes will simply not happen. Ellison went as far as to say:
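To make the ‘misconfiguration’ point concrete: the class of human slip at issue is typically an overly permissive access policy that nobody catches. The sketch below is purely illustrative - it is not Oracle’s or AWS’s actual tooling, and the policy shown is hypothetical - but it shows the kind of mechanical check an autonomous system can run continuously so that a human doesn’t have to.

```python
def find_wildcard_statements(policy):
    """Return the Allow statements that grant any action on any resource -
    the classic over-permissive misconfiguration."""
    risky = []
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        # Policies allow either a single string or a list; normalise to lists.
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if stmt.get("Effect") == "Allow" and "*" in actions and "*" in resources:
            risky.append(stmt)
    return risky


# Hypothetical IAM-style policy: one scoped statement, one human slip.
policy = {
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::app-logs/*"},
        {"Effect": "Allow", "Action": "*", "Resource": "*"},  # the slip
    ]
}

print(len(find_wildcard_statements(policy)))  # → 1 (the wildcard statement)
```

The point of the autonomous pitch is that checks like this - along with patching and encryption defaults - run without waiting for an administrator to remember them.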
The system is responsible for preventing data loss. Not you. Us.
The $$$ question
This inevitably raises the question - will Oracle actually take responsibility for customer data if it’s placed in its autonomous environments? Is the company confident enough - and legally prepared - to make that claim?
I got the chance to sit down with Steve Daheb, SVP of Oracle Cloud, to ask him exactly this: “If a customer is using your autonomous systems, will Oracle take responsibility for their data?”
His answer, as one might expect, was mixed and came with some caveats. That being said, Oracle does appear to be leaning towards taking responsibility. Daheb said:
I think it’s a shared responsibility, right? We are setting it as a default. A customer would have to actually come and request that you unencrypt the data. If you do that, then the customer is going to have some obligation.
But with respect to autonomous, allowing it to be self patching and apply updates as we publish them, and allowing the data to be encrypted at rest - you’ll have to look at the security policies, but there’d be some [responsibility] that would be on us, and some would be shared depending on the parameters set and how we treat the data.
But he went on to add:
But yeah, if you’re using everything to spec and to guidelines, then you know, we will be accountable. That’s how we’re doing things differently.
We are not going to make claims that we cannot back up. So if we need to update some of the policies [read: T&Cs] to reflect that...what Larry said on stage is what we are promising as a company.
I noted yesterday that it was very bold of Ellison and Oracle to effectively say that in an autonomous environment, data protection is ‘on us’. Now, as per Daheb’s response, he and Oracle know that when dealing with enterprise buyers, not everything is straightforward: they will likely get requests that limit what would otherwise be a fully autonomous system. In that scenario, it seems Oracle is looking at shared responsibility with the customer (and rightly so).
However, it does seem that if you take Oracle’s autonomous systems as they’re meant to come and let Oracle get on with it, the vendor will hold itself accountable. As I noted yesterday, that will be music to the ears of many a C-suite executive, who lives in constant fear of the reputational damage and financial loss that follow a significant data breach. This is a clever move by Oracle, which is focusing on customer needs and outcomes - rather than just saying ‘we do AI and machine learning now’. Oracle must be confident, as it’s putting itself directly in the line of fire. Kudos.