Encryption, back doors, and IoT software locks – a bad “experience” in the making

SUMMARY:

Recent developments have raised fundamental questions about encryption, back doors, and proprietary software locks. Smart devices raise the stakes. Here’s my review – and questions enterprises should be asking.

In my last piece on encryption, I argued that the media blame festival is misleading.

But here’s what I left out: national security officials, including U.S. FBI Director James Comey, have been pushing for “back doors” in encrypted devices to which only government agencies would have legal access.

Then there’s an emerging problem in the Internet of Things space, where companies are protecting crucial aspects of their software with “software locks”. Those locks could have the unwanted side effect of creating access points for black hat hackers, potentially endangering the data – if not the well-being – of consumers.

Encryption back doors – framing the debate

In a new column, tech journalist Walt Mossberg, editor at large of Re/code, takes a stance against the government back door agenda. Though he’s sympathetic to the predicament of intelligence officials, who are struggling with dark channels, he sees this particular solution as a “huge mistake.” Two companies in the hot seat are Apple and Google, both of which have been blasted by the FBI for shipping new phones with so-called “whole-device encryption” – meaning even Apple and Google can’t decrypt those devices.
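
For the technically curious, here’s a minimal sketch (in Python, using the third-party cryptography package) of why whole-device encryption locks the vendor out: the key is derived on-device from the user’s passcode plus a device-bound secret, so the vendor never holds anything that can decrypt the data. This is an illustration, not Apple’s or Google’s actual design – real implementations (e.g. Apple’s Secure Enclave) are far more involved, and the names and values below are hypothetical.

```python
# Minimal sketch of the idea behind "whole-device encryption"
# (illustrative only -- not Apple's or Google's actual design).
import os
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

def derive_device_key(passcode: str, device_secret: bytes) -> bytes:
    # KDF(passcode, hardware secret): neither input ever leaves the
    # device, so there is no vendor-side "master key" to hand over.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), device_secret, 200_000)

device_secret = os.urandom(32)   # hypothetical secret fused into hardware
key = derive_device_key("1234", device_secret)

nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"user data", None)

# Decryption requires re-deriving the key -- i.e., the passcode AND the
# physical device. The vendor cannot comply with a decryption order.
assert AESGCM(derive_device_key("1234", device_secret)).decrypt(nonce, ciphertext, None) == b"user data"
```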

A few months ago, the White House decided not to pursue a bill that would mandate such government back doors. But Mossberg believes this issue will resurface after the despicable terror attacks in Paris.

There are serious problems with putting in such back doors, not the least of which is the topic I explored last time: terror networks are well aware of the vulnerabilities of phone messaging, and are actively seeking out more obscure communications methods. As Mossberg noted, there is no evidence encryption was used in the Paris terror attacks. Early reports indicate that an unencrypted cell phone, along with car GPS units, led to breakthroughs in the case. He adds:

And, even if terrorists do use encryption, there are many encrypted communications services that can run on almost any phone, and which have nothing to do with Apple or Google. In fact, according to The Wall Street Journal, an ISIS technical advice bulletin issued in January listed the messaging services of Apple, Google and Facebook as only “moderately safe,” and ranked them behind nine much less well-known services. One of these, called Telegram, has been reported to have become the ISIS messaging service of choice.

A bigger problem with mandated back doors is that once they are compromised, things get ugly. Mossberg:

Once an encryption system is breached, a cascade of other actors, from malevolent hackers to foreign dictatorships like China and Russia will waltz through that backdoor, either by hacking or by enacting laws requiring that U.S. companies provide them the same access provided to American agencies.

For their part, Apple and Google have forcefully stated that they have not provided back doors to any government bodies to date. In October, Apple CEO Tim Cook said, “I don’t know a way to protect people without encrypting.” And on back doors, Cook said: “You can’t have a backdoor that’s only for the good guys.”
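
Cook’s point gets clearer if you sketch what a mandated back door would mechanically be: key escrow, with every device key also wrapped under one government-held key. To be clear, this is my assumed model of such a mandate, not any published proposal – the takeaway is simply that the escrow key unwraps everything, so whoever steals or compels it inherits that power:

```python
# Hypothetical sketch of a key-escrow "back door": each device key is
# also wrapped under a single escrow key. Whoever holds (or breaches)
# that escrow key can unwrap every device key ever provisioned.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

escrow = AESGCM(os.urandom(32))   # the mandated "good guys only" key

def provision_device():
    device_key = os.urandom(32)                        # per-device key
    nonce = os.urandom(12)
    wrapped = escrow.encrypt(nonce, device_key, None)  # the back door
    return device_key, (nonce, wrapped)

_, (nonce, wrapped) = provision_device()

# Any party with the escrow key -- an agency, a hacker who stole it, or
# a foreign government that compels the same access -- recovers the
# device key, and with it everything on the device.
recovered_device_key = escrow.decrypt(nonce, wrapped, None)
```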

Another security issue: software locks and smart devices

The Electronic Frontier Foundation is on a different mission: they want to end the DMCA (Digital Millennium Copyright Act) and similar laws within ten years. That’s a volatile issue for digital IP. But what does the DMCA have to do with the Internet of Things? As Cory Doctorow, co-editor of Boing Boing and “special advisor” to the EFF, recently argued:

DMCA 1201 prohibits breaking “digital locks” that restrict access to copyrighted works. Though it was originally conceived as a means of preventing piracy, it has proved most useful at preventing competition and the creation of legitimate, otherwise legal technologies. Copyright law has many flexibilities and exclusions that product designers, developers, and users can freely exercise, without any permission from the copyright holder.

But under 1201, you can only make these uses if you do not have to break a lock. The penalties are steep: a first conviction for removing a software lock is a felony punishable by up to five years in prison and a $500,000 fine.

So what’s the problem with software locks on IoT devices? During a recent speech (see video here), Doctorow argues there are two: stifled innovation and heightened security risks. On the innovation side, Doctorow says:

It’s not in your interests as someone who has bought something to have it be a felony for someone to make a third-party product that plugs into the thing you’ve bought.

The DMCA classifies as a felony any attempt to disclose information about defects, errors and vulnerabilities in systems that have software locks. Doctorow believes this hurts device security:

If there’s a bug in a system that you own and rely on, it’s against the law to tell you about that bug because if you knew about that bug, it might help you jailbreak the system, but of course, bugs in your devices aren’t just ways to jailbreak them, they’re also ways to compromise them.

If an attacker can leverage a bug in a device that you own to install malware, and if that device is already designed to treat you as an attacker and hide what it’s doing from you – in privileged modes that you’re not supposed to be able to see or terminate – well then, that attacker has enormous power over you and can wreak unbelievable harm against you.

A tad alarmist, but a point to consider. Doctorow argues security is about disclosing vulnerabilities, without fear of punishment:

Crimeware, spyware and malware run amok in privileged modes that are designed into our devices. Now, this is vital in security. Because we really only have one experimental methodology for figuring out whether or not we have good security and that’s disclosure. That’s telling people about mistakes they’ve made… In security, just as in every other discipline, the only way that we have to find out whether or not you are doing something that works is to tell as many people as you can find how you’re doing it, to see whether or not you’ve made some dumb mistakes that you’re kidding yourself about.

Doctorow cites examples of software locks run amok in the IoT space. He’s particularly concerned with smart cars:

At every single DEFCON… someone will stand up and show you how you can compromise the informatics in that car by going in through interfaces as innocuous as the Bluetooth sound interface and then take over the steering and brakes. The most salient fact about your car is not its transmission, it’s its informatics and its security model.

Doctorow drops a couple smart-cars-gone-wrong anecdotes, the kind of IoT FUD (Fear, Uncertainty and Doubt) I’ve written about.

But is proprietary software at the heart of the problem?

Some proprietary software companies have adopted a more open approach to security. Examples include programs for “white hat” security firms (I wrote about my visit with one such third-party firm, Onapsis). Other tech companies offer bounties for finding bugs or security vulnerabilities.

I paid a visit to Onapsis in the context of the recent media frenzy over supposed SAP HANA security vulnerabilities – patches for which had already been released by the time the stories came out (see my rant on that here).

I spoke with Siddhartha Rao, Vice President of Product Security Response at SAP SE. Rao shared how SAP’s approach to working with external parties on security has changed significantly. His Product Security Response Team manages the “responsible disclosure of vulnerabilities reported by security researchers and hackers, and facilitates a release of quality security fixes, monthly, on SAP’s Security Patch Day.”

As expected, Rao encourages SAP customers to be up to date on all patches. But he also sees the need for SAP to work with external parties on security:

Our approach on security is quite transparent. We do claim like every major software company that we do our best in securing our products. But at the same time, the fact is from a software engineering point of view, bugs happen. We back up our commitment by saying that when somebody finds a vulnerability, we give a secure channel for them to report it to us responsibly. That means that nobody can exploit anybody else, because only the finder and we are in possession of the vulnerability until a fix is available, so customers have the opportunity to protect themselves before the finder can disclose it.

It happens at security conferences very often, where researchers speak about their findings in SAP applications. It helps everybody because customers think of security as a serious topic, and then they go and fix their SAP systems. It’s quite a healthy ecosystem actually.

Most of SAP’s software is proprietary and subject to copyright protection. But even in that context, a program for responsible disclosure is viable. Rao:

The important part of this program is that responsible disclosure by security researchers is voluntary. We do not tie them using any contract with terms and conditions or fine print. It’s just in interest of the public and the customer that our security is responsible.

This program can only work if security researchers follow disclosure guidelines. Rao adds:

A serious security researcher first gives the vendor an opportunity to deliver a fix before they disclose it. We make sure that the security researchers understand we have a genuine, legitimate and serious security response process implemented in SAP. They understand that the information they disclose to us is disclosed by an encrypted channel. We allow them to disclose information to us in an encrypted way, such that nobody can read the mail in the middle. We just make sure that they get the credit for it…

There is no money involved between the finder and the vendor, but it is a completely secure way of managing extremely confidential information, and ensuring the fixes are available for every customer out there.
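
To make the “encrypted channel” concrete, here’s a minimal sketch of the idea, assuming a simple public-key scheme: the researcher encrypts the report with the vendor’s published key, so nobody can read it in the middle. SAP’s actual channel is PGP-encrypted email; the key pair and report below are hypothetical stand-ins.

```python
# Minimal sketch of an encrypted vulnerability-reporting channel:
# the researcher seals the report with the vendor's public key, so
# only the vendor's response team can read it in transit. (SAP's real
# channel is PGP email; everything here is a hypothetical stand-in.)
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Stand-in for the vendor's published key pair.
vendor_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
vendor_public = vendor_private.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Researcher side: encrypt so nobody can read the report in the middle.
report = b"Auth bypass in /example/login when the session header is empty"
sealed = vendor_public.encrypt(report, oaep)

# Vendor side: only the private-key holder recovers the report, so a
# fix can ship before any public disclosure.
assert vendor_private.decrypt(sealed, oaep) == report
```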

My (quick) take – and customer action items

The problem with these debates is that the rhetoric amps up. Then we lose track of the practical steps. I made recommendations for both consumers and enterprises last time. For enterprises, I’ll add:

  • Ask vendors about their stance on “back doors” and providing government access to information, particularly in off-site data centers. Make sure you are happy with their grasp of the legal nuances and how they are addressing them.
  • Ask vendors about their program for identifying software vulnerabilities. Ask them to show you the process by which you would report a software bug or security loophole. Assess their comfort level with such probing questions, and plan accordingly.

Customer experience (CX) advocates are always telling us about all the great experiences we’re going to have now that companies can personalize services with data. But don’t these services come with a nasty aftertaste? Put aside hacking horrors. What about companies disabling your ignition if they haven’t received lease payments – payments which they’ve been applying to the wrong account? What about companies suspending access to your bank account because their algorithm detected “unusual activity,” based on a false correlation made by machines?

As companies offload risk onto consumers as defined by machine-powered formulas, isn’t that going to make a mockery of the so-called CX we’ve been assured is right around the corner? Hmm… I better quit this blog post now before I veer completely off the rails.

Image credit: Thief stealing information, Studio shot on black background. © Halfpoint – Fotolia.com.

Disclosure: SAP is a diginomica premier partner. Diginomica has no financial relationship with Onapsis.