Musings on China's 'Global Initiative on Data Security' and the problem of security "back doors"

Neil Raden, December 15, 2020
Summary:
A review of 'Global Initiative on Data Security' led me to an exchange with a company doing business in China. With new 5G security issues on the horizon, it's a good time to reflect on the implications of "back doors," ethical AI, and where the responsibility lies.


In early September, China released The Global Initiative on Data Security at the International Seminar on Global Digital Governance. China calls on all countries to handle data security in a "comprehensive, objective, and evidence-based manner," according to State Councilor Wang Yi. There is a short industry article about it here: China's Bid to Write the Global Rules on Data Security.

I became aware of this during a recent Zoom meeting of my global think tank that wants to save the world. We are actually making some progress on the topic of the sustainable internet. This proclamation caused an energetic discussion. If you read the document, you will discern all sorts of hidden agendas, but what it amounts to, in my estimation, is that we (China and the U.S.) are in another Cold War, this time in cyberspace.

The document rails against software and hardware backdoors. I laughed out loud when I read that, considering the rampant backdoor malware that, according to the FBI, is silently being installed on the networks of foreign companies operating in China via government-mandated tax software. It is inconceivable that China would not exploit its advantage in telecommunications equipment, especially Huawei, which dominates the global market for next-generation 5G devices.

The concern is that if Huawei equipment is used in 5G networks, mobile devices, smart homes, and other internet-connected devices become cyberattack vectors, exposing all the data that passes through them.

Governments have always exploited commercial communication companies to spy on other countries in the name of national security. The U.S. government has been both an adept perpetrator and a victim. It is entirely understandable that China will continue to do this despite its lofty proclamation.

There was one clause in China's Global Initiative on Data Security that was so sneaky, I have to comment on it: "And oppose mass surveillance against other States and unauthorized collection of personal information of other States (emphasis mine)." No other country in the world violates its citizens' privacy through mass surveillance the way China does. The Chinese Communist Party (CCP) oversees the lives of Chinese citizens. The number of surveillance cameras in mainland China was estimated to reach 626 million by 2020. And that is just part of the program: the party also sifts through citizens' online activity and any publications beyond. What hypocrisy.

So this prompted a conversation with a manufacturer of semiconductors, integrated host processors, and MCUs that does business in China. Here is our exchange:

Me: I bring this dreary news to you because, in researching your company, I noticed that a substantial amount of your business is in China. Naturally, having some vigorous conversations in the tank, the topic of software and hardware backdoors came up. I searched for your company together with "backdoors" to check your policy, and I was amused to find a post on your community forum asking how to open the backdoor in an automotive chip. I could tell you the name of the person inquiring, but it's in ... Chinese.

So, I'm not suggesting that your company is part of a conspiracy to spy on the U.S. I don't even know if it is. But this does address the question you asked recently:

"How do you frame and implement an ethical framework in an engineering company where all or most of its products are applied to useful application, but some may not be?" 

Let's say, hypothetically, that your company supplied devices with a backdoor to a Chinese company (there are good reasons for backdoors), and that company, at the direction of its Communist government, exploits the backdoor for surveillance or even disruption. Is your company behaving unethically? Is it unethical for a manufacturer of binoculars to supply South Korea with the means to spy across the border into North Korea? Was it unethical for Thomas J. Watson Sr., the founder of IBM, to provide technology to the Third Reich? You bet it was! So where do you draw the line: an engineer designing something for a multitude of applications, or one knowingly designing something that MAY be used for something unethical?

His response:

This is not a secret backdoor for malicious users.

This mechanism is there to allow the original owner of the device to unlock it.

This type of access is usually known as "Secure privileged access."

It works as follows: the owner programs a key into the device during production; the device can then be locked (e.g., the data inside the device is not readable, and the software is not modifiable via the debug port). There are two ways to unlock it:

  1. Completely erase the memory, which destroys the original software and data and leaves the microcontroller blank.
  2. Use the "backdoor" key, known only to the original owner, to unlock it. Simple patterns such as all 0s or all 1s are not allowed, so the key cannot easily be guessed.

This method requires physical access to the device and knowledge of the secret key stored during production; it cannot be used remotely and is not usable by a third party.
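(To make his description concrete, here is a minimal sketch of the lock/unlock flow he outlines, written in Python. The names and structure are mine, not the vendor's; a real MCU implements this in silicon and firmware.)

```python
import hmac
import secrets

# Hypothetical sketch of the "secure privileged access" flow described above.
# Names are illustrative; real devices do this in hardware, not Python.

WEAK_KEYS = {b"\x00" * 16, b"\xff" * 16}  # trivial all-0s/all-1s patterns are rejected

class LockedDevice:
    def __init__(self, backdoor_key: bytes):
        if len(backdoor_key) != 16 or backdoor_key in WEAK_KEYS:
            raise ValueError("key must be 16 bytes and not a trivial pattern")
        self._key = backdoor_key          # programmed once, during production
        self.locked = True                # debug port disabled, memory unreadable
        self.memory = b"...application firmware and data..."

    def unlock_with_key(self, candidate: bytes) -> bool:
        # Path 2: only the original owner knows the key; constant-time
        # comparison avoids leaking information through timing.
        if hmac.compare_digest(self._key, candidate):
            self.locked = False
        return not self.locked

    def mass_erase(self) -> None:
        # Path 1: anyone with physical access can unlock the part, but
        # only at the cost of destroying the software and data inside.
        self.memory = b"\xff" * len(self.memory)
        self.locked = False

dev = LockedDevice(secrets.token_bytes(16))  # provisioned at production time
```

The constant-time comparison is a detail worth noting: even a legitimate unlock mechanism must avoid leaking the key through timing side channels.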

He continued: The wording is unfortunate, indeed. The intent and use case are described in the full user manual, and it has to be taken in the context of the complete user guide. It is not a "backdoor" in the everyday sense of a hidden entry point for a malicious user.

I spoke to some engineers and our experts in the security evaluation lab about the ethical aspect, asking them whether we have backdoors in our products. In every case, the answer was an unequivocal no, and they had good arguments: across our entire security development ecosystem, and for precisely that reason, no proprietary algorithms are allowed or deployed. For cryptography, only validated, peer-reviewed NIST algorithms are supported. Our software engineers are not allowed to inject self-developed code, precisely to prevent accidental vulnerabilities that could be exploited as a backdoor.
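(His "no proprietary algorithms" rule is easy to illustrate. A minimal sketch, assuming Python: authenticate data with HMAC-SHA-256, a NIST-standardized construction available in the standard library, instead of inventing a homegrown one. The function names are mine.)

```python
import hashlib
import hmac
import secrets

# Illustrative only: rely on standardized, peer-reviewed primitives
# (HMAC, FIPS 198-1; SHA-256, FIPS 180-4) rather than self-developed code.

def make_tag(key: bytes, message: bytes) -> bytes:
    """Authenticate a message with HMAC-SHA-256."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify_tag(key: bytes, message: bytes, tag: bytes) -> bool:
    """Constant-time verification; never compare MACs with ==."""
    return hmac.compare_digest(make_tag(key, message), tag)

key = secrets.token_bytes(32)                  # random 256-bit key
tag = make_tag(key, b"firmware image v1.2")
assert verify_tag(key, b"firmware image v1.2", tag)
```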

I just have one comment about the "backdoor." Though it's not unusual to have a non-revocable backdoor key, it did make me wonder: what happens if the original owner forgets the key or is no longer available? This reminds me of a story.

In the early nineties, I was engaged to bring in my team to develop a data warehouse for Allied Signal's CFO and tax department. A few months before, Allied Signal had fired its long-time CEO and brought in Larry Bossidy, who was vice chairman of G.E. and Jack Welch's #2. The first thing Bossidy did (remember, this was the era of Business Process Reengineering) was fire almost 40,000 people. One day the treasurer sat me down with a pile of green-bar printouts and said, "If you can duplicate these reports, your project will be a success." No amount of explaining that our goal was to do far more than that found any purchase with him. As we pored through the reports, we found an Operational Cash Flow Report, but there was no accompanying documentation of the model. After some investigation, I discovered that the only thing they had was a compiled DL/1 program, which was impossible to reverse engineer.

Why am I telling you this story? The only person out of 140,000 employees (by then 100,000) who knew the logic had been laid off. I had to travel to Wisconsin and go fishing with him to get it.

There is another issue I was thinking about. Once the key is set, the software is presumably secure. However, very little software written these days isn't composed of bits and pieces. What if the programmer includes one of those bits, say an external function, and that function has been tampered with elsewhere, via some innocuous little subroutine? (Editor's note: for a hot-off-the-presses example of what Raden is talking about here, see: U.S. Treasury, Commerce Depts. Hacked Through SolarWinds Compromise.)
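(One partial defense against that scenario is to pin and verify every external component before it runs. A minimal sketch, assuming Python; the file name and pinned hash are hypothetical, and in practice the pins would come from a signed lock file.)

```python
import hashlib

# Illustrative only: a component tampered with "elsewhere" should fail
# loudly at load time instead of executing.

PINNED = {  # hypothetical: file -> SHA-256 recorded when the code was reviewed
    "vendor_math_lib.py": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def verify_dependency(path: str) -> None:
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest != PINNED[path]:
        raise RuntimeError(f"{path}: hash mismatch, refusing to load")

# verify_dependency("vendor_math_lib.py")  # run before importing/using it
```

Of course, this only moves the trust problem to the lock file and the build pipeline, which is roughly where the SolarWinds attackers struck.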

My take

I'm still zeroing in on the question of ethics in an engineering company, and I think it comes down to who is responsible. There is some interesting discussion going on about the danger of anthropomorphizing A.I. Does A.I. make a decision? Does A.I. learn? These supposed isomorphisms between digital systems and the human brain are everywhere. Then there are lasso penalties, bagging (a portmanteau of "bootstrap aggregating"), and boosting. The net effect of all this language is to assign responsibility to the A.I. model itself. That's not acceptable, and it factors into my thinking about responsibility.

But it's going to take some more discussion to understand how responsibility is allocated in companies that don't produce for end users. For example, an ag chemical company sells to distributors, which sell to retailers, who sell to planters. Or a chip manufacturer sells to distributors, which sell to device makers, which combine the chips into products sold to users who deploy them. This is much more convoluted than developing an A.I. program that recommends products to individuals and turns out to be severely biased against protected classes.
