"Privacy is not a feature" - how Zoho's approach to workplace privacy impacts AI development, and more
- Summary:
- With return-to-work on deck, workplace privacy is center stage. But are organizations ready? In the second part of our privacy dialogue, Raju Vegesna shares how Zoho's privacy stance impacts AI development. He also offers tips and gotchas along the way.
Last time around, I addressed Zoho's workplace privacy wake-up call: The Zen of workplace data privacy - Zoho's Raju Vegesna on productivity monitoring, privacy as a best-of-breed weakness, and Google's "cookieless" campaign.
Zoho wants us to think differently about privacy. How? By expanding the conversation. Too often, privacy tech headlines are a misunderstood frenzy over consumer privacy moves, such as Google's confusing "cookieless" Chrome announcements.
Zoho wants to push beyond consumer privacy, and bear down on privacy in the workplace. No, they aren't the only ones. But: they are one of the few with no third-party cookies on their website. And: no use of Google's analytics tools on the back end. It goes back to hard lessons ad tech vendors are learning now: privacy is expensive. With the tensions between privacy and public health in the workplace about to ratchet sky high in the Vaccine Economy, we need to get this right.
One standout from my last conversation with Zoho's Raju Vegesna: Software vendors claim workplace data privacy is neutral - wrong. Products influence culture. Exhibit A: "workplace productivity tools" being used/misused as spyware-on-employees. As I wrote:
Productivity vendors tend to view their tools as neutral. Supposedly, it's all in how the customer uses it. I lean toward Vegesna's position: features directly influence culture.
"A change is needed" - and there is an alternate workplace privacy path
But let's face it: you want to be differentiated, but you don't want to be on an island. At the recent ZohoDay analyst event, I posed this question: Big tech (especially Google and Facebook) seems to be dominating with a radically different approach than what we just heard. How do you think about your model versus big tech companies that are propagating algorithmic inequality? Can your model be a part of a movement for broader business change?
After the event, Vegesna responded by email:
We will need to explore multiple business models and approaches in our industry. For example, the wild success of the ad-based business model gave rise to thousands of companies that followed the same ad-based business model (we can notice this trend in the entertainment space with superhero movies). We need an explosion of innovation in multiple regions where we experiment with different business models that are apt for the region.
The Zoho way works for Zoho. That may not work for another company. There will be companies that might do everything the opposite of what Zoho has done, and still succeed while making an impact. The one-size-fits-all, all-around-the-world approach of current business models has to give way to localized, dynamic business models. Also, the current set of business models focuses too much on financial aspects.
Vegesna also contends: more private companies are doing this work than we realize.
Companies that are doing real work, but are private, are also not given enough attention and exposure. Lack of exposure leads to the perception that there is no alternative business model. For example, we can look at the exposure Aldi gets versus Walmart. Or how often do we think about lessons we can learn from Mars, Inc. (which has been around for 110 years), Andersen Corporation (which has been around for 117 years), or its competitor Marvin (which has been around for over 100 years)?
These are private, family-owned companies doing lots of important work. We are ignoring the lessons from them and chasing after public companies (and this is driven by financial motivation).
But isn't a broader change in data privacy still necessary?
The first step towards change is acknowledging that a change is needed and there is an alternative path. Once we acknowledge this, there are several examples in our industry. There are several private companies out there doing well - SAS, Basecamp, Mailchimp, etc. come to mind - and we can learn lessons from them.
The privacy perils of "free" AI tooling
One area where companies are conflicted is AI - and for good reason. Many of the best/cheapest AI tools are offered by cloud providers, but that may present a data privacy conflict. Zoho certainly thinks so - to the point that they develop all their own AI tooling. I've written about Zoho's AI approach before, but it was time to dig further. I asked Vegesna:
You've taken a risk - and a big investment - building your own AI tools, versus taking advantage of the AI libraries, resources and data provided by big tech companies. What is your advice to enterprises who haven't taken a close look at how using cloud-based, third-party AI tools can potentially compromise their customers' privacy?
Vegesna responded:
Let us use a simple offline example (this has nothing to do with AI). Let us say a prospect walks into your boutique for some clothes. Imagine 10-20 cameras you randomly allowed in your store following this prospect, observing, documenting and recording every move. Would that prospect be comfortable with the experience? Will they trust your brand when these people you let in are intruding on their privacy? The answer is obvious. Yet, this is exactly what is happening online, with 10-20 trackers following every visitor on a website/online store, without their knowledge. We have come to accept it as a norm, and customers are now starting to speak out about their dissatisfaction (and losing trust in companies doing this).
Protecting customer privacy (and preventing user tracking) starts from the first touchpoint (a website or an app) and goes all the way to the backend processing (AI etc.). We had to develop or replace multiple components, from web analytics to fonts (yes, fonts are another form of pixel-tracking), and re-engineer some core components of our technology stack, all to protect customer privacy. In return, we gain customer trust (trust in business is easy to lose, not easy to gain and retain).
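To make the tracker point concrete, here is a rough sketch - my illustration, not Zoho's tooling - that lists the third-party hosts a page pulls resources from. Every host on that list, whether it serves an analytics script, a tracking pixel, or a font, receives a request (and thus a record of the visit) from each visitor's browser:

```python
# Rough illustrative sketch (not Zoho's tooling): list the third-party
# hosts a page references. Each one receives a request from every visitor.
import re
from urllib.parse import urlparse
from urllib.request import urlopen

def third_party_hosts(page_url: str) -> set:
    """Return external hosts referenced by src/href attributes in the HTML."""
    html = urlopen(page_url).read().decode("utf-8", errors="replace")
    own_host = urlparse(page_url).hostname
    hosts = set()
    # Crude static scan; a real audit would also watch live network traffic,
    # since many trackers are injected dynamically by scripts.
    for match in re.finditer(r'(?:src|href)=["\'](https?://[^"\']+)["\']', html):
        host = urlparse(match.group(1)).hostname
        if host and host != own_host:
            hosts.add(host)
    return hosts

if __name__ == "__main__":
    for host in sorted(third_party_hosts("https://www.example.com")):
        print(host)  # e.g. a font CDN, an analytics domain, an ad exchange
```

Run something like this against your own site before judging anyone else's - the count is usually humbling.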
It's not an easy thing to avoid third-party AI tools. Whatever app you're building, an AI requirement is bound to surface. As Vegesna wrote:
If we take translation as an example (which is done through AI), it is easy to integrate an online service and reap the benefit without doing much work. However, this is also a potential privacy leak. How often does business content go through a 'free' translation service, feeding data to the AI engines? You'll be surprised.
The same applies to other AI services like OCR, image recognition, object detection, transcription and on and on. All of these engines improve based on data. Sometimes this data could include customer or employee data. Are companies/businesses comfortable sending the private data of employees and customers to third-party (AI or otherwise) services for processing? Are employees and customers aware of this practice? These are questions businesses have to ask themselves. We were not comfortable transferring this data to third parties. So we ended up building our services in-house.
Customers are not going to trust businesses when they realize their information is being used as a currency in exchange for a 'free' service from big tech. Once that trust is gone, it is hard to win back.
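The translation example boils down to where the HTTP request goes. In this hedged sketch - both endpoints are hypothetical, invented purely for illustration, and the response schema is assumed - the call shape is identical, but one sends business content outside your boundary and the other keeps it in-house:

```python
# Hypothetical endpoints for illustration only - neither is a real service,
# and this is not Zoho's API. The "translatedText" field is an assumed schema.
import json
from urllib.request import Request, urlopen

FREE_THIRD_PARTY = "https://api.free-translate.example.com/v1/translate"  # hypothetical
IN_HOUSE = "http://translate.internal.example:8080/v1/translate"          # hypothetical

def translate(text: str, target_lang: str, endpoint: str) -> str:
    """POST text to a translation endpoint; return the translated string."""
    payload = json.dumps({"q": text, "target": target_lang}).encode("utf-8")
    req = Request(endpoint, data=payload,
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.loads(resp.read())["translatedText"]

# Same call, very different privacy outcome: with FREE_THIRD_PARTY, the full
# document leaves your network and may feed the provider's models; with
# IN_HOUSE, customer and employee data stays inside your own boundary.
clause = "Confidential: Q3 pricing terms..."
# translated = translate(clause, "de", IN_HOUSE)  # uncomment with a real endpoint
```

The code is the easy part; the hard part is noticing that the convenient default endpoint is the one that leaks.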
"Data privacy is not a feature"
So where should organizations begin? Vegesna:
First, a stance on protecting the privacy of users should be an organizational stance. It is not a feature.
Second, it is not easy to achieve, unless this is an organizational commitment (all the way from the top). We made this commitment internally around 2015, and it took us several years to get to where we wanted to be.
Once a commitment is made and achieved, it is going to be totally worth it (based on our experience). Our customers respect our stance and they trust us (and repeatedly thank us for our stance). Our partners have been proud to be associated with us, and of course our employees are ever-vigilant and proudly pursue protecting privacy.
If this sounds like the lofty talk of idealists, well, Vegesna would point you towards audits:
I was recently invited to participate in a Zoho One deployment celebration for a large customer (in the chemical industry). During the celebration, the customer said, among other things, that one thing really surprised him pleasantly. They were going through an internal audit, and the auditor came to them and said that because they are running their business on Zoho, they don't have to worry about privacy - they are in good hands - and the auditor certified them. That of course made my day. In some instances, privacy might be the cherry on top, but in other instances, it is a deciding factor. We have seen both from our customers.
My take - free tooling has a privacy price tag
Companies should be rewarded for taking bold steps towards privacy standards - standards that go beyond the regulatory minimum. Hopefully, that reward will come via more consumer spend, more enterprise contracts, and loyal/energized employees. Sounds like that privacy math works for Zoho.
It would be naive to hold Zoho's example as the only way. For companies sorting through the thorny workplace privacy issues of vaccine passports, covid testing and so on, transparency with employees is the proving ground. What is opt-in, what is opt-out? How is your employee data used, and why? When do state or federal regulations dictate workplace policy?
That leaves the question of: what to do about big tech? Or, for that matter, government officials who make privacy policy, without a firm grip on the digital issues in play? That's beyond the scope of this article, but not of diginomica (check our data privacy archives).
Free tooling has a price tag. Too often, that price tag is your data, or the data of those you vowed to protect. Vegesna worries about the deterioration of online behavior compared to offline. Granted, people's offline behavior isn't so swell either. But: digital can accelerate poor behavior. Vegesna cites the example of freely sharing contact information. If a stranger asked for the contact info of our friends and family, we'd slam the door on them. And yet:
We don't mind uploading the entire contact list of friends and family to get free access to an app (latest example: Clubhouse).
The risk? Degrading trust in digital systems.
I found that it is not a bad idea to ask ourselves a simple question: is this behavior acceptable in the offline world? If not, why is it acceptable online? More discussions have to happen publicly regarding privacy. I hope we are just at the beginning.
Given the room for vast improvement, I hope so too.
Updated, 9pm PT April 10, with a few minor tweaks for readability.