Dreamforce 2019 - ethical dilemmas and the way ahead with Chief Ethical and Humane Use Officer Paula Goldman

Stuart Lauchlan, November 20, 2019
Summary:
Paula Goldman is approaching a year in her role as Salesforce's Chief Ethical and Humane Use Officer, a big role that involves dealing with massive complexities.

(Paula Goldman, Salesforce)

In January Salesforce welcomed an important new leader to its ranks. Paula Goldman joined the firm as its first Chief Ethical and Humane Use Officer, heading up the new Office of Ethical and Humane Use of Technology.

The appointment was in part a response to crises in the tech sector, such as the exposure of data privacy abuses by social media platforms, but it was also intended to position the company to lead the wider debate about the impact of technologies such as Artificial Intelligence on society.

Back in January, the new office was very much a work-in-progress. Flash forward eleven months and it’s still evolving, but the mandate is clearer. For her part, Goldman breaks that down into three components:

One is the direct impact of our technology in the world. And we address that through policies around how customers use our products - we think about use cases that might be risky and set policies around them. Second is about embedding ethical use into the product development lifecycle. That's really like, before you sit down and build a feature, have you thought about the anticipated and unanticipated consequences of that feature, that product, in the world? And then finally, working on shared standards across the industry, because we can't do this alone - and it would not be great for the world if we tried to do it alone. We need a set of shared standards about what responsibility looks like in technology. So it's really those three things.

The new office comes at an important time for the tech industry and society at large, she argues:

We're at this critical inflection point in technology. After decades of optimism about the role of technology in the world, all of a sudden the technology industry is kind of in the crosshairs. People are realizing how powerful technology is in our day-to-day lives and how it influences the way we communicate and digest information and whatnot. So I think with that increased scrutiny, we are finding that people care a lot about how companies embed ethics and the ethical use of their tech.

We did a study that we released earlier this year, and we surveyed consumers, and some of the findings in the study were super surprising even to me. So 80% of the people that we surveyed said that they wanted technology companies to have an ethical advisory board to guide how their products are developed. I believe it was something like 71% said they would be more loyal to a company that demonstrated good ethics in their technology. And this trend just keeps rising, and I think all of that is very much part of our thinking in terms of how we operate the Office of Ethical and Humane Use.

Salesforce staffers - the Ohana - have been highly engaged in defining the role of the ethics office, says Goldman:

We actually surveyed our employees and asked them what principles we should set around ethical use. We had them rank them, and we adopted that ranking as our set of principles. And then we kind of looked at what are the key issues that we really need to address, and it turns out there's a handful of issues where, when you really think about it, technology is directly involved. Whether that's, for example, privacy or the ethics of AI, we're really looking for the places where our technology is making a big difference, and that's where we need to focus.

The other thing is that it's incredibly important that there's almost a sense of the process as a product here. What I mean by that is that we have an Ethics Advisory Council. We have all kinds of ways of listening to employees and external stakeholders, whether that's office hours, confidential focus groups, surveys and so on. We keep trying to figure out new ways to listen. It's really important that people feel heard in the process and that we are listening to all perspectives and bringing that into the decision-making process. That's a large part of the work, and some of the really fun parts of my job this year have been working on it.

Goldman cites work done around AI as a case in point:

We're really active in the Partnership on AI, and there you see a lot of very concrete sets of standards emerging. So for example, on the question of explainability of AI, there's a pilot project around model cards and datasheets. There are specific ideas emerging from that community of practice that are really promising, because that's exactly the kind of thing you'd want to see. That idea is basically like a nutrition label for a product, and you'd want a standard like that across the industry.
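For context, a model card is exactly that kind of nutrition label: structured documentation that travels with a trained model. Here is a minimal sketch in Python of what such a record might contain - the field names and example values are purely illustrative, not the Partnership on AI's or Salesforce's actual schema:

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """A 'nutrition label' for a trained model: structured metadata
    describing what the model is for and how it should (not) be used."""
    name: str
    intended_use: str              # the task the model was built for
    out_of_scope_uses: list[str]   # uses the authors advise against
    training_data: str             # provenance of the training set
    evaluation_metrics: dict       # e.g. accuracy, ideally per subgroup
    known_limitations: list[str]   # documented failure modes and biases

# Hypothetical example for an imaginary lead-scoring model.
card = ModelCard(
    name="lead-scoring-v2",
    intended_use="Rank inbound sales leads by likelihood to convert",
    out_of_scope_uses=["credit decisions", "employment screening"],
    training_data="24 months of anonymised CRM opportunity records",
    evaluation_metrics={"AUC": 0.87},
    known_limitations=["Under-represents leads from new market segments"],
)
print(card.name, "-", card.intended_use)
```

The point of standardizing such a format across the industry, as Goldman suggests, is that anyone evaluating a model can find the same disclosures in the same place, just as a shopper can with a food label.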

Product process 

As for embedding ethics into the product development process, Goldman insists that Salesforce employees are encouraged at all times to think about the consequences, including the unintended ones, of new technologies. Again she points to AI as an exemplar, in this case around the topic of bias:

When we're building our own models, we're going to go out of our way to make sure that we've got representative data sets and whatnot. As to how customers use the product once it ships - if customers are building their own models, we've created a set of features to help serve as kind of guideposts around that. I can give you a few examples. One thing is that we have a feature that came out this year called protected fields. It basically enables customers to say, 'Hey, you know, there are a couple of really sensitive data categories that actually we do or don't want used for decision making.'

So for example, we want to protect race. This feature will say, 'Hey, I see you've protected race. But you have a category called zip code, and zip code can be correlated with race. Do you want to protect this as well?'. I'm personally incredibly excited about this. I think we've seen just such a tremendous response, both internally from our employees, who are really proud to work on this kind of stuff, but also externally from our customers, who increasingly understand the risks of these emerging technologies and see it as yet another layer of trust with Salesforce.
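To make the proxy idea concrete, here is a minimal illustrative sketch in Python. This is not Salesforce's implementation - the correlation test, the threshold, and the toy data are all hypothetical - but it shows the gist of flagging fields that are statistically associated with a protected attribute:

```python
import pandas as pd

CORRELATION_THRESHOLD = 0.5  # hypothetical cut-off for raising a warning

def find_proxy_fields(df: pd.DataFrame, protected: str) -> list[str]:
    """Flag fields so strongly associated with the protected attribute
    that dropping the protected field alone would not remove the signal."""
    # Crudely encode categorical columns as integer codes so a simple
    # Pearson correlation can be computed; a real system would use a
    # proper association measure for categorical data.
    encoded = pd.DataFrame({
        col: pd.factorize(df[col])[0] if df[col].dtype == "object" else df[col]
        for col in df.columns
    })
    proxies = []
    for col in encoded.columns:
        if col == protected:
            continue
        corr = encoded[col].corr(encoded[protected])
        if abs(corr) >= CORRELATION_THRESHOLD:
            proxies.append(col)
    return proxies

# Toy data where zip code tracks race almost perfectly, as in Goldman's example.
data = pd.DataFrame({
    "race":     ["A", "A", "B", "B", "A", "B"],
    "zip_code": ["94110", "94110", "10451", "10451", "94110", "10451"],
    "tenure":   [3, 5, 2, 7, 4, 1],
})
print(find_proxy_fields(data, protected="race"))  # -> ['zip_code']
```

Run on the toy data, zip code is flagged while tenure is not - the same prompt-and-warn flow Goldman describes, where the customer decides whether to protect the correlated field as well.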

She adds another example to make her point:

We've done a series of pilots this year with a methodology called consequence scanning, which actually emerged from Doteveryone, a UK-based think tank. It's getting a cross-functional product feature team together and walking them through the exercise: what are the anticipated consequences of this product, what are the unanticipated consequences, and really looking at this. You sort of see the light bulb go off, and it is super generative because at the end of the day, people say, 'Oh, here's some really positive thing we hadn't thought about. If we design it in this way, we can increase the likelihood of that happening.' And you actually see the product roadmap changing as a result of it.

The process is the product, in a way. It's really about, as you're watching this space, looking at how processes like that scale across tech and whether they become the new normal, because that's really what I see as my remit. If you change the way the sausage is made, on the other side you get some really amazing results.

What the customers do...

But away from the internal enthusiasm for ethical thinking, there is the question of how to ensure that Salesforce customers share those values. The controversy around the company’s work with US Customs and Border Protection is a good example, but there are others. For example, Salesforce has a policy that “disallows the use of our e-commerce platform for the transacting of assault weapons”, i.e. gun manufacturers can’t use Commerce Cloud to sell weapons directly to customers.

Good! 

But what happens if the firm engages in a contract with a retail giant which, for the sake of argument and naming no names, sells a heady combination of groceries, diapers, washing powder and assault rifles? How far does Salesforce’s responsibility extend to policing that sort of scenario? Goldman argues:

This is about use cases. It's not about casting judgment on an individual customer; it's about looking at the positive impact of our technology, what are the biggest risk areas that we're looking at, and setting policy around that.

My take

That last response doesn’t really address the question. It’s clear that if such a retailer were using Salesforce Clouds to sell assault weapons, it would be in violation of the ethics policy. Goldman suggests that in such a situation Salesforce would seek a conversation with its customer, with an ultimate sanction of pulling out of the commercial relationship. That hasn’t happened yet, but it seems likely that it’s only a matter of time before this is put to the test.

I really don’t want to appear too critical here. I applaud without reservation the creation of the new ethics office and the thought leadership position that Salesforce is striving to take. It’s an incredibly complex landscape in which to operate, with vested interests on all sides. It’s still very early days, and the long-term impact of Goldman’s office will take shape over time. And this is a debate that can’t just be left to Salesforce to amplify. As Goldman says:

We are one actor in a broad ecosystem. We need governments on board, and we need civil society on board, and we need lots of companies doing this, and we make no pretence of having all the answers. It's really important that this is a shared effort.

It is indeed, and diginomica will be tracking - and encouraging - progress to that end. We look forward to reporting back on the next steps and contributing to this essential debate.
