With the great power of AI comes great ethical responsibility

Kathy Baxter, April 8, 2020
A to-do list for developing ethical AI policies and practices from Salesforce's Kathy Baxter.

Artificial Intelligence (AI) will increasingly underpin much of our personal and professional lives in years to come. It’s already improving and simplifying our lives in more ways than many people realise, directly within our computers, smartphones and other connected devices.

It’s enhancing patient services in healthcare, making online shopping easier and even proactively running diagnostics on our vehicles to keep us and our loved ones safe. And there’s huge potential for it to go further.

However, with convenience and capability comes responsibility. Important decisions lie ahead about how businesses and governments test, deploy and manage AI. At Salesforce, we believe that if industries collaborate and innovate together, our collective AI legacy will be positive for humankind. To ensure this, we’ve developed a series of ethical and trusted AI best practices for the benefit of the wider technology community, and society at large.

Ethical AI in practice 

Everyone in an organisation should feel responsible for working ethically and transparently. This shared responsibility is similar to that of maintaining security across the workforce – be that locking the doors at night or being able to identify malicious phishing emails.

AI has the potential to transform the way we do business, the jobs we do and how we work. Fortunately, this transformation augments our work, making us more efficient, more productive and therefore more valuable to employers. Developing trusted and ethical AI systems takes time and may require employees to work differently, so an end-to-end approach is required.

This holistic approach must allow businesses to build diverse teams, incentivise ethical decision making and work towards removing bias from all business processes. AI gives organisations a window into their own pre-existing biases and can expose the most subtle preferences or aversions to particular choices.

Often, issues arise subconsciously, sometimes in the form of hiring decisions that lessen the likelihood of workforce diversity, or promotions awarded along gender or ethnic lines. It’s essential to have systems that identify such bias: if the data feeding AI algorithms is biased, AI applications will amplify those traits, and AI use cases and projects are often aborted when this issue is recognised. Creating AI that our customers and society can trust is not only better for society as a whole, it is also better for business. Representative, fair, transparent and accountable AI systems are more accurate, easier to debug and more likely to comply with existing and future governance.

Three stages of building ethical AI systems

There are three stages that businesses can follow to build ethics into their AI systems.

  1. A business must compose its strategy for an ethical working culture that pervades the work employees do every day.
  2. Responsible product development methodologies should be implemented throughout the company, with incentive structures that reward appropriate behaviours.
  3. The organisation must work to remove all chances of exclusion or bias. This begins by identifying areas of potential bias in business processes, such as hiring and promoting staff based on gender, age, race or social class.

Fortunately, AI can help organisations see bias in their business processes, particularly where they did not know such bias existed. Once they identify bias in the data, businesses can act on it, for example by finding more representative data sources or editing the data to remove the bias. Next, they’ll need to examine the model itself. In a regulated industry, businesses may not be allowed to use sensitive demographic fields (e.g. age, race, gender) in their models.
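The kind of data check described above can be sketched in a few lines of Python. This is an illustrative example only, not a Salesforce tool: the records are synthetic, the field names are assumptions, and the 0.8 threshold reflects the common "four-fifths" rule of thumb for flagging a selection-rate disparity for review.

```python
# Hypothetical sketch: measuring selection-rate disparity between
# groups in historical hiring data. All records are synthetic.

records = [
    {"group": "A", "hired": True},
    {"group": "A", "hired": True},
    {"group": "A", "hired": True},
    {"group": "A", "hired": False},
    {"group": "B", "hired": True},
    {"group": "B", "hired": False},
    {"group": "B", "hired": False},
    {"group": "B", "hired": False},
]

def selection_rates(rows):
    """Return the fraction of candidates hired, per group."""
    totals, hires = {}, {}
    for r in rows:
        g = r["group"]
        totals[g] = totals.get(g, 0) + 1
        hires[g] = hires.get(g, 0) + (1 if r["hired"] else 0)
    return {g: hires[g] / totals[g] for g in totals}

rates = selection_rates(records)

# Disparate-impact ratio: lowest group rate divided by highest.
# A common rule of thumb flags ratios below 0.8 for human review.
ratio = min(rates.values()) / max(rates.values())
flagged = ratio < 0.8
```

A check like this does not prove discrimination, but it points auditors at the processes worth examining, which is the first step the article describes.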

However, data sets will often contain variables that are proxies for those sensitive fields. For example, during the hiring process, listing participation in certain sports on a CV can be a proxy for gender. If organisations use these fields in their models, they are inserting bias in their decision making and the outcomes are not as fair as intended.

The need for governance

Businesses in heavily regulated industries such as financial services and healthcare can be reluctant to adopt AI due to a lack of clarity about regulations and risks. Implementing AI without a clear understanding of how it will be regulated can lead to future problems. Collaboration is key: regulators must work directly with industry leaders and technologists to understand clearly how AI can and should be designed and regulated. This collaborative approach ensures ethical best practice at scale, from initial implementation to broad deployment. Without it, companies may choose not to invest in valuable AI, or fail to adopt it at all.

Regulators therefore need to take a nuanced approach when dealing with the many different AI applications across different industries. For example, they must distinguish between regulating the technology itself – such as appropriate uses for facial recognition technology – and the application of the technology by different users – such as private, public and governmental bodies.

Ultimately, a single sweeping piece of legislation loses these nuances, and therefore delivers less benefit from AI. This is not conducive to forward-thinking, creative technology that improves human life. The positive path forward involves regulators working alongside other key stakeholders to understand what is most beneficial to consumers, businesses and governments.

The future of trusted and ethical AI

Creating and implementing AI responsibly not only benefits society, but also enables organisations to get ahead of impending regulation. This will be a competitive differentiator moving forward. It is therefore incumbent upon every organisation using AI to put ethics at the forefront of its operations.

At Salesforce, we create a range of tools and guides to help customers better understand their data and its applications, while taking into account responsible business best practice. Ethical use of technology is one of our formalised, strategic initiatives, which is why we are implementing processes to bake ethics into our product development lifecycle.

As an industry, we have to move towards a future in which technology unequivocally makes the world a better place. We can do that by giving professionals the skills and the environment to carry out purposeful, impactful work through technology.
