
How to create a compass for ethical AI

Barbara Cosgrove, July 31, 2019
With the rise of artificial intelligence and machine learning, it's time to set a compass for ethical AI. Workday's Chief Privacy Officer Barbara Cosgrove offers four guiding principles.


Technologies such as artificial intelligence (AI) and machine learning (ML) will have a significant impact on the enterprise. By increasing efficiencies and improving data-driven insights, they will bring fundamental improvements to the way we work and live. But what are the ethical considerations surrounding these powerful new tools?

AI isn't about supplanting human decision-makers. Instead, AI-powered applications should make predictions that, when combined with human judgment, help inform better decisions. But the success of AI, like any emerging technology, depends upon trust, and that trust will only exist if companies adhere to responsible, ethical practices.

Many companies are eager to maximise the value of AI and ML - whether the technology is being built in-house or tapped into via powerful analytics and planning tools - but in the face of such a profound technological and societal change, committing to an ethical compass that centres on humans and trust is a vital part of the process. As you start to develop this framework with your teams and stakeholders, consider using these guiding principles to help to steer you in the right direction:

Put your people first

The first question companies should ask themselves is: how can AI best support our people? Perhaps it's an opportunity to improve efficiency in a particular business process, or to use applied analytics to generate insights that improve the accuracy or efficacy of decisions made by a certain group of employees. Once the benefits and any other expected changes are clearly outlined, it's important to communicate them to the employees affected.

Companies should also consider developing a reskilling program for employees, depending on the industry and business model of the organisation. Whether or not a formal reskilling strategy is put into place, it's important to work with employees to ensure they not only understand how to use the technology, but also feel empowered to add value through their own ideas.

Protect data

Be conscientious about what you will and won't use data for. Most companies sit on an incredible amount of data, and AI allows them to use that data in myriad ways.

But just because you can doesn't mean you should. You must have a clear policy on what you will - and, more importantly, what you won't - use data for.
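One way to make the "won't" side of such a policy enforceable, rather than merely aspirational, is to encode it as an explicit allowlist so that any data use not named in the policy is denied by default. Below is a minimal sketch in Python; the data categories, purposes, and `use_permitted` helper are illustrative assumptions, not a real policy engine.

```python
# Minimal sketch: a data-use policy as an explicit allowlist.
# Any (category, purpose) pair not listed here is denied by default.
# Category and purpose names are hypothetical examples.

ALLOWED_USES = {
    "payroll_records": {"salary_benchmarking", "compliance_reporting"},
    "support_tickets": {"product_improvement"},
}

def use_permitted(data_category: str, purpose: str) -> bool:
    """Return True only if the policy explicitly allows this use."""
    return purpose in ALLOWED_USES.get(data_category, set())

# An approved use passes; anything unlisted is refused.
print(use_permitted("payroll_records", "compliance_reporting"))  # True
print(use_permitted("payroll_records", "targeted_marketing"))    # False
```

The design choice worth noting is the default: a deny-by-default allowlist forces every new use of data through an explicit policy decision, whereas a blocklist silently permits anything nobody thought to forbid.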

Listen to customers, and build trust

Your customers will want to understand how their data is being used and how AI will enhance their experience - without compromising their privacy, and without losing the human contact they have with the people in your organisation.

Companies must be transparent about how they are using AI and the role that customer data plays, and demonstrate what benefits the customer can expect.

Actions speak louder than words

Talking about ethics and creating a set of guiding principles on how your company plans to use AI and manage its effects is not enough. Companies should have processes in place to ensure they are continually complying with their own guidelines. They should also regularly review those principles to incorporate new industry best practices and regulatory guidelines, and maintain robust review and approval processes for introducing new technology or new ways of using data.

Above all, companies should be focused on what their employees and customers need and how emerging technologies, such as AI, will allow them, and their people, to achieve more.
