Transparent truths about AI ethics - assessing a seven point set of principles from Capgemini

Chris Middleton, October 8, 2020
Transparency isn't the silver bullet that's going to address every ethical concern around AI deployment, but it's an essential bedrock on which to build.


With debate raging about bias, about the lack of diversity in teams and data sources, and about the growing risk of AI automating social problems or lending them a veneer of digital neutrality, transparency is the key to unlocking the ethical conundrum.

Put another way, if your organization can’t be open and transparent about its processes, and about why you are seeking personal data, applying algorithms to it, or denying someone services, then you shouldn’t be doing those things in the first place.

That’s the view expressed in a new Capgemini report - AI and the Ethical Conundrum – How organizations can build ethically robust AI systems and gain trust. The 44-page document breaks the ‘conundrum’ down into seven principles to help decision-makers build and strengthen public trust.

Human agency and oversight

Capgemini says AI systems should support human autonomy and decision-making. They should be enablers of a democratic, flourishing, and equitable society by supporting the user’s agency, fostering fundamental rights, and allowing for human oversight. 

In my view, AI isn’t something that you can just outsource decisions to; it should always augment human processes – a line regularly pushed by the likes of IBM and Microsoft. However, the challenging issue is defining exactly what our fundamental rights are in a world where some governments see things very differently to others, and systems are exported from one culture to another.

Technical robustness and safety

Capgemini says AI systems need to be resilient and secure. They should also be safe, offering a fallback plan if something goes wrong, as well as being accurate, reliable, and reproducible.

To my mind, this is something the UK government, knee-deep in claims about ‘rogue algorithms’ and multimillion-pound systems based on ancient spreadsheets, should take more seriously in its quest to be a leader in AI.

Privacy and data governance

As well as ensuring respect for privacy and data protection, Capgemini says adequate governance mechanisms must be put in place that consider both the quality and integrity of the data and ensure legitimate access to it.

I’d argue that much of this point is covered by regulations such as GDPR in Europe, and in the US by growing support for similar concepts, such as the California Consumer Privacy Act (and moves in that state against facial recognition systems). However, these restrictions pitch lawmakers in Europe and the US against Silicon Valley behemoths such as Facebook and Google, not to mention local police and security services.

Diversity, non-discrimination, and fairness

Capgemini calls for the avoidance of unfair bias, encompassing accessibility, universal design, and stakeholder participation throughout the lifecycle of AI systems, as well as the need to enable and protect diversity and inclusion.

This has rightly been the focus of most AI ethics debates, centring on the risks of systems perpetuating racial discrimination and other forms of exclusion along gender, age, disability, sexuality, economic, or faith-based lines. 

Such problems may have different causes, including: biases in historic training data or flawed systems in the real world; confirmation bias at the outset of a project, when organizations automate flawed, inaccurate, or ideology-based assumptions; or a lack of diversity in development teams.

Societal and environmental wellbeing

AI systems should benefit all human beings, including future generations. This point is related to bias, but also means that systems should themselves be sustainable and environmentally friendly.


Accountability

Closely linked to the principle of fairness, this requires mechanisms to ensure responsibility and accountability for AI systems and their outcomes, both before and after their development, deployment, and use.


Transparency

AI systems should be based on the principles of explainability and transparency. The ability to communicate all of the elements in their design and use (the data, the system, and the business models) to anyone who demands that information is essential.


That last point is the most important, says Jo Peplow, AI and analytics team lead for Capgemini UK and Ireland:

It’s definitely around the transparency and the explainability. Unless we can challenge, and we can understand and explain what's going in, then we'll never know if there is a bias in the data, because you won't have a big enough pool of people who are challenging that solution. We'll never be able to fully adopt things unless we're transparent and we understand that we can get the value out of them. And we won't know what's going on in there. That for me is the key: to remain transparent and explainable, but that needs to happen at the right level and in the right way.

She explains: 

We need to make sure that we're not asking our AI engineers and developers to spend all of their time focusing on the explainability side. There needs to be a balance, a focus on the value of actually generating these insights and these algorithms. But at the same time, ensuring we're doing it with a business value view, and an explainable view, in the backs of our minds at all times.

Transparency has a double edge, therefore: it is not just about clarity and openness to the customer, citizen, or end user, but also to the development team. For example, coders and decision-makers may not understand an inherent bias if it is simply not visible to them and they have no idea how it is affecting the system’s output.

According to the report, customers say that organizations are not doing enough on these ethical issues, and especially on transparency. Capgemini says that the proportion of customers who believe that organizations are being transparent about how they are using personal data has fallen from 76% in 2019 to 62% today. 

However, organizations are making progress on the ‘explainability’ aspects of their algorithms – why they exist and what they are doing – but are still struggling to make the systems themselves transparent or auditable, adds the report.

Accountability also remains patchy: just 53% of organizations have a leader who is responsible for the ethics of their AI systems, while even fewer have a confidential hotline or ombudsman that allows customers or employees to raise ethical concerns.

So the question is, might this problem get worse in the future? At a time when some technologies are becoming more opaque to the average person – neural networks, black-box solutions, and cloud-based quantum platforms among them – is there a risk that transparency and audit may become more difficult? Peplow doesn’t think so:

I can't see why an organization would use things if they can't explain them, because the amount of proofs of concept that people have tried to do, the investment that people have put in when they've not taken the business on that journey with them... If there is no end-user adoption, it becomes a wasted investment if you can’t take the business along with you on the technology journey.

There are an awful lot of technologies out there. Many of them are not new, but the reason they're starting to be used more widely is that we are able to explain them and we are able to find the business value in generating use cases for them. 

It’s like Big Data. It’s only useful if you know the use cases for the data set, otherwise you're just spending an awful lot of money to hold that data, look after it, and keep it secure. Unless you can find the use cases, it’s not bringing any value to the organization.

My take

Despite ethical concerns, public trust in AI systems appears to be growing, even as fears about transparency and accountability are rising. The report says that close to half of customers (49%) say that they now trust AI-enabled interactions with organizations, up from 30% in 2018.

This suggests that consumers are being increasingly exposed to AI systems that have caused no obvious harm to them, despite stories about the misuse of data and the flaws in some algorithms’ design. 

The lesson here is that those failings were predominantly human. Policymakers may blame ‘rogue algorithms’, but the root cause is dumb or dishonest decisions at the management table.

Overall, I’d rate this as a good, accessible report from Capgemini that’s well worth your time.
