AI Bill of Rights - industry should police itself, claims ML company
Summary:
A software pioneer seems to agree with the US government over its AI Bill of Rights rather more than he claims
The Biden White House’s Blueprint for an AI Bill of Rights – ‘making automated systems work for the American people’ – seeks to minimize bias and the potential risks to citizens from technology overreach, data grabs, and intrusion. So why are some tech companies up in arms about it?
Perhaps some questions answer themselves. But on the face of it, the Blueprint contains a reasonable set of aims for a country with an insurance-based healthcare system, and where employment, finance, and credit decisions increasingly reside in inscrutable algorithms.
Moreover, it suggests a similar direction of travel to that of Europe’s regulators, and to the UK’s, who share a desire to rein in the power of tech giants (in Britain’s case via the new Digital Markets Unit within the Competition and Markets Authority).
The White House’s Blueprint, which – importantly – is hands-off guidance rather than a legislative imperative, notes:
Too often, these tools are used to limit our opportunities and prevent our access to critical resources or services. These problems are well documented. In America and around the world, systems supposed to help with patient care have proven unsafe, ineffective, or biased.
Algorithms used in hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination. Unchecked social media data collection has been used to threaten people’s opportunities, undermine their privacy, or pervasively track their activity – often without their knowledge or consent.
These outcomes are deeply harmful – but they are not inevitable.
However, the text is hardly one-sided. It also reinforces the government’s commitment to unlocking the benefits of AI and data innovation, but via inclusion for all, as opposed to algorithmic exclusion of vulnerable individuals and minorities.
In short, the White House wants to nurture a powerful industry, but not so powerful that it threatens civil rights, democratic norms, and (whisper it) federal authority.
After all, we live in an age where titans like Elon Musk see an opportunity to challenge the White House openly and politically, via platforms they regard as their own personal mouthpieces. Core to that business model is engendering mass distrust of government, media, and global institutions.
We also live in an era when clever products like OpenAI’s ChatGPT have been adopted by a playful – and perhaps overawed – public with little awareness of their flaws and risks.
A recent report by the UK’s Guardian newspaper suggested that up to one-fifth of assessments examined by an Australian university already contained identifiable input by bots.
ChatGPT, the use of which may be harder to detect, has been found on occasion to have little basic ‘knowledge’ of fundamental physics, mathematics, or history, sometimes making elementary mistakes.
The implication of this is clear: erroneous information may be given a veneer of AI-generated veracity and trust, while some lazy humans see a short cut to less work and instant, ersatz credibility.
Meanwhile, singer, songwriter, and novelist Nick Cave – that most literate of musicians – called the system an exercise in “replication as travesty” after a fan sent him a ChatGPT lyric supposedly written in his own style. He wrote in his blog The Red Hand Files:
I understand that ChatGPT is in its infancy, but perhaps that is the emerging horror of AI – that it will forever be in its infancy.
An astute observation. Cave added that human beings’ thoughts, feelings, skills, memories, and desire to push themselves and experiment are poured into their art, while ChatGPT produces simulations. A photocopy of a life, perhaps, as opposed to decades of lived experience.
In this way, it implicitly renders authentic endeavour valueless, even as the network effect chips away at creative people’s ability to profit from their work. Today, that economy seems more adept at generating engagement through outrage, anger, and opposition rather than insight, empathy, and collective vision. Click Yes or No, ye bots and fake accounts, and thus simulate a democracy!
In spite of all this, the US government’s stated desire for safer, more effective systems and greater personal privacy – not to mention its call to explore human alternatives to AI where possible – has rattled some in Silicon Valley. Indeed, it has left “many concerned about the future of ethical AI if left in the hands of the government”.
At least, that’s the opinion of one opponent: CF Su, VP of Machine Learning at intelligent document processing provider, Hyperscience. In his view, AI ethics should be left in the hands of “those who know the technology the best”.
In other words, butt out, Mr President, and let the industry police itself, given that many providers, including some in Big Tech, have been spearheading their own ethical initiatives independently for years.
They have. However, the problem with this view is that it suggests a troublingly short memory – which is ironic for a machine learning specialist like Hyperscience. Many technology behemoths were backed into making those ethical statements by public outcries, and in some cases – most notably Google in 2018 – by concerted employee rebellion against the use of its technology by the military.
Microsoft, Amazon, and Apple have in their own ways also been accused of unethical behaviour, such as handing private data to government agencies by backdoors (Apple and Microsoft), the pushing of flawed, real-time facial recognition systems to the police (Amazon), and more.
California and San Francisco – the cultural epicentre of Silicon Valley – have in recent years outlawed or limited a range of technology advances: the use of real-time facial recognition by law enforcement, for example, the ability of police robots to kill suspects remotely (with a human in the loop), and even (shock horror!) the excessive presence of electric scooters.
The state has also advocated for greater citizen privacy and introduced legal data protections to that effect. These have all been moves by local government against technology initiatives that, signally, failed to police themselves effectively or protect the public.
Where’s the line?
So, in the long tail of the Facebook/Meta and Cambridge Analytica scandal, can tech behemoths really be trusted to police themselves during this data goldrush and landgrab, and when social platforms’ key products are their users?
To find out, I pulled up a chair with Hyperscience’s CF Su.
First, he explained that his own products have a simple and useful function: they seek to transform unstructured, human-readable content into structured, machine-readable data. The aim, he said, is to automate low-value tasks, reduce unnecessary costs, mitigate errors, and improve the overall quality of decision-making in business.
Fair enough. Such AI- and ML-enabled activities might include classifying the content of an email based on its perceived sentiment, urgency, and subject, so that it could be answered automatically, or routed to the correct department, he said.
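For illustration only – and not drawn from Hyperscience’s actual product, whose internals Su did not detail – a minimal sketch of that kind of pipeline might look something like this: pull a rough ‘topic’ and ‘urgency’ out of free text, then route the message to a queue. The keyword lists and queue names here are invented for the example.

```python
# Illustrative sketch only: a toy router that turns an unstructured email
# into structured, machine-readable fields. Keyword lists and queue names
# are invented for the example.
import re
from dataclasses import dataclass

URGENT_WORDS = {"urgent", "immediately", "asap", "overdue"}
TOPIC_KEYWORDS = {
    "billing": {"invoice", "payment", "refund", "balance"},
    "support": {"error", "crash", "broken", "help"},
}

@dataclass
class RoutedEmail:
    topic: str    # inferred subject area
    urgent: bool  # naive urgency flag
    queue: str    # destination department

def route_email(body: str) -> RoutedEmail:
    words = set(re.findall(r"[a-z']+", body.lower()))
    urgent = bool(words & URGENT_WORDS)
    # Pick the topic whose keyword set overlaps the message most, if any.
    topic = max(TOPIC_KEYWORDS, key=lambda t: len(words & TOPIC_KEYWORDS[t]))
    if not words & TOPIC_KEYWORDS[topic]:
        topic = "general"
    queue = f"{topic}-priority" if urgent else topic
    return RoutedEmail(topic=topic, urgent=urgent, queue=queue)

print(route_email("My invoice is overdue, please refund me immediately"))
# -> RoutedEmail(topic='billing', urgent=True, queue='billing-priority')
```

Even in this toy form, the judgment calls – which words count as ‘urgent’, which topics exist at all – are baked in by whoever wrote the rules, which is precisely where the ethics questions begin.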
This type of function plugs neatly into discussions about AI ethics for a simple reason: sentiment and emotion analysis – according to organizations such as the UK’s Information Commissioner’s Office (ICO) and others – is a flawed concept. Indeed, in some cases, the ICO believes it is fake science.
Sentiment is mutable and often misunderstood by humans, let alone by machines. Critically, it also varies from culture to culture, from language to language, and from ability to ability – including among neurodiverse people. There is no universal benchmark for sentiment. So, what if by making human-made, human-targeted content into machine-readable data, the machine gets it wrong, and the result is harm to a human being?
In other words, what’s so wrong with the US government seeking to protect the human rather than the software maker, via some non-binding ethical guidance? Aren’t many of these technologies at too early a stage to trust them with big decisions, let alone scale them across the enterprise to deal with people’s lives and finances?
He said:
This kind of automatic system is definitely picking up momentum. We see more and more applications. […] I think people are opening up to these kinds of automated systems.
Look at an application form: what's the name? What's the address of this applicant? Or look at this invoice or at that bank statement: what are the numbers, the account ID, and the total balance? All these are pretty easy to verify. So, people are more comfortable with this kind of automated system, because they are treating it as a tool.
And so, there's no major concern about bias or discrimination in this kind of system. But when it comes to extracting insight, sentiment, or making a decision – like approving a mortgage or job application – that's the grey area. High-risk areas that people are still trying to figure out. It depends on the application.
Exactly, and surely this is all the Blueprint seeks to address.
Also, the low-risk applications he describes are hardly unstructured information: forms and boilerplate documents are highly standardized and therefore structured, in effect. Isn’t the real risk that we start distorting and simplifying other human-readable information to make it more digestible to machines – to algorithms and search engines – to help them make decisions about us or our data? Su said:
You're exactly right.
Some companies are using automated systems and AI assistants to parse a resumé, for example. So, humans, when applying for a job, start to put specialized keywords in, to stuff their resumés with fancy keywords they hope the machine will pick up. This is a scenario that could happen in some corner of the business world, and that's why this kind of application is classified as high risk, because, essentially, we are using a machine to make a decision.
In other words, humans are learning how to game the machine. So – given that he seems to agree with both me and the Blueprint in this regard – what’s behind the growing trend in tech to criticize AI ethicists and claim the industry should, and does, police itself? He added:
It’s very important that the public is aware of the potential power and benefits, but also the potential negative impacts, of such a system.
But my position is that we shouldn’t let government pass laws to regulate this industry. It’s a daunting task for the government to carry out, right? I think the industry should be self-regulated based on the guidelines announced by the government.
But that’s all the government is doing: issuing guidelines. And the industry hasn’t shown that it can self-police or self-regulate. Su said:
What you say makes sense and there are a lot of benefits, but there are also a lot of downsides when governments directly regulate an industry like AI or machine learning. It’s a very fast-moving area. Research is rapidly developing and it’s impossible or impractical for a lawmaker to stay on top of that. And there are a lot of nuances in this. So, direct regulations may have huge downsides and unintended consequences.
OK, but if the problem is that tech is moving too fast, while lawmakers move incrementally, case by case and precedent by precedent, arguably the law becomes a useful brake on unintended consequences to society. Isn’t that what the law should be? Su continued:
I think artificial intelligence is, in a sense, closer to physics, or to medicine. And it’s difficult to say that laws or regulations will be an effective way of slowing down or stopping it. What’s important is education.
Yes, but what if AI starts to take the place of education, with people already beginning to trust its answers, even when they are provably incorrect?
We already spend less and less time ourselves looking for information, checking references, and verifying that data is correct. Most people don’t even look past page one of Google! Aren’t we all looking at a vast data landscape through a pinhole, and that problem is getting worse? He added:
I agree, this is definitely an issue. I think it's something that, as a society, we have to figure out a solution to.
My take
At least we can agree on that. And I would argue that is all the Blueprint seeks to do.
Perhaps the subtext is that CF Su – as far as I can tell – seems to agree with its aims, but understands that there are clicks and engagement in saying the government should stay out of the AI sector.
The core issue, then, appears to be that the industry is worried about the direction of travel: that the Blueprint may be a harbinger of stricter laws and regulations, reining in a fast-moving US industry and so allowing China and others to leap ahead.
It’s not clear that is the case: few American presidents would stomp into a growing market and reset it back to the 1970s, for example. Though Trump certainly tried it with green energy and renewables.