On September 15, 2021, the UN issued a statement that AI must not interfere with human rights. This isn't a new sentiment - last year, a similar pronouncement was issued:
Michelle Bachelet, the UN High Commissioner for Human Rights, is calling for a moratorium on the use of artificial intelligence technology that poses a serious risk to human rights, including face-scanning systems that track people in public spaces. Her office also said Wednesday that countries should expressly ban AI applications that don't comply with international human rights law.
The September 15 announcement also comes with a new report:
As part of its work on technology and human rights, the UN Human Rights Office has today published a report that analyses how AI – including profiling, automated decision-making and other machine-learning technologies – affects people’s right to privacy and other rights, including the rights to health, education, freedom of movement, freedom of peaceful assembly and association, and freedom of expression. [The report can be downloaded as a Word document via this link].
Bachelet has further said that:
Applications that should be prohibited include government "social scoring" systems that judge people based on their behavior and certain AI-based tools that categorize people into clusters such as by ethnicity or gender.
When I read the new report, my first thought was how many ethical-AI principles and proclamations emanate from European governments and from UN agencies based in Switzerland. The only detailed report I have read from the US federal government came from the JAIC (the Joint Artificial Intelligence Center, the Department of Defense's AI Center of Excellence). This is a pattern of the past few years: guidelines for responsible AI emanate from the EU, UNESCO, and the World Economic Forum. And it raises the question:
After a calamitous, genocidal twentieth century, why is Europe in the vanguard of human rights - and, specifically, of keeping AI from interfering with them - while the US is mute?
The first thing I wanted to understand is why the US government is so reticent about AI. The second is why Europe seems to have the agency to push on the problem of AI and human rights. But there is a third thing: is it unusual for a significant technology to burst onto the scene and NOT be used to suppress people?
First question: In 2020, the United States tech sector contributed around 1.99 trillion U.S. dollars to the country's gross domestic product (GDP), approximately 10.5 percent of the total. Too big to fail. As I see it, the US government is more concerned with protecting business than with protecting people.
I believe that the EU, which does not allow campaign financing from corporations, labor unions, and the like, is less susceptible to the needs of business and, to a certain extent, is more focused on the welfare of its population than the US is.
The second question more or less follows from the first. But the USA also isn't a single country: it's a federal republic, and the states are very powerful. Germany, for example, or the entire EU, can issue a policy on AI and human rights, but the US federal government would have to campaign to all 50 states, where a fair number would likely reject it. We have one Department of Defense, so it is free to draft its own regulations.
Third question: Is it unusual for a significant technology to burst onto the scene and NOT be used to suppress people?
In a paper, The Impact of Technology on Human Rights, C.G. Weeramantry, a judge of the International Court of Justice in The Hague, writes:
In English history, William the Conqueror launched the project of the Domesday Book, which recorded every farm in England, the number of cattle on the farm, its extent and value and debts, who owned it and how many people there were on each farm. If he could have done this in 1086, imagine what he would have done if he had lived in the computer age nine centuries later. In fact, there is technology now available to a dictator by which you can bounce off a light beam from the window pane of a house and capture and record the conversation that goes on within it. With all those devices available, imagine the life of people in that society.
As the world gets faster and more information-centered, it also gets meaner: disparities of wealth and power strengthen; opportunities change and often fade away. This isn't new, however. Since the discovery of the New World, the history of African-Americans' encounter with technology has been irremediably devastating to their hopes, dreams, and possibilities.
After the War of 1812, the proliferation of steamboats "put the interior South on the cutting edge of technological advance in America." [PDF link]. The growth of steamboat traffic and of river-based trade also revealed "a southern willingness to adopt new technology as a way to modernize slavery." The South developed a booming cotton market, as well as markets for foodstuffs and lumber, and hundreds of thousands of slaves were pushed west to support the economy.
But by the end of the eighteenth century, the efficiency of the slave economy on cotton plantations was being questioned. Technology changed the picture and made life for plantation slaves worse. Eli Whitney invented a simple machine, the cotton gin, that allowed harvested cotton to be picked clean of seeds - an essential step before milling - on a far greater scale than had previously been possible. Suddenly rendered cost-efficient, cotton farming became a way to get rich quick. Thousands of black Africans were imported to do the work; in Mississippi alone the number of slaves increased from 498 in 1784 to 195,211 by 1840.
It's not possible to summarize all the relevant examples in one post. But if history teaches us anything, it warns us that new technology is rarely neutral - it is often deployed by the powerful in a way that has far-reaching, and sometimes tragic impacts.
The concern over AI as a destructive force against human rights is not only palpable, it's warranted. Is AI already being used to suppress people? If history tells us anything at all, the answer is a resounding yes - it already is. Consider the way algorithmically driven social media platforms have affected our elections and our faith in them, harmed the mental health of young people, and dispensed an endless stream of disinformation. There are over 400 million surveillance cameras in China, and despite its lofty AI ethics manifestos, make no mistake: China is a country that keeps a firm grip on what its population sees.
Bottom line: AI isn't the problem. We are. History shows us that technological innovation consistently aids the powerful and oppresses the weak. My suggestion to the UN High Commissioner is that the next planned doctrine on human rights should be mindful of that history.