I wrote this column based on continuing discussions with, and reviews by, a private group of experts and thinkers. We meet weekly as a group and hold separate issue-focused conversations. Out of those conversations, a collective position on a sustainable Internet has emerged – one that takes on the myths and realities of Artificial Intelligence.
The rapid deployment of Artificial General Intelligence (AGI) – systems that make decisions and take action on their own – poses a grave risk to the planet and to the autonomy of human beings. The growth of the Internet of Things (IoT), low- or no-cost computing power and data storage, and lightning-fast network speeds raise the likelihood of the destruction of the energy grid, uncontrolled climate change, and the deployment of superbot armies. These technologies also threaten the free will and independent thought that make us human. Do we want a future of algorithm-driven fragmentation of nations and communities based on fake news, social media surveillance, and disinformation campaigns – or a surveillance-based society in which independent thought or action is hunted down and eradicated?
Global technology standards organizations such as the IEEE and national ones such as the US National Institute of Standards and Technology (NIST) need to come together with technology leaders in government, academia and business to address the many long- and short-term risks these technologies pose to the planet and its people. Some government, business and community leaders have begun recognizing these risks and exploring options. However, whether proposed solutions address these risks or make them worse depends on understanding the science underlying these technologies. The risks can be managed much as nuclear weapons and energy are managed: by aligning international and national policy and law with the technical standards for designing and deploying these technologies. We suggest working groups in the following areas:
- Sustainable Intelligent Internet – an internet design that enables open market operations. The current internet design does not allow users to easily distinguish between the actions of humans and "bots," robots trained to generate content and push it to targeted individuals. There is no widely accepted standard for how information is acquired, protected, or shared, or for how transactions are logged. State-run or private entities may elect to suppress or promote information. This working group would define the guiding principles and technical standards for designing and deploying an open, secure internet platform that protects participants' privacy and enhances users' ability to assess the accuracy of the information they acquire through the internet. This will include the following:
- Registries of prime entities (persons, corporations, other institutions), possibly with apps that allow users to distinguish the content and actions of humans from those of robots.
- Registries of qualified cyber currencies.
- Rules and templates for member entities by type (e.g., benevolent corporations, agents of an outside organization, associations).
- Designs and tools for managing content, including policies, procedures and technology standards for assessing the truthfulness of content while enhancing freedom of speech and protection from cyberbullying.
- Cyber warfare and weapon control: Cyber weapons range from sophisticated hacking tools that penetrate the defenses of infrastructure such as the power grid, factories or hospitals, to tools that disrupt financial systems, voting systems, and news and social media. They also include attacks on distributed IoT systems, from cars and medical devices to automated weapons systems. This working group would develop guiding principles and technical standards to reduce the risk that AGI-guided weapons operating with no human intervention could destroy the planet – work that will require international treaties like those that now govern the management of nuclear weapons. However, given the widespread availability of cyber warfare tools following Edward Snowden's 2013 disclosures about the US National Security Agency, such a treaty will need to cover the many governmental organizations that are now developing and deploying cyber warfare technology.
- Firm foundation: The operating systems that power the computers running the power grid, factories and farm equipment have well-known back doors that allow lone hackers in basements, or teams of state-sponsored cyber warriors, to permanently disrupt operations or disable masses of equipment. With the introduction of IoT and 5G, millions of cars, medical devices and other connected things could be destroyed, or directed to injure humans, in seconds. Existing processes for closing back doors are slow and rely on a widely dispersed user community to vigilantly apply patches as they appear. Governments and businesses must require the rapid replacement of existing operating systems with a transparent, unbreakable operating system. In addition, applications and services running on this improved foundation should follow design standards and processes that reduce the vulnerability of applications and data to cyber-attacks.
- Advanced hardened hardware platforms, from memristive devices to fifth-generation quantum technology (QuIST plus). The rapid evolution of this technology promises ever-faster resolution of complex problems.
The talent and technology to address these issues are, for the most part, widely available. What is missing is a governance framework to guide development and deployment: policy guidelines defining standards for how these technologies will be designed, developed and deployed – including both a positive vision of how they should be used to improve quality of life and defined constraints on how they should not be used.
These suggestions amount to an interventionist approach: anticipating emerging or potential threats and developing countermeasures for outcomes that may never actually occur. Such threats fundamentally differ from "slow train" threats like global warming and income inequality, which result from habitual behavior and corruption rather than malicious acts.
I prefer to invest precious resources in looking for ways to improve the lives of everyone rather than trying to anticipate and stockpile responses to nebulous threats.
For those who want to make their impact felt, which organizations are doing similar work on these issues?
- Cambridge University's Centre for the Study of Existential Risk
- Oxford Internet Institute
- Oxford Global Priorities Institute
- Berkeley's Machine Intelligence Research Institute
- The Future of Life Institute
- Tim Berners-Lee's World Wide Web Foundation and its Contract for the Web
- The Long Now Foundation
Some questions and thoughts that we should all consider:
- What holes need to be filled?
- Possible needs to address:
- Policy: the first step is to make people aware of the problem and to encourage entities to do the right thing.
- Bricolage: putting together what already exists in new ways to address identified needs.
- How an interventionist approach differs, and how it may neglect needs whose fulfillment would improve everyone's lives.
- Understanding how these issues interrelate in order to identify ways to resolve threat scenarios.
- Creating pathways to new services. For example, an agent that acts on my behalf to protect my finances, agency, security, technology, and privacy. My agent would negotiate the best income in return for the use of my data and would have the capacity and authority to deal with suppliers of services and technology, ensuring my interests are enhanced and protected. Adjusting subscription services, for example, can be a tedious affair, and we spend far too much time updating technology – could bot agents become advocates on our behalf?
These are only some of the positions we’ve taken and the questions we’ve grappled with. What questions are you taking on? Are there open forums you are organizing? Let us know. These issues are too big for any one individual to take on without a peer group.