Last week, UK government legislators took a none-too-deep-but-it’s-a-start look at the impact of AI on the national economy. Shortly after, it was the turn of the US, with the outgoing Obama administration publishing two reports that set out some clear guidelines for AI and the government’s role in its development.
Preparing for the Future of Artificial Intelligence looks at the current state of AI, its existing and potential applications, and the questions that progress in AI raises for society and public policy, before making recommendations for specific actions.
Meanwhile, the companion National Artificial Intelligence Research and Development Strategic Plan sets out long-term strategic objectives, noting:
The landscape for AI R&D is becoming increasingly complex. While past and present investments by the U.S. Government have led to groundbreaking approaches to AI, other sectors have also become significant contributors to AI, including a wide range of industries and non-profit organizations. This investment landscape raises major questions about the appropriate role of Federal investments in the development of AI technologies. What are the right priorities for Federal investments in AI, especially regarding areas and timeframes where industry is unlikely to invest? Are there opportunities for industrial and international R&D collaborations that advance U.S. priorities?
Specifically, the two reports assume three key guiding philosophies:
- AI needs to augment humanity instead of replacing it.
- AI needs to be ethical.
- There must be an equal opportunity for everyone to develop AI systems.
The Strategic Plan expands on these as follows:
- Strategy 1: Make long-term investments in AI research. Prioritize investments in the next generation of AI that will drive discovery and insight and enable the United States to remain a world leader in AI.
- Strategy 2: Develop effective methods for human-AI collaboration. Rather than replace humans, most AI systems will collaborate with humans to achieve optimal performance. Research is needed to create effective interactions between humans and AI systems.
- Strategy 3: Understand and address the ethical, legal, and societal implications of AI. We expect AI technologies to behave according to the formal and informal norms to which we hold our fellow humans. Research is needed to understand the ethical, legal, and social implications of AI, and to develop methods for designing AI systems that align with ethical, legal, and societal goals.
- Strategy 4: Ensure the safety and security of AI systems. Before AI systems are in widespread use, assurance is needed that the systems will operate safely and securely, in a controlled, well-defined, and well-understood manner. Further progress in research is needed to address this challenge of creating AI systems that are reliable, dependable, and trustworthy.
- Strategy 5: Develop shared public datasets and environments for AI training and testing. The depth, quality, and accuracy of training datasets and resources significantly affect AI performance. Researchers need to develop high quality datasets and environments and enable responsible access to high-quality datasets as well as to testing and training resources.
- Strategy 6: Measure and evaluate AI technologies through standards and benchmarks. Essential to advancements in AI are standards, benchmarks, testbeds, and community engagement that guide and evaluate progress in AI. Additional research is needed to develop a broad spectrum of evaluative techniques.
- Strategy 7: Better understand the national AI R&D workforce needs. Advances in AI will require a strong community of AI researchers. An improved understanding of current and future R&D workforce demands in AI is needed to help ensure that sufficient AI experts are available to address the strategic R&D areas outlined in the plan.
In 2015 US government spending on unclassified research and development in AI-related technologies was estimated to be around $1.1 billion. The Strategic Plan observes that government investment needs to remain a priority:
The Federal government is the primary source of funding for long-term, high-risk research initiatives, as well as near-term developmental work to achieve department- or agency-specific requirements or to address important societal issues that private industry does not pursue. The Federal government should therefore emphasize AI investments in areas of strong societal importance that are not aimed at consumer markets—areas such as AI for public health, urban systems and smart communities, social welfare, criminal justice, environmental sustainability, and national security, as well as long-term research that accelerates the production of AI knowledge and technologies.
The Preparing for the Future report also stakes claims for future direct and indirect government involvement in AI:
The U.S. Government has several roles to play. It can convene conversations about important issues and help to set the agenda for public debate. It can monitor the safety and fairness of applications as they develop, and adapt regulatory frameworks to encourage innovation while protecting the public. It can provide public policy tools to ensure that disruption in the means and methods of work enabled by AI increases productivity while avoiding negative economic consequences for certain sectors of the workforce. It can support basic research and the application of AI to public good. It can support development of a skilled, diverse workforce.
The Preparing for the Future report comes up with a mighty 23 recommendations in total, a number I don’t plan to go through here. But among the more notable are those around ethical issues:
- Federal agencies that use AI-based systems to make or provide decision support for consequential decisions about individuals should take extra care to ensure the efficacy and fairness of those systems, based on evidence-based verification and validation.
- Schools and universities should include ethics, and related topics in security, privacy, and safety, as an integral part of curricula on AI, machine learning, computer science, and data science.
- The US Government should complete the development of a single, government-wide policy, consistent with international humanitarian law, on autonomous and semi-autonomous weapons.
- Agencies involved in AI issues should engage their US Government and private-sector cybersecurity colleagues for input on how to ensure that AI systems and ecosystems are secure and resilient to intelligent adversaries.
The report concludes:
AI can be a major driver of economic growth and social progress, if industry, civil society, government, and the public work together to support development of the technology, with thoughtful attention to its potential and to managing its risks…As the technology of AI continues to develop, practitioners must ensure that AI-enabled systems are governable; that they are open, transparent, and understandable; that they can work effectively with people; and that their operation will remain consistent with human values and aspirations.
Both of these are far more robust discussion and policy documents than the one from the UK government last week. It’s encouraging to see this level of engagement from the Obama administration, particularly during a febrile election campaign that hasn’t focused to any notable degree on technology, other than private email servers! There’s a call for another report on the wider impact of AI on the jobs market by the end of the year. After that, it will be up to President Clinton/Trump to make sure the discussion keeps going.