NIST - responsible AI standards need your help. It's time to step up to the mark!

George Lawton, November 10, 2023
Summary:
NIST has called for support for a new consortium to develop responsible AI metrics. In the long run, this is likely to have a far more substantive impact on the future of AI than political gatherings that talk about AI without regulating it and pose for nice photos. Your help is needed.


In response to the recent US White House Executive Order (EO) on responsible AI, the National Institute of Standards and Technology (NIST) has put out a call for help in crafting responsible AI metrics as part of a new AI Safety Institute Consortium.

Specifically, NIST has been given the mandate to develop evaluation, red-teaming, safety, and cybersecurity guidelines; facilitate the development of consensus-based standards; and provide testing environments for the evaluation of AI systems. 

The request suggests:

These guidelines and infrastructure will be a voluntary resource to be used by the AI community for trustworthy development and responsible use of AI.

I doubt they will be “voluntary” for long, which I will get to below. More importantly, the resulting metrics, standards, and reporting processes that NIST ultimately develops are likely to be the most important artefacts to shape the future of AI for decades to come. And not just in the US but globally. 

The importance of NIST

Before diving deeper into the specific request, it's important to appreciate NIST's role in shaping standards, not just in the US, but globally. NIST was formed as part of the US Department of Commerce in 1901. This was a time when the US had at least eight different gallons and four different feet in use. Inspectors were poorly trained and often worked with out-of-date equipment. NIST scientists brought new rigor to measurement standards that unified these measures and quite a few more since. 

NIST developed the first radiation standards, facilitated the first blind airplane landing in 1931, built the first atomic clock in 1949, helped redefine the meter in terms of light waves in 1960, published the DES encryption standard in 1977, and selected the AES encryption standard in 2000. In 2018, it worked with the global scientific community to redefine four of the seven base units of the International System of Units entirely in terms of the unchanging fundamental properties of nature. These days, it's leading the charge to define post-quantum cryptography standards designed to withstand attacks from quantum computers. 

Before the recent EO, NIST had already been working on AI safety and responsibility research. It created the Trustworthy and Responsible AI Resource Center and released the AI Risk Management Framework (AI RMF) earlier this year. 

The AI Safety Institute Consortium

The AI Safety Institute Consortium promises to take these efforts to the next level. It will develop a new measurement science to enable the identification of proven, scalable, and interoperable techniques and metrics to promote the development and responsible use of safe and trustworthy AI, particularly for the most advanced AI systems, such as the most capable foundation models.

NIST has invited organizations to describe the technical expertise, products, data, and models they could contribute to enable the development and deployment of safe and trustworthy AI systems through the AI RMF. It is calling on non-profit organizations, universities, other government agencies, and technology companies to address challenges associated with the development and deployment of AI. 

Top goals include the following:

  1. Develop new guidelines, tools, methods, protocols and best practices to facilitate the evolution of industry standards for developing or deploying AI in safe, secure, and trustworthy ways.
  2. Develop guidance and benchmarks for identifying and evaluating AI capabilities, focusing on capabilities that could potentially cause harm.
  3. Develop approaches to incorporate secure-development practices for generative AI, with special considerations for dual-use foundation models, including (a) guidance related to assessing and managing the safety, security, and trustworthiness of models and related to privacy-preserving machine learning; and (b) guidance to ensure the availability of testing environments.
  4. Develop and ensure the availability of testing environments.
  5. Develop guidance, methods, skills and practices for successful red-teaming and privacy-preserving machine learning.
  6. Develop guidance and tools for authenticating digital content.
  7. Develop guidance and criteria for AI workforce skills, including risk identification and management, Test, Evaluation, Validation and Verification (TEVV), and domain-specific expertise.
  8. Explore the complexities at the intersection of society and technology, including the science of how humans make sense of and engage with AI in different contexts.
  9. Develop guidance for understanding and managing the interdependencies between and among AI actors along the lifecycle.

Industry pushback

In the wake of the recent EO, most industry insiders expressed approval that the US was doing a great job at talking about the importance of AI safety while not actually regulating it. However, within days of US President Joe Biden’s Executive Order, the US Federal Trade Commission signaled, in a comment to the Copyright Office inquiry, an intent to regulate AI issues adjacent to existing copyright law but not covered by any existing regulation. Then, NIST started forming this new consortium. 

A few industry insiders and AI leaders have started pushing back. For example, Martin Casado, a general partner at the venture firm Andreessen Horowitz (a16z), penned a letter to Joe Biden citing concerns about the adverse effects of potential AI regulations, co-signed by sixteen like-minded executives and AI researchers. Specifically, they were concerned that the EO's definitions of AI and dual-use technologies cast too wide a net over future software innovations. Worse, it could set up a gauntlet of reporting requirements that would be crushing for smaller companies.

One big concern is open source, which essentially inspired the recent boom in generative AI. Let's face it: Google researchers pioneered transformers in 2017, but since then, all eight authors of the seminal paper have left for greener pastures. If not for open source, the transformer research that powers the flood of Large Language Models (LLMs) now driving generative AI innovations might never have blossomed. 

Casado and associates observe that the EO points to the risks of open-source models while glossing over potentially greater risks in closed models that are harder to scrutinize. In open-source models, biases and vulnerabilities can be openly inspected and hardened, while proprietary models largely preclude broad oversight. 

On Twitter (X), Casado said:

Strangling open-source AI isn't a minor process hurdle—it's an intellectual lockdown. This isn't just about code; it's about keeping the keys to our digital future from being duplicated and locked in the hands of a privileged few.

We look forward to continued dialogue with the Administration to address these pressing issues. Our aim: an AI ecosystem that is open, competitive, and secure, harnessing collective expertise for the greater good.

My take

It’s easy to under-appreciate the gravity of this project for the future of AI regulation. The official NIST notice suggests that these metrics and measures will be “voluntary.” I would argue that this framing arises from scientific humility and NIST's current politically correct mandate. What comes out of NIST will be anything but “voluntary.”

Certainly, on day one, it might be. Then the US Government will require these metrics as part of government procurement. Then all the businesses that supply those government contractors will follow. Eventually, almost every government agency globally will start referencing these standards in their regulations and requirements. Look, NIST is a pacesetter for many regulatory and scientific standards, including encryption, the meter, and the kilogram, and the US does not even use the metric system, at least not commercially.

Even if we set aside the existential risks of Artificial General Intelligence (AGI) coming to life to kill us, there are still plenty of real risks to worry about. The big ones that grab our attention won't be practical, everyday threats like AI bias, inequity, civil rights violations, and pollution. They will be political red-meat topics like the use of these tools for child sexual abuse, terrorist attacks, or empowering the ‘Chinese threat,’ which we need to regulate to protect our values. This trajectory is already reshaping encryption standards, finance, and reporting practices in ways that could adversely impact privacy, autonomy, and civil rights. 

It's also important to note the various financial interests and egos at stake here. Social media companies that monetize hate, while pretending they don’t, will gain from watering down toxicity metrics. Early AI leaders will gain from overcomplicating reporting and crowding out open source to cement their current lead. Pioneers also have a strong stake in working with NIST to ensure their patented tech gets woven into upcoming metrics. 

Now, it's good that the NIST program is being run by some of the smartest people in the industry, who I honestly believe want to make AI safe for humanity while growing the industry and addressing sustainability goals. NIST scientists take great pride in improving the precision of measurements by a few decimal places. At the same time, those with a strong financial interest in a certain outcome will invest considerable resources and top talent to help nudge things their way. 

This NIST program is the front line in the future of AI regulation. This will be a technical, political, and well-financed battleground that shapes how we adopt AI, foster healthy competition, and make the world a better or worse place for decades to come. 

Well-articulated letters and political platitudes are one thing. Sending your best people into the fray to ensure this goes well for business competition, people, and the planet will be a long, arduous process, but hopefully, it will yield a more rewarding outcome than a few Twitter likes. 

And let's be clear: NIST is looking for insight from a lot of different experts. It has specifically asked for support from those with models to demonstrate safe pathways, infrastructure support, facility space, and technical expertise in one or more of the following areas:

  • Data and data documentation
  • AI Metrology
  • AI Governance
  • AI Safety
  • Trustworthy AI
  • Responsible AI
  • AI system design and development
  • AI system deployment
  • AI Red Teaming
  • Human-AI Teaming and Interaction
  • Test, Evaluation, Validation and Verification methodologies
  • Socio-technical methodologies
  • AI Fairness
  • AI Explainability and Interpretability
  • Workforce skills
  • Psychometrics
  • Economic analysis

I'll leave it at that for now, with a reminder that this is an opportunity to make a difference in one of the most important forums for shaping the future of safe, responsible, and ethical AI. 

You have until at least December 4, 2023, to express interest here or via email to NIST.
