OpenAI's meltdown prompts further questions around the future of AI safety surveillance
Summary:
Efforts to regulate AI safety are still a work in progress. Governments are mostly leaving it up to AI companies, since the risks are poorly understood and they don't want to slow down innovation. Lessons from post-market surveillance in other industries could inform the future of these efforts.
With all eyes on the OpenAI kerfuffle, it seems important to consider what it means for the future of AI safety. Although most details remain murky, one prominent theory about what happened inside OpenAI suggests a rift between for-profit business interests and AI safety concerns.
Some leading experts have postulated that the rise of agentic, or autonomous, AI poses even bigger risks down the road. There is also speculation that the current OpenAI rift may have emerged from the company's new service that allows anyone to create their own bots.
Thus far, we have only encountered a few of the safety issues associated with large language models, such as hallucinations, copyright infringement, bias, and toxicity. Lessons from post-market surveillance in other domains, including finance, healthcare, and fire safety, could inform future efforts to identify, track, and report on these new risks.
The US White House Executive Order on AI did galvanize interest in ensuring responsible and safe AI. Indeed, shortly after the announcement, the US Federal Trade Commission informed the Copyright Office that it planned to weigh in on copyright-adjacent issues outside the existing purview of copyright law, and NIST called for support in developing responsible AI safety metrics.
But one thing noticeably absent from the Executive Order was operational monitoring. O'Reilly Media CEO Tim O'Reilly wrote:
Operational Metrics. Like other internet-available services, AI models are not static artifacts but dynamic systems that interact with their users. AI companies deploying these models manage and control them by measuring and responding to various factors, such as permitted, restricted, and forbidden uses; restricted and forbidden users; methods by which its policies are enforced; detection of machine-generated content, prompt-injection, and other cyber-security risks; usage by geography, and if measured, by demographics and psychographics; new risks and vulnerabilities identified during operation that go beyond those detected in the training phase; and much more. These should not be a random grab-bag of measures thought up by outside regulators or advocates, but disclosures of the actual measurements and methods that the companies use to manage their AI systems.
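To make that list a little more concrete, here is a minimal sketch of what one slice of such operational telemetry might look like if a deployer logged it per request and aggregated it for disclosure. The field names, policy categories, and sample values are illustrative assumptions for this article, not any vendor's actual schema.

```python
# Illustrative sketch only: the fields and categories below are assumptions
# about what per-request operational telemetry could include, not a real
# vendor schema or a requirement from the Executive Order.
from collections import Counter
from dataclasses import dataclass


@dataclass
class UsageEvent:
    timestamp: str            # ISO-8601 time of the request
    policy_category: str      # "permitted", "restricted", or "forbidden" use
    blocked: bool             # whether the request was refused or filtered
    prompt_injection: bool    # whether an injection attempt was flagged
    geography: str            # coarse region code, e.g. "EU", "US"
    model_version: str        # which deployed model served the request


def summarize(events: list[UsageEvent]) -> dict:
    """Aggregate raw events into the kind of counts a deployer could disclose."""
    return {
        "requests": len(events),
        "by_policy_category": dict(Counter(e.policy_category for e in events)),
        "blocked": sum(e.blocked for e in events),
        "prompt_injection_flags": sum(e.prompt_injection for e in events),
        "by_geography": dict(Counter(e.geography for e in events)),
    }


if __name__ == "__main__":
    sample = [
        UsageEvent("2023-11-20T10:00:00Z", "permitted", False, False, "US", "v1.2"),
        UsageEvent("2023-11-20T10:00:05Z", "forbidden", True, True, "EU", "v1.2"),
    ]
    print(summarize(sample))
```

The point is less the specific fields than the habit: the disclosures O'Reilly calls for would come from aggregating measurements the deployer is already collecting in order to run the service.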
Post-market surveillance
That post inspired a closer look at how post-market surveillance has evolved across other industries. In financial markets, surveillance tools detect abusive practices like collusion and insider trading. In the healthcare industry, post-market surveillance identifies new risks after approval, as drugs and medical devices are used for longer periods and in a wider population than in clinical trials.
Other industries self-regulate. For example, independent bodies like Underwriters Laboratories certify new products for fire safety and track incidents after the fact, helping to recall products later discovered to be unsafe.
All the major AI developers are currently promising to ensure safety and self-regulate. Yet British Prime Minister Rishi Sunak has suggested we should not trust AI firms to “mark their own homework,” and many of them are already dialing back AI safety research. Meta, for example, recently disbanded its responsible AI team.
Abhishek Gupta, Founder and Principal Researcher at the Montreal AI Ethics Institute, says that enterprises need to consider AI safety metrics that go beyond traditional software performance indicators. He explains:
Current metrics should encompass a comprehensive range of factors including, but not limited to, ethical usage, user demographics, cybersecurity threats, and real-time vulnerabilities. These metrics are pivotal in creating a feedback loop for continuous improvement and adaptation of AI systems. They should be aligned with ethical AI principles, ensuring fairness, transparency, and accountability. The emerging trend is to integrate AI ethics into operational metrics, thereby reflecting the multi-dimensional impact of AI systems in real-world applications.
This will require a shift towards more holistic and dynamic measures. Traditional metrics primarily focused on accuracy and efficiency. These need to be augmented with parameters that evaluate ethical implications, societal impact, and real-time adaptability of AI systems. For example, new metrics will be required to effectively measure bias, fairness, and transparency in AI systems and their impact on different demographic groups.
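As a small illustration of what one such metric could look like in code, the sketch below computes a demographic parity gap, the difference in favourable-outcome rates between demographic groups, over a batch of model decisions. It is an assumed, deliberately simple example; real fairness auditing typically combines several complementary metrics.

```python
# Minimal sketch, not a full fairness audit: demographic parity difference is
# just one of several complementary metrics (e.g. equalized odds, calibration).
from collections import defaultdict


def demographic_parity_gap(decisions: list[tuple[str, int]]) -> float:
    """decisions: (group, outcome) pairs, where outcome is 1 for a favourable
    decision (e.g. loan approved) and 0 otherwise. Returns the largest gap in
    favourable-outcome rates between any two groups."""
    totals: dict[str, int] = defaultdict(int)
    favourable: dict[str, int] = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favourable[group] += outcome
    rates = [favourable[g] / totals[g] for g in totals]
    return max(rates) - min(rates)


if __name__ == "__main__":
    batch = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
    print(f"parity gap: {demographic_parity_gap(batch):.2f}")  # 0.33
```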
Emerging safety risks
But it’s also important to think about developing metrics to identify emerging safety risks we have not previously considered. Gupta explains:
Additionally, metrics should be adaptable to the evolving nature of AI, capable of capturing new risks and challenges as they emerge. A key requirement here is the development of standardized metrics that can be universally applied, facilitating benchmarking and regulatory compliance.
Tool developers will need to place more emphasis on creating integrated, user-friendly platforms that can manage the complete lifecycle of an AI system. These tools should not only monitor and audit AI systems but also provide actionable insights for mitigating identified risks.
Gupta also sees a growing need for advanced tools that can automate the detection of biases, ethical lapses, and security vulnerabilities in real time. Better integration with AI explainability tools could provide clear insights into AI decision-making processes for both technical and non-technical stakeholders. All of this will require investment in research and development focused on the intersection of AI technology, cybersecurity, and ethics.
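One way to approximate the real-time detection Gupta describes is a drift monitor that compares a live window of model outputs against a reference sample and raises an alert when the divergence crosses a threshold. The sketch below uses the population stability index; the category labels, window, and 0.2 threshold are illustrative assumptions rather than an established standard.

```python
# Illustrative drift monitor: the PSI threshold of 0.2 is a common rule of
# thumb, not a mandated standard; the output categories are assumed.
import math
from collections import Counter


def population_stability_index(reference: list[str], live: list[str]) -> float:
    """Compare the category mix of live outputs against a reference sample."""
    categories = set(reference) | set(live)
    ref_counts, live_counts = Counter(reference), Counter(live)
    psi = 0.0
    for cat in categories:
        # Small floor avoids division by zero for unseen categories.
        p_ref = max(ref_counts[cat] / len(reference), 1e-6)
        p_live = max(live_counts[cat] / len(live), 1e-6)
        psi += (p_live - p_ref) * math.log(p_live / p_ref)
    return psi


def check_drift(reference: list[str], live: list[str], threshold: float = 0.2) -> bool:
    """Return True when the live output mix has drifted enough to warrant review."""
    return population_stability_index(reference, live) > threshold


if __name__ == "__main__":
    ref = ["approve"] * 80 + ["deny"] * 20      # historical decision mix
    window = ["approve"] * 50 + ["deny"] * 50   # latest live window
    print("drift alert:", check_drift(ref, window))  # True
```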
Nor is this just a matter of new tools. Gupta stresses the importance of continuous education and interdisciplinary collaboration, which will require technical training alongside education in ethics, law, and the social sciences. Enterprises will also need to foster collaboration across disciplines, including AI developers, ethicists, legal experts, and end-users, and maintain an ongoing dialogue with regulatory bodies to ensure that best practices stay in sync with evolving legal and ethical standards.
Gupta believes broader industry-wide efforts to improve AI safety monitoring and reporting will require work across three main areas.
- A Unified Framework for Standards and Guidelines should encompass standardized safety metrics, responsible AI use guidelines, and reporting and accountability protocols. This framework should address technical aspects of AI safety and ethical, legal, and societal implications. A unified set of standards would facilitate consistency in AI safety monitoring and reporting across different sectors and geographical regions, ensuring that AI systems are developed and deployed under a common set of ethical and safety standards.
- Collaboration Across Stakeholders needs to improve among AI developers, researchers, ethicists, legal experts, end-users, and policymakers. Industry-wide efforts should include establishing forums, workshops, and collaborative platforms where these stakeholders can share insights, best practices, and challenges. Public-private partnerships can play a crucial role here, leveraging the strengths and perspectives of different sectors to foster innovation while ensuring ethical and safe AI development.
- Emphasizing Continuous Learning and Adaptation involves staying abreast of technological advancements and also adapting to emerging ethical and societal concerns. Continuous learning programs and regular updates to safety protocols and standards are necessary. Additionally, there should be a focus on developing adaptive AI systems that can respond to changing environments and requirements, incorporating feedback mechanisms for ongoing improvement. This adaptability also extends to regulatory frameworks, which should be flexible enough to accommodate new developments in AI while maintaining rigorous safety and ethical standards.
Lessons from other industries
Gupta believes that AI safety surveillance could benefit from some of the best practices developed in other industries. The medical field, particularly in areas like pharmacovigilance, has established robust mechanisms for monitoring and reporting adverse events, ensuring patient safety, and maintaining compliance with regulatory standards. Gupta says:
These systems are adept at handling vast datasets and identifying patterns that may indicate risks, a capability directly applicable to AI surveillance. In AI, similar mechanisms can be developed to monitor for unintended consequences and biases, especially when AI is deployed in sensitive areas like healthcare or criminal justice. The medical field's emphasis on ethical considerations, informed consent, and patient confidentiality can also inform AI policies, particularly around data privacy and ethical use of AI.
Similarly, the financial industry has developed various surveillance mechanisms for detecting fraud, insider trading, and market manipulation. Here, Gupta explains:
These mechanisms employ complex algorithms and real-time analysis to monitor transactions and detect anomalies, a method that could be adapted for AI systems to identify unusual patterns, potential biases, or ethical breaches. The financial sector's experience in managing high-volume, high-velocity data can guide the development of scalable and efficient AI monitoring tools. Additionally, the regulatory compliance frameworks in finance, including mandatory reporting and transparent auditing processes, can serve as a model for establishing similar standards in AI governance.
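The underlying pattern from financial surveillance, streaming anomaly detection over a monitored metric, translates fairly directly to AI operations. The simplified sketch below flags observations that fall far outside a rolling baseline, for example a sudden spike in a model's hourly refusal rate; the window size and z-score threshold are assumptions for illustration, far simpler than a production market-surveillance system.

```python
# Simplified analogue of financial-style surveillance: flag observations that
# deviate sharply from a recent rolling baseline. Window and threshold values
# are illustrative assumptions, not tuned parameters.
import statistics
from collections import deque


def rolling_zscore_alerts(values: list[float], window: int = 30,
                          threshold: float = 3.0) -> list[int]:
    """Return indices of observations more than `threshold` standard deviations
    away from the mean of the preceding `window` observations."""
    history: deque = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(values):
        if len(history) >= 2:
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history)
            if stdev > 0 and abs(value - mean) / stdev > threshold:
                alerts.append(i)
        history.append(value)
    return alerts


if __name__ == "__main__":
    # e.g. hourly refusal rates for a deployed model, with one sudden spike
    rates = [0.05, 0.06, 0.05, 0.04, 0.05, 0.06, 0.05, 0.30, 0.05]
    print("anomalous hours:", rolling_zscore_alerts(rates, window=5))  # [7]
```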
Integrating surveillance mechanisms across sectors like medicine and finance can provide a more holistic approach to AI safety monitoring. Gupta observes:
By benchmarking against these established systems, AI monitoring can adopt best practices in risk assessment, data management, and regulatory compliance. This cross-sectoral integration also facilitates the sharing of knowledge and technologies, potentially leading to innovative approaches in AI surveillance.
My take
Silicon Valley innovators may be financially motivated to move fast and break things in innovative new ways. However, established enterprises need to consider operational AI tracking to mitigate unknown risks to their brand and bottom line. Just last week, UnitedHealth was hit with a massive class action lawsuit alleging that it was imposing AI-powered decision-making on doctors that was wrong 90% of the time. It apparently rushed the new system to market after its $2.5 billion acquisition of NaviHealth in 2020.
It should be noted that UnitedHealth provides health insurance for 52.9 million Americans, and the lawsuit is asking for $5 million for each party. Depending on how many ultimately sign on and how the case fares, the insurance giant could face liabilities running into the tens of billions of dollars. Perhaps the most damning allegation is that the insurer did not just roll the system out and hope for the best: it allegedly threatened to terminate doctors and nurses who disagreed.
Although the EU is dialing back its proposed AI regulations, it still has GDPR guidelines on algorithmic decision-making for enterprises to consider. US companies have no such ‘luxury’ to inform their AI safety efforts. But those that proactively develop their own operational safety metrics will likely avoid some of the fallout of bad ideas running wild.