Policy researchers propose KYC for AI – could improve export controls, but also stifle innovation.

George Lawton, February 27, 2024
Summary:
Researchers propose that governments consider know-your-customer (KYC) requirements for the compute used to train AI foundation models. This might improve export controls and AI safety and could subsidize academic AI research. It also risks complicating AI innovation and facilitating regulatory capture that stifles competition.


China’s rapid pace of AI innovation is raising alarm bells in Washington and other Western capitals. The US Government began imposing export controls on high-performance chips in October 2022 and strengthened them last year. However, those controls left the door wide open when it comes to training large foundation models.

The most recent White House Executive Order on AI safety mandated that organizations notify the US government about, and share with safety testers, any foundation models trained with more than 10^26 operations, or biological models trained with more than 10^23 operations. A Chinese team exceeded the biological threshold last July with xTrimoPGLM, a 100-billion-parameter model for deciphering proteins, and several others are close behind.
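To put those thresholds in perspective, here is a minimal back-of-envelope sketch in Python. It assumes the widely used approximation that training a dense transformer costs roughly 6 × parameters × training-tokens operations; the 100-billion-parameter model and one-trillion-token training set are hypothetical illustration values, not figures from the Executive Order or from xTrimoPGLM.

```python
# Rough training-compute estimate against the Executive Order's
# reporting thresholds. Assumes the common approximation that
# training a dense transformer takes ~6 * parameters * tokens
# operations; the example sizes below are hypothetical.

GENERAL_THRESHOLD = 1e26  # ops: general-purpose foundation models
BIO_THRESHOLD = 1e23      # ops: models trained on biological sequence data

def estimated_training_ops(parameters: float, tokens: float) -> float:
    """Approximate total operations to train a dense model."""
    return 6 * parameters * tokens

# Hypothetical: a 100-billion-parameter model trained on 1 trillion tokens.
ops = estimated_training_ops(100e9, 1e12)  # = 6e23 operations

print(f"Estimated training compute: {ops:.1e} ops")
print(f"Over biological threshold (1e23)? {ops > BIO_THRESHOLD}")
print(f"Over general threshold (1e26)?    {ops > GENERAL_THRESHOLD}")
```

On this heuristic, a model far below the general-purpose threshold can still clear the much lower bar for biological models, which is why protein models like the one above draw regulatory attention.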

Policymakers are concerned these models could be used to improve military technology, create biological weapons, or launch more effective hacking campaigns. Beyond China, easy access to cloud computing could also help well-funded hacking groups develop AI-powered tools outside the safety controls of Western companies.

A group of policy researchers from fifteen organizations, led by researchers at OpenAI, the Centre for the Governance of AI, the University of Cambridge, Oxford, the Institute for Law & AI, and the Vector Institute for AI, recently published a policy recommendation that governments impose Know-Your-Customer (KYC) requirements on computing capacity used for training AI models. They argue this is more effective than controls on data and algorithms, since AI chip capacity is more detectable, excludable, and enforceable.

Another benefit of KYC for AI would be enabling a subsidization scheme that lets academic research on trustworthy AI keep pace with industry development. Companies would pay a foundation model tax to fund these trustworthy AI research initiatives.

The researchers caution that a poorly informed approach could introduce new risks around privacy, economic impact, and the centralization of power among cloud providers. They advocate that governments consider five principles to balance the potential benefits of increased export control against these risks:

  1. Exclude small-scale AI compute and non-AI compute from governance.
  2. Implement privacy-preserving practices and technologies. 
  3. Focus compute-based controls where ex ante measures are justified. 
  4. Periodically revisit controlled computing technologies.
  5. Implement all controls with substantive and procedural safeguards.

Many problems to address

Victor Botev, CTO at Iris.ai, which is developing AI-powered tools for research, agrees that policies aimed at AI hardware are important, given the physical nature of chips and servers and the consolidated supply chains. 

However, he believes a hardware-only focus falls short of ensuring ethical and safety oversight across all aspects of AI infrastructure, software as well as hardware. He argues:

Just focusing on hardware risks oversimplification. Policymakers should instead take a comprehensive, multi-layered approach spanning algorithms, data, software, and hardware. Governance must balance larger monolithic models' rapid pace of progress with specialized systems already delivering value to society.

The concern is that focusing solely on chips and servers fails to capture AI's intricate nature and rapid evolution. Better algorithms and data access can unlock new capabilities without requiring endless hardware upgrades. AI progress forms a triangle of data, algorithms, and compute, and relative deficits in one area inspire innovations in the others over time. He says:

So, while chips and servers warrant some governance now, policymakers shouldn't over-fixate there. If regulations over-constrain hardware, research will simply move to more efficient software techniques as bottlenecks shift. The key lies in proportional oversight. Technological restrictions hinder innovation, limit potential, and force detours around regulation. Controls at the level of application instead reinforce accountability without sacrificing progress for society's benefit.

Another concern is that, however well-intentioned, KYC requirements for AI could prove infeasible given the current state of practice and understanding. Botev explains:

The sheer volume and fragmented nature of low-level data involved, spanning model architectures, iteration parameters, training runs, and more, would necessitate expert analysis capacity we presently lack at scale.

Regulations on hardware could mitigate the development of clearly damaging AI applications like automated hacking tools. However, they are unlikely to prevent harmful systems entirely, since determined actors will circumvent rules. Botev believes that teaching communities to better understand the technology and building open source models for experimentation are essential to cultivating responsible development. Community self-governance and collective practices for accountability must therefore advance alongside any policy conversations.

Botev is also concerned that new KYC requirements could help build a regulatory moat around current AI leaders that stifles competition and innovation. He says:

Concerns about anti-competitive impacts always warrant consideration. While we should evaluate any policy on its own merits, we cannot ignore that moves by large industry players to encourage regulation are at least partly intended to form competitive moats. Overall, we must seek a balance between enabling cutting-edge R&D and controlling potential damages from emerging capabilities - easier said than done. Any responsible governance demands acknowledging complex trade-offs and avoiding regulatory capture without over-correcting. Progress requires nuance.

Botev also has doubts about how an AI training allocation scheme might play out in practice. Would large AI providers pay an academic/sustainability tax on AI usage that requires similar compute to be allocated to non-profit and academic research or to government AI governance efforts? While such a tax on AI companies to fund public-interest research has a certain appeal, a one-size-fits-all implementation is complex given diverse business models. He suggests more nuanced solutions warrant exploration:

For instance, some large entities reliant on advanced computational scale may shoulder additional costs comfortably, provided reasonable levels aligned with flexibility. Codifying similar opt-in commitments could incentivize broader adoption and yield crucial funding for external auditing or ethics-focused investigation, otherwise lacking transparency. However, varied use cases make universal prescriptive taxation risky without fuller analysis.

My take

New export controls will likely fall on the willing ears of lawmakers and government leaders in Western nations concerned about the potential for Chinese AI imperialism. But upping the ante on AI export controls is a thorny problem, as likely to stifle innovation in Western countries as to hand adversaries a relatively faster pace of innovation. At that point, we would all have to worry about export controls in the other direction.

It will also hinder efforts on both sides to use new AI tools to address sustainable development goals around climate and environmental security, which should be just as pressing as military security. It has taken years to sort out how to account for carbon emissions, and sustainability requirements tend to take a back seat to security concerns, which typically drive faster action.

In the meantime, businesses must consider how new KYC requirements might impact their AI development efforts. Regardless of which policy is eventually implemented, it will be tough on everyone except those selling the auditing and enforcement tools needed to meet the new requirements.
