Earlier this year, I attended the Third Annual AI Summit at the Potomac Officers Club, a conference focused on AI for Defense and Intelligence. At the event, Ylli Bajraktari, Executive Director of the National Security Commission on Artificial Intelligence (NSCAI), covered the highlights of the commission's magnum opus, the Final Report of the National Security Commission on Artificial Intelligence.
A massive document of some 750 pages, it carried an uncomfortable message: America is not prepared to defend itself or to compete in the AI era. I reviewed the contents of the report here.
NSCAI recently released a few white papers, one also written by Dr. Bajraktari. It's a far more digestible report, "The Role of AI Technology in Pandemic Response and Preparedness: Recommended Investments and Initiatives," at a scant thirty pages. Because the white paper contains some interesting and potentially novel ideas, I'll highlight and summarize a few of them.
The introduction was pretty tepid, introducing no new concepts:
While it's unclear if AI will significantly alter the course of COVID-19, these initial investigations illustrate how AI could be used to enhance pandemic preparedness and response in the future. With suitable investments over the next decade, AI could revolutionize how scientists understand data, carry out research, and design new pharmaceuticals; how doctors diagnose certain diseases and interact with their patients; and how public health officials manage information and make decisions.
We recognize that AI is not a panacea, and some technical promises are more theoretical than practical today. Algorithms can perform differently in the field than in the lab. Bias can creep into the underlying data and model design. Many applications remain brittle with little transferability or efficacy outside their narrow and discrete use cases. It takes education and training to teach doctors and scientists to use new tools and integrate them into their workflow. Care must be taken to ensure that applied systems and tools are safe, trustworthy, unbiased, and understood by users.
Part one's topics are largely aspirational or unquantified in scope, while part two focuses on funding initiatives.
Analysis of part one
In part one, "Promising Uses of AI for Pandemic Response - Now and in the Future," the report reads:
Situation Awareness and Disease Surveillance: AI is already used to better understand disease spread. For example, Boston-based AI company DataRobot built an AI model to predict COVID-19 spread down to the county level, and the Canadian firm BlueDot detected the outbreak in Wuhan long before it was taken seriously.
Today's AI capabilities can already be used to sift through data to identify vulnerabilities and zoonotic spillovers (pathogen transmission from a vertebrate animal to a human; preliminary data suggest that bats were the most probable initial source of the 2019 novel coronavirus (2019-nCoV) outbreak, which began in December 2019 in Wuhan, China). AI will advance bioforensics by enabling inference and predictive models, and will probably play an increasing role in identifying novel sensing materials as well.
I see this as mainly aspirational. Being able to diagnose a problem is a good first step, but there have already been many failures. The healthcare system is driven by profit and incentives, and I doubt this will have much impact.
My opinion on vaccines, therapeutics, and medical devices: it has already been demonstrated that AI was instrumental in the rapid development of COVID-19 vaccines. However, many of those vaccines had been in development for years, and pharmaceutical companies redirected their research to this one topic, so the "miracle" of COVID-19 vaccines was not quite that.
However, the paper's claim that AI will facilitate repurposing existing therapies for novel applications through simulation is a promising idea, and one already in practice. Using AI to drive supply chains and advanced manufacturing will need much more invention and resources; while AI may add some capabilities to complex supply chains, the discipline is far more complicated than that. I wrote about that here.
Regarding "On-going healthcare - protecting the healthy and treating the sick," I reviewed an AI model, widely used in hospitals, that has severe ethical and functional problems. Healthcare is ripe for a generational leap with AI, but problems await it.
Analysis of part two
The report's "Recommended AI-Focused Investments and Initiatives" section provides some context for part one, in particular the need for government funding. From "Leveraging AI to Advance the Science":
a. Double federal funding for non-defense AI R&D to roughly $2 billion to leverage AI for future pandemic response, including novel machine-learning directions and testing, evaluation, verification, and validation (TEVV): NSF $400 million, DOE $300 million, NIH $150 million, and NIST $50 million.
b. Create an NSF-led AI Institute for Health and Biomedicine, with six $4 million grants to educational institutions to spur foundational research.
c. Launch a prize challenge administered by the NSF to advance next-generation data integration and modeling. Advances in AI reasoning could help expand SIR modeling (an SIR model is an epidemiological model that computes the theoretical number of people infected with a contagious illness in a closed population over time, dividing the population into susceptible, infected, and recovered compartments).
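To make the SIR reference concrete, here is a minimal sketch of the model in Python. The parameter values are hypothetical, chosen only for illustration; this is my own sketch, not code from the white paper:

```python
# Minimal SIR epidemic model: a closed population of n people is split into
# Susceptible, Infected, and Recovered compartments, evolved one day at a
# time with a simple Euler step. beta is the transmission rate and gamma
# the recovery rate (both illustrative values).
def simulate_sir(n=1000, i0=1, beta=0.3, gamma=0.1, days=160):
    s, i, r = float(n - i0), float(i0), 0.0
    history = [(s, i, r)]
    for _ in range(days):
        new_infections = beta * s * i / n   # S -> I flow per day
        new_recoveries = gamma * i          # I -> R flow per day
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

history = simulate_sir()
peak_infected = max(i for _, i, _ in history)
```

With these illustrative parameters (basic reproduction number R0 = beta/gamma = 3), the infected count rises to a peak and then burns out as the susceptible pool depletes; the three compartments always sum to the total population, which is a useful sanity check.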
This seems a little strange, as the US Congress already passed the United States Innovation and Competition Act of 2021, formerly known as the Endless Frontier Act, authorizing $110 billion for basic and advanced technology research over five years and a complete restructuring of the NSF. However, politics got in the way: although the bill passed on a bipartisan basis, its funding was watered down to about half. The bill was meant to demonstrate the US commitment to competing with China in AI, but the message is now mixed.
The report continues:
d. Continue DARPA's work to advance the architecture for data sharing and collaboration for drug discovery and mirror efforts at the NIH.
This is a nice idea, but the most powerful supercomputers in the National Labs are air-gapped, connected to no outside networks, so real-time data sharing is unlikely.
e. Make the COVID-19 high-performance computer consortium permanent.
As mentioned above, there are national security issues. Many of the largest computers are involved in nuclear weapons research.
From the section, "Build an AI-Enabled Foundation for Smart Response":
a. Create a federal Pandemic Response dataset.
This seems like a galactic effort unless it's tightly focused. On the commercial side, C3.ai built a COVID-19 data lake that is open to the research community.
b. Invest in digital modernization of state and local health infrastructure for effective disease surveillance.
Interesting idea, but the states aren't of one mind. Would states that threaten to arrest teachers for wearing masks cooperate?
c. Enhance global cooperation on smart disease surveillance and international health norms and standards.
I don't see how this is primarily an AI-driven policy.
d. Invest in AI-driven capabilities to maintain military preparedness.
The report gives a lot of detail around the simple premise of keeping the military from getting sick, using the methods described above.
I actually think these are all admirable ideas. But any action by Congress, just like the United States Innovation and Competition Act of 2021, seems doomed to partisan bickering.