AI lessons from financial market surveillance
Innovations in financial market surveillance are an interesting story of their own. They also have some important lessons for the future of AI oversight.
George Lawton is a technology journalist who has been covering the technology industry for the last three decades. He is currently focused on digital transformation, enterprise automation, digital twins, privacy, and sustainable development. He has written over 3,000 stories about computers, communications, knowledge management, business, health, and other areas. In the early 1990s, he helped build Biosphere 2, sailed a Chinese junk to Antarctica, and herded cattle on a 200,000-acre ranch in Australia.
Researchers and vendors are developing a variety of complementary approaches for measuring hallucinations in AI. TruEra is baking this into the AI development process.
Hallucination research pioneer Simon Hughes passed away shortly after I spoke to him about his cutting-edge hallucination research. What he had to say is important stuff and diginomica is running his interview in tribute to him and his work.
Siemens and Microsoft's collaboration will embed generative AI into copilots for manufacturing, infrastructure, transportation, and healthcare. In the long run, this suggests how digital twins might compensate for AI hallucinations to improve trust.
Galileo Lab’s new metrics for detecting hallucinations promise to help improve generative AI accuracy.
Efforts to regulate AI safety are still a work in progress. Governments are mostly leaving it up to AI companies, since the risks are poorly understood and they don't want to slow down innovation. Lessons from other kinds of post-market surveillance could inform the future of these efforts.
Earth AI pioneers a more efficient experimental process for discovering minerals required for Net Zero goals. Their promising results highlight the importance of combining data science, domain expertise, and systematic thinking to solve new problems relevant to all enterprises.
The AI model landscape is growing rapidly. Martian’s new model router architecture promises to select the best model for each task.
Qmerit, one of the largest home electrification providers in the US, is rolling out a new AI tool to automate the installation process. This could have a significant impact on improving home electrification efforts.
NIST has called for support for a new consortium to develop responsible AI metrics. In the long run, this is likely to have a far more substantive impact on the future of AI than political gatherings that talk about not regulating AI and take nice photos. Your help is needed.
Modern large language models have a transparency problem, confounding efforts to reduce hallucinations and other issues. Watchful is introducing a new set of open source tools and metrics to shine a light on these problems and reduce them.
AI pioneers Yoshua Bengio and Geoffrey Hinton, along with 23 other academic researchers, have called for ‘urgent governance measures’ to protect society against the danger of ‘Autonomous AI.’ They argue national institutions need strong technical expertise and the authority to act quickly.
A switched-on look at the scope of challenges and opportunities in electric grid transformation efforts.