AI lessons from financial market surveillance
Innovations in financial market surveillance are an interesting story of their own. They also have some important lessons for the future of AI oversight.

Efforts to regulate AI safety are still a work in progress. Governments are mostly leaving oversight to the AI companies themselves, since the risks are poorly understood and they don't want to slow innovation. Lessons from other kinds of post-market surveillance could inform the future of these efforts.