2023 - the year in AI Policy

Chris Middleton, December 29, 2023
The challenges around ethical AI policymaking didn't get any easier...


After a decades-long AI Winter, 2023 saw the flowering of an AI Spring. That was mainly due to the success of generative systems, Large Language Models (LLMs), and the epochal one-year-old ChatGPT, based on OpenAI’s GPT (generative pre-trained transformer) engine. GPT-5 promises an almost AGI-like (artificial general intelligence) experience across multiple media – or so say its marketers.

Plus, rivals like Meta’s LLaMA, Anthropic’s Claude, Google’s Bard and Gemini, and a host of image, video, and audio tools grabbed popular attention too. Inevitably, many commentators reached for superlatives, comparing 2023 with the dotcom and mobility booms of the 1990s, which saw a similar alignment between enterprise and consumer interests.

So, is the hype justified? In many ways, yes. In general terms, AI promises giant leaps forward in drug discovery, preventative healthcare, materials science, personalization, predictive maintenance, sustainability, and the environment; and generative AI (gen AI) is already changing how we search for and interact with information. 

However, the AI Spring also presented big challenges in terms of policy, regulation, ethics, data protection, intellectual property, safety, and security. So, here is a selection of some of the red flags raised in this extraordinary year, during which the pace of change has often been less bewildering than the speed of AI’s tactical adoption.

UK ceding control to machines, say reports. Human leadership essential, says another

ChatGPT maker OpenAI stole a march on competitors this year, by making cloud-based AI accessible, free, and almost too good to be true. This dealer-like behaviour created a dependency culture among its millions of freemium users (and implied that paying writers, academics, artists and experts was a more urgent problem for AI to solve than pandemics, war, rogue asteroids, and climate change).

In our May report on a trio of research studies, we wrote:

There is obviously something wrong with a slot machine that pays out jackpots with nearly every pull of the lever. When is life ever that simple? As every gambler knows, such machines are always positioned upfront in casinos, to lure people in and make them part with their cash. Remember: the house always wins.

• The evidence supports our prediction: by October, OpenAI’s revenues had jumped from an estimated $200 million in FY2022 to $1.3 billion – a 550% increase in just 10 months, largely from users abandoning the free version for a paid subscription. That included an estimated 80% of Fortune 500 companies – evidence of tactical ‘me too’ behaviour, rather than well-considered strategies.

AI will drive 300 years of change by 2026 as blue- and white-collar workers vanish, says Avanade

Two themes dominated our critical thinking in 2023. One, users credit Gen-AI for its brilliance rather than the (uncredited) human expertise that was scraped to train it. And two, they already trust it and, in many cases, are ceding control of operations to it. A technology most had never heard of until this year!

The push to adopt AI at scale – a predicted 300 years of change over the next three years – was revealed in our October report, in which an Avanade survey found:

Sixty-three percent [of 3,000 enterprises surveyed] said their employees will need a completely new skill set, as enterprises switch to the ‘AI-first operating model’ predicted by a staggering 92% of respondents. A transformation they believe will take place in 2024.

But we warned: 

Less than half (48%) of organizations have put in place policies for responsible AI adoption – down from 52% in March.

AI is causing a new wave of shadow IT, warns HCLTech’s Ashish K Gupta

Our June story found HCLTech’s Ashish K Gupta explaining that a lot of Gen-AI adoption was individuals playing with free tools within the enterprise – as opposed to ‘enterprise adoption’.

But he noted:

It’s in the nature of technology that it always starts at the edge of an organization, then moves towards the center and the core. But with things like ChatGPT and Bard coming into the picture, the need for the technology function to start getting involved with AI adoption is only increasing.

Great expectations from AI, but a bleak house for cybersecurity

While AI has become a more strategic consideration for enterprises – in the sense of ‘We’ve bought the hammer, but do we have a nail?’ – the shadow-IT nature of some uptake remains a cybersecurity challenge, as we revealed in several reports this year. For example, many users are pasting source code and privileged content into cloud-based tools – crown jewels that may become free training data for the supplier.

Our July round-up of security reports also noted the trend of AI threats being fought with… more AI:

According to Censornet, organizations – especially SMEs – will be fighting AI with AI, with a claimed 84% of decision-makers planning to invest in AI-automated security solutions in the next two years, and 48% this year.

In other words, the rush to adopt simple solutions is generating new waves of complexity.

If you think OpenAI’s sacking of Sam Altman was just about board-level disagreement, think again

In November, the ChatGPT maker’s internal problems revealed a mismatch between its aim – to make society better – and its real-world behavior. 

To recap: CEO Sam Altman was deemed so untrustworthy that he was sacked by his own board, whose sole remit was the safe, responsible development of artificial general intelligence (AGI). He and co-founder Greg Brockman then moved to Microsoft to head up a new research division – apparently created to keep them onside. But after most OpenAI staff threatened to follow them out the door, Altman was reinstated as CEO and the board was fired instead – including all the women. Holy ethical black hole!

Thus, Otherworldly Tech Bro returned as conquering hero, though precisely what he had conquered remains a mystery (being held to account, perhaps?). However, the grubby soap opera revealed fault lines in the AI sector itself. These were between an extremist faction – evangelists who want to accelerate AI innovation at any cost – and the rest of humanity (whom the cultists claim to be saving, while treating them with contempt). Holy stock options!

We wrote:

Devotees see the likes of Altman almost as messianic figures, and describe anyone who wants to moderate progress, or consider the ethical, social, economic, and cultural dimensions, as ‘decels’: decelerators, a derogatory spin on ‘incels’. This self-styled ‘e/acc’ (effective accelerationism) faction finds its natural home on X, the platform owned by Elon Musk, a man fond of decrying the ‘woke mind virus’ (anyone critical of his unfettered power and politics).

King Canute, ahoy? The House of Lords debates AI, as OpenAI explodes then reforms

Ancient and venerable though it may be, the UK’s House of Lords did sterling work debating LLMs in the weeks after the AI Safety Summit (see below). Throughout November, its Communications and Digital (Select) Committee interviewed vendors, regulators, academics, and representatives of different sectors – some of whom were angry at AI companies’ scraping of copyrighted data to train their LLMs (see below).

OpenAI missed an evidence session at the Lords, because it was imploding at the time. But among the expert witnesses who did attend was Jonas Andrulis, founder and CEO of German AI start-up Aleph Alpha. Speaking remotely in faltering English from a train going through tunnels (holy meta-irony!), he said the quiet part out loud:

How we arrive at LLMs has nothing to do with truth. It is just to learn patterns of language and complete them, to complete writing according to learned patterns. And this is also the reason why these models and their outputs are not consistent. They can contradict themselves, because they're not built as truth machines.

How will generative AI impact legal services? It’s all about responsibility, say lawyers

Despite gen AIs not being truth machines, many users still expect facts at the touch of a button. In 2023, that included two Manhattan lawyers who used ChatGPT to search for legal precedents in a lawsuit against an airline, and were fined for presenting fabricated citations produced by the hallucinating system.

The presiding judge noted the pair had:

Abandoned their responsibilities when they submitted non-existent judicial opinions, with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.

The lawyers responded that they had acted in good faith, but had simply “[failed] to believe that a piece of technology could be making up cases”. 


ChatGPT has the potential to spread misinformation ‘at unprecedented scale’

Sticking with ‘truth machines’ for a moment, in February Derek du Preez warned of Gen-AI’s potential for spreading conspiracy theories and deliberate disinformation, as well as hallucinations that might be reported as facts. In the wrong hands, this ability could be weaponized to support bad actors’ agendas and destabilize democratic processes.

Citing a NewsGuard report, Derek wrote:

Regulators and lawyers the world over seem confused at how to wrangle the AI tools into a box that enables them to be used for public good, whilst limiting the damage done by bad actors. But at the core of the NewsGuard report is the way information is presented so confidently by tools such as ChatGPT, even if that information is inaccurate.


Generative AI - authors and artists declare war on AI vendors worldwide

Few issues in AI are as divisive as copyright, with allegations flying that many LLMs and gen AIs were trained on data that included copyrighted content. So, the key questions are: were vendors permitted by fair use conventions? If so, under which legal regime? And should copyright holders be cited and paid?

In September, we reported on class actions in the US and elsewhere seeking to establish whether fair use of copyrighted data – which typically covers academic research, not commercial exploitation – applies to AI. Vendors’ defence is that a system may be trained on unlicensed material, but does not necessarily generate a recognizable copy of it.

Even so, author Celine Kiernan spoke for many creative professionals:

I wrote these books based on a life of experience, the death of my dad, the many wars that gnaw our world, the history of my country and family. They came directly from my heart. What are they to a machine but words in sequence? […] What has the world let itself in for? In the creative fields, AI can never be anything but a warped photograph of a stranger.

Also see: Peer review - can the UK Parliament's House of Lords save copyright from AI giants? (Here, vendors claimed they had no idea copyrighted content was scraped – a bad-faith argument when the Books3 database consists entirely of pirated texts!)

LLMs and generative AI – give us more money, regulators tell legislators

The EU’s AI Act, the US Blueprint for an AI Bill of Rights, and other moves to balance innovation with ethical oversight put regulators in the spotlight this year – including some that would not normally be involved with the tech sector.

One was Anna Boaden, Director of Policy and Human Rights at the Equality and Human Rights Commission (EHRC), which might be obliged to step in if AI is found to be automating historic societal biases (see below). But can these organizations do this new job? She said:

Yes, if we are structured and resourced in the right way. Making sure that when the technology is developed, equalities and human rights principles are embedded so that we can focus on outcomes. This is a massive area, with huge human rights and inequalities risks, but we are small in size. So, there is a lot of will, but not a huge amount of resource.

In short: Give us some money and invest in our skills! Fair play. 

• Meanwhile, the suggestion that AI-generated content should be flagged whenever it is published online emerged from legislative discussions. In principle, we think it is a good one, though it is not a simple solution: a human author might have used an AI prompt or contribution at some point.

AI automates systemic racism, warns IBM strategist - why enterprises need to act now

The risk of AI automating historic societal biases was revealed in an important new book, Hidden in White Sight, by black American computer scientist Calvin D Lawrence, former Chief Architect for Cognitive Systems at IBM, now its CTO of Responsible and Trustworthy AI.

He wrote:

It’s a painful reality that AI doesn’t fail everyone equally. When tech goes wrong, it often goes terribly for People of Color. That’s not to indicate that the objects of AI’s failure are somehow always pre-determined, but even when the outcome is unintended and unwelcome (even by the perpetrator), we still feel the effects and the impact.

AI Safety Summit - when Rishi met the world's richest man and it was 'squeeeeeeeeeeeeee!!!'

The UK’s AI Safety Summit was in our sights throughout October. On the one hand, Prime Minister Rishi Sunak was commended for putting AI safety at the forefront of global policymaking, and for seeking to host a high-level debate. But on the other, the focus on frontier models ignored the many challenges listed above. 

Plus, the event gave allies such as the US and Europe a platform to explain – to a press not invited to Bletchley Park – how it was really they who were leading the debate on regulation and safety (via the EU’s AI Act, and more), not the UK.

A score-draw, then, with some British own-goals. One of which was Sunak fawning over the world’s richest and least accountable man, Elon Musk. Stuart Lauchlan wrote:

That ‘fanboy for tech’ element isn’t new. Tony Blair had it very badly when he was PM. It’s the driving need that burns in national leaders to deliver ‘their legacy’. What will he or she be remembered for? And tech is always a promising option - in theory.
