King Canute, ahoy? The House of Lords debates AI, as OpenAI explodes and then reforms

Chris Middleton, November 23, 2023
Summary:
OpenAI failed to appear before the House of Lords this week – but that was hardly a surprise in current circumstances. However, its internal struggles mirror those of the planet when it comes to AI technology.

(Image of a finger pointing to a digital scale, by herbinisaac from Pixabay)

While the IT sector and its investors have struggled to follow the ins and outs of the OpenAI saga this week, others – including governments and regulators – are taking a longer-term and more responsible view. We should all be thankful for that. 

But first, here’s a brief recap for anyone who has slept through the past few days of upheavals, because everything that has happened is significant for AI ethics and governance moving forward.

First, OpenAI’s board appeared to carry out its founding duty – ensuring the safe development of responsible AGI – by firing CEO Sam Altman, who it strongly implied was untrustworthy.

This, plus the departure of co-founder Greg Brockman in sympathy, triggered a public meltdown at the start-up: a move to Microsoft, the creation of a new AI research division there under Altman, Brockman, and Satya Nadella, and the threatened resignation of OpenAI’s workforce to join it.

Then, after what can best be described as four days’ walled-gardening leave – while an industry-wide feeding frenzy over talent ensued – Altman and Brockman returned to OpenAI, hailed as conquering heroes by anyone with stock options, venture capital, or messianic delusions. 

The board was then reconstituted minus any women*, and with the baffling addition of Larry Summers, the former President of Harvard, who once suggested that women were less suited to successful STEM careers than men. Holy responsible AGI! Plus, Bret Taylor, former co-CEO of Salesforce, a far more sensible appointment.

Note how swiftly diversity and the board’s founding governance remit were swept aside by the profit motive and what can politely be described as ideological zeal by OpenAI’s supporters. Ethicists worldwide might be engaged in academic thought experiments around AI, such as how to deal with Roko’s Basilisk, but – sadly – far less consideration is given to AI governance being subordinated to big bucks and ego.

Cue that most predictable, cyclical thing in IT: mass obeisance to the industry’s latest maverick, rock-star CEO, who (at the time of writing) is scrabbling to be reinstated on the board. Thus, OpenAI is no longer primarily a non-profit developing safe AGI; instead, it has morphed into one man’s cultish, $80 billion plaything, lionized by investors and anyone who hasn’t seen daylight this century.

On that point, a piece in the Washington Post this week noted Altman’s “self-serving approach”, which previously saw him kicked out of Y Combinator by his own mentor. Spooky: an AI might perceive that as a recurring behaviour pattern. Since then, Reuters has published intriguing revelations about Q* (‘Q-Star’), an apparent giant leap forward in technology at the company – one that doesn’t appear to have been mentioned at the UK’s AI Safety Summit on frontier models. (Who could have predicted that a frontier company would withhold its innovations? diginomica.)

Meanwhile, Bloomberg’s story about Altman scouring the Middle East for billions of dollars in sovereign wealth investment was swept aside by the outpouring of cultish e/acc drivel about him.

Got all that? Now relax.

Hallucinations of truth machines

So, it was no surprise that OpenAI was unable to field an expert witness for the House of Lords’ ongoing enquiry into Large Language Models (LLMs) on 21 November, thus depriving the British Parliament’s upper chamber of AI’s most prominent voice.

The Communications and Digital (Select) Committee has been doing sterling fact-finding work on AI in the weeks since the Bletchley Park summit. Indeed, it has been covering the topics that the conference failed to address: governance, ethics, copyright, and more. 

Hopefully, OpenAI will honour its commitment to appear before the Committee soon.

We should remember that, for all their imperfections, some things still stand above the will of any dysfunctional CEO who plans to reformat society as if it were some failing hard drive from the 1990s: namely, democratic governments and industry regulators. 

Appearing before the Committee on Tuesday were: Professor Zoubin Ghahramani, Vice President of Research at Google DeepMind, and Jonas Andrulis, founder and CEO of German AI start-up Aleph Alpha. At an earlier session on 14 November were Rob Sherman, VP and Deputy Chief Privacy Officer at Meta, and Owen Larter, Director of Public Policy in the Office of Responsible AI at Microsoft.

Andrulis – speaking remotely in faltering English from a train going through tunnels (a meta-irony) – shared a useful perspective on LLMs. While Altman has described AIs’ hallucinations as a valuable “feature”, Andrulis said:

How we arrive at LLMs has nothing to do with truth. It is just to learn patterns of language and complete them, to complete writing according to learned patterns. And this is also the reason why these models and their outputs are not consistent. They can contradict themselves, because they're not built as truth machines.

The challenge, of course, is that many users are adopting them because they appear to be truth machines – ones that use language similar to that of expert humans. He continued:

But because we are humans, we care about truth [well, many people do, but others are using social platforms to game society’s own algorithms]. So, we need to do something about this, because hallucinations for us are a problem. 

But hallucination – and it's important to understand this – is not against the learning method of the system, because the system is not built on truth. 

These patterns that are incredibly powerful are able to give us the impression of reasoning, although the systems cannot reason themselves. And those patterns we are able to make visible in the positive and negative ways that we can for our users. 

I believe that it doesn't matter how good AI gets, responsibility can only ever be taken by humans. We need to empower the human to do that.

Wise words – and remember, they come from an AI CEO, and not some armchair ‘decel’ critic. So, is there a way to train LLMs to acknowledge their own failings? Andrulis said:

There is, but it’s a bit of a hack. You can try to fine-tune the model to behave like that, and there are ways you can build systems around that, such as to detect whether an answer is super low probability. 

Actually, Google has announced something where it can mark outputs that it is not certain about. So, there are certainly ways to build around that. But inherently, the systems are not able to do that.
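
For readers wondering what a ‘super low probability’ check might look like in practice, here is a minimal sketch in Python. It assumes a model API that returns per-token log-probabilities alongside its answer; the numbers, threshold, and function names are illustrative, not anything Aleph Alpha or Google has published.

```python
import math

# Hypothetical per-token log-probabilities returned alongside a model's answer.
# Many LLM APIs can expose these; the values here are invented for illustration.
token_logprobs = [-0.12, -0.35, -2.90, -4.10, -0.08]

def confidence_score(logprobs):
    """Geometric-mean token probability: a crude proxy for answer confidence."""
    avg_logprob = sum(logprobs) / len(logprobs)
    return math.exp(avg_logprob)

# Arbitrary cut-off, tuned per application (an assumption, not a published value).
LOW_CONFIDENCE_THRESHOLD = 0.5

score = confidence_score(token_logprobs)
if score < LOW_CONFIDENCE_THRESHOLD:
    print(f"Low-confidence answer (score {score:.2f}): flag for review or request sources.")
else:
    print(f"Answer accepted with confidence score {score:.2f}.")
```

Fine-tuning a model to acknowledge its own uncertainty, as Andrulis notes, is a separate and harder exercise; a check like this only surfaces low confidence after the fact.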

Google DeepMind’s Ghahramani responded:

The hallucination of language models is clearly a problem that many of us are working on. 

This is actually very central to Google's mission, because Google was founded, over 25 years ago, on the basis of trying to provide high-quality information to people. So, one of the reasons we developed much of the underlying technology for LLMs, and were testing it in house for such a long time, is we didn't want to produce systems that would degrade the quality of information that's being produced and is available to our users.

That’s right. Instead, users began degrading their own information to game Google’s algorithms. But I digress. He continued:

The problem of hallucination is quite complex. So, if we think about it, we are often interested in factuality, and in receiving a factual response, but sometimes even humans can't agree on the facts. So, one has to be careful about determining factuality. 

On the other hand, it is important to be able to attribute statements, so attribution and grounding is a major area that we work on. And attribution is also very helpful. Because it allows the user of our systems to go to original sources and see where that information came from.
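
To illustrate the general idea of attribution and grounding – not Google’s actual system, which it has not published in this form – here is a toy sketch in Python: an answer pipeline returns not just a relevant passage but the URL it came from, so the user can go back to the original source. The corpus, URLs, and scoring below are invented for illustration.

```python
# Illustrative only: a toy retrieval-and-attribution step.
# The point is that grounded answers carry their sources with them.
CORPUS = [
    {"url": "https://example.org/llm-basics",
     "text": "LLMs learn statistical patterns of language from large text corpora."},
    {"url": "https://example.org/hallucination",
     "text": "Models hallucinate when they generate fluent but unsupported statements."},
]

def retrieve_with_attribution(question, corpus, top_k=1):
    """Rank passages by naive word overlap and return them with their source URLs."""
    q_words = set(question.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc["text"].lower().split())),
        reverse=True,
    )
    return [(doc["text"], doc["url"]) for doc in ranked[:top_k]]

for passage, url in retrieve_with_attribution("why do models hallucinate", CORPUS):
    print(f"Grounding passage: {passage}")
    print(f"Source: {url}")
```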

Hang on, isn’t Google the company that sometimes gave prominence to its own products, and/or links to sponsored content? He answered:

At present, all we are focused on is the quality of experience for our users. 

We're not actually looking at the economic models for LLMs. [Really?!] What we want to do is provide a better information-gathering experience for our users. And we've always been very clear, whenever we have links – whether it's in search results, or any technology that we produce – to distinguish between anything that's a sponsored link and anything that is obtained through ranking algorithms.

LLMs v Copyright 

Perhaps Bard users might like to ask it about European Commission antitrust cases. That aside, the Lords then moved on to questions such as whether the amount of data used to train LLMs is a significant factor in their accuracy.

After talking about different modalities in data – including biological sequences – Ghahramani said:

They're generally trained on openly available data on the Web. […] So, we're not currently limited by the amount of data. 

I think the more interesting dimension is the quality of data. One possible area of concern is, while there is a lot of high-quality data available on the Web, much of it human generated, you also get AI-generated data that could actually degrade the quality of data. So, trying to detect whether something is AI generated or human generated, and trying to assess what sources are reliable and what sources are not, is a deep and difficult question.

Indeed. All of which brought the Lords once again to the vexed question of copyright (see our previous report from the Lords on this). Ghahramani claimed:

There's actually no comprehensive way of evaluating the copyright status of every single piece of content on the Web. This isn't something that is registered in any way. And so, it's technically very difficult to do this in a systematic and automatic way.

In my view, this is a bad-faith argument, given the very clear copyright status of any published book or song, for example, and the trawling by some companies of entire datasets of unlicensed books to train their LLMs. But he added:

We care deeply about the rights of content providers, because the health of the Web over the last few decades has really depended on content providers being able to produce content, and then being able to derive value from that content being seen by others.

Cue laughter from anyone who has published their songs on Spotify and can’t even afford a coffee from the revenues. 

Ghahramani then suggested that an automated, robots.txt-style opt-out for content creators would be the answer – but implicitly, only an answer for content published since 2021, given that many LLMs were trained on the pre-2021 Web. It is also an answer that leaves everyone except the AI company to sort out the mess.
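
To show how such an opt-out could work mechanically, here is a small sketch using Python’s standard urllib.robotparser. Google-Extended is the robots.txt token Google announced in September 2023 for its generative AI training systems; the robots.txt content and URL below are made up for illustration.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical site's robots.txt: it opts out of AI-training crawls
# (via the Google-Extended token) while still allowing ordinary indexing.
ROBOTS_TXT = """\
User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(ROBOTS_TXT)

# An AI-training crawler checking whether it may use this (invented) page.
page = "https://example.org/my-novel-chapter-1.html"
print(parser.can_fetch("Google-Extended", page))  # False: the site has opted out of AI training
print(parser.can_fetch("Googlebot", page))        # True: ordinary search crawling is still allowed
```

The obvious limitation, as noted above, is that an opt-out only governs future crawls; it does nothing about material that has already been ingested into a training set.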

Speaking at the earlier Committee session, Microsoft’s Larter said:

This discussion around copyright is going to have a big impact on how and where AI is trained and used. […] But first, it's important to appreciate what an LLM is. It's a large model, trained on text data, that is basically learning the associations between different ideas. It's not necessarily just sucking anything up from underneath.

Japan recently clarified that there is an exception within their law for text and data mining within the context of training an AI model. So, I think that’s one thing that we’re going to need to balance as we move forward. We support the recent recommendations from the Vallance report about clarifying UK copyright law, so that there is an exception for text and data mining.

In fact, the UK stepped back from implementing a broad commercial exception that would have allowed the scraping of copyrighted data for training LLMs; it was discussed, but not implemented – to the relief of organizations such as the Publishers Association, which represent copyright holders.

So, despite that, was Larter claiming that there is some overarching legal permission that ignores the distinction between training an LLM (which some might argue is an academic exercise allowed under copyright law) and its commercial outputs (for which there would be no exception if copyrighted material is included)? He said:

I think it's really important to bring that clarity. I also think it's really important to understand that you need to train these LLMs on large datasets, if you're to get them to perform effectively.

In short, he claimed it is in everyone’s interests for LLMs to be allowed to suck up every kind of data – or at least, that was the implication. But would paying for licenses really be an obstacle to companies that are, in some cases, worth trillions of dollars?

Meta’s Sherman picked up the theme alongside Larter:

Large Language Models need to be trained on massively large data sets in order to be effective. If we want to enable the creation of LLMs to be useful […] this needs to be possible. And I think there's a lot of work to do in thinking about how copyright law should apply to that situation.

Larter added: 

I think that's right. But we will, of course, continue to abide by the law. But I think the law needs to be clear. 

I do think there is a danger that if there isn't sufficiently appropriate quality of data available, then you're going to have problems in performance. And safety, security and trustworthiness issues as well.

My take

In my view, the Lords’ enquiry – which is ongoing – has heard bad-faith argument piled upon bad-faith argument from the AI companies themselves, seasoned with moments of real clarity and insight. 

On the subject of copyright, UK law was, and is, very clear; it’s more the case that AI companies decided to scrape the Web and then cross their fingers and hope they could use their colossal economic might to deal with the lawsuits – while claiming it’s in everyone’s interests to let them. 

Hopefully, the many voices the Committee is hearing from outside the vendor community will enable it to reach balanced, insightful conclusions. On that, the signs seem positive. In the meantime, look for our next report on the hearings soon: the regulators’ view.

As for OpenAI? Let’s hope it can stop navel-gazing and finally engage with forces bigger than itself.

*diginomica is aware that negotiations are ongoing at OpenAI to add women to the board.
