Microsoft, Google and BT argue for regulation of AI use cases, not technologies

Derek du Preez Profile picture for user ddpreez February 23, 2023
Summary:
The three technology giants gave evidence to MPs on the UK’s Science and Technology Committee, arguing that when it comes to AI development and regulation, it’s the use cases that should be the focus.

brain
(Pixabay)

The launch of ChatGPT, a chatbot developed by OpenAI that uses Large Language Models (LLMs) to interact with users in a conversational way and serve up information sourced from the Internet, has firmly placed AI at the peak of a fresh hype cycle. 

A number of organizations, from technology vendors to media companies, have been quick to announce integration with the tool; Microsoft has announced a $10 billion investment in OpenAI and has launched a new chatbot for its Bing search engine; and users have been swarming to the generative AI platform to test it out, trying it out for a range of use cases, from writing essays to creating code. 

Spending a few minutes on ChatGPT can leave a user feeling very impressed (or alarmed) with how swift and natural its responses are. The system is able to follow up to queries in a conversational way and serve up content that sounds pretty convincing. 

However, it hasn’t been plain sailing. A number of researchers and journalists have pointed to the mixed levels of accuracy presented by ChatGPT (despite it appearing very confident in its answers), whilst a NY Times article about how Microsoft Bing’s version of the chatbot declared its love for a journalist is somewhat disturbing to say the least. 

Simply put, AI is at the forefront of peoples’ minds, with organizations scrambling to figure out how to take advantage of the latest developments, and governments are eyeing the work being done at technology companies cautiously - worried about where it may lead us. The concern primarily being, can it use regulation to keep pace with the developments? 

That was much of the focus in the UK this week, as Microsoft, Google and BT gave evidence to MPs on the Science and Technology Committee, which is seeking to understand the best approach to governance of AI technologies. 

Hugh Milward, General Manager of Corporate, External and Legal Affairs at Microsoft UK, was keen to point out that ChatGPT - and similar tools - are just a subsection of what will be possible with AI and that the opportunities extend far beyond conversational AI. He said: 

There's a lot lot further to go. I think the opportunity is absolutely tremendous. ChatGPT is obviously very exciting, but it's not the entirety of AI. And I think when we're looking at governance and AI, and where AI has got to, we're talking about the opportunity of a sort of new industrial revolution in AI.

Being able to help doctors, for example, to analyze scans at speeds at an order of magnitude improvement on what we currently have; address climate change issues like mass flooding and heatwaves; increase the development of clean energy in a way that we've never seen before; but also to help to address economic challenges as well. The growth opportunity, the increase in productivity, the opportunity there is just tremendous.

And on the point of governance, Milward added: 

But we need to get regulation rights if it's going to serve society in the way we want it to, but also do so in a responsible way.

Jen Gennai, Director of Responsible Innovation at Google, agreed with Microsoft's assessment of the opportunity and argued that there has been a lot of learning around AI governance, and that companies are ensuring that there are guardrails in place to ensure AI is developed responsibly. She said: 

[In terms of where we are, on the maturity curve] I would argue, AI is more mature than the original internet. But in terms of the potential, we haven't seen all the areas AI can be helpful from a societal level, a commercial level, and that's pretty exciting right now. But the governance is where I'd argue that the majority is actually further along than originally in the internet age. 

Adrian Joseph, Chief Data and AI Officer at BT Group, and also a member of the AI Council, an independent expert committee that provides advice to the Government on the AI ecosystem, said that artificial intelligence will be the most disruptive technology trend that we see over the next 10 years. However, he said a concern is around AI sovereignty, particularly around LLMs and the UK losing out to corporations and other nations. Joseph said: 

As a member of the AI Council, we have written to the government and strongly suggested that the UK should have a national investment in LLM, more broadly framed as generative AI, as we think that's really, really important. 

There's a risk I think that we in the UK lose out to the large tech companies, and possibly China, and get left behind. There's a very real risk of that unless we begin to leverage, invest and encourage our startup communities to leverage the great academic institutions that we've got. 

I think there's a very real risk that we in the UK could be left behind. 

We absolutely have the expertise and capability but we do need the convening power of the government, of ecosystems, to come together to allow us to unlock. 

Break the AI

In terms of how Google and Microsoft think about developing AI and addressing the concerns the public and governments might have, both companies spoke about iterative developments and using testing to ensure the outcomes are desirable. Answering a question about the results delivered to the NY Times journalist who was faced with what were presented as intense ‘emotions’ from Bing’s chatbot, Microsoft’s Milward said: 

The principles that we put in place here are iterative. So we can't create something in secret, as it were, and then launch it - because we'll get it wrong and it won't do the things we want. 

So what we have to do is allow some users to use it, and to have a process in place where we respond very, very quickly to what's happening - almost in real time - so that we can address it as we go, really quickly

The model of governance that we're trying to build, in terms of our own principles, and the way in which we develop and use AI, really is predicated on this idea of very fast iteration. 

So that when we get issues like that, which were concerning,  we are in a position to be able to engineer change extremely rapidly, which is what we were able to do. We will continue to learn from that experience. It's only by allowing people to use it that we were able to learn. If we hadn't done that we wouldn't have been able to learn something. 

Google’s Gennai said that her company’s focus is on what it calls ‘adversarial testing’ - essentially where it attempts to break the algorithm. Gennai explained: 

Responsible innovation is known as adversarial testing, where we try and break a product from different parts of our AI principles. So, for us, our AI principles are our ethical charter for the development and deployment of AI. 

We use each of those principles to try and essentially break the product. So we have a principle around fairness that we will not create or reinforce unfair bias. And we conduct tests to try and break the product from a fairness perspective. 

If you're familiar with how security teams for decades have done cybersecurity through red team testing, we've done similar with our principles. So we do that internally. And then as we release it into the wild, we're finding that a lot of users are also essentially augmenting those adversarial tests and providing examples of things that we may need to investigate further. 

We are continuing to learn from the use cases that through these experiments allow us to see where there are certain areas that may require guardrails. 

A focus on the use case

The Committee asked the companies providing evidence if they were concerned about their AI products being used by other nations, such as China, for reasons that suppressed certain population groups, or for purposes that were anti-democratic. Microsoft, in particular, agreed that they would be worried about that, but argued that this shouldn’t hinder the development of AI itself - and said that it’s important that pro-democracy nations ‘stay ahead’ of countries such as China. Milward said: 

We can't hold back the development of AI that has been developed in other countries where there is a high set of ethical principles and values, as a counterbalance to some of the countries where that is not the case. We have talked about China and that's one example where actually we've got to stay ahead. 

We have to allow [AI to]] flourish if we're going to ensure that [is the case]. The other thing of course is that as we regulate AI, to make sure that we are thinking hard about the regulation of uses of AI, rather than the AI itself. 

Because if we do that, then AI irrespective of where it's developed, if it's developed in a regime that we don't necessarily agree with, if we're regulating its use, then that AI has to, in its use in the UK, abide by a set of principles. And where we have no ability to regulate the AI itself, where it's developed in China, we can regulate how it's used in the UK. So it allows us then to worry less about where it's developed and worry more about how it's been used irrespective of where it's developed.

Milward went on to provide an example of an AI technology that may be dual use, highlighting why he’s arguing for use-based regulation. He said: 

The way in which we develop our technology and the way in which we license our technology includes restrictions on use. So we can define when a customer is allowed to use it or not.

But also, take a technology which is dual use, like facial recognition. You can use facial recognition technology to recognize cancer in a scan. You can use the same technology to recognize a missing child. You can use the same technology to identify a military target. And you could use the same technology to identify citizens who are unlicensed in a regime you might want to haul in. 

That is the same technology being used for very different purposes. And if we were to restrict the technology itself we wouldn't get the benefit from the cancer scans. In order to solve the problem of it being used in a surveillance society, we then regulate its use, rather than its development.

Then we can quite comfortably say no, we're not going to license this technology for you, and we're not going to license this use of that technology, even if we will license a different use of that technology. 

My take

A complex challenge to solve. You couldn’t help but get the impression throughout the sessions that the companies present were arguing that they’ve got it under control and for governments to not interfere too much. Equally, some of the questions from MPs really highlighted the disconnect between their knowledge of AI and the advanced work being carried out by these companies. Regulations keeping pace with the development of AI is a big ask, which is why a principles-based approach is probably the best outcome. One that commits to testable boundaries of what society and government deems acceptable. However, as has been the case with the development of the internet and data use thus far, often the consequences are unpredictable and unintended. 

Loading
A grey colored placeholder image