Why I can see an AI South Sea Bubble about to pop...

Martin Banks, March 29, 2024
The AI boom is mapping onto Gartner’s Hype Cycle - and that’s likely to mean trouble ahead. We've been here before and it's not pretty...


Every now and then two news releases arrive in my in-box at the same time, about the same subject, and taking opposing positions. That happened the other day, and the opposition between the two struck a strange chord, for I found myself thinking that both are right in their own way.

What is more, that very factor is likely to result in a large amount of unpleasantness for many, ranging across the board from disappointed, frustrated users through to out-of-pocket investors, be they hobbyist day-traders or venture capitalists.

The subject being disagreed about is, not surprisingly, AI. The basis of the disagreement is not whether it is a good thing or a bad thing - it will, of course, be a bit of both. Rather, it is about what happens in the very near future, before either of those outcomes really starts to become an established reality.

To give a measure of what is going through my mind, those of a certain age will probably recall the dot-com boom, when every company in the world was announcing it was ‘dot-com-ed’ and cutting deals with start-ups founded by young kids who had crazy ideas and huge wedges of money from VCs (with the largest amounts seemingly going to the kids with the weirdest company names). This was quite rapidly followed by a spectacular crash of the vast majority of those start-ups, while many large IT systems vendors took a scalding at the very least.

So first off, let me run through each of those releases so you can see where I am coming from, and I can explain where I think it is all heading and what may well happen when it gets there.

It’s all going great    

First up are the results of a survey conducted by Snowflake. This points to what the company calls “a rapidly accelerating growth” of chatbots built using Large Language Models (LLMs). Based on data from the company’s survey of AI apps built per day in 2023, this shows that of the total range of LLM applications available as of May 2023, nearly half (46%) are chatbots, up from 18% of the total the previous year. Interestingly, it also shows that some 65% of all LLM projects are for work applications, which the company suggests shows a growing interest in applying generative AI tools to improving workforce productivity and insights.

The data source for the survey was Snowflake’s community of developers that use the Streamlit open source web applications development tools. This shows that 20,076 developers built over 33,143 LLM apps in the past nine months, the vast majority of which were created using Python. When it comes to developing AI projects, Python is the programming language of choice, due to its ease of use, active community of developers, and vast ecosystem of libraries and frameworks. 

What can be taken from this are some positive signs that AI is starting to be taken up fast by business users, and that they are starting to look at business applications that can enhance important areas such as productivity. The preference for introducing chatbots also shows a fair degree of caution being exercised. It is fair to say that chatbots, especially those handling fairly straightforward tasks of limited scope, such as Level 1 customer support calls, are relatively simple apps.
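To illustrate why a bounded Level 1 support chatbot counts as a relatively simple app, here is a minimal sketch in Python (the language the Snowflake data identifies as dominant for these projects). The intents and canned replies are hypothetical illustrations, not any vendor's actual design; real deployments would sit an LLM behind a similar bounded-scope wrapper.

```python
# A minimal sketch of why Level 1 support chatbots are "relatively simple apps":
# a bounded scope, a fixed set of intents, and escalation for anything else.
# The intent keywords and canned replies below are hypothetical placeholders.

CANNED_REPLIES = {
    "password": "You can reset your password from the account settings page.",
    "invoice": "Your latest invoice is available under Billing > History.",
    "opening hours": "Support is available 09:00-17:00, Monday to Friday.",
}

ESCALATION = "Let me pass you to a human agent who can help with that."

def level1_reply(message: str) -> str:
    """Match the message against known intents; escalate on anything else."""
    text = message.lower()
    for keyword, reply in CANNED_REPLIES.items():
        if keyword in text:
            return reply
    # Anything outside the bounded scope goes to a human - exactly the
    # complex-query gap the BSI survey respondents complain about below.
    return ESCALATION
```

The point of the sketch is the shape, not the matching logic: the value of such a system lives almost entirely in how well its narrow scope is chosen, which is why it is a cautious first step rather than a demonstration of AI's full promise.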

Maybe not so great just yet

The second survey comes from the British Standards Institution (BSI) and has a headline that declares ‘UK consumers fed up with unhelpful AI chatbots’. This research suggests that only 15% of the Britons polled believe AI has enhanced their customer service experience, whilst 41% say that it has made it worse.

BSI has got into surveying and assessing such services as part of its new BSI Kitemark for Service Excellence, which has been developed to reflect the mass pandemic-triggered transition of consumer activity to the internet. With a growing number of energy providers, banks, insurance companies, and broadband providers now employing AI chatbots to manage customer interactions, communicating with end users effectively is an important goal. The Kitemark has been launched with the intention of preventing the erosion of a positive customer service culture.

The BSI had previously identified that 57% of Britons knew they had been communicating, at least once, with a service provider that used an automated chatbot. Only 32%, however, were aware that they had communicated with an AI-based service. The BSI has established a correlation between the advent of AI-based chatbots and a growing body of consumer opinion that customer service levels have gone down, with 36% saying customer service standards have fallen in the last two years.

As a wake-up call to those businesses already playing in the chatbot field, the BSI also found that 35% of respondents said they found no benefit from AI-based services. In addition, while 42% said that AI chatbots are OK for handling simple complaints and issues, 68% said they believed them to be unsuitable for handling complex queries. As was observed by Kjell Carlsson of Domino Data Lab, companies creating AI chatbots are starting to question whether using LLMs for the job is either better or more economical than the old ways of using humans.

What does this tell us?

The easiest thing to say is, ‘Here we go again’, and the only question is likely to revolve around just how negative the impact is going to be. The initial guess is that it will indeed be serious, for many businesses are already feeling the need to be seen to be leading-edge players when it comes to offering and using AI. I have, for example, seen my first reference to ‘AI washing’, although I am sure there are more already, and this practice will be huge.

This is the old trick of making a claim - in this case that firms are using AI to give customers better, more comprehensive services - without actually making any of the necessary investment. The US Securities and Exchange Commission recently settled charges against two investment advisers for doing just this, obliging them to pay a total of $400,000 in civil penalties. There must be, and certainly will be, many smaller vendors and service providers which will be sorely tempted by just such a strategy, and most likely achieve the same result.

What this all shows is that the famous Hype Cycle created by analyst firm Gartner is already at work on AI applications. The development of generative AI technologies, and in particular the launch of OpenAI’s ChatGPT service, has become the Hype Cycle’s technology trigger.

The first stage of that cycle, driving up the slope of expectations, is now at break-neck speed. Both vendors and users are already building business plans based on the expectations that miracles will be commonplace in the very near future. This last year has seen those expectations accelerated by the reaction of the mass media to the endless possibilities such miracles will bring. One of those is the opportunity to make money by getting in on the ground floor as an investor, just as new start-ups proliferate. 

This applies to both venture capitalists and the ordinary person piling in at the peak of those expectations. It is possible to surmise that the peak has been reached, for one of the signs is that negative press coverage starts to appear. And I have to say that the next stage of the Hype Cycle, known as the trough of disillusionment, is now just over the horizon.


The pace of expectation growth suggests that the descent into the trough will be hard, fast and vicious, for there will be a long way to fall. There are several reasons why that fall is coming.

First off, there is currently little chance of most user expectations actually being fulfilled. There’s a case to be made that even relatively simple tasks like Level 1 service inquiry chatbots are not fulfilling user expectations. Where expectations are being met, there is usually a specific reason. For example, in areas such as healthcare services, the parameters of the task are reasonably well-bounded and increasing amounts of real end user data are becoming available for both system training and daily use.

Another area of success is in bleeding-edge research, where advances in specific areas, such as molecular human biology, are already reaping substantial and important results. The downside of such successes for the vast majority of users is how long it takes to train the system to get an answer, and just how much it costs to achieve a result. Many of the typical users’ dreams and expectations will cost them a great deal more than they may have expected.

Part of the reason for that is that ChatGPT is cloud-based SaaS, so users pay only for the time and resources used, plus the minimal amounts of data uploaded to run experiments and pilot programs. So costs may well look attractive, but those building models later can find they get better results, because that early data remains to be learned from. In other words, early experimenters have been laying a training path for the competitors that come after them.

I suspect that not many companies will want to be early adopters once they realise their early experiments can become the training material for the competitors that follow. And when they then start seeing the bills for the only alternative – their own implementation – the urge to follow their AI dreams may at least get deferred, or put on the unlit back-burner.
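The pay-per-use versus own-implementation trade-off above boils down to a simple fixed-versus-marginal-cost calculation. Here is a hedged back-of-envelope sketch; every figure used is a hypothetical placeholder, not a real price from any vendor.

```python
# Back-of-envelope comparison of pay-per-use SaaS pricing against running
# your own model. All rates here are hypothetical placeholders chosen only
# to show the shape of the break-even calculation.

def saas_cost(queries: int, cost_per_query: float) -> float:
    """Pay-as-you-go: cost scales linearly with usage, no fixed outlay."""
    return queries * cost_per_query

def self_hosted_cost(queries: int, fixed_monthly: float,
                     cost_per_query: float) -> float:
    """Own implementation: a large fixed cost plus a smaller marginal cost."""
    return fixed_monthly + queries * cost_per_query

def break_even_queries(saas_rate: float, fixed_monthly: float,
                       hosted_rate: float) -> float:
    """Monthly query volume above which self-hosting becomes the cheaper option."""
    return fixed_monthly / (saas_rate - hosted_rate)
```

With illustrative rates of $0.02 per SaaS query against $10,000 a month fixed plus $0.002 per query self-hosted, the break-even point sits around 556,000 queries a month: below that, the cloud bill looks attractive; above it, the fixed-cost route starts to win, which is exactly why the invoices come as a shock once pilots scale up.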

The types of generative AI system that most of them will actually need – Small Language Models of two billion or seven billion parameters – are starting to come through, but that means almost starting over, and re-thinking the scope of what they want to achieve and whether it now makes business sense to make the investment.

Meanwhile the companies that investors are targeting – as happened with the dot-com boom and, indeed, the South Sea Bubble of the 18th Century – will be showing far less in the way of future revenue streams or profits than many of them had predicted or planned for. The results there could be drastic and, for some, extremely painful. The wise might find it best to take two steps back and one step sideways while they can.

My take

Let’s face it, we can be sure that AI is going to be the source of many a dramatic and mind-scrambling development out into the future, and the benefits across the board will be many and widespread. But it ain’t going to be quick and it ain’t going to be easy. Both vendors and users – and users’ own customers – need to be ready for the current headlong rush of new applications to crash and burn. We will be disillusioned, but from that will come something better, and something we will better understand how to exploit.