AI’s ‘long tail’ is preventing mature adoption, says Andrew Ng

Chris Middleton, November 11, 2022
AI technology is a bigger challenge than leaders realise. The solution? Data-centric AI.

Andrew Ng

Andrew Ng is one of the biggest names in Artificial Intelligence and Machine Learning, having founded and led teams at Google Brain, Baidu, and elsewhere, and founded Coursera and Landing AI. His online courses have attracted millions of views.

AI has huge potential outside of consumer software and internet apps, he believes. But as he walks around user premises talking to project managers about their work, he finds a sector in desperate need of guidance and leadership:

I think the biggest potential of AI still lies ahead of us, to use it for all the other industries other than just consumer software and internet. Everything from retail, travel, transportation and logistics, automotive and assembly, and many, many more.

But candidly, when I walk around everywhere from factories to hospitals, they just seek mentors. I think the adoption of AI in all of these industries is still very nascent. A study by McKinsey estimated $13 trillion worth of annual value by 2030. But I think there's still a lot of work ahead of us to create that value. AI cannot reach its full potential until it's accessible to everyone.

So why isn't AI more widely or maturely adopted in many sectors? According to Ng, it comes down to a 'long tail', or customization, problem. He explains:

Here's what I mean. If you want to take all the potential and actual AI projects, and sort them in decreasing order of value, you get a curve. On the left, maybe the single most valuable commercial AI system in the world is some online ad system, and maybe the second is some web search engine that shows the most relevant results. And then maybe with autonomous vehicles we’ll get there some day in the future.

So, in the AI world, we've figured out how to hire dozens or hundreds of machine learning engineers to build one giant monolithic system to serve millions, hundreds of millions, or even billions of users. But once we go into other industries, I see a lot of projects that are maybe worth $1 million to $5 million each, everything from a pizza chain wanting to do full demand forecasting, or a t-shirt manufacturer wanting to improve product placements, or how to do better quality control in automotive manufacturing.

[The problem is that] everywhere outside of consumer software and internet, we don't have these databases of 100 million users to apply one AI system to. Instead, I see tens of thousands of projects that aren't being effectively executed on right now, because of the high costs of customization. I can't hire 10,000 machine learning engineers to build 10,000 of these projects.

Data-centric

So, what’s the solution? Ng argues: 

Fortunately, an emerging technology called data-centric AI is now enabling a lot of these projects to be done. When you think about what a team has to do today to build an AI system, they have to write a lot of code. And while I hope that everyone will learn to code, realistically it is difficult to get everyone to write cutting-edge AI software by themselves.

Most people get a data set from somewhere and then have a team write code and focus on improving the software, but this turns out to be difficult. But with the data-centric approach to AI, we’ve flipped this recipe on its head. We observed that for a lot of AI applications, the code is already a solved problem. There could be some open-source implementation of an AI model you can get from a vendor that works just fine. So instead, it's more fruitful to provide the tools for your teams to work on the data.

Using the example of an in-car AI system for identifying whether a driver or passenger has left their wallet/purse on the car seat, Ng demonstrates the challenge of teaching a computer vision system what arrangement of pixels means ‘wallet’ when different designs are viewed from multiple angles. Similar challenges are faced by computer scientists teaching robots to pick up tools, for example, or driverless cars to recognise pedestrians.

The problem isn’t the hardware or the coding of the application, but how well and consistently the data is labelled, he says: 

With data-centric AI, you can increase access to AI for teams because rather than needing the expertise to write the code, you really just need the expertise to know what is a wallet in a car. What you are really trying to do is to provide and describe data in a very consistent way. So, the key thing about this journey to democratise access to AI is to not just let the $100 million or billion-dollar systems get built, but to let all of these $1-5 million projects in the tail get built.

With data-centric AI, it’s about providing training to more people, but to subject matter experts rather than to machine learning engineers. That’s what my team is doing and what many others are trying to do in different application areas. I hope that these will provide a foundation with which we can give a lot more people access, and a lot more people the ability to build custom AI systems.
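To make the idea of "describing data in a very consistent way" concrete, here is a minimal sketch of a label-consistency audit of the kind a data-centric workflow relies on. The dataset, item IDs, and annotator names are hypothetical; the point is that disagreement between labellers flags ambiguous labelling instructions before any model code is touched.

```python
from collections import defaultdict

# Hypothetical annotations: (item_id, annotator, label).
# In a data-centric workflow, disagreement between annotators is a
# signal that the labelling instructions are ambiguous and the data,
# not the model code, needs fixing.
annotations = [
    ("img_001", "alice", "wallet"),
    ("img_001", "bob",   "wallet"),
    ("img_002", "alice", "wallet"),
    ("img_002", "bob",   "purse"),      # inconsistent labels
    ("img_003", "alice", "no_wallet"),
    ("img_003", "bob",   "no_wallet"),
]

def find_inconsistent(annotations):
    """Return the item ids whose annotators disagree on the label."""
    labels = defaultdict(set)
    for item_id, _annotator, label in annotations:
        labels[item_id].add(label)
    return sorted(i for i, s in labels.items() if len(s) > 1)

print(find_inconsistent(annotations))  # ['img_002']
```

Items flagged this way would be sent back with tightened labelling guidelines (for example, deciding whether 'purse' and 'wallet' are one class or two), which is the iteration loop Ng describes subject matter experts running instead of machine learning engineers.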

My take

An obvious point, perhaps, but an important one. With so many sophisticated AI tools in existence, the real challenge is accurate and consistent labelling in industry-specific areas, even in comparatively small data sets (as opposed to those containing millions of items).

Hopefully, Ng’s profile will ensure that data-centric tools are more widely adopted, rather than business leaders spending millions of dollars reinventing the wheel when what’s needed is more data about the wheel itself.
