The biggest AI obstacle is culture, not data science skills - a services view from Globant
- For a gut check on acquiring data science talent, who better to ask than an AI services expert? But Juan José López Murphy of Globant surprised me by arguing that it's not skills that hold up AI projects - it's the need for a culture of experimentation.
Another way to frame that is: what is the role of the AI services provider to alleviate the skills pinch? I recently got a fresh angle on that from Juan José López Murphy, Technical Director and Data Science Practice Lead at Globant. The big surprise? Why culture trumps data science skills.
Why is AI sticking to the wall this time around?
Murphy has spent the last ten years employing mathematical models to do things like forecasting, documentation, and communications. He has seen AI hype surge and fall:
A couple years ago, AI had a huge high, then it fell out of favor because of missed opportunities or broken promises.
Making "AI" work at scale outside of lab settings was a problem in the past. So what's different this time?
1. Major players like Google and Facebook are now running machine learning at scale. Murphy:
What we are now calling "AI" is something that we've been building and using for the past, let's say, three to five years, whether it's on the big enterprises like Google, Facebook, Amazon, Netflix, Spotify, so they are already using these kinds of algorithms. We're integrating that into our everyday lives - and we've come to expect it. So we're not first promising, and then seeing if we can deliver. We're actually interacting with it already. So it's already installed.
That's great for Facebook and Google - and it certainly fuels user adoption. But how does that help companies without those resources?
2. Cloud computing and big data have matured, lowering the barriers to entry. Machine learning (ML) needs the capacity to crunch huge data sets:
We now have the model maturity and big data to actually supply the kind of data and structure that we need for training these kinds of models. We have a lot of development in cloud computing, so that we can scale this without having to commit to huge servers or server farms.
3. Mobile has fueled an "explosion" of useful/massive data sets for ML scenarios:
A lot of people say that the iPhone was like a huge breakthrough in that sense, because it made the mobile usage and the apps and all the information about the people using them available in a way that we didn't have before, so that was a key enabler of what we call the data explosion.
4. Open source tooling makes AI more accessible. Even AI behemoths like Google and Facebook learned the hard way that if they don't expose their AI tools via open source, they won't get adoption. The end result? Publicly available toolsets, from Hadoop to TensorFlow to Kubernetes:
You're either consuming an API from an enterprise, or you're building your own models with official tools. If you go back a couple of years to the advance of Hadoop, Google had something similar built before that, but since it wasn't open to the public, someone else implemented those ideas and made them public. Then you had Hadoop laying the foundational infrastructure for big data - and not what Google already had in place. So they learned from that.
The biggest AI project obstacle is culture, not skills
That explanation makes sense. But it leaves open two monster questions:
- How should companies approach the skill/staffing side for such projects?
- With potential use cases across departments, where do you begin?
I was expecting Murphy to focus on the data science skills gap. But in his view, the biggest AI readiness issue for companies is not data science skills - it's culture. If a company doesn't have an "agile" project approach and willingness to experiment, they won't get far with AI. Murphy:
The most crucial aspect is openness to experiments... The most important thing is that you will try several initiatives in a very short time, very fast, to see if those make sense. And then you can scale from that.
That's why digital native companies have an edge with AI:
Companies that are more natively digital or have gone through that pathway, they understand experimenting and iterating before trying to scale. More traditional companies sometimes expect to say, "Oh no, I'm committing right now to this two year project, and it needs to work this way and that way."
Therefore, there is a huge link between capitalizing on AI and digital transformation:
Our co-founders like to say that AI is a wave that is mounted on top of the digital transformation wave, because you need to be agile in that sense of experimenting; you need to be data literate.
Getting started with AI - why data science skills aren't a deal breaker
In his projects with Globant, Murphy doesn't see lack of AI skills as a deal breaker:
Whether the company has some internal data talent or not, that just changes the terms of engagement. If they don't, we guide them. If they want to develop it, we help them.
Murphy acknowledges that if you want to develop a truly unique AI approach, you'll need internal data science skills. But those skills can be added over time, once early experiments take hold:
If you want to have something different from your competitors, then you will probably need some data-specific people, and to build from scratch. But that's not the only way to leverage it. With Azure, GCP, Watson, and the whole array of models that Amazon is offering up, you can start consuming specific services for specific tasks, [and get started].
The wrap - the merits of AI for a "decision that hurts"
Murphy's view on a culture of experimentation rings true - as does putting AI in the context of digital transformation. You won't get far on one without the other.
What companies need from AI services is a real partner that can help them assess the right use case, and push ahead. No welcome wagons of junior consultants needed - or wanted. Murphy says Globant has found a fruitful "middle ground," that helps clients connect tech, strategy, and design.
As for where to get started, Murphy advises companies to ask this interesting question:
Tell me about a decision that hurts.
By a "decision that hurts," Murphy is talking about a pain point around decision-making due to prioritization and data volume issues. It could be something seemingly mundane, such as "Which emails do I read first?" Anything where there is a need to identify a priority action, a priority customer, a priority customer service situation. "Which sales lead do I call next?" could be a decision that hurts.
Murphy brought the point home with a customer example. The client had mechanical engineers fluent in big data technology, but they weren't skilled at defining business problems. Teaming with Globant, they decided to start by launching a chatbot pilot, focused on a customer service scenario. In this case, the "decision that hurt" was service representatives needing to focus on the customers with tricky problems, while automating the servicing of routine inquiries.
They started by building initial models to see if those models surfaced answers that made sense. In a couple weeks, they had an internal prototype they could consider exposing to real people. As the feedback from this project picked up steam, the company was better able to see how "AI" could be applied in other areas of the company.
That leaves open the question of how massive software vendors can hope to make AI revenues on an experiment-and-build approach. To which I'd say: welcome to the disruption you've been advocating.