Attention enterprises - your AI project success in 2024 is not a given. What will separate wins from failures?

Jon Reed - January 18, 2024
Summary:
Contrary to marketing hyperbole, the data shows that enterprises are not charging into AI with abandon in 2024. But for those that do, what will constitute AI success, or failure? Here's how companies will manage potential and risk - and my top eight underrated factors for AI projects.

Judging from the marketing bombast, you might think every company is diving headlong into an AI project - but that's just not true.

IBM's Global AI Adoption Index indicated that:

  • 44% of "enterprise-scale companies" - i.e. those with 1,000+ employees - have actively deployed AI in their business.
  • IBM found an additional 40% are "currently exploring or experimenting with AI," but those models are not deployed yet.

I'm less concerned with the frequency of AI deployments anyhow. I'm more concerned with getting them right - and demonstrating ROI without undue risk - while avoiding the ethical violations that are typical of what I call "AI overreach." (Exhibit 2,563: Rite Aid's facial recognition fiasco, and my recent analysis of that in hits and misses).

Achieving project success while minimizing risk won't be easy. As I wrote in Bringing generative AI to hourly workers:

The stakes for enterprise AI in 2024 are already high. The shakeout at OpenAI and the EU AI Act have added new levels of complexity, raising questions about whether open source AI is viable - and how companies will approach AI amidst new IP lawsuits.

Not to mention concerning stories about unethical training data, who owns the model's content IP, and model drift.

Will enterprises pull back on their AI pursuits?

The answer to that last question? A qualified no. There is too much pressure on enterprises from "Shadow AI" (rogue ChatGPT adoption) not to act in some way, and simply trying to ban this technology is a risk unto itself. I believe we'll see three main groups of enterprise reactions to AI in 2024:

  1. Customers that will pursue AI over-aggressively, because they buy into over-promises about AI's capabilities, or believe they can use AI for quick gains to justify ill-conceived headcount reductions - and steamroll over potential consequences (the "move fast and break things" contingent). I call this "AI overreach"; for more on that, read on.
  2. Customers that hold off on any AI projects until the risks of IP litigation and regulatory uncertainty die down.
  3. Customers that move forward with AI, but in a more deliberate manner, likely acquiring AI solutions from trusted software vendors, who will theoretically assume a major chunk of the liability risk - and source different Large Language Models (LLMs) as needed without being locked into one. *See a final note at the end for an update on this one.

We can't do much for the first group - their reality check is coming. The second group may be making a strategic mistake. These "watch and wait" orgs may have an accurate risk assessment of ChatGPT-type tools, but do they have an informed view into more narrowly-focused enterprise AI?

The pros and cons of enterprise AI in 2024

Industry-specific LLMs, bolstered by customer-specific data via Retrieval Augmented Generation (RAG) or other techniques, have a different set of pros and cons. Enterprise AI vendors are already demonstrating live projects where risk is more carefully managed, via controlled AI output, high quality data input, and human-in-loop design principles.
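
To make the RAG pattern concrete, here is a minimal sketch in Python - purely illustrative, with a toy keyword-overlap retriever and an invented call_llm stub standing in for the embeddings, vector database, and hosted model a real vendor stack would use:

```python
# Minimal sketch of Retrieval Augmented Generation (RAG).
# The retriever and call_llm below are illustrative stand-ins, not a real product API.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query.
    Production systems use embeddings and a vector database instead."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def call_llm(prompt: str) -> str:
    """Hypothetical stub for a hosted LLM call; swap in your vendor's client here."""
    return f"[model answer grounded in the prompt below]\n{prompt[:200]}"

def answer_with_rag(question: str, documents: list[str]) -> str:
    # Ground the model in customer-specific context rather than relying on
    # whatever the base model memorized during training.
    context = "\n".join(retrieve(question, documents))
    prompt = (
        "Answer using ONLY the context below. If the context is insufficient, say so.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

docs = [
    "Store 114 overtime policy: hourly staff must approve shift swaps in the app.",
    "Holiday schedule: distribution centers close December 25.",
]
print(answer_with_rag("How do hourly staff swap shifts?", docs))
```

The retrieval step and the "use only the context" instruction are what make this pattern more controllable than a raw chatbot: output is constrained to vetted, customer-specific data, and you can audit which documents informed each answer.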

You don't choose AI for fun, but within AI's purview, there are addressable business problems. Caveat: these types of projects don't have "revolutionary" sex appeal, but for the right company/scenario, they are showing quantifiable benefits. Here are two recent diginomica use cases:

For those concerned about customer choice, there's another welcome development: successful AI projects don't force customers only to work with mega-vendors. Yes, the advancements in gen AI rely on models trained on massive data sets that most companies aren't equipped to ingest. Unless these big data computing needs change, AI will have a decidedly big tech flavor.

However: savvy "industry AI" firms seem to be competing in this environment, sourcing from third party LLMs where needed. I noted one example in my recent profile of Legion AI, a firm specializing in the problems of hourly worker management:

This opens up the intriguing possibility that smaller startups of domain experts can build AI into their solutions - and provide impact that is much closer to "out of the box" than training your own LLM, and without the risk mitigation issues that do-AI-yourself currently raises.

Translation: you don't have to work with OpenAI, or wait on OpenAI to navigate its vexing copyright infringement lawsuits, in order to move ahead.

AI project failure - prevalent and concerning

But that's where the optimistic side of the story ends, at least for now. On the other side is the persistent undertow of AI project failure. Cognilytica, the firm that produces the AI Today podcast series and manages the CPMAI project management certification program, frequently cites AI project failure rates in the 70% range. In a recent Harvard Business Review piece, Iavor Bojinov, data scientist and Harvard Business School professor, hit on similar themes:

Sadly, beneath the aspirational headlines and the tantalizing potential lies a sobering reality: Most AI projects fail. Some estimates place the failure rate as high as 80%—almost double the rate of corporate IT project failures a decade ago. [Keep Your AI Projects on Track]

In their Global AI Adoption Index, IBM cited the top obstacles to AI deployments:

The top barriers preventing deployment include limited AI skills and expertise (33%), too much data complexity (25%), and ethical concerns (23%).

But once you get started, the risks can go up. On a recent AI Today podcast, Critical AI and Data Skills Needed for AI Project Managers, podcast co-host Ronald Schmelzer issued a project warning - and a huge tip on the way forward: don't treat your AI project like other tech projects. A different risk profile - and project methodology - is needed:

The thing about these approaches is that they're great for generic project management; you don't want to throw those away. However, what people need to realize is that these AI and data projects, they have their unique challenges. We talk all the time about this high rate of failure for AI projects, 70-80% of AI projects that get started either never complete, or complete to a point where they are not meeting the objectives, and they fail or have to be cancelled.

It's a lot of time and resources wasted, but also some potential big risks. Because failed AI projects could mean lawsuits, they could mean getting into trouble with authorities - that can mean trust that you've eroded with your customers. So we can't just take these general project management approaches that aren't specific for AI and data, we need to add more to it.

AI project success in 2024 - overlooked and underrated factors

So how do you defy those numbers, and achieve AI project success in 2024? Some of the keys border on the obvious: start small in areas where you have quality data to exploit, take data governance seriously, and build on those wins. Doing AI for the sake of AI is a loser of an idea, etc.

When I outlined those three main AI customer groups above, I left out a fourth: companies with the IT, data science and computing resources to build-your-own, as Bloomberg did with Bloomberg GPT, a Large Language Model for finance.

Going that ambitious route is outside the scope of this article. Most customers that move forward in 2024 will almost certainly choose a trusted services partner, if not a packaged AI solution. But while that may mitigate some risks, as analyst Josh Greenbaum has exhorted, customers still need to own their own project outcomes [Attention Customers: You’re Responsible for Vendors’ Customer Success Efforts Too. With Proof Points].

The podcast and article links above detail many tips for AI projects; I won't repeat those here. Instead, I'll run down my underrated/overlooked factors in AI project success - outspoken criteria you might not see elsewhere.

1. Avoid "AI overreach" - most AI failures I've seen involve a profound misunderstanding of what AI is currently capable of, or buying into an overconfident view of AI's capabilities. Perhaps, in some cases, it's even using AI as a technical excuse for flawed business models or headcount reductions. It could be a reluctance to invest in human-in-loop design principles, in order to streamline costs. Or, in the case of Rite Aid, multiple points of failure, including an apparent lack of interest in reducing false positives. As I wrote about Rite Aid's facial recognition fail (and FTC intervention):

Talk about massive AI overreach. As per The Verge: 'Rite Aid employees allegedly followed customers around stores, performed searches, publicly accused them of shoplifting, and even asked the authorities to remove certain shoppers, according to the complaint.' Oh, and with an allegedly high rate of false positives that were never properly audited. Oh, and potential racial biases too. Fantastic work Rite Aid - sounds like a fabulous employee training program to boot.

2. The customer should define their own "customer success" AI metrics for each project, agreed to by vendor and/or services partner. Ask potential vendors what their definition of AI success will be, and how it will be tracked - but in the end, it's your metrics that all parties should buy into.

3. Those metrics should be tracked as real-time as possible, so that course corrections can be made. Why are AI vendors so good at building co-pilots, and so bad at building real-time, AI-driven dashboards to track project progress and flag potential issues?
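
To sketch what that real-time tracking could look like, here is a toy rolling-window monitor in Python; the MetricMonitor name, window size, and alert threshold are my own illustrative assumptions, not any vendor's API:

```python
from collections import deque

class MetricMonitor:
    """Rolling-window monitor for a customer-defined AI success metric.
    Window size and alert threshold are illustrative, not vendor defaults."""

    def __init__(self, window: int = 100, min_accuracy: float = 0.90):
        self.outcomes = deque(maxlen=window)  # True = acceptable output
        self.min_accuracy = min_accuracy

    def record(self, acceptable: bool) -> None:
        self.outcomes.append(acceptable)
        accuracy = sum(self.outcomes) / len(self.outcomes)
        # Flag early, while course correction is still cheap, rather than
        # discovering the drift in a quarterly post-mortem.
        if len(self.outcomes) == self.outcomes.maxlen and accuracy < self.min_accuracy:
            print(f"ALERT: rolling accuracy {accuracy:.1%} below {self.min_accuracy:.0%}")

monitor = MetricMonitor(window=50, min_accuracy=0.90)
for i in range(60):
    monitor.record(acceptable=(i % 4 != 0))  # simulated 75% acceptable rate
```

The point isn't this particular widget - it's that the customer-defined metrics from point 2 should be watched continuously, not reviewed quarterly.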

4. AI projects might start small, but they must fit into the context of an overall business transformation - or, if not transformation, then at least a long term plan for competitive success in your industry. Constellation's Ray Wang recently authored a notable piece on this topic, Big Idea: Return on Transformation Investments (RTI).

5. If the AI solution isn't tailored to your industry (and your industry's data), it probably isn't the right solution. There will be a few generalized exceptions, such as AI-generated job descriptions, but I wouldn't go deep into the AI pool without a partner that understands your industry's benchmarks, data, and data pitfalls.

6. Generative AI is probabilistic, not definitive - but how much inaccuracy can you tolerate? Inaccuracy tolerance varies by use case. What outliers are expected, and what is the potential cost/liability of those outliers? Example: a personalized recommendation "assistant" might be fine with 80 percent accuracy rates. Medical diagnostics assistants would demand much higher accuracy, with far greater outlier concerns - risks that could be mitigated via human-in-loop design. (By the way, I'm officially banning the use of 'co-pilot' in any AI vendor speak, because co-pilots are supposed to be able to fly the plane by themselves... Wish me luck). It might seem like publishing AI-generated content has a suitable margin for error when it comes to accuracy, but look out - CNET and Sports Illustrated are still regrouping from brand damage from this type of AI overreach. Accuracy and the true cost of outliers is the new AI ROI math; calculate with care (the cost of outliers is the biggest reason why autonomous vehicles are still in the slow lane, years after the prognosticators said we would have them).
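
To illustrate that ROI math, here is a back-of-envelope calculation - every number in it is invented for illustration, not a benchmark:

```python
# Back-of-envelope expected cost of AI errors per month.
# All volumes, accuracy rates, and per-error costs below are invented.

def monthly_error_cost(decisions_per_month: int, accuracy: float, cost_per_error: float) -> float:
    """Expected cost = volume * error rate * average cost of one bad outcome."""
    return decisions_per_month * (1 - accuracy) * cost_per_error

# A product recommender: errors are cheap (a shrugged-off suggestion).
recommender = monthly_error_cost(1_000_000, accuracy=0.80, cost_per_error=0.05)

# A diagnostic assistant: errors are rare, but each one is very expensive.
diagnostics = monthly_error_cost(10_000, accuracy=0.99, cost_per_error=5_000.0)

print(f"Recommender expected error cost: ${recommender:,.0f}/month")   # $10,000
print(f"Diagnostics expected error cost: ${diagnostics:,.0f}/month")  # $500,000
```

Same arithmetic, radically different exposure - which is why accuracy tolerance has to be set per use case, not per model.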

7. Hire an independent auditor of your AI systems for bias, compliance, and data privacy. I'm starting to see the emergence of "algorithm auditors" - I hope to follow that up in a future piece. Your SI and vendor might not love such an auditor; in fact, I'm sure they won't. That's why you should introduce the concept early, and see which vendors are negotiable on this, and which ones run for the hills. Be wary of vendors promising to "eliminate AI bias" or "eliminate hallucinations." Confronting bias is an ongoing organizational discipline to achieve proper results without serious blowback; it cannot be eliminated (actually, machines can help to reduce human bias if we would only apply more imagination to AI, but that's another article). Hallucinations aren't the big enterprise concern some think - accuracy is, and accuracy comes down to choosing the right use cases.
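
For a flavor of what an algorithm auditor might compute, here is a minimal check of false positive rates across groups in Python - the records are fabricated, and a real audit goes much deeper than one metric:

```python
# Minimal sketch of one check an algorithm auditor might run:
# compare false positive rates across groups. All data below is fabricated.

from collections import defaultdict

# (group, model_flagged, actually_positive)
records = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]

flagged_negatives = defaultdict(int)  # false positives per group
total_negatives = defaultdict(int)    # actual negatives per group

for group, flagged, positive in records:
    if not positive:
        total_negatives[group] += 1
        if flagged:
            flagged_negatives[group] += 1

for group in total_negatives:
    fpr = flagged_negatives[group] / total_negatives[group]
    print(f"{group}: false positive rate {fpr:.0%}")
# A large gap between groups is a red flag worth escalating - exactly the
# kind of audit Rite Aid apparently never ran.
```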

8. AI projects are not auto-magical; they should be subject to the same project disciplines as any other technology. I have some outspoken tactics for project success in general, including:

  • Systems of accountability, from dashboards to external independents to track project metrics and provide gut checks.
  • Customer-driven KPIs, as per above. How is this project improving your service to customers, suppliers, and other stakeholder groups (yes, employees count as a stakeholder group too)?
  • Maturity models to track your evolution. Don't begin a project without a maturity model to aspire to. Ask your vendor to produce their AI maturity model, and to show you where customers get stuck - and where the big payoffs are. Then, adapt that model to your own pursuits. (I've seen customers adapt models the vendor has downplayed or discarded - for example, an "automated enterprise" maturity model that had fallen out of fashion.)
  • Employee buy-in - AI success goes beyond change management. Employees rightly question how AI will impact their jobs and roles. Clear communication isn't enough. Involve employees in project goal setting, training/upskilling opportunities - and defining a compelling vision for your internal work future. "Automated enterprise" is only compelling if that automation helps human employees reach their potential too. If headcount reductions must be part of this, that requires clear explanation as well.

My take

Gen AI can seem like a contradiction. LLMs struggle to generalize outside of their data sets, and yet there are plenty of gen AI scenarios across departments. Enterprises have their hands full understanding which gen AI use cases to prioritize - and gen AI is just one of many AI approaches (AI Today talks about seven different AI patterns to evaluate).

The best way to solve this is the same as always: tackle burning problems and opportunities, and apply the best tech for the job - AI or not. But when - and if - AI comes into play, I hope this piece provokes a sharper conversation about what success looks like.

End note: for those enterprises considering open source AI projects, I recommend Neil Raden's diginomica post on the impact of the EU AI Act on open source AI. Also check out Vijay Vijayasankar's Will Open source win over proprietary LLM for enterprises?, where I left a lengthy reader comment.

*Over on LinkedIn, Sameer Patel raised an interesting point about achieving broader productivity gains via "co-pilot" (ugh) type functionality in core productivity tools like email, via Google and Microsoft. While technically I'd put that in the third customer group above (acquiring AI from vendors), it is a bit of a different scenario as you would likely just consume that via an existing productivity vendor relationship, perhaps with add-on pricing. But check my back-and-forth with Patel for more of my views on this. I probably could have also addressed incorporating gen AI into internal development/coding/low-code solutions. Companies with internal IT departments that build workflow and external apps will likely look into this. It's a use case worth a close evaluation, but really requiring its own article. I factor that into a broader conversation about low-code/no-code, where gen AI may help in a couple different ways. Martin Banks wrote about this on diginomica - see Citizen Developer - your time has come!
