The five 'E's needed to put together a pragmatic AI strategy

George Lawton, July 20, 2023
Summary:
A pragmatic five-step program for arriving at a viable AI strategy.

AI hype is everywhere these days, fueled in part by the breakout success of ChatGPT. Early AI pioneers like Geoffrey Hinton are urging caution, while BCS, the UK's chartered institute for IT, is calling on executives to consider how AI could be “a force for good.”

Enter Nick Kramer, Leader of Applied Solutions at consultancy SSA & Company, who argues that the risks and opportunities of AI are similar to those of previous tech waves like cloud, mobile, and big data. The problem, he surmises, is that many tech leaders are so caught up in theoretical risks and benefits that they fail to take pragmatic action:

Business strategy can’t be guided by theoretical risks of AI, no matter how dire, just as it can’t be guided by theoretical benefits, no matter how promising. Of course, doing nothing is an equally untenable response. While AI’s risks and opportunities are more existential and revolutionary, the challenges in adopting innovation are recurring - whether it’s the internet, cloud, blockchain, or now AI - and they result in wasted budget and effort without creating differentiation or benefits.

Under this increased pressure, many leaders become too reactive, responding to boards clamoring for action and demanding a plan without a concrete vision to unify and guide the adoption of AI. One big risk is that AI innovation budgets end up funding poorly planned initiatives, regardless of whether AI is the right solution to the problem. On the risk side of the hype, security and compliance teams may be allowed to constrain innovation out of misplaced fear. And in the middle, day-to-day issues keep deprioritizing AI adoption.

Kramer argues:

What’s needed at the table is an objective outside party, one without an agenda that’s credible, above the political fray, and that doesn’t have a dog in the fight. With that credibility, the third party can help implement an effective innovation playbook built from experiences across industries and hype cycles.

Kramer frames this problem by asking enterprises to consider the 'Five Es' of innovation adoption when developing a pragmatic AI playbook: Educate, Establish, Experiment, Execute, and Evolve. He explains:

By deploying these ‘Five Es,’ companies create the foundational structures - supported by the appropriate knowledge and capabilities - for an AI innovation strategy that integrates risk awareness and mitigation into the process of delivering that strategy.

Educate

The first step is getting the lay of the emerging AI landscape. Although AI introduces new development processes and failure modes, it also builds on existing concepts. For example, shadow AI is simply an evolution of the “shadow IT” that enterprises have struggled with since the dawn of cloud services. Kramer says:

Executives don't need to be AI experts, but they should possess a fundamental understanding of AI, machine learning, and other emerging technologies. This knowledge will aid in alignment and help in evaluating potential risks and opportunities.

A base understanding of how AI issues - like hallucinations, automation, governance, transparency, and risk management - can impact the business goes a long way toward better conversations with boards, partners, customers, and employees. It contextualizes and structures communications of basic AI/ML principles throughout the organization. A shared foundation of knowledge can also prepare the organization to build or acquire the internal data science capabilities required to deliver and evolve AI/ML long-term.

It’s also important to consider education as part of an ongoing process. Kramer says: 

A hallmark of innovative technologies is constant change, so education and communications must be ongoing.

Establish

Next up, teams need to get the ball rolling with a specific framework of technology, policy, and processes. It may not be perfect, but at least this can help build momentum. Kramer observes:

What we’ve found makes the biggest difference in successful playbooks is that the earlier in the process decisions are made on crucial structural requirements in operationalizing AI or any other innovation, the better.

There are a lot of moving parts to consider when rolling out AI, so it's important to think about how this effort can build on, rather than replace, what you are already doing. For example, consider how to extend existing governance processes and policies to support AI governance mechanisms. These should address the way new AI apps and usage can affect responsible use, bias and fairness, transparency, robustness and reliability, and privacy and security. It is also important to keep up with the potential impact of the latest AI-related regulations and standards, both locally and globally.
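For illustration, extending an existing review process might be as simple as adding AI-specific checks to the checklist reviewers already work through. The sketch below is a minimal Python illustration; the `GovernanceCheck` structure and the wording of each check are assumptions for the example, not an established standard.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch: AI-specific review dimensions added to an
# existing governance checklist. Names and questions are illustrative.
@dataclass
class GovernanceCheck:
    dimension: str                 # e.g. "bias and fairness"
    question: str                  # what the reviewer must answer
    passed: Optional[bool] = None  # None until the review happens

AI_GOVERNANCE_CHECKS = [
    GovernanceCheck("responsible use", "Is the intended use consistent with policy?"),
    GovernanceCheck("bias and fairness", "Have outcomes been tested across user groups?"),
    GovernanceCheck("transparency", "Can the system's decisions be explained?"),
    GovernanceCheck("robustness and reliability", "How does it behave on bad input?"),
    GovernanceCheck("privacy and security", "Is personal data handled per regulation?"),
]

def unresolved(checks: list[GovernanceCheck]) -> list[str]:
    """Return the dimensions that still block approval."""
    return [c.dimension for c in checks if c.passed is not True]
```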

It's also helpful to incorporate AI into overall risk assessment methodologies and practices. Teams need to identify potential risks associated with deploying AI technology - including cybersecurity threats, privacy breaches, and reputational risks - and prepare for the worst-case scenario. Also, develop an emergency response plan that can be implemented if an AI system fails or causes harm.
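To make that concrete, an AI risk register can reuse the likelihood-times-impact scoring most risk teams already apply elsewhere. This is a minimal sketch; the risk names, 1-5 scales, and escalation threshold are assumptions for the example.

```python
# Illustrative AI risk register using a familiar likelihood x impact score.
# Risks, 1-5 scales, and the threshold are assumptions for this sketch.
RISKS = {
    "cybersecurity threat": {"likelihood": 3, "impact": 5},
    "privacy breach":       {"likelihood": 2, "impact": 5},
    "reputational harm":    {"likelihood": 3, "impact": 4},
    "model hallucination":  {"likelihood": 4, "impact": 3},
}

ESCALATION_THRESHOLD = 12  # scores at or above this need a response plan

def risks_requiring_response_plan(risks: dict) -> list[str]:
    """Return risks severe enough to need a documented emergency response."""
    return sorted(
        name for name, r in risks.items()
        if r["likelihood"] * r["impact"] >= ESCALATION_THRESHOLD
    )

print(risks_requiring_response_plan(RISKS))
# ['cybersecurity threat', 'model hallucination', 'reputational harm']
```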

With risk management in place, you can start to build or add internal AI expertise to lead the technical execution of AI, work with third parties and mentor others within the organization. This requires investing in data scientists and analytics tools and upskilling existing employees. Kramer argues: 

The ownership of this capability must live within the business and not be considered an IT function.

Experiment

With the proper foundation in place, it's time to start experimenting, armed with a clear framework for evaluating how AI pilots fit into the company’s strategy. It's also wise to plan for unfulfilled expectations. Kramer suggests developing metrics to separate the wheat from the chaff (a minimal sketch of how such criteria might be encoded appears after the list), such as:

  • A clear description of realistically achievable and quantifiable benefits through the AI application.
  • Measurements and go/no-go milestones that provide multiple opportunities to assess progress and success with the opportunity to fail fast and minimize investment.
  • Business sponsorship and accountability of the pilot and its benefits.
  • Technical fit assessments that are thorough enough to reject pilots that are simply not suited for AI.
  • Agile delivery standards and templates to provide a consistent approach to execution that drives transparency, collaboration, and success assessment.
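
Here is that sketch: a go/no-go gate built on criteria like those above. Every metric name and threshold below is illustrative, chosen only to show the mechanics; nothing here comes from Kramer's playbook.

```python
from dataclasses import dataclass

# Hypothetical pilot scorecard: metric names and thresholds are
# illustrative, chosen only to show the go/no-go mechanics.
@dataclass
class PilotMilestone:
    name: str
    target: float    # the quantifiable benefit committed to up front
    measured: float  # what the pilot has actually delivered so far

def go_no_go(milestones: list[PilotMilestone], min_hit_rate: float = 0.75) -> str:
    """Continue only if enough milestones hit their targets; fail fast otherwise."""
    hits = sum(m.measured >= m.target for m in milestones)
    return "GO" if hits / len(milestones) >= min_hit_rate else "NO-GO: end pilot"

checkpoint = [
    PilotMilestone("support tickets deflected per week", target=200, measured=240),
    PilotMilestone("draft response accuracy (%)", target=90, measured=86),
    PilotMilestone("analyst hours saved per week", target=40, measured=52),
]
print(go_no_go(checkpoint))  # NO-GO: end pilot (2 of 3 targets met, below 75%)
```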

Once an evaluation framework is in place, you can encourage submissions of new ideas and thoughtfully prioritize AI pilots and investments. A top consideration should be AI systems that augment human decision-making rather than replacing it entirely. Especially at the beginning of any pilot, this helps maintain a balance between AI capabilities and human oversight, preventing over-reliance on AI and mitigating potential risks.

It's also helpful to create a framework that encourages employees across the company to experiment with new ideas safely rather than betting on a few ideas. Kramer explains: 

We often see companies that invest too much in too few pilots, which leads to a reluctance to end pilots early enough, even when success becomes unlikely. We advise companies to increase capacity for testing and test as many pilots as possible. That places multiple small bets, increasing the likelihood of finding the ‘killer app’ and other real wins from AI. It also feeds the knowledge of AI throughout the organization. This approach can only work effectively when the company is able to evaluate progress frequently and be willing to end pilots, embracing a ‘fail fast’ mindset.
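Kramer's 'many small bets' logic is easy to quantify. Assuming, purely for illustration, that each pilot independently has a 10% chance of becoming a real win, the odds of finding at least one winner are 1 - 0.9^n:

```python
# Illustration of the 'many small bets' argument: for a fixed budget,
# more, cheaper pilots raise the odds of at least one real win.
# The 10% per-pilot success rate is an assumption, not a measured figure.
p_success = 0.10

for n_pilots in (2, 5, 10, 20):
    p_at_least_one = 1 - (1 - p_success) ** n_pilots
    print(f"{n_pilots:>2} pilots -> {p_at_least_one:.0%} chance of at least one win")

#  2 pilots -> 19% chance of at least one win
#  5 pilots -> 41% chance of at least one win
# 10 pilots -> 65% chance of at least one win
# 20 pilots -> 88% chance of at least one win
```

The math assumes independent pilots, which real portfolios only approximate, but the direction of the effect is why a fail-fast portfolio beats a few big bets.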

Execute

When launching pilots, it's important to incorporate standard practices for delivery, communications, and collaboration; this will improve each pilot's chance of success. Once a pilot meets or exceeds your goals, you need a framework to scale it up and increase the business benefits. It is important to have already laid out the business case, with measurable outcomes and milestones, so you can prioritize investments and scale the most successful pilots.

A keystone of the execution phase is building out AI development infrastructure and processes. These also need to promote transparency in AI systems, providing a framework for explaining how decisions are made, particularly in high-stakes domains. This will help build trust with stakeholders while creating the visibility to address risks proactively before they become issues.
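One common building block for that kind of transparency is a decision audit trail: recording each AI-assisted decision with enough context to explain it after the fact. The sketch below is one generic way to do it; the field names and the example decision are hypothetical.

```python
import json
import time

# Hypothetical decision audit trail: log enough context with each
# AI-assisted decision to explain it to a stakeholder later.
def log_decision(model_version: str, inputs: dict, output: str,
                 explanation: str, path: str = "ai_decisions.jsonl") -> None:
    record = {
        "timestamp": time.time(),
        "model_version": model_version,  # which model produced the decision
        "inputs": inputs,                # the features the decision used
        "output": output,                # the decision itself
        "explanation": explanation,      # human-readable rationale
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision(
    model_version="credit-risk-v3",
    inputs={"income": 52000, "debt_ratio": 0.31},
    output="refer to human underwriter",
    explanation="score near threshold; high-stakes domain requires human review",
)
```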

Evolve

A long-term AI strategy also needs to consider AI as an ongoing process rather than one-off projects. The technology is rapidly evolving, so the approach to managing it must adapt accordingly. This includes regularly updating guidelines and policies to align with new technologies, ethical considerations, and legal frameworks. Kramer says:

Encourage collaboration with AI experts, ethical committees, and legal authorities to stay abreast of the latest developments and best practices in AI management. It’s important to bring the continuous improvement tools and methods that work in other parts of the business into the ongoing evolution of the company’s AI.

My take

I agree with Kramer that a pragmatic approach is required to avoid getting caught up in either the hype or the doomerism around AI's opportunities and risks. And developing some framework for understanding the field, getting the ball rolling, and learning from experience is key.

Every company should at least develop a framework to explore this field safely, whether that includes three, five, or seven things to think about. Personally, I think three is probably the most that people could broadly agree upon at scale. My three would correlate with Kramer’s notions of Educate, Establish, and Experiment Safely.

And by education, I don’t think we need to approach AI as a dark new art. Most things we are bumping up against are variations or accelerations of things we are already familiar with. The sooner we can develop a shared understanding of what ideas like responsible gamification (or, as Jon Reed calls it, “real-time opportunities”), hallucination, provenance, and privacy mean in the course of our business and IT practices, the better the conversations we can have.
