Enterprise AI in practice - addressing the top CIO questions

Neil Raden, September 14, 2023
Summary:
Enterprise AI is the surging topic, or, in some cases, the hot potato. A recent CIO forum raised critical issues - here are my responses.

(Question marks concept image © BlenderTimer - Pixabay)

I read a post on LinkedIn by Tiarne Hawkins summarizing her takeaways from a CIO roundtable she conducted recently. The roundtable raised some important issues. I’ll add my thoughts after each one.

Definition and authenticity of AI

What actually makes your offering AI? 

It is important for leaders to understand what truly qualifies as AI and not just basic automation. It's essential to discern between genuine AI offerings and those that are misrepresented as AI when they do not have human-like thinking and learning capabilities.

That’s easy – none of them have human-like thinking and learning capabilities. I think what Hawkins is getting at is discerning the difference between real AI and what we have now. I suppose it's OK for leaders to understand the difference when making policy or creating a contract, but what difference does it make in commercial organizations? As Deng Xiaoping famously said, "It doesn't matter whether a cat is black or white, as long as it catches mice." These semantic distinctions are not helpful in applying the technologies at hand and speculating when or if they will be “human-like” is just a distraction.

Our "Available AI" is a better way to phrase it. The fundamental technology is Machine Learning, the process of reasoning from associations found in the data without prior knowledge of context and using mathematical and statistical algorithms. Neural networks, the amped-up versions as Deep Learning and even LLM, still rely on the bedrock of Machine Learning. 

But generative AI, particularly LLMs, sits on the razor's edge between Available AI and human-like intelligence, not because its internals are intelligent, but because it GENERATES something: creative writing, pictures, music, even code.

The AI cold start problem

How do you address the cold-start problem? 

Leaders want to ensure that the AI system can be effective even when there's a lack of historical data, as AI relies on data to become more intelligent.

The cold start problem occurs when, for instance, a recommender system lacks sufficient information to make reliable predictions or suggestions for a user or an item. By design, a recommender system analyzes data you most likely never collected before. The problem also arises when a new customer has yet to provide ratings or feedback, a new item has not yet received any, a user or item belongs to a niche or rare category, or a user's or item's preferences change over time. In any of these cases, the system can only make accurate recommendations once there has been enough activity to build credible data. How to solve it? The hot solution today is synthetic data.
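As a rough illustration of how synthetic data can bridge a cold start, here is a minimal sketch that fabricates plausible ratings from assumed category popularity until real feedback accumulates. Every name, distribution and parameter in it is an assumption for illustration only, not a recommendation.

```python
# Minimal sketch: bootstrapping a recommender with synthetic interactions.
# All names, distributions, and parameters here are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

items = pd.DataFrame({
    "item_id": range(100),
    "category": rng.choice(["books", "tools", "apparel"], size=100),
})

# No real ratings yet (cold start), so simulate plausible ones from
# assumed category popularity until genuine feedback accumulates.
category_popularity = {"books": 4.0, "tools": 3.2, "apparel": 3.6}
synthetic = pd.DataFrame({
    "user_id": rng.integers(0, 500, size=5000),
    "item_id": rng.integers(0, 100, size=5000),
})
synthetic["rating"] = synthetic["item_id"].map(
    items.set_index("item_id")["category"].map(category_popularity)
) + rng.normal(0, 0.5, size=len(synthetic))

# A trivial "recommender": rank items by mean synthetic rating,
# to be replaced as real interaction data arrives.
top_items = synthetic.groupby("item_id")["rating"].mean().nlargest(5)
print(top_items)
```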

Ethical concerns

How are you addressing ethics?  

With the potential misuse of AI in areas such as manipulating elections or influencing purchasing decisions, leaders are concerned about the ethical ramifications. They seek vendors that consider AI ethics and have policies and training in place to address these concerns.

The first question should be, "Why are you only thinking about that now?" What have you done about these ramifications with previous systems, developed or purchased, that were just as likely to permit unethical behavior? Seeking vendors with AI policies may help, but only a little. What leaders need to do is take a historical view: how have you and others in your industry produced unethical or unfair outcomes from digital systems?

Diversity concerns

How are you mitigating bias?

AI bias remains a critical concern in 2023. Efforts have been undertaken to address biases in AI algorithms, particularly in sensitive areas such as criminal justice and hiring. Although attempts have been made to develop transparent and fair AI systems, challenges remain, especially in terms of recognizing and eliminating deeply ingrained biases in data.

There are two separate topics here, diversity and bias, but the text above addresses bias directly and diversity only elliptically. In the context of AI, bias is not limited to the systemic reflection of negative perceptions in the data; it also encompasses the modeler’s sensitivity, the unexpected and emergent properties of Machine Learning, and even the interpretation and application of the inferences drawn from AI models.

Since the answer above mentions criminal justice AND hiring, it touches on two kinds of diversity concerns: achieving a more diverse community (DEI: Diversity, Equity and Inclusion) and fostering diversity of staff and thought. Neither will find a solution solely in “recognizing and eliminating deeply ingrained biases in data.”
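For what "recognizing" bias in data can look like at its most basic, here is a minimal sketch of one common check, the demographic parity difference, on hypothetical hiring data. The column names and the choice of metric are my assumptions, and passing such a check says nothing about the deeper, structural concerns above.

```python
# Minimal sketch: one basic bias check (demographic parity difference)
# on hypothetical hiring data. Column names are illustrative assumptions,
# and a single metric cannot rule out deeper, structural bias.
import pandas as pd

df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "selected": [1,    0,   1,   0,   0,   1,   0,   1],
})

selection_rate = df.groupby("group")["selected"].mean()
parity_gap = selection_rate.max() - selection_rate.min()

print(selection_rate)
print(f"Demographic parity difference: {parity_gap:.2f}")
# A large gap flags disparate selection rates between groups; it says
# nothing about why the gap exists or how the scores will be used.
```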

Human interaction

How will AI impact the "human touch" in interactions, especially for companies that value personalized customer relationships? 

Leaders are cautious about the potential loss of unique voice and tone in content, marketing messages, and customer interactions due to over-reliance on AI.

For most organizations, the pre-AI technology for customer service is already the status quo; the “personalized customer relationship" train left the station a long time ago. I suspect that companies that truly value personalized customer relationships have priced in the cost of providing it, and their customers are willing to absorb it.

Economic impact

How might AI affect job opportunities, especially for underprivileged individuals? 

There are concerns that as AI takes over certain roles, opportunities for individuals might diminish, leading to economic and social implications.

It is not clear how AI will affect jobs. Clever software that displaces work has existed for decades. Welding robots on assembly lines became commonplace, and human welders lost the race. However, there was a social benefit: robot welding protected workers from arc burns and from inhaling hazardous fumes. Only a year or two ago, the discussion about job loss centered on low-skilled, low-paid workers, but with the frenzy around LLMs, it is clear that skilled workers, such as educators, HR staff, lawyers and judges (especially their clerks), government workers, clinicians (psychiatrists, psychologists, physicians, nurses, dentists) and auditors, are likely to see their work altered or eliminated. With the potential for such widespread upheaval, the inevitability of AI is likely to be attenuated by resistance and regulation.

Operational advantages

How can AI enhance customer experience, optimize operational processes, and improve risk management? 

Leaders see the potential of AI in transforming industries and business operations and seek to leverage its advantages for personalized experiences, optimized processes, and proactive risk management.

  • Enhance customer experience: Volumes have been written about this, but one thing stands out: the Eddy Effect, according to Authenticx.com: “The Eddy Effect occurs when a customer’s desired or expected experience is disrupted by an obstacle that causes the customer to feel ‘stuck’ in a problem.” (Watch for my write-up about Authenticx in a future article.) Until customer-facing systems can handle complicated, multi-step problems, "automated attendants" and DIY "My" sites will remain frustrating.
  • Optimize operational processes: I wonder if people are really interested in optimizing anything. Optimizing means finding the "best" solution, which isn't often practical. Some companies want to evaluate reasonable scenarios, taking in as many variables, conditions, relationships, and confounding factors as possible. A more common approach is "trade-off" analysis (see the sketch after this list). Through trade-offs, you can clarify objectives and priorities, identify and quantify the trade-offs involved in each option, explore the implications and consequences of each option, communicate and justify your decisions with evidence and logic, and adapt and improve them over time. But AI doesn’t do that. The model you create (and hopefully continuously compare to alternatives) is used by AI to take action, not to reflect.
  • Improve risk management: AI, at this point, will challenge risk management as it introduces new and unseen risks that can appear and repeat millions of times before you even notice. However, AI already improves efficiency and productivity in risk management while reducing costs. AI's ability to handle and analyze large volumes of unstructured data faster, with considerably lower degrees of human intervention, makes risk management more accurate and efficient.
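To illustrate the trade-off approach described above, here is a minimal sketch of a weighted scoring of a few scenarios. The criteria, weights and scores are illustrative assumptions; the point is comparison and discussion among options, not a claim of optimality.

```python
# Minimal sketch: weighted trade-off analysis over a few scenarios.
# Criteria, weights, and scores are illustrative assumptions, not
# recommendations; the aim is comparing options, not "optimizing".
import pandas as pd

scenarios = pd.DataFrame(
    {
        "cost":          [0.9, 0.6, 0.4],   # higher = better (cheaper)
        "time_to_value": [0.5, 0.8, 0.6],
        "risk":          [0.7, 0.5, 0.9],   # higher = lower risk
    },
    index=["Buy SaaS", "Build in-house", "Hybrid"],
)

weights = {"cost": 0.4, "time_to_value": 0.35, "risk": 0.25}

# Weighted sum per scenario; changing the weights makes the
# priorities and their consequences explicit and debatable.
scenarios["weighted_score"] = sum(
    scenarios[criterion] * weight for criterion, weight in weights.items()
)
print(scenarios.sort_values("weighted_score", ascending=False))
```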

Strategic deployment

How can AI provide competitive advantages, support data-driven decision-making, and increase operational efficiency? 

AI's potential to outperform competitors, offer innovative solutions, and enhance overall business performance is of great interest to leaders. They are looking for ways to strategically deploy AI for maximum benefit, as seen in companies like JP Morgan Chase, Salesforce, Tesla and Amazon.

What is it they expect to gain? The first rule in enterprise AI is understanding what AI can provide. Companies like the four mentioned have vast resources to apply. The failure rate of AI projects is very high, and the main culprit is an inadequate data management infrastructure to support them. The problems with data are broader and deeper than just extracting it from one place and putting it somewhere else. Cleansing, enriching, parsing, normalizing, transforming, filtering, shaping, integrating and formatting data prior to its intended use should be entirely automated. But first, you must find it, understand what it means, discover its provenance, and get access to it if it is protected.
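As a small illustration of what "entirely automated" preparation implies, here is a minimal sketch of a repeatable pipeline in pandas. The file name, columns and cleaning rules are assumptions for illustration only.

```python
# Minimal sketch: a repeatable data-preparation pipeline in pandas.
# File name, column names, and cleaning rules are illustrative assumptions;
# the point is that these steps should be automated, not done by hand.
import pandas as pd

def prepare(path: str) -> pd.DataFrame:
    df = pd.read_csv(path)

    # Cleanse: drop exact duplicates and rows missing the key field.
    df = df.drop_duplicates().dropna(subset=["customer_id"])

    # Normalize: standardize text casing and parse dates.
    df["region"] = df["region"].str.strip().str.upper()
    df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")

    # Enrich: derive a feature downstream models will expect.
    df["tenure_days"] = (pd.Timestamp.today() - df["signup_date"]).dt.days

    # Filter and shape: keep only the columns the model is entitled to see.
    return df[["customer_id", "region", "tenure_days"]]

clean = prepare("raw_customers.csv")   # hypothetical extract
print(clean.head())
```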

Given these various perspectives and concerns, it's evident that while AI offers opportunities, it also brings about challenges.
