Enterprise hits and misses - AI has an iPhone moment, but should we press pause? Automation changes careers, and transformation needs ROI

Jon Reed - April 3, 2023
Summary:
This week - AI tech is surging, but can we press pause? Enterprises grapple with AI ethics, automation, and the ROI of digital transformation. Whiffs feature the Metaverse and AI job loss alarmism.


Lead story - AI iPhone moment is a gut check for the enterprise and for AI ethics

AI tech is surging, on the back of widespread consumer adoption - not to mention polarizing debates on the consequences. But what does this mean for the enterprise?

Start with Chris' AI automates systemic racism, warns IBM strategist - why enterprises need to act now, a notable book review and interview with IBM's Calvin D. Lawrence:

In a deeply personal introduction, Lawrence writes:

'It’s a painful reality that AI doesn’t fail everyone equally. When tech goes wrong, it often goes terribly for People of Color. That’s not to indicate that the objects of AI’s failure are somehow always pre-determined, but even when the outcome is unintended and unwelcome (even by the perpetrator), we still feel the effects and the impact.'

Yet Lawrence isn’t some Luddite with a grudge or a skeptical academic - he is reporting on these problems from the inside, as both a Black American and as a highly experienced computer scientist.

So where do we go from here? Lawrence doesn't see easy answers, though he does believe regulatory bodies must play a role, given the profit incentive behind relentless automation. Beyond bias and the lack of diversity in AI teams (and training data), Lawrence warns of a problem we don't hear enough about: data drift. Via Chris:

'Even if you do all the right things, data will move. Data changes constantly. And it will become biased, and it will introduce bias into your processes.'

As a result, constant vigilance is essential, he says – particularly when you inadvertently create feedback loops or gravity wells of biased behavior, as suggested by some examples in the book.
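
Lawrence's warning has a concrete operational side. In practice, "constant vigilance" often means statistical drift monitoring: periodically comparing the distribution of live inputs against the training-time baseline, and alerting when they diverge. Here's a minimal illustrative sketch (my own, not from the book) using a two-sample Kolmogorov-Smirnov test - the feature, data, and alert threshold are all hypothetical stand-ins:

```python
# Minimal drift-monitoring sketch: compare a feature's training-time
# distribution against live production data and flag divergence.
# The data and threshold here are illustrative stand-ins.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Stand-in data: what the model saw at training time vs. what arrives live.
training_scores = rng.normal(loc=600, scale=50, size=5_000)  # e.g., credit scores
live_scores = rng.normal(loc=575, scale=60, size=5_000)      # population has shifted

# Two-sample Kolmogorov-Smirnov test: a small p-value means the two
# samples are unlikely to come from the same distribution.
statistic, p_value = ks_2samp(training_scores, live_scores)

DRIFT_ALERT_P = 0.01  # illustrative threshold; tune per feature and risk tolerance
if p_value < DRIFT_ALERT_P:
    print(f"Drift detected (KS={statistic:.3f}, p={p_value:.2e}) - audit for bias before retraining.")
else:
    print(f"No significant drift (KS={statistic:.3f}, p={p_value:.2e}).")
```

A distribution check like this won't catch bias by itself, but it flags when the ground has moved under the model - which, per Lawrence, is exactly when feedback loops of biased behavior tend to form.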

Derek issued his take on the hotly-debated pause-LLM-expansion-for-six-months letter in We must stop assuming AI will inevitably lead to net positive outcomes. The most intriguing thing about this widely-misunderstood letter? The 1,000 folks who signed it rarely agree with each other. Few of the signatories actually thought there would be any agreed-upon "pause" in anything AI-related. They wrote the letter to spark a more intense debate. As Derek writes:

Slowing down AI will be hard. But as a society we have the right to ask the question: should we be doing this? And if so, how? And what’s the plan for if we do?

That's exactly why I have the still-unfashionable take that enterprises - notoriously cautious about disruptive technology - have a key role to play here. Let's put generative AI through enterprise hoop jumps, and see where we land (see Ron Miller's paywalled TechCrunch article, Generative AI’s future in enterprise could be smaller, more focused language models). No, enterprise scale isn't an issue for generative AI, but cost, ethics, and rigorous approaches to data quality definitely are. We head closer to the enterprise via George's "The iPhone moment of AI" - NVIDIA Getty partnership foreshadows responsible AI era:

The intention to ensure responsible AI seems vital to building a community of creators and consumers rather than just a gimmicky service. It will also set the stage for thinking about designing new business processes that balance the interests of contributors, the company, and users.

One of the more sensible - and hopeful - things I've seen about AI this week... Note: also see Cath's AI and inequality - avoiding social upheaval in an age of automation.

diginomica picks - my top stories on diginomica this week

Vendor analysis, diginomica style. Here are my three top choices from our vendor coverage:

Domopalooza 2023 - diginomica team coverage - Domopalooza is in the books. As Alex put it: "Making a data investment is no small choice, and decision makers are under pressure to show results quickly to influence business outcomes." Domopalooza gave Domo a platform to share their response to that BI pressure - and to highlight customers putting BI in the hands of business users.

A few more vendor picks, without the quotables:

Jon's grab bag - Should you trust AI to manage your finances - Y/N? - Depends on how you put it, Chris. Manage? No. Help? Sure. Though Chris also warns: "A report from AI research company Evident this month says that 'there is no evidence of responsible AI activity' in eight of the world’s largest banks." On second thought... Meanwhile, Neil examines AI risk on a different front: Autonomous weapons systems - a cautionary use case for evaluating AI risks.

Derek looks at the UK's shifting stance on AI innovation, via UK sets out new approach to regulating AI that will replace ‘patchwork of legal regimes’. Finally, Madeline wraps up on a tech-for-good note in Keolis steers away from diesel to electric buses with Stratio AI tech: "We've got some clients super-committed to invest massive money in public transport." You don't read that every day...

Best of the enterprise web


My top seven

  • What the 'new automation' means for technology careers - Joe McKendrick on what AIOps means to the tech pros in the crisis/opportunity zone: "'Hiring for AIOps takes longer than implementing AIOps,' the report's authors state. 'Organizations should invest in retraining existing ITOps employees for AIOps wherever possible.'"
  • This ChatGPT rival is free, open source, and available now - ChatGPT grabbed the headlines, but in truth, individuals and enterprises will have a number of options to kick tires on.
  • AI tools see uptick in adoption by Coca-Cola, Instacart and other large brands despite risks - Bottom line - brands must respond, but with respect for how generative AI plays loose with the facts: "A safer use has been thinking of the tools as a brainstorming 'thought partner' that won't produce the final product, Gressel said. 'It helps create mock ups that then are going to be turned by a human into something that is more concrete,' she said."
  • Side-step Oxymorons to Make Progress in Planning - Lora Cecere busts up the supply chain planning myths: "The brain uses the nervous system to sense and respond with continuous feedback and sensory perception. The optimization of digital signals by technologists is a sharp contrast to the concept of thinking."
  • How to Realize the ROI in Digital Transformation Technologies - Third Stage's Eric Kimberling with some project-tested thoughts on that neglected topic of transformation ROI. I especially like the post-go-live remediation review: "Usually, it's not significant in terms of having to re-implement or make major massive overhauls to the way you deploy technology. Typically, it's a bunch of little things."
  • Content is the Printer, Experience is the Ink: Why Community and In-person Events are so Important (With Aphorisms) - Provocative post by Josh Greenbaum, though I find most vendors have the opposite problem: in-person event infatuation has them perpetually losing track of the power of hybrid and virtual, done the right (interactive!) way. Meanwhile, in-person events themselves are still in dire need of re-invention. Perhaps I'll get a longer comment in next week, but the post raises key points on what customer success should be about.

  • I am not afraid of robots. I am afraid of people - Gary Marcus explains why he signed a misunderstood letter alongside people he doesn't usually agree with, and text he didn't write. And: why he is afraid. I agree with Marcus that the letter's claim that "Contemporary AI systems are now becoming human-competitive at general tasks" is an overstatement.

Also, I believe that large language models like ChatGPT have inherent limitations that will not be overcome simply by pushing the boundaries of computing power and training data parameters. But Marcus signed the letter to instigate a bigger debate, a tactic I can agree with. And I suspect his reasons for being scared are not far from mine. ChatGPT has plenty of interesting use cases, but as of now, only two killer apps: 1. massive disinformation at scale, and 2. 'black hat' SEO that populates web sites and misdirects web traffic with AI-generated crud. The first of these is scary enough to require a more sophisticated discussion, including ethics and policy. Whether we will have it is another matter.


Whiffs

We've seen some truly goofy warnings about AI taking jobs away lately. Example: this report, These Are the Jobs Most Vulnerable to AI, Researchers Say, claims that teachers - including history teachers(!) - and finance jobs are the most at risk. To which I would simply say: I believe the researchers behind dubious reports are at even higher risk.

Meanwhile, I took another cheap-but-cathartic shot at Metaverse evangelists.

Speaking of whiffs coming back around, is Blockbuster coming back? Cryptic web site language says "We are working on rewinding your movie." Meanwhile, I went looking for some April Fools fun for you, but it seems this year's ideas are pretty warmed over. However, my colleague Alex Lee sent over this "We need to talk" send-up from Linus Tech Tips, for your viewing enjoyment. See you next time...

If you find an #ensw piece that qualifies for hits and misses - in a good or bad way - let me know in the comments as Clive (almost) always does. Most Enterprise hits and misses articles are selected from my curated @jonerpnewsfeed.

 
