Enterprise hits and misses - Adobe runs into Figma acquisition obstacles, ChatGPT flummoxes regulators, and the data-driven enterprise gets debunked

Jon Reed - February 27, 2023
Summary:
This week - Adobe's Figma acquisition is far from a done deal. ChatGPT and generative AI provoke policy questions - will we get this right? Plus: the data-driven enterprise falls short, but why? In this week's whiffs, I give unlikely credit to... chatbots?

Lead story - Data-driven decision making - reality, or marketing fluffery?

Of all the over-inflated, top-heavy enterprise bromides of the last ten years, does anything beat the self-congratulatory "data-driven decision making"?

In Doubling down on double standards - why won't organizations practice what they preach when it comes to data-driven decision-making?, Stuart says "organizational platitudes about data-driven decision making aren't matched by what's really happening in the business world." And why is that, Stuart? He cites a provocative/disconcerting survey from Salesforce:

Given the advances in analytics tech that have taken place over those decades, it might be hoped that things had progressed further than they appear to, at least according to the Salesforce poll of “10,000 business leaders”. But in fact, while 80% of respondents are happy to state that data is critical in decision-making at their organizations, putting this into practice falls considerably short of the importance that it theoretically carries.

For example, over two-thirds (67%) of global respondents are not using data to make pricing decisions in line with real-world economic conditions. Given the turbulent nature of the worldwide macro-economic climate, that seems like a missed opportunity today more than ever.

A mixed bag, at best. Stuart piles on:

Meanwhile less than a third (29%) use data to inform their strategic planning for entering new markets, only 21% use data to drive decisions around diversity and inclusion policies, while a mere 17% base determining climate targets on hard data, this last point despite the near universal protestations of organizational commitment to sustainability initiatives!

Let me get this straight - the more important the decision, the less of a role data plays... Awesome. Granted, there are geographical variations in this survey, which Stuart delves into. Still, enterprises are most definitely investing in data projects. So it's back to Stuart's uncomfortable question: "Why aren’t organizations practising what they preach?"

Overall, the global respondents cite three major blockages. Firstly, lack of understanding of data, cited by 41%. Secondly, an inability to generate insight from data. Thirdly, there’s just too much data to cope with, 30%.

Stuart adds analytics/data science skills shortages into the obstacle mix as well. There isn't one magical solution to these dilemmas. But, as Stuart urges, we can start by curbing big data obsessions:

And at a data-greedy exec level, we all need to learn to say, ‘No!’. Just because there’s more data on offer, doesn’t mean it's good for you. Less can very often be more. Quality, not quantity, is key to what data you need. 

True. There will be no data-driven decisions without ironclad trust in that data. If the data quality falters, so does that trust. However, in Is the data-driven enterprise an oxymoron?, Neil takes the problem even further, questioning the premise of the data-driven organization entirely:

Reading between the lines, Watkinson is saying that business decisions aren't science. It's a provocative assertion. It goes to the heart of the current proposition that companies need to be data-driven.

What I take from Neil's analysis is that just by achieving trust in data quality (a far from modest problem in itself), you won't become "data-driven." Which gets to my particular grinding axe: I reject the "data-driven organization" model entirely. I believe in data-informed decision making. Data may be flawed or biased - it should be a decision making asset, not the driver. We should always question the data, probe the data one level deeper. That said, what I like about the "data-driven" phrase is that it implies we set aside our own assumptions/beliefs/market tactics, if (validated) data contradicts us. That's a healthy discipline all enterprises can benefit from.

Diginomica picks - my top stories on diginomica this week

Vendor analysis, diginomica style. Here are my three top choices from our vendor coverage:

Jon's grab bag - If you're not wild about the Monday - Friday punchcard thing, you might get a wee lift from Derek's latest: Results from the world’s largest four day working week trial show promise for the progressive policy. Madeline penned a story on AWS's diversity initiatives: “I see myself everywhere!” How AWS is outdoing its peers when it comes to Black representation.

The surge of ChatGPT/generative AI adoption is flummoxing regulators. Chris explains: Report - ChatGPT and generative AI demand a smarter approach to EU regulation. As institutions and regulators grapple with this, we can start with one good principle: disclose your bot usage. But what about the PR ramifications? "A real popcorn moment," quips Chris. Well, as Terrell Owens once said, get your popcorn ready. Big tech clearly wants regulatory input; Derek raises questions in Microsoft, Google and BT argue for regulation of AI use cases, not technologies:

You couldn’t help but get the impression throughout the sessions that the companies present were arguing that they’ve got it under control and for governments to not interfere too much.

They've got it under control - yeah, as in Microsoft dubbing itself "the responsible AI company" prior to releasing a drama-prone, nonsense-spewing chatbot as an enhancement to Bing search, before pulling down peacock feathers quickly on the bot's actual search possibilities? I think I may see a role for regulators after all...

Oh, and if you enjoy Amazon Flash Briefings, give ours a spin.

Best of the enterprise web

My top seven

  • We now work in an open source world; here's the data - We live in an open source world, writes Joe McKendrick, or should I say, warns Joe McKendrick: "Nearly 40% of teams using open source lack the internal skills to test, use, or integrate that software."
  • Open Source Vulnerabilities Are Still a Challenge for Developers - Add security to that open source concerns list, says The New Stack. Though the vast majority of businesses (96%) in the most recent Synopsys study utilize open source code, "Since 2019, high-risk vulnerabilities have increased by at least 42% across all 17 OSSRA businesses, soaring to 557% in the retail and e-commerce sectors and to 317% in the computer hardware and semiconductors sector." Add ChatGPT and generative AI tools to the mix: Security with ChatGPT: What Happens When AI Meets Your API?
  • DOJ suit could represent significant stumbling block for $20B Adobe-Figma deal - It's been a while since an enterprise software deal attracted this much regulatory attention. I expect this deal will ultimately go through, but I would not bet my cocktail napkin on it. It will be an interesting/problematic Adobe Summit in March with this issue looming.
  • Generative AI Won’t Revolutionize Search - Yet - As I said on Twitter: "Generative AI won't revolutionize (consumer) search - ever. Time to get clear on what AI is good/not good at vs technofascination. However, the enterprise search use case is interesting. Controlled/clean data sets are key." That provoked a concise rebuttal from Vijay Vijayasankar, re: the problem of enterprise search:

Indeed. Moving on to:

  • University leaders issue AI guidance in response to growing popularity of ChatGPT - An interesting view from Yale University: “You may have seen that some institutions have gone the direction of banning the use of ChatGPT; we’re not doing that,” Frederick said. “I think where the University leadership falls on this is that the considerations are going to be different for each school, each division, each discipline. So it needs to be a school-specific conversation.”
  • Lessons From My Company’s Digital Transformation Failure - Eric Kimberling turns the digital transformation critique into a mirror, and kudos for that: "Just because someone hasn't called the help desk and complained about an inability to do their job doesn't mean that they're not struggling to do their job."

Whiffs

Ever wished you hadn't gobbled down so much fish and chips? I'll bet she feels the same way:

Pretty sure Clive Boulton brought this to my attention a couple weeks ago, but it's a whiff that still needs fumigation:

Up to this point, I've been adamant that ChatGPT cannot write a complete tech article. Clarification: ChatGPT cannot write a really good tech article. As I read this piece from TechRepublic on the differences between 5G and 6G, slathered very generously with unselfconscious buzzwords and happy talk of revolutionary tech (remember when 5G was supposed to be that revolution? As it slogs along, we kick the can down the road to 6G), I can only say: a robot can most definitely write that - and it won't even need the pompoms. See you next time...

If you find an #ensw piece that qualifies for hits and misses - in a good or bad way - let me know in the comments as Clive (almost) always does. Most Enterprise hits and misses articles are selected from my curated @jonerpnewsfeed.
