Main content

Enterprise hits and misses - Zoom stirs an AI privacy controversy, ESG has teeth, and open source hits a commercial crossroads

Jon Reed, August 14, 2023
This week - Zoom sparks an AI privacy debate with a controversial terms and conditions update. Open source hits a commercial 'fork' in the road, and does ESG have the regulatory teeth to put 'sexy' generative AI on the IT spending back burner? Your whiffs are back, and so am I.


Lead story - Generative AI - reassessing the risks and use cases

I return from vacation, only to find - more generative AI. But then again, I do welcome the precision of critical analysis. Neil kicks that off with Generative AI in the enterprise - re-assessing the risk factors. Neil defines four fundamental AI tensions that organizations must contend with:

  • Deploying models for customer efficiency versus preserving their privacy
  • Significantly improving the precision of predictions versus maintaining fairness and non-discrimination
  • Pressing the boundaries of personalization versus supporting community and citizenship
  • Using automation to make life more convenient versus de-humanizing interactions

In theory, "AI ethics" should help - but doesn't it seem like AI ethics is always lagging behind systems in production? Though Neil has been sharply critical of the problematic field of "AI ethics," he notes some promising developments, including new approaches to operationalizing AI ethics.

Of course, the challenge with generative AI is we can't assess the live enterprise use cases yet. However, George covers one that should go live in 2024: Elsevier wades into generative AI - cautiously. Elsevier has opted not to build its own LLM; it will license ChatGPT instead. But as George writes, this year is all about ensuring good results for research queries:

Elsevier is starting small with an alpha release of the new AI capabilities and taking advantage of its existing citation search engine, knowledge graph, and custom ontology to ground ChatGPT’s results to a chain of trust. This builds on the firm’s previous work on Small Language Models and graph data we covered in March.

Elsevier is also limiting the hallucinatory downsides of ChatGPT by putting a semantic search engine underneath it. He quotes Elsevier:

Using the query that the user types in, we're firing that into a semantic search engine and getting back the list of results. And we're using that, in addition to the query, to prompt the LLM to give essentially a summary. So we're essentially using the LLM as almost the natural language interface.

So when you get the results back, you actually get the references from Scopus that support all of the summary statements that come up in the summary. So that obviously reduces the risk of us making up references because it's very hard to make them up when you've essentially returned them from a search engine.
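The grounding pattern quoted above - fire the user's query into a semantic search engine, then prompt the LLM with both the query and the returned results so the references come from the index rather than the model - can be sketched roughly as follows. Everything here is illustrative: a toy in-memory corpus, word-overlap scoring standing in for real semantic search, and no actual LLM call. This is a hedged sketch of the general technique, not Elsevier's implementation.

```python
# Sketch of retrieval-grounded prompting: retrieve first, then ask the
# LLM to summarize only from the retrieved sources, citing their ids.
# All names and data here are hypothetical.

def semantic_search(query, corpus, top_k=2):
    """Toy stand-in for a semantic search engine: rank documents by
    word overlap with the query (a real system would use embeddings)."""
    q_words = set(query.lower().split())
    scored = [(len(q_words & set(doc["text"].lower().split())), doc)
              for doc in corpus]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # Keep only documents that actually matched something.
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_grounded_prompt(query, results):
    """Combine the query with the retrieved results, so the model is
    asked to summarize only from sources the search engine returned -
    which is what makes fabricated references much harder."""
    context = "\n".join(f"[{d['id']}] {d['text']}" for d in results)
    return (
        "Answer the question using ONLY the sources below, "
        "citing source ids in brackets.\n"
        f"Question: {query}\nSources:\n{context}"
    )

# Hypothetical mini-corpus of indexed abstracts.
corpus = [
    {"id": "scopus-1",
     "text": "gene editing enables precise crop trait selection"},
    {"id": "scopus-2",
     "text": "large language models can hallucinate citations"},
]

query = "How do language models hallucinate citations?"
results = semantic_search(query, corpus)
prompt = build_grounded_prompt(query, results)
```

The prompt that reaches the (not-shown) LLM already contains the citable sources, so the summary can be checked statement-by-statement against the returned references - the "chain of trust" George describes.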

This strikes me as a well-thought-out approach to getting the most from an off-the-shelf LLM while limiting its downsides. But I would also point out that 1. this organization has considerable data and semantic assets to make this happen, and 2. when an LLM is just part of a well-designed mix, I think you would call this a progression of enterprise tech, not a revolution.

I realize that harshes the buzz of exuberant AI marketing teams everywhere... Just be glad I didn't get a chance to weigh in on George's Why we need to treat AI like a toddler - OWASP lists LLM vulnerabilities (that one came out while I was on vacation; Alex Lee reeled it into their guest edition of Hits and Misses last week).

diginomica picks - my top stories on diginomica this week

Vendor analysis, diginomica style. Here are my top three choices from our vendor coverage:

Jon's grab bag - Mark filed an instructional hybrid cloud use case, Hybrid cloud gets the job at UK’s Department for Work and Pensions. Finally, Em filed a terrific green AI piece via AI for plant breeding - the new green revolution?

Best of the enterprise web


My top eight

Zoom Faces Challenges in Navigating the Age of Generative AI – Amalgam Insights' Hyoun Park issued a definitive examination of Zoom's controversial new terms of service. And Park is right: this issue extends far beyond Zoom:

On August 7, 2023, Zoom announced a change to its terms and conditions in response to language discovered in Zoom’s service agreement that gave Zoom nearly unlimited capability to collect data and an unlimited license to use this information going forward for any commercial use. In doing so, Zoom has brought up a variety of intellectual property and AI issues that are important for every software vendor, IT department, and software sourcing group to consider over the next 12-18 months.

In my view, Zoom made numerous mistakes here. As I said to Park on Twitter:

Zoom did issue some clarifications on this policy subsequent to Park's post. But as per ZDNet, that may not be enough: Zoom is entangled in an AI privacy mess. Zoom may have stepped in it this time around, but Park is right to extend this issue beyond Zoom:

For more of my commentary, click on Park's tweet above. As for Park's note on open source, that brings us to our next pick:



Via Alex Lee, I guess we have to move recipes off the harmless generative AI use case list for a little while:

So I whiffed a bit here:

Turns out, buried in the bowels of Facebook, I did have some control over these settings for the local group I run. With foot removed from mouth, I stand by the tweet; with a bit more flexibility to create our own badges, we'd actually have something kinda fun. The "top contributor" group badge is brute force - a top contributor, by Facebook's definition, is just a volume award. Whoever posts the most gets the nod. I shouldn't have derived nearly so much satisfaction from turning it off, thereby taking away all the top contributor badges from active members, but I did. Reserve a spot for me in purgatory...

We'll close with my news article title of the week: Florida village terrorized by peacocks plans to use vasectomies to solve the problem. If that seems like an indirect solution, bear in mind they plan to give the vasectomies to the peacocks - at least, I think that's the plan. We'll have to check back in on that one... See you next time.

If you find an #ensw piece that qualifies for hits and misses - in a good or bad way - let me know in the comments as Clive (almost) always does. Most Enterprise hits and misses articles are selected from my curated @jonerpnewsfeed.

Image credit - Waiter Suggesting Bottle © Minerva Studiom, Overworked Businessman © Bloomua, Businessman Choosing Success or Failure Road © Creativa - all from Adobe Stock.

Disclosure - Oracle, Workday and Salesforce are diginomica premier partners as of this writing.
