Main content

Enterprise hits and misses - robotics and gen AI converge, the FTC shakes up non-competes, and AI projects need critical thinkers

Jon Reed - April 29, 2024
This week - robotics and generative AI converge, but what breakthroughs are needed? The FTC issues a landmark non-compete policy change, IBM wants to buy HashiCorp, and enterprises need to rethink AI model sizes. Plus: your whiffs.


Lead story - Robotics AI advancements - where are we now?

I keep a pretty close eye on robotics - that's where AI is truly put to the real-world test. It's one thing to build (impressive) probabilistic systems that predict the next word, or generate images on spec from training material.

These deep learning advancements are non-trivial achievements. Advancements in language translation alone are eye-opening.

But it's another thing to respond intelligently/immediately to an unpredictable world of perpetual movements - as the makers of the Humane AI pin (powered by OpenAI technology) have learned the hard way: Even more reasons Humane’s Ai Pin is a total bust.

So bring on George's How Rabbit’s Large Action Models tease the future of RPA. Rabbit debuted their R1 device at CES this week:

The new device can recognize objects and translate and manage interactions with web apps and AI agents. It also captures notes into a Rabbit Hole, an automated knowledge management tool that blends elements of Evernote, Notion, and to-do lists.

I believe that LLMs will make robots much more effective at receiving instructions and interacting with the humans in their environments. This alone seems promising; I still have near-collisions with an almost-useless, atonally beeping robot in my local grocery store named "Marty." Marty makes R2D2 seem like Albert Einstein.

On the other hand, I don't see the robotics breakthroughs in other types of sensory input - from touch, to smell, to just plain old survival instinct - that allow humans to adapt to situations where robots will continue to struggle. Nor do I see a path to so-called "AGI" from this particular route, but that's a debate for another time. A robot asked to "empty all the trash from the living room" simply cannot be trusted to account for the rare baseball card that fell to the floor, not to mention the pet hamster.

What jumps out here is the convergence of robotics and generative AI. That should help the startups in this space be more effective, hedge their bets, and go to where the short-term action/advancements are. Even if I'm right that AGI won't be found along this route, other useful inventions will surely happen from this convergence. As George notes, this seems to be the case for Rabbit, which has developed a new operating system (OS) based on what they call "Large Action Models":

Long before demoing the R1 at the CES conference, the company was working on Rabbit OS, built on a Large Action Model (LAM), a type of Large Language Model (LLM) specifically optimized for understanding and automating tasks. At the release party, Rabbit co-founder and CEO Jesse Lyu described their long-term ambitions for generative user interfaces that will allow anyone to simplify and personalize experience across multiple tools.

Now that sounds like the kind of thing enterprise projects can - and will - use.

Diginomica picks - my top stories on diginomica this week

Vendor analysis, diginomica style. Here are my top choices from our vendor coverage:

Earnings reports of note:

  • AI demand drives Q1 SAP cloud revenue up 25% year-on-year - Stuart on a pivotal earnings report from SAP, albeit with a major "AI-driven restructuring" still underway. He quotes SAP CEO Christian Klein: "What we are seeing now with Business AI is actually that a lot of customers who probably planned their migration start date for S/4 [for] end of this year or next year, that they actually now want to move faster, because they see the capabilities with SAP Business AI on asset management, on just automating many, many workflows in their company."
  • ServiceNow CEO says process optimization is the “single biggest generative AI use case in the world today” - Derek finds ServiceNow CEO Bill McDermott in a characteristically ebullient mood. Derek: "ServiceNow’s recent history has seen it act as a process simplifier for enterprises to better manage their work and experiences. The pitch for many years now has been that the Now platform integrates with existing systems of record, minimizing the need to rip and replace other platforms." How well is this working in a gen AI context? We'll have plenty of updates/analysis from Derek at ServiceNow's upcoming Knowledge 2024 show.

A few more vendor picks, without the quotables:

Jon's grab bag - Madeline looks at a shift in climate innovation in Jamila Yamani, Salesforce Director of Climate Innovation and Energy Transition - "It matters where we spend our money, what food we choose to eat, how we choose to get around." Speaking of climate, coral reefs are at an urgent juncture - can tech help? Cath digs in with Technology and data initiatives aim to preserve the world’s declining coral reefs.

The European encryption debate has implications beyond the region; Derek is on the case: Encryption debate rolls on - European police chiefs do not accept ‘binary choice between cyber security and privacy’. I'm still not sure what Zuck is up to with his metaverse money pit, but credit to him for trying to manage investors' ungrounded short-term AI expectations: Meta stock sinks as Zuckerberg doles out tough love around AI - it's going to take a long time to turn a profit.

Best of the enterprise web


My top six

  • The IBM-HashiCorp coupling could be more complicated than it seems - IBM is after HashiCorp, and Ron Miller has some reasons as to why. But will HashiCorp's multi-cloud capabilities provide enough impact even as gen AI chips away at DevOps script automation?
  • FTC's noncompete ban could reshape the US workplace - this ban will surely face legal challenges, but this is quite a policy shift, and, in my view, a sensible one. There are ways to protect corporate assets without shackling workers; if non-competes were used narrowly and appropriately, we wouldn't be at this point. Brian Sommer will have a piece on this out on diginomica soon.
  • Bank CIO: We don't need AI whizzes, we need critical thinkers to challenge AI - Joe McKendrick nails it down: "There's a valuable lesson to anyone hiring or seeking to get hired for AI-intensive jobs, be it developers, consultants, or business users. The message of this critique is that anyone, even with limited or insufficient skills, can now use AI to get ahead, or appear to look like they're on top of things. Because of this, the playing field has been leveled. Needed are people who can provide perspective and critical thinking to the information and results that AI provides. Even skilled technologists or subject matter experts may fall into the trap of relying too much on AI for their output, versus their own expertise."
  • Why Do Organizations Pursue Digital Transformations? Here Are the Top Reasons - Eric Kimberling back in his stomping ground: "Frequently, organizations embark on transformative journeys out of necessity rather than strategic intent." Ouch....
  • The Rise of Large-Language-Model Optimization - Bruce Schneier has a wake-up call for us, re: the impact of generative AI on the open web. He's not wrong, though as I said to Frank Scavo, I see things a bit differently.
  • Enterprises Must Now Cultivate a Capable and Diverse AI Model Garden - Dion Hinchcliffe examines a key enterprise gen AI shift: building the right-sized model for the task/process/job at hand: "Developing a robust AI model portfolio—often visualized these days as a diverse 'garden' of different AI models, large and small—requires capability development, enthusiastic experimentation, thoughtful planning and strategic foresight. AI models not only have to be able to provide good answers, they must also be selected to be cost-effective." My only nit with Hinchcliffe's post is semantic: I don't believe LLMs, even the largest ones, are capable of "complex reasoning" (Yann LeCun doesn't either). To me, the largest models earn their keep when you need the most fluid type of human-machine interaction (e.g. a "chat" that might cover a range of topics, and account for variations in slang, terminology, and word meaning).



Whiffs

A predictable yet still amusing outcome:

Perhaps a more serious type of whiff, but a whiff nonetheless:

I pride myself on receiving some of the wackiest PR pitches and interview requests of anyone. But diginomica contributor Brian Sommer may have the edge on me now. He just got this one:

Would like expert advice for an article called 'How often should you pump your septic tank' for Homes & Gardens online.

As of this writing, I don't know if Brian dispensed any septic tank wisdom or not. I'll ask him soon... If you find an #ensw piece that qualifies for hits and misses - in a good or bad way - let me know in the comments as Clive (almost) always does. Most Enterprise hits and misses articles are selected from my curated @jonerpnewsfeed.

Updated, May 1, 2024 with revisions to the lead story section. Originally, the lead story also mentioned Chris' Sanctuary AI CEO - the robots really are coming, thanks to transformer AI. There are some notable updates to Chris' original piece - I encourage readers to check that piece out in its entirety.
