AI - the future is open and on CPUs, claim Red Hat and Intel

Chris Middleton, March 28, 2024
Summary:
Stop right there, IT leader reaching for a closed, proprietary system running on some NVIDIA GPU farm! Just what do you think you are doing?!

(Brain with coffee cup © bgton - Canva.com)

At last week’s KubeCon + CloudNativeCon (see diginomica, passim), enterprise open-source giant Red Hat and chip behemoth Intel presented themselves as natural allies in an AI-enabled world. Particularly one in which a combination of proprietary developers plus GPU colossus NVIDIA often holds sway. (Shh! Don’t mention them!)

In the view of many at the Paris event, the future will belong to trusted open data and open development – in the long tail, perhaps, of some generative AI companies scraping copyrighted content to train their systems (shh again! We don’t speak of such things here!). Many processes will – or should – be optimized for CPUs, rather than scarce, complex, pricey, planet-burning GPUs, they said.

It seems that this was no Paris fling. This week saw Red Hat and Intel deepen their relationship via a joint EMEA roundtable on open-source AI development. Appearing were Red Hat’s Mark Swinson, Enterprise IT Automation Sales Specialist, Erica Langhi, Associate Principal Solution Architect, and Stephan Gillich, Intel’s Director of Artificial Intelligence and Technical Computing.

The discussion was chaired by Red Hat EMEA evangelist Jan Wildeboer, whose opening gambit sought to cool the fevered brows of any IT leaders caught up in the tactical rush to buy an AI ‘hammer’ then look for some business nails. He said:

Certain parts of AI are really too overhyped at the moment. So, we don't believe as a group that AI is going to take over the world.

We also don't believe that bigger is better. What we do see is that AI fits into an evolution of tools, and not necessarily a revolution. And that evolution allows us to integrate AI into the systems that we already have and offer, and so bring more openness to AI.

In short, let’s not all rush into the arms of proprietary giants and a warehouse full of GPUs. Hybrid cloud will be critical to this highly evolved world, observed Langhi:

[My] view of the hybrid cloud for AI is an analogy with hybrid cars, which were introduced to enhance performance and optimize costs. 

And if we look at AI [in this context], it’s a workload. Ultimately, we want to position that workload on the best platform. So, if we need to train the big Large Language Models in that case, we want to leverage the infrastructure and power of the public cloud.

But there are also situations where, first of all, we want to refine and fine-tune models with our own data, and maybe bring them to the Edge. In that case, clearly, we don't want to move the data from on-premises to the public cloud – sometimes also because of governance and privacy rules. 

So, for me, to reach the full potential of AI is all about making sure that we have an underlying platform that can span between the Edge and the public cloud in a consistent way, so that you're able to move the AI workload where you need it.

Then she reiterated a theme from KubeCon + CloudNativeCon:

The future of AI is having this ability to not have ‘one size fits all’.

Message received. Intel’s Gillich then stepped in and observed that, for AI to be truly pervasive and spur innovation, it needs to run on “almost every platform”. Those two aspects are indivisible, he said – much like Intel and Red Hat, it seems: 

We want to bring AI everywhere and make it real. So, this is about having an open software environment, which allows the AI developers to develop what they intend to, in a way that runs in both a hybrid cloud environment, and on premise – in the ‘AI PC’, for instance.

What we want is maximizing value. That means having the right platform for the right purpose. Because what we are seeing today, with the vast increase in AI usage, is a demand for compute cycles, but that obviously needs to be done efficiently and in a more sustainable way.

‘Not in my lifetime’

With those words ringing in your ears, you can picture NVIDIA’s $2.3 trillion market cap hovering over the discussion, like the springtime sun in Paris last week. 

Gillich continued:

That's what we mean by maximizing value. To have the cloud AI workload running on CPUs like the [Intel] Xeon processor, but also having accelerators for workloads when you need them – for heavy training, for instance. But supported across all of these platforms with a software stack, and able to deploy inferencing on a variety of platforms as well.

He added:

Plus, we need to do that in a secure and responsible way. Obviously, as technology providers, we can’t influence what application providers are doing [perish the thought!], so they need to stay responsible when they develop AI applications. But we can supply the foundations.

Here, the spectre of AI ethics, data provenance, and the twin forces of trust and security reared their heads. Being open enhances security, claimed Wildeboer, though he acknowledged that this point is not obvious to everyone outside of the open-source and open-data communities.

Swinson observed:

There’s the question of provenance, and we've seen some issues bubble up around where IP has come from, and who owns the IP that has gone into training some of these LLMs. [For ‘bubble up’ read multiple lawsuits and class actions in the US]. So, that's the question of being open about what you're using as the source.

Then, once you train the model, there's also the question of clarity and understanding what the model is suggesting for you, that it is based on a good interpretation of the data that it consumed. So, having that visibility where the model is making its recommendations, based on its sources, is really important.

Explainability, transparency, and accountability, added Langhi. But who needs those anymore? After all, NVIDIA CEO Jensen Huang recently claimed that AI means programmers are obsolete – along with everyone else who has the skill to put an idea into action themselves, perhaps. Did the panel agree? 

Gillich, for one, seemed to hedge his bets:

AI can be used to replace some of the boring stuff, and to combine things, but in the end, human intelligence is still needed. I think.

Phew! Red Hat’s Langhi said:

Yeah, I agree. For me, AI is augmenting human skills rather than replacing them. And that is also true for developers. [Wait, are developers not human?!]

Swinson added: 

Yeah, I don't think there's going to be a replacement of developers – at least, not in my lifetime. I think the role will change, but you will still be an inventor, in essence, and a craftsman. A cross person.

My take

A cross person indeed! I imagine there will be lots of those, in every interpretation of those words, in our sort-of-lifetimes. At this point, the perspective of Red Hat’s Dominika Oliver, Global Leader of Software Engineering, would have been good to hear. She was billed to appear, but, alas, did not seem to take part in the conversation. 
