Developers rule the world! Intel's Pat Gelsinger on the new world order of AI and edge

Martin Banks, October 13, 2023
It's over two years since Gelsinger joined Intel to turn the company around. Here's how he's doing...

Pat Gelsinger - VMWorld 2018

A week is a long time in politics. Then again, politicians usually just talk about things but don’t actually do them, and the things they talked about in that week can take years to come to fruition. It is much the same with ‘Chipzilla’ – the irreverent nickname for chip behemoth Intel.

Two-and-a-half years have elapsed since Pat Gelsinger, fresh from being CEO at VMware, was parachuted in to take over the reins as CEO. In some businesses that would be more than enough time to effect a significant turnaround in fortunes, if such a change were possible.

In the chip-making business, however, changes of direction can take time. There is the time needed to amortize existing production facilities, even if they have been superseded by the competition, coupled with the long-term supply contracts for the products of those facilities. In addition, new facilities are hard to magic up, costing as they do in the region of $50 billion a time and taking a couple of years to design, build and commission.

Gelsinger came in with strong plans for a significant change in direction for the company, as we reported at the time: 

Pat Gelsinger is set to swing Intel whole-heartedly behind becoming the leading packaged systems producer rather than solely a chip producer, even though chip production is still going to be very much a prime-time activity.

Gelsinger’s keynote presentation at Intel’s recent developer-targeted Innovation 2023 conference in San Jose showed that the company is now at the point of starting to put those aspirations to the test. The chickens are coming home to roost, and it is time to see the quality of the eggs they lay, or if they lay any at all.

Intel’s fortunes had been on the slide for a while, caught out by those very manufacturing parameters set out above. Other vendors were leading the way in both chip manufacturing technologies and types of chip. Even now the company’s revenues, at a smidge over $54 billion for its 2022/2023 full year, are still declining.

But there are also some significant changes in the chip market – not least edge computing and the many guises of AI – which open up the range of applications that will be required, and the form factors of the systems needed to run them. The time of the packaged sub-system is now arriving, and this is the marketplace Gelsinger is hoping to position Intel to meet.

Developers are it in IT

This means that system software for such packages now has as much importance for the company as hardware technology, so developers are now high on the company’s priority list, and Gelsinger was keen to point out their importance, not just to Intel, but to the rest of us mere mortals as well.

There is a simple rule: developers rule. You run the global economy, right? Not politicians, not CEOs - developers are the ones running this global economy. What aspect of your life is not getting more digital? Everything, sports, entertainment, social experience or health, everything is becoming more digital. It's a foundational aspect of all economy and human experience. We're just seeing everything become a computer, replacing industries like oil that defined geopolitics for five decades. It's now silicon and the technology supply chains that it enables.

This he emphasised in his keynote presentation by directing his attention at the broad application areas he sees as Intel’s main targets, and then back-filling with information about what Intel now had to offer in order to realise those aspirations. 

Artificial Intelligence is, of course, an old idea (and he should know – he was architecting systems for it back in the 1980s). It never took off, however, primarily because the technology available wasn’t up to the game. Now the picture is the exact opposite – AI is already on the road to making itself ubiquitous, absorbed into systems and devices to the point where we no longer think about its presence. The reason is also the exact opposite: the technology is not only up to the job, it – both hardware and software as a combined whole – is only at the start-line of where it could end up. That will, of course, depend totally on the capabilities of the systems running it – and the chips and packages they are built on.

This will require a growing range of different chips offering different, and new, capabilities, and some of these were talked about at the conference. Gelsinger certainly worked hard at giving his audience a real sense of the direction Intel is being pushed in now. There will certainly be new generations of CPUs and GPUs, and devices where the capabilities of both are combined. Chiplets are also likely to be an important new class of component, used to make up new packaged devices that can be simply engineered to meet specific and niche applications or tasks.

To help make this happen more easily and quickly, the company last year announced plans for a Developer Cloud service, where developers can readily pull together the tools and support services they require, when they require them. This is now generally available and will give developers access to its large language model training facilities for AI, along with its software platforms, the oneAPI toolkit, the OpenVINO toolkit, and many of its developer tools. There are three tiers of service – a free one for investigation and experimentation, a commercial premium level, and finally an enterprise-grade service.

Two ways to AI

When it comes to new AI applications and tools to service them with, Intel seems to be going in two directions at once, which, given its history and long track record, does make some sense. One of those is ‘home turf’ – the PC – which Gelsinger is already calling the AI/PC. It first needs to be said that the company is not heading off into PC manufacturing, but is offering a reference design for such a device that its many PC-making partners can utilize. Some early birds had first samples of such machines available at the conference.

The key step that the design takes is to run both the latest Xeon CPUs and the new Gaudi 3 deep learning processor in tandem. First silicon of Gaudi 3 is now just running through device packaging processes. This combines the Gaudi functionality, particularly for AI training workloads, with Intel’s GPU chips already being used to run AI applications. The design will also utilize the latest client CPU, the Intel Core Ultra processor, formerly codenamed Meteor Lake, which is now being sampled by systems vendors and is due to be made widely available in December. The initial target market for the AI/PC is seen as the developer community, though Gelsinger is well aware of a broader potential:

If we put it in the hands of every human on Earth, they'll make it useful with nimble models that can be used literally everywhere, offering personal, private, secure AI capabilities that infuse every aspect of our daily lives - at work, at play, at sport with gamers, and our personal assistance creators. We're seeing more apps open up, the old ideas sparking others. And this AI/PC performance and capability is the perfect experience at your fingertips.

The next-generation server platform from the company will also get the opportunity to bring these technologies into play, and he informed the audience that two key processor developments, codenamed Sierra Forest and Granite Rapids, are both progressing on or ahead of schedule. Sierra Forest is specifically configured for large, cloud-scale workloads, which could match what some AI applications can generate, while Granite Rapids is a more balanced machine for peak performance and AI capabilities. This gives simplicity and flexibility for system designers, as both use the same I/O pin-out, so one board design can be used as the basis for a wider range of services and capabilities, coupled with greater compatibility between such systems.

Going edge-native

Edge computing is Gelsinger’s other big target, with both hardware and core software development seeing significant developments, mainly under the umbrella of Project Strata:

We believe that the next decade or two of development isn't going to be cloud-native, it's edge-native. And that's going to be driven by what I like to call the three laws of edge and AI computing – the laws of physics, ‘latency’; the laws of economics, ‘the cost of cloud and the cost of bandwidth’; and finally the laws of the land, ‘data sovereignty’. But to do that we need a lot of this plumbing, making the edge accessible, reloadable, updatable. And this is what Project Strata does. It brings us an edge-native software platform as services and support. And we're bringing together an ecosystem of Intel and third-party apps to enable this edge environment.

Some components have been around for a while, such as the AI inferencing and deployment runtime platform, OpenVINO. The latest version now provides broader application support, more natural language processing, computer vision and generative AI, the aim being not just to write and deploy AI inferencing at the edge, but to create an environment where ‘write once and bring AI everywhere’ is standard procedure.

One issue with edge computing is the greater level of heterogeneity in the range of architectures that must be confronted. Though many are based on Intel hardware, a growing number are now based on ARM technology, with which Intel already has a relationship. As part of that, OpenVINO has now been ported to the ARM platform, which should extend the range of AI inferencing applications considerably. This will all be supported by a new hybrid AI Software Development Kit, due out early next year, which will target low-code and no-code applications so they can take advantage of AI.

This will also be one of the main target sectors for chiplets, which are expected to be widely utilised in the new ranges of processor modules that edge computing will spawn. The semiconductor industry has come together to create UCIe (Universal Chiplet Interconnect Express) as a common standard: a high-bandwidth, low-latency interconnect through which chiplets communicate within a module. Gelsinger claimed the bragging rights of having the world’s first UCIe test chip available for users to try.

One last note – something to watch for in the next few years. Intel has also developed a new substrate on which circuits can be built. Traditionally that material is pure silicon, or sometimes gallium arsenide. What Intel has developed is a special mix of glass, which could be both cheaper and more robust. It could add a new dimension to what is possible with ‘chips’, for it could be possible to use whole glass wafers (the size of small dinner plates) intact, rather than scribed and diced into individual chips. In theory, a dinner-plate-sized exabyte memory device, or rack-cage equivalent, may be coming down the line in a few years’ time.