Intel's software evangelist talks cloud architectures and the Internet of Things

By Martin Banks, April 19, 2015
Summary:
Cloud architectures are moving to where Intel has been for ages, according to Chief Software Evangelist James Reinders.

James Reinders

Only a few weeks ago, predictions about chipmaker Intel’s 2015 Q1 earnings suggested they could be down by as much as $1 billion, due to fading sales of PCs and a lack of real penetration into the tablet and smartphone market sectors. But in the end the company managed to turn in a better-than-predicted 3% uptick in earnings, plus the expectation of better times to come.

The current uptick has been put down to aggressive cost management, but the future, according to CEO Brian Krzanich, is looking brighter, not least because there are an estimated 600 million PCs in use that are over four years old. To date, their collective alter ego – Microsoft’s Windows operating system – has somewhat stalled: the established Windows 7 is still favored over the later versions 8 and 8.1.

But with the upcoming Windows 10 gaining good reviews, and with its potential to provide a real soup-to-nuts, write-once, run-on-all capability for applications, the prospects for Intel’s backbone x86 chip architecture are expected to recover.

The company is, however, also looking at developments in the cloud, big data analytics and the Internet of Things (IoT) as part of a swing towards building a bigger software business to complement its chip offerings.

Software future

Building software, of course, is not a new development for Intel. It has a long track record in the specialised field of highly parallelised development tools for High Performance Computing (HPC) systems and all-out supercomputers that use tens of thousands of processor cores to deliver TeraFLOP performance levels.

That track record with highly parallelised operations is, however, no longer the exclusive domain of the pointy heads. With the coming of hyperconverged, microserver architectures running continuously delivered, short-lifecycle applications, experience in producing and managing highly parallelised applications becomes a skill that moves centre stage.

According to James Reinders, Chief Evangelist with Intel's Software Products Division, the whole hyperconverged development is playing straight to the company’s strengths.

There is a real hunger for compute power and we are seeing higher levels of integration. Some of this obviously comes from the product companies and is driven by what they have to sell. The hard part is knowing where the real problem is and which way to address it. For example, it could be better to have more fabric on the chip. We have some great products coming: yes some of them are silicon, but also on the software and management side. I do see Intel participating here, and not just in supporting hardware.

Speaking at the annual European conference of Intel’s Software Products Division, Reinders suggested that the key target now is improving programmer productivity, particularly through greater use of industry standards, as well as achieving scaling consistency and fostering the move from multicore processors to many-core co-processor systems.

Intel lagged behind other semiconductor companies in this area, but is now in serious catch-up mode with its 61-core Xeon Phi co-processor. First announced in 2012, the device is expected to appear in a new version later this year.

The co-processor approach, which combines classic processors with graphics processor technology – the latter being capable of extremely fast repetitive processing for image rendering that can also be exploited by many business processes – follows an architectural model that increasingly applies across the board in IT infrastructure. This is the choice between a few large computers, or lots of small ones running in parallel.

This was first defined by the founder of supercomputer company Cray, as Reinders pointed out.

Seymour Cray once said: ‘should you have a couple of oxen to pull a plough, or 1,000 chickens?’ Too many very small computers have been seen as restrictive, but the combination of a processor and a GPU co-processor does work in volume. The hard part is getting applications to scale in a parallel environment. Once one does, the number of processors is irrelevant.

One weakness has been that the processor/co-processor combination has always produced mean time between failures (MTBF) issues in practice, so devices like the Phi pack everything into a single package. According to Reinders, its biggest competitor is the standard Xeon processor.
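For a flavour of what programming that packaged co-processor looks like, here is a minimal sketch in C – an assumption based on the Intel compiler’s offload pragma for the Phi (“mic”) target of that era, not code Reinders presented:

    #include <stdio.h>

    int main(void) {
        const int n = 1000;
        static float a[1000], b[1000], c[1000];
        for (int i = 0; i < n; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

        /* Ask the Intel compiler to run this loop on the Xeon Phi
           ("mic") co-processor, copying a and b in and c back out.
           Depending on compiler settings, it can fall back to the host
           CPU when no co-processor is present. In practice the loop
           would also carry an OpenMP pragma to spread the work across
           the Phi's many cores. */
        #pragma offload target(mic) in(a, b) out(c)
        for (int i = 0; i < n; i++) {
            c[i] = a[i] + b[i];
        }

        printf("c[%d] = %f\n", n - 1, c[n - 1]);
        return 0;
    }

The point of packing processor and co-processor logic behind one programming model is exactly the MTBF and complexity argument above: the developer marks what should run where, and the toolchain handles the plumbing.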

This same model is now at the heart of the hyperconverged, microserver datacentres starting to appear, as well as of the biggest, fastest supercomputers. Reinders acknowledges that this gives the company a rare level of experience and expertise in what is now becoming the mainstream architecture.

Through its membership of the OpenMP organisation, which produces the open multi-processing API for shared-memory multiprocessing applications in C, C++ and Fortran, Intel can now let developers think in terms of writing once for multiple operating systems – Linux, OS X, Solaris, AIX, HP-UX and, of course, Windows – and across different hardware architectures, including ARM-based processors:

These types of tool are now getting more powerful and effective as compute power increases – some previously impossible tasks are now possible. We are also trying to keep the original programming methods that developers understand, and instead let the compilers do the work.
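By way of illustration only – a minimal sketch, not code from Reinders’ presentation – the write-once idea looks like this in C with OpenMP. The same source compiles unchanged on Linux, OS X or Windows, and a compiler without OpenMP support simply ignores the pragma and runs the loop serially:

    #include <stdio.h>

    int main(void) {
        const int n = 1000000;
        double sum = 0.0;

        /* The pragma below is the only parallelism-specific line.
           A compiler without OpenMP support ignores it and the loop
           runs serially, unchanged. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++) {
            sum += 1.0 / (i + 1);  /* partial sum of the harmonic series */
        }

        printf("sum = %f\n", sum);
        return 0;
    }

Built with, say, gcc -fopenmp, the loop is split across all available cores and the reduction clause combines the per-thread partial sums; built without the flag, it is plain portable C. That division of labour – the developer states intent, the compiler and runtime do the work – is exactly the point Reinders is making.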

The Internet of Things pitch

Reinders does see a particular connection between Intel’s experience here and the development of Internet of Things applications, particularly the growing suggestion that the combination of IoT and predictive analytics will force a need for widely distributed, hierarchical analytical nodes throughout an IoT infrastructure, rather than a single, large analytical engine. Intel is already conducting several studies into this ‘edge analytics’ area:

The potential that comes with IoT is significant, and yes, there is a lot of HPC knowledge and experience that is transferable to IoT. We have solved many similar problems to the hierarchical models of edge analytics that are emerging with IoT. But I don’t think they are exactly the same. It is more a case that they rhyme.

He did acknowledge the potential for this role in combining the company’s existing embedded systems skills with such developments as the Compute Stick – a fully-fledged PC in a stick form factor – as the basis of local analytical and management nodes in an edge analytics architecture.
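To make the edge analytics idea concrete, here is a small hypothetical sketch in C of the kind of work such a node hierarchy does: each edge node reduces its raw sensor readings to a compact summary, and an upstream hub merges summaries rather than shipping raw data. The names and the merge formula (Chan’s parallel variance algorithm) are illustrative choices, not an Intel design:

    #include <stdio.h>
    #include <math.h>

    /* The summary an edge node forwards upstream instead of raw samples. */
    typedef struct {
        int    count;
        double mean;
        double m2;   /* running sum of squared deviations (Welford) */
    } Summary;

    static void summary_add(Summary *s, double x) {
        s->count++;
        double delta = x - s->mean;
        s->mean += delta / s->count;
        s->m2 += delta * (x - s->mean);
    }

    /* Merge a child node's summary into a parent's. */
    static void summary_merge(Summary *parent, const Summary *child) {
        if (child->count == 0) return;
        int total = parent->count + child->count;
        double delta = child->mean - parent->mean;
        parent->mean += delta * child->count / total;
        parent->m2 += child->m2 + delta * delta *
                      ((double)parent->count * child->count) / total;
        parent->count = total;
    }

    int main(void) {
        /* Two hypothetical edge nodes each see local sensor readings... */
        Summary node_a = {0}, node_b = {0}, hub = {0};
        double readings_a[] = {21.0, 21.5, 22.1};
        double readings_b[] = {19.8, 20.2};

        for (int i = 0; i < 3; i++) summary_add(&node_a, readings_a[i]);
        for (int i = 0; i < 2; i++) summary_add(&node_b, readings_b[i]);

        /* ...and the hub combines their summaries, never the raw data. */
        summary_merge(&hub, &node_a);
        summary_merge(&hub, &node_b);

        printf("hub: n=%d mean=%.2f stddev=%.2f\n",
               hub.count, hub.mean, sqrt(hub.m2 / hub.count));
        return 0;
    }

The resemblance to HPC-style parallel reduction is the “rhyme” Reinders describes: partial results computed where the data lives, then combined up a hierarchy.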

My take

It is easy to dismiss Intel’s HPC experience as a niche of little value to the mainstream. But the mainstream is fast approaching the point where it will need what Intel has available – the ability to manage hugely complex processing environments, often working in real time, at huge scale and blistering performance levels.

Experience of doing that will be vital as more business applications move to hyperconverged, cloud-delivered architectures.

Trouble is, I’m not entirely sure Intel sees the tiger it potentially has by the tail.