Intel’s ‘democracy pitch’ to direct the chaos of AI/ML

By Martin Banks, September 10, 2019
Summary:
Intel is setting itself the goal of providing developers with tools that deliver increasing levels of abstraction away from the tech coalface and democratise the development process.


Most of life is pretty chaotic, sometimes barely controllable and occasionally totally out of control. Making sense of such a world, bringing a sense of order, understanding and control to it, is one of the key drivers underpinning the race to bring Artificial Intelligence and Machine Learning into play. But because it is pitched as a race, there is every danger that the addition of AI/ML might serve to make life even more chaotic rather than resolving anything.

This is especially the case if AI/ML goes the way of history, where the development of new technologies falls into the hands of those with the deepest pockets and the grandest mega-projects they feel need solving. More sense comes, as with the development of the PC and everything that has followed from it, with increasing levels of democratisation – let enough people play, dabble or seriously work with a technology and some good sense and applications will follow.

Such thinking lies behind the direction that Remi El-Ouazzane, VP and COO of Intel’s AI Product Group, is taking. Working with chaos rather than against it, and encouraging the democratisation of AI as far and as wide as possible, are both essential cornerstones of his thinking.

Why should it matter to Intel? As a chip maker, surely AI/ML is just another bunch of applications for Xeon processors to run and memory chips to store? Well, yes, but the company also has what El-Ouazzane considers to be a secret weapon - the very development tools needed to create those applications that will (hopefully) make sense of the chaos.

It is his view that Deep Learning is just starting to accelerate Machine Learning adoption in a big way, with the early adopters being the cloud providers. The deployment of Deep Learning is becoming fundamental in that sector, especially when it is deployed in a recommendation engine, a neural translation engine or a similar role. Even the smallest gain in efficiency can have a huge impact and, as he observed, the gains can be very large:

Just to calibrate it for you, for the Amazons and Googles of the world a 0.1% improvement of accuracy on recommendations is worth more than $1: like many, many, many more dollars. So those gains are very important.

The Amazons and Facebooks of the world consume most of the Deep Learning workloads, so for now Intel spends much of its effort serving those customers when it comes to Deep Learning deployment. However, El-Ouazzane is clear in his own mind that Deep Learning tools are now moving towards enterprises, which are waking up to the possibilities. He sees Intel’s role as helping them make sense of deep learning so their experiments make it through to production.

Handling the chaos

When it comes to hardware, Intel continues to develop its architectures, and now has a roadmap that covers scalar, vector, spatial and tensor architectures. This really is an attempt to cover all options for the near future. El-Ouazzane realises that there are no really clear markets, there is just an infinite variety as companies take their own paths to their own solutions:

You know, if I could pick one I would do it tomorrow morning, because it would be more efficient for everybody. But the market is very heterogeneous and not one where one size fits all. And even in one market consumers take different paths. So that is why there are four architectures running in parallel.

It’s a specificity of Intel. I think that what we are talking about here is a religious belief for the company. The tag line could be ‘let chaos be’, because it's super chaotic. This plurality of hardware architectures is the only way to make sense out of the implementation chaos going on in the marketplace.

To help users cope with the chaos, Intel has been investing heavily in areas such as building highly optimised deep learning libraries for each architecture. But even here there are divisions: he now sees two main types of customer.

One group has the skills to take an architecture plus a low-level library and run with it, but they are in the minority. The majority fall into a second class that requires higher levels of abstraction. Not only that, they also demand that this abstraction makes whatever they develop agnostic of the specific Intel hardware back-end it was developed on.

This step towards a ‘design once, run many’ capability comes from a tool the company has developed called OpenVINO. This is, essentially, a framework that allows any deep learning model to be optimised for, and run on, any of Intel’s AI hardware back-ends. It is now seen as the fundamental tool through which to make sense of the infinite variety of hardware paths users can take. El-Ouazzane says:

It provides an environment which factors developers away from the underlying hardware issues so they can concentrate on the purpose.

In essence, developers can design Deep Learning tools in any way they want and then use OpenVINO to port them onto the most appropriate of the four architecture options. El-Ouazzane sees these architectural frameworks becoming like the operating system of deep learning, so that everything can be abstracted up to that point. At the end of the day, these frameworks are the ultimate abstraction for all others to work from.

The company is growing its range of use case examples, such as work it has done on traffic monitoring cameras in China, particularly in the tricky area of identifying drivers using mobile phones while driving. This, El-Ouazzane observed, has made a significant impact on the ability of police and city authorities to collect fines. Another is the work the company has done with Siemens Healthineers. This recently announced project is for tools that can infer from a cardiovascular MRI whether or not a patient is likely to develop cardiovascular disease.

Intel is now optimising around a model in which, if current performance is rated at a nominal 1x, its latest processor, known as Cascade Lake, offers a 5x performance improvement. For the Siemens project that means either 5x more data processed in a given time, or a given dataset processed 5x faster.

It also has example use cases in other early adopter markets, such as oil and gas and high-frequency trading systems. He acknowledged that there will be obvious suspicions about a negative impact on jobs from such deployments, and indeed the productivity gains can be, he suggested, ‘massive’. But he stressed that, first and foremost, most businesses are primarily seeking its impact on the quality and accuracy of the output.

Now for some democracy

Although Intel is building up a team of AI specialists, the company is not interested in cornering the AI market, per se, although it is, of course, extremely interested in cornering the market for the base hardware and software. El-Ouazzane argues:

There is a customer obsession in Intel. We are not entering the mode where we become the one-stop shop for everything. But it’s in our interests to lower the bar for the adoption of deep learning, because it obviously benefits our business to become the next reference compute substrate.

El-Ouazzane foresees no situation where the company will get into developing products for specific application areas or roles. What it will do instead is give users a complete development suite for deep learning tools, so that users can optimise their solutions on the Intel base platform.

He also sees much of that work being taken on by partner companies that can provide the relevant expertise to customers. One he cites is DataRobot, a company backed by Intel’s venture arm, Intel Capital. It can, in effect, take in customer data and output developed deep learning tools:

This is one of those companies that is, quote/unquote, ‘democratising’ access to machine learning. We are very committed to this approach, because the larger and healthier the ecosystem is, the more likely it is that the system they use is Intel. And here, it is no shame to say it is better to be on Intel than something else. That's what we want.

That being said, his pride in the Intel software development team is apparent, along with his view that it is one of the company’s best kept secrets. It also prompts the question of whether the skills of that team might combine with increasing democratisation to open things up further by, perhaps, making OpenVINO available as a standalone appliance accessible to all.

He certainly sees possibilities in the accelerating pace of AI and deep learning democratisation, though he has doubts about the appearance of ‘big solutions’ that fix 80% of the world’s problems, tempting though that might be. But the pace of change and development in this sector is such that anything is still a possibility:

That is happening at light speed because there is room for automation. And if you look at the gap between the machine learning demand and the number of machine learning scientists coming onto the market, this is not closing, it is growing. And we will participate in that trend… but we have nothing to talk about now.

Which sounds very much like a ‘probably, but not just yet’.

My take

Making a pitch to be a key part – perhaps the key part – of AI/ML applications and systems development is a bold claim from Intel. But that is in effect what is happening here. I can imagine there will be howls of anguish from many out on the bleeding edge of development, who will no doubt see this as a move to hobble their creativity – and if Intel succeeds in giving a consistent technological underpinning to future developments, it may well cut some worthwhile lines of development off at the knees.

The other side of the coin, however, has values of its own: one just has to look at the way a degree of consistency and lineage in processor architecture has allowed the PC to grow from the desktop through to being at the heart of every corner of cloud computing. Those of us with long memories might well think fondly of old processor families, from AMD’s 2901 4-bit bit-slice device, through the MOS Technology 6502 and Zilog Z80, to the early 16-bit devices such as the TI TMS9900 and the Motorola 68000, each of which had its own advantages but proved to be its own trap for applications developers. Intel, from the 386 onward, has understood the broader advantages for developers of providing as much consistency and lineage as possible. It is a reasonable bet that the majority of AI/ML applications developers and users will want the same to happen now.