A question: has a real challenge now been laid down to the classic IT architecture at the heart of the cloud?
As speculated, Chinese comms and computing vendor Huawei has made a clear pitch to move the world on from the Intel Xeon/Linux/Microsoft-architected systems that fill every data center and many corporate desktops.
It is a step that has to happen sometime soon, for the basis of most systems just starting their working life today goes back to the Intel 486 processor and the Microsoft server software of more than 30 years ago. In that time, while there has been a great deal of refinement and extension, there has been surprisingly little genuine innovation.
But as Huawei Deputy Chairman Ken Hu put it at the company’s Connect conference in Shanghai, there is now a need for a more powerful form of computing to take humans forward from here on in:
Rule-based computing is no longer sufficient, and we are now moving into new areas where there are no clear rules to work with. So scientists came up with the statistical computing model, and we see this becoming mainstream, and five years from now we expect to see such systems consuming 80% of the total computing power seen around us.
When he talks about the fading away of rules-based architectures he has the growth of AI systems and applications very firmly in mind. He sees computing entering a new intelligent world with defining features such as a reliance on computing power for tasks such as training artificial intelligence and machine learning systems. He also sees computing becoming ubiquitous, with a strong trend towards computing at the edges of networks, which will include wearable devices and personal mobile clients:
With that in mind we need better cloud/edge co-operation that gives optimal performance and operations. It means that users get choices on what data is located in the cloud and where it is processed. And from our point of view, the bigger the challenge the bigger the opportunity.
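The shift Hu describes, from rule-based to statistical computing, can be sketched in a few lines of code. This is a hypothetical toy example for illustration only, not anything drawn from Huawei's platforms: a spam filter written first as explicit hand-coded rules, then as a model that learns its behaviour from labelled examples.

```python
from collections import Counter

def rule_based_is_spam(message: str) -> bool:
    # Rule-based computing: an expert writes the conditions by hand.
    # Transparent, but brittle as soon as the data drifts from the rules.
    banned = {"winner", "free", "prize"}
    return any(word in banned for word in message.lower().split())

def train_statistical(examples):
    # Statistical computing: behaviour is inferred from labelled data
    # rather than written down as rules.
    spam_counts, ham_counts = Counter(), Counter()
    for text, is_spam in examples:
        target = spam_counts if is_spam else ham_counts
        target.update(text.lower().split())

    def learned_is_spam(message: str) -> bool:
        # Score each word by how often it appeared in spam vs non-spam.
        words = message.lower().split()
        spam_score = sum(spam_counts[w] for w in words)
        ham_score = sum(ham_counts[w] for w in words)
        return spam_score > ham_score

    return learned_is_spam

examples = [
    ("claim your free prize now", True),
    ("you are a winner", True),
    ("meeting moved to friday", False),
    ("lunch on friday", False),
]
learned_is_spam = train_statistical(examples)
print(rule_based_is_spam("claim your free prize"))  # True
print(learned_is_spam("prize winner"))              # True
print(learned_is_spam("see you friday"))            # False
```

The point of the contrast is that the second classifier never encodes a rule; its answers come entirely from the statistics of the training set, which is why Hu argues such systems will soak up the bulk of future computing power.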
All change – hold very tight please
Some of those challenges will come in the form of new architectures, at both the platform and processor level, and this is where Huawei has been investing much of its R&D resources. It particularly sees AI as holding out the prospect of huge rewards, even in these early days when the time and resources that have to be applied to teaching AI systems the basic groundwork of whatever field they are to work in can be a major stumbling block.
The investments the company has made took their first tentative steps onto the stage at the Connect conference. They come in the form of new server architectures – plus the first examples of resultant hardware – and new operating environments to run on them. It has to be said, however, that these were just announcements. The nearest the delegates got to real product was the brief appearance of one board-based system aimed at partner systems integrators.
That being said, the other side of the coin is that Huawei is one of the first vendors to acknowledge, and announce plans for, the drastic changes set to come from the combination of 5G comms, AI and the distribution of compute resources – the effective virtualisation of the traditional data center. A key component of this is that the company now believes the existing dominant architecture of Intel's x86 processor family has reached its limits. Other architectural approaches provide the platform for the next steps.
In particular, the use of the ARM processor core architecture is now a fundamental component of the Huawei plan, and two new processors look to be key here. These are the Kunpeng and the Ascend, the former aimed at general-purpose applications and the latter dedicated to AI workloads. Both come with application development tools and a range of 'assistance' for systems integrators.
This does include the announcement of a loaded motherboard 'server' to help the SIs get started and at least prototype their developments. There seem to be no major plans, however, for Huawei to get into building a range of general-purpose systems hardware.
That being said, it did announce Atlas 900, a new machine based on the Ascend processor and aimed specifically at AI applications. One early use it sees as important is training AI systems on the background data, processes and everything else involved in the work they are intended to do.
Professor Gao Wen of Peking University told conference delegates that this technology was behind the work being done there on the Peng Cheng CloudBrain, an exascale AI supercomputing system targeting applications in intelligent health care, intelligent traffic management and smart financial management. Current systems are running at 100 PetaFlops, but the plan is to scale up to 1,000 PetaFlops.
The company also sees applications emerging as the need for significantly more, readily available compute power grows – particularly in areas where such power is currently uneconomical or insufficiently available, such as the continuous iteration of application development. Or as the Professor put it:
The days of waterfall development are fading away.
It is in the nature of conferences such as this that subjects get painted in broad brushstrokes or displayed in big pixels, so there is much to follow up on here. There will also be the obvious discussions and examinations of whether Huawei has got it right or other vendors are doing it better. But it does seem fair to suggest that it has set out the broad plan of the playing field on which the next stage of the game of 'getting ever-better value from ever-richer information' will be played.
The marker here is the notion that we are at the end of one era and the start of a new one. As someone might have said: 'Business computing is dead; long live business computing'.