In an era when it is unfashionable for even mega businesses to be responsible for anything other than the management of other people’s activities and investments, Huawei is something of a throwback. It makes things, often from the ground up, having first designed them, and before that having also designed and manufactured some of the core components on which those products are based.
Huawei is pitching for a place in the commercial start-up of AI as the next step forward in business information management and exploitation. It is still a marketplace where being able to produce as much as possible of the systems and software required is probably an advantage, especially if, as is the case with Huawei, a company fundamentally believes that current technologies, including Nvidia’s Graphics Processing Unit (GPU) technology and Intel’s long-standing x86 processor family, are now insufficient to meet the new needs coming along.
It was no surprise, therefore, to see that new products – or recently introduced ones – had strong representation at last week's Huawei Connect conference in Shanghai. The overall strategy of the company is to focus on four key areas.
At the highest level is architectural innovation, with continued development of the Da Vinci processor architecture, which is itself based on the Arm processor architecture already dominant in many smartphones and mobile client systems. Huawei, however, is looking further afield than just that market, towards what it calls ‘all-scenario’ processors. That should ultimately yield a common processor architecture extending all the way out to servers.
There is a four-piece line-up of processors from the company, ranging from the Kirin for smart devices and the Honghu for smart screens at the low end, through the Kunpeng for general-purpose computing, to the latest addition, the Ascend processor family, for AI-specific applications.
In a marketplace that could easily become messy in terms of who does what, a specific component of the company strategy is to keep clear business boundaries. It won’t, for example, sell processors directly. Instead, it will provide their performance and capabilities to its direct customers in the form of cloud services, while its partners can get real components, particularly if they prioritise the development of integrated solutions.
This is because the final stake in the ground is around building an open ecosystem. Over the next five years, Huawei plans to invest another US$1.5 billion in its developer program, with the aim of expanding it to support five million developers worldwide. In the company’s view, those developers are the route to the next generation of intelligent applications and solutions.
AI in three layers
When it comes to AI the key development from the company is the introduction of a new, three-layer architecture aimed at creating intelligent IP networks. It pulls together intelligent connectivity, intelligent operations and maintenance, and intelligent learning, and includes the Engine AI Turbo product series, the iMaster NCE autonomous driving network management and control system, and the iMaster NAIE network AI platform – claimed by the company as an industry first.
The company has incorporated AI capabilities into the various layers of IP networks, enhancing the intelligence level of those networks and accelerating their development in support of autonomous driving networks.
In the data center interconnect (DCI) field, there were four new OptiX family products on show. The OptiXtrans DC908 is claimed to be the industry's first intelligent DCI product for global data center customers. It uses a simplified architecture and supports one-click service provisioning. With 48 Tbit/s of transmission capacity over a single optical fibre, it is said to reduce the number of fibre connections by 90% per site.
The OptiXtrans E9600 is an intelligent all-optical transmission device targeting industrial applications such as energy, electric power, transportation, education and finance. A key feature is industry-grade security, which should provide the reliability needed for long-haul production data transmission. At the other end of the network, OptiXaccess and OptiXstar are optical access and optical terminal products designed for enterprises looking to reshape campus networks with optical fibre.
In the all-important area of processors, the key player now is set to be the Kunpeng 920, the all-purpose player in the company’s processor armoury. It is billed as one of the fastest Arm CPUs available, built around 64 custom Arm cores working with an 8-channel DDR4-2933 memory controller, and it also comes with on-chip I/O support for PCIe Gen4 and CCIX. Huawei points to these as examples of where competitors are now falling behind: AMD’s EPYC Rome processor will support PCIe Gen4 but not CCIX, while Intel’s Cascade Lake will support neither.
The company will have one server line utilising the Kunpeng processors. This is the TaiShan line of three servers that cover the 2-node and 4-node ends of the market, plus a storage server in either 40- or 72-bay configurations.
How to train your AI
At the other end of the scale comes the Ascend 910 processor, designed specifically for work in dedicated AI applications. This is the latest member of the Ascend-Max chipset series, first talked about last year. For the techies, its performance looks impressive: it delivers 256 TeraFLOPS on half-precision floating point operations, and 512 TeraOPS when running integer precision calculations. Power consumption is a frugal 310W.
The Ascend processors get a starring role in the big systems announcement at the conference - the Atlas 900. This uses thousands of them to create a platform dedicated to training AI applications. The company claims it takes only 59.8 seconds for it to train ResNet-50, said to be the gold standard for measuring AI training performance. This beats the previous fastest performance by 10 seconds.
The company sees it bringing new possibilities to both scientific research and business innovation, and plans to make it available both to large enterprises and as a cluster service on Huawei Cloud. This will make it accessible to a far wider range of business users. Based on Huawei’s FusionServer G series heterogeneous servers, the Atlas platform pools resources such as GPUs, HDDs, and SSDs, and provisions hardware resources on demand to suit the needs of specific service models.
Talking of Huawei Cloud, this too is getting a significant pump up, with 43 new AI cloud services, powered by Ascend processors, now available. These include AI Elastic Cloud Servers (ECSs), designed to double performance for applications such as AI inference, AI training, and autonomous driving training.
ImageSearch and Content Moderation now deliver higher performance at 30% of the cost, with the gains coming from dedicated hardware image acceleration operators, architecture reconstruction, and model compression. A new Knowledge Graph aimed at enterprise use enables one-stop knowledge extraction, mapping, convergence, and full lifecycle management.
The company’s new autonomous driving cloud service, Octopus, stores petabytes of drive test data and, it claims, supports retrieval of hundreds of millions of data records in seconds. Built on the Huawei CarbonData big data platform, it provides automatic data processing, training, and simulation throughout the entire autonomous driving development process. The company claims this accelerates the iteration frequency of autonomous driving algorithms from months to weeks and helps vehicle enterprises quickly develop autonomous driving products.
Yes, this is a list of products, but if Huawei is right and AI and the edge do really require some next generation tools with which to build them effectively – in much the same way that steam worked well for railways but petro-chemicals did the trick for road, and electricity looks best for both now – then some company has to provide those tools. It is a brave game the company is playing, and it certainly will not win it all. Others will develop better tools, but it all has to start somewhere.