Nvidia GTC17 sets the stage for our AI-augmented future

Kurt Marko, May 16, 2017
Summary:
Although the showpiece of Nvidia GTC17 was the new V100 GPU, software dominated an event that sets the stage for our AI-augmented future

Nvidia CEO Jensen Huang shows off the V100

Even though the main event invariably unveils new chips that push the known state of semiconductor process, design and packaging technology, it’s notable that most of the activity at Nvidia's GPU Technology Conference (GTC) centers on software. What was once a venue for showing off Hollywood special effects and the latest video gaming titles has in recent years evolved into one of the premier forums for the latest research and products in AI and machine learning (ML).

The explosive growth of GTC, with attendance tripling in five years to 7,000 as the number of developers using Nvidia's CUDA GPU platform skyrocketed 11-fold to over half a million, is a testament to the use of GPUs in an ever-broader range of software. The flood of interest is a direct result of the ability of GPUs to accelerate the inherently parallel algorithms used in building ML applications, particularly those using deep learning techniques like neural networks.
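To see why, it helps to look at the workload in miniature. The sketch below is purely illustrative (PyTorch and the matrix sizes are my assumptions, not anything Nvidia demonstrated): it times the same large matrix multiplication, the operation that dominates neural network training, first on a CPU and then, if one is present, on a CUDA GPU where thousands of cores attack it in parallel.

```python
# Minimal sketch: the dense matrix multiply at the heart of neural-network
# training, run on CPU and (if available) on a CUDA GPU. Library choice
# (PyTorch) and sizes are illustrative assumptions only.
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

start = time.time()
torch.matmul(a, b)                      # CPU execution
print(f"CPU: {time.time() - start:.3f}s")

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()   # copy operands to GPU memory
    torch.cuda.synchronize()            # exclude transfer time from the timing
    start = time.time()
    torch.matmul(a_gpu, b_gpu)
    torch.cuda.synchronize()            # wait for the asynchronous kernel to finish
    print(f"GPU: {time.time() - start:.3f}s")
```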

The problem with such algorithms is that they are voracious consumers of data with a bias towards complexity – the larger the data set and more computationally intensive the approach, the more accurate and useful the results. For Nvidia, this is a good problem to have, since the company specializes in hardware built to tackle parallelizable complexity. Consequently, GTC has become relevant to more disciplines and businesses every year.

AI gets down to business

The increasing diversity of applications was manifest at this year's event. The talks were less about clever AI tricks like automatically tagging cats in a photo library and more about using ML to solve or accelerate business problems, including price optimization, anomaly detection, malware filtering, medical image classification and conversational UI construction. Whether in cars, robots, CT scanners or cyber security appliances, AI and ML have become essential ways to enhance functionality, create new features and enable new business models.

Take Jet.com, the e-commerce startup Walmart acquired last year to compete with Amazon as shopping rapidly moves online. Jet is built around an extremely sophisticated pricing model that seeks to minimize the cost of an entire basket of goods supplied by different merchants, each with its own distribution centers and shipping charges. The goal is both to provide lower average prices through merchant competition and to increase the size of the basket through volume discount incentives.

A presentation by Daniel Egloff, managing director at QuantAlea and a partner at InCube, which consulted with Jet on new pricing software, shared details of how the basket-optimization problem can quickly explode into a universe of combinations. Finding the minimum overall price of even a relatively small basket of 8-10 items would take years using brute force calculation of all possible combinations – an eternity in a situation where buyers are ready to ditch a shopping cart at the slightest pause in the checkout process. Jet solved the problem using some ingenious algorithm development and the power of GPUs to process millions of calculations in parallel. The net result is what Egloff estimates as discounts averaging 5% and reaching 15% for larger transactions. Multiply that by a total e-commerce market approaching half a trillion dollars and it adds up to billions in potential savings.
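A rough sketch shows how quickly the arithmetic gets out of hand. The prices and cost function below are toy placeholders of my own, not Jet's actual model; the point is only that brute-force enumeration of merchant assignments grows exponentially with basket size, which is why clever algorithms and massively parallel hardware matter.

```python
# Hedged sketch of why naive basket pricing explodes: with m candidate merchants
# per item and n items, brute force must price m**n possible assignments.
# All numbers and the cost function are illustrative placeholders.
from itertools import product

# At modest real-world scale the search space is already astronomical.
print(f"{20 ** 10:,} assignments for 10 items with 20 candidate merchants each")

# Exhaustive search is only practical on a toy instance.
merchants, items, shipping_fee = 4, 6, 5
price_table = [[10 + (item * m) % 7 for m in range(merchants)]
               for item in range(items)]

def basket_cost(assignment):
    """Toy cost: item prices plus one flat shipping fee per distinct merchant used."""
    return (sum(price_table[i][m] for i, m in enumerate(assignment))
            + shipping_fee * len(set(assignment)))

best = min(product(range(merchants), repeat=items), key=basket_cost)
print("Cheapest assignment (merchant index per item):", best,
      "cost:", basket_cost(best))
```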

ML applications permeate every industry

Elsewhere, GPU-powered algorithms enable software that can save lives or stop previously unknown cyber attacks. For example, an entire GTC track was devoted to medical applications where ML can assist pathologists by more rapidly and accurately identifying cancer cells in blood or tissue samples, isolating microscopic tumors on a CT scan and mapping a minimally invasive surgical remedy. Other sessions explored how ML can keep drivers from entering an intersection when someone else runs a red light, or block ransomware exploiting a zero-day defect that might incapacitate a hospital's computer systems.

This was topical: on Friday, a massive ransomware attack significantly disrupted operations at the UK's National Health Service, causing hospitals to turn away patients, reschedule surgeries and divert ambulances. Such an attack, which used techniques believed to originate at the NSA, might have been blocked had the victims been running software from Deep Instinct, an Israeli startup whose deep learning software can detect previously unseen malware in real time and which won the Nvidia Inception Award for most disruptive startup.

I'll have more details in the weeks and months to come about these and other scenarios where AI, using ML techniques such as predictive analytics, self-teaching deep learning neural networks and cognitive computing, is changing business, including details on Deep Instinct and how it could have thwarted the latest attack even on older, unpatched Windows systems. However, the common thread that brought the developers of such disruptive software together is Nvidia's GPU engines, which enable algorithms that would have been computationally infeasible just a couple of years ago.

Nvidia's product barrage

At GTC, Nvidia co-founder and CEO Jensen Huang used his 2-hour-plus keynote to take attendees on a tour of the state of semiconductor manufacturing and process scaling, along with recent developments in ML and the rise of GPU computing, which he sees resulting from the confluence of processor and system architecture, self-learning algorithms and new applications. He then unleashed a barrage of product announcements headlined by Nvidia's next-generation GPU.

The so-called Volta-class V100, with an R&D price tag of $3 billion, is a testament to chip design and fabrication technology and arguably the largest, most complex device ever made, with 21 billion transistors on an enormous die about 93% the size of the Apple Watch display. Featuring a new architecture optimized for AI and deep learning neural networks, the V100 is about five times as fast as its predecessor, the Pascal-generation P100, and tens of times faster than conventional CPUs for both ML training and inference (in part due to a new runtime and optimizer called TensorRT). Nvidia announced several new systems using the V100, including the rack-mounted DGX-1 with 8 GPUs, the water-cooled, deskside DGX Station tower with 4 GPUs, and the Open Compute HGX-1, designed with Microsoft and targeting large cloud service providers.

The fastest hardware is of little benefit unless developers can use it, so Nvidia also addressed a persistent headache for AI developers: building and maintaining an ML software development stack. The company's Nvidia Cloud Platform is a containerized, pre-integrated stack that includes the major deep learning frameworks such as TensorFlow and Caffe2, along with the cuDNN development library and the new TensorRT inference optimizer, which can significantly increase the execution speed of large neural networks. While Nvidia will offer the Cloud Platform as a PaaS, it will also be available on AWS and Azure, with others sure to follow.
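For a sense of what "pre-integrated" buys a developer, the toy example below assumes nothing more than a working GPU-enabled TensorFlow install of the kind such a container provides, and goes straight to defining and training a small network; the model and synthetic data are placeholders of my own, not part of Nvidia's stack.

```python
# Hedged illustration of what a pre-integrated stack lets a developer skip to:
# writing model code rather than assembling frameworks, cuDNN and drivers.
# The model and data are toy placeholders, not anything from Nvidia's platform.
import numpy as np
import tensorflow as tf

# Synthetic binary-classification data stands in for a real training set.
x = np.random.rand(1000, 20).astype("float32")
y = (x.sum(axis=1) > 10).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# If the container exposes a GPU, TensorFlow places these ops there automatically.
print("GPUs visible:", tf.config.list_physical_devices("GPU"))
model.fit(x, y, epochs=3, batch_size=32, verbose=1)
```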

My take

Huang's conclusion that we're entering "a new era of AI computing" isn't mere hyperbole. The scores of companies and thousands of attendees at GTC illustrate a depth of R&D and capital commitment to new types of software that can solve problems that are intractable with traditional sequential programming. Whether it's SAP building a system that can estimate advertising effectiveness from millions of social media and online interactions or a small startup like Deepgram using deep learning to analyze call center records to improve customer interactions, ML algorithms are changing businesses in every segment.

The excitement and burgeoning attendance at GTC notwithstanding, the AI revolution is just beginning and many hurdles remain, including the extreme complexity of AI software and the concomitant expertise and computational overhead required to build it. Nvidia is using creative design and, with foundries like TSMC, state-of-the-art fabrication technology to rapidly address the latter with annual performance improvements that return to the exponential rates of Moore's Law. Likewise, the proliferation of GPU-based cloud services, both low-level IaaS instances and fully packaged functions like image recognition, voice-to-text translation and conversational chatbot engines, is democratizing access to sophisticated features.

Machine learning is no longer an academic curiosity and organizations that fail to explore its potential to enable new products, services and business insights will increasingly find themselves at a strategic disadvantage to those who choose to jump on the AI technology train.
