2018 - when vendors built the foundations for AI while enterprises looked for useful applications
Summary: A look back over a year of innovation in AI
Disentangling hype from reality shows that most ‘AI’ looks more like statistics than intelligence
The renaissance of AI, rescued from the ash heap of symbolic reasoning and expert systems by the rise of deep learning algorithms, has fueled no end of hyperbolic predictions and dystopian narratives. Much of the exaggeration stems from the moniker itself, since despite the many impressive achievements of today’s incarnation of AI, it bears a closer resemblance to advanced statistics than it does to cognitive intelligence. As I discussed in this article, there’s a growing backlash and active debate among academics as to whether machine and deep learning are ‘intelligent’ at all or merely clever ways of analyzing the massive troves of data now available:
An emerging debate among AI researchers is whether current machine and deep learning techniques amount to a fundamentally new form of algorithmic reasoning or are merely an extension of longstanding mathematical techniques like descriptive statistics and curve fitting.
As I detail, influential computer scientist Judea Pearl is in the latter camp, saying in an interview, “As much as I look into what’s being done with deep learning, I see they’re all stuck there on the level of associations. Curve fitting.” As I put it:
In essence, despite their cerebral inspiration, deep learning algorithms amount to another, albeit more powerful data analysis tool that is particularly adept at handling vast amounts of unstructured data.
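To see why the curve-fitting comparison has bite, consider a deliberately trivial sketch (my own illustration in Python with NumPy and scikit-learn, not anything from Pearl’s work): a classical polynomial regression and a small neural network are handed the same noisy samples, and both simply learn an association between inputs and outputs.

```python
# A toy illustration of the "curve fitting" argument: a classical polynomial
# fit and a small neural network both learn a mapping from noisy samples of
# the same underlying function: association, not reasoning.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
y = np.sin(x) + rng.normal(scale=0.1, size=x.shape)  # noisy "observations"

# Classical statistics: fit a 5th-degree polynomial.
poly = np.polyfit(x, y, deg=5)
poly_pred = np.polyval(poly, x)

# Deep(ish) learning: a small multilayer perceptron fit to the same data.
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
mlp.fit(x.reshape(-1, 1), y)
mlp_pred = mlp.predict(x.reshape(-1, 1))

# Both models end up approximating the same curve from patterns in the data.
print("polynomial fit error:", np.mean((poly_pred - np.sin(x)) ** 2))
print("neural network error:", np.mean((mlp_pred - np.sin(x)) ** 2))
```

Neither model “understands” the sine function; each just fits a curve to observations, which is precisely the point of the critique.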
Nevertheless, my conclusion then is still worth remembering, particularly in light of this article’s theme:
Although I frequently use the term AI, it’s always with reluctance and out of convention and not conviction. Much like IoT and cloud, epithets that are equally abused and imprecise, AI is a widely understood, convenient shorthand for a set of techniques that are increasingly powerful, yet fundamentally distinct from human intelligence and rationality.
While today’s AI doesn’t meet the philosophical definition of intelligence, it is fabulously useful for a growing catalog of problems and is fueling intense competition and innovation in both software and hardware.
Medicine proves to be a rich target for AI-enhancement
The biggest advances in millennial-generation AI have come via deep learning: multilayered algorithms loosely modeled on biological neural networks that are particularly adept at pattern matching and image analysis. Although early deep learning demonstrations typically involved tagging simple objects in the type of quotidian photos that get shared on social media, more significant uses come from the fields of physical surveillance, aerial and satellite mapping and medical imaging.
Using AI to assist radiologists in interpreting CT and MRI scans is an area of active research that I covered a year ago; however, a more exciting prospect involves embedding AI in medical devices to enhance their performance by algorithmically refining the raw data. As I detailed in this article about AI-enhanced medical devices:
An exciting area of research that illustrates the promise of AI augmentation is what I call AI-enhanced instrumentation. The concept is similar to techniques like high-dynamic range (HDR) photography, digital remastering of recordings or even film colorization in that one or more original sources of data are post-processed and enhanced to bring out added detail, remove noise or improve aesthetics. When applied to radiology and pathology, the concept entails taking either real-time or recorded images and processing them using DL [deep learning] algorithms and possibly CGI (like ray tracing, etc.) to produce an enhanced image that highlights features of interest like cancer cells or creates a more photorealistic, lifelike result.
The article discusses an NVIDIA project that is already yielding impressive results, such as the three-dimensional reconstruction of pumping heart valves from sonogram data. I consider these medical applications harbingers of what could be achieved by applying the same concepts to other disciplines:
When the concept of AI post-processing is applied to other industries it means that organizations are sitting on a goldmine of data waiting to be algorithmically refined. Likewise, the idea of augmenting existing imaging and other sensors with DL and AR has myriad applications, for example in physical security (real-time face recognition), predictive maintenance (failure detection), retail (virtual modeling of out-of-stock clothing/sizes), traffic and crowd control (signal, escalator, etc. adjustments based on real-time conditions) and QA (physical defect detection) among others.
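To make the post-processing idea concrete, here is a toy sketch in Python (my own illustration, not NVIDIA’s method; the simulated signal, noise model and network size are all invented for the example). A small model is trained on pairs of raw and reference instrument traces and then used to enhance a fresh raw recording:

```python
# Toy "AI-enhanced instrumentation": learn a denoiser from pairs of raw
# (noisy) and reference (clean) sensor traces, then post-process new raw data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def make_trace(n=2000):
    t = np.linspace(0, 8 * np.pi, n)
    clean = np.sin(t) + 0.3 * np.sin(3 * t)      # stand-in for a clean signal
    raw = clean + rng.normal(scale=0.4, size=n)  # what the instrument records
    return raw, clean

def windows(signal, width=15):
    # Sliding windows of the raw signal; the model predicts the clean center.
    return np.stack([signal[i:i + width] for i in range(len(signal) - width)])

raw, clean = make_trace()
X = windows(raw)
y = clean[7:7 + len(X)]  # clean value at each window's center (width // 2 = 7)

denoiser = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
denoiser.fit(X, y)

# Post-process a new recording from the same (simulated) instrument.
new_raw, new_clean = make_trace()
enhanced = denoiser.predict(windows(new_raw))
target = new_clean[7:7 + len(enhanced)]
print("raw error:     ", np.mean((new_raw[7:7 + len(enhanced)] - target) ** 2))
print("enhanced error:", np.mean((enhanced - target) ** 2))
```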
Just as we’re seeing in phones, where the addition of AI accelerators to the system processor has enabled amazing software features such as Apple’s Face ID, voice assistants and computational photographic enhancement, I believe the use of deep learning and other AI algorithms in all manner of industrial equipment will become commonplace:
We’re on the cusp of another generation of software-enhanced hardware in which AI and other computationally-intensive techniques dramatically improve the capabilities of existing devices. It’s time to dream big about how giving your sensors a brain transplant can enhance your business.
Intel won’t be left out of the AI infrastructure gold rush, but NVIDIA is also threatened by custom-designed AI processors
The AI renaissance catalyzed by machine/deep learning algorithms was almost entirely enabled and promoted by NVIDIA, which repurposed and redesigned its GPUs as AI accelerators. The inherently parallel processing used for graphics rendering was a close match for the calculations used in neural networks and other machine learning algorithms, and until recently NVIDIA had the market for AI hardware to itself.
The situation started to change a few years ago when Google concluded that GPUs were overkill for many of its algorithms, most of which are built using the TensorFlow framework (which it initially developed, but has since open sourced). Google’s custom Tensor Processing Units (TPUs), currently on their third generation, provide GPU-level performance on a common benchmark at a significantly lower cost. However, as I detailed in two articles (part two here) on the race to develop custom AI processors, Google’s example has spawned more than half a dozen competitors vying for a slice of a growing pie that has so far gone to GPUs. Referring to conclusions in a report on AI hardware by Deloitte:
Deloitte is correct: while the overall market for AI acceleration hardware will continue to grow, GPUs can no longer count on gleaning all of it as new types of custom chips, both dedicated (ASICs) and reprogrammable (FPGAs), emerge as better options for certain workloads and deployment scenarios.
Indeed, after summarizing the efforts of companies like Microsoft, Baidu, Intel and several startups, I conclude that we are still in the early days of AI hardware evolution, not unlike the era that spawned the PC, when at least three viable microprocessor architectures vied for supremacy:
The emergence of new types of machine learning algorithms, high-level software frameworks and cloud services that hide architectural details behind an API abstraction layer could rapidly change the competitive landscape. Indeed, frameworks like TensorFlow, Caffe2, MXNet and others still uninvented could drain the competitive moat that NVIDIA believes it has built via the CUDA platform and APIs.
Indeed, Intel could pose the biggest challenge to NVIDIA’s dominance. As I detailed after an Intel event updating its data center strategy:
NVIDIA’s expertly crafted narrative that GPUs are behind the current wave of AI progress is like all tech legends: enough truth to be believable but omitting key facts that round out the story. As Intel explains, the biggest sin of omission is the conflation of AI software with GPU hardware, an oversimplification that glosses over the messy reality that in AI workloads, one size doesn’t fit all. As Naveen Rao, the head of Intel’s AI products group, explains in this blog accompanying his presentation at the DCI Summit, recent successes in AI rest on three pillars: improved and maturing software tools, better hardware (in many forms, not just GPUs) and a thriving research and development ecosystem, often open source.
As I discussed, Intel has made several key acquisitions to bolster its AI ambitions, including Nervana (whose founder, Rao, is now head of Intel’s AI products), Movidius, Mobileye and Vertex.ai. Intel is still stitching these together into a coherent strategy, but since its goal is to provide AI accelerators for every situation, whether embedded devices or cloud services, Intel faces hurdles NVIDIA doesn’t:
Such an expansive portfolio is necessary given the diversity of AI workloads and the resource limitations of devices running AI software. However, Intel’s strategy is spread across several processor architectures and instruction sets, making its work on software libraries and optimizations like the nGraph compiler all the more critical. Such programmatic heterogeneity is a complexity NVIDIA doesn’t share, with its lineup of large (Volta), small (Jetson) and vehicular (Xavier) GPUs all sharing the CUDA programming platform and APIs.
Unfortunately, the company’s recent Architecture Day event provided few additional details. Nonetheless, I believe Intel has the right approach in trying to hide chip-level architectural details behind layers of software abstraction. As I concluded last summer:
For business users of AI, it means favoring approaches that exploit abstraction layers such as development frameworks and higher level cloud services that insulate you from the implementation details and make it easier to rapidly exploit technological advances.
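As a minimal sketch of what such abstraction buys you (assuming TensorFlow/Keras as the framework; the model and data are invented for the example), note that nothing in the script below names a chip, instruction set or driver, so the same code runs on a CPU, an NVIDIA GPU or another supported accelerator:

```python
# A minimal sketch of the abstraction-layer argument: nothing in this script
# names a chip, instruction set or driver. The framework decides whether the
# math runs on a CPU, a GPU or another supported accelerator.
import numpy as np
import tensorflow as tf

print("Accelerators visible to TensorFlow:", tf.config.list_physical_devices("GPU"))

# Same model definition regardless of the hardware underneath.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Synthetic training data: predict the sum of 20 random features.
X = np.random.rand(4096, 20).astype("float32")
y = X.sum(axis=1, keepdims=True)
model.fit(X, y, epochs=3, batch_size=64, verbose=0)
print("final loss:", float(model.evaluate(X, y, verbose=0)))
```

Swapping the underlying hardware changes performance, not the program, which is exactly the insulation the conclusion above recommends.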
Putting AI to work for IT - new applications show promise
Most applications of AI are designed to enhance existing business processes, enable new services or provide business leaders with insights by mining the firehose of data streaming into their organizations. In these situations, IT is an enabler of AI’s benefits; however, the technology is also being applied within IT itself to increase operational efficiency, improve system and application performance, streamline troubleshooting and strengthen security.
2018 saw several compelling examples of AI in IT, a trend that some have termed AIOps. While wary of firms AI-washing their products to capitalize on the hype, I detailed some legitimate uses of AI in IT in a recent column. Examples include advanced network monitoring and event correlation, threat detection and mitigation to improve network and application security, and application performance management (APM). In my view:
The glut of data finding its way into every corner of the enterprise is only useful when it is analyzed, summarized, intelligently extrapolated and ultimately, acted upon. … The rapidly maturing category of ITOA and AIOps products can tremendously improve all aspects of IT operations and service delivery from the lowest layer of network infrastructure to the user experience with individual applications.
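As a purely illustrative sketch of the kind of analysis these products automate (my own toy example; the simulated metrics and the IsolationForest detector are stand-ins rather than any vendor’s approach), the snippet below flags anomalous minutes in a day of infrastructure telemetry:

```python
# Illustrative-only sketch of the kind of analysis AIOps tools automate:
# flag anomalous intervals in infrastructure metrics so operators don't have
# to eyeball dashboards. Real products correlate far richer event streams.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)

# Simulated per-minute metrics for one day: [cpu_pct, latency_ms, error_rate]
metrics = np.column_stack([
    rng.normal(40, 5, 1440),
    rng.normal(120, 15, 1440),
    rng.normal(0.5, 0.1, 1440),
])
# Inject a 20-minute incident: CPU and latency spike, errors climb.
metrics[600:620] += np.array([45, 300, 4.0])

detector = IsolationForest(contamination=0.02, random_state=0)
labels = detector.fit_predict(metrics)        # -1 marks outliers
incident_minutes = np.where(labels == -1)[0]
print("anomalous minutes:", incident_minutes[:10], "...")
```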
Intent-based networking (IBN) is another area where the confluence of IT infrastructure virtualized as software abstractions and advanced algorithms that analyze and program those systems has tremendous potential to improve IT operations. IBN has been championed by network powerhouse Cisco for several years, but I profiled an innovative startup, Apstra, that has perhaps the most advanced and mature IBN implementation available so far. I note that:
IBN is a reaction to extreme complexity and data overload and exemplifies the spread of system modeling with programmatic implementation to infrastructure design and deployment. When paired with sophisticated statistical and ML analysis of aggregated device data, IBN should significantly reduce admin overhead while improving network reliability, security and availability.
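To make the declarative idea behind IBN concrete, here is a minimal reconciliation sketch (my own toy model in Python; the device names, VLANs and checks are invented and bear no relation to Apstra’s or Cisco’s implementations). The operator declares the intended end state, and the software reports wherever observed state has drifted from it:

```python
# Minimal sketch of the declarative idea behind intent-based networking:
# operators state the intended end state, software renders and validates it,
# and telemetry is continuously checked against the intent.
intent = {
    "leaf1": {"vlans": {10, 20}, "uplinks": 2},
    "leaf2": {"vlans": {10, 20}, "uplinks": 2},
}

# What telemetry says the devices actually look like right now.
observed = {
    "leaf1": {"vlans": {10, 20}, "uplinks": 2},
    "leaf2": {"vlans": {10}, "uplinks": 1},      # drifted from intent
}

def reconcile(intent, observed):
    """Yield (device, issue) pairs wherever observed state violates intent."""
    for device, wanted in intent.items():
        actual = observed.get(device, {})
        missing_vlans = wanted["vlans"] - actual.get("vlans", set())
        if missing_vlans:
            yield device, f"missing VLANs {sorted(missing_vlans)}"
        if actual.get("uplinks", 0) < wanted["uplinks"]:
            yield device, f"only {actual.get('uplinks', 0)} of {wanted['uplinks']} uplinks"

for device, issue in reconcile(intent, observed):
    print(f"{device}: {issue}")
```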
Cisco’s imprimatur, and its subsequent release of actual IBN products, not just slideware, has made IBN safe for enterprise use; however, as I conclude:
Still, caution is required since IBN technology is quite immature and the market very fluid; thus organizations should be in no hurry to adopt anything. Nevertheless, IBN should be on every IT organization’s radar and is worth investing time to understand its capabilities, assess pilot-test opportunities and develop a timeline for eventual deployment.
It will be a tall order, but here’s wishing that 2019 is full of just as many innovative product developments, over-hyped-concepts-turned-useful and intriguing business applications of technology.