AI at the edge gets in the SWIM as smart devices spew untapped data

Kurt Marko, April 4, 2018
Summary:
AI at the edge has particular problems that are not easy to mitigate. SWIM thinks it has an answer, but it is playing against some mega-competitors.

SWIM's model (via SWIM)

It’s easy to see AI as a data center problem, particularly after seeing NVIDIA introduce a system like the DGX-2 that’s capable of chewing on machine learning models with the same facility as several racks full of conventional machines.

However, as I wrote last week, AI is rapidly encroaching on every industry, business process, and type and size of device. That's understandable for consumer devices, where the primary requirements are portability and mobility, meaning they must provide intelligence even when disconnected from the omniscient cloud.

These are also considerations in business applications, whether on the manufacturing floor, in civic infrastructure or in transportation. However, a more pressing problem is the immense amount of data devices now produce and the infeasibility of moving it all to central AI supercomputers for analysis.

Fortunately, there are many problems that can be solved by adding AI smarts, processing data and making decisions locally. There’s a rich market awaiting the companies that develop ways of solving these problems efficiently. SWIM.AI, which recently had its coming-out party, is one such company.

The limited history of AI at the edge is characterized by scenarios in which a device processes data streaming from its sensors in real time, using previously trained deep learning models to identify patterns. Examples include predictive maintenance, where an engine’s sensors can detect impending track or wheel failures, or precision agriculture, which uses drone imagery and other sensor data to predict crop yields and optimize the application of fertilizer and pesticides.

These typically use models trained on data aggregated from a multitude of devices and locations, spanning numerous incidents and environments, from which the models learn behavior patterns and subsequently predict future events.

Training these models is the task of systems like the DGX-2. Once developed, models are compiled (perhaps using the TensorRT technology I discussed last week) and deployed to edge devices, where the computational load of model inference is small enough to be feasible on streaming data using relatively modest hardware. SWIM believes there’s a category of problems in which such centralized training isn’t necessary and that local optimization can provide results that are more than good enough.
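To make the train-centrally, infer-locally pattern concrete, here is a minimal sketch using TensorFlow Lite, one common runtime for compiled models. The model file (failure_detector.tflite), its input shape and the read_sensor() helper are hypothetical stand-ins, not anything SWIM or NVIDIA ships:

```python
# Sketch of edge inference on streaming data with a pre-trained, compiled
# model. The .tflite file and sensor helper are hypothetical placeholders.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="failure_detector.tflite")
interpreter.allocate_tensors()
input_idx = interpreter.get_input_details()[0]["index"]
output_idx = interpreter.get_output_details()[0]["index"]

def read_sensor():
    """Stand-in for reading one window of vibration/temperature samples."""
    return np.random.rand(1, 16).astype(np.float32)  # assumed input shape

for _ in range(100):                              # stand-in for an endless stream
    window = read_sensor()
    interpreter.set_tensor(input_idx, window)
    interpreter.invoke()                          # inference only; no training here
    score = interpreter.get_tensor(output_idx).ravel()[0]
    if score > 0.9:                               # arbitrary alert threshold
        print("Impending failure predicted; flag for maintenance")
```

All the heavy lifting, collecting fleet-wide data and training the model, happened elsewhere; the device only evaluates the frozen model.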

AI without the supercomputer: can local modeling be good enough?

The standard practice of centralized training with localized inference still requires collecting, aggregating and analyzing vast troves of device data.

Furthermore, when wrestling with problems like cybersecurity, autonomous vehicles or logistics optimization, in which conditions on the ground regularly change, models must be retrained and redeployed. Ergo, the data collection, transmission, aggregation and analysis problem never ends.

It would be far easier if each device could create useful predictions and act on them, individually, using local data and processing.

Mobile devices that incorporate AI for things like facial recognition, camera scene processing and smart assistant wake word detection typically use pre-trained models that are static: build a big enough facial database and you won’t need to update the model.

Most commercial examples of AI at the edge currently work the same way; however, data collection is more difficult, since the events of interest, such as equipment failures, are rare, not easily simulated and highly dependent on operating conditions.

These require an enormous number and variety of samples to create useful models. Exacerbating the difficulty is the fact that most industrial devices don’t have the computational horsepower of an iPhone or Galaxy, meaning standalone models must be very simple, or devices must be capable of streaming data that can be centrally analyzed in real time.

SWIM brings real-time intelligence to the edge

SWIM was founded in 2015 and had been operating under the radar until this week, when it announced its EDX product and significant additions to its leadership, including new CTO Simon Crosby. Crosby co-founded XenSource, one of the early hypervisor vendors, later acquired by Citrix, and subsequently founded Bromium, which uses virtualization technology to secure client devices.

Dislodging Crosby from his previous startup, where he remains an advisor, and adding new chief marketing and product officers with ample tech bona fides demonstrates that SWIM is ready, in the words of co-founder Rusty Cumpston, to “Shape the future of real-time analytics and distributed enterprise applications.”

SWIM takes a different approach, running models locally while simultaneously streaming data and device metadata to a so-called digital twin, whose data can be aggregated and included in a more comprehensive analysis of an entire device environment. SWIM contrasts its approach with centralized AI modeling in this blog post (emphasis added):

Edge Computing flips this model on its head. Data is processed as close to the source as possible, while higher order computations (for example, aggregate statistics) are performed at whichever level in the application hierarchy makes the most sense (from an efficiency perspective). In theory, only the highest level computations would then occur in the cloud. But in practice, the entire edge-based application functions as a single cohesive cloud. The difference is that the edge-based architectures optimize for proximity, and therefore minimize incurred latency.

Because sensor data is processed on/near the physical edge devices, applications can access structured, real-time data streams within the edge network with minimal latency. ... Time series data retains the critical element of “timeliness,” meaning that proactive measures can be informed by Machine Learning prediction models in real-time, compared with post hoc batch data analysis which could be stale by minutes or hours after central processing.
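The hierarchy the quote describes is easy to illustrate. The toy sketch below (my own, not SWIM code) shows each edge node reducing its raw readings to a small summary, with only those summaries travelling up the hierarchy for the higher-order computation:

```python
# Each edge node aggregates locally; only compact summaries leave the edge.
raw_readings = {
    "node-a": [3.1, 2.9, 3.4],   # hypothetical per-node sensor values
    "node-b": [5.0, 4.8],
}

def edge_aggregate(values):
    """Runs on the edge node, right next to the data source."""
    return {"count": len(values), "total": sum(values)}

summaries = {node: edge_aggregate(vals) for node, vals in raw_readings.items()}

# The parent level (ultimately the cloud) combines summaries instead of
# ingesting every raw sample over the network.
count = sum(s["count"] for s in summaries.values())
total = sum(s["total"] for s in summaries.values())
print(f"fleet mean across {count} readings: {total / count:.2f}")
```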

The areas SWIM targets initially, like traffic management, other smart-city applications and predictive maintenance, require actual, not quasi-real-time, performance. As it notes in a blog post:

We measure real-time in seconds or even milliseconds, depending on the application. In other words, ‘real-time’ should be defined as what’s happening right now, so that perishable insights can be identified and acted upon before it’s too late.

SWIM's showcase implementation isn't for traffic control, but for predictive information that is useful to delivery, ridesharing and public transportation services.

It has partnered with Trafficware, a company that provides traffic control technology, to launch TidalWave, a real-time traffic information service that can determine roadway congestion and estimate travel times. The service works with existing infrastructure and provides predictive information via a real-time API.

SWIM's software stack

The software embodiment of SWIM’s vision is EDX, a lean machine learning stack designed for edge devices with limited computational and storage resources that provides a self-training platform to analyze locally generated data.

Lean is probably an understatement since, according to Crosby, SWIM's stack can generate useful predictions using quite modest hardware, for example the controllers present in traffic intersections. Depending on the application, EDX can run on things as small as a Raspberry Pi or as relatively beefy as a GPU-accelerated NVIDIA Jetson.
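To illustrate what self-training on modest hardware can look like, here is my own sketch of incremental learning (not SWIM's EDX API): an intersection controller learns to predict the next interval's vehicle count purely from its own history, with no central training step. All data and parameters are hypothetical:

```python
# Hedged sketch of local, incremental learning at the edge.
import numpy as np
from sklearn.linear_model import SGDRegressor

model = SGDRegressor(learning_rate="constant", eta0=0.01)
history = [12.0, 15.0, 14.0]            # recent vehicle counts per interval

def features(h):
    """Use the last three counts as a simple lag-feature vector."""
    return np.array(h[-3:], dtype=float).reshape(1, -1)

stream = [16.0, 18.0, 17.0, 20.0]       # stand-in for the live sensor feed

# Prime the model on the first observation, then predict-and-learn.
model.partial_fit(features(history), [stream[0]])
history.append(stream[0])
for new_count in stream[1:]:
    X = features(history)
    print(f"predicted {model.predict(X)[0]:.1f}, actual {new_count}")
    model.partial_fit(X, [new_count])   # update weights on-device, one sample at a time
    history.append(new_count)
```

No cloud round-trip and no fleet-wide training set: the model is only as good as local conditions allow, which is precisely the trade-off SWIM is betting on.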

A valid concern with edge AI is that the architecture risks throwing away valuable data that, if aggregated and subsequently processed, could yield useful insights. The problem is readily solved using the concept of digital twins, namely virtual representations of individual devices that can be replicated and collected. Indeed, digital twins are a pillar of both the AWS IoT service (which calls them device shadows) and Azure IoT. As the SWIM blog describes them:

The Digital Twin is a virtual avatar that mirrors the state of a physical device or sensor. This decouples the physical sensor from the data it is creating, allowing applications to subscribe to the data from a sensor without having to worry about integrating with the physical sensor itself.

Digital Twins make it easier to integrate sensor data into applications, and do not require developers to have access to the devices themselves. … Because Digital Twins can be more easily composed within an application, developers can model the real-world systems, while abstracting away the complexity of dealing with a physical device.
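The pattern is straightforward to sketch in code. The class below is a bare-bones illustration of the idea (my own, not SWIM's or AWS's API): the twin mirrors a device's last-known state, and applications subscribe to the twin rather than integrating with the hardware:

```python
# Minimal digital-twin sketch: a mirrored state plus a publish/subscribe fan-out.
from typing import Callable, Dict, List

class DigitalTwin:
    def __init__(self, device_id: str):
        self.device_id = device_id
        self.state: Dict[str, float] = {}           # last-known mirrored state
        self._subscribers: List[Callable[[Dict[str, float]], None]] = []

    def subscribe(self, callback: Callable[[Dict[str, float]], None]) -> None:
        """Applications consume the twin's data, never the physical device."""
        self._subscribers.append(callback)

    def update(self, reading: Dict[str, float]) -> None:
        """Called by the device-side agent whenever new sensor data arrives."""
        self.state.update(reading)
        for cb in self._subscribers:
            cb(dict(self.state))                     # fan out a state snapshot

# Usage: an app subscribes to a (hypothetical) intersection's twin.
twin = DigitalTwin("intersection-42")
twin.subscribe(lambda s: print("app sees:", s))
twin.update({"vehicle_count": 18, "avg_wait_s": 34.5})
```

Because the twin decouples producers from consumers, twins can also be replicated and aggregated centrally, addressing the lost-data concern raised above.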

My take

As I detailed last week, we are at the early stages of AI's diffusion from the datacenter to individual devices. Whether it's a toothbrush that collects and analyzes saliva samples for cancer, diabetes and heart disease or a camera that might locally identify a missing child and notify authorities, the addition of data-driven machine intelligence to devices has profound implications for business and society.

While the examples just cited are unequivocally laudable given their potential to help people, other applications, such as China's camera-laden, AI-powered surveillance state, are alarming. Indeed, it's no coincidence that YITU, an impressive Chinese AI software company that I met at NVIDIA GTC, got its start in facial recognition and now boasts technology that won NIST's Face Recognition Challenge in 2017.

So far, AI at the edge has been the stuff of one-off, idiosyncratic projects; SWIM, however, sees the need for a versatile platform that can be used across a wide variety of edge hardware and an equally wide assortment of industry applications.

SWIM does not have the market to itself, as the major cloud vendors are extending their IoT services into smart edge devices: AWS DeepLens and other devices running AWS Greengrass, Azure IoT Edge with its supported hardware, and the Google Android Things platform.

As with SWIM, these are designed to be used autonomously; however, the cloud vendors see them as extensions of their IoT services: the device models are trained in the cloud and, like Echo devices or Google Home, the devices are designed to 'phone a friend' in the cloud for help or to deliver added features.

I am skeptical that wholly independent edge devices can infer enough useful information from local data streams to be broadly applicable, but Crosby assures me that the applications for SWIM technology go far beyond traffic and utility infrastructure. I believe that augmenting local analysis with comprehensive models using data aggregated from digital twins (shadows) will be needed in many cases, but I am excited to see what SWIM users can do without a cloud backstop.

It will be fascinating to watch both the market for and applications of edge-based AI explode over the coming years.
