Can a piece of drywall be smart? Bringing machine learning to everyday objects with TinyML

By Kurt Marko, November 10, 2020
So-called smart devices like Amazon Echo and Google Nest made early headway into our homes. But will devices as small as a vibration sensor soon outsmart an Echo? Here's a look under the hood of "TinyML."


Since HAL 9000 and Star Trek's M-5 Multitronic, the power and capabilities of AI have always been oversold by both Hollywood and Silicon Valley. Although we're still waiting on machines that can carry on an intelligent conversation, AI has been creeping into many objects in our everyday lives behind the scenes, making them more useful and proactive.

People are most familiar with the intelligent assistants built into devices like the Amazon Echo, Google Nest Hub and Apple HomePod, but as I wrote more than three years ago, these rely on cloud backend services for most of their smarts, using local hardware primarily to recognize their wake word and listen for follow-up questions. 

Soon, devices as small as a vibration sensor will outsmart an Echo, thanks to significant advances in the performance of low-power hardware and more efficient AI algorithms. The combination allows surprisingly sophisticated deep and machine learning models to run on embedded systems. Until recently, shoehorning AI software into a battery-powered device required data scientists skilled in working within the constraints of an embedded SoC, but recent advances in AI development and automation frameworks, collectively termed TinyML, greatly expand the realm of smart devices.

From phones to sensors, AI permeating the environment

AI has significantly reshaped and improved everyday objects in ways that few people recognize. For example, most phone users don't realize that pressing the shutter button to take a snapshot unleashes a complicated process: the camera rapidly takes multiple images at different exposure settings, analyzes them for features, then merges them pixel by pixel using embedded deep learning models into a single picture. Apple calls this feature Deep Fusion, while Google uses similar computational photography techniques for its Night Sight, Astrophotography and HDR+ shooting modes. Here's what the process looks like when Pixel phones take a low-light shot. Apple's most recent iPhone 12 Pro and iPad Pro models go even further by combining data from both the camera and LIDAR (laser rangefinder) sensors. The stunning results are often impossible to recreate with a conventional camera and tripod.

iPhone image specs
(via Google)

Source: Google Research paper; Handheld Mobile Photography in Very Low Light
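The pixel-by-pixel merge described above can be illustrated with a toy version of exposure fusion. This is only a sketch of the general idea; real pipelines such as HDR+ also align frames and use learned models, and the function names and weights here are invented for illustration:

```python
import math

# Toy exposure fusion: merge several exposures of the same scene with a
# per-pixel weighted average that favors well-exposed values (near mid-gray).
# Illustrative only; real computational photography also aligns frames and
# uses learned models.

def well_exposedness(v, mid=0.5, sigma=0.2):
    """Weight a pixel value in [0, 1] by its closeness to mid-gray."""
    return math.exp(-((v - mid) ** 2) / (2 * sigma ** 2))

def fuse(exposures):
    """Per-pixel weighted average across equally sized single-channel images."""
    fused = []
    for pixels in zip(*exposures):
        weights = [well_exposedness(p) for p in pixels]
        total = sum(weights)
        fused.append(sum(w * p for w, p in zip(weights, pixels)) / total)
    return fused

# Three 4-pixel "images" of the same scene: underexposed, normal, overexposed
under = [0.02, 0.05, 0.10, 0.01]
normal = [0.30, 0.50, 0.70, 0.20]
over = [0.90, 0.95, 0.99, 0.85]
merged = fuse([under, normal, over])  # each output pixel leans toward the
                                      # best-exposed input for that position
```

The key point is that the merge decision happens per pixel, which is why the real versions of this technique lean on fast on-device inference rather than a cloud round trip.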

While sensors and other low-power devices can't run algorithms of the same sophistication, TinyML and associated development tools promise to give AI smarts to an immense range of battery-powered devices. TinyML is the moniker for both a movement and a developer community. The movement is galvanized by the idea of making ML work on sensors powered by a watch battery or energy harvesting, turning raw data into useful information. As two Google engineers put it in their how-to on TinyML development (emphasis added):

This is where the idea of TinyML comes in. Long conversations with colleagues across industry and academia have led to the rough consensus that if you can run a neural network model at an energy cost of below 1 mW, it makes a lot of entirely new applications possible. This might seem like a somewhat arbitrary number, but if you translate it into concrete terms, it means a device running on a coin battery has a lifetime of a year. That results in a product that's small enough to fit into any environment and able to run for a useful amount of time without any human intervention.

For context, a phone SoC like the Qualcomm Snapdragon 865 uses up to 5 W, or about 1,000 times the power of some TinyML devices.

Cost is another aspect that differentiates TinyML devices from mobile or ultra-portable processors. For example, the cheapest Raspberry Pi, the Pi Zero, which uses a Broadcom SoC with an older Arm 32-bit core, runs about $5 in volume. The same model with embedded Bluetooth and Wi-Fi is double the price at $10. In contrast, many 32-bit microcontrollers used in embedded systems, like those using the popular Arm Cortex M0+, only cost $1. At that price, the ubiquity of microcontrollers in everyday objects isn't surprising, with sales expected to hit 38 billion devices in 2023. The ability to run machine learning algorithms on such quotidian hardware opens up a slew of new applications. 

Making TinyML easy with AutoML

TinyML, the developer community, has been kindled by the TinyML Foundation, a group of like-minded researchers and developers seeking to promote information exchange about innovative ML implementations on ultra-low-power devices "at the very edge of the physical and digital world." In promoting the idea of TinyML services, Ericsson offers a useful graphical depiction of where TinyML fits in relation to other computing paradigms, placing the movement at the intersection of IoT devices, edge computing and machine learning data analysis.

TinyML via Ericsson
(via Ericsson)

Source: Ericsson; TinyML as-a-Service: What is it and what does it mean for the IoT Edge?

TinyML has been the inspiration for several tools and services designed to accelerate and simplify the development and deployment of ML software on embedded systems. One of the first was TensorFlow Lite, a variant of the popular AI development framework targeting mobile and embedded devices. As a presentation by one of its chief developers illustrates, creating a TF Lite model merely requires passing a trained, standard TensorFlow model through a converter; inference then works by running sensor data through a preprocessor and the TF Lite interpreter. TF Lite works in most TinyML scenarios using 32-bit microcontrollers and has been extensively tested with Arm Cortex-M devices. The TF Lite runtime takes only 16 KB. A simple speech recognition app like wake word detection takes only 22 KB, while person detection in a grayscale image feed can run in only 250 KB.
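Footprints that small are possible partly because the converter quantizes models from 32-bit floats down to 8-bit integers. A minimal pure-Python sketch of the affine quantization arithmetic that schemes like TF Lite's use (the scale and zero-point values below are illustrative, not taken from any real model):

```python
# Affine (asymmetric) int8 quantization: q = round(x / scale) + zero_point.
# Values here are illustrative; TF Lite computes scale/zero_point per tensor.

def quantize(x, scale, zero_point, qmin=-128, qmax=127):
    """Map a float to the int8 range, clamping to representable values."""
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    """Recover an approximate float from its int8 code."""
    return (q - zero_point) * scale

# Example: a weight tensor spanning roughly [-1.0, 1.0]
scale, zero_point = 1.0 / 127, 0
weights = [-0.95, -0.1, 0.0, 0.42, 0.99]
codes = [quantize(w, scale, zero_point) for w in weights]
recovered = [dequantize(q, scale, zero_point) for q in codes]
# Each recovered value is within one quantization step (~0.008) of the original
```

Storing one byte per weight instead of four, and doing integer rather than floating-point math, is what lets a useful model squeeze into tens of kilobytes on a microcontroller with no FPU.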

TF Lite is perfect for developers already fluent in the TensorFlow framework who understand the limitations of embedded hardware; however, those requirements set a high bar for the millions of embedded developers. Qeexo AutoML, a new development platform, is designed to lower these technical barriers by automating data processing, model development, tuning and hardware provisioning for embedded developers.

Like ML automation cloud services or server software such as AWS SageMaker, Google Cloud AutoML, Auger and Sigopt (which I highlighted back in 2017), Qeexo AutoML:

  • Supports a variety of popular ML techniques
  • Simplifies data preparation, labeling, validation and visualization via a management UI
  • Provides no-code automation of most of the typical ML workflow 
  • Supports most types of mobile sensors, including:
    • Motion: accelerometer, magnetometer, radar, gyroscope
    • Acoustic: microphone, ultrasonic, vibrometer
    • Environmental: temperature, humidity, air pressure, illumination, IR
    • Image: photo/video, thermal
    • Touchscreen: capacitive and IR
    • Biometric: fingerprint, heart rate
  • Builds memory-efficient models for Arm Cortex-M0 to M4 microcontrollers.
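The data-preparation step such platforms automate typically follows the same pattern regardless of vendor: slice the raw sensor stream into windows and compute compact features before any model sees the data. A hypothetical sketch of that step (the window size and feature choices are invented for illustration, not Qeexo's actual pipeline):

```python
import math

# Hypothetical sketch of sensor-data preparation for embedded ML: slice a raw
# accelerometer stream into overlapping windows and compute cheap per-window
# features a microcontroller can calculate in place.

def windows(signal, size, step):
    """Yield overlapping fixed-size windows from a 1-D sample stream."""
    for start in range(0, len(signal) - size + 1, step):
        yield signal[start:start + size]

def features(window):
    """Compact statistics: mean, RMS energy and peak-to-peak swing."""
    mean = sum(window) / len(window)
    rms = math.sqrt(sum(v * v for v in window) / len(window))
    return {"mean": mean, "rms": rms, "p2p": max(window) - min(window)}

# Simulated vibration signal: a 5 Hz sine sampled at 100 Hz for 2 seconds
signal = [math.sin(2 * math.pi * 5 * t / 100) for t in range(200)]
rows = [features(w) for w in windows(signal, size=50, step=25)]
# Each row becomes one labeled training example for a classifier
```

In a real product the labels ("bearing OK" versus "bearing worn", say) come from the developer, and the platform searches over models that map these features to them.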

There are several alternatives to Qeexo's system for embedded ML, including Cartesiam NanoEdge, Edge Impulse, NeuroPilot Micro and OctoML.

Qeexo ML 1
(via Qeexo slide deck)


Myriad applications

The overriding impetus behind moving ML to the far edge is so-called sensor fusion, in which increasingly capable edge devices combine, correlate and analyze data from multiple sensors to detect anomalies, objects and their relative positions, and to make ML-based predictions far more accurate than simple trend extrapolation. Applications span many industries and usage scenarios, including:

  • Industrial predictive maintenance  
  • Cybersecurity 
  • Smart city and home  
  • Mobile and wearable devices (gesture detection, computational photography, medical health)
  • Automotive  (ADAS, hands-free assistants)

These environments require rapid results since the streaming data they generate is fleeting, its value decaying exponentially over time. Performing the ML locally, rather than sending data to the cloud and back, is therefore critical to achieving near-real-time, low-latency responses.
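A minimal sketch of the kind of local processing this implies: keep running statistics on the device itself and flag outliers immediately, with no cloud round trip. The threshold and algorithm choice below are illustrative, not any vendor's implementation:

```python
import math

# Illustrative on-device anomaly detection: maintain a running mean/variance of
# a sensor reading (Welford's algorithm, O(1) memory) and flag any reading more
# than k standard deviations from the mean. Everything runs locally.

class AnomalyDetector:
    def __init__(self, k=4.0, warmup=20):
        self.k, self.warmup = k, warmup
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        """Feed one reading; return True if it looks anomalous."""
        if self.n >= self.warmup:
            std = math.sqrt(self.m2 / self.n)
            if std > 0 and abs(x - self.mean) > self.k * std:
                return True  # flag before folding the outlier into the stats
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return False

det = AnomalyDetector()
readings = [1.0, 1.1, 0.9, 1.05, 0.95] * 8 + [5.0]  # steady vibration, then a spike
flags = [det.update(r) for r in readings]  # only the final spike is flagged
```

Because the detector needs only a handful of floats of state, it fits comfortably in the kilobytes of RAM a Cortex-M0-class microcontroller provides, and the alert fires within one sample period.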

Qeexo ML2
(via Qeexo slide deck)

My take

We remain in the dawn of TinyML: the capabilities of microcontrollers and the sophistication of ML optimization have reached a point where incredibly useful applications can now run on near-invisible devices. Systems like TF Lite, Qeexo AutoML and others will unleash the creativity of millions of embedded developers to infuse intelligence, interactivity and uncanny features into almost every physical object we interact with.

Qeexo has several examples that illustrate the way TinyML will reshape everyday products.

From cars that tell you when an engine bearing is about to fail to kitchen faucets that warn of harmful chemicals in the water, embedded intelligence is set to revolutionize our interactions with everyday objects.