General purpose AI in business – too hard and waiting for its Netscape Moment

SUMMARY:

AI is getting plenty of attention but when it comes to developing general purpose AI solutions, we’re a long way from having convenient, packaged products accessible to the non-specialist.

Google I/O demonstrated a company that has strategically shifted its focus from search ads and mobile platforms to “making AI work for everyone” by incorporating machine intelligence, in both overt and subtle ways, into all its products. It’s a wise move that plays to Google’s technical strength and was previewed at its Next cloud event.

As I wrote in March, Google sees its cloud services as fueling the “democratization of AI” by abstracting many of the hard implementation details of building and using an AI software stack into cloud services. The early examples, such as Amazon AI, Azure Cognitive Services and Google Cloud Machine Learning Services, do a splendid job of encapsulating sophisticated AI functions in a fairly straightforward service with an API wrapper.

However, the high-level services mostly focus on the well-trodden paths of image and speech recognition; domains that have long catalyzed AI research. While such applications certainly have many uses in business, including for conversational interfaces as I detail here, they don’t address the vast majority of business problems that could benefit from machine learning optimization and where applying AI still requires too much time and expertise.

As the ARCHITECT blog rightly points out, AI research has often focused on games like Chess and Go, or handy add-ons to online consumer services like automatic image tagging and voice commands, not hard business problems. As they put it (emphasis added),

The risk here is that despite some apparently revolutionary technologies, their actual applications beyond playing games or improving things at companies like Google and Facebook could be quite limited. We could find ourselves stuck trying to solve many of the same enterprise problems as we were 10 years ago—fraud detection, sales optimization, recommendations, etc.—simply due to lack of imagination. Don’t get me wrong: Those are big and important problems and we should use AI to solve them if we can, but there have to be some optimal use cases for AI that are different than the optimal use cases for the first iteration of big data.

Unfortunately, when it comes to developing general purpose AI solutions, we’re a long way from having convenient, packaged products accessible to the non-specialist. Instead, frameworks like Caffe, H2O, TensorFlow and Theano are designed for data (some would say rocket) scientists with advanced degrees in statistics.

AI in business is still waiting for its Netscape Moment heralding mass availability; namely, a tool or service that packages advanced algorithms with an intuitive interface and simple scripting dialect in a way that allows those with domain, but not data science, expertise to create powerful, self-learning and self-optimizing tools.

Power, flexibility or simplicity: pick two

For those willing to live with the limitations, cloud services like AWS Machine Learning successfully lower many technical roadblocks. AWS ML, which the company tags as being designed for developers, not statisticians, can apply machine learning optimizations based on Amazon’s internal recommendation and prediction systems to a variety of business tasks such as fraud detection, sales promotions or production and logistics planning.

However, AWS ML isn’t a general purpose AI framework. It uses a fixed set of statistical models designed for either classification (binary or multi-class) or regression problems and doesn’t incorporate today’s hottest AI technology, neural network-based deep learning.
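To make those two problem types concrete, here is a minimal, self-contained sketch in Python. It uses NumPy and invented toy data, not the AWS ML service itself: a logistic regression for a fraud-style binary classification, and a least-squares line fit for a sales-style regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Binary classification (e.g. fraud detection) ---
# Toy feature: transaction amount; label: 1 if the amount is suspiciously large.
amounts = rng.uniform(0, 100, size=200)
labels = (amounts > 60).astype(float)
x = (amounts - amounts.mean()) / amounts.std()   # standardize the feature

# Logistic regression trained by plain gradient descent.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))       # predicted fraud probability
    w -= 0.5 * np.mean((p - labels) * x)
    b -= 0.5 * np.mean(p - labels)
accuracy = np.mean(((1.0 / (1.0 + np.exp(-(w * x + b)))) > 0.5) == labels)

# --- Regression (e.g. predicting sales lift from promotion spend) ---
spend = rng.uniform(0, 10, size=100)
units = 3.0 * spend + 5.0 + rng.normal(0, 0.5, size=100)
slope, intercept = np.polyfit(spend, units, 1)   # least-squares line fit
```

A managed service like AWS ML hides even this much machinery, but the underlying model families are of this fixed, classical kind.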

In contrast, both Google Cloud ML, which supports the TensorFlow framework for general purpose model construction, and Azure Machine Learning, which includes about 25 packaged models designed for five types of problems, offer much greater flexibility, albeit at the cost of complexity and a steeper learning curve.

The SAP Analytics Cloud is another service that incorporates complex ML techniques with automated model management in a service designed for business analysts. Again, SAP trades off simplicity against sophistication by marrying pre-built predictive recipes with the ability of experts to create custom models that incorporate elements of HANA PAL, APL and R.

The intention is that such custom models, once built by experts, are reused by non-specialists via a template library. AI has also been incorporated into CRM systems designed for business professionals from Salesforce (Einstein) and Natero; however, these too are applications targeted at a particular problem domain.

AI-squared: Letting the machine build its models

As AI spreads from academia and R&D labs to business line professionals, its tools are starting to follow a similar path of conceptual abstraction and maturity to that taken by data processing with the emergence of the relational database. Several startups I met at the recent Nvidia GTC event are focused on automating the process of AI model development.

Mark Hammond, CEO and co-founder of Bonsai, embraces the database analogy, saying that its platform is an AI “engine” that abstracts complex features into a system that can be taught by rules, in the same way an RDBMS encapsulates complex data handling in a form usable by non-specialists.

Bonsai seeks to shift the focus of AI software from programming models to teaching rules. It compares the product’s Inkling programming language to BASIC, describing it this way,

Our new programming language, Inkling, enables a teaching-oriented approach by providing a set of foundational primitives that can be used to represent AI without specifying how the AI is created. By abstracting away the underlying algorithms and techniques, you can focus on creative ways to describe your problems and leave the heavy lifting to us.

According to Hammond, the Bonsai engine uses many of the popular AI libraries, like TensorFlow, to build models; however, the user interacts at the level of knowledge rules and data, not specific ML models and parameters.
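The contrast between programming a model and teaching with rules can be sketched in a few lines of Python. Everything below — the `teach` engine, the `comfort_score` rule, the HVAC scenario — is invented for illustration; this is not Bonsai’s Inkling language or API, and the trivial hill-climber stands in for the real deep learning backends.

```python
import random

def teach(score, initial, steps=5000, seed=0):
    """Generic 'engine': improve a candidate using only the expert's score."""
    rng = random.Random(seed)
    best = initial
    for _ in range(steps):
        # Propose a small random tweak; keep it only if the expert's
        # scoring rule says it is an improvement.
        candidate = [v + rng.gauss(0, 0.1) for v in best]
        if score(candidate) > score(best):
            best = candidate
    return best

# Domain knowledge, expressed as a rule rather than as a model:
# "keep the temperature setpoints close to 21 degrees" (a toy HVAC goal).
def comfort_score(setpoints):
    return -sum((t - 21.0) ** 2 for t in setpoints)

setpoints = teach(comfort_score, initial=[15.0, 25.0, 30.0])
```

The point of the design is the division of labor: the domain expert writes only `comfort_score`, while the engine owns the search and can be swapped for something far more powerful without changing the expert’s rule.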

Once ‘programmed’ with a body of domain expertise, the Bonsai engine, like an RDBMS, connects to streaming data sources to produce predictions or process control actions. The company, which recently more than doubled its initial A-round funding, is initially targeting industrial applications such as robotics in manufacturing and logistics, along with other industrial systems like HVAC; however, its approach is clearly extensible to other disciplines.

Self-optimizing AI

Another company using automation to simplify and enhance AI development is SigOpt. Somewhat lower on the food chain than Bonsai, SigOpt tackles one of the harder tasks in machine learning system development: model optimization.

Latching onto the SaaS metaphor, CEO and co-founder Scott Clark calls SigOpt, which is provided as a hosted black box, “optimization as a service.” Like Bonsai, SigOpt works with an input data stream; however, instead of teaching the system with heuristics, SigOpt takes a set of input parameters and goals, such as minimizing the use of certain chemicals and waste products in a production process while maximizing the output of usable product.

SigOpt then suggests a set of parameters to test, evaluates the problem using an ensemble of statistical and ML techniques, and proposes a new set of parameters to test and tweak before running through the process again. After a few rounds of this feedback loop, SigOpt arrives at a result that is both better than, and reached orders of magnitude faster than, the typical trial-and-error approach.
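That suggest-evaluate-report loop can itself be sketched in Python. The `Optimizer` below is a deliberately crude stand-in — a shrinking random search, not SigOpt’s actual ensemble of Bayesian techniques — and the process-yield objective and all names are invented for illustration.

```python
import random

class Optimizer:
    """Toy black-box optimizer: narrow a random search around the best point."""

    def __init__(self, bounds, seed=0):
        self.bounds = bounds            # (low, high) per parameter
        self.rng = random.Random(seed)
        self.best_params, self.best_value = None, float("-inf")
        self.radius = 10.0              # search radius, shrinks over time

    def suggest(self):
        """Propose the next parameter set to test."""
        if self.best_params is None:
            return [self.rng.uniform(lo, hi) for lo, hi in self.bounds]
        return [min(max(p + self.rng.uniform(-self.radius, self.radius), lo), hi)
                for p, (lo, hi) in zip(self.best_params, self.bounds)]

    def observe(self, params, value):
        """Report the measured objective; tighten the search as we learn."""
        if value > self.best_value:
            self.best_params, self.best_value = params, value
        self.radius *= 0.97

# Toy objective: maximize usable product while penalizing chemical use;
# the (unknown to the optimizer) optimum is temp=75, chemical=3.
def process_yield(temp, chemical):
    return -(temp - 75.0) ** 2 - 2.0 * (chemical - 3.0) ** 2

opt = Optimizer(bounds=[(50.0, 100.0), (0.0, 10.0)])
for _ in range(200):
    params = opt.suggest()
    opt.observe(params, process_yield(*params))
```

The user’s side of the loop is just `suggest` and `observe`; everything about how the next trial is chosen stays behind the service boundary, which is what makes the “optimization as a service” framing apt.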

SigOpt is based on Clark’s doctoral research, and while it’s been around for over two years, with fewer than 20 employees, it’s still a small operation. Clark says SigOpt has received the most interest in financial services, including fraud detection, insurance risk prediction and quantitative algorithmic trading, along with oil and gas exploration and manufacturing process optimization.

My take

AI is starting to make profound changes in how organizations do business, whether in the public sector through predictive policing, or in manufacturing, where it improves efficiency and lowers costs. However, the complexity of AI development tools doesn’t yet allow the technology to be usable by most organizations except in specialized, pre-packaged applications.

We’re still in the era of VisiCalc, Lotus 1-2-3 and Excel, not Power BI, Qlik and Tableau. Just as Excel macros and SQL queries didn’t completely displace C++ code and Python scripts, there will always be the need for general-purpose model development and programming frameworks.

However, using AI to solve a wider range of business problems requires sizably expanding the population of ‘developers’ by reducing the knowledge barriers to model development. AI’s Netscape Moment will come when those with domain- and process-specific knowledge can use it to readily solve business problems.

Image credit - via Bonsai and SigOpt
