Is AI an agent of big tech hegemony or multi-disciplinary research and innovation?

By Kurt Marko, October 4, 2019
Summary:
A recent New York Times article focused on fears of a big tech monopoly over AI technology and research. Severe debunkery ensues...


A recent New York Times article fretting about the soaring costs of developing and training leading-edge deep learning models and my admittedly provocative Tweet questioning the premise and motives of the article’s sources led to the type of online banter that indicates a nuanced question ill-suited for pithy Twitter responses.

Fears of AI creating a chasm between haves and have-nots are common; however, the topic of AI-fueled inequality typically centers on its economic effects, namely that the growing substitution of manual labor with algorithmic automation further polarizes income distributions as the knowledge class controlling and using the algorithms gets richer while the working class displaced by machines suffers. The argument is well summarized in a recent economics research paper, The Wrong Kind of AI:

Many new technologies — those we call ‘automation technologies’ — do not increase labor's productivity, but are explicitly aimed at replacing it by substituting cheaper capital (machines) in a range of tasks performed by humans. As a result, automation technologies always reduce labor's share in value added (because they increase productivity by more than wages and employment). They may also reduce overall labor demand because they displace workers from the tasks they were previously performing. There are countervailing effects to be sure: some of the productivity gains may translate into greater demand for labor in non-automated tasks and in other sectors. But even with these effects factored in, automation always reduces the labor share in value added.

In contrast, the article in question cites AI researchers who argue that AI technology itself is increasingly controlled, developed, and operated by, and optimized for, massive technology companies that have the ample resources needed to fund what has become an exceedingly expensive endeavor. The article notes that (emphasis added):

Computer scientists say A.I. research is becoming increasingly expensive, requiring complex calculations done by giant data centers, leaving fewer people with easy access to the computing firepower necessary to develop the technology behind futuristic products like self-driving cars or digital assistants that can see, talk and reason.

The danger, they say, is that pioneering artificial intelligence research will be a field of haves and have-nots. And the haves will be mainly a few big tech companies like Google, Microsoft, Amazon and Facebook, which each spend billions a year building out their data centers.

In the have-not camp, they warn, will be university labs, which have traditionally been a wellspring of innovations that eventually power new products and services.

While I mocked the researchers' central thesis, there are other ways that a disruptive technology like AI can create winners and losers, and areas where the mega techs will have a decided, perhaps insurmountable, advantage.

The democratization of AI technology

The contention that university researchers (and, by implication, SMBs or any organization without at least a nine-figure technology budget) won't be able to compete with the mega techs rests on the false assumption that companies like Amazon, Google and Microsoft are hoarding the technology, guarding it behind a moat of patents, proprietary software and inaccessible hardware. Almost none of this is true. As I have documented many times over the past few years at diginomica, AI is a commercial battleground that these huge companies use to differentiate their cloud service businesses and win new customers. For example, when Google made its initial move to vigorously compete with AWS and Azure, I wrote in Google's Next cloud move - yet another stab at the enterprise that:

Higher value service abstractions are critical to what Google's head of AI and Chief Scientist Fei-Fei Li called the "democratization of AI" by allowing non-specialists in business to tap into sophisticated models and techniques that were formerly limited to a high-priesthood of ML researchers.

In expanding its AI portfolio, Google announced the general availability of the Cloud Machine Learning Engine that simplifies the training and deployment of custom ML models using the TensorFlow platform and other GCP services.

Last fall, I documented how AWS broadens machine learning portfolio at re:Invent with services for both novices and experts by discussing its three-tier hierarchy of machine learning services spanning infrastructure, development platforms and applications. I noted that:

It's critical to understand that AWS's three-pronged strategy isn't just designed for a few AI experts or budding specialists, but to, in the words of Google's soon to be departing chief AI scientist Fei-Fei Li, democratize AI. There are many ways cloud services level the playing field for ML development, whether by providing rentable access to state-of-the-art compute power or streamlining the deployment of managed development frameworks like TensorFlow and MXNet. AWS, like Google Cloud and Azure, provides all of these, but many of its new services target the highest level of the ML service hierarchy, packaged applications. I believe these offer the greatest opportunity for AI democratization since they allow customers to use AI without fully understanding AI.

Indeed, AWS makes these intentions quite clear when its VP of machine learning writes:

We want to help all of our customers embrace machine learning, no matter their size, budget, experience, or skill level. Today’s announcements remove significant barriers to the successful adoption of machine learning, by reducing the cost of machine learning training and inference, introducing new SageMaker capabilities that make it easier for developers to build, train, and deploy machine learning models in the cloud and at the edge, and delivering new AI services based on our years of experience at Amazon.

It’s not just cloud operators that are improving access to sophisticated AI technology and infrastructure. Some of the most innovative research is being performed and commercialized by their component suppliers, like Intel, NVIDIA and a host of startups building specialized AI accelerators. As a result, systems using massive chips like NVIDIA’s V100 GPU can perform calculations on a single workstation that once required racks of specialized HPC hardware. For example, NVIDIA’s $50,000 DGX Station is a deskside workstation with the performance of nearly 50 dual-CPU servers. However, as I wrote in California's latest gold rush - Google and Intel dig in to tap the AI business market seam, Intel has risen to the challenge by adding features to its latest-generation Xeon processors that improve the performance of specific AI workloads by two to four times. Customers like the AI researchers in question are the beneficiaries as these firms play technology leapfrog, producing huge jumps in performance per dollar.

The argument shouldn't be about whether industrializing particular AI applications at the massive scale required to turn them into a global business is something only a few companies can afford; the cost of comparable technology infrastructure has always led to oligopolies. Indeed, semiconductor development, and particularly manufacturing, is a classic example. However, as I wrote in a Twitter reply:

Nothing has changed with AI. No one complained decades ago that Stanford couldn't afford to build a billion dollar fab like Intel. It still could afford enough advanced equipment to do cutting edge research, just not manufacture chips at scale. Same is true with AI.

Data and development expertise as competitive weapons

Barely mentioned in the Times article is the area where mega techs with enormous consumer operations do have an advantage over smaller companies and academic researchers: the data needed to train and optimize deep learning models. Recent research has shown that better modeling yields only incremental gains in model accuracy, while training with more data improves accuracy proportionately, up to the point of irreducible error.


Source: Summary of paper, Deep learning scaling is predictable, empirically.

Here’s one conclusion from a paper on deep learning accuracy and scaling:

We empirically validate that DL model accuracy improves as a power-law as we grow training sets for state-of-the-art (SOTA) model architectures in four machine learning domains: machine translation, language modeling, image processing, and speech recognition. These power-law learning curves exist across all tested domains, model architectures, optimizers, and loss functions.
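The paper's power-law claim is easy to sketch numerically. The snippet below generates a synthetic learning curve of the form error(N) = a·N^(−b) + c, where c is the irreducible error mentioned above, and recovers the exponent with a linear fit in log space. All parameter values are illustrative assumptions, not figures from the paper:

```python
import numpy as np

# Synthetic learning curve: generalization error vs. training-set size N,
# following the paper's claimed form error(N) = a * N**(-b) + c,
# where c is the irreducible error. Parameters are illustrative only.
a, b, c = 2.0, 0.35, 0.05
sizes = np.logspace(3, 8, 6)        # training sets from 1e3 to 1e8 examples
error = a * sizes ** (-b) + c

# On log-log axes the reducible part is a straight line; recover the
# exponent b by fitting log(error - c) against log(N).
slope, intercept = np.polyfit(np.log(sizes), np.log(error - c), 1)
print(round(-slope, 2))      # -> 0.35, the power-law exponent
print(round(error[-1], 3))   # -> 0.053, approaching the irreducible floor c
```

Plotted on log-log axes, the reducible portion of such a curve is a straight line, which is how the paper's authors visualize the effect across domains.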

For algorithms involving consumer behavior, preferences, physical location and movement, beliefs and associations, firms like Amazon, Apple, Facebook, Google and Microsoft have access to vast troves of minute data from their online properties and hardware products. This information typically feeds proprietary algorithms that they monetize in various ways. In contrast, the image, voice, video and text data that these firms have turned into facial recognition, speech-to-text (and vice versa) transcription, language translation and sentiment analysis software are available as online services to anyone via their cloud platforms.

Researchers at the US Bureau of Economic Analysis (BEA) have attempted to measure the value of data for various consumer platforms; their paper, Value of Data: There’s No Such Thing as a Free Lunch in the Digital Economy, offers many examples, and the totals can be astounding. Here’s what they conclude about Amazon:

Our initial results indicate that data can have enormous value: for example, in 2017, the value of Amazon’s data can account for 16% of Amazon’s market valuation and has an annual growth rate of 35%.
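A 35% annual growth rate compounds quickly. A one-line back-of-the-envelope calculation (mine, not the BEA's) shows the doubling time that growth rate implies:

```python
import math

growth = 0.35  # BEA-estimated annual growth in the value of Amazon's data
doubling_years = math.log(2) / math.log(1 + growth)
print(round(doubling_years, 1))  # -> 2.3, i.e. the value doubles roughly every 2.3 years
```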


Source: BEA report; Value of Data: There’s No Such Thing as a Free Lunch in the Digital Economy

However, the mega techs don’t have a monopoly on all data that might be useful for AI research, particularly data collected by governments (like weather, environmental, GIS and public records) and healthcare providers. Indeed, medical data might be the most valuable of all, given AI opportunities in radiology, pharmacology, pathology and symptom diagnosis, as I wrote about in my column, Heartening life-saving applications that apply deep learning to medical data. While privacy regulations limit healthcare data sharing and aggregation, attractive commercial opportunities will likely spawn partnerships between large companies, healthcare providers and medical researchers that find ways to work within privacy constraints.

AI expertise and development talent are the other area where large firms have a significant advantage, given the current shortage of such talent, the salaries those firms can pay and the R&D budgets they can offer researchers. The market for such workers is small: the US Bureau of Labor Statistics estimated total employment for Computer and Information Research Scientists at only 31,700 last year. While not a perfect metric, it’s a reasonable proxy for the actual AI developer universe.

Given the intense demand for such skills, it’s surprising the BLS expects the number of job openings to grow by only 16 percent over the next decade. With average salaries at the high end of similar computer-related occupations, it’s reasonable to expect that market forces will balance job demand and supply over the long term and that, just as they do in other technical occupations, enough people with the requisite skills will gravitate to the academic and government sectors to fill the available slots.
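The 16 percent figure sounds less dramatic once spread across a decade. Converting it to a compound annual rate (again a quick calculation of mine, not a BLS figure) makes the modest pace explicit:

```python
# BLS projects 16% growth in Computer and Information Research Scientist
# jobs over ten years; express that as a compound annual growth rate.
decade_growth = 0.16
annual_rate = (1 + decade_growth) ** (1 / 10) - 1
print(round(annual_rate * 100, 1))  # -> 1.5, i.e. about 1.5% per year
```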


My take

Fears of a big tech monopoly on AI talent and technology are overwrought, and comparing the systems required to perform AI research to particle accelerators is absurd. There are ample signs of AI democratization, with new examples, such as this article on Vianai Systems, regularly documented here at diginomica. The areas where mega techs like Amazon, Facebook, Google, et al. have a distinct advantage are less due to their AI acumen and more a result of their access to vast amounts of consumer data, be it e-commerce transactions, search results or online interactions. If such data repositories are deemed to create unfair competitive asymmetries, the solution isn’t an AI technology tax or publicly funded AI server farms, but regulations on the collection, use and sharing of such data.

AI hegemony is a myth. AI won’t be the demise of innovative startups and academic research, but the catalyst for products, services, productivity improvements and healthcare advances, many of which will be built on infrastructure and services created and operated by the big cloud providers.