Google uses I/O to flex its AI prowess

Kurt Marko, May 26, 2021
AI announcements at Google I/O last week trigger discussion of Google’s position vis-a-vis AI services at other cloud platforms.


The Google search engine was born before machine learning ignited a renaissance in AI; however, one can make a strong case that the Google we know today — the search engine, advertising network (AdWords), image and video services (Photos, YouTube) and SaaS applications (GSuite) — would be impossible without AI. Consequently, Google is home to one of the world’s largest collections of AI researchers, developers and software projects. The centrality of AI to its business has led Google to become an influential advocate for the beneficial use and responsible development of AI technologies, declaring on a site devoted to explaining its principles that:

Google aspires to create technologies that solve important problems and help people in their daily lives. We are optimistic about the incredible potential for AI and other advanced technologies to empower people, widely benefit current and future generations, and work for the common good. We believe that these technologies will promote innovation and further our mission to organize the world’s information and make it universally accessible and useful.

Google has evangelized and facilitated AI development by contributing software like TensorFlow, RecSim, Dopamine and many more to the open source community and encapsulating AI models into many of its Google Cloud services. Unfortunately, turning to Google for AI tools often seems like going to the hardware store for furniture: most people want a pre-assembled piece or an IKEA-like kit, not a truckload of lumber, nuts and bolts. Integrated ML development environments like AWS SageMaker and Microsoft Project Bonsai attempt to lower the barrier to ML development by automating and connecting the steps from model development and data engineering to training and deployment. Google introduced its take on a managed ML IDE and MLOps platform, Vertex AI, along with several other AI technologies at last week’s I/O developer conference.

Vertex AI - ML for the rest of us

Regardless of the deployment platform, the task of developing an ML model and incorporating it into an application has many steps, a set of processes that the Ops-obsessed technology industry has dubbed MLOps. Google describes a seven-stage workflow for ML implementation:

  1. ML development.
  2. Data processing.
  3. Operationalized training.
  4. Model deployment and serving.
  5. ML workflow orchestration.
  6. Artifact organization.
  7. Model monitoring.


These tasks fall into three broad categories:

  • Data engineering to collect, filter and refine data relevant to a particular problem.
  • ML engineering to identify the best type of model, train a model and tune its parameters.
  • Application engineering, which incorporates the trained model into a system and collects metrics (predictive accuracy, performance, resource consumption).
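
As an illustration (not Google's actual implementation), the stages above can be sketched as a minimal, framework-free pipeline in which each stage is a plain function an orchestrator chains together; all of the function names and "model" logic here are invented for the sketch:

```python
# Toy MLOps-style pipeline: each stage is a plain function, and an
# orchestrator chains them. Stage numbers mirror the seven-step workflow
# described above; the "model" is deliberately trivial.

def process_data(raw):
    # Stage 2 (data processing): drop records containing missing values.
    return [row for row in raw if None not in row]

def train_model(rows):
    # Stage 3 (operationalized training): "train" a trivial model -- the
    # mean of the target column (last element of each row).
    targets = [r[-1] for r in rows]
    return {"prediction": sum(targets) / len(targets)}

def deploy(model):
    # Stage 4 (deployment and serving): wrap the model as a callable endpoint.
    return lambda features: model["prediction"]

def monitor(endpoint, rows):
    # Stage 7 (monitoring): mean absolute error of the served model.
    errors = [abs(endpoint(r[:-1]) - r[-1]) for r in rows]
    return sum(errors) / len(errors)

def run_pipeline(raw_data):
    # Stage 5 (orchestration): chain the stages; the returned dict plays
    # the role of stage 6's organized artifacts.
    rows = process_data(raw_data)
    model = train_model(rows)
    endpoint = deploy(model)
    mae = monitor(endpoint, rows)
    return {"model": model, "endpoint": endpoint, "mae": mae}

artifacts = run_pipeline([(1.0, 2.0), (3.0, 4.0), (None, 5.0)])
```

The point of platforms like Vertex AI is that each of these hand-rolled stages becomes a managed, swappable service rather than bespoke glue code.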


Vertex AI seeks to streamline all phases of ML development by unifying tools under a single UI. The head of its Cloud AI services said Google had two goals when developing Vertex AI:

Get data scientists and engineers out of the orchestration weeds, and create an industry-wide shift that would make everyone get serious about moving AI out of pilot purgatory and into full-scale production.

According to Google’s internal research, engineers across fields and levels of ML experience can use Vertex AI to develop models with only a fifth of the code required on conventional development platforms by exploiting the following features:

  • Simplified model comparison and training via Google Cloud AutoML and its graphical UI.
  • Pre-trained models and APIs for text and document classification, sentiment analysis, ML tables, language translation, speech-to-text extraction, image and video classification and object tracking.
  • Integration with BigQuery for data extraction and ingestion using SQL.
  • Human review of data to create accurate labels.
  • Support for popular ML development frameworks and notebooks including TensorFlow, PyTorch and Jupyter.
  • Vertex Pipelines and Vertex Prediction to streamline model deployment and serving on Google Cloud.
  • Vertex TensorBoard to visualize and track models with graphs displaying images, text, and audio data.

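For developers, these features surface through the Vertex AI Python SDK (the google-cloud-aiplatform package). The sketch below shows, in broad strokes, how an AutoML tabular model might be trained and deployed; the project, dataset, and column names are placeholders, and the calls require a GCP project with Vertex AI enabled plus credentials, so treat this as an illustration rather than a verified recipe:

```python
def train_and_deploy(project, region, dataset_gcs_csv, target_column):
    """Illustrative AutoML tabular flow using the Vertex AI SDK.

    All display names and parameters are placeholders; running this
    requires a GCP project with the Vertex AI API enabled and
    application-default credentials configured.
    """
    # Imported inside the function so the sketch can be read without
    # the google-cloud-aiplatform package installed.
    from google.cloud import aiplatform

    aiplatform.init(project=project, location=region)

    # Create a managed tabular dataset from a CSV in Cloud Storage.
    dataset = aiplatform.TabularDataset.create(
        display_name="demo-dataset",
        gcs_source=dataset_gcs_csv,
    )

    # Launch an AutoML training job (the "operationalized training" stage).
    job = aiplatform.AutoMLTabularTrainingJob(
        display_name="demo-training",
        optimization_prediction_type="regression",
    )
    model = job.run(
        dataset=dataset,
        target_column=target_column,
        budget_milli_node_hours=1000,  # one node-hour of training budget
    )

    # Deploy the trained model to a managed endpoint for online serving.
    endpoint = model.deploy(machine_type="n1-standard-4")
    return endpoint
```

The appeal for the "rest of us" is that dataset management, training infrastructure, and serving are a handful of SDK calls instead of a self-managed stack.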
Google’s announcement highlighted several beta customers that used Vertex on new projects or to streamline their development workflow. For example, Essence, a specialist in using data analytics and ML in advertising, used Vertex AI to rapidly create new models based on client requests, changing consumer behavior, or additional information. Likewise, a division of L’Oreal is using Vertex to train all new AI models, including a skin diagnostic app that captures and analyzes photos to recommend skincare products.

Breakthroughs in AI algorithms

Google I/O is primarily a developer conference, not a product showcase, so many of the announcements concerned new application and development platforms (like Android 12, Material Design and Flutter) and algorithm research. Several AI projects made the cut to be highlighted in the keynote, demonstrating the increasing sophistication of Google’s AI usage, notably:

  • LaMDA: With search increasingly happening by voice using smart speakers, phone assistants and watches, Google recognizes that the search interface needs to be more conversational and less like completing a form. Thus, it has evolved from traditional keyword-based search algorithms to language-parsing deep learning models like BERT (Bidirectional Encoder Representations from Transformers). While BERT can understand the context of words (for example, that “hard” might refer to difficulty or to resistance to pressure or scratching), it struggles to understand the flow of conversations that require intelligent, relevant replies to follow-up questions or comments. LaMDA, an evolution of BERT, is a new neural model designed to understand dialog. Unlike chatbots that offer literal, but often nonsensical, replies, LaMDA can counter a query with a clarifying question. For example, in response to “I’m hungry. What’s the best place for food delivery right now?”, a LaMDA agent might ask, “What’s your favorite kind of food?” before listing a few places nearby that deliver that type.
  • MUM provides another way to improve search accuracy by handling complex queries best answered by gathering results from several sources. MUM (Multitask Unified Model) doesn’t just combine multiple steps into one search, but gathers and parses results in different formats (like images, text, PDFs, etc.) and languages. MUM can also handle questions that involve relative comparisons or include qualifiers. For example, if asked, “I’ve completed the Coursera Introduction to Calculus course. What should I take next to prepare for a course on the applications of differential equations?”, MUM might respond with a series of advanced mathematics and science courses. Likewise, when sharing a photo of your walking shoes and asking whether they would work for ice fishing, MUM can categorize the image and see that it has few similarities with footwear used on snow and ice.
  • Maps: Google has long used location tracking in its mobile apps to gather data for Maps, Search and other services. For example, it can calculate the average speed of cars to highlight and route around traffic jams on Maps. Google recently started using an AI model to detect rapid deceleration events, i.e., slamming on the brakes, on different street sections. The model also uses contextual information, such as the sun’s angle into oncoming traffic or construction lane restrictions at particular times of the day, to predict areas with an increased risk of crashes. Armed with this information, Maps suggests routes around potentially dangerous roads and highlights them on its UI.
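
The deceleration-detection idea in the Maps item can be illustrated with a simple heuristic: flag any interval in a speed trace where average deceleration exceeds a threshold. Google's actual system is a learned model using far richer signals; the threshold, units, and sampling scheme below are invented for the sketch:

```python
def hard_braking_events(samples, threshold=-3.0):
    """Flag hard-braking intervals in (timestamp_s, speed_m_s) samples.

    A crude stand-in for a learned model: any interval whose average
    acceleration falls below `threshold` (in m/s^2, chosen arbitrarily)
    counts as a rapid-deceleration event.
    """
    events = []
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        accel = (v1 - v0) / (t1 - t0)  # average acceleration over the interval
        if accel < threshold:
            events.append((t0, t1, accel))
    return events

# A vehicle cruising at ~20 m/s that brakes sharply between t=2s and t=3s.
trace = [(0, 20.0), (1, 20.0), (2, 19.5), (3, 9.5), (4, 9.0)]
```

Aggregating such events by street segment, as the article describes, would then highlight road sections where drivers repeatedly brake hard.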

My take

Google used I/O 2021 to show off its AI prowess; however, many of the improvements were for internal use. When it comes to delivering AI technology to developers, Vertex AI is playing catch-up to AWS SageMaker and Azure ML, while it is unclear when, or if, LaMDA and MUM will be turned into cloud services. Furthermore, whether or not its AI products are technically superior to its competitors’, Google still has an image problem with many enterprises that see GCP as a fertile playground for developers, but not a place they would trust to run critical applications.

Google’s financial results show encouraging progress in winning business customers, reporting a 48% revenue increase in Cloud and Workspace services in its Q1 2021 earnings. Furthermore, it rightly sees AI as a competitive differentiator, with CEO Sundar Pichai saying on the call that he is committed to bringing its AI improvements to enterprise customers via GCP. As I first wrote more than four years ago, it’s a mistake for either competitors or potential customers to sell Google short as a cloud platform. AI could be the gateway to growth if Google can use its Vertex experience to turn more of its raw technology into services for non-specialist enterprise developers.
