To coincide with the “Cloud AI” segment of its Cloud Next ’20: OnAir conference, Google today unveiled updates and new features across its portfolio of AI services. Contact Center AI, software that enables businesses to deploy virtual agents for customer service interactions, gained custom-generated voices and an agent assist module. As of this week, the file-analyzing Document AI will ship with a mortgage industry template for processing borrowers’ income. And forthcoming tools for AI Platform will provide automation and monitoring at the testing, deployment, and management stages of AI system construction.
“AI is opening up a new world of possibilities in areas like customer experience, user engagement, and access to content,” Google head of conversational AI Antony Passemard wrote in a blog post. “In Cloud AI, we’ve taken Google’s … machine learning models in speech and natural language processing and applied them.”
Today marks the beta debut of Dialogflow CX, the newest version of Google’s Dialogflow suite for building conversational experiences, which Google says is now used by more than a million developers. According to Passemard, Dialogflow CX is optimized for contact centers that deal with complex conversations and that deploy across platforms — including mobile, web, smart devices, chatbots, interactive voice response systems, messaging apps, and more.
Dialogflow CX introduces a streamlined visual builder — one that graphs conversation paths as state machine models — and the concept of first-class types (conversation states and state transitions) to provide fine-grained control over conversation paths. Also new are flows, which partition agents into smaller conversation topics and which can be used by team members to create paths within dialog trees.
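The state-machine idea behind the visual builder can be sketched in a few lines of plain Python. This is an illustrative toy, not the Dialogflow CX API: states stand in for conversation steps, and a matched intent triggers a transition, the same structure the builder graphs visually.

```python
# Illustrative sketch (not the Dialogflow CX API): a conversation modeled
# as a state machine, where each (state, intent) pair names the next
# conversation state -- the structure Dialogflow CX's builder graphs.

class ConversationFlow:
    def __init__(self, start_state):
        self.state = start_state
        self.transitions = {}  # (state, intent) -> next state

    def add_transition(self, state, intent, next_state):
        self.transitions[(state, intent)] = next_state

    def handle(self, intent):
        """Advance the conversation if the intent matches a transition;
        stay in place otherwise."""
        key = (self.state, intent)
        if key in self.transitions:
            self.state = self.transitions[key]
        return self.state

# A small hypothetical booking flow: greeting -> collect_date -> confirm
flow = ConversationFlow("greeting")
flow.add_transition("greeting", "book_flight", "collect_date")
flow.add_transition("collect_date", "provide_date", "confirm")

flow.handle("book_flight")   # -> "collect_date"
flow.handle("provide_date")  # -> "confirm"
```

In this framing, a "flow" in the Dialogflow CX sense would be a self-contained sub-machine like the booking example above, which a team member could build and test independently of the rest of the dialog tree.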
Rolling out alongside Dialogflow CX is Agent Assist for Chat, a Contact Center AI add-on that provides agents with support via text, in addition to calls. Agent Assist transcribes calls in real time and identifies customer intent to provide step-by-step assistance, like recommended articles, deals and special offers, discount information, workflows, and automated dispositions.
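The pattern Agent Assist applies — identify the customer's intent, then surface relevant resources to the human agent — can be illustrated with a deliberately simple sketch. The keyword matching and suggestion tables below are hypothetical stand-ins, not how Google's models work:

```python
# Hypothetical sketch of the Agent Assist pattern (not Google's API or
# models): match a customer utterance to an intent, then look up the
# suggestions an agent would see -- articles, offers, or workflow steps.

INTENT_KEYWORDS = {
    "cancel_service": {"cancel", "close", "terminate"},
    "billing_question": {"bill", "charge", "invoice"},
}

SUGGESTIONS = {
    "cancel_service": ["Retention offer workflow", "Cancellation policy article"],
    "billing_question": ["Billing FAQ", "Dispute-a-charge workflow"],
}

def suggest(utterance):
    """Return (intent, suggested resources) for a customer message."""
    words = set(utterance.lower().split())
    for intent, keywords in INTENT_KEYWORDS.items():
        if words & keywords:
            return intent, SUGGESTIONS[intent]
    return "unknown", []

intent, tips = suggest("I was double charged on my last bill")
# intent == "billing_question"
```

In production the intent classification would come from trained language models over a live call or chat transcript; the point here is only the shape of the loop, from utterance to intent to agent-facing recommendation.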
Custom Voice is a more autonomous affair. Available in beta, it leverages Google’s Text-to-Speech API to enable companies to create voices that channel their brands across touchpoints. Much like Amazon’s Brand Voice, Custom Voice builds AI-generated voices that represent specific personas.
To prevent malicious applications of Custom Voice, Passemard says customers will have to complete a review and ensure their use case is aligned with Google’s AI Principles. English is the only language currently supported, and the models powering Custom Voice require “studio-quality” training audio data supplied by a voice actor. Developing and evaluating a model takes several weeks.
Within Document AI, Google took the wraps off Lending Document AI, a specialized solution for the mortgage industry that automates routine document review (now in alpha). Today also marks the beta launch of Procure-to-Pay Document AI, which aims to help companies automate the procurement cycle with a set of invoice and receipt parsers that take documents and return cleanly structured data.
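The "documents in, structured data out" contract of an invoice parser can be sketched with a few regular expressions. This toy is purely illustrative — the field names and patterns are assumptions, and the real parsers use trained models rather than regexes:

```python
# Illustrative only (not the Document AI parsers themselves): pull
# structured fields out of flat invoice text, the way an invoice parser
# turns a document into clean key/value data.
import re

def parse_invoice(text):
    """Extract a few common invoice fields into a dict."""
    patterns = {
        "invoice_number": r"Invoice\s*#?\s*:\s*(\S+)",
        "total": r"Total\s*:\s*\$?([\d,]+\.\d{2})",
        "due_date": r"Due\s*Date\s*:\s*([\d/-]+)",
    }
    fields = {}
    for name, pattern in patterns.items():
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            fields[name] = match.group(1)
    return fields

doc = "Invoice #: INV-1042\nTotal: $1,250.00\nDue Date: 2020-09-30"
parse_invoice(doc)
# {'invoice_number': 'INV-1042', 'total': '1,250.00', 'due_date': '2020-09-30'}
```

The downstream value is in the return shape: once every invoice or receipt yields the same dictionary of fields, the rest of the procurement cycle can run on structured data instead of documents.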
In March, Google announced Cloud AI Platform Pipelines, a service designed to deploy robust, repeatable AI pipelines, along with monitoring, auditing, version tracking, and reproducibility. By October, a fully managed offering for pipelines will launch in preview, enabling customers to build pipelines using prebuilt TensorFlow Extended components and templates.
By the end of 2020, Google plans to launch a Continuous Monitoring service and a Feature Store (in alpha) that serves as a repository for model feature values. Continuous Monitoring will flag models in production that begin to go stale, as well as any outliers, skews, or concept drifts that emerge. Meanwhile, Feature Store will provide tooling to mitigate common causes of inconsistency between the features — individual measurable properties or characteristics — used for model training and prediction.
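To make the skew-flagging idea concrete, here is a minimal sketch of one check a monitoring service might run. The statistic and threshold are illustrative assumptions, not Continuous Monitoring's actual method: it compares the serving-time distribution of a feature against its training baseline and raises an alert when they diverge.

```python
# A minimal drift check (illustrative assumption, not Continuous
# Monitoring's actual method): flag a feature whose serving-time mean
# strays too far from its training-time baseline.
from statistics import mean, stdev

def drift_alert(train_values, serve_values, threshold=3.0):
    """Return True when the serving mean is more than `threshold`
    standard errors away from the training mean."""
    baseline = mean(train_values)
    std_error = stdev(train_values) / len(train_values) ** 0.5
    shift = abs(mean(serve_values) - baseline)
    return shift > threshold * std_error

train = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]
drift_alert(train, [10.1, 9.9, 10.3])   # stable traffic -> False
drift_alert(train, [14.8, 15.2, 15.0])  # shifted traffic -> True
```

A production service would run richer statistics over many features continuously; the sketch shows only the core comparison between a training baseline and live traffic.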
Continuous Monitoring and fully managed pipelines build upon the new ML Metadata Management product within AI Platform, which tracks artifacts and experiments run by teams to provide a ledger of actions and model lineage. Set to launch by the end of September, ML Metadata Management will enable customers to determine model provenance for any model trained on AI Platform for debugging, audit, and collaboration, Passemard said.
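The "ledger of actions and model lineage" concept can be sketched as a graph of artifacts, each recording the inputs it was produced from. This toy data model is an illustrative stand-in, not ML Metadata Management's actual schema:

```python
# A toy lineage ledger (illustrative stand-in, not ML Metadata
# Management's data model): each artifact records the artifacts it was
# produced from, so a model's full provenance can be traced for audits.

class Ledger:
    def __init__(self):
        self.parents = {}  # artifact -> tuple of input artifacts

    def record(self, artifact, inputs=()):
        """Log an artifact and the inputs that produced it."""
        self.parents[artifact] = tuple(inputs)

    def provenance(self, artifact):
        """Return every upstream artifact the given one depends on."""
        lineage = set()
        stack = [artifact]
        while stack:
            for parent in self.parents.get(stack.pop(), ()):
                if parent not in lineage:
                    lineage.add(parent)
                    stack.append(parent)
        return lineage

ledger = Ledger()
ledger.record("raw_data_v1")
ledger.record("features_v1", ["raw_data_v1"])
ledger.record("model_v1", ["features_v1", "training_config_v3"])

ledger.provenance("model_v1")
# {'features_v1', 'training_config_v3', 'raw_data_v1'}
```

A provenance query like this is what makes debugging and audits tractable: given a model in production, the ledger answers which dataset, feature set, and configuration produced it.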
“Practicing machine learning operations means that you advocate for automation and monitoring at all steps of machine learning system construction, including integration, testing, releasing, deployment, and infrastructure management,” Passemard said. “The announcements we’re making today will help simplify how AI teams manage the entire machine learning development lifecycle.”