Introducing Dialogflow case studies

March 9, 2018

Every day, we’re seeing more and more rich conversational experiences being built with Dialogflow. Today, we’re sharing details on some of these experiences in 3 new case studies with KLM Royal Dutch Airlines, Ticketmaster, and Domino’s. Read on to learn how the conversational experiences they’ve built help them stay ahead of the curve, be where their customers are, and assist throughout the entire user journey.

Staying ahead of the conversational technology curve

Domino’s believes conversational technology will be the next evolution in e-commerce and is keen to stay ahead of the curve. They incorporated Dialogflow’s machine learning and natural language understanding (NLU) capabilities into their ordering bot, ‘Dom.’ Through conversing with Dom, customers can place both simple and complex orders, request recent orders, and track order progress.

[Image: A pizza ordering conversation with 'Dom', Domino's ordering bot]

Being where the customers are

With the popularity of messaging platforms and the emergence of smart voice-controlled devices, Ticketmaster wants to help customers find their favorite artists and shows on all the platforms and surfaces they’re already using. They launched their ticket discovery and purchase experience to Google Assistant users on phones, and plan to scale to more devices with the Google Assistant built in. They also plan to expand to platforms such as Amazon Alexa, Facebook Messenger, and Cortana, and to international markets outside the US, using Dialogflow’s cross-platform and multilingual features.

[Image: Browse events and purchase tickets directly with Ticketmaster on the Google Assistant]

Assisting throughout the entire customer journey

KLM Royal Dutch Airlines built a booking bot called ‘BB’ and, after launching it, identified a new customer engagement opportunity once a flight booking is complete. Using Dialogflow’s easy-to-use platform, the airline quickly built an entirely new packing experience to help travelers prepare for their upcoming trip. The two distinct yet interconnected experiences allow BB to assist customers throughout the travel journey in helpful (and fun!) ways.

[Image: Get packing tips from BB, KLM's service bot]

Check out these 3 case studies to learn more about how Domino’s, Ticketmaster, and KLM are using Dialogflow to establish their presence in the digital assistant space. We’ll continue to add more stories in the future, so share the cool experiences you’ve been building with Dialogflow as well! And if you’re new, learn how you can create your first Dialogflow agent here.

Posted by Mary Chen, travel and packing enthusiast, and Alan Montelongo, pizza and ticket enthusiast





How contexts and follow-up intents work

March 7, 2018

Using contexts and follow-up intents to respond correctly every time

Contexts

Contexts are a tool that allows Dialogflow developers to build complex, branching conversations that feel natural and real.

Here’s an example of a dialog powered by contexts.

User: “Will it rain in Mountain View today?”

Agent: “No, the forecast is for sunshine.”

User: “How about San Francisco?”

Agent: “San Francisco is expecting rain, so bring an umbrella!”

While the follow-up, “How about San Francisco?”, doesn’t make sense as a standalone question, the agent knows the contextual inquiry is still about rain.

Dialogflow uses contexts to manage conversation state, flow and branching. You can use contexts to keep track of a conversation’s state, influence what intents are matched and direct the conversation based on a user’s previous responses. Contexts can also contain the values of entities and parameters, based on what the user has said previously.

In this blog post, we’ll be exploring the concept of contexts and showing the various ways you can work with them. By the end of the post, you’ll be able to use contexts as a tool in your own agents.

Input and output contexts

In a Dialogflow agent, each intent is configured with two lists of contexts:

  • Output contexts

  • Input contexts

Output contexts

Output contexts attach contexts to the session - the conversation’s state - after an intent has been matched. For instance, if you have an intent that is matched when a user mentions that they like cats, you can specify that the output context likes_cats is attached to the session after the intent is matched.

This means that when further requests are handled by Dialogflow or in your business logic, they can observe that the likes_cats context is attached to the session and respond accordingly. For example, an entertainment app might know to show the user cat-related content when they ask for recommendations.
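For example, a fulfillment webhook can check whether the likes_cats context is attached before deciding what to recommend. Here is a minimal sketch in Python, assuming the API V2 webhook request format, where active contexts arrive under queryResult.outputContexts; the helper name is our own, not part of any Dialogflow library.

```python
# Minimal sketch, assuming the API V2 webhook request format: active contexts
# arrive as full resource paths under queryResult.outputContexts.
def wants_cat_content(webhook_request: dict) -> bool:
    """Return True if the likes_cats context is attached to the session."""
    contexts = webhook_request.get("queryResult", {}).get("outputContexts", [])
    # A context name looks like ".../sessions/<session-id>/contexts/likes_cats".
    return any(ctx["name"].split("/")[-1] == "likes_cats" for ctx in contexts)
```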

Input contexts

Input contexts can be used to filter which intents are matched, according to the following rules:

  • An intent will only be matched if all of the input contexts it specifies are currently active.

  • Given two intents with identical training examples, the intent whose input contexts are currently active will be matched.

The following table gives examples of how input contexts affect matching in various scenarios.

Contexts in the session | Intent's input contexts | Can intent be matched?
No contexts             | No input contexts       | Yes
No contexts             | likes_cats              | No
likes_cats              | likes_cats              | Yes
likes_cats              | No input contexts       | Yes
likes_cats              | likes_dogs              | No
likes_cats, likes_dogs  | likes_dogs              | Yes
likes_cats, likes_dogs  | likes_cats, likes_dogs  | Yes

Using input and output contexts, you can control dialog in the following ways:

  • Setting contexts when certain criteria are met

  • Creating intents with the proper input contexts

This can be useful in filling out forms: questions may only need to be asked if the user provides certain answers to other questions. It can also help manage conversational games, and ensure intents are matched in a certain order.

Adding contexts to your intents

To add input or output contexts to your intent, first scroll to the top of your intent and click on Contexts as seen below:

[Image: Contexts UI]

In the “Add input context” or “Add output context” sections, add your input or output contexts. If your agent uses a webhook for fulfillment, you can set output contexts in your webhook responses. Learn more about adding contexts here.
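If you set contexts from code rather than the console, the webhook response carries them. The sketch below assumes the API V2 webhook response format, where context names are full session paths; the helper function is illustrative only.

```python
# Minimal sketch, assuming the API V2 webhook response format.
def build_response(webhook_request: dict, reply_text: str) -> dict:
    # The incoming request carries the session path, e.g.
    # "projects/<project-id>/agent/sessions/<session-id>".
    session = webhook_request["session"]
    return {
        "fulfillmentText": reply_text,
        "outputContexts": [
            {
                "name": session + "/contexts/likes_cats",
                "lifespanCount": 5,  # stays active for the next 5 turns
            }
        ],
    }
```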

Context lifespan

To further control conversation state, contexts can be given a specific lifespan. A context stays attached to the session until the number of conversational turns between the user and your agent exceeds the lifespan it was set with. In the following image, the lifespan is set to 5.

[Image: Context lifespan set to 5]

Output contexts can be set again in subsequent intents and can even be “cleared” by setting the lifespan of the context to 0. This may be useful if the user wants to start the conversation over, or if you’d like to reset a context that is no longer relevant. Contexts are automatically cleared from the session ten minutes after being applied, regardless of their lifespan.
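In a webhook response, clearing a context is just a matter of re-sending it with a lifespan of zero. A tiny sketch, under the same assumed V2 webhook response format as above:

```python
# Minimal sketch, same assumed V2 webhook response format as above:
# returning a context with lifespanCount 0 removes it from the session.
def clear_context(session_path: str, context_name: str = "likes_cats") -> dict:
    return {
        "name": session_path + "/contexts/" + context_name,
        "lifespanCount": 0,
    }
```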

See the documentation for more information.

Follow-up Intents and Contexts

Follow-up intents provide a simple way to shape dialog without having to manage contexts manually. Here’s an example.

[Image: Nested follow-up intents]

In this sequence, there are two sets of intents that can handle a yes or no answer. The intents handling yes or no for “Do you like cats?” are distinct from those handling yes or no for “Would you like to see a cat picture?”.

One set of intents is nested as follow-up intents for “Do you like cats?”, meaning they will only be matched in immediate response to the “Do you like cats?” intent.

The other set of intents is nested as follow-up intents for “Do you like cats? - yes”. This means they will only be matched if the user previously answered “yes” to the “Do you like cats?” question.

The structure of this conversation, along with the ability to correctly match the appropriate “yes” or “no” intent even when there are multiple equivalents, is powered by contexts.

When a follow-up intent is created, an output context is added to the parent intent and an input context of the same name is added to the newly created child intent. This means that the follow-up intent can only be matched when the parent intent was matched on the previous turn of conversation.

Follow-up intents allow you to conveniently apply the power of contexts to your conversation. See the documentation for further detail.

Parameters and Contexts

Contexts can also include parameter values captured when the context was set. For instance, if an intent that includes a parameter for the name of a band is matched when the user answers the question, “What is your favorite band?”, that band name can be surfaced in subsequent intents.

You can access this name in Dialogflow by entering #context_name.parameter_name (where context_name is the name of the context and parameter_name is the name of the parameter). This works for any response, as long as the context is currently active and the user has provided a value for the parameter.

For example, in the first screenshot below, the output context “favorite-city” is applied, and a parameter value for “geo-city” is extracted from what the user says.

[Image: Intent showing parameters]

In the second screenshot, representing a subsequently matched intent, we can use the string #favorite-city.geo-city to access and output this value in the “Text response”. Since “favorite-city” has been added as an input context, this intent will only be matched after the previous one.

[Image: Intent showing response]

When a user says “My favorite city is New York”, matching the “Remember Favorite City” intent, the value “New York” will be stored in the context. When they subsequently ask “What is my favorite city?”, the agent will respond with “Your favorite city is New York.”

Learn more about extracting parameter values from contexts. Remember that if your agent uses a webhook for fulfillment, you can activate contexts and get parameter values in your fulfillment code.
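For instance, a fulfillment webhook could read the stored city straight out of the favorite-city context on a later turn. A minimal sketch, again assuming the API V2 webhook request format; the helper name is our own.

```python
# Minimal sketch, assuming the API V2 webhook request format: parameters
# captured earlier travel with the context they were stored in.
def get_favorite_city(webhook_request: dict):
    for ctx in webhook_request.get("queryResult", {}).get("outputContexts", []):
        if ctx["name"].split("/")[-1] == "favorite-city":
            return ctx.get("parameters", {}).get("geo-city")
    return None  # context not active or city not yet provided
```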

Thanks for reading! Learn more about contexts or head over to your developer console to try them out. You can also discuss this more over on our developer community or ask questions in our support forum.

Posted by Matt Carroll and Daniel Imrie-Situnayake, Dialogflow Developer Relations.





Six new languages, including Actions on Google support

February 26, 2018

[Image: Map of the world with speech bubbles in various locations]

Today, we’re announcing the availability of 6 additional languages that you can use in your Dialogflow agents:

  • Hindi (hi)

  • Thai (th)

  • Indonesian (id)

  • Swedish (sv)

  • Danish (da)

  • Norwegian (no)

All of these languages come with prebuilt agents for Small Talk, Support, Translate and Weather. They are fully supported by Actions on Google and can be used to build apps for the Google Assistant.

This brings our total number of supported root languages to 21, along with 9 locales. Here are all the supported languages - learn how to build multilingual agents and give it a try in the Dialogflow console.

And if you missed the announcement from last week, the Assistant will be available in 30 languages by the end of the year. Let us know what language you’re most looking forward to! Share your feedback in our new developer community or post your technical questions on our help forum.

Posted by Dan Imrie-Situnayake, Developer Advocate





Announcing our Dialogflow Developer Community on Google+

January 30, 2018

We may only be a month into 2018, but one thing’s for sure: this is going to be a huge year for conversational technology.

Between platforms like the Google Assistant and a whole galaxy of instant messaging platforms, there’s no end of opportunities for developers to design and build amazing conversational experiences.

Our goal at Dialogflow is to provide the tools, skills, and support to help you do incredible things with this new medium. We have some big things planned this year, and here’s the first of them:

Join our official community

Today, we’re launching our official Dialogflow community for developers. Join us there to share cool projects you’re working on and meet awesome developers from around the world - including the Dialogflow team!

We’d love this to be a hub for learnings, feedback, and community as you dive deeper into exploring and building conversational technology. (Our help forum is still the best place to go for support-related questions.)


[Image: Join the official Dialogflow developer community!]

We’re hanging out and looking forward to meeting you. Head on over to the new developer community and introduce yourself :)

Stay tuned for more news and updates planned for 2018, including tips, tricks, and tutorials, along with industry insights from our team. Beyond our new developer community, you can also join us on our official Twitter and Facebook channels.

Posted by Dan Imrie-Situnayake, Developer Advocate





Introducing two beta releases: Dialogflow API V2 and new Enterprise Edition

November 16, 2017

When we joined Google, our vision was to use Google’s infrastructure and resources to accelerate improvements to our platform, making the best technologies in AI and machine learning available to our developer community.

Today, we’re launching two important new updates, in beta, that bring us even closer to that day-one vision:

Dialogflow API V2 BETA

Dialogflow API V2 BETA is the next iteration of our developer API, adding new features and capabilities. API V2’s updates include:

  • Google Cloud Speech integration, allowing developers to send audio directly to Dialogflow for speech recognition and natural language understanding
  • Importing, exporting and restoring agents through API calls, for improved integration with developers’ change management processes
  • Support for gRPC, an RPC framework offering improved performance, scalability and integration

We will continue to support our existing API V1, while future features and improvements will be added to API V2. If you wish to try out the new API V2 BETA, please review our FAQ page and follow these instructions.
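To give a feel for the new API, here is a minimal sketch of detecting an intent from text with the API V2 Python client library, assuming the dialogflow package is installed and application default credentials are configured; the project and session IDs are placeholders.

```python
# Minimal sketch of a text query against Dialogflow API V2 with the Python
# client library (assumes the "dialogflow" package and valid credentials).
import dialogflow_v2 as dialogflow

def detect_intent_text(project_id, session_id, text, language_code="en"):
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)
    text_input = dialogflow.types.TextInput(text=text, language_code=language_code)
    query_input = dialogflow.types.QueryInput(text=text_input)
    response = session_client.detect_intent(session=session, query_input=query_input)
    return response.query_result.fulfillment_text

# Example usage (placeholder IDs):
# print(detect_intent_text("my-project-id", "test-session", "Will it rain today?"))
```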

Dialogflow Enterprise Edition BETA

Many of our clients have asked for enterprise support, and today we’re excited to be launching Dialogflow Enterprise Edition BETA. Dialogflow Enterprise Edition BETA combines our new API V2 with Google Cloud Platform’s enterprise-grade compliance and customer support. Head over to the Google Cloud Platform to learn more.

What do today’s announcements mean for me?

In short, no action is necessary for your existing agents – they will continue working as before. We recommend that most users continue development with API V1 unless they would like to evaluate API V2 BETA or require features only available in API V2 BETA. Using Dialogflow Enterprise Edition BETA requires the use of API V2 BETA.

As we improve API V2 tooling in the coming months, we’ll provide additional resources for migrating your existing agents from V1 to V2.

If you have more questions about trying API V2, don’t forget to check out our FAQ page.

Posted by Artem Goncharuk, Engineering Lead




