
Introducing Session Flow for visualizing conversations in Analytics

March 20, 2018

Nine months ago, we introduced the Analytics dashboard in Dialogflow to help you monitor your agent’s performance around session traffic and intent usage. Today, we’re launching a new feature in Analytics called Session Flow, an interactive visualization that maps the most common user journeys of your agent across all platforms.

Screenshot of the Session Flow report

The new Session Flow report can help you answer questions to improve the user experience and increase overall usage of your agent, such as:

  • Which journeys are most and least common?
  • When do exits occur across user journeys?
  • What do transitions across intents look like in conversations?

Best Buy Canada uses Session Flow to decrease its Dialogflow agent’s exit rate by 10%

Best Buy Canada uses Session Flow to better understand how users of its order-status bot journey through different intents, which intents are most popular, and where exits occur. The company also relies on this feature when running experiments to analyze how changes to its bot would affect the overall user experience. Recently, Best Buy Canada found that the exit rate for its search intent decreased by 10% after updating its fallback intent to better handle failed product searches.

Try it out

Head over to your Dialogflow console to try out Session Flow. We hope that the new feature helps you improve the user experience and increase overall usage of your agent. We’re continually working with Chatbase, the cloud service for more easily analyzing and optimizing bots, on expanding the Analytics dashboard to help you monitor and improve agent performance.

Refer to the docs for more information about interpreting Analytics and as always, let us know in our help forum if you have any questions or feedback.

Posted by Justin Kestelyn, Chatbase PMM

Introducing Dialogflow's Node.js Fulfillment Library Beta

March 15, 2018

Fulfillment is a powerful way to connect Dialogflow’s natural language capabilities with your own backend, APIs, and databases to create contextual, personalized and actionable conversational experiences for your users. Dialogflow developers are using fulfillment to allow their users to order items, retrieve user-specific information such as emails, and control devices like a photobooth. Today, we’re simplifying Node.js fulfillment development with the beta release of the Dialogflow Fulfillment Library.

The new fulfillment library works seamlessly with text, card, image, and suggestion responses across v1 and v2 agents, 8 chat and voice platforms, and Dialogflow’s own simulator. The library also supports custom payloads, which are platform-specific responses, for all 14 Dialogflow-supported platforms, and includes an integration with the new Actions on Google client library for easily creating responses for the Google Assistant. Read on to see how you can use the Dialogflow Fulfillment Library to build agents across platforms and languages, and integrate with platforms like the Google Assistant.

Cross-platform responses

The fulfillment library supports text, card, image, and suggestion chip responses for Dialogflow’s simulator and these 8 platforms: the Google Assistant, Facebook, Slack, Telegram, Kik, Skype, Line, and Viber. See how to add text, card, and suggestions with the code below and what the responses look like on the Google Assistant, Dialogflow’s simulator, Slack, and Facebook Messenger:

const functions = require('firebase-functions');
const {WebhookClient, Card, Suggestion} = require('dialogflow-fulfillment');

exports.dialogflowFirebaseFulfillment = functions.https.onRequest((req, res) => {
  const agent = new WebhookClient({ request: req, response: res });

  function intentHandler(agent) {
    agent.add('This message is from Dialogflow\'s Cloud Functions for Firebase editor!');
    agent.add(new Card({
      title: 'Title: this is a card title',
      imageUrl: 'https://developers.google.com/actions/assistant.png',
      text: 'This is the body text of a card.  You can even use line\n  breaks and emoji! 💁',
      buttonText: 'This is a button',
      buttonUrl: 'https://assistant.google.com/'
    }));
    agent.add(new Suggestion('Quick Reply'));
    agent.add(new Suggestion('Suggestion'));
  }

  // Map intent names (as defined in the Dialogflow console) to their handlers;
  // 'Default Welcome Intent' is the intent every new agent starts with
  let intentMap = new Map();
  intentMap.set('Default Welcome Intent', intentHandler);
  agent.handleRequest(intentMap);
});


Copy and paste the above code into Dialogflow’s inline editor to try it now or check out the full quick start sample here.

Screenshots of the responses rendered on the Google Assistant, Dialogflow’s simulator, Slack, and Facebook Messenger

See the reference documentation for other rich response types, such as images.
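For instance, an image response can be added with the library’s Image class. Here’s a minimal sketch that reuses the image URL from the card example above:

const {Image} = require('dialogflow-fulfillment');

function imageHandler(agent) {
  // Add a standalone image response (shown on platforms that support images)
  agent.add(new Image('https://developers.google.com/actions/assistant.png'));
}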

Multilingual responses

You can also build multilingual and locale-specific fulfillment using the agent’s locale attribute. The code below shows how to greet the user in English or French depending on the language of the request:

const functions = require('firebase-functions');
const {WebhookClient} = require('dialogflow-fulfillment');

exports.dialogflowFirebaseFulfillment = functions.https.onRequest((req, res) => {
  const agent = new WebhookClient({ request: req, response: res });

  function welcome(agent) {   // English handler function
    agent.add('Welcome to my agent!');
  }
  function bienvenue(agent) { // French handler function
    agent.add('Bienvenue à mon agent!');
  }

  // Choose the intent map to use based on the locale of the request
  let intentMap = new Map();
  if (agent.locale === 'en') {
    intentMap.set('Default Welcome Intent', welcome);
  } else if (agent.locale === 'fr') {
    intentMap.set('Default Welcome Intent', bienvenue);
  }
  agent.handleRequest(intentMap);
});

Copy and paste the above code into Dialogflow’s inline editor to try it now or check out the full multilingual and locale sample here.

Custom payloads and the Google Assistant Integration

The fulfillment library supports custom payload responses, which let you use platform-specific features, such as authentication and transactions, on Dialogflow-supported platforms. These payloads are sent to the target platform in place of any other messages defined in the library. The example below shows how to add custom JSON payloads for the Google Assistant and Slack:

const {Payload} = require('dialogflow-fulfillment');

// Inside an intent handler:
agent.add(new Payload(agent.ACTIONS_ON_GOOGLE, {/*your Google payload here*/}));
agent.add(new Payload(agent.SLACK, {/*your Slack payload here*/}));

Dialogflow’s fulfillment library also integrates with the Google Assistant through Actions on Google’s v2 alpha client library, which lets you build custom Actions on Google responses. Here is an example of how to ask for the user’s location through the Google Assistant:

const functions = require('firebase-functions');
const {WebhookClient} = require('dialogflow-fulfillment');
const {Permission} = require('actions-on-google');

exports.dialogflowFirebaseFulfillment = functions.https.onRequest((req, res) => {
  const agent = new WebhookClient({ request: req, response: res });

  function intentHandler(agent) {
    // Get the Actions on Google conversation object and ask the user
    // for permission to access their precise location
    let conv = agent.conv();
    conv.ask(new Permission({
      context: 'To give results in your area',
      permissions: 'DEVICE_PRECISE_LOCATION',
    }));
    agent.add(conv);
  }

  let intentMap = new Map();
  intentMap.set('Default Welcome Intent', intentHandler);
  agent.handleRequest(intentMap);
});


Copy and paste the above code into Dialogflow’s inline editor to try it now or check out the full Actions on Google sample here.

Next steps

Check out the Dialogflow fulfillment library on GitHub and npm, along with our quick start sample and getting started guide, to get up and running today. Send us feedback, feature requests, or bugs by opening an issue on GitHub.

We’ll be discussing the library, your feedback and feature requests on Dialogflow’s Google+ community. We look forward to hearing from you and seeing what you build with the Dialogflow fulfillment library!

Posted by Matt Carroll, Dialogflow Developer Relations

Introducing Dialogflow case studies

March 9, 2018

Every day, we’re seeing more and more rich conversational experiences being built with Dialogflow. Today, we’re sharing details on some of these experiences in 3 new case studies with KLM Royal Dutch Airlines, Ticketmaster, and Domino’s. Read on to learn how the conversational experiences they’ve built help them stay ahead of the curve, be where their customers are, and assist throughout the entire user journey.

Staying ahead of the conversational technology curve

Domino’s believes conversational technology will be the next evolution in e-commerce and is keen to stay ahead of the curve. They incorporated Dialogflow’s machine learning and natural language understanding (NLU) capabilities into their ordering bot, ‘Dom.’ By conversing with Dom, customers can make both simple and complex orders, request recent orders, and track order progress.

Dom, Domino's ordering bot
A pizza ordering conversation with 'Dom', Domino's ordering bot

Being where the customers are

With the popularity of messaging platforms and emergence of smart voice-controlled devices, Ticketmaster wants to help customers find their favorite artists and shows on all the platforms and surfaces they’re already using. They launched their ticket discovery and purchase experience to Google Assistant users on phones, and plan to scale to more devices with the Google Assistant built-in. They also plan to expand to platforms such as Amazon Alexa, Facebook Messenger, and Cortana, and to international markets outside the US, using Dialogflow’s cross-platform and multilingual features.

Ticketmaster on the Google Assistant
Browse events and purchase tickets directly with Ticketmaster on the Google Assistant

Assisting throughout the entire customer journey

KLM Royal Dutch Airlines built a booking bot called ‘BB’, and after launching it, identified a new opportunity to engage customers once their flight booking is complete. Using Dialogflow’s easy-to-use platform, the airline quickly built an entirely new packing experience to help travelers prepare for their upcoming trip. The two unique yet interconnected experiences allow BB to assist customers throughout the travel journey in helpful (and fun!) ways.

BB, KLM's service bot
Get packing tips from BB, KLM's service bot

Check out these 3 case studies to learn more about how Domino’s, Ticketmaster, and KLM are using Dialogflow to establish their presence in the digital assistant space. We’ll continue to add more stories in the future so share with us cool experiences you’ve been building with Dialogflow as well! And if you’re new, learn how you can create your first Dialogflow agent here.

Posted by Mary Chen, travel and packing enthusiast, and Alan Montelongo, pizza and ticket enthusiast

How contexts and follow-up intents work

March 7, 2018

Using contexts and follow-up intents to respond correctly every time


Contexts are a tool that allows Dialogflow developers to build complex, branching conversations that feel natural and real.

Here’s an example of a dialog powered by contexts.

User: “Will it rain in Mountain View today?”

Agent: “No, the forecast is for sunshine.”

User: “How about San Francisco?”

Agent: “San Francisco is expecting rain, so bring an umbrella!”

While the follow-up, “How about San Francisco?”, doesn’t make sense as a standalone question, the agent knows the contextual inquiry is still about rain.

Dialogflow uses contexts to manage conversation state, flow and branching. You can use contexts to keep track of a conversation’s state, influence what intents are matched and direct the conversation based on a user’s previous responses. Contexts can also contain the values of entities and parameters, based on what the user has said previously.

In this blog post, we’ll be exploring the concept of contexts and showing the various ways you can work with them. By the end of the post, you’ll be able to use contexts as a tool in your own agents.

Input and output contexts

In a Dialogflow agent, each intent is configured with two lists of contexts:

  • Output contexts

  • Input contexts

Output contexts

Output contexts attach contexts to the session - the conversation’s state - after an intent has been matched. For instance, if you have an intent that is matched when a user mentions that they like cats, you specify that the output context “likes cats” is attached to the session after the intent is matched.

This means that when further requests are handled by Dialogflow or in your business logic, they can observe that the “likes cats” context is attached to the session and respond accordingly. For example, an entertainment app might know to show the user cat-related content when they ask for recommendations.
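If your webhook uses a fulfillment library such as the Node.js library described earlier in this blog, attaching an output context might look like the sketch below. This is only a sketch: it assumes the beta fulfillment library’s setContext method and writes the “likes cats” context from the example above as a hyphenated context name.

function likesCatsHandler(agent) {
  agent.add('Cats are great!');
  // Attach the "likes-cats" context to the session for the next 5 turns
  // (setContext is assumed from the dialogflow-fulfillment beta)
  agent.setContext({ name: 'likes-cats', lifespan: 5 });
}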

Input contexts

Input contexts can be used to filter which intents are matched, according to the following rules:

  • An intent will only be matched if all of the input contexts it specifies are currently active.

  • Given two intents with identical training examples, the intent whose input contexts are currently active will be matched.

The following table gives examples of how input contexts affect matching in various scenarios.

Contexts in the session    Intent’s Input Contexts    Can intent be matched?
No Contexts                No Input Contexts          Yes
No Contexts                likes_cats                 No
likes_cats                 likes_cats                 Yes
likes_cats                 No Input Contexts          Yes
likes_cats                 likes_dogs                 No
likes_cats, likes_dogs     likes_dogs                 Yes

Using input and output contexts, you can control dialog in the following ways:

  • Setting contexts when certain criteria are met

  • Creating intents with the proper input contexts

This can be useful in filling out forms: questions may only need to be asked if the user provides certain answers to other questions. It can also help manage conversational games, and ensure intents are matched in a certain order.

Adding Context to your intents

To add input or output contexts to your intent, first scroll to the top of your intent and click on Contexts as seen below:

Contexts UI

In the “Add input context” or “Add output context” sections, add your input or output contexts. If your agent uses a webhook for fulfillment, you can set output contexts in your webhook responses. Learn more about adding contexts here.

Context lifespan

To further control conversation state, contexts can have specific lifespans. A context stays attached to the session as long as the number of interactions between your agent and the user does not exceed the lifespan the context was given when it was set. In the following image, the lifespan is set to 5.


Output contexts can be set again in subsequent intents and can even be “cleared” by setting the lifespan of the context to 0. This may be useful if the user wants to start the conversation over, or if you’d like to reset a context that is no longer relevant. Contexts are automatically cleared from the session ten minutes after being applied, regardless of their lifespan.
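In fulfillment code, resetting a context might look like this one-line sketch (it assumes the same setContext method from the beta fulfillment library):

// Clear the "likes-cats" context by setting its lifespan to 0
agent.setContext({ name: 'likes-cats', lifespan: 0 });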

See the documentation for more information.

Follow-up Intents and Contexts

Follow-up intents provide a simple way to shape dialog without having to manage contexts manually. Here’s an example.

Nested follow-up intents

In this sequence, there are two sets of intents that can handle a yes or no answer. The intents handling yes or no for “Do you like cats?” are distinct from those handling yes or no for “Would you like to see a cat picture?”.

One set of intents is nested as follow-up intents for “Do you like cats?”, meaning they will only be matched in immediate response to the “Do you like cats?” intent.

The other set of intents is nested as follow-up intents for “Do you like cats? - yes”. This means that they will only be matched if the user has previously answered “yes” to the “Do you like cats?” question.

The structure of this conversation, along with the ability to correctly match the appropriate “yes” or “no” intent even when there are multiple equivalents, is powered by contexts.

When a follow-up intent is created, an output context is added to the parent intent and an input context of the same name is added to the newly created child intent. This means that the follow-up intent can only be matched when the parent intent was matched on the previous turn of conversation.

Follow-up intents allow you to conveniently apply the power of contexts to your conversation. See the documentation for further detail.

Parameters and Contexts

Contexts can also include parameter values captured when the context was set. For instance, if an intent that includes a parameter for the name of a band is matched when the user answers the question “What is your favorite band?”, the band’s name can be surfaced in subsequent intents.

You can access this name in Dialogflow by entering #context_name.parameter_name (where context_name is the name of the context and parameter_name is the name of the parameter). This works for any response, as long as the context is currently active and the user has provided a value for the parameter.

For example, in the first screenshot below, the output context “favorite-city” is applied, and a parameter value for “geo-city” is extracted from what the user says.

Intent showing parameters

In the second screenshot, representing a subsequently matched intent, we can use the string #favorite-city.geo-city to access and output this value in the “Text response”. Since “favorite-city” has been added as an input context, this intent will only be matched after the previous one.

Intent showing response

When a user says “My favorite city is New York”, matching the “Remember Favorite City” intent, the value “New York” will be stored in the context. When they subsequently ask “What is my favorite city?”, the agent will respond with “Your favorite city is New York.”

Learn more about extracting parameter values from contexts. Remember that if your agent uses a webhook for fulfillment, you can activate contexts and get parameter values in your fulfillment code.
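For example, reading the stored city in your webhook might look like the sketch below. It assumes the beta fulfillment library’s getContext method and reuses the “favorite-city” context and “geo-city” parameter names from the screenshots above.

function whatIsMyFavoriteCity(agent) {
  // Look up the active "favorite-city" context and read its "geo-city" parameter
  const context = agent.getContext('favorite-city');
  const city = context && context.parameters['geo-city'];
  agent.add(city ? 'Your favorite city is ' + city + '.'
                 : 'I don\'t know your favorite city yet.');
}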

Thanks for reading! Learn more about contexts or head over to your developer console to try them out. You can also discuss this more over on our developer community or ask questions in our support forum.

Posted by Matt Carroll and Daniel Imrie-Situnayake, Dialogflow Developer Relations.

Six new languages, including Actions on Google support

February 26, 2018

Map of the world with speech bubbles in various locations

Today, we’re announcing the availability of 6 additional languages that you can use in your Dialogflow agents:

  • Hindi (hi)

  • Thai (th)

  • Indonesian (id)

  • Swedish (sv)

  • Danish (da)

  • Norwegian (no)

All of these languages come with prebuilt agents for Small Talk, Support, Translate and Weather. They are fully supported by Actions on Google and can be used to build apps for the Google Assistant.

This brings our total number of supported root languages to 21, along with 9 locales. Here are all the supported languages - learn how to build multilingual agents and give it a try in the Dialogflow console.

And if you missed the announcement from last week, the Assistant will be available in 30 languages by the end of the year. Let us know what language you’re most looking forward to! Share your feedback in our new developer community or post your technical questions on our help forum.

Posted by Dan Imrie-Situnayake, Developer Advocate
