API.AI is joining Google!

September 19, 2016

We are excited to announce that API.AI is joining Google!

It has been a long and fun journey. Since API.AI’s launch in 2014, we’ve been constantly impressed by the fast and energetic adoption of the technology from people building conversational interfaces for chatbots, connected cars, smart home devices, mobile applications, wearables, services, robots and more. Our vision has been to make technology understand and speak human language and help developers build truly intelligent conversational interfaces for their products and services. The best part of our day is hearing from our diverse community and how they are using API.AI to create truly innovative, practical and inspirational products that will reshape how we live and work in the future.

What does this mean to you? We’re excited for you to continue using our developer platform to build conversational user interfaces. Joining Google will allow us to accelerate improvements to the platform and service our growing developer community in ways we’ve always dreamed. With Google’s knowledge, infrastructure and support, we can make sure you get access to the best available technologies and developments in AI and machine learning.

As we start this next chapter, we want to express how truly grateful we are to our customers, developers, partners, employees, advisors and investors. We could not have gotten to this point without your support, and we can’t wait to continue working with you. We cannot thank you enough.

We hope you are as excited as we are about what’s to come at Google, and we value your continued support.

Sincerely, Ilya Gelfenbeyn, CEO





API.AI’s Cisco Spark and Tropo Solution at Cisco Live!

June 24, 2016

It’s always exciting to do a little show-and-tell with products you’re proud of. That’s why we can’t wait to be at Cisco Live in Las Vegas from July 10-14, where we’ll be demonstrating API.AI’s latest integrations with Cisco Spark and Cisco-owned Tropo. We’ll be exhibiting at the event as well – if you’re attending, come by and see us at booth C7.

Tropo and Spark are each part of Cisco’s suite of communications solutions. Tropo is a cloud API for communicating via voice and SMS messaging, intended for large-scale enterprises and service providers. Using Tropo, developers and companies can automate real-time communication over voice and messaging channels – especially valuable as a platform for fostering collaboration in the cloud.

Spark is an enterprise messenger that also offers voice and video communication. Cisco has opened Spark’s APIs so that enterprise developers can customize the technology as needed.

Our Tropo and Spark integrations team up those technologies with API.AI’s own natural language understanding capabilities, adding the rich possibilities of conversational user experiences to these communications tools. Developers taking advantage of these new integrations can create bots and interfaces that comprehend natural speech and respond in real time. And our Tropo and Spark integrations are designed to accommodate business needs and common corporate policies around security, privacy, safe control over data, interoperability, and conformance with IT standards.

We’re proud to say that these integrations have already proven to be valuable solutions for building powerful cloud-based collaboration tools, as our partner Redbooth can attest. Redbooth offers a cloud-based collaboration product that helps business teams communicate ideas easily and boost productivity and performance. It uses Tropo and Spark to implement several aspects of its product’s communications functionality (for example, adding telephony so users can turn an online conversation into a phone call with the click of a button when instant voice communication is essential). Now, using API.AI’s integrations, Redbooth has added a task management feature and built a bot that engages with users through natural language conversations – and can retrieve any information stored within Redbooth’s software. Users can simply speak with Redbooth’s bot just as they would with another person, for a much more satisfying user experience.

Anyone at Cisco Live interested in developing conversational bots and interfaces for their communications solutions should come by and say, ‘Hi.’ We’d love to see you there.





Manipulating Entity Entries via API

April 4, 2016

Let’s imagine an agent for a retail store app. There is a developer entity that matches items available in the store. Let’s call it @item. The initial version of the entity contains a list of items available in the store at the moment of the agent’s creation. But since the stock of items is updated regularly, it would be very inefficient to update the entity manually.

To keep the entity in sync with the store database, API.AI now offers three additional operations on the /entities endpoint that let you add, update, and delete entity entries via our API.

Let’s see how it works. Here is the original version of a sample entity we started working with:

Remember that for any of the following operations you’ll need to specify an entity ID (referenced as {eid}) or its name.
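
If you prefer to call the endpoint from code rather than cURL, here is a minimal Python sketch that wraps the three operations described below. It assumes the requests library; the access token is a placeholder, and the v=20150910 protocol version matches the examples in this post.

import requests

BASE_URL = "https://api.api.ai/v1/entities"
HEADERS = {
    "Authorization": "Bearer YOUR_DEVELOPER_ACCESS_TOKEN",
    "Content-Type": "application/json; charset=utf-8",
}
PARAMS = {"v": "20150910"}  # protocol version used in the examples below


def add_entries(entity, entries):
    # POST new entries, e.g. [{"value": "kefir", "synonyms": ["kefir"]}]
    return requests.post(f"{BASE_URL}/{entity}/entries",
                         headers=HEADERS, params=PARAMS, json=entries)


def update_entries(entity, entries):
    # PUT updated entries; each entry carries its full, updated list of synonyms
    return requests.put(f"{BASE_URL}/{entity}/entries",
                        headers=HEADERS, params=PARAMS, json=entries)


def delete_entries(entity, values):
    # DELETE entries by their reference values, e.g. ["milk", "kefir"]
    return requests.delete(f"{BASE_URL}/{entity}/entries",
                           headers=HEADERS, params=PARAMS, json=values)

For example, add_entries("item", [{"value": "kefir", "synonyms": ["kefir"]}]) sends the same request as the first cURL example below.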

Adding Entity Entries

So, the store has two new items for sale, e.g., kefir and ice-cream, and the store owner needs to add new entries to the @item entity.

This can be done with a POST /entities/{eid}/entries request.

The POST body should be an array of entity entry objects in JSON format.

POST https://api.api.ai/v1/entities/item/entries?v=20150910

Headers:
Authorization: Bearer YOUR_DEVELOPER_ACCESS_TOKEN
Content-Type: application/json; charset=utf-8

POST body:
[{
 "value": "kefir",
 "synonyms": ["kefir"]
}, {
 "value": "ice-cream",
 "synonyms": ["ice-cream", "ice cream"]
}]

As a cURL request, it will look like this:

curl -i -X POST -H "Content-Type:application/json" -H "Authorization:Bearer YOUR_DEVELOPER_ACCESS_TOKEN" \
-d '[{"value": "kefir", "synonyms": ["kefir"]}, {"value": "ice-cream", "synonyms": ["ice-cream", "ice cream"]}]' 'https://api.api.ai/v1/entities/item/entries'

This is how the entity will look in the API.AI developer console after sending this request:

Updating Entity Entries

If you need to add or delete synonyms in already existing entries, you can use a PUT /entities/{eid}/entries request.

The PUT body should be an array of entity entry objects in JSON format. Each entry’s synonyms array should contain the full, updated list of synonyms.

For example, you may want to add some spelling variants to the “yogurt” and “kefir” entries.

The cURL request for this will look like this:

curl -i -X PUT -H "Accept:application/json" -H "Content-Type:application/json" -H "Authorization:Bearer YOUR_DEVELOPER_ACCESS_TOKEN" \
-d '[{"value":"yogurt", "synonyms": ["yoghurt", "yoghourt"]}, {"value":"kefir", "synonyms": ["kefir", "keefir", "kephir"]}]' 'https://api.api.ai/v1/entities/item/entries'

The result of this manipulation can be seen in the API.AI developer console:

Deleting Entity Entries

Now, let’s imagine some items were sold out and you need to delete the entries from the @item entity.

You can do this with a DELETE /entities/{eid}/entries request.

The DELETE body should contain an array of strings corresponding to the reference values of entity entries.

For example, to delete “milk” and “kefir”, the cURL request looks like this:

curl -i -X DELETE -H "Accept:application/json" -H "Content-Type:application/json" -H "Authorization:Bearer YOUR_DEVELOPER_ACCESS_TOKEN" \
-d '["milk", "kefir"]' 'https://api.api.ai/v1/entities/item/entries'

Here’s the result of this request seen in the API.AI developer console:






Conversational Slack Bots 101

March 17, 2016

Following up on the VentureBeat article, here is a short tutorial on how to create a Slack bot at API.AI.

Millions of Slack users chat with each other and with bots every day. Bots are the best because you can integrate them with lots of useful services and apps. They provide you with information, deliver notifications, and offer a myriad of other handy features. You can simply talk to a bot naturally, and it understands the context and responds intelligently. And now, with the API.AI–Slack integration, you can train your bots so they can have real conversations with users and perform their duties better.

Design conversations

Start with creating an agent at API.AI. The agent will be the conversational part of your bot’s brain.

Once you have an idea of what conversations your bot should be able to have with your users, start training the API.AI agent. It’s as simple as providing just a few examples of how people may phrase their requests and defining what type of data you want to extract from them.

Connect to a web service

After your bot learns how to understand people, it needs to start responding. You can connect your bot to a web service via a webhook. This will allow you to pass information from a matched request into a web service and get a result from it. This way you can implement connections to data services, business logic, etc.
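
As an illustration, here is a minimal webhook sketch in Python using Flask. The framework choice, the check.stock action name, and the item parameter are just assumptions for this example; the response fields follow the v1 webhook format (speech, displayText, source).

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    req = request.get_json(force=True)
    action = req.get("result", {}).get("action", "")
    params = req.get("result", {}).get("parameters", {})

    if action == "check.stock":                 # hypothetical action name
        item = params.get("item", "that item")  # hypothetical parameter
        reply = "Yes, we currently have {} in stock.".format(item)
    else:
        reply = "Sorry, I can't help with that yet."

    # speech is spoken/sent back to the user, displayText is shown in chat
    return jsonify({
        "speech": reply,
        "displayText": reply,
        "source": "example-webhook"
    })

if __name__ == "__main__":
    app.run(port=5000)

Point the webhook URL in your agent’s Fulfillment settings at the host where a service like this runs.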

Test

In the API.AI developer account, you can find the test console, where you can check how well your agent already understands you. If your request is recognized, you’ll see which intent was used to process it and what information was extracted.

Train it more right away by adding more examples to existing scenarios or creating new ones.

Test some more. Do you love it? Move to the next step.

Create a Slack bot

To enable Slack Integration, go to your agent’s settings, select ‘Integrations’ from the horizontal menu, and turn Slack integration on. Alternatively, you can select ‘Slack integration’ from the left side menu.

To define the way people can interact with your bot, tick the relevant checkboxes in the ‘Message Types’ section.

  • Direct message. The bot will receive a direct message from a user.
  • Direct mention. The bot will answer when mentioned at the beginning of a message.
  • Mention. The bot will answer when mentioned in any part of a message.
  • Ambient. The bot will monitor any message in a channel.

Don’t forget to save the settings.

To connect the bot to your Slack account, click ‘Test in Slack’ and then sign in with your team account. Let your team know about your bot. The more conversations it gets, the more data you’ll have to train it. And the smarter your bot will become as a result.

When you’re ready to publish your bot to the community, go to the Slack Apps webpage where you can find all the necessary documentation. Click on ‘My apps’ from the top menu, and then on ‘Create a new application’.

You’ll be asked to fill in the app name, descriptions of the app and a Redirect URI. Use the Redirect URI from your API.AI agent’s Slack Integration settings.

After clicking on the ‘Create application’ button, you’ll see your Client ID and Client Secret. Go to your API.AI agent’s Slack integration settings and fill in the Client ID and Client Secret fields with the values from your Slack app’s settings. Then click ‘Go online’.

Go back to your app settings on the Slack webpage, scroll down to the ‘Bot User’ section, and configure a new bot user for your app by clicking the ‘Add a bot to this app’ button.

Scroll up to the top of the page and click the ‘Add to Slack’ link. The Slack button page will open.

Scroll down to the ‘Add to Slack button’ section. Uncheck ‘incoming webhook’ and tick the ‘bot’ checkbox, which generates the HTML code for the ‘Add to Slack’ button that can be added to your website.

To launch the bot for your Slack team, just click the ‘Add to Slack’ button above the generated code.

If you intend to make your bot accessible to other teams, you’ll need to place the ‘Add to Slack’ button on your website, so that anyone can launch your bot by clicking it.

Note: If you want to have your bot published in the Slack App Directory, you have to register as a developer and go through the Slack approval process.

Check out our API.AI for Slack video!





What are contexts and how are they used?

November 23, 2015

A context is a powerful tool that can help to create sophisticated voice interaction scenarios. In this post we’ll look at how you can use contexts to build dialogs.

When a dialog is created, it’s usually the case that many branches of conversation are possible. Therefore, intents in an agent should be designed so that they follow each other in the correct order during a conversation.

The best way to demonstrate how this works is to build a conversation together, step-by-step. As an example, we’re going to create an agent for a floral shop. You can download it here and try the instructions in your account.

First, let’s take a quick look at how contexts operate.

1. Using contexts

Contexts appear in your agent just under the intent title. You should see two fields here: input and output contexts. Matching input and output contexts across intents determine which intents can follow one another in a conversation.

To add a context, just type its name into the field and press enter. The context will now appear as a box with a colored border.

Here we’ve set an output context named ‘compose’ for our floral shop. Once this intent is matched, the context ‘compose’ will be set. Any intents that have ‘compose’ as an input context will then have top priority when matching user requests. Note that these intents will not be matched if the context is not set.

All contexts have a very important property – lifespan.

By default, the lifespan is set to five. This means the context will stay active for the next five matched intents. So if each of the following intents sets different input/output contexts, they will all accumulate over the next five steps of the dialog, like a chain of contexts.

While this feature can be very useful, at times you may want to get rid of a context as soon as the next intent is reached. In these situations, simply change the lifespan to one.

2. Managing the conversation flow

Now we’re ready to design the dialog we want our agent to handle. The best way to do this is to draw a diagram like the one pictured below; a tool such as Lucidchart works well for this.

From this diagram we can see that there are two main branches of the conversation. The first is for people who want to compose a bouquet themselves; the second provides the option to choose one of four pre-made bouquets.

In order to manage these two branches of the conversation we’ll have to set different contexts.

The intent whose fulfillment is ‘Okay! Would you like to compose a bouquet yourself?’ looks exactly as shown above: its name is ‘compose’ and it has the output context ‘compose’.

So we’ll have to create two different intents to match the possible answers. We’ll make one for ‘yes’:

And one for ‘no’:

Both intents have the same input context: ‘compose’. Once the ‘compose’ intent has been matched and the ‘compose’ context has been set, only intents with the input context ‘compose’ can logically be reached. So if we say ‘yes’ we land in the ‘yes - compose’ intent; if ‘no’, we land in the ‘no - compose’ intent.

And now comes the most crucial point: we have to set different output contexts. To continue with choosing the flowers, we set ‘yes - compose’ as the output context and then use it as the input context for the following intents. The same goes for the other branch: to choose one of the pre-made floral arrangements, we set the output context ‘no - compose’ and then create the following intent with ‘no - compose’ as its input context.

This same logic applies along the whole dialog structure. Whenever you need to go further to a specific intent, you should set the corresponding contexts. This is especially important when you have multiple ‘yes/no’ answers in the conversation.

3. Gathering parameter values

The other important function of contexts is their ability to collect parameter values. In our example, we’re collecting information during the dialog so that by the end we know how to fill the customer’s order – how many flowers of what kind and color we should add, or alternatively, which pre-made bouquet we should choose. As long as a context is still alive (i.e., within its lifespan), any parameter value stored in it is also alive.

To collect all the values at the end, we just have to set one context that appears in every intent as both input and output (except for the very first intent, where we set it as the output context only).

We just have to slightly modify what we see in the screenshots above:

To retrieve a parameter value from a context, use the syntax #context_name.parameter_name.
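
For example, if an intent sets the context ‘compose’ and stores a parameter named flower (the parameter name here is just an illustration), a later intent’s text response could read: ‘Great – we’ll add some #compose.flower to your bouquet.’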

Then in the dialog those values will look the same as always:

Feel free to play around with this example. You can download the agent here.




