Monday, May 18, 2015

Api.ai Makes It Easier To Add A Siri-Like Conversational UI To Your IoT App Or Device


Want to add a Siri-like “conversational interface” to your mobile app or device? Then api.ai, the Russian startup and team behind Speaktoit Assistant, a Siri alternative for Android, iOS and Windows Phone, has had you covered for a while.
But now the company has refined its offering to make it a lot easier for developers in the Internet-of-Things (IoT) space, such as smart home and wearables, to use its technology to enhance their products.
Originally launched last September, api.ai effectively opened up the AI and natural language tech that powered Speaktoit so that developers could add conversational interfaces to their apps. And although 5,000 or so developers signed up to the platform, the feedback the api.ai team received informed them of two things.
Firstly, there was a lot of interest not just from mobile app developers but also from the IoT space, namely the smarthome and wearables such as smart watches, areas the company had always planned on targeting.
Secondly, for many developers the platform required too much work upfront; despite the huge amount of heavy lifting the company’s machine learning-based tech already does, developers were craving more out-of-the-box examples they could easily plug into.
As a result, the api.ai team has gone to work to make its conversational UI a lot more context-aware by adding what the startup calls “pre-defined domains”, including ones for various IoT categories. This means the platform knows ahead of time which domain any defined entities and intents apply to.
So, for example, if you wanted to add voice recognition to control a smart lighting system, api.ai would already know you are working within the smarthome domain and can tap into its existing AI library for that domain.
“Developers can now start from something right out of the box,” co-founder and CEO Ilya Gelfenbeyn tells me. “They can use domains like news, weather or smarthome, and so on.”
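To give a feel for what that looks like in code, here’s a minimal sketch of querying the platform from Python. The REST endpoint, field names and token handling below are assumptions based on api.ai’s public query API, so treat it as an illustration rather than a copy-paste integration.

import requests

# Assumed values: api.ai's REST query endpoint and a developer's client access
# token. Both are placeholders here.
API_AI_QUERY_URL = "https://api.api.ai/v1/query"
CLIENT_ACCESS_TOKEN = "YOUR_CLIENT_ACCESS_TOKEN"

def ask(text, session_id="demo-session", lang="en"):
    """Send a user utterance to api.ai and return its parsed JSON result."""
    response = requests.post(
        API_AI_QUERY_URL,
        headers={
            "Authorization": "Bearer " + CLIENT_ACCESS_TOKEN,
            "Content-Type": "application/json",
        },
        json={"query": text, "lang": lang, "sessionId": session_id},
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # With the smarthome domain enabled, a phrase like this should come back
    # already mapped to an intent/action plus parameters (e.g. room = "kitchen").
    print(ask("Turn on the lights in the kitchen"))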
In addition, developers can describe their own interactions and scenarios simply by providing a few examples based on the device’s capabilities, and the api.ai platform will use these to seed a more fully developed conversational UI.
“What our system will do is train itself based on these examples, by finding some common semantic units, to enable it to understand further examples that were not covered by the developer,” he explains.
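In practice, the handful of examples a developer supplies might look something like the structure below. The field names here are purely illustrative (api.ai’s console and API define their own schema for intents and entities); the point is how little seed data is needed.

# Purely illustrative: a custom "set_light_mode" interaction seeded with a few
# example phrases. The field names below are hypothetical, not api.ai's schema.
light_mode_intent = {
    "name": "set_light_mode",
    "examples": [
        "turn the lights to romantic mode",
        "switch the living room lights to reading mode",
        "set party mode in the kitchen",
    ],
    "action": "lights.set_mode",
    "parameters": ["mode", "room"],  # values the platform should extract
}

# From examples like these, api.ai trains itself to handle phrasings the
# developer never listed, e.g. "make the bedroom lighting a bit more romantic".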
Here’s an example provided by api.ai of how that might pan out in practice:

Person: It’s very dark here.
Smart Home: Let’s turn on the light then.
Person: Turn it to romantic mode.
Smart Home: Ooh, I see. Here it is.
Person: Still too bright.
Smart Home: Taking it to the minimum.
Person: Same in the kitchen.
Smart Home: Lights in the kitchen are on.
Person: Turn on the heating there as well.
Smart Home: Thermostat is on for the kitchen only.
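Driving a multi-turn exchange like that one programmatically could look roughly like this, reusing the hypothetical ask() helper from the earlier sketch. The assumption is that conversational context is keyed to the session, so follow-ups like “Same in the kitchen” can resolve against the previous lighting intent.

# Assumes the ask() helper from the earlier sketch. Reusing one sessionId is
# how (we assume) conversational context is carried between turns.
session = "living-room-demo"

for utterance in [
    "It's very dark here",
    "Turn it to romantic mode",
    "Still too bright",
    "Same in the kitchen",
    "Turn on the heating there as well",
]:
    result = ask(utterance, session_id=session)
    # Hypothetical result shape: an action name plus extracted parameters that
    # the smart-home controller would translate into device commands.
    print(utterance, "->", result.get("result", {}))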
And here’s what a smart watch app for Magic might look like if it were powered by api.ai:
Meanwhile, the company’s tech supports 13 languages, including Chinese, English, French, Korean, Portuguese and Spanish. On the developer side, it offers native SDKs for an array of platforms and languages, including iOS, Android, HTML, Cordova, Python, C#, Xamarin and Unity.
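Switching languages should then just be a matter of changing the language code passed with each query; again using the hypothetical ask() helper from the first sketch:

# "fr" is used only as an example language code; check api.ai's docs for the
# exact codes covering its 13 supported languages.
print(ask("Allume la lumière dans la cuisine", lang="fr"))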

from TechCrunch http://ift.tt/1AaAlKm
