Conversational artificial intelligence is the umbrella term for AI systems that use natural language to interpret human speech, hold conversations with people, and carry out tasks.
But today's best-known AI assistants, such as Alexa, Siri, and Google Assistant, are not exactly conversational. They can tell jokes, answer factual questions, and even handle follow-up queries without requiring the wake word each time, but sustaining a real dialogue is still largely a human skill.
To showcase progress in deep learning aimed at open-ended conversation, Google introduced Meena, a 2.6-billion-parameter neural network. Meena can handle multi-turn conversations and, Google claims, does so better than other AI agents built for conversation. It can even blurt out a joke.
Google also today released Sensibleness and Specificity Average (SSA), a metric created by Google researchers to measure a conversational agent's ability to give sensible, specific responses throughout a conversation. Humans scored about 86 percent on SSA, while Meena scored 79 percent in preliminary tests. Mitsuku, the chatbot developed by Pandorabots that has won the Loebner Prize for the past four years, scored 56 percent, compared with 31 percent for Microsoft's Mandarin-language chatbot XiaoIce.
Details of the work appear in the paper "Towards a Human-like Open-Domain Chatbot," published Monday on the preprint repository arXiv.
Meena was trained on 40 billion words and uses a seq2seq model built on a variant of the popular Transformer architecture. Google first introduced the Transformer in 2017, and it has since become one of the most powerful approaches to language modeling.
SSA assesses performance using both static conversations, which draw on a fixed set of prompts, and interactive conversations that flow freely. Each rated conversation had to last at least 14 turns and no more than 28 turns. The score is then calculated from the percentage of turns judged sensible and specific, so SSA penalizes vague, generic answers.
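As a rough sketch of how a score like this could be computed (the function and the sample labels here are hypothetical illustrations, and the paper's exact aggregation rules may differ), SSA can be read as the average of two per-response rates, one for sensibleness and one for specificity:

```python
def ssa(labels):
    """Compute an SSA-style score from human judgments.

    `labels` is a list of (sensible, specific) boolean pairs, one per
    model response. Sensibleness asks whether the reply makes sense in
    context; specificity penalizes generic answers like "I don't know",
    which is how SSA punishes vague responses.
    """
    if not labels:
        raise ValueError("need at least one labeled response")
    sensible_rate = sum(s for s, _ in labels) / len(labels)
    specific_rate = sum(p for _, p in labels) / len(labels)
    # SSA is the simple average of the two rates.
    return (sensible_rate + specific_rate) / 2

# Example: four judged responses from one rated conversation.
judged = [(True, True), (True, False), (True, True), (False, False)]
print(round(ssa(judged), 3))  # 0.625
```

Under this reading, a bot that always replies sensibly but never specifically would cap out at 50 percent, which is roughly the territory generic chatbots occupy on the benchmark.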
Google said in a blog post that it may offer Meena to researchers in the coming months but has decided against an immediate demo.
Google's SSA metric differs from the ways other conversational AI assistants are evaluated.
The Alexa Prize, now in its third year, challenges teams of student developers to build an AI that can carry on a conversation for up to 20 minutes. Last year's finalists reached about 10 minutes. The final round will be announced in May. Just say "Alexa, let's chat" to have a conversation with last year's finalists.
Amazon has begun rolling out its multi-turn chat product Alexa Conversations, a feature that weaves recommendations for voice apps into multi-turn dialogue. When it launched last summer, Dave Limp, Amazon's vice president of devices, called it the "holy grail of voice science."
Microsoft acquired Semantic Machines in 2018 and last year began offering multi-turn conversation capabilities to users of Microsoft's Bot Framework.
As Ashwin Ram, former head of the Alexa Prize and now a director at Google Research, said in 2017, AI assistants capable of genuine dialogue could build closer relationships with people, providing emotional support or helping treat conditions such as autism.