Amazon has been trying to do one thing with Alexa ever since it launched the AI assistant: make it sound more interactive. Today the company took a major step in that direction, giving Alexa new emotions and new speaking styles. These features are rolling out to developers today, so we should see them show up in various skills soon.
On the Alexa Skills Kit blog, Amazon details the new emotions and speaking styles, which are powered by the company's neural text-to-speech technology. Alexa can now answer questions with an excited or disappointed tone, both of which are available in three intensities: high, medium, or low. For example, when a user playing a trivia game gives the correct answer, developers can have Alexa respond with a high-intensity excited tone, which can be heard below.
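According to the Alexa Skills Kit documentation, developers apply these emotions with an SSML tag in the skill's response. A minimal sketch, assuming the `amazon:emotion` tag syntax from Amazon's SSML reference (the response text is purely illustrative):

```xml
<speak>
    <!-- "name" can be "excited" or "disappointed";
         "intensity" can be "low", "medium", or "high" -->
    <amazon:emotion name="excited" intensity="high">
        That's correct! You just moved into first place.
    </amazon:emotion>
</speak>
```

The skill returns this SSML as its output speech, and the neural text-to-speech engine renders the wrapped text in the requested tone.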
On the other hand, developers may want to use something like a low-intensity disappointed tone when Alexa is delivering bad news, such as a weather forecast that calls for rain or a game score in which the home team lost. See a sample of that particular emotion below.
In addition to the new emotions developers can tap into, Alexa now has two different speaking styles that can also be employed. One speaking style gives Alexa a news-anchor delivery, which is clearly ideal when she is reading the headlines. The second speaking style is geared toward music, making her sound like a radio DJ. See both speaking styles below, along with a sample of how Alexa normally sounds for comparison.
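The speaking styles are applied the same way, via a domain tag in SSML. A sketch based on the `amazon:domain` tag in Amazon's SSML reference, with illustrative sample text:

```xml
<!-- Newscaster style: formal, news-anchor delivery -->
<speak>
    <amazon:domain name="news">
        Here are today's top headlines.
    </amazon:domain>
</speak>

<!-- Music style: radio-DJ delivery for music-related content -->
<speak>
    <amazon:domain name="music">
        Up next, a brand new single from your favorite artist.
    </amazon:domain>
</speak>
```

Each `speak` block is a separate skill response; the domain tag changes the overall delivery rather than the emotional tone, so it is distinct from the emotion tag above.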
These new speaking styles are definitely cool, though after years of listening to Alexa's robotic, mostly unchanging tone, it's a little jarring to hear her speak with emotion or adjust her delivery depending on the content. Developers who are interested in applying these speaking styles can learn how in the Alexa Skills Kit blog post linked above, while the rest of us wait for Alexa to start sounding a bit more animated.