By now, you’re likely familiar with voice-activated technologies. After all, 50% of all internet searches will be voice searches by 2020 and nearly 120 million smart speakers have been purchased for U.S. homes. With 59% of Americans planning to purchase a smart home product during the holiday season this year according to CTA, voice technologies will only become more pervasive in our lives.
We’re talking voice in vehicles, in remote controls, in headphones, and even in the walls of your home and office.
Voice isn’t just podcasts
Over the next few years, the utility provided by voice technology will move far beyond the simple tasks you might associate it with today, such as asking what time it is, kicking off your favorite podcast, or checking whether it’s going to rain later in the day.
On the service provider side of the equation, companies have employed natural language processing and AI to gauge sentiment in conversations for years.
For example, customer service agents and their employers can rely on the technology to improve customer support conversations. Companies that want to build something that isn’t available off the rack can also plug into API services from AWS, Microsoft, Google, and others to create their own use cases.
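To make the idea concrete, here is a minimal sketch of gauging sentiment across a support conversation. The word lists and labels are invented for illustration; production systems from AWS, Microsoft, or Google use trained language models rather than keyword lookups, but the workflow of scoring each utterance and tracking the trend is the same.

```python
# Toy lexicon-based sentiment scorer (illustrative only -- real services
# use trained ML models exposed through cloud NLP APIs).
POSITIVE = {"thanks", "great", "helpful", "resolved", "perfect"}
NEGATIVE = {"frustrated", "broken", "unacceptable", "cancel", "angry"}

def sentiment(utterance: str) -> str:
    """Label one utterance by counting positive vs. negative words."""
    words = {w.strip(".,!?").lower() for w in utterance.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

# An agent dashboard might flag a call whose sentiment turns negative:
transcript = [
    "Hi, my router is broken and I'm frustrated.",
    "Thanks, that was really helpful, issue resolved!",
]
labels = [sentiment(line) for line in transcript]  # ["negative", "positive"]
```

Swapping the keyword sets for a call to a hosted NLP endpoint turns this skeleton into something an actual support team could deploy.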
Diagnose in real time?
Take Danish startup Corti.ai, for example. They’re building “an intelligent partner that helps emergency medical professionals make life-saving decisions.”
By listening in on medical interviews — such as emergency calls — Corti.ai analyzes conversations in real time and gives medical professionals recommendations that help them diagnose illnesses, then offers prompts for effective action.
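The pattern Corti.ai describes — analyze the live transcript, surface a prompt — can be sketched in a few lines. Everything below is hypothetical: the symptom table, prompts, and function names are invented for illustration, and a real system relies on trained acoustic and language models rather than keyword matching.

```python
# Hypothetical sketch: scan transcript chunks as they arrive from
# speech-to-text and surface a prompt the first time a symptom appears.
SYMPTOM_PROMPTS = {
    "chest pain": "Ask: does the pain radiate to the arm or jaw?",
    "not breathing": "Prompt: begin CPR instructions immediately.",
    "unconscious": "Ask: is the patient responsive to voice or touch?",
}

def analyze_chunk(chunk: str, seen: set) -> list:
    """Return new prompts triggered by this piece of the transcript."""
    prompts = []
    text = chunk.lower()
    for symptom, prompt in SYMPTOM_PROMPTS.items():
        if symptom in text and symptom not in seen:
            seen.add(symptom)  # each symptom triggers its prompt only once
            prompts.append(prompt)
    return prompts

# Feed chunks to the analyzer as the call unfolds:
seen = set()
for chunk in ["Caller says he has chest pain", "He is unconscious now"]:
    for prompt in analyze_chunk(chunk, seen):
        print(prompt)
```

The real engineering challenge sits upstream of this loop: transcribing noisy phone audio accurately and fast enough that the prompts arrive while they can still change the call’s outcome.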
While Corti.ai’s tech may initially be piloted with medical professionals and emergency services, you can expect this type of technology to integrate seamlessly into your daily routine via always-on availability once 5G and fog/cloud processing become more mainstream.