Eye on AI - December 18th, 2020
Welcome to Aigora's "Eye on AI" series, where we round up exciting news at the intersection of consumer science and artificial intelligence!
This week, we’ll be building on our discussion of voice-tech AI by looking at two developments in AI mimicry.
The Magic of AI Mimicry
If you read our post from December 4th, you should have some idea of how voice tech is poised to become the next society-shaping technology (think the internet or the iPhone). This week’s news saw two new articles adding to that discussion.
The first, Scientific American’s “Artificial Intelligence Is Now Shockingly Good at Sounding Human,” describes how good voice AI has become at mimicking individual humans, then goes a step further by providing a video that demonstrates how it works.
“Current machine-learning techniques can model human speech, complete with awkward pauses and lip smacks,” writes Scientific American contributor Meghan McDonough. “Still, training on thousands of samples per second is prohibitively expensive for most real-world systems. Researchers, including those at VocaliD, are continually implementing newer and more efficient methods.”
The video is worth watching if you have the time: it breaks down how new voice AI uses a combination of voice recordings and AI-driven voice inflections to come remarkably close to matching an individual’s voice. It’s not perfect, but it’s leagues ahead of where voice AI was just a few years ago. And it will only get better with time.
The second article, from Entrepreneur, revealed that Microsoft filed a patent this week for a chatbot that can mimic specific individuals based on data collected from sources like social media.
“The software would use data such as images, voice notes, social media posts, emails, among others [through social media, etc.] to create or modify a special index on the subject of the specific person's personality,” writes the Entrepreneur editorial team. “This could be an indication that [Microsoft] wants to improve its customer service or create its own personal assistant. However, the Input Portal explains that ‘the implications of this type of imitation of a digitized and hyperspecific personality are as varied as they are disturbing.’”
Imagine a robo-text asking for money, sent by a bot that texts just like one of your relatives, or a political voice message that sounds like it came from one of your closest friends. Scary, right? AI chatbots are difficult enough to spot as it is without sounding like people we know. (We received your latest insurance payment. Click this link to confirm so we can steal all your data!) If masked with familiar-sounding personalities, fraud bots could become nearly impossible to detect.
Besides the fraud potential, there’s also the question of data ownership. As Input Portal points out, Microsoft’s new chatbot could create digital-assistant versions of us based on our digital profiles from specific years. If that’s the case, who then owns that version of us?
“Does the digital assistant modeled after yourself circa 2006 that you drunkenly purchased as a goof truly belong to you?” writes Input Portal’s Andrew Paul. “Or is that 16-year-old ‘you’ — you know, the you with an embarrassing obsession with The Decemberists, suit vests, and suspenders — property of Bill Gates in perpetuity?”
That's it for now. If you'd like to receive email updates from Aigora, including weekly video recaps of our blog activity, click on the button below to join our email list. Thanks for stopping by!