We already know that voice assistants like Alexa are listening to us. After all, that’s how they pick up our voices and hold conversations. If they weren’t listening, wake-words would never be heard, and the entire point of an always-on virtual companion would be lost.
But beyond wake-words and commands, the companies behind nearly all of the biggest virtual assistants have been caught recording and transcribing user conversations in the name of “self-improvement.” Tap or click here for more details on how this privacy scandal emerged.
How is this possible? It all comes back to wake-words, which are how the device knows to start recording. Unfortunately, voice-recognition software isn’t perfect, and words that sound close enough to the wake-word can trigger these devices by accident. Here’s what this means for your privacy.
Watch your mouth
Voice assistants like Siri, Google Assistant and Alexa are constantly listening for their wake-words to activate. But should you utter something similar-sounding, they just might misinterpret it and start recording the rest of your conversation by accident.
According to a study from Ruhr University Bochum, accidental trigger-words are one of the most common reasons voice assistants capture conversations that were never meant to be heard. Tap or click here to see how Alexa accidentally shared a conversation with a user’s contact.
After conducting tests with devices from Amazon, Google, Microsoft and others, the researchers concluded there are more than 1,000 ways you can accidentally activate your device by voice, which has serious implications for the privacy of smart speaker users.
Here are some examples that were verified to trigger voice assistants:
- Alexa: “unacceptable,” “election,” “all excellent,” “a letter”
- Google Home: “OK, cool,” “Okay, who is reading,” “Ok, you know”
- Siri: “a city,” “hey Jerry,” “hey, seriously,” “that’s scary”
- Microsoft Cortana: “Montana,” “Fontana,” “frittata”
As you can see, these phrases sound pretty similar to the official wake-words for these devices. That points to a limitation of voice-recognition technology: wake-word detection keys on how the captured audio sounds rather than on recognizing the exact word being spoken.
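To picture why that matters, here’s a rough sketch in Python. It is not any vendor’s actual wake-word code: real detectors score acoustic features of the audio itself, but even simple text similarity is enough to show how a sound-alike phrase can clear the same bar as the real wake-word. The wake phrase, the 0.7 threshold and the test phrases below are all hypothetical stand-ins.

```python
from difflib import SequenceMatcher

# Hypothetical wake phrase and sensitivity; real devices score acoustic
# features of the audio, not spelled-out text like this.
WAKE_PHRASE = "ok google"
THRESHOLD = 0.7

def wake_score(phrase: str) -> float:
    """Score how closely a phrase resembles the wake phrase (0.0 to 1.0)."""
    return SequenceMatcher(None, WAKE_PHRASE, phrase.lower()).ratio()

for phrase in ["ok google", "ok cool", "turn on the lights"]:
    score = wake_score(phrase)
    verdict = "wake up" if score >= THRESHOLD else "ignore"
    print(f"{phrase!r}: {score:.2f} -> {verdict}")
```

With settings like these, “OK, cool” scores high enough to wake the sketch’s detector while unrelated speech doesn’t, which is exactly the kind of near-miss the researchers found tripping real devices.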
What are the privacy implications?
This new research spells out the biggest privacy issue with these platforms: if you accidentally wake your device up, whatever follows is likely to be captured and transcribed.
To make matters worse, the reasoning behind these transcriptions actually makes sense, so the practice isn’t likely to stop. According to Apple, Amazon and Google, audio transcription helps virtual assistants improve, and accidental wake-ups give developers a chance to sharpen wake-word recognition in future releases.
Unfortunately, that also means anything said after an accidental wake-up gets transcribed right along with it. It’s a double-edged sword.
That’s why, at the conclusion of the research, the researchers suggest unplugging or powering off your voice assistants when not in use, or simply not using them in the first place. That’s a damning indictment if there ever was one, but with over 1,000 potential chances to accidentally wake up your smart speaker, we’d be skeptical too.
But if you do plan on using these systems in your smart home setup, you can thankfully limit what they’re able to transcribe (even if it’s picked up by accident). Tap or click here to get your smart speakers to stop listening to you so much.