By hiding spyware in computing devices such as computers and smartphones, attackers can eavesdrop on users' conversations, and even Amazon's Echo smart speaker can be turned into a listening device. Such threats are becoming increasingly familiar. "Neural Voice Camouflage" is an artificial intelligence technology that protects users from this kind of eavesdropping: it generates customized audio noise in the background, rendering any recorded speech unintelligible to a machine listener.
Real-Time Neural Voice Camouflage | OpenReview
https://openreview.net/forum?id=qj1IZ-6TInc
Is technology spying on you? New AI could prevent eavesdropping | Science | AAAS
https://www.science.org/content/article/technology-spying-you-new-ai-could-prevent-eavesdropping
Neural Voice Camouflage is built on "adversarial attacks," which use machine learning to tune a sound so that another AI mishears it as something else. Science describes adversarial attacks as using one artificial intelligence to fool another.
Using one AI to fool another may sound simple, but it is not. Conventional adversarial audio attacks must process the entire recording at once, which makes real-time operation difficult.
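The adversarial-attack idea can be illustrated with a toy gradient-sign perturbation against a linear classifier. This is a generic FGSM-style sketch for intuition only, not the paper's method; the model, weights, and step size are all invented for the example:

```python
import numpy as np

def sigmoid(z):
    """Toy model's confidence: p(target | x) = sigmoid(w . x)."""
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, eps=0.1):
    """One fast-gradient-sign step that *lowers* the model's score for x.

    The perturbation is bounded by eps per sample, so the input barely
    changes, yet the model's confidence reliably drops -- the core trick
    behind adversarial audio attacks.
    """
    p = sigmoid(w @ x)
    grad = p * (1.0 - p) * w          # gradient of the score w.r.t. the input
    return x - eps * np.sign(grad)    # nudge the input against that gradient
```

Note that this sketch sees the whole input `x` at once; the real-time difficulty the article describes is that classical attacks need the complete recording before they can compute such a perturbation.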
Neural Voice Camouflage instead uses a neural network, a brain-inspired machine learning system, to effectively predict the future. The research team trained the network on hours of recorded speech, so that it continuously processes two-second clips of audio and anticipates which sounds are likely to follow.
For example, if someone says "enjoy the banquet," there is no way to predict exactly which words will come next. But given the characteristics of the speaker's voice and what has just been said, the system can generate a sound that will render whatever phrase follows unintelligible. And because the sound Neural Voice Camouflage produces resembles ordinary background noise to human ears, it can jam an eavesdropping AI without disturbing the conversation itself.
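The real-time loop described above, in which noise for the next chunk is decided from the audio already heard, can be sketched as follows. The "model" here is a placeholder that merely shapes random noise by the recent signal's energy; the actual system uses a trained predictive network, and the chunk length and scaling are invented for the example:

```python
import numpy as np

CHUNK = 16000 * 2  # two seconds of 16 kHz audio per processing step (assumed rate)

def predict_attack_noise(past_audio, rng):
    """Stand-in for the trained predictive model: given the audio heard so
    far, emit a noise chunk intended to mask whatever comes next. Here we
    just scale random noise by the recent signal's energy (illustrative only)."""
    level = np.sqrt(np.mean(past_audio[-CHUNK:] ** 2)) if len(past_audio) else 0.01
    return 0.5 * level * rng.standard_normal(CHUNK)

def stream_camouflage(audio, rng=None):
    """Walk through the audio in two-second chunks. The noise mixed into
    chunk i is computed from chunks 0..i-1 only, so it can be played out
    in real time instead of waiting for the whole recording."""
    rng = rng or np.random.default_rng(0)
    out = []
    for start in range(0, len(audio), CHUNK):
        past = audio[:start]                      # everything heard so far
        noise = predict_attack_noise(past, rng)   # decided *before* the chunk arrives
        chunk = audio[start:start + CHUNK]
        out.append(chunk + noise[:len(chunk)])
    return np.concatenate(out)
```

The key structural point, which the sketch preserves, is the causality constraint: at no step does the noise generator look at the audio it is trying to mask.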
The development team verified Neural Voice Camouflage's effectiveness against an automatic speech recognition (ASR) system. When audio was processed with Neural Voice Camouflage, the ASR's word recognition accuracy fell from 88.7% to 19.8%. By contrast, accuracy remained at 87.2% when plain white noise was added, and at 79.5% when the audio was processed with an adversarial attack lacking the camouflage's predictive capability, making clear that neither approach does much to prevent eavesdropping.
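The metric behind these figures is word-level recognition accuracy: the share of spoken words the ASR gets right. A crude position-matching version can be computed as below; real evaluations use alignment-based word error rate, so this function is a simplified stand-in for illustration:

```python
def word_accuracy(reference, hypothesis):
    """Fraction of reference words that the ASR transcript reproduces in
    the same position. A simplified stand-in for the accuracy metric:
    production evaluations align transcripts first (word error rate)."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    hits = sum(r == h for r, h in zip(ref, hyp))
    return hits / len(ref) if ref else 1.0
```

Comparing such scores with and without the camouflage noise, across the white-noise and non-predictive baselines, is conceptually how the 88.7% vs. 19.8% comparison above is made.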
Furthermore, even when the ASR was retrained to transcribe speech masked by the anti-eavesdropping technique, Neural Voice Camouflage still held its word recognition accuracy down to 47.5%. The researchers note that the words hardest to mask are short ones such as "the," but these are also the least revealing parts of a conversation.
The development team also tested the system in a real room, playing the camouflage noise through a speaker placed near the recording microphone to make eavesdropping difficult. This test succeeded as well: for example, the spoken sentence "I just got a new screen" was transcribed by the ASR as "with reasons though it's also a toscat and a neumanitor."
"This is just the first step in using AI to protect privacy," said Mia Chiquier, a computer scientist at Columbia University who led the study. "Artificial intelligence collects data about our voices, our faces, and our actions. We need a new generation of technology that respects our privacy."
Chiquier argues that the predictive mechanism used in Neural Voice Camouflage has great potential for self-driving cars and other applications that require real-time processing. Autonomous driving systems must anticipate where the car will go next and where pedestrians will be, something the human brain does accurately; Chiquier says their system mimics the way humans make such predictions.
"It's great to combine the classic machine learning problem of predicting the future with the separate problem of adversarial attacks," said Andrew Owens, a computer scientist at the University of Michigan, Ann Arbor, who studies speech processing. Bo Li, a computer scientist at the University of Illinois Urbana-Champaign who studies adversarial audio attacks, was likewise impressed that the new approach works even against the retrained ASR. "Acoustic camouflage is an important technology," said Jay Stanley, a senior policy analyst at the American Civil Liberties Union.