July 27, 2024

TechNewsInsight


AI-powered headphones filter out individual sounds


Researchers in the United States have developed a new artificial intelligence system that lets headphone wearers selectively block out noise so that only the voice of a single conversation partner remains. We explain how the new AI headphones work.

One look is enough to hear only the person you are looking at: that is how the new headphone technology is supposed to work. To make this possible, a research team at the University of Washington developed an artificial intelligence system. “In crowded environments, the human mind can focus on the target speaker's speech if it knows in advance what the target speaker looks like,” the scientists explained. Now AI should be able to do the same.

This is how AI headphones filter out individual sounds

The new artificial intelligence system is integrated into standard headphones, where it suppresses interfering sounds and thus improves the listening experience. Filtering out only part of the surrounding noise has long been a challenge for developers, but the so-called target speech hearing system from Washington is a further step forward.

“With our device, you can now clearly hear a single speaker, even when you are in a noisy environment where many other people are talking,” explains the study's lead author Shyam Gollakota, a professor in the Paul G. Allen School of Computer Science & Engineering.

To use the system, the wearer taps a button on the headphones to activate the external microphones and keeps their head turned toward the person speaking for three to five seconds.

During this time, the sound waves of the speaker's voice reach the microphones. The AI-powered headset then sends the signal to an integrated on-board computer, where machine learning software learns the speaker's voice pattern.
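To make the sequence of steps more concrete, here is a minimal, purely illustrative Python sketch of such an enroll-then-filter pipeline. It is not the researchers' code: the frame size, the spectral "voice print", and the similarity-based gain are simplified stand-ins for the neural speaker-embedding and speech-separation models a real system would run on the embedded computer.

```python
import numpy as np

SAMPLE_RATE = 16_000   # assumed sampling rate
FRAME = 256            # samples per processing block (hypothetical)
ENROLL_SECONDS = 4     # roughly the 3-5 s the wearer faces the speaker

def enroll_speaker(clip: np.ndarray) -> np.ndarray:
    """Build a toy 'voice print' from the short enrollment clip captured
    while the wearer looks at the speaker. A real system would feed this
    clip to a learned speaker-embedding network instead of averaging
    magnitude spectra."""
    mono = clip.mean(axis=0)                        # collapse the two ear channels
    n_blocks = len(mono) // FRAME
    blocks = mono[: n_blocks * FRAME].reshape(n_blocks, FRAME)
    embedding = np.abs(np.fft.rfft(blocks, axis=1)).mean(axis=0)
    return embedding / (np.linalg.norm(embedding) + 1e-9)

def extract_target(frame: np.ndarray, voice_print: np.ndarray) -> np.ndarray:
    """Keep sound that resembles the enrolled voice and damp everything else.
    Here the 'separation' is just a gain proportional to spectral similarity;
    the actual headphones run a neural target-speech-extraction model."""
    spectrum = np.abs(np.fft.rfft(frame.mean(axis=0)))
    spectrum /= np.linalg.norm(spectrum) + 1e-9
    similarity = float(np.dot(spectrum, voice_print))
    return frame * similarity

# Usage: enroll once from a few seconds of audio, then filter the live stream
# frame by frame (random noise stands in for real microphone capture here).
enrollment_clip = np.random.randn(2, SAMPLE_RATE * ENROLL_SECONDS)
voice_print = enroll_speaker(enrollment_clip)
live_frame = np.random.randn(2, FRAME)
clean_frame = extract_target(live_frame, voice_print)
```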


The AI system collects training data to improve

The system remembers this voice and from then on plays back only that voice amid the ambient noise, even after the wearer moves around or loses eye contact with the speaker. The longer the AI system listens, the more training data it can collect, and as the conversation continues, its ability to focus on the speaker's voice improves.
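In the toy pipeline sketched above, this kind of continual improvement could look like the following; again a hypothetical simplification, not the team's actual training procedure. Frames that match the enrolled speaker well are blended back into the stored voice print.

```python
import numpy as np

def refine_voice_print(voice_print: np.ndarray, frame: np.ndarray,
                       threshold: float = 0.6, rate: float = 0.05) -> np.ndarray:
    """Hypothetical update step: if the current frame matches the enrolled
    speaker closely enough, fold its spectrum into the stored voice print so
    the estimate sharpens as the conversation goes on."""
    spectrum = np.abs(np.fft.rfft(frame.mean(axis=0)))
    spectrum /= np.linalg.norm(spectrum) + 1e-9
    if float(np.dot(spectrum, voice_print)) > threshold:
        voice_print = (1 - rate) * voice_print + rate * spectrum
        voice_print /= np.linalg.norm(voice_print) + 1e-9
    return voice_print

# Each incoming frame can update the voice print before being filtered:
# voice_print = refine_voice_print(voice_print, live_frame)
```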

So far, the team has tested the technology on 21 people. On average, participants rated the clarity of the enrolled speaker's voice as roughly twice as good as the unfiltered audio. The work builds on the team's earlier research on semantic hearing, in which users select specific categories of sound, such as birds or voices, and block out all other noises in the environment.

At present, the AI headphones can only enroll a speaker when no other loud voice is coming from the same direction. The research team is working to extend the system to earbuds and hearing aids, so that in the future people with hearing impairments could use the technology to hear better in noisy environments.
