Researchers in the USA have developed a new AI system that lets headphone wearers selectively block out noise: all that remains is the voice of their conversation partner. We explain how the new AI headphones work.

One glance is enough, and you hear only the person you are looking at: that is how a new headphone technology is supposed to work. A research team at the University of Washington has developed an AI system for this purpose. “In busy environments, the human brain can concentrate on the speech of a target speaker if it knows in advance what he or she sounds like,” the scientists explain. Now artificial intelligence is supposed to be able to do the same.

How the AI headphones filter out individual voices

The new AI system has been integrated into commercially available headphones. These block out acoustic disturbances and thus improve the audio experience, but selectively filtering out specific sounds has so far been a challenge for developers. The system from Washington, called Target Speech Hearing, goes one step further.

“With our devices, you can now hear a single speaker clearly and distinctly, even when you are in a noisy environment where many other people are talking,” explains the study's lead author Shyam Gollakota, a professor at the Paul G. Allen School of Computer Science & Engineering.

To use the system, the wearer taps a button on the headphones to activate the external microphones, then keeps their head turned towards the person speaking for three to five seconds.

During this time, the sound waves of the speaker's voice reach the microphones. The AI headphones send this signal to an integrated computer, where machine learning software learns the speaker's voice pattern.
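To make this enrollment step more concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the sample rate, the record_binaural stand-in for the headphone microphones, and the voice_print placeholder, which reduces the audio to simple spectral statistics where the real system uses a trained neural network to learn the speaker's voice.

```python
import numpy as np

SAMPLE_RATE = 16_000   # assumed sample rate; not specified in the article
ENROLL_SECONDS = 4     # the article describes a three-to-five-second look

def record_binaural(seconds: float) -> np.ndarray:
    """Hypothetical stand-in for the headphones' external microphones.

    Returns a (2, n_samples) array, one row per ear. Here we just
    synthesize noise so the sketch runs without any hardware.
    """
    n = int(seconds * SAMPLE_RATE)
    return np.random.randn(2, n).astype(np.float32)

def voice_print(audio: np.ndarray) -> np.ndarray:
    """Hypothetical voice-pattern extractor.

    The real system learns the speaker's voice with a neural network;
    as a runnable placeholder, we reduce each channel's spectrum to
    simple mean/spread statistics.
    """
    spectrum = np.abs(np.fft.rfft(audio, axis=1))
    return np.concatenate([spectrum.mean(axis=1), spectrum.std(axis=1)])

# Enrollment: while the wearer looks at the speaker, the external
# microphones record a short snippet, from which the voice print
# (the "learned voice pattern") is computed and stored.
snippet = record_binaural(ENROLL_SECONDS)
target_print = voice_print(snippet)
```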

AI system collects training data to improve

The system remembers the enrolled voice and plays it back to the listener. This way, only the speaker's voice remains amid the ambient noise, even after the listener moves around or loses eye contact. The longer the AI system listens, the more training data it can collect, and its ability to focus on the speaker's voice improves as the conversation continues.
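One simple way to picture this "more listening, sharper focus" behavior is a voice print that is continually refined. The sketch below continues the snippet above (reusing the hypothetical record_binaural and voice_print helpers); the extract_target pass-through and the moving-average update rule are both assumptions, standing in for the headphones' actual separation network.

```python
def extract_target(mixture: np.ndarray, target_print: np.ndarray) -> np.ndarray:
    """Placeholder for the neural target-speech extractor.

    The real system suppresses everything except the enrolled voice;
    here we simply pass the mixture through to keep the loop runnable.
    """
    return mixture

def refine_print(target_print: np.ndarray, new_audio: np.ndarray,
                 step: float = 0.1) -> np.ndarray:
    """Fold newly captured target speech into the stored voice print
    via an exponential moving average (an assumed update rule)."""
    return (1.0 - step) * target_print + step * voice_print(new_audio)

# Playback loop: filter each incoming chunk, play back only the target
# voice, and keep improving the voice print as the conversation goes on.
for _ in range(10):
    chunk = record_binaural(0.5)                      # live microphone audio
    clean = extract_target(chunk, target_print)       # target voice only
    target_print = refine_print(target_print, clean)  # more data, sharper focus
```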

So far, the team has tested the technology on 21 people. On average, they rated the clarity of the filtered speaker's voice almost twice as high as that of the unfiltered audio. The results build on the team's previous work on semantic hearing, in which users could select certain classes of sound, such as birds or voices, and block out all other environmental noise.
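Semantic hearing selects sound classes rather than an individual voice. As a rough, hypothetical illustration of that selection logic, the gate below mutes every audio chunk whose predicted class the user has not chosen; a real system separates overlapping sources with a trained network rather than muting whole chunks.

```python
import numpy as np

KEEP_CLASSES = {"voices", "birds"}   # classes the user chose to let through

def classify_chunk(chunk: np.ndarray) -> str:
    """Hypothetical sound classifier; a real system would use a trained
    model. This placeholder labels everything as traffic noise."""
    return "traffic"

def semantic_gate(chunk: np.ndarray) -> np.ndarray:
    """Pass chunks of the selected classes through; mute everything else."""
    if classify_chunk(chunk) in KEEP_CLASSES:
        return chunk
    return np.zeros_like(chunk)
```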

Currently, the AI headphones can only enroll a speaker if no other loud voice is coming from the same direction. The research team is working on expanding the system to earbuds and hearing aids, so that hearing-impaired people could one day use the technology to follow conversations in noisy environments.

Source: https://www.basicthinking.de/blog/2024/06/03/ki-kopfhoerer-usa/
