MIT Machine Learning System Simulates How a Listener Hears Sound from Any Point in a Room
Researchers at MIT and the MIT-IBM Watson AI Lab have developed a machine learning system that simulates how a listener would hear sound from any point in a room. It does this by capturing how any sound in the room propagates through the space, enabling the model to predict what a listener would hear at different locations.



Because the system can accurately model the acoustics of a scene, it can quickly learn the underlying 3D geometry of a room from sound recordings. Researchers can then use that acoustic information to build accurate visual renderings of the room, much as humans use sound to estimate the properties of their physical environment. Practical applications include virtual and augmented reality, as well as helping AI agents develop a better understanding of the world around them.
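The core idea of position-dependent hearing can be illustrated with a classic signal-processing operation: convolving a "dry" source sound with a room impulse response that differs per listener position. This is only a toy sketch of the concept, not the researchers' actual model (their system learns these acoustic responses with a neural network); the impulse-response values below are made up for illustration.

```python
import numpy as np

def simulate_listener(dry_sound, impulse_response):
    """Approximate what a listener hears at a position by convolving
    the dry source signal with that position's room impulse response."""
    return np.convolve(dry_sound, impulse_response)

# Hypothetical impulse responses for two listener positions:
# "near" has a strong direct path; "far" has a weaker direct path
# and stronger, later reflections.
ir_near = np.array([1.0, 0.0, 0.3, 0.0, 0.1])
ir_far  = np.array([0.4, 0.0, 0.0, 0.5, 0.3])

dry = np.array([1.0, 0.5, 0.25])  # short dry source signal

heard_near = simulate_listener(dry, ir_near)
heard_far  = simulate_listener(dry, ir_far)
```

The same source signal yields different waveforms at the two positions, which is exactly the effect the MIT model captures, except that it infers the position-dependent acoustics for an entire room rather than relying on measured impulse responses.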


“If you imagine standing near a doorway, what most strongly affects what you hear is the presence of that doorway, not necessarily geometric features far away from you on the other side of the room. We found this information enables better generalization than a simple fully connected network,” said Andrew Luo, lead author and a grad student at Carnegie Mellon University (CMU).
