MIT Artificial Intelligence Vision Touch
Artificial intelligence will soon take over the world, or so movies would like us to think. We’re getting closer every day, thanks to researchers at the Massachusetts Institute of Technology (MIT) who have developed a predictive artificial intelligence (AI) system that can learn to see by touching and to feel by seeing. Simply put, the system creates realistic tactile signals from visual inputs, and it predicts which object, and which part of it, is being touched directly from those tactile inputs. “By looking at the scene, our model can imagine the feeling of touching a flat surface or a sharp edge,” said Yunzhu Li, PhD student and lead author from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Read more for two videos and additional information.
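To make the vision-to-touch idea concrete, here is a minimal sketch of a cross-modal predictor that maps a camera frame to a GelSight-style tactile image. This is not the CSAIL team's actual model (theirs is a generative network trained on the VisGel data); the layer sizes, class name, and image resolution below are illustrative assumptions only.

```python
# Illustrative sketch only: a toy vision-to-touch predictor.
# The real CSAIL system is a generative model trained on VisGel; this
# encoder-decoder, its layer sizes, and names are assumptions for clarity.
import torch
import torch.nn as nn

class VisionToTouch(nn.Module):
    """Maps a 3x256x256 camera frame to a 3x256x256 tactile-style image."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 256 -> 128
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 64
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 128
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),    # 128 -> 256
        )

    def forward(self, frame):
        # Encode the visual scene, then decode it into a predicted tactile image.
        return self.decoder(self.encoder(frame))

if __name__ == "__main__":
    model = VisionToTouch()
    fake_frame = torch.randn(1, 3, 256, 256)   # stand-in for a webcam frame
    predicted_touch = model(fake_frame)        # predicted tactile image
    print(predicted_touch.shape)               # torch.Size([1, 3, 256, 256])
```

The reverse direction (touch-to-vision, i.e., guessing what is being touched from a tactile reading alone) would follow the same encoder-decoder pattern with the inputs and outputs swapped.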



“They used a KUKA robot arm with a special tactile sensor called GelSight, designed by another group at MIT. Using a simple web camera, the team recorded nearly 200 objects, such as tools, household products, fabrics, and more, being touched more than 12,000 times. Breaking those 12,000 video clips down into static frames, the team compiled ‘VisGel,’ a dataset of more than 3 million visual/tactile-paired images,” reports MIT News.
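As a rough illustration of how a paired visual/tactile dataset like VisGel might be assembled from those recordings, the sketch below matches each camera frame with the tactile frame captured at the same clip and frame index. The directory layout and function name are hypothetical; this is not the actual VisGel tooling.

```python
# Hypothetical sketch of pairing camera frames with tactile (GelSight) frames.
# The file layout and names are assumptions; the real VisGel pipeline may differ.
from pathlib import Path

def collect_pairs(root):
    """Yield (camera_frame, tactile_frame) paths sharing a clip id and frame index."""
    root = Path(root)
    for cam_frame in sorted((root / "camera").glob("*/frame_*.png")):
        clip_id = cam_frame.parent.name
        tactile_frame = root / "gelsight" / clip_id / cam_frame.name
        if tactile_frame.exists():
            yield cam_frame, tactile_frame

if __name__ == "__main__":
    pairs = list(collect_pairs("visgel_raw"))
    print(f"collected {len(pairs)} visual/tactile pairs")
```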
