MIT’s new robot, developed by researchers in the MCube lab led by Alberto Rodriguez, uses machine learning and sensory hardware to teach itself to play Jenga, a technology that could find use in robots on manufacturing assembly lines. Simply put, the robot is equipped with a soft-pronged gripper, a force-sensing wrist cuff, and an external camera, all of which it uses to see and feel the tower and its individual blocks. As the robot carefully pushes against a block, a computer takes in visual and tactile feedback from the camera and cuff, then compares these measurements to moves the robot has previously made. “It also considers the outcomes of those moves — specifically, whether a block, in a certain configuration and pushed with a certain amount of force, was successfully extracted or not. In real-time, the robot then ‘learns’ whether to keep pushing or move to a new block, in order to keep the tower from falling,” according to the paper. Read more to see it in action.
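The loop described above — push, read visual and tactile feedback, compare against past moves and their outcomes, then decide whether to keep pushing — can be sketched in a few lines. This is a hypothetical illustration of that feedback loop, not the MCube lab's actual implementation; the class name, the `(force, displacement)` measurement format, the similarity measure, and the decision threshold are all assumptions made for the sake of the example.

```python
# Hypothetical sketch of the push-and-decide feedback loop described in the
# article. Measurements are assumed to be (force, displacement) pairs; the
# similarity function and 0.5 threshold are illustrative choices.

def similarity(a, b):
    """Inverse-distance similarity between two (force, displacement) readings."""
    return 1.0 / (1.0 + abs(a[0] - b[0]) + abs(a[1] - b[1]))

class JengaPolicy:
    def __init__(self):
        # Memory of past moves: (measurement, extracted_successfully)
        self.history = []

    def record(self, measurement, success):
        self.history.append((measurement, success))

    def predict_success(self, measurement):
        """Weight past outcomes by similarity to the current reading."""
        if not self.history:
            return 0.5  # no experience yet: uninformed prior
        weighted = [(similarity(measurement, m), s) for m, s in self.history]
        total = sum(w for w, _ in weighted)
        return sum(w for w, s in weighted if s) / total

    def decide(self, measurement, threshold=0.5):
        """Keep pushing this block, or back off and try another one."""
        if self.predict_success(measurement) >= threshold:
            return "keep_pushing"
        return "try_another_block"
```

In use, each push feeds a new `(measurement, outcome)` pair back into `record`, so the decision rule sharpens as experience accumulates — mirroring the real-time learning the article describes.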
“Unlike in more purely cognitive tasks or games such as chess or Go, playing the game of Jenga also requires mastery of physical skills such as probing, pushing, pulling, placing, and aligning pieces. It requires interactive perception and manipulation, where you have to go and touch the tower to learn how and when to move blocks. This is very difficult to simulate, so the robot has to learn in the real world, by interacting with the real Jenga tower. The key challenge is to learn from a relatively small number of experiments by exploiting common sense about objects and physics,” said Rodriguez.
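Rodriguez's point about learning from a small number of experiments can be illustrated by pooling trials into a few behavior types, so that a handful of pushes generalizes to every block that behaves the same way. The two-category rule below (freely moving vs. stuck blocks), the force threshold, and the function names are assumptions for illustration only, not the published model.

```python
# Illustrative sketch of sample-efficient learning: group pushes into a small
# number of behavior types and pool outcomes per type. The "free"/"stuck"
# rule and the stuck_force threshold are invented for this example.

def behavior_type(force, block_motion, stuck_force=3.0):
    """Classify a push: a block that resists with high force and barely
    moves is 'stuck'; anything else is 'free'."""
    if force > stuck_force and block_motion < 0.05:
        return "stuck"
    return "free"

def cluster_success_rates(trials):
    """Pool extraction outcomes per behavior type, so a few trials
    inform every block showing the same behavior.

    trials: iterable of (force, block_motion, success) tuples.
    Returns {behavior_type: empirical success rate}.
    """
    stats = {}
    for force, motion, success in trials:
        key = behavior_type(force, motion)
        wins, total = stats.get(key, (0, 0))
        stats[key] = (wins + int(success), total + 1)
    return {key: wins / total for key, (wins, total) in stats.items()}
```

Grouping by behavior rather than memorizing each block individually is one way to encode the "common sense about objects and physics" the quote mentions: a new block only needs to be matched to a known behavior type, not learned from scratch.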