Photo credit: OpenAI
San Francisco–based AI research lab OpenAI has successfully trained a robotic hand to manipulate a cube with extraordinary dexterity. That’s right, using a reinforcement-learning algorithm, the hand taught itself how to manipulate the cube with a technique modeled after how animals learn. The researchers trained it entirely in simulation under thousands of randomized conditions, so the hand could keep solving the cube regardless of unknown physical factors in the real world. Read more for two videos and additional information.
After the equivalent of thousands of years of simulated practice, the hand learned general principles about how to interact with the physical world and could handle minor variations in its environment. To test its resilience, the researchers disrupted the hand as it tried to solve the cube by taping its fingers together, fitting it with a rubber glove, or poking it repeatedly.
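The randomized-simulation idea described above is often called domain randomization: each training episode runs in a simulation with different physics parameters, so the learned policy cannot overfit to any single set of conditions. The sketch below is a minimal, hypothetical illustration of that loop; the parameter names, ranges, and structure are illustrative assumptions, not OpenAI's actual training code or values.

```python
import random

def make_randomized_env():
    """Sample fresh physics parameters for one episode (hypothetical
    names and ranges, for illustration only)."""
    return {
        "cube_mass": random.uniform(0.05, 0.2),       # kg, assumed range
        "finger_friction": random.uniform(0.5, 1.5),  # coefficient, assumed
        "actuator_delay": random.uniform(0.0, 0.03),  # seconds, assumed
    }

def train(num_episodes=1000):
    """Toy training loop: the agent only ever sees randomized variants
    of the simulator, so whatever behavior it learns must hold across
    the whole parameter range -- including values close to reality."""
    episodes = []
    for _ in range(num_episodes):
        env_params = make_randomized_env()
        episodes.append(env_params)
        # A real implementation would roll out the policy in a physics
        # simulator configured with env_params and apply an RL update here.
    return episodes

episodes = train()
```

Because the real robot's exact friction, mass, and latency fall somewhere inside the randomized ranges, a policy that succeeds across all of them transfers to hardware without ever having trained on it.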