The Squad X experimentation program was launched by the Defense Advanced Research Projects Agency (DARPA) to give dismounted infantry Marines some of the same resources that mounted forces have, using autonomous robots. In a test at Twentynine Palms earlier this year, a group of these autonomous ground and aerial systems provided intelligence and reconnaissance support for Marines equipped with sensor-laden vests as they moved between natural desert and mock city blocks, while the ground-based units provided armed security for the main force. Read more for a video of this test.
University of California, Irvine researchers have developed an artificial intelligence system, called DeepCubeA, that can solve a Rubik’s Cube in about 20 moves, taking an average of 1.2 seconds. For comparison, the current human world record clocks in at 3.47 seconds, but the Massachusetts Institute of Technology’s robot, running the min2phase algorithm, solved one in a mere 0.38 seconds, roughly three times faster than DeepCubeA. Read more for a video of MIT’s record-setting robot and additional information.
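DeepCubeA has been described as pairing a deep neural network that estimates how far a scrambled state is from solved with an A*-style search guided by that estimate. The sketch below illustrates only that search idea on a toy "rotate the list" puzzle, with a crude hand-coded heuristic standing in for the learned network; the puzzle, move set, and heuristic are all invented for illustration and are not DeepCubeA's.

```python
import heapq

# Toy illustration of heuristic-guided (weighted) A* search, the kind of
# search DeepCubeA reportedly pairs with a learned cost-to-go estimate.
# Puzzle: rotate a tuple left/right until it reads (0, 1, 2, 3).

GOAL = (0, 1, 2, 3)

def moves(state):
    """The two legal moves: rotate left or right by one position."""
    return [state[1:] + state[:1], state[-1:] + state[:-1]]

def heuristic(state):
    """Crude cost-to-go estimate (stand-in for a trained network):
    the number of items not yet in their goal position."""
    return sum(1 for i, v in enumerate(state) if v != i)

def a_star(start, weight=1.0):
    """Weighted A*: expand states by path cost + weight * heuristic."""
    frontier = [(weight * heuristic(start), 0, start, [])]
    seen = set()
    while frontier:
        _, g, state, path = heapq.heappop(frontier)
        if state == GOAL:
            return path
        if state in seen:
            continue
        seen.add(state)
        for nxt in moves(state):
            priority = g + 1 + weight * heuristic(nxt)
            heapq.heappush(frontier, (priority, g + 1, nxt, path + [nxt]))
    return None

print(a_star((2, 3, 0, 1)))  # reaches the goal in two rotations
```

Raising `weight` above 1 makes the search lean harder on the heuristic, trading optimality for speed, which is the usual reason to weight A* when the heuristic comes from a neural network.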
Ever wonder how celebrities, or you yourself, would look about 30 years in the future? FaceApp’s aging feature should do the trick, and it has recently become a social media hit, with many using it on well-known actors like Tom Holland and Chris Evans. How does it work? The app uses AI-powered neural networks to automatically generate highly realistic transformations of faces in photographs. That’s right, it can transform a face to make it smile, look younger, look older, or even change gender. Read more to see a few popular celebrities decades in the future.
Put simply, metamaterials are artificial materials engineered to have properties not found in nature, best known for their role in ‘invisibility cloak’ research. AI can precisely design artificial atoms smaller than the wavelength of light; by controlling the polarization and spin of light, these structures create optical properties that do not occur naturally. Current design methods require repeated trial and error until the right material is obtained, but AI is expected to provide a far more efficient solution to this problem. Read more for a video and additional information.
Bentley Motors has unveiled the EXP 100 GT to celebrate its 100th birthday today. The vehicle offers a vision of the future of luxury mobility and reimagines Grand Touring for the world of 2035. Starting with the exterior, the chassis is made from lightweight aluminum and carbon fiber, with driver and passenger doors that pivot outward and upward for effortless access. You’ll also find dynamic exterior lighting, starting with the smart, illuminated matrix grille and Flying B mascot. The rear boasts a 3D OLED display on which lighting effects can blend into the taillights. Read more for a detailed video tour, additional pictures and information.
If you want to see whether an image is fake, or want to make one yourself, it usually starts with some kind of photo-editing software. MIT’s GANpaint Studio is a tool that aims to make things a lot easier, thanks to artificial intelligence. For those who don’t know, a generative adversarial network (GAN) is a machine learning technique in which two neural networks compete with one another in a zero-sum game. How does it work? Simply tell the tool where you want an object, and it uses neural networks to insert one that matches the scene. Read more for a video demonstration and additional information.
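The zero-sum game behind a GAN can be shown in miniature: a generator turns noise into samples while a discriminator tries to tell them from real data, and each update pushes one net against the other. The toy below fits a 1-D Gaussian with hand-derived gradients; real systems like GANpaint Studio use deep convolutional networks, so treat this purely as an illustration of the training dynamic.

```python
import numpy as np

# Minimal 1-D GAN: generator g(z) = a*z + b tries to mimic samples from
# N(4, 1); discriminator d(x) = sigmoid(w*x + c) tries to tell real from
# fake. Both sides take gradient steps on the same zero-sum objective.

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.01

for step in range(2000):
    real = rng.normal(4.0, 1.0, 32)
    z = rng.normal(0.0, 1.0, 32)
    fake = a * z + b

    # Discriminator: ascend  log d(real) + log(1 - d(fake))
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator: ascend  log d(fake)  (non-saturating generator loss)
    d_fake = sigmoid(w * fake + c)
    grad_x = (1 - d_fake) * w          # d/dx of log d(x)
    a += lr * np.mean(grad_x * z)
    b += lr * np.mean(grad_x)

print(f"generated sample mean ~ {np.mean(a * rng.normal(size=1000) + b):.2f}")
```

Because the two updates pull in opposite directions, neither player can simply minimize a fixed loss; that adversarial pressure is what drives the generator's samples toward the real distribution.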
Astrophysicists have developed the “Deep Density Displacement Model” (D³M), an AI-powered model that produces complex 3D simulations of the universe so fast, accurate and true-to-life that even the researchers themselves are baffled. “We can run these simulations in a few milliseconds, while other ‘fast’ simulations take a couple of minutes. Not only that, but we’re much more accurate,” said study co-author Shirley Ho, a group leader at the Flatiron Institute’s Center for Computational Astrophysics in New York City. Read more for a video and additional information.
London’s Imperial College and Samsung’s AI researchers have developed a new algorithm that can turn a static photo and an audio file into an animated singing video portrait. Like other deepfake AI algorithms, this one uses machine learning to generate its output, and even though the clips may be rough around the edges, they show just what is possible in the future, for better or worse. Read more for two videos showing the algorithm in action, all created from static photos and audio files.
Photo credit: PetaPixel
Photo Wake-Up, a software application developed by University of Washington and Facebook computer scientists, is capable of turning a still photograph into a 3D character animation, making it perfect for augmented reality applications. How does it work? Put as simply as possible, it constructs a body model from the still image and then estimates a body map. Next, the researchers build a 3D mesh, apply textures to it that match the body map, and then integrate a skeletal rig for controlling its motion. Read more for a video and additional information.
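The final step above, a skeletal rig driving a mesh, is commonly implemented with linear blend skinning: each vertex's new position is a weighted blend of where each bone's transform would put it. The 2-D two-bone "arm" below is a generic sketch of that standard technique, not the Photo Wake-Up authors' actual code.

```python
import numpy as np

def skin(vertices, weights, transforms):
    """Linear blend skinning: blend each vertex over all bone
    transforms (3x3 homogeneous matrices in 2-D)."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])  # (N, 3)
    out = np.zeros_like(homo)
    for j, T in enumerate(transforms):
        out += weights[:, [j]] * (homo @ T.T)   # weight j applied per vertex
    return out[:, :2]

def rotation(theta, pivot):
    """Homogeneous 2-D rotation about a pivot point (a joint)."""
    c, s = np.cos(theta), np.sin(theta)
    px, py = pivot
    return np.array([[c, -s, px - c * px + s * py],
                     [s,  c, py - s * px - c * py],
                     [0.0, 0.0, 1.0]])

# Two-bone "arm" along the x-axis; the lower-arm bone bends 90 degrees
# about the elbow joint at x = 1. Skinning weights blend smoothly at the joint.
verts = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0], [1.5, 0.0], [2.0, 0.0]])
wts = np.array([[1, 0], [1, 0], [0.5, 0.5], [0, 1], [0, 1]], dtype=float)
bones = [np.eye(3), rotation(np.pi / 2, pivot=(1.0, 0.0))]
print(skin(verts, wts, bones).round(2))  # hand tip (2,0) swings up to (1,1)
```

Animating the character then amounts to updating the per-bone transforms each frame while the weights, which the skinning step bakes into the mesh, stay fixed.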
Artificial intelligence will soon take over the world, or so movies would like us to think. We’re getting closer every day, thanks to researchers at the Massachusetts Institute of Technology (MIT), who have developed a predictive artificial intelligence (AI) system that can learn to see by touching and to feel by seeing. Simply put, the system creates realistic tactile signals from visual inputs, and predicts which object, and what part of it, is being touched directly from those tactile inputs. “By looking at the scene, our model can imagine the feeling of touching a flat surface or a sharp edge,” said Yunzhu Li, a PhD student and lead author from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Read more for two videos and additional information.
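The core idea, learning a mapping from one sensory modality to another, can be illustrated at toy scale: here synthetic "visual" feature vectors are mapped to synthetic "tactile" readings by fitting a linear model with least squares. MIT's actual system learns this kind of cross-modal translation with deep generative networks trained on real vision-and-touch recordings; the data, dimensions, and linear model below are invented purely for illustration.

```python
import numpy as np

# Toy cross-modal regression: predict a "tactile" signal from "visual"
# features. The pretend ground truth is tactile = M_true @ visual + noise.

rng = np.random.default_rng(1)

M_true = rng.normal(size=(3, 8))           # 8 visual features -> 3 tactile channels
visual = rng.normal(size=(500, 8))         # 500 paired training examples
tactile = visual @ M_true.T + 0.01 * rng.normal(size=(500, 3))

# "Feel by seeing": fit the visual-to-tactile map with least squares.
M_hat, *_ = np.linalg.lstsq(visual, tactile, rcond=None)

probe = rng.normal(size=(1, 8))            # an unseen visual observation
predicted_touch = probe @ M_hat            # imagined tactile response
print(predicted_touch.round(2))
```

The same paired-data recipe runs in reverse for "see by feeling": swap the roles of the two modalities and fit a touch-to-vision map instead.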