Artificial Intelligence


AnimeGAN AI Machine Learning Photos
Researchers from several Chinese universities have developed a machine learning framework, called AnimeGAN: A Novel Lightweight GAN for Photo Animation, that turns ordinary photographs into anime-style scenes. This AI-powered network combines neural style transfer and generative adversarial networks (GANs) to achieve fast, high-quality results with a lightweight architecture, helping artists save time creating lines, textures, colors, and shadows. Read more for two videos and additional information.
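The blurb mentions combining style-transfer and adversarial objectives. As a rough illustration only (the function name, weights, and structure below are placeholders, not the paper's exact formulation), a generator objective of this kind might pair a least-squares adversarial term with a content term that keeps the stylized output close to the input photo:

```python
import numpy as np

def generator_loss(d_fake, gen_feat, photo_feat, w_adv=300.0, w_con=1.5):
    """Toy AnimeGAN-style generator objective (weights are illustrative).

    d_fake:     discriminator scores on generated images
    gen_feat:   feature-space representation of the generated image
    photo_feat: feature-space representation of the input photo
    """
    adv = np.mean((d_fake - 1.0) ** 2)            # LSGAN term: fool the discriminator
    con = np.mean(np.abs(gen_feat - photo_feat))  # content term: L1 distance in feature space
    return w_adv * adv + w_con * con
```

When the discriminator is fully fooled and the features match, the loss bottoms out at zero; in practice the two weights trade stylization strength against content preservation.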

NeRF Google Brain AI 3D Models Photos
Always wanted to relive some of your vacations, but only have photos? Well, Google researchers may have the tool for you. They have managed to reconstruct detailed 3D scenes of famous landmarks, like the Trevi Fountain in Rome, using ordinary photographs and machine learning. These aren’t basic models, but rather 3D renderings that let you move the camera around as if you were actually there.
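The full method trains a neural network per scene, but the core rendering step that NeRF-style approaches share is compositing predicted colors and densities along each camera ray. A minimal numpy sketch of that volume-rendering step (function and variable names are my own):

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """Composite colors along one camera ray, NeRF-style.

    sigmas: (N,) predicted densities at N samples along the ray
    colors: (N, 3) predicted RGB at each sample
    deltas: (N,) distances between adjacent samples
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)  # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))  # light surviving to each sample
    weights = trans * alphas                 # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0), weights

# a ray passing through empty space, then hitting a dense red surface
sigmas = np.array([0.0, 0.0, 50.0, 50.0])
colors = np.array([[0, 0, 1], [0, 0, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
deltas = np.full(4, 0.1)
rgb, weights = volume_render(sigmas, colors, deltas)  # rgb comes out essentially red
```

The empty samples contribute nothing, so the pixel takes its color almost entirely from the first dense region the ray hits; repeating this for every pixel from any camera pose is what lets you "move around" the scene.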

MOFLIN AI Pet Robot
Pet robots are nothing new, but MOFLIN’s take on them most certainly is. This AI-powered robot not only possesses emotional capabilities that evolve like those of living animals, but also boasts a unique algorithm that enables it to learn and grow, constantly using its interactions and sensor data to recognize patterns and evaluate its surroundings. Read more for a video and additional information.

DARPA AlphaDogFight Trial AI Air Force Pilot
DARPA’s AlphaDogfight Trials will pit artificial intelligence algorithms against a human F-16 fighter pilot in simulated aerial combat. This will be the third and final competition, set to take place Aug. 18-20. Unfortunately, it will be a virtual event due to the ongoing coronavirus pandemic. The program was created to demonstrate advanced AI systems’ capabilities in air warfare. Read more for two videos and additional information.

Nixon Deepfake Apollo 11
Researchers at the MIT Center for Advanced Virtuality used deepfake technology to show President Richard Nixon addressing the nation to explain that the Apollo 11 mission had ended in tragedy, at least in an alternate timeline. In reality, the Apollo 11 mission was a great success, with both Neil Armstrong and Buzz Aldrin returning safely to Earth. Read more for the video and additional information.

NASA Apollo 11 Launch Moon Landing AI
On July 21, 1969, Neil Armstrong made history by becoming the first human to step on the Moon, describing it as “one small step for man, one giant leap for mankind.” Buzz Aldrin joined him on the lunar surface approximately 19 minutes later, and together they planted the US flag and took the iconic phone call from then-President Richard Nixon. Read more to see some footage from that historic mission upscaled by artificial intelligence.

AI Drone Swarm Behaviour
Caltech engineers revealed a new data-driven method for controlling the movement of multiple drones through cluttered, unfamiliar spaces without colliding with one another. Challenges in new environments include making split-second trajectory decisions despite incomplete information about the surroundings and the other drones’ future paths. Read more for three videos and additional information.
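The Caltech method is learned from data, which the sketch below does not attempt to reproduce. Purely as a baseline illustration of the collision-avoidance problem being solved, here is a classical pairwise-repulsion correction (names and gains are made up for the example):

```python
import numpy as np

def avoid_collisions(positions, velocities, min_dist=1.0, gain=2.0):
    """Nudge each drone's velocity away from any neighbor closer than min_dist.

    A simple hand-tuned baseline, not the learned policy from the research.
    positions, velocities: (n, 2) arrays of drone states.
    """
    out = velocities.copy()
    for i in range(len(positions)):
        for j in range(len(positions)):
            if i == j:
                continue
            offset = positions[i] - positions[j]
            dist = np.linalg.norm(offset)
            if 0 < dist < min_dist:
                # push away, harder the closer the neighbor is
                out[i] += gain * (min_dist - dist) * offset / dist
    return out

drones = np.array([[0.0, 0.0], [0.5, 0.0]])  # two drones 0.5 m apart
vels = np.array([[1.0, 0.0], [-1.0, 0.0]])   # flying head-on at each other
corrected = avoid_collisions(drones, vels)   # approach speed is reduced
```

Hand-tuned rules like this degrade in dense, cluttered spaces, which is the gap the data-driven controller is meant to close.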

Tom Selleck Indiana Jones Deepfake
Tom Selleck spent years in the early ’80s receiving little interest from the entertainment industry, that is, until he was cast in the lead role as Thomas Magnum in Magnum, P.I. Unfortunately, the show’s producers would not release the actor for other projects, so Selleck had to pass on the role of Indiana Jones in Raiders of the Lost Ark. We all know what happened next: Harrison Ford took the role and ran with it. Read more to see how Selleck would have looked as Indiana Jones, thanks to deepfake technology.

NASA Apollo 16 AI Upscaled
Apollo 16, launched on April 16, 1972, was the tenth crewed mission in the United States Apollo space program and the fifth to land on the Moon. It was crewed by Commander John Young, Lunar Module Pilot Charles Duke, and Command Module Pilot Ken Mattingly. On the first drive of the lunar rover, Young discovered that the rear steering was not working, so he alerted Mission Control to the problem before setting up the television camera and planting the United States flag with Duke. Read more to see some of this footage, which has been upscaled by AI.

AI Neural Network Views of Tokyo, Japan 4K 60FPS
Denis Shiryaev has used AI-powered neural networks to upscale many historical video clips, and his latest project takes us back to the dawn of film with footage shot in Tokyo, Japan between 1913 and 1915. The work includes boosting the frame rate to 60 FPS, fixing playback speed issues, enhancing faces with a pipeline of algorithms designed for facial restoration, and upscaling the resolution to 4K. Read more for the video and additional information.
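Boosting an old clip's frame rate means synthesizing new in-between frames. Shiryaev's pipeline uses learned motion interpolation; the crude baseline below just averages neighboring frames, which shows the idea (and why naive blending produces ghosting that the neural approaches avoid):

```python
import numpy as np

def double_frame_rate(frames):
    """Insert one synthesized frame between each pair of consecutive frames.

    Naive pixel averaging for illustration only; real pipelines estimate
    motion between frames instead of blending them.
    """
    out = []
    for a, b in zip(frames[:-1], frames[1:]):
        out.append(a)
        out.append((a.astype(float) + b.astype(float)) / 2.0)  # blended midpoint frame
    out.append(frames[-1])
    return out

# two tiny grayscale "frames": doubling the rate yields a midpoint frame
frames = [np.zeros((2, 2)), np.full((2, 2), 2.0)]
smooth = double_frame_rate(frames)  # 3 frames; the middle one is the blend
```

Applied repeatedly (e.g. 15 FPS to 30 to 60), this is the structure of the frame-rate step, while super-resolution and face restoration handle the spatial side.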