Photo credit: Pranav Mistry
Samsung NEON made its debut earlier this year at CES 2020 in Las Vegas, and it’s essentially a computationally created virtual being that looks and behaves like a real human. Pranav Mistry, the CEO of STAR (Samsung Technology and Advanced Research) Labs, announced this week that NEON will be available on smartphones before the end of the year. NEON View is the version that will be coming to mobile phones and possibly tablets. Read more for a video about NEON and additional information.
Photo credit: Mohamed Halawany via Yanko Design
Microsoft acquired AI startup Bonsai, which specialized in reinforcement learning for autonomous systems, and last year, the company previewed a new Azure-based platform, built partially on this technology, that helps developers train the models needed to power autonomous systems. This innovative platform combines Microsoft’s tools for machine teaching and machine learning with simulation tools, as well as the company’s IoT services and the open-source Robot Operating System. Read more to see what a robot running on this technology could look like.
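As a rough illustration of the reinforcement learning that Bonsai specialized in — and not Microsoft’s platform or API — here is a minimal tabular Q-learning sketch on a hypothetical five-state corridor world (all names, rewards, and parameters below are invented for illustration):

```python
import random

# Toy corridor world (all values invented for illustration):
# states 0..4, goal at state 4; actions 0 = left, 1 = right.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2

def step(state, action):
    """Take one step in the corridor; reaching the goal yields reward 1."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

random.seed(0)
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action]

for _ in range(500):                        # episodes with random start states
    s = random.randrange(N_STATES - 1)
    for _ in range(50):                     # cap episode length
        # Epsilon-greedy action selection.
        a = random.randrange(2) if random.random() < EPSILON \
            else max((0, 1), key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        # Standard Q-learning update.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2
        if done:
            break

# The learned greedy policy should head right toward the goal.
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(policy)
```

Platforms like Bonsai’s wrap this same trial-and-error loop in simulation environments and “machine teaching” guidance so engineers can steer what the agent practices.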
Artificial intelligence has been gaining traction with 3D artists and video editors, who use the technology to improve their work and speed up their workflows. Today, Adobe Photoshop users can take advantage of GPU-accelerated neural filters, a new feature set that lets content creators try AI-powered tools, explore innovative ideas, and make complex adjustments to images in just seconds. Read more for two videos and additional information.
NVIDIA’s new Maxine platform was designed specifically for developers to build and deploy AI-powered features in video conferencing services, using state-of-the-art models capable of running in the cloud. Applications built on this technology can reduce video bandwidth usage to one-tenth of what H.264 requires by using AI video compression, dramatically cutting costs. It can also improve video calls, thanks to AI-based super-resolution and artifact reduction that convert lower-resolution streams into higher-resolution video in real-time. Read more for two videos and additional information.
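To get a feel for why transmitting AI-derived keypoints instead of encoded pixels can cut bandwidth so sharply, here is a back-of-the-envelope comparison; the bitrate, keypoint count, and byte sizes below are illustrative assumptions, not NVIDIA’s published figures:

```python
# Back-of-the-envelope bandwidth comparison (hypothetical numbers, not
# NVIDIA's published figures): AI video compression transmits a sparse
# set of facial keypoints and reconstructs the face on the receiving
# end, instead of encoding every pixel.

FPS = 30
H264_BITRATE = 1_500_000        # ~1.5 Mbit/s, a typical 720p video-call stream

KEYPOINTS = 130                 # assumed facial-landmark count per frame
BYTES_PER_KEYPOINT = 4          # two 16-bit coordinates
keypoint_bitrate = KEYPOINTS * BYTES_PER_KEYPOINT * 8 * FPS

print(f"H.264 stream:    {H264_BITRATE / 1e6:.2f} Mbit/s")
print(f"keypoint stream: {keypoint_bitrate / 1e6:.3f} Mbit/s")
print(f"reduction:       {H264_BITRATE / keypoint_bitrate:.0f}x")
```

Even with generous assumptions, the keypoint stream lands around an order of magnitude below the pixel stream, which is the intuition behind the one-tenth claim.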
Louis Lumière directed and produced Snowball Fight in 1896, a short black-and-white silent documentary film centered on a pathway cleared through a snow-covered city street. As the title suggests, several men and women line both sides of the street and engage in a snowball fight. A cyclist rides toward the fight and is promptly hit by several snowballs as he approaches. Read more to see both the original and the AI colorized / upscaled version.
NVIDIA unveiled the Jetson Nano 2GB Developer Kit today, and it’s designed specifically for teaching and learning AI by building hands-on projects in areas such as robotics and intelligent IoT. Best of all, despite its $59 price tag, this tiny computer is still supported by the NVIDIA JetPack SDK, which comes with the NVIDIA container runtime and a full Linux software development environment. Read more for two hands-on videos and additional information.
Let’s face it, most of the humming you do is probably not worth recording, but after running it through Google’s machine learning Tone Transfer tool, you may reconsider. Using your Android smartphone, tablet or desktop, you can turn those simple hums into a violin, saxophone, flute or trumpet solo. Google research scientist Hanoi Hantrakul likens the tool to deconstructing sound into “Play-Doh” that can then be molded into something else. Read more for a video and additional information.
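Tone Transfer grew out of Google Magenta’s DDSP research, which decomposes a sound into controls like pitch and loudness and then re-renders those controls with a different timbre. As a toy illustration of that decompose-and-resynthesize idea — not the actual DDSP model; the pitch tracker, harmonic amplitudes, and the synthetic “hum” below are all simplifications — here is a minimal NumPy sketch:

```python
import numpy as np

SR = 16000  # sample rate in Hz

def estimate_pitch(frame, sr=SR, fmin=80.0, fmax=500.0):
    """Crude autocorrelation pitch tracker for one mono frame."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(corr[lo:hi]))   # lag of strongest periodicity
    return sr / lag

def resynthesize(f0, duration, sr=SR, harmonics=(1.0, 0.5, 0.3, 0.2)):
    """Re-render a pitch with a fixed harmonic 'timbre' (toy instrument)."""
    t = np.arange(int(sr * duration)) / sr
    out = sum(amp * np.sin(2 * np.pi * f0 * (k + 1) * t)
              for k, amp in enumerate(harmonics))
    return out / np.max(np.abs(out))

# A 220 Hz sine stands in for a recorded hum.
hum = np.sin(2 * np.pi * 220.0 * np.arange(SR) / SR)
f0 = estimate_pitch(hum[:2048])
solo = resynthesize(f0, duration=1.0)
print(f"estimated pitch: {f0:.1f} Hz")
```

The real model predicts time-varying pitch and loudness curves and learns the harmonic amplitudes from recordings of the target instrument, which is what makes the result sound like a violin rather than a buzzer.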
Upshot–Knothole Grable was a nuclear weapons test conducted by the United States as part of Operation Upshot–Knothole; the warhead detonated 19 seconds after deployment at 8:30am PDT on May 25, 1953, in Area 5 of the Nevada Test Site. Why was it codenamed Grable? Well, the name was chosen because Grable is phonetic for the letter G, as in “gun,” since the warhead was a gun-type fission weapon. Read more to see what it looks like after neural networks have upscaled and colorized the footage.
Photo credit: Justin Pinkney and Doron Adler via Gizmodo
Thanks to AI-powered neural networks, we can generate photorealistic human faces out of thin air, and now, transform those portraits into cartoon characters. Called “Toonify,” this tool runs on Pix2pixHD, an image-to-image conversion model. How does it work? Well, machine learning expert Doron Adler trained a StyleGAN model to recognize cartoon-like features, and the tool then automatically selected fake human faces from ThisPersonDoesNotExist to transform with it. Read more for a video and additional information.
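One trick the Toonify creators have described is “layer swapping”: blending two generators so the coarse, structure-defining layers come from the cartoon-finetuned model while the fine, texture-defining layers come from the original face model. Here is a minimal sketch of that idea using stand-in weight dictionaries rather than real StyleGAN checkpoints (the layer names, resolutions, and `swap_below` threshold are all illustrative):

```python
# Toy sketch of "layer swapping" (hypothetical weight dictionaries, not
# real StyleGAN checkpoints): coarse layers from the cartoon-finetuned
# generator define structure, fine layers from the original face
# generator keep photorealistic texture.

def blend_models(base, finetuned, swap_below=32):
    """Return a blended generator: finetuned weights for layers below
    swap_below resolution, base weights everywhere else."""
    blended = {}
    for name, weights in base.items():
        res = int(name.split("x")[0])        # e.g. "8x8_conv" -> 8
        blended[name] = finetuned[name] if res < swap_below else weights
    return blended

# Stand-in "checkpoints": one label per resolution block.
ffhq = {f"{r}x{r}_conv": f"ffhq_{r}" for r in (4, 8, 16, 32, 64, 128)}
toon = {f"{r}x{r}_conv": f"toon_{r}" for r in (4, 8, 16, 32, 64, 128)}

toonified = blend_models(ffhq, toon, swap_below=32)
print(toonified["8x8_conv"], toonified["128x128_conv"])   # toon_8 ffhq_128
```

The blended generator gives faces cartoon proportions (big eyes, smooth jawlines) while keeping realistic skin and hair detail.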
Researchers from Google, UC Merced, and Shanghai Jiao Tong University have developed DAIN, a depth-aware video frame interpolation algorithm powered by neural networks that can seamlessly generate slow-motion video from existing content without adding excessive noise or unwanted artifacts. It works by generating new frames and slotting them between the original frames, increasing the video’s frame rate; depending on the number of generated frames, the result is ultra-smooth footage. Read more for two videos and additional information.
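To see how slotting generated frames between the originals raises the frame rate, here is a naive NumPy sketch that uses simple linear blending as a stand-in for DAIN’s depth-aware flow estimation (the function name and tiny frames are illustrative):

```python
import numpy as np

def interpolate_frames(frames, n_between=1):
    """Insert n_between blended frames between each pair of originals.
    Linear blending stands in for DAIN's depth-aware flow; the way new
    frames are slotted between the originals is the same."""
    out = []
    for a, b in zip(frames[:-1], frames[1:]):
        out.append(a)
        for i in range(1, n_between + 1):
            t = i / (n_between + 1)          # position between a and b
            out.append(((1 - t) * a + t * b).astype(a.dtype))
    out.append(frames[-1])
    return out

# Two tiny grayscale "frames"; one in-between frame doubles the frame rate.
f0 = np.zeros((4, 4), dtype=np.float32)
f1 = np.full((4, 4), 100.0, dtype=np.float32)
video = interpolate_frames([f0, f1], n_between=1)
print(len(video), video[1][0, 0])   # 3 frames; middle frame is the 50.0 blend
```

Plain blending like this produces ghosting on real footage wherever objects move; DAIN instead warps pixels along estimated motion and uses depth to decide which object should occlude which, which is why its in-between frames look clean.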