What if you could unblur a background or remove objects seamlessly without using any special masking tools? Meet Adobe Project Stardust. This AI photo editing tool, teased during the company’s MAX 2023 conference, can identify individual objects in just about any photo, along with their associated shadows.
Adobe unveiled Generative Fill in Photoshop today, bringing Firefly generative AI directly into design workflows. Adobe touts it as the world’s first co-pilot for creative and design workflows across Creative Cloud, Document Cloud and Adobe Express. You’ll be able to add, extend or remove content from images in seconds.
The free NVIDIA Canvas beta app has been updated, bringing the real-time GauGAN painting tool to anyone with an NVIDIA RTX GPU. Put simply, artists can use advanced AI to quickly turn simple brushstrokes into photorealistic landscape images, or simply to speed up concept exploration, freeing more time to visualize ideas. The update is powered by the GauGAN2 AI model and NVIDIA RTX Tensor Cores, delivering increased quality as well as 4x higher resolution. Read more for a video demonstration and additional information.
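Under the hood, models in the GauGAN family treat the painted canvas as a semantic segmentation map: each brushstroke assigns a material class (sky, mountain, water, and so on) to its pixels, and the map is one-hot encoded into a per-class channel tensor that conditions the image generator. As a minimal sketch of that preprocessing step (not NVIDIA’s code; the class ids here are illustrative):

```python
import numpy as np

def labels_to_onehot(label_map, num_classes):
    """One-hot encode a painted label map of class ids, shape (H, W),
    into the (H, W, C) conditioning tensor a GauGAN-style generator
    consumes: channel c is 1.0 wherever the user painted class c."""
    return (label_map[..., None] == np.arange(num_classes)).astype(np.float32)

# "Brushstrokes" on a tiny 3x3 canvas: 0 = sky, 1 = mountain, 2 = water
strokes = np.array([[0, 0, 0],
                    [1, 1, 0],
                    [2, 2, 2]])
cond = labels_to_onehot(strokes, num_classes=3)  # shape (3, 3, 3)
```

Exactly one channel is active per pixel, which is why a few rough strokes are enough to steer the generator toward a full landscape.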
During the Adobe MAX Sneaks 2021 event, the company teased Project Morpheus, which is essentially an AI-powered deepfake tool. This video editing technology is powered by Adobe Sensei and uses machine learning to automate frame-level appearance changes with extremely smooth, consistent results, making authoring and editing content easier than ever by eliminating time-consuming frame-by-frame edits.
Manually masking objects in Lightroom can take quite a while, to say the least, but there’s now a much easier way to make selective adjustments in Adobe Camera Raw within Lightroom. Available October 26th across all platforms, the AI-powered selection tools developed by the Adobe Research team, including Select Subject and Sky Replacement, make it easy to precisely select objects.
Artificial intelligence has been gaining traction among 3D artists and video editors, who use the technology to improve their work and speed up their workflows. Now, Adobe Photoshop users can take advantage of GPU-accelerated neural filters, a new feature set that lets content creators try AI-powered tools, explore innovative ideas and make amazing, complex adjustments to images in just seconds.
Let’s face it: capturing outdoor photos in perfect lighting conditions is nearly impossible, at least on a whim, but with Adobe Photoshop’s new AI-powered Sky Replacement function, that problem is a thing of the past. Just open the image, choose the tool from the Edit menu, select one of the available sky presets or add your own, and the Sensei AI system swaps out the sky automatically.
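The core idea behind any sky swap is to build a mask of the sky pixels and composite a replacement sky through it. Photoshop uses a trained Sensei model for this; as a toy illustration of the concept only, here is a naive version that treats strongly blue pixels as sky (the `blue_thresh` heuristic and the synthetic demo images are our own assumptions, nothing like Adobe’s actual approach):

```python
import numpy as np

def replace_sky(image, new_sky, blue_thresh=1.3):
    """Naive sky replacement: mask pixels where blue clearly dominates
    red and green, then composite the replacement sky through the mask.

    image, new_sky: float arrays of shape (H, W, 3), values in [0, 1].
    blue_thresh: how strongly blue must dominate for a pixel to count
                 as sky (illustrative heuristic, not Adobe's method).
    """
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    sky_mask = (b > blue_thresh * r) & (b > blue_thresh * g)
    out = image.copy()
    out[sky_mask] = new_sky[sky_mask]
    return out, sky_mask

# Synthetic demo: top half is blue "sky", bottom half is green "ground".
img = np.zeros((4, 4, 3))
img[:2] = [0.2, 0.3, 0.9]                     # sky-ish blue
img[2:] = [0.2, 0.6, 0.2]                     # ground
sunset = np.full((4, 4, 3), [0.9, 0.5, 0.3])  # replacement sky
result, mask = replace_sky(img, sunset)       # top half becomes sunset
```

A real implementation also feathers the mask edge and relights the foreground to match the new sky, which is exactly where the ML model earns its keep.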
Instagram, Snapchat, TikTok, and other social media platforms make it easy to share videos on the fly, but sometimes you record in the wrong format. Thankfully, there’s Auto Reframe, an Adobe Premiere Pro tool that automatically reframes content for different aspect ratios using artificial intelligence. Adobe’s Sensei machine-learning technology analyzes, crops and pans footage to produce square, vertical and widescreen versions. Read more for a video and additional information.
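The geometry side of reframing is simple arithmetic: fit the largest crop of the target aspect ratio inside the source frame, then slide it toward the point of interest. A minimal sketch of that calculation (the AI part of Auto Reframe, tracking the point of interest per frame, is replaced here by a plain `focus_x` parameter of our own invention):

```python
def reframe_crop(src_w, src_h, target_ratio, focus_x=0.5):
    """Compute a crop window (x, y, w, h) converting a src frame
    to the target aspect ratio (width / height, e.g. 9/16).

    focus_x: horizontal center of interest in [0, 1]; a tool like
    Auto Reframe would track this with AI, here it is a fixed input.
    """
    src_ratio = src_w / src_h
    if target_ratio < src_ratio:
        # Narrower target: keep full height, crop the width.
        crop_h = src_h
        crop_w = round(src_h * target_ratio)
    else:
        # Wider target: keep full width, crop the height.
        crop_w = src_w
        crop_h = round(src_w / target_ratio)
    # Center the crop on the focus point, clamped inside the frame.
    x = min(max(round(focus_x * src_w - crop_w / 2), 0), src_w - crop_w)
    y = (src_h - crop_h) // 2
    return x, y, crop_w, crop_h

# 1920x1080 widescreen source to a 9:16 vertical crop:
print(reframe_crop(1920, 1080, 9 / 16))  # → (656, 0, 608, 1080)
```

Panning between frames is then just interpolating `x` over time as the tracked subject moves.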
If you’ve always wondered what it was like to edit images in the earliest versions of Adobe Photoshop, then wonder no more, as the “Computer Clan” shows us. They fired up Adobe Photoshop 0.63 Beta from 1988 on an old Macintosh powered by a 32MHz 68030 processor with 8MB of RAM. On a related note, did you know that Photoshop was developed in 1987 by brothers Thomas and John Knoll? They later sold the distribution license to Adobe Systems Incorporated in 1988.