Adobe’s Firefly platform has arrived on iOS and Android, turning smartphones into portable creative studios. The standalone mobile app brings Adobe’s generative AI tools, previously tethered to desktops and web browsers, directly to your fingertips.
The app lets users generate images and videos from text prompts, edit visuals with AI precision, and sync projects across devices. “The new Firefly mobile app gives you the freedom to generate images and videos wherever you are, using the same technology underlying many of the most powerful features in Photoshop, Premiere Pro, and Lightroom,” Adobe explains in its announcement.

Users can type a prompt like “futuristic city at dusk” and watch Firefly churn out a high-resolution image or a short video clip. Tools like Generative Fill and Generative Expand—familiar to Photoshop users—let you erase unwanted objects or extend backgrounds with a tap. For creators who thrive on spontaneity, this immediacy is a draw. As Adobe’s Alexandru Costin, VP of Generative AI, puts it, “It gives creators the freedom to generate and edit content wherever inspiration strikes and the flexibility to choose the AI model that best fits their vision.”
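For developers curious what that same text-to-image flow looks like outside the app, Adobe also exposes Firefly through its Firefly Services APIs. The sketch below shows roughly how such a request might be made from TypeScript; the endpoint path, header names, request fields (`prompt`, `numVariations`, `size`), and response shape are assumptions for illustration and may differ from Adobe’s current developer documentation, so treat it as a sketch rather than a definitive integration.

```typescript
// Hypothetical sketch: generating an image from a text prompt via Adobe's
// Firefly Services REST API. Endpoint, headers, and field names below are
// assumptions for illustration, not the mobile app's internal interface.
const FIREFLY_ENDPOINT = "https://firefly-api.adobe.io/v3/images/generate"; // assumed path

async function generateImage(
  prompt: string,
  accessToken: string, // OAuth token from Adobe's developer flow (not shown)
  clientId: string     // API key / client ID from the Adobe developer console
): Promise<string | undefined> {
  const response = await fetch(FIREFLY_ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-api-key": clientId,                    // assumed header name
      Authorization: `Bearer ${accessToken}`,
    },
    body: JSON.stringify({
      prompt,                                   // e.g. "futuristic city at dusk"
      numVariations: 1,                         // assumed field: one image back
      size: { width: 2048, height: 2048 },      // assumed field: output resolution
    }),
  });

  if (!response.ok) {
    throw new Error(`Firefly request failed: ${response.status}`);
  }

  const data = await response.json();
  // Assumed response shape: an array of outputs carrying hosted image URLs.
  return data.outputs?.[0]?.image?.url as string | undefined;
}

// Example usage (credentials come from Adobe's OAuth flow):
// generateImage("futuristic city at dusk", ACCESS_TOKEN, CLIENT_ID)
//   .then((url) => console.log("Generated image:", url));
```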
Firefly’s mobile app integrates third-party AI models from heavyweights like OpenAI, Google, Luma AI, and Pika, turning it into a creative hub where users can experiment with different engines. Want Google’s Veo 3 for video with native audio? Or perhaps OpenAI’s instruction-based editing for quick image tweaks? The app lets you switch models seamlessly, offering flexibility that feels like a digital Swiss Army knife. “Creators continue to impress us with the breadth and artistry of the images, videos, graphics, and designs they’re dreaming up in the Firefly app using models from both Adobe and our partners,” says Ely Greenfield, Adobe’s Senior VP and CTO.
This multi-model approach is a standout. Most AI apps lock you into one engine, but Firefly’s openness lets creators cherry-pick the best tool for the job. For instance, Adobe’s Firefly Image 4 handles most tasks, but Google’s Imagen 4 excels at text-to-image precision, while Luma AI’s Ray2 leans into dynamic video generation. The catch? Premium models require Firefly credits, available through a dedicated subscription or Creative Cloud Pro plan. Free users get 12 credits (10 for images, 2 for videos), enough to dip your toes but not to swim.
Beyond generation, Firefly introduces Boards, a moodboarding tool now out of beta and available on mobile. Boards let teams brainstorm visually, blending AI-generated images, videos, and Adobe Stock assets into collaborative canvases. “Ideating with Firefly Boards is like an AI-powered creative jam session,” Costin says, “where creative professionals can riff across media types, explore ideas together, and turn sparks of inspiration into production-ready concepts.” The tool auto-organizes assets with a single tap, syncs with Creative Cloud, and supports real-time collaboration, making it a boon for agencies or freelancers juggling client projects.