
Announcement

Nov 13, 2025

The Ultimate Guide to IMAGENAI AI Models for Visual Creation

Introduction: Your Complete AI Creative Studio

In 2025, the creative production landscape has fundamentally transformed. What once required photographers, studios, editors, 3D artists, and weeks of production time can now be accomplished in minutes with AI. But with dozens of AI models emerging each month, the challenge isn't access—it's knowing which tool to use for what.

This comprehensive guide breaks down every AI model available on our platform, explaining not just what they do, but when to use them, how they compare, and what makes each one uniquely powerful for your creative workflow.

Whether you're building e-commerce catalogs, launching advertising campaigns, creating social content, or producing cinematic videos, you'll find the right AI model here—and learn how to combine them into a seamless production pipeline.

Part 1: Text-to-Image Models — From Idea to Image

Text-to-image models are the foundation of AI visual creation. They transform written descriptions into fully realized images, enabling rapid concept exploration, product visualization, and creative experimentation without cameras or stock libraries.

FLUX.1 [schnell] – The Speed Champion

Generation Time: ~2-5 seconds

What it does:
FLUX schnell (German for "fast") is engineered for real-time generation. It prioritizes speed over perfection, making it among the fastest production-ready image models available today.

When to use it:

  • Rapid ideation sessions when you need to test 20 concepts in 5 minutes

  • Thumbnail generation for video storyboards or presentation decks

  • Live client reviews where you're iterating in real-time during calls

  • High-volume workflows like generating hundreds of social post variations

  • Prototyping before final renders to validate direction before investing credits in premium models

Why it matters:
In creative work, speed isn't just convenience—it's a strategic advantage. FLUX schnell lets you fail fast, explore more directions, and arrive at better final concepts because you can afford to experiment freely. Think of it as your creative sketchbook: quick, iterative, and judgment-free.

Best practices:

  • Use for exploration, not final delivery

  • Perfect for A/B testing visual directions

  • Great when prompt experimentation is more important than polish

FLUX.1 [dev] – The Premium Workhorse

Quality: Production-ready

What it does:
FLUX dev is the high-fidelity version of the FLUX architecture. It delivers sharper details, better prompt adherence, more consistent results, and significantly improved photorealism compared to schnell.

When to use it:

  • Final marketing assets that will be published to customers

  • Brand campaigns requiring consistent style and quality

  • Character design where facial features need to remain stable across generations

  • Product visualization where accuracy and realism matter

  • Social media content destined for feeds, stories, or ads

Why it matters:
FLUX dev strikes the perfect balance: professional quality without the premium price tag. It's the model you'll use most often once you've validated your concept. Where schnell gives you speed, dev gives you confidence that the output is client-ready.

Comparison tip:
Run schnell for your first 5-10 iterations, then switch to dev once you've found your direction. This saves credits while maintaining quality where it counts.

FLUX Pro Kontext – The Scene Composer

Specialty: Spatial intelligence

What it does:
Kontext (German for "context") is optimized for complex scene composition. It excels at understanding spatial relationships, perspective consistency, realistic lighting interactions, and multi-object scenes.

When to use it:

  • Product placement in realistic environments (phone on desk, shoes on pavement, bottle in restaurant)

  • Editorial imagery with multiple subjects and depth layers

  • Architectural visualization where perspective matters

  • Cinematic compositions with foreground/mid-ground/background elements

  • Complex storytelling scenes with multiple characters or objects interacting

Why it matters:
Most AI models struggle with spatial coherence—objects float, shadows point the wrong way, perspective breaks down. Kontext solves this. It understands that a coffee cup on a table should cast shadows, that objects further away should be smaller, that lighting should be consistent across a scene.

Pro tip:
Use Kontext when your prompt includes positional language like "behind," "next to," "in front of," or "surrounded by." This is where it shines brightest.

HiDream i1 Full – The Portrait Specialist

Specialty: Human realism at 17 billion parameters

What it does:
HiDream i1 is a massive 17B parameter model purpose-built for photorealistic human subjects. It excels at skin texture, facial detail, hair rendering, fabric materials, and natural poses.

When to use it:

  • Beauty and skincare campaigns requiring flawless skin rendering

  • Fashion lookbooks with realistic fabric drape and texture

  • Lifestyle product photography featuring human models

  • Portrait photography for fictional characters or brand ambassadors

  • Influencer-style content where realism is paramount

Why it matters:
Human faces are notoriously difficult for AI—uncanny valley is real. HiDream was trained specifically to overcome this, with special attention to diverse skin tones, natural expressions, realistic eye rendering, and believable hair. If your image centers on a person, this is your model.

Quality note:
HiDream produces some of the most convincing AI-generated human subjects available in 2025. If Imagen 4 weren't on the platform, this would be the realism champion.

Imagen 4 (Google) – The Industry Gold Standard

Status: Best in class

What it does:
Imagen 4 is Google's flagship image generation model and widely considered the best AI image generator in the world as of 2025. It delivers unparalleled photorealism, exceptional prompt understanding, perfect text rendering, and advertising-grade output quality.

When to use it:

  • Luxury brand campaigns where quality cannot be compromised

  • Hero images for websites, billboards, or print advertising

  • Professional photography replacement for high-end catalogs

  • Pitch presentations where wow-factor matters

  • Any project with a premium budget and zero tolerance for AI artifacts

Why it matters:
Imagen 4 doesn't just create images—it creates images indistinguishable from professional photography. Colors are rich and accurate, textures are believable under scrutiny, lighting is physically correct, and composition follows professional photography principles.

What sets it apart:

  • Best text rendering in any AI model (perfect for designs with typography)

  • Exceptional material rendering (glass, metal, fabric all look correct)

  • Superior color science (ready for print without color correction)

  • Minimal post-production required

Part 2: Image-to-Image Models — Transformation and Enhancement

These models don't create from scratch—they modify, enhance, fix, and transform existing images. They're the AI equivalent of a photo editing suite, essential for production workflows, e-commerce optimization, and creative refinement.

IC-Light V2 – The Relighting Revolution

Specialty: Photorealistic lighting transformation

What it does:
IC-Light V2 analyzes your image and completely regenerates its lighting while preserving the subject. It can add studio lighting to flat photos, match environmental lighting for composites, or transform day shots into golden hour.

When to use it:

  • E-commerce product photography that needs consistent lighting across 100+ SKUs

  • Product placement composites (adding your product to lifestyle scenes with matching light)

  • Brand consistency when working with photos from multiple sources

  • Shadow and reflection generation for realistic compositing

  • Transforming amateur photos into professional-looking packshots

Why it matters:
Lighting is what makes the difference between amateur and professional photography. IC-Light V2 gives you the ability to "reshoot" images with perfect lighting without ever touching a physical light source. It understands physics—shadows fall correctly, reflections appear on glossy surfaces, highlights bloom naturally.

Real-world workflow:
Say you have a product shot against a white background. With IC-Light V2, you can:

  1. Place it into a lifestyle scene

  2. Have the model match the ambient lighting of that scene

  3. Generate appropriate shadows and reflections

  4. Export a composite that looks like it was shot on location

This replaces entire photo studio sessions.
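At its core, the shadow and reflection work in steps 3 and 4 comes down to standard alpha compositing (the "over" operator), which the model applies per pixel with physically plausible opacities. A minimal plain-Python sketch of that blending math (purely illustrative; the platform handles all of this for you):

```python
def blend_over(fg, bg, alpha):
    """Composite one foreground channel value over a background value.

    fg, bg: 0-255 channel values; alpha: foreground opacity in [0, 1].
    """
    return round(fg * alpha + bg * (1 - alpha))

def composite_pixel(fg_rgb, bg_rgb, alpha):
    """Apply the 'over' operator to a single RGB pixel."""
    return tuple(blend_over(f, b, alpha) for f, b in zip(fg_rgb, bg_rgb))

# A 40%-opaque black shadow pixel over a white background darkens it:
shadow_on_white = composite_pixel((0, 0, 0), (255, 255, 255), 0.4)
print(shadow_on_white)  # (153, 153, 153)
```

The relighting model's job is effectively choosing the right alpha and color for every shadow and highlight pixel so that blends like this look physically consistent across the whole scene.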

Qwen Image Edit – The Precision Surgeon

Specialty: Natural language image editing

What it does:
Qwen allows you to edit images using plain English commands. Want to change a shirt from blue to red? Remove a distracting background element? Adjust a facial expression? Just describe it in words.

When to use it:

  • Quick object removal (power lines, unwanted people, distracting elements)

  • Color and material swaps without masking or selecting

  • Product variations (change product color while keeping everything else identical)

  • Scene cleanup before final delivery

  • Iterative refinement when an image is 90% perfect but needs tweaks

Why it matters:
Traditional image editing requires Photoshop skills, layer management, and precise selection tools. Qwen makes editing as simple as conversation. This democratizes image refinement—anyone on your team can make adjustments, not just trained designers.

Example prompts:

  • "Remove the person in the background"

  • "Change the car color from red to matte black"

  • "Make the sky more dramatic and golden"

  • "Remove all text from this image"

Pro tip:
Qwen works best with specific, concrete edits. Vague prompts like "make it better" struggle, but precise requests like "remove the coffee cup on the left side of the table" work beautifully.

Nano Banana Edit – The E-commerce Guardian

Specialty: Label-preserving product editing

What it does:
Nano Banana is specifically engineered for product editing where accuracy matters. Unlike general image editors, it understands that product labels, logos, and text should remain untouched even when transforming the surrounding image.

When to use it:

  • Product photography edits where branding must remain pixel-perfect

  • Packaging shots that need background or lighting changes

  • Multi-element compositions where some objects should stay identical

  • CPG (consumer packaged goods) images with visible labels

  • Complex corrections requiring surgical precision

Why it matters:
Standard AI editing often distorts text, warps labels, or subtly changes product details—unacceptable for e-commerce. Nano Banana solves this by treating recognizable elements (like logos) as sacred, editing around them while preserving their integrity.

E-commerce use case:
You have a product shot of a labeled bottle. You want to change the background from white to a lifestyle kitchen scene. Nano Banana will:

  • Keep the bottle label perfectly legible and undistorted

  • Transform the background completely

  • Adjust lighting to match the new environment

  • Preserve all product details exactly as they were

This is critical for brand compliance and retail requirements.

Seedream V4 (ByteDance) – The Creative Powerhouse

Status: Industry-leading image editor

What it does:
Seedream V4 is ByteDance's flagship editing model and arguably the most powerful AI image editor available in 2025. It can perform full scene transformations, advanced retouching, face and pose editing, style transfers, and complex composite operations.

When to use it:

  • High-end retouching for beauty, fashion, and advertising

  • Complete scene transformations (winter to summer, day to night)

  • Face and body editing with natural results

  • Artistic style transfers while preserving content

  • Magazine-quality final touches before publication

Why it matters:
Where other editors make simple changes, Seedream reimagines entire images. It's the difference between "remove this object" and "transform this product shot into a cinematic advertisement." The model understands artistic intent, not just pixel manipulation.

Capability examples:

  • Turn a simple product on white into a dramatic lifestyle scene

  • Age or de-age faces naturally

  • Change poses and expressions while keeping identity

  • Apply professional color grading

  • Composite multiple images seamlessly

Strategic positioning:
Seedream is your "final polish" model. Use simpler editors (Qwen, Nano Banana) for straightforward tasks, then bring in Seedream when you need that last 10% of perfection that separates good from great.

Bria Expand – The Format Transformer

Specialty: Intelligent outpainting

What it does:
Bria Expand extends images beyond their original boundaries, generating contextually appropriate content to fill new canvas space. Perfect for adapting content across different aspect ratios and formats.

When to use it:

  • Multi-platform campaigns (converting square posts to landscape headers)

  • Aspect ratio conversion (1:1 to 16:9, 4:5 to 9:16, etc.)

  • Banner creation from existing assets

  • Cropped image recovery (regenerating content that was cropped away)

  • Website hero sections that need horizontal expansion

Why it matters:
Modern marketing requires the same creative concept across dozens of formats: Instagram square, Facebook landscape, website hero, Pinterest portrait, Twitter header, YouTube thumbnail, email banner. Bria Expand means you create once and adapt intelligently, rather than shooting or designing each format separately.

Workflow example:

  1. Create hero image in 1:1 format with IC-Light V2

  2. Use Bria Expand to create 16:9 website version

  3. Use Bria Expand again for 9:16 Instagram Story

  4. All versions maintain visual consistency and quality

Intelligence note:
Bria doesn't just "stretch" or "fill with blur." It genuinely extends the scene—if you're expanding a kitchen scene, it adds more kitchen. If you're expanding an outdoor shot, it generates appropriate background with correct perspective.
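The canvas math behind an outpainting job like the workflow above is simple to reason about: the original image keeps its pixels, and the model fills everything outside its offsets. A sketch of computing the expanded canvas for a target aspect ratio (illustrative math only, not the platform's API):

```python
from fractions import Fraction

def expand_canvas(width, height, target_ratio):
    """Return (new_w, new_h, x_off, y_off) for outpainting to target_ratio.

    target_ratio is the desired width:height as a Fraction. The source
    image is centered at the offsets; everything outside it is new canvas
    the model must generate.
    """
    current = Fraction(width, height)
    if target_ratio >= current:           # wider target: extend sideways
        new_w, new_h = round(height * target_ratio), height
    else:                                 # taller target: extend up/down
        new_w, new_h = width, round(width / target_ratio)
    return new_w, new_h, (new_w - width) // 2, (new_h - height) // 2

# 1080x1080 square post -> 16:9 landscape header
print(expand_canvas(1080, 1080, Fraction(16, 9)))  # (1920, 1080, 420, 0)
# 1080x1080 square post -> 9:16 Story
print(expand_canvas(1080, 1080, Fraction(9, 16)))  # (1080, 1920, 0, 420)
```

Note how much new content the model invents: converting a square to 16:9 means generating roughly 44% of the final frame from scratch, which is why genuine scene extension (rather than stretching or blurring) matters.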

ESRGAN Upscaler – The Resolution Multiplier

Specialty: AI-powered upscaling

What it does:
ESRGAN uses AI to intelligently upscale images, adding detail rather than just interpolating pixels. It can enlarge images 2-4x while maintaining (and sometimes enhancing) sharpness.

When to use it:

  • Print preparation (converting 72dpi web images to 300dpi print quality)

  • Low-resolution asset recovery (old logos, archived photos)

  • Thumbnail to hero image conversion

  • Detail enhancement on existing high-res images

  • Final quality boost before delivery

Why it matters:
Traditional upscaling (bicubic, Lanczos) simply makes pixels bigger, resulting in blur or blockiness. ESRGAN actually invents plausible detail based on what it understands about image structure, edges, and patterns.

Quality expectations:
ESRGAN works best when:

  • Original image is reasonably sharp (not heavily compressed or blurry)

  • Upscaling 2-3x (not trying to go from thumbnail to billboard)

  • Used as final step after other edits are complete

Cost efficiency:
At just 1 credit, ESRGAN is one of the best value tools on the platform. Use it liberally as a final enhancement step in virtually every workflow.
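The print-prep arithmetic is worth having at hand: pixel count divided by target DPI gives the maximum print size, which in turn tells you what upscale factor you actually need. A small sketch (exact print requirements vary by vendor):

```python
import math

def print_size_inches(px_w, px_h, dpi=300):
    """Maximum print dimensions (in inches) at a given DPI."""
    return px_w / dpi, px_h / dpi

def upscale_factor_needed(px_w, px_h, target_in_w, target_in_h, dpi=300):
    """Smallest whole upscale factor to hit a target print size and DPI."""
    need_w = target_in_w * dpi / px_w
    need_h = target_in_h * dpi / px_h
    return math.ceil(max(need_w, need_h))

# A 1200x1600 web image prints at only 4 x 5.33 inches at 300 DPI...
print(print_size_inches(1200, 1600))
# ...so an 8x10 inch print needs a 2x upscale, squarely in ESRGAN's
# 2-3x sweet spot.
print(upscale_factor_needed(1200, 1600, 8, 10))  # 2
```

If the factor comes out above 3-4x, manage expectations: generate or re-render the source at higher resolution first rather than asking the upscaler to invent that much detail.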

Bria Background Remove – The Premium Cutout Tool

Specialty: Professional-grade subject isolation

What it does:
Bria Background Remove uses advanced segmentation AI to cut subjects from backgrounds with exceptional accuracy, including notoriously difficult areas like hair, fur, transparent objects, and fine details.

When to use it:

  • E-commerce product images requiring clean white backgrounds

  • Model photography with complex hair that needs perfect cutouts

  • Compositing workflows where subject isolation is the first step

  • Multi-format content where subjects need different backgrounds

  • Professional retouching requiring pixel-perfect edges

Why it matters:
Manual background removal is time-consuming and requires skilled Photoshop work. Even then, hair edges are notoriously difficult. Bria solves this with AI that understands material properties—it knows hair is semi-transparent, glass has reflections, and fabric has texture.

Bria vs. Basic Background Removal:
The platform offers both. Use Basic (1 credit) for speed and prototypes. Use Bria (2 credits) when:

  • Hair or fur is present

  • Subject has fine details or transparent elements

  • Output is customer-facing

  • Compositing requires perfect edges
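If you're scripting a batch pipeline, the decision rules above are easy to encode. A sketch (the helper and its flags are hypothetical names for the criteria listed, not a platform API):

```python
def pick_bg_removal(has_hair_or_fur=False, fine_or_transparent=False,
                    customer_facing=False, needs_perfect_edges=False):
    """Choose between the Basic (1 credit) and Bria (2 credit) cutout
    tools using the criteria above. Purely illustrative helper."""
    if (has_hair_or_fur or fine_or_transparent
            or customer_facing or needs_perfect_edges):
        return "bria"    # 2 credits: premium segmentation
    return "basic"       # 1 credit: fast, fine for simple subjects

print(pick_bg_removal())                        # basic
print(pick_bg_removal(has_hair_or_fur=True))    # bria
print(pick_bg_removal(customer_facing=True))    # bria
```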

E-commerce workflow:

  1. Shoot products on any background (even cluttered spaces)

  2. Remove background with Bria

  3. Place on pure white or lifestyle scene

  4. Export retail-ready imagery

This workflow eliminates the need for expensive seamless backdrops and professional photo studios.

Background Removal (Basic) – The Speed Tool

Best for: Quick iterations

What it does:
Fast, effective background removal for straightforward subjects. Less sophisticated than Bria but significantly faster and cheaper.

When to use it:

  • Rapid prototyping where perfection isn't critical

  • Simple subjects without hair, fur, or transparency

  • High-volume processing where speed matters more than edge quality

  • Internal mockups not destined for external viewing

Strategic note:
Don't overthink this decision. For 99% of e-commerce products (shoes, electronics, packaged goods), Basic works perfectly fine. Reserve Bria for human models and complex subjects.

Product Photoshoot – The Virtual Studio

Specialty: AI product photography generation

What it does:
Product Photoshoot takes a clean cutout of your product and places it into photorealistic lifestyle scenes, generating appropriate lighting, shadows, reflections, and environmental context.

When to use it:

  • E-commerce lifestyle imagery without physical photoshoots

  • A/B testing product contexts (beach vs. office vs. home)

  • Seasonal campaigns (same product, different seasonal backgrounds)

  • Scale production (100s of SKUs × dozens of scenes)

  • Market testing before committing to expensive photography

Why it matters:
Traditional product photography requires:

  • Studio rental

  • Professional photographer

  • Props and set design

  • Models (sometimes)

  • Post-production editing

  • Weeks of lead time

Product Photoshoot delivers comparable results in minutes at a fraction of the cost.

Real-world application:
You're launching a new water bottle. With Product Photoshoot, you can generate:

  • Gym scene (on yoga mat with dumbbells)

  • Office desk scene (next to laptop)

  • Outdoor hiking scene (on rock with nature background)

  • Kitchen scene (on marble counter)

  • Beach scene (on towel with sand)

All with consistent product rendering and photorealistic environments—generated in under an hour.

Quality note:
Results are not always 100% photo-indistinguishable, but they're exceptional for web use, social media, and even most print applications. For hero advertising, combine with IC-Light V2 for an extra quality boost.

Part 3: AI Video Models — Bringing Visuals to Life

Video AI has reached a tipping point in 2025. What was once experimental is now production-ready, enabling brands to create motion content without cameras, actors, or video editors.

Kling Video (5s & 10s) – The Cinematic Motion Engine

Specialty: Image-to-video with camera intelligence

What it does:
Kling transforms static images into cinematic video clips with realistic motion, dramatic camera movements, and physics-based animation. It understands how objects move, how fabric flows, how liquids behave.

When to use it:

  • Product reveal videos (360° spins, dramatic zooms, hero reveals)

  • Social media ads (TikTok, Instagram Reels, YouTube Shorts)

  • Motion mockups for campaign pitches

  • Storyboard animation before committing to full video production

  • CGI-style camera moves without 3D software

Why it matters:
Video dramatically outperforms static imagery on social platforms—algorithms favor it, users engage longer, conversion rates increase. Kling lets you add motion to any visual asset, turning static product shots into scroll-stopping video content.

Motion quality:
Kling excels at:

  • Camera movements (dolly, pan, zoom, crane shots)

  • Object motion (products spinning, liquid pouring)

  • Subtle animation (fabric movement, hair flow)

  • Atmospheric effects (light changes, particle effects)

Strategic use:
Start with Kling 5s for testing and social content. Upgrade to 10s for:

  • More elaborate camera movements

  • Complete product reveals

  • Storytelling sequences requiring extended duration

Platform optimization:

  • TikTok: 5s clips (platform favors quick cuts)

  • Instagram Reels: 5-10s clips

  • YouTube Shorts: 10s clips

  • Website hero videos: 10s loops

Cost management:
Video generation is credit-intensive. Validate your still image first (using FLUX or Imagen), ensure it's perfect, then animate it. Don't burn video credits iterating on composition—fix that upstream.

Sora 2 – The Text-to-Video Pioneer

Status: Industry gold standard from OpenAI

What it does:
Sora generates complete video sequences from text descriptions alone—no input image required. It creates scenes, characters, camera movements, and narratives entirely from prompts.

When to use it:

  • Concept videos for pitches and presentations

  • Storyboard visualization before live-action production

  • Animated explainer content

  • Speculative creative ("what if" scenarios for campaigns)

  • B-roll generation for video projects

Why it matters:
Sora represents a fundamental shift: video creation without video capture. This opens entirely new creative possibilities—historical scenes, impossible physics, fantasy worlds, speculative futures—all generated from imagination.

Quality characteristics:

  • Exceptional physics understanding

  • Coherent multi-second sequences

  • Realistic textures and lighting

  • Creative camera work

  • Strong narrative coherence

Sora vs. Kling:

  • Kling: Animate existing images (image-to-video)

  • Sora: Generate video from scratch (text-to-video)

Use Kling when: You have a specific visual you want to animate
Use Sora when: You're starting from pure concept with no source image

Creative applications:

  • Generate impossible product demos (phone surviving lava)

  • Create historical or futuristic contexts

  • Visualize abstract concepts (trust, innovation, growth)

  • Produce fantasy or sci-fi content

Limitation awareness:
While Sora is extraordinary, it's not yet perfect for:

  • Precise product rendering (use image-to-video for products)

  • Extended narrative sequences (current length limits)

  • Specific brand assets or logos (better to composite those in)

Best practice:
Use Sora for creative exploration and storytelling, then polish with traditional tools. It's exceptional for getting 80% of the way to a vision, with final 20% coming from compositing, color grading, or combining multiple clips.

Part 4: Image-to-3D Models — The Third Dimension

3D is no longer just for game developers and CGI studios. In 2025, e-commerce, AR experiences, and interactive web content all benefit from 3D assets—and AI now makes 3D accessible to anyone with a 2D image.

Meshy v6 – The 3D Transformation Engine

Specialty: Single-image to full 3D model

What it does:
Meshy analyzes a 2D product photo and reconstructs it as a complete 3D model with geometry, textures, and materials—ready for use in AR apps, 3D viewers, game engines, or rendering software.

When to use it:

  • E-commerce 3D product viewers (interactive 360° on product pages)

  • AR experiences (visualize furniture in your room, try-on experiences)

  • Digital twins for products, packaging, or objects

  • Game asset creation from real-world references

  • CGI production starting from photography

  • Metaverse and virtual worlds requiring 3D product representation

Why it matters:
Traditional 3D modeling requires specialized software (Blender, Maya), technical expertise, and hours of manual work per asset. Meshy reduces this to minutes with a single photo input.

E-commerce transformation:
Modern consumers expect interactive product experiences. Meshy enables:

  • 360° product spin viewers

  • AR "see it in your space" features

  • Interactive zoom and exploration

  • Multi-angle viewing without shooting multiple photos

Quality expectations:
Meshy v6 produces:

  • Good: Clean geometry suitable for web viewing

  • Great: Textured models with realistic materials

  • Excellent: Assets ready for professional rendering

Not perfect for: Extreme close-ups under scrutiny (still improving)
Perfect for: Web 3D viewers, AR, and most commercial applications

Technical output:

  • Standard formats (GLTF, FBX, OBJ)

  • PBR materials (compatible with modern renderers)

  • Optimized topology (web-friendly polygon counts)


Platform requirements:
To use Meshy outputs, you'll need:

  • Web 3D viewer framework (Three.js, Babylon.js)

  • AR framework (ARKit, ARCore, WebXR)

  • Or 3D software (Blender, Cinema 4D, Unreal Engine)

The platform provides the 3D asset; implementation is separate.
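One practical note on those formats: OBJ is plain text, so you can sanity-check an exported mesh before handing it to a web viewer without any 3D software. A stdlib-only sketch that counts vertices and faces (GLTF and FBX are JSON/binary and need real parsers):

```python
def obj_stats(obj_text):
    """Count vertices ('v' lines) and faces ('f' lines) in OBJ source."""
    verts = faces = 0
    for line in obj_text.splitlines():
        tag = line.split(maxsplit=1)[0] if line.strip() else ""
        if tag == "v":
            verts += 1
        elif tag == "f":
            faces += 1
    return verts, faces

# A single triangle as a minimal OBJ file:
sample = """\
v 0 0 0
v 1 0 0
v 0 1 0
f 1 2 3
"""
print(obj_stats(sample))  # (3, 1)
```

A quick face count like this is an easy way to confirm the "web-friendly polygon counts" claim for your own assets before embedding them in a Three.js or Babylon.js viewer.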

Part 5: Model Selection Framework — Choosing the Right Tool

Decision Matrix: Speed vs. Quality

Need it NOW:

  • FLUX.1 [schnell] (images)

  • Background Removal Basic (cutouts)

  • Kling 5s (quick motion)

Need it PERFECT:

  • Imagen 4 (premium images)

  • Seedream V4 (advanced editing)

  • Kling 10s (cinematic motion)

Need VOLUME:

  • FLUX.1 [dev] (balanced quality/cost)

  • Product Photoshoot (scale production)

  • Bria Expand (format multiplication)

By Creative Discipline:

If you're a PHOTOGRAPHER:

  • Start with IC-Light V2 (relighting mastery)

  • Add Seedream V4 (retouching powerhouse)

  • Explore Product Photoshoot (extend beyond physical limits)

If you're a DESIGNER:

  • Start with FLUX.1 [dev] (production workhorse)

  • Add Bria Expand (format flexibility)

  • Explore Imagen 4 (premium finals)

If you're a VIDEO CREATOR:

  • Start with Kling Video (motion creation)

  • Add Sora 2 (concept generation)

  • Combine with FLUX for input images

If you're an E-COMMERCE MANAGER:

  • Start with Background Removal + Product Photoshoot

  • Add IC-Light V2 (consistency)

  • Scale with Bria Expand (formats)

  • Consider Meshy v6 (3D experiences)

If you're a BRAND MARKETER:

  • Start with Imagen 4 (campaign quality)

  • Add Kling Video (social motion)

  • Explore full pipeline (multiformat campaigns)

Part 6: Quality Control and Best Practices

Getting the Best Results:

For Text-to-Image Models:

  • Be specific (not "beautiful sunset" but "golden hour sunset over calm ocean, warm orange glow, wispy clouds")

  • Reference styles ("cinematic," "editorial," "product photography")

  • Specify technical parameters ("shallow depth of field," "35mm lens," "soft lighting")

  • Iterate prompts systematically (change one variable at a time)
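One-variable-at-a-time iteration is easiest when you treat the prompt as named slots and swap exactly one slot per generation. A small sketch (the slot names are examples, not platform parameters):

```python
def one_at_a_time(baseline, variations):
    """Yield prompts that change exactly one slot from the baseline."""
    for slot, options in variations.items():
        for option in options:
            trial = dict(baseline, **{slot: option})
            yield ", ".join(trial.values())

baseline = {
    "subject": "golden hour sunset over calm ocean",
    "light": "warm orange glow",
    "sky": "wispy clouds",
}
variations = {
    "light": ["cool blue dusk"],
    "sky": ["dramatic storm clouds"],
}

for prompt in one_at_a_time(baseline, variations):
    print(prompt)
```

Because each trial differs from the baseline in a single slot, you can attribute any change in the output to that one variable, which is the whole point of systematic prompt iteration.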

For Image Editing Models:

  • Start with high-quality inputs (garbage in, garbage out)

  • Make one edit at a time (chain edits rather than asking for multiple changes)

  • Be precise with locations ("top left corner" not "over there")

  • Review intermediate steps before proceeding

For Video Generation:

  • Start with strong composition (interesting angles, clear focal points)

  • Consider motion beforehand (what should move? how?)

  • Preview still frames before animating

  • Keep duration appropriate to platform (5s for TikTok, 10s for YouTube)

For 3D Generation:

  • Use clear, well-lit source images

  • Avoid occlusion (show full product if possible)

  • Clean backgrounds help (remove distractions)

  • Consider final use case (web viewer needs different detail than AR)

Your Creative Superpower

These AI models aren't replacements for human creativity—they're amplifiers. They don't make creative decisions for you; they remove the tedious execution barriers between your vision and its realization.

What once required:

  • Hiring photographers, videographers, 3D artists

  • Renting studios and equipment

  • Weeks of production timelines

  • Tens of thousands in budget

Now requires:

  • Your creative vision

  • Strategic prompt engineering

  • Minutes to hours of generation time

  • A fraction of the traditional cost

The playing field has leveled. Small teams can now compete with enterprise creative departments. Solo creators can produce at agency scale. Startups can test creative directions that previously required venture backing.

But here's what hasn't changed: good taste, strategic thinking, brand understanding, and creative judgment still matter immensely. AI handles execution; you provide the direction.

The winners in 2025 and beyond won't be those with the biggest budgets or largest teams. They'll be those who best understand how to orchestrate these tools into coherent creative strategies.

Your move.

Quick Reference: Model Comparison Chart

| Model | Speed | Best For | When to Use |
|---|---|---|---|
| FLUX schnell | ⚡⚡⚡ | Ideation | Exploration phase |
| FLUX dev | ⚡⚡ | Production | Validated concepts |
| FLUX Kontext | ⚡⚡ | Scenes | Complex compositions |
| HiDream i1 | ⚡⚡ | Portraits | Human subjects |
| Imagen 4 | | Premium | Hero assets only |
| IC-Light V2 | ⚡⚡ | Relighting | Product consistency |
| Qwen Edit | ⚡⚡ | Quick edits | Simple changes |
| Nano Banana | ⚡⚡ | Products | Label preservation |
| Seedream V4 | | Advanced | Complex retouching |
| Bria Expand | ⚡⚡ | Formats | Multi-platform |
| ESRGAN | ⚡⚡⚡ | Upscaling | Print prep |
| Bria BG Remove | ⚡⚡ | Pro cutouts | Hair/fur/detail |
| Basic BG Remove | ⚡⚡⚡ | Simple cutouts | Speed priority |
| Product Photoshoot | ⚡⚡ | Lifestyle | E-commerce scale |
| Kling 5s | | Social video | TikTok/Reels |
| Kling 10s | | Premium video | Hero motion |
| Sora 2 | ⚡⚡ | Text-to-video | Concepts |
| Meshy v6 | | 3D models | Interactive/AR |
