
InvokeAI — Open-Source Local AI Tool

InvokeAI is a free, open-source creative engine for Stable Diffusion, SDXL, and FLUX models — and it has the single best canvas editing experience in all of local AI. No subscriptions, no cloud dependency, full privacy.

This guide breaks down what InvokeAI does well, where it falls short, what hardware you need, and whether it's the right tool for you in 2026.


InvokeAI gives you a Photoshop-style infinite canvas powered by Stable Diffusion — and nothing else in local AI comes close to that specific experience.

At a Glance

| Detail | Info |
| --- | --- |
| Type | Local / self-hosted |
| Price | Free (Apache 2.0 open source) |
| Platform | Windows, macOS, Linux |
| Supported models | SD 1.5, SDXL, FLUX.1, FLUX.2 Klein |
| Interface | Browser-based WebUI (localhost:9090) |
| Key strength | Unified Canvas + node workflow editor |
| GitHub stars | 26,000+ |
| NSFW filters | None by default (your hardware, your rules) |
| Commercial version | Yes — Invoke also sells enterprise tiers |

TL;DR — Is It Worth It?

If you want the best inpainting and outpainting experience in local AI, InvokeAI is it. The Unified Canvas feels like an actual creative tool — not a prompt box bolted onto a model loader. You can paint, mask, extend, and composite in one workspace, and that workflow is genuinely addictive.

The tradeoff: it's slower than ComfyUI and Forge in raw generation speed, it needs more VRAM for FLUX models, and it's not always first to support new architectures. But for iterative, canvas-based creative work? Nothing else touches it.

Top 5 Features

  1. Unified Canvas — An infinite, Photoshop-style workspace where txt2img, img2img, inpainting, and outpainting all happen in one place. Layers, brush tools, masks, region-based prompting. This is the killer feature and it's genuinely excellent.

  2. Node-based workflow editor — Build custom generation pipelines by wiring together nodes. Save workflows as JSON, share them, version them. Not as deep as ComfyUI's node system, but far more approachable.

  3. Model manager with one-click downloads — Browse, download, and switch between SD 1.5, SDXL, FLUX.1, and FLUX.2 Klein models from the UI. Supports checkpoints, diffusers, LoRAs, and embeddings.

  4. Gallery and boards — Organize your generations into project boards, each with its own asset folder. Full metadata recall lets you remix any past image with the exact settings that created it.

  5. ControlNet and regional prompting — Regional guidance layers let you prompt different areas of the canvas independently. ControlNet integration is built into the canvas workflow, not bolted on as an afterthought.

Requirements & Setup

| Model | GPU (NVIDIA) | VRAM | RAM | Disk |
| --- | --- | --- | --- | --- |
| SD 1.5 (512×512) | GTX 1060+ | 4 GB+ | 8 GB | ~40 GB |
| SDXL (1024×1024) | RTX 2060+ | 8 GB+ | 16 GB | ~110 GB |
| FLUX.1 (1024×1024) | RTX 2060+ | 10 GB+ | 32 GB | ~210 GB |
| FLUX.2 Klein (quantized) | RTX 2060+ | 6 GB+ | 32 GB | ~210 GB |

AMD GPUs work on Linux only (ROCm driver). macOS runs on Apple Silicon via MPS but expect slower performance.

Installation is straightforward — download the launcher, pick an install directory, and it handles the Python environment automatically. First launch pulls the default model (~4–7 GB) and starts the WebUI at http://localhost:9090. The whole process takes about 10–15 minutes on a decent connection.

One heads-up: FLUX models are hungry. The full FLUX.1 pipeline wants 10 GB+ VRAM and 32 GB RAM. If you're on an 8 GB card, stick with SDXL or try the FLUX.2 Klein quantized models — they squeeze into 6 GB.

Limitations

  • Slower generation than competitors. In SDXL benchmarks, InvokeAI (~11–13s per image) trails both Forge (~5–6s) and ComfyUI (~8s). It's not unusable, but if speed is your top priority, you'll feel it.

  • Not first to new models. ComfyUI typically gets day-one support for new architectures (Flux, Wan, etc.). InvokeAI follows weeks or months later. If you need bleeding-edge models the day they drop, this isn't your tool.

  • VRAM management can be aggressive. InvokeAI grabs GPU memory and holds it until you restart or manually clear the cache. If you're multitasking with other GPU apps, you'll need to tweak invokeai.yaml settings.
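If InvokeAI is starving other GPU apps, the model-cache limits in `invokeai.yaml` are the levers to pull. Below is an illustrative sketch only: the exact key names and nesting differ between InvokeAI releases, so treat these as assumptions and confirm them against the configuration reference for your installed version.

```yaml
# invokeai.yaml (in your install directory)
# Illustrative values; key names vary by InvokeAI release.

# Cap how much system RAM the model cache may use (GB).
ram: 12.0

# Cap how much VRAM stays resident between generations (GB).
# Lowering this returns GPU memory to other apps sooner.
vram: 4.0

# Offload models from VRAM lazily instead of keeping them pinned.
lazy_offload: true
```

Restart InvokeAI after editing the file; cache settings are read at startup, not applied live.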

  • Node editor is less powerful than ComfyUI's. The workflow builder is solid for most tasks, but power users doing complex multi-model pipelines will hit its ceiling faster than they would with ComfyUI.

How It Compares

| Feature | InvokeAI | ComfyUI | Forge | AUTOMATIC1111 |
| --- | --- | --- | --- | --- |
| Canvas editing | Best-in-class | Basic (via plugins) | None built-in | None built-in |
| Learning curve | Moderate (~1–2 hours) | Steep (~2–4 weeks) | Easy (~2–3 hours) | Easy (~2–3 hours) |
| Generation speed (SDXL) | ~11–13s | ~8s | ~5–6s | ~11s |
| VRAM efficiency | Moderate | Good (~9.2 GB SDXL) | Best (~8–9 GB) | Higher (~10.7 GB) |
| New model support | Delayed | Day-one | Fast | Stalled |
| Node workflows | Yes (simpler) | Yes (deep) | No | No |
| Video generation | No | Yes (Wan, SVD) | Yes (Forge Neo) | No |
| Development activity | Active | Very active | Active (Forge Neo) | Stalled |
| Best for | Canvas artists | Power users | Speed seekers | Legacy users |

Bottom Line

Use InvokeAI if:

  • You do iterative canvas work — inpainting faces, extending backgrounds, compositing multiple elements. Nothing else compares.
  • You want a polished, all-in-one UI — gallery management, boards, metadata recall, regional prompting, all built in.
  • You're a visual artist, not a pipeline engineer — the interface feels like a creative tool, not a developer utility.

Skip InvokeAI if:

  • Raw speed matters most — Forge and ComfyUI are measurably faster for batch generation.
  • You need bleeding-edge model support — ComfyUI gets new architectures first, sometimes by months.
  • You want video generation — InvokeAI is images only. ComfyUI handles Wan and SVD natively.

For a zero-setup option that gets you generating immediately, LocalForge AI ships with Forge pre-configured and ready to go — no Python environments, no YAML files, no troubleshooting. It's one path among several, but it's the fastest way from download to first image.

Frequently Asked Questions

Is InvokeAI free?
Yes. The Community Edition is completely free and open source under the Apache 2.0 license. Invoke also sells commercial tiers (Starter at $19/mo, Indie at $49/mo) for enterprise features like team collaboration and role-based access, but the core open-source app has no restrictions.

What GPU do I need to run InvokeAI?
For SD 1.5 models: any NVIDIA GPU with 4 GB+ VRAM (GTX 1060 or newer). For SDXL: 8 GB+ VRAM (RTX 2060+). For FLUX models: 10 GB+ VRAM and 32 GB system RAM, though FLUX.2 Klein quantized versions can work with 6 GB VRAM. AMD GPUs are supported on Linux only.

Can InvokeAI run on macOS or without an NVIDIA GPU?
Yes. InvokeAI supports Apple Silicon Macs via MPS (Metal Performance Shaders). Performance is slower than NVIDIA CUDA, but it works. AMD GPUs are supported on Linux via ROCm drivers. CPU-only generation is technically possible but extremely slow.

What is the difference between InvokeAI and ComfyUI?
InvokeAI excels at canvas-based editing — inpainting, outpainting, and compositing in a Photoshop-style workspace. ComfyUI excels at complex node-based workflows, faster generation speed, and day-one support for new models. InvokeAI is more approachable; ComfyUI is more powerful for advanced pipeline work.

Does InvokeAI support FLUX models?
Yes. InvokeAI supports FLUX.1 (needs 10 GB+ VRAM) and the newer FLUX.2 Klein family, including quantized versions that run on GPUs with as little as 6 GB VRAM. FLUX support was added in version 5.0+.

Is InvokeAI good for beginners?
It is a solid middle ground. The UI is more polished and intuitive than ComfyUI, and the canvas editor is easy to pick up. But it is more complex than Fooocus (which is literally type-and-generate). Plan for 1–2 hours to get comfortable with the interface.

Details

Website: https://github.com/invoke-ai/InvokeAI
Runs Locally: Yes
Open Source: Yes
NSFW Allowed: Yes

Supported Models

Stable Diffusion 1.5
SDXL 1.0
FLUX.1 Dev