How to Run an AI Image Generator Completely Offline
Every cloud AI image generator logs your prompts, filters your outputs, and stops working when their servers go down. You can run Stable Diffusion, SDXL, and Flux entirely on your own hardware — no internet needed after initial setup. This guide covers hardware requirements, the install process, and the exact flags to go fully air-gapped.
Key Takeaway — April 2026
Install a free UI (Forge, ComfyUI, or Fooocus) on an internet-connected machine, download your models, set offline flags, then unplug. Total setup: 30–90 minutes. Minimum hardware: NVIDIA GPU with 4 GB VRAM, 8 GB RAM, 10 GB disk. Or use LocalForge AI for a pre-configured offline setup with models included.
What You Need
- GPU (minimum): NVIDIA with 4 GB VRAM (GTX 1650) — generates 512×512 slowly
- GPU (recommended): RTX 3060 12 GB or RTX 4060 Ti 16 GB for comfortable SDXL. RTX 4090 24 GB for full-precision Flux.
- RAM: 8 GB minimum, 16–32 GB recommended
- Disk: 10–20 GB for one UI + one model. 100 GB+ SSD for multiple checkpoints and LoRAs.
- OS: Windows 10/11, Linux (Ubuntu 22.04+), or macOS with Apple Silicon (M1+)
- Software: Python 3.10+, CUDA toolkit, PyTorch — all bundled with one-click installers
Step 1 — Pick and Install a UI
Download one of these while you're still online:
- Stability Matrix (best for beginners) — one-click installer that manages Forge, ComfyUI, A1111, and Fooocus from one app. Shared model folder saves disk space. Download from lykos.ai.
- ComfyUI Portable — node-based, most flexible, best Flux/SDXL support. Grab the portable .7z from GitHub releases.
- Forge WebUI — A1111 fork with 10–30% faster generation and better VRAM handling. Best balance of simplicity and performance.
- Fooocus — simplest option, prompt-focused. Currently in LTS mode (bug fixes only). Only download from the official GitHub repo — fake sites like fooocus.com exist.
Run your chosen UI once while online. This downloads Python dependencies, PyTorch, and CUDA libraries automatically.
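Once that first online run finishes, you can sanity-check the environment with a short Python snippet — a minimal sketch; run it with the same interpreter the UI uses (for example ComfyUI Portable's embedded Python):

```python
def check_gpu_environment():
    """Return a short status string for the local PyTorch/CUDA install."""
    try:
        import torch
    except ImportError:
        return "PyTorch not installed yet -- run the UI once while online"
    if torch.cuda.is_available():
        return f"CUDA OK: {torch.cuda.get_device_name(0)}"
    return "PyTorch installed, but no CUDA GPU detected"

print(check_gpu_environment())
```

If this reports a CUDA-capable GPU, everything you need for offline generation is already on disk.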
Step 2 — Download Models Before You Disconnect
Grab everything you need from these two sources:
- CivitAI (civitai.com) — largest community model library, free downloads
- Hugging Face (huggingface.co) — official repos from Stability AI and Black Forest Labs
Models to start with:
- SD 1.5 (~2–4 GB) — lightweight, massive LoRA ecosystem
- SDXL (~6.5 GB) — higher quality, 1024×1024 native resolution
- FLUX.1 schnell (Apache 2.0, free commercial use) — best prompt following. Q4/Q5 GGUF versions run on 8 GB VRAM cards.
Always download .safetensors format — .ckpt files can execute arbitrary code.
Place checkpoints in models/Stable-diffusion/ (Forge/A1111) or models/checkpoints/ (ComfyUI). LoRAs go in models/Lora/, VAEs in models/VAE/.
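The safety difference is structural: a .ckpt is a Python pickle, which can execute code when loaded, while a .safetensors file is just a length-prefixed JSON header followed by raw tensor bytes. A minimal sketch of inspecting that header without loading any weights (the demo file written here is a hypothetical stand-in, not a real checkpoint):

```python
import json
import struct

def read_safetensors_header(path):
    """Parse only the JSON header of a .safetensors file.
    Plain JSON parsing -- nothing executes, unlike unpickling a .ckpt."""
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]  # first 8 bytes: header size (u64, little-endian)
        return json.loads(f.read(header_len))

# Write a tiny stand-in file; real checkpoints map tensor names to dtype/shape/offsets here.
header = json.dumps({"__metadata__": {"format": "pt"}}).encode()
with open("demo.safetensors", "wb") as f:
    f.write(struct.pack("<Q", len(header)) + header)

print(read_safetensors_header("demo.safetensors"))
```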
Step 3 — Set Offline Flags
Each UI handles offline mode differently:
- Forge / A1111: Edit `webui-user.bat` and add:
  `set COMMANDLINE_ARGS=--offline --skip-install --skip-version-check --skip-python-version-check --skip-torch-cuda-test`
  On Linux, edit `webui-user.sh` instead and use `export COMMANDLINE_ARGS="..."` with the same flags.
- ComfyUI: Runs fully offline by default once dependencies are installed. Disable auto-update in ComfyUI Manager settings.
- Fooocus: Fully offline after first run (which downloads its default SDXL model automatically).
Step 4 — Disconnect and Generate
Unplug ethernet or disable Wi-Fi. Open the UI, select your model from the dropdown, type a prompt, hit Generate. If an image appears, you're done — your machine is now a self-contained image generator with zero internet dependency.
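To confirm the machine is actually offline before you trust the setup, a quick connectivity probe helps — a minimal sketch; the host 1.1.1.1 and port 53 (a public DNS resolver) are arbitrary choices:

```python
import socket

def internet_available(host="1.1.1.1", port=53, timeout=2.0):
    """Return True if a TCP connection to a public DNS server succeeds."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

print("offline" if not internet_available() else "still online -- check Wi-Fi/ethernet")
```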
Verify It Works
Use a simple test prompt at 512×512. You should see an image in 5–30 seconds depending on your GPU. If the model doesn't appear in the dropdown, check that your .safetensors file is in the correct directory and refresh the model list.
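If a model never appears or generates garbage, the download may be corrupt. CivitAI and Hugging Face both publish SHA-256 hashes for model files, so you can verify your local copy before blaming the UI — a minimal sketch; the commented-out path is a placeholder for your own checkpoint:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream-hash a file in 1 MB chunks so multi-GB checkpoints
    never need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare the result against the hash listed on the model's download page:
# print(sha256_of("models/Stable-diffusion/sd_xl_base_1.0.safetensors"))
```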
Troubleshooting
- A1111 crashes on launch offline: The `--skip-install` flag is unreliable in some versions. Delete the `venv/` folder, reconnect briefly, re-run once to rebuild dependencies, then go offline again.
- "CUDA out of memory" error: Use the `--lowvram` or `--medvram` flags. For Flux, add `--cpu-text-encoder` to offload the T5 encoder to system RAM, or switch to GGUF quantized models in ComfyUI.
- Washed-out or wrong colors: Missing VAE. For SD 1.5, manually select `vae-ft-mse-840000-ema-pruned.safetensors`. SDXL checkpoints usually bake in the VAE.
- ComfyUI Manager won't load offline: Pre-install `gitdb` and `GitPython` into the embedded Python environment while you're still online.
- "Exception importing xformers": Make sure xformers matches your PyTorch version (same CUDA build), or drop `--xformers` and use `--opt-sdp-attention` instead.
What to Do Next
- Pick your models: Best Local Stable Diffusion Models for curated recommendations
- Learn ComfyUI workflows: ComfyUI Setup Guide for the node-based editor
- Add models later: Download on any internet-connected machine, transfer via USB to your offline setup, drop the `.safetensors` file into the right `models/` subfolder, and refresh the model list.
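After transferring, a quick inventory script confirms everything landed where the UI expects it — a minimal sketch assuming the Forge/A1111 folder layout; adjust the root for ComfyUI:

```python
from pathlib import Path

def list_models(root="models"):
    """Group every .safetensors file under the models/ tree by its parent
    folder name, e.g. {'Stable-diffusion': [...], 'Lora': [...]}."""
    root_path = Path(root)
    if not root_path.is_dir():
        return {}
    found = {}
    for f in sorted(root_path.rglob("*.safetensors")):
        found.setdefault(f.parent.name, []).append(f.name)
    return found

for folder, files in list_models().items():
    print(f"{folder}: {len(files)} file(s)")
```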
