LocalForge AI

How to Run AI Image Generation Without Internet

Most "local AI" guides still assume you're connected to the internet for pip installs, model downloads, and update checks. This one doesn't. Here's how to set up AI image generation that works after you unplug the cable — whether you're air-gapped for privacy, traveling without Wi-Fi, or just tired of cloud subscriptions. Total time: 2–5 hours if you've got a capable GPU.

What You Need

  • GPU: NVIDIA with 6+ GB VRAM minimum (GTX 1660 Super or better). RTX 3060 12 GB is the budget sweet spot (~$300 used). RTX 4090 24 GB if you want zero compromises
  • RAM: 16 GB minimum, 32 GB recommended
  • Disk space: 30–50 GB for one UI + one model. 100 GB+ for multiple models and LoRAs. A single SDXL checkpoint is ~6.5 GB, SD 1.5 is ~4 GB, Flux is ~12 GB+
  • OS: Windows 10/11, Linux (Ubuntu 22.04+), or macOS (Apple Silicon M1+)
  • Internet: You need it once — to download everything. After that, never again

Step 1 — Download Everything While You're Still Online

This is the step that matters most. Miss one dependency and you're stuck offline with a broken install. Download all of these to a single folder or USB drive:

  • Python 3.10 installer — grab the offline .exe from python.org
  • CUDA Toolkit local installer — choose "exe (local)" not "exe (network)" (~3 GB). The network installer requires internet during install
  • NVIDIA GPU drivers — latest Game Ready or Studio driver
  • Your UI — ComfyUI Portable is the best choice (self-contained ~1.5 GB zip with embedded Python). Fooocus or Forge also work
  • Model checkpoints — .safetensors files from Civitai or Hugging Face. Pick your models: SD 1.5 (~4 GB), SDXL (~6.5 GB), or Flux (~12 GB+). Grab any LoRAs and VAEs you want too
  • PyTorch wheels — on your connected machine, run: pip download torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121 -d ./offline_packages
  • UI dependencies — from inside your UI folder: pip download -r requirements.txt -d ./offline_packages

Your download folder will be 30–100 GB depending on how many models you grab.
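Before you disconnect, sanity-check that the wheels you'll need actually made it into the bundle. A minimal bash sketch (works on Linux/macOS or via Git Bash/WSL on Windows; `check_wheels` is an illustrative helper, not part of any tool, and the `offline_packages` folder name matches the `pip download` commands above):

```shell
# check_wheels DIR PKG... — verify DIR contains a wheel file for each named
# package; prints "ok:" or "MISSING:" per package, returns nonzero if any
# package has no wheel.
check_wheels() {
  dir="$1"; shift
  missing=0
  for pkg in "$@"; do
    if ls "$dir"/"$pkg"-*.whl >/dev/null 2>&1; then
      echo "ok: $pkg"
    else
      echo "MISSING: $pkg"
      missing=1
    fi
  done
  return "$missing"
}

# Example, run from the folder that holds offline_packages:
#   check_wheels ./offline_packages torch torchvision torchaudio
```

A "MISSING" line here costs you one re-download while you still have internet; discovering it offline costs you a trip back to a connected machine.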

Step 2 — Install System Software on the Offline Machine

Run the offline installers you downloaded:

  1. Python 3.10 — check "Add Python to PATH" (skip this and you'll get "python is not recognized" errors later)
  2. NVIDIA GPU driver
  3. CUDA Toolkit local installer

Verify CUDA: run nvcc --version in your terminal. It should return your CUDA version.
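A quick way to confirm the installers actually put everything on PATH, from bash (Git Bash/WSL on Windows; `require_cmd` is just an illustrative helper):

```shell
# require_cmd NAME — report whether NAME is resolvable on PATH.
require_cmd() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "MISSING: $1"
  fi
}

# After the installers finish, check the essentials:
#   require_cmd python
#   require_cmd nvcc
```

If `python` is missing, re-run the Python installer and tick "Add Python to PATH"; if `nvcc` is missing, the CUDA Toolkit install didn't complete or its bin directory isn't on PATH.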

Step 3 — Install Python Packages Without Internet

Point pip at your downloaded wheels instead of PyPI:

pip install --no-index --find-links=./offline_packages torch torchvision torchaudio
pip install --no-index --find-links=./offline_packages -r requirements.txt

Confirm GPU access: python -c "import torch; print(torch.cuda.is_available())" should print True.

The most common failure here is a Python version mismatch. Wheel files are version-specific — cp310 means Python 3.10. If your offline machine runs 3.11, those wheels won't install.
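You can check for this mismatch with plain string matching on the wheel filenames — no install attempt needed. A sketch (`wheel_matches_python` is a made-up helper; the tag format follows the standard wheel naming convention):

```shell
# wheel_matches_python WHEEL TAG — does this wheel's CPython tag (e.g. cp310)
# match the interpreter tag you expect on the offline machine?
wheel_matches_python() {
  case "$1" in
    *-"$2"-*) return 0 ;;             # e.g. ...-cp310-cp310-win_amd64.whl
    *-py3-none-any.whl) return 0 ;;   # pure-Python wheels fit any 3.x
    *) return 1 ;;
  esac
}

# Example:
#   wheel_matches_python torch-2.3.1+cu121-cp310-cp310-win_amd64.whl cp310  # matches
#   wheel_matches_python torch-2.3.1+cu121-cp310-cp310-win_amd64.whl cp311  # does not
```

If a wheel fails this check, re-download it from a connected machine with the matching Python — or install Python 3.10 on the offline machine to match the wheels you already have.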

Step 4 — Place Models and Configure for Offline Mode

Copy your .safetensors files into the correct folders:

  • ComfyUI: models/checkpoints/, models/loras/, models/vae/
  • Forge/A1111: models/Stable-diffusion/, models/Lora/, models/VAE/
  • Fooocus: models/checkpoints/, models/loras/

If you're using A1111 or Forge, add these flags to webui-user.bat:

set COMMANDLINE_ARGS=--skip-install --skip-version-check --no-download-sd-model --do-not-download-clip

Without these, A1111 hangs on startup trying to reach pypi.org and github.com. ComfyUI doesn't need any special flags — it runs offline out of the box. Or try LocalForge AI, which ships pre-configured for offline use with zero setup required.
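For reference, a complete offline-ready webui-user.bat might look like this — a sketch based on the stock file layout, where leaving PYTHON, GIT, and VENV_DIR empty keeps the defaults:

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--skip-install --skip-version-check --no-download-sd-model --do-not-download-clip

call webui.bat
```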

Step 5 — Generate Your First Image

Launch your UI:

  • ComfyUI Portable: run run_nvidia_gpu.bat, then open http://127.0.0.1:8188 in your browser
  • Forge/A1111: run webui-user.bat, then open http://127.0.0.1:7860
  • Fooocus: run run.bat, then open http://127.0.0.1:7865

Select your model from the dropdown, type a prompt, hit Generate. First image should appear in 5–30 seconds depending on your GPU and model.

Verify It Works

Disconnect from the internet — pull the ethernet cable or disable Wi-Fi — and generate another image. If it works, you're fully offline. If the UI crashes or hangs, you still have a dependency trying to phone home. Check the troubleshooting section below.

Troubleshooting

  • A1111 hangs on startup: Add --skip-install --skip-version-check to COMMANDLINE_ARGS. Remove any git pull lines from webui-user.bat
  • "No matching distribution found" during pip install: Python version on your offline machine doesn't match the downloaded wheel files. Check the wheel filename — cp310 = Python 3.10
  • torch.cuda.is_available() returns False: CUDA Toolkit version must match your PyTorch build. Installed CUDA 12.1? Use PyTorch wheels from the cu121 directory
  • "No models found" in the UI: Checkpoint files aren't in the right folder. Must be .safetensors or .ckpt, placed directly in the checkpoints directory — not a subfolder inside it
  • "Could not create share link" error: Don't use the --share flag when offline. It tries to create a public Gradio URL, which requires internet
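The "No models found" case is easy to check from a terminal. A bash sketch (`list_checkpoints` is illustrative; point it at your UI's checkpoints folder):

```shell
# list_checkpoints DIR — show the model files the UI will actually see:
# .safetensors or .ckpt files sitting directly in DIR, not in subfolders.
list_checkpoints() {
  find "$1" -maxdepth 1 -type f \
    \( -name '*.safetensors' -o -name '*.ckpt' \) -print
}

# Example:
#   list_checkpoints ComfyUI/models/checkpoints
```

If your model doesn't appear in the output, it's either in a nested subfolder or still wherever your browser downloaded it.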

What to Do Next

  • Set up more models: How to Run Civitai Models Locally
  • Go deeper with SDXL: How to Run SDXL Locally
  • Make it portable: Copy your entire working install to a USB 3.0+ drive or external SSD. ComfyUI Portable works great for this — plug into any Windows machine with an NVIDIA GPU and go

FAQ

Can I run AI image generation without any internet connection?
Yes. Once you download the software, Python dependencies, and model files while online, everything runs 100% offline. No cloud connection, no account, no subscription.
Which AI image generator works best offline?
ComfyUI Portable is the best option for offline use. It's self-contained, doesn't need special flags, and includes its own embedded Python. Fooocus is the easiest if you just want to type prompts and generate.
How much disk space do I need for offline AI image generation?
30–50 GB minimum for one UI and one model checkpoint. A single SDXL model is ~6.5 GB, SD 1.5 is ~4 GB, and Flux is ~12 GB+. Budget 100 GB+ if you want multiple models and LoRAs.
Do I need an NVIDIA GPU to run AI offline?
NVIDIA is the easiest path — 6+ GB VRAM minimum. AMD GPUs work with some UIs on Linux but setup is harder and slower. Apple Silicon Macs (M1+) work with ComfyUI and Fooocus but generate slower than equivalent NVIDIA cards.
Can I run AI image generation from a USB drive?
Yes. Copy your entire ComfyUI Portable folder (UI + embedded Python + models) to a USB 3.0+ drive or external SSD. Plug it into any Windows machine with an NVIDIA GPU and compatible drivers, run the batch file, and generate. USB 2.0 is too slow for model loading.