
How to Run Civitai Models Locally — Complete Setup Guide

You downloaded a model from CivitAI. Now what? This guide gets you from downloaded file to generated image in under 30 minutes. Everything runs on your hardware — no cloud, no accounts, no content filters.

What You Need

  • GPU: NVIDIA with 4 GB+ VRAM (8 GB recommended)
  • RAM: 16 GB minimum, 32 GB recommended
  • Disk: 20 GB free + your model files (2-25 GB each)
  • OS: Windows 10/11 (recommended), Linux, or macOS (limited GPU support)
  • Software: Python 3.10+ and Git (Forge auto-installs Python if needed)

Do not have a CivitAI model yet? Start with How to Download Civitai Models.

Step 1 — Pick Your UI

Three real options in 2026:

  • Forge (recommended) — Form-based UI. Familiar layout, 30-50% less VRAM than A1111, supports SDXL and Flux. Best for most users.
  • ComfyUI — Node-based workflow editor. Best performance, first to support new models, steeper learning curve. For power users.
  • Fooocus — Type a prompt, click generate. Minimal options. For absolute beginners who do not want to tweak settings.

Skip A1111 — Forge is a better version of the same thing with lower VRAM usage.

Step 2 — Install Forge

  1. Download Forge from GitHub
  2. Extract the zip to a folder with no spaces in the path (e.g., C:\ai\forge\)
  3. Run webui-user.bat (Windows) or webui.sh (Linux/Mac)
  4. Wait 10-15 minutes on first run — it downloads Python, PyTorch, and dependencies automatically

When done, your browser opens to http://127.0.0.1:7860 with the Forge UI.

If your antivirus blocks the install, add the folder to your exceptions list. This is a common issue with Python-based tools.

Step 3 — Place Your Model Files

Drop your downloaded .safetensors files into the right folders:

  • Checkpoints: models/Stable-diffusion/
  • LoRAs: models/Lora/
  • VAEs: models/VAE/
  • Embeddings: embeddings/

Then click the refresh button in the Forge UI next to the checkpoint dropdown. Your model appears in the list.
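If you download models often, a small helper script saves you from misplacing files. This is a sketch under the folder layout listed above; the function name `place_model` and the `forge_root` parameter are illustrative, not part of Forge itself.

```python
from pathlib import Path
import shutil

# Destination folders from the list above, relative to the Forge install root.
FOLDERS = {
    "checkpoint": "models/Stable-diffusion",
    "lora": "models/Lora",
    "vae": "models/VAE",
    "embedding": "embeddings",
}

def place_model(file_path: str, model_type: str, forge_root: str) -> Path:
    """Copy a downloaded .safetensors file into the matching Forge folder."""
    dest_dir = Path(forge_root) / FOLDERS[model_type]
    dest_dir.mkdir(parents=True, exist_ok=True)  # create folder tree if missing
    return Path(shutil.copy2(file_path, dest_dir))
```

For example, `place_model("juggernaut.safetensors", "checkpoint", r"C:\ai\forge")` drops the file into `C:\ai\forge\models\Stable-diffusion\`.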

Step 4 — Configure for Your GPU

Your VRAM determines which models you can run:

  • 4-6 GB VRAM (GTX 1650, RTX 2060 6GB): SD 1.5 models only. Set resolution to 512x512. Use CyberRealistic or Realistic Vision.
  • 8 GB VRAM (RTX 3060 8GB, RTX 4060): SDXL models work. Set resolution to 1024x1024. Juggernaut XL, RealVisXL, Pony V6 all run well.
  • 12+ GB VRAM (RTX 3060 12GB, RTX 4070 Ti+): Flux models work. Use ComfyUI for best Flux performance.

If you hit "CUDA out of memory" errors, add --medvram to the COMMANDLINE_ARGS line in webui-user.bat and restart.
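For reference, a minimal webui-user.bat with that flag added might look like the following (a config sketch — the surrounding lines are the stock Forge/A1111 launcher structure, and --medvram is the only flag this guide assumes you need):

```
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--medvram

call webui.bat
```

Edit the file in any text editor, save, and relaunch Forge for the flag to take effect.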

Step 5 — Generate Your First Image

  1. Select your model from the checkpoint dropdown
  2. Type a prompt:
    • Realistic: portrait of a woman, natural lighting, photorealistic, 8k
    • Anime (Pony/Illustrious): score_9, score_8_up, score_7_up, 1girl, anime style, detailed
  3. Click Generate

Your first image appears in 5-30 seconds depending on your GPU and model size.
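You can also drive generation from a script. Forge inherits the A1111-style HTTP API, enabled by adding --api to your launch arguments; the sketch below assumes that flag is set and a local instance is running on the default port. The helper names (`build_payload`, `generate`) are mine, but the `/sdapi/v1/txt2img` endpoint and payload field names follow the A1111 API.

```python
import json
from urllib import request

def build_payload(prompt: str, steps: int = 25, width: int = 1024,
                  height: int = 1024, cfg_scale: float = 6.0) -> dict:
    """Assemble a txt2img request body (field names follow the A1111 API)."""
    return {
        "prompt": prompt,
        "steps": steps,
        "width": width,
        "height": height,
        "cfg_scale": cfg_scale,
    }

def generate(prompt: str, base_url: str = "http://127.0.0.1:7860") -> dict:
    """POST to a local Forge instance launched with --api."""
    body = json.dumps(build_payload(prompt)).encode()
    req = request.Request(f"{base_url}/sdapi/v1/txt2img", data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        # The response's "images" key holds base64-encoded PNGs.
        return json.load(resp)
```

This is handy for batch jobs — loop `generate()` over a list of prompts and decode the returned images with the base64 module.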

Step 6 — Optimize Your Settings

Dial in these settings for better results:

  • Sampler: DPM++ 2M Karras (realistic) or Euler a (anime)
  • Steps: 20-30 for quality, 10-15 for speed
  • CFG Scale: 5-7 for SDXL, 7-9 for SD 1.5
  • Resolution: 512x512 for SD 1.5, 1024x1024 for SDXL
  • Clip Skip: Set to 2 for Pony and Illustrious models
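The settings above can be collected into per-family presets so you stop re-entering them. This is just a convenience sketch using the starting values from this section; `preset_for` and the family keys are illustrative names, and every value is a starting point to tune per model.

```python
# Starting points from the settings above; tune per model.
PRESETS = {
    "sd15": {"sampler": "DPM++ 2M Karras", "steps": 25, "cfg": 8.0,
             "size": (512, 512), "clip_skip": 1},
    "sdxl": {"sampler": "DPM++ 2M Karras", "steps": 25, "cfg": 6.0,
             "size": (1024, 1024), "clip_skip": 1},
    "pony": {"sampler": "Euler a", "steps": 25, "cfg": 6.0,
             "size": (1024, 1024), "clip_skip": 2},
}

def preset_for(family: str) -> dict:
    """Look up a settings preset by model family."""
    try:
        return PRESETS[family]
    except KeyError:
        raise ValueError(f"unknown model family: {family!r}") from None
```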

Verify It Works

You should see a clear, coherent image matching your prompt. If the image is:

  • Black: VAE issue. Switch to a BakedVAE model version or download the matching VAE.
  • Blurry or low-detail: Steps too low. Increase to 25-30.
  • Wrong style: Check that you are using the right prompt format for your model (score tags for Pony, standard prompt for SDXL).

Troubleshooting

  • "CUDA out of memory": Model too large for your VRAM. Add --medvram to launch arguments, or switch to a smaller model.
  • Black/corrupt images: VAE mismatch. Download the correct VAE or use a BakedVAE checkpoint.
  • Very slow (minutes per image): Make sure you are running on GPU, not CPU. Update your NVIDIA drivers. Check that CUDA is detected in the Forge console output.
  • LoRA has no effect: Use the correct prompt syntax: <lora:filename:0.8> — adjust the 0.8 weight up or down.
  • Pony models look bad: You need quality score tags. Add score_9, score_8_up, score_7_up at the start of your prompt.
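The last two fixes are easy to get wrong by hand, so here is a sketch that builds the strings for you. The function names are mine; the `<lora:name:weight>` syntax and the score tags come straight from the points above.

```python
def with_lora(prompt: str, lora_name: str, weight: float = 0.8) -> str:
    """Append a LoRA activation tag in the <lora:name:weight> syntax."""
    return f"{prompt}, <lora:{lora_name}:{weight}>"

# Quality score tags that Pony-based models expect at the start of a prompt.
PONY_QUALITY_TAGS = "score_9, score_8_up, score_7_up"

def pony_prompt(prompt: str) -> str:
    """Prepend the Pony quality score tags."""
    return f"{PONY_QUALITY_TAGS}, {prompt}"
```

For example, `pony_prompt(with_lora("1girl, anime style", "mystyle"))` yields a prompt with the score tags up front and the LoRA tag at the end.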

What to Do Next

Experiment from here: try other checkpoints and LoRAs, dial in your sampler settings, or move to ComfyUI if you outgrow Forge. Or skip all of this — LocalForge AI ships with Forge pre-installed, popular models included, and zero Python/Git setup. Generate images within 5 minutes of download.

FAQ

Do I need Python to run Civitai models locally?
Forge auto-installs Python on first run. You do not need to install it manually. ComfyUI also bundles its own Python environment.

How much VRAM do I need?
4 GB minimum for SD 1.5 models. 8 GB for SDXL models. 12+ GB for Flux models. Check the model architecture tag on CivitAI to know which tier it needs.

Can I run Civitai models on AMD GPUs?
Limited support. ComfyUI has experimental AMD support via ROCm on Linux. Forge works best with NVIDIA GPUs. macOS with Apple Silicon has basic support through MLX, but model selection is limited.

Is this legal?
Yes. Downloading and running models locally for personal use is legal. Some Flux-based models use non-commercial licenses — check the license on each model's CivitAI page if you plan commercial use.

How long does generation take?
5-10 seconds per image on an RTX 3060 with SDXL at 20 steps. SD 1.5 is faster (2-5 seconds). Flux is slower (15-30 seconds). Times vary with resolution and step count.

Can my ISP see what I am generating?
No. Everything runs on your hardware, and no internet connection is required after downloading the model. No data leaves your machine.