How to Use Civitai Models in ComfyUI
You downloaded a model from Civitai. Now you're staring at a .safetensors file with no idea where it goes. This guide covers the full path — download, folder placement, node wiring — in about 20 minutes.
What You Need
- GPU: NVIDIA with 8 GB+ VRAM (RTX 3060 minimum). AMD RDNA 3+ has experimental support. Apple Silicon works via PyTorch's MPS (Metal) backend.
- RAM: 16 GB minimum, 32 GB recommended
- Disk space: 10–50 GB+ depending on model count. A single SDXL checkpoint is ~6.5 GB. NVMe SSD recommended for faster loading.
- Software: ComfyUI installed (Desktop, Portable, or Manual). A free Civitai account for gated models.
Step 1 — Know What You're Downloading
Civitai hosts several model types, and each goes in a different folder:
- Checkpoints: The base model (2–7 GB). This is the neural network that generates images. SD 1.5, SDXL, Flux — these are checkpoints.
- LoRAs: Small add-ons (10–200 MB) that fine-tune style, characters, or concepts. Must match the base model architecture — an SD 1.5 LoRA won't work with an SDXL checkpoint.
- VAEs: Decode latents into the final image. Swapping in a standalone VAE can fix washed-out colors and contrast, but most modern checkpoints have one baked in.
- Embeddings: Tiny files that nudge output toward specific concepts. Less powerful than LoRAs; in ComfyUI you invoke them by writing embedding:filename in your prompt.
Always pick .safetensors over .ckpt — .ckpt files can execute arbitrary Python code.
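To act on that advice, you can audit an existing models folder for legacy .ckpt files. This is a minimal sketch; the scratch directory and filenames are invented for illustration, so point MODELS at your real folder in practice:

```shell
# Flag any legacy .ckpt files so they can be replaced with .safetensors versions.
# A scratch directory with dummy files stands in for a real models folder here.
MODELS="$(mktemp -d)"
touch "$MODELS/fine_model.safetensors" "$MODELS/risky_model.ckpt"

# .ckpt files can embed arbitrary pickled Python code -- list them for replacement
find "$MODELS" -name '*.ckpt'
```

Anything the `find` prints is worth re-downloading in .safetensors form if the model page offers it.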
Step 2 — Download from Civitai
- Search civitai.com for your model.
- Check the Base Model field (SD 1.5, SDXL, Flux) — this determines compatibility with your workflow.
- Note any trigger words listed on the page (you'll need these for LoRAs).
- Click Download and grab the .safetensors file.
Some models require a free account. For bulk or CLI downloads, generate an API key at civitai.com/user/account and use:
```shell
wget --content-disposition "https://civitai.com/api/download/models/{modelVersionId}?token=YOUR-TOKEN"
```
Step 3 — Drop Files in the Right Folder
Move each file into the matching subfolder inside ComfyUI/models/:
| Model Type | Folder |
|---|---|
| Checkpoints | models/checkpoints/ |
| LoRAs | models/loras/ |
| VAEs | models/vae/ |
| Embeddings | models/embeddings/ |
| ControlNet | models/controlnet/ |
| Upscale models | models/upscale_models/ |
This is the #1 beginner pain point. Wrong folder = model doesn't appear in the node dropdown.
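The moves themselves are plain file operations. A sketch of the layout above, using scratch directories and placeholder filenames so it's safe to run anywhere (in practice, COMFY is your real ComfyUI root and DOWNLOADS is wherever your browser saved the files):

```shell
# Recreate the relevant ComfyUI/models/ subfolders and sort two downloads.
COMFY="$(mktemp -d)/ComfyUI"   # stand-in for your real ComfyUI install
DOWNLOADS="$(mktemp -d)"       # stand-in for your downloads folder

mkdir -p "$COMFY/models/checkpoints" "$COMFY/models/loras" "$COMFY/models/vae"

# Placeholder files standing in for Civitai downloads
touch "$DOWNLOADS/some_sdxl_checkpoint.safetensors" "$DOWNLOADS/some_style_lora.safetensors"

# Checkpoint -> models/checkpoints/, LoRA -> models/loras/
mv "$DOWNLOADS/some_sdxl_checkpoint.safetensors" "$COMFY/models/checkpoints/"
mv "$DOWNLOADS/some_style_lora.safetensors" "$COMFY/models/loras/"

ls "$COMFY/models/checkpoints"
```

After moving files, refresh ComfyUI so the node dropdowns rescan the folders.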
Already have models from A1111 or Forge? Rename extra_model_paths.yaml.example to extra_model_paths.yaml in your ComfyUI root and point it at your existing model folders. No re-downloading needed.
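A minimal sketch of what that file might contain for a typical A1111 install. The base_path and subfolder names below are assumptions about a default A1111 layout; the .example file shipped with ComfyUI lists the full set of supported keys:

```yaml
a111:
    base_path: /path/to/stable-diffusion-webui/   # assumption: your A1111 root
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: models/Lora
    embeddings: embeddings
    controlnet: models/ControlNet
```

Restart ComfyUI after editing the file so the extra paths are picked up.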
Step 4 — Refresh and Load Models
If ComfyUI is already running, click Refresh in the top menu bar (or press R). New models appear in the dropdown menus of matching nodes.
Load a checkpoint:
Right-click the canvas → Add Node → Loaders → Load Checkpoint. Select your model from the ckpt_name dropdown. The outputs wire to KSampler (MODEL), CLIP Text Encode (CLIP), and VAE Decode (VAE).
Add a LoRA: Add Node → Loaders → Load LoRA. Wire it between the checkpoint and your text encoders. Set strength to 0.7–1.0 and add the LoRA's trigger words to your positive prompt.
Import a Civitai workflow: Drag a workflow JSON or a PNG with embedded metadata straight onto the canvas. If nodes are missing, install ComfyUI Manager and hit "Install Missing Custom Nodes."
Or skip the manual wiring — LocalForge AI ships with ComfyUI pre-configured so you can jump straight to generating.
Verify It Works
Hit Queue Prompt (or Ctrl+Enter). If everything's wired correctly, you'll see an image in your Preview Image node within 10–60 seconds depending on resolution and GPU.
Troubleshooting
- Model not in dropdown: Wrong folder. Check the table in Step 3, then press R to refresh. Still missing? Restart ComfyUI entirely.
- "Failed to load checkpoint": Corrupted download or architecture mismatch. Re-download the file from Civitai and verify the base model matches your workflow.
- CUDA out of memory: Reduce resolution or batch size, or launch ComfyUI with the --lowvram flag. FP8 quantized models can cut VRAM usage roughly in half.
- LoRA has no effect: Trigger words are missing from your prompt. Also check that the LoRA's base model matches your checkpoint — SD 1.5 LoRA + SDXL checkpoint = broken output.
- Missing custom nodes in imported workflow: Install ComfyUI Manager → "Install Missing Custom Nodes" → restart ComfyUI.
What to Do Next
- Pick models: Best Models for ComfyUI — curated picks that actually work
- Try Flux: Best Flux Models on Civitai — the newest architecture with the best output quality
- Compare UIs: ComfyUI vs A1111 — not sure ComfyUI is right for you? This breaks it down
