LocalForge AI

How to Use Civitai Models in ComfyUI

You downloaded a model from Civitai. Now you're staring at a .safetensors file with no idea where it goes. This guide covers the full path — download, folder placement, node wiring — in about 20 minutes.

What You Need

  • GPU: NVIDIA with 8 GB+ VRAM (RTX 3060 minimum). AMD RDNA 3+ has experimental support. Apple Silicon works via PyTorch's MPS (Metal) backend.
  • RAM: 16 GB minimum, 32 GB recommended
  • Disk space: 10–50 GB+ depending on model count. A single SDXL checkpoint is ~6.5 GB. NVMe SSD recommended for faster loading.
  • Software: ComfyUI installed (Desktop, Portable, or Manual). A free Civitai account for gated models.
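Before downloading anything, you can sanity-check the hardware side from a terminal. A minimal sketch: `nvidia-smi` only exists where NVIDIA drivers are installed, and the disk check looks at whatever directory you run it from.

```shell
# Print GPU name and total VRAM (NVIDIA only); fall back to a note otherwise
nvidia-smi --query-gpu=name,memory.total --format=csv,noheader 2>/dev/null \
  || echo "nvidia-smi not found (no NVIDIA driver on this machine)"

# Free space on the current disk - checkpoints are multi-gigabyte files
df -h . | tail -n 1
```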

Step 1 — Know What You're Downloading

Civitai hosts several model types, and each goes in a different folder:

  • Checkpoints: The base model (2–7 GB). This is the neural network that generates images. SD 1.5, SDXL, Flux — these are checkpoints.
  • LoRAs: Small add-ons (10–200 MB) that fine-tune style, characters, or concepts. Must match the base model architecture — an SD 1.5 LoRA won't work with an SDXL checkpoint.
  • VAEs: Fix color and contrast issues. Most modern checkpoints have one baked in.
  • Embeddings: Tiny files that nudge output toward specific concepts. Less powerful than LoRAs.

Always pick .safetensors over .ckpt. The .ckpt format is a Python pickle, and pickle files can execute arbitrary code when loaded; .safetensors stores only tensor data.

Step 2 — Download from Civitai

  1. Search civitai.com for your model.
  2. Check the Base Model field (SD 1.5, SDXL, Flux) — this determines compatibility with your workflow.
  3. Note any trigger words listed on the page (you'll need these for LoRAs).
  4. Click Download and grab the .safetensors file.

Some models require a free account. For bulk or CLI downloads, generate an API key at civitai.com/user/account and use:

wget --content-disposition "https://civitai.com/api/download/models/{modelVersionId}?token=YOUR-TOKEN"

Step 3 — Drop Files in the Right Folder

Move each file into the matching subfolder inside ComfyUI/models/:

Model Type        Folder
----------------  ----------------------
Checkpoints       models/checkpoints/
LoRAs             models/loras/
VAEs              models/vae/
Embeddings        models/embeddings/
ControlNet        models/controlnet/
Upscale models    models/upscale_models/

This is the #1 beginner pain point. Wrong folder = model doesn't appear in the node dropdown.
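The folder mapping above can be wrapped in a small helper so files never land in the wrong place. This is just a sketch: `COMFYUI_DIR` is an assumed environment variable pointing at your install, and the type names are ones chosen here, not ComfyUI terms.

```shell
#!/bin/sh
# Move a downloaded model file into the matching ComfyUI subfolder.
# COMFYUI_DIR defaults to ./ComfyUI if unset.
place_model() {
  file="$1"; kind="$2"
  case "$kind" in
    checkpoint) sub="checkpoints" ;;
    lora)       sub="loras" ;;
    vae)        sub="vae" ;;
    embedding)  sub="embeddings" ;;
    controlnet) sub="controlnet" ;;
    upscale)    sub="upscale_models" ;;
    *) echo "unknown type: $kind" >&2; return 1 ;;
  esac
  mkdir -p "${COMFYUI_DIR:-ComfyUI}/models/$sub"
  mv "$file" "${COMFYUI_DIR:-ComfyUI}/models/$sub/"
}

# Example: place_model ~/Downloads/myLora.safetensors lora
```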

Already have models from A1111 or Forge? Rename extra_model_paths.yaml.example to extra_model_paths.yaml in your ComfyUI root and point it at your existing model folders. No re-downloading needed.
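For reference, the relevant part of extra_model_paths.yaml looks roughly like this; the paths below are placeholders, so adjust base_path and the subfolder names to your actual A1111/Forge install:

```yaml
a111:
  base_path: /path/to/stable-diffusion-webui/
  checkpoints: models/Stable-diffusion
  vae: models/VAE
  loras: |
    models/Lora
    models/LyCORIS
  embeddings: embeddings
  controlnet: models/ControlNet
```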

Step 4 — Refresh and Load Models

If ComfyUI is already running, click Refresh in the top menu bar (or press R). New models appear in the dropdown menus of matching nodes.

Load a checkpoint: Right-click the canvas → Add Node → Loaders → Load Checkpoint. Select your model from the ckpt_name dropdown. The outputs wire to KSampler (MODEL), CLIP Text Encode (CLIP), and VAE Decode (VAE).

Add a LoRA: Add Node → Loaders → Load LoRA. Wire it between the checkpoint and your text encoders. Set strength to 0.7–1.0 and add the LoRA's trigger words to your positive prompt.
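Under the hood, that wiring is just references between nodes. In ComfyUI's API-format JSON (what you get from an API-format export), a checkpoint feeding a LoRA loader looks roughly like this; the node IDs and filenames are placeholders:

```json
{
  "4": {
    "class_type": "CheckpointLoaderSimple",
    "inputs": { "ckpt_name": "someModel.safetensors" }
  },
  "10": {
    "class_type": "LoraLoader",
    "inputs": {
      "lora_name": "someLora.safetensors",
      "strength_model": 0.8,
      "strength_clip": 0.8,
      "model": ["4", 0],
      "clip": ["4", 1]
    }
  }
}
```

Downstream nodes then take their MODEL and CLIP inputs from node "10" instead of "4", which is exactly what "wire it between the checkpoint and your text encoders" means.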

Import a Civitai workflow: Drag a workflow JSON or a PNG with embedded metadata straight onto the canvas. If nodes are missing, install ComfyUI Manager and hit "Install Missing Custom Nodes."

Or skip the manual wiring — LocalForge AI ships with ComfyUI pre-configured so you can jump straight to generating.

Verify It Works

Hit Queue Prompt (or Ctrl+Enter). If everything's wired correctly, you'll see an image in your Preview Image node within 10–60 seconds depending on resolution and GPU.
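Generated images are also saved to disk, not just shown in the preview. A quick way to confirm from a terminal, assuming the default ComfyUI/output/ save location:

```shell
# List the three most recent outputs, or note that none exist yet
files=$(ls -t ComfyUI/output/*.png 2>/dev/null | head -n 3)
if [ -n "$files" ]; then echo "$files"; else echo "no outputs yet"; fi
```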

Troubleshooting

  • Model not in dropdown: Wrong folder. Check the table in Step 3, then press R to refresh. Still missing? Restart ComfyUI entirely.
  • "Failed to load checkpoint": Corrupted download or architecture mismatch. Re-download the file from Civitai and verify the base model matches your workflow.
  • CUDA out of memory: Reduce resolution or batch size. Launch with the --lowvram flag. FP8 quantized models can cut VRAM usage roughly in half.
  • LoRA has no effect: Trigger words are missing from your prompt. Also check that the LoRA's base model matches your checkpoint — SD 1.5 LoRA + SDXL checkpoint = broken output.
  • Missing custom nodes in imported workflow: Install ComfyUI Manager → "Install Missing Custom Nodes" → restart ComfyUI.


FAQ

Can I use any Civitai model in ComfyUI?
Yes, as long as the file is .safetensors or .ckpt format. But the model's base architecture (SD 1.5, SDXL, Flux) must match the other components in your workflow. Mixing architectures produces errors or garbage output.
Where do I put Civitai models in ComfyUI?
Each type has its own folder inside ComfyUI/models/. Checkpoints go in models/checkpoints/, LoRAs in models/loras/, VAEs in models/vae/. Wrong folder means the model won't show up in the node dropdown.
Do I need a Civitai account to download models?
Not always. Many models are publicly downloadable. Some creators restrict access to logged-in users — a free Civitai account handles that.
How much VRAM do I need for Civitai models in ComfyUI?
8 GB minimum for SD 1.5 and SDXL models. 24 GB recommended for Flux. FP8 quantized versions cut VRAM needs roughly in half.
Can I share models between ComfyUI and A1111?
Yes. Edit extra_model_paths.yaml in your ComfyUI root folder to point at A1111's model directories. No need to duplicate multi-gigabyte files.