Flux NSFW Models in ComfyUI
Drop single-file FP8 checkpoints into models/checkpoints/ and load them with Load Checkpoint; split Flux dev UNETs belong in models/diffusion_models/ with Load Diffusion Model, not the checkpoint node. Point VAELoader at ae.safetensors under models/vae/, wire DualCLIPLoader with type=flux, and keep CFG near 1.0 on official FP8 dev. NSFW community merges usually want DPM++ 2M with the beta scheduler (Fluxmania: sgm_uniform).
The Models
1. Fluxed Up 7.1
Top Pick. FP16, FP8 single file, or GGUF — all route cleanly once you match the loader to the file type.
Architecture: Flux.1 D · VRAM: 12 GB+ (GGUF lower) · ComfyUI fit: Excellent
View on CivitAI →
2. Persephone 2.0
GGUF available; OOM reports usually mean fp8 T5 or quant — not a broken graph.
Architecture: Flux.1 D · VRAM: 12 GB+ · ComfyUI fit: Excellent
View on CivitAI →
3. Fluxmania (Kreamania)
Pair with sgm_uniform scheduler — deviates from generic Flux defaults.
Architecture: Flux.1 D · VRAM: 12 GB+ · ComfyUI fit: Strong
View on CivitAI →
4. aidmaNSFWunlock LoRA
Lightweight LoRA add-on — keep base Flux dev wiring, update Comfy if LoRA nodes misbehave.
Architecture: Flux LoRA · VRAM: Minimal overhead · ComfyUI fit: Excellent
View on CivitAI →
5. Flux Unchained
Standard split-FLUX expectations — same CLIP/VAE pitfalls as other dev merges.
Architecture: Flux.1 D · VRAM: 12 GB+ · ComfyUI fit: Strong
View on CivitAI →
6. NSFW Master Flux
Merged LoRA-in-checkpoint — still Flux dev semantics; don't SDXL your CFG.
Architecture: Flux.1 D (merged) · VRAM: 12 GB+ · ComfyUI fit: Strong
View on CivitAI →
7. CHROMA
Schnell-family — verify sampler defaults from the HF card, not Flux dev presets.
Architecture: Flux.1 Schnell · VRAM: 12 GB+ · ComfyUI fit: Strong
View on CivitAI →
Why This Matters
You already picked a Flux NSFW checkpoint or LoRA — the failure mode now is wiring: wrong node for the file type, wrong CLIP type string, or a VAE swap that nukes color. This page is the ComfyUI routing table so you stop debugging black outputs and OOMs that are just loader mistakes.
Loader Setup
Flux NSFW packs ship as either one bundled file or split components. The #1 mistake: feeding a UNET-only file into Load Checkpoint. That node expects a full checkpoint; UNET tensors belong on Load Diffusion Model with separate CLIP + VAE.
| Pattern | Where files go | Node(s) |
|---|---|---|
| FP8 single-file checkpoint | models/checkpoints/ | Load Checkpoint |
| Split Flux dev (UNET + CLIP + VAE files) | models/diffusion_models/ (+ CLIP/VAE paths below) | Load Diffusion Model + DualCLIPLoader + VAELoader |
| GGUF quantized UNET | Same as split; requires the ComfyUI-GGUF custom node stack | UnetLoaderGGUF (ComfyUI-GGUF) + DualCLIPLoader + VAELoader |
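The routing rule in the table can be sketched as a key check: before wiring anything, peek at a file's tensor key prefixes to see whether it is a bundled checkpoint or a UNET-only dump. The prefixes below are illustrative of common Flux exports, not an exhaustive list — verify against your actual file (e.g. with safetensors' `safe_open`):

```python
def pick_flux_loader(tensor_keys):
    """Suggest a ComfyUI loader node from a file's tensor key prefixes.

    The prefix lists are a heuristic based on common Flux exports;
    check them against what your file actually contains.
    """
    has_unet = any(k.startswith(("model.diffusion_model.", "double_blocks.")) for k in tensor_keys)
    has_clip = any(k.startswith(("text_encoders.", "conditioner.")) for k in tensor_keys)
    has_vae = any(k.startswith(("first_stage_model.", "vae.")) for k in tensor_keys)

    if has_unet and (has_clip or has_vae):
        return "Load Checkpoint"        # bundled single-file checkpoint
    if has_unet:
        return "Load Diffusion Model"   # UNET-only: needs separate CLIP + VAE
    return "not a Flux checkpoint/UNET file"

# A UNET-only dump routes to Load Diffusion Model:
print(pick_flux_loader(["double_blocks.0.img_attn.qkv.weight"]))  # → Load Diffusion Model
```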
For GGUF: install ComfyUI-GGUF (city96), then pip install gguf in the ComfyUI Python env so the loader can read tensors without hand-converting.
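As a setup sketch (paths assume a default ComfyUI checkout; adjust for portable builds with embedded Python):

```shell
# From the ComfyUI root: fetch city96's GGUF loader nodes
cd custom_nodes
git clone https://github.com/city96/ComfyUI-GGUF
# Install gguf into the SAME Python env that runs ComfyUI,
# or UnetLoaderGGUF won't be able to read the quantized tensors
python -m pip install --upgrade gguf
```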
CLIP + VAE
Use DualCLIPLoader: set type to flux, clip_name1 to clip_l, clip_name2 to T5 XXL (grab the fp8 T5 if you're VRAM-pinched on 8–12 GB). Point VAELoader at ae.safetensors under models/vae/ — if colors go neon or skin turns plastic, you almost always grabbed the wrong VAE file or an old duplicate in a subfolder.
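In ComfyUI's API (prompt) JSON format, that wiring looks roughly like the fragment below — node IDs are arbitrary, and the filenames are the commonly distributed ones (they may differ on your disk):

```json
{
  "10": {
    "class_type": "DualCLIPLoader",
    "inputs": {
      "clip_name1": "clip_l.safetensors",
      "clip_name2": "t5xxl_fp8_e4m3fn.safetensors",
      "type": "flux"
    }
  },
  "11": {
    "class_type": "VAELoader",
    "inputs": { "vae_name": "ae.safetensors" }
  }
}
```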
Sampler Settings
Official FP8 Flux dev is tuned for CFG ~1.0 — cranking CFG like SDXL is how you get muddy contrast. For community NSFW merges, most workflows stick to DPM++ 2M with a beta scheduler; Fluxmania specifically lines up with sgm_uniform. Match the checkpoint card: if the author calls out a scheduler, trust it over generic defaults.
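As a KSampler starting point in the same API JSON shape — step count and the node links (`model`, `positive`, etc.) are placeholders, and the checkpoint card wins wherever it disagrees:

```json
{
  "3": {
    "class_type": "KSampler",
    "inputs": {
      "cfg": 1.0,
      "sampler_name": "dpmpp_2m",
      "scheduler": "beta",
      "steps": 20,
      "denoise": 1.0,
      "seed": 0,
      "model": ["1", 0],
      "positive": ["4", 0],
      "negative": ["5", 0],
      "latent_image": ["6", 0]
    }
  }
}
```

For Fluxmania, swap `"scheduler": "beta"` for `"sgm_uniform"`.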
Model Compatibility
| Model | ComfyUI fit | Notes |
|---|---|---|
| Fluxed Up 7.1 | Excellent | FP16 / FP8 single file or GGUF — pick one stack and stay consistent |
| Persephone 2.0 | Excellent | GGUF available; watch VRAM — OOM reports usually mean fp8 T5 or quant, not “bad model” |
| Fluxmania | Strong | Use sgm_uniform scheduler with its workflow expectations |
| aidmaNSFWunlock | Excellent | LoRA add-on — update ComfyUI if strength sliders do nothing |
| Flux Unchained | Strong | Same split-FLUX wiring as other dev merges |
| NSFW Master Flux | Strong | Merged LoRA-in-checkpoint — still Flux dev semantics for CLIP/VAE |
| CHROMA | Strong | Schnell-family behavior — verify sampler defaults against the HF card, not Flux dev presets |
GGUF for Low VRAM
On 8–12 GB, prioritize quantized UNET (GGUF) + fp8 T5, batch size 1, and avoid chaining huge hires fixes until the base pass is stable. If you're tired of manual quant picks and path hygiene, LocalForge AI gives you a managed local stack so you're not babysitting pip install gguf and node versions between updates — ComfyUI stays ComfyUI, minus the archaeology.
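Back-of-envelope weight sizes show why quantization matters on 8–12 GB cards. Flux.1 dev's transformer is roughly 12B parameters; the bits-per-weight figures for the GGUF quants below are approximate (block-wise quants carry per-block scales, so Q8_0 lands slightly over 8 bpw):

```python
PARAMS = 12e9  # Flux.1 dev transformer, ~12B parameters (approximate)

# Approximate bits per weight; GGUF quants include per-block scale overhead
BPW = {"fp16": 16.0, "fp8": 8.0, "Q8_0": 8.5, "Q4_K_S": 4.5}

for name, bits in BPW.items():
    gib = PARAMS * bits / 8 / 1024**3
    print(f"{name:7s} ~{gib:4.1f} GiB of weights (before activations, T5, VAE)")
```

The fp16 row lands around 22 GiB for the UNET alone — which is why the practical 8–12 GB path is a Q4/Q5-class GGUF UNET plus fp8 T5.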
Common Problems
| Symptom | Likely fix |
|---|---|
| “Wrong model type” / load error on UNET | You used Load Checkpoint on a diffusion-model-only file — switch to Load Diffusion Model + CLIP + VAE |
| OOM on 12 GB or less | fp8 T5, GGUF UNET, batch 1; drop resolution before you drop steps |
| DualCLIPLoader error | Set type=flux; verify clip_l + T5 XXL filenames match what's on disk |
| LoRA does nothing | Update ComfyUI; confirm LoRA format matches Flux dev expectations |
| Wrong colors / gray cast | Wrong ae.safetensors — replace with the VAE bundled with the checkpoint family |
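For the "wrong colors / load error" class of problems, check the expected files actually exist before blaming the model. A quick sanity sketch — the paths and filenames below are common defaults, not universal; adjust to your install:

```python
from pathlib import Path

# Common default locations — adjust filenames to what you actually downloaded
EXPECTED = [
    "models/vae/ae.safetensors",
    "models/clip/clip_l.safetensors",
    "models/clip/t5xxl_fp8_e4m3fn.safetensors",
]

def missing_files(comfy_root, expected=EXPECTED):
    """Return the expected model files that are absent under a ComfyUI root."""
    root = Path(comfy_root)
    return [rel for rel in expected if not (root / rel).exists()]

# Example: report anything missing under ./ComfyUI
for rel in missing_files("ComfyUI"):
    print(f"missing: {rel}")
```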
Verdict
Treat Flux NSFW in ComfyUI as three decisions: which file shape you downloaded (single vs split), whether you need GGUF for VRAM, and whether your sampler profile is dev-official (CFG ~1) or community-tuned (DPM++ 2M / beta / sgm_uniform). Nail loaders + DualCLIPLoader + ae.safetensors first — everything else is tuning. For model picks and download links, start at Best Flux NSFW Models on CivitAI; for NSFW-specific ComfyUI context, see ComfyUI for NSFW and the Flux local install guide.
