AIDMA NSFW Unlock LoRA for Flux
AIDMA NSFW Unlock is the most-downloaded Flux NSFW LoRA on CivitAI: 130.8k+ downloads, 1,380+ reviews. This page covers the trigger word, the 0.5–1.0 weight, FP16/FP8/GGUF base pairings, the sampler recipe, and the HuggingFace mirror typo to avoid.
The Models
1. AIDMA NSFW Unlock (FLUX-v0.2)
Top Pick. ~19.3 MB file, 130.8k+ downloads, 1,380+ reviews. Trigger aidmaNSFWunlock, weight 0.5–1.0 (start at 0.8). The default first download when you have Flux dev running locally and want the unlock.
Architecture: Flux.1 Dev LoRA · VRAM: Negligible vs base · Best for: Default NSFW unlock for Flux.1 Dev
2. HuggingFace mirror (shahtab/FLUXNSFWunlock)
Same file, different host. Mirror prints the trigger as aidmaNSWFunlock (typo) — use aidmaNSFWunlock from CivitAI. Treat as fallback, not default.
Architecture: Flux.1 Dev LoRA (mirror) · VRAM: Same as Model 1 · Best for: No-account or region-blocked download
3. Flux.1 Dev FP16/BF16
The base AIDMA was trained against. RTX 3090 / 4090 / A6000 territory. No drift, full LoRA effect.
Architecture: Flux.1 Dev (full precision) · VRAM: 24 GB+ · Best for: Reference-quality LoRA fidelity
4. Flux.1 Dev FP8
RTX 3060 12 GB / 3080 / 4070 lane. Same LoRA loader path as FP16; AIDMA strength may need a small bump (0.05–0.10) — community lore, verify on your seed.
Architecture: Flux.1 Dev quantized to FP8 · VRAM: 12–16 GB · Best for: Consumer-card default
5. Flux.1 Dev GGUF Q5_K_M
Documented running in <9 GB. Requires ComfyUI-GGUF custom node. Q4_0/Q4_1 floor for <10 GB; sub-Q4 trades visible quality for VRAM.
Architecture: Flux.1 Dev GGUF quantized · VRAM: 8–12 GB · Best for: 8 GB cards via ComfyUI-GGUF
6. Flux.1 Schnell (DO NOT pair)
AIDMA is trained on Dev. No public success report on Schnell, plus documented FP8 black-image issues. Use Dev (FP16/FP8/GGUF), not Schnell.
Architecture: Flux.1 Schnell · VRAM: Lower than Dev · Best for: Not recommended as AIDMA base
7. Detail Enhancer FLUX V1
~41.5k engagement, 312 reviews. No trigger required. Weight 0.5–1.0. Not an uncensor — load after AIDMA when polish is the gap.
Architecture: Flux.1 Dev LoRA · VRAM: Modest add-on · Best for: Skin/texture after AIDMA
8. Nude Style for FLUX V2
~2.9k V2 downloads, 112 reviews. No trigger; nsfw / nude tokens help. Weight ~1.0. Female-anatomy focus per creator.
Architecture: Flux.1 Dev LoRA · VRAM: Modest add-on · Best for: Female anatomy realism after AIDMA
Get the File: CivitAI vs HuggingFace
The canonical source is the CivitAI model page — FLUX NSFW unlock #674027. Filename aidmaNSFWunlock-FLUX-V0.2.safetensors, ~19.3 MB, ~130.8k–130.9k downloads, 1,380+ reviews on the v0.2 line marked "Overwhelmingly Positive." If you can log in to CivitAI, pull from there — you inherit the version stats and community context.
If CivitAI is region-blocked (UK), login-walled, or you're scripting downloads in a headless box, the HuggingFace mirror at shahtab/FLUXNSFWunlock is the fallback. Two warnings:
- The HF mirror prints the trigger as aidmaNSWFunlock (W and F transposed). It's a typo. The working trigger is aidmaNSFWunlock, from the CivitAI creator block — copy it from there or from the table above and paste, never retype.
- I can't verify the mirror is byte-identical to the CivitAI file. If you care, hash both and compare before loading.
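If you do want to compare the two downloads, a streamed SHA-256 check is enough. The two filenames below are placeholders for wherever your downloads landed, not canonical names:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk: int = 1 << 20) -> str:
    """Hash in 1 MB chunks so a large checkpoint never loads fully into RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Hypothetical local paths; point these at your actual downloads.
civitai_file = Path("aidmaNSFWunlock-FLUX-V0.2.safetensors")
mirror_file = Path("FLUXNSFWunlock-hf-mirror.safetensors")

if civitai_file.exists() and mirror_file.exists():
    same = sha256_of(civitai_file) == sha256_of(mirror_file)
    print("byte-identical" if same else "FILES DIFFER -- prefer the CivitAI copy")
```

If the hashes differ, that alone doesn't tell you which copy is corrupt; it just tells you the mirror isn't a byte-for-byte clone.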
Drop the .safetensors into your runtime's LoRA folder (ComfyUI/models/loras/ or webui-forge/models/Lora/) and you're done with the download step.
The Local Stack: Pick a Base Checkpoint by VRAM
AIDMA is a Flux.1 Dev LoRA. The LoRA itself adds negligible VRAM — your card budget is dominated by which Flux dev variant you load underneath. Three real options:
24 GB+ — Flux.1 Dev (FP16 / BF16)
The reference weights. Every CivitAI Flux LoRA assumes this is the base, so AIDMA's training and most of the example galleries match this precision. Reach for it on RTX 3090, 4090, A6000, or similar workstation cards. Lower-precision variants drift slightly but stay usable; this one doesn't drift at all.
12–16 GB — Flux.1 Dev FP8
The pragmatic default for consumer hardware in 2026. The Comfy-Org community ships an FP8 build of Flux dev (the Schnell FP8 file is the same distribution pattern), and the LoRA loader path is identical to FP16. RTX 3060 12 GB, 3080, 4070 — this is your lane.
Practical note: if AIDMA looks faint on FP8, nudge the weight up by 0.05–0.10 vs your FP16 baseline. I haven't seen a published number for this; treat it as community lore I'd verify against your own seed.
8–12 GB — Flux.1 Dev GGUF (Q5_K_M or Q8)
For 8 GB cards (RTX 3050, 3060 8 GB), GGUF quantization is what makes Flux dev fit at all. Q5_K_M is the documented sweet spot — runs in <9 GB. The multi-quant pack gives you Q2 through Q8 if you want to A/B precision.
GGUF requires the ComfyUI-GGUF custom node — Forge support exists but ComfyUI is the better-documented path. Q4_0/Q4_1 is the floor for <10 GB systems; below Q4 you're trading visible quality for VRAM, and AIDMA's effect at Q2/Q3 is something I'd test before trusting.
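The three VRAM tiers above reduce to a tiny chooser. This is just this guide's table as a function; the GB cutoffs are editorial recommendations, not limits any Flux loader enforces:

```python
def pick_flux_base(vram_gb: float) -> str:
    """Map card VRAM to the Flux.1 Dev variant this guide recommends."""
    if vram_gb >= 24:
        return "Flux.1 Dev FP16/BF16"    # reference weights, no drift
    if vram_gb >= 12:
        return "Flux.1 Dev FP8"          # consumer-card default
    if vram_gb >= 8:
        return "Flux.1 Dev GGUF Q5_K_M"  # needs the ComfyUI-GGUF custom node
    # Below 8 GB: Q4_0/Q4_1 is the floor; sub-Q4 trades visible quality for VRAM.
    return "Flux.1 Dev GGUF Q4_0/Q4_1 (expect visible quality loss)"

print(pick_flux_base(12))
```

Running it for a 12 GB card prints the FP8 recommendation, matching the "your lane" note above.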
If wiring three Flux dev variants and a LoRA loader in ComfyUI sounds like a Saturday you don't have, LocalForge AI ships a pre-configured local stack. It's one option here alongside doing it raw.
The Sampler Recipe
There's no AIDMA-specific sampler published. Use the Flux dev defaults, which the LoRA inherits cleanly:
- Sampler / scheduler: Euler + Simple is the fast pairing — occasionally blurry. Swap to dpmpp_2m + sgm_uniform if Euler looks soft.
- For realism specifically: Beta scheduler with DEIS outperforms Euler on skin and lighting.
- Steps: 20–30 covers most NSFW LoRA workflows. The Flux NF4 sampling article finds 25–30 establishes faces, ~50 is the breaking point, 100+ is wasted compute.
- CFG: 1.0. Distilled CFG: ~3.5. Same NF4 article documents these as the Flux dev defaults you don't override without a reason.
Trigger word aidmaNSFWunlock goes in the prompt; LoRA strength 0.8 is my single-number starting point. Push to 1.0 if the unlock isn't taking; drop to 0.5 if anatomy starts melting.
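The recipe above condenses into one settings block. The dict keys here are illustrative labels for this guide's numbers, not a real ComfyUI node schema:

```python
# Starting point distilled from this guide; key names are illustrative,
# not an actual ComfyUI node schema.
AIDMA_RECIPE = {
    "trigger": "aidmaNSFWunlock",  # exact spelling; the HF mirror's version is a typo
    "lora_strength": 0.8,          # push to 1.0 if weak, drop to 0.5 if anatomy melts
    "sampler": "euler",            # swap to "dpmpp_2m" if output looks soft
    "scheduler": "simple",         # pair "sgm_uniform" with dpmpp_2m
    "steps": 25,                   # 20-30 range; 50+ is wasted compute
    "cfg": 1.0,                    # Flux dev default, don't override without a reason
    "distilled_cfg": 3.5,
}

def build_prompt(subject: str, recipe: dict = AIDMA_RECIPE) -> str:
    """Prepend the trigger word so the LoRA actually activates."""
    return f"{recipe['trigger']}, {subject}"
```

The helper exists only to make the one non-negotiable step explicit: the trigger goes at the front of the prompt, copied rather than retyped.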
Companion LoRAs I Stack After AIDMA
AIDMA is an unlock, not a polish. The two LoRAs I reach for second:
- Detail Enhancer FLUX V1 — ~41.5k header engagement, 312 reviews. No trigger required. Weight 0.5–1.0. Fixes "the unlock works but skin looks plastic." It does not uncensor on its own — load it after AIDMA, never instead of it.
- Nude Style for FLUX V2 — ~2.9k downloads on V2, 112 reviews. No trigger; the tokens nsfw and nude help. Weight ~1.0. Targets the toy-like anatomy issue AIDMA alone often leaves on the table. Female-anatomy focus per the creator description.
A two-LoRA stack (AIDMA + one of these) is where I get the most gain per minute spent. Three LoRAs starts to introduce conflicts — drop individual weights to 0.5–0.8 if you push it that far. The deeper LoRA roundup lives at Best Flux NSFW LoRAs on CivitAI.
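The "drop weights to 0.5–0.8 at three LoRAs" rule can be sketched as a helper. The clamp band is this article's heuristic, not anything the LoRA loaders enforce:

```python
def stacked_weights(weights: list[float]) -> list[float]:
    """Clamp individual LoRA strengths into 0.5-0.8 once a third LoRA joins the stack."""
    if len(weights) <= 2:
        return weights  # two-LoRA stacks pass through untouched
    return [min(max(w, 0.5), 0.8) for w in weights]

print(stacked_weights([0.8, 1.0]))       # two LoRAs: unchanged
print(stacked_weights([0.8, 1.0, 0.7]))  # three LoRAs: the 1.0 gets pulled down
```

With three LoRAs, the 1.0 entry is clamped to 0.8 while anything already inside the band is left alone.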
Don't Use Flux.1 Schnell as the Base
This comes up in searches and forums often enough to warrant a section: AIDMA NSFW Unlock is trained against Flux.1 Dev, not Schnell. I couldn't find a single public success report pairing AIDMA with Schnell, and Schnell has its own documented FP8 instability — 'NoneType' object is not iterable errors and black image outputs.
Schnell's Apache-2.0 license and 4-step speed are real wins for other workflows. They aren't worth the gamble here. If you want NSFW on Flux, run Dev — FP16, FP8, or GGUF. A separate Pony-style sibling LoRA at #675975 uses trigger aidmansfwunlockfluxponystyle for Flux Pony users. That's a different asset, not a Schnell workaround.
Quick Comparison
| Stack component | Format | VRAM target | Best for | My pick |
|---|---|---|---|---|
| AIDMA NSFW Unlock (CivitAI) | LoRA, ~19.3 MB | Any | The unlock itself | ⭐ |
| HuggingFace mirror | LoRA mirror | Any | No-account / region-blocked download | |
| Flux.1 Dev FP16 | Checkpoint | 24 GB+ | Reference quality | |
| Flux.1 Dev FP8 | Checkpoint | 12–16 GB | Consumer-card default | ⭐ |
| Flux.1 Dev GGUF Q5_K_M | Checkpoint | 8–12 GB | Squeezing Flux into 8–12 GB | |
| Flux.1 Schnell | Checkpoint | Lower than Dev | Avoid as AIDMA base | |
| Detail Enhancer FLUX V1 | Companion LoRA | Modest | Skin/texture polish | |
| Nude Style for FLUX V2 | Companion LoRA | Modest | Female anatomy realism | |
What to Do Next
- Wiring this in ComfyUI? Flux NSFW models in ComfyUI — node graph, LoRA loader placement, FP8/GGUF paths.
- Want more LoRA options? Best Flux NSFW LoRAs on CivitAI — the broader roundup AIDMA tops, including Lustly.ai, X Plus, and nsfw-highress as A/B candidates.
- Decided LoRAs are too fiddly? All Flux NSFW Models — merged checkpoints (Fluxed Up, etc.) where the unlock is baked in.
Verdict
Download aidmaNSFWunlock-FLUX-V0.2.safetensors from CivitAI #674027 (or the HF mirror if you're locked out — but trust the CivitAI trigger spelling, not the mirror's). Pair it with Flux.1 Dev — FP16 if you have 24 GB, FP8 for 12–16 GB cards, GGUF Q5_K_M for 8 GB. Trigger aidmaNSFWunlock at strength 0.8, sampler Euler + Simple, 20–30 steps, CFG 1, distilled CFG ~3.5. If skin looks plastic or anatomy is off, stack Detail Enhancer V1 or Nude Style V2 after AIDMA — not instead of it. And don't waste time on Flux.1 Schnell as the base.
