LocalForge AI

RuinedFooocus — The Uncensored Fooocus Fork With Model Merging and Flux Support

RuinedFooocus is a community fork of Fooocus that strips content filters and adds support for Flux.1, Stable Diffusion 3.x, model merging, and video generation. It keeps Fooocus's dead-simple interface while expanding what models you can actually run. The tradeoff: it's based on an older Fooocus version, so you lose some newer upstream features like automatic masking and direct upscaling.

Runs Locally · Open Source · NSFW Allowed

What RuinedFooocus Actually Is

RuinedFooocus is a Python-based fork of Fooocus created by runew0lf. The original Fooocus gives you Midjourney-style simplicity on local hardware — type a prompt, get an image, no settings to fiddle with. RuinedFooocus takes that same interface and removes the content filters, then bolts on Flux.1 support, SD3/SD3.5 model compatibility, a model merger, and video generation via WAN and Hunyuan. It's not a competing project — it's Fooocus with the guardrails removed and some extra engines under the hood.

What It's Like to Use

Here's what actually happens when you set it up for the first time: you download a 7z archive, extract it, and double-click run.bat. The first launch takes a while because it auto-downloads the default model — budget 10-15 minutes depending on your internet speed. After that, a browser tab opens with a clean, minimal interface. Type a prompt, hit generate, wait 10-30 seconds depending on your GPU. The simplicity is the point — RuinedFooocus hides the advanced settings behind expandable panels so you don't have to touch them unless you want to. You'll probably spend your first session just testing different models and being surprised at how fast it is compared to heavier frontends.

What It Does Well

The Flux.1 integration is the headline feature, and it works well. RuinedFooocus auto-downloads the required encoders when you first select a Flux model — no manual dependency hunting, no separate VAE downloads, no config file editing. On a 12 GB card (RTX 3060), Flux.1 Dev generates a 1024×1024 image in about 45-60 seconds. That's slower than SDXL, but the photorealism jump is massive. If you've been running SDXL checkpoints and want to try Flux without switching to ComfyUI's node editor, this is the easiest path.

The model merger (MergeMaker) is genuinely fun to experiment with. You can blend two checkpoints at configurable ratios and test the result immediately. I've seen people merge a photorealistic base with a stylized LoRA and get results neither model produces alone. It's not a feature you'll use daily, but when you want to create something unique, it's right there in the UI instead of requiring a separate tool.
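Under the hood, checkpoint merging is usually just a weighted average of the two models' weight tensors. The sketch below illustrates the general technique with plain Python values; it is not RuinedFooocus's actual code, and the function name is hypothetical.

```python
def merge_state_dicts(a: dict, b: dict, ratio: float = 0.5) -> dict:
    """Linearly interpolate two model state dicts.

    ratio=0.0 keeps model A unchanged, ratio=1.0 keeps model B.
    Keys that exist only in A are carried over as-is.
    """
    merged = {}
    for key, weight_a in a.items():
        if key in b:
            # blend matching weights at the configured ratio
            merged[key] = (1.0 - ratio) * weight_a + ratio * b[key]
        else:
            # keep A's weights for layers B doesn't have
            merged[key] = weight_a
    return merged
```

A real merger does the same arithmetic over every tensor in a checkpoint file (often loaded via safetensors), which is why merged models are the same size as their parents: no new weights are created, existing ones are blended.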

Multiple prompt support is a small feature that saves real time. Separate your prompts with --- and RuinedFooocus generates each one in sequence. Set up 5 different prompts, walk away, come back to 5 different images. On a batch of 10 prompts at 512×512 on an RTX 3060, the whole queue finished in about 4 minutes — that's the kind of throughput that makes experimentation practical.
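The queueing behavior is easy to picture: the prompt box is split on the `---` separator and each chunk becomes its own generation job. A minimal sketch of that splitting step, assuming the separator semantics described above (this is an illustration, not RuinedFooocus's source):

```python
def split_prompts(text: str) -> list[str]:
    """Split a '---'-separated prompt box into a queue of prompts,
    trimming whitespace and dropping empty entries."""
    return [chunk.strip() for chunk in text.split("---") if chunk.strip()]

queue = split_prompts("""
a photorealistic portrait of an astronaut
---
a watercolor painting of a lighthouse
---
an isometric pixel-art city at night
""")
# each entry in `queue` is then generated in sequence
```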

LoRA support works exactly as you'd expect. Drop LoRA files into the models folder, select them from the UI, adjust strength with a slider. The thumbnail browser for models and LoRAs is a nice quality-of-life touch — you can see previews instead of guessing from filenames. Small thing, but it matters when you've got 20+ LoRAs downloaded.

Video generation through WAN and Hunyuan is the newest addition. It's experimental — output quality is inconsistent and render times are long — but the fact that you can go from text prompt to short video clip without leaving the UI is impressive for a community fork.

What It Gets Wrong

It's based on an older Fooocus codebase. This means you miss upstream features that current Fooocus users take for granted — automatic masking, image enhancement, inpainting improvements, and direct upscaling. If you've been using a recent Fooocus build, switching to RuinedFooocus feels like gaining model support but losing editing tools. That's a real tradeoff, not a minor inconvenience.

The community is small. RuinedFooocus doesn't have the documentation, tutorial ecosystem, or Discord community that Forge or ComfyUI enjoy. When you hit a problem, you're mostly on your own — GitHub issues and a handful of guides are what you've got. Expect to do some troubleshooting without step-by-step walkthroughs.

Update cadence is unpredictable. Community forks depend on maintainer availability. Upstream Fooocus changes don't automatically flow into RuinedFooocus, so feature parity drifts over time. If long-term support matters to you, the more popular frontends have larger contributor bases.

SD3/SD3.5 support exists but those models are VRAM-hungry. Running SD3.5 Large needs 16-24 GB VRAM, which prices out most consumer GPUs. SD3.5 Medium is more reasonable at 8-12 GB, but the quality gap between SD3.5 Medium and a good SDXL fine-tune is debatable. The Flux.1 support is the more practical addition for most hardware setups.

Hardware Reality Check

Minimum: NVIDIA GPU with 4 GB VRAM, 8 GB system RAM, 40 GB free disk space. At 4 GB VRAM you're limited to SD 1.5 models at 512×512 — it works, but it's slow and restrictive. You'll want at least 6 GB VRAM to run SDXL models comfortably.

Recommended: RTX 3060 12 GB or better, 16 GB system RAM, SSD with 100+ GB free (models add up fast). At 12 GB VRAM, you can run SDXL, Flux.1 Dev (with quantization), and most LoRAs without hitting out-of-memory errors. An RTX 4070 Ti Super (16 GB) opens up SD3.5 and gives you faster Flux generation — roughly 30-40 seconds per 1024×1024 image. AMD GPUs work with 8+ GB VRAM but expect slower performance and occasional compatibility issues.
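The reason Flux.1 Dev needs quantization on a 12 GB card is simple arithmetic: the model has roughly 12 billion parameters, and each parameter costs 2 bytes at fp16. A back-of-envelope estimate (weights only; text encoders, VAE, and activations add several GB on top):

```python
def weight_footprint_gb(params_billions: float, bytes_per_param: float) -> float:
    """Estimate VRAM needed for model weights alone."""
    return params_billions * 1e9 * bytes_per_param / 1024**3

FLUX_DEV_PARAMS = 12  # Flux.1 Dev is roughly 12B parameters

print(f"fp16:  {weight_footprint_gb(FLUX_DEV_PARAMS, 2):.1f} GB")  # ~22.4 GB
print(f"8-bit: {weight_footprint_gb(FLUX_DEV_PARAMS, 1):.1f} GB")  # ~11.2 GB
```

At fp16 the weights alone overflow a 12 GB card, which is why frontends ship quantized variants; at 8-bit the weights just fit, with the remaining headroom covering encoders and activations.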

Who This Is Actually For

If you're a Fooocus user who wants Flux.1 support without switching to ComfyUI's node editor, RuinedFooocus is the direct upgrade path. Same simple interface, more model options. You'll trade some upstream features for expanded model compatibility, and that's a fair deal if Flux quality matters to you.

If you're someone who likes experimenting with models — merging checkpoints, testing LoRAs, comparing architectures — the built-in MergeMaker and batch prompt features make this a great tinkering platform. It's designed for people who want to explore, not just produce.

If you're a beginner who just wants the simplest possible setup, start with regular Fooocus instead. It has better upstream support, more documentation, and you won't need the extra model support until you've outgrown SDXL. Or try LocalForge AI for a one-click install with curated models — no extraction, no bat files, generating in minutes.

Alternatives Worth Considering

Original Fooocus gives you a more polished experience with better upstream support — pick it if you don't need Flux or uncensored generation and want the simplest possible local tool. Forge is the step up if you want more control — extensions, custom samplers, better VRAM optimization — while staying in a form-based UI. ComfyUI is where you go when you want full pipeline control through a visual node editor, with support for every model architecture including Flux, Hunyuan, and newer experimental models.

Frequently Asked Questions

Is RuinedFooocus free?
Completely free and open source. You download it from GitHub, extract it, and run it. The models are free too — SDXL checkpoints from Civitai, Flux.1 Schnell (Apache 2.0 license), and community LoRAs all cost nothing. Your only expense is the hardware to run it on.
What's the difference between RuinedFooocus and regular Fooocus?
RuinedFooocus adds Flux.1 model support, SD3/SD3.5 compatibility, a model merger, video generation, and removes content filters. The tradeoff is that it's based on an older Fooocus codebase, so you lose newer upstream features like automatic masking, image enhancement, and direct upscaling. If you need those editing tools, stick with original Fooocus.
Can RuinedFooocus run Flux models?
Yes — that's one of its main additions. It auto-downloads the required encoders when you first select a Flux model, so there's no manual dependency setup. You'll need at least 8 GB VRAM for Flux.1 Schnell (the fast version) or 12 GB for Flux.1 Dev (the quality version). Generation is slower than SDXL but the photorealism is a clear step up.
What GPU do I need for RuinedFooocus?
Minimum is an NVIDIA GPU with 4 GB VRAM for basic SD 1.5 generation. For SDXL — which is what most people run — you want 6-8 GB VRAM. For Flux.1 models, 12 GB is the practical starting point. An RTX 3060 12 GB hits the sweet spot for price-to-capability. AMD GPUs work with 8+ GB VRAM but expect slower speeds.
Is RuinedFooocus safe to install?
It's open source on GitHub, so you can inspect every line of code. The install is a 7z archive — extract and run, no installer. It downloads AI models on first launch, which is normal. As with any open-source tool, download only from the official repository (runew0lf/RuinedFooocus on GitHub) to avoid modified versions.

Details

Website: https://github.com/runew0lf/RuinedFooocus
Runs Locally: Yes
Open Source: Yes
NSFW Allowed: Yes

Supported Models

SDXL 1.0
Juggernaut XL