LocalForge AI

SD.Next — The Multi-Backend AI Image Generator for Power Users

SD.Next is an all-in-one Stable Diffusion WebUI that supports multiple backends (Original and Diffusers), runs on NVIDIA, AMD, and Intel GPUs, and handles models from SD 1.5 to Flux to video generators in one interface. It's the frontend you pick when you've outgrown Forge's model support or need AMD/Intel GPU compatibility that other tools don't offer. The tradeoff: extension compatibility with A1111 isn't guaranteed, and the learning curve is steeper than form-based alternatives.

Runs Locally · Open Source · NSFW Allowed

What SD.Next Actually Is

SD.Next is vladmandic's fork of the AUTOMATIC1111 WebUI that diverged significantly from the original. It uses two backends — the original LDM/A1111 codebase for SD 1.x/2.x models, and a Hugging Face Diffusers backend (the default) that supports everything else: SDXL, Flux.1, SD3/SD3.5, Stable Cascade, PixArt, LTX Video, Hunyuan, and newer models as they ship. If a diffusion model exists, SD.Next probably supports it already or will within weeks of release. The repo has 7,000+ stars, 430+ contributors, and pushes 300+ commits between releases. Development velocity is high.
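The dual-backend routing described above can be sketched as a simple dispatch: legacy SD 1.x/2.x architectures go to the original LDM codebase, everything else to Diffusers. This is an illustrative sketch only — the function and architecture names are hypothetical, not actual SD.Next identifiers.

```python
# Hypothetical sketch of SD.Next's backend routing idea.
# Names here are illustrative, not real SD.Next code.
LEGACY_ARCHS = {"sd-1.x", "sd-2.x"}

def pick_backend(arch: str) -> str:
    """Route SD 1.x/2.x to the original LDM backend; everything else to Diffusers."""
    return "original" if arch in LEGACY_ARCHS else "diffusers"

print(pick_backend("sd-1.x"))   # original
print(pick_backend("flux.1"))   # diffusers
```

In the actual UI this choice is the settings toggle mentioned below, not something you script — the sketch just shows why one install can cover both model families.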

What It's Like to Use

If you're coming from Forge or A1111, the interface looks familiar — it's the same Gradio-based layout with tabs for txt2img, img2img, extras, and settings. The difference shows up when you open the model dropdown and see architectures that don't exist in other frontends. Switching between backends is a settings toggle, not a separate install. Your first session will probably involve picking a backend (Diffusers for most use cases), selecting a model, and adjusting quantization settings if you're on limited VRAM. The settings panel is dense — significantly more options than Forge — and you'll want to spend 30 minutes exploring what's there before generating.

What It Does Well

Model coverage is unmatched in a form-based UI. SD.Next runs SD 1.5, SDXL, SD3/SD3.5, Flux.1, Stable Cascade, PixArt, Kandinsky, DeepFloyd IF, LTX Video, WAN, Hunyuan Video, SeedVR2, and more. Forge supports a subset. ComfyUI supports everything but through nodes. SD.Next gives you the broadest model support in a traditional settings-and-buttons interface. If you want to test a new model architecture without learning node graphs, this is where you go.

The quantization engine (SDNQ) is a standout feature. SD.Next offers 4-bit quantization with what the devs describe as "nearly zero-loss quality" — you can run models that normally need 16-24 GB VRAM on cards with 8-12 GB. On-the-fly quantization testing lets you compare precision levels without re-downloading models. For anyone running on 8 GB cards, this is the difference between "can't load the model" and "running fine at 90%+ quality."
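The VRAM savings are straightforward arithmetic: weight memory scales with bits per parameter. The sketch below is illustrative back-of-envelope math, not SDNQ internals, and ignores activations, the text encoder, and runtime overhead — which is why real-world requirements land above the raw weight figure.

```python
# Illustrative arithmetic only -- not SDNQ internals.
def weight_memory_gb(n_params: float, bits_per_weight: int) -> float:
    """Approximate memory for model weights alone (no activations or overhead)."""
    return n_params * bits_per_weight / 8 / 1e9

flux_params = 12e9  # Flux.1 dev has roughly 12B parameters
print(f"fp16:  {weight_memory_gb(flux_params, 16):.1f} GB")  # 24.0 GB
print(f"4-bit: {weight_memory_gb(flux_params, 4):.1f} GB")   # 6.0 GB
```

Cutting weights from 16 bits to 4 bits quarters the footprint, which is how a model that wants a 24 GB card fits on 8-12 GB once overhead is added back.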

AMD and Intel GPU support actually works here. Forge is NVIDIA-only for practical purposes. ComfyUI has experimental AMD support. SD.Next ships with tested ROCm support (including Windows as of late 2025), Intel Arc/IPEX, DirectML, OpenVINO, and ZLUDA. If you're on an AMD RX 7900 XTX or an Intel Arc A770, SD.Next is your best option for a feature-complete Stable Diffusion frontend.

Built-in model downloading from Civitai and Hugging Face saves the copy-paste-URL-into-browser dance. Browse, select, download — all from the UI. It's a quality-of-life feature, but when you're testing 5 different checkpoints in a session, it matters.

Video generation is integrated, not bolted on. LTX Video, WAN 2.2, Hunyuan Video — these run through the same interface as image generation. Select the model, set parameters, generate. In Forge, video requires separate extensions. In ComfyUI, it requires building a video pipeline from nodes. SD.Next makes it a dropdown selection.

What It Gets Wrong

A1111 extension compatibility is incomplete. Extensions written for AUTOMATIC1111 don't always work because SD.Next rewrites parts of the backend. Roop, AnimateDiff, and some popular extensions have documented incompatibilities. If your workflow depends on specific A1111 extensions, verify compatibility before switching. Forge maintains better A1111 extension support.

The settings surface area is large. SD.Next exposes more configuration than most users need — backend selection, compilation modes (Triton, StableFast, DeepCache, OneDiff), quantization settings, attention mechanisms, and cross-attention options. For power users, this is the appeal. For everyone else, it's noise. There's no "just works" preset that hides the complexity the way Fooocus does.

Performance on low-end hardware isn't its strength. Forge's aggressive model offloading makes it the faster option on 6-8 GB cards for basic SDXL generation. SD.Next performs better at larger batch sizes and with advanced features, but for the common case of generating one image on a mid-range GPU, Forge wins. The quantization features close this gap, but they require manual configuration.

Installation requires Git and Python. The setup is: clone the repo, run the installer script, wait for dependencies. It's not difficult for anyone comfortable with a terminal, but it's more friction than Forge's batch installer or Fooocus's extract-and-run approach. Don't install to OneDrive folders, admin-restricted paths, or hidden directories — the FAQ specifically warns against this.
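For reference, the install sequence looks roughly like this. Script names and flags follow the SD.Next README at the time of writing — verify against the current docs before running, as they may change.

```shell
# Clone into a plain local path (not OneDrive, admin-restricted, or hidden dirs)
git clone https://github.com/vladmandic/automatic
cd automatic

# First run creates a venv and installs dependencies, then starts the UI.
# Linux/macOS:
./webui.sh
# Windows:
#   .\webui.bat
```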

Hardware Reality Check

SD.Next supports the widest range of hardware of any Stable Diffusion frontend. NVIDIA GPUs with 4+ GB VRAM via CUDA. AMD GPUs with 8+ GB VRAM via ROCm (Linux and Windows). Intel Arc GPUs via IPEX/OneAPI. Even CPU-only generation via OpenVINO, though it's very slow.

For practical use: NVIDIA RTX 3060 12 GB is the sweet spot for most users — it runs SDXL natively, Flux with quantization, and handles batches without crashing. On AMD, the RX 7900 XTX (24 GB) matches an RTX 4090 in VRAM capacity, though generation is ~20-30% slower due to ROCm overhead. Intel Arc A770 (16 GB) works for SDXL, but Flux support is still experimental. Newer models like SeedVR2 (6.4-16 GB model files) aren't designed for low-end hardware — plan for 12+ GB VRAM.

Who This Is Actually For

If you're on an AMD or Intel GPU, SD.Next is your primary option. No other form-based frontend offers reliable non-NVIDIA support. The ROCm backend works, the Intel backend works, and the developers actively maintain both. You're not a second-class citizen here.

If you want to test every new diffusion model without switching tools, SD.Next's model coverage and rapid update cadence mean new architectures ship within weeks of release. Forge waits longer. ComfyUI gets them faster but requires node setup. SD.Next strikes the middle ground — broad model support in a conventional UI.

If you're looking for the simplest local setup, SD.Next isn't it. Fooocus is simpler. Forge is easier. LocalForge AI is one click. SD.Next is for people who already know what backends and quantization mean and want the tool that gives them the most control without going full node-based.

Alternatives Worth Considering

Forge is the better choice if you're on NVIDIA, want A1111 extension compatibility, and don't need exotic model support — it's faster on low-end hardware and easier to set up. ComfyUI gives you full pipeline control and the fastest access to bleeding-edge models, but through a node editor that requires learning a new workflow paradigm. Fooocus is the opposite end of the spectrum — minimal interface, SDXL-focused, zero configuration.

Frequently Asked Questions

Is SD.Next free?
Yes. Fully open source, no cost, no account required. Download from GitHub, run the installer, and generate. The models are free too — SDXL checkpoints from Civitai, Flux.1 Schnell, and community fine-tunes all cost nothing. Your only cost is the hardware.
SD.Next vs Forge — which should I pick?
If you're on an NVIDIA GPU and primarily use SDXL or SD 1.5, Forge is faster and simpler. If you need Flux support, AMD/Intel GPU support, SD3.5, video generation, or 4-bit quantization, SD.Next covers more ground. Forge has better A1111 extension compatibility. SD.Next has broader model and hardware support. Pick based on what you actually need.
Does SD.Next work on AMD GPUs?
Yes — and it's one of the best options available for AMD users. ROCm support works on both Linux and Windows (added late 2025). You'll need 8+ GB VRAM. An RX 7900 XTX handles SDXL and quantized Flux well. Performance is 20-30% slower than equivalent NVIDIA hardware, but it works reliably.
Can I use A1111 extensions with SD.Next?
Some work, some don't. SD.Next uses a different backend implementation than AUTOMATIC1111, so extensions that depend on A1111-specific code may fail. Popular extensions like roop and AnimateDiff have documented incompatibilities. Check the GitHub issues before committing to SD.Next if your workflow relies on specific extensions.
What models does SD.Next support?
Virtually everything: SD 1.5, SDXL, SD3/SD3.5, Flux.1, Stable Cascade, PixArt, Kandinsky, DeepFloyd IF, LTX Video, WAN, Hunyuan Video, SeedVR2, and more. The Diffusers backend (default) adds new model support rapidly — usually within weeks of a model's public release. This is the broadest model support of any form-based Stable Diffusion frontend.

Details

Website https://github.com/vladmandic/automatic
Runs Locally Yes
Open Source Yes
NSFW Allowed Yes

Supported Models

Stable Diffusion 1.5
SDXL 1.0
Flux.1 Dev