Inpainting With Stable Diffusion: Edit AI Images Locally Without Limits
How to use inpainting and outpainting in Stable Diffusion to edit, fix, and extend AI-generated images on your local machine. Fix faces, hands, backgrounds — completely uncensored and offline.
Your Image Is 90% Perfect. Inpainting Fixes the Rest.
You generated an amazing image — but one hand has six fingers, the face is slightly off, or the background doesn't match your vision. On cloud platforms, you regenerate and pray. Locally, you paint a mask over the problem area and fix just that part.
Inpainting is the most underused feature in Stable Diffusion. Once you learn it, your hit rate goes from 1-in-20 to nearly every image being usable.
How Inpainting Works in Forge
- Go to img2img → Inpaint tab in Forge UI
- Upload or send your image from txt2img
- Paint a mask over the area you want to change (the painted region is what gets regenerated; in mask-image terms, white = change, black = keep)
- Write a prompt describing what the masked area should become
- Set denoising strength: 0.3–0.5 for subtle fixes, 0.6–0.8 for major changes
- Set "Inpaint area" to "Only masked" for better detail in small regions
- Hit Generate
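Under the hood, the mask you paint is just a grayscale image where white marks the region to regenerate. A minimal sketch of that convention, assuming PIL (Pillow) and arbitrary example coordinates:

```python
from PIL import Image, ImageDraw

# Build a binary inpainting mask the way the UI does internally:
# white (255) = regenerate this region, black (0) = keep the original.
# The canvas size and rectangle below are example values.
width, height = 512, 512
mask = Image.new("L", (width, height), 0)        # start all-black: keep everything
draw = ImageDraw.Draw(mask)
draw.rectangle([200, 300, 320, 420], fill=255)   # "paint" the area to change

# Denoising strength controls how far the masked area may drift
# from the original (values from the steps above):
subtle_fix, major_change = 0.35, 0.7

assert mask.getpixel((250, 350)) == 255          # inside the mask: regenerated
assert mask.getpixel((50, 50)) == 0              # outside: untouched
```

This same white-means-change convention is what inpainting pipelines and dedicated inpainting checkpoints expect, so a mask built this way can be fed to them directly.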
Common Inpainting Use Cases
Fixing Hands & Fingers
The #1 use case. Mask the hand area, set denoising to 0.5–0.7, prompt with "perfect hand, five fingers, detailed fingers, natural pose". Use ADetailer for automated hand fixing across batches.
Improving Faces
Mask the face. Lower denoising (0.3–0.4) to keep the overall look while fixing details. Or use ADetailer which auto-detects faces and applies inpainting — one of the most useful Forge extensions.
Changing Clothing / Objects
Mask the clothing area, prompt with the new outfit. Denoising 0.6–0.8 for major changes. "Red evening gown" replaces whatever was there before while keeping the body pose and background intact.
Background Replacement
Mask everything except the subject. Prompt with your new background. "Mountain landscape at sunset" or "modern apartment interior" — the subject stays while the world changes around them.
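"Mask everything except the subject" is easiest to get right by masking the subject once and then inverting, rather than hand-painting the whole background. A small sketch, assuming PIL and a rough example silhouette:

```python
from PIL import Image, ImageDraw, ImageOps

# Background swap = inpaint everything EXCEPT the subject.
# Start from a subject mask (white = subject), then invert it so the
# background becomes the region to regenerate. Sizes are example values.
subject_mask = Image.new("L", (512, 512), 0)
ImageDraw.Draw(subject_mask).ellipse([156, 96, 356, 416], fill=255)  # subject silhouette

background_mask = ImageOps.invert(subject_mask)   # white = background = inpaint target

assert background_mask.getpixel((10, 10)) == 255  # corner is background: regenerated
assert background_mask.getpixel((256, 256)) == 0  # subject center: preserved
```

Forge's "Inpaint not masked" toggle does this inversion for you, so in practice you can paint the subject and flip that setting instead.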
Outpainting: Extend Beyond Borders
Outpainting adds content outside the original image frame:
- In Forge: Use img2img with "Resize and fill" — set a larger canvas size, and SD generates the new areas
- In ComfyUI: Use the "Pad Image for Outpainting" node + an inpaint workflow for more control
- Best results: Extend one direction at a time (e.g., add 256px to the right, then bottom)
- Prompt matters: Describe what should be in the extended area for coherent results
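The padding step above amounts to enlarging the canvas and masking only the new strip. A sketch of that mechanic, assuming PIL; the helper name, fill color, and 256 px extension are illustrative choices, not a fixed API:

```python
from PIL import Image

def pad_for_outpaint(img, right=256):
    """Extend the canvas to the right; return (padded_image, mask) where
    white mask pixels mark the new, to-be-generated area."""
    w, h = img.size
    padded = Image.new("RGB", (w + right, h), (127, 127, 127))  # neutral gray fill
    padded.paste(img, (0, 0))
    mask = Image.new("L", (w + right, h), 0)
    mask.paste(255, (w, 0, w + right, h))         # only the new strip is regenerated
    return padded, mask

original = Image.new("RGB", (512, 512), (30, 90, 160))
padded, mask = pad_for_outpaint(original)

assert padded.size == (768, 512)
assert mask.getpixel((600, 256)) == 255           # new area: generated
assert mask.getpixel((100, 256)) == 0             # original area: kept
```

Extending one direction at a time, as recommended above, is just calling this kind of pad repeatedly with a fresh prompt for each new strip.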
The #1 frustration with inpainting? Getting ADetailer installed and configured correctly. It depends on specific model files, extension versions, and Forge compatibility. One wrong version and it breaks your entire UI. LocalForge AI ships with ADetailer and inpainting pre-tested — fix hands and faces on your very first generation.
Pro Tips
- Mask padding: Set "Only masked padding, pixels" to 32–64 so the inpainted region blends seamlessly with its surroundings
- "Only masked" + high resolution: When inpainting small areas like hands, "Only masked" at 512×512 gives much better detail than full-image inpainting
- Use the same model: Inpaint with the same checkpoint that generated the original for the most consistent style
- Inpainting models: Some base models have dedicated inpainting variants (e.g., the SD 1.5 and SDXL inpainting checkpoints) trained to handle mask edges better
- Multiple passes: Fix one thing at a time. Hands first, then face, then background. Compounding changes in one pass often causes artifacts.
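The "Only masked" trick works by cropping a padded box around the mask, inpainting that crop at full resolution (e.g., 512×512), then pasting it back. A sketch of just the crop-box math, assuming PIL; the helper name is hypothetical and `pad` mirrors the "Only masked padding" setting:

```python
from PIL import Image, ImageDraw

def masked_crop_box(mask, pad=32):
    """Padded bounding box around the white mask pixels, clamped to the canvas."""
    left, top, right, bottom = mask.getbbox()     # tight box around non-zero pixels
    w, h = mask.size
    return (max(left - pad, 0), max(top - pad, 0),
            min(right + pad, w), min(bottom + pad, h))

mask = Image.new("L", (512, 512), 0)
ImageDraw.Draw(mask).rectangle([200, 300, 320, 420], fill=255)  # example hand region

box = masked_crop_box(mask)
crop = mask.crop(box)                             # this crop gets upscaled + inpainted
assert box == (168, 268, 353, 453)
```

Because only this small crop is rendered at the full inpaint resolution, a hand that occupies 100 px of the original image effectively gets several times more pixels of detail.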
FAQ
Does inpainting use more VRAM?
Slightly — about 0.5–1 GB more than txt2img because it processes both the original image and the mask. On 8+ GB GPUs, this is rarely an issue. On 6 GB, use "Only masked" mode to reduce memory usage.
Can I inpaint with a different model than the original?
Yes, but results may have style inconsistencies at the mask edges. For best results, use the same model and similar settings (sampler, CFG) as the original generation.
