ComfyUI-RMBG

A ComfyUI custom node for advanced image background removal and precise segmentation of objects, faces, clothing, and fashion elements. It supports a wide range of models (RMBG-2.0, INSPYRENET, BEN, BEN2, BiRefNet, SDMatte, SAM, SAM2, and GroundingDINO) and adds real-time background replacement and enhanced edge detection for improved accuracy.

News & Updates

- v2.5.2: added the ReferenceLatentMask node and a mask_overlay option

https://github.com/user-attachments/assets/7faa00d3-bbe2-42b8-95ed-2c830a1ff04f

Features

RMBG Demo

Installation

Method 1: Install via ComfyUI-Manager. Search for Comfyui-RMBG and install it.

Install the dependencies from requirements.txt in the ComfyUI-RMBG folder:

  ./ComfyUI/python_embeded/python -m pip install -r requirements.txt

[!NOTE] Windows desktop app: if the app crashes after install, set PYTHONUTF8=1 before installing requirements, then retry.

[!NOTE] YOLO nodes require the optional ultralytics package. Install it only if you need YOLO to avoid dependency conflicts: ./ComfyUI/python_embeded/python -m pip install ultralytics --no-deps.

[!TIP] If your environment cannot install the dependencies with the system Python, you can use ComfyUI’s embedded Python instead. Example (embedded Python): ./ComfyUI/python_embeded/python.exe -m pip install --no-user --no-cache-dir -r requirements.txt

Method 2. Clone this repository to your ComfyUI custom_nodes folder:

  cd ComfyUI/custom_nodes
  git clone https://github.com/1038lab/ComfyUI-RMBG

Install the dependencies from requirements.txt in the ComfyUI-RMBG folder:

  ./ComfyUI/python_embeded/python -m pip install -r requirements.txt

Method 3: Install via Comfy CLI

Ensure comfy-cli is installed (pip install comfy-cli). If you don’t have ComfyUI installed, install it with comfy install. Then install ComfyUI-RMBG with the following command:

  comfy node install ComfyUI-RMBG

Install the dependencies from requirements.txt in the ComfyUI-RMBG folder:

  ./ComfyUI/python_embeded/python -m pip install -r requirements.txt

Method 4: Manually download the models.

Usage

RMBG Node

RMBG

Optional Settings

| Optional Settings | :memo: Description | :bulb: Tips |
|---|---|---|
| Sensitivity | Adjusts the strength of mask detection. Higher values result in stricter detection. | Default is 0.5. Adjust based on image complexity; more complex images may require higher sensitivity. |
| Processing Resolution | Controls the processing resolution of the input image, affecting detail and memory usage. | Choose a value between 256 and 2048 (default 1024). Higher resolutions provide better detail but increase memory consumption. |
| Mask Blur | Controls the amount of blur applied to the mask edges, reducing jaggedness. | Default is 0. Try a value between 1 and 5 for smoother edges. |
| Mask Offset | Expands or shrinks the mask boundary. Positive values expand the boundary; negative values shrink it. | Default is 0. Fine-tune between -10 and 10 depending on the image. |
| Background | Chooses the output background color. | Alpha (transparent background), Black, White, Green, Blue, Red. |
| Invert Output | Flips the mask and image output. | Inverts both the image and mask outputs. |
| Refine Foreground | Uses Fast Foreground Color Estimation to optimize a transparent background. | Enable for better edge quality and transparency handling. |
| Performance Optimization | Proper settings can enhance performance when processing multiple images. | If memory allows, consider increasing the process_res and mask_blur values for better results, but be mindful of memory usage. |
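The Mask Blur and Mask Offset settings can be sketched conceptually. The following is a simplified illustration, not the node's actual implementation: `blur_mask` and `offset_mask` are hypothetical helpers, a box blur stands in for the node's blur, and simple dilation/erosion stand in for positive/negative offset, assuming a float mask in [0, 1] as a NumPy array.

```python
import numpy as np

def blur_mask(mask: np.ndarray, radius: int) -> np.ndarray:
    """Approximate 'Mask Blur' with a simple box blur (illustrative only)."""
    if radius <= 0:
        return mask
    k = 2 * radius + 1
    padded = np.pad(mask, radius, mode="edge")
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out / (k * k)

def offset_mask(mask: np.ndarray, offset: int) -> np.ndarray:
    """Approximate 'Mask Offset': positive values expand, negative values shrink."""
    binary = mask > 0.5
    for _ in range(abs(offset)):
        # Shift the mask one pixel in each of the four directions.
        shifted = [np.roll(binary, s, axis=a) for a in (0, 1) for s in (1, -1)]
        if offset > 0:   # expand (dilate)
            binary = np.logical_or.reduce([binary] + shifted)
        else:            # shrink (erode)
            binary = np.logical_and.reduce([binary] + shifted)
    return binary.astype(mask.dtype)

mask = np.zeros((7, 7), dtype=np.float32)
mask[3, 3] = 1.0
expanded = offset_mask(mask, 1)   # a single pixel grows into a cross of 5 pixels
```

In practice the node applies these steps on the model's predicted mask before compositing, which is why small blur and offset values are usually enough to clean up jagged or slightly misaligned edges.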

Basic Usage

  1. Load the RMBG (Remove Background) node from the 🧪AILab/🧽RMBG category
  2. Connect an image to the input
  3. Select a model from the dropdown menu
  4. Adjust the parameters as needed (optional)
  5. Get two outputs:
    • IMAGE: Processed image with a transparent, black, white, green, blue, or red background
    • MASK: Binary mask of the foreground
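Conceptually, the two outputs relate as follows: the MASK selects the foreground, and the IMAGE is the input blended over the chosen background color (or left transparent for Alpha). A minimal sketch, assuming float RGB images in [0, 1] as NumPy arrays (`composite` is a hypothetical helper, not the node's code):

```python
import numpy as np

# Illustrative background colors matching the node's options (RGB, 0..1).
BACKGROUNDS = {
    "black": (0.0, 0.0, 0.0),
    "white": (1.0, 1.0, 1.0),
    "green": (0.0, 1.0, 0.0),
    "blue":  (0.0, 0.0, 1.0),
    "red":   (1.0, 0.0, 0.0),
}

def composite(image: np.ndarray, mask: np.ndarray, background: str) -> np.ndarray:
    """Blend the foreground over a solid background, using the mask as alpha."""
    bg = np.array(BACKGROUNDS[background], dtype=image.dtype)
    alpha = mask[..., None]          # (H, W) -> (H, W, 1) for broadcasting
    return image * alpha + bg * (1.0 - alpha)

img = np.ones((2, 2, 3), dtype=np.float32) * 0.5   # uniform gray image
m = np.array([[1.0, 0.0], [0.0, 1.0]], dtype=np.float32)
out = composite(img, m, "green")   # masked pixels keep the image, others turn green
```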

Parameters

Segment Node

  1. Load the Segment (RMBG) node from the 🧪AILab/🧽RMBG category
  2. Connect an image to the input
  3. Enter a text prompt (tag-style or natural language)
  4. Select SAM and GroundingDINO models
  5. Adjust parameters as needed:
    • Threshold: 0.25-0.35 for broad detection, 0.45-0.55 for precision
    • Mask blur and offset for edge refinement
    • Background color options
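The threshold guidance in step 5 can be illustrated with a toy example. This is not the GroundingDINO API, just a sketch of how a confidence threshold trades recall for precision over hypothetical detections:

```python
# Hypothetical (label, confidence) detections for a prompt like "person, bag".
detections = [
    ("person", 0.82),
    ("bag", 0.47),
    ("person", 0.31),
    ("bag", 0.22),
]

def filter_by_threshold(dets, threshold):
    """Keep only detections whose confidence meets the threshold."""
    return [(label, score) for label, score in dets if score >= threshold]

broad = filter_by_threshold(detections, 0.30)     # broad detection: more matches, more noise
precise = filter_by_threshold(detections, 0.50)   # precision: fewer, higher-confidence matches
```

A lower threshold (0.25-0.35) keeps weaker matches, which helps when the prompt describes something small or partially occluded; a higher threshold (0.45-0.55) discards them, which helps when the prompt matches too many regions.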

About Models

## RMBG-2.0

RMBG-2.0 is developed by BRIA AI and uses the BiRefNet architecture, which offers:
- High accuracy in complex environments
- Precise edge detection and preservation
- Excellent handling of fine details
- Support for multiple objects in a single image
- Output comparison, output with background, and batch output for video

The model is trained on a diverse dataset of over 15,000 high-quality images, ensuring:
- Balanced representation across different image types
- High accuracy in various scenarios
- Robust performance with complex backgrounds

## INSPYRENET

INSPYRENET specializes in human portrait segmentation, offering:
- Fast processing speed
- Good edge detection capability
- Ideal for portrait photos and human subjects

## BEN

BEN is robust on various image types, offering:
- A good balance between speed and accuracy
- Effective on both simple and complex scenes
- Suitable for batch processing

## BEN2

BEN2 is a more advanced version of BEN, offering:
- Improved accuracy and speed
- Better handling of complex scenes
- Support for more image types
- Suitable for batch processing

## BiRefNet Models

BiRefNet is a powerful model family for image segmentation:
- BiRefNet-general: general-purpose model (balanced performance)
- BiRefNet_512x512: optimized for 512x512 resolution
- BiRefNet-portrait: optimized for portrait/human matting
- BiRefNet-matting: general-purpose matting
- BiRefNet-HR: high resolution up to 2560x2560
- BiRefNet-HR-matting: high-resolution matting
- BiRefNet_lite: lightweight version for faster processing
- BiRefNet_lite-2K: lightweight version for 2K resolution

## SAM

SAM is a powerful model for object detection and segmentation, offering:
- High accuracy in complex environments
- Precise edge detection and preservation
- Excellent handling of fine details
- Support for multiple objects in a single image

## SAM2

SAM2 is the latest segmentation model family designed for efficient, high-quality text-prompted segmentation:
- Multiple sizes: Tiny, Small, Base+, Large
- Optimized inference with strong accuracy
- Automatic download on first use; manual placement supported in `ComfyUI/models/sam2`

## GroundingDINO

GroundingDINO is a model for text-prompted object detection and segmentation, offering:
- High accuracy in complex environments
- Precise edge detection and preservation
- Excellent handling of fine details
- Support for multiple objects in a single image

Requirements

SDMatte models (manual download)

Troubleshooting (short)

Credits

Star History

Star History Chart


If this custom node helps you or you like my work, please give me ⭐ on this repo! It’s a great encouragement for my efforts!

License

GPL-3.0 License