Paste your workflow JSON — we'll tell you if it's deploy-ready, what custom nodes it needs, and how much it'll cost to run.
Free. No signup required.
Identifies every custom node in your workflow and resolves it to a GitHub repository. Uses the ComfyUI Registry API, Manager database, and GitHub search to find the correct package for each node type.
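The first step — finding which node types in a workflow are custom — can be sketched as below. This is a minimal illustration, not the tool's actual implementation: the built-in node set here is abbreviated and hypothetical, and the real tool additionally resolves each type against the Registry API, the Manager database, and GitHub search.

```python
# Core node types shipped with ComfyUI (abbreviated for illustration;
# the real built-in set is much larger).
BUILTIN_NODES = {"KSampler", "CheckpointLoaderSimple", "CLIPTextEncode",
                 "VAEDecode", "EmptyLatentImage", "SaveImage"}

def custom_node_types(workflow: dict) -> set[str]:
    """Collect node class types that are not part of core ComfyUI."""
    if "nodes" in workflow:
        # Frontend format: {"nodes": [{"type": "...", ...}, ...]}
        types = {n["type"] for n in workflow["nodes"]}
    else:
        # API format: {"<id>": {"class_type": "...", ...}, ...}
        types = {n["class_type"] for n in workflow.values()
                 if isinstance(n, dict)}
    return types - BUILTIN_NODES
```

Anything left over after subtracting the built-in set is what needs to be resolved to a package and a GitHub repository.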
Estimates VRAM requirements based on the models and node types in your workflow. Recommends the right GPU: L4 (24GB) for image generation, A6000 (48GB) for video and upscaling, A100 (80GB) for 14B+ parameter models.
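The GPU recommendation reduces to picking the cheapest tier whose VRAM covers the estimate. A minimal sketch using the three tiers listed above (the `recommend_gpu` name and the overflow message are illustrative, not the tool's API):

```python
def recommend_gpu(vram_gb: float) -> str:
    """Map an estimated VRAM requirement to the smallest GPU tier that fits."""
    tiers = [("L4 (24GB)", 24), ("A6000 (48GB)", 48), ("A100 (80GB)", 80)]
    for name, capacity in tiers:
        if vram_gb <= capacity:
            return name
    # Beyond a single 80GB card: needs multi-GPU or CPU/disk offloading.
    return "A100 (80GB) — may need multi-GPU or offloading"
```

For example, a workflow estimated at 40GB lands on the A6000 (48GB) tier.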
Extracts all model file references — checkpoints, UNETs, VAEs, LoRAs, ControlNets, and CLIP models. Shows you exactly what needs to be downloaded before the workflow can run.
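One simple way to pull those references out is to walk the workflow JSON and keep every string value with a model-file extension. A hedged sketch — the extension list is an assumption, and the real tool also categorizes each file by loader node type (checkpoint vs. LoRA vs. VAE, etc.):

```python
# Common model-file extensions in the ComfyUI ecosystem (assumed list).
MODEL_EXTS = (".safetensors", ".ckpt", ".pt", ".pth", ".gguf")

def model_references(workflow: dict) -> set[str]:
    """Collect every widget/input value that looks like a model file."""
    refs: set[str] = set()

    def walk(obj) -> None:
        if isinstance(obj, str):
            if obj.lower().endswith(MODEL_EXTS):
                refs.add(obj)
        elif isinstance(obj, dict):
            for v in obj.values():
                walk(v)
        elif isinstance(obj, list):
            for v in obj:
                walk(v)

    walk(workflow)
    return refs
```

Recursing over both dicts and lists covers the frontend format (widget value arrays) and the API format (named inputs) with the same code.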
Shows the hourly cost to run your workflow on cloud GPUs. L4 at $0.51/hr, A6000 at $0.64/hr, A100 at $1.81/hr. Know your costs before deploying.
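Turning those hourly rates into a per-run cost is straightforward arithmetic. A small sketch using the rates listed above (the function name is illustrative; actual cloud pricing varies by provider and region):

```python
# $/hr rates from the listing above; real prices vary by provider.
HOURLY_RATES = {"L4": 0.51, "A6000": 0.64, "A100": 1.81}

def cost_per_run(gpu: str, seconds: float) -> float:
    """Estimated cost of one workflow run at the listed hourly rate."""
    return round(HOURLY_RATES[gpu] * seconds / 3600, 4)
```

So a 60-second generation on an A100 costs about three cents, while an hour of L4 time is $0.51.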
When you load a workflow and see red “missing node” boxes, it means the required custom node packages aren't installed. This tool identifies which packages you need and where to find them on GitHub.
Workflows with large models (Wan 2.2 14B, HunyuanVideo, SUPIR upscaling) need more VRAM than consumer GPUs provide. This tool estimates VRAM requirements so you can choose the right GPU before deploying.
Running ComfyUI workflows on cloud GPUs (RunPod, vast.ai) requires knowing which Docker image, custom nodes, and models to install. This tool generates a complete dependency report that makes cloud deployment straightforward.
Works with any ComfyUI workflow in frontend or API format, including: