ComfyUI Workflow Compatibility Check

Paste your workflow JSON — we'll tell you if it's deploy-ready, what custom nodes it needs, and how much it'll cost to run.

Free. No signup required.

Export from ComfyUI: Menu → Save (API Format)

What This Tool Checks

Custom Node Resolution

Identifies every custom node in your workflow and resolves it to a GitHub repository. Uses the ComfyUI Registry API, Manager database, and GitHub search to find the correct package for each node type.
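The first step of that resolution can be sketched as separating built-in node types from custom ones in an API-format export. This is a minimal illustration, assuming an API-format workflow; `BUILTIN_NODES` below is a small illustrative subset, not the full built-in node list.

```python
import json

# Illustrative subset of ComfyUI's built-in node types (not exhaustive).
BUILTIN_NODES = {"KSampler", "CheckpointLoaderSimple", "CLIPTextEncode",
                 "VAEDecode", "EmptyLatentImage", "SaveImage"}

def unresolved_node_types(workflow_json: str) -> set[str]:
    """Return class_types in an API-format workflow not covered by built-ins."""
    workflow = json.loads(workflow_json)
    return {node["class_type"] for node in workflow.values()
            if node["class_type"] not in BUILTIN_NODES}
```

Each type returned this way would then be looked up against the Registry and Manager databases to find its package.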

GPU Requirements

Estimates VRAM requirements based on the models and node types in your workflow. Recommends the right GPU: L4 (24GB) for image generation, A6000 (48GB) for video and upscaling, A100 (80GB) for 14B+ parameter models.
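The tier selection itself reduces to a threshold check against the GPU sizes listed above. A minimal sketch, assuming the VRAM estimate has already been derived from the workflow's models:

```python
def recommend_gpu(vram_gb: float) -> str:
    """Map an estimated VRAM requirement to the smallest GPU tier that fits."""
    if vram_gb <= 24:
        return "L4 (24GB)"
    if vram_gb <= 48:
        return "A6000 (48GB)"
    if vram_gb <= 80:
        return "A100 (80GB)"
    raise ValueError(f"Estimated {vram_gb} GB exceeds the listed GPU tiers")
```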

Model Detection

Extracts all model file references — checkpoints, UNETs, VAEs, LoRAs, ControlNets, and CLIP models. Shows you exactly what needs to be downloaded before the workflow can run.
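One simple way to extract those references, sketched here under the assumption that model files are identified by extension (the extension list is an assumption covering common ComfyUI model formats):

```python
import json

# Assumed set of common model file extensions.
MODEL_EXTENSIONS = (".safetensors", ".ckpt", ".pt", ".pth", ".gguf")

def referenced_models(workflow_json: str) -> set[str]:
    """Collect every node input value that looks like a model filename."""
    workflow = json.loads(workflow_json)
    return {value
            for node in workflow.values()
            for value in node.get("inputs", {}).values()
            if isinstance(value, str) and value.lower().endswith(MODEL_EXTENSIONS)}
```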

Cost Estimate

Shows the hourly cost to run your workflow on cloud GPUs. L4 at $0.51/hr, A6000 at $0.64/hr, A100 at $1.81/hr. Know your costs before deploying.
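Since billing is hourly, cost per batch is just runtime times the rate. A quick sketch using the rates above (per-run seconds and run counts are illustrative inputs):

```python
# Hourly rates from this page.
RATES_PER_HOUR = {"L4": 0.51, "A6000": 0.64, "A100": 1.81}

def batch_cost(gpu: str, seconds_per_run: float, runs: int) -> float:
    """Estimated cost in dollars for `runs` executions on the given GPU."""
    hours = seconds_per_run * runs / 3600
    return round(RATES_PER_HOUR[gpu] * hours, 2)
```

For example, 120 runs at 30 seconds each on an L4 is one GPU-hour, so about $0.51.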

How It Works

  1. Export your workflow from ComfyUI using Menu → Save (API Format). This creates a JSON file with all node types and connections.
  2. Paste the JSON above or drag-and-drop the .json file. The tool analyzes it instantly — no upload, no account needed.
  3. Get a compatibility report: which custom nodes are resolved, which models are needed, what GPU you need, and how much it costs per hour. If everything resolves, your workflow is deploy-ready.
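For reference, an API-format export is a flat JSON object mapping node ids to a `class_type` and its `inputs`, where an input is either a literal or a `[source_node_id, output_index]` link. A minimal illustrative example (values are made up):

```python
import json

api_workflow = {
    "3": {
        "class_type": "KSampler",
        "inputs": {
            "seed": 42,         # literal input
            "model": ["4", 0],  # link: output 0 of node "4"
        },
    },
    "4": {
        "class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"},
    },
}
print(json.dumps(api_workflow, indent=2))
```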

Common ComfyUI Workflow Issues

Missing custom nodes (red boxes)

When you load a workflow and see red “missing node” boxes, it means the required custom node packages aren't installed. This tool identifies which packages you need and where to find them on GitHub.

CUDA out of memory (OOM) errors

Workflows with large models (Wan 2.2 14B, HunyuanVideo, SUPIR upscaling) need more VRAM than consumer GPUs provide. This tool estimates VRAM requirements so you can choose the right GPU before deploying.

Deploying to cloud GPUs

Running ComfyUI workflows on cloud GPUs (RunPod, vast.ai) requires knowing which Docker image, custom nodes, and models to install. This tool generates a complete dependency report that makes cloud deployment straightforward.

Supported Workflow Types

Works with any ComfyUI workflow in frontend or API format, including:

  • Flux Dev / Schnell image generation
  • SDXL and SD 1.5 workflows
  • Wan 2.2 text-to-video and image-to-video
  • HunyuanVideo and LTX 2.3 video
  • SUPIR and SeedVR2 upscaling
  • ControlNet, IP-Adapter, InstantID
  • Qwen Image Edit and generation
  • Custom LoRA and multi-model pipelines
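Telling the two supported export formats apart is straightforward: a frontend export has a top-level `"nodes"` list (plus `"links"`), while an API export is a flat object of node ids. A minimal detection sketch, assuming well-formed JSON:

```python
import json

def workflow_format(workflow_json: str) -> str:
    """Classify a workflow JSON string as 'frontend', 'api', or 'unknown'."""
    data = json.loads(workflow_json)
    if isinstance(data, dict) and isinstance(data.get("nodes"), list):
        return "frontend"
    if isinstance(data, dict) and data and all(
        isinstance(v, dict) and "class_type" in v for v in data.values()
    ):
        return "api"
    return "unknown"
```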