HunyuanVideoGP V5 breaks the laws of VRAM: generate a 10.5s video at 1280x720 (+ loras) with 24 GB of VRAM, or a 14s video at 848x480 (+ loras) with 16 GB of VRAM, no quantization
ComfyUI Running Significantly Slower on Linux compared to Windows
Questions for RTX 6000 Ada Users
How to get ComfyUI running on your new Nvidia 5090 or 5080.
Let's make a collective, up-to-date Stable Diffusion GPU benchmark
Wanted to apply for a job opening at comfyui and unfortunately don’t fit their requirements
ComfyUI now supports Nvidia Cosmos: The best open source Image to Video model so far.
ComfyUI announced Gen AI OS plans 🤩
NEW: Cosmos1GP - Text 2 Image / Image 2 Video / Video continuation, for 24 GB VRAM and lower
Nvidia Cosmos is coming to ComfyUI
Introducing ParaAttention: Fastest HunyuanVideo Inference with Context Parallelism and First Block Cache
🎉 v0.1.0 of diffusion-rs: Blazingly fast inference of diffusion models.
What do you guys think about the new AMD AI Max Series coming this Q1?
I'm tired, boss.
Video AI is taking over Image AI, why?
ComfyUI now supports running Hunyuan Video with 8GB VRAM
All this talk of Nvidia snubbing VRAM for the 50 series... is AMD viable for ComfyUI?
Is Intel choosing the wrong community for their GPUs?
Is SD 3.5 Large supposed to use over 16 GB VRAM and run EXTREMELY slowly on 4070 Ti Super?
Flux speeds took nosedive after recent comfy update - please help!
You can now run LTX Video < 10 GB VRAM - powered by GGUF & Diffusers!
Official LTX Video model broken. Can't load checkpoint. Not even an error message, just an exclamation mark. What does one do in such a situation?