Show HN: ComfyUI extension for the new Stable Video Diffusion model
Hey HN!
Just shipped a ComfyUI extension that adds support for the new Stable Video Diffusion model!
Now, you can plug this into any existing ComfyUI workflow to do some really cool things!
Here are a few examples:
Image to video workflow: https://comfyworkflows.com/workflows/5a4cd9fd-9685-4985-adb8...
Image to video workflow (w/ high FPS using RIFE frame interpolation): https://comfyworkflows.com/workflows/bf3b455d-ba13-4063-9ab7...
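The second workflow raises the frame rate with RIFE. As a rough illustration only (RIFE itself uses a learned optical-flow network, which this sketch does not implement), naive linear blending between consecutive frames shows the basic idea of inserting intermediate frames to boost FPS:

```python
# Toy frame interpolation: insert blended frames between each pair.
# This is NOT RIFE, just a linear cross-fade to illustrate the concept.
import numpy as np

def interpolate_frames(frames, factor=2):
    """Insert (factor - 1) linearly blended frames between each pair.

    frames: list of H x W x C float arrays in [0, 1].
    Returns a longer list at roughly `factor` times the frame rate.
    """
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        for i in range(1, factor):
            t = i / factor
            out.append((1 - t) * a + t * b)  # cross-fade between a and b
    out.append(frames[-1])
    return out

# e.g. a 25-frame SVD clip doubled to 49 frames
video = [np.random.rand(8, 8, 3) for _ in range(25)]
smooth = interpolate_frames(video, factor=2)
print(len(smooth))  # 49
```

A learned interpolator like RIFE produces far better results on real motion, since a linear blend ghosts moving objects instead of shifting them along their motion paths.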
Would love to hear any feedback! :)

---

Well, I can't load the workflow. Installing the missing nodes requires restarting ComfyUI, and it seems persistence on ComfyWorkflows' cloud isn't guaranteed anymore. Also, this website is pure single-page garbage: everything is super slow, you can't ctrl+click a link to open it in another tab, etc. A UX nightmare. Can you recommend a competitor?
Looking at https://www.thinkdiffusion.com/, I get similar results on ThinkDiffusion.

---

This is great! I use ComfyUI extensively and will try this out. It's hard to set up workflows correctly for these video diffusion models. I recently built a set of Docker images to simplify GenAI workflows. Check it out at https://github.com/jaideep2/apinaio. Only works on CUDA hardware for now, but contributions welcome :)

---

In the first example (https://comfyworkflows.com/workflows/5a4cd9fd-9685-4985-adb8...), which are the initial images? How many? Only two, or a few?

---

The initial image is just 1 -- the very first frame of the video.

---

I don't understand how you tell it the rocket is going up and the floor is not moving. Is that in the workflow [1], or does the model figure that out without help? Sorry if my question is obvious.

[1] I don't understand what the workflow says.