Wan 2.1 14B I2V 720p Q6_K quantized GGUF
Hey guys, I saw a YouTube tutorial that claimed you could run the Wan 2.1 14B I2V 720p Q6_K quantized GGUF model on 8 GB of VRAM, so I thought I'd try it.
I downloaded the city96 GGUF model here: https://huggingface.co/city96/Wan2.1-I2V-14B-720P-gguf?show_file_info=wan2.1-i2v-14b-720p-Q6_K.gguf
I used this workflow: https://tensor.art/workflows/83641138...
Well, it worked (which shocked me), but the output was very bad: super glitchy, with weird artifacts.
I tried all sorts of configurations: steps 25-50, CFG 4-10, denoise 0.4-1.0, euler/uni_pc/dpmpp_2m samplers, and simple/normal schedulers. I was able to get results without crashing my computer, but the outputs were just so bad that I couldn't get even one decent result after a lot of experimentation. Any tips for better outputs? Should I try the calcuis GGUF models instead of city96? A lower quantization? 480p instead of 720p?
Thanks in advance guys!
Can you post a full screenshot of the workflow? The link you posted above seems to be cut off.
There are a few things that could cause issues like that, but it's hard to troubleshoot without more info.
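One thing worth a sanity check first: the Q6_K weights alone don't fit in 8 GB, so any "runs on 8 GB" setup depends on layer offloading/streaming, and aggressive offload plus low-VRAM tricks (tiled VAE decode, etc.) can contribute to degraded output. A rough back-of-envelope sketch, assuming Q6_K's nominal ~6.5625 bits per weight from llama.cpp's k-quant scheme (an approximation; per-tensor overhead and non-quantized layers shift the real file size):

```python
# Rough estimate of the Q6_K weight footprint for a 14B model.
# This counts weights only -- not activations, the VAE, or the
# text encoder, which all need memory on top of this.
PARAMS = 14e9            # Wan 2.1 14B parameter count (nominal)
BITS_PER_WEIGHT = 6.5625  # nominal Q6_K rate in llama.cpp k-quants

size_gb = PARAMS * BITS_PER_WEIGHT / 8 / 1e9
print(f"~{size_gb:.1f} GB of weights")  # ~11.5 GB of weights
```

So the weights alone are roughly 11.5 GB, well over 8 GB of VRAM; the loader has to keep part of the model in system RAM and swap it in per step, which is slow but shouldn't by itself cause glitchy output, so the settings/workflow are the more likely culprit.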