Lynx gguf models
#14 opened 5 days ago by dozaler
Wan2_2_Animate_14B_Q3_K_M.gguf for 12GB VRAM?
#13 opened 21 days ago by fawogin598
Using the GGUF model workflow - triton installation?
#12 opened 21 days ago by cbridges5519
Different size compared to QuantStack version
#11 opened 22 days ago by NielsGx
What text encoder to use with Wan Animate gguf?
#10 opened 23 days ago by Noire1
request Wan2_2 Animate 14B_Q5_K_M
#9 opened 23 days ago by sunnyboxs
Do VACE modules need to use the same quantization level as the main model?
#8 opened 23 days ago by ai4johndoe
GGUFLoaderKJ throws error
#7 opened 24 days ago by Pravbk
How to replicate upscale facedetailer wan 2.2 from the native nodes
#6 opened 25 days ago by TheWut
Does the version of InfiniteTalk GGUF model support multi-GPU inference?
#4 opened about 1 month ago by gxx
Compatibility with Intel Arc XPU (not CUDA)?
#3 opened about 2 months ago by AI-Joe-git
gguf not working in multitalk model loader
#1 opened about 2 months ago by gisbornetv