---
license: apache-2.0
language:
- en
base_model:
- Wan-AI/Wan2.1-I2V-14B-480P
- Wan-AI/Wan2.1-I2V-14B-480P-Diffusers
pipeline_tag: image-to-video
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
- image-to-video
widget:
- text: >-
    The video begins with a close-up portrait of a man wearing a suit. The
    background changes and then the pu11y puppy effect begins. The man is now
    surrounded by many puppies. He pets the puppies.
  output:
    url: example_videos/man1_puppy.mp4
- text: >-
    The video begins with a relaxed shot of a man smiling, sitting on a modern
    gray armchair. The pu11y puppy effect happens, and the man is surrounded by
    golden retriever puppies. He interacts with the puppies on his phone.
  output:
    url: example_videos/man2_puppy.mp4
- text: >-
    The video opens with a studio portrait of a man smiling in a white t-shirt.
    The pu11y puppy effect then begins, as puppies begin to gather and surround
    him. He is now holding a puppy in his arms.
  output:
    url: example_videos/man3_puppy.mp4
---
This LoRA is trained on the Wan2.1 14B I2V 480p model and lets you surround any person or object in an image with puppies!
The key trigger phrase is: `pu11y puppy effect`
For best results, try following the structure of the prompt examples above. These worked well for me.
This LoRA works with a modified version of Kijai's Wan Video Wrapper workflow. The main modification is adding a Wan LoRA node connected to the base model.
See the Downloads section above for the modified workflow.
The model weights are available in Safetensors format. See the Downloads section above.
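If you prefer a scripted workflow over ComfyUI, below is a minimal, hedged sketch of loading the LoRA with the Diffusers-format base model. It assumes a recent diffusers release that ships `WanImageToVideoPipeline` and `AutoencoderKLWan`, and it uses placeholder paths for the input image and this LoRA's `.safetensors` file (replace them with the actual files from the Downloads section). Adjust resolution, frame count, and dtypes to fit your hardware.

```python
# Minimal sketch (not the author's workflow): Wan2.1 I2V + this LoRA via diffusers.
import numpy as np
import torch
from diffusers import AutoencoderKLWan, WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image
from transformers import CLIPVisionModel

model_id = "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers"
image_encoder = CLIPVisionModel.from_pretrained(
    model_id, subfolder="image_encoder", torch_dtype=torch.float32
)
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanImageToVideoPipeline.from_pretrained(
    model_id, vae=vae, image_encoder=image_encoder, torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# Placeholder filename: point this at the LoRA .safetensors file from the Downloads section.
pipe.load_lora_weights("puppy_effect_lora.safetensors")

# Placeholder input image: a portrait works well with the example prompts above.
image = load_image("portrait.png")

# Resize so the area is roughly 480p while staying divisible by the model's patch size.
max_area = 480 * 832
aspect_ratio = image.height / image.width
mod_value = pipe.vae_scale_factor_spatial * pipe.transformer.config.patch_size[1]
height = round(np.sqrt(max_area * aspect_ratio)) // mod_value * mod_value
width = round(np.sqrt(max_area / aspect_ratio)) // mod_value * mod_value
image = image.resize((width, height))

# Prompt follows the structure of the examples above and includes the trigger phrase.
prompt = (
    "The video begins with a close-up portrait of a man wearing a suit. "
    "The background changes and then the pu11y puppy effect begins. "
    "The man is now surrounded by many puppies. He pets the puppies."
)

output = pipe(
    image=image,
    prompt=prompt,
    height=height,
    width=width,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]
export_to_video(output, "puppy_effect.mp4", fps=16)
```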
Training was done using the diffusion-pipe training scripts.
Special thanks to Kijai for the ComfyUI Wan Video Wrapper and tdrussell for the training scripts!