nielsr (HF Staff) committed
Commit 0c6cc0c · verified · 1 Parent(s): 46c9052

Add pipeline tag, library name and set inference to true


This PR improves the model card by:

- Adding the `pipeline_tag`, enabling people to find your model at https://huggingface.co/models?pipeline_tag=text-to-video
- Adding the `library_name`, which makes it easier to load the model programmatically (see the sketch below)
- Setting inference to `true`
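
For context, once `library_name: diffusers` is set, the model maps to the standard `diffusers` loading flow. The following is a minimal sketch of that usage, assuming the `CogVideoXImageToVideoPipeline` class from a recent `diffusers` build (the diff below notes installing diffusers from source) and a hypothetical local input image `input.jpg`:

```python
import torch
from diffusers import CogVideoXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

# Load the image-to-video pipeline that this model card documents.
pipe = CogVideoXImageToVideoPipeline.from_pretrained(
    "THUDM/CogVideoX1.5-5B-I2V", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # optional, keeps peak VRAM low on smaller GPUs

image = load_image("input.jpg")  # hypothetical input frame
video = pipe(
    prompt="A short description of the desired motion.",
    image=image,
    num_inference_steps=50,
).frames[0]
export_to_video(video, "output.mp4", fps=8)
```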

Files changed (1)
  1. README.md +10 -11
README.md CHANGED

@@ -1,13 +1,15 @@
 ---
+language:
+- en
 license: other
 license_link: https://huggingface.co/THUDM/CogVideoX-5b-I2V/blob/main/LICENSE
-language:
-- en
 tags:
-- video-generation
-- thudm
-- image-to-video
-inference: false
+- video-generation
+- thudm
+- image-to-video
+pipeline_tag: text-to-video
+library_name: diffusers
+inference: true
 ---
 
 # CogVideoX1.5-5B-I2V
@@ -118,8 +120,6 @@ conversion to get a better experience.**
 
 1. Install the required dependencies
 
-
-
 ```shell
 # diffusers (from source)
 # transformers>=4.46.2
@@ -164,7 +164,7 @@ export_to_video(video, "output.mp4", fps=8)
 
 [PytorchAO](https://github.com/pytorch/ao) and [Optimum-quanto](https://github.com/huggingface/optimum-quanto/) can be
 used to quantize the text encoder, transformer, and VAE modules to reduce CogVideoX's memory requirements. This allows
-the model to run on free T4 Colab or GPUs with lower VRAM! Also, note that TorchAO quantization is fully compatible
+the model to run on free T4 Colab or GPUs with smaller VRAM! Also, note that TorchAO quantization is fully compatible
 with `torch.compile`, which can significantly accelerate inference.
 
 ```python
@@ -248,5 +248,4 @@ This model is released under the [CogVideoX LICENSE](LICENSE).
 journal={arXiv preprint arXiv:2408.06072},
 year={2024}
 }
-```
-
+```
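
For reference, the TorchAO path mentioned in the quantization paragraph of the diff can be sketched roughly as follows, assuming `torchao` is installed and using int8 weight-only quantization on the repo's standard `text_encoder`, `transformer`, and `vae` subfolders:

```python
import torch
from diffusers import (
    AutoencoderKLCogVideoX,
    CogVideoXImageToVideoPipeline,
    CogVideoXTransformer3DModel,
)
from transformers import T5EncoderModel
from torchao.quantization import quantize_, int8_weight_only

repo = "THUDM/CogVideoX1.5-5B-I2V"

# Quantize each large sub-module to int8 weights before assembling the pipeline.
text_encoder = T5EncoderModel.from_pretrained(
    repo, subfolder="text_encoder", torch_dtype=torch.bfloat16
)
quantize_(text_encoder, int8_weight_only())

transformer = CogVideoXTransformer3DModel.from_pretrained(
    repo, subfolder="transformer", torch_dtype=torch.bfloat16
)
quantize_(transformer, int8_weight_only())

vae = AutoencoderKLCogVideoX.from_pretrained(
    repo, subfolder="vae", torch_dtype=torch.bfloat16
)
quantize_(vae, int8_weight_only())

pipe = CogVideoXImageToVideoPipeline.from_pretrained(
    repo,
    text_encoder=text_encoder,
    transformer=transformer,
    vae=vae,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # further reduces peak VRAM usage

# TorchAO quantization composes with torch.compile, e.g.:
# pipe.transformer = torch.compile(pipe.transformer)
```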