---
title: DALL·E mini
emoji: 馃
colorFrom: yellow
colorTo: green
sdk: streamlit
app_file: app/app.py
pinned: false
---

# DALL·E Mini

Generate images from a text prompt

Our logo was generated with DALL·E mini using the prompt "logo of an armchair in the shape of an avocado".

You can create your own pictures with the demo (temporarily in beta on Hugging Face Spaces, but soon to be open to all).

## How does it work?

Refer to our report.
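
In short, a BART-based seq2seq model translates the text prompt into a sequence of discrete image tokens, a VQGAN decoder turns that token sequence into an image, and CLIP is used to rank the generated candidates (see the references below).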

## Development

### Dependencies Installation

The root folder and its associated `requirements.txt` are only for the app.

For development, use `dev/requirements.txt` or `dev/environment.yaml`.
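
For example, with pip this is typically `pip install -r dev/requirements.txt`, and with conda `conda env create -f dev/environment.yaml`; adjust to your own setup.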

### Training of VQGAN

The VQGAN was trained using taming-transformers.

We recommend using the latest version available.

### Conversion of VQGAN to JAX

Use patil-suraj/vqgan-jax.
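
As a rough sketch of how the converted checkpoint can be used (assuming the `vqgan_jax` package from that repository is installed; the Hub repository id below is only an example, substitute your own converted checkpoint):

```python
# Minimal sketch, not the project's official pipeline: load a JAX/Flax VQGAN
# converted with patil-suraj/vqgan-jax and decode a dummy grid of image tokens.
import jax.numpy as jnp
from vqgan_jax.modeling_flax_vqgan import VQModel

# Example repository id; replace with the checkpoint you converted.
vqgan = VQModel.from_pretrained("dalle-mini/vqgan_imagenet_f16_16384")

# One image made of 16x16 = 256 codebook indices (all zeros here for illustration);
# in the real pipeline these indices come from the seq2seq model.
codes = jnp.zeros((1, 256), dtype=jnp.int32)
images = vqgan.decode_code(codes)  # expected shape: (1, 256, 256, 3)
print(images.shape)
```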

### Training of Seq2Seq

Refer to the `dev/seq2seq` folder.

You can also adjust the sweep configuration file if you need to perform a hyperparameter search.
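
Sweeps are run with Weights & Biases: typically the configuration is registered with `wandb sweep <path-to-sweep-config>` and one or more agents are then launched with `wandb agent <sweep-id>`.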

### Inference Pipeline

To generate sample predictions and understand the inference pipeline step by step, refer to `dev/inference/inference_pipeline.ipynb`.

The notebook can also be opened directly in Google Colab.

## FAQ

### Where to find the latest models?

Trained models are available on the 🤗 Model Hub.

### Where does the logo come from?

The "armchair in the shape of an avocado" was used by OpenAI when releasing DALL路E to illustrate the model's capabilities. Having successful predictions on this prompt represents a big milestone to us.

## Authors

## Acknowledgements

## Citing DALL·E mini

If you find DALL·E mini useful in your research or wish to refer to it, please use the following BibTeX entry.

@misc{Dayma_DALL·E_Mini_2021,
      author = {Dayma, Boris and Patil, Suraj and Cuenca, Pedro and Saifullah, Khalid and Abraham, Tanishq and Lê Khắc, Phúc and Melas, Luke and Ghosh, Ritobrata},
      doi = {10.5281/zenodo.1234},
      month = {7},
      title = {DALL·E Mini},
      url = {https://github.com/borisdayma/dalle-mini},
      year = {2021}
}

## References

@misc{ramesh2021zeroshot,
      title={Zero-Shot Text-to-Image Generation}, 
      author={Aditya Ramesh and Mikhail Pavlov and Gabriel Goh and Scott Gray and Chelsea Voss and Alec Radford and Mark Chen and Ilya Sutskever},
      year={2021},
      eprint={2102.12092},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
@misc{esser2021taming,
      title={Taming Transformers for High-Resolution Image Synthesis}, 
      author={Patrick Esser and Robin Rombach and Björn Ommer},
      year={2021},
      eprint={2012.09841},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
@misc{lewis2019bart,
      title={BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension}, 
      author={Mike Lewis and Yinhan Liu and Naman Goyal and Marjan Ghazvininejad and Abdelrahman Mohamed and Omer Levy and Ves Stoyanov and Luke Zettlemoyer},
      year={2019},
      eprint={1910.13461},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
@misc{radford2021learning,
      title={Learning Transferable Visual Models From Natural Language Supervision}, 
      author={Alec Radford and Jong Wook Kim and Chris Hallacy and Aditya Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
      year={2021},
      eprint={2103.00020},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}