---
title: README
emoji:
colorFrom: purple
colorTo: gray
sdk: static
pinned: false
---
<div align="center">
<b><font size="6">OpenGVLab</font></b>
</div>
Welcome to OpenGVLab! We are a research group from Shanghai AI Lab focused on vision-centric AI research. The "GV" in OpenGVLab stands for general vision: a general understanding of vision, so that little effort is needed to adapt our models to new vision-based tasks.
# Models
- [InternVL](https://github.com/OpenGVLab/InternVL): a pioneering open-source alternative to GPT-4V.
- [InternImage](https://github.com/OpenGVLab/InternImage): a large-scale vision foundation model based on deformable convolutions.
- [InternVideo](https://github.com/OpenGVLab/InternVideo): large-scale video foundation models for multimodal understanding.
- [VideoChat](https://github.com/OpenGVLab/Ask-Anything): an end-to-end chat assistant for video comprehension.
- [All Seeing](): a project towards panoptic visual recognition and understanding of the open world.
- [All Seeing V2](): the second-generation All-Seeing model, targeting general relation comprehension of the open world.
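
Many of these models are also released on the Hugging Face Hub under the [OpenGVLab](https://huggingface.co/OpenGVLab) organization. As a minimal sketch of how to try one, the snippet below loads a checkpoint with `transformers`; the checkpoint ID is an assumption, so browse the organization page for the actual released models:

```python
# Minimal sketch: loading an OpenGVLab checkpoint from the Hugging Face Hub.
# "OpenGVLab/InternVL-Chat-V1-5" is an assumed checkpoint ID; check
# https://huggingface.co/OpenGVLab for the list of released models.
from transformers import AutoModel, AutoTokenizer

model_id = "OpenGVLab/InternVL-Chat-V1-5"  # assumption: substitute a real checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
```

`trust_remote_code=True` is needed because these repositories ship custom modeling code alongside the weights.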
# Datasets
- [ShareGPT4o](): a multimodal dataset of detailed image, video, and audio descriptions generated by GPT-4o.
- [InternVid](https://github.com/OpenGVLab/InternVideo/tree/main/Data/InternVid): a large-scale video-text dataset for multimodal understanding and generation.
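
Datasets hosted on the Hub can typically be pulled with the `datasets` library. A minimal sketch, assuming an `OpenGVLab/InternVid` dataset repo; consult the dataset card for the actual repo ID, configurations, and splits:

```python
# Minimal sketch: loading an OpenGVLab dataset from the Hugging Face Hub.
# "OpenGVLab/InternVid" and the split name are assumptions; see the
# dataset card on https://huggingface.co/OpenGVLab for loading details.
from datasets import load_dataset

dataset = load_dataset("OpenGVLab/InternVid", split="train")  # assumed repo ID/split
print(dataset[0])  # inspect one video-text record
```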
# Benchmarks
- [MVBench](https://github.com/OpenGVLab/Ask-Anything/tree/main/video_chat2): a comprehensive benchmark for multimodal video understanding.