---
language:
- en
license: apache-2.0
size_categories:
- 1M<n<10M
task_categories:
- video-text-to-text
configs:
- config_name: Live-CC-5M for Dataset Viewer
data_files:
- split: preview_first_100
path: live_cc_100_for_preview.json
- split: full_5m
path: live_cc_5m_with_seeks.jsonl
---
# Dataset Card for Live-CC-5M

## Dataset Description
- **Curated by:** Joya Chen
- **Language(s) (NLP):** English
- **License:** Apache License 2.0
## Uses
This dataset is used to pre-train the [LiveCC-7B-Base](https://huggingface.co/chenjoya/LiveCC-7B-Instruct) model. We only allow use of this dataset for academic research and educational purposes. For user prompts generated by OpenAI GPT-4o, we recommend that users review the [OpenAI Usage Policy](https://openai.com/policies/usage-policies/).
- **Project Page**: https://showlab.github.io/livecc
- **Paper**: https://huggingface.co/papers/2504.16030
### Live-CC-5M Dataset
- Statistics: 5,047,208 YouTube video-CC samples, each 30~240s long.

- Annotation JSONL (YouTube CC):
Each line of the JSONL file is a common user/assistant conversation with a special "text_stream" key. Example:
```
[
{"role": "user", "content": [{"type": "video", "video": "video/youtube/-4dnPeRv1ns.mp4", "video_start": 16.8, "video_end": 158.8}, {"type": "text", "text": "", "previous": "", "title": "Airsoft G&G Combat Machine M4 Review"}]},
{"role": "assistant", "content": [{"type": "text_stream", "text_stream": [[16.8, 16.9, "all"], [16.9, 17.0, "right"], [17.0, 17.1, "you"], [17.1, 17.3, "guys"], [17.3, 17.4, "so"], [17.4, 17.5, "this"], ...]}]}
]
```
- "title" denotes the YouTube title.
- "previous" denotes the ASR content before "video_start".
- Each item in "text_stream" is a triple of start timestamp, end timestamp, and word.

During pre-training, we use "title" and "previous" as context. Please refer to our dataloader (https://github.com/showlab/livecc/data/lmm_dataset.py) to see how to make it compatible with popular LMMs (e.g. the Qwen-VL series).
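To illustrate the format, here is a minimal sketch of reading the "text_stream" triples from one parsed JSONL line. The `words_between` helper and the hard-coded `datum` (mirroring the sample above) are hypothetical, for illustration only:

```python
import json

def words_between(datum, start, end):
    """Join the transcript words whose start timestamps fall in [start, end).

    `datum` is one parsed JSONL line: a [user, assistant] conversation where
    the assistant turn carries "text_stream", a list of
    [word_start, word_end, word] triples.
    """
    stream = datum[1]["content"][0]["text_stream"]
    return " ".join(word for s, e, word in stream if start <= s < end)

# Hypothetical datum mirroring the example line above
datum = [
    {"role": "user", "content": [
        {"type": "video", "video": "video/youtube/-4dnPeRv1ns.mp4",
         "video_start": 16.8, "video_end": 158.8},
        {"type": "text", "text": "", "previous": "",
         "title": "Airsoft G&G Combat Machine M4 Review"},
    ]},
    {"role": "assistant", "content": [
        {"type": "text_stream", "text_stream": [
            [16.8, 16.9, "all"], [16.9, 17.0, "right"], [17.0, 17.1, "you"],
            [17.1, 17.3, "guys"], [17.3, 17.4, "so"], [17.4, 17.5, "this"],
        ]},
    ]},
]

print(words_between(datum, 16.8, 17.2))  # -> all right you guys
```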
The last line of the JSONL file contains the byte offsets of every record, for use with file-handle seeks:
```
b'[0, 3149, 7796, 10436, 18949, 22917, 41985, 65721, 73045, 76797, 82262, ...]'
```
This allows random access to any record without loading the whole file:
```python
import json

# Read the last line of a JSONL file by scanning backwards
# for the final newline.
def readlastline(path: str):
    with open(path, "rb") as f:
        f.seek(-2, 2)  # skip the trailing newline
        while f.read(1) != b"\n":
            f.seek(-2, 1)
        return f.readline()

# Parse the last line into the list of seek indices
seeks = json.loads(readlastline('live_cc_5m_with_seeks.jsonl'))

# In the data loader
def __getitem__(self, index):
    ...
    with open('live_cc_5m_with_seeks.jsonl') as f:
        f.seek(seeks[index])
        datum = json.loads(f.readline())
    ...
```
- Videos: We are sorry that the 5M videos are too large for us to share. However,
  - You can find all YouTube IDs in the annotation JSONL.
  - We have released the video files of the SFT dataset at https://huggingface.co/datasets/chenjoya/Live-WhisperX-526K
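Since the video paths in the annotation JSONL embed the YouTube ID (e.g. `video/youtube/-4dnPeRv1ns.mp4`), the IDs can be recovered from the paths. A minimal sketch, assuming the path layout shown in the example above; `youtube_id` is a hypothetical helper:

```python
import os

def youtube_id(video_path: str) -> str:
    # Paths look like "video/youtube/<id>.mp4"; the file stem is the ID.
    return os.path.splitext(os.path.basename(video_path))[0]

print(youtube_id("video/youtube/-4dnPeRv1ns.mp4"))  # -> -4dnPeRv1ns
```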
### Data Production Pipeline

Please read Section 3 of the paper for details. The pipeline has been fully open-sourced at: https://github.com/showlab/livecc/tree/main/data/production
## Citation
If you find our work helpful, feel free to give us a cite ;)
```bibtex
@article{livecc,
  author  = {Joya Chen and Ziyun Zeng and Yiqi Lin and Wei Li and Zejun Ma and Mike Zheng Shou},
  title   = {LiveCC: Learning Video LLM with Streaming Speech Transcription at Scale},
  journal = {arXiv preprint arXiv:2504.16030},
  year    = {2025},
}
```
## Contact
[Joya Chen](https://chenjoya.github.io/) |