---
configs:
- config_name: Live-CC-5M for Dataset Viewer
  data_files:
  - split: preview_first_100
    path: live_cc_100_for_preview.json
  - split: full_5m
    path: live_cc_5m_for_preview.json
license: apache-2.0
task_categories:
- video-text-to-text
language:
- en
size_categories:
- 1M<n<10M
---

# Dataset Card for Live-CC-5M

## Dataset Description
- **Curated by:** Joya Chen
- **Language(s) (NLP):** English
- **License:** Apache License 2.0

## Uses
This dataset is used for training the [LiveCC-7B-Base](https://huggingface.co/chenjoya/LiveCC-7B-Base) model. We only allow the use of this dataset for academic research and educational purposes. For the user prompts generated by OpenAI GPT-4o, we recommend users check the [OpenAI Usage Policy](https://openai.com/policies/usage-policies/).

- **Project Page**: https://showlab.github.io/livecc/
- **Paper**: https://arxiv.org/abs/xxxx.xxxxx
- **Training Code**: https://github.com/showlab/livecc
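
To take a quick look at the data, the preview splits declared in the YAML header can be loaded with the 🤗 `datasets` library. A minimal sketch (the repo id and split name come from this card; exact behavior depends on your `datasets` version):

```python
from datasets import load_dataset

# Load the 100-example preview split declared in this card's YAML header.
preview = load_dataset("chenjoya/Live-CC-5M", split="preview_first_100")
print(preview[0])
```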

### Data Sources

- **Live-CC-5M**: This repository includes
  - Annotation JSONL (YouTube CC): https://huggingface.co/datasets/chenjoya/Live-CC-5M/blob/main/live_cc_5m_with_seeks.jsonl (a download sketch follows below)

Because the 5M videos are too large to host, we are sorry that we cannot provide them here. However:
- You can find all YouTube keys in the annotation JSONL.
- We have released the video files for the SFT set at https://huggingface.co/datasets/chenjoya/Live-WhisperX-528K.
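
To fetch the annotation JSONL programmatically, a minimal sketch with `huggingface_hub` should work (the repo id and filename are taken from the link above; the download call itself is our suggestion, not part of the original card):

```python
from huggingface_hub import hf_hub_download

# Download the annotation JSONL from this dataset repository.
path = hf_hub_download(
    repo_id="chenjoya/Live-CC-5M",
    filename="live_cc_5m_with_seeks.jsonl",
    repo_type="dataset",
)
print(path)  # local cache path of the JSONL
```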

This dataset contains 5,047,208 YouTube Video-CC instances, distributed across YouTube categories as follows:
```
{'People & Blogs': 1130380, 'Howto & Style': 889359, 'Entertainment': 585838, 'Education': 453305, 'Sports': 449166, 'Autos & Vehicles': 410228, 'Science & Technology': 345308, 'Film & Animation': 197529, 'Travel & Events': 151357, 'Pets & Animals': 127535, 'Gaming': 117149, 'News & Politics': 83560, 'Nonprofits & Activism': 59140, 'Comedy': 47354}
```
Each line of the JSONL file is organized as a common user/assistant conversation with a special "text_stream" key. Example:
```
[
  {"role": "user", "content": [{"type": "video", "video": "video/youtube/-4dnPeRv1ns.mp4", "video_start": 16.8, "video_end": 158.8}, {"type": "text", "text": "", "previous": "", "title": "Airsoft G&G Combat Machine M4 Review"}]},
  {"role": "assistant", "content": [{"type": "text_stream", "text_stream": [[16.8, 16.9, "all"], [16.9, 17.0, "right"], [17.0, 17.1, "you"], [17.1, 17.3, "guys"], [17.3, 17.4, "so"], [17.4, 17.5, "this"], ...]}]}
]
```
"title" denotes the YouTube title. "previous" denotes the ASR content before "video_start". Each item in "text_stream" gives a start timestamp, an end timestamp, and a word. During pre-training, we use "title" and "previous" as context. Please refer to our dataloader (https://github.com/showlab/livecc/blob/main/data/lmm_dataset.py) to learn how to make it compatible with popular LMMs (e.g., the Qwen-VL series).

The last line of the JSONL file contains the file-handle seek indices:
```
[0, 3149, 7796, 10436, 18949, 22917, 41985, 65721, 73045, 76797, 82262, ...]
```
This allows for easy streaming access using:

```python
import json

with open('live_cc_5m_with_seeks.jsonl') as f:
    f.seek(seek_index)  # byte offset of the desired record
    datum = json.loads(f.readline())
```
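
Putting it together, a sketch that first recovers the seek list from the final line and then random-accesses an arbitrary record (the stream-to-last-line loop is our suggestion for keeping memory flat on a file this large):

```python
import json

path = "live_cc_5m_with_seeks.jsonl"

# Stream to the final line, which holds the seek-index list.
with open(path, "rb") as f:
    for line in f:
        pass
seeks = json.loads(line)

def load_record(i):
    """Random-access the i-th annotation record via its byte offset."""
    with open(path) as f:
        f.seek(seeks[i])
        return json.loads(f.readline())

datum = load_record(12345)  # any index into the 5,047,208 records
print(datum[0]["content"][-1]["title"])  # YouTube title of that record
```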

### Data Pipeline

Please refer to Section 3 of our [paper](https://arxiv.org/abs/xxxx.xxxxx). The pipeline has been fully open-sourced at https://github.com/showlab/livecc/tree/main/data/production/pretrain.

## Citation

```bibtex
@inproceedings{livecc,
  author    = {Joya Chen and Ziyun Zeng and Yiqi Lin and Wei Li and Zejun Ma and Mike Zheng Shou},
  title     = {LiveCC: Learning Video LLM with Streaming Speech Transcription at Scale},
  booktitle = {CVPR},
  year      = {2025},
}
```

## Dataset Card Contact

[Joya Chen](https://chenjoya.github.io/)