chenjoya committed
Commit 4423fc7 · verified · 1 Parent(s): cb3c654

Update README.md

Files changed (1)
  1. README.md +51 -34
README.md CHANGED
@@ -29,47 +29,64 @@ This dataset is used for the training of the [LiveCC-7B-Base](https://huggingfac
  - **Paper**: https://arxiv.org/abs/xxxx.xxxxx
  - **Training Code**: https://www.github.com/showlab/livecc

- ### Data Sources
-
- - **Live-CC-5M**: This repository includes
  - Annotation JSONL (YouTube CC): https://huggingface.co/datasets/chenjoya/Live-CC-5M/blob/main/live_cc_5m_with_seeks.jsonl

- Due to 5M videos are too large, we are sorry that we cannot find way to share them. But,
- - You can find all YouTube IDs in the annotation JSONL
- - We have released video files in SFT set: https://huggingface.co/datasets/chenjoya/Live-WhisperX-528K
-
- This dataset contains 5,047,208 YouTube Video-CC instances, with YouTube categories:
- ```
- {'People & Blogs': 1130380, 'Howto & Style': 889359, 'Entertainment': 585838, 'Education': 453305, 'Sports': 449166, 'Autos & Vehicles': 410228, 'Science & Technology': 345308, 'Film & Animation': 197529, 'Travel & Events': 151357, 'Pets & Animals': 127535, 'Gaming': 117149, 'News & Politics': 83560, 'Nonprofits & Activism': 59140, 'Comedy': 47354}
- ```
- Each line of the JSONL file is organized in a common user/assistant conversation format with a special "text_stream" key. Example:
- ```
- [
- {"role": "user", "content": [{"type": "video", "video": "video/youtube/-4dnPeRv1ns.mp4", "video_start": 16.8, "video_end": 158.8}, {"type": "text", "text": "", "previous": "", "title": "Airsoft G&G Combat Machine M4 Review"}]},
- {"role": "assistant", "content": [{"type": "text_stream", "text_stream": [[16.8, 16.9, "all"], [16.9, 17.0, "right"], [17.0, 17.1, "you"], [17.1, 17.3, "guys"], [17.3, 17.4, "so"], [17.4, 17.5, "this"], ...]}]}
- ]
- ```
- - "title" denotes the YouTube title.
- - "previous" denotes previous ASR content before "video_start".
- - Each item in "text_stream" indicates start timestamp, end timestamp, and the word.
-
- During pre-training, we use "title" and "previous" as context. Please refer to our dataloader (https://github.com/showlab/livecc/data/lmm_dataset.py) to learn how to make it compatible with popular LMMs (e.g. QwenVL series).
-
- The last line of JSONL contains the file handle seek indices:
- ```
- [0, 3149, 7796, 10436, 18949, 22917, 41985, 65721, 73045, 76797, 82262, ...]
- ```
- This allows for easy streaming access using:

- ```
- with open('live_whisperx_528k_with_seeks.jsonl') as f:
-     f.seek(seek_index)
-     datum = json.loads(f.readline())
- ```

  ### Data Pipeline

- Please refer to Section 3 of our [paper](https://arxiv.org/abs/xxxx.xxxxx). It has been fully open-sourced at: https://github.com/showlab/livecc/data/production/pretrain

  ## Citation

  - **Paper**: https://arxiv.org/abs/xxxx.xxxxx
  - **Training Code**: https://www.github.com/showlab/livecc

+ ### Live-CC-5M Dataset
+
+ - Statistics: 5,047,208 YouTube Video-CC instances, each 30~240s long.
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/642435a1a3adbc7142c3b0a6/-RR-sI7F1a1XpxuQad2DH.png)

  - Annotation JSONL (YouTube CC): https://huggingface.co/datasets/chenjoya/Live-CC-5M/blob/main/live_cc_5m_with_seeks.jsonl
+
+ Each line of the JSONL file is organized in a common user/assistant conversation format with a special "text_stream" key. Example:
+ ```
+ [
+ {"role": "user", "content": [{"type": "video", "video": "video/youtube/-4dnPeRv1ns.mp4", "video_start": 16.8, "video_end": 158.8}, {"type": "text", "text": "", "previous": "", "title": "Airsoft G&G Combat Machine M4 Review"}]},
+ {"role": "assistant", "content": [{"type": "text_stream", "text_stream": [[16.8, 16.9, "all"], [16.9, 17.0, "right"], [17.0, 17.1, "you"], [17.1, 17.3, "guys"], [17.3, 17.4, "so"], [17.4, 17.5, "this"], ...]}]}
+ ]
+ ```
+ - "title" denotes the YouTube title.
+ - "previous" denotes previous ASR content before "video_start".
+ - Each item in "text_stream" indicates start timestamp, end timestamp, and the word.
+
+ During pre-training, we use "title" and "previous" as context. Please refer to our dataloader (https://github.com/showlab/livecc/data/lmm_dataset.py) to learn how to make it compatible with popular LMMs (e.g. QwenVL series).
 
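Below is a minimal sketch of how one decoded line can be unpacked into the video span, the "title"/"previous" context, and the timed words. It assumes exactly the field names shown in the example above; `unpack_datum` is a hypothetical helper, not part of the released dataloader.

```python
import json

# Minimal sketch: unpack one decoded annotation line (a [user, assistant] pair)
# into its video span, context fields, and timed words. Hypothetical helper.
def unpack_datum(datum: list) -> dict:
    user, assistant = datum[0], datum[1]
    video = next(c for c in user["content"] if c["type"] == "video")
    text = next(c for c in user["content"] if c["type"] == "text")
    stream = assistant["content"][0]["text_stream"]  # [[start, end, word], ...]
    return {
        "video_path": video["video"],  # e.g. "video/youtube/-4dnPeRv1ns.mp4"
        "span": (video["video_start"], video["video_end"]),
        "title": text["title"],
        "previous": text["previous"],
        "words": [(s, e, w) for s, e, w in stream],
    }

# e.g., rebuild the spoken caption of one instance:
# caption = " ".join(w for _, _, w in unpack_datum(json.loads(line))["words"])
```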
+ The last line of the JSONL file contains the file-handle seek indices:
+ ```
+ [0, 3149, 7796, 10436, 18949, 22917, 41985, 65721, 73045, 76797, 82262, ...]
+ ```
+ This allows for easy streaming access using:
+
+ ```python
+ import json
+
+ # read the last line of the jsonl (it stores the seek indices)
+ def readlastline(path: str):
+     with open(path, "rb") as f:
+         f.seek(-2, 2)  # jump to the end, skipping the trailing \n
+         while f.read(1) != b"\n":
+             f.seek(-2, 1)  # step backwards until the previous newline
+         return f.readline()
+
+ # parse to seek indices list
+ seeks = json.loads(readlastline('live_cc_5m_with_seeks.jsonl'))
+
+ # during data loading: jump directly to the `index`-th sample
+ with open('live_cc_5m_with_seeks.jsonl') as f:
+     f.seek(seeks[index])
+     datum = json.loads(f.readline())
+ ```
 
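Building on the snippet above, here is a small sketch of how the seek indices could back a map-style dataset. The `LiveCCJsonl` class and the use of `torch.utils.data.Dataset` are assumptions for illustration only; the actual dataloader is the `lmm_dataset.py` linked above.

```python
import json
from torch.utils.data import Dataset  # assumption: a PyTorch map-style dataset


def readlastline(path: str) -> bytes:
    # same trick as the snippet above: walk back from EOF to the start of the last line
    with open(path, "rb") as f:
        f.seek(-2, 2)
        while f.read(1) != b"\n":
            f.seek(-2, 1)
        return f.readline()


class LiveCCJsonl(Dataset):
    """Hypothetical wrapper: random access into the annotation JSONL via seek indices."""

    def __init__(self, path: str = "live_cc_5m_with_seeks.jsonl"):
        self.path = path
        # the last line of the file is the JSON list of per-line byte offsets
        self.seeks = json.loads(readlastline(path))

    def __len__(self):
        # assumption: every offset points at one annotation line; adjust if the
        # seek list also indexes its own (final) line
        return len(self.seeks)

    def __getitem__(self, index):
        with open(self.path) as f:
            f.seek(self.seeks[index])
            return json.loads(f.readline())
```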

+ - Videos: Because the 5M videos are too large, we are sorry that we cannot share them directly. However,
+   - You can find all YouTube IDs in the annotation JSONL (a sketch for extracting them follows below)
+   - We have released the video files of the SFT dataset: https://huggingface.co/datasets/chenjoya/Live-WhisperX-526K
+
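As noted in the list above, all YouTube IDs can be recovered from the annotation JSONL. The sketch below shows one way to collect them, assuming every "video" field follows the `video/youtube/<id>.mp4` pattern of the example datum; `collect_youtube_ids` is a hypothetical helper.

```python
import json
from pathlib import Path

# Collect YouTube IDs from the "video" field of each user turn,
# e.g. "video/youtube/-4dnPeRv1ns.mp4" -> "-4dnPeRv1ns".
def collect_youtube_ids(path: str = "live_cc_5m_with_seeks.jsonl") -> set:
    ids = set()
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue
            datum = json.loads(line)
            # skip the trailing seek-index line, which is a plain list of ints
            if not (isinstance(datum, list) and datum and isinstance(datum[0], dict)):
                continue
            for content in datum[0].get("content", []):
                if content.get("type") == "video":
                    ids.add(Path(content["video"]).stem)
    return ids
```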
  ### Data Pipeline

+ Please refer to Section 3 of our [paper](https://arxiv.org/abs/xxxx.xxxxx).
+
+ It has been fully open-sourced at: https://github.com/showlab/livecc/data/production/pretrain

  ## Citation