BigDong committed · commit 05be785 · parent: dcc9c14

update readme and create necessary folders
README.md CHANGED
@@ -1,4 +1,18 @@
 ---
+configs:
+- config_name: default
+  data_files:
+  - split: en
+    path: data/ultrafineweb_en/*
+  - split: zh
+    path: data/ultrafineweb_zh/*
+  features:
+  - name: content
+    dtype: string
+  - name: score
+    dtype: float
+  - name: source
+    dtype: string
 task_categories:
 - text-generation
 language:
@@ -8,4 +22,43 @@ pretty_name: Ultra-FineWeb
 size_categories:
 - n>1T
 ---
-# Ultra-FineWeb
+# Ultra-FineWeb
+
+- [📜 Technical Report]()
+- [💻 Github Repo]()
+- [🤗 Classifier Models]()
+
+## 📚 Introduction
+
+The Ultra-FineWeb datasets contain approximately 1 trillion English tokens and 120 billion Chinese tokens.
+
+## 📢 What's New
+
+- **[2025.xx.xx]** The **Ultra-FineWeb** technical report is available on [arXiv](). 🔥🔥🔥
+- Datasets and models are coming soon... 🔜🚀
+
+## 💡 Highlights
+
+## 📈 Evaluation Results
+
+## ❤️ Acknowledgements
+
+- The ***Ultra-FineWeb classifier*** is built on [fastText](https://fasttext.cc/).
+- The ***Ultra-FineWeb-en dataset*** is built on [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb).
+- The ***Ultra-FineWeb-zh dataset*** is constructed from [IndustryCorpus2](https://huggingface.co/datasets/BAAI/IndustryCorpus2), [MiChao](https://opendatalab.com/OpenDataLab/MiChao), [WuDao](https://data.baai.ac.cn/details/WuDaoCorporaText), [SkyPile](https://huggingface.co/datasets/Skywork/SkyPile-150B), [WanJuan](https://opendatalab.com/OpenDataLab/WanJuanCC), [ChineseWebText](https://huggingface.co/datasets/CASIA-LM/ChineseWebText2.0), [TeleChat](https://huggingface.co/datasets/Tele-AI/TeleChat-PTD), and [CCI3](https://huggingface.co/datasets/BAAI/CCI3-Data).
+
+Thanks for their awesome work! Open-source contributions make Ultra-FineWeb possible! 🙌
+
+## 🌟 Citation
+
+If you find our work useful, please consider citing:
+
+```bibtex
+Coming soon...
+```
+
+## 💳 License
+
+This project is released under the [MIT License](./LICENSE). Note that since ***Ultra-FineWeb*** is built from multiple source datasets, users should check the **license of each source dataset individually** to ensure proper usage and compliance.
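The config in the diff above declares `en` and `zh` splits whose records carry three fields: `content` (string), `score` (float), and `source` (string). As a minimal sketch of how downstream code might use the `score` field to keep only high-quality records — the sample rows and the 0.5 threshold below are illustrative assumptions, not values from the dataset:

```python
# Hypothetical records following the card's declared features:
# content (string), score (float), source (string).
records = [
    {"content": "A well-structured technical article.", "score": 0.92, "source": "fineweb"},
    {"content": "Repetitive boilerplate text.", "score": 0.18, "source": "fineweb"},
]

def keep_high_quality(rows, threshold=0.5):
    """Keep rows whose quality score is at least `threshold`."""
    return [row for row in rows if row["score"] >= threshold]

kept = keep_high_quality(records)
print(len(kept))  # number of rows passing the threshold
```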
data/ultrafineweb_en/.gitkeep ADDED
File without changes
data/ultrafineweb_zh/.gitkeep ADDED
File without changes
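The Acknowledgements above note that the Ultra-FineWeb classifier is built on fastText. The authors' actual pipeline is not shown in this card; purely to illustrate the general idea, here is a stdlib-only sketch of a fastText-style quality classifier — hashed bag-of-word-n-gram features averaged into a logistic model. Every name, sample, and hyperparameter below is a hypothetical stand-in, not the released classifier.

```python
import math
import re

BUCKETS = 1 << 12  # hashed feature space (fastText-style hashing trick)

def featurize(text):
    """Hash lowercase word unigrams and bigrams into BUCKETS slots."""
    words = re.findall(r"\w+", text.lower())
    grams = words + [" ".join(pair) for pair in zip(words, words[1:])]
    return [hash(g) % BUCKETS for g in grams]

def train(samples, epochs=30, lr=0.5):
    """Fit a logistic model; samples are (text, label), label 1 = high quality."""
    weights, bias = [0.0] * BUCKETS, 0.0
    for _ in range(epochs):
        for text, label in samples:
            feats = featurize(text)
            n = max(len(feats), 1)
            score = bias + sum(weights[f] for f in feats) / n
            pred = 1.0 / (1.0 + math.exp(-score))  # sigmoid
            grad = lr * (label - pred)             # gradient of log-loss
            bias += grad
            for f in feats:
                weights[f] += grad / n             # averaged feature update
    return weights, bias

def quality_score(model, text):
    """Probability-like quality score in [0, 1]."""
    weights, bias = model
    feats = featurize(text)
    n = max(len(feats), 1)
    return 1.0 / (1.0 + math.exp(-(bias + sum(weights[f] for f in feats) / n)))

# Toy training data (hypothetical, not drawn from Ultra-FineWeb):
samples = [
    ("carefully researched article with citations and clear structure", 1),
    ("detailed explanation with worked examples and analysis", 1),
    ("click here buy now free prize free prize", 0),
    ("win money now click subscribe spam spam spam", 0),
]
model = train(samples)
```

In a real pipeline this scoring step is what produces the per-record `score` field declared in the card's features.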