---
license: odc-by
---

This repository provides all data recipes used in the [DataDecide paper](http://www.arxiv...).

## Recipes

| **Source** | **Recipe** | **Description** |
|------------------------------------|--------------------------------|-----------------|
| Dolma1.7 | Original | The 1.7 release of the Dolma dataset (Soldaini et al., 2024), a 2.3 trillion token corpus sampling sources commonly used in LM training. |
| Dolma1.7 | No code | Dolma1.7 with code-related subsets (Starcoder, StackExchange) removed. |
| Dolma1.7 | No math, code | Dolma1.7 excluding OpenWebMath, arXiv STEM papers, Starcoder, StackExchange, and Algebraic Stack. |
| Dolma1.7 | No Reddit | Dolma1.7 with the Reddit subset excluded. |
| Dolma1.7 | No Flan | Dolma1.7 with the Flan subset removed. |
| Dolma1.6++ | Original | Dolma1.6 with additional sources from Dolma1.7: RedPajama ArXiv, OpenWebMath, Algebraic Stack, Flan, Starcoder, and Falcon. |
| C4 | Original | The C4 dataset (Raffel et al., 2020) as processed in Dolma1.7, derived from the April 2019 Common Crawl with automatic filtering. |
| FineWeb-Pro | [Original](https://huggingface.co/datasets/HuggingFaceFW/fineweb) | FineWeb-Pro (Zhou et al., 2024), created using a model-guided approach to apply programmatic cleaning over FineWeb. |
| FineWeb-Edu | [Original](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus/viewer/fineweb-edu-dedup) | FineWeb-Edu (Benallal et al., 2024), the deduplicated subset of SmolLM-Corpus filtered by an educational quality classifier. |
| Falcon | Original | Falcon RefinedWeb (Penedo et al., 2023) as used in Dolma1.7, built from all Common Crawl through June 2023 and aggressively filtered. |
| Falcon+CC | Original | Unfiltered combination of Falcon RefinedWeb and Dolma1.7's Common Crawl data. |
| Falcon+CC | QC 10% | Top 10% of Falcon+CC by a reproduction of the DCLM quality filter (Li et al., 2024). |
| Falcon+CC | QC 20% | Top 20% of Falcon+CC by the reproduced DCLM filter. |
| Falcon+CC | QC Orig 10% | Top 10% using the original DCLM-provided quality filter. |
| Falcon+CC | QC Tulu 10% | Top 10% filtered using a classifier trained on pre-release Tulu-v3 (Lambert et al., 2024). |
| DCLM-Baseline | Original | DCLM-Baseline from Li et al. (2024). |
| DCLM-Baseline | QC 7% FW2 | Top 7% by the DCLM filter, then filtered with the FineWeb-Edu classifier, keeping only documents scored ≥ 2. |
| DCLM-Baseline | QC 7% FW3 | Same as above, but restricted to documents scored ≥ 3. |
| DCLM-Baseline | QC FW 10% | Filtered using the FineWeb-Edu classifier, top 10% retained. |
| DCLM-Baseline | QC FW 3% | Same as above, but only the top 3% retained. |
| DCLM-Baseline | QC 10% | Top 10% retained using a classifier fine-tuned on OpenHermes and Reddit ELI5. |
| DCLM-Baseline | QC 20% | Same as above, but retaining the top 20%. |
| DCLM-Baseline 25% / Dolma 75% | 75% Dolma / 25% DCLM | Mixed dataset: 75% Dolma1.7 and 25% DCLM-Baseline. |
| DCLM-Baseline 50% / Dolma 50% | 50% Dolma / 50% DCLM | Mixed dataset: 50% Dolma1.7 and 50% DCLM-Baseline. |
| DCLM-Baseline 75% / Dolma 25% | 25% Dolma / 75% DCLM | Mixed dataset: 25% Dolma1.7 and 75% DCLM-Baseline. |
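
The QC recipes above all share one mechanism: score every document with a quality classifier and keep only the highest-scoring fraction of the corpus. The sketch below illustrates that top-k% selection in the abstract; it is not the paper's actual pipeline, and `score` is a hypothetical stand-in for a real classifier such as the DCLM or FineWeb-Edu filters.

```python
from typing import Callable

def quality_filter(docs: list[str], score: Callable[[str], float], keep_frac: float) -> list[str]:
    """Keep the top `keep_frac` fraction of `docs` by quality score."""
    ranked = sorted(docs, key=score, reverse=True)  # highest-scoring documents first
    n_keep = int(len(ranked) * keep_frac)           # e.g. keep_frac=0.10 for "QC 10%"
    return ranked[:n_keep]

# Toy example: document length stands in for a real quality classifier.
docs = ["short", "a much longer and more informative document", "medium length text"]
top = quality_filter(docs, score=len, keep_frac=0.34)  # keeps 1 of 3 documents
```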
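
The last three rows mix Dolma1.7 and DCLM-Baseline at fixed ratios. As a rough illustration only (not the paper's tooling), example-level interleaving with the Hugging Face `datasets` library can approximate such a mix; the dataset identifiers below are placeholders, and interleaving by example probability only approximates a token-level ratio.

```python
from datasets import load_dataset, interleave_datasets

# Placeholder identifiers; substitute the actual Dolma1.7 and
# DCLM-Baseline sources used for the recipe.
dolma = load_dataset("allenai/dolma", split="train", streaming=True)
dclm = load_dataset("mlfoundations/dclm-baseline-1.0", split="train", streaming=True)

# Draw ~75% of examples from Dolma1.7 and ~25% from DCLM-Baseline.
mixed = interleave_datasets([dolma, dclm], probabilities=[0.75, 0.25], seed=42)
```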