---
license: odc-by
language:
- en
---

We provide here all recipes used in the DataDecide paper.

## Recipes

| Source | Recipe | Description |
| --- | --- | --- |
| Dolma1.7 | Original | The 1.7 release of the Dolma dataset (Soldaini et al., 2024), a 2.3-trillion-token corpus sampling sources commonly used in LM training. |
| Dolma1.7 | No code | Dolma1.7 with code-related subsets (Starcoder, StackExchange) removed. |
| Dolma1.7 | No math, code | Dolma1.7 excluding OpenWebMath, arXiv STEM papers, Starcoder, StackExchange, and Algebraic Stack. |
| Dolma1.7 | No Reddit | Dolma1.7 with the Reddit subset removed. |
| Dolma1.7 | No Flan | Dolma1.7 with the Flan subset removed. |
| Dolma1.6++ | Original | Dolma1.6 plus additional sources from Dolma1.7: RedPajama ArXiv, OpenWebMath, Algebraic Stack, Flan, Starcoder, and Falcon. |
| C4 | Original | The C4 dataset (Raffel et al., 2020) as processed in Dolma1.7, derived from the April 2019 Common Crawl with automatic filtering. |
| FineWeb-Pro | Original | FineWeb-Pro (Zhou et al., 2024), created using a model-guided approach to apply programmatic cleaning over FineWeb. |
| FineWeb-Edu | Original | FineWeb-Edu (Ben Allal et al., 2024), the deduplicated subset of SmolLM-Corpus filtered by an educational-quality classifier. |
| Falcon | Original | Falcon RefinedWeb (Penedo et al., 2023) as used in Dolma1.7, built from all Common Crawl through June 2023 and aggressively filtered. |
| Falcon+CC | Original | Unfiltered combination of Falcon RefinedWeb and Dolma1.7's Common Crawl data. |
| Falcon+CC | QC 10% | Top 10% of Falcon+CC by a reproduction of the DCLM quality filter (Li et al., 2024). |
| Falcon+CC | QC 20% | Top 20% of Falcon+CC by the reproduced DCLM filter. |
| Falcon+CC | QC Orig 10% | Top 10% using the original DCLM-provided quality filter. |
| Falcon+CC | QC Tulu 10% | Top 10% filtered using a classifier trained on pre-release Tulu-v3 data (Lambert et al., 2024). |
| DCLM-Baseline | Original | DCLM-Baseline from Li et al. (2024). |
| DCLM-Baseline | QC 7% FW2 | Top 7% by the DCLM filter, then filtered with FineWeb-Edu, keeping only documents scored ≥ 2. |
| DCLM-Baseline | QC 7% FW3 | Same as above, but restricted to documents scored ≥ 3. |
| DCLM-Baseline | QC FW 10% | Filtered using the FineWeb-Edu classifier, retaining the top 10%. |
| DCLM-Baseline | QC FW 3% | Same as above, but retaining only the top 3%. |
| DCLM-Baseline | QC 10% | Top 10% retained using a classifier fine-tuned on OpenHermes and Reddit ELI5. |
| DCLM-Baseline | QC 20% | Same as above, but retaining the top 20%. |
| DCLM-Baseline 25% / Dolma 75% | 75% Dolma / 25% DCLM | Mixed dataset: 75% Dolma1.7 and 25% DCLM-Baseline. |
| DCLM-Baseline 50% / Dolma 50% | 50% Dolma / 50% DCLM | Mixed dataset: 50% Dolma1.7 and 50% DCLM-Baseline. |
| DCLM-Baseline 75% / Dolma 25% | 25% Dolma / 75% DCLM | Mixed dataset: 25% Dolma1.7 and 75% DCLM-Baseline. |
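Several of the QC recipes above keep the top N% of a pool by a quality-classifier score. A minimal sketch of that selection step, assuming per-document scores have already been computed (the function name and `"score"` field are hypothetical, not from the DataDecide codebase):

```python
# Keep the top `fraction` of documents by a precomputed quality score.
# `docs` is a list of dicts with a "score" field (hypothetical schema;
# the actual classifier pipelines in the paper differ).

def top_fraction(docs, fraction):
    keep = max(1, int(len(docs) * fraction))
    ranked = sorted(docs, key=lambda d: d["score"], reverse=True)
    return ranked[:keep]

docs = [{"text": f"doc{i}", "score": i / 10} for i in range(10)]
kept = top_fraction(docs, 0.20)  # analogous in spirit to a "QC 20%" recipe
print([d["text"] for d in kept])  # the two highest-scoring documents
```

The FW2/FW3 variants instead apply an absolute threshold on the classifier score (≥ 2 or ≥ 3), which is a simple filter rather than a top-N% cut.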

This dataset is licensed under ODC-BY and intended for research and educational use in accordance with Ai2's Responsible Use Guidelines.