---
license: cc
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: ©️ Common Crawl Creative Commons
language:
- afr
- deu
- eng
- fra
- fry
- ita
- nld
- spa
- af
- de
- en
- fr
- fy
- it
- nl
- es
configs:
- config_name: default
data_files: data/**/*.parquet
# Languages
- config_name: afr
data_files: data/**/afr/*.parquet
- config_name: deu
data_files: data/**/deu/*.parquet
- config_name: eng
data_files: data/**/eng/*.parquet
- config_name: spa
data_files: data/**/spa/*.parquet
- config_name: fra
data_files: data/**/fra/*.parquet
- config_name: fry
data_files: data/**/fry/*.parquet
- config_name: ita
data_files: data/**/ita/*.parquet
- config_name: nld
data_files: data/**/nld/*.parquet
# Per-crawl
# CC-MAIN-2019-30
- config_name: CC-MAIN-2019-30
data_files: data/CC-MAIN-2019-30/**/*.parquet
- config_name: CC-MAIN-2019-30-afr
data_files: data/CC-MAIN-2019-30/afr/*.parquet
- config_name: CC-MAIN-2019-30-deu
data_files: data/CC-MAIN-2019-30/deu/*.parquet
- config_name: CC-MAIN-2019-30-eng
data_files: data/CC-MAIN-2019-30/eng/*.parquet
- config_name: CC-MAIN-2019-30-spa
data_files: data/CC-MAIN-2019-30/spa/*.parquet
- config_name: CC-MAIN-2019-30-fra
data_files: data/CC-MAIN-2019-30/fra/*.parquet
- config_name: CC-MAIN-2019-30-fry
data_files: data/CC-MAIN-2019-30/fry/*.parquet
- config_name: CC-MAIN-2019-30-ita
data_files: data/CC-MAIN-2019-30/ita/*.parquet
- config_name: CC-MAIN-2019-30-nld
data_files: data/CC-MAIN-2019-30/nld/*.parquet
# CC-MAIN-2020-05
- config_name: CC-MAIN-2020-05
data_files: data/CC-MAIN-2020-05/**/*.parquet
- config_name: CC-MAIN-2020-05-afr
data_files: data/CC-MAIN-2020-05/afr/*.parquet
- config_name: CC-MAIN-2020-05-deu
data_files: data/CC-MAIN-2020-05/deu/*.parquet
- config_name: CC-MAIN-2020-05-eng
data_files: data/CC-MAIN-2020-05/eng/*.parquet
- config_name: CC-MAIN-2020-05-spa
data_files: data/CC-MAIN-2020-05/spa/*.parquet
- config_name: CC-MAIN-2020-05-fra
data_files: data/CC-MAIN-2020-05/fra/*.parquet
- config_name: CC-MAIN-2020-05-fry
data_files: data/CC-MAIN-2020-05/fry/*.parquet
- config_name: CC-MAIN-2020-05-ita
data_files: data/CC-MAIN-2020-05/ita/*.parquet
- config_name: CC-MAIN-2020-05-nld
data_files: data/CC-MAIN-2020-05/nld/*.parquet
---
> **Raw Common Crawl crawls, annotated with potential Creative Commons license information**

**The licensing information is extracted from web pages based on whether they link to a Creative Commons license, so false positives may occur!** Further filtering based on the location type of the license (e.g. removing hyperlink (`a_tag`) references) should improve precision, but false positives can still slip through. **See Recommendations and Caveats below!**
## Usage
```python
from datasets import load_dataset
# Everything
ds = load_dataset("BramVanroy/CommonCrawl-CreativeCommons")
# Single dump, all languages
ds = load_dataset("BramVanroy/CommonCrawl-CreativeCommons", "CC-MAIN-2019-30")
# Single language, all dumps
ds = load_dataset("BramVanroy/CommonCrawl-CreativeCommons", "nld")
# Single language, single dump
ds = load_dataset("BramVanroy/CommonCrawl-CreativeCommons", "CC-MAIN-2019-30-nld")
```
## Fields
In some cases, multiple licenses are found on a single page. All of them are collected in `potential_licenses`. From these, the "best guess" is selected based on three criteria, in order:

1. `location_preference_order`: `meta_tag`, `json-ld`, `link_tag`, `a_tag`
2. `head_preference_order`: `True`, `False`
3. `footer_preference_order`: `True`, `False`

Based on these criteria, the best-guess license is the one stored in the `license_*` columns. Disagreement between the candidate licenses is flagged in `license_disagreement`.
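The preference ordering above can be sketched in plain Python. This is a simplified illustration of the selection logic, not the dataset's actual implementation; the dictionary layout mirrors the `potential_licenses` field described below:

```python
# Simplified sketch of how a "best guess" license could be chosen from
# potential_licenses -- an illustration, not the dataset's actual code.
LOCATION_ORDER = ["meta_tag", "json-ld", "link_tag", "a_tag"]

def best_license(potential):
    """Pick the preferred candidate by location, then in_head, then in_footer."""
    candidates = list(zip(
        potential["abbr"], potential["version"],
        potential["location"], potential["in_head"], potential["in_footer"],
    ))
    # Lower sort key = higher preference: preferred location first,
    # then in_head=True before False, then in_footer=True before False.
    candidates.sort(key=lambda c: (LOCATION_ORDER.index(c[2]), not c[3], not c[4]))
    return candidates[0]

example = {
    "abbr": ["by-nc", "by"],
    "version": ["4.0", "4.0"],
    "location": ["a_tag", "meta_tag"],
    "in_head": [False, True],
    "in_footer": [True, False],
}
print(best_license(example))  # ('by', '4.0', 'meta_tag', True, False)
```

Here the `meta_tag` candidate wins over the `a_tag` one, so the `license_*` columns would report `by`, while `license_disagreement` would be `True` because the two candidates name different license types.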
- text: the extracted text (unmodified)
- id: WARC-Record-ID
- dump: Common Crawl crawl
- url: original url for document
- date: crawl date
- file_path: file path on the S3 bucket
- license_abbr: the license type. Possible values: "cc-unknown" (recommended to filter this one out), "by", "by-sa", "by-nd", "by-nc", "by-nc-sa", "by-nc-nd", "zero", "certification", "mark". If multiple licenses were found, this is the best guess; all candidates are listed in `potential_licenses`
- license_version: the license version, e.g. "4.0"
- license_location: the location where the license was found. Possible values: "meta_tag", "json-ld", "link_tag", "a_tag"
- license_in_head: whether the license was found inside a `head` HTML element
- license_in_footer: whether the license was found inside a `footer` HTML element, or an HTML element that had `footer` in the ID or class name
- potential_licenses:
- abbr: list of all found license abbreviations
- version: list of all found license versions
- location: list of all found license locations
- in_head: list of whether licenses were found in the head
- in_footer: list of whether licenses were found in a footer
- license_parse_error: whether there was a problem when trying to extract the license, e.g. an unparseable HTML document
- license_disagreement: whether the `potential_licenses["abbr"]` disagree, i.e., different types of licenses were found. License *versions* are not included in the comparison!
- language: the language, as detected by fastText `ft176`
- language_score: the language identification confidence score
- found_in_fw2: whether this sample was found in FineWeb-2. Crawls more recent than FineWeb-2 (everything after 2024-18) are marked as None, **as are all English samples**
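Note that `license_disagreement` compares only license *types*: candidates that differ solely in version do not count as a disagreement. A minimal sketch of that comparison (illustrative, not the dataset's actual code):

```python
def has_disagreement(abbrs):
    """True when candidates name different license types; versions are ignored."""
    return len(set(abbrs)) > 1

# Different types ("by" vs "by-nc") -> disagreement
print(has_disagreement(["by", "by-nc"]))  # True
# Same type, even if the versions were 3.0 and 4.0 -> no disagreement
print(has_disagreement(["by-sa", "by-sa"]))  # False
```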
## Progress
The goal is to process at least the five RedPajama crawls plus `CC-MAIN-2019-30`.
Done:
- CC-MAIN-2019-30
Running:
- CC-MAIN-2020-05
- CC-MAIN-2021-04
To do:
- CC-MAIN-2022-05
- CC-MAIN-2023-06
- CC-MAIN-2024-51
- CC-MAIN-2025-05
## Languages
The following languages are included.
- Afrikaans: afr
- German: deu
- English: eng
- French: fra
- Frisian: fry
- Italian: ita
- Dutch: nld
- Spanish: spa
## Recommendations and Caveats
- Raw Common Crawl data is processed in an attempt to extract licensing information; no quality filtering has been applied! It is **highly** recommended to filter this data further on quality, fluency, toxicity, etc.
- Similarly, the data has **not been deduplicated**.
- The licenses include all possible Creative Commons licenses, including non-commercial ones. Take care about what kind of data you wish to use, and filter out non-commercial licenses when needed.
- The column `license_disagreement` indicates whether multiple licenses were found that do not share the same abbreviation, e.g. `cc-by` and `cc-by-nc`. It is recommended to filter these out.
- The column `license_parse_error` indicates whether an error occurred when parsing the license. You probably want to filter out documents where this was the case, though this should be extremely rare.
- Unsurprisingly, the data contains a lot of Wikipedia/Wikimedia content. Depending on what you need, you may wish to filter those out. For Wikipedia specifically, you may opt to use the more thoroughly parsed (but potentially more outdated) [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) set.
- In exceptional cases, a link to creativecommons.org is found but the exact license could not be found. These are under `license_abbr="cc-unknown"` which you may wish to filter out.
Recommendation:
```python
from datasets import load_dataset
ds = load_dataset("BramVanroy/CommonCrawl-CreativeCommons", "CC-MAIN-2019-30", split="train")
ds = ds.filter(
    lambda x: (
        (not x["license_disagreement"]) and  # Only use pages with a consistent license
        (x["found_in_fw2"] or x["language"] == "eng") and  # Only use pages that are in FineWeb-2 (non-English) or English
        "nc" not in x["license_abbr"] and  # Exclude non-commercial licenses
        x["license_abbr"] != "cc-unknown" and  # Exclude unknown licenses
        "wiki" not in x["url"]  # Exclude wiki-like pages (best to get those from a more reliable parser)
    ),
    num_proc=96
)
```
## Citation
```bibtex
@software{Vanroy_CommonCrawl-CreativeCommons_2025,
author = {Vanroy, Bram},
license = {GPL-3.0},
month = feb,
title = {{CommonCrawl-CreativeCommons}},
url = {https://github.com/BramVanroy/CommonCrawl-CreativeCommons},
version = {1.1.0},
year = {2025}
}
```
## Acknowledgments
- [TNO](https://www.tno.nl/nld/), who funded the work hours to accomplish this collection. They intend to use parts of this material for the [GPT-NL project](https://gpt-nl.nl/).
- [Flemish Supercomputer Center](https://www.vscentrum.be/) for part of the compute under grant 2024-107
- Guilherme Penedo ([@guipenedo](https://huggingface.co/guipenedo)) and the rest of the [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) and [datatrove](https://github.com/huggingface/datatrove) team for the help and insights
- ML6 and specifically Robin Van Craenenbroek for their [Fondant Creative Commons](https://github.com/ml6team/fondant-usecase-filter-creative-commons/tree/add-fondant-usecase-cc-image-extraction) filter for image datasets. While my approach is different, their code did serve as inspiration.