---
license: cc
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: ©️ Common Crawl Creative Commons
language:
- afr
- deu
- eng
- fra
- fry
- ita
- nld
- spa
- af
- de
- en
- fr
- fy
- it
- nl
- es
configs:
- config_name: default
  data_files: data/**/*.parquet
# Languages
- config_name: afr
  data_files: data/**/afr/*.parquet
- config_name: deu
  data_files: data/**/deu/*.parquet
- config_name: eng
  data_files: data/**/eng/*.parquet
- config_name: spa
  data_files: data/**/spa/*.parquet
- config_name: fra
  data_files: data/**/fra/*.parquet
- config_name: fry
  data_files: data/**/fry/*.parquet
- config_name: ita
  data_files: data/**/ita/*.parquet
- config_name: nld
  data_files: data/**/nld/*.parquet
# Per-crawl
# CC-MAIN-2019-30
- config_name: CC-MAIN-2019-30
  data_files: data/CC-MAIN-2019-30/**/*.parquet
- config_name: CC-MAIN-2019-30-afr
  data_files: data/CC-MAIN-2019-30/afr/*.parquet
- config_name: CC-MAIN-2019-30-deu
  data_files: data/CC-MAIN-2019-30/deu/*.parquet
- config_name: CC-MAIN-2019-30-eng
  data_files: data/CC-MAIN-2019-30/eng/*.parquet
- config_name: CC-MAIN-2019-30-spa
  data_files: data/CC-MAIN-2019-30/spa/*.parquet
- config_name: CC-MAIN-2019-30-fra
  data_files: data/CC-MAIN-2019-30/fra/*.parquet
- config_name: CC-MAIN-2019-30-fry
  data_files: data/CC-MAIN-2019-30/fry/*.parquet
- config_name: CC-MAIN-2019-30-ita
  data_files: data/CC-MAIN-2019-30/ita/*.parquet
- config_name: CC-MAIN-2019-30-nld
  data_files: data/CC-MAIN-2019-30/nld/*.parquet
# CC-MAIN-2020-05
- config_name: CC-MAIN-2020-05
  data_files: data/CC-MAIN-2020-05/**/*.parquet
- config_name: CC-MAIN-2020-05-afr
  data_files: data/CC-MAIN-2020-05/afr/*.parquet
- config_name: CC-MAIN-2020-05-deu
  data_files: data/CC-MAIN-2020-05/deu/*.parquet
- config_name: CC-MAIN-2020-05-eng
  data_files: data/CC-MAIN-2020-05/eng/*.parquet
- config_name: CC-MAIN-2020-05-spa
  data_files: data/CC-MAIN-2020-05/spa/*.parquet
- config_name: CC-MAIN-2020-05-fra
  data_files: data/CC-MAIN-2020-05/fra/*.parquet
- config_name: CC-MAIN-2020-05-fry
  data_files: data/CC-MAIN-2020-05/fry/*.parquet
- config_name: CC-MAIN-2020-05-ita
  data_files: data/CC-MAIN-2020-05/ita/*.parquet
- config_name: CC-MAIN-2020-05-nld
  data_files: data/CC-MAIN-2020-05/nld/*.parquet
# CC-MAIN-2023-06
- config_name: CC-MAIN-2023-06
  data_files: data/CC-MAIN-2023-06/**/*.parquet
- config_name: CC-MAIN-2023-06-afr
  data_files: data/CC-MAIN-2023-06/afr/*.parquet
- config_name: CC-MAIN-2023-06-deu
  data_files: data/CC-MAIN-2023-06/deu/*.parquet
- config_name: CC-MAIN-2023-06-eng
  data_files: data/CC-MAIN-2023-06/eng/*.parquet
- config_name: CC-MAIN-2023-06-spa
  data_files: data/CC-MAIN-2023-06/spa/*.parquet
- config_name: CC-MAIN-2023-06-fra
  data_files: data/CC-MAIN-2023-06/fra/*.parquet
- config_name: CC-MAIN-2023-06-fry
  data_files: data/CC-MAIN-2023-06/fry/*.parquet
- config_name: CC-MAIN-2023-06-ita
  data_files: data/CC-MAIN-2023-06/ita/*.parquet
- config_name: CC-MAIN-2023-06-nld
  data_files: data/CC-MAIN-2023-06/nld/*.parquet
# CC-MAIN-2024-51
- config_name: CC-MAIN-2024-51
  data_files: data/CC-MAIN-2024-51/**/*.parquet
- config_name: CC-MAIN-2024-51-afr
  data_files: data/CC-MAIN-2024-51/afr/*.parquet
- config_name: CC-MAIN-2024-51-deu
  data_files: data/CC-MAIN-2024-51/deu/*.parquet
- config_name: CC-MAIN-2024-51-eng
  data_files: data/CC-MAIN-2024-51/eng/*.parquet
- config_name: CC-MAIN-2024-51-spa
  data_files: data/CC-MAIN-2024-51/spa/*.parquet
- config_name: CC-MAIN-2024-51-fra
  data_files: data/CC-MAIN-2024-51/fra/*.parquet
- config_name: CC-MAIN-2024-51-fry
  data_files: data/CC-MAIN-2024-51/fry/*.parquet
- config_name: CC-MAIN-2024-51-ita
  data_files: data/CC-MAIN-2024-51/ita/*.parquet
- config_name: CC-MAIN-2024-51-nld
  data_files: data/CC-MAIN-2024-51/nld/*.parquet
# CC-MAIN-2024-46
- config_name: CC-MAIN-2024-46
  data_files: data/CC-MAIN-2024-46/**/*.parquet
- config_name: CC-MAIN-2024-46-afr
  data_files: data/CC-MAIN-2024-46/afr/*.parquet
- config_name: CC-MAIN-2024-46-deu
  data_files: data/CC-MAIN-2024-46/deu/*.parquet
- config_name: CC-MAIN-2024-46-eng
  data_files: data/CC-MAIN-2024-46/eng/*.parquet
- config_name: CC-MAIN-2024-46-spa
  data_files: data/CC-MAIN-2024-46/spa/*.parquet
- config_name: CC-MAIN-2024-46-fra
  data_files: data/CC-MAIN-2024-46/fra/*.parquet
- config_name: CC-MAIN-2024-46-fry
  data_files: data/CC-MAIN-2024-46/fry/*.parquet
- config_name: CC-MAIN-2024-46-ita
  data_files: data/CC-MAIN-2024-46/ita/*.parquet
- config_name: CC-MAIN-2024-46-nld
  data_files: data/CC-MAIN-2024-46/nld/*.parquet
---

> **Raw Common Crawl crawls, annotated with potential Creative Commons license information**

**The licensing information is extracted from web pages based on whether they link to Creative Commons licenses, but false positives may occur!** While further filtering on the location type of the license should improve precision (e.g. by removing hyperlink (`a_tag`) references), false positives may still occur. **See Recommendations and Caveats below!**

## Code

I am very grateful to the Flemish Supercomputer Center for providing the compute necessary to create this dataset, but as you can tell there is still a lot of data left to be processed. Therefore, I am happy to collaborate with others to process as many Common Crawl crawls as possible. [Shoot me a message](mailto:bram.vanroy@kuleuven.be) if you want to sponsor this project with compute! You can also simply run the code yourself if you'd like: the whole code base, built on `datatrove`, is available on [GitHub](https://github.com/BramVanroy/CommonCrawl-CreativeCommons). If you use the code, please [reference my work](https://github.com/BramVanroy/CommonCrawl-CreativeCommons?tab=readme-ov-file#citation) accordingly and share your processed crawls with the rest of the world (or get in touch with me so I can add them to this repo).

## Usage

```python
from datasets import load_dataset

# Everything -- massive, you will need streaming
ds = load_dataset("BramVanroy/CommonCrawl-CreativeCommons", streaming=True)

# Single dump, all languages -- large, you may need streaming on non-server hardware
ds = load_dataset("BramVanroy/CommonCrawl-CreativeCommons", "CC-MAIN-2019-30")

# Single language, all dumps -- very large, you will likely need streaming
ds = load_dataset("BramVanroy/CommonCrawl-CreativeCommons", "nld", streaming=True)

# Single language, single dump
ds = load_dataset("BramVanroy/CommonCrawl-CreativeCommons", "CC-MAIN-2019-30-nld")
```

## Fields

In some cases, multiple licenses are found on a single page. All of these are collected in `potential_licenses`. From these, a "best guess" is selected based on three criteria:

1. `location_preference_order`: meta_tag, json-ld, link_tag, a_tag
2. `head_preference_order`: True, False
3. `footer_preference_order`: True, False

The license that best satisfies these criteria is reported in the `license_*` columns. Disagreement between multiple potential licenses is flagged in `license_disagreement`.
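To make this selection concrete, below is a minimal sketch of how such a preference-based choice could be implemented. It is an illustration only, not the code used to build the dataset; it assumes the `potential_licenses` sub-fields documented in the field list below.

```python
# Illustrative sketch (not the official pipeline code) of picking a single
# "best guess" license from the parallel lists in `potential_licenses`.
LOCATION_ORDER = ["meta_tag", "json-ld", "link_tag", "a_tag"]

def best_guess(potential_licenses: dict) -> dict:
    # Rebuild one candidate dict per found license from the parallel lists
    candidates = [
        {
            "abbr": abbr,
            "version": version,
            "location": location,
            "in_head": in_head,
            "in_footer": in_footer,
        }
        for abbr, version, location, in_head, in_footer in zip(
            potential_licenses["abbr"],
            potential_licenses["version"],
            potential_licenses["location"],
            potential_licenses["in_head"],
            potential_licenses["in_footer"],
        )
    ]
    # Lower sort keys are preferred: better location first, then in-head, then in-footer
    return min(
        candidates,
        key=lambda c: (
            LOCATION_ORDER.index(c["location"]),
            not c["in_head"],
            not c["in_footer"],
        ),
    )
```

For a page where one license was found in a `meta_tag` and another only in an `a_tag`, this would return the `meta_tag` entry.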
Each sample contains the following fields:

- text: the extracted text (unmodified)
- id: WARC-Record-ID
- dump: Common Crawl crawl
- url: original URL of the document
- date: crawl date
- file_path: file path on the S3 bucket
- license_abbr: the license type. Possible values: "cc-unknown" (recommended to filter this one out), "by", "by-sa", "by-nd", "by-nc", "by-nc-sa", "by-nc-nd", "zero", "certification", "mark". If multiple licenses were found (see `potential_licenses`), the best guess is reported here
- license_version: the license version, e.g. "4.0"
- license_location: the location where the license was found. Possible values: "meta_tag", "json-ld", "link_tag", "a_tag"
- license_in_head: whether the license was found inside a `head` HTML element
- license_in_footer: whether the license was found inside a `footer` HTML element, or an HTML element that had `footer` in its ID or class name
- potential_licenses:
  - abbr: list of all found license abbreviations
  - version: list of all found license versions
  - location: list of all found license locations
  - in_head: list of whether licenses were found in the head
  - in_footer: list of whether licenses were found in a footer
- license_parse_error: whether there was a problem when trying to extract the license, e.g. an unparseable HTML document
- license_disagreement: whether the entries in `potential_licenses["abbr"]` disagree, i.e., different types of licenses were found. License *versions* are not included in the comparison!
- language: the language, as detected by GlotLID
- language_score: the language identification confidence score
- found_in_fw: whether this sample was found in FineWeb(-2). For non-English languages, crawls more recent than FineWeb-2 (everything after CC-MAIN-2024-18) are marked as None. For English, crawls more recent than FineWeb v1.3 (everything after CC-MAIN-2024-51) are marked as None.

## Progress

The aim is to process at least all five RedPajama crawls as well as `CC-MAIN-2019-30`.

Done:

- CC-MAIN-2019-30
- CC-MAIN-2020-05
- CC-MAIN-2023-06
- CC-MAIN-2024-51
- CC-MAIN-2024-46
- CC-MAIN-2025-05

Running:

- CC-MAIN-2021-04
- CC-MAIN-2022-05

## Languages

The following languages are included:

- Afrikaans: afr
- German: deu
- English: eng
- French: fra
- Frisian: fry
- Italian: ita
- Dutch: nld
- Spanish: spa

## Recommendations and Caveats

- Raw Common Crawl data is processed in an attempt to extract licensing information. No quality filtering is done! It is **highly** recommended to filter this data further on quality, fluency, toxicity, etc.
- Similarly, the data has **not been deduplicated**.
- The licenses include all possible Creative Commons licenses, including non-commercial ones. Take care about what kind of data you wish to use, and filter out non-commercial licenses when needed.
- The column `license_disagreement` indicates whether multiple licenses with different abbreviations were found, e.g. `cc-by` and `cc-by-nc`. It is recommended to filter these out.
- The column `license_parse_error` indicates whether an error occurred when parsing the license. You probably want to filter out documents where this was the case, though this should be extremely rare.
- Unsurprisingly, the data contains a lot of Wikipedia/Wikimedia content. Depending on what you need, you may wish to filter those out. For Wikipedia specifically, you may opt to use the more thoroughly parsed (but potentially more outdated) [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) set.
- In exceptional cases, a link to creativecommons.org is found but the exact license could not be determined. These documents get `license_abbr="cc-unknown"`, which you may wish to filter out.
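As noted at the top of this card, you can further improve precision by filtering on the license location, since a plain `a_tag` hyperlink to creativecommons.org is the weakest signal. Below is a minimal sketch of such a filter; it is an illustration based on the field definitions above, not part of the official pipeline.

```python
from datasets import load_dataset

ds = load_dataset("BramVanroy/CommonCrawl-CreativeCommons", "CC-MAIN-2019-30-nld", split="train")

# Keep only documents whose best-guess license was found in a stronger location
# than a plain hyperlink (i.e. meta_tag, json-ld or link_tag)
ds = ds.filter(lambda x: x["license_location"] != "a_tag", num_proc=16)
```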
More generally, the following filter is a recommended starting point:

```python
from datasets import load_dataset

ds = load_dataset("BramVanroy/CommonCrawl-CreativeCommons", "CC-MAIN-2019-30", split="train")

ds = ds.filter(
    lambda x: (
        (not x["license_disagreement"]) and  # Only use pages with a consistent license
        x["found_in_fw"] and  # Only use pages that are in FineWeb(-2)
        "nc" not in x["license_abbr"] and  # Exclude non-commercial licenses
        x["license_abbr"] != "cc-unknown" and  # Exclude unknown licenses
        "wiki" not in x["url"]  # Exclude Wiki-like pages (best to get those from a more reliable parser)
    ),
    num_proc=16
)
```

## Citation

```bibtex
@software{Vanroy_CommonCrawl-CreativeCommons_2025,
  author = {Vanroy, Bram},
  license = {GPL-3.0},
  month = feb,
  title = {{CommonCrawl-CreativeCommons}},
  url = {https://github.com/BramVanroy/CommonCrawl-CreativeCommons},
  version = {1.3.0},
  year = {2025}
}
```

## Acknowledgments

- The [Common Crawl](https://commoncrawl.org/) non-profit organization.
- [TNO](https://www.tno.nl/nl/), who funded the work hours needed to develop this code. They intend to use (parts of) [the generated material](https://huggingface.co/datasets/BramVanroy/CommonCrawl-CreativeCommons) for the [GPT-NL project](https://gpt-nl.nl/).
- [Flemish Supercomputer Center](https://www.vscentrum.be/) for part of the compute under grant 2024-107.
- Guilherme Penedo ([@guipenedo](https://huggingface.co/guipenedo)) and the rest of the [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) and [datatrove](https://github.com/huggingface/datatrove) team for their help and insights.
- ML6, and specifically Robin Van Craenenbroek, for their [Fondant Creative Commons](https://github.com/ml6team/fondant-usecase-filter-creative-commons/tree/add-fondant-usecase-cc-image-extraction) filter for image datasets. While my approach is different, their code served as inspiration.