---
license: cc
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: ©️ Common Crawl Creative Commons
language:
- afr
- deu
- eng
- fra
- fry
- ita
- nld
- spa
- af
- de
- en
- fr
- fy
- it
- nl
- es
configs:
- config_name: v1
  data_files: 
  - data/CC-MAIN-2019-30/**/*.parquet
  - data/CC-MAIN-2020-05/**/*.parquet
  - data/CC-MAIN-2022-05/**/*.parquet
  - data/CC-MAIN-2023-06/**/*.parquet
  - data/CC-MAIN-2024-46/**/*.parquet
  - data/CC-MAIN-2024-51/**/*.parquet
  - data/CC-MAIN-2025-05/**/*.parquet
- config_name: default
  data_files: data/**/*.parquet
# Languages
- config_name: afr
  data_files: data/**/afr/*.parquet
- config_name: deu
  data_files: data/**/deu/*.parquet
- config_name: eng
  data_files: data/**/eng/*.parquet
- config_name: spa
  data_files: data/**/spa/*.parquet
- config_name: fra
  data_files: data/**/fra/*.parquet
- config_name: fry
  data_files: data/**/fry/*.parquet
- config_name: ita
  data_files: data/**/ita/*.parquet
- config_name: nld
  data_files: data/**/nld/*.parquet
# Per-crawl
# CC-MAIN-2019-30
- config_name: CC-MAIN-2019-30
  data_files: data/CC-MAIN-2019-30/**/*.parquet
- config_name: CC-MAIN-2019-30-afr
  data_files: data/CC-MAIN-2019-30/afr/*.parquet
- config_name: CC-MAIN-2019-30-deu
  data_files: data/CC-MAIN-2019-30/deu/*.parquet
- config_name: CC-MAIN-2019-30-eng
  data_files: data/CC-MAIN-2019-30/eng/*.parquet
- config_name: CC-MAIN-2019-30-spa
  data_files: data/CC-MAIN-2019-30/spa/*.parquet
- config_name: CC-MAIN-2019-30-fra
  data_files: data/CC-MAIN-2019-30/fra/*.parquet
- config_name: CC-MAIN-2019-30-fry
  data_files: data/CC-MAIN-2019-30/fry/*.parquet
- config_name: CC-MAIN-2019-30-ita
  data_files: data/CC-MAIN-2019-30/ita/*.parquet
- config_name: CC-MAIN-2019-30-nld
  data_files: data/CC-MAIN-2019-30/nld/*.parquet
# CC-MAIN-2020-05
- config_name: CC-MAIN-2020-05
  data_files: data/CC-MAIN-2020-05/**/*.parquet
- config_name: CC-MAIN-2020-05-afr
  data_files: data/CC-MAIN-2020-05/afr/*.parquet
- config_name: CC-MAIN-2020-05-deu
  data_files: data/CC-MAIN-2020-05/deu/*.parquet
- config_name: CC-MAIN-2020-05-eng
  data_files: data/CC-MAIN-2020-05/eng/*.parquet
- config_name: CC-MAIN-2020-05-spa
  data_files: data/CC-MAIN-2020-05/spa/*.parquet
- config_name: CC-MAIN-2020-05-fra
  data_files: data/CC-MAIN-2020-05/fra/*.parquet
- config_name: CC-MAIN-2020-05-fry
  data_files: data/CC-MAIN-2020-05/fry/*.parquet
- config_name: CC-MAIN-2020-05-ita
  data_files: data/CC-MAIN-2020-05/ita/*.parquet
- config_name: CC-MAIN-2020-05-nld
  data_files: data/CC-MAIN-2020-05/nld/*.parquet
# CC-MAIN-2022-05
- config_name: CC-MAIN-2022-05
  data_files: data/CC-MAIN-2022-05/**/*.parquet
- config_name: CC-MAIN-2022-05-afr
  data_files: data/CC-MAIN-2022-05/afr/*.parquet
- config_name: CC-MAIN-2022-05-deu
  data_files: data/CC-MAIN-2022-05/deu/*.parquet
- config_name: CC-MAIN-2022-05-eng
  data_files: data/CC-MAIN-2022-05/eng/*.parquet
- config_name: CC-MAIN-2022-05-spa
  data_files: data/CC-MAIN-2022-05/spa/*.parquet
- config_name: CC-MAIN-2022-05-fra
  data_files: data/CC-MAIN-2022-05/fra/*.parquet
- config_name: CC-MAIN-2022-05-fry
  data_files: data/CC-MAIN-2022-05/fry/*.parquet
- config_name: CC-MAIN-2022-05-ita
  data_files: data/CC-MAIN-2022-05/ita/*.parquet
- config_name: CC-MAIN-2022-05-nld
  data_files: data/CC-MAIN-2022-05/nld/*.parquet
# CC-MAIN-2023-06
- config_name: CC-MAIN-2023-06
  data_files: data/CC-MAIN-2023-06/**/*.parquet
- config_name: CC-MAIN-2023-06-afr
  data_files: data/CC-MAIN-2023-06/afr/*.parquet
- config_name: CC-MAIN-2023-06-deu
  data_files: data/CC-MAIN-2023-06/deu/*.parquet
- config_name: CC-MAIN-2023-06-eng
  data_files: data/CC-MAIN-2023-06/eng/*.parquet
- config_name: CC-MAIN-2023-06-spa
  data_files: data/CC-MAIN-2023-06/spa/*.parquet
- config_name: CC-MAIN-2023-06-fra
  data_files: data/CC-MAIN-2023-06/fra/*.parquet
- config_name: CC-MAIN-2023-06-fry
  data_files: data/CC-MAIN-2023-06/fry/*.parquet
- config_name: CC-MAIN-2023-06-ita
  data_files: data/CC-MAIN-2023-06/ita/*.parquet
- config_name: CC-MAIN-2023-06-nld
  data_files: data/CC-MAIN-2023-06/nld/*.parquet
# CC-MAIN-2024-46
- config_name: CC-MAIN-2024-46
  data_files: data/CC-MAIN-2024-46/**/*.parquet
- config_name: CC-MAIN-2024-46-afr
  data_files: data/CC-MAIN-2024-46/afr/*.parquet
- config_name: CC-MAIN-2024-46-deu
  data_files: data/CC-MAIN-2024-46/deu/*.parquet
- config_name: CC-MAIN-2024-46-eng
  data_files: data/CC-MAIN-2024-46/eng/*.parquet
- config_name: CC-MAIN-2024-46-spa
  data_files: data/CC-MAIN-2024-46/spa/*.parquet
- config_name: CC-MAIN-2024-46-fra
  data_files: data/CC-MAIN-2024-46/fra/*.parquet
- config_name: CC-MAIN-2024-46-fry
  data_files: data/CC-MAIN-2024-46/fry/*.parquet
- config_name: CC-MAIN-2024-46-ita
  data_files: data/CC-MAIN-2024-46/ita/*.parquet
- config_name: CC-MAIN-2024-46-nld
  data_files: data/CC-MAIN-2024-46/nld/*.parquet
# CC-MAIN-2024-51
- config_name: CC-MAIN-2024-51
  data_files: data/CC-MAIN-2024-51/**/*.parquet
- config_name: CC-MAIN-2024-51-afr
  data_files: data/CC-MAIN-2024-51/afr/*.parquet
- config_name: CC-MAIN-2024-51-deu
  data_files: data/CC-MAIN-2024-51/deu/*.parquet
- config_name: CC-MAIN-2024-51-eng
  data_files: data/CC-MAIN-2024-51/eng/*.parquet
- config_name: CC-MAIN-2024-51-spa
  data_files: data/CC-MAIN-2024-51/spa/*.parquet
- config_name: CC-MAIN-2024-51-fra
  data_files: data/CC-MAIN-2024-51/fra/*.parquet
- config_name: CC-MAIN-2024-51-fry
  data_files: data/CC-MAIN-2024-51/fry/*.parquet
- config_name: CC-MAIN-2024-51-ita
  data_files: data/CC-MAIN-2024-51/ita/*.parquet
- config_name: CC-MAIN-2024-51-nld
  data_files: data/CC-MAIN-2024-51/nld/*.parquet
# CC-MAIN-2025-05
- config_name: CC-MAIN-2025-05
  data_files: data/CC-MAIN-2025-05/**/*.parquet
- config_name: CC-MAIN-2025-05-afr
  data_files: data/CC-MAIN-2025-05/afr/*.parquet
- config_name: CC-MAIN-2025-05-deu
  data_files: data/CC-MAIN-2025-05/deu/*.parquet
- config_name: CC-MAIN-2025-05-eng
  data_files: data/CC-MAIN-2025-05/eng/*.parquet
- config_name: CC-MAIN-2025-05-spa
  data_files: data/CC-MAIN-2025-05/spa/*.parquet
- config_name: CC-MAIN-2025-05-fra
  data_files: data/CC-MAIN-2025-05/fra/*.parquet
- config_name: CC-MAIN-2025-05-fry
  data_files: data/CC-MAIN-2025-05/fry/*.parquet
- config_name: CC-MAIN-2025-05-ita
  data_files: data/CC-MAIN-2025-05/ita/*.parquet
- config_name: CC-MAIN-2025-05-nld
  data_files: data/CC-MAIN-2025-05/nld/*.parquet
---

> **Raw CommonCrawl crawls, annotated with Creative Commons license information**

This dataset is an effort to collect Creative Commons-licensed web data in one place.

The licensing information is extracted from the web pages based on whether they link to Creative Commons licenses, either overtly in `a` tags (as in the footer of Wikipedia) or in metadata fields indicating deliberate Creative Commons publication. **However, false positives may occur!** Further filtering on the location type of the license (e.g. removing hyperlink (`a_tag`) references) should improve precision, but it does not eliminate false positives entirely. **See Recommendations and Caveats below!** Also see [Personal and Sensitive Information](#personal-and-sensitive-information).

## Code

I am very grateful to the Flemish Supercomputer Center for providing the compute necessary to create this dataset, but as you can tell there is still a lot of data left to process. Therefore, I am happy to collaborate to process as many Common Crawl crawls as possible. [Shoot me a message](mailto:bram.vanroy@kuleuven.be) if you want to sponsor this project with compute! You can also simply run the code yourself if you'd like: the whole code base, built on `datatrove`, is on [GitHub](https://github.com/BramVanroy/CommonCrawl-CreativeCommons). If you use the code, please [reference my work](https://github.com/BramVanroy/CommonCrawl-CreativeCommons?tab=readme-ov-file#citation) accordingly and share your processed crawls with the rest of the world (or get in touch with me so I can add them to this repo).

## Usage

```python
from datasets import load_dataset

# Everything, most recent -- massive, you will need streaming
ds = load_dataset("BramVanroy/CommonCrawl-CreativeCommons", streaming=True)

# v1 (2019-30, 2020-05, 2022-05, 2023-06, 2024-46, 2024-51, 2025-05)
ds = load_dataset("BramVanroy/CommonCrawl-CreativeCommons", "v1", streaming=True)

# Single dump, all languages -- large, you may need streaming on non-server hardware
ds = load_dataset("BramVanroy/CommonCrawl-CreativeCommons", "CC-MAIN-2019-30")

# Single language, all dumps -- very large, you will likely need streaming
ds = load_dataset("BramVanroy/CommonCrawl-CreativeCommons", "nld", streaming=True)

# Single language, single dump
ds = load_dataset("BramVanroy/CommonCrawl-CreativeCommons", "CC-MAIN-2019-30-nld")
```
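
For the streaming configurations you iterate over the examples instead of indexing them. A small sketch (the `take` helper assumes a reasonably recent version of `datasets`):

```python
from datasets import load_dataset

# Stream the Dutch subset and peek at a handful of documents without downloading everything
ds = load_dataset("BramVanroy/CommonCrawl-CreativeCommons", "nld", split="train", streaming=True)
for sample in ds.take(3):
    print(sample["url"], sample["license_abbr"], sample["license_version"])
```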

## Fields

In some cases, multiple licenses are found on a single page. All licenses are collected in `potential_licenses`. From these, the "best guess" is selected
based on three criteria:

1. location_preference_order: meta_tag, json-ld, link_tag, a_tag
2. head_preference_order: True, False
3. footer_preference_order: True, False

Based on these criteria, the "best guess" license is the one reported in the `license_*` columns. Any disagreement between the potential licenses is flagged in `license_disagreement`.
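
For illustration, this preference-based pick can be thought of as a sort over the potential licenses. The following is a sketch of the selection logic described above, not the exact implementation used to build the dataset:

```python
# Preferred license locations, from most to least trustworthy
LOCATION_ORDER = ["meta_tag", "json-ld", "link_tag", "a_tag"]

def pick_best_license(potential_licenses: dict) -> dict:
    """Pick the 'best guess' license from the parallel lists in `potential_licenses`."""
    candidates = [
        {"abbr": abbr, "version": version, "location": location, "in_head": in_head, "in_footer": in_footer}
        for abbr, version, location, in_head, in_footer in zip(
            potential_licenses["abbr"],
            potential_licenses["version"],
            potential_licenses["location"],
            potential_licenses["in_head"],
            potential_licenses["in_footer"],
        )
    ]
    # Lower sort key wins: location preference first, then in_head=True, then in_footer=True
    return min(
        candidates,
        key=lambda c: (LOCATION_ORDER.index(c["location"]), not c["in_head"], not c["in_footer"]),
    )
```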

- text: the extracted text (unmodified)
- id: WARC-Record-ID
- dump: Common Crawl crawl
- url: original url for document
- date: crawl date
- file_path: file path on the S3 bucket
- license_abbr: the license type. Possible values: "cc-unknown" (recommended to filter this one out), "by", "by-sa", "by-nd", "by-nc", "by-nc-sa", "by-nc-nd", "zero", "certification", "mark". If multiple licenses were found (`potential_licenses`), this is the "best guess" selected with the criteria above.
- license_version: the license version, e.g. "4.0"
- license_location: the location where the license was found. Possible values: "meta_tag", "json-ld", "link_tag", "a_tag"
- license_in_head: whether the license was found inside a `head` HTML element
- license_in_footer: whether the license was found inside a `footer` HTML element, or an HTML element that had `footer` in the ID or class name
- potential_licenses:
  - abbr: list of all found license abbreviations
  - version: list of all found license versions
  - location: list of all found license locations
  - in_head: list of whether licenses were found in the head
  - in_footer: list of whether licenses were found in a footer
- license_parse_error: whether there was a problem when trying to extract the license, e.g. an unparseable HTML document
- license_disagreement: whether the `potential_licenses["abbr"]` disagree, i.e., different types of licenses were found. License *versions* are not included in the comparison!
- language: the language, as detected by glotlid
- language_score: the language identification confidence score
- found_in_fw: whether this sample was found in FineWeb(-2). For non-English documents, crawls more recent than FineWeb-2 (everything after 2024-18) are marked as None. For English documents, crawls more recent than FineWeb v1.3 (everything after 2024-51) are marked as None.
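
These fields let you tighten precision beyond the default best guess, for example by dropping documents whose license was only found in a plain hyperlink. A minimal sketch on one of the smaller per-crawl, per-language configs:

```python
from datasets import load_dataset

# Keep only documents whose license was declared in metadata (meta_tag, json-ld, link_tag)
# rather than in a plain `a` tag hyperlink, which should reduce false positives
ds = load_dataset("BramVanroy/CommonCrawl-CreativeCommons", "CC-MAIN-2019-30-afr", split="train")
ds = ds.filter(lambda x: x["license_location"] != "a_tag", num_proc=4)
print(len(ds))
```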


## Progress

In the `v1` release, the following crawls are included:

- CC-MAIN-2019-30
- CC-MAIN-2020-05
- CC-MAIN-2022-05
- CC-MAIN-2023-06
- CC-MAIN-2024-46
- CC-MAIN-2024-51
- CC-MAIN-2025-05

Other crawls are continuously being added.

## Languages

The following languages are included. The set is limited due to computational and storage constraints.

- Afrikaans: afr
- German: deu
- English: eng
- French: fra
- Frisian: fry
- Italian: ita
- Dutch: nld
- Spanish: spa

## Quantity

The number of tokens (Llama 3.3 tokenizer) and number of documents are given in the [counts.json](https://huggingface.co/datasets/BramVanroy/CommonCrawl-CreativeCommons/blob/main/counts.json) file.
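
If you want to read those statistics programmatically, something along these lines should work (a small sketch using `huggingface_hub`):

```python
import json

from huggingface_hub import hf_hub_download

# Download counts.json from the dataset repository and print its contents
path = hf_hub_download(
    repo_id="BramVanroy/CommonCrawl-CreativeCommons",
    filename="counts.json",
    repo_type="dataset",
)
with open(path, encoding="utf-8") as fh:
    counts = json.load(fh)
print(json.dumps(counts, indent=2))
```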

## Recommendations and Caveats

- Raw Common Crawl data is processed only to extract licensing information; no quality filtering is done! It is **highly** recommended to filter this data further on quality, fluency, toxicity, etc.
- Similarly, the data has **not been deduplicated**. 
- The licenses include all possible Creative Commons licenses, including non-commercial ones. Take care about what kind of data you wish to use, and filter out non-commercial licenses when needed.
- The column `license_disagreement` indicates whether multiple licenses were found that do not share the same abbreviation, e.g. `cc-by` and `cc-by-nc`. It is recommended to filter these out.
- The column `license_parse_error` indicates whether an error occurred when parsing the license. You probably want to filter out documents where this was the case, though this should be extremely rare.
- Unsurprisingly, the data contains a lot of Wikipedia/Wikimedia content. Depending on what you need, you may wish to filter those out. For Wikipedia specifically, you may opt to use the more thoroughly parsed (but potentially more outdated) [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) set.
- In exceptional cases, a link to creativecommons.org is found but the exact license could not be determined. These are marked as `license_abbr="cc-unknown"`, which you may wish to filter out.


Recommendation:

```python
from datasets import load_dataset


ds = load_dataset("BramVanroy/CommonCrawl-CreativeCommons", "CC-MAIN-2019-30", split="train")
ds = ds.filter(
    lambda x: (
        (not x["license_disagreement"]) and    # Only use pages with a consistent license
        x["found_in_fw"] and                   # Only use pages that are in FineWeb(-2)
        "nc" not in x["license_abbr"] and      # Exclude non-commercial licenses
        x["license_abbr"] != "cc-unknown" and  # Exclude unknown licenses
        "wiki" not in x["url"]                 # Exclude Wiki-like pages (best to get those from a more reliable parser)
    ), 
    num_proc=16
)
```
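
Because the data is not deduplicated, you may additionally want a cheap exact-deduplication pass. A naive single-process sketch, continuing from the `ds` in the snippet above (the in-memory set only works without `num_proc`):

```python
import hashlib

seen_hashes = set()

def is_first_occurrence(example):
    # Exact deduplication on a hash of the raw text; only correct in a single process
    digest = hashlib.sha256(example["text"].encode("utf-8")).hexdigest()
    if digest in seen_hashes:
        return False
    seen_hashes.add(digest)
    return True

ds = ds.filter(is_first_occurrence)
```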

## Personal and Sensitive Information

This dataset is a heavily filtered version of the Common Crawl dataset. Common Crawl respects robots.txt and will not include websites if their robots.txt says so. Even so, if you find that your website was included you can submit a [removal request](https://docs.google.com/forms/d/e/1FAIpQLSddAIuUui5xnAzBqft6MnzPYihr-AaS-Nj8x01Y6AM8NQ0YLQ/viewform?usp=sharing) indicating that you are the owner of the website.

Take-down notices issued for other Common Crawl-based datasets such as FineWeb are also taken into account: domains specified and verified in those take-down notices are not included in this dataset.

In this dataset, measures are taken to anonymise email addresses and public IP addresses following the [FineWeb-2 approach](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2#personal-and-sensitive-information-and-opt-out). Email addresses matching a regular expression are replaced with `firstname.lastname@example.org`. Similarly, IP addresses allocated for [public networks](https://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml) are replaced by unused IP addresses. Despite these best efforts, given such large volumes of text you may still find that your personal information is present in the dataset. In that case you can submit a [removal request](https://docs.google.com/forms/d/e/1FAIpQLSddAIuUui5xnAzBqft6MnzPYihr-AaS-Nj8x01Y6AM8NQ0YLQ/viewform?usp=sharing).
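
For illustration only, the email replacement described above boils down to something like the following (a simplified sketch; the actual FineWeb-2 pipeline uses a more elaborate pattern and also handles IP addresses):

```python
import re

# Simplified email pattern for illustration; not the exact expression used in the pipeline
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def anonymise_emails(text: str) -> str:
    return EMAIL_RE.sub("firstname.lastname@example.org", text)

print(anonymise_emails("Contact: jane.doe@uni.example"))
# Contact: firstname.lastname@example.org
```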

## Citation

```bibtex
@software{Vanroy_CommonCrawl-CreativeCommons_2025,
  author = {Vanroy, Bram},
  license = {GPL-3.0},
  month = feb,
  title = {{CommonCrawl-CreativeCommons}},
  url = {https://github.com/BramVanroy/CommonCrawl-CreativeCommons},
  version = {1.3.0},
  year = {2025}
}
```


## Acknowledgments

- The [Common Crawl](https://commoncrawl.org/) non-profit organization. 
- [TNO](https://www.tno.nl/nl/), who funded the work hours to develop this code base. They intend to use (parts of) [the generated material](https://huggingface.co/datasets/BramVanroy/CommonCrawl-CreativeCommons) for the [GPT-NL project](https://gpt-nl.nl/).
- [Flemish Supercomputer Center](https://www.vscentrum.be/) for part of the compute under grant 2024-107
- Guilherme Penedo ([@guipenedo](https://huggingface.co/guipenedo)) and the rest of the [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) and [datatrove](https://github.com/huggingface/datatrove) team for the help and insights
- ML6 and specifically Robin Van Craenenbroek for their [Fondant Creative Commons](https://github.com/ml6team/fondant-usecase-filter-creative-commons/tree/add-fondant-usecase-cc-image-extraction) filter for image datasets. While my approach is different, their code did serve as inspiration.