Update README.md
README.md (changed)
@@ -130,14 +130,14 @@ configs:
 ```python
 from datasets import load_dataset
 
-# Everything
-ds = load_dataset("BramVanroy/CommonCrawl-CreativeCommons")
+# Everything -- massive, you will need streaming
+ds = load_dataset("BramVanroy/CommonCrawl-CreativeCommons", streaming=True)
 
-# Single dump, all languages
+# Single dump, all languages -- large, you may need streaming on non-server hardware
 ds = load_dataset("BramVanroy/CommonCrawl-CreativeCommons", "CC-MAIN-2019-30")
 
-# Single language, all dumps
-ds = load_dataset("BramVanroy/CommonCrawl-CreativeCommons", "nld")
+# Single language, all dumps -- very large, you will likely need streaming
+ds = load_dataset("BramVanroy/CommonCrawl-CreativeCommons", "nld", streaming=True)
 
 # Single language, single dump
 ds = load_dataset("BramVanroy/CommonCrawl-CreativeCommons", "CC-MAIN-2019-30-nld")
@@ -252,7 +252,7 @@ ds = ds.filter(
   month = feb,
   title = {{CommonCrawl-CreativeCommons}},
   url = {https://github.com/BramVanroy/CommonCrawl-CreativeCommons},
-  version = {1.
+  version = {1.3.0},
   year = {2025}
 }
 ```
@@ -260,7 +260,8 @@ ds = ds.filter(
 
 ## Acknowledgments
 
-- [
+- The [Common Crawl](https://commoncrawl.org/) non-profit organization.
+- [TNO](https://www.tno.nl/nl/), who funded the work hours to accomplish this code. They intend to use (parts of) [the generated material](https://huggingface.co/datasets/BramVanroy/CommonCrawl-CreativeCommons) for the [GPT-NL project](https://gpt-nl.nl/).
 - [Flemish Supercomputer Center](https://www.vscentrum.be/) for part of the compute under grant 2024-107
 - Guilherme Penedo ([@guipenedo](https://huggingface.co/guipenedo)) and the rest of the [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) and [datatrove](https://github.com/huggingface/datatrove) team for the help and insights
-- ML6 and specifically Robin Van Craenenbroek for their [Fondant Creative Commons](https://github.com/ml6team/fondant-usecase-filter-creative-commons/tree/add-fondant-usecase-cc-image-extraction) filter for image datasets. While my approach is different, their code did serve as inspiration.
+- ML6 and specifically Robin Van Craenenbroek for their [Fondant Creative Commons](https://github.com/ml6team/fondant-usecase-filter-creative-commons/tree/add-fondant-usecase-cc-image-extraction) filter for image datasets. While my approach is different, their code did serve as inspiration.
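The `streaming=True` variants added in this commit return an `IterableDataset`, which yields rows lazily instead of downloading the full corpus to disk. The consumption pattern can be sketched with a toy generator standing in for the real dataset (a stand-in is used here because the actual `load_dataset(..., streaming=True)` call needs network access; the field names below are illustrative, not the dataset's exact schema):

```python
from itertools import islice

def fake_rows():
    """Stand-in for a streaming IterableDataset: yields one row dict at a time,
    like iterating load_dataset(..., streaming=True). Field names are illustrative."""
    for i in range(10_000):
        yield {"text": f"document {i}", "license_abbr": "by"}

# Pull only the first three rows; the remaining ~10k are never materialized.
first = list(islice(fake_rows(), 3))
print([row["text"] for row in first])  # ['document 0', 'document 1', 'document 2']
```

With the real dataset, the same lazy pattern applies: iterate (or use `IterableDataset.take`) rather than indexing, since a streamed dataset has no random access.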