I am very grateful to the Flemish Supercomputer for providing the compute necessary to create this dataset, but as you can tell there is still a lot of data left to be processed. I am therefore happy to collaborate to process as many Common Crawl crawls as possible. [Shoot me a message](mailto:bram.vanroy@kuleuven.be) if you want to sponsor this project with compute! You can also simply run the code yourself if you'd like. The whole code base, built on `datatrove`, is available on [GitHub](https://github.com/BramVanroy/CommonCrawl-CreativeCommons). If you use the code, please [reference my work](https://github.com/BramVanroy/CommonCrawl-CreativeCommons?tab=readme-ov-file#citation) accordingly and share your processed crawls with the rest of the world (or get in touch with me so I can add them to this repo).
The approach to creating this dataset differs from similar endeavors such as the awesome [common-pile/dolma-cccc](https://huggingface.co/datasets/common-pile/dolma-cccc) and [C4Corpus](https://data.commoncrawl.org/contrib/c4corpus/CC-MAIN-2016-07/index.html) datasets. Those rely on intricately crafted regular expressions to quickly extract potential licenses from a web page (string-based matching). However, that makes it hard to retrieve any structural meta information about a license, such as where on the page it was found. In C5, the whole web page is parsed into a programmatic structure, allowing for an iterative search through this parsed "tree". That makes it possible to track where licenses were found (in the head of a document, for instance). Such information is crucial to minimise false positives: a license referenced in a `meta` tag in the `head` of an HTML page is more trustworthy than a "random link" to a copyright license in the middle of a web page, which might just be discussing the license in general or licensing a single picture on the site. This metadata *about* the license makes it possible to attach a confidence level to each extracted license, enabling robust filtering to avoid false positives. While I strongly believe this approach is valuable, it also makes extraction much *slower* than a regex search!
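To illustrate the idea of tree-based extraction with location tracking, here is a minimal sketch using Python's standard-library `HTMLParser`. This is *not* the actual C5/`datatrove` pipeline: the class name, the single URL-prefix check, and the head/body distinction are simplified assumptions made for this example only.

```python
from html.parser import HTMLParser

CC_PREFIX = "https://creativecommons.org/licenses/"


class LicenseFinder(HTMLParser):
    """Walk the parsed HTML and record *where* each CC license link occurs."""

    def __init__(self):
        super().__init__()
        self.in_head = False
        self.matches = []  # list of (location, license_url) tuples

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "head":
            self.in_head = True
        # A license URL can appear in href (link/a) or content (meta) attributes;
        # one found inside <head> is a stronger signal than one in the body.
        url = attrs.get("href") or attrs.get("content") or ""
        if url.startswith(CC_PREFIX):
            location = "head" if self.in_head else f"body:{tag}"
            self.matches.append((location, url))

    def handle_endtag(self, tag):
        if tag == "head":
            self.in_head = False


html_doc = """
<html><head>
  <link rel="license" href="https://creativecommons.org/licenses/by/4.0/">
</head><body>
  <p>See <a href="https://creativecommons.org/licenses/by-nc/2.0/">this license</a>.</p>
</body></html>
"""

finder = LicenseFinder()
finder.feed(html_doc)
for location, url in finder.matches:
    print(location, url)
```

Here the first match is tagged `head` and the second `body:a`, so a downstream filter could keep only head-level licenses, or assign them a higher confidence score, which is exactly the kind of filtering the regex-based approach cannot support.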
## Usage