Modalities: Text · Formats: parquet · Languages: Bengali · Libraries: Datasets, Dask · License: cc-by-4.0
Commit f2c62e5 (verified) · 1 parent: 1597c2f
nielsr (HF Staff) committed: Add link to paper

This PR adds a link to the paper associated with this dataset: https://huggingface.co/papers/2502.11187

Files changed (1):
  1. README.md (+8 −5)
README.md CHANGED

```diff
@@ -1,8 +1,10 @@
 ---
-task_categories:
-- text-generation
 language:
 - bn
+license: cc-by-4.0
+task_categories:
+- text-generation
+pretty_name: TituLM Bangla Corpus
 dataset_info:
 - config_name: common_crawl
   features:
@@ -44,7 +46,7 @@ configs:
 - config_name: default
   data_files:
   - split: train
-    path: "**/train-*.parquet"
+    path: '**/train-*.parquet'
 - config_name: common_crawl
   data_files:
   - split: train
@@ -57,11 +59,12 @@ configs:
   data_files:
   - split: train
     path: translated/train-*
-license: cc-by-4.0
-pretty_name: TituLM Bangla Corpus
 ---
 
 ## TituLM Bangla Corpus
+
+This dataset is associated with the paper [TituLLMs: A Family of Bangla LLMs with Comprehensive Benchmarking](https://huggingface.co/papers/2502.11187)
+
 **TituLM Bangla Corpus** is one of the largest Bangla clean corpus prepared for pretraining, continual pretraining or fine-tuning Large Language Model(LLM) for improving Bangla text generation capability.
 This dataset contains diverse sources and categories of Bangla text. The largest part of this dataset contains filtered common crawled datasets. As we saw existing all common crawl datasets have issues with proper text extraction from HTML pages and Bangla language specific filtering as all those datasets build for multilingual purposes.
 Keeping that in mind we applied [Trafilatura](https://trafilatura.readthedocs.io/en/latest/) tool to extract text from common crawl web pages. Compared to existing extraction pages we found this tool perform better. We generate several Bangla language specific quality signals over the dataset and filtered using different quality signals threshold.
```
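The card says the corpus was filtered using "Bangla language specific quality signals" with thresholds, but the signals themselves are not defined here. One plausible minimal signal — purely illustrative, not the authors' actual pipeline — is the fraction of a document's characters that fall in the Bengali Unicode block (U+0980–U+09FF), thresholded to drop pages that are mostly non-Bangla boilerplate:

```python
def bangla_ratio(text: str) -> float:
    """Fraction of non-whitespace characters in the Bengali Unicode block (U+0980-U+09FF)."""
    chars = [c for c in text if not c.isspace()]
    if not chars:
        return 0.0
    bengali = sum(1 for c in chars if "\u0980" <= c <= "\u09ff")
    return bengali / len(chars)

def keep(text: str, threshold: float = 0.5) -> bool:
    # Hypothetical filter: keep documents whose text is mostly Bangla script.
    # The threshold value is an assumption, not taken from the dataset card.
    return bangla_ratio(text) >= threshold

print(keep("বাংলা ভাষা একটি ইন্দো-আর্য ভাষা"))          # mostly Bengali script -> kept
print(keep("This page is entirely English boilerplate."))  # no Bengali script -> dropped
```

A real pipeline would combine several such signals (script ratio, stopword coverage, document length, repetition), each with its own threshold, but the thresholding pattern is the same.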
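The `configs` section changed in this commit maps each config name (`default`, `common_crawl`, `translated`) to parquet shards via `data_files` glob patterns — the diff also normalizes the quoting of `**/train-*.parquet`. A rough sketch of how those globs partition shards into configs, using a hypothetical file listing since the repo's actual shard names are not shown here:

```python
from fnmatch import fnmatch

# Hypothetical shard listing mirroring the directory layout implied by the
# configs (common_crawl/ and translated/ subdirectories of parquet shards).
files = [
    "common_crawl/train-00000-of-00002.parquet",
    "common_crawl/train-00001-of-00002.parquet",
    "translated/train-00000-of-00001.parquet",
    "translated/validation-00000-of-00001.parquet",
]

# data_files patterns in the style of the README's `configs` section; only
# the default and translated patterns appear verbatim in the diff above,
# the common_crawl one is assumed by analogy.
configs = {
    "default": "**/train-*.parquet",
    "common_crawl": "common_crawl/train-*",
    "translated": "translated/train-*",
}

def select(pattern: str) -> list[str]:
    # fnmatch's `*` also crosses `/`, which is close enough here to model
    # the recursive `**` used by the Hub's glob resolution.
    return [f for f in files if fnmatch(f, pattern)]

for name, pattern in configs.items():
    print(name, "->", select(pattern))
```

So the `default` config pulls every `train-*` shard from every subdirectory, while the named configs restrict to one source's directory — which is why the commit can reorder the YAML metadata without touching how shards resolve.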