Upload README.md with huggingface_hub
README.md (CHANGED)

@@ -1,25 +1,144 @@
Removed (old YAML front matter):

```yaml
---
...
    sequence: string
  - name: characters
    sequence: string
  - name: pos_labels
    sequence: string
  - name: space_labels
    sequence: int64
  splits:
  - name: train
    num_bytes: 1022649369
    num_examples: 424181
  download_size: 151255810
  dataset_size: 1022649369
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
```

Added (new README.md):

---
language:
- fa
license: gpl-3.0
size_categories:
- 100K<n<1M
---

# Persian Space and ZWNJ Correction Dataset

## Dataset Description

### Dataset Summary

This dataset contains Persian text annotated for space and Zero-Width Non-Joiner (ZWNJ) correction tasks. It consists of 424,181 examples derived from the Bijankhan and Peykare corpora. Each example includes the original sentence, tokenized text, character-level information, part-of-speech tags, and space labels.

The dataset is designed for training models that can automatically correct spacing and ZWNJ usage in Persian text, addressing common orthographic issues in Persian digital text.
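
In Persian orthography, ZWNJ (U+200C) suppresses the cursive joining between two letters without adding visible width; it is used, for example, between the imperfective prefix «می» and the following verb stem. The snippet below gives a minimal illustration of the three cases the dataset distinguishes (the example word is illustrative, not drawn from the corpus):

```python
# ZWNJ (U+200C) keeps two letters in the same word while preventing
# them from joining cursively; a plain space would split the word.
ZWNJ = "\u200c"

correct     = "می" + ZWNJ + "شود"   # می‌شود: one word, prefix not joined to the stem
extra_space = "می شود"              # a space where a ZWNJ belongs
missing_sep = "میشود"               # no separator at all, letters joined

print(correct, extra_space, missing_sep)
```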

### Languages

The dataset contains text in Persian (Farsi).

## Dataset Structure

### Data Instances

Each instance in the dataset contains:
- `sentences`: The complete Persian text string
- `tokens`: Tokenized form of the sentence
- `characters`: Individual non-space characters from the sentence
- `pos_labels`: Part-of-speech tags for each token
- `space_labels`: Space and ZWNJ labels for each character

### Data Fields

- `sentences`: string - Full Persian text sentence
- `tokens`: list of strings - The words/tokens in the sentence
- `characters`: list of strings - Individual characters (excluding spaces and ZWNJ)
- `pos_labels`: list of strings - POS tag for each token
- `space_labels`: list of integers - Labels indicating proper space or ZWNJ placement:
  - `0`: No space after the character
  - `1`: Space after the character
  - `2`: ZWNJ (Zero-Width Non-Joiner) after the character
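
Given this labeling scheme, the surface form of a sentence can be rebuilt from `characters` and `space_labels`. The sketch below assumes a one-to-one alignment between the two lists, as described above; comparing its output against `sentences` is a quick sanity check:

```python
# Rebuild a sentence from character-level space/ZWNJ labels.
# Assumes `characters` and `space_labels` are aligned one-to-one,
# as documented in the field descriptions above.
ZWNJ = "\u200c"
SEPARATOR = {0: "", 1: " ", 2: ZWNJ}

def reconstruct(characters, space_labels):
    assert len(characters) == len(space_labels)
    pieces = [char + SEPARATOR[label] for char, label in zip(characters, space_labels)]
    return "".join(pieces).rstrip()
```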

### Data Splits

The dataset does not come with predefined train/validation/test splits; in the original research it was divided as follows:
- 80% for training
- 10% for validation
- 10% for testing

Users can recreate these splits, or define custom splits for their own use cases; the Example Code section below shows one way to do this.

## Dataset Creation

### Source Data

#### Initial Data Collection and Normalization

This dataset was derived from two major Persian corpora:

- **Bijankhan Corpus**: A Persian tagged corpus developed for linguistics research and natural language processing.
- **Peykare Corpus**: A comprehensive Persian corpus developed for language resources and evaluation.

### Annotations

The annotation process involved:

[PLACEHOLDER: Brief description of the annotation procedure]

For detailed information about the preprocessing, annotation, and labeling procedures, please refer to:

[PAPER CITATION PLACEHOLDER]

## Usage

This dataset is intended for training and evaluating models for Persian space and ZWNJ correction. Several models have been trained using this dataset:

- https://huggingface.co/PerSpaCor/bert-base-multilingual-uncased
- https://huggingface.co/PerSpaCor/Relu-Norm
- https://huggingface.co/PerSpaCor/DualStep-DropNet
- https://huggingface.co/PerSpaCor/SimplexNet
- https://huggingface.co/PerSpaCor/bert-base-multilingual-cased
- https://huggingface.co/PerSpaCor/HooshvareLab-bert-base-parsbert-uncased
- https://huggingface.co/PerSpaCor/HooshvareLab-bert-fa-zwnj-base
- https://huggingface.co/PerSpaCor/HooshvareLab-roberta-fa-zwnj-base
- https://huggingface.co/PerSpaCor/imvladikon-charbert-roberta-wiki

### Example Code

```python
from datasets import load_dataset

# Load the dataset (it is published as a single "train" split)
dataset = load_dataset("PerSpaCor/bijankhan-peykare-annotated")
train = dataset["train"]

# Inspect one example
example = train[0]
print(f"Sentence: {example['sentences']}")
print(f"Characters: {example['characters']}")
print(f"POS Labels: {example['pos_labels']}")
print(f"Space Labels: {example['space_labels']}")

# Recreate an 80/10/10 train/validation/test division if needed
train_test = train.train_test_split(test_size=0.2)
test_valid = train_test["test"].train_test_split(test_size=0.5)

train_dataset = train_test["train"]
valid_dataset = test_valid["train"]
test_dataset = test_valid["test"]
```

## Citation

If you use this dataset in your research, please cite:

[PAPER CITATION PLACEHOLDER]

Please also cite the original corpora:

```bibtex
@article{bijankhan,
  author  = {Bijankhan, M.},
  title   = {The Role of Linguistic Structures in Writing Grammar: Introduction to a Computer Software},
  journal = {Journal of Linguistics},
  volume  = {19},
  number  = {2},
  pages   = {48--67},
  year    = {2004}
}

@article{peykare,
  author  = {Bijankhan, M. and Sheykhzadegan, J. and Bahrani, M. and others},
  title   = {Lessons from building a Persian written corpus: Peykare},
  journal = {Language Resources and Evaluation},
  volume  = {45},
  number  = {2},
  pages   = {143--164},
  year    = {2011},
  doi     = {10.1007/s10579-010-9132-x},
  url     = {https://doi.org/10.1007/s10579-010-9132-x}
}
```