---
language:
  - fa
license: gpl-3.0
size_categories:
  - 100K<n<1M
---

# Persian Space and ZWNJ Correction Dataset

## Dataset Description

### Dataset Summary

This dataset contains Persian text annotated for space and Zero-Width Non-Joiner (ZWNJ) correction tasks. It consists of 424,181 examples derived from the Bijankhan and Peykare corpora. Each example includes the original sentence, tokenized text, character-level information, part-of-speech tags, and space labels.

The dataset is designed for training models that can automatically correct spacing and ZWNJ usage in Persian text, addressing common orthographic issues in Persian digital text.

### Languages

The dataset contains text in Persian (Farsi) only.

## Dataset Structure

### Data Instances

Each instance in the dataset contains:
- `sentences`: The complete Persian text string
- `tokens`: Tokenized form of the sentence
- `characters`: Individual non-space characters from the sentence
- `pos_labels`: Part-of-speech tags for each token
- `space_labels`: Space and ZWNJ labels for each character

### Data Fields

- `sentences`: string - Full Persian text sentence
- `tokens`: list of strings - The words/tokens in the sentence
- `characters`: list of strings - Individual characters (excluding spaces and ZWNJ)
- `pos_labels`: list of strings - POS tag for each token
- `space_labels`: list of integers - Labels indicating proper space or ZWNJ placement:
  - `0`: No space after the character
  - `1`: Space after the character
  - `2`: ZWNJ (Zero-Width Non-Joiner) after the character
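The label scheme above can be inverted to rebuild a sentence from its non-space characters. The sketch below is illustrative (the characters and labels are made up, not actual dataset rows), but it follows the 0/1/2 convention described here:

```python
ZWNJ = "\u200c"  # Zero-Width Non-Joiner

def reconstruct(characters, space_labels):
    """Rebuild a sentence from non-space characters and per-character labels:
    0 = nothing after the character, 1 = space, 2 = ZWNJ."""
    separators = {0: "", 1: " ", 2: ZWNJ}
    return "".join(
        ch + separators[label] for ch, label in zip(characters, space_labels)
    ).rstrip()

# Illustrative example: "می‌روم" requires a ZWNJ after "می"
chars = ["م", "ی", "ر", "و", "م"]
labels = [0, 2, 0, 0, 0]
print(reconstruct(chars, labels))  # می + ZWNJ + روم
```

A correction model predicts `space_labels` from `characters`; applying this decoding to the predictions yields the corrected text.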

### Data Splits

The dataset does not ship with predefined splits. In the original research it was divided as follows:
- 80% for training
- 10% for validation
- 10% for testing

Users can recreate these splits or create custom splits as needed for their specific use cases.

## Dataset Creation

### Source Data

#### Initial Data Collection and Normalization

This dataset was derived from two major Persian corpora:

- **Bijankhan Corpus**: A Persian tagged corpus developed for linguistics research and natural language processing.
- **Peykare Corpus**: A comprehensive Persian corpus developed for language resources and evaluation.

### Annotations

The annotation process involved:

[PLACEHOLDER: Brief description of the annotation procedure]

For detailed information about the preprocessing, annotation, and labeling procedures, please refer to:

[PAPER CITATION PLACEHOLDER]

## Usage

This dataset is intended for training and evaluating models for Persian space and ZWNJ correction. Several models have been trained using this dataset:

- https://huggingface.co/PerSpaCor/bert-base-multilingual-uncased
- https://huggingface.co/PerSpaCor/Relu-Norm
- https://huggingface.co/PerSpaCor/DualStep-DropNet
- https://huggingface.co/PerSpaCor/SimplexNet
- https://huggingface.co/PerSpaCor/bert-base-multilingual-cased
- https://huggingface.co/PerSpaCor/HooshvareLab-bert-base-parsbert-uncased
- https://huggingface.co/PerSpaCor/HooshvareLab-bert-fa-zwnj-base
- https://huggingface.co/PerSpaCor/HooshvareLab-roberta-fa-zwnj-base
- https://huggingface.co/PerSpaCor/imvladikon-charbert-roberta-wiki

### Example Code

```python
from datasets import load_dataset

# Load the dataset; load_dataset returns a DatasetDict,
# so individual rows live under a split key such as "train"
dataset = load_dataset("PerSpaCor/bijankhan-peykare-annotated")

# Sample usage
example = dataset["train"][0]
print(f"Sentence: {example['sentences']}")
print(f"Characters: {example['characters']}")
print(f"POS Labels: {example['pos_labels']}")
print(f"Space Labels: {example['space_labels']}")

# Recreate the 80/10/10 split used in the original research
train_test = dataset["train"].train_test_split(test_size=0.2)
test_valid = train_test["test"].train_test_split(test_size=0.5)

train_dataset = train_test["train"]
valid_dataset = test_valid["train"]
test_dataset = test_valid["test"]
```

## Citation

If you use this dataset in your research, please cite:

[PAPER CITATION PLACEHOLDER]

And also cite the original corpora:

```bibtex
@article{bijankhan,
  author = {Bijankhan, M.},
  title = {The Role of Linguistic Structures in Writing Grammar: Introduction to a Computer Software},
  journal = {Journal of Linguistics},
  volume = {19},
  number = {2},
  pages = {48--67},
  year = {2004}
}

@article{peykare,
  author = {Bijankhan, M. and Sheykhzadegan, J. and Bahrani, M. and others},
  title = {Lessons from building a Persian written corpus: Peykare},
  journal = {Language Resources and Evaluation},
  volume = {45},
  number = {2},
  pages = {143--164},
  year = {2011},
  doi = {10.1007/s10579-010-9132-x},
  url = {https://doi.org/10.1007/s10579-010-9132-x}
}
```