---
language:
- fa
license: gpl-3.0
size_categories:
- 100K<n<1M
---

# Persian Space and ZWNJ Correction Dataset

## Dataset Description

### Dataset Summary

This dataset contains Persian text annotated for space and Zero-Width Non-Joiner (ZWNJ) correction tasks. It consists of 424,181 examples derived from the Bijankhan and Peykare corpora. Each example includes the original sentence, its tokenized form, its individual characters, part-of-speech tags, and space labels.

The dataset is designed for training models that automatically correct spacing and ZWNJ usage in Persian text, addressing common orthographic issues in Persian digital text.

### Languages

The dataset contains text in Persian (Farsi).

## Dataset Structure

### Data Instances

Each instance in the dataset contains:

- `sentences`: The complete Persian text string
- `tokens`: Tokenized form of the sentence
- `characters`: Individual non-space characters from the sentence
- `pos_labels`: Part-of-speech tags for each token
- `space_labels`: Space and ZWNJ labels for each character

### Data Fields

- `sentences`: string - Full Persian text sentence
- `tokens`: list of strings - The words/tokens in the sentence
- `characters`: list of strings - Individual characters (excluding spaces and ZWNJ)
- `pos_labels`: list of strings - POS tag for each token
- `space_labels`: list of integers - Labels indicating proper space or ZWNJ placement:
  - `0`: No space after the character
  - `1`: Space after the character
  - `2`: ZWNJ (Zero-Width Non-Joiner) after the character
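
To make the label scheme concrete, here is a minimal sketch that rebuilds a sentence from `characters` and `space_labels`. The two-word example is hypothetical, written for illustration rather than drawn from the dataset:

```python
# Hypothetical example of the label scheme (not an actual dataset record).
ZWNJ = "\u200c"  # Zero-Width Non-Joiner

# "کتاب‌ها خوب" -- the plural suffix "ها" attaches to "کتاب" via ZWNJ.
characters = ["ک", "ت", "ا", "ب", "ه", "ا", "خ", "و", "ب"]
space_labels = [0, 0, 0, 2, 0, 1, 0, 0, 0]  # 0 = nothing, 1 = space, 2 = ZWNJ

# Each label describes what follows the corresponding character.
separators = {0: "", 1: " ", 2: ZWNJ}
sentence = "".join(
    char + separators[label] for char, label in zip(characters, space_labels)
)
print(sentence)  # کتاب‌ها خوب
```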

### Data Splits

The dataset does not ship with predefined splits. In the original research, it was divided as follows:

- 80% for training
- 10% for validation
- 10% for testing

Users can recreate these splits (see the example code below) or create custom splits as needed for their specific use cases.

## Dataset Creation

### Source Data

#### Initial Data Collection and Normalization

This dataset was derived from two major Persian corpora:

- **Bijankhan Corpus**: A Persian tagged corpus developed for linguistics research and natural language processing.
- **Peykare Corpus**: A comprehensive Persian corpus developed for language resources and evaluation.

### Annotations

The annotation process involved:

[PLACEHOLDER: Brief description of the annotation procedure]

For detailed information about the preprocessing, annotation, and labeling procedures, please refer to:

[PAPER CITATION PLACEHOLDER]

## Usage

This dataset is intended for training and evaluating models for Persian space and ZWNJ correction. Several models have been trained using this dataset:

- https://huggingface.co/PerSpaCor/bert-base-multilingual-uncased
- https://huggingface.co/PerSpaCor/Relu-Norm
- https://huggingface.co/PerSpaCor/DualStep-DropNet
- https://huggingface.co/PerSpaCor/SimplexNet
- https://huggingface.co/PerSpaCor/bert-base-multilingual-cased
- https://huggingface.co/PerSpaCor/HooshvareLab-bert-base-parsbert-uncased
- https://huggingface.co/PerSpaCor/HooshvareLab-bert-fa-zwnj-base
- https://huggingface.co/PerSpaCor/HooshvareLab-roberta-fa-zwnj-base
- https://huggingface.co/PerSpaCor/imvladikon-charbert-roberta-wiki
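
These model repositories are not documented here, but if a checkpoint follows the standard `transformers` token-classification interface, loading it might look like the sketch below. The choice of checkpoint, the raw-logits handling, and the assumption that label ids match the dataset's `space_labels` scheme are all unverified assumptions:

```python
# Sketch only: assumes the checkpoint exposes a standard token-classification
# head via transformers; the actual PerSpaCor models may differ.
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_name = "PerSpaCor/bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

# Input with plain spaces where ZWNJ belongs ("کتاب‌ها", "می‌خوانم").
text = "کتاب ها را می خوانم"
inputs = tokenizer(text, return_tensors="pt")
logits = model(**inputs).logits      # shape: (1, seq_len, num_labels)
predictions = logits.argmax(dim=-1)  # assumed: 0 = none, 1 = space, 2 = ZWNJ
```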

### Example Code

```python
from datasets import load_dataset

# Load the dataset (all examples land in a single "train" split)
dataset = load_dataset("PerSpaCor/bijankhan-peykare-annotated")

# Inspect one example
example = dataset["train"][0]
print(f"Sentence: {example['sentences']}")
print(f"Tokens: {example['tokens']}")
print(f"Characters: {example['characters']}")
print(f"POS Labels: {example['pos_labels']}")
print(f"Space Labels: {example['space_labels']}")

# Recreate the 80/10/10 split used in the original research
# (fixed seed for reproducibility)
train_test = dataset["train"].train_test_split(test_size=0.2, seed=42)
test_valid = train_test["test"].train_test_split(test_size=0.5, seed=42)

train_dataset = train_test["train"]
valid_dataset = test_valid["train"]
test_dataset = test_valid["test"]
```

## Citation

If you use this dataset in your research, please cite:

[PAPER CITATION PLACEHOLDER]

And also cite the original corpora:

```bibtex
@article{bijankhan,
  author  = {Bijankhan, M.},
  title   = {The Role of Linguistic Structures in Writing Grammar: Introduction to a Computer Software},
  journal = {Journal of Linguistics},
  volume  = {19},
  number  = {2},
  pages   = {48--67},
  year    = {2004}
}

@article{peykare,
  author  = {Bijankhan, M. and Sheykhzadegan, J. and Bahrani, M. and others},
  title   = {Lessons from building a Persian written corpus: Peykare},
  journal = {Language Resources and Evaluation},
  volume  = {45},
  number  = {2},
  pages   = {143--164},
  year    = {2011},
  doi     = {10.1007/s10579-010-9132-x},
  url     = {https://doi.org/10.1007/s10579-010-9132-x}
}
```