## Dataset Description

skLEP (General Language Understanding Evaluation benchmark for Slovak) is the first comprehensive benchmark specifically designed for evaluating Slovak natural language understanding (NLU) models. The benchmark encompasses nine diverse tasks spanning token-level, sentence-pair, and document-level challenges, offering a thorough assessment of model capabilities.

To create this benchmark, we curated new, original datasets tailored for Slovak and carefully translated established English NLU resources, with native-speaker post-editing to ensure high-quality evaluation.
### Dataset Summary

skLEP, the General Language Understanding Evaluation benchmark for Slovak, is a collection of nine natural language understanding tasks covering token-level, sentence-pair, and document-level challenges.
### Supported Tasks and Leaderboards

skLEP includes nine tasks across three categories:

**Token-Level Tasks:**

- Part-of-Speech (POS) Tagging using Universal Dependencies
- Named Entity Recognition using Universal NER (UNER)
- Named Entity Recognition using WikiGoldSK (WGSK)

**Sentence-Pair Tasks:**

- Recognizing Textual Entailment (RTE)
- Natural Language Inference (NLI)
- Semantic Textual Similarity (STS)

**Document-Level Tasks:**

- Hate Speech Classification (HS)
- Sentiment Analysis (SA)
- Question Answering (QA) based on SK-QuAD

A public leaderboard is available at <https://github.com/slovak-nlp/sklep>.
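For scripting against the benchmark, the nine tasks above can be mirrored as a small lookup table. The short names below are illustrative, derived from the abbreviations in the list; the exact config identifiers in the dataset repository may differ:

```python
# Illustrative mapping of skLEP task categories to task short names.
# Short names mirror the list above; actual dataset config names may differ.
SKLEP_TASKS = {
    "token-level": ["pos", "uner", "wgsk"],
    "sentence-pair": ["rte", "nli", "sts"],
    "document-level": ["hs", "sa", "qa"],
}

def all_tasks():
    """Flatten the category map into a single list of task names."""
    return [task for tasks in SKLEP_TASKS.values() for task in tasks]

print(len(all_tasks()))  # 9 tasks in total
```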
### Languages

The language data in skLEP is in Slovak (BCP-47 `sk`).
### Data Instances

The benchmark contains the following data splits:

- **hate-speech**: 10,531 train, 1,339 validation, 1,319 test examples
- **sentiment-analysis**: 3,560 train, 522 validation, 1,042 test examples
- **ner-wikigoldsk**: 4,687 train, 669 validation, 1,340 test examples
- **ner-uner**: 8,483 train, 1,060 validation, 1,061 test examples
- **pos**: 8,483 train, 1,060 validation, 1,061 test examples
- **question-answering**: 71,999 train, 9,583 validation, 9,583 test examples
- **rte**: 2,490 train, 277 validation, 1,660 test examples
- **nli**: 392,702 train, 2,490 validation, 5,004 test examples
- **sts**: 5,604 train, 1,481 validation, 1,352 test examples
### Data Fields

Each task has specific data fields:

**Token-level tasks** (UD, UNER, WGSK): `sentence`, `tokens`, `ner_tags`/`pos_tags`, `ner_tags_text`

**Sentence-pair tasks**:

- RTE: `text1`, `text2`, `label`, `idx`, `label_text`
- NLI: `premise`, `hypothesis`, `label`
- STS: `sentence1`, `sentence2`, `similarity_score`

**Document-level tasks**:

- Hate Speech/Sentiment: `text`, `label`, `id`
- Question Answering: `id`, `title`, `context`, `question`, `answers`
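As a concrete illustration of these schemas, the sketch below builds one toy instance per task family and checks the expected keys. The field names follow the lists above; all values are invented placeholders:

```python
# Toy instances illustrating the per-task field layout described above.
# Field names follow the dataset card; the values are invented placeholders.
rte_example = {
    "text1": "Vlak meškal.",           # first sentence
    "text2": "Vlak prišiel načas.",    # second sentence
    "label": 1,
    "idx": 0,
    "label_text": "not_entailment",
}

sts_example = {
    "sentence1": "Pes beží po lúke.",
    "sentence2": "Po lúke beží pes.",
    "similarity_score": 4.8,  # real-valued similarity
}

qa_example = {
    "id": "0001",
    "title": "Bratislava",
    "context": "Bratislava je hlavné mesto Slovenska.",
    "question": "Čo je Bratislava?",
    # SQuAD-style answers: parallel lists of answer texts and character offsets
    "answers": {"text": ["hlavné mesto Slovenska"], "answer_start": [14]},
}

assert set(rte_example) == {"text1", "text2", "label", "idx", "label_text"}
assert set(sts_example) == {"sentence1", "sentence2", "similarity_score"}
assert set(qa_example) == {"id", "title", "context", "question", "answers"}
```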
### Data Splits

All tasks follow a standard train/validation/test split structure. Some datasets (HS and QA) originally had only train/test splits, so validation sets were created by sampling from the training data to match the test set size.
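A minimal sketch of how such a validation set can be carved out of the training data; this mirrors the described procedure in spirit and is not the authors' exact script:

```python
import random

def make_validation_split(train, test_size, seed=0):
    """Sample `test_size` examples out of `train` to serve as a validation set.

    Returns (new_train, validation); the two parts are disjoint.
    """
    rng = random.Random(seed)
    indices = set(rng.sample(range(len(train)), test_size))
    validation = [ex for i, ex in enumerate(train) if i in indices]
    new_train = [ex for i, ex in enumerate(train) if i not in indices]
    return new_train, validation

train = [{"text": f"example {i}"} for i in range(100)]
new_train, validation = make_validation_split(train, test_size=20)
print(len(new_train), len(validation))  # 80 20
```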
## Dataset Creation

### Curation Rationale

skLEP was created to address the lack of a comprehensive benchmark for Slovak natural language understanding. While similar benchmarks exist for other Slavic languages (Bulgarian, Polish, Russian, Slovene), no equivalent existed for Slovak despite the emergence of several Slovak-specific large language models.

The benchmark was designed to provide a principled tool for evaluating language understanding capabilities across diverse tasks, enabling systematic comparison of Slovak-specific, multilingual, and English pre-trained models.
### Source Data

#### Initial Data Collection and Normalization

Data was collected from multiple sources:

- **Existing Slovak datasets**: Universal Dependencies, Universal NER, WikiGoldSK, Slovak Hate Speech Database, Reviews3, SK-QuAD
- **Translated datasets**: RTE, NLI (XNLI), and STS were translated from English using machine translation services followed by native speaker post-editing

During preprocessing, duplicates were removed from the XNLI and STS datasets. For STS, sentence pairs with identical text but non-perfect similarity scores were eliminated as translation artifacts.
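The STS cleanup rule described above (drop pairs whose two sides are identical yet whose gold score is below the maximum) can be sketched as follows; the 0–5 score range is an assumption based on common STS conventions:

```python
MAX_SCORE = 5.0  # assumed: STS similarity on the common 0-5 scale

def drop_translation_artifacts(pairs):
    """Remove pairs with identical sentences but a non-perfect similarity score.

    Such pairs can arise when machine translation collapses two distinct
    English sentences into the same Slovak sentence.
    """
    return [
        p for p in pairs
        if p["sentence1"] != p["sentence2"] or p["similarity_score"] == MAX_SCORE
    ]

pairs = [
    {"sentence1": "a", "sentence2": "a", "similarity_score": 5.0},  # kept
    {"sentence1": "a", "sentence2": "a", "similarity_score": 3.0},  # dropped
    {"sentence1": "a", "sentence2": "b", "similarity_score": 3.0},  # kept
]
print(len(drop_translation_artifacts(pairs)))  # 2
```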
#### Who are the source language producers?

The source language producers include:

- Native Slovak speakers for original Slovak datasets
- Professional translators and native Slovak post-editors for translated datasets
- Wikipedia contributors for WikiGoldSK and SK-QuAD
- Social media users for the hate speech dataset
- Customer reviewers for the sentiment analysis dataset
### Annotations

#### Annotation process

Annotation processes varied by dataset:

- **Token-level tasks**: following Universal Dependencies and Universal NER annotation guidelines
- **WikiGoldSK**: manual annotation following BSNLP-2017 guidelines with the CoNLL-2003 NER tagset
- **Hate Speech**: expert annotation with quality filtering (removing annotators with >90% uniform responses or <70% agreement)
- **Sentiment Analysis**: manual labeling by two annotators reaching consensus
- **SK-QuAD**: created by 150+ volunteers and 9 part-time annotators, validated by 5 paid reviewers
- **Translated datasets**: professional translation followed by native speaker post-editing
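The hate-speech annotator quality filter described above can be sketched as follows. The thresholds come from the text; the data layout (a list of labels plus a precomputed agreement rate per annotator) is an assumption for illustration:

```python
from collections import Counter

def keep_annotator(labels, agreement):
    """Quality filter for annotators, per the thresholds described above.

    labels:    this annotator's labels across annotated items
    agreement: fraction of items where the annotator agreed with the others
    Drops annotators whose answers are >90% a single value or whose
    agreement rate is below 70%.
    """
    most_common_count = Counter(labels).most_common(1)[0][1]
    uniform_ratio = most_common_count / len(labels)
    return uniform_ratio <= 0.90 and agreement >= 0.70

print(keep_annotator([0, 1, 0, 1, 0], agreement=0.8))  # True
print(keep_annotator([0] * 99 + [1], agreement=0.8))   # False: 99% uniform
```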
#### Who are the annotators?

Annotators include:

- Expert linguists and NLP researchers for token-level tasks
- Native Slovak speakers for post-editing translated content
- Domain experts for hate speech classification
- Trained volunteers and professional annotators for SK-QuAD
- Customer service experts for sentiment analysis
### Personal and Sensitive Information

The hate speech dataset contains social media posts that may include offensive language by design. Personal information was removed during preprocessing. Other datasets (Wikipedia-based, customer reviews, translated content) have minimal personal information risk.
## Considerations for Using the Data

### Social Impact of Dataset

skLEP enables systematic evaluation and improvement of Slovak NLP models, supporting the development of better language technology for Slovak speakers. The hate speech detection task specifically contributes to online safety tools for Slovak social media platforms.
### Discussion of Biases

Potential biases include:

- **Domain bias**: Wikipedia-heavy content in several tasks may not represent colloquial Slovak
- **Translation bias**: translated tasks may carry over English linguistic patterns
- **Social media bias**: the hate speech dataset reflects specific online communities
- **Geographic bias**: the data may favor standard Slovak over regional variants
### Other Known Limitations

- Some test sets differ from their English counterparts due to translation and re-labeling requirements
- Dataset sizes vary significantly across tasks
- Limited coverage of specialized domains outside Wikipedia and social media
- Validation sets for some tasks were created by splitting training data rather than by independent collection
## Additional Information

### Dataset Curators

skLEP was curated by researchers from:

- Comenius University in Bratislava, Slovakia
- Technical University of Košice, Slovakia
- Kempelen Institute of Intelligent Technologies, Bratislava, Slovakia
- Cisco Systems

Lead contact: Marek Šuppa (<marek@suppa.sk>)
### Licensing Information
### Contributions

Contributions of skLEP include:

- The first comprehensive Slovak NLU benchmark, with nine diverse tasks
- High-quality translations with native speaker post-editing
- Extensive baseline evaluations across multiple model types
- An open-source toolkit and a standardized leaderboard
- A rigorous evaluation methodology with hyperparameter optimization

Future contributions and improvements are welcome through the project repository.