Modalities: Text
Formats: json
Sub-tasks: extractive-qa
Languages: Catalan
Size: < 1K
Libraries: Datasets, pandas
carmentano committed · Commit d35ab5b · 1 Parent(s): 7be2de0

Update README.md

Files changed (1): README.md (+7 -7)
README.md CHANGED
@@ -20,7 +20,7 @@ task_ids:
   - extractive-qa
 ---
 
-# ViquiQuAD, An extractive QA dataset for catalan, from the Wikipedia
+# ViquiQuAD, An extractive QA dataset for Catalan, from the Wikipedia
 
 ## Table of Contents
 - [Table of Contents](#table-of-contents)
@@ -55,9 +55,9 @@ task_ids:
 
 ### Dataset Summary
 
-ViquiQuAD, An extractive QA dataset for catalan, from the Wikipedia.
+ViquiQuAD, An extractive QA dataset for Catalan, from the Wikipedia.
 
-This dataset contains 3111 contexts extracted from a set of 597 high quality original (no translations) articles in the Catalan Wikipedia "Viquipèdia" (ca.wikipedia.org), and 1 to 5 questions with their answer for each fragment.
+This dataset contains 3111 contexts extracted from a set of 597 high quality original (no translations) articles in the Catalan Wikipedia "[Viquipèdia](https://ca.wikipedia.org)", and 1 to 5 questions with their answer for each fragment.
 
 Viquipedia articles are used under [CC-by-sa](https://creativecommons.org/licenses/by-sa/3.0/legalcode) licence.
 
@@ -90,7 +90,7 @@ The dataset is in Catalan (`ca-CA`).
 
 ### Data Fields
 
-Follows [Rajpurkar, Pranav et al., 2016](http://arxiv.org/abs/1606.05250) for squad v1 datasets.
+Follows [Rajpurkar, Pranav et al., 2016](http://arxiv.org/abs/1606.05250) for SQuAD v1 datasets.
 
 - `id` (str): Unique ID assigned to the question.
 - `title` (str): Title of the Wikipedia article.
@@ -111,7 +111,7 @@ Follows [Rajpurkar, Pranav et al., 2016](http://arxiv.org/abs/1606.05250) for sq
 
 ### Methodology
 
-From a set of high quality, non-translation, articles in the Catalan Wikipedia (ca.wikipedia.org), 597 were randomly chosen, and from them 3111, 5-8 sentence contexts were extracted. We commissioned creation of between 1 and 5 questions for each context, following an adaptation of the guidelines from SQUAD 1.0 ([Rajpurkar, Pranav et al. “SQuAD: 100, 000+ Questions for Machine Comprehension of Text.” EMNLP (2016)](http://arxiv.org/abs/1606.05250)). In total, 15153 pairs of a question and an extracted fragment that contains the answer were created.
+From a set of high quality, non-translation articles in the [Catalan Wikipedia](https://ca.wikipedia.org), 597 were randomly chosen, and from them 3111 contexts of 5 to 8 sentences were extracted. We commissioned the creation of between 1 and 5 questions for each context, following an adaptation of the guidelines from SQuAD 1.0 ([Rajpurkar, Pranav et al. “SQuAD: 100,000+ Questions for Machine Comprehension of Text.” EMNLP (2016)](http://arxiv.org/abs/1606.05250)). In total, 15153 pairs of a question and an extracted fragment that contains the answer were created.
 
 ### Curation Rationale
 
@@ -119,7 +119,7 @@ For compatibility with similar datasets in other languages, we followed as close
 
 ### Source Data
 
-- https://ca.wikipedia.org
+- [Catalan Wikipedia](https://ca.wikipedia.org)
 
 #### Initial Data Collection and Normalization
 
@@ -133,7 +133,7 @@ Volunteers who collaborate with [Catalan Wikipedia](ca.wikipedia.org).
 
 #### Annotation process
 
-We commissioned the creation of 1 to 5 questions for each context, following an adaptation of the guidelines from SQUAD 1.0 ([Rajpurkar, Pranav et al. “SQuAD: 100, 000+ Questions for Machine Comprehension of Text.” EMNLP (2016)](http://arxiv.org/abs/1606.05250)).
+We commissioned the creation of 1 to 5 questions for each context, following an adaptation of the guidelines from SQuAD 1.0 ([Rajpurkar, Pranav et al. “SQuAD: 100,000+ Questions for Machine Comprehension of Text.” EMNLP (2016)](http://arxiv.org/abs/1606.05250)).
 
 #### Who are the annotators?
 
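
The data fields listed in the README follow the SQuAD v1 schema, so the dataset should be loadable and inspectable with the Hugging Face `datasets` library named in the page metadata. The sketch below is illustrative only: the repository id and the `train` split name are assumptions, not something stated on this page.

```python
# Minimal sketch (not part of this commit): loading a SQuAD-v1-style dataset
# such as ViquiQuAD with the Hugging Face `datasets` library.
from datasets import load_dataset

# Hypothetical repository id; replace with the dataset's actual id on the Hub.
dataset = load_dataset("your-namespace/viquiquad")

# Assuming a "train" split and the schema described in the README:
# `id`, `title`, `context`, `question`, and `answers` with parallel
# `text` and `answer_start` lists.
example = dataset["train"][0]
print(example["title"])
print(example["question"])
print(example["answers"]["text"][0], "@ char", example["answers"]["answer_start"][0])
```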