Modalities: Text · Formats: parquet · Sub-tasks: text-scoring · Languages: English · Libraries: Datasets, pandas

parquet-converter committed
Commit 86b8c73 · 1 parent: 29650bc

Update parquet files

Files changed (5):
  1. .gitattributes +0 -27
  2. README.md +0 -186
  3. dataset_infos.json +0 -1
  4. default/has_part-train.parquet +3 -0
  5. has_part.py +0 -117
.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text

README.md DELETED
@@ -1,186 +0,0 @@
- ---
- annotations_creators:
- - machine-generated
- language_creators:
- - found
- language:
- - en
- license:
- - unknown
- multilinguality:
- - monolingual
- size_categories:
- - 10K<n<100K
- source_datasets:
- - extended|other-Generics-KB
- task_categories:
- - text-classification
- task_ids:
- - text-scoring
- paperswithcode_id: haspart-kb
- pretty_name: hasPart KB
- tags:
- - Meronym-Prediction
- dataset_info:
-   features:
-   - name: arg1
-     dtype: string
-   - name: arg2
-     dtype: string
-   - name: score
-     dtype: float64
-   - name: wikipedia_primary_page
-     sequence: string
-   - name: synset
-     sequence: string
-   splits:
-   - name: train
-     num_bytes: 4363417
-     num_examples: 49848
-   download_size: 7437382
-   dataset_size: 4363417
- ---
-
- # Dataset Card for hasPart KB
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** https://allenai.org/data/haspartkb
- - **Repository:**
- - **Paper:** https://arxiv.org/abs/2006.07510
- - **Leaderboard:**
- - **Point of Contact:** Peter Clark <peterc@allenai.org>
-
- ### Dataset Summary
-
- This dataset is a new knowledge-base (KB) of hasPart relationships, extracted from a large corpus of generic statements. Complementary to other resources available, it is the first which is all three of: accurate (90% precision), salient (covers relationships a person may mention), and has high coverage of common terms (approximated as within a 10 year old’s vocabulary), as well as having several times more hasPart entries than in the popular ontologies ConceptNet and WordNet. In addition, it contains information about quantifiers, argument modifiers, and links the entities to appropriate concepts in Wikipedia and WordNet.
-
- ### Supported Tasks and Leaderboards
-
- Text Classification / Scoring - meronyms (e.g., `plant` has part `stem`)
-
- ### Languages
-
- English
-
- ## Dataset Structure
-
- ### Data Instances
-
- A typical instance looks like this:
- ```
- {'arg1': 'plant',
-  'arg2': 'stem',
-  'score': 0.9991798414303377,
-  'synset': ['wn.plant.n.02', 'wn.stalk.n.02'],
-  'wikipedia_primary_page': ['Plant']}
- ```
-
- ### Data Fields
-
- - `arg1`, `arg2`: the entities of the meronym, i.e., `arg1` _has\_part_ `arg2`
- - `score`: meronymic score, per the procedure described under Dataset Creation
- - `synset`: ontological classification from WordNet for the two entities
- - `wikipedia_primary_page`: Wikipedia page of the entities
-
- **Note**: some examples contain synset / Wikipedia info for only one of the entities.
-
- ### Data Splits
-
- A single train split with 49,848 examples.
-
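For reference, the split documented above can be loaded with the `datasets` library. This is a minimal sketch, assuming the dataset id `has_part` resolves to this repository:

```python
from datasets import load_dataset

# Load the single train split; the dataset id is an assumption based on this repository's name.
ds = load_dataset("has_part", split="train")

example = ds[0]
print(example["arg1"], "has part", example["arg2"], "with score", example["score"])
```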
- ## Dataset Creation
-
- Our approach to hasPart extraction has five steps:
-
- 1. Collect generic sentences from a large corpus
- 2. Train and apply a RoBERTa model to identify hasPart relations in those sentences
- 3. Normalize the entity names
- 4. Aggregate and filter the entries
- 5. Link the hasPart arguments to Wikipedia pages and WordNet senses
-
- Rather than extract knowledge from arbitrary text, we extract hasPart relations from generic sentences, e.g., “Dogs have tails.”, in order to bias the process towards extractions that are general (apply to most members of a category) and salient (notable enough to write down). As a source of generic sentences, we use **GenericsKB**, a large repository of 3.4M standalone generics previously harvested from a Web crawl of 1.7B sentences.
-
- ### Annotations
-
- #### Annotation process
-
- For each sentence _S_ in GenericsKB, we identify all noun chunks in the sentence using a noun chunker (spaCy's `Doc.noun_chunks`). Each chunk is a candidate whole or part. Then, for each possible pair, we use a RoBERTa model to classify whether a hasPart relationship exists between them. The input sentence is presented to RoBERTa as a sequence of wordpiece tokens, with the start and end of the candidate hasPart arguments identified using special tokens, e.g.:
-
- > `[CLS] [ARG1-B]Some pond snails[ARG1-E] have [ARG2-B]gills[ARG2-E] to breathe in water.`
-
- where `[ARG1/2-B/E]` are special tokens denoting the argument boundaries. The `[CLS]` token is projected to two class labels (hasPart/notHasPart), and a softmax layer is applied to produce output probabilities for the class labels. We train with cross-entropy loss. The model is RoBERTa-large (24 layers, hidden size 1024, 16 attention heads, 355M parameters in total). We start from the pre-trained weights released with the model and fine-tune on our labeled data, a hand-annotated set of ∼2k examples, for 15 epochs.
-
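To make the input format above concrete, here is a minimal sketch of argument-marked sequence classification with the `transformers` library. The special tokens follow the description above; the `roberta-large` checkpoint, the added-token handling, and the freshly initialized classification head are assumptions, not the authors' released model:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Argument-boundary markers described above, registered as extra special tokens
# (an assumption; the authors' exact tokenizer setup is not released with this dataset).
special_tokens = ["[ARG1-B]", "[ARG1-E]", "[ARG2-B]", "[ARG2-E]"]

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
tokenizer.add_special_tokens({"additional_special_tokens": special_tokens})

# Two labels (hasPart vs. notHasPart); this classification head is freshly initialized, not trained.
model = AutoModelForSequenceClassification.from_pretrained("roberta-large", num_labels=2)
model.resize_token_embeddings(len(tokenizer))  # account for the four added tokens

sentence = "[ARG1-B]Some pond snails[ARG1-E] have [ARG2-B]gills[ARG2-E] to breathe in water."
inputs = tokenizer(sentence, return_tensors="pt")
probs = model(**inputs).logits.softmax(dim=-1)  # class probabilities from the untrained head
print(probs)
```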
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information
-
- [More Information Needed]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed]
-
- ### Licensing Information
-
- [More Information Needed]
-
- ### Citation Information
-
- @misc{bhakthavatsalam2020dogs,
-   title={Do Dogs have Whiskers? A New Knowledge Base of hasPart Relations},
-   author={Sumithra Bhakthavatsalam and Kyle Richardson and Niket Tandon and Peter Clark},
-   year={2020},
-   eprint={2006.07510},
-   archivePrefix={arXiv},
-   primaryClass={cs.CL}
- }
-
- ### Contributions
-
- Thanks to [@jeromeku](https://github.com/jeromeku) for adding this dataset.

dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "This dataset is a new knowledge-base (KB) of hasPart relationships, extracted from a large corpus of generic statements. Complementary to other resources available, it is the first which is all three of: accurate (90% precision), salient (covers relationships a person may mention), and has high coverage of common terms (approximated as within a 10 year old\u2019s vocabulary), as well as having several times more hasPart entries than in the popular ontologies ConceptNet and WordNet. In addition, it contains information about quantifiers, argument modifiers, and links the entities to appropriate concepts in Wikipedia and WordNet.\n", "citation": "@misc{bhakthavatsalam2020dogs,\n title={Do Dogs have Whiskers? A New Knowledge Base of hasPart Relations},\n author={Sumithra Bhakthavatsalam and Kyle Richardson and Niket Tandon and Peter Clark},\n year={2020},\n eprint={2006.07510},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://allenai.org/data/haspartkb", "license": "", "features": {"arg1": {"dtype": "string", "id": null, "_type": "Value"}, "arg2": {"dtype": "string", "id": null, "_type": "Value"}, "score": {"dtype": "float64", "id": null, "_type": "Value"}, "wikipedia_primary_page": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "synset": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "has_part", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 4363417, "num_examples": 49848, "dataset_name": "has_part"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1Ev4RqWcPsLI9rgOGAKh-_dFKqcEZ1u-G": {"num_bytes": 7437382, "checksum": "cc38fd2b464bc45c05a6a31162801bc1b3e6a6be43bb4293b53c102e03d27193"}}, "download_size": 7437382, "post_processing_size": null, "dataset_size": 4363417, "size_in_bytes": 11800799}}
 
 
default/has_part-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:436e9b3b2bcb1752b653d525ceef114a12e9b17eda869d34d317e6403f9b09b3
+ size 2069390
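The added file is a Git LFS pointer to the converted train split. After cloning the repository with LFS enabled, the parquet file can be read directly; a minimal sketch, with the path taken from the repository layout:

```python
import pandas as pd

# Read the converted train split from a local checkout of this repository.
df = pd.read_parquet("default/has_part-train.parquet")

print(len(df))               # 49,848 rows, per the dataset card
print(df.columns.tolist())   # ['arg1', 'arg2', 'score', 'wikipedia_primary_page', 'synset']
```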
has_part.py DELETED
@@ -1,117 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- """This dataset is a new knowledge-base (KB) of hasPart relationships, extracted from a large corpus of generic statements. Complementary to other resources available, it is the first which is all three of: accurate (90% precision), salient (covers relationships a person may mention), and has high coverage of common terms (approximated as within a 10 year old’s vocabulary), as well as having several times more hasPart entries than in the popular ontologies ConceptNet and WordNet. In addition, it contains information about quantifiers, argument modifiers, and links the entities to appropriate concepts in Wikipedia and WordNet."""
-
-
- import ast
- from collections import defaultdict
-
- import datasets
-
-
- _CITATION = """\
- @misc{bhakthavatsalam2020dogs,
-     title={Do Dogs have Whiskers? A New Knowledge Base of hasPart Relations},
-     author={Sumithra Bhakthavatsalam and Kyle Richardson and Niket Tandon and Peter Clark},
-     year={2020},
-     eprint={2006.07510},
-     archivePrefix={arXiv},
-     primaryClass={cs.CL}
- }
- """
-
- _DESCRIPTION = """\
- This dataset is a new knowledge-base (KB) of hasPart relationships, extracted from a large corpus of generic statements. Complementary to other resources available, it is the first which is all three of: accurate (90% precision), salient (covers relationships a person may mention), and has high coverage of common terms (approximated as within a 10 year old’s vocabulary), as well as having several times more hasPart entries than in the popular ontologies ConceptNet and WordNet. In addition, it contains information about quantifiers, argument modifiers, and links the entities to appropriate concepts in Wikipedia and WordNet.
- """
-
- _HOMEPAGE = "https://allenai.org/data/haspartkb"
-
- _LICENSE = ""
-
-
- # Google Drive ids for the source data; the TSV export is the one actually used.
- TSV_ID = "1Ev4RqWcPsLI9rgOGAKh-_dFKqcEZ1u-G"
- FOLDER_ID = "1NzjXX46NnpxtgxBrkBWFiUbsXAMdd-lB"
- ID = TSV_ID
-
- _URL = f"https://drive.google.com/uc?export=download&id={ID}"
-
-
- class HasPart(datasets.GeneratorBasedBuilder):
-     def _info(self):
-         features = datasets.Features(
-             {
-                 "arg1": datasets.features.Value("string"),
-                 "arg2": datasets.features.Value("string"),
-                 "score": datasets.features.Value("float64"),
-                 "wikipedia_primary_page": datasets.features.Sequence(datasets.features.Value("string")),
-                 "synset": datasets.features.Sequence(datasets.features.Value("string")),
-             }
-         )
-
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=features,
-             supervised_keys=None,
-             homepage=_HOMEPAGE,
-             license=_LICENSE,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-
-         dl_fp = dl_manager.download_and_extract(_URL)
-
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={
-                     "input_file": dl_fp,
-                     "split": "train",
-                 },
-             ),
-         ]
-
-     def _parse_metadata(self, md):
-         """The metadata column is a list of dicts serialized in the TSV file, so it is
-         parsed with ast.literal_eval into Python objects.
-
-         Note that metadata entries that raise a parsing error are skipped.
-         """
-         md = ast.literal_eval(md)
-         dd = defaultdict(list)
-
-         for entry in md:
-             try:
-                 for k, v in entry.items():
-                     dd[k].append(v)
-             except AttributeError:
-                 continue
-         return dd
-
-     def _generate_examples(self, input_file, split):
-         """Yields examples."""
-         with open(input_file, encoding="utf-8") as f:
-             for id_, line in enumerate(f):
-                 # TSV columns: id, arg1, arg2, score, metadata
-                 _, arg1, arg2, score, metadata = line.split("\t")
-                 metadata = self._parse_metadata(metadata)
-                 example = {
-                     "arg1": arg1,
-                     "arg2": arg2,
-                     "score": float(score),
-                     "wikipedia_primary_page": metadata["wikipedia_primary_page"],
-                     "synset": metadata["synset"],
-                 }
-                 yield id_, example
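
For illustration, the metadata parsing in `_parse_metadata` can be exercised on its own. The sketch below uses a made-up metadata cell shaped like the card's example instance; the field values are hypothetical:

```python
import ast
from collections import defaultdict

# Hypothetical metadata cell, shaped like the serialized list-of-dicts the script expects.
md = "[{'wikipedia_primary_page': 'Plant', 'synset': 'wn.plant.n.02'}, {'synset': 'wn.stalk.n.02'}]"

dd = defaultdict(list)
for entry in ast.literal_eval(md):
    for k, v in entry.items():
        dd[k].append(v)

print(dict(dd))
# {'wikipedia_primary_page': ['Plant'], 'synset': ['wn.plant.n.02', 'wn.stalk.n.02']}
```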