---
dataset_info:
- config_name: Documents
  features:
  - name: doc_id
    dtype: string
  - name: doc
    dtype: string
  splits:
  - name: history
    num_bytes: 508218
    num_examples: 224
  - name: religion
    num_bytes: 302837
    num_examples: 126
  - name: recherche
    num_bytes: 235256
    num_examples: 69
  - name: python
    num_bytes: 660763
    num_examples: 194
  download_size: 952235
  dataset_size: 1707074
- config_name: MOOC_MCQ_Queries
  features:
  - name: query_id
    dtype: string
  - name: query
    dtype: string
  - name: answers
    sequence: string
  - name: distractions
    sequence: string
  - name: relevant_docs
    sequence: string
  splits:
  - name: history
    num_bytes: 13156
    num_examples: 58
  - name: religion
    num_bytes: 52563
    num_examples: 125
  - name: recherche
    num_bytes: 18791
    num_examples: 52
  - name: python
    num_bytes: 29759
    num_examples: 85
  download_size: 80494
  dataset_size: 114269
configs:
- config_name: Documents
  data_files:
  - split: history
    path: Documents/history-*
  - split: religion
    path: Documents/religion-*
  - split: recherche
    path: Documents/recherche-*
  - split: python
    path: Documents/python-*
- config_name: MOOC_MCQ_Queries
  data_files:
  - split: history
    path: MOOC_MCQ_Queries/history-*
  - split: religion
    path: MOOC_MCQ_Queries/religion-*
  - split: recherche
    path: MOOC_MCQ_Queries/recherche-*
  - split: python
    path: MOOC_MCQ_Queries/python-*
---

# Text Embedding Datasets

The text embedding datasets consist of several (query, passage) paired datasets intended for fine-tuning text-embedding models. They are well suited for developing and testing algorithms in natural language processing, information retrieval, and related applications.
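For reference, a minimal loading sketch with the `datasets` library. The Hub repository ID below is a placeholder; the config and split names come from the YAML header above.

```python
from datasets import load_dataset

# "<user>/<dataset-repo>" is a placeholder; substitute the actual Hub repository ID.
queries = load_dataset("<user>/<dataset-repo>", "MOOC_MCQ_Queries", split="history")
documents = load_dataset("<user>/<dataset-repo>", "Documents", split="history")

print(queries[0]["query"])
print(documents[0]["doc_id"])
```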

## Dataset Details

Each dataset in this collection is structured to facilitate the training and evaluation of text-embedding models. The datasets are diverse, covering multiple domains and formats, and are particularly useful for tasks such as semantic search, question answering, and document retrieval.

### [MOOC MCQ Queries]

The "MOOC MCQ Queries" dataset is derived from [FUN MOOC](https://www.fun-mooc.fr/fr/), an online platform offering a wide range of French courses across various domains. The dataset is particularly valuable for its high-quality content, which was manually curated to help students better understand the course materials.

#### Content Overview

- **Language**: French

- **Domains**:

  - History: 57 examples
  - Religion: 125 examples
  - Recherche: 52 examples
  - Python: 85 examples

- **Dataset Description**:

  Each record in the dataset includes the following fields:

```json
{
  "query_id": "Unique identifier for each query",
  "query": "Text of the multiple-choice question (MCQ)",
  "answers": ["List of correct answer choices"],
  "distractions": ["List of incorrect choices"],
  "relevant_docs": ["List of relevant document IDs aiding the answer"]
}
```
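Since `relevant_docs` stores document IDs, these presumably key into the `doc_id` field of the `Documents` config for the same split. Under that assumption, a minimal sketch for assembling (query, passage) fine-tuning pairs could look like this (the repository ID is again a placeholder):

```python
from datasets import load_dataset

repo = "<user>/<dataset-repo>"  # placeholder Hub repository ID
queries = load_dataset(repo, "MOOC_MCQ_Queries", split="history")
documents = load_dataset(repo, "Documents", split="history")

# Index passages by ID so relevant_docs entries can be resolved to text.
doc_by_id = {row["doc_id"]: row["doc"] for row in documents}

# One (query, passage) pair per relevant document, e.g. for contrastive fine-tuning.
pairs = [
    (row["query"], doc_by_id[doc_id])
    for row in queries
    for doc_id in row["relevant_docs"]
    if doc_id in doc_by_id
]
print(f"{len(pairs)} (query, passage) pairs")
```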

- **Statistics**:

| Category  | Num. of Queries | Query Avg. Words | Number of Docs | Short Docs (<375 words) | Long Docs (≥375 words) | Doc Avg. Words |
|-----------|-----------------|------------------|----------------|-------------------------|------------------------|----------------|
| history   | 57              | 11.31            | 224            | 147                     | 77                     | 351.79         |
| religion  | 125             | 15.08            | 126            | 78                      | 48                     | 375.63         |
| recherche | 52              | 12.71            | 69             | 20                      | 49                     | 535.00         |
| python    | 85              | 21.24            | 194            | 27                      | 167                    | 552.60         |
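
The corpus-side columns of this table can be re-derived from the `Documents` config. The sketch below assumes simple whitespace tokenization for word counts, which may differ slightly from whatever tokenization produced the figures above; the 375-word threshold is the one used in the table.

```python
from datasets import load_dataset

# Placeholder repository ID; pick any of the four splits.
docs = load_dataset("<user>/<dataset-repo>", "Documents", split="history")
lengths = [len(row["doc"].split()) for row in docs]

short = sum(1 for n in lengths if n < 375)  # Short Docs (<375 words)
long = len(lengths) - short                 # Long Docs (>=375 words)
print(f"docs={len(lengths)} short={short} long={long} avg={sum(lengths) / len(lengths):.2f}")
```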

### [Wikitext generated Queries]

To be completed.

### [Documents]

This dataset is an extensive collection of document chunks (or, for short texts, entire documents) designed to complement the MOOC MCQ Queries and the other query datasets in the collection.

- **Chunking strategies**:

  - MOOC MCQ Queries: documents are chunked along their natural divisions, such as sections or subsections, so that each chunk preserves its contextual integrity.

- **Content format**:

```json
{
  "doc_id": "Unique identifier for each document",
  "doc": "Text content of the document"
}
```
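
As a sanity check on a fine-tuned model, the queries and this corpus can be combined for a small retrieval evaluation. The sketch below is one possible setup, not a prescribed protocol: the model name is just an example multilingual encoder, the repository ID is a placeholder, and hit rate@k counts a query as a hit when at least one of its `relevant_docs` appears in the top k results.

```python
import numpy as np
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

repo = "<user>/<dataset-repo>"  # placeholder Hub repository ID
queries = load_dataset(repo, "MOOC_MCQ_Queries", split="history")
docs = load_dataset(repo, "Documents", split="history")

# Example multilingual encoder (the corpus is French); any embedding model works here.
model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")
doc_ids = docs["doc_id"]
doc_emb = model.encode(docs["doc"], normalize_embeddings=True)
q_emb = model.encode(queries["query"], normalize_embeddings=True)

# With normalized embeddings, the dot product is cosine similarity.
scores = q_emb @ doc_emb.T
k, hits = 5, 0
for row, sims in zip(queries, scores):
    top_k = {doc_ids[i] for i in np.argsort(-sims)[:k]}
    hits += bool(top_k & set(row["relevant_docs"]))
print(f"hit rate@{k}: {hits / len(queries):.3f}")
```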