datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | trending_score | card
---|---|---|---|---|---|---|---|---|---|
Asap7772/omnimath-hint-generator-qwen3-4b-filtered-lr5e5 | Asap7772 | 2025-05-06T03:38:13Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T03:38:08Z | null | ---
dataset_info:
features:
- name: domain
sequence: string
- name: difficulty
dtype: float64
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: source
dtype: string
- name: note1
dtype: string
- name: note2
dtype: string
- name: note3
dtype: string
- name: note4
dtype: string
- name: note5
dtype: string
- name: all_hints
dtype: string
splits:
- name: train
num_bytes: 32335811
num_examples: 4428
download_size: 17175995
dataset_size: 32335811
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Asap7772/omnimath-hint-generator-qwen3-4b-filtered-lr1e6 | Asap7772 | 2025-05-06T03:37:54Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T03:37:48Z | null | ---
dataset_info:
features:
- name: domain
sequence: string
- name: difficulty
dtype: float64
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: source
dtype: string
- name: note1
dtype: string
- name: note2
dtype: string
- name: note3
dtype: string
- name: note4
dtype: string
- name: note5
dtype: string
- name: all_hints
dtype: string
splits:
- name: train
num_bytes: 31789157
num_examples: 4428
download_size: 16627252
dataset_size: 31789157
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
milashkaarshif/MoeGirlPedia_wikitext_raw_archive | milashkaarshif | 2025-05-06T03:24:48Z | 261 | 29 | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:zh",
"language:ja",
"language:en",
"license:cc-by-nc-sa-3.0",
"size_categories:1M<n<10M",
"region:us",
"wiki",
"wikitext",
"anime",
"comic",
"game",
"archive",
"art",
"music",
"pedia",
"MGP",
"萌娘百科",
"萌百",
"百科",
"维基"
] | [
"text-generation",
"text2text-generation"
] | 2023-05-03T14:07:17Z | null | ---
configs:
- config_name: default
data_files:
- split: train
path: "mgp_archive_2505.tar.gz"
license: cc-by-nc-sa-3.0
task_categories:
- text-generation
- text2text-generation
language:
- zh
- ja
- en
tags:
- wiki
- wikitext
- anime
- comic
- game
- archive
- art
- music
- pedia
- MGP
- 萌娘百科
- 萌百
- 百科
- 维基
size_categories:
- 1M<n<10M
---
Glad to see that models and datasets have been inspired by this dataset; thanks to all who are using it in their training materials.
Feel free to re-upload the contents to places like the Internet Archive (please follow the license and keep these files as-is) to help preserve this digital asset.
Looking forward to seeing more models and synthetic datasets trained from this raw archive. Good luck!
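For mirroring or local use, a minimal sketch of fetching and unpacking the tarball named in the frontmatter above, assuming the `huggingface_hub` client is available:
```python
from huggingface_hub import hf_hub_download
import tarfile

# Illustrative sketch: download the raw archive referenced by the config above
# and unpack it, keeping the files as-is per the license note.
path = hf_hub_download(
    repo_id="milashkaarshif/MoeGirlPedia_wikitext_raw_archive",
    filename="mgp_archive_2505.tar.gz",
    repo_type="dataset",
)
with tarfile.open(path) as tar:
    tar.extractall("mgp_archive_2505")
```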
Note: Due to the content censorship system introduced by MGP on 2024/03/29, it is unclear how future backups will be conducted. mgp_archive_240329.tar.gz is the last dataset from before content censorship. |
flyingbugs/OpenR1-Math-220k-pruned-middle-random-perturbation | flyingbugs | 2025-05-06T03:24:24Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T03:23:12Z | null | ---
dataset_info:
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: problem_type
dtype: string
- name: question_type
dtype: string
- name: source
dtype: string
- name: uuid
dtype: string
- name: is_reasoning_complete
sequence: bool
- name: generations
sequence: string
- name: correctness_math_verify
sequence: bool
- name: correctness_llama
sequence: bool
- name: finish_reasons
sequence: string
- name: correctness_count
dtype: int64
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 4641654537
num_examples: 93733
download_size: 2050398207
dataset_size: 4641654537
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kaiwenw/distill-r1-qwen-1.5b-hmmt-feb-25-4096-with-old-prm-indices_38400_46080 | kaiwenw | 2025-05-06T03:19:25Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T03:19:11Z | null | ---
dataset_info:
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: answer
dtype: string
- name: processed_answer
dtype: string
- name: responses
dtype: string
- name: reward
dtype: bool
- name: prompt_len
dtype: int64
- name: response_len
dtype: int64
- name: classifier_scores
sequence: float64
splits:
- name: train
num_bytes: 1134888781
num_examples: 7680
download_size: 268323879
dataset_size: 1134888781
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
flyingbugs/OpenR1-Math-220k-pruned-tail-random-perturbation | flyingbugs | 2025-05-06T03:18:47Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T03:17:35Z | null | ---
dataset_info:
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: problem_type
dtype: string
- name: question_type
dtype: string
- name: source
dtype: string
- name: uuid
dtype: string
- name: is_reasoning_complete
sequence: bool
- name: generations
sequence: string
- name: correctness_math_verify
sequence: bool
- name: correctness_llama
sequence: bool
- name: finish_reasons
sequence: string
- name: correctness_count
dtype: int64
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 4659603091
num_examples: 93733
download_size: 2049154692
dataset_size: 4659603091
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
deployedApps/logs | deployedApps | 2025-05-06T03:18:36Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-05T09:51:10Z | null | ---
dataset_info:
features:
- name: timestamp
dtype: string
- name: prompt
dtype: string
- name: total_images
dtype: int64
- name: total_time
dtype: float64
- name: individual_times
sequence: float64
splits:
- name: train
num_bytes: 712
num_examples: 2
download_size: 3153
dataset_size: 712
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
trqcbf/merged_par0 | trqcbf | 2025-05-06T03:14:03Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-05T18:42:59Z | null | ---
dataset_info:
features:
- name: question_1
dtype: string
- name: figure_path
dtype: string
- name: choices_1
dtype: string
- name: answer
dtype: string
- name: reasoning_path
dtype: string
- name: task_type
dtype: string
- name: caption
dtype: string
- name: related_text
dtype: string
- name: paper_id
dtype: string
- name: reasoning_path_revised_time
dtype: int64
- name: question_type
dtype: float64
- name: source
dtype: string
- name: key_question
dtype: int64
- name: key_image
dtype: int64
- name: task
dtype: string
- name: generated_index
dtype: string
- name: question
dtype: string
- name: choices
dtype: string
- name: correct_index
dtype: int64
- name: code
dtype: string
- name: run_id
dtype: int64
- name: seed
dtype: int64
splits:
- name: train
num_bytes: 1075046
num_examples: 218
download_size: 204387
dataset_size: 1075046
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.2_num-company_3_dataset_1_for_gen_3_v2 | HungVu2003 | 2025-05-06T03:07:27Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T03:07:25Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 4066563
num_examples: 14998
download_size: 2167612
dataset_size: 4066563
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kaiwenw/distill-r1-qwen-1.5b-hmmt-feb-25-4096-with-labels-prm-indices_38400_46080 | kaiwenw | 2025-05-06T03:07:14Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T03:06:49Z | null | ---
dataset_info:
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: answer
dtype: string
- name: processed_answer
dtype: string
- name: responses
dtype: string
- name: reward
dtype: bool
- name: prompt_len
dtype: int64
- name: response_len
dtype: int64
- name: classifier_scores
sequence: float64
splits:
- name: train
num_bytes: 1134888781
num_examples: 7680
download_size: 670961721
dataset_size: 1134888781
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ashikshaffi08/reddit_dataset_250 | ashikshaffi08 | 2025-05-06T02:56:18Z | 174 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-03-26T21:49:34Z | null | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 Reddit Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** ashikshaffi08/reddit_dataset_250
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5F9HhkadjnEgvCwMqDpD3eS3jeaHmj9WNM9KRYia9PAdqBjS
### Miner Data Compliance Agreement
In uploading this dataset, I am agreeing to the [Macrocosmos Miner Data Compliance Policy](https://github.com/macrocosm-os/data-universe/blob/add-miner-policy/docs/miner_policy.md).
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed Reddit data. The data is continuously updated by network miners, providing a real-time stream of Reddit content for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Topic Modeling
- Community Analysis
- Content Categorization
### Languages
Primary language: Datasets are mostly English, but they can be multilingual due to the decentralized way the data is created.
## Dataset Structure
### Data Instances
Each instance represents a single Reddit post or comment with the following fields:
### Data Fields
- `text` (string): The main content of the Reddit post or comment.
- `label` (string): Sentiment or topic category of the content.
- `dataType` (string): Indicates whether the entry is a post or a comment.
- `communityName` (string): The name of the subreddit where the content was posted.
- `datetime` (string): The date when the content was posted or commented.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the content.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
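As a minimal sketch of a timestamp-based split, assuming the default config loads with the `datasets` library and that `datetime` is an ISO-8601 string:
```python
from datasets import load_dataset

# Illustrative sketch: load the default config and split on the `datetime` field.
# ISO-8601 strings compare correctly as plain strings.
ds = load_dataset("ashikshaffi08/reddit_dataset_250", split="train")

cutoff = "2025-01-01T00:00:00Z"  # illustrative cutoff, an assumption
older = ds.filter(lambda row: row["datetime"] < cutoff)
newer = ds.filter(lambda row: row["datetime"] >= cutoff)
print(len(older), len(newer))
```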
## Dataset Creation
### Source Data
Data is collected from public posts and comments on Reddit, adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in Reddit data, including demographic and content biases. This dataset reflects the content and opinions expressed on Reddit and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the nature of media sources.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public subreddits and does not include private or restricted communities.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to Reddit Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{ashikshaffi082025datauniversereddit_dataset_250,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={ashikshaffi08},
year={2025},
url={https://huggingface.co/datasets/ashikshaffi08/reddit_dataset_250},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 4482985
- **Date Range:** 2009-06-19T00:00:00Z to 2025-05-06T00:00:00Z
- **Last Updated:** 2025-05-06T02:56:16Z
### Data Distribution
- Posts: 12.35%
- Comments: 87.65%
### Top 10 Subreddits
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | r/AskReddit | 124281 | 2.77% |
| 2 | r/wallstreetbets | 89717 | 2.00% |
| 3 | r/politics | 86937 | 1.94% |
| 4 | r/worldnews | 63162 | 1.41% |
| 5 | r/news | 28455 | 0.63% |
| 6 | r/gaming | 28186 | 0.63% |
| 7 | r/nba | 24429 | 0.54% |
| 8 | r/pics | 24399 | 0.54% |
| 9 | r/relationship_advice | 21964 | 0.49% |
| 10 | r/todayilearned | 21703 | 0.48% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-03-26T21:50:11Z | 430042 | 430042 |
| 2025-03-27T15:51:31Z | 985643 | 1415685 |
| 2025-04-22T08:53:11Z | 5141 | 1420826 |
| 2025-04-23T02:50:50Z | 2 | 1420828 |
| 2025-04-23T20:04:19Z | 2 | 1420830 |
| 2025-04-24T14:04:32Z | 2 | 1420832 |
| 2025-04-25T08:20:26Z | 2 | 1420834 |
| 2025-05-01T18:23:13Z | 257841 | 1678675 |
| 2025-05-02T12:22:07Z | 318080 | 1996755 |
| 2025-05-03T06:29:53Z | 808673 | 2805428 |
| 2025-05-06T02:56:16Z | 1677557 | 4482985 |
|
pranavsaroha/so100_foldtowel_0505_01 | pranavsaroha | 2025-05-06T02:52:50Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot",
"so100",
"fold_towel"
] | [
"robotics"
] | 2025-05-06T02:44:59Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- fold_towel
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100_bimanual",
"total_episodes": 8,
"total_frames": 11746,
"total_tasks": 1,
"total_videos": 32,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:8"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
12
],
"names": [
"left_shoulder_pan",
"left_shoulder_lift",
"left_elbow_flex",
"left_wrist_flex",
"left_wrist_roll",
"left_gripper",
"right_shoulder_pan",
"right_shoulder_lift",
"right_elbow_flex",
"right_wrist_flex",
"right_wrist_roll",
"right_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
12
],
"names": [
"left_shoulder_pan",
"left_shoulder_lift",
"left_elbow_flex",
"left_wrist_flex",
"left_wrist_roll",
"left_gripper",
"right_shoulder_pan",
"right_shoulder_lift",
"right_elbow_flex",
"right_wrist_flex",
"right_wrist_roll",
"right_gripper"
]
},
"observation.images.left_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.overhead": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.right_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.side_camera": {
"dtype": "video",
"shape": [
720,
1280,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 720,
"video.width": 1280,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
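As an illustrative sketch, the `data_path` and `video_path` templates above can be resolved for a single episode; mapping episodes to chunks via `chunks_size` is an assumption based on the fields shown:
```python
# Illustrative sketch: format the path templates from meta/info.json for one episode.
info = {
    "chunks_size": 1000,
    "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
    "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
}

episode_index = 3
episode_chunk = episode_index // info["chunks_size"]  # assumed grouping of episodes into chunks

print(info["data_path"].format(episode_chunk=episode_chunk, episode_index=episode_index))
# -> data/chunk-000/episode_000003.parquet
print(info["video_path"].format(
    episode_chunk=episode_chunk,
    video_key="observation.images.overhead",
    episode_index=episode_index,
))
# -> videos/chunk-000/observation.images.overhead/episode_000003.mp4
```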
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
kaiwenw/distill-r1-qwen-1.5b-hmmt-feb-25-4096-with-labels-prm-indices_69120_76800 | kaiwenw | 2025-05-06T02:52:13Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T02:51:48Z | null | ---
dataset_info:
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: answer
dtype: string
- name: processed_answer
dtype: string
- name: responses
dtype: string
- name: reward
dtype: bool
- name: prompt_len
dtype: int64
- name: response_len
dtype: int64
- name: classifier_scores
sequence: float64
splits:
- name: train
num_bytes: 1134084142
num_examples: 7680
download_size: 670229063
dataset_size: 1134084142
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kaiwenw/distill-r1-qwen-1.5b-hmmt-feb-25-4096-with-old-prm-indices_69120_76800 | kaiwenw | 2025-05-06T02:48:54Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T02:48:43Z | null | ---
dataset_info:
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: answer
dtype: string
- name: processed_answer
dtype: string
- name: responses
dtype: string
- name: reward
dtype: bool
- name: prompt_len
dtype: int64
- name: response_len
dtype: int64
- name: classifier_scores
sequence: float64
splits:
- name: train
num_bytes: 1134084142
num_examples: 7680
download_size: 267686389
dataset_size: 1134084142
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
PhanithLIM/whisper-small-khmer-pre | PhanithLIM | 2025-05-06T02:46:38Z | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-05-06T02:46:38Z | null | ---
license: apache-2.0
---
|
kaiwenw/distill-r1-qwen-1.5b-hmmt-feb-25-4096-with-old-prm-indices_7680_15360 | kaiwenw | 2025-05-06T02:40:18Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T02:40:06Z | null | ---
dataset_info:
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: answer
dtype: string
- name: processed_answer
dtype: string
- name: responses
dtype: string
- name: reward
dtype: bool
- name: prompt_len
dtype: int64
- name: response_len
dtype: int64
- name: classifier_scores
sequence: float64
splits:
- name: train
num_bytes: 1129187355
num_examples: 7680
download_size: 266731036
dataset_size: 1129187355
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AlignmentResearch/DoNotAnswer | AlignmentResearch | 2025-05-06T02:18:25Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T02:18:15Z | null | ---
dataset_info:
- config_name: default
features:
- name: clf_label
dtype:
class_label:
names:
'0': Benign
'1': Harmful
- name: instructions
dtype: string
- name: content
sequence: string
- name: answer_prompt
dtype: string
- name: proxy_clf_label
dtype: int64
- name: gen_target
dtype: string
- name: proxy_gen_target
dtype: string
splits:
- name: train
num_bytes: 20918
num_examples: 132
- name: validation
num_bytes: 0
num_examples: 0
download_size: 9692
dataset_size: 20918
- config_name: neg
features:
- name: clf_label
dtype:
class_label:
names:
'0': Benign
'1': Harmful
- name: instructions
dtype: string
- name: content
sequence: string
- name: answer_prompt
dtype: string
- name: proxy_clf_label
dtype: int64
- name: gen_target
dtype: string
- name: proxy_gen_target
dtype: string
splits:
- name: train
num_bytes: 0
num_examples: 0
- name: validation
num_bytes: 0
num_examples: 0
download_size: 4268
dataset_size: 0
- config_name: pos
features:
- name: clf_label
dtype:
class_label:
names:
'0': Benign
'1': Harmful
- name: instructions
dtype: string
- name: content
sequence: string
- name: answer_prompt
dtype: string
- name: proxy_clf_label
dtype: int64
- name: gen_target
dtype: string
- name: proxy_gen_target
dtype: string
splits:
- name: train
num_bytes: 20918
num_examples: 132
- name: validation
num_bytes: 0
num_examples: 0
download_size: 9692
dataset_size: 20918
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- config_name: neg
data_files:
- split: train
path: neg/train-*
- split: validation
path: neg/validation-*
- config_name: pos
data_files:
- split: train
path: pos/train-*
- split: validation
path: pos/validation-*
---
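The frontmatter above declares three configs (`default`, `pos`, `neg`), each with `train` and `validation` splits; a minimal loading sketch, assuming the `datasets` library:
```python
from datasets import load_dataset

# Illustrative sketch: select one of the configs declared above.
pos_train = load_dataset("AlignmentResearch/DoNotAnswer", "pos", split="train")
print(pos_train.features["clf_label"].names)  # ['Benign', 'Harmful'] per the class_label mapping
```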
|
HungVu2003/opt-350m_beta_0.5_alpha_0.2_num-company_2_dataset_0_for_gen_2_v2 | HungVu2003 | 2025-05-06T01:24:48Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T01:24:47Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 2041973
num_examples: 13750
download_size: 1129954
dataset_size: 2041973
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Asap7772/Omni-MATH-20-per-source | Asap7772 | 2025-05-06T01:24:10Z | 8 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-04T00:03:16Z | null | ---
dataset_info:
features:
- name: domain
sequence: string
- name: difficulty
dtype: float64
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: source
dtype: string
- name: gpt-4.1-mini_responses
sequence: string
- name: gpt-4.1-mini_is_corrects
sequence: bool
- name: gpt-4.1-mini_success_rate
dtype: float64
splits:
- name: train
num_bytes: 7971519
num_examples: 140
download_size: 3091915
dataset_size: 7971519
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
grimjim/role_meta_info_multilingual | grimjim | 2025-05-06T01:23:54Z | 0 | 0 | [
"language:en",
"language:zh",
"license:mit",
"size_categories:1K<n<10K",
"arxiv:2401.12474",
"region:us"
] | [] | 2025-05-06T01:11:54Z | null | ---
language:
- en
- zh
size_categories:
- 1K<n<10K
license: mit
---
Adapted from
["Large Language Models are Superpositions of All Characters: Attaining Arbitrary Role-play via Self-Alignment" by Keming Lu, Bowen Yu, Chang Zhou, and Jingren Zhou](https://arxiv.org/abs/2401.12474)
and the associated [GitHub repository OFA-Sys/Ditto](https://github.com/OFA-Sys/Ditto).
The contents of said repo were declared public domain; in that spirit, the original and derived ChatML-formatted JSONL files have also been released into the public domain. |
reasoning-proj/_judged_math_traces_original_DeepSeek-R1-Distill-Qwen-7B | reasoning-proj | 2025-05-06T01:11:07Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T01:11:03Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer_content
dtype: string
- name: reference_answer
dtype: string
- name: id
dtype: string
- name: metadata
struct:
- name: question_license
dtype: string
- name: question_source
dtype: string
- name: model_name
dtype: string
- name: verifier_score
dtype: int64
splits:
- name: train
num_bytes: 49684056
num_examples: 2359
download_size: 19790217
dataset_size: 49684056
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yunjae-won/mp_mistral7bv3 | yunjae-won | 2025-05-06T01:10:48Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-06T01:10:44Z | null | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: prompt_text
dtype: string
splits:
- name: train
num_bytes: 4968591
num_examples: 4096
download_size: 2702430
dataset_size: 4968591
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
reasoning-proj/_judged_math_traces_original_DeepSeek-R1-Distill-Qwen-14B | reasoning-proj | 2025-05-06T01:09:35Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T01:09:30Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer_content
dtype: string
- name: reference_answer
dtype: string
- name: id
dtype: string
- name: metadata
struct:
- name: question_license
dtype: string
- name: question_source
dtype: string
- name: model_name
dtype: string
- name: verifier_score
dtype: int64
splits:
- name: train
num_bytes: 42417718
num_examples: 2359
download_size: 17450904
dataset_size: 42417718
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kothasuhas/ctx16-4-epochs-1.6Mv4-5-5 | kothasuhas | 2025-05-06T01:05:27Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T01:05:20Z | null | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 108454126
num_examples: 1600000
download_size: 81628670
dataset_size: 108454126
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.0_num-company_3_dataset_0_for_gen_19_v2 | HungVu2003 | 2025-05-06T00:58:17Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T00:58:16Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 976899
num_examples: 12500
download_size: 630564
dataset_size: 976899
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.0_num-company_3_dataset_1_for_gen_18_v2 | HungVu2003 | 2025-05-06T00:56:37Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T00:56:36Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 3724066
num_examples: 12500
download_size: 1980322
dataset_size: 3724066
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.0_num-company_3_dataset_0_for_gen_18_v2 | HungVu2003 | 2025-05-06T00:56:35Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T00:56:34Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 980914
num_examples: 12500
download_size: 629999
dataset_size: 980914
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
zijiewang/qwen2.7b.pretrained.negated.anion_CondaQA | zijiewang | 2025-05-06T00:56:14Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T00:56:13Z | null | ---
dataset_info:
features:
- name: QuestionID
dtype: string
- name: original cue
dtype: string
- name: PassageEditID
dtype: int64
- name: original passage
dtype: string
- name: SampleID
dtype: int64
- name: label
dtype: string
- name: original sentence
dtype: string
- name: sentence2
dtype: string
- name: PassageID
dtype: int64
- name: sentence1
dtype: string
- name: prediciton
dtype: string
splits:
- name: train
num_bytes: 12847465
num_examples: 7240
download_size: 1357608
dataset_size: 12847465
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.0_num-company_3_dataset_1_for_gen_17_v2 | HungVu2003 | 2025-05-06T00:54:51Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T00:54:50Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 3731880
num_examples: 12500
download_size: 1983734
dataset_size: 3731880
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
reasoning-proj/_judged_math_traces_original_DeepSeek-R1-Distill | reasoning-proj | 2025-05-06T00:49:02Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T00:21:57Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer_content
dtype: string
- name: reference_answer
dtype: string
- name: id
dtype: string
- name: metadata
struct:
- name: question_license
dtype: string
- name: question_source
dtype: string
- name: model_name
dtype: string
- name: verifier_score
dtype: int64
splits:
- name: train
num_bytes: 74394712
num_examples: 2359
download_size: 22308269
dataset_size: 74394712
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.2_num-company_3_dataset_1_for_gen_3_v2 | HungVu2003 | 2025-05-06T00:47:39Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T00:47:38Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 6654894
num_examples: 14998
download_size: 3349396
dataset_size: 6654894
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.0_num-company_3_dataset_0_for_gen_11_v2 | HungVu2003 | 2025-05-06T00:44:32Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T00:44:31Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 986510
num_examples: 12500
download_size: 636460
dataset_size: 986510
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.0_num-company_3_dataset_1_for_gen_10_v2 | HungVu2003 | 2025-05-06T00:42:56Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T00:42:55Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 3728613
num_examples: 12500
download_size: 1977457
dataset_size: 3728613
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.4_num-company_2_dataset_0_for_gen_9_v2 | HungVu2003 | 2025-05-06T00:42:46Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T00:42:44Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 4072785
num_examples: 15000
download_size: 1709027
dataset_size: 4072785
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ssundaram/Qwen2.5-Math-PRM-7B_gsm8k_n7_seed47639 | ssundaram | 2025-05-06T00:41:56Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T00:41:54Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 9033279
num_examples: 7200
download_size: 3863250
dataset_size: 9033279
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ssundaram/Qwen2.5-Math-PRM-7B_gsm8k_n3_seed1234 | ssundaram | 2025-05-06T00:41:46Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T00:41:44Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 8472131
num_examples: 6969
download_size: 3634383
dataset_size: 8472131
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ssundaram/Qwen2.5-Math-PRM-7B_gsm8k_n2_seed1234 | ssundaram | 2025-05-06T00:41:43Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T00:41:42Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 7972063
num_examples: 6739
download_size: 3429729
dataset_size: 7972063
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.0_num-company_3_dataset_1_for_gen_7_v2 | HungVu2003 | 2025-05-06T00:37:50Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T00:37:49Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 3709255
num_examples: 12500
download_size: 1976125
dataset_size: 3709255
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.0_num-company_3_dataset_0_for_gen_6_v2 | HungVu2003 | 2025-05-06T00:36:10Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T00:36:09Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 986236
num_examples: 12500
download_size: 635999
dataset_size: 986236
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.0_alpha_0.2_num-company_3_dataset_2_for_gen_2_v2 | HungVu2003 | 2025-05-06T00:06:44Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T00:06:42Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 2372495
num_examples: 14998
download_size: 1268615
dataset_size: 2372495
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kothasuhas/ctx16-4-epochs-1.6Mv3-5-5 | kothasuhas | 2025-05-06T00:00:21Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T00:00:15Z | null | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 108432789
num_examples: 1600000
download_size: 81596961
dataset_size: 108432789
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mhr2004/nevir-original-mhr2004-roberta-large-stsb-lr2e-05-bs32-pred | mhr2004 | 2025-05-05T23:53:39Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T23:53:38Z | null | ---
dataset_info:
features:
- name: input_ids_1
sequence: int64
- name: att_1
sequence: int64
- name: query
dtype: string
- name: doc_1
dtype: string
- name: doc_2
dtype: string
- name: input_ids_2
sequence: int64
- name: att_2
sequence: int64
- name: label
dtype: int64
- name: pair_id
dtype: int64
- name: pred
dtype: int64
splits:
- name: train
num_bytes: 49610561
num_examples: 2766
download_size: 2469479
dataset_size: 49610561
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mhr2004/nev-original-cross-encoder-stsb-roberta-large-bs8-lr2e-05-pred | mhr2004 | 2025-05-05T23:44:21Z | 14 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-29T03:59:15Z | null | ---
dataset_info:
features:
- name: input_ids_1
sequence: int64
- name: att_1
sequence: int64
- name: query
dtype: string
- name: doc_1
dtype: string
- name: doc_2
dtype: string
- name: input_ids_2
sequence: int64
- name: att_2
sequence: int64
- name: label
dtype: int64
- name: pair_id
dtype: int64
- name: pred
dtype: int64
splits:
- name: train
num_bytes: 49610561
num_examples: 2766
download_size: 2469483
dataset_size: 49610561
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
osama24sy/llama3.1-8b-it-coutdown-game-7k-qwq-r64-v0.2-24-v0.1 | osama24sy | 2025-05-05T23:33:13Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T23:33:12Z | null | ---
dataset_info:
features:
- name: index
dtype: int64
- name: numbers
sequence: int64
- name: operations
sequence:
sequence: string
- name: response
dtype: string
- name: token_count
dtype: int64
splits:
- name: train
num_bytes: 2943811
num_examples: 150
download_size: 995531
dataset_size: 2943811
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
guluthemonster/M3VIR | guluthemonster | 2025-05-05T23:31:22Z | 220 | 0 | [
"task_categories:image-to-3d",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"region:us"
] | [
"image-to-3d"
] | 2025-04-16T22:40:54Z | null | ---
license: mit
language:
- en
pretty_name: M³VIR
size_categories:
- 100B<n<1T
task_categories:
- image-to-3d
---
# M³VIR
In the field of restoration and 3D reconstruction, particularly for rendered content such as gaming environments, the lack of sufficient ground-truth training data presents a significant challenge. While these techniques are extensively studied in real-world applications, their adaptation to virtual or synthetic environments remains relatively underexplored. This is largely due to the distinct characteristics of rendered content, which differ from natural scenes in terms of texture, lighting, and geometry. The absence of high-quality, annotated datasets tailored for virtual content restoration and reconstruction has hindered progress and limited the development of effective methods in this domain.
To address this gap, we introduce a large-scale, high-quality dataset specifically designed for rendered environments. This dataset aims to support a wide range of tasks, including image restoration, 3D reconstruction, novel view synthesis, and content manipulation, thereby facilitating research and development of generative AI algorithms for virtual content.
### Dataset Sources
- **Repository:** [M3VIR](https://huggingface.co/datasets/guluthemonster/M3VIR)
- **Paper:** [More Information Needed]
## Dataset Details
M³VIR provides 8 categories: Churches-And-Temples, Hiking-Trails, Hotels-And-Restaurants, Mountains, Parks-And-Recreation-Areas, Residential-Areas, School-Universities, and Urban-Street-Views. For each category, we collected three types of scenes:
- MovingCameraDynamicScene
- MovingCameraStaticScene
- StaticCameraDynamicScene
For each scene type, we collect 10 distinct video sets featuring varying scene content. Each set includes different resolutions and visual styles: a photo-realistic style available in 960×540, 1920×1080, and 2880×1620 resolutions (Realistic_960x540_1024sample, Realistic_1920x1080_1024sample, Realistic_2880x1620_1024sample); a cartoon style in 1920×1080 resolution (Cartoon_1920x1080_1024sample); and a metalized style also in 1920×1080 resolution (Metalize_1920x1080_1024sample). Corresponding segmentation maps are provided as ID_images. Since Realistic_1920x1080_1024sample, Cartoon_1920x1080_1024sample, and Metalize_1920x1080_1024sample share the same segmentation annotations, we include the ID_images only once to conserve storage.
The dataset is split into 80% for training (64 sets) and 20% for testing (16 sets). To support the four challenge tracks, the full M³VIR dataset is divided into two subsets: M³VIR_MR and M³VIR_MS. Due to the large size of the dataset, a small-scale mini training set will also be provided for Track 1 to facilitate quick experimentation and baseline development.
Each video sample—defined by a specific style and resolution (e.g., realistic style at 1920×1080 resolution)—includes six temporally synchronized camera views with a shared camera center. These views are captured from different perspectives: Back, Front, Left60, Left120, Right60, and Right120, providing diverse angular coverage of the scene. Each video sequence is 2 seconds long, recorded at 15 frames per second, resulting in a total of 30 image frames per view.
### M³VIR-Tracks
| Dataset | Rate|Scene Types|Resolution Styles|Data Path|
| :----------- | :-----------: | :-------: | :-------: | :-------: |
| M³VIR_MR | 5% |MovingCamDyn/MovingCamStatic/StaticCamDyn |Real_960x540/Real_1920x1080/Real_2880x1620|Track1|
| | Full |MovingCamStatic|Real_1920x1080|Track2|
| | Full |MovingCamStatic|Real_960x540/Real_1920x1080/Real_2880x1620|Track3|
| M³VIR_MS | Full |MovingCamDyn/MovingCamStatic/StaticCamDyn |Cartoon_1920x1080/Metal_1920x1080/Real_1920x1080|Track4|
For more details about the datasets and challenge tracks, please refer to the official challenge page:
https://richmediagai.github.io/challenges.html
## How to Download
[Use Hugging Face Command Line Interface (CLI)](https://huggingface.co/docs/huggingface_hub/guides/cli#huggingface-cli-download)
```
# Download the entire dataset
$ huggingface-cli download guluthemonster/M3VIR --repo-type dataset --local-dir .

# Download a specific folder
$ huggingface-cli download guluthemonster/M3VIR --repo-type dataset --include "TRACKS/*" --local-dir .
```
[Use Git](https://huggingface.co/docs/hub/datasets-downloading#using-git)
```
$ git clone https://huggingface.co/datasets/guluthemonster/M3VIR
```
After downloading the dataset, you can use the following script to extract the files in each subfolder (taking TRACKS/Track1/Batch1 as an example):
```
$ python Scripts/extract_track1.py --input_path TRACKS/Track1/Batch1 --output_path /path/to/your/folder
```
# Citation |
rasdani/swe-fixer-debug-DeepSeek-R1 | rasdani | 2025-05-05T23:31:07Z | 100 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-26T12:20:51Z | null | ---
dataset_info:
features:
- name: problem_id
dtype: string
- name: source
dtype: string
- name: task_type
dtype: string
- name: in_source_id
dtype: string
- name: prompt
dtype: string
- name: golden_standard_solution
dtype: string
- name: verification_info
dtype: string
- name: metadata
dtype: string
- name: llm_response
dtype: string
splits:
- name: train
num_bytes: 4481159
num_examples: 30
download_size: 1738534
dataset_size: 4481159
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
justus27/s2-bigmath | justus27 | 2025-05-05T23:23:29Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T23:23:27Z | null | ---
dataset_info:
features:
- name: problem_id
dtype: string
- name: task_type
dtype: string
- name: prompt
dtype: string
- name: verification_info
dtype: string
- name: metadata
dtype: string
splits:
- name: train
num_bytes: 94831922
num_examples: 251122
download_size: 33716599
dataset_size: 94831922
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.0_alpha_0.2_num-company_3_dataset_0_for_gen_2_v2 | HungVu2003 | 2025-05-05T23:23:19Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T23:23:18Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 2316785
num_examples: 14998
download_size: 1248334
dataset_size: 2316785
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
konwoo/dbs2-fs1-np1-3e-07-wd0.0-token-37M | konwoo | 2025-05-05T23:18:42Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T23:18:20Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: log_weight
dtype: float32
splits:
- name: train
num_bytes: 357286424
num_examples: 150000
download_size: 210460073
dataset_size: 357286424
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.2_num-company_3_dataset_2_for_gen_1_v2 | HungVu2003 | 2025-05-05T23:03:05Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T23:03:03Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 2903319
num_examples: 14998
download_size: 1465946
dataset_size: 2903319
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/am_1000k | mlfoundations-dev | 2025-05-05T22:45:33Z | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-05T22:33:59Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: role
dtype: string
- name: content
dtype: string
- name: info
struct:
- name: source
dtype: string
- name: reference_answer
dtype: string
- name: test_case
dtype: string
- name: think_content
dtype: string
- name: answer_content
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 34780694630.0
num_examples: 1000000
download_size: 16721202205
dataset_size: 34780694630.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Solmazp/synthetic_data_67k | Solmazp | 2025-05-05T22:45:02Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-05T11:43:27Z | null | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: category
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 34634560
num_examples: 67030
download_size: 17592428
dataset_size: 34634560
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
littleGuagua/x_dataset_11627 | littleGuagua | 2025-05-05T22:40:39Z | 1,545 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-01-26T13:13:48Z | null | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** littleGuagua/x_dataset_11627
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5FUByNzgdM2eukk6SwetFsZ4EPTxRqaV4YNEhNcusS1SxRVX
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: Datasets are mostly English, but they can be multilingual due to the decentralized way the data is created.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
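Given the instance count reported below, a minimal usage sketch that streams rows instead of downloading them, assuming the default parquet config supports streaming with the `datasets` library:
```python
from datasets import load_dataset

# Illustrative sketch: stream rows and keep only tweets carrying a given hashtag,
# using the field names from the Data Fields list above.
stream = load_dataset("littleGuagua/x_dataset_11627", split="train", streaming=True)

matches = (row for row in stream if "#riyadh" in (row["tweet_hashtags"] or []))
for _, row in zip(range(3), matches):
    print(row["datetime"], row["text"][:80])
```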
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{littleGuagua2025datauniversex_dataset_11627,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={littleGuagua},
year={2025},
url={https://huggingface.co/datasets/littleGuagua/x_dataset_11627},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 149000631
- **Date Range:** 2025-01-21T00:00:00Z to 2025-02-10T00:00:00Z
- **Last Updated:** 2025-02-18T20:47:28Z
### Data Distribution
- Tweets with hashtags: 42.62%
- Tweets without hashtags: 57.38%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | NULL | 85489881 | 57.38% |
| 2 | #riyadh | 1033096 | 0.69% |
| 3 | #zelena | 790108 | 0.53% |
| 4 | #tiktok | 618215 | 0.41% |
| 5 | #bbb25 | 362232 | 0.24% |
| 6 | #ad | 356819 | 0.24% |
| 7 | #jhope_at_galadespiècesjaunes | 234343 | 0.16% |
| 8 | #bbmzansi | 207541 | 0.14% |
| 9 | #pr | 188395 | 0.13% |
| 10 | #yahooニュース | 178958 | 0.12% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-26T13:14:32Z | 2274090 | 2274090 |
| 2025-01-30T01:26:02Z | 29523249 | 31797339 |
| 2025-02-02T13:36:10Z | 29333848 | 61131187 |
| 2025-02-06T01:47:05Z | 28740147 | 89871334 |
| 2025-02-09T14:00:59Z | 29293177 | 119164511 |
| 2025-02-13T02:15:32Z | 28379764 | 147544275 |
| 2025-02-18T05:45:25Z | 808939 | 148353214 |
| 2025-02-18T20:47:28Z | 647417 | 149000631 |
|
mlfoundations-dev/am_300k | mlfoundations-dev | 2025-05-05T22:33:58Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-05T22:28:27Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: role
dtype: string
- name: content
dtype: string
- name: info
struct:
- name: source
dtype: string
- name: reference_answer
dtype: string
- name: test_case
dtype: string
- name: think_content
dtype: string
- name: answer_content
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 10990699503.08
num_examples: 316000
download_size: 5137040476
dataset_size: 10990699503.08
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/am_100k | mlfoundations-dev | 2025-05-05T22:28:26Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-05T22:25:45Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: role
dtype: string
- name: content
dtype: string
- name: info
struct:
- name: source
dtype: string
- name: reference_answer
dtype: string
- name: test_case
dtype: string
- name: think_content
dtype: string
- name: answer_content
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 3478069463.0
num_examples: 100000
download_size: 1651806572
dataset_size: 3478069463.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.2_num-company_3_dataset_0_for_gen_1_v2 | HungVu2003 | 2025-05-05T22:28:11Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-05T22:28:09Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 2456540
num_examples: 14998
download_size: 1348760
dataset_size: 2456540
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
saurabh5/tulu-3-personas-code-rlvr | saurabh5 | 2025-05-05T22:26:22Z | 140 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-21T15:59:43Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: prompt
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: ground_truth
sequence: string
- name: dataset
dtype: string
- name: good_program
dtype: bool
- name: rewritten_solution
dtype: string
- name: rewritten_input
dtype: string
splits:
- name: train
num_bytes: 93955133
num_examples: 30678
download_size: 40687998
dataset_size: 93955133
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/am_30k | mlfoundations-dev | 2025-05-05T22:25:44Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-05T22:24:34Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: role
dtype: string
- name: content
dtype: string
- name: info
struct:
- name: source
dtype: string
- name: reference_answer
dtype: string
- name: test_case
dtype: string
- name: think_content
dtype: string
- name: answer_content
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 1099069950.308
num_examples: 31600
download_size: 529940913
dataset_size: 1099069950.308
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ajagota71/ajagota71_pythia-70m-detox-epoch-20_2000_samples_detoxified | ajagota71 | 2025-05-05T22:24:07Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-05T22:24:05Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: output
dtype: string
- name: model_name
dtype: string
- name: temperature
dtype: float64
- name: top_p
dtype: float64
- name: generation_timestamp
dtype: string
splits:
- name: train
num_bytes: 568330
num_examples: 2000
download_size: 303098
dataset_size: 568330
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/limo_0.3k | mlfoundations-dev | 2025-05-05T22:20:28Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-05T22:20:21Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 12146394.849449204
num_examples: 316
download_size: 5242165
dataset_size: 12146394.849449204
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
PeggyPeiyao/SentimentAnalysis | PeggyPeiyao | 2025-05-05T22:20:24Z | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-05-05T22:09:34Z | null | ---
license: apache-2.0
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.0_num-company_3_dataset_0_for_gen_19_v2 | HungVu2003 | 2025-05-05T22:06:27Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T22:06:25Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 1147103
num_examples: 12500
download_size: 699488
dataset_size: 1147103
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
osama24sy/llama3.1-8b-it-24-game-8k-qwq-r64-hm-24-v0.3 | osama24sy | 2025-05-05T22:05:30Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T22:05:29Z | null | ---
dataset_info:
features:
- name: index
dtype: int64
- name: numbers
sequence: int64
- name: operations
sequence:
sequence: string
- name: response
dtype: string
- name: token_count
dtype: int64
splits:
- name: train
num_bytes: 3047330
num_examples: 150
download_size: 1058689
dataset_size: 3047330
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.0_alpha_0.0_num-company_3_dataset_1_for_gen_19_v2 | HungVu2003 | 2025-05-05T22:05:10Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T22:05:09Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 817432
num_examples: 12500
download_size: 564857
dataset_size: 817432
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.0_num-company_3_dataset_2_for_gen_18_v2 | HungVu2003 | 2025-05-05T22:04:20Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T22:04:19Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 1739962
num_examples: 12500
download_size: 855124
dataset_size: 1739962
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.0_alpha_0.0_num-company_3_dataset_2_for_gen_15_v2 | HungVu2003 | 2025-05-05T21:57:49Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T21:57:48Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 813798
num_examples: 12500
download_size: 562288
dataset_size: 813798
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.0_num-company_3_dataset_2_for_gen_12_v2 | HungVu2003 | 2025-05-05T21:52:34Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T21:52:32Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 1729800
num_examples: 12500
download_size: 852510
dataset_size: 1729800
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.0_alpha_0.0_num-company_3_dataset_2_for_gen_12_v2 | HungVu2003 | 2025-05-05T21:51:56Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T21:51:55Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 810696
num_examples: 12500
download_size: 560366
dataset_size: 810696
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.0_num-company_3_dataset_2_for_gen_10_v2 | HungVu2003 | 2025-05-05T21:48:42Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T21:48:41Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 1748228
num_examples: 12500
download_size: 856404
dataset_size: 1748228
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.0_num-company_3_dataset_1_for_gen_10_v2 | HungVu2003 | 2025-05-05T21:48:40Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T21:48:37Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 6611250
num_examples: 12500
download_size: 3368318
dataset_size: 6611250
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.0_alpha_0.0_num-company_3_dataset_1_for_gen_10_v2 | HungVu2003 | 2025-05-05T21:48:07Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T21:48:06Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 820715
num_examples: 12500
download_size: 567865
dataset_size: 820715
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.0_alpha_0.0_num-company_3_dataset_1_for_gen_8_v2 | HungVu2003 | 2025-05-05T21:44:16Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T21:44:15Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 814252
num_examples: 12500
download_size: 562693
dataset_size: 814252
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.0_alpha_0.0_num-company_3_dataset_0_for_gen_8_v2 | HungVu2003 | 2025-05-05T21:44:14Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T21:44:13Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 815984
num_examples: 12500
download_size: 564583
dataset_size: 815984
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.0_num-company_3_dataset_2_for_gen_6_v2 | HungVu2003 | 2025-05-05T21:41:00Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T21:40:59Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 1752148
num_examples: 12500
download_size: 860192
dataset_size: 1752148
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Setayeshabiazi/Train_Test_Split_LLM_Project | Setayeshabiazi | 2025-05-05T21:40:01Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T21:37:09Z | null | ---
dataset_info:
features:
- name: repository_name
dtype: string
- name: func_path_in_repository
dtype: string
- name: func_name
dtype: string
- name: whole_func_string
dtype: string
- name: func_code_string
dtype: string
- name: func_documentation_string
dtype: string
- name: func_code_url
dtype: string
- name: language
dtype: string
- name: split_name
dtype: string
- name: func_code_tokens
sequence: 'null'
- name: func_documentation_tokens
sequence: 'null'
- name: llm_used
dtype: string
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 1223714.2303030302
num_examples: 148
- name: test
num_bytes: 140561.76969696968
num_examples: 17
download_size: 467179
dataset_size: 1364276.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
icedwind/x_dataset_12552 | icedwind | 2025-05-05T21:38:46Z | 1,251 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-01-27T09:06:40Z | null | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** icedwind/x_dataset_12552
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5EsgiG2PjgxDgxGHe8sqdeADbznL53ScJSG2UMRozvuDHJW7
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: The data is mostly English, but it can be multilingual because of the decentralized way it is collected.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
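As a rough illustration, a time-based split might look like the minimal sketch below; it assumes the `datasets` library, a default `train` split, and ISO-8601 strings in the `datetime` field, so adjust the repo id, cutoff, and field names to your needs.
```python
from datasets import load_dataset

# Minimal sketch of a custom time-based split (not an official recipe).
# Assumes a default "train" split and ISO-8601 timestamps in the `datetime` field.
ds = load_dataset("icedwind/x_dataset_12552", split="train")

cutoff = "2025-02-01"  # hypothetical cutoff date; pick one that suits your task
train = ds.filter(lambda row: row["datetime"] < cutoff)      # tweets before the cutoff
holdout = ds.filter(lambda row: row["datetime"] >= cutoff)   # tweets on/after the cutoff

print(len(train), len(holdout))
```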
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{icedwind2025datauniversex_dataset_12552,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={icedwind},
year={2025},
url={https://huggingface.co/datasets/icedwind/x_dataset_12552},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 55275936
- **Date Range:** 2025-01-21T00:00:00Z to 2025-02-11T00:00:00Z
- **Last Updated:** 2025-02-18T19:50:04Z
### Data Distribution
- Tweets with hashtags: 44.05%
- Tweets without hashtags: 55.95%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | NULL | 30924129 | 55.95% |
| 2 | #riyadh | 390293 | 0.71% |
| 3 | #zelena | 311850 | 0.56% |
| 4 | #tiktok | 238579 | 0.43% |
| 5 | #bbb25 | 161226 | 0.29% |
| 6 | #ad | 134995 | 0.24% |
| 7 | #jhope_at_galadespiècesjaunes | 89186 | 0.16% |
| 8 | #grammys | 79177 | 0.14% |
| 9 | #bbmzansi | 74293 | 0.13% |
| 10 | #pr | 73757 | 0.13% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-27T09:07:36Z | 2955867 | 2955867 |
| 2025-01-30T21:10:30Z | 9690897 | 12646764 |
| 2025-02-03T09:14:40Z | 11584067 | 24230831 |
| 2025-02-06T21:18:44Z | 9766486 | 33997317 |
| 2025-02-10T09:23:04Z | 9007442 | 43004759 |
| 2025-02-13T21:27:37Z | 10986716 | 53991475 |
| 2025-02-18T04:48:46Z | 650061 | 54641536 |
| 2025-02-18T19:50:04Z | 634400 | 55275936 |
|
HungVu2003/opt-350m_beta_0.0_alpha_0.0_num-company_3_dataset_2_for_gen_5_v2 | HungVu2003 | 2025-05-05T21:38:45Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T21:38:44Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 807175
num_examples: 12500
download_size: 557156
dataset_size: 807175
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.0_alpha_0.0_num-company_3_dataset_1_for_gen_5_v2 | HungVu2003 | 2025-05-05T21:38:44Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T21:38:43Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 813566
num_examples: 12500
download_size: 561184
dataset_size: 813566
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.0_num-company_3_dataset_1_for_gen_4_v2 | HungVu2003 | 2025-05-05T21:37:12Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T21:37:11Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 6600068
num_examples: 12500
download_size: 3355823
dataset_size: 6600068
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.0_alpha_0.0_num-company_3_dataset_0_for_gen_3_v2 | HungVu2003 | 2025-05-05T21:35:12Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T21:35:08Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 829995
num_examples: 12500
download_size: 575037
dataset_size: 829995
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.0_alpha_0.0_num-company_3_dataset_2_for_gen_2_v2 | HungVu2003 | 2025-05-05T21:33:27Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T21:33:26Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 824084
num_examples: 12500
download_size: 569959
dataset_size: 824084
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.0_alpha_0.0_num-company_3_dataset_0_for_gen_2_v2 | HungVu2003 | 2025-05-05T21:33:23Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T21:33:21Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 818212
num_examples: 12500
download_size: 565767
dataset_size: 818212
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ieuniversity/group_4_submission | ieuniversity | 2025-05-05T21:32:45Z | 249 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-23T21:10:03Z | null | ---
dataset_info:
features:
- name: ID
dtype: string
- name: CLASE
dtype: string
splits:
- name: train
num_bytes: 897695
num_examples: 25808
download_size: 500636
dataset_size: 897695
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
uzairrj/MNIST-Numpy-Dump | uzairrj | 2025-05-05T21:26:30Z | 0 | 0 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"region:us"
] | [] | 2025-05-05T21:07:29Z | null | ---
license: apache-2.0
pretty_name: MNIST numpy dump
size_categories:
- 10K<n<100K
--- |
littleGuagua/x_dataset_8140 | littleGuagua | 2025-05-05T21:07:51Z | 1,049 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-01-26T13:25:56Z | null | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** littleGuagua/x_dataset_8140
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5HasdyDaczLXYaiykhuuszTMWS65QmAgo72UpwABUi3czyeu
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: The data is mostly English, but it can be multilingual because of the decentralized way it is collected.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{littleGuagua2025datauniversex_dataset_8140,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={littleGuagua},
year={2025},
url={https://huggingface.co/datasets/littleGuagua/x_dataset_8140},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 50376997
- **Date Range:** 2025-01-21T00:00:00Z to 2025-02-10T00:00:00Z
- **Last Updated:** 2025-02-18T20:55:45Z
### Data Distribution
- Tweets with hashtags: 39.81%
- Tweets without hashtags: 60.19%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | NULL | 30319553 | 60.19% |
| 2 | #riyadh | 310085 | 0.62% |
| 3 | #zelena | 215655 | 0.43% |
| 4 | #tiktok | 192806 | 0.38% |
| 5 | #ad | 112205 | 0.22% |
| 6 | #bbb25 | 110854 | 0.22% |
| 7 | #grammys | 82659 | 0.16% |
| 8 | #jhope_at_galadespiècesjaunes | 70215 | 0.14% |
| 9 | #bbmzansi | 66978 | 0.13% |
| 10 | #sixtonesann | 65126 | 0.13% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-26T13:26:49Z | 2721817 | 2721817 |
| 2025-01-30T01:43:17Z | 9702324 | 12424141 |
| 2025-02-02T13:47:13Z | 12507356 | 24931497 |
| 2025-02-06T01:50:29Z | 8691717 | 33623214 |
| 2025-02-09T13:54:19Z | 8748247 | 42371461 |
| 2025-02-13T02:21:42Z | 6726572 | 49098033 |
| 2025-02-18T05:54:36Z | 648154 | 49746187 |
| 2025-02-18T20:55:45Z | 630810 | 50376997 |
|
XiaoZhang98/OntoBench | XiaoZhang98 | 2025-05-05T21:02:06Z | 291 | 0 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T22:11:20Z | null | ---
license: apache-2.0
dataset_info:
features:
- name: identifier
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: answer
dtype: string
- name: task_label
dtype: string
- name: domain
dtype: string
- name: label
dtype: string
- name: iri
dtype: string
splits:
- name: 1_1_class_definition_understanding
num_bytes: 8192266
num_examples: 9151
- name: 1_2_class_relation_understanding
num_bytes: 3626863
num_examples: 9201
- name: 1_3_property_domain_understanding
num_bytes: 115314
num_examples: 375
- name: 1_4_instance_class_understanding
num_bytes: 884651
num_examples: 2475
- name: 1_5_instance_definition_understanding
num_bytes: 3364620
num_examples: 3814
- name: 2_1_inferred_relation_reasoning
num_bytes: 3176938
num_examples: 8208
- name: 2_2_constraint_reasoning
num_bytes: 3482634
num_examples: 6956
- name: 2_3_instance_class_reasoning
num_bytes: 1361422
num_examples: 3793
- name: 2_4_swrl_based_logic_reasoning
num_bytes: 2850512
num_examples: 6517
- name: 2_5_description_logic_reasoning
num_bytes: 861489
num_examples: 2560
- name: 3_1_class_definition_generation
num_bytes: 1318076
num_examples: 2935
- name: 3_2_class_hierarchy_construction
num_bytes: 2474509
num_examples: 951
- name: 3_3_property_relation_construction
num_bytes: 587871
num_examples: 255
- name: 3_4_constraint_construction
num_bytes: 1777700
num_examples: 642
- name: 3_5_ontology_alignment
num_bytes: 7916840
num_examples: 1148
download_size: 14563405
dataset_size: 41991705
configs:
- config_name: default
data_files:
- split: 1_1_class_definition_understanding
path: data/1_1_class_definition_understanding-*
- split: 1_2_class_relation_understanding
path: data/1_2_class_relation_understanding-*
- split: 1_3_property_domain_understanding
path: data/1_3_property_domain_understanding-*
- split: 1_4_instance_class_understanding
path: data/1_4_instance_class_understanding-*
- split: 1_5_instance_definition_understanding
path: data/1_5_instance_definition_understanding-*
- split: 2_1_inferred_relation_reasoning
path: data/2_1_inferred_relation_reasoning-*
- split: 2_2_constraint_reasoning
path: data/2_2_constraint_reasoning-*
- split: 2_3_instance_class_reasoning
path: data/2_3_instance_class_reasoning-*
- split: 2_4_swrl_based_logic_reasoning
path: data/2_4_swrl_based_logic_reasoning-*
- split: 2_5_description_logic_reasoning
path: data/2_5_description_logic_reasoning-*
- split: 3_1_class_definition_generation
path: data/3_1_class_definition_generation-*
- split: 3_2_class_hierarchy_construction
path: data/3_2_class_hierarchy_construction-*
- split: 3_3_property_relation_construction
path: data/3_3_property_relation_construction-*
- split: 3_4_constraint_construction
path: data/3_4_constraint_construction-*
- split: 3_5_ontology_alignment
path: data/3_5_ontology_alignment-*
---
|
semran1/calibration_test3 | semran1 | 2025-05-05T21:00:41Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T21:00:13Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: cc-path
dtype: string
- name: domain
dtype: string
- name: lang
dtype: string
- name: lang_score
dtype: float64
- name: timestamp
dtype: string
- name: url
dtype: string
- name: math_score
dtype: float64
- name: type
dtype: string
splits:
- name: train
num_bytes: 222271464.0
num_examples: 50000
download_size: 119477496
dataset_size: 222271464.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Enpas/Sabian-2.0 | Enpas | 2025-05-05T20:54:46Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T20:44:06Z | null | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 7362090509.281
num_examples: 32901
- name: test
num_bytes: 1111697749.158
num_examples: 3139
download_size: 12133462393
dataset_size: 8473788258.439
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
doublesizebed/malay-tts-tags | doublesizebed | 2025-05-05T20:47:53Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T19:01:09Z | null | ---
dataset_info:
features:
- name: audio_filename
dtype: string
- name: prompt
dtype: string
- name: transcription
dtype: string
- name: gender
dtype: string
- name: audio_filepath
dtype: audio
- name: utterance_pitch_mean
dtype: float64
- name: utterance_pitch_std
dtype: float64
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speech_duration
dtype: float64
- name: speaking_rate
dtype: string
- name: phonemes
dtype: string
- name: pitch
dtype: string
- name: noise
dtype: string
- name: reverberation
dtype: string
- name: speech_monotony
dtype: string
- name: text_description
dtype: string
splits:
- name: train
num_bytes: 1080413341.0
num_examples: 20000
download_size: 1074278610
dataset_size: 1080413341.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.4_num-company_2_dataset_0_for_gen_8_v2 | HungVu2003 | 2025-05-05T20:46:10Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T20:46:08Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 3307811
num_examples: 15000
download_size: 1577310
dataset_size: 3307811
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
icedwind/x_dataset_7114 | icedwind | 2025-05-05T20:40:18Z | 1,045 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | 2025-01-29T04:56:37Z | null | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 X (Twitter) Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** icedwind/x_dataset_7114
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 5HWXQSVqjd4pY525MupNTU7NaEb7r35ppxXfgeDWPgpfBuhm
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Trend Detection
- Content Analysis
- User Behavior Modeling
### Languages
Primary language: The data is mostly English, but it can be multilingual because of the decentralized way it is collected.
## Dataset Structure
### Data Instances
Each instance represents a single tweet with the following fields:
### Data Fields
- `text` (string): The main content of the tweet.
- `label` (string): Sentiment or topic category of the tweet.
- `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- `datetime` (string): The date when the tweet was posted.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
## Dataset Creation
### Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{icedwind2025datauniversex_dataset_7114,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={icedwind},
year={2025},
url={https://huggingface.co/datasets/icedwind/x_dataset_7114},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 38590587
- **Date Range:** 2025-01-22T00:00:00Z to 2025-02-12T00:00:00Z
- **Last Updated:** 2025-02-18T21:48:51Z
### Data Distribution
- Tweets with hashtags: 48.74%
- Tweets without hashtags: 51.26%
### Top 10 Hashtags
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | NULL | 19781385 | 51.26% |
| 2 | #riyadh | 289177 | 0.75% |
| 3 | #zelena | 233455 | 0.60% |
| 4 | #tiktok | 182562 | 0.47% |
| 5 | #ad | 111797 | 0.29% |
| 6 | #bbb25 | 107360 | 0.28% |
| 7 | #jhope_at_galadespiècesjaunes | 73362 | 0.19% |
| 8 | #pr | 58834 | 0.15% |
| 9 | #yahooニュース | 56344 | 0.15% |
| 10 | #theheartkillersep11 | 55012 | 0.14% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-01-29T04:57:34Z | 3402840 | 3402840 |
| 2025-02-01T16:59:53Z | 7356908 | 10759748 |
| 2025-02-05T05:03:04Z | 9386957 | 20146705 |
| 2025-02-08T17:06:06Z | 7524854 | 27671559 |
| 2025-02-12T05:13:11Z | 9621743 | 37293302 |
| 2025-02-18T06:47:29Z | 636505 | 37929807 |
| 2025-02-18T21:48:51Z | 660780 | 38590587 |
|
weqweasdas/qw_grpo_new_test_minerva_math | weqweasdas | 2025-05-05T20:33:06Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T20:29:59Z | null | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: question
dtype: string
- name: gt_cot
dtype: string
- name: gt
dtype: string
- name: unit
dtype: string
- name: solution
sequence: string
- name: answer_type
dtype: string
- name: subfield
dtype: string
- name: code
sequence: string
- name: pred
sequence: string
- name: report
sequence: 'null'
- name: score
sequence: bool
splits:
- name: train
num_bytes: 57491469
num_examples: 675
download_size: 43726766
dataset_size: 57491469
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Evangelinejy/DeepScaleR_with_solution_external_difficulty | Evangelinejy | 2025-05-05T20:32:13Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T20:32:12Z | null | ---
dataset_info:
features:
- name: problem
dtype: string
- name: answer
dtype: string
- name: solution
dtype: string
- name: difficulty
dtype: float64
- name: difficulty_raw
sequence: float64
splits:
- name: train
num_bytes: 22506849
num_examples: 40315
download_size: 10412281
dataset_size: 22506849
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ai-chem/Chelate_metal_complexes | ai-chem | 2025-05-05T20:32:11Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"metal-complexes",
"radiometals",
"coordination-chemistry"
] | [] | 2025-05-05T17:47:49Z | null | ---
dataset_info:
features:
- name: pdf
dtype: string
- name: doi
dtype: string
- name: doi_sourse
dtype: string
- name: supplementary
dtype: int64
- name: title
dtype: string
- name: publisher
dtype: string
- name: year
dtype: int64
- name: access
dtype: int64
- name: compound_id
dtype: string
- name: compound_name
dtype: string
- name: smiles
dtype: string
- name: smiles_type
dtype: string
- name: metal
dtype: string
- name: target
dtype: string
- name: page_smiles
dtype: int64
- name: origin_smiles
dtype: string
- name: page_metal
dtype: int64
- name: origin_metal
dtype: string
- name: page_target
dtype: float64
- name: origin_target
dtype: string
splits:
- name: train
num_bytes: 329597
num_examples: 907
download_size: 40094
dataset_size: 329597
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- metal-complexes
- radiometals
- coordination-chemistry
---
# Dataset Card for Complexes
This dataset includes information about **metal-containing chemical complexes**, particularly those involving radiometals like gallium. It contains chemical structures, target values, and references to the source literature.
## Dataset Summary
- **Number of rows**: 907
- **Number of columns**: 20
- **Data type**: CSV
## Column Examples
- `smiles`: Chemical structure
- `metal`: Metal involved (e.g., Ga)
- `target`: Property of interest
- `doi`, `publisher`, `title`: Source article
## Potential Uses
- Coordination chemistry studies
- Metal–ligand interaction modeling
- Radiopharmaceutical design
## License
MIT
|
ai-chem/Cytotoxicity | ai-chem | 2025-05-05T20:30:19Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"cytotoxicity",
"nanomaterials",
"toxicology",
"bio-nano-interactions"
] | [] | 2025-05-05T17:47:51Z | null | ---
dataset_info:
features:
- name: sn
dtype: int64
- name: Material
dtype: string
- name: Shape
dtype: string
- name: Coat/Functional group
dtype: string
- name: Synthesis method
dtype: string
- name: Surface charge
dtype: string
- name: Size in medium (nm)
dtype: float64
- name: Zeta in medium (mV)
dtype: float64
- name: no. of cells (cells/well)
dtype: float64
- name: Human/Animal
dtype: string
- name: Cell source
dtype: string
- name: Cell tissue
dtype: string
- name: Cell morphology
dtype: string
- name: Cell age
dtype: string
- name: Time (hr)
dtype: int64
- name: Concentration (µg/ml)
dtype: float64
- name: Test
dtype: string
- name: Test indicator
dtype: string
- name: Viability (%)
dtype: float64
- name: doi
dtype: string
- name: Article_list
dtype: int64
- name: Core size (nm)
dtype: float64
- name: Hydrodynamic diameter (nm)
dtype: float64
- name: Zeta potential (mV)
dtype: float64
- name: Cell type
dtype: string
- name: journal_name
dtype: string
- name: publisher
dtype: string
- name: year
dtype: float64
- name: title
dtype: string
- name: journal_is_oa
dtype: bool
- name: is_oa
dtype: string
- name: oa_status
dtype: string
- name: pdf
dtype: string
- name: access
dtype: int64
splits:
- name: train
num_bytes: 2594352
num_examples: 5476
download_size: 173648
dataset_size: 2594352
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- cytotoxicity
- nanomaterials
- toxicology
- bio-nano-interactions
---
# Dataset Card for cytox_NeurIPS_updated_data
This dataset reports **cytotoxicity data** for various nanomaterials tested on different cell types. It includes nanomaterial characteristics, experimental conditions, and cell viability outcomes.
## Dataset Summary
- **Number of rows**: 5476
- **Number of columns**: 32
- **Data format**: CSV
## Column Examples
- `material`, `shape`, `coat/functional group`: Nanomaterial descriptors
- `cell type`, `cell tissue`, `human/animal`: Biological models used
- `viability (%)`: Measured cytotoxicity
- `doi`, `publisher`, `title`: Source references
## Potential Uses
- Nanotoxicology studies
- Predictive modeling of nanomaterial–cell interactions
- Safe-by-design nanomaterial development
## License
MIT
|
ai-chem/Eye_drops | ai-chem | 2025-05-05T20:24:38Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"eye-drops",
"permeability",
"drug-delivery"
] | [] | 2025-05-05T17:47:56Z | null | ---
dataset_info:
features:
- name: smiles
dtype: string
- name: name
dtype: string
- name: perm (cm/s)
dtype: string
- name: logP
dtype: float64
- name: doi
dtype: string
- name: PMID
dtype: float64
- name: title
dtype: string
- name: publisher
dtype: string
- name: year
dtype: int64
- name: access
dtype: int64
- name: page
dtype: float64
- name: origin
dtype: string
splits:
- name: train
num_bytes: 45633
num_examples: 163
download_size: 14233
dataset_size: 45633
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- eye-drops
- permeability
- drug-delivery
---
# Dataset Card for Eye Drops
This dataset contains data on **ocular drug candidates** and their **corneal permeability**. It includes SMILES, physicochemical properties, and literature references.
## Dataset Summary
- **Number of rows**: 163
- **Number of columns**: 12
- **Data type**: CSV
## Column Examples
- `smiles`: Molecular structure
- `perm (cm/s)`: Permeability value
- `logP`: Lipophilicity
- `PMID`, `title`, `publisher`: Source info
## Potential Uses
- Drug delivery modeling
- QSAR for corneal absorption
- Eye drop formulation research
## License
MIT
|
TheRealPilot638/Llama-3.2-1B-beam-search_16_no_chunking_H200 | TheRealPilot638 | 2025-05-05T20:11:23Z | 2 | 0 | [
"region:us"
] | [] | 2025-05-04T17:39:19Z | null | ---
dataset_info:
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--m-4--iters-40--look-1--seed-0--agg_strategy--last
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: subject
dtype: string
- name: level
dtype: int64
- name: unique_id
dtype: string
- name: completions
sequence: string
- name: pred
dtype: string
- name: completion_tokens
sequence: int64
- name: scores
sequence:
sequence: float64
- name: agg_scores
sequence: float64
- name: pred_weighted@1
dtype: string
- name: pred_maj@1
dtype: string
- name: pred_naive@1
dtype: string
- name: pred_weighted@2
dtype: string
- name: pred_maj@2
dtype: string
- name: pred_naive@2
dtype: string
- name: pred_weighted@4
dtype: string
- name: pred_maj@4
dtype: string
- name: pred_naive@4
dtype: string
- name: pred_weighted@8
dtype: string
- name: pred_maj@8
dtype: string
- name: pred_naive@8
dtype: string
- name: pred_weighted@16
dtype: string
- name: pred_maj@16
dtype: string
- name: pred_naive@16
dtype: string
splits:
- name: train
num_bytes: 15301039
num_examples: 500
download_size: 2440565
dataset_size: 15301039
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--m-4--iters-40--look-1--seed-0--agg_strategy--last--evals
features:
- name: n
dtype: int64
- name: acc_naive
dtype: float64
- name: acc_weighted
dtype: float64
- name: acc_maj
dtype: float64
splits:
- name: train
num_bytes: 32
num_examples: 1
download_size: 1961
dataset_size: 32
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--m-4--iters-40--look-1--seed-1--agg_strategy--last
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: subject
dtype: string
- name: level
dtype: int64
- name: unique_id
dtype: string
- name: completions
sequence: string
- name: pred
dtype: string
- name: completion_tokens
sequence: int64
- name: scores
sequence:
sequence: float64
- name: agg_scores
sequence: float64
- name: pred_weighted@1
dtype: string
- name: pred_maj@1
dtype: string
- name: pred_naive@1
dtype: string
- name: pred_weighted@2
dtype: string
- name: pred_maj@2
dtype: string
- name: pred_naive@2
dtype: string
- name: pred_weighted@4
dtype: string
- name: pred_maj@4
dtype: string
- name: pred_naive@4
dtype: string
- name: pred_weighted@8
dtype: string
- name: pred_maj@8
dtype: string
- name: pred_naive@8
dtype: string
- name: pred_weighted@16
dtype: string
- name: pred_maj@16
dtype: string
- name: pred_naive@16
dtype: string
splits:
- name: train
num_bytes: 15114807
num_examples: 500
download_size: 2362761
dataset_size: 15114807
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--m-4--iters-40--look-1--seed-1--agg_strategy--last--evals
features:
- name: n
dtype: int64
- name: acc_naive
dtype: float64
- name: acc_weighted
dtype: float64
- name: acc_maj
dtype: float64
splits:
- name: train
num_bytes: 32
num_examples: 1
download_size: 1961
dataset_size: 32
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--m-4--iters-40--look-1--seed-2--agg_strategy--last
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: subject
dtype: string
- name: level
dtype: int64
- name: unique_id
dtype: string
- name: completions
sequence: string
- name: pred
dtype: string
- name: completion_tokens
sequence: int64
- name: scores
sequence:
sequence: float64
- name: agg_scores
sequence: float64
- name: pred_weighted@1
dtype: string
- name: pred_maj@1
dtype: string
- name: pred_naive@1
dtype: string
- name: pred_weighted@2
dtype: string
- name: pred_maj@2
dtype: string
- name: pred_naive@2
dtype: string
- name: pred_weighted@4
dtype: string
- name: pred_maj@4
dtype: string
- name: pred_naive@4
dtype: string
- name: pred_weighted@8
dtype: string
- name: pred_maj@8
dtype: string
- name: pred_naive@8
dtype: string
- name: pred_weighted@16
dtype: string
- name: pred_maj@16
dtype: string
- name: pred_naive@16
dtype: string
splits:
- name: train
num_bytes: 14848306
num_examples: 500
download_size: 2369235
dataset_size: 14848306
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--m-4--iters-40--look-1--seed-2--agg_strategy--last--evals
features:
- name: n
dtype: int64
- name: acc_naive
dtype: float64
- name: acc_weighted
dtype: float64
- name: acc_maj
dtype: float64
splits:
- name: train
num_bytes: 32
num_examples: 1
download_size: 1961
dataset_size: 32
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--m-4--iters-40--look-1--seed-3--agg_strategy--last
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: subject
dtype: string
- name: level
dtype: int64
- name: unique_id
dtype: string
- name: completions
sequence: string
- name: pred
dtype: string
- name: completion_tokens
sequence: int64
- name: scores
sequence:
sequence: float64
- name: agg_scores
sequence: float64
- name: pred_weighted@1
dtype: string
- name: pred_maj@1
dtype: string
- name: pred_naive@1
dtype: string
- name: pred_weighted@2
dtype: string
- name: pred_maj@2
dtype: string
- name: pred_naive@2
dtype: string
- name: pred_weighted@4
dtype: string
- name: pred_maj@4
dtype: string
- name: pred_naive@4
dtype: string
- name: pred_weighted@8
dtype: string
- name: pred_maj@8
dtype: string
- name: pred_naive@8
dtype: string
- name: pred_weighted@16
dtype: string
- name: pred_maj@16
dtype: string
- name: pred_naive@16
dtype: string
splits:
- name: train
num_bytes: 14983679
num_examples: 500
download_size: 2264287
dataset_size: 14983679
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--m-4--iters-40--look-1--seed-3--agg_strategy--last--evals
features:
- name: n
dtype: int64
- name: acc_naive
dtype: float64
- name: acc_weighted
dtype: float64
- name: acc_maj
dtype: float64
splits:
- name: train
num_bytes: 32
num_examples: 1
download_size: 1961
dataset_size: 32
configs:
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--m-4--iters-40--look-1--seed-0--agg_strategy--last
data_files:
- split: train
path: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--m-4--iters-40--look-1--seed-0--agg_strategy--last/train-*
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--m-4--iters-40--look-1--seed-0--agg_strategy--last--evals
data_files:
- split: train
path: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--m-4--iters-40--look-1--seed-0--agg_strategy--last--evals/train-*
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--m-4--iters-40--look-1--seed-1--agg_strategy--last
data_files:
- split: train
path: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--m-4--iters-40--look-1--seed-1--agg_strategy--last/train-*
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--m-4--iters-40--look-1--seed-1--agg_strategy--last--evals
data_files:
- split: train
path: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--m-4--iters-40--look-1--seed-1--agg_strategy--last--evals/train-*
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--m-4--iters-40--look-1--seed-2--agg_strategy--last
data_files:
- split: train
path: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--m-4--iters-40--look-1--seed-2--agg_strategy--last/train-*
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--m-4--iters-40--look-1--seed-2--agg_strategy--last--evals
data_files:
- split: train
path: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--m-4--iters-40--look-1--seed-2--agg_strategy--last--evals/train-*
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--m-4--iters-40--look-1--seed-3--agg_strategy--last
data_files:
- split: train
path: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--m-4--iters-40--look-1--seed-3--agg_strategy--last/train-*
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--m-4--iters-40--look-1--seed-3--agg_strategy--last--evals
data_files:
- split: train
path: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-16--m-4--iters-40--look-1--seed-3--agg_strategy--last--evals/train-*
---
|
polygraf-ai/dpo-dataset-v1 | polygraf-ai | 2025-05-05T20:07:45Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T20:07:39Z | null | ---
dataset_info:
features:
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: chosen_ai_score
dtype: float64
- name: chosen_quality_score
dtype: int64
- name: rejected_ai_score
dtype: float64
- name: rejected_quality_score
dtype: int64
- name: original_dataset_index
dtype: int64
splits:
- name: train
num_bytes: 19520349
num_examples: 4309
download_size: 9081948
dataset_size: 19520349
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
osama24sy/llama3.1-8b-it-10k-qwen-singleturn-onesolution-r64-24-v0.3 | osama24sy | 2025-05-05T19:58:17Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T19:58:13Z | null | ---
dataset_info:
features:
- name: index
dtype: int64
- name: numbers
sequence: int64
- name: operations
sequence:
sequence: string
- name: response
dtype: string
- name: token_count
dtype: int64
splits:
- name: train
num_bytes: 243196
num_examples: 150
download_size: 97582
dataset_size: 243196
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.0_num-company_2_dataset_1_for_gen_17_v2 | HungVu2003 | 2025-05-05T19:58:04Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T19:58:03Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 3731880
num_examples: 12500
download_size: 1983734
dataset_size: 3731880
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|