Bittensor Subnet 13 X (Twitter) Dataset


Miner Data Compliance Agreement
In uploading this dataset, I am agreeing to the Macrocosmos Miner Data Compliance Policy.
Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks. For more information about the dataset, please visit the official repository.
Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs. For example:
- Sentiment Analysis
- Trend Detection (a brief hashtag-counting sketch follows this list)
- Content Analysis
- User Behavior Modeling
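As an illustration of the trend-detection item above, here is a minimal sketch that streams a small sample of tweets and counts hashtag frequencies. The repository ID (`James096/x_dataset_58`, taken from the citation below), the `train` split name, and the 10,000-row sample size are assumptions; adjust them to your own setup.

```python
from collections import Counter

from datasets import load_dataset

# Stream the data so the full ~200M-row dataset is not downloaded up front.
# The repository ID and the "train" split name are assumptions; adjust as needed.
ds = load_dataset("James096/x_dataset_58", split="train", streaming=True)

# Count hashtag frequencies over a small sample as a naive trend signal.
counts = Counter()
for i, row in enumerate(ds):
    counts.update(tag.lower() for tag in (row["tweet_hashtags"] or []))
    if i >= 10_000:
        break

print(counts.most_common(10))
```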
Languages
Primary language: the dataset is mostly English, but it can be multilingual because the data is collected by a decentralized network of miners.
Dataset Structure
Data Instances
Each instance represents a single tweet with the following fields:
Data Fields
- text (string): The main content of the tweet.
- label (string): Sentiment or topic category of the tweet.
- tweet_hashtags (list): A list of hashtags used in the tweet. May be empty if no hashtags are present.
- datetime (string): The date when the tweet was posted.
- username_encoded (string): An encoded version of the username to maintain user privacy.
- url_encoded (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present.
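A minimal sketch of loading the dataset in streaming mode and inspecting these fields; the repository ID and the `train` split name are assumptions, so adjust them to the copy you are working with:

```python
from datasets import load_dataset

# Streaming avoids downloading the full dataset; the repository ID and
# the "train" split name are assumptions taken from the citation URL below.
ds = load_dataset("James096/x_dataset_58", split="train", streaming=True)

# Print the documented fields of the first record.
first = next(iter(ds))
for field in ("text", "label", "tweet_hashtags", "datetime",
              "username_encoded", "url_encoded"):
    print(f"{field}: {first.get(field)!r}")
```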
Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
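One possible approach is a time-based split on the `datetime` field, sketched below. The cutoff date is arbitrary, and the string comparison assumes ISO-8601 timestamps; both are assumptions rather than guarantees about the data.

```python
from datasets import load_dataset

# Repository ID and "train" split name are assumptions.
ds = load_dataset("James096/x_dataset_58", split="train", streaming=True)

CUTOFF = "2025-01-01"  # arbitrary illustrative cutoff date

# String comparison is valid here only if `datetime` values are ISO-8601
# formatted (e.g. "2025-06-19T00:00:00Z"), which sorts chronologically.
train_stream = ds.filter(lambda row: row["datetime"] < CUTOFF)
test_stream = ds.filter(lambda row: row["datetime"] >= CUTOFF)
```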
Dataset Creation
Source Data
Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines.
Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
Considerations for Using the Data
Social Impact and Biases
Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population.
Limitations
- Data quality may vary due to the decentralized nature of collection and preprocessing.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public tweets and does not include private accounts or direct messages.
- Not all tweets contain hashtags or URLs.
Additional Information
Licensing Information
The dataset is released under the MIT license. Use of this dataset is also subject to the X Terms of Service.
Citation Information
If you use this dataset in your research, please cite it as follows:
```bibtex
@misc{James0962025datauniversex_dataset_58,
  title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
  author={James096},
  year={2025},
  url={https://huggingface.co/datasets/James096/x_dataset_58},
}
```
Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
Dataset Statistics
[This section is automatically updated]
- Total Instances: 200892543
- Date Range: 2006-01-04T00:00:00Z to 2025-06-19T00:00:00Z
- Last Updated: 2025-07-11T04:03:59Z
Data Distribution
- Tweets with hashtags: 19.69%
- Tweets without hashtags: 80.31%
Top 10 Hashtags
For full statistics, please refer to the stats.json file in the repository (a short download sketch follows the table below).
Rank | Topic | Total Count | Percentage |
---|---|---|---|
1 | NULL | 161336639 | 80.31% |
2 | #tiktok | 709726 | 0.35% |
3 | #pr | 431495 | 0.21% |
4 | #ad | 407921 | 0.20% |
5 | #enhypen | 357033 | 0.18% |
6 | #whalestorexoxo | 333901 | 0.17% |
7 | #iran | 286317 | 0.14% |
8 | #loveislandusa | 276715 | 0.14% |
9 | #wearemadleen | 247909 | 0.12% |
10 | #aiforall | 244168 | 0.12% |
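To retrieve the full statistics file referenced above, one option is `huggingface_hub`; this sketch assumes the file is named `stats.json` and sits at the repository root:

```python
import json

from huggingface_hub import hf_hub_download

# Fetch stats.json from the dataset repo; the repository ID is an assumption.
path = hf_hub_download(
    repo_id="James096/x_dataset_58",
    filename="stats.json",
    repo_type="dataset",
)

with open(path) as f:
    stats = json.load(f)

print(list(stats.keys()))
```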
Update History
Date | New Instances | Total Instances |
---|---|---|
2025-07-10T17:26:35Z | 48 | 48 |
2025-07-10T17:26:42Z | 50 | 98 |
2025-07-10T17:44:45Z | 50 | 148 |
2025-07-11T04:03:59Z | 200892395 | 200892543 |