Dataset Viewer (auto-converted to Parquet). Columns of the underlying table, with the minimum and maximum values observed:

| Column | Type | Min | Max |
|-----------------|------------------------|---------------------|---------------------|
| datasetId | large_string (length) | 6 | 110 |
| author | large_string (length) | 3 | 34 |
| last_modified | large_string (date) | 2021-05-20 00:57:22 | 2025-05-07 08:14:41 |
| downloads | int64 | 0 | 3.97M |
| likes | int64 | 0 | 7.74k |
| tags | large_list (length) | 1 | 2.03k |
| task_categories | large_list (length) | 0 | 16 |
| createdAt | large_string (date) | 2022-03-02 23:29:22 | 2025-05-07 08:13:27 |
| trending_score | float64 | 1 | 39 |
| card | large_string (length) | 31 | 1M |
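Because the listing is auto-converted to Parquet, the table can be read directly with any Parquet-aware client. A minimal sketch using `polars`; the repo id and shard path below are placeholders, not taken from this page:

```python
import polars as pl

# Hypothetical repo id and shard path -- substitute the actual listing repo.
df = pl.read_parquet("hf://datasets/<user>/<dataset>/data/train-00000-of-00001.parquet")

# Columns follow the schema above: datasetId, author, downloads, likes, tags, card, ...
print(df.select(["datasetId", "author", "downloads", "likes"]).head())
```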
Sim4Rec/pearl_with_profiles
Sim4Rec
2025-05-07T02:34:50Z
238
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-01-15T03:46:02Z
null
--- dataset_info: features: - name: data_id dtype: int64 - name: user_persona dtype: string - name: seen_movie_titles sequence: string - name: gt_abstract dtype: string - name: gt_movie_title dtype: string - name: gt_genre dtype: string - name: gt_director dtype: string - name: gt_cast dtype: string - name: dialogue list: - name: content dtype: string - name: role dtype: string - name: title dtype: string - name: movie_history list: - name: rating dtype: string - name: review dtype: string - name: title dtype: string - name: user_profile dtype: string - name: user_chat list: - name: content dtype: string - name: role dtype: string - name: assistant_chat list: - name: content dtype: string - name: role dtype: string splits: - name: train num_bytes: 1378375511.0 num_examples: 46218 - name: validation num_bytes: 137941154.0 num_examples: 4606 - name: test num_bytes: 61914901.0 num_examples: 2069 download_size: 632968306 dataset_size: 1578231566.0 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* ---
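A minimal loading sketch for the card above, using the `datasets` library; the field names are taken from the `dataset_info` block, and the rest is illustrative:

```python
from datasets import load_dataset

ds = load_dataset("Sim4Rec/pearl_with_profiles", split="train")

example = ds[0]
print(example["user_persona"])
print(example["gt_movie_title"], "|", example["gt_genre"])

# `dialogue` is a list of {content, role, title} turns.
for turn in example["dialogue"][:3]:
    print(turn["role"], ":", turn["content"][:80])
```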
icedwind/x_dataset_12970
icedwind
2025-05-07T02:08:25Z
2,049
0
[ "task_categories:text-classification", "task_categories:token-classification", "task_categories:question-answering", "task_categories:summarization", "task_categories:text-generation", "task_ids:sentiment-analysis", "task_ids:topic-classification", "task_ids:named-entity-recognition", "task_ids:language-modeling", "task_ids:text-scoring", "task_ids:multi-class-classification", "task_ids:multi-label-classification", "task_ids:extractive-qa", "task_ids:news-articles-summarization", "multilinguality:multilingual", "source_datasets:original", "license:mit", "size_categories:100M<n<1B", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[ "text-classification", "token-classification", "question-answering", "summarization", "text-generation" ]
2025-01-28T23:27:14Z
null
--- license: mit multilinguality: - multilingual source_datasets: - original task_categories: - text-classification - token-classification - question-answering - summarization - text-generation task_ids: - sentiment-analysis - topic-classification - named-entity-recognition - language-modeling - text-scoring - multi-class-classification - multi-label-classification - extractive-qa - news-articles-summarization --- # Bittensor Subnet 13 X (Twitter) Dataset <center> <img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer"> </center> <center> <img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer"> </center> ## Dataset Description - **Repository:** icedwind/x_dataset_12970 - **Subnet:** Bittensor Subnet 13 - **Miner Hotkey:** 5FjmBWG6CrGX74iFhChXLETvDQ3kcgvroZhsgyGKSXmvxGxK ### Dataset Summary This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks. For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe). ### Supported Tasks The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs. For example: - Sentiment Analysis - Trend Detection - Content Analysis - User Behavior Modeling ### Languages Primary language: Datasets are mostly English, but can be multilingual due to decentralized ways of creation. ## Dataset Structure ### Data Instances Each instance represents a single tweet with the following fields: ### Data Fields - `text` (string): The main content of the tweet. - `label` (string): Sentiment or topic category of the tweet. - `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present. - `datetime` (string): The date when the tweet was posted. - `username_encoded` (string): An encoded version of the username to maintain user privacy. - `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present. ### Data Splits This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp. ## Dataset Creation ### Source Data Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines. ### Personal and Sensitive Information All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information. ## Considerations for Using the Data ### Social Impact and Biases Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population. ### Limitations - Data quality may vary due to the decentralized nature of collection and preprocessing. 
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms. - Temporal biases may exist due to real-time collection methods. - The dataset is limited to public tweets and does not include private accounts or direct messages. - Not all tweets contain hashtags or URLs. ## Additional Information ### Licensing Information The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use. ### Citation Information If you use this dataset in your research, please cite it as follows: ``` @misc{icedwind2025datauniversex_dataset_12970, title={The Data Universe Datasets: The finest collection of social media data the web has to offer}, author={icedwind}, year={2025}, url={https://huggingface.co/datasets/icedwind/x_dataset_12970}, } ``` ### Contributions To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms. ## Dataset Statistics [This section is automatically updated] - **Total Instances:** 54816137 - **Date Range:** 2025-01-22T00:00:00Z to 2025-02-13T00:00:00Z - **Last Updated:** 2025-02-18T17:33:06Z ### Data Distribution - Tweets with hashtags: 43.35% - Tweets without hashtags: 56.65% ### Top 10 Hashtags For full statistics, please refer to the `stats.json` file in the repository. | Rank | Topic | Total Count | Percentage | |------|-------|-------------|-------------| | 1 | NULL | 31054855 | 56.65% | | 2 | #riyadh | 354013 | 0.65% | | 3 | #zelena | 257909 | 0.47% | | 4 | #tiktok | 233757 | 0.43% | | 5 | #ad | 128023 | 0.23% | | 6 | #bbb25 | 127665 | 0.23% | | 7 | #superbowl | 91524 | 0.17% | | 8 | #bbmzansi | 78771 | 0.14% | | 9 | #perfect10linersep16 | 77174 | 0.14% | | 10 | #pr | 75894 | 0.14% | ## Update History | Date | New Instances | Total Instances | |------|---------------|-----------------| | 2025-01-28T23:28:09Z | 2878524 | 2878524 | | 2025-02-01T11:31:54Z | 11125347 | 14003871 | | 2025-02-04T23:35:23Z | 10564190 | 24568061 | | 2025-02-08T11:37:41Z | 5577751 | 30145812 | | 2025-02-11T23:42:08Z | 11929151 | 42074963 | | 2025-02-16T19:40:54Z | 11389333 | 53464296 | | 2025-02-18T02:31:26Z | 660780 | 54125076 | | 2025-02-18T17:33:06Z | 691061 | 54816137 |
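Since the card recommends building your own splits from the `datetime` field, here is one hedged sketch: it assumes the fields documented above and ISO-8601 `datetime` strings, so lexicographic comparison matches chronological order (for a ~54M-row dataset you would likely stream instead of loading in full):

```python
from datasets import load_dataset

ds = load_dataset("icedwind/x_dataset_12970", split="train")

# Split on a cutoff date rather than at random, as the card suggests.
cutoff = "2025-02-01"
train = ds.filter(lambda ex: ex["datetime"] < cutoff)
test = ds.filter(lambda ex: ex["datetime"] >= cutoff)
print(len(train), len(test))
```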
james-1111/x_dataset_0305158
james-1111
2025-05-07T01:16:34Z
336
0
[ "task_categories:text-classification", "task_categories:token-classification", "task_categories:question-answering", "task_categories:summarization", "task_categories:text-generation", "task_ids:sentiment-analysis", "task_ids:topic-classification", "task_ids:named-entity-recognition", "task_ids:language-modeling", "task_ids:text-scoring", "task_ids:multi-class-classification", "task_ids:multi-label-classification", "task_ids:extractive-qa", "task_ids:news-articles-summarization", "multilinguality:multilingual", "source_datasets:original", "license:mit", "size_categories:10M<n<100M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[ "text-classification", "token-classification", "question-answering", "summarization", "text-generation" ]
2025-01-25T07:10:23Z
null
--- license: mit multilinguality: - multilingual source_datasets: - original task_categories: - text-classification - token-classification - question-answering - summarization - text-generation task_ids: - sentiment-analysis - topic-classification - named-entity-recognition - language-modeling - text-scoring - multi-class-classification - multi-label-classification - extractive-qa - news-articles-summarization --- # Bittensor Subnet 13 X (Twitter) Dataset <center> <img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer"> </center> <center> <img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer"> </center> ## Dataset Description - **Repository:** james-1111/x_dataset_0305158 - **Subnet:** Bittensor Subnet 13 - **Miner Hotkey:** 5FNEpTedLPFnPqoC8qEB56xnMNd12N4GiUwA1gwW385Rt8n3 ### Miner Data Compliance Agreement In uploading this dataset, I am agreeing to the [Macrocosmos Miner Data Compliance Policy](https://github.com/macrocosm-os/data-universe/blob/add-miner-policy/docs/miner_policy.md). ### Dataset Summary This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks. For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe). ### Supported Tasks The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs. For example: - Sentiment Analysis - Trend Detection - Content Analysis - User Behavior Modeling ### Languages Primary language: Datasets are mostly English, but can be multilingual due to decentralized ways of creation. ## Dataset Structure ### Data Instances Each instance represents a single tweet with the following fields: ### Data Fields - `text` (string): The main content of the tweet. - `label` (string): Sentiment or topic category of the tweet. - `tweet_hashtags` (list): A list of hashtags used in the tweet. May be empty if no hashtags are present. - `datetime` (string): The date when the tweet was posted. - `username_encoded` (string): An encoded version of the username to maintain user privacy. - `url_encoded` (string): An encoded version of any URLs included in the tweet. May be empty if no URLs are present. ### Data Splits This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp. ## Dataset Creation ### Source Data Data is collected from public tweets on X (Twitter), adhering to the platform's terms of service and API usage guidelines. ### Personal and Sensitive Information All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information. ## Considerations for Using the Data ### Social Impact and Biases Users should be aware of potential biases inherent in X (Twitter) data, including demographic and content biases. 
This dataset reflects the content and opinions expressed on X and should not be considered a representative sample of the general population. ### Limitations - Data quality may vary due to the decentralized nature of collection and preprocessing. - The dataset may contain noise, spam, or irrelevant content typical of social media platforms. - Temporal biases may exist due to real-time collection methods. - The dataset is limited to public tweets and does not include private accounts or direct messages. - Not all tweets contain hashtags or URLs. ## Additional Information ### Licensing Information The dataset is released under the MIT license. The use of this dataset is also subject to X Terms of Use. ### Citation Information If you use this dataset in your research, please cite it as follows: ``` @misc{james-11112025datauniversex_dataset_0305158, title={The Data Universe Datasets: The finest collection of social media data the web has to offer}, author={james-1111}, year={2025}, url={https://huggingface.co/datasets/james-1111/x_dataset_0305158}, } ``` ### Contributions To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms. ## Dataset Statistics [This section is automatically updated] - **Total Instances:** 4499023 - **Date Range:** 2025-01-02T00:00:00Z to 2025-04-27T00:00:00Z - **Last Updated:** 2025-05-07T01:16:34Z ### Data Distribution - Tweets with hashtags: 2.51% - Tweets without hashtags: 97.49% ### Top 10 Hashtags For full statistics, please refer to the `stats.json` file in the repository. | Rank | Topic | Total Count | Percentage | |------|-------|-------------|-------------| | 1 | NULL | 1238008 | 91.65% | | 2 | #箱根駅伝 | 8147 | 0.60% | | 3 | #thameposeriesep9 | 7605 | 0.56% | | 4 | #दुनियाको_भ्रमित_करना_बंद_करो | 6500 | 0.48% | | 5 | #ad | 5237 | 0.39% | | 6 | #tiktok | 5032 | 0.37% | | 7 | #zelena | 4878 | 0.36% | | 8 | #smackdown | 4844 | 0.36% | | 9 | #कबीर_परमेश्वर_निर्वाण_दिवस | 4843 | 0.36% | | 10 | #delhielectionresults | 3476 | 0.26% | ## Update History | Date | New Instances | Total Instances | |------|---------------|-----------------| | 2025-01-25T07:07:31Z | 453526 | 453526 | | 2025-01-25T07:07:59Z | 453526 | 907052 | | 2025-01-25T07:08:28Z | 453526 | 1360578 | | 2025-01-25T07:08:56Z | 446896 | 1807474 | | 2025-01-25T07:09:24Z | 446896 | 2254370 | | 2025-01-25T07:09:52Z | 446896 | 2701266 | | 2025-01-25T07:10:21Z | 446896 | 3148162 | | 2025-01-25T07:10:51Z | 446896 | 3595058 | | 2025-02-18T03:40:24Z | 467290 | 4062348 | | 2025-05-07T01:16:34Z | 436675 | 4499023 |
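To recompute a hashtag distribution like the table above, a small sketch, assuming `tweet_hashtags` is a list-of-strings column as the data fields section documents:

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("james-1111/x_dataset_0305158", split="train")

counts = Counter()
for hashtags in ds["tweet_hashtags"]:
    counts.update(tag.lower() for tag in hashtags)

for tag, n in counts.most_common(10):
    print(f"{tag}\t{n}")
```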
reasoning-proj/exp_rob_dfiltered_DeepSeek-R1-Distill-Qwen-7B_mbenign_complete_step_t30
reasoning-proj
2025-05-07T00:37:57Z
0
0
[ "region:us" ]
[]
2025-05-07T00:37:55Z
null
--- dataset_info: features: - name: question dtype: string - name: answer_content dtype: string - name: reference_answer dtype: string - name: id dtype: string - name: metadata struct: - name: question_license dtype: string - name: question_source dtype: string - name: model_name dtype: string - name: verifier_score dtype: int64 - name: mutated_answer_content dtype: string splits: - name: train num_bytes: 9163480 num_examples: 600 download_size: 3918238 dataset_size: 9163480 configs: - config_name: default data_files: - split: train path: data/train-* ---
CanopyElias/emilia-snac-with-spk-emb-DE
CanopyElias
2025-05-06T23:07:19Z
0
0
[ "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-06T22:56:30Z
null
--- dataset_info: features: - name: codes_list sequence: int64 - name: speaker_embedding sequence: float32 - name: text dtype: string splits: - name: part_34 num_bytes: 20899789 num_examples: 3061 - name: part_84 num_bytes: 2960956 num_examples: 373 - name: part_88 num_bytes: 2855397 num_examples: 365 - name: part_61 num_bytes: 16604500 num_examples: 2195 - name: part_67 num_bytes: 16863948 num_examples: 2220 - name: part_68 num_bytes: 16751360 num_examples: 2227 - name: part_62 num_bytes: 17460872 num_examples: 2293 - name: part_65 num_bytes: 17136460 num_examples: 2249 - name: part_40 num_bytes: 35979781 num_examples: 4764 - name: part_41 num_bytes: 36004130 num_examples: 4829 - name: part_79 num_bytes: 11586778 num_examples: 1471 - name: part_30 num_bytes: 38499602 num_examples: 5343 - name: part_58 num_bytes: 25004870 num_examples: 3318 - name: part_49 num_bytes: 38140081 num_examples: 5124 - name: part_17 num_bytes: 88647911 num_examples: 14188 - name: part_15 num_bytes: 106684060 num_examples: 16768 - name: part_9 num_bytes: 113127812 num_examples: 16535 - name: part_2 num_bytes: 114158041 num_examples: 16914 - name: part_25 num_bytes: 109781455 num_examples: 17620 download_size: 340135587 dataset_size: 829147803 configs: - config_name: default data_files: - split: part_34 path: data/part_34-* - split: part_84 path: data/part_84-* - split: part_88 path: data/part_88-* - split: part_61 path: data/part_61-* - split: part_67 path: data/part_67-* - split: part_68 path: data/part_68-* - split: part_62 path: data/part_62-* - split: part_65 path: data/part_65-* - split: part_40 path: data/part_40-* - split: part_41 path: data/part_41-* - split: part_79 path: data/part_79-* - split: part_30 path: data/part_30-* - split: part_58 path: data/part_58-* - split: part_49 path: data/part_49-* - split: part_17 path: data/part_17-* - split: part_15 path: data/part_15-* - split: part_9 path: data/part_9-* - split: part_2 path: data/part_2-* - split: part_25 path: data/part_25-* ---
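This dataset exposes `part_*` shards as separate splits rather than a single `train` split; a sketch for treating them as one dataset (split names come from the config list above):

```python
from datasets import concatenate_datasets, load_dataset

parts = load_dataset("CanopyElias/emilia-snac-with-spk-emb-DE")  # DatasetDict of part_* splits
full = concatenate_datasets([parts[name] for name in sorted(parts)])

print(len(full))
print(full[0]["text"])
print(len(full[0]["speaker_embedding"]))  # embedding dimensionality
```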
konwoo/lte-ctx16-fs1-np32-lr1e-05
konwoo
2025-05-06T21:50:33Z
0
0
[ "size_categories:100K<n<1M", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-06T21:50:29Z
null
--- dataset_info: features: - name: text dtype: string - name: p_log_probs dtype: float32 - name: q_log_probs dtype: float32 - name: p_hat_log_probs dtype: float32 - name: num_tokens dtype: float32 splits: - name: train num_bytes: 10789651 num_examples: 128000 download_size: 8568695 dataset_size: 10789651 configs: - config_name: default data_files: - split: train path: data/train-* ---
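The card does not say how the `*_log_probs` columns are normalized; the sketch below assumes they are total sequence log-probabilities in nats, which is an unverified assumption:

```python
import math

from datasets import load_dataset

ds = load_dataset("konwoo/lte-ctx16-fs1-np32-lr1e-05", split="train")

ex = ds[0]
# Assumption: dividing a sequence-total log-prob by the token count
# gives a per-token average.
avg_p = ex["p_log_probs"] / ex["num_tokens"]
avg_q = ex["q_log_probs"] / ex["num_tokens"]
print("per-token log-prob gap (p - q):", avg_p - avg_q)
print("perplexity under p:", math.exp(-avg_p))
```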
louisbrulenaudet/code-artisanat
louisbrulenaudet
2025-05-06T20:39:32Z
493
0
[ "task_categories:text-generation", "task_categories:table-question-answering", "task_categories:summarization", "task_categories:text-retrieval", "task_categories:question-answering", "task_categories:text-classification", "multilinguality:monolingual", "source_datasets:original", "language:fr", "license:apache-2.0", "size_categories:n<1K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "doi:10.57967/hf/1457", "region:us", "finetuning", "legal", "french law", "droit français", "Code de l'artisanat" ]
[ "text-generation", "table-question-answering", "summarization", "text-retrieval", "question-answering", "text-classification" ]
2023-12-12T18:49:02Z
null
--- license: apache-2.0 language: - fr multilinguality: - monolingual tags: - finetuning - legal - french law - droit français - Code de l'artisanat source_datasets: - original pretty_name: Code de l'artisanat task_categories: - text-generation - table-question-answering - summarization - text-retrieval - question-answering - text-classification size_categories: - 1K<n<10K --- # Code de l'artisanat, non-instruct (2025-05-06) The objective of this project is to provide researchers, professionals and law students with simplified, up-to-date access to all French legal texts, enriched with a wealth of data to facilitate their integration into Community and European projects. The data is normally refreshed daily across all legal codes, and the project aims to simplify the production of training sets and labeling pipelines for the development of free, open-source language models based on open data accessible to all. ## Concurrent reading of the LegalKit [<img src="https://raw.githubusercontent.com/louisbrulenaudet/ragoon/main/assets/badge.svg" alt="Built with RAGoon" width="200" height="32"/>](https://github.com/louisbrulenaudet/ragoon) To use all the legal data published on LegalKit, you can use RAGoon: ```bash pip3 install ragoon ``` Then, you can load multiple datasets using this code snippet: ```python # -*- coding: utf-8 -*- import datasets from ragoon import load_datasets req = [ "louisbrulenaudet/code-artisanat", "louisbrulenaudet/code-action-sociale-familles", # ... ] datasets_list = load_datasets( req=req, streaming=False ) dataset = datasets.concatenate_datasets( datasets_list ) ``` ### Data Structure for Article Information This section provides a detailed overview of the elements contained within the `item` dictionary. Each key represents a specific attribute of the legal article, with its associated value providing detailed information. 1. **Basic Information** - `ref` (string): **Reference** - A reference to the article, combining the title_main and the article `number` (e.g., "Code Général des Impôts, art. 123"). - `texte` (string): **Text Content** - The textual content of the article. - `dateDebut` (string): **Start Date** - The date when the article came into effect. - `dateFin` (string): **End Date** - The date when the article was terminated or superseded. - `num` (string): **Article Number** - The number assigned to the article. - `id` (string): **Article ID** - Unique identifier for the article. - `cid` (string): **Chronical ID** - Chronical identifier for the article. - `type` (string): **Type** - The type or classification of the document (e.g., "AUTONOME"). - `etat` (string): **Legal Status** - The current legal status of the article (e.g., "MODIFIE_MORT_NE"). 2. **Content and Notes** - `nota` (string): **Notes** - Additional notes or remarks associated with the article. - `version_article` (string): **Article Version** - The version number of the article. - `ordre` (integer): **Order Number** - A numerical value used to sort articles within their parent section. 3. **Additional Metadata** - `conditionDiffere` (string): **Deferred Condition** - Specific conditions related to collective agreements. - `infosComplementaires` (string): **Additional Information** - Extra information pertinent to the article. - `surtitre` (string): **Subtitle** - A subtitle or additional title information related to collective agreements. - `nature` (string): **Nature** - The nature or category of the document (e.g., "Article"). - `texteHtml` (string): **HTML Content** - The article's content in HTML format. 4.
**Versioning and Extensions** - `dateFinExtension` (string): **End Date of Extension** - The end date if the article has an extension. - `versionPrecedente` (string): **Previous Version** - Identifier for the previous version of the article. - `refInjection` (string): **Injection Reference** - Technical reference to identify the date of injection. - `idTexte` (string): **Text ID** - Identifier for the legal text to which the article belongs. - `idTechInjection` (string): **Technical Injection ID** - Technical identifier for the injected element. 5. **Origin and Relationships** - `origine` (string): **Origin** - The origin of the document (e.g., "LEGI"). - `dateDebutExtension` (string): **Start Date of Extension** - The start date if the article has an extension. - `idEliAlias` (string): **ELI Alias** - Alias for the European Legislation Identifier (ELI). - `cidTexte` (string): **Text Chronical ID** - Chronical identifier of the text. 6. **Hierarchical Relationships** - `sectionParentId` (string): **Parent Section ID** - Technical identifier of the parent section. - `multipleVersions` (boolean): **Multiple Versions** - Indicates if the article has multiple versions. - `comporteLiensSP` (boolean): **Contains Public Service Links** - Indicates if the article contains links to public services. - `sectionParentTitre` (string): **Parent Section Title** - Title of the parent section (e.g., "I : Revenu imposable"). - `infosRestructurationBranche` (string): **Branch Restructuring Information** - Information about branch restructuring. - `idEli` (string): **ELI ID** - European Legislation Identifier (ELI) for the article. - `sectionParentCid` (string): **Parent Section Chronical ID** - Chronical identifier of the parent section. 7. **Additional Content and History** - `numeroBo` (string): **Official Bulletin Number** - Number of the official bulletin where the article was published. - `infosRestructurationBrancheHtml` (string): **Branch Restructuring Information (HTML)** - Branch restructuring information in HTML format. - `historique` (string): **History** - Historical context or changes specific to collective agreements. - `infosComplementairesHtml` (string): **Additional Information (HTML)** - Additional information in HTML format. - `renvoi` (string): **Reference** - References to content within the article (e.g., "(1)"). - `fullSectionsTitre` (string): **Full Section Titles** - Concatenation of all titles in the parent chain. - `notaHtml` (string): **Notes (HTML)** - Additional notes or remarks in HTML format. - `inap` (string): **INAP** - A placeholder for INAP-specific information. ## Feedback If you have any feedback, please reach out at [louisbrulenaudet@icloud.com](mailto:louisbrulenaudet@icloud.com).
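For reading a single code without RAGoon, a plain `datasets` sketch using fields named in the structure section above:

```python
from datasets import load_dataset

ds = load_dataset("louisbrulenaudet/code-artisanat", split="train")

article = ds[0]
print(article["ref"])          # e.g. "Code de l'artisanat, art. ..."
print(article["etat"])         # current legal status
print(article["texte"][:200])  # beginning of the article text
```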
WardahsLab/UJ_Staple_Pantry_1000
WardahsLab
2025-05-06T20:25:17Z
0
0
[ "task_categories:image-segmentation", "license:cc-by-sa-4.0", "size_categories:1K<n<10K", "format:imagefolder", "modality:image", "library:datasets", "library:mlcroissant", "region:us" ]
[ "image-segmentation" ]
2025-05-06T13:35:46Z
null
--- license: cc-by-sa-4.0 task_categories: - image-segmentation size_categories: - 1K<n<10K --- # The Staple Pantry Dataset 🥫 This research introduces the **Staple Pantry Dataset**, a new benchmark for **instance-level ingredient segmentation of raw pantry items** focusing on cluttered and overlapping ingredients. Our dataset offers precise instance masks for everyday food items such as grains, flour, spices, and fresh vegetables, providing a more practical benchmark for real-world kitchen automation systems. ## Dataset Overview 📊 ### Key Features - **1,000 images** with **6,882 instance masks** and **405,534,586 labelled pixels** across **100 ingredient categories**. - **Real-world complexity**: Overlapping items, obscured items, variable lighting, cluttered backgrounds. - **High-quality annotations**: Pixel-level polygon masks for fine-grained textures (e.g., salt grains, vegetable stems). ## Dataset Structure 📂 The dataset is organised as follows: ```plaintext UJ Staple Pantry 1000/ ├── train/ │ ├── images/ │ ├── annotations.json ├── valid/ │ ├── images/ │ ├── annotations.json ``` ## Dataset Splits 🧩 | Split | Images | Instances | |-------|--------|-----------| | Train | 800 | 5,506 | | Test | 200 | 1,376 |
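The card shows an `images/` + `annotations.json` layout but does not name the annotation schema; the sketch below assumes a COCO-style file, which is an unverified assumption — adjust the keys if the format differs:

```python
import json
from collections import Counter

# Assumed COCO-style keys: "categories", "annotations", "category_id".
with open("UJ Staple Pantry 1000/train/annotations.json") as f:
    coco = json.load(f)

names = {c["id"]: c["name"] for c in coco["categories"]}
counts = Counter(names[a["category_id"]] for a in coco["annotations"])
print(counts.most_common(10))
```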
mih12345/tts_italian_may_7
mih12345
2025-05-06T18:49:01Z
0
0
[ "license:mit", "region:us" ]
[]
2025-05-06T18:08:50Z
null
--- license: mit dataset_info: features: - name: text dtype: string - name: stringlengths dtype: int64 - name: audio dtype: audio - name: audioduration(s) dtype: float64 splits: - name: train num_bytes: 620296000.844 num_examples: 1223 download_size: 566413313 dataset_size: 620296000.844 configs: - config_name: default data_files: - split: train path: data/train-* ---
reasoning-proj/exp_rob_dfiltered_Llama-3_1-Nemotron-Nano-8B-v1_mneutral_add_random_text_t50
reasoning-proj
2025-05-06T18:40:31Z
0
0
[ "region:us" ]
[]
2025-05-06T18:40:27Z
null
--- dataset_info: features: - name: question dtype: string - name: answer_content dtype: string - name: reference_answer dtype: string - name: id dtype: string - name: metadata struct: - name: question_license dtype: string - name: question_source dtype: string - name: model_name dtype: string - name: verifier_score dtype: int64 - name: mutated_answer_content dtype: string splits: - name: train num_bytes: 897822 num_examples: 50 download_size: 351195 dataset_size: 897822 configs: - config_name: default data_files: - split: train path: data/train-* ---
Aravindh25/trossen_pick_granola_bars_3cam_v1
Aravindh25
2025-05-06T18:13:50Z
0
0
[ "task_categories:robotics", "license:apache-2.0", "region:us", "LeRobot", "tutorial" ]
[ "robotics" ]
2025-05-06T18:12:05Z
null
--- license: apache-2.0 task_categories: - robotics tags: - LeRobot - tutorial configs: - config_name: default data_files: data/*/*.parquet --- This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "trossen_ai_solo", "total_episodes": 25, "total_frames": 34048, "total_tasks": 1, "total_videos": 75, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:25" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 7 ], "names": [ "main_joint_0", "main_joint_1", "main_joint_2", "main_joint_3", "main_joint_4", "main_joint_5", "main_joint_6" ] }, "observation.state": { "dtype": "float32", "shape": [ 7 ], "names": [ "main_joint_0", "main_joint_1", "main_joint_2", "main_joint_3", "main_joint_4", "main_joint_5", "main_joint_6" ] }, "observation.images.cam_wrist": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.fps": 30.0, "video.height": 480, "video.width": 640, "video.channels": 3, "video.codec": "h264", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "has_audio": false } }, "observation.images.cam_high": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.fps": 30.0, "video.height": 480, "video.width": 640, "video.channels": 3, "video.codec": "h264", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "has_audio": false } }, "observation.images.cam_front": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.fps": 30.0, "video.height": 480, "video.width": 640, "video.channels": 3, "video.codec": "h264", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
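The `data_path` and `video_path` entries in `info.json` are Python format strings; resolving them for a given episode is direct (a sketch — the episode index is arbitrary, and the chunk index follows from `chunks_size: 1000`):

```python
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
video_path = "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"

episode_index = 7
episode_chunk = episode_index // 1000  # chunks_size is 1000, so episode 7 is in chunk 0

print(data_path.format(episode_chunk=episode_chunk, episode_index=episode_index))
print(video_path.format(episode_chunk=episode_chunk, episode_index=episode_index,
                        video_key="observation.images.cam_front"))
```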
IABD11/DatasetEmocionesIABD11
IABD11
2025-05-06T17:53:37Z
8
0
[ "license:cc-by-nc-4.0", "size_categories:n<1K", "format:csv", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-04-29T17:10:37Z
null
--- license: cc-by-nc-4.0 --- # DatasetEmocionesIABD11 This dataset contains Spanish-language sentences labeled with basic emotions. It is designed for emotion classification tasks on text. ## 📁 Dataset Structure The dataset is in CSV format and contains two columns: - `texto`: a sentence in Spanish. - `emocion`: a text label for the main emotion expressed in the sentence. ### Example: ```csv texto,emocion "Estoy muy feliz por todo lo que está pasando",alegre "Hoy ha sido un gran día",alegre "Me encanta la vida",alegre "No tengo ganas de hacer nada",triste "Todo me sale mal últimamente",triste "Estoy muy cansado de todo",triste "Voy al supermercado",neutral "Está lloviendo",neutral "El coche es rojo",neutral ``` # 👨‍💻 Author **Name:** Miguel Sedano Izurieta **Subject:** SBD **Course:** IABD
reasoning-proj/exp_rob_dfiltered_DeepSeek_R1_Distill_Qwen_1_5B_madversarial_insert_w_t10
reasoning-proj
2025-05-06T17:52:12Z
0
0
[ "region:us" ]
[]
2025-05-06T16:48:58Z
null
--- dataset_info: features: - name: question dtype: string - name: answer_content dtype: string - name: reference_answer dtype: string - name: id dtype: string - name: metadata struct: - name: question_license dtype: string - name: question_source dtype: string - name: model_name dtype: string - name: verifier_score dtype: int64 - name: mutated_answer_content dtype: string splits: - name: train num_bytes: 788325 num_examples: 50 download_size: 314168 dataset_size: 788325 configs: - config_name: default data_files: - split: train path: data/train-* ---
OPR-Project/NCLT_OpenPlaceRecognition
OPR-Project
2025-05-06T16:52:50Z
1,811
0
[ "license:odbl", "region:us" ]
[]
2025-03-24T17:26:34Z
null
--- license: odbl --- [NCLT Dataset](http://robots.engin.umich.edu/nclt/index.html#top) is The University of Michigan North Campus Long-Term Vision and LIDAR Dataset. This dataset is a modified version of the NCLT Dataset, which is licensed under the Open Database License (ODbL) v1.0. As required by the license, this modified version is also released under the ODbL v1.0. You must attribute the original source and share any further modifications under the same license. --- # ❗ Please note **This dataset is currently not compatible with the `datasets` library.** The recommended way to download the data is by using the `huggingface_hub` library. Example code snippet: ```python from pathlib import Path from huggingface_hub import snapshot_download out_dir = Path("/dir/to/save/data") out_dir.mkdir(parents=True, exist_ok=True) snapshot_download(repo_id="OPR-Project/NCLT_OpenPlaceRecognition", repo_type="dataset", local_dir=out_dir) ``` For reading and working with the data, we recommend using the OpenPlaceRecognition library: https://github.com/OPR-Project/OpenPlaceRecognition
mteb/MalayalamNewsClassification
mteb
2025-05-06T16:13:21Z
0
0
[ "task_categories:text-classification", "task_ids:topic-classification", "annotations_creators:derived", "multilinguality:monolingual", "language:mal", "license:mit", "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2005.00085", "arxiv:2502.13595", "arxiv:2210.07316", "region:us", "mteb", "text" ]
[ "text-classification" ]
2025-05-06T16:13:17Z
null
--- annotations_creators: - derived language: - mal license: mit multilinguality: monolingual task_categories: - text-classification task_ids: - topic-classification dataset_info: features: - name: text dtype: string - name: label dtype: string splits: - name: train num_bytes: 1153014 num_examples: 5036 - name: test num_bytes: 292970 num_examples: 1260 download_size: 577789 dataset_size: 1445984 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* tags: - mteb - text --- <!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md --> <div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;"> <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">MalayalamNewsClassification</h1> <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div> <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div> </div> A Malayalam dataset for 3-class classification of Malayalam news articles | | | |---------------|---------------------------------------------| | Task category | t2c | | Domains | News, Written | | Reference | https://github.com/goru001/nlp-for-malyalam | ## How to evaluate on this task You can evaluate an embedding model on this dataset using the following code: ```python import mteb task = mteb.get_tasks(["MalayalamNewsClassification"]) evaluator = mteb.MTEB(task) model = mteb.get_model(YOUR_MODEL) evaluator.run(model) ``` <!-- Datasets want link to arxiv in readme to autolink dataset with paper --> To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb). ## Citation If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb). ```bibtex @article{kunchukuttan2020indicnlpcorpus, author = {Anoop Kunchukuttan and Divyanshu Kakwani and Satish Golla and Gokul N.C. and Avik Bhattacharyya and Mitesh M.
Khapra and Pratyush Kumar}, journal = {arXiv preprint arXiv:2005.00085}, title = {AI4Bharat-IndicNLP Corpus: Monolingual Corpora and Word Embeddings for Indic Languages}, year = {2020}, } @article{enevoldsen2025mmtebmassivemultilingualtext, title={MMTEB: Massive Multilingual Text Embedding Benchmark}, author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff}, publisher = {arXiv}, journal={arXiv preprint arXiv:2502.13595}, year={2025}, url={https://arxiv.org/abs/2502.13595}, doi = {10.48550/arXiv.2502.13595}, } @article{muennighoff2022mteb, author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils}, title = {MTEB: Massive Text Embedding Benchmark}, publisher = {arXiv}, journal={arXiv preprint arXiv:2210.07316}, year = {2022}, url = {https://arxiv.org/abs/2210.07316}, doi = {10.48550/ARXIV.2210.07316}, } ``` # Dataset Statistics <details> <summary> Dataset Statistics</summary> The following code contains the descriptive statistics from the task. These can also be obtained using: ```python import mteb task = mteb.get_task("MalayalamNewsClassification") desc_stats = task.metadata.descriptive_stats ``` ```json { "test": { "num_samples": 1260, "number_of_characters": 101349, "number_texts_intersect_with_train": 34, "min_text_length": 14, "average_text_length": 80.43571428571428, "max_text_length": 375, "unique_text": 1251, "unique_labels": 3, "labels": { "business": { "count": 383 }, "sports": { "count": 446 }, "entertainment": { "count": 431 } } }, "train": { "num_samples": 5036, "number_of_characters": 400263, "number_texts_intersect_with_train": null, "min_text_length": 10, "average_text_length": 79.48034154090548, "max_text_length": 399, "unique_text": 4958, "unique_labels": 3, "labels": { "business": { "count": 1540 }, "sports": { "count": 1743 }, "entertainment": { "count": 1753 } } } } ``` </details> --- *This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*
c-ho/bll_pt
c-ho
2025-05-06T15:51:39Z
0
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-06T14:59:13Z
null
--- dataset_info: features: - name: doc_id dtype: string - name: doc_title dtype: string - name: doc_lang dtype: string - name: doc_type dtype: string - name: doc_desc_list sequence: string - name: ddc dtype: string - name: doc_subject_list sequence: string - name: bll_match_id sequence: string - name: bll_match_literals sequence: sequence: string - name: bll_superclasses sequence: sequence: string - name: bll_superclass_literals sequence: sequence: sequence: string - name: bll_top_node sequence: string - name: text dtype: string splits: - name: train num_bytes: 31891495 num_examples: 4476 download_size: 10725952 dataset_size: 31891495 configs: - config_name: default data_files: - split: train path: data/train-* ---
xbilek25/static_train_10800_14400
xbilek25
2025-05-06T15:05:53Z
0
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:audio", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-06T15:05:20Z
null
--- dataset_info: features: - name: client_id dtype: string - name: path dtype: string - name: audio dtype: audio: sampling_rate: 16000 - name: sentence dtype: string - name: up_votes dtype: int64 - name: down_votes dtype: int64 - name: age dtype: string - name: gender dtype: string - name: accent dtype: string - name: locale dtype: string - name: segment dtype: string - name: variant dtype: string splits: - name: train num_bytes: 773517598.0 num_examples: 3600 download_size: 718803067 dataset_size: 773517598.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
TheoM55/mvtec_all_objects_split
TheoM55
2025-05-06T15:02:16Z
0
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-06T14:56:18Z
null
--- dataset_info: features: - name: image_path dtype: image - name: split dtype: string - name: object dtype: string - name: defect dtype: string - name: label dtype: int64 - name: mask_path dtype: image splits: - name: bottle.train num_bytes: 111821158.75214793 num_examples: 209 - name: bottle.test num_bytes: 44740733.676690325 num_examples: 83 - name: cable.train num_bytes: 303321613.3563691 num_examples: 224 - name: cable.test num_bytes: 201868992.04221144 num_examples: 150 - name: capsule.train num_bytes: 252459879.8216287 num_examples: 219 - name: capsule.test num_bytes: 152284222.31714606 num_examples: 132 - name: carpet.train num_bytes: 522219865.94546133 num_examples: 280 - name: carpet.test num_bytes: 218719829.71292493 num_examples: 117 - name: grid.train num_bytes: 124224944.63429211 num_examples: 264 - name: grid.test num_bytes: 37170487.141949944 num_examples: 78 - name: hazelnut.train num_bytes: 480029810.4166978 num_examples: 391 - name: hazelnut.test num_bytes: 137561138.7642884 num_examples: 110 - name: leather.train num_bytes: 341465508.2022787 num_examples: 245 - name: leather.test num_bytes: 183970644.66156146 num_examples: 124 - name: metal_nut.train num_bytes: 109148948.52857676 num_examples: 220 - name: metal_nut.test num_bytes: 56704001.29902876 num_examples: 115 - name: pill.train num_bytes: 168148799.75513634 num_examples: 267 - name: pill.test num_bytes: 106922400.06032872 num_examples: 167 - name: screw.train num_bytes: 130516544.22338438 num_examples: 320 - name: screw.test num_bytes: 65443158.11169219 num_examples: 160 - name: tile.train num_bytes: 233535029.59805754 num_examples: 230 - name: tile.test num_bytes: 118193690.71292491 num_examples: 117 - name: toothbrush.train num_bytes: 64393390.41688457 num_examples: 60 - name: toothbrush.test num_bytes: 45091343.6918192 num_examples: 42 - name: transistor.train num_bytes: 274201541.57994026 num_examples: 213 - name: transistor.test num_bytes: 128987954.69480762 num_examples: 100 - name: wood.train num_bytes: 377587786.6161748 num_examples: 247 - name: wood.test num_bytes: 120878073.84889802 num_examples: 79 - name: zipper.train num_bytes: 98334094.66753829 num_examples: 240 - name: zipper.test num_bytes: 61185288.74915951 num_examples: 151 download_size: 5269258143 dataset_size: 5271130876.000001 configs: - config_name: default data_files: - split: bottle.train path: data/bottle.train-* - split: bottle.test path: data/bottle.test-* - split: cable.train path: data/cable.train-* - split: cable.test path: data/cable.test-* - split: capsule.train path: data/capsule.train-* - split: capsule.test path: data/capsule.test-* - split: carpet.train path: data/carpet.train-* - split: carpet.test path: data/carpet.test-* - split: grid.train path: data/grid.train-* - split: grid.test path: data/grid.test-* - split: hazelnut.train path: data/hazelnut.train-* - split: hazelnut.test path: data/hazelnut.test-* - split: leather.train path: data/leather.train-* - split: leather.test path: data/leather.test-* - split: metal_nut.train path: data/metal_nut.train-* - split: metal_nut.test path: data/metal_nut.test-* - split: pill.train path: data/pill.train-* - split: pill.test path: data/pill.test-* - split: screw.train path: data/screw.train-* - split: screw.test path: data/screw.test-* - split: tile.train path: data/tile.train-* - split: tile.test path: data/tile.test-* - split: toothbrush.train path: data/toothbrush.train-* - split: toothbrush.test path: data/toothbrush.test-* - split: transistor.train path: 
data/transistor.train-* - split: transistor.test path: data/transistor.test-* - split: wood.train path: data/wood.train-* - split: wood.test path: data/wood.test-* - split: zipper.train path: data/zipper.train-* - split: zipper.test path: data/zipper.test-* ---
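Because every MVTec object gets its own `<object>.train` / `<object>.test` split (see the config list above), a single object can be loaded in isolation — a minimal sketch:

```python
from datasets import load_dataset

bottle_train = load_dataset("TheoM55/mvtec_all_objects_split", split="bottle.train")
bottle_test = load_dataset("TheoM55/mvtec_all_objects_split", split="bottle.test")

print(len(bottle_train), len(bottle_test))
print(bottle_test[0]["defect"], bottle_test[0]["label"])
```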
konwoo/37M-ctx16-100M
konwoo
2025-05-06T14:20:03Z
0
0
[ "size_categories:100M<n<1B", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-06T14:16:43Z
null
--- dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 6762176534 num_examples: 100000000 download_size: 5092025391 dataset_size: 6762176534 configs: - config_name: default data_files: - split: train path: data/train-* ---
txya900619/audiocaps-16k
txya900619
2025-05-06T14:13:00Z
0
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:audio", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-06T13:00:43Z
null
--- dataset_info: features: - name: youtube_id dtype: string - name: start_time dtype: int64 - name: caption dtype: string - name: audio dtype: audio: sampling_rate: 16000 splits: - name: train num_bytes: 9088777452.895 num_examples: 34365 - name: val num_bytes: 491016602.331 num_examples: 1759 - name: test num_bytes: 889188037.952 num_examples: 3366 download_size: 10351008734 dataset_size: 10468982093.178 configs: - config_name: default data_files: - split: train path: data/train-* - split: val path: data/val-* - split: test path: data/test-* ---
CentrumDiagnostykiZnamion/dataset_melanoma
CentrumDiagnostykiZnamion
2025-05-06T14:01:08Z
0
0
[ "license:other", "size_categories:n<1K", "format:imagefolder", "modality:image", "library:datasets", "library:mlcroissant", "region:us" ]
[]
2025-05-06T14:01:04Z
null
--- license: other license_name: safescan-validation-restricted-use-agreement license_link: LICENSE ---
munnabhaimbbsfail/folderuploaddataset
munnabhaimbbsfail
2025-05-06T13:58:41Z
0
0
[ "task_categories:token-classification", "language:aa", "license:unknown", "size_categories:n<1K", "format:imagefolder", "modality:image", "library:datasets", "library:mlcroissant", "region:us", "art" ]
[ "token-classification" ]
2025-05-06T13:39:10Z
null
--- license: unknown task_categories: - token-classification language: - aa tags: - art pretty_name: gjhghjhb size_categories: - 1K<n<10K ---
SciKnowOrg/ontolearner-arts_and_humanities
SciKnowOrg
2025-05-06T13:24:37Z
0
0
[ "language:en", "license:mit", "region:us", "OntoLearner", "ontology-learning", "arts_and_humanities" ]
[]
2025-05-06T13:24:33Z
null
--- license: mit language: - en tags: - OntoLearner - ontology-learning - arts_and_humanities pretty_name: Arts And Humanities --- <div> <img src="https://raw.githubusercontent.com/sciknoworg/OntoLearner/main/images/logo.png" alt="OntoLearner" style="display: block; margin: 0 auto; width: 500px; height: auto;"> <h1 style="text-align: center; margin-top: 1em;">Arts And Humanities Domain Ontologies</h1> </div> ## Overview The arts and humanities domain encompasses ontologies that systematically represent and categorize the diverse aspects of human cultural expression, including music, visual arts, historical artifacts, and broader humanistic studies. This domain plays a crucial role in knowledge representation by providing structured frameworks that facilitate the organization, retrieval, and analysis of complex cultural and artistic data. Through these ontologies, the domain supports interdisciplinary research and enhances the understanding and preservation of cultural heritage. ## Ontologies | Ontology ID | Full Name | Classes | Properties | Last Updated | |-------------|-----------|---------|------------|--------------| | ChordOntology | Chord Ontology (ChordOntology) | 9 | 0 | 2007-10-25| | ICON | Icon Ontology (ICON) | 76 | 68 | April 26th, 2024| | MusicOntology | Music Ontology (MusicOntology) | 92 | 165 | 2013/07/22| | Nomisma | Nomisma Ontology (Nomisma) | 36 | 71 | 2025-01-22| | TimelineOntology | Timeline Ontology (TimelineOntology) | 47 | 46 | 25th October 2007| ## Dataset Files Each ontology directory contains the following files: 1. `<ontology_id>.<format>` - The original ontology file 2. `term_typings.json` - Dataset of term to type mappings 3. `taxonomies.json` - Dataset of taxonomic relations 4. `non_taxonomic_relations.json` - Dataset of non-taxonomic relations 5. `<ontology_id>.rst` - Documentation describing the ontology ## Usage These datasets are intended for ontology learning research and applications.
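The repo is laid out as per-ontology directories rather than a `datasets`-loadable config, so individual files can be fetched with `huggingface_hub` (a sketch; the `ChordOntology/term_typings.json` path is an assumed example following the layout described above, not verified against the repo):

```python
import json

from huggingface_hub import hf_hub_download

# Hypothetical in-repo path: <ontology_id>/term_typings.json
path = hf_hub_download(
    repo_id="SciKnowOrg/ontolearner-arts_and_humanities",
    repo_type="dataset",
    filename="ChordOntology/term_typings.json",
)
with open(path) as f:
    term_typings = json.load(f)
print(len(term_typings))
```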
Elif3757/bookDataSet
Elif3757
2025-05-06T12:50:12Z
0
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-06T12:49:11Z
null
--- dataset_info: features: - name: instruction dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 23135676.7050891 num_examples: 11212 - name: test num_bytes: 2571089.2949109008 num_examples: 1246 download_size: 9834010 dataset_size: 25706766.0 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
mteb/LearnedHandsEducationLegalBenchClassification
mteb
2025-05-06T12:42:03Z
0
0
[ "task_categories:text-classification", "annotations_creators:expert-annotated", "multilinguality:monolingual", "language:eng", "license:cc-by-nc-sa-4.0", "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2308.11462", "arxiv:2502.13595", "arxiv:2210.07316", "region:us", "mteb", "text" ]
[ "text-classification" ]
2025-05-06T12:41:58Z
null
--- annotations_creators: - expert-annotated language: - eng license: cc-by-nc-sa-4.0 multilinguality: monolingual task_categories: - text-classification task_ids: [] dataset_info: features: - name: text dtype: string - name: label dtype: int64 splits: - name: train num_bytes: 6971 num_examples: 6 - name: test num_bytes: 78942 num_examples: 56 download_size: 62342 dataset_size: 85913 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* tags: - mteb - text --- <!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md --> <div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;"> <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">LearnedHandsEducationLegalBenchClassification</h1> <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div> <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div> </div> This is a binary classification task in which the model must determine if a user's post discusses issues around school, including accommodations for special needs, discrimination, student debt, discipline, and other issues in education. | | | |---------------|---------------------------------------------| | Task category | t2c | | Domains | Legal, Written | | Reference | https://huggingface.co/datasets/nguha/legalbench | ## How to evaluate on this task You can evaluate an embedding model on this dataset using the following code: ```python import mteb task = mteb.get_tasks(["LearnedHandsEducationLegalBenchClassification"]) evaluator = mteb.MTEB(task) model = mteb.get_model(YOUR_MODEL) evaluator.run(model) ``` <!-- Datasets want link to arxiv in readme to autolink dataset with paper --> To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb). ## Citation If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb). ```bibtex @misc{guha2023legalbench, archiveprefix = {arXiv}, author = {Neel Guha and Julian Nyarko and Daniel E. Ho and Christopher Ré and Adam Chilton and Aditya Narayana and Alex Chohlas-Wood and Austin Peters and Brandon Waldon and Daniel N. Rockmore and Diego Zambrano and Dmitry Talisman and Enam Hoque and Faiz Surani and Frank Fagan and Galit Sarfaty and Gregory M. Dickinson and Haggai Porat and Jason Hegland and Jessica Wu and Joe Nudell and Joel Niklaus and John Nay and Jonathan H.
Choi and Kevin Tobia and Margaret Hagan and Megan Ma and Michael Livermore and Nikon Rasumov-Rahe and Nils Holzenberger and Noam Kolt and Peter Henderson and Sean Rehaag and Sharad Goel and Shang Gao and Spencer Williams and Sunny Gandhi and Tom Zur and Varun Iyer and Zehua Li}, eprint = {2308.11462}, primaryclass = {cs.CL}, title = {LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models}, year = {2023}, } @dataset{learned_hands, author = {{Suffolk University Law School} and {Stanford Legal Design Lab}}, note = {The LearnedHands dataset is licensed under CC BY-NC-SA 4.0}, title = {LearnedHands Dataset}, url = {https://spot.suffolklitlab.org/data/#learnedhands}, urldate = {2022-05-21}, year = {2022}, } @article{enevoldsen2025mmtebmassivemultilingualtext, title={MMTEB: Massive Multilingual Text Embedding Benchmark}, author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff}, publisher = {arXiv}, journal={arXiv preprint arXiv:2502.13595}, year={2025}, url={https://arxiv.org/abs/2502.13595}, doi = {10.48550/arXiv.2502.13595}, } @article{muennighoff2022mteb, author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils}, title = {MTEB: Massive Text Embedding Benchmark}, publisher = {arXiv}, journal={arXiv preprint arXiv:2210.07316}, year = {2022}, url = {https://arxiv.org/abs/2210.07316}, doi = {10.48550/ARXIV.2210.07316}, } ``` # Dataset Statistics <details> <summary> Dataset Statistics</summary> The following code contains the descriptive statistics from the task.
These can also be obtained using: ```python import mteb task = mteb.get_task("LearnedHandsEducationLegalBenchClassification") desc_stats = task.metadata.descriptive_stats ``` ```json { "test": { "num_samples": 56, "number_of_characters": 78257, "number_texts_intersect_with_train": 0, "min_text_length": 214, "average_text_length": 1397.4464285714287, "max_text_length": 4864, "unique_text": 56, "unique_labels": 2, "labels": { "1": { "count": 28 }, "0": { "count": 28 } } }, "train": { "num_samples": 6, "number_of_characters": 6899, "number_texts_intersect_with_train": null, "min_text_length": 822, "average_text_length": 1149.8333333333333, "max_text_length": 1637, "unique_text": 6, "unique_labels": 2, "labels": { "1": { "count": 3 }, "0": { "count": 3 } } } } ``` </details> --- *This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*
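Beyond the `mteb` harness, the raw splits can be inspected with the 🤗 `datasets` library; a minimal sketch, assuming the data is hosted under the `mteb/LearnedHandsEducationLegalBenchClassification` repository id with the default parquet config shown in the YAML header:

```python
from datasets import load_dataset

# 56-example test split per the statistics above; features are `text` and an int64 `label`.
ds = load_dataset("mteb/LearnedHandsEducationLegalBenchClassification", split="test")
print(len(ds), ds[0]["label"])  # 56 rows with binary 0/1 labels
print(ds[0]["text"][:120])      # preview of a user post
```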
mteb/LanguageClassification
mteb
2025-05-06T12:40:38Z
0
0
[ "task_categories:text-classification", "task_ids:language-identification", "annotations_creators:derived", "multilinguality:monolingual", "language:ara", "language:bul", "language:cmn", "language:deu", "language:ell", "language:eng", "language:fra", "language:hin", "language:ita", "language:jpn", "language:nld", "language:pol", "language:por", "language:rus", "language:spa", "language:swa", "language:tha", "language:tur", "language:urd", "language:vie", "license:unknown", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2502.13595", "arxiv:2210.07316", "region:us", "mteb", "text" ]
[ "text-classification" ]
2025-05-06T12:40:33Z
null
--- annotations_creators: - derived language: - ara - bul - cmn - deu - ell - eng - fra - hin - ita - jpn - nld - pol - por - rus - spa - swa - tha - tur - urd - vie license: unknown multilinguality: monolingual task_categories: - text-classification task_ids: - language-identification dataset_info: features: - name: text dtype: string - name: label dtype: int64 splits: - name: train num_bytes: 12455318 num_examples: 70000 - name: validation num_bytes: 1777455 num_examples: 10000 - name: test num_bytes: 363008 num_examples: 2048 download_size: 10878978 dataset_size: 14595781 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* tags: - mteb - text --- <!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md --> <div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;"> <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">LanguageClassification</h1> <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div> <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div> </div> A language identification dataset for 20 languages. | | | |---------------|---------------------------------------------| | Task category | t2c | | Domains | Reviews, Web, Non-fiction, Fiction, Government, Written | | Reference | https://huggingface.co/datasets/papluca/language-identification | ## How to evaluate on this task You can evaluate an embedding model on this dataset using the following code: ```python import mteb task = mteb.get_tasks(["LanguageClassification"]) evaluator = mteb.MTEB(task) model = mteb.get_model(YOUR_MODEL) evaluator.run(model) ``` <!-- Datasets want link to arxiv in readme to autolink dataset with paper --> To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb). ## Citation If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb). ```bibtex @inproceedings{conneau2018xnli, author = {Conneau, Alexis and Rinott, Ruty and Lample, Guillaume and Williams, Adina and Bowman, Samuel R.
and Schwenk, Holger and Stoyanov, Veselin}, booktitle = {Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing}, location = {Brussels, Belgium}, publisher = {Association for Computational Linguistics}, title = {XNLI: Evaluating Cross-lingual Sentence Representations}, year = {2018}, } @article{enevoldsen2025mmtebmassivemultilingualtext, title={MMTEB: Massive Multilingual Text Embedding Benchmark}, author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff}, publisher = {arXiv}, journal={arXiv preprint arXiv:2502.13595}, year={2025}, url={https://arxiv.org/abs/2502.13595}, doi = {10.48550/arXiv.2502.13595}, } @article{muennighoff2022mteb, author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils}, title = {MTEB: Massive Text Embedding Benchmark}, publisher = {arXiv}, journal={arXiv preprint arXiv:2210.07316}, year = {2022}, url = {https://arxiv.org/abs/2210.07316}, doi = {10.48550/ARXIV.2210.07316}, } ``` # Dataset Statistics <details> <summary> Dataset Statistics</summary> The following code contains the descriptive statistics from the task.
These can also be obtained using: ```python import mteb task = mteb.get_task("LanguageClassification") desc_stats = task.metadata.descriptive_stats ``` ```json { "test": { "num_samples": 2048, "number_of_characters": 224352, "num_texts_in_train": 31, "min_text_length": 14, "average_text_length": 109.546875, "max_text_length": 1270, "unique_text": 2025, "unique_labels": 20, "labels": { "17": { "count": 102 }, "0": { "count": 102 }, "11": { "count": 102 }, "4": { "count": 103 }, "3": { "count": 102 }, "1": { "count": 102 }, "10": { "count": 102 }, "2": { "count": 103 }, "16": { "count": 103 }, "9": { "count": 103 }, "5": { "count": 102 }, "7": { "count": 102 }, "13": { "count": 102 }, "14": { "count": 103 }, "12": { "count": 102 }, "15": { "count": 103 }, "19": { "count": 102 }, "18": { "count": 102 }, "6": { "count": 103 }, "8": { "count": 103 } } }, "train": { "num_samples": 70000, "number_of_characters": 7760299, "num_texts_in_train": null, "min_text_length": 2, "average_text_length": 110.86141428571429, "max_text_length": 2422, "unique_text": 68978, "unique_labels": 20, "labels": { "12": { "count": 3500 }, "1": { "count": 3500 }, "19": { "count": 3500 }, "15": { "count": 3500 }, "13": { "count": 3500 }, "11": { "count": 3500 }, "17": { "count": 3500 }, "14": { "count": 3500 }, "16": { "count": 3500 }, "5": { "count": 3500 }, "0": { "count": 3500 }, "8": { "count": 3500 }, "7": { "count": 3500 }, "2": { "count": 3500 }, "3": { "count": 3500 }, "10": { "count": 3500 }, "6": { "count": 3500 }, "18": { "count": 3500 }, "4": { "count": 3500 }, "9": { "count": 3500 } } } } ``` </details> --- *This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*
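The near-uniform test distribution reported above can be sanity-checked directly with `datasets`; a sketch, assuming the default config listed in the YAML header (the exact id-to-language mapping is not asserted here):

```python
from collections import Counter
from datasets import load_dataset

# The test split holds 2048 short texts; labels 0-19 index the 20 languages.
ds = load_dataset("mteb/LanguageClassification", split="test")
counts = Counter(ds["label"])
print(sorted(counts.items()))  # roughly 102-103 examples per label, matching the stats
```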
mteb/HamshahriClustring
mteb
2025-05-06T12:13:32Z
0
0
[ "task_categories:text-classification", "annotations_creators:derived", "multilinguality:monolingual", "language:fas", "license:unknown", "modality:text", "arxiv:2502.13595", "arxiv:2210.07316", "region:us", "mteb", "text" ]
[ "text-classification" ]
2025-05-06T12:13:29Z
null
--- annotations_creators: - derived language: - fas license: unknown multilinguality: monolingual task_categories: - text-classification task_ids: [] dataset_info: features: - name: sentences dtype: string - name: labels dtype: int64 splits: - name: test num_bytes: 828718 num_examples: 2048 download_size: 414008 dataset_size: 828718 configs: - config_name: default data_files: - split: test path: data/test-* tags: - mteb - text --- <!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md --> <div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;"> <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">HamshahriClustring</h1> <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div> <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div> </div> This dataset was extracted from the RSS feeds of two Farsi news agency websites. | | | |---------------|---------------------------------------------| | Task category | t2c | | Domains | News | | Reference | https://github.com/mallahyari/Farsi-datasets | ## How to evaluate on this task You can evaluate an embedding model on this dataset using the following code: ```python import mteb task = mteb.get_tasks(["HamshahriClustring"]) evaluator = mteb.MTEB(task) model = mteb.get_model(YOUR_MODEL) evaluator.run(model) ``` <!-- Datasets want link to arxiv in readme to autolink dataset with paper --> To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb). ## Citation If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex @article{enevoldsen2025mmtebmassivemultilingualtext, title={MMTEB: Massive Multilingual Text Embedding Benchmark}, author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff}, publisher = {arXiv}, journal={arXiv preprint arXiv:2502.13595}, year={2025}, url={https://arxiv.org/abs/2502.13595}, doi = {10.48550/arXiv.2502.13595}, } @article{muennighoff2022mteb, author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils}, title = {MTEB: Massive Text Embedding Benchmark}, publisher = {arXiv}, journal={arXiv preprint arXiv:2210.07316}, year = {2022}, url = {https://arxiv.org/abs/2210.07316}, doi = {10.48550/ARXIV.2210.07316}, } ``` # Dataset Statistics <details> <summary> Dataset Statistics</summary> The following code contains the descriptive statistics from the task.
These can also be obtained using: ```python import mteb task = mteb.get_task("HamshahriClustring") desc_stats = task.metadata.descriptive_stats ``` ```json { "test": { "num_samples": 2048, "number_of_characters": 444075, "min_text_length": 72, "average_text_length": 216.83349609375, "max_text_length": 458, "unique_texts": 322, "min_labels_per_text": 2, "average_labels_per_text": 1.0, "max_labels_per_text": 237, "unique_labels": 47, "labels": { "6": { "count": 96 }, "11": { "count": 150 }, "10": { "count": 189 }, "25": { "count": 132 }, "14": { "count": 26 }, "27": { "count": 101 }, "34": { "count": 25 }, "29": { "count": 111 }, "28": { "count": 141 }, "17": { "count": 51 }, "33": { "count": 54 }, "24": { "count": 12 }, "12": { "count": 132 }, "42": { "count": 237 }, "0": { "count": 33 }, "30": { "count": 64 }, "35": { "count": 23 }, "3": { "count": 49 }, "44": { "count": 9 }, "4": { "count": 16 }, "23": { "count": 7 }, "16": { "count": 37 }, "8": { "count": 26 }, "38": { "count": 36 }, "1": { "count": 21 }, "46": { "count": 14 }, "2": { "count": 15 }, "45": { "count": 16 }, "7": { "count": 27 }, "9": { "count": 12 }, "5": { "count": 20 }, "31": { "count": 21 }, "13": { "count": 9 }, "43": { "count": 16 }, "36": { "count": 7 }, "32": { "count": 41 }, "26": { "count": 15 }, "21": { "count": 10 }, "22": { "count": 12 }, "20": { "count": 15 }, "19": { "count": 2 }, "18": { "count": 2 }, "39": { "count": 2 }, "40": { "count": 2 }, "15": { "count": 5 }, "37": { "count": 5 }, "41": { "count": 2 } } } } ``` </details> --- *This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*
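Note that this clustering task uses `sentences`/`labels` column names rather than the `text`/`label` pair seen in the classification cards; a short sketch of loading the single test split (assuming the default config):

```python
from collections import Counter
from datasets import load_dataset

# Single 2048-row test split; each row pairs a Farsi news sentence with a cluster id.
ds = load_dataset("mteb/HamshahriClustring", split="test")
print(ds.column_names)                       # ['sentences', 'labels']
print(Counter(ds["labels"]).most_common(5))  # largest clusters, cf. the stats above
```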
mteb/FrenkEnClassification
mteb
2025-05-06T12:09:40Z
0
0
[ "task_categories:text-classification", "task_ids:sentiment-analysis", "task_ids:sentiment-scoring", "task_ids:sentiment-classification", "task_ids:hate-speech-detection", "annotations_creators:derived", "multilinguality:monolingual", "language:eng", "license:unknown", "modality:text", "arxiv:1906.02045", "arxiv:2502.13595", "arxiv:2210.07316", "region:us", "mteb", "text" ]
[ "text-classification" ]
2025-05-06T12:09:36Z
null
--- annotations_creators: - derived language: - eng license: unknown multilinguality: monolingual task_categories: - text-classification task_ids: - sentiment-analysis - sentiment-scoring - sentiment-classification - hate-speech-detection dataset_info: features: - name: text dtype: string - name: label dtype: int64 splits: - name: train num_bytes: 1323909 num_examples: 8404 - name: validation num_bytes: 145112 num_examples: 933 - name: test num_bytes: 466308 num_examples: 2301 download_size: 1244444 dataset_size: 1935329 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* tags: - mteb - text --- <!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md --> <div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;"> <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">FrenkEnClassification</h1> <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div> <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div> </div> English subset of the FRENK dataset | | | |---------------|---------------------------------------------| | Task category | t2c | | Domains | Social, Written | | Reference | https://arxiv.org/abs/1906.02045 | ## How to evaluate on this task You can evaluate an embedding model on this dataset using the following code: ```python import mteb task = mteb.get_tasks(["FrenkEnClassification"]) evaluator = mteb.MTEB(task) model = mteb.get_model(YOUR_MODEL) evaluator.run(model) ``` <!-- Datasets want link to arxiv in readme to autolink dataset with paper --> To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb). ## Citation If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex @misc{ljubešić2019frenk, archiveprefix = {arXiv}, author = {Nikola Ljubešić and Darja Fišer and Tomaž Erjavec}, eprint = {1906.02045}, primaryclass = {cs.CL}, title = {The FRENK Datasets of Socially Unacceptable Discourse in Slovene and English}, url = {https://arxiv.org/abs/1906.02045}, year = {2019}, } @article{enevoldsen2025mmtebmassivemultilingualtext, title={MMTEB: Massive Multilingual Text Embedding Benchmark}, author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff}, publisher = {arXiv}, journal={arXiv preprint arXiv:2502.13595}, year={2025}, url={https://arxiv.org/abs/2502.13595}, doi = {10.48550/arXiv.2502.13595}, } @article{muennighoff2022mteb, author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils}, title = {MTEB: Massive Text Embedding Benchmark}, publisher = {arXiv}, journal={arXiv preprint arXiv:2210.07316}, year = {2022}, url = {https://arxiv.org/abs/2210.07316}, doi = {10.48550/ARXIV.2210.07316}, } ``` # Dataset Statistics <details> <summary> Dataset Statistics</summary> The following code contains the descriptive statistics from the task. These can also be obtained using: ```python import mteb task = mteb.get_task("FrenkEnClassification") desc_stats = task.metadata.descriptive_stats ``` ```json { "test": { "num_samples": 2301, "number_of_characters": 434318, "number_texts_intersect_with_train": 23, "min_text_length": 1, "average_text_length": 188.75184702303346, "max_text_length": 7322, "unique_text": 2282, "unique_labels": 2, "labels": { "0": { "count": 1426 }, "1": { "count": 875 } } }, "train": { "num_samples": 8404, "number_of_characters": 1216080, "number_texts_intersect_with_train": null, "min_text_length": 1, "average_text_length": 144.70252260828178, "max_text_length": 5449, "unique_text": 8275, "unique_labels": 2, "labels": { "0": { "count": 5379 }, "1": { "count": 3025 } } } } ``` </details> --- *This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*
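The card ships train/validation/test splits; a brief sketch of loading all three and checking the class imbalance noted in the statistics (assuming the default config, and that label 1 is the positive class per FRENK's "socially unacceptable discourse" annotation):

```python
from datasets import load_dataset

# Three splits: 8404 train / 933 validation / 2301 test comments.
splits = load_dataset("mteb/FrenkEnClassification")
for name, split in splits.items():
    pos = sum(split["label"])  # assumption: label 1 marks unacceptable discourse
    print(name, len(split), f"positives={pos}")
```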
mteb/CzechSubjectivityClassification
mteb
2025-05-06T12:01:30Z
0
0
[ "task_categories:text-classification", "task_ids:sentiment-analysis", "task_ids:sentiment-scoring", "task_ids:sentiment-classification", "task_ids:hate-speech-detection", "annotations_creators:human-annotated", "multilinguality:monolingual", "language:ces", "license:unknown", "modality:text", "arxiv:2009.08712", "arxiv:2502.13595", "arxiv:2210.07316", "region:us", "mteb", "text" ]
[ "text-classification" ]
2025-05-06T12:01:25Z
null
--- annotations_creators: - human-annotated language: - ces license: unknown multilinguality: monolingual task_categories: - text-classification task_ids: - sentiment-analysis - sentiment-scoring - sentiment-classification - hate-speech-detection dataset_info: features: - name: text dtype: string - name: label dtype: int64 splits: - name: train num_bytes: 994630 num_examples: 7443 - name: validation num_bytes: 66061 num_examples: 500 - name: test num_bytes: 264471 num_examples: 2000 download_size: 949685 dataset_size: 1325162 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* tags: - mteb - text --- <!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md --> <div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;"> <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">CzechSubjectivityClassification</h1> <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div> <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div> </div> A Czech dataset for subjectivity classification. | | | |---------------|---------------------------------------------| | Task category | t2c | | Domains | Reviews, Written | | Reference | https://arxiv.org/abs/2009.08712 | ## How to evaluate on this task You can evaluate an embedding model on this dataset using the following code: ```python import mteb task = mteb.get_tasks(["CzechSubjectivityClassification"]) evaluator = mteb.MTEB(task) model = mteb.get_model(YOUR_MODEL) evaluator.run(model) ``` <!-- Datasets want link to arxiv in readme to autolink dataset with paper --> To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb). ## Citation If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex @inproceedings{priban-steinberger-2022-czech, address = {Marseille, France}, author = {P{\v{r}}ib{\'a}{\v{n}}, Pavel and Steinberger, Josef}, booktitle = {Proceedings of the Thirteenth Language Resources and Evaluation Conference}, month = jun, pages = {1381--1391}, publisher = {European Language Resources Association}, title = {\{C\}zech Dataset for Cross-lingual Subjectivity Classification}, url = {https://aclanthology.org/2022.lrec-1.148}, year = {2022}, } @article{enevoldsen2025mmtebmassivemultilingualtext, title={MMTEB: Massive Multilingual Text Embedding Benchmark}, author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff}, publisher = {arXiv}, journal={arXiv preprint arXiv:2502.13595}, year={2025}, url={https://arxiv.org/abs/2502.13595}, doi = {10.48550/arXiv.2502.13595}, } @article{muennighoff2022mteb, author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils}, title = {MTEB: Massive Text Embedding Benchmark}, publisher = {arXiv}, journal={arXiv preprint arXiv:2210.07316}, year = {2022}, url = {https://arxiv.org/abs/2210.07316}, doi = {10.48550/ARXIV.2210.07316}, } ``` # Dataset Statistics <details> <summary> Dataset Statistics</summary> The following code contains the descriptive statistics from the task.
These can also be obtained using: ```python import mteb task = mteb.get_task("CzechSubjectivityClassification") desc_stats = task.metadata.descriptive_stats ``` ```json { "validation": { "num_samples": 500, "number_of_characters": 54082, "number_texts_intersect_with_train": 0, "min_text_length": 28, "average_text_length": 108.164, "max_text_length": 443, "unique_text": 500, "unique_labels": 2, "labels": { "0": { "count": 250 }, "1": { "count": 250 } } }, "test": { "num_samples": 2000, "number_of_characters": 216612, "number_texts_intersect_with_train": 0, "min_text_length": 25, "average_text_length": 108.306, "max_text_length": 689, "unique_text": 2000, "unique_labels": 2, "labels": { "0": { "count": 1000 }, "1": { "count": 1000 } } }, "train": { "num_samples": 7443, "number_of_characters": 816035, "number_texts_intersect_with_train": null, "min_text_length": 24, "average_text_length": 109.6379148192933, "max_text_length": 5399, "unique_text": 7443, "unique_labels": 2, "labels": { "0": { "count": 3750 }, "1": { "count": 3693 } } } } ``` </details> --- *This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*
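A sketch of verifying the balanced test split reported above (assuming the default config; which integer encodes subjective vs. objective is not asserted here):

```python
from datasets import load_dataset

# 2000-example test split, 1000 per class per the statistics above.
ds = load_dataset("mteb/CzechSubjectivityClassification", split="test")
print(sum(1 for y in ds["label"] if y == 1), "label-1 rows of", len(ds))
```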
mteb/CrossLingualSemanticDiscriminationWMT21
mteb
2025-05-06T12:00:14Z
0
0
[ "region:us" ]
[]
2025-05-06T12:00:08Z
null
--- dataset_info: - config_name: deu-fra-corpus features: - name: _id dtype: string - name: text dtype: string - name: title dtype: string splits: - name: test num_bytes: 880283 num_examples: 4465 download_size: 374870 dataset_size: 880283 - config_name: deu-fra-qrels features: - name: query-id dtype: string - name: corpus-id dtype: string - name: score dtype: int64 splits: - name: test num_bytes: 21995 num_examples: 893 download_size: 10903 dataset_size: 21995 - config_name: deu-fra-queries features: - name: _id dtype: string - name: text dtype: string splits: - name: test num_bytes: 168540 num_examples: 893 download_size: 103956 dataset_size: 168540 configs: - config_name: deu-fra-corpus data_files: - split: test path: deu-fra-corpus/test-* - config_name: deu-fra-qrels data_files: - split: test path: deu-fra-qrels/test-* - config_name: deu-fra-queries data_files: - split: test path: deu-fra-queries/test-* ---
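This card ships retrieval-style configs rather than a flat text/label table; a sketch of assembling the corpus, queries, and relevance judgments from the three configs listed above (the deu-to-fra direction is inferred from the config name, not asserted by the card):

```python
from datasets import load_dataset

repo = "mteb/CrossLingualSemanticDiscriminationWMT21"
corpus = load_dataset(repo, "deu-fra-corpus", split="test")    # 4465 candidate passages
queries = load_dataset(repo, "deu-fra-queries", split="test")  # 893 queries
qrels = load_dataset(repo, "deu-fra-qrels", split="test")      # 893 judged pairs
relevant = {row["query-id"]: row["corpus-id"] for row in qrels}
print(len(corpus), len(queries), len(relevant))
```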
mteb/ContractNLINoticeOnCompelledDisclosureLegalBenchClassification
mteb
2025-05-06T11:58:52Z
0
0
[ "task_categories:text-classification", "annotations_creators:expert-annotated", "multilinguality:monolingual", "language:eng", "license:cc-by-4.0", "modality:text", "arxiv:2308.11462", "arxiv:2110.01799", "arxiv:2502.13595", "arxiv:2210.07316", "region:us", "mteb", "text" ]
[ "text-classification" ]
2025-05-06T11:58:48Z
null
--- annotations_creators: - expert-annotated language: - eng license: cc-by-4.0 multilinguality: monolingual task_categories: - text-classification task_ids: [] dataset_info: features: - name: text dtype: string - name: label dtype: int64 splits: - name: train num_bytes: 3529 num_examples: 8 - name: test num_bytes: 73354 num_examples: 142 download_size: 37736 dataset_size: 76883 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* tags: - mteb - text --- <!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md --> <div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;"> <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">ContractNLINoticeOnCompelledDisclosureLegalBenchClassification</h1> <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div> <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div> </div> This task is a subset of ContractNLI, and consists of determining whether a clause from an NDA provides that the Receiving Party shall notify the Disclosing Party in case the Receiving Party is required by law, regulation or judicial process to disclose any Confidential Information. | | | |---------------|---------------------------------------------| | Task category | t2c | | Domains | Legal, Written | | Reference | https://huggingface.co/datasets/nguha/legalbench | ## How to evaluate on this task You can evaluate an embedding model on this dataset using the following code: ```python import mteb task = mteb.get_tasks(["ContractNLINoticeOnCompelledDisclosureLegalBenchClassification"]) evaluator = mteb.MTEB(task) model = mteb.get_model(YOUR_MODEL) evaluator.run(model) ``` <!-- Datasets want link to arxiv in readme to autolink dataset with paper --> To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb). ## Citation If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb). ```bibtex @misc{guha2023legalbench, archiveprefix = {arXiv}, author = {Neel Guha and Julian Nyarko and Daniel E. Ho and Christopher Ré and Adam Chilton and Aditya Narayana and Alex Chohlas-Wood and Austin Peters and Brandon Waldon and Daniel N. Rockmore and Diego Zambrano and Dmitry Talisman and Enam Hoque and Faiz Surani and Frank Fagan and Galit Sarfaty and Gregory M. Dickinson and Haggai Porat and Jason Hegland and Jessica Wu and Joe Nudell and Joel Niklaus and John Nay and Jonathan H.
Choi and Kevin Tobia and Margaret Hagan and Megan Ma and Michael Livermore and Nikon Rasumov-Rahe and Nils Holzenberger and Noam Kolt and Peter Henderson and Sean Rehaag and Sharad Goel and Shang Gao and Spencer Williams and Sunny Gandhi and Tom Zur and Varun Iyer and Zehua Li}, eprint = {2308.11462}, primaryclass = {cs.CL}, title = {LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models}, year = {2023}, } @article{koreeda2021contractnli, author = {Koreeda, Yuta and Manning, Christopher D}, journal = {arXiv preprint arXiv:2110.01799}, title = {ContractNLI: A dataset for document-level natural language inference for contracts}, year = {2021}, } @article{enevoldsen2025mmtebmassivemultilingualtext, title={MMTEB: Massive Multilingual Text Embedding Benchmark}, author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff}, publisher = {arXiv}, journal={arXiv preprint arXiv:2502.13595}, year={2025}, url={https://arxiv.org/abs/2502.13595}, doi = {10.48550/arXiv.2502.13595}, } @article{muennighoff2022mteb, author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils}, title = {MTEB: Massive Text Embedding Benchmark}, publisher = {arXiv}, journal={arXiv preprint arXiv:2210.07316}, year = {2022}, url = {https://arxiv.org/abs/2210.07316}, doi = {10.48550/ARXIV.2210.07316}, } ``` # Dataset Statistics <details> <summary> Dataset Statistics</summary> The following code contains the descriptive statistics from the task.
These can also be obtained using: ```python import mteb task = mteb.get_task("ContractNLINoticeOnCompelledDisclosureLegalBenchClassification") desc_stats = task.metadata.descriptive_stats ``` ```json { "test": { "num_samples": 142, "number_of_characters": 71490, "number_texts_intersect_with_train": 0, "min_text_length": 65, "average_text_length": 503.4507042253521, "max_text_length": 1976, "unique_text": 142, "unique_labels": 2, "labels": { "1": { "count": 71 }, "0": { "count": 71 } } }, "train": { "num_samples": 8, "number_of_characters": 3417, "number_texts_intersect_with_train": null, "min_text_length": 181, "average_text_length": 427.125, "max_text_length": 816, "unique_text": 8, "unique_labels": 2, "labels": { "1": { "count": 4 }, "0": { "count": 4 } } } } ``` </details> --- *This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*
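With only eight labeled training clauses, the train split is mainly useful as in-context or few-shot material rather than for fine-tuning; a sketch (assuming the default config):

```python
from datasets import load_dataset

data = load_dataset("mteb/ContractNLINoticeOnCompelledDisclosureLegalBenchClassification")
fewshot = list(zip(data["train"]["text"], data["train"]["label"]))  # 8 seed examples
print(len(fewshot), "train examples;", len(data["test"]), "test clauses")
```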
mteb/CUADRofrRofoRofnLegalBenchClassification
mteb
2025-05-06T11:56:05Z
0
0
[ "task_categories:text-classification", "annotations_creators:expert-annotated", "multilinguality:monolingual", "language:eng", "license:cc-by-4.0", "modality:text", "arxiv:2308.11462", "arxiv:2103.06268", "arxiv:2502.13595", "arxiv:2210.07316", "region:us", "mteb", "text" ]
[ "text-classification" ]
2025-05-06T11:56:01Z
null
--- annotations_creators: - expert-annotated language: - eng license: cc-by-4.0 multilinguality: monolingual task_categories: - text-classification task_ids: [] dataset_info: features: - name: text dtype: string - name: label dtype: int64 splits: - name: train num_bytes: 2384 num_examples: 6 - name: test num_bytes: 281177 num_examples: 690 download_size: 144348 dataset_size: 283561 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* tags: - mteb - text --- <!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md --> <div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;"> <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">CUADRofrRofoRofnLegalBenchClassification</h1> <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div> <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div> </div> This task was constructed from the CUAD dataset. It consists of determining if the clause grants one party a right of first refusal, right of first offer or right of first negotiation to purchase, license, market, or distribute equity interest, technology, assets, products or services. | | | |---------------|---------------------------------------------| | Task category | t2c | | Domains | Legal, Written | | Reference | https://huggingface.co/datasets/nguha/legalbench | ## How to evaluate on this task You can evaluate an embedding model on this dataset using the following code: ```python import mteb task = mteb.get_tasks(["CUADRofrRofoRofnLegalBenchClassification"]) evaluator = mteb.MTEB(task) model = mteb.get_model(YOUR_MODEL) evaluator.run(model) ``` <!-- Datasets want link to arxiv in readme to autolink dataset with paper --> To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb). ## Citation If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb). ```bibtex @misc{guha2023legalbench, archiveprefix = {arXiv}, author = {Neel Guha and Julian Nyarko and Daniel E. Ho and Christopher Ré and Adam Chilton and Aditya Narayana and Alex Chohlas-Wood and Austin Peters and Brandon Waldon and Daniel N. Rockmore and Diego Zambrano and Dmitry Talisman and Enam Hoque and Faiz Surani and Frank Fagan and Galit Sarfaty and Gregory M. Dickinson and Haggai Porat and Jason Hegland and Jessica Wu and Joe Nudell and Joel Niklaus and John Nay and Jonathan H.
Choi and Kevin Tobia and Margaret Hagan and Megan Ma and Michael Livermore and Nikon Rasumov-Rahe and Nils Holzenberger and Noam Kolt and Peter Henderson and Sean Rehaag and Sharad Goel and Shang Gao and Spencer Williams and Sunny Gandhi and Tom Zur and Varun Iyer and Zehua Li}, eprint = {2308.11462}, primaryclass = {cs.CL}, title = {LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models}, year = {2023}, } @article{hendrycks2021cuad, author = {Hendrycks, Dan and Burns, Collin and Chen, Anya and Ball, Spencer}, journal = {arXiv preprint arXiv:2103.06268}, title = {Cuad: An expert-annotated nlp dataset for legal contract review}, year = {2021}, } @article{enevoldsen2025mmtebmassivemultilingualtext, title={MMTEB: Massive Multilingual Text Embedding Benchmark}, author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff}, publisher = {arXiv}, journal={arXiv preprint arXiv:2502.13595}, year={2025}, url={https://arxiv.org/abs/2502.13595}, doi = {10.48550/arXiv.2502.13595}, } @article{muennighoff2022mteb, author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils}, title = {MTEB: Massive Text Embedding Benchmark}, publisher = {arXiv}, journal={arXiv preprint arXiv:2210.07316}, year = {2022}, url = {https://arxiv.org/abs/2210.07316}, doi = {10.48550/ARXIV.2210.07316}, } ``` # Dataset Statistics <details> <summary> Dataset Statistics</summary> The following code contains the descriptive statistics from the task.
These can also be obtained using: ```python import mteb task = mteb.get_task("CUADRofrRofoRofnLegalBenchClassification") desc_stats = task.metadata.descriptive_stats ``` ```json { "test": { "num_samples": 690, "number_of_characters": 272872, "number_texts_intersect_with_train": 0, "min_text_length": 69, "average_text_length": 395.46666666666664, "max_text_length": 4220, "unique_text": 690, "unique_labels": 2, "labels": { "1": { "count": 345 }, "0": { "count": 345 } } }, "train": { "num_samples": 6, "number_of_characters": 2312, "number_texts_intersect_with_train": null, "min_text_length": 202, "average_text_length": 385.3333333333333, "max_text_length": 665, "unique_text": 6, "unique_labels": 2, "labels": { "1": { "count": 3 }, "0": { "count": 3 } } } } ``` </details> --- *This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*
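The descriptive statistics above can be reproduced from the raw split; a sketch recomputing the average test-clause length (assuming the default config):

```python
from datasets import load_dataset

ds = load_dataset("mteb/CUADRofrRofoRofnLegalBenchClassification", split="test")
lengths = [len(t) for t in ds["text"]]
print(sum(lengths) / len(lengths))  # ~395.47, matching average_text_length above
```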
mteb/CUADNoSolicitOfEmployeesLegalBenchClassification
mteb
2025-05-06T11:55:13Z
0
0
[ "task_categories:text-classification", "annotations_creators:expert-annotated", "multilinguality:monolingual", "language:eng", "license:cc-by-4.0", "modality:text", "arxiv:2308.11462", "arxiv:2103.06268", "arxiv:2502.13595", "arxiv:2210.07316", "region:us", "mteb", "text" ]
[ "text-classification" ]
2025-05-06T11:55:09Z
null
--- annotations_creators: - expert-annotated language: - eng license: cc-by-4.0 multilinguality: monolingual task_categories: - text-classification task_ids: [] dataset_info: features: - name: text dtype: string - name: label dtype: int64 splits: - name: train num_bytes: 3111 num_examples: 6 - name: test num_bytes: 61052 num_examples: 142 download_size: 34785 dataset_size: 64163 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* tags: - mteb - text --- <!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md --> <div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;"> <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">CUADNoSolicitOfEmployeesLegalBenchClassification</h1> <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div> <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div> </div> This task was constructed from the CUAD dataset. It consists of determining if the clause restricts a party from soliciting or hiring employees and/or contractors of the counterparty, whether during the contract or after the contract ends (or both). | | | |---------------|---------------------------------------------| | Task category | t2c | | Domains | Legal, Written | | Reference | https://huggingface.co/datasets/nguha/legalbench | ## How to evaluate on this task You can evaluate an embedding model on this dataset using the following code: ```python import mteb task = mteb.get_tasks(["CUADNoSolicitOfEmployeesLegalBenchClassification"]) evaluator = mteb.MTEB(task) model = mteb.get_model(YOUR_MODEL) evaluator.run(model) ``` <!-- Datasets want link to arxiv in readme to autolink dataset with paper --> To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb). ## Citation If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb). ```bibtex @misc{guha2023legalbench, archiveprefix = {arXiv}, author = {Neel Guha and Julian Nyarko and Daniel E. Ho and Christopher Ré and Adam Chilton and Aditya Narayana and Alex Chohlas-Wood and Austin Peters and Brandon Waldon and Daniel N. Rockmore and Diego Zambrano and Dmitry Talisman and Enam Hoque and Faiz Surani and Frank Fagan and Galit Sarfaty and Gregory M. Dickinson and Haggai Porat and Jason Hegland and Jessica Wu and Joe Nudell and Joel Niklaus and John Nay and Jonathan H.
Choi and Kevin Tobia and Margaret Hagan and Megan Ma and Michael Livermore and Nikon Rasumov-Rahe and Nils Holzenberger and Noam Kolt and Peter Henderson and Sean Rehaag and Sharad Goel and Shang Gao and Spencer Williams and Sunny Gandhi and Tom Zur and Varun Iyer and Zehua Li}, eprint = {2308.11462}, primaryclass = {cs.CL}, title = {LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models}, year = {2023}, } @article{hendrycks2021cuad, author = {Hendrycks, Dan and Burns, Collin and Chen, Anya and Ball, Spencer}, journal = {arXiv preprint arXiv:2103.06268}, title = {Cuad: An expert-annotated nlp dataset for legal contract review}, year = {2021}, } @article{enevoldsen2025mmtebmassivemultilingualtext, title={MMTEB: Massive Multilingual Text Embedding Benchmark}, author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff}, publisher = {arXiv}, journal={arXiv preprint arXiv:2502.13595}, year={2025}, url={https://arxiv.org/abs/2502.13595}, doi = {10.48550/arXiv.2502.13595}, } @article{muennighoff2022mteb, author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils}, title = {MTEB: Massive Text Embedding Benchmark}, publisher = {arXiv}, journal={arXiv preprint arXiv:2210.07316}, year = {2022}, url = {https://arxiv.org/abs/2210.07316}, doi = {10.48550/ARXIV.2210.07316}, } ``` # Dataset Statistics <details> <summary> Dataset Statistics</summary> The following code contains the descriptive statistics from the task.
These can also be obtained using: ```python import mteb task = mteb.get_task("CUADNoSolicitOfEmployeesLegalBenchClassification") desc_stats = task.metadata.descriptive_stats ``` ```json { "test": { "num_samples": 142, "number_of_characters": 59348, "number_texts_intersect_with_train": 0, "min_text_length": 68, "average_text_length": 417.943661971831, "max_text_length": 1881, "unique_text": 142, "unique_labels": 2, "labels": { "1": { "count": 71 }, "0": { "count": 71 } } }, "train": { "num_samples": 6, "number_of_characters": 3039, "number_texts_intersect_with_train": null, "min_text_length": 109, "average_text_length": 506.5, "max_text_length": 974, "unique_text": 6, "unique_labels": 2, "labels": { "1": { "count": 3 }, "0": { "count": 3 } } } } ``` </details> --- *This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*
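A sketch of pulling only the positive clauses (assumption: label 1 marks clauses that do contain the no-solicit-of-employees restriction; the card does not spell out the encoding):

```python
from datasets import load_dataset

ds = load_dataset("mteb/CUADNoSolicitOfEmployeesLegalBenchClassification", split="test")
positives = ds.filter(lambda row: row["label"] == 1)
print(len(positives))             # 71 per the statistics above
print(positives[0]["text"][:150])
```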
mteb/CUADNoSolicitOfCustomersLegalBenchClassification
mteb
2025-05-06T11:55:07Z
0
0
[ "task_categories:text-classification", "annotations_creators:expert-annotated", "multilinguality:monolingual", "language:eng", "license:cc-by-4.0", "modality:text", "arxiv:2308.11462", "arxiv:2103.06268", "arxiv:2502.13595", "arxiv:2210.07316", "region:us", "mteb", "text" ]
[ "text-classification" ]
2025-05-06T11:55:03Z
null
--- annotations_creators: - expert-annotated language: - eng license: cc-by-4.0 multilinguality: monolingual task_categories: - text-classification task_ids: [] dataset_info: features: - name: text dtype: string - name: label dtype: int64 splits: - name: train num_bytes: 2846 num_examples: 6 - name: test num_bytes: 34011 num_examples: 84 download_size: 24659 dataset_size: 36857 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* tags: - mteb - text --- <!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md --> <div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;"> <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">CUADNoSolicitOfCustomersLegalBenchClassification</h1> <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div> <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div> </div> This task was constructed from the CUAD dataset. It consists of determining if the clause restricts a party from contracting or soliciting customers or partners of the counterparty, whether during the contract or after the contract ends (or both). | | | |---------------|---------------------------------------------| | Task category | t2c | | Domains | Legal, Written | | Reference | https://huggingface.co/datasets/nguha/legalbench | ## How to evaluate on this task You can evaluate an embedding model on this dataset using the following code: ```python import mteb task = mteb.get_tasks(["CUADNoSolicitOfCustomersLegalBenchClassification"]) evaluator = mteb.MTEB(task) model = mteb.get_model(YOUR_MODEL) evaluator.run(model) ``` <!-- Datasets want link to arxiv in readme to autolink dataset with paper --> To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb). ## Citation If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb). ```bibtex @misc{guha2023legalbench, archiveprefix = {arXiv}, author = {Neel Guha and Julian Nyarko and Daniel E. Ho and Christopher Ré and Adam Chilton and Aditya Narayana and Alex Chohlas-Wood and Austin Peters and Brandon Waldon and Daniel N. Rockmore and Diego Zambrano and Dmitry Talisman and Enam Hoque and Faiz Surani and Frank Fagan and Galit Sarfaty and Gregory M. Dickinson and Haggai Porat and Jason Hegland and Jessica Wu and Joe Nudell and Joel Niklaus and John Nay and Jonathan H.
Choi and Kevin Tobia and Margaret Hagan and Megan Ma and Michael Livermore and Nikon Rasumov-Rahe and Nils Holzenberger and Noam Kolt and Peter Henderson and Sean Rehaag and Sharad Goel and Shang Gao and Spencer Williams and Sunny Gandhi and Tom Zur and Varun Iyer and Zehua Li}, eprint = {2308.11462}, primaryclass = {cs.CL}, title = {LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models}, year = {2023}, } @article{hendrycks2021cuad, author = {Hendrycks, Dan and Burns, Collin and Chen, Anya and Ball, Spencer}, journal = {arXiv preprint arXiv:2103.06268}, title = {Cuad: An expert-annotated nlp dataset for legal contract review}, year = {2021}, } @article{enevoldsen2025mmtebmassivemultilingualtext, title={MMTEB: Massive Multilingual Text Embedding Benchmark}, author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff}, publisher = {arXiv}, journal={arXiv preprint arXiv:2502.13595}, year={2025}, url={https://arxiv.org/abs/2502.13595}, doi = {10.48550/arXiv.2502.13595}, } @article{muennighoff2022mteb, author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils}, title = {MTEB: Massive Text Embedding Benchmark}, publisher = {arXiv}, journal={arXiv preprint arXiv:2210.07316}, year = {2022}, url = {https://arxiv.org/abs/2210.07316}, doi = {10.48550/ARXIV.2210.07316}, } ``` # Dataset Statistics <details> <summary> Dataset Statistics</summary> The following code contains the descriptive statistics from the task.
These can also be obtained using: ```python import mteb task = mteb.get_task("CUADNoSolicitOfCustomersLegalBenchClassification") desc_stats = task.metadata.descriptive_stats ``` ```json { "test": { "num_samples": 84, "number_of_characters": 33003, "number_texts_intersect_with_train": 0, "min_text_length": 84, "average_text_length": 392.89285714285717, "max_text_length": 1314, "unique_text": 84, "unique_labels": 2, "labels": { "1": { "count": 42 }, "0": { "count": 42 } } }, "train": { "num_samples": 6, "number_of_characters": 2774, "number_texts_intersect_with_train": null, "min_text_length": 128, "average_text_length": 462.3333333333333, "max_text_length": 829, "unique_text": 6, "unique_labels": 2, "labels": { "1": { "count": 3 }, "0": { "count": 3 } } } } ``` </details> --- *This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*
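As a quick sanity check beyond the `mteb` harness, the splits can also be loaded directly. A minimal sketch, assuming only the standard `datasets` API and the split and feature names from the YAML header above:

```python
from datasets import load_dataset

# Load the default config; split and feature names follow the YAML header above.
ds = load_dataset("mteb/CUADNoSolicitOfCustomersLegalBenchClassification")

# Each example is a contract clause (`text`) with a binary `label`.
print(ds["train"].num_rows, ds["test"].num_rows)  # 6 and 84, per the card
print(ds["test"][0]["label"], ds["test"][0]["text"][:120])
```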
mteb/CUADAffiliateLicenseLicenseeLegalBenchClassification
mteb
2025-05-06T11:53:00Z
0
0
[ "task_categories:text-classification", "annotations_creators:expert-annotated", "multilinguality:monolingual", "language:eng", "license:cc-by-4.0", "modality:text", "arxiv:2308.11462", "arxiv:2103.06268", "arxiv:2502.13595", "arxiv:2210.07316", "region:us", "mteb", "text" ]
[ "text-classification" ]
2025-05-06T11:52:56Z
null
--- annotations_creators: - expert-annotated language: - eng license: cc-by-4.0 multilinguality: monolingual task_categories: - text-classification task_ids: [] dataset_info: features: - name: text dtype: string - name: label dtype: int64 splits: - name: train num_bytes: 3551 num_examples: 6 - name: test num_bytes: 98231 num_examples: 198 download_size: 53399 dataset_size: 101782 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* tags: - mteb - text --- <!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md --> <div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;"> <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">CUADAffiliateLicenseLicenseeLegalBenchClassification</h1> <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div> <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div> </div> This task was constructed from the CUAD dataset. It consists of determining if a clause describes a license grant to a licensee (incl. sublicensor) and the affiliates of such licensee/sublicensor. | | | |---------------|---------------------------------------------| | Task category | t2c | | Domains | Legal, Written | | Reference | https://huggingface.co/datasets/nguha/legalbench | ## How to evaluate on this task You can evaluate an embedding model on this dataset using the following code: ```python import mteb task = mteb.get_tasks(["CUADAffiliateLicenseLicenseeLegalBenchClassification"]) evaluator = mteb.MTEB(task) model = mteb.get_model(YOUR_MODEL) evaluator.run(model) ``` <!-- Datasets want link to arxiv in readme to autolink dataset with paper --> To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb). ## Citation If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb). ```bibtex @misc{guha2023legalbench, archiveprefix = {arXiv}, author = {Neel Guha and Julian Nyarko and Daniel E. Ho and Christopher Ré and Adam Chilton and Aditya Narayana and Alex Chohlas-Wood and Austin Peters and Brandon Waldon and Daniel N. Rockmore and Diego Zambrano and Dmitry Talisman and Enam Hoque and Faiz Surani and Frank Fagan and Galit Sarfaty and Gregory M. Dickinson and Haggai Porat and Jason Hegland and Jessica Wu and Joe Nudell and Joel Niklaus and John Nay and Jonathan H.
Choi and Kevin Tobia and Margaret Hagan and Megan Ma and Michael Livermore and Nikon Rasumov-Rahe and Nils Holzenberger and Noam Kolt and Peter Henderson and Sean Rehaag and Sharad Goel and Shang Gao and Spencer Williams and Sunny Gandhi and Tom Zur and Varun Iyer and Zehua Li}, eprint = {2308.11462}, primaryclass = {cs.CL}, title = {LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models}, year = {2023}, } @article{hendrycks2021cuad, author = {Hendrycks, Dan and Burns, Collin and Chen, Anya and Ball, Spencer}, journal = {arXiv preprint arXiv:2103.06268}, title = {CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review}, year = {2021}, } @article{enevoldsen2025mmtebmassivemultilingualtext, title={MMTEB: Massive Multilingual Text Embedding Benchmark}, author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff}, publisher = {arXiv}, journal={arXiv preprint arXiv:2502.13595}, year={2025}, url={https://arxiv.org/abs/2502.13595}, doi = {10.48550/arXiv.2502.13595}, } @article{muennighoff2022mteb, author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils}, title = {MTEB: Massive Text Embedding Benchmark}, publisher = {arXiv}, journal={arXiv preprint arXiv:2210.07316}, year = {2022}, url = {https://arxiv.org/abs/2210.07316}, doi = {10.48550/ARXIV.2210.07316}, } ``` # Dataset Statistics <details> <summary> Dataset Statistics</summary> The following JSON contains the descriptive statistics of the task.
These can also be obtained using: ```python import mteb task = mteb.get_task("CUADAffiliateLicenseLicenseeLegalBenchClassification") desc_stats = task.metadata.descriptive_stats ``` ```json { "test": { "num_samples": 198, "number_of_characters": 95853, "number_texts_intersect_with_train": 0, "min_text_length": 62, "average_text_length": 484.1060606060606, "max_text_length": 3074, "unique_text": 198, "unique_labels": 2, "labels": { "1": { "count": 99 }, "0": { "count": 99 } } }, "train": { "num_samples": 6, "number_of_characters": 3479, "number_texts_intersect_with_train": null, "min_text_length": 81, "average_text_length": 579.8333333333334, "max_text_length": 1638, "unique_text": 6, "unique_labels": 2, "labels": { "1": { "count": 3 }, "0": { "count": 3 } } } } ``` </details> --- *This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*
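MTEB typically scores classification tasks by fitting a lightweight classifier on frozen embeddings of the train examples. The sketch below illustrates that idea outside the harness; it is a rough approximation, assuming `sentence-transformers` and scikit-learn, with an illustrative (not prescribed) embedding model:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

ds = load_dataset("mteb/CUADAffiliateLicenseLicenseeLegalBenchClassification")
model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative choice

# Embed the 6 train clauses and the 198 test clauses.
X_train = model.encode(ds["train"]["text"])
X_test = model.encode(ds["test"]["text"])

# Fit a linear probe on the frozen embeddings and report test accuracy.
clf = LogisticRegression(max_iter=1000).fit(X_train, ds["train"]["label"])
print("accuracy:", clf.score(X_test, ds["test"]["label"]))
```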
mteb/Assin2STS
mteb
2025-05-06T11:49:59Z
0
0
[ "task_categories:sentence-similarity", "task_ids:semantic-similarity-scoring", "task_ids:fact-checking", "task_ids:fact-checking-retrieval", "annotations_creators:human-annotated", "multilinguality:monolingual", "language:por", "license:unknown", "modality:text", "arxiv:2502.13595", "arxiv:2210.07316", "region:us", "mteb", "text" ]
[ "sentence-similarity" ]
2025-05-06T11:49:53Z
null
--- annotations_creators: - human-annotated language: - por license: unknown multilinguality: monolingual task_categories: - sentence-similarity task_ids: - semantic-similarity-scoring - fact-checking - fact-checking-retrieval dataset_info: features: - name: sentence1 dtype: string - name: sentence2 dtype: string - name: score dtype: float64 splits: - name: train num_bytes: 785995 num_examples: 6500 - name: test num_bytes: 309890 num_examples: 2448 - name: validation num_bytes: 60824 num_examples: 500 download_size: 504138 dataset_size: 1156709 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* - split: validation path: data/validation-* tags: - mteb - text --- <!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md --> <div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;"> <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">Assin2STS</h1> <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div> <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div> </div> The Semantic Textual Similarity part of ASSIN 2, an evaluation shared task co-located with STIL 2019. | | | |---------------|---------------------------------------------| | Task category | t2t | | Domains | Written | | Reference | https://link.springer.com/chapter/10.1007/978-3-030-41505-1_39 | ## How to evaluate on this task You can evaluate an embedding model on this dataset using the following code: ```python import mteb task = mteb.get_tasks(["Assin2STS"]) evaluator = mteb.MTEB(task) model = mteb.get_model(YOUR_MODEL) evaluator.run(model) ``` <!-- Datasets want link to arxiv in readme to autolink dataset with paper --> To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb). ## Citation If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex @inproceedings{real2020assin, author = {Real, Livy and Fonseca, Erick and Oliveira, Hugo Goncalo}, booktitle = {International Conference on Computational Processing of the Portuguese Language}, organization = {Springer}, pages = {406--412}, title = {The ASSIN 2 Shared Task: A Quick Overview}, year = {2020}, } @article{enevoldsen2025mmtebmassivemultilingualtext, title={MMTEB: Massive Multilingual Text Embedding Benchmark}, author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff}, publisher = {arXiv}, journal={arXiv preprint arXiv:2502.13595}, year={2025}, url={https://arxiv.org/abs/2502.13595}, doi = {10.48550/arXiv.2502.13595}, } @article{muennighoff2022mteb, author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils}, title = {MTEB: Massive Text Embedding Benchmark}, publisher = {arXiv}, journal={arXiv preprint arXiv:2210.07316}, year = {2022}, url = {https://arxiv.org/abs/2210.07316}, doi = {10.48550/ARXIV.2210.07316}, } ``` # Dataset Statistics <details> <summary> Dataset Statistics</summary> The following JSON contains the descriptive statistics of the task. These can also be obtained using: ```python import mteb task = mteb.get_task("Assin2STS") desc_stats = task.metadata.descriptive_stats ``` ```json { "test": { "num_samples": 2448, "number_of_characters": 262185, "unique_pairs": 2436, "min_sentence1_length": 19, "average_sentence1_len": 55.15318627450981, "max_sentence1_length": 159, "unique_sentence1": 2064, "min_sentence2_length": 18, "average_sentence2_len": 51.9485294117647, "max_sentence2_length": 158, "unique_sentence2": 2075, "min_score": 1.0, "avg_score": 3.565230803113747, "max_score": 5.0 } } ``` </details> --- *This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*
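For STS tasks, MTEB's headline score is typically the Spearman correlation between embedding cosine similarities and the gold scores. A minimal sketch of that computation, assuming `sentence-transformers` and SciPy, with an illustrative model name:

```python
from datasets import load_dataset
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer, util

ds = load_dataset("mteb/Assin2STS", split="test")
model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative choice

emb1 = model.encode(ds["sentence1"], convert_to_tensor=True)
emb2 = model.encode(ds["sentence2"], convert_to_tensor=True)

# Per-pair cosine similarity, compared against the 1-5 gold scores.
cos = util.cos_sim(emb1, emb2).diagonal().cpu().numpy()
rho, _ = spearmanr(cos, ds["score"])
print("spearman:", rho)
```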
gunnybd01/Real_Estate_News_smr
gunnybd01
2025-05-06T11:46:46Z
210
0
[ "region:us" ]
[]
2025-05-05T14:42:07Z
null
--- dataset_info: features: - name: Date dtype: string - name: Symbol dtype: string - name: Article dtype: string - name: Summary dtype: string splits: - name: train num_bytes: 165862467 num_examples: 29040 download_size: 74761920 dataset_size: 165862467 configs: - config_name: default data_files: - split: train path: data/train-* ---
reasoning-proj/filtered_math_traces_original_DeepSeek-R1-Distill-Llama-8B
reasoning-proj
2025-05-06T11:43:13Z
0
0
[ "region:us" ]
[]
2025-05-06T11:43:09Z
null
--- dataset_info: features: - name: question dtype: string - name: answer_content dtype: string - name: reference_answer dtype: string - name: id dtype: string - name: metadata struct: - name: question_license dtype: string - name: question_source dtype: string - name: model_name dtype: string - name: verifier_score dtype: int64 splits: - name: train num_bytes: 14275644 num_examples: 600 download_size: 3642333 dataset_size: 14275644 configs: - config_name: default data_files: - split: train path: data/train-* ---
SayantanJoker/processed_seamless_align_hindi_new_chunk_100
SayantanJoker
2025-05-06T11:39:22Z
0
0
[ "region:us" ]
[]
2025-05-06T11:38:01Z
null
--- dataset_info: features: - name: audio dtype: audio - name: transcription dtype: string - name: file_name dtype: string splits: - name: train num_bytes: 2531272564.0 num_examples: 10000 download_size: 2400877104 dataset_size: 2531272564.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
mteb/DBpediaClassification
mteb
2025-05-06T11:31:43Z
0
0
[ "task_categories:text-classification", "task_ids:topic-classification", "annotations_creators:derived", "multilinguality:monolingual", "language:eng", "license:cc-by-sa-3.0", "modality:text", "arxiv:1509.01626", "arxiv:2502.13595", "arxiv:2210.07316", "region:us", "mteb", "text" ]
[ "text-classification" ]
2025-05-06T11:31:39Z
null
--- annotations_creators: - derived language: - eng license: cc-by-sa-3.0 multilinguality: monolingual task_categories: - text-classification task_ids: - topic-classification dataset_info: features: - name: text dtype: string - name: label dtype: int64 splits: - name: train num_bytes: 607175 num_examples: 2048 - name: test num_bytes: 597695 num_examples: 2048 download_size: 786345 dataset_size: 1204870 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* tags: - mteb - text --- <!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md --> <div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;"> <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">DBpediaClassification</h1> <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div> <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div> </div> DBpedia14 is a dataset of English texts from Wikipedia articles, categorized into 14 non-overlapping classes based on their DBpedia ontology. | | | |---------------|---------------------------------------------| | Task category | t2c | | Domains | Encyclopaedic, Written | | Reference | https://arxiv.org/abs/1509.01626 | ## How to evaluate on this task You can evaluate an embedding model on this dataset using the following code: ```python import mteb task = mteb.get_tasks(["DBpediaClassification"]) evaluator = mteb.MTEB(task) model = mteb.get_model(YOUR_MODEL) evaluator.run(model) ``` <!-- Datasets want link to arxiv in readme to autolink dataset with paper --> To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb). ## Citation If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb). ```bibtex @inproceedings{NIPS2015_250cf8b5, author = {Zhang, Xiang and Zhao, Junbo and LeCun, Yann}, booktitle = {Advances in Neural Information Processing Systems}, editor = {C. Cortes and N. Lawrence and D. Lee and M. Sugiyama and R.
Garnett}, publisher = {Curran Associates, Inc.}, title = {Character-level Convolutional Networks for Text Classification}, url = {https://proceedings.neurips.cc/paper_files/paper/2015/file/250cf8b51c773f3f8dc8b4be867a9a02-Paper.pdf}, volume = {28}, year = {2015}, } @article{enevoldsen2025mmtebmassivemultilingualtext, title={MMTEB: Massive Multilingual Text Embedding Benchmark}, author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff}, publisher = {arXiv}, journal={arXiv preprint arXiv:2502.13595}, year={2025}, url={https://arxiv.org/abs/2502.13595}, doi = {10.48550/arXiv.2502.13595}, } @article{muennighoff2022mteb, author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils}, title = {MTEB: Massive Text Embedding Benchmark}, publisher = {arXiv}, journal={arXiv preprint arXiv:2210.07316}, year = {2022}, url = {https://arxiv.org/abs/2210.07316}, doi = {10.48550/ARXIV.2210.07316}, } ``` # Dataset Statistics <details> <summary> Dataset Statistics</summary> The following JSON contains the descriptive statistics of the task.
These can also be obtained using: ```python import mteb task = mteb.get_task("DBpediaClassification") desc_stats = task.metadata.descriptive_stats ``` ```json { "test": { "num_samples": 2048, "number_of_characters": 568368, "number_texts_intersect_with_train": 0, "min_text_length": 37, "average_text_length": 277.5234375, "max_text_length": 1045, "unique_text": 2048, "unique_labels": 14, "labels": { "7": { "count": 147 }, "0": { "count": 146 }, "10": { "count": 146 }, "3": { "count": 146 }, "13": { "count": 147 }, "2": { "count": 146 }, "12": { "count": 147 }, "1": { "count": 146 }, "6": { "count": 146 }, "11": { "count": 146 }, "8": { "count": 146 }, "5": { "count": 147 }, "4": { "count": 146 }, "9": { "count": 146 } } }, "train": { "num_samples": 2048, "number_of_characters": 578420, "number_texts_intersect_with_train": null, "min_text_length": 22, "average_text_length": 282.431640625, "max_text_length": 777, "unique_text": 2048, "unique_labels": 14, "labels": { "12": { "count": 147 }, "10": { "count": 146 }, "2": { "count": 146 }, "5": { "count": 147 }, "13": { "count": 147 }, "9": { "count": 146 }, "6": { "count": 146 }, "4": { "count": 146 }, "3": { "count": 146 }, "1": { "count": 146 }, "0": { "count": 146 }, "8": { "count": 146 }, "11": { "count": 146 }, "7": { "count": 147 } } } } ``` </details> --- *This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*
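The balanced 14-way label distribution reported above can be verified directly from the data. A quick check, assuming only the standard `datasets` API (the class indices follow the DBpedia ontology categories of Zhang et al., 2015):

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("mteb/DBpediaClassification")

# Both splits are sampled near-uniformly over the 14 ontology classes.
print(sorted(Counter(ds["test"]["label"]).items()))   # ~146-147 per class
print(sorted(Counter(ds["train"]["label"]).items()))
```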
SayantanJoker/processed_seamless_align_hindi_new_chunk_90
SayantanJoker
2025-05-06T11:25:24Z
0
0
[ "region:us" ]
[]
2025-05-06T11:23:58Z
null
--- dataset_info: features: - name: audio dtype: audio - name: transcription dtype: string - name: file_name dtype: string splits: - name: train num_bytes: 2601821036.0 num_examples: 10000 download_size: 2461691227 dataset_size: 2601821036.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
mteb/CodeSearchNetCCRetrieval
mteb
2025-05-06T11:11:20Z
0
0
[ "region:us" ]
[]
2025-05-06T11:10:21Z
null
--- dataset_info: - config_name: go-corpus features: - name: _id dtype: string - name: text dtype: string - name: title dtype: string splits: - name: test num_bytes: 34809839 num_examples: 182735 download_size: 16154525 dataset_size: 34809839 - config_name: go-qrels features: - name: query-id dtype: string - name: corpus-id dtype: string - name: score dtype: int64 splits: - name: test num_bytes: 243660 num_examples: 8122 download_size: 92695 dataset_size: 243660 - config_name: go-queries features: - name: _id dtype: string - name: text dtype: string splits: - name: test num_bytes: 2020824 num_examples: 8122 download_size: 924417 dataset_size: 2020824 - config_name: java-corpus features: - name: _id dtype: string - name: text dtype: string - name: title dtype: string splits: - name: test num_bytes: 49027018 num_examples: 181061 download_size: 19858046 dataset_size: 49027018 - config_name: java-qrels features: - name: query-id dtype: string - name: corpus-id dtype: string - name: score dtype: int64 splits: - name: test num_bytes: 328650 num_examples: 10955 download_size: 124190 dataset_size: 328650 - config_name: java-queries features: - name: _id dtype: string - name: text dtype: string splits: - name: test num_bytes: 3916921 num_examples: 10955 download_size: 1601964 dataset_size: 3916921 - config_name: javascript-corpus features: - name: _id dtype: string - name: text dtype: string - name: title dtype: string splits: - name: test num_bytes: 18616585 num_examples: 65201 download_size: 8499898 dataset_size: 18616585 - config_name: javascript-qrels features: - name: query-id dtype: string - name: corpus-id dtype: string - name: score dtype: int64 splits: - name: test num_bytes: 92148 num_examples: 3291 download_size: 38534 dataset_size: 92148 - config_name: javascript-queries features: - name: _id dtype: string - name: text dtype: string splits: - name: test num_bytes: 1506447 num_examples: 3291 download_size: 642774 dataset_size: 1506447 - config_name: php-corpus features: - name: _id dtype: string - name: text dtype: string - name: title dtype: string splits: - name: test num_bytes: 70164589 num_examples: 268237 download_size: 28221123 dataset_size: 70164589 - config_name: php-queries features: - name: _id dtype: string - name: text dtype: string splits: - name: test num_bytes: 4928215 num_examples: 14014 download_size: 1899269 dataset_size: 4928215 - config_name: python-corpus features: - name: _id dtype: string - name: text dtype: string - name: title dtype: string splits: - name: test num_bytes: 108454853 num_examples: 280652 download_size: 45190661 dataset_size: 108454853 - config_name: python-qrels features: - name: query-id dtype: string - name: corpus-id dtype: string - name: score dtype: int64 splits: - name: test num_bytes: 447540 num_examples: 14918 download_size: 168729 dataset_size: 447540 - config_name: python-queries features: - name: _id dtype: string - name: text dtype: string splits: - name: test num_bytes: 8455942 num_examples: 14918 download_size: 3565411 dataset_size: 8455942 - config_name: ruby-corpus features: - name: _id dtype: string - name: text dtype: string - name: title dtype: string splits: - name: test num_bytes: 5489759 num_examples: 27588 download_size: 2573853 dataset_size: 5489759 - config_name: ruby-qrels features: - name: query-id dtype: string - name: corpus-id dtype: string - name: score dtype: int64 splits: - name: test num_bytes: 35308 num_examples: 1261 download_size: 15482 dataset_size: 35308 - config_name: ruby-queries features: - name: _id dtype: 
string - name: text dtype: string splits: - name: test num_bytes: 354183 num_examples: 1261 download_size: 164306 dataset_size: 354183 configs: - config_name: go-corpus data_files: - split: test path: go-corpus/test-* - config_name: go-qrels data_files: - split: test path: go-qrels/test-* - config_name: go-queries data_files: - split: test path: go-queries/test-* - config_name: java-corpus data_files: - split: test path: java-corpus/test-* - config_name: java-qrels data_files: - split: test path: java-qrels/test-* - config_name: java-queries data_files: - split: test path: java-queries/test-* - config_name: javascript-corpus data_files: - split: test path: javascript-corpus/test-* - config_name: javascript-qrels data_files: - split: test path: javascript-qrels/test-* - config_name: javascript-queries data_files: - split: test path: javascript-queries/test-* - config_name: php-corpus data_files: - split: test path: php-corpus/test-* - config_name: php-queries data_files: - split: test path: php-queries/test-* - config_name: python-corpus data_files: - split: test path: python-corpus/test-* - config_name: python-qrels data_files: - split: test path: python-qrels/test-* - config_name: python-queries data_files: - split: test path: python-queries/test-* - config_name: ruby-corpus data_files: - split: test path: ruby-corpus/test-* - config_name: ruby-qrels data_files: - split: test path: ruby-qrels/test-* - config_name: ruby-queries data_files: - split: test path: ruby-queries/test-* ---
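Each language in this retrieval dataset pairs a `*-corpus`, `*-queries`, and `*-qrels` config in the usual corpus/queries/qrels layout. A minimal sketch for assembling one language, assuming only the standard `datasets` API and the config and field names listed above:

```python
from datasets import load_dataset

lang = "python"  # go, java, javascript, python, or ruby (no php-qrels config is listed above)
corpus = load_dataset("mteb/CodeSearchNetCCRetrieval", f"{lang}-corpus", split="test")
queries = load_dataset("mteb/CodeSearchNetCCRetrieval", f"{lang}-queries", split="test")
qrels = load_dataset("mteb/CodeSearchNetCCRetrieval", f"{lang}-qrels", split="test")

# qrels links each query `_id` to its relevant corpus document `_id`.
docs = {doc["_id"]: doc["text"] for doc in corpus}
first = qrels[0]
print(first["query-id"], "->", first["corpus-id"], "score:", first["score"])
print(docs[first["corpus-id"]][:120])
```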
SayantanJoker/processed_seamless_align_hindi_new_chunk_80
SayantanJoker
2025-05-06T11:11:11Z
0
0
[ "region:us" ]
[]
2025-05-06T11:09:48Z
null
--- dataset_info: features: - name: audio dtype: audio - name: transcription dtype: string - name: file_name dtype: string splits: - name: train num_bytes: 2609116403.0 num_examples: 10000 download_size: 2480586121 dataset_size: 2609116403.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
orinnebula/request
orinnebula
2025-05-06T11:04:12Z
3
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-06T04:03:20Z
null
--- dataset_info: features: - name: id dtype: string - name: model dtype: string - name: revision dtype: string - name: precision dtype: string - name: weight_type dtype: string - name: submitted_time dtype: string - name: model_type dtype: string - name: params dtype: float64 - name: license dtype: string - name: private dtype: bool splits: - name: train num_bytes: 638 num_examples: 4 download_size: 4486 dataset_size: 638 configs: - config_name: default data_files: - split: train path: data/train-* ---
SayantanJoker/processed_seamless_align_hindi_new_chunk_73
SayantanJoker
2025-05-06T11:01:19Z
0
0
[ "region:us" ]
[]
2025-05-06T10:59:49Z
null
--- dataset_info: features: - name: audio dtype: audio - name: transcription dtype: string - name: file_name dtype: string splits: - name: train num_bytes: 2622999296.0 num_examples: 10000 download_size: 2502920456 dataset_size: 2622999296.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
vetter0002/Llama-3.2-1B-Instruct_gsm8k_s5
vetter0002
2025-05-06T11:00:52Z
0
0
[ "region:us" ]
[]
2025-05-06T10:43:42Z
null
--- dataset_info: config_name: eval_Llama-3.2-1B-Instruct_ft_dgsm8k_batch20_nseq5 features: - name: Task ID dtype: int64 - name: Question dtype: string - name: Responses dtype: string - name: Extracted Answer dtype: string - name: Extracted Answers dtype: string - name: Ground Truth dtype: string splits: - name: train num_bytes: 7480491 num_examples: 1319 download_size: 2105256 dataset_size: 7480491 configs: - config_name: eval_Llama-3.2-1B-Instruct_ft_dgsm8k_batch20_nseq5 data_files: - split: train path: eval_Llama-3.2-1B-Instruct_ft_dgsm8k_batch20_nseq5/train-* ---
severo/trending-repos
severo
2025-05-06T11:00:39Z
741
12
[ "license:apache-2.0", "size_categories:10K<n<100K", "format:csv", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "croissant" ]
[]
2023-07-28T13:57:34Z
null
--- license: apache-2.0 pretty_name: Trending repositories on Hugging Face size_categories: - n<1K configs: - config_name: models data_files: "models.csv" - config_name: datasets data_files: "datasets.csv" - config_name: spaces data_files: "spaces.csv" tags: - croissant --- # Dataset Card for Trending repositories on Hugging Face ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** Sylvain Lesage ### Dataset Summary This dataset contains the daily top 20 trending repositories of each type (models, datasets, and spaces) on Hugging Face. Each type can be loaded from its own dataset config. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Not relevant. ## Dataset Structure ### Data Instances The dataset contains three configurations: **models**: the history of trending models on Hugging Face **datasets**: the history of trending datasets on Hugging Face **spaces**: the history of trending spaces on Hugging Face ### Data Fields - date (string): the date on which the trending repositories were looked up - author (string): id of the repository owner. It can be null. - id (string): id of the repository - rank (int64): rank in the trending repositories of its kind (model, dataset, or space). Starts at 1. - recent_likes (int64): number of likes received in the last week - likes (int64): total number of likes - month_downloads (int64): number of downloads in the last month. Null for spaces. ### Data Splits Each configuration has a single split, `train`, which contains all the rows. ## Dataset Creation ### Curation Rationale The dataset is updated daily through a cron job that calls the `https://huggingface.co/api/trending?type=${repoType}&limit=20` endpoint for each repository type (model, dataset, space); a rough Python equivalent is sketched after this card. The script runs in an [Observable](https://observablehq.com/@huggingface) notebook, and the files are uploaded using the [huggingface.js](https://github.com/huggingface/huggingface.js) library. ### Source Data #### Initial Data Collection and Normalization Not relevant. #### Who are the source language producers? Not relevant. ### Annotations #### Annotation process Not relevant. #### Who are the annotators? Not relevant. ### Personal and Sensitive Information Only public repositories are included in the trending repositories. ## Considerations for Using the Data ### Social Impact of Dataset Not relevant. ### Discussion of Biases The trending repositories reflect the likes given by Hugging Face users in the last week. Any bias that applies to the users can be reflected in this dataset. As a vanity metric, some users might also be tempted to generate fake likes. ### Other Known Limitations Not relevant. ## Additional Information ### Dataset Curators Sylvain Lesage, Hugging Face ### Licensing Information Apache License 2.0 ### Citation Information Not relevant. ### Contributions Not relevant.
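A rough Python equivalent of one daily collection step, assuming only the endpoint quoted above; the response schema is not documented in the card, so parsing is left to inspection:

```python
import requests

# Mirror the cron job: one lookup per repository type, 20 entries each.
for repo_type in ("model", "dataset", "space"):
    resp = requests.get(
        "https://huggingface.co/api/trending",
        params={"type": repo_type, "limit": 20},
        timeout=30,
    )
    resp.raise_for_status()
    payload = resp.json()  # schema not documented in the card; inspect before parsing
    print(repo_type, str(payload)[:120])
```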
kanghokh/klue_mrc_case2
kanghokh
2025-05-06T10:46:42Z
0
0
[ "region:us" ]
[]
2025-05-06T10:46:37Z
null
--- dataset_info: features: - name: title dtype: string - name: category dtype: string - name: source dtype: string - name: context dtype: string - name: question dtype: string - name: question_type dtype: int64 - name: is_impossible dtype: bool - name: answer_text dtype: string - name: answer_start dtype: int64 - name: negative_samples sequence: string - name: search_result sequence: string - name: answer dtype: string - name: refs sequence: int64 splits: - name: train num_bytes: 5375273 num_examples: 289 download_size: 3125131 dataset_size: 5375273 configs: - config_name: default data_files: - split: train path: data/train-* ---
GaspardNW/Chien_2.72sec_0aug_0shiftAug_specmask0_nfft2048_hop512_sr48000
GaspardNW
2025-05-06T10:45:49Z
0
0
[ "region:us" ]
[]
2025-05-06T10:45:02Z
null
--- dataset_info: features: - name: filename dtype: string - name: duration dtype: int64 - name: sampling_rate dtype: int64 - name: magnitude_array sequence: sequence: sequence: float64 - name: min_max_vals sequence: float64 splits: - name: train num_bytes: 1784057798 num_examples: 849 download_size: 910473011 dataset_size: 1784057798 configs: - config_name: default data_files: - split: train path: data/train-* ---
HungVu2003/opt-350m_beta_1.0_alpha_0.2_num-company_3_dataset_2_for_gen_6_v2
HungVu2003
2025-05-06T10:43:51Z
0
0
[ "region:us" ]
[]
2025-05-06T10:43:50Z
null
--- dataset_info: features: - name: question dtype: string splits: - name: train num_bytes: 3972689 num_examples: 14998 download_size: 1501011 dataset_size: 3972689 configs: - config_name: default data_files: - split: train path: data/train-* ---
SayantanJoker/processed_seamless_align_hindi_new_chunk_57
SayantanJoker
2025-05-06T10:38:04Z
0
0
[ "region:us" ]
[]
2025-05-06T10:36:39Z
null
--- dataset_info: features: - name: audio dtype: audio - name: transcription dtype: string - name: file_name dtype: string splits: - name: train num_bytes: 2652249548.0 num_examples: 10000 download_size: 2524851601 dataset_size: 2652249548.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
SayantanJoker/processed_seamless_align_hindi_new_chunk_51
SayantanJoker
2025-05-06T10:29:21Z
0
0
[ "region:us" ]
[]
2025-05-06T10:27:54Z
null
--- dataset_info: features: - name: audio dtype: audio - name: transcription dtype: string - name: file_name dtype: string splits: - name: train num_bytes: 2664301672.0 num_examples: 10000 download_size: 2541716672 dataset_size: 2664301672.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
SayantanJoker/processed_seamless_align_hindi_new_chunk_43
SayantanJoker
2025-05-06T10:17:43Z
0
0
[ "region:us" ]
[]
2025-05-06T10:16:21Z
null
--- dataset_info: features: - name: audio dtype: audio - name: transcription dtype: string - name: file_name dtype: string splits: - name: train num_bytes: 2680739446.0 num_examples: 10000 download_size: 2565708463 dataset_size: 2680739446.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
ieasybooks-org/prophet-mosque-library-compressed
ieasybooks-org
2025-05-06T10:16:54Z
58
0
[ "task_categories:image-to-text", "language:ar", "license:mit", "size_categories:10K<n<100K", "format:csv", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[ "image-to-text" ]
2025-05-04T17:08:25Z
null
--- license: mit task_categories: - image-to-text language: - ar pretty_name: Prophet's Mosque Library - Compressed size_categories: - 10K<n<100K configs: - config_name: index data_files: - split: index path: index.tsv --- # Prophet's Mosque Library - Compressed ## 📖 Overview [Prophet’s Mosque Library](https://alharamain.gov.sa/public/?page=page_299500) is one of the primary resources for Islamic books. It hosts more than 48,000 PDF books across over 70 categories. In this dataset, we processed the original PDF files using Google Document AI APIs and extracted their contents into two additional formats: TXT and DOCX. ## 📊 Dataset Contents *Note: the remaining PDF files of this dataset are available in the following repository: https://huggingface.co/datasets/ieasybooks-org/prophet-mosque-library-compressed-cont.* This dataset is identical to [ieasybooks-org/prophet-mosque-library](https://huggingface.co/datasets/ieasybooks-org/prophet-mosque-library), with one key difference: the contents have been compressed for easier downloading. Specifically, the `pdf`, `txt`, and `docx` folders have been packaged into `pdf.zip`, `txt.zip`, and `docx.zip`, respectively. For detailed information about the dataset contents and usage instructions, please refer to the original dataset page: [ieasybooks-org/prophet-mosque-library](https://huggingface.co/datasets/ieasybooks-org/prophet-mosque-library).
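A minimal sketch for fetching and unpacking one of the archives, assuming the standard `huggingface_hub` API and the archive names listed above:

```python
import zipfile

from huggingface_hub import hf_hub_download

# Download the compressed TXT archive from the dataset repo.
path = hf_hub_download(
    repo_id="ieasybooks-org/prophet-mosque-library-compressed",
    filename="txt.zip",
    repo_type="dataset",
)

# Extract the plain-text books locally.
with zipfile.ZipFile(path) as zf:
    zf.extractall("prophet-mosque-library-txt")
```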