---
license: mit
language:
- en
tags:
- conversations
- tagging
- embeddings
- bittensor
- dialog
- social media
- podcast
pretty_name: 5,000 Podcast Conversations with Metadata and Embedding Dataset
size_categories:
- 1M<n<10M
---
## 🎙️ ReadyAI - 5,000 Podcast Conversations with Metadata and Embedding Dataset
ReadyAI, operating subnet 33 on the [Bittensor Network](https://bittensor.com/), is an open-source initiative focused on low-cost, resource-minimal pipelines for structuring raw data for AI applications.
This dataset is part of the ReadyAI Conversational Genome Project, leveraging the Bittensor decentralized network.
AI runs on structured data, and this dataset bridges the gap between raw conversation transcripts and structured, vectorized semantic tags.
You can find more about our subnet on GitHub [here](https://github.com/afterpartyai/bittensor-conversation-genome-project).
---
## Full Vectors Access
➡️ **Download the full 45 GB of conversation tag embeddings** from [here](https://huggingface.co/datasets/ReadyAi/5000-podcast-conversations-with-metadata-and-embedding-dataset/tree/main/data) for large-scale processing and fine-tuning.
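To fetch just the `data/` folder locally, a minimal sketch using `huggingface_hub` should work:
```python
# Sketch: download only the data/ folder (~45 GB of tag embeddings).
# Requires: pip install huggingface_hub
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="ReadyAi/5000-podcast-conversations-with-metadata-and-embedding-dataset",
    repo_type="dataset",
    allow_patterns="data/*",  # skip the small/medium subsets and top-level files
)
print(f"Downloaded to: {local_path}")
```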
---
## 📦 Dataset Versions
In addition to the full dataset, two smaller versions are available:
- **Small version**
- Located in the `small_dataset` folder.
- Contains 1,000 conversations with the same file structure as the full dataset.
- All filenames are prefixed with `small_`.
- **Medium version**
  - Located in the `medium_dataset` folder.
  - Contains 2,500 conversations with the same file structure as the full dataset.
  - All filenames are prefixed with `medium_`.
These subsets are ideal for lightweight experimentation, prototyping, or benchmarking.
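As a sketch, loading the small subset with pandas might look like this (the exact filename is inferred from the `small_` prefix convention and may differ):
```python
import pandas as pd

# Assumed filename, following the documented `small_` prefix convention
df_small = pd.read_parquet("small_dataset/small_conversations_train.parquet")
print(df_small.shape)  # expect 1,000 conversations
```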
---
## 📊 Dataset Overview
This dataset contains **annotated conversation transcripts** with:
- Human-readable semantic tags
- **Embedding vectors** contextualized to each conversation
- Participant metadata
It is ideal for:
- Semantic search over conversations
- AI assistant training (OpenAI models, fine-tuning)
- Vector search implementations using **pgvector** and **Pinecone**
- Metadata analysis and tag retrieval for LLMs
The embeddings were generated with the [text-embedding-ada-002](https://huggingface.co/Xenova/text-embedding-ada-002) model and have 1536 dimensions per tag.
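As a minimal semantic-search sketch, you can embed a query with the same model (here via the OpenAI API, assuming `OPENAI_API_KEY` is set; any client that produces text-embedding-ada-002 vectors will do) and rank tags by similarity. Since ada-002 vectors are unit-normalized, a dot product gives cosine similarity:
```python
import numpy as np
import pandas as pd
from openai import OpenAI

# Load one shard of tag embeddings
df = pd.read_parquet("data/bittensor-conversational-tags-and-embeddings-part-0000.parquet")
vectors = np.vstack(df["vector"].to_numpy())  # shape: (n_tags, 1536)

# Embed an example query with the same model
client = OpenAI()
resp = client.embeddings.create(model="text-embedding-ada-002", input="renewable energy policy")
query = np.asarray(resp.data[0].embedding)

scores = vectors @ query                # cosine similarity (vectors are unit-length)
top = df.iloc[np.argsort(-scores)[:5]]  # five closest tags
print(top[["c_guid", "tag"]])
```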
---
## 📂 Dataset Structure
The dataset consists of four main components:
### 1. **data/bittensor-conversational-tags-and-embeddings-part-*.parquet** – Tag Embeddings and Metadata
Each Parquet file contains rows with:
| Column | Type | Description |
|:-------|:-----|:------------|
| c_guid | int64 | Unique conversation group ID |
| tag_id | int64 | Unique identifier for the tag |
| tag | string | Semantic tag (e.g., "climate change") |
| vector | list of float32 | Embedding vector representing the tag's meaning **in the conversation's context** |
✅ Files are split into ~1 GB chunks for efficient loading and streaming.
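A quick sanity check of the schema and the per-tag vector dimensionality on a single shard:
```python
import pandas as pd

df = pd.read_parquet("data/bittensor-conversational-tags-and-embeddings-part-0000.parquet")
print(df.dtypes)                  # c_guid, tag_id, tag, vector
print(len(df["vector"].iloc[0]))  # expect 1536
```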
---
### 2. **tag_to_id.parquet** – Tag Mapping
Mapping between tag IDs and human-readable tags.
| Column | Type | Description |
|:-------|:-----|:------------|
| tag_id | int64 | Unique tag ID |
| tag | string | Semantic tag text |
✅ Useful for reverse-mapping tag IDs from model outputs back to human-readable text.
---
### 3. **conversations_to_tags.parquet** – Conversation-to-Tag Mappings
Links conversations to their associated semantic tags.
| Column | Type | Description |
|:-------|:-----|:------------|
| c_guid | int64 | Conversation group ID |
| tag_ids | list of int64 | List of tag IDs relevant to the conversation |
✅ Useful for supervised training, retrieval tasks, or semantic labeling.
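For instance, a supervised multi-label training frame can be built by joining this mapping with the transcripts and expanding tag IDs to text (a sketch assuming the Parquet files from sections 2–4 sit in the working directory):
```python
import pandas as pd

df_mapping = pd.read_parquet("conversations_to_tags.parquet")
df_conversations = pd.read_parquet("conversations_train.parquet")
tag_dict = pd.read_parquet("tag_to_id.parquet")
tag_lookup = dict(zip(tag_dict["tag_id"], tag_dict["tag"]))

# One row per conversation: transcript plus human-readable tag labels
df_train = df_conversations.merge(df_mapping, on="c_guid")
df_train["tags"] = df_train["tag_ids"].apply(
    lambda ids: [tag_lookup.get(t, "Unknown") for t in ids]
)
print(df_train[["c_guid", "transcript", "tags"]].head())
```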
---
### 4. **conversations_train.parquet** – Full Conversation Text and Participants
Contains the raw multi-turn dialogue and metadata.
| Column | Type | Description |
|:-------|:-----|:------------|
| c_guid | int64 | Conversation group ID |
| transcript | string | Full conversation text |
| participants | list of strings | List of speaker identifiers |
✅ Useful for dialogue modeling, multi-speaker AI, or fine-tuning.
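A quick look at conversation sizes, using only the documented columns:
```python
import pandas as pd

df_conversations = pd.read_parquet("conversations_train.parquet")
print(df_conversations["participants"].apply(len).describe())  # speakers per conversation
print(df_conversations["transcript"].str.len().describe())     # transcript length in characters
```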
---
## 🚀 How to Use
**Install dependencies**
```bash
pip install pandas pyarrow datasets
```
**Download the dataset**
```python
import datasets
path = "ReadyAi/5000-podcast-conversations-with-metadata-and-embedding-dataset"
dataset = datasets.load_dataset(path)
print(dataset['train'].column_names)
```
**Load a single Parquet split**
```python
import pandas as pd
df = pd.read_parquet("data/bittensor-conversational-tags-and-embeddings-part-0000.parquet")
print(df.head())
```
**Load all tag splits**
```python
import pandas as pd
import glob
files = sorted(glob.glob("data/bittensor-conversational-tags-and-embeddings-part-*.parquet"))
df_tags = pd.concat((pd.read_parquet(f) for f in files), ignore_index=True)
print(f"Loaded {len(df_tags)} tag records.")
```
**Load tag dictionary**
```python
import pandas as pd

tag_dict = pd.read_parquet("tag_to_id.parquet")
print(tag_dict.head())
```
**Load conversation to tags mapping**
```python
import pandas as pd

df_mapping = pd.read_parquet("conversations_to_tags.parquet")
print(df_mapping.head())
```
**Load full conversations dialog and metadata**
```python
import pandas as pd

df_conversations = pd.read_parquet("conversations_train.parquet")
print(df_conversations.head())
```
---
## 🔥 Example: Reconstruct Tags for a Conversation
```python
# Build tag lookup (assumes tag_dict and df_mapping are loaded as above)
tag_lookup = dict(zip(tag_dict['tag_id'], tag_dict['tag']))
# Pick a conversation
sample = df_mapping.iloc[0]
c_guid = sample['c_guid']
tag_ids = sample['tag_ids']
# Translate tag IDs to human-readable tags
tags = [tag_lookup.get(tid, "Unknown") for tid in tag_ids]
print(f"Conversation {c_guid} has tags: {tags}")
```
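The inverse lookup, finding every conversation that carries a given tag, is a short extension of the same mapping (reusing `tag_dict` and `df_mapping` from the snippets above; "climate change" is the example tag from the schema table):
```python
# Find all conversations tagged with a given phrase
target = "climate change"
matches = tag_dict.loc[tag_dict["tag"] == target, "tag_id"]
if not matches.empty:
    target_id = matches.iloc[0]
    hits = df_mapping[df_mapping["tag_ids"].apply(lambda ids: target_id in ids)]
    print(f"{len(hits)} conversations tagged with '{target}'")
else:
    print(f"No tag named '{target}' found")
```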
---
## 📦 Handling Split Files
| Situation | Strategy |
|:----------|:---------|
| Enough RAM | Use `pd.concat()` to merge splits |
| Low memory | Process each split one-by-one |
| Hugging Face datasets | Use streaming mode |
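For the low-memory row above, a per-split loop keeps only one ~1 GB shard in memory at a time:
```python
import glob
import pandas as pd

for path in sorted(glob.glob("data/bittensor-conversational-tags-and-embeddings-part-*.parquet")):
    df_chunk = pd.read_parquet(path)
    # ... per-chunk processing goes here ...
    print(f"{path}: {len(df_chunk)} tag records")
```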
**Example (streaming with Hugging Face `datasets`)**
```python
from datasets import load_dataset

dataset = load_dataset(
    "ReadyAi/5000-podcast-conversations-with-metadata-and-embedding-dataset",
    split="train",
    streaming=True,
)

for example in dataset:
    print(example)
    break
```
---
## 📄 License
MIT License
✅ Free to use and modify.
---
## ✨ Credits
Built using contributions from Bittensor conversational miners and the ReadyAI open-source community.
---
## 🎯 Summary
| Component | Description |
|:----------|:------------|
| data/bittensor-conversational-tags-and-embeddings-part-*.parquet | Semantic tags and their contextual embeddings |
| tag_to_id.parquet | Dictionary mapping of tag IDs to text |
| conversations_to_tags.parquet | Links conversations to tags |
| conversations_train.parquet | Full multi-turn dialogue with participant metadata |