Add 1M token sample with all columns
README.md
ADDED

# Essential Web v1.0 - 1M Token Sample

Approximately 1,000,000 tokens sampled from Essential Web v1.0.

## Dataset Info

- **Target**: 1,000,000 tokens
- **Actual**: ~1,099,800 tokens (estimated)
- **Source**: [EssentialAI/essential-web-v1.0](https://huggingface.co/datasets/EssentialAI/essential-web-v1.0)

## Schema

This sample preserves ALL columns from the original dataset, including:

- `id`: Document ID
- `text`: Text content
- `metadata`: URL and source information
- `quality_signals`: RedPajama quality metrics
- `eai_taxonomy`: Essential AI taxonomy labels
- `pid`: Partition ID
- All other original columns (a quick way to verify this is sketched below)
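
Since the point of this sample is that no columns were dropped, a quick sanity check is to list the column names and feature types at load time. A minimal sketch using standard `datasets` attributes:

```python
from datasets import load_dataset

dataset = load_dataset("sumuks/essential-web-v1.0-sample-1M")

# Every preserved column, and its (possibly nested) feature type
print(dataset['train'].column_names)
print(dataset['train'].features)
```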

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("sumuks/essential-web-v1.0-sample-1M")

# Access the data with all columns
example = dataset['train'][0]
print(example['text'][:200] + "...")

# Access quality signals
print(example['quality_signals'])

# Access taxonomy
print(example['eai_taxonomy'])
```
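
For bulk exploration (e.g. aggregating the quality signals across the whole sample), the split can be converted to a pandas DataFrame; at roughly a million tokens it fits comfortably in memory. A sketch using the standard `Dataset.to_pandas()` API:

```python
from datasets import load_dataset

dataset = load_dataset("sumuks/essential-web-v1.0-sample-1M")

df = dataset['train'].to_pandas()
print(df.columns.tolist())               # all preserved columns
print(df['text'].str.len().describe())   # character-length statistics
```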

## File Structure

The dataset is split across multiple parquet files in the `data/` directory:

- `data/part-00000.parquet`
- `data/part-00001.parquet`
- etc.

Hugging Face `datasets` loads all parts as a single dataset automatically; loading an individual shard is sketched below.
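
Note (visible in this commit) that each `part-XXXXX.parquet` entry is itself a directory holding a UUID-named parquet shard. To load a single shard rather than the whole sample, a glob over `data_files` should work; a sketch, assuming the standard parquet loading path of `datasets`:

```python
from datasets import load_dataset

# Load only the first shard; the glob matches the UUID-named file
# inside the part-00000.parquet directory.
part0 = load_dataset(
    "sumuks/essential-web-v1.0-sample-1M",
    data_files="data/part-00000.parquet/*.parquet",
    split="train",
)
print(part0.num_rows)
```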

## Sampling Method

- Random sampling across snapshots
- Preserves all original columns and metadata
- Token estimation: ~600 tokens per row (see the sketch after this list)
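
The sampling script itself is not included in this commit, so the following is only a back-of-the-envelope reconstruction of the procedure described above; the split name, seed, and shuffle buffer size are illustrative assumptions, not the authors' values:

```python
from datasets import load_dataset

TARGET_TOKENS = 1_000_000
TOKENS_PER_ROW = 600                            # estimation constant from this README
ROWS_NEEDED = TARGET_TOKENS // TOKENS_PER_ROW   # ~1,666 rows

# Stream the source corpus so it is never fully downloaded; the
# seed and buffer size here are illustrative, not the authors'.
source = load_dataset(
    "EssentialAI/essential-web-v1.0", split="train", streaming=True
)
sample = source.shuffle(seed=0, buffer_size=10_000).take(ROWS_NEEDED)
print(len(list(sample)))
```

At 600 tokens per row this lands on the 1M target exactly; the reported actual count (~1,099,800) overshooting by roughly 10% is consistent with rows averaging somewhat more than 600 tokens.
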
data/part-00000.parquet/9b706465-8f66-432b-a3fa-98c33de3672d-0.parquet
ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:c0427fc7bc816e302ba45f183043792f130f9e88235efd372e0d819579dfc102
size 6010080