---
license: odc-by
task_categories:
- text-classification
- token-classification
- question-answering
- text-generation
- text2text-generation
size_categories:
- 100K<n<1M
---
# Essential Web v1.0 - 1M Token Sample
Approximately 1,000,000 tokens sampled from Essential Web v1.0.
## Dataset Info
- Target: 1,000,000 tokens
- Estimated actual size: ~1,099,800 tokens
- Source: `EssentialAI/essential-web-v1.0`
## Schema
This sample preserves ALL columns from the original dataset, including:
- `id`: Document ID
- `text`: Text content
- `metadata`: URL and source information
- `quality_signals`: RedPajama quality metrics
- `eai_taxonomy`: Essential AI taxonomy labels
- `pid`: Partition ID
- And all other original columns
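
To confirm which columns are present in your copy, you can inspect the loaded dataset directly; a minimal sketch:

```python
from datasets import load_dataset

dataset = load_dataset("sumuks/essential-web-v1.0-sample-1M", split="train")

# List every column carried over from the original dataset
print(dataset.column_names)

# Show the full feature types, including nested fields such as quality signals
print(dataset.features)
```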
## Usage
```python
from datasets import load_dataset

dataset = load_dataset("sumuks/essential-web-v1.0-sample-1M")

# Access the data with all columns
example = dataset['train'][0]
print(example['text'][:200] + "...")

# Access quality signals
print(example['quality_signals'])

# Access taxonomy
print(example['eai_taxonomy'])
```
## File Structure
The dataset is split across multiple Parquet files in the `data/` directory:
- `data/part-00000.parquet`
- `data/part-00001.parquet`
- etc.
The Hugging Face `datasets` library automatically loads all parts as a single dataset.
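
If you only need part of the data, `load_dataset` can also target individual shards via `data_files`; a minimal sketch, assuming the part filenames shown above:

```python
from datasets import load_dataset

# Load just the first Parquet shard; data_files also accepts glob patterns
subset = load_dataset(
    "sumuks/essential-web-v1.0-sample-1M",
    data_files="data/part-00000.parquet",
    split="train",
)
print(len(subset))
```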
## Sampling Method
- Random sampling across snapshots
- Preserves all original columns and metadata
- Token estimation: ~600 tokens per row
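
For illustration, a sample like this could be drawn by streaming the source dataset, shuffling, and stopping once the token budget is reached. This is a sketch based on the points above (the split name and shuffle parameters are assumptions), not the exact script used to produce the files:

```python
from datasets import load_dataset

TOKEN_TARGET = 1_000_000
TOKENS_PER_ROW = 600  # rough per-row estimate from above

# Stream the source dataset so it is never downloaded in full,
# and shuffle with a buffer to approximate random sampling
source = load_dataset("EssentialAI/essential-web-v1.0", split="train", streaming=True)
shuffled = source.shuffle(seed=42, buffer_size=10_000)

rows = []
estimated_tokens = 0
for row in shuffled:
    rows.append(row)  # keep every original column unchanged
    estimated_tokens += TOKENS_PER_ROW
    if estimated_tokens >= TOKEN_TARGET:
        break

print(f"Sampled {len(rows)} rows (~{estimated_tokens:,} estimated tokens)")
```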