metadata
license: odc-by
task_categories:
  - text-classification
  - token-classification
  - question-answering
  - text-generation
  - text2text-generation
size_categories:
  - 100K<n<1M

Essential Web v1.0 - 1M Token Sample

Approximately 1,000,000 tokens sampled from Essential Web v1.0.

Dataset Info

Schema

This sample preserves all columns from the original dataset (a quick way to verify them in code is sketched after this list), including:

  • id: Document ID
  • text: Text content
  • metadata: URL and source information
  • quality_signals: RedPajama quality metrics
  • eai_taxonomy: Essential AI taxonomy labels
  • pid: Partition ID
  • And all other original columns
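
To double-check that every column survived, you can load the sample and inspect its features. The snippet below is a minimal sketch using the standard datasets API and the repository id shown in the Usage section.

from datasets import load_dataset

# Load the train split and print its schema, including nested fields
dataset = load_dataset("sumuks/essential-web-v1.0-sample-1M", split="train")
print(dataset.column_names)   # flat list of column names
print(dataset.features)       # full schema, including quality_signals and eai_taxonomy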

Usage

from datasets import load_dataset

dataset = load_dataset("sumuks/essential-web-v1.0-sample-1M")

# Access the data with all columns
example = dataset['train'][0]
print(example['text'][:200] + "...")

# Access quality signals
print(example['quality_signals'])

# Access taxonomy
print(example['eai_taxonomy'])
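
If you only want to look at a few rows without downloading the whole sample, the library's streaming mode also works. This is a minimal sketch, not part of the original card.

from datasets import load_dataset

# Stream examples one at a time instead of materializing the full split
streamed = load_dataset("sumuks/essential-web-v1.0-sample-1M", split="train", streaming=True)
for example in streamed:
    print(example['id'], example['text'][:80])
    break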

File Structure

The dataset is split across multiple parquet files in the data/ directory:

  • data/part-00000.parquet
  • data/part-00001.parquet
  • etc.

The Hugging Face datasets library automatically loads all parts as a single dataset.
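
Individual shards can also be fetched and read on their own. The sketch below uses huggingface_hub and pandas to pull a single part; the filename is the first shard listed above.

from huggingface_hub import hf_hub_download
import pandas as pd

# Download one shard from the dataset repo and read it with pandas
path = hf_hub_download(
    repo_id="sumuks/essential-web-v1.0-sample-1M",
    filename="data/part-00000.parquet",
    repo_type="dataset",
)
df = pd.read_parquet(path)
print(df.columns.tolist())   # same columns as the full dataset
print(len(df))               # rows in this shard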

Sampling Method

  • Random sampling across snapshots
  • Preserves all original columns and metadata
  • Token estimation: ~600 tokens per row (a sketch for computing exact counts follows below)
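
The ~600 tokens-per-row figure is only an estimate, and the card does not state which tokenizer produced it. If you need exact counts, you can tokenize the text column yourself. The sketch below uses the GPT-2 tokenizer purely as an illustration (an assumption, not the tokenizer behind the estimate) and counts a small subset to keep it fast.

from datasets import load_dataset
from transformers import AutoTokenizer

# Illustrative token count; the choice of the GPT-2 tokenizer is an assumption
tokenizer = AutoTokenizer.from_pretrained("gpt2")
dataset = load_dataset("sumuks/essential-web-v1.0-sample-1M", split="train")

subset = dataset.select(range(100))  # small subset for a quick estimate
total = sum(len(tokenizer.encode(row['text'])) for row in subset)
print(f"~{total / len(subset):.0f} tokens per row over {len(subset)} rows")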