A small subset of DCLM (around 15B tokens), pre-tokenized with the GPT-4 tokenizer into flat files of int32 tokens that can be read with numpy or torch. Each document is separated by `<|endoftext|>`.
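
A minimal sketch of loading one shard and splitting it back into documents, assuming the files are raw int32 streams with no header. The filename `shard_0000.bin` is a hypothetical placeholder. GPT-4's tokenizer corresponds to tiktoken's `cl100k_base` encoding, whose `<|endoftext|>` id is exposed as `enc.eot_token`:

```python
import numpy as np
import tiktoken

# GPT-4 uses the cl100k_base encoding in tiktoken.
enc = tiktoken.get_encoding("cl100k_base")
eot = enc.eot_token  # id of <|endoftext|> (100257 for cl100k_base)

# Memory-map a shard instead of loading it fully; dtype matches the int32 files.
# "shard_0000.bin" is a placeholder name, not the dataset's actual file layout.
tokens = np.memmap("shard_0000.bin", dtype=np.int32, mode="r")

# Split the flat token stream back into documents at each <|endoftext|>.
boundaries = np.where(tokens == eot)[0]
docs = np.split(tokens, boundaries + 1)

# Decode the first document back to text, dropping the separator token.
first = docs[0]
print(enc.decode(first[first != eot].tolist()))
```

For torch, the same memmapped array can be wrapped without a copy via `torch.from_numpy(np.asarray(tokens))`.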