Error while downloading the dataset

#1
by andreidima - opened

Hi,
what is the correct way of downloading this dataset?
I tried with

```python
from datasets import load_dataset

dataset = load_dataset("Babelscape/multinerd")
```

but I get:

```
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=1003356272, num_examples=2678400, shard_lengths=[1339200, 1339200], dataset_name='multinerd'), 'recorded': SplitInfo(name='train', num_bytes=501678136, num_examples=1339200, shard_lengths=None, dataset_name='multinerd')}, {'expected': SplitInfo(name='validation', num_bytes=126568414, num_examples=334800, shard_lengths=None, dataset_name='multinerd'), 'recorded': SplitInfo(name='validation', num_bytes=63284207, num_examples=167400, shard_lengths=None, dataset_name='multinerd')}, {'expected': SplitInfo(name='test', num_bytes=126797504, num_examples=335986, shard_lengths=None, dataset_name='multinerd'), 'recorded': SplitInfo(name='test', num_bytes=63398752, num_examples=167993, shard_lengths=None, dataset_name='multinerd')}]
```
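The figures in the traceback follow a clear pattern: for every split, the `expected` example count is exactly twice the `recorded` one. A quick check using only the numbers copied from the error above:

```python
# Split sizes copied from the NonMatchingSplitsSizesError traceback:
# (expected_examples, recorded_examples) per split.
splits = {
    "train": (2678400, 1339200),
    "validation": (334800, 167400),
    "test": (335986, 167993),
}

for name, (expected, recorded) in splits.items():
    # Every expected count is exactly double what was actually downloaded,
    # suggesting the dataset metadata counts each example twice.
    assert expected == 2 * recorded, name
    print(f"{name}: expected {expected} = 2 x recorded {recorded}")
```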

I ran into the same issue. When downloading it normally, datasets expects exactly twice as many rows as the dataset actually contains (for unknown reasons), which triggers this verification error. You can circumvent the problem by instead loading the Parquet branch, which the Hub generates automatically:

```python
from datasets import load_dataset

dataset = load_dataset("Babelscape/multinerd", revision="refs/convert/parquet")
```

Thank you @kgnlp , that did the trick!
