
Dataset Card for Prompt2SceneGallery

Dataset Details

Dataset Description

The Prompt2SceneGallery dataset showcases one of the utilities of the Prompt2SceneBench dataset (https://huggingface.co/datasets/bodhisattamaiti/Prompt2SceneBench).

The dataset consists of 5163 indoor scene images generated with Stable Diffusion XL (SDXL) at a resolution of 1024x1024; the prompts were randomly sampled from the Prompt2SceneBench dataset.

  • Curated by: Bodhisatta Maiti
  • Funded by: N/A
  • Shared by: Bodhisatta Maiti
  • Language(s): English
  • License: CC BY-NC-SA 4.0

Dataset Sources

Uses

Direct Use

Prompt2SceneGallery can be directly used for:

  1. Prompt–Image Alignment Evaluation (Imperfect Realism): Analyze how closely generated images match structured prompts in terms of object presence, co-location, and scene context — even when generation is imperfect.
  2. Failure Case Analysis for Text-to-Image Models: Study the failure modes of models like SDXL in spatial reasoning, compositionality, or object fidelity.
  3. Visual Grounding Benchmarking (With Noise): Use imperfect generations to stress-test grounding models on scene understanding under visually noisy conditions.
  4. Evaluation Dataset for Captioning Models: Evaluate how well captioning models (e.g., BLIP, LLaVA) can describe structured scenes — including their limitations in hallucinated or partially wrong outputs.
  5. Robustness and Semantic Drift Studies: Explore how generation quality affects semantic drift between prompt and image, especially for structured spatial prompts.
  6. Synthetic Scene Prototyping for Research: Serve as a starting point for prototyping indoor spatial benchmarks without needing human annotation.

Out-of-Scope Use

  • Outdoor scenes, surreal or abstract visual compositions.
  • Benchmarks involving human-centric understanding or motion.
  • Direct use for safety-critical or clinical systems.

Dataset Structure

Images (Prompt2SceneGallery_1024_v1.zip)

Size: 5163 images

Images metadata (prompt2scenegallery_metadata.csv)

Size: 5163 records

Each row in the CSV corresponds to a single prompt instance and includes the following fields:

  • type: Prompt category — one of A, B, C, or D, based on number of objects and complexity.
  • object1, object2, object3, object4: Objects involved in the scene (some may be None/NaN/Null depending on type).
  • surface: The surface where the objects are placed (e.g., desk surface, bench).
  • scene: The indoor environment (e.g., living room, study room).
  • prompt: The final structured natural language prompt.
  • filename: The filename of the generated image.

Note:

  • Type A prompts have only 1 object (the object2, object3, and object4 fields will be None/NaN/Null)
  • Type B prompts have only 2 objects (the object3 and object4 fields will be None/NaN/Null)
  • Type C prompts have only 3 objects (the object4 field will be None/NaN/Null)
  • Type D prompts have 4 objects (all object fields will have values)
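The null pattern described above can be checked programmatically. Below is a minimal sketch using pandas; the column names come from this card, but the validation helper and the inline sample rows are illustrative (in practice you would load prompt2scenegallery_metadata.csv instead):

```python
import pandas as pd

# Which object columns are expected to be empty for each prompt type,
# per the notes in this card.
EXPECTED_NULLS = {
    "A": ["object2", "object3", "object4"],
    "B": ["object3", "object4"],
    "C": ["object4"],
    "D": [],
}

def validate_row(row: pd.Series) -> bool:
    """Return True if the row's null pattern matches its prompt type."""
    should_be_null = set(EXPECTED_NULLS[row["type"]])
    for col in ["object1", "object2", "object3", "object4"]:
        if col in should_be_null:
            if pd.notna(row[col]):   # column should be empty but isn't
                return False
        elif pd.isna(row[col]):      # column should be filled but isn't
            return False
    return True

# In practice: df = pd.read_csv("prompt2scenegallery_metadata.csv")
# Two illustrative rows based on the samples in this card:
df = pd.DataFrame([
    {"type": "A", "object1": "football", "object2": None,
     "object3": None, "object4": None},
    {"type": "B", "object1": "coffee mug", "object2": "notebook",
     "object3": None, "object4": None},
])

assert df.apply(validate_row, axis=1).all()
```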

Sample Examples:

  • Type A: a football located on a bench in a basement. (object1: football, surface: bench, scene: basement)
  • Type B: a coffee mug beside a notebook on a wooden table in a home office. (object1: coffee mug, object2: notebook, surface: wooden table, scene: home office)
  • Type C: a jar, a coffee mug, and a bowl placed on a kitchen island in a kitchen. (object1: jar, object2: coffee mug, object3: bowl, surface: kitchen island, scene: kitchen)
  • Type D: An arrangement of an air purifier, a pair of slippers, a guitar, and a pair of shoes on a floor in a bedroom. (object1: air purifier, object2: pair of slippers, object3: guitar, object4: pair of shoes, surface: floor, scene: bedroom)
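The four sample prompts above follow fixed per-type templates. The sketch below reconstructs them from the CSV fields; the exact template wording is inferred from these examples and may differ from the author's actual generation code:

```python
def article(word: str) -> str:
    """Naive a/an choice based on the first letter (an assumption;
    sufficient for the sample objects shown in this card)."""
    return "an" if word[0].lower() in "aeiou" else "a"

def build_prompt(ptype: str, objects: list[str],
                 surface: str, scene: str) -> str:
    """Rebuild a structured prompt from its metadata fields.

    Templates are inferred from the Type A-D examples in this card.
    """
    tail = f"{article(surface)} {surface} in {article(scene)} {scene}."
    if ptype == "A":
        return f"{article(objects[0])} {objects[0]} located on {tail}"
    if ptype == "B":
        return (f"{article(objects[0])} {objects[0]} beside "
                f"{article(objects[1])} {objects[1]} on {tail}")
    if ptype == "C":
        return (f"{article(objects[0])} {objects[0]}, "
                f"{article(objects[1])} {objects[1]}, and "
                f"{article(objects[2])} {objects[2]} placed on {tail}")
    if ptype == "D":
        items = ", ".join(f"{article(o)} {o}" for o in objects[:3])
        return (f"An arrangement of {items}, and "
                f"{article(objects[3])} {objects[3]} on {tail}")
    raise ValueError(f"unknown prompt type: {ptype}")

print(build_prompt("A", ["football"], "bench", "basement"))
# -> a football located on a bench in a basement.
```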

Dataset Creation

Curation Rationale

The dataset was created to provide a controlled and structured benchmark for evaluating spatial and compositional understanding in generative AI systems, particularly in indoor environments.

Source Data

Data Collection and Processing

Images were generated using Stable Diffusion XL (base 1.0) via structured text prompts.

Who are the source data producers?

The content was synthesized using the SDXL model by the dataset author.

Annotations

No human annotations were added post-generation.

Personal and Sensitive Information

No personal or sensitive information is present. The dataset consists of entirely synthetic images generated by SDXL.

Bias, Risks, and Limitations

This dataset focuses only on physically and contextually plausible indoor scenes. It excludes unusual, humorous, or surrealistic scenarios intentionally. It may not cover the full range of compositional variation needed in creative applications.

Recommendations

Use with generative models that understand object placement and spatial grounding. Avoid using it to benchmark models trained for outdoor or abstract scenes.

Citation

APA:

Maiti, B. (2025). Prompt2SceneGallery: A Visual Gallery of Indoor Scenes Generated from Structured Prompt Templates [Data set]. Zenodo. https://doi.org/10.5281/zenodo.16559327

Glossary

  • Type (Prompt category): The number of objects (1 to 4) described in the scene varies with the prompt type (A, B, C, or D).
  • Surface: Physical platform or area where objects rest.
  • Scene: Room or environment in which the surface is situated.

Dataset Card Authors

  • Bodhisatta Maiti

Dataset Card Contact
