---
license: apache-2.0
configs:
  - config_name: foregrounds
    data_files: data/foregrounds/**/*
  - config_name: backgrounds
    data_files: data/backgrounds/*
---
# Transforms-2D Base Dataset

This dataset contains foreground objects and background images used by the Transforms-2D dataset in the paper [Understanding the Role of Invariance in Transfer Learning](https://arxiv.org/abs/2407.04325), published at TMLR 2024.
The code for the paper is available [here](https://github.com/tillspeicher/representation-invariance-transfer), including the [implementation of the Transforms-2D dataset](https://github.com/tillspeicher/representation-invariance-transfer/tree/master/src/transforms_2d).

The Transforms-2D dataset consists of transformed versions of foreground objects with transparency masks (from this base dataset), pasted onto background images (also from this base dataset).
It is used to study the role of invariance in transfer learning, by creating images with carefully controlled transformations.
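
To illustrate the kind of composition described above, here is a minimal sketch of pasting a transformed foreground (using its transparency mask as the alpha channel) onto a background with Pillow. The file paths, transformation parameters, and RGBA assumption are illustrative only; the actual generation logic lives in the Transforms-2D implementation linked above.

```python
from PIL import Image

# Illustrative paths; in practice the images come from the "foregrounds"
# and "backgrounds" configurations of this dataset.
foreground = Image.open("foreground.png").convert("RGBA")  # alpha = transparency mask
background = Image.open("background.jpg").convert("RGB")

# Apply a simple, controlled transformation (here: rotation + downscaling).
foreground = foreground.rotate(30, expand=True)
foreground = foreground.resize((foreground.width // 2, foreground.height // 2))

# Paste the transformed object onto the background, using its alpha channel as mask.
x = (background.width - foreground.width) // 2
y = (background.height - foreground.height) // 2
composite = background.copy()
composite.paste(foreground, (x, y), mask=foreground)

composite.save("transforms_2d_sample.png")
```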

## Usage

This dataset comes in two configurations: a `foregrounds` configuration with 61 classes of foreground objects and several images per class, and a `backgrounds` configuration with 867 background images of nature scenes.

To load the respective configuration, use

```python
from datasets import load_dataset

data = load_dataset(
    "tillspeicher/transforms_2d_base",
    "foregrounds", # or "backgrounds"
    # There is only a single "train" split
    split="train",
)
```
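
Once loaded, the split can be inspected like any other `datasets.Dataset`. The column names below (`image`, `label`) are assumptions based on typical imagefolder-style layouts and are not guaranteed by this card; check `data.features` to confirm.

```python
# Column names "image" and "label" are assumptions -- verify via data.features.
print(data.features)
print(len(data))

sample = data[0]
img = sample["image"]  # typically a PIL.Image.Image
print(img.size, img.mode)

# For the "foregrounds" configuration, a class label may also be present.
if "label" in sample:
    print(data.features["label"].int2str(sample["label"]))
```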

## Citation

If you are using the Transforms-2D dataset, please consider citing the following paper:
```bibtex
@article{speicher2024understanding,
    title={Understanding the Role of Invariance in Transfer Learning},
    author={Till Speicher and Vedant Nanda and Krishna P. Gummadi},
    journal={Transactions on Machine Learning Research},
    issn={2835-8856},
    year={2024},
    url={https://arxiv.org/abs/2407.04325},
}
```


## Attribution

The data here is based on the [SI-Score dataset](https://github.com/google-research/si-score/tree/master?tab=readme-ov-file) ([paper](https://arxiv.org/abs/2007.08558)), re-uploaded to the Hugging Face Hub to make it easier to access than the original AWS S3 bucket.
If you are using this dataset, please consider citing the original authors as well.

The foreground images are segmented versions of images from OpenImages, released under CC licenses.
The attributions for each image can be found on the [OpenImages](https://storage.googleapis.com/openimages/web/download.html) website in the Image IDs CSVs.

The background images come from Pexels.com, and most of them carry a Pexels license.
The attributions for the background images that do not carry a Pexels license are listed in `samples_attributions.md`.