---
license: mit
task_categories:
- image-classification
- image-to-text
- zero-shot-image-classification
language:
- en
pretty_name: COLA
size_categories:
- 10K<n<100K
tags:
- compositionality
- vision-language
- visual-genome
- clevr
- paco
configs:
- config_name: multiobjects
data_files:
- split: val
path: data/multiobjects.parquet
- config_name: singleobjects_gqa
data_files:
- split: val
path: data/singleobjects_gqa.parquet
- config_name: singleobjects_clevr
data_files:
- split: val
path: data/singleobjects_clevr.parquet
- config_name: singleobjects_paco
data_files:
- split: val
path: data/singleobjects_paco.parquet
---
# COLA: Compose Objects Localized with Attributes
Self-contained Hugging Face port of the **COLA** benchmark from the paper
["How to adapt vision-language models to Compose Objects Localized with Attributes?"](https://arxiv.org/abs/2305.03689).
- Paper: https://arxiv.org/abs/2305.03689
- Project page: https://cs-people.bu.edu/array/research/cola/
- Original code & data: https://github.com/ArijitRay1993/COLA
This repository bundles the benchmark annotations as Parquet files and the referenced
images as regular files under `images/`, so the dataset is fully self-contained.
## Dataset Structure
```
.
├── data/
│   ├── multiobjects.parquet
│   ├── singleobjects_gqa.parquet
│   ├── singleobjects_clevr.parquet
│   ├── singleobjects_paco.parquet
│   ├── singleobjects_gqa_labels.json
│   ├── singleobjects_clevr_labels.json
│   └── singleobjects_paco_labels.json
└── images/
    ├── vg/<vg_id>.jpg          # Visual Genome images (multiobjects + GQA)
    ├── clevr/valA/*.png        # CLEVR-CoGenT valA
    ├── clevr/valB/*.png        # CLEVR-CoGenT valB
    ├── coco/val2017/*.jpg      # COCO val2017 (PACO)
    └── coco/train2017/*.jpg    # COCO train2017 (PACO)
```
Image paths stored in parquet are **relative to the repository root**, e.g.
`images/vg/2390970.jpg`. Load them by joining with the local clone / snapshot path.
## Configs / Splits
### `multiobjects` (210 pairs)
A hard image–caption matching task. Each row contains two images and two captions
whose objects/attributes are swapped: caption 1 applies to image 1 (not image 2) and
vice versa.
| Field | Type | Description |
|------------|--------|-----------------------------------|
| `image1` | string | Relative path to image 1 |
| `caption1` | string | Caption describing image 1 |
| `image2` | string | Relative path to image 2 |
| `caption2` | string | Caption describing image 2 |
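Because the captions are attribute swaps of each other, a pair should count as correct only when each image prefers its own caption in both directions. A minimal sketch of that scoring protocol, assuming `score(image_path, caption)` is a hypothetical stand-in for any image–text similarity model (e.g. CLIP); the original repo's evaluation may weight things differently:

```python
def multiobject_accuracy(rows, score):
    """Fraction of pairs where each image matches its own caption.

    `score(image_path, caption)` is any image-text similarity function.
    A pair is correct only if BOTH directions are resolved: image 1
    prefers caption 1 AND image 2 prefers caption 2.
    """
    correct = 0
    for ex in rows:
        ok1 = score(ex["image1"], ex["caption1"]) > score(ex["image1"], ex["caption2"])
        ok2 = score(ex["image2"], ex["caption2"]) > score(ex["image2"], ex["caption1"])
        correct += ok1 and ok2
    return correct / len(rows)
```

Requiring both directions is what makes the task hard: a model that matches each caption to a random image in the pair scores only 25% under this criterion.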
### `singleobjects_gqa` (2,589 rows), `singleobjects_clevr` (30,000 rows), `singleobjects_paco` (7,921 rows)
Multi-label classification across fixed vocabularies of multi-attribute object
classes (320 for GQA, 96 for CLEVR, 400 for PACO). The label lists live at
`data/singleobjects_<subset>_labels.json`.
| Field | Type | Description |
|----------------------|-----------------|---------------------------------------------------------------|
| `image` | string | Relative path to the image |
| `objects_attributes` | string (JSON) | Objects + attributes annotation (GQA and CLEVR only) |
| `label` | list\[int] | Binary indicator per class (length matches labels vocabulary) |
| `hard_list` | list\[int] | Indicator of whether each class is "hard" for this image |
For a given class, the paper's mAP (mean average precision) metric is computed only on
images where `hard_list == 1` for that class. See `scripts/eval.py` in the
[original repo](https://github.com/ArijitRay1993/COLA) for the exact metric.
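A hedged sketch of that evaluation, assuming you have per-image, per-class model scores; `average_precision` and `hard_map` are illustrative helpers, and the exact weighting in the original `eval.py` may differ:

```python
import numpy as np

def average_precision(scores, labels):
    """AP: mean precision at each positive, ranked by descending score."""
    order = np.argsort(-np.asarray(scores))
    labels = np.asarray(labels)[order]
    hits = np.cumsum(labels)
    precisions = hits / (np.arange(len(labels)) + 1)
    return float(precisions[labels == 1].mean())

def hard_map(rows, all_scores, num_classes):
    """Mean AP over classes, each computed only on the images where that
    class is marked hard (`hard_list[c] == 1`).

    `rows[i]` has `label` and `hard_list` (lists of 0/1 per class);
    `all_scores[i][c]` is the model's score for image i, class c.
    The class vocabulary itself lives in data/singleobjects_<subset>_labels.json.
    """
    aps = []
    for c in range(num_classes):
        idx = [i for i, ex in enumerate(rows) if ex["hard_list"][c] == 1]
        labels = [rows[i]["label"][c] for i in idx]
        if not idx or sum(labels) == 0:  # skip classes with no hard positives
            continue
        scores = [all_scores[i][c] for i in idx]
        aps.append(average_precision(scores, labels))
    return float(np.mean(aps))
```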
## Loading
```python
from datasets import load_dataset
mo = load_dataset("array/cola", "multiobjects", split="val")
gqa = load_dataset("array/cola", "singleobjects_gqa", split="val")
clv = load_dataset("array/cola", "singleobjects_clevr", split="val")
paco = load_dataset("array/cola", "singleobjects_paco", split="val")
```
To open an image, resolve it against the local snapshot root:
```python
from huggingface_hub import snapshot_download
from PIL import Image
import os
root = snapshot_download("array/cola", repo_type="dataset")
ex = mo[0]
img1 = Image.open(os.path.join(root, ex["image1"]))
img2 = Image.open(os.path.join(root, ex["image2"]))
```
Or, if you've cloned the repo with `git lfs`, just open paths directly:
```python
Image.open(f"{REPO_DIR}/{ex['image1']}")
```
## Licensing / Source notes
- Visual Genome, CLEVR-CoGenT, and COCO images are redistributed here under their
respective original licenses. Please refer to the upstream datasets:
- [Visual Genome](https://visualgenome.org/) (CC BY 4.0)
- [CLEVR-CoGenT](https://cs.stanford.edu/people/jcjohns/clevr/) (CC BY 4.0)
- [COCO 2017](https://cocodataset.org/) (CC BY 4.0 for annotations; Flickr terms for images)
- The COLA annotations (parquet files and label lists) are released under the MIT
license, matching the [original COLA repo](https://github.com/ArijitRay1993/COLA).
## Citation
```bibtex
@article{ray2023cola,
title = {COLA: How to adapt vision-language models to Compose Objects Localized with Attributes?},
author = {Ray, Arijit and Radenovic, Filip and Dubey, Abhimanyu and Plummer, Bryan A. and Krishna, Ranjay and Saenko, Kate},
journal = {arXiv preprint arXiv:2305.03689},
year = {2023}
}
```