
Common-O

measuring multimodal reasoning across scenes

Common-O, inspired by cognitive tests for humans, probes multimodal LLMs' ability to reason across scenes by asking "what’s in common?"


Common-O comprises household objects:


We have two subsets: Common-O (3–8 objects) and Common-O Complex (8–16 objects).

Multimodal LLMs excel at single image perception, but struggle with multi-scene reasoning


Evaluating a Multimodal LLM on Common-O

import datasets

# load the main subset and get a sample
common_o = datasets.load_dataset("facebook/Common-O")["main"]
# common_o_complex = datasets.load_dataset("facebook/Common-O")["complex"]
x = common_o[3]

# `model` is your multimodal LLM, called on both images and the question
output: str = model(x["image_1"], x["image_2"], x["question"])

# exact-match comparison against the ground truth (check_answer is defined below)
check_answer(output, x["answer"])
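To score a whole split, the loop looks like the sketch below. The stub model and toy samples here are hypothetical stand-ins purely to show the shape of the loop; in practice you iterate over common_o and call your model on image_1, image_2, and the question. The condensed check mirrors the exact-match criterion defined in full below.

```python
import re

def check_answer(generation: str, ground_truth: str) -> bool:
    # condensed version of the exact-match criterion defined below
    preds = re.sub("Answer:", "", generation.split("\n")[-1])
    preds = sorted(p.strip() for p in preds.split(","))
    return preds == sorted(a.strip() for a in ground_truth.split(","))

# Hypothetical stand-ins: toy samples and canned "model" outputs.
samples = [
    {"question": "q1", "answer": "cup, plate"},
    {"question": "q2", "answer": "lamp"},
]
stub_outputs = {"q1": "Answer: plate, cup", "q2": "Answer: chair"}

correct = sum(
    check_answer(stub_outputs[x["question"]], x["answer"]) for x in samples
)
accuracy = correct / len(samples)
print(accuracy)  # 0.5
```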

To check the answer, we use an exact-match criterion:

import re

def check_answer(generation: str, ground_truth: str) -> bool:
    """
    Args:
        generation: model response, expected to contain "Answer: ..."
        ground_truth: comma-separated string of correct answers

    Returns: bool, whether the prediction matches the ground truth
    """
    preds = generation.split("\n")[-1]
    preds = re.sub("Answer:", "", preds)
    preds = preds.split(",")
    preds = [p.strip() for p in preds]
    # sort full strings so item order doesn't matter (matches the ground-truth sort below)
    preds = sorted(preds)

    # split into a list
    ground_truth_list = [a.strip() for a in ground_truth.split(",")]
    ground_truth_list = sorted(ground_truth_list)
    return preds == ground_truth_list

Some models use specific output formats for their answers, e.g. \boxed{A} or Answer: A. We recommend spot-checking a few responses, as you may notice slight variations based on this. This public set also differs slightly from the set used in the original paper, so while the measured capabilities are identical, do not expect an exact replication of the accuracy figures.
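For example, a small pre-processing step can rewrite \boxed{...} outputs into the Answer: ... form that check_answer expects. This helper is a hypothetical sketch (an assumption about a common output format, not part of the released evaluation):

```python
import re

def strip_boxed(generation: str) -> str:
    # Hypothetical pre-processing: rewrite "\boxed{cup, plate}" into
    # "Answer: cup, plate" so the exact-match parsing applies unchanged.
    m = re.search(r"\\boxed\{([^}]*)\}", generation)
    if m:
        return "Answer: " + m.group(1)
    return generation

print(strip_boxed(r"\boxed{cup, plate}"))  # Answer: cup, plate
print(strip_boxed("Answer: lamp"))         # unchanged
```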

If you'd like to use a single image model, here's a handy function to turn image_1 and image_2 into a single split image:

from PIL import Image

def concat_images_horizontal(
    image1: Image.Image,
    image2: Image.Image,
    include_space: bool = True,
    space_width: int = 20,
    fill_color: tuple = (0, 0, 0),
) -> Image.Image:
    # from https://note.nkmk.me/en/python-pillow-concat-images/
    if not include_space:
        dst = Image.new("RGB", (image1.width + image2.width, image1.height))
        dst.paste(image1, (0, 0))
        dst.paste(image2, (image1.width, 0))
    else:
        total_width = image1.width + space_width + image2.width
        max_height = max(image1.height, image2.height)

        dst = Image.new("RGB", (total_width, max_height), color=fill_color)
        dst.paste(image1, (0, (max_height - image1.height) // 2))
        dst.paste(image2, (image1.width + space_width, (max_height - image2.height) // 2))
    return dst

For more details about Common-O, see the paper.

Please note that we fixed an upload issue on March 3, 2026; if you cached an earlier version, please use the latest one. Thank you to Milad Afshari for catching this!

Cite:

@inproceedings{Ross2025whats,
  title  = {What's in Common? Multimodal Models Hallucinate When Reasoning Across Scenes},
  author = {Candace Ross and Florian Bordes and Adina Williams and Polina Kirichenko and Mark Ibrahim},
  year   = {2025},
  url    = {https://openreview.net/attachment?id=d0F0N0cu4n&name=supplementary_material}
}