Add dataset card for RISE-Video #1
by nielsr (HF Staff) - opened

README.md ADDED
@@ -0,0 +1,71 @@

---
language:
- en
task_categories:
- image-to-video
tags:
- video-generation
- reasoning-benchmark
- TI2V
arxiv: 2602.05986
---

# RISE-Video: Can Video Generators Decode Implicit World Rules?

[**Paper**](https://huggingface.co/papers/2602.05986) | [**GitHub**](https://github.com/VisionXLab/Rise-Video)

RISE-Video is a reasoning-oriented benchmark for Text-Image-to-Video (TI2V) synthesis that shifts evaluation from surface-level aesthetics to a model's ability to reason about implicit world rules.

The benchmark comprises 467 human-annotated samples spanning eight categories: *Commonsense Knowledge*, *Subject Knowledge*, *Perceptual Knowledge*, *Societal Knowledge*, *Logical Capability*, *Experiential Knowledge*, *Spatial Knowledge*, and *Temporal Knowledge*. It provides a structured testbed for probing model intelligence across these dimensions.

## Evaluation Protocol

The framework introduces a multi-dimensional evaluation protocol consisting of four metrics:

- **Reasoning Alignment**: Evaluates how well the video follows implicit constraints.
- **Temporal Consistency**: Assesses the logical flow and consistency over time.
- **Physical Rationality**: Checks for adherence to physical laws.
- **Visual Quality**: Measures surface-level aesthetic and technical fidelity.

## Usage

### Video Generation

The first frame and text prompt for each sample are provided in this dataset.

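A minimal loading sketch follows; the Hub repository id is an assumption on my part (the card does not state it), so adjust it to the actual dataset id and inspect the record fields after loading.

```python
# Minimal loading sketch. "VisionXLab/RISE-Video" is a hypothetical repository id,
# and the exact field layout is not documented on this card, so inspect a record.
from datasets import load_dataset

ds = load_dataset("VisionXLab/RISE-Video", split="train")  # hypothetical repo id and split
print(ds[0])  # e.g., the first-frame image, the text prompt, and the task_id
```
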
To follow the evaluation protocol, generated videos should be organized in the following folder structure:

`{MODEL NAME}/{CATEGORY}/{TASK_ID}`

- `MODEL NAME`: The name of the generation model.
- `CATEGORY`: The category of the sample (e.g., `Subject Knowledge`).
- `TASK_ID`: The unique ID of each sample (corresponding to the `"task_id"` field in the JSON metadata).

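As a concrete illustration, the sketch below creates this directory layout from the JSON metadata. It assumes, beyond what the card states, that the metadata file is named `metadata.json` and that each record carries a `category` field alongside the confirmed `task_id`.

```python
# Sketch of creating the {MODEL NAME}/{CATEGORY}/{TASK_ID} layout from metadata.
# Assumptions: the metadata JSON is a list of records with "task_id" and "category"
# fields ("category" is not confirmed by the card), stored as metadata.json.
import json
from pathlib import Path

MODEL_NAME = "my-ti2v-model"  # hypothetical model name

with open("metadata.json") as f:
    samples = json.load(f)

for sample in samples:
    out_dir = Path(MODEL_NAME) / sample["category"] / str(sample["task_id"])
    out_dir.mkdir(parents=True, exist_ok=True)
    # Write the video generated from this sample's first frame and prompt
    # into out_dir, e.g., as out_dir / "video.mp4".
```
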
### Frame Extraction

To extract the video frames required for the *Reasoning Alignment* dimension, you can use the script provided in the repository:

```bash
cd reasoning_fps
python fps_clip.py
```

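The exact sampling behavior lives in `fps_clip.py`; as a rough, assumption-laden stand-in, fixed-rate frame sampling can be done with OpenCV as sketched below (the 1 frame/second rate is my assumption, not the script's documented setting).

```python
# Rough stand-in for fps_clip.py: save frames from a video at a fixed rate.
# The default of 1 frame/second is an assumption, not the repository's setting.
from pathlib import Path

import cv2

def sample_frames(video_path: str, out_dir: str, target_fps: float = 1.0) -> int:
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    native_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing
    step = max(int(round(native_fps / target_fps)), 1)  # keep every `step`-th frame
    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            cv2.imwrite(f"{out_dir}/frame_{saved:04d}.png", frame)
            saved += 1
        index += 1
    cap.release()
    return saved
```
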
### Evaluation

To run the automated evaluation pipeline:

1. Configure the parameters in `eval.py` (the path to the JSON metadata, the root directories of the generated videos, and your OpenAI API keys).
2. Run the evaluation script:

```bash
python eval.py
```

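The variable names below are illustrative, not `eval.py`'s actual parameters; the point is to keep the OpenAI key out of the source by reading it from the environment.

```python
# Illustrative configuration values; eval.py's real parameter names may differ.
import os

METADATA_JSON = "metadata.json"  # hypothetical path to the dataset's JSON metadata
VIDEO_ROOT = "outputs"           # root containing {MODEL NAME}/{CATEGORY}/{TASK_ID} folders
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]  # read the key from the environment
```
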
## Citation

```bibtex
@misc{liu2026risevideovideogeneratorsdecode,
  title={RISE-Video: Can Video Generators Decode Implicit World Rules?},
  author={Mingxin Liu and Shuran Ma and Shibei Meng and Xiangyu Zhao and Zicheng Zhang and Shaofeng Zhang and Zhihang Zhong and Peixian Chen and Haoyu Cao and Xing Sun and Haodong Duan and Xue Yang},
  year={2026},
  eprint={2602.05986},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2602.05986},
}
```