---
task_categories:
- reinforcement-learning
- robotics
tags:
- robotics
- libero
- manipulation
- semantic-action-chunking
- vision-language
- imitation-learning
size_categories:
- 100K<n<1M
---
# GATE-VLAP Datasets

**Grounded Action Trajectory Embeddings with Vision-Language Action Planning**
This repository contains preprocessed datasets from the LIBERO benchmark suite, specifically designed for training vision-language-action models with semantic action segmentation.
## Why Raw Format?
We provide datasets in raw PNG + JSON format rather than pre-packaged TAR/WebDataset files for several important reasons:
### Advantages of Raw Format

- Easy Inspection: Browse and visualize individual demonstrations directly on HuggingFace
- Maximum Flexibility:
  - Load with any framework (PyTorch, TensorFlow, JAX)
  - Convert to your preferred format (TAR, RLDS, LeRobot, custom)
  - Cherry-pick specific demos or subtasks
- Better Debugging:
  - Inspect problematic frames without extracting archives
  - Verify data quality visually
  - Check action sequences frame-by-frame
- Transparent: See the exact file structure and metadata organization
- Version Control: Git LFS handles individual files better than large archives
## Converting to TAR/WebDataset
If you need TAR format for efficient streaming during training, you can easily convert:
```python
import webdataset as wds
from pathlib import Path
import json
from PIL import Image


def convert_to_tar(input_dir, output_pattern, maxcount=1000):
    """
    Convert raw PNG+JSON format to WebDataset TAR shards.

    Args:
        input_dir: Path to subtask directory (e.g., "libero_10/pick_up_the_black_bowl")
        output_pattern: Output pattern (e.g., "output/shard-%06d.tar")
        maxcount: Max samples per shard (default: 1000 frames per TAR)
    """
    subtask_path = Path(input_dir)
    with wds.ShardWriter(output_pattern, maxcount=maxcount) as sink:
        # Iterate through demos
        for demo_dir in sorted(subtask_path.iterdir()):
            if not demo_dir.is_dir():
                continue
            # Iterate through timesteps
            for json_file in sorted(demo_dir.glob("*.json")):
                png_file = json_file.with_suffix(".png")
                if not png_file.exists():
                    continue
                # Load per-timestep metadata
                with open(json_file) as f:
                    data = json.load(f)
                # Create WebDataset sample (keys become file extensions in the TAR)
                sample = {
                    "__key__": f"{demo_dir.name}/{json_file.stem}",
                    "png": Image.open(png_file),
                    "json": data,
                    "action.pyd": data["action"],            # pickled, NumPy-compatible
                    "robot_state.pyd": data["robot_state"],  # pickled robot state
                }
                sink.write(sample)


# Example: Convert a subtask to TAR
convert_to_tar(
    "libero_10/pick_up_the_black_bowl",
    "tar_output/pick_up_the_black_bowl-%06d.tar",
)
```
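Once converted, the shards can be streamed back during training with the standard WebDataset pipeline. A minimal sketch, assuming the output path from the conversion example above (adjust the brace range to the number of shards actually produced):

```python
import webdataset as wds

# Stream the shards written above; keys match the extensions used in convert_to_tar.
dataset = (
    wds.WebDataset("tar_output/pick_up_the_black_bowl-{000000..000009}.tar")
    .decode("rgb")                              # decode "png" entries to RGB arrays
    .to_tuple("png", "json", "action.pyd")
)

for image, meta, action in dataset:
    print(image.shape, meta["timestep"], len(action))
    break
```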
## Loading Raw Data
```python
from pathlib import Path
import json
from PIL import Image
import numpy as np


def load_demo(demo_dir):
    """Load a single demonstration as a list of per-timestep dicts."""
    frames = []
    demo_path = Path(demo_dir)
    for json_file in sorted(demo_path.glob("*.json")):
        # Load metadata
        with open(json_file) as f:
            data = json.load(f)
        # Convert the action list to a NumPy array
        data["action"] = np.array(data["action"])
        # Load image
        png_file = json_file.with_suffix(".png")
        data["image"] = np.array(Image.open(png_file))
        frames.append(data)
    return frames


# Load a specific demo
demo = load_demo("libero_10/pick_up_the_black_bowl/demo_0")
print(f"Demo length: {len(demo)} frames")
print(f"Action shape: {demo[0]['action'].shape}")  # actions converted to arrays above
```
## Datasets Included

### LIBERO-10 (Long-Horizon Tasks)
- Task Type: 10 complex, long-horizon manipulation tasks
- Segmentation Method: Semantic Action Chunking using Gemini Vision API
- Demos: 1,354 demonstrations across 29 subtasks
- Frames: 103,650 total frames
- Subtasks: Tasks are automatically segmented into atomic subtasks
Example Tasks:
- `pick_up_the_black_bowl` → segmented into pick and place subtasks
- `close_the_drawer` → segmented into approach, grasp, close subtasks
- `put_the_bowl_in_the_drawer` → multi-step pick, open, place, close sequence
### LIBERO-Object (Object Manipulation)
- Task Type: 10 object-centric manipulation tasks
- Segmentation Method: Rule-based gripper detection with stop signals
- Demos: 875 demonstrations across 20 subtasks
- Frames: 66,334 total frames
- Subtasks: Pick and place variations for 10 different objects
Example Tasks:
- `pick_up_the_alphabet_soup` → approach, grasp, lift
- `place_the_alphabet_soup_on_the_basket` → move, position, place, release
## 📁 Dataset Structure
```
gate-institute/GATE-VLAP-datasets/
├── libero_10/                               # Long-horizon tasks
│   ├── close_the_drawer/
│   │   ├── demo_0/
│   │   │   ├── demo_0_timestep_0000.png     # RGB observation (128x128)
│   │   │   ├── demo_0_timestep_0000.json    # Action + metadata
│   │   │   ├── demo_0_timestep_0001.png
│   │   │   ├── demo_0_timestep_0001.json
│   │   │   └── ...
│   │   ├── demo_1/
│   │   └── ...
│   ├── pick_up_the_black_bowl/
│   └── ... (29 subtasks total)
│
├── libero_object/                           # Object manipulation tasks
│   ├── pick_up_the_alphabet_soup/
│   │   ├── demo_0/
│   │   │   ├── demo_0_timestep_0000.png
│   │   │   ├── demo_0_timestep_0000.json
│   │   │   └── ...
│   │   └── ...
│   └── ... (20 subtasks total)
│
└── metadata/                                # Dataset statistics & segmentation
    ├── libero_10_complete_stats.json
    ├── libero_10_all_segments.json
    ├── libero_object_complete_stats.json
    └── libero_object_all_segments.json
```
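To mirror this layout locally, the repository can be fetched with `huggingface_hub`. A sketch, assuming you only need one subtask plus the metadata files (the `allow_patterns` globs are illustrative):

```python
from huggingface_hub import snapshot_download

# Download a single subtask and the metadata directory from the dataset repo.
local_dir = snapshot_download(
    repo_id="gate-institute/GATE-VLAP-datasets",
    repo_type="dataset",
    allow_patterns=["libero_10/pick_up_the_black_bowl/*", "metadata/*"],
)
print("Dataset downloaded to:", local_dir)
```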
## Data Format

### JSON Metadata (per timestep)

Each `.json` file contains:
```json
{
  "action": [0.1, -0.2, 0.0, 0.0, 0.0, 0.0, 1.0],  // 7-DOF action (xyz, rpy, gripper)
  "robot_state": [...],                            // Joint positions, velocities
  "demo_id": "demo_0",
  "timestep": 42,
  "subtask": "pick_up_the_black_bowl",
  "parent_task": "LIBERO_10",
  "is_stop_signal": false                          // Segment boundary marker
}
```
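A quick sanity check of this schema against a single timestep file (a sketch; the file path is illustrative and the field names follow the example above):

```python
import json

with open("libero_10/pick_up_the_black_bowl/demo_0/demo_0_timestep_0000.json") as f:
    frame = json.load(f)

expected = {"action", "robot_state", "demo_id", "timestep",
            "subtask", "parent_task", "is_stop_signal"}
missing = expected - frame.keys()
assert not missing, f"Missing fields: {missing}"
assert len(frame["action"]) == 7, "Expected a 7-DOF action vector"
```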
### Action Space

- Dimensions: 7-DOF
  - `[0:3]`: End-effector position delta (x, y, z)
  - `[3:6]`: End-effector orientation delta (roll, pitch, yaw)
  - `[6]`: Gripper action (0.0 = close, 1.0 = open)
- Range: Normalized to [-1, 1]
- Control: Delta actions (relative to current pose)
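Splitting an action vector into these components is straightforward; a minimal sketch using the layout above:

```python
import numpy as np

action = np.array([0.1, -0.2, 0.0, 0.0, 0.0, 0.0, 1.0])  # example 7-DOF action

position_delta = action[0:3]     # (dx, dy, dz)
orientation_delta = action[3:6]  # (droll, dpitch, dyaw)
gripper = action[6]              # 0.0 = close, 1.0 = open

print(position_delta, orientation_delta, "open" if gripper > 0.5 else "close")
```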
### Image Format
- Resolution: 128×128 pixels
- Channels: RGB (3 channels)
- Format: PNG (lossless compression)
- Camera: Front-facing agentview camera
## Metadata Files Explained

### 1. `libero_10_complete_stats.json`
Purpose: Overview statistics for the entire LIBERO-10 dataset
```json
{
  "dataset": "LIBERO-10",
  "total_parent_tasks": 10,
  "total_subtasks": 29,
  "total_demos": 1354,
  "total_frames": 103650,
  "parent_task_mapping": {
    "LIBERO_10": {
      "frames": 103650,
      "demos": 1354,
      "subtasks": ["pick_up_the_black_bowl", "close_the_drawer", ...]
    }
  },
  "subtask_details": {
    "pick_up_the_black_bowl": {
      "demo_count": 48,
      "frame_count": 3516,
      "avg_frames_per_demo": 73.25,
      "parent_task": "LIBERO_10"
    },
    ...
  }
}
```
Use Cases:
- Understand dataset composition
- Plan training splits
- Check demo/frame distribution across tasks
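For example, the stats file can be used to inspect how demos are distributed before defining splits. A sketch, assuming the metadata path from the repository layout above:

```python
import json

with open("metadata/libero_10_complete_stats.json") as f:
    stats = json.load(f)

# Rank subtasks by demo count to spot imbalance before splitting.
ranked = sorted(
    stats["subtask_details"].items(),
    key=lambda kv: kv[1]["demo_count"],
    reverse=True,
)
for name, info in ranked[:5]:
    print(f"{name}: {info['demo_count']} demos, {info['frame_count']} frames")
```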
### 2. `libero_10_all_segments.json`
Purpose: Detailed segmentation metadata for each demonstration
```json
{
  "demo_0": {
    "subtask": "pick_up_the_black_bowl",
    "parent_task": "LIBERO_10",
    "segments": [
      {
        "segment_id": 0,
        "start_frame": 0,
        "end_frame": 35,
        "description": "Approach the black bowl",
        "action_type": "reach"
      },
      {
        "segment_id": 1,
        "start_frame": 36,
        "end_frame": 45,
        "description": "Grasp the black bowl",
        "action_type": "grasp"
      },
      ...
    ],
    "segmentation_method": "gemini_vision_api",
    "total_segments": 3
  },
  ...
}
```
Use Cases:
- Train with semantic action chunks
- Implement hierarchical policies
- Analyze action primitives
- Filter by segment type
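As an illustration, the segment boundaries can be used to slice a loaded demonstration into semantic chunks. A sketch that reuses `load_demo` from the Loading Raw Data section and assumes `end_frame` is inclusive, as in the example above:

```python
import json

with open("metadata/libero_10_all_segments.json") as f:
    all_segments = json.load(f)

demo = load_demo("libero_10/pick_up_the_black_bowl/demo_0")  # see "Loading Raw Data"

chunks = []
for seg in all_segments["demo_0"]["segments"]:
    frames = demo[seg["start_frame"] : seg["end_frame"] + 1]  # inclusive end_frame assumed
    chunks.append({
        "action_type": seg["action_type"],
        "description": seg["description"],
        "frames": frames,
    })

for chunk in chunks:
    print(f"{chunk['action_type']}: {len(chunk['frames'])} frames - {chunk['description']}")
```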
### 3. `libero_object_complete_stats.json`

Purpose: Statistics for the LIBERO-Object dataset (same structure as LIBERO-10)
Key Differences:
- Fewer, simpler subtasks (20 vs 29)
- Object-centric task naming
- Rule-based segmentation instead of vision-based
### 4. `libero_object_all_segments.json`

Purpose: Segmentation metadata for LIBERO-Object demonstrations

Segmentation Method: Rule-based gripper detection
- Segments are identified by gripper state changes
- Stop signals mark task completion
- Segment boundaries are more consistent than with vision-based segmentation
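To illustrate the idea (a sketch of this style of rule, not the exact pipeline used to build the metadata), a demo can be split wherever the gripper command in the 7-DOF action flips between open and close:

```python
import numpy as np


def segment_by_gripper(actions, threshold=0.5):
    """Split a demo at gripper open/close transitions.

    actions: array-like of shape (T, 7); the last dimension is the gripper command.
    Returns a list of (start_frame, end_frame) pairs with inclusive ends.
    """
    gripper_open = np.asarray(actions)[:, 6] > threshold
    boundaries = [0]
    for t in range(1, len(gripper_open)):
        if gripper_open[t] != gripper_open[t - 1]:
            boundaries.append(t)
    boundaries.append(len(gripper_open))
    return [(boundaries[i], boundaries[i + 1] - 1) for i in range(len(boundaries) - 1)]


# Example: actions stacked from a demo loaded with load_demo
# segments = segment_by_gripper(np.stack([frame["action"] for frame in demo]))
```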
## Citation
If you use this dataset, please cite:
```bibtex
@article{gateVLAP2024,
  title={GATE-VLAP: Grounded Action Trajectory Embeddings with Vision-Language Action Planning},
  author={[Your Name]},
  journal={arXiv preprint arXiv:XXXX.XXXXX},
  year={2024}
}

@inproceedings{liu2023libero,
  title={LIBERO: Benchmarking Knowledge Transfer for Lifelong Robot Learning},
  author={Liu, Bo and Zhu, Yifeng and Gao, Chongkai and Feng, Yihao and Liu, Qiang and Zhu, Yuke and Stone, Peter},
  booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2023}
}
```
## Related Resources
- Model Checkpoints: gate-institute/GATE-VLAP (coming soon)
- Original LIBERO: https://github.com/Lifelong-Robot-Learning/LIBERO
- Paper: arXiv:XXXX.XXXXX (coming soon)
## Acknowledgments
- LIBERO Benchmark: Original dataset by Liu et al. (2023)
- Segmentation: Gemini Vision API for LIBERO-10 semantic chunking
- Infrastructure: Processed on GATE Institute infrastructure
## Contact
For questions or issues, please open an issue on our GitHub repository or contact [your-email@example.com].
Dataset Version: 1.0
Last Updated: December 2025
Maintainer: GATE Institute