asenppopov committed
Commit 93dea3f · verified · 1 Parent(s): 3d25b9f

Update README.md

Files changed (1):
  1. README.md +126 -176
README.md CHANGED
@@ -17,83 +17,38 @@ size_categories:
 
  **Grounded Action Trajectory Embeddings with Vision-Language Action Planning**
 
- This repository contains preprocessed datasets from the LIBERO benchmark suite, specifically designed for training vision-language-action models with semantic action segmentation.
 
- ## Why Raw Format?
 
- We provide datasets in **raw PNG + JSON format** rather than pre-packaged TAR/WebDataset files for several important reasons:
 
- ### Advantages of Raw Format
 
- 1. **Easy Inspection**: Browse and visualize individual demonstrations directly on HuggingFace
- 2. **Maximum Flexibility**:
-    - Load with any framework (PyTorch, TensorFlow, JAX)
-    - Convert to your preferred format (TAR, RLDS, LeRobot, custom)
-    - Cherry-pick specific demos or subtasks
- 3. **Better Debugging**:
-    - Inspect problematic frames without extracting archives
-    - Verify data quality visually
-    - Check action sequences frame-by-frame
- 4. **Transparent**: See exact file structure and metadata organization
- 5. **Version Control**: Git LFS handles individual files better than large archives
 
- ### Converting to TAR/WebDataset
 
- If you need TAR format for efficient streaming during training, you can easily convert:
 
- ```python
- import webdataset as wds
- from pathlib import Path
- import json
- from PIL import Image
 
- def convert_to_tar(input_dir, output_pattern, maxcount=1000):
-     """
-     Convert raw PNG+JSON format to WebDataset TAR shards.
- 
-     Args:
-         input_dir: Path to subtask directory (e.g., "libero_10/pick_up_the_black_bowl")
-         output_pattern: Output pattern (e.g., "output/shard-%06d.tar")
-         maxcount: Max samples per shard (default: 1000 frames per TAR)
-     """
-     with wds.ShardWriter(output_pattern, maxcount=maxcount) as sink:
-         subtask_path = Path(input_dir)
- 
-         # Iterate through demos
-         for demo_dir in sorted(subtask_path.iterdir()):
-             if not demo_dir.is_dir():
-                 continue
- 
-             # Iterate through timesteps
-             for json_file in sorted(demo_dir.glob("*.json")):
-                 png_file = json_file.with_suffix(".png")
- 
-                 if not png_file.exists():
-                     continue
- 
-                 # Load data
-                 with open(json_file) as f:
-                     data = json.load(f)
- 
-                 # Create WebDataset sample
-                 sample = {
-                     "__key__": f"{demo_dir.name}/{json_file.stem}",
-                     "png": Image.open(png_file),
-                     "json": data,
-                     "action.pyd": data["action"],  # NumPy-compatible format
-                     "robot_state.pyd": data["robot_state"],
-                 }
- 
-                 sink.write(sample)
- 
- # Example: Convert a subtask to TAR
- convert_to_tar(
-     "libero_10/pick_up_the_black_bowl",
-     "tar_output/pick_up_the_black_bowl-%06d.tar"
- )
  ```
 
- ### Loading Raw Data
 
  ```python
  from pathlib import Path
@@ -102,7 +57,7 @@ from PIL import Image
  import numpy as np
 
  def load_demo(demo_dir):
-     """Load a single demonstration."""
      frames = []
      demo_path = Path(demo_dir)
 
@@ -119,10 +74,59 @@ def load_demo(demo_dir):
 
      return frames
 
- # Load a specific demo
- demo = load_demo("libero_10/pick_up_the_black_bowl/demo_0")
  print(f"Demo length: {len(demo)} frames")
- print(f"Action shape: {demo[0]['action'].shape}")
  ```
 
  ## Datasets Included
@@ -133,12 +137,12 @@ print(f"Action shape: {demo[0]['action'].shape}")
  - **Segmentation Method**: Semantic Action Chunking using Gemini Vision API
  - **Demos**: 1,354 demonstrations across 29 subtasks
  - **Frames**: 103,650 total frames
- - **Subtasks**: Tasks are automatically segmented into atomic subtasks
 
  **Example Tasks**:
- - `pick_up_the_black_bowl` → Segmented into pick and place subtasks
- - `close_the_drawer` → Segmented into approach, grasp, close subtasks
- - `put_the_bowl_in_the_drawer` → Multi-step pick, open, place, close sequence
 
  ### LIBERO-Object (Object Manipulation)
 
@@ -146,37 +150,26 @@ print(f"Action shape: {demo[0]['action'].shape}")
  - **Segmentation Method**: Semantic Action Chunking using Gemini Vision API
  - **Demos**: 875 demonstrations across 20 subtasks
  - **Frames**: 66,334 total frames
- - **Subtasks**: Pick and place variations for 10 different objects
 
  **Example Tasks**:
- - `pick_up_the_alphabet_soup` → Approach, grasp, lift
- - `place_the_alphabet_soup_on_the_basket` → Move, position, place, release
 
  ## 📁 Dataset Structure
 
  ```
  gate-institute/GATE-VLAP-datasets/
- ├── libero_10/                              # Long-horizon tasks
- │   ├── close_the_drawer/
- │   │   ├── demo_0/
- │   │   │   ├── demo_0_timestep_0000.png    # RGB observation (128x128)
- │   │   │   ├── demo_0_timestep_0000.json   # Action + metadata
- │   │   │   ├── demo_0_timestep_0001.png
- │   │   │   ├── demo_0_timestep_0001.json
- │   │   │   └── ...
- │   │   ├── demo_1/
- │   │   └── ...
- │   ├── pick_up_the_black_bowl/
- │   └── ... (29 subtasks total)
 
- ├── libero_object/                          # Object manipulation tasks
- │   ├── pick_up_the_alphabet_soup/
- │   │   ├── demo_0/
- │   │   │   ├── demo_0_timestep_0000.png
- │   │   │   ├── demo_0_timestep_0000.json
- │   │   │   └── ...
- │   │   └── ...
- │   └── ... (20 subtasks total)
 
  └── metadata/                               # Dataset statistics & segmentation
      ├── libero_10_complete_stats.json
@@ -185,6 +178,23 @@ gate-institute/GATE-VLAP-datasets/
      └── libero_object_all_segments.json
  ```
 
  ## Data Format
 
  ### JSON Metadata (per timestep)
@@ -193,13 +203,13 @@ Each `.json` file contains:
 
  ```json
  {
-   "action": [0.1, -0.2, 0.0, 0.0, 0.0, 0.0, 1.0],  // 7-DOF action (xyz, rpy, gripper)
-   "robot_state": [...],                            // Joint positions, velocities
    "demo_id": "demo_0",
    "timestep": 42,
    "subtask": "pick_up_the_black_bowl",
    "parent_task": "LIBERO_10",
-   "is_stop_signal": false                          // Segment boundary marker
  }
  ```
 
@@ -225,33 +235,7 @@ Each `.json` file contains:
 
  **Purpose**: Overview statistics for the entire LIBERO-10 dataset
 
- ```json
- {
-   "dataset": "LIBERO-10",
-   "total_parent_tasks": 10,
-   "total_subtasks": 29,
-   "total_demos": 1354,
-   "total_frames": 103650,
-   "parent_task_mapping": {
-     "LIBERO_10": {
-       "frames": 103650,
-       "demos": 1354,
-       "subtasks": ["pick_up_the_black_bowl", "close_the_drawer", ...]
-     }
-   },
-   "subtask_details": {
-     "pick_up_the_black_bowl": {
-       "demo_count": 48,
-       "frame_count": 3516,
-       "avg_frames_per_demo": 73.25,
-       "parent_task": "LIBERO_10"
-     },
-     ...
-   }
- }
- ```
- 
- **Use Case**:
  - Understand dataset composition
  - Plan training splits
  - Check demo/frame distribution across tasks
@@ -260,36 +244,13 @@ Each `.json` file contains:
 
  **Purpose**: Detailed segmentation metadata for each demonstration
 
- ```json
- {
-   "demo_0": {
-     "subtask": "pick_up_the_black_bowl",
-     "parent_task": "LIBERO_10",
-     "segments": [
-       {
-         "segment_id": 0,
-         "start_frame": 0,
-         "end_frame": 35,
-         "description": "Approach the black bowl",
-         "action_type": "reach"
-       },
-       {
-         "segment_id": 1,
-         "start_frame": 36,
-         "end_frame": 45,
-         "description": "Grasp the black bowl",
-         "action_type": "grasp"
-       },
-       ...
-     ],
-     "segmentation_method": "gemini_vision_api",
-     "total_segments": 3
-   },
-   ...
- }
- ```
 
- **Use Case**:
  - Train with semantic action chunks
  - Implement hierarchical policies
  - Analyze action primitives
@@ -297,21 +258,11 @@ Each `.json` file contains:
 
  ### 3. `libero_object_complete_stats.json`
 
- **Purpose**: Statistics for LIBERO-Object dataset (same structure as LIBERO-10)
- 
- **Key Differences**:
- - Fewer, simpler subtasks (20 vs 29)
- - Object-centric task naming
- - Rule-based segmentation instead of vision-based
 
  ### 4. `libero_object_all_segments.json`
 
- **Purpose**: Segmentation for LIBERO-Object demonstrations
- 
- **Segmentation Method**: Rule-based gripper detection
- - Segments identified by gripper state changes
- - Stop signals mark task completion
- - More consistent segment boundaries than vision-based
 
  ## Citation
 
@@ -320,7 +271,7 @@ If you use this dataset, please cite:
  ```bibtex
  @article{gateVLAP_SAC2026,
    title={Atomic Action Slicing: Planner-Aligned Options for Generalist VLA Agents},
-   author={[Stefan Tabakov, Asen Popov, Dimitar Dimitrov, Ensiye Kiyamousavi and Boris Kraychev]},
    journal={arXiv preprint arXiv:XXXX.XXXXX},
    conference={The 41st ACM/SIGAPP Symposium On Applied Computing (SAC2026), track on Intelligent Robotics and Multi-Agent Systems (IRMAS)},
    year={2025}
@@ -329,7 +280,7 @@ If you use this dataset, please cite:
  @inproceedings{liu2023libero,
    title={LIBERO: Benchmarking Knowledge Transfer for Lifelong Robot Learning},
    author={Liu, Bo and Zhu, Yifeng and Gao, Chongkai and Feng, Yihao and Liu, Qiang and Zhu, Yuke and Stone, Peter},
-   booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
    year={2023}
  }
  ```
@@ -338,18 +289,17 @@ If you use this dataset, please cite:
 
  - **Model Checkpoints**: [gate-institute/GATE-VLAP](https://huggingface.co/gate-institute/GATE-VLAP)
  - **Original LIBERO**: [https://github.com/Lifelong-Robot-Learning/LIBERO](https://github.com/Lifelong-Robot-Learning/LIBERO)
- - **Paper**: [arXiv:XXXX.XXXXX](https://arxiv.org) *(coming soon)*
- 
 
  ## Acknowledgments
 
  - **LIBERO Benchmark**: Original dataset by Liu et al. (2023)
- - **Segmentation**: Gemini Vision API for LIBERO semantic chunking
- - **Infrastructure**: Processed on GATE Institute infrastructure
 
  ## Contact
 
- For questions or issues, please open an issue on our [GitHub repository *(coming soon)*](https://github.com/your-repo).
 
  ---
 
 
  **Grounded Action Trajectory Embeddings with Vision-Language Action Planning**
 
+ This repository contains preprocessed datasets from the LIBERO benchmark suite in WebDataset TAR format, specifically designed for training vision-language-action models with semantic action segmentation.
 
+ ## Data Format: WebDataset TAR
 
+ We provide datasets in **WebDataset TAR format** for efficient storage and streaming:
 
+ ✅ **Fast loading** - Efficient streaming during training
+ ✅ **Easy downloading** - Single file per subtask
+ ✅ **HuggingFace optimized** - Quick browsing and file listing
+ ✅ **Inspectable** - Extract locally to view individual frames
 
+ ### Extracting TAR Files
 
+ ```bash
+ # Download a subtask
+ wget https://huggingface.co/datasets/gate-institute/GATE-VLAP-datasets/resolve/main/libero_10/pick_up_the_black_bowl.tar
 
+ # Extract all files
+ tar -xf pick_up_the_black_bowl.tar
 
+ # View structure
+ ls
+ # Output: demo_0/ demo_1/ demo_2/ ...
 
+ # View demo contents
+ ls demo_0/
+ # Output: demo_0_timestep_0000.png demo_0_timestep_0000.json
+ #         demo_0_timestep_0001.png demo_0_timestep_0001.json
+ #         ...
  ```
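
If you prefer to stay in Python, the same archive can be fetched with `huggingface_hub` and unpacked with the standard library. This is a minimal sketch, not part of the dataset tooling; the target directory `extracted/pick_up_the_black_bowl` is just an example:

```python
import tarfile
from huggingface_hub import hf_hub_download

# Download one subtask archive from the dataset repo (cached locally by huggingface_hub)
tar_path = hf_hub_download(
    repo_id="gate-institute/GATE-VLAP-datasets",
    filename="libero_10/pick_up_the_black_bowl.tar",
    repo_type="dataset",
)

# Unpack into a working directory, mirroring the `tar -xf` step above
with tarfile.open(tar_path) as tar:
    tar.extractall("extracted/pick_up_the_black_bowl")
```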
 
+ ### Loading Raw Data (After Extraction)
 
  ```python
  from pathlib import Path
  import numpy as np
 
  def load_demo(demo_dir):
+     """Load a single demonstration from extracted TAR."""
      frames = []
      demo_path = Path(demo_dir)
 
      # ... (per-timestep loading loop elided in this diff view)
 
      return frames
 
+ # After extracting pick_up_the_black_bowl.tar
+ demo = load_demo("demo_0")
  print(f"Demo length: {len(demo)} frames")
+ print(f"Action shape: {demo[0]['action'].shape}")
+ ```
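
The diff only shows fragments of `load_demo`. A self-contained version along these lines (the loop body is assumed; field names follow the per-timestep JSON documented below) reproduces the calls shown above:

```python
from pathlib import Path
import json

import numpy as np
from PIL import Image

def load_demo(demo_dir):
    """Load a single demonstration from an extracted TAR (sketch, not the exact repo code)."""
    frames = []
    demo_path = Path(demo_dir)

    # Each timestep is stored as a PNG/JSON pair, e.g. demo_0_timestep_0000.{png,json}
    for json_file in sorted(demo_path.glob("*.json")):
        with open(json_file) as f:
            data = json.load(f)

        frames.append({
            "image": np.array(Image.open(json_file.with_suffix(".png"))),
            "action": np.array(data["action"]),
            "robot_state": np.array(data["robot_state"]),
            "timestep": data["timestep"],
        })

    return frames
```

With actions stored as NumPy arrays, `demo[0]['action'].shape` comes out as `(7,)` for the 7-DOF actions.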
+
+ ### Loading with WebDataset (Direct Streaming)
+
+ ```python
+ import webdataset as wds
+ import numpy as np
+ import json
+
+ # Stream data directly from HuggingFace (no download needed!)
+ url = "https://huggingface.co/datasets/gate-institute/GATE-VLAP-datasets/resolve/main/libero_10/pick_up_the_black_bowl.tar"
+
+ dataset = wds.WebDataset(url).decode("rgb")
+
+ for sample in dataset:
+     # sample["png"]  -> image decoded by "rgb": float array, shape (128, 128, 3), values in [0, 1]
+     # sample["json"] -> JSON metadata (a dict on recent webdataset versions, raw bytes on older ones)
+     metadata = sample["json"]
+     if isinstance(metadata, (bytes, str)):
+         metadata = json.loads(metadata)
+     image = sample["png"]
+
+     print(f"Action: {metadata['action']}")
+     print(f"Image shape: {np.array(image).shape}")
+     break
+ ```
+
+ ### Training with Multiple Subtasks
+
+ ```python
+ import webdataset as wds
+ import torch
+ from torch.utils.data import DataLoader
+
+ # Load multiple subtasks at once
+ base_url = "https://huggingface.co/datasets/gate-institute/GATE-VLAP-datasets/resolve/main/libero_10/"
+ subtasks = ["pick_up_the_black_bowl", "close_the_drawer", "open_the_top_drawer"]
+ urls = [f"{base_url}{task}.tar" for task in subtasks]
+
+ dataset = (
+     wds.WebDataset(urls)
+     .decode("rgb")
+     .to_tuple("png", "json")
+     .map(preprocess_fn)  # Your preprocessing function
+ )
+
+ dataloader = DataLoader(dataset, batch_size=32, num_workers=4)
+
+ for images, actions in dataloader:
+     # Train your model
+     pass
  ```
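
The `preprocess_fn` placeholder above is left to the user. One possible minimal version, assuming images decoded by `.decode("rgb")` (HWC float arrays) and the `action` field from the per-timestep JSON, could look like this:

```python
import json

import numpy as np
import torch

def preprocess_fn(sample):
    """Turn one (png, json) pair from .to_tuple("png", "json") into model inputs (sketch)."""
    image, metadata = sample

    # decode("rgb") yields HWC float arrays in [0, 1]; convert to CHW tensors
    image = torch.from_numpy(np.ascontiguousarray(image)).permute(2, 0, 1).float()

    # Depending on the webdataset version, the JSON entry may arrive as bytes or as a dict
    if isinstance(metadata, (bytes, str)):
        metadata = json.loads(metadata)
    action = torch.tensor(metadata["action"], dtype=torch.float32)

    return image, action
```

Returning `(image, action)` tuples lets the default DataLoader collation stack them into batches, matching the `for images, actions in dataloader:` loop above.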
 
  ## Datasets Included
 
  - **Segmentation Method**: Semantic Action Chunking using Gemini Vision API
  - **Demos**: 1,354 demonstrations across 29 subtasks
  - **Frames**: 103,650 total frames
+ - **TAR Files**: 29 files (one per subtask)
 
  **Example Tasks**:
+ - `pick_up_the_black_bowl.tar` → Pick and place subtasks
+ - `close_the_drawer.tar` → Approach, grasp, close subtasks
+ - `put_the_bowl_in_the_drawer.tar` → Multi-step pick, open, place, close sequence
 
  ### LIBERO-Object (Object Manipulation)
 
  - **Segmentation Method**: Semantic Action Chunking using Gemini Vision API
  - **Demos**: 875 demonstrations across 20 subtasks
  - **Frames**: 66,334 total frames
+ - **TAR Files**: 20 files (one per subtask)
 
  **Example Tasks**:
+ - `pick_up_the_alphabet_soup.tar` → Approach, grasp, lift
+ - `place_the_alphabet_soup_on_the_basket.tar` → Move, position, place, release
 
  ## 📁 Dataset Structure
 
  ```
  gate-institute/GATE-VLAP-datasets/
+ ├── libero_10/                              # Long-horizon tasks (29 TAR files)
+ │   ├── close_the_drawer.tar
+ │   ├── pick_up_the_black_bowl.tar
+ │   ├── open_the_top_drawer.tar
+ │   └── ... (26 more)
 
+ ├── libero_object/                          # Object manipulation (20 TAR files)
+ │   ├── pick_up_the_alphabet_soup.tar
+ │   ├── place_the_alphabet_soup_on_the_basket.tar
+ │   └── ... (18 more)
 
  └── metadata/                               # Dataset statistics & segmentation
      ├── libero_10_complete_stats.json
 
      └── libero_object_all_segments.json
  ```
 
+ ### Inside Each TAR File
+
+ After extracting `pick_up_the_black_bowl.tar`:
+
+ ```
+ pick_up_the_black_bowl/
+ ├── demo_0/
+ │   ├── demo_0_timestep_0000.png    # RGB observation (128×128)
+ │   ├── demo_0_timestep_0000.json   # Action + metadata
+ │   ├── demo_0_timestep_0001.png
+ │   ├── demo_0_timestep_0001.json
+ │   └── ...
+ ├── demo_1/
+ │   └── ...
+ └── ... (all demos for this subtask)
+ ```
+
  ## Data Format
 
  ### JSON Metadata (per timestep)
 
  ```json
  {
+   "action": [0.1, -0.2, 0.0, 0.0, 0.0, 0.0, 1.0],  // 7-DOF action
+   "robot_state": [...],                            // Joint state
    "demo_id": "demo_0",
    "timestep": 42,
    "subtask": "pick_up_the_black_bowl",
    "parent_task": "LIBERO_10",
+   "is_stop_signal": false                          // Segment boundary
  }
  ```
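
The comment removed in this diff spelled the seven action dimensions out as xyz translation, rpy rotation, and gripper. Assuming that layout still holds, the vector can be unpacked with a small helper (illustrative only):

```python
import numpy as np

def split_action(action):
    """Split a 7-DOF action into (xyz, rpy, gripper), assuming the layout noted above."""
    action = np.asarray(action, dtype=np.float32)
    xyz, rpy, gripper = action[:3], action[3:6], action[6]
    return xyz, rpy, gripper

xyz, rpy, gripper = split_action([0.1, -0.2, 0.0, 0.0, 0.0, 0.0, 1.0])
print(xyz, rpy, gripper)  # [ 0.1 -0.2  0. ] [0. 0. 0.] 1.0
```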
 
 
  **Purpose**: Overview statistics for the entire LIBERO-10 dataset
 
+ **Use Cases** (see the loading sketch after this list):
  - Understand dataset composition
  - Plan training splits
  - Check demo/frame distribution across tasks
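
A sketch of how `metadata/libero_10_complete_stats.json` can drive those use cases; the field names (`subtask_details`, `demo_count`, `frame_count`) follow the example structure shown in the removed section of this diff, so verify them against the actual file:

```python
import json

with open("metadata/libero_10_complete_stats.json") as f:
    stats = json.load(f)

# Per-subtask demo counts (field names as documented in the earlier revision of this card)
details = stats["subtask_details"]
for name, info in sorted(details.items(), key=lambda kv: -kv[1]["demo_count"]):
    print(f"{name}: {info['demo_count']} demos, {info['frame_count']} frames")

# Example: hold out every fifth subtask for validation
subtasks = sorted(details)
val_subtasks = subtasks[::5]
train_subtasks = [s for s in subtasks if s not in val_subtasks]
print(f"train: {len(train_subtasks)} subtasks, val: {len(val_subtasks)} subtasks")
```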
 
 
  **Purpose**: Detailed segmentation metadata for each demonstration
 
+ Contains semantic action chunks with:
+ - Segment boundaries (start/end frames)
+ - Action descriptions
+ - Segment types (reach, grasp, move, place, etc.)
+ - Segmentation method: Gemini Vision API
 
+ **Use Cases** (see the sketch after this list):
  - Train with semantic action chunks
  - Implement hierarchical policies
  - Analyze action primitives
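
A sketch of slicing one demonstration into its semantic chunks using the LIBERO-10 segments file (assumed here to be `metadata/libero_10_all_segments.json`, mirroring `libero_object_all_segments.json`); the keys (`segments`, `start_frame`, `end_frame`, `action_type`, `description`) follow the example shown in the removed section of this diff, and the exact top-level nesting may differ, so adapt the lookup:

```python
import json

with open("metadata/libero_10_all_segments.json") as f:
    all_segments = json.load(f)

# frames = load_demo("pick_up_the_black_bowl/demo_0")  # per-timestep dicts, see load_demo above
demo_meta = all_segments["demo_0"]

for seg in demo_meta["segments"]:
    start, end = seg["start_frame"], seg["end_frame"]
    # chunk = frames[start:end + 1]  # frames belonging to this semantic action
    print(f"segment {seg['segment_id']}: {seg['action_type']:>6s} "
          f"frames {start}-{end}  ({seg['description']})")
```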
 
 
  ### 3. `libero_object_complete_stats.json`
 
+ **Purpose**: Statistics for the LIBERO-Object dataset
 
  ### 4. `libero_object_all_segments.json`
 
+ **Purpose**: Segmentation for LIBERO-Object demonstrations with semantic action chunking
 
  ## Citation
 
  ```bibtex
  @article{gateVLAP_SAC2026,
    title={Atomic Action Slicing: Planner-Aligned Options for Generalist VLA Agents},
+   author={Tabakov, Stefan and Popov, Asen and Dimitrov, Dimitar and Kiyamousavi, Ensiye and Kraychev, Boris},
    journal={arXiv preprint arXiv:XXXX.XXXXX},
    conference={The 41st ACM/SIGAPP Symposium On Applied Computing (SAC2026), track on Intelligent Robotics and Multi-Agent Systems (IRMAS)},
    year={2025}
 
  @inproceedings{liu2023libero,
    title={LIBERO: Benchmarking Knowledge Transfer for Lifelong Robot Learning},
    author={Liu, Bo and Zhu, Yifeng and Gao, Chongkai and Feng, Yihao and Liu, Qiang and Zhu, Yuke and Stone, Peter},
+   booktitle={NeurIPS Datasets and Benchmarks Track},
    year={2023}
  }
  ```
 
  - **Model Checkpoints**: [gate-institute/GATE-VLAP](https://huggingface.co/gate-institute/GATE-VLAP)
  - **Original LIBERO**: [https://github.com/Lifelong-Robot-Learning/LIBERO](https://github.com/Lifelong-Robot-Learning/LIBERO)
+ - **Paper**: Coming soon
 
  ## Acknowledgments
 
  - **LIBERO Benchmark**: Original dataset by Liu et al. (2023)
+ - **Segmentation**: Gemini Vision API for semantic action chunking
+ - **Institution**: GATE Institute, Sofia, Bulgaria
 
  ## Contact
 
+ For questions or issues, please contact the GATE Institute.
 
  ---