Create README.md
# FunGraph3D

Dataset-level files:

- `FunGraph3D.annotations.json`: Object and interactive-element segmentation annotations.
- `FunGraph.relations.json`: Functional 3D scene graph annotations.
- `all_labels.json`: All labels that appear in the dataset, used for evaluation.
- `all_labels_clip.embedding.npy`: CLIP embeddings of all labels that appear in the dataset, used for evaluation (see the loading sketch below).
- `all_edges.json`: All relationship descriptions that appear in the dataset, used for edge evaluation.
- `all_edges_bert_embeddings.npy`: BERT embeddings of all relationship descriptions that appear in the dataset, used for edge evaluation.
- `OpenFunGraph_split.txt`: The split used for OpenFunGraph evaluation.

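A minimal loading sketch for the evaluation files. It assumes `all_labels.json` and `all_edges.json` are JSON lists of strings whose order matches the rows of the corresponding `.npy` embedding matrices; that layout is an assumption, not something this README specifies. The nearest-label lookup at the end is only one plausible way the CLIP embeddings could be used for open-vocabulary matching.

```python
import json
import numpy as np

# Label / relationship vocabularies (assumed to be plain JSON lists of strings).
with open("all_labels.json") as f:
    labels = json.load(f)
with open("all_edges.json") as f:
    edges = json.load(f)

# Precomputed embeddings; rows are assumed to align with the lists above.
label_emb = np.load("all_labels_clip.embedding.npy")   # shape: (num_labels, clip_dim)
edge_emb = np.load("all_edges_bert_embeddings.npy")    # shape: (num_edges, bert_dim)
assert label_emb.shape[0] == len(labels)
assert edge_emb.shape[0] == len(edges)

def nearest_label(query_emb: np.ndarray) -> str:
    """Return the dataset label whose CLIP embedding is most similar (cosine) to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    ref = label_emb / np.linalg.norm(label_emb, axis=1, keepdims=True)
    return labels[int(np.argmax(ref @ q))]
```
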
## Assets for each scene

- `interaction_video.mp4`: Dynamic interaction video for the scene.
- `xxx.ply`: 5 mm laser scan of the scene.
- `videoX/depth`: High-resolution depth rendered from the Leica scan.
- `videoX/raw_camera`: Raw camera pose files from the iPad scan.
- `videoX/raw_low_res_depth`: Raw low-resolution LiDAR depth files from the iPad scan.
- `videoX/raw_rgb`: Raw RGB frames from the iPad scan.
- `videoX/rgb`: RGB images registered to the scan by COLMAP (used for evaluation).
- `cameras.txt`: Camera intrinsics estimated by COLMAP (see the parsing sketch below).
- `images.txt`: Camera extrinsics estimated by COLMAP.
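
If `cameras.txt` and `images.txt` follow the standard COLMAP text export (camera lines of `CAMERA_ID MODEL WIDTH HEIGHT PARAMS[]`, and alternating pose / `POINTS2D` lines per image), they can be parsed roughly as below; this assumes COLMAP's default text format, which the README does not state explicitly.

```python
import numpy as np

def read_cameras_txt(path: str) -> dict:
    """Parse COLMAP cameras.txt: CAMERA_ID MODEL WIDTH HEIGHT PARAMS[] per line."""
    cameras = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            parts = line.split()
            cameras[int(parts[0])] = {
                "model": parts[1],
                "width": int(parts[2]),
                "height": int(parts[3]),
                "params": np.array(parts[4:], dtype=float),  # e.g. fx, fy, cx, cy for PINHOLE
            }
    return cameras

def read_images_txt(path: str) -> dict:
    """Parse COLMAP images.txt; POINTS2D lines are skipped for brevity."""
    with open(path) as f:
        lines = [ln.strip() for ln in f]
    lines = [ln for ln in lines if ln and not ln.startswith("#")]
    images = {}
    for pose_line in lines[0::2]:  # pose and POINTS2D lines alternate
        parts = pose_line.split()
        images[parts[9]] = {
            "image_id": int(parts[0]),
            "qvec": np.array(parts[1:5], dtype=float),  # world-to-camera rotation (qw, qx, qy, qz)
            "tvec": np.array(parts[5:8], dtype=float),  # world-to-camera translation
            "camera_id": int(parts[8]),
        }
    return images
```

Note that in COLMAP's convention the quaternion and translation in `images.txt` map world coordinates to camera coordinates; invert the pose if camera-to-world extrinsics are needed.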