SkiMap++: Real-Time Mapping and Object Recognition for Robotics

We introduce SkiMap++, an extension to the recently proposed SkiMap mapping framework for robot navigation [1]. The extension enriches the map with semantic information concerning the presence in the environment of certain objects that the robot may usefully recognize, e.g. for the sake of grasping them. More precisely, the map can accommodate information about the spatial locations of certain 3D object features, as determined by matching the visual features extracted from the incoming frames through a random forest learned off-line from a set of object models. Thereby, evidence about the presence of object features is gathered from multiple vantage points alongside the standard geometric mapping task, so as to enable recognizing the objects and estimating their 6-DOF poses. As a result, SkiMap++ can reconstruct the geometry of large-scale environments as well as localize relevant objects therein. As an additional contribution, we present an RGB-D dataset featuring ground-truth camera and object poses, which may be deployed by researchers interested in pursuing SLAM alongside object recognition, a topic often referred to as Semantic SLAM.

[1] D. De Gregorio and L. Di Stefano, “SkiMap: An efficient mapping framework for robot navigation,” in 2017 IEEE International Conference on Robotics and Automation (ICRA), 2017. [Online]. Available: https://arxiv.org/abs/1704.05832


SK_DATASET_17

You can download the SkiMap++ dataset (dubbed sk_dataset_17) here (~7 GB).

Also download the corresponding camera parameters: asus_nov_2016.yaml.

A detailed explanation of the dataset structure is given below.

Folder tree

Uncompressing the archive, you will find a folder tree like this:

  • models
  • scenes
  • models.txt
  • models_variants.txt
  • scenes.txt

The folders and files are explained below:

  • models.txt: contains the list of objects in the dataset.
  • models_variants.txt: contains the list of object 'variants' as described in the paper. For example, given the model "Robot" and the variants "top" and "down", the "models" folder will contain three subfolders: "Robot", "Robot_top" and "Robot_down". These can be treated as different objects, but they represent the same object scanned from different viewpoints.
  • scenes.txt: contains the list of scene subfolders.
  • models (folder): contains the aforementioned model subfolders.
  • scenes (folder): contains the aforementioned scene subfolders.
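
As a concrete illustration, here is a minimal Python sketch of how these index files may be consumed. DATASET_ROOT is a placeholder for the folder where the archive was uncompressed, and one-entry-per-line index files are an assumption on our part:

```python
import os

# Placeholder: set to the folder where the archive was uncompressed.
DATASET_ROOT = "/path/to/sk_dataset_17"

def read_list(filename):
    # Assumption: each index file holds one entry per line.
    with open(os.path.join(DATASET_ROOT, filename)) as f:
        return [line.strip() for line in f if line.strip()]

models = read_list("models.txt")
variants = read_list("models_variants.txt")
scenes = read_list("scenes.txt")

# Each entry corresponds to a subfolder of 'models' or 'scenes'.
first_model_dir = os.path.join(DATASET_ROOT, "models", models[0])
first_scene_dir = os.path.join(DATASET_ROOT, "scenes", scenes[0])
```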

Single model folder

In a single model folder you will find a list of frames representing an RGB-D acquisition of the target model. Each frame is stored as a set of files sharing a progressive number as prefix. The files of a generic frame are:

  • <NUMBER>_rgb.png: RGB image
  • <NUMBER>_depth.png: depth image
  • <NUMBER>_mask.png: mask obtained automatically through background removal
  • <NUMBER>_depthmasked.png: masked depth image
  • <NUMBER>_camera_pose.txt: the camera pose relative to the object, in the form [x y z qx qy qz qw]. Each object is considered to lie at the center of the world, so camera poses are expressed in the model reference frame.
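
For illustration, below is a minimal Python sketch of loading one frame. load_model_frame is a hypothetical helper of ours, and treating <NUMBER> as the verbatim filename prefix is an assumption; OpenCV and NumPy are used for I/O:

```python
import os
import cv2
import numpy as np

def load_model_frame(model_dir, number):
    # Assumption: <NUMBER> appears verbatim as the filename prefix.
    prefix = os.path.join(model_dir, str(number))
    rgb = cv2.imread(prefix + "_rgb.png", cv2.IMREAD_COLOR)
    # IMREAD_UNCHANGED preserves the raw depth values (typically 16-bit PNGs).
    depth = cv2.imread(prefix + "_depth.png", cv2.IMREAD_UNCHANGED)
    mask = cv2.imread(prefix + "_mask.png", cv2.IMREAD_GRAYSCALE)
    depth_masked = cv2.imread(prefix + "_depthmasked.png", cv2.IMREAD_UNCHANGED)
    # [x y z qx qy qz qw]: translation + unit quaternion, model reference frame.
    pose = np.loadtxt(prefix + "_camera_pose.txt").reshape(7)
    return rgb, depth, mask, depth_masked, pose[:3], pose[3:]
```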

Single scene folder

Scene folders are a little different due to the ROS bag exporting procedure. This is a typical scene subfolder:

  • rgb (folder): contains the ordered RGB images
  • depth (folder): contains the ordered depth images
  • camera_pose.txt: contains the ordered camera poses [x y z qx qy qz qw], expressed in the world (VICON system) reference frame.
  • timings.txt: ordered list of timestamps

It is worth knowing that the world reference frame (VICON system RF), in which camera poses are expressed, is the same across all scenes.
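
Since all poses in the dataset share the [x y z qx qy qz qw] convention, each row can be turned into a 4x4 homogeneous transform via the standard unit-quaternion-to-rotation-matrix formula. The sketch below is ours, under the assumption that camera_pose.txt stores one pose per line, ordered like the images; SCENE_DIR is a placeholder:

```python
import os
import numpy as np

SCENE_DIR = "/path/to/sk_dataset_17/scenes/<SCENE_NAME>"  # placeholder

def pose_to_matrix(row):
    # [x y z qx qy qz qw] -> 4x4 homogeneous transform (unit quaternion assumed).
    x, y, z, qx, qy, qz, qw = row
    T = np.eye(4)
    T[:3, :3] = [
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ]
    T[:3, 3] = [x, y, z]
    return T

# Assumption: one pose per line, in the same order as the rgb/depth images.
poses = np.atleast_2d(np.loadtxt(os.path.join(SCENE_DIR, "camera_pose.txt")))
world_T_cam = [pose_to_matrix(row) for row in poses]
```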

Scene Objects Ground Truth

In the "scenes" folder there is one file per scene, named <SCENE_NAME>_gt.txt, containing the ground-truth poses of the object models expressed in the world reference frame. Each ground-truth row is as follows:

  • <MODEL_NAME> x y z qx qy qz qw TEMP_NUMBER*

* TEMP_NUMBER is used for debugging purposes.
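
A small parser sketch for these ground-truth files; the helper name and the dictionary layout are our own choices, and we assume each model appears at most once per scene:

```python
def load_scene_gt(gt_path):
    # One object per line: <MODEL_NAME> x y z qx qy qz qw TEMP_NUMBER
    poses = {}
    with open(gt_path) as f:
        for line in f:
            fields = line.split()
            if len(fields) < 8:
                continue  # skip blank or malformed lines
            # Keep the 7 pose values; drop the trailing debug TEMP_NUMBER.
            poses[fields[0]] = [float(v) for v in fields[1:8]]
    return poses

# e.g. gt = load_scene_gt("/path/to/sk_dataset_17/scenes/<SCENE_NAME>_gt.txt")
```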

