Generating 3D point clouds from RGB+D observations (CObservation3DRangeScan objects)

1. Calibration parameters

First of all, you must clearly understand the calibration parameters involved in the process.

Each “RGB+D observation” is stored in MRPT as an object of type mrpt::slam::CObservation3DRangeScan (click to see its full description), which for this tutorial will be assumed to contain a depth (range) image, an RGB intensity image, and the calibration data described below.

These observations can be captured from a sensor (e.g. Kinect) in real-time, or loaded from a dataset, as explained here.

In any case, the calibration parameters of both cameras will come already filled in with their correct values (or, at least, with those the user provided at the moment of capturing/grabbing the dataset!).

The three pieces of data that must be calibrated (see, for example, the tutorial on Kinect calibration) before precise 3D point clouds can be generated from RGB+D observations are: the two sets of camera parameters (the intrinsics of the depth and RGB cameras) and the relative 6D pose between them.
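As a rough sketch of where this data lives inside the observation (member names as in the MRPT 1.x API; treat the exact names as an assumption, not a definitive reference):

```cpp
// Sketch (assumes the MRPT 1.x API): the calibration data of an
// mrpt::slam::CObservation3DRangeScan named `obs`.
const mrpt::utils::TCamera& depth_intrinsics = obs.cameraParams;          // depth camera
const mrpt::utils::TCamera& rgb_intrinsics   = obs.cameraParamsIntensity; // RGB camera
const mrpt::poses::CPose3D& rgb_wrt_depth    = obs.relativePoseIntensityWRTDepth; // relative 6D pose
```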

(Figure: 3D camera reference systems)

2. Projection equations

(Write me!) In the meantime, please refer to the source code of CObservation3DRangeScan_project3D_impl.h and to the example kinect_online_offline_demo.

2.1. General case

2.2. “Rectified” depth image

 


3. Generating the point clouds

To build a 3D point cloud from an RGB+D observation, there exist several possibilities:

3.1. RGB+D → local 3D point cloud
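While this subsection is written, a sketch of the most direct route, assuming the MRPT 1.x method project3DPointsFromDepthImage(), which fills the observation's own points3D_* vectors (untested):

```cpp
// obs: an mrpt::slam::CObservation3DRangeScan, grabbed live or loaded from
// a rawlog dataset (sketch, assumes the MRPT 1.x API).
obs.project3DPointsFromDepthImage();  // fills obs.points3D_x / _y / _z
if (obs.hasPoints3D)
{
    // obs.points3D_x[i], obs.points3D_y[i], obs.points3D_z[i] hold the
    // local (sensor-frame) coordinates of the i-th projected point.
}
```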

 

3.2. RGB+D → local 3D point cloud → CPointsMap (& derived)

3.3. RGB+D → CPointsMap (& derived)
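A sketch of the direct route through mrpt::slam::CMetricMap::insertObservation() (assumes the MRPT 1.x API; untested):

```cpp
// Sketch: insert the RGB+D observation directly into a points map.
mrpt::slam::CSimplePointsMap pntsMap;
pntsMap.insertionOptions.minDistBetweenLaserPoints = 0; // keep every point
pntsMap.insertObservation(&obs); // obs: a CObservation3DRangeScan
```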

3.4. RGB+D → mrpt::opengl::CPointCloud

3.5. RGB+D → mrpt::opengl::CPointCloudColoured
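A sketch of one possible route (assumes the MRPT 1.x API, including the CColouredPointsMap colour options; untested):

```cpp
// Sketch: build a coloured points map from the observation, then load it
// into an OpenGL render object.
mrpt::slam::CColouredPointsMap colouredMap;
colouredMap.colorScheme.scheme =
    mrpt::slam::CColouredPointsMap::cmFromIntensityImage; // colour from the RGB image
colouredMap.insertObservation(&obs);

mrpt::opengl::CPointCloudColouredPtr gl_points =
    mrpt::opengl::CPointCloudColoured::Create();
gl_points->loadFromPointsMap(&colouredMap);
```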

3.6. RGB+D → pcl::PointCloud<PointXYZ>
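A sketch assuming the templated MRPT 1.x method project3DPointsFromDepthImageInto(), which can project directly into a PCL cloud when MRPT is built with PCL support (the meaning of the boolean argument is assumed; untested):

```cpp
// Sketch: project the depth image straight into a PCL point cloud.
pcl::PointCloud<pcl::PointXYZ> cloud;
obs.project3DPointsFromDepthImageInto(cloud, false /* keep local sensor coords */);
```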

3.7. RGB+D → pcl::PointCloud<PointXYZRGB>
