The Kinect sensor provides an inexpensive alternative to traditional laser scanners for working in small workspaces. Transforming Kinect’s 3D range images into 2D scans allows us to exploit the large body of methods and techniques existing for 2D laser scans (e.g. in SLAM or localization).
MRPT provides this conversion as the method CObservation3DRangeScan::convertTo2DScan(), which returns a 2D laser scan with more “rays” (N) than the 3D observation has columns (W), exactly: N = W * oversampling_ratio. This oversampling is required because laser scans sample the space at evenly-separated angles, while the columns of a range camera follow a tangent-like angular distribution. By oversampling we make sure no angular “gaps” are left unseen by the virtual “2D laser”.
All obstacles within a frustum are considered, and the minimum distance is kept for each direction. The horizontal FOV of the frustum is computed automatically from the intrinsic parameters of the range camera (see Kinect calibration), but the vertical FOV must be provided by the user. It can be set asymmetrically, which may be useful depending on the zone of interest in which to look for obstacles.
All spatial transformations are rigorously taken into account in this class, using the depth camera intrinsic calibration parameters. A prerequisite for calling this method is that the 3D observation contains range data, i.e. hasRangeImage must be true. Neither RGB data nor the raw 3D point cloud is needed for this method to work.
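A typical call might look as follows. This is only a sketch against an MRPT 2.x-style API: the exact fields of the parameters struct and the chosen angles are assumptions, so check the class reference for your MRPT version:

```cpp
#include <mrpt/obs/CObservation3DRangeScan.h>
#include <mrpt/obs/CObservation2DRangeScan.h>

using namespace mrpt::obs;

void to2DScan(const CObservation3DRangeScan& obs3D,
              CObservation2DRangeScan& scan2D)
{
    // Range data is required; RGB and point clouds are not.
    ASSERT_(obs3D.hasRangeImage);

    T3DPointsTo2DScanParams params;
    params.sensorLabel = obs3D.sensorLabel;
    // Vertical FOV of the frustum, above and below the sensor plane.
    // The two angles may differ (asymmetric FOV); values here are examples.
    params.angle_sup = mrpt::DEG2RAD(5.0);
    params.angle_inf = mrpt::DEG2RAD(10.0);
    params.oversampling_ratio = 1.2;  // example value

    obs3D.convertTo2DScan(scan2D, params);
}
```

The resulting CObservation2DRangeScan can then be fed to any MRPT component that consumes 2D laser scans, e.g. SLAM or localization modules.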
2. The example
This program allows the user to change the frustum FOV dynamically and observe its effect.
The following video demonstrates this example at work: