Application: kf-slam

This application implements a simple Kalman filter that solves the SLAM problem for generic 3D range-bearing sensors and a full 6D robot state space. The program is a front-end for the class mrpt::slam::CRangeBearingKFSLAM. All the parameters of the algorithm are read from a configuration file given on the command line. The filter processes the actions and observations in a rawlog file and generates a number of output files describing the evolution of the filter.
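Internally, the program boils down to a loop like the following (a minimal sketch against the MRPT 1.x API; the file names are placeholders, and the exact headers/namespaces depend on the MRPT version):

    #include <mrpt/slam/CRangeBearingKFSLAM.h>
    #include <mrpt/obs/CRawlog.h>
    #include <mrpt/utils/CConfigFile.h>
    #include <mrpt/utils/CFileGZInputStream.h>

    using namespace mrpt::slam;
    using namespace mrpt::obs;
    using namespace mrpt::utils;

    int main()
    {
        CRangeBearingKFSLAM mapping;
        mapping.loadOptions(CConfigFile("EKF-SLAM_6D_test.ini")); // filter & sensor params

        CFileGZInputStream rawlogFile("dataset.rawlog"); // placeholder file name
        CActionCollectionPtr action;
        CSensoryFramePtr observations;
        size_t rawlogEntry = 0;

        // Each action/observation pair triggers one prediction + update step:
        while (CRawlog::readActionObservationPair(rawlogFile, action, observations, rawlogEntry))
            mapping.processActionObservation(action, observations);

        return 0;
    }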

kf-slam can also optionally employ a 2D version of the same range-bearing problem (since MRPT 0.9.4), a more efficient approach for SLAM problems that actually take place in planar environments. See this configuration file for an example of how to use the kf_implementation parameter.
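As a rough sketch of that parameter (the shipped example .ini files are the authoritative reference), selecting the planar implementation looks like:

    [MappingApplication]
    # Which filter implementation to instantiate:
    #   CRangeBearingKFSLAM    -> full 6D SLAM (default)
    #   CRangeBearingKFSLAM2D  -> planar (2D) SLAM
    kf_implementation = CRangeBearingKFSLAM2D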

1. Description

The theoretical background of 6D Range-Bearing SLAM is explained in this paper.

2. Example

The following results have been obtained from synthetic data generated with the application “simul-landmarks”. That simulator creates a set of random 3D landmarks and produces noisy range-bearing measurements along a predefined robot path. The output is a rawlog file, which is then processed by the kf-slam application. You can reproduce the example shown in the videos below by running, from a console:

$ cd MRPT/share/mrpt/config_files/kf-slam

$ kf-slam EKF-SLAM_6D_test.ini

(Video of the resulting 6D EKF-SLAM run; also available on YouTube at lower quality.)

(Video: another example, with a robot path in 2D only.)

3. Comments

  • Cro

    I want to apply EKF-SLAM online using the MRPT libraries. I have the mathematical model for the project (motion & measurement models). I would like to know how to pass the state vector to EKF-SLAM in an online manner. The MRPT samples I have seen are only for offline processing. Any suggestions?

  • Deepak R

    Hi, I am planning to use the kf-slam application to implement a stereo-SLAM system. I am specifically interested in an EKF-based solution. This ekf-slam application uses simulated action and landmark entries; I want to replace them with data from my hand-held stereo camera.
    I went through the kf-slam code. My understanding is that I just have to change the way the action and observation data are fed into the Kalman filter. Currently the code reads the data from a rawlog file; I want to feed in live data.

    I am planning to implement a vision module that takes the stereo camera image pairs, identifies 2D features, and projects them into 3D space. These 3D points can be the landmarks, and I can get the observation vector (range, azimuth and pitch) of each landmark since I know its 3D position.

    So now I have an observation vector, and also an action vector if I assume ‘no control input’, both of which I can feed into kf-slam’s Kalman filter loop.

    Please let me know if this kf-slam application is a good place to start for implementing a stereo-SLAM application. Is there any other MRPT application that has the kf-slam algorithm implemented along with a camera interface?

    • jlblanco

      Yes, kf-slam seems a good starting point for your project!
      There is no app in MRPT with the whole visual front-end already implemented and ready to use. Perhaps you could take a look at another of our recent projects, which uses MRPT and implements a visual front-end: https://github.com/famoreno/srba-stereo-slam

      • Deepak R

        I started implementing the camera interface for the kf-slam app code. I want to replace the readActionObservationPair() function with my own, so that I can fill in live action (CActionCollectionPtr) and observation (CSensoryFramePtr) data.

        But when I go through the rawlog file I see that the action/observation data for the ekf-slam app are of different types, which I can access/assign as below.

        CActionRobotMovement3DPtr act3D = CActionRobotMovement3D::Create();
        act3D->timestamp = mrpt::system::now();                // time of the motion
        act3D->poseChange.mean = CPose3D(0.1, 0, 0, 0, 0, 0);  // incremental pose (x y z yaw pitch roll)
        CObservationBearingRangePtr obsBR = CObservationBearingRange::Create();
        obsBR->maxSensorDistance = 20.0;                       // sensor range limit [m]

        I need these datatypes to be converted to CActionCollectionPtr and CSensoryFramePtr so that they can be passed to the processActionObservation() function.

        Please let me know how to do it. Also, is there a better way to feed action/observation data to the Kalman filter loop? Please point me to example code that might have done this already.
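        One possible way to do that wrapping (a sketch against the MRPT 1.x API, not compiled here) is to create both containers empty and then insert the already-built action and observation:

        CActionCollectionPtr actions = CActionCollection::Create();
        actions->insert(*act3D);  // the collection stores its own copy of the action
        CSensoryFramePtr sf = CSensoryFrame::Create();
        sf->insert(obsBR);        // the sensory frame keeps the smart pointer
        mapping.processActionObservation(actions, sf);  // 'mapping' is the CRangeBearingKFSLAM object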

  • Yoshitaka Takebe

    Hi. I have some questions.

    1) I am planning to use AR markers as landmarks for CRangeBearingKFSLAM. Is that possible?

    2) How can I save the map created by CRangeBearingKFSLAM to a file?

    3) After the landmark map is created, the robot will be localized in that map (saved in 2).
    Which class in MRPT is used for the localization?
    Is the CMonteCarloLocalization2D/3D class OK?
    In that case, how are the landmarks and observations set?

    Sorry about my poor English.

    • jlblanco

      Hi Yoshitaka,

      1) Yes, it should be possible: you have to implement what is called the “SLAM front-end”, in this case one capable of converting images into CObservationBearingRange objects. You might need to use stereo vision and/or knowledge of the scale of the markers to find all three components of each observation: range, yaw & pitch (see the sketch after this list).

      2) Hum… it’s true that we didn’t provide an easy way to do this. But you could save the map in your own format by calling getCurrentState(), or even internal_getXkk() to get the entire state vector (see the sketch after this list).

      3) The easiest way would be to simply keep using the same class, CRangeBearingKFSLAM. We didn’t provide any switch to disable map updating, but as long as observations are not *really* inconsistent, the map should not change in a significant way.

      In case you want to contribute some methods to save/load maps to that SLAM class, or anything else, feel free to fork and open a pull request on GitHub…

      See you!
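      A rough sketch of both steps (assuming the MRPT 1.x API; the exact signature of getCurrentState() and the namespaces vary across MRPT versions, so treat this as approximate, and the output file name is hypothetical):

      // 1) Front-end: one detected AR marker -> one range-bearing measurement.
      CObservationBearingRangePtr obs = CObservationBearingRange::Create();
      obs->timestamp = mrpt::system::now();
      CObservationBearingRange::TMeasurement m;
      m.range      = 2.5f;   // [m], e.g. from stereo or the known marker size
      m.yaw        = 0.10f;  // [rad]
      m.pitch      = -0.05f; // [rad]
      m.landmarkID = 7;      // the decoded marker ID
      obs->sensedData.push_back(m);

      // 2) Save the estimated landmark positions to a plain text file
      //    ('mapping' is the CRangeBearingKFSLAM object):
      mrpt::poses::CPose3DQuatPDFGaussian robotPose;
      std::vector<mrpt::math::TPoint3D> landmarks;
      std::map<unsigned int, mrpt::maps::CLandmark::TLandmarkID> landmarkIDs;
      mrpt::math::CVectorDouble fullState;
      mrpt::math::CMatrixDouble fullCov;
      mapping.getCurrentState(robotPose, landmarks, landmarkIDs, fullState, fullCov);

      std::ofstream f("landmark_map.txt");
      for (size_t i = 0; i < landmarks.size(); i++)
          f << landmarks[i].x << " " << landmarks[i].y << " " << landmarks[i].z << "\n";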

      • Yoshitaka Takebe

        Thank you for your reply, jlblanco.
        Understood.

        I was thinking: if CLandmarksMap were extended so that CObservationBearingRange could be handled in its internal_computeObservationLikelihood() method, a particle filter could use it.
        With the AR marker positions stored as “beacons” in the landmark map, CMonteCarloLocalization2D would then be usable.
        How about this idea?

        • jlblanco

          That would also be a valid option! If you try to implement it, let us know the results…

          However, using a non-parametric filter (a particle filter, PF) when the *only* sensor gives us observations suited to a parametric map (i.e. a discrete set of landmarks) might be suboptimal… give both options a try, parametric (EKF) and non-parametric (PF), to see which one is worth it…

  • Hi Jose Luis,

    I want to implement a “SLAM front-end” using points as features and, later, LIDAR corner detectors as features. I wonder if you have those front-end apps already developed.
    The idea is to use them with Gazebo and ROS. I just checked the ROS node mrpt_ekf_slam_2d, and what I understood is that, in case you don’t have them, I need to implement a “SLAM front-end” node which will feed the mrpt_msgs/ObservationRangeBearing topic with point or corner observations. Is that right, or am I missing something? (A sketch of such a node is given below.)

    Thanks!
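    For what it’s worth, a sketch of such a front-end node in ROS C++. The message field names and the inner type below are assumptions about the mrpt_msgs package and should be checked against the installed .msg definitions; the topic name is hypothetical:

    #include <ros/ros.h>
    #include <mrpt_msgs/ObservationRangeBearing.h>

    int main(int argc, char** argv)
    {
        ros::init(argc, argv, "slam_frontend");
        ros::NodeHandle nh;
        ros::Publisher pub =
            nh.advertise<mrpt_msgs::ObservationRangeBearing>("landmarks", 10);

        ros::Rate rate(10);
        while (ros::ok())
        {
            mrpt_msgs::ObservationRangeBearing msg;
            msg.header.stamp = ros::Time::now();
            msg.header.frame_id = "base_link";

            mrpt_msgs::SingleRangeBearingObservation m; // assumed type name
            m.range = 2.5;  // [m]   from the point/corner detector
            m.yaw   = 0.10; // [rad]
            m.pitch = 0.0;  // [rad] (zero in a planar setup)
            m.id    = 7;    // feature/landmark ID
            msg.sensed_data.push_back(m);

            pub.publish(msg);
            rate.sleep();
        }
        return 0;
    }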