Application: RawLogViewer

1. Purpose of this program

The main goal of the GUI program RawLogViewer is to allow users to quickly inspect, cut, or otherwise modify a given rawlog (dataset) gathered by a robot, with the idea of preparing it as the input to some other program, typically for localization (pf-localization) or mapping (kf-slam, rbpf-slam, …).


Examples of typical operations are extracting just a part of a large dataset, decimating the rawlog if the original data was gathered at a very high rate, or modifying the sensor positions on the robot. The program also permits importing datasets from logs stored in the CARMEN format (see also: carmen2rawlog) or the MOOS asynchronous log (“*.alog”) format. Limited exporting to plain text files is also implemented (e.g. for processing them from MATLAB). Note that future development of this program will focus on visualization only, while modification of datasets will be handled by a separate program, rawlog-edit.
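To make the decimation operation concrete, here is a minimal plain-Python sketch of the idea: keep one out of every N consecutive entries of a dataset recorded at a high rate. The function and data names are illustrative only; this is not the MRPT API.

```python
# Illustrative sketch of rawlog decimation: keep one out of every N
# consecutive entries to thin out a dataset gathered at a high rate.
# Plain Python pseudo-data, not the actual MRPT API.

def decimate(entries, keep_every_n):
    """Return every N-th entry, starting with the first one."""
    if keep_every_n < 1:
        raise ValueError("keep_every_n must be >= 1")
    return entries[::keep_every_n]

# Example: a 50 Hz sensor decimated by 5 behaves like a 10 Hz sensor.
scans = [f"scan_{i:03d}" for i in range(10)]
print(decimate(scans, 5))  # ['scan_000', 'scan_005']
```

The GUI dialog performs the equivalent operation on the loaded rawlog objects and lets you save the reduced dataset to a new file.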

This video demonstrates RawLogViewer inspecting RGB+D datasets grabbed with Microsoft’s Kinect:

2. Interface description

The UI of this program is divided into three main areas:


  • Left: The sequence of all the objects stored in the loaded dataset. Click on an object to select it and preview its contents in the two panels on the right.
  • Top-right: A textual description of the selected object.
  • Bottom-right: For some objects, a visual preview of the observation contents: for example, a 2D view of a laser scan, or the images of a monocular or stereo camera. When no object is selected, this panel displays a summary of the entire dataset, including useful information such as each sensor’s rate.

3. Important manipulations of a rawlog

3.1. Cut, selective removals, etc…

Menu path: “Edit” -> “Edit rawlog…”
Description: This command opens up a dialog with several configurable filters to remove specific parts of a dataset, either by their position in time, or by sensor-type or sensor-label.

Screenshot-RawlogViewer-Edit the rawlog
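As a conceptual sketch of what these filters do, the snippet below drops entries either by sensor label or by a removal time window. It is plain Python over illustrative `(timestamp, label, payload)` tuples, not the MRPT API.

```python
# Conceptual sketch of the "Edit rawlog" filters: remove entries whose
# sensor label matches, or whose timestamp lies inside a removal window.
# Plain Python over illustrative tuples, not the actual MRPT API.

def filter_rawlog(entries, remove_labels=(), remove_window=None):
    """entries: list of (timestamp, label, payload) tuples.
    Returns the entries that survive both filters."""
    kept = []
    for t, label, payload in entries:
        if label in remove_labels:
            continue  # removed by sensor label
        if remove_window is not None:
            t0, t1 = remove_window
            if t0 <= t <= t1:
                continue  # removed by position in time
        kept.append((t, label, payload))
    return kept

entries = [(0.0, "LASER1", "a"), (0.5, "GPS", "b"), (1.0, "LASER1", "c")]
print(filter_rawlog(entries, remove_labels={"GPS"}))
print(filter_rawlog(entries, remove_window=(0.4, 1.5)))
```

The dialog combines both criteria and additionally filters by sensor type (the C++ class of each observation).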

3.2. Changing the position of sensors

Menu path: “Edit” -> “Change sensor/camera parameters…”
Description: All sensor (observation) objects in a dataset have an associated “pose on the robot”, describing their 6D location in the vehicle frame of reference (which will typically be the origin for hand-held devices). This dialog allows getting and setting this pose for all the objects in a dataset at once. It is typically used together with sensor labels:

Screenshot-RawlogViewer-Change sensor pose information
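To see why the sensor pose matters, consider how a point measured in the sensor frame is mapped into the vehicle frame. The sketch below shows the 2D case for brevity (MRPT uses full 6D poses internally); all names are illustrative.

```python
# Sketch of why the "sensor pose on the robot" matters: a point measured
# in the sensor frame must be composed with the sensor's pose to obtain
# vehicle-frame coordinates. 2D case for brevity; MRPT uses 6D poses.
import math

def compose(pose, point):
    """Transform (px, py) from the sensor frame into the vehicle frame,
    given the sensor pose (x, y, yaw) on the robot."""
    x, y, yaw = pose
    px, py = point
    return (x + px * math.cos(yaw) - py * math.sin(yaw),
            y + px * math.sin(yaw) + py * math.cos(yaw))

# A laser mounted 0.2 m ahead of the vehicle origin, facing forward:
sensor_pose = (0.2, 0.0, 0.0)
print(compose(sensor_pose, (1.0, 0.0)))  # (1.2, 0.0)
```

An incorrect sensor pose therefore shifts or rotates every measurement of that sensor, which is why this dialog lets you fix the pose for all matching observations at once.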

3.3. Converting a rawlog format: “observations-only” ==> “actions-sensory frame”

Menu path: “Edit” -> “Convert into SF format” -> choose your conversion parameters.
Note: Typically, at least one of the first “actions” will be empty (e.g. without odometry). Check this by inspecting rawlog entry number “1” in the tree view. If it is empty, it is strongly recommended to cut off those first entries; see how to remove part of a rawlog in section 3.1 above.
Continue inspecting the rawlog and, if satisfied, save it as a new rawlog file.
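The grouping performed by this conversion can be sketched as follows: each odometry entry becomes an “action”, and the observations that follow it (until the next odometry entry) become its sensory frame. This is plain illustrative Python, not the MRPT API; note how the first pair naturally comes out with an empty action, matching the warning above.

```python
# Illustrative grouping behind the "observations-only" ==> "actions-
# sensory frame" conversion: each odometry entry opens a new pair, and
# the observations up to the next odometry entry form its sensory frame.
# Plain Python over illustrative tuples, not the actual MRPT API.

def to_sf_format(entries):
    """entries: time-ordered list of ('odom', data) or ('obs', data).
    Returns a list of (action, [observations]) pairs."""
    pairs = []
    action, frame = None, []
    for kind, data in entries:
        if kind == 'odom':
            pairs.append((action, frame))  # close the previous pair
            action, frame = data, []
        else:
            frame.append(data)
    pairs.append((action, frame))  # close the last pair
    return pairs

entries = [('obs', 'scan0'), ('odom', 'u1'), ('obs', 'scan1'),
           ('odom', 'u2'), ('obs', 'scan2')]
print(to_sf_format(entries))
# First pair has action None: no odometry precedes the first observation.
```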

3.4. Converting a rawlog format: “actions-sensory frame” ==> “observations-only”

Menu path: “File” -> “Operations on files” -> “Convert to observations-only rawlog”

4. The “maps and paths generation module” dialog

Menu path: “Tools” -> “Open the maps & paths generation module…”
Description: This dialog allows users to do three different things, all mostly oriented toward datasets with 2D/3D laser scans:


5. The Scan-Matching (ICP) module

See the separate tutorial.


6. Actions: odometry and its uncertainty

6.1 Concepts

For the mathematical details of each probabilistic model read the tutorial on the topic.

6.2. Modifying the probabilistic motion model

Menu path: Sensors -> Odometry -> Modify motion model…
Description: This dialog allows experimenting with the parameters of two different probabilistic motion models and changing those parameters for all the odometry increments in the rawlog.


Modeling the actual uncertainty may be critical for some particle filter-based SLAM methods. As can be seen in the next example, it is sometimes necessary to increase the uncertainty of the odometry increments to the point that the true hypothesis after closing a loop is covered by the probability distribution of all the potential robot paths:

Screenshot-GenRandomPaths2    Screenshot-GenRandomPaths1

Note how the loop closure can’t be correctly detected by any RBPF SLAM method if the uncertainty in the odometry is over-confident, as in the image on the left.
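The “random paths” preview in those screenshots can be sketched as follows: draw many samples of the same odometry sequence under Gaussian noise and watch the spread of the final poses. The noise model and parameters below are illustrative, not MRPT's actual motion models or defaults.

```python
# Sketch of sampling many possible robot paths from noisy odometry.
# A simple additive Gaussian noise model is used for illustration; it is
# NOT one of MRPT's actual motion models.
import math, random

def sample_path(increments, sigma_xy=0.02, sigma_phi=0.01, rng=random):
    """Compose noisy versions of (dx, dy, dphi) odometry increments."""
    x = y = phi = 0.0
    path = [(x, y, phi)]
    for dx, dy, dphi in increments:
        dx += rng.gauss(0.0, sigma_xy)
        dy += rng.gauss(0.0, sigma_xy)
        dphi += rng.gauss(0.0, sigma_phi)
        # Compose the local increment onto the global pose:
        x += dx * math.cos(phi) - dy * math.sin(phi)
        y += dx * math.sin(phi) + dy * math.cos(phi)
        phi += dphi
        path.append((x, y, phi))
    return path

# 100 sampled paths for 1 m of straight odometry in 10 steps:
odometry = [(0.1, 0.0, 0.0)] * 10
paths = [sample_path(odometry) for _ in range(100)]
final_x = [p[-1][0] for p in paths]
print(min(final_x), max(final_x))  # spread around 1.0 grows with sigma
```

Increasing the sigmas widens the cloud of final poses; a loop closure can only be detected if the true pose falls inside that cloud.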

6.3. Creating ‘odometry’ from scan-matching

Menu path: Sensors -> Odometry -> Recalculate actions with ICP…
Description: This operation allows “correcting” the odometry readings by performing ICP-based scan matching between each pair of consecutive observations. Note that this operation is only implemented for rawlogs in the “SensoryFrame” format.

Screenshot-RawMap-odo Screenshot-RawMap-odo-icp
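The core step inside ICP can be sketched for the 2D case: given two scans with known point correspondences, the rigid motion (rotation plus translation) that best aligns them has a closed-form least-squares solution. Full ICP iterates this step after re-estimating correspondences by nearest neighbor; the simplified sketch below assumes the correspondences are already known and is not MRPT's implementation.

```python
# Closed-form 2D rigid alignment of two point sets with known
# correspondences: the inner step that ICP iterates. Illustrative only.
import math

def align_2d(src, dst):
    """Least-squares rigid transform (tx, ty, theta) mapping src -> dst."""
    n = len(src)
    cx_s = sum(p[0] for p in src) / n
    cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n
    cy_d = sum(p[1] for p in dst) / n
    # Cross-covariance terms of the centered point sets:
    sxx = sxy = syx = syy = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs, ys = xs - cx_s, ys - cy_s
        xd, yd = xd - cx_d, yd - cy_d
        sxx += xs * xd; sxy += xs * yd
        syx += ys * xd; syy += ys * yd
    theta = math.atan2(sxy - syx, sxx + syy)
    tx = cx_d - (cx_s * math.cos(theta) - cy_s * math.sin(theta))
    ty = cy_d - (cx_s * math.sin(theta) + cy_s * math.cos(theta))
    return tx, ty, theta

# Recover a known motion: rotate by 0.3 rad, translate by (0.5, -0.2).
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
c, s = math.cos(0.3), math.sin(0.3)
dst = [(c * x - s * y + 0.5, s * x + c * y - 0.2) for x, y in src]
tx, ty, theta = align_2d(src, dst)
# tx, ty, theta are approximately 0.5, -0.2, 0.3
```

Applied to consecutive laser scans, this recovered transform plays the role of the corrected odometry increment stored in each action.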

7. Displaying images as a video

Menu path: Sensors -> Images -> Show images as a video…
Description: This module allows visualizing all the imaging sensors in a dataset as a video sequence, or browsing through the entire duration using a scrollbar:


Note that this works for monocular cameras, stereo cameras, and the intensity channel of 3D cameras.

8. Generating a ground truth path from rawlogs with 3 RTK GPS

This specifically applies to rawlogs described in this paper.


  • Load the rawlog.
  • Click the “Raw map” button.
  • If desired, mark the range of entries to process in the controls at the top.
  • Optionally set a decimation > 1 if you do not need to insert every laser scan into the 3D map; this makes the process a bit faster.
  • Click “Map from RTK GPS”.

The map should appear in the main area at the bottom. By hitting “Save vehicle path…”, a lot of information can be dumped to text files, including the reconstructed ground truth.