The Málaga Stereo and Laser Urban Data Set


  • 2017-05-10: Added the calibration and data sheet of the MTi IMU.
  • 2014-05-15: All images have been rectified again so that both stereo image centers coincide. This reduces the number of parameters needed to perform stereo SLAM or visual odometry. Calibration files of the rectified images have been updated.
  • 2013-10-09: First version online.
Download dataset files: multiple packages, ranging from 653 MB to 33 GB each.
Sensors: stereo camera, IMU, GPS, 2× SICK LMS, 3× HOKUYO
Recorded at: Málaga (Spain)
Additional info:
This dataset was gathered entirely in urban scenarios with a car equipped with several sensors, including one stereo camera (Bumblebee2) and five laser scanners. One distinctive feature of the present dataset is the existence of high-resolution stereo images grabbed at high rate (20fps) during a 36.8km trajectory, turning the dataset into a suitable benchmark for a variety of computer vision techniques. Both plain text and binary files are provided, as well as open source tools for working with the binary versions.

1. Summary

The car after installing the sensors.



J.L. Blanco-Claraco, F.A. Moreno-Dueñas, J. González-Jiménez, "The Málaga Urban Dataset: High-rate Stereo and Lidars in a realistic urban scenario", The International Journal of Robotics Research (IJRR), vol. 33, no. 2, pp. 207-214, Feb 2014. DOI: 10.1177/0278364913507326 (BibTeX, Draft PDF).

2. Downloads: selected extracts

For convenience, we provide separate downloads for a number of selected sections of the dataset.
Working with these extracts should be much easier than with the entire dataset, which is available below.

All packages include raw and rectified stereo images, all data in plain text files and in binary (.rawlog) format.

Each extract includes a download link, a path overview, and a video summary (click to play).
Extract #01:
Straight path in the faculty parking.

Extract #02:
Through an under-construction road.

Extract #03:
Roundabout 3/4 turn.

Extract #04:
Roundabout with traffic.

Extract #05:
Avenue loop closure (~1.7 km).

Extract #06:
Block loop closure (~1.2 km).

Extract #07:
Short avenue loop closure (~0.7 km).

Extract #08:
Long loop closure (~4.5 km).

Extract #09:
Campus boulevard (with traffic).

Extract #10:
Multiple loop closures.

Extract #11:
Highway merge (with traffic).

Extract #12:
Long avenue with traffic (~3.7 km).

Extract #13:
Downtown (traffic, pedestrians).

Extract #14:
Direct sun example.

Extract #15:
Direct sun example.

3. Downloads: entire dataset

3.1. Plain text

3.2. Images/video

3.3. Binary files

  • download (1.3 GB, MD5) – Contents of this file:
    • malaga-urban-dataset_all-sensors.rawlog: The main binary log with all sensors for the entire dataset.
    • malaga-urban-dataset_CAM+GPS.rawlog: A filtered version of the one above, containing only the sensor streams of the stereo camera and the GPS receiver.
    • malaga-urban-dataset_CAM+GPS.kml: A Google Earth file with a representation of the dataset path (as generated with the tool “rawlog-edit”).
All .rawlog files can be opened with the program “RawLogViewer“, part of MRPT. In Ubuntu, install the package “mrpt-apps“.
Note that the raw (unrectified) images are NOT included in these rawlogs. They are available for download via a link above. If you use MRPT apps or C++ classes to parse the .rawlog files, please create a directory named "Images" and put all the images there so the software can find them.

4. Overview of the entire dataset

4.1. Path in Google Maps

The following map has been obtained from the GPS sensor onboard the vehicle during the whole dataset run.
You can zoom in to see the path in more detail.

4.2. Video index

This composition shows:

  • Top-left: raw images from stereo (left) camera.
  • Bottom-left: The position from GPS, overlaid on a map of the city.
  • Right: A 3D reconstruction of the environment by very simple interpolation of GPS points. Only two lasers (out of five) are used in this view.
  • You can also see the exact timestamp for each instant, which is useful to directly skip to an interesting part for your application.

The video is also available for download for offline usage:

5. Parsing the logs (C++ code)

Apart from the plain text files, the binary .rawlog files are an efficient and convenient way to access vision and robotics datasets
from C++ code. The following code is provided as a starting point for writing your own programs:

Example of usage:

./parse_dataset_example malaga-urban-dataset-extract-01/malaga-urban-dataset-extract-01_rectified_800x600.rawlog
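If you prefer the plain-text flavour of the dataset, a minimal stdlib-only reader can also serve as a starting point. This is a sketch: the column layout assumed below (a timestamp followed by numeric sensor values) is an illustration only; check each file's own header in the dataset for the real column order.

```cpp
#include <sstream>
#include <string>
#include <vector>

struct Sample {
    double timestamp;            // seconds, common clock for all sensors
    std::vector<double> values;  // remaining numeric columns
};

// Parses one line of the assumed form "timestamp v1 v2 ...".
// Returns false for empty lines and '%'/'#' comment lines.
bool parseLine(const std::string& line, Sample& out) {
    if (line.empty() || line[0] == '%' || line[0] == '#') return false;
    std::istringstream ss(line);
    if (!(ss >> out.timestamp)) return false;
    out.values.clear();
    double v;
    while (ss >> v) out.values.push_back(v);
    return true;
}
```

Wrap this in a loop over `std::getline` on the chosen .txt file; the comment-skipping logic means header lines are ignored automatically.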

6. Additional info

  • Extra pictures:
Sensors before mounting on the car.

Sensors before mounting on the car.

The car after installing the sensors.

The car after installing the sensors.

  • Analysis of the (complete) rawlog file using “rawlog-edit --info“:

$ rawlog-edit --info -i malaga-urban-dataset_all-sensors.rawlog
[rawlog-edit] Operation to perform: info
[rawlog-edit] Opening 'malaga-urban-dataset_all-sensors.rawlog'...
[rawlog-edit] Open OK.
[rawlog-edit] Found external storage directory: Images
Progress: 2146534 objects --- Pos:    4.76 GB/>   1.41 GB
Time to parse file (sec)          : 38.7593
Physical file size                : 1.41 GB
Uncompressed file size            : 4.79 GB
Compression ratio                 : 29.36%
Overall number of objects         : 2153717
Actions/SensoryFrame format       : No
Observations format               : Yes
All sensor labels                 : CAMERA1, GPS_DELUO, HOKUYO1, HOKUYO2, HOKUYO3, LASER1, LASER2, XSensMTi
Sensor (Label/Occurs/Rate/Durat.) :         CAMERA1 / 113082 /19.998 /5654.574
Sensor (Label/Occurs/Rate/Durat.) :       GPS_DELUO /  11244 /1.989 /5653.000
Sensor (Label/Occurs/Rate/Durat.) :         HOKUYO1 / 225416 /39.864 /5654.617
Sensor (Label/Occurs/Rate/Durat.) :         HOKUYO2 / 225631 /39.902 /5654.624
Sensor (Label/Occurs/Rate/Durat.) :         HOKUYO3 / 225510 /39.880 /5654.621
Sensor (Label/Occurs/Rate/Durat.) :          LASER1 / 398531 /74.974 /5315.578
Sensor (Label/Occurs/Rate/Durat.) :          LASER2 / 404487 /73.568 /5498.109
Sensor (Label/Occurs/Rate/Durat.) :        XSensMTi / 549816 /100.000 /5498.150

  • mounir

    Dear researchers,
    I'm working on integrating GPS, IMU and odometer data in an embedded system (FPGA). I am searching for a dataset including GPS, IMU and odometry, plus ground truth. I would be grateful if you could help me find one; among the datasets I have seen so far, I didn't find what I'm looking for.
    I look forward to hearing from you.
    Thank you.

    • Jose Luis Blanco

      Dear Mounir,

      Except for wheel odometry, this dataset has all of those sensors and also ground truth:


      • mounir

        Dear Jose Luis,
        thank you so much for your help; this dataset is very good, but I still have the problem of the missing sensor: I need a dataset containing GPS, INS, odometry and ground truth 🙁 . I don't know whether I can generate odometry data from the rawlog (e.g. from the camera) even though the dataset doesn't contain odometry, or whether that is impossible.

        Thanks, JL

  • Mohamed

    I work on a hybrid visual/inertial odometry system.

    I have a problem with estimating the accelerometer bias.

    Did you calibrate the accelerometer in your dataset?

    If not, is there a (rich-motion) dataset from which I can estimate the accelerometer bias?

    • Jose Luis Blanco

      Hi Mohamed,

      In the specific case of this dataset we used XSens IMUs, which come with a calibration certificate for each individual unit. However, I don't have that information at hand right now (I moved to another lab), so if you need it you should email the second or third author of the dataset paper.

  • Ellon Paiva Mendes


    I'm trying to use the dataset for visual-inertial SLAM. So far I have downloaded extract 06, and I found strange behaviour when looking at the images. For instance, looking at the sequence of rectified 800×600 images:

    The middle image shows an unnatural movement relative to the others. Also, the difference between the timestamps of the second and third images is too small. This pattern repeats later in the same dataset. Do you know what's happening here?

  • Jose Luis Blanco

    Hi Ellon, and thanks for the feedback.

    I checked those images and indeed it seems there was some error and the last two images probably got swapped. It may have been caused by sporadic FireWire transmission errors that the grabbing software could not detect. Unfortunately, the only solution at this stage is to manually or automatically detect those occasional errors and remove the affected keyframes; sorry about that.

    About timestamps: they are given according to the computer's internal clock, but one can be sure the images were grabbed in a timely fashion at precisely 20 Hz via the Bumblebee2's internal trigger.
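Since frames arrive at a nominal 20 Hz, anomalies like the swapped pair discussed here show up as inter-frame gaps far from the 0.05 s period. A minimal sketch of such a check (the tolerance is an arbitrary choice, not something prescribed by the dataset):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Returns indices i such that the gap t[i] - t[i-1] deviates from the
// nominal frame period by more than `tol` seconds. Flagged frames can then
// be inspected (and possibly dropped) manually.
std::vector<std::size_t> findTimingAnomalies(
    const std::vector<double>& t, double period = 0.05, double tol = 0.02) {
    std::vector<std::size_t> bad;
    for (std::size_t i = 1; i < t.size(); ++i)
        if (std::fabs((t[i] - t[i - 1]) - period) > tol) bad.push_back(i);
    return bad;
}
```

Feeding it the image timestamps of an extract gives a short list of frames to review by eye.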


  • Juil Sock

    Dear researcher

    First of all, thank you for the great dataset! I have been trying to project the point cloud acquired from Hokuyo1 onto the image captured by camera1, but the downloaded file only contains the intrinsic parameters; the extrinsic parameters seem to be omitted. The paper does have a figure and a summary of the sensor positions, but the value of the "z" position of camera1 in Table 2 does not seem to agree with the position of camera1 in the figure. It would be great if you could include the extrinsic parameters of the sensors in the dataset.

    Thank you

  • Chao Qu

    I don’t quite understand the camera params file.
    Could you explain why the left and right camera have different cx after they are rectified?

    • jlblanco

      As mentioned in the changelog at the top of this page, images were re-rectified in May 2014 to assure that both image streams have the same intrinsic parameters. Perhaps you found a copy of the old parameters file?

      The correct parameters for each resolution can be found here:

      There you should also find the "old" rectification parameters (files with the suffix _former_2010), kept only for reference; just ignore them. To make sure you have the correct dataset parameters, open the .rawlog files with RawLogViewer and click on any stereo image: you should see the associated parameters, for both the raw and rectified dataset files.

  • Jan Kucera

    Hi guys, thanks for providing such valuable datasets! I downloaded extract_03 and inspected the source stereo images in JPEG. They seem to exhibit some heavy compression artifacts, especially in the horizontal direction (a horizontal line looks very bad); interestingly, vertical lines are fine. Do you still have the uncompressed images (maybe in PNG) at your disposal? I think it is not a good idea to lower the quality of the data with JPEG compression. What software did you use for the conversion to JPEG? I made some tests and was not able to reproduce your artifacts in my own JPEG compression, even with a higher compression ratio, which may point to a conversion issue on your side…

    Here is an image with the artifacts I mean highlighted; I attach the image to this comment once again.

    • jlblanco

      Hi Jan, and thanks for the feedback!

      We know about the problems of JPEG, but we still had to use a compressed format during the original dataset grabbing due to (IIRC) bandwidth limitations with PNG images at full resolution and full framerate.

      *RAW* images were saved with >=95% quality, so hopefully they should not present those artifacts; please give them a try and let us know if you still find the same problems. The rectified versions were generated with MRPT tools (rawlog-edit --stereo-rectify), which ultimately call OpenCV for the image warping.

      Both RAW and RECTIFIED images were saved with OpenCV imwrite().

      Hope it helps…

      • Jan Kucera

        Dear Jose,

        thanks for the reply. But I was talking about the RAW images in the "Images" folder; I probably forgot to mention it. The camera calibration images (with the chessboard) exhibit exactly the same thing, and the rectified sequences do too. This is very strange and may point to some kind of bug or unexpected behavior in OpenCV's imwrite. I tried to simulate the JPEG compression on my own images with the FastStone image viewer, and it showed no similar artifacts on either horizontal or vertical edges/lines, even with much higher compression ratios (50% or so). But I am not an expert on JPEG and can imagine it may have some advanced settings available.

        It may be very helpful to see the actual input matrix (Mat) passed to that imwrite, or maybe to capture a few static raw images to PNG, if you can still do that with a similar setup. There may actually be a problem even in the raw image source for the original imwrite… one never knows.

        What setup did you use to acquire the camera calibration images? I guess you may have connected the Bumblebee2 directly to a computer in a lab? If you can easily reproduce that setup and switch to PNG, that would probably be the fastest test. I do not know the internals and circuitry of the Bumblebee2, but since it can do the stereo matching itself, it must be more complicated than a single mono camera, and I guess its firmware could actually produce some problems.

        • jlblanco

          Hi Jan,

          Perhaps this is an artifact from the Bayer arrangement of color pixels?
          I moved to another lab in 2012 and don't have any Bumblebee around for testing, but I will ask someone there in Málaga to plug it in and check the raw images to sort this out…

          • jlblanco

            Jan: We are making tests and we have confirmed so far that it is NOT caused by JPEG compression, but by the Bayer demosaicing filter. Hope it helps. Will post updates.

          • jlblanco

            Well, indeed the problem was related to the use of the default NEAREST Bayer filter in libdc1394. After some testing (thanks Jesús Briales!!) we found that it is much better to use the HQLINEAR filter in the future. All images and an explanatory README are here:

            Regarding this dataset, I guess the easiest way to minimize this "zig-zag" effect on edges is to use the 800×600 rectified images.


  • Ralph

    Hi. Thank you for providing the Bumblebee dataset.
    I have a question about disparity maps.
    I need a disparity map for this dataset, but you do not provide one.
    So I am going to compute one with OpenCV SGBM (Semi-Global Block Matching). However, it is difficult for me to select optimal SGBM parameters, so I cannot get a good-quality disparity map.
    Could you tell me the best OpenCV SGBM parameters for computing a disparity map on this dataset?
    Or, if you already have disparity maps, could you send them to me?
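Whatever SGBM parameters are used, once a disparity d (in pixels) is available, depth follows from the standard rectified-stereo relation Z = f·B/d. A minimal sketch; the focal length below is a placeholder (take the real value from the dataset's camera parameter files), and B = 0.12 m is the Bumblebee2's nominal baseline, which you should verify against the calibration you downloaded:

```cpp
#include <limits>

// Depth (meters) from a rectified-stereo disparity (pixels).
// focal_px is a placeholder value, NOT the dataset's real calibration;
// baseline_m = 0.12 is the Bumblebee2's nominal baseline.
double depthFromDisparity(double disparity_px,
                          double focal_px = 800.0,
                          double baseline_m = 0.12) {
    if (disparity_px <= 0.0)  // invalid or infinitely far match
        return std::numeric_limits<double>::infinity();
    return focal_px * baseline_m / disparity_px;
}
```

For example, with those placeholder values a 48-pixel disparity corresponds to a depth of about 2 m.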

  • Hyeonbo Kim

    Hi, thanks for providing such valuable datasets! I downloaded extract 11. I have a question about the calibration of the camera and Lidar.

    I use just one camera image and LMS-200 Lidar data. I tried to calibrate these sensors, but I think the results are not reliable. So I wonder how you calibrated your camera images and Lidar data. If you can advise on how to accomplish the calibration of the camera and Lidar, please let me know.

    Below are the steps I followed to calibrate the sensors. Would you check all the steps and advise me?

    Thank you.

    1. Calculate the world coordinate X, Y, Z using Lidar data.

    We can get theta from the Lidar's angular resolution, and rho is the distance value measured by the Lidar.

    2. Find the pixel ‘m’ which is projected from world coordinate ‘M’.

    We find the projected point 'm' using the standard camera projection equation m ≃ K [R | t] M.

    The intrinsic values are known from your data set, and the camera pose matrix is the relative pose between the Lidar and the camera.

    We can get the rotation vector and translation vector. Since we need the rotation matrix (3×3), we compute it from the rotation vector via the Rodrigues transform, and then combine the rotation matrix and translation vector into a (3×4) matrix.

    The below image is result of the above steps.

    We use “malaga-urban-dataset-extract-11”.

    The red points are the points scanned by the Lidar, the rectangle indicates the distance from the camera, and the blue points simply show the Lidar data.

    We think these are not reliable results, so we need some advice about the calibration of the camera and Lidar.
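The pipeline described in the steps above (polar scan to Cartesian point, Rodrigues rotation, pinhole projection) can be sketched end to end. All numeric values in the usage below are placeholders, not the dataset's actual calibration; the real intrinsics come from the dataset's camera parameter files, and the Lidar-to-camera pose from your own extrinsic calibration.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
struct Pixel { double u, v; };

// Step 1: polar Lidar return (rho, theta) -> Cartesian point in the Lidar
// frame (the scan plane is taken to be the sensor's x-y plane).
Vec3 lidarToCartesian(double rho, double theta) {
    return {rho * std::cos(theta), rho * std::sin(theta), 0.0};
}

// Rodrigues' rotation formula: rotate v about unit axis k by angle a.
Vec3 rodrigues(const Vec3& v, const Vec3& k, double a) {
    const double c = std::cos(a), s = std::sin(a);
    const double dot = k.x * v.x + k.y * v.y + k.z * v.z;
    const Vec3 cross{k.y * v.z - k.z * v.y,
                     k.z * v.x - k.x * v.z,
                     k.x * v.y - k.y * v.x};
    return {v.x * c + cross.x * s + k.x * dot * (1 - c),
            v.y * c + cross.y * s + k.y * dot * (1 - c),
            v.z * c + cross.z * s + k.z * dot * (1 - c)};
}

// Step 2: pinhole projection m = K [R|t] M, with K given by (fx, fy, cx, cy).
// p_cam must already be expressed in the camera frame (i.e. after applying
// the Lidar-to-camera rotation and translation).
Pixel project(const Vec3& p_cam, double fx, double fy, double cx, double cy) {
    return {fx * p_cam.x / p_cam.z + cx, fy * p_cam.y / p_cam.z + cy};
}
```

A quick sanity check of such a pipeline: a point on the camera's optical axis must project to (cx, cy), and points with positive x/y must land right of / below the principal point; if they do not, the extrinsic rotation is the first suspect.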

  • Georgios D. Karantaidis

    Hello, everyone. What is the baseline of the stereo camera?
    I appreciate your help!
    Thank you!

  • Deepali Ghorpade

    Thank you for such a valuable dataset. Could you tell me the specification details (model number) of the Hokuyo sensors used here?

    • jlblanco

      Hokuyo UTM-30LX.

      • Deepali Ghorpade

        Thank you @jlblanco.

        Can I get a dataset for the Hokuyo URG-04LX-UG01?
        Is it available in the MRPT repository?
        Thanks in advance.

  • أحمد أيمن

    I downloaded the plain-text dataset, and the IMU velocity and position internal estimations are all zeros.
    Where can I get the dataset containing the estimated values?

    • jlblanco

      The IMU only outputs raw data (accelerations and gyro rates), plus a filtered estimation of the attitude (angles). No "velocity", no "position": they cannot be reliably obtained from an IMU alone, since it has no global references, and double-integrating acceleration is normally a very bad idea that leads to poor results.

      PS: You have “position” in the GPS files.
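A quick numeric illustration of why double-integrating raw IMU accelerations goes wrong: a constant accelerometer bias b produces a position error that grows roughly as 0.5·b·t². The sketch below simply Euler-integrates such a bias; the bias value in the usage note is a hypothetical example, not a property of this dataset's IMU.

```cpp
// Integrate a constant acceleration bias for t_end seconds at step dt and
// return the accumulated position error (simple Euler integration).
double driftFromBias(double bias, double t_end, double dt = 0.01) {
    double v = 0.0, x = 0.0;
    for (double t = 0.0; t < t_end; t += dt) {
        v += bias * dt;  // velocity error grows linearly
        x += v * dt;     // position error grows quadratically
    }
    return x;
}
```

For instance, driftFromBias(0.01, 300.0), i.e. a tiny 0.01 m/s² bias over five minutes, already yields on the order of 450 m of position drift, which is why a global reference such as GPS is needed.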

      • أحمد أيمن

        I know it's not reliable alone, but the readme file said it had estimated position and velocity, so I thought you had fused and filtered the IMU data with the GPS to estimate them.

        thanks for the note

  • Akshay Kumar

    Thank you for the dataset. Can you tell me the model number of the xSens-MTi IMU? I’m trying to find the noise parameters for the IMU from the datasheet.

  • Ravindra ji

    Thank you very much for sharing your knowledge and contributing on this topic. I would like to ask a question regarding the correlation time (in seconds) of the IMU. Is it given by the manufacturer, or do we have to calculate it?
    In your INS example, the following is mentioned in the code:
    %% Correlation time [s].
    % Rate gyros
    tauR(1,1) = 626.8115;
    tauR(2,1) = 6468.0515;
    tauR(3,1) = 602.5784;
    Could you please share some ideas or formulas for calculating it?
    Thank you very much in advance.
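One common estimate-it-yourself approach, assuming the gyro bias is modeled as a first-order Gauss-Markov process: fit an AR(1) coefficient phi to a long static gyro recording (x[k+1] ≈ phi·x[k] + w[k]) at sample period dt, and then tau = -dt / ln(phi). Whether the manufacturer supplies tau or you must estimate it depends on the IMU and its datasheet; this sketch shows only the estimation path.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Least-squares AR(1) coefficient of a zero-mean sequence:
//   phi = sum(x[k] * x[k+1]) / sum(x[k]^2)
double fitAR1(const std::vector<double>& x) {
    double num = 0.0, den = 0.0;
    for (std::size_t k = 0; k + 1 < x.size(); ++k) {
        num += x[k] * x[k + 1];
        den += x[k] * x[k];
    }
    return num / den;
}

// Correlation time of the equivalent continuous Gauss-Markov process.
double correlationTime(double phi, double dt) { return -dt / std::log(phi); }
```

In practice you would remove the mean of the static log first, and use a recording much longer than the expected tau (here hundreds to thousands of seconds) for the fit to be meaningful.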

  • Zana

    Thanks for such a valuable dataset.

    Are the IMU and GPS data synchronized with the camera?

    • jlblanco

      Yes, they are. Actually, all sensors are timestamped against a common clock. The sensors do not "fire" simultaneously, but using the common timestamps one can interpolate/extrapolate the vehicle path as needed.
      Hope this answer helped!
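A minimal sketch of what "interpolate using the common timestamps" can mean in practice: the GPS here runs at roughly 2 Hz while the camera runs at 20 Hz, so a camera frame's position can be linearly interpolated between the two surrounding GPS fixes. The structure below is an illustration only, not the dataset's file format.

```cpp
// A timestamped 2D position fix (e.g. from the GPS stream).
struct Fix { double t, x, y; };

// Linear interpolation of position at time t, with a.t <= t <= b.t.
// For orientation, plain linear interpolation is not appropriate; use an
// angle-aware scheme (e.g. slerp on quaternions) instead.
Fix interpolate(const Fix& a, const Fix& b, double t) {
    const double w = (t - a.t) / (b.t - a.t);  // 0 at a.t, 1 at b.t
    return {t, a.x + w * (b.x - a.x), a.y + w * (b.y - a.y)};
}
```

Given consecutive fixes at t = 0 s and t = 1 s, querying the midpoint timestamp returns the midpoint of the two positions, which is exactly the behavior needed to attach an approximate pose to each 20 Hz camera frame.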