# SLAM


Hey,

I'm a beginner at robotics but eager to learn. I'm working on a robotics project that will require the use of SLAM (what I have understood is that a signature of the environment is made with LIDAR, and that scan is compared to future scans to figure out where the robot is). My questions are:

1> I haven't really been able to understand SLAM very well, so where can I study it? Papers and such are very complicated to start with.
2> What if NLOS navigation probes are used to triangulate the position? Would an obstacle avoidance algorithm be computationally lighter? (I don't actually need the map of the environment.)
3> If I use the ICP SLAM available here in MRPT, how exactly do I tell the robot's actuators something in effect telling it to "move here", "move away", "go around", or whatever?

Looking forward to a response!

ZcuBa
------------

SLAM is an abbreviation for Simultaneous Localization And Mapping.
Basically, you attempt to build a map of your environment while moving, in order to identify where you are in relation to your surroundings.

There are a lot of strategies that attempt to solve this problem:

(At first, accept that I will try to explain this from a 2D (3 DoF) perspective, as everything gets immensely difficult if you move to 3D (6 DoF).)

Often a static map is assumed, with no dynamic objects (such as people, vehicles or animals).
It is also assumed that while driving you have some idea of how far, and in which direction, you have moved, even though this estimate may have a small error.

Now suppose you always knew exactly how you moved: you could simply measure distances to the environment, and then transform all measurements according to the robot's position.
Then, as the robot travelled, the map would be built up as a lot of points where distances were measured.
The distance may be measured using radar, LIDAR, or by simply touching the walls...
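In 2D, "transform all measurements according to the robot's position" is just a rotation plus a translation. A minimal Python sketch of that step (the function name and the range/bearing scan format are my own invention, not any particular library's API):

```python
import math

def scan_to_map(pose, scan):
    """Transform range/bearing readings taken at `pose` into global map points.

    pose: (x, y, theta) of the robot in the map frame
    scan: list of (range, bearing) readings in the robot's own frame
    """
    x, y, theta = pose
    points = []
    for r, b in scan:
        # rotate the reading by the robot heading, then shift by its position
        points.append((x + r * math.cos(theta + b),
                       y + r * math.sin(theta + b)))
    return points

# Robot at (1, 0) facing +y (90 degrees), seeing a wall 2 m straight ahead:
pts = scan_to_map((1.0, 0.0, math.pi / 2), [(2.0, 0.0)])
# the wall point lands at roughly (1, 2) in the map frame
```

Appending these points scan after scan is exactly the "map built up as a lot of points" described above, as long as the pose is known.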

Dynamics such as people can now be detected as sensor readings that do not correlate with an area previously travelled...
However, where you have not yet travelled (or seen), you have no way of knowing whether the sensor detects a static wall or a human...

There are some inherent problems in the above:
(1)
Sensors have noise, and thus they may report bad readings, so statistics are used to build a confidence level on readings. In this manner, things you have seen many times are more likely to be mapped than things that only appeared a few times. While this helps reduce errors due to noise and people, it also suddenly requires the map to be sampled multiple times to calculate the confidence...

(2)
You do not know where the robot is. You may estimate it using measurements on the wheels to propagate motions (only good on smooth surfaces with no wheel slip), or using some inertial system, but these methods build up errors over time. You may have GPS, which can help bound the errors, but even GPS has local errors and is thus not accurate enough for, say, hitting a button with a robot (and it may not always be available, e.g. when driving indoors).
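To see why wheel odometry drifts, here is a minimal differential-drive dead-reckoning sketch (names, wheel base and the 1 % encoder bias are all illustrative assumptions): a small systematic error on one wheel turns into a heading error that grows with every step, so the position error is unbounded over time.

```python
import math

def integrate_odometry(pose, d_left, d_right, wheel_base):
    """Propagate an (x, y, theta) pose from left/right wheel travel distances."""
    x, y, theta = pose
    d = (d_left + d_right) / 2.0            # distance travelled by the centre
    d_theta = (d_right - d_left) / wheel_base
    # advance along the average heading during the step
    x += d * math.cos(theta + d_theta / 2.0)
    y += d * math.sin(theta + d_theta / 2.0)
    return (x, y, theta + d_theta)

# Drive straight 1 m in ten 0.1 m steps; give one encoder a 1 % bias
pose_true = (0.0, 0.0, 0.0)
pose_drift = (0.0, 0.0, 0.0)
for _ in range(10):
    pose_true = integrate_odometry(pose_true, 0.1, 0.1, 0.3)
    pose_drift = integrate_odometry(pose_drift, 0.1, 0.101, 0.3)
# pose_drift now has a nonzero heading error that keeps accumulating
```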

The solution is to take the guess found from any combination of these sensors, and then combine it with the map built so far and the distance readings, to provide a probable position for your robot...

(3)
Noise, noise, noise, noise, dynamics, etc...

Popular solutions
Use a statistical filter, like a Kalman (Bayesian) or Monte Carlo (particle) filter, to propagate robot motions, as these can be built around a model of the robot given the inputs to the motors. In this manner, they can analyse all sensor readings to help predict whether a pose estimate is likely.

Kalman filters use a model, and then compare predicted measurements with actual sensor readings to locate the robot and build the map...
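As a toy illustration of that predict/compare cycle, here is a 1-D Kalman filter for a robot on a line measuring its distance from a beacon at the origin (state is just the position; all names and noise values are made up, and a real SLAM filter tracks the map as well):

```python
def kf_predict(x, p, u, q):
    """Predict step: move by commanded u, uncertainty p grows by motion noise q."""
    return x + u, p + q

def kf_update(x, p, z, r):
    """Update step: blend the prediction with measurement z (noise variance r)."""
    k = p / (p + r)                       # Kalman gain: how much to trust z
    return x + k * (z - x), (1 - k) * p

# Robot commanded to move 1.0 per step; noisy position readings arrive each step
x, p = 0.0, 1.0                           # initial guess and its variance
for z in (1.1, 2.0, 2.9):
    x, p = kf_predict(x, p, 1.0, 0.1)
    x, p = kf_update(x, p, z, 0.5)
# x ends near the true position 3.0, and p (the uncertainty) has shrunk
```

Note how the gain `k` does exactly what the text says: it weighs the predicted measurement against the actual sensor reading.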

Particle filters create multiple hypotheses about where the robot could be, and then select the one that fits the sensor readings best...
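The same idea as a minimal 1-D particle filter sketch (hypothetical names; the landmark position, noise levels and particle count are arbitrary choices). Each particle is one hypothesis of where the robot is; motion spreads them out, measurements weigh them, and resampling keeps the fittest:

```python
import math, random

random.seed(0)  # fixed seed so the toy run is repeatable

def localize(landmark, moves, measurements, n=500):
    """Minimal 1-D particle filter: a robot moves along a line and measures
    its (noisy) distance to a single landmark."""
    particles = [random.uniform(0.0, 10.0) for _ in range(n)]  # unknown start
    for u, z in zip(moves, measurements):
        # 1) propagate every hypothesis through the motion model (+ noise)
        particles = [p + u + random.gauss(0.0, 0.05) for p in particles]
        # 2) weight each hypothesis by how well it explains the reading
        weights = [math.exp(-((landmark - p) - z) ** 2 / (2 * 0.25))
                   for p in particles]
        # 3) resample: likely hypotheses survive, unlikely ones die out
        particles = random.choices(particles, weights=weights, k=n)
    return sum(particles) / n             # estimate = mean of the cloud

# Landmark at x = 10; true start x = 2, robot moves +1 three times,
# measured distances to the landmark are roughly 7, 6, 5
est = localize(10.0, [1.0, 1.0, 1.0], [7.0, 6.0, 5.0])
# est converges near the true final position, x = 5
```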

ICP aligns a point cloud (e.g. from LIDAR) with a previously scanned point cloud (e.g. the map) by iteratively searching for the best fit...
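A toy 2-D ICP in plain Python, to show the iterate-match-align loop (this is not MRPT's implementation, just the idea; it assumes nearest-neighbour correspondences and a closed-form rigid alignment each iteration):

```python
import math

def icp_2d(source, target, iterations=20):
    """Tiny 2-D ICP sketch: match each source point to its nearest target
    point, solve for the rigid transform in closed form, apply, repeat."""
    src = list(source)
    for _ in range(iterations):
        # 1) correspondence: nearest target point for every source point
        pairs = [(p, min(target,
                         key=lambda q: (q[0]-p[0])**2 + (q[1]-p[1])**2))
                 for p in src]
        # 2) closed-form rotation + translation between the matched sets
        cx1 = sum(p[0] for p, _ in pairs) / len(pairs)
        cy1 = sum(p[1] for p, _ in pairs) / len(pairs)
        cx2 = sum(q[0] for _, q in pairs) / len(pairs)
        cy2 = sum(q[1] for _, q in pairs) / len(pairs)
        s = sum((p[0]-cx1)*(q[1]-cy2) - (p[1]-cy1)*(q[0]-cx2) for p, q in pairs)
        c = sum((p[0]-cx1)*(q[0]-cx2) + (p[1]-cy1)*(q[1]-cy2) for p, q in pairs)
        a = math.atan2(s, c)              # best-fit rotation angle
        # 3) apply the transform to the source cloud and iterate
        src = [(cx2 + math.cos(a)*(x-cx1) - math.sin(a)*(y-cy1),
                cy2 + math.sin(a)*(x-cx1) + math.cos(a)*(y-cy1))
               for x, y in src]
    return src

# A small "scan" that is the map shifted by (0.5, 0.2): ICP should undo the shift
target = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
source = [(x + 0.5, y + 0.2) for x, y in target]
aligned = icp_2d(source, target)
```

The recovered transform between scan and map is exactly the pose correction ICP SLAM gives you.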

If you want to build any of these filters yourself, you need enough math skills to read the articles and reproduce their results...
Much of the math is done for you in MRPT, but if you understand the math behind it, it is easy to use ;)
------------
NLOS is the Non-Line-Of-Sight property of electromagnetic waves, and can be used to detect which sources are directly visible and which are caused by a mirror effect. This is very useful for triangulating a position, if you have some source of waves that can be considered in the far field.
In the near field everything gets complicated...

Using this to detect obstacles as well as the position of the robot may prove tricky, and I have not seen it done sufficiently well when attempting to model the reason for losing line of sight...
Only for compensating for it. But I am not an expert in radio techniques, and things might have happened without me noticing...

However, for motion planning there are at least two things you need to consider: global and local planning!
If you only need to avoid obstacles, and otherwise have a global target (such as a charger or a point of interest), you might be able to build an obstacle avoidance algorithm that is fast and reliable based on NLOS only.

For global path planning, you need a global map; otherwise your robot will likely be caught in a local area of attraction (a local minimum).
-------------
The ICP algorithm is a scan-matching algorithm; it tells you where the robot is, not what to do.
For that you need a controller which follows a route.
If you decide to use the built-in strategies in MRPT, you need a driver for the actuators that MRPT can use.
Otherwise, I suggest you look at wave propagation, Dijkstra's algorithm, or other path planning algorithms, and then define the robot's actuator speeds as a function of the robot's position error in relation to the planned path...
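For the global planner, here is a minimal Dijkstra sketch on a 4-connected occupancy grid (function name and grid layout are my own; on a uniform-cost grid this is equivalent to breadth-first search, but the same code handles weighted cells):

```python
import heapq

def dijkstra_grid(grid, start, goal):
    """Shortest path on a 4-connected occupancy grid (0 = free, 1 = wall)."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    queue = [(0, start)]
    while queue:
        d, (r, c) = heapq.heappop(queue)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue                      # stale queue entry, skip it
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(queue, (nd, (nr, nc)))
    # walk back from the goal to recover the route
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

# A wall with one gap: the planner has to go around it
grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
route = dijkstra_grid(grid, (0, 0), (0, 2))
```

The returned list of cells is exactly the kind of route the controller below is meant to follow.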

To make things very simple for you:
You might want to define the odometry first, to have a model of how the actuators interact with the floor.
When this is in place, I suggest you simply determine the closest point on the path, define the error as the angle and distance from the robot to that point, and then apply two separate PID regulators to correct the angle and distance.
When the robot reaches the closest point, move the point along the path towards the target...
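Those steps (a motion model, an angle/distance error to the current path point, one regulator per error, and sliding the point along the path) can be sketched roughly like this. For brevity I use plain proportional regulators on a unicycle model; all gains, thresholds and names are illustrative, not a definitive implementation:

```python
import math

def follow_path(path, pose, steps=200, dt=0.05):
    """Steer a unicycle robot along a way-point path with two P regulators:
    one on heading error (turn rate) and one on distance (forward speed)."""
    x, y, th = pose
    k_ang, k_dist = 2.0, 1.0              # regulator gains (tuning assumption)
    target = 0                            # index of the current path point
    for _ in range(steps):
        tx, ty = path[target]
        dx, dy = tx - x, ty - y
        dist = math.hypot(dx, dy)
        if dist < 0.1 and target < len(path) - 1:
            target += 1                   # close enough: slide the point along
            continue
        # heading error to the path point, wrapped to [-pi, pi]
        err = math.atan2(dy, dx) - th
        err = math.atan2(math.sin(err), math.cos(err))
        w = k_ang * err                   # turn-rate command
        v = min(k_dist * dist, 0.5)       # forward-speed command, capped
        # integrate the unicycle motion model over one time step
        th += w * dt
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
    return x, y

# Robot starts offset from a straight path along y = 0 and converges onto it
end = follow_path([(0.5, 0.0), (1.0, 0.0), (1.5, 0.0), (2.0, 0.0)],
                  (0.0, 0.3, 0.0))
```

In a real robot, `w` and `v` would be converted through your odometry model into wheel speeds, and adding the I and D terms of a full PID helps with steady-state error and overshoot.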