Monte Carlo localization

Monte Carlo localization (MCL), also known as particle filter localization,[1] is an application of the particle filter, a Monte Carlo method, for robot localization.[2][3][4][5] Given a map of the environment, the algorithm seeks to estimate the position and orientation of a robot as it moves and senses the environment.[4] The algorithm uses a particle filter to represent the distribution of likely states, with each particle representing a possible state.[4] The algorithm typically starts with a uniform random distribution of particles over the configuration space.[4] Whenever the robot senses something or moves, the particles are resampled based on recursive Bayesian estimation. Ultimately, the particles should converge towards the actual position of the robot, giving a good estimate of its position.[4]

State representation


The belief, which is the robot's estimate of its current state, is a probability density function distributed over the state space.[1][4] In the MCL algorithm, the belief at time $t$ is represented by a set of $M$ particles $\mathcal{X}_t = \{x_t^{[1]}, x_t^{[2]}, \ldots, x_t^{[M]}\}$.[4] Intuitively, regions with many particles correspond to a greater probability that the robot will be there; and regions with few particles are unlikely to be where the robot is. Each particle contains a possible state hypothesis.

The algorithm assumes the Markov property that the current state's probability distribution depends only on the previous state (and not any earlier ones), i.e. $x_t$ depends only on $x_{t-1}$.[4] This only works if the environment is static and does not change with time.[4] Typically, on start-up, the robot has no information on its current pose, so the particles are uniformly distributed over the configuration space.[4]
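
For illustration (this sketch is not taken from the cited sources), the particle set of a planar robot whose state is a position and a heading can be stored as an array with one row per particle and initialized uniformly over an assumed rectangular map; the particle count and map bounds below are arbitrary example values.

    import numpy as np

    # Illustrative particle set for a planar robot whose state is (x, y, heading).
    # The particle count and map bounds are arbitrary example values.
    M = 1000
    x_min, x_max = 0.0, 10.0          # assumed map extent
    y_min, y_max = 0.0, 5.0

    rng = np.random.default_rng(seed=0)

    # With no prior information about the pose, the belief is initialized by
    # spreading the particles uniformly over the configuration space.
    particles = np.column_stack([
        rng.uniform(x_min, x_max, M),     # x position
        rng.uniform(y_min, y_max, M),     # y position
        rng.uniform(-np.pi, np.pi, M),    # heading
    ])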

Overview of algorithm


At every time $t$ the algorithm takes as input the previous belief $\mathcal{X}_{t-1}$, an actuation command $u_t$, and data received from sensors $z_t$; and the algorithm outputs the new belief $\mathcal{X}_t$, which is an updated estimate of the robot's state.[4]

   Algorithm MCL(X_{t-1}, u_t, z_t):
       X̄_t = X_t = ∅
       for m = 1 to M:
           x_t^[m] = motion_update(u_t, x_{t-1}^[m])
           w_t^[m] = sensor_update(z_t, x_t^[m])
           X̄_t = X̄_t + ⟨x_t^[m], w_t^[m]⟩
       endfor
       for m = 1 to M:
           draw x_t^[m] from X̄_t with probability ∝ w_t^[m]
           X_t = X_t + x_t^[m]
       endfor
       return X_t
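
The pseudocode above translates almost line for line into code. The following is a minimal Python sketch of one MCL iteration; it is illustrative only and assumes that motion_update and sensor_update are supplied by the user according to the motion and sensor models discussed below.

    import numpy as np

    def mcl(particles, u, z, motion_update, sensor_update, rng):
        """One iteration of Monte Carlo localization (illustrative sketch).

        particles     : (M, d) array, the previous belief X_{t-1}
        u             : actuation command u_t
        z             : sensor data z_t
        motion_update : function (u, x) -> new state sampled from the motion model
        sensor_update : function (z, x) -> likelihood of observing z from state x
        rng           : a numpy random generator, e.g. np.random.default_rng()
        """
        M = len(particles)

        # Prediction and weighting: pass every particle through the motion model,
        # then weight it by how well it explains the measurement.
        predicted = np.array([motion_update(u, x) for x in particles])
        weights = np.array([sensor_update(z, x) for x in predicted])
        weights /= weights.sum()              # normalize into a probability distribution

        # Resampling: draw M particles (with replacement) in proportion to weight.
        indices = rng.choice(M, size=M, p=weights)
        return predicted[indices]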

Example for 1D robot


Consider a robot that is trying to localize itself in a one-dimensional circular corridor with three doors, using a sensor that returns either true or false depending on whether there is a door.

The algorithm initializes with a uniform distribution of particles: the robot considers itself equally likely to be at any point along the corridor.
Sensor update: the robot detects a door. It assigns a weight to each of the particles; particles that are likely to give this sensor reading receive a higher weight.
Resampling: the robot generates a set of new particles, with most of them generated around the previous particles with more weight. It now believes it is at one of the three doors.

Motion update: the robot moves some distance to the right. All particles also move to the right, and some noise is applied to simulate the real world.
Sensor update: the robot detects the absence of a door. It assigns a weight to each of the particles; particles that are likely to give this sensor reading receive a higher weight.
Resampling: the robot generates a set of new particles, with most of them generated around the previous particles with more weight. It now believes it is at one of two locations.

Motion update: the robot moves some distance to the left. All particles also move to the left, and some noise is applied to simulate the real world.
Sensor update: the robot detects a door. It assigns a weight to each of the particles; particles that are likely to give this sensor reading receive a higher weight.
Resampling: the robot generates a set of new particles, with most of them generated around the previous particles with more weight. The robot has successfully localized itself.

Observe that, at the end of the three iterations, most of the particles are correctly converged around the actual position of the robot.
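
The walkthrough above can be reproduced in a few dozen lines of code. The sketch below is purely illustrative: the corridor length, door positions, motion noise, and sensor accuracy are made-up values chosen so that the three iterations play out roughly as described, not parameters taken from the cited sources.

    import numpy as np

    rng = np.random.default_rng(seed=1)

    # Assumed geometry: a circular corridor of length 12 with three unevenly
    # spaced doors (all values are made up for illustration).
    corridor = 12.0
    doors = np.array([1.0, 4.0, 9.0])
    door_width = 0.5

    def senses_door(x):
        """True where position x (an array) lies within half a door width of a door."""
        d = np.abs((x[:, None] - doors + corridor / 2) % corridor - corridor / 2)
        return (d < door_width / 2).any(axis=1)

    M = 2000
    particles = rng.uniform(0.0, corridor, M)   # uniform initial belief
    true_pose = 4.0                             # the robot actually starts at the second door

    for move in [0.0, 3.0, -6.0]:               # sense in place, move right, move left
        # Motion update: shift every particle by the commanded motion plus noise.
        true_pose = (true_pose + move) % corridor
        particles = (particles + move + rng.normal(0.0, 0.1, M)) % corridor

        # Sensor update: weight particles by agreement with the door sensor.
        z = senses_door(np.array([true_pose]))[0]
        weights = np.where(senses_door(particles) == z, 0.99, 0.01)  # assumed sensor accuracy
        weights /= weights.sum()

        # Resampling: draw a new particle set in proportion to the weights.
        particles = particles[rng.choice(M, size=M, p=weights)]

    near = np.abs(particles - true_pose) < 0.5
    print(f"true pose {true_pose:.1f}; fraction of particles within 0.5: {near.mean():.2f}")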

Motion update

 
Figure: Belief after moving several steps for a 2D robot using a typical motion model without sensing.

During the motion update, the robot predicts its new location based on the actuation command given, by applying the simulated motion to each of the particles.[1] For example, if a robot moves forward, all particles will move forward in their own direction no matter which way they are pointing. If a robot rotates 90 degrees clockwise, all particles will rotate 90 degrees clockwise regardless of where they are. However, in the real world, no actuator is perfect: they may overshoot or undershoot the desired amount of motion; and when a robot tries to drive in a straight line, it will inevitably curve to one side or the other due to minute differences in wheel radius.[1] Hence, the motion model must be carefully designed to include noise as necessary. Inevitably, the particles will diverge or "spread out" during the motion update as a consequence of this noise. This is expected since a robot becomes less and less sure of its position if it drives around blindly without sensing the environment.
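
A hypothetical example of such a noisy motion model for a planar robot is sketched below: each particle turns by the commanded amount and drives forward along its own heading, with Gaussian noise added to both; the noise magnitudes are illustrative assumptions.

    import numpy as np

    def motion_update(u, x, rng, dist_noise=0.05, rot_noise=0.02):
        """Sample a new state for one particle given a command u = (distance, rotation).

        x is a single particle state (x, y, heading); the noise standard deviations
        are illustrative assumptions, not values from the cited sources.
        """
        distance, rotation = u
        # The particle turns by the commanded rotation, corrupted by noise.
        heading = x[2] + rotation + rng.normal(0.0, rot_noise)
        # It then drives forward along its *own* new heading, also with noise.
        d = distance + rng.normal(0.0, dist_noise)
        return np.array([x[0] + d * np.cos(heading),
                         x[1] + d * np.sin(heading),
                         heading])

Binding rng (for example with functools.partial(motion_update, rng=rng)) makes this match the motion_update(u, x) signature assumed in the loop sketch above.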

Sensor update


When the robot senses its environment, it will naturally update its particles to more accurately reflect where it is. For each particle, the robot computes the probability that, had it been at the state of the particle, it would perceive what its sensors have actually sensed. It therefore assigns a weight $w_t^{[m]}$ to each particle. Then, it randomly draws $M$ new particles from the previous belief, with probability proportional to $w_t^{[m]}$. In effect, particles which happen to be consistent with the sensor readings are more likely to be chosen (possibly more than once) and particles which are inconsistent with the sensor readings are rarely picked. As such, particles converge towards a better estimate of the robot's state. This is expected, since a robot becomes increasingly sure of its position as it senses its environment.
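
A minimal sketch of this weighting-and-resampling step is shown below, assuming a sensor whose error is roughly Gaussian; the measurement model and its standard deviation are illustrative assumptions rather than part of the algorithm itself.

    import numpy as np

    def sensor_update_and_resample(particles, z, expected_measurement, rng, sigma=0.2):
        """Weight particles by measurement likelihood, then resample (illustrative sketch).

        particles            : (M, d) array of state hypotheses
        z                    : the actual sensor reading
        expected_measurement : function state -> reading the sensor would return there
        sigma                : assumed standard deviation of the sensor noise
        """
        # w_t^[m] is proportional to p(z | x_t^[m]); here, a Gaussian likelihood of
        # the difference between the actual and the expected reading.
        errors = np.array([z - expected_measurement(x) for x in particles])
        weights = np.exp(-0.5 * (errors / sigma) ** 2)
        weights /= weights.sum()

        # Draw M new particles with probability proportional to their weights:
        # consistent particles may be picked several times, inconsistent ones rarely.
        indices = rng.choice(len(particles), size=len(particles), p=weights)
        return particles[indices]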

Properties of Monte Carlo localization


Non-parametricity


The particle filter can approximate many different kinds of probability distributions, since it is a non-parametric representation.[4] Some other Bayesian localization algorithms, such as the Kalman filter (and its variants, the extended Kalman filter and the unscented Kalman filter), assume the belief of the robot is close to being a Gaussian distribution and do not perform well for situations where the belief is multimodal.[4] For example, a robot in a long corridor with many similar-looking doors may arrive at a belief that has a peak for each door, but the robot is unable to distinguish which door it is at. In such situations, the particle filter can give better performance than parametric filters.[4]

Another non-parametric approach to Markov localization is grid-based localization, which uses a histogram to represent the belief distribution. Compared with the grid-based approach, Monte Carlo localization is more accurate because the state represented in the samples is not discretized.[2]

Computational requirements


The particle filter's time complexity is linear with respect to the number of particles $M$, i.e. $O(M)$. Naturally, the more particles, the better the accuracy, so there is a compromise between speed and accuracy. One strategy to choose the number of particles is to simply keep generating additional particles until the next pair of actuation command $u_t$ and sensor reading $z_t$ has arrived.[4] This way, the greatest possible number of particles is obtained without impeding the function of the rest of the robot. Consequently, the implementation is adaptive to the available computational resources: the faster the processor, the more particles can be generated, and therefore the more accurate the algorithm is.[4]
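
One possible realization of this strategy, assuming that commands and sensor readings arrive through some queue (here a hypothetical queue.Queue named incoming), is to keep drawing resampled particles until the next pair shows up; this is a sketch, not a prescribed implementation.

    import queue

    def resample_until_next_input(weighted_particles, draw_particle, incoming):
        """Grow the resampled particle set until the next (u_t, z_t) pair arrives.

        weighted_particles : the weighted set produced by the sensor update
        draw_particle      : function drawing one particle with probability ~ its weight
        incoming           : a queue.Queue delivering the next (command, reading) pair
        """
        new_particles = []
        while True:
            try:
                u, z = incoming.get_nowait()   # new input has arrived: stop growing
                return new_particles, u, z
            except queue.Empty:
                new_particles.append(draw_particle(weighted_particles))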

Compared to grid-based Markov localization, Monte Carlo localization has drastically reduced memory usage, since memory depends only on the number of particles and does not scale with the size of the map,[2] and it can integrate measurements at a much higher frequency.[2]

Particle deprivation


A serious drawback of the naive implementation of Monte Carlo localization can be seen in a scenario where a robot simply sits at one spot and repeatedly senses the environment without moving.[4] Suppose that the particles all converge towards an erroneous state, or that the robot is picked up and carried to a new location after the particles have already converged to the previous state. Since particles far away from the converged state are rarely selected for the next iteration, they become scarcer on each iteration until they disappear altogether. At this point, the algorithm is unable to recover.[4] This problem is more likely to occur when the number of particles is small and when the particles are spread over a large volume.[4] In fact, any particle filter algorithm may accidentally discard all particles near the correct state during the resampling step.[4]

One way to mitigate this issue is to randomly add extra particles on every iteration.[4] This is equivalent to assuming that, at any point in time, the robot has some small probability of being kidnapped to a random position in the map, which introduces a fraction of random states into the motion model.[4] By guaranteeing that no area in the map is ever totally deprived of particles, the algorithm becomes robust against particle deprivation.
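
A sketch of this mitigation for planar particles is shown below; the injected fraction and map bounds are illustrative assumptions, not values from the cited sources.

    import numpy as np

    def inject_random_particles(particles, rng, fraction=0.01,
                                x_range=(0.0, 10.0), y_range=(0.0, 5.0)):
        """Replace a small fraction of particles with uniformly random poses.

        The fraction and map bounds are illustrative assumptions; the injected
        particles model a small probability that the robot was "kidnapped".
        """
        M = len(particles)
        k = max(1, int(fraction * M))              # number of particles to replace
        idx = rng.choice(M, size=k, replace=False)
        particles[idx, 0] = rng.uniform(*x_range, k)
        particles[idx, 1] = rng.uniform(*y_range, k)
        particles[idx, 2] = rng.uniform(-np.pi, np.pi, k)
        return particles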

References

  1. Ioannis M. Rekleitis. "A Particle Filter Tutorial for Mobile Robot Localization." Centre for Intelligent Machines, McGill University, Tech. Rep. TR-CIM-04-02, 2004.
  2. Frank Dellaert, Dieter Fox, Wolfram Burgard, Sebastian Thrun. "Monte Carlo Localization for Mobile Robots." Proc. of the IEEE International Conference on Robotics and Automation, Vol. 2. IEEE, 1999.
  3. Dieter Fox, Wolfram Burgard, Frank Dellaert, Sebastian Thrun. "Monte Carlo Localization: Efficient Position Estimation for Mobile Robots." Proc. of the Sixteenth National Conference on Artificial Intelligence. John Wiley & Sons, 1999.
  4. Sebastian Thrun, Wolfram Burgard, Dieter Fox. Probabilistic Robotics. MIT Press, 2005. Ch. 8.3. ISBN 9780262201629.
  5. Sebastian Thrun, Dieter Fox, Wolfram Burgard, Frank Dellaert. "Robust Monte Carlo Localization for Mobile Robots." Artificial Intelligence 128.1 (2001): 99–141.