Mobile Robot Localization Using Odometry and Kinect Sensor

5 Mobile Robot Localization

5.1 Introduction

Navigation is one of the most challenging competences required of a mobile robot. Success in navigation requires success at the four building blocks of navigation: perception, the robot must interpret its sensors to extract meaningful data; localization, the robot must determine its position in the environment (figure 5.1); cognition, the robot must decide how to act to achieve its goals; and motion control, the robot must modulate its motor outputs to achieve the desired trajectory. Of these four components (figure 5.2), localization has received the greatest research attention in the past decade and, as a result, significant advances have been made on this front. In this chapter, we explore the successful localization methodologies of recent years. First, section 5.2 describes how sensor and effector uncertainty is responsible for the difficulties of localization. Then, section 5.3 describes two extreme approaches to dealing with the challenge of robot localization: avoiding localization altogether, and performing explicit map-based localization. The remainder of the chapter discusses the question of representation, then presents case studies of successful localization systems using a variety of representations and techniques to achieve mobile robot localization competence.

Figure 5.1 Where am I?

Figure 5.2 General schematic for mobile robot localization: a position prediction from encoder data (e.g., odometry) is matched against observations (raw sensor data or extracted features) using a map database; matched observations drive the position update (estimation).

5.2 The Challenge of Localization: Noise and Aliasing

If one could attach an accurate GPS (global positioning system) sensor to a mobile robot, much of the localization problem would be obviated. The GPS would inform the robot of its exact position, indoors and outdoors, so that the answer to the question, "Where am I?", would always be immediately available. Unfortunately, such a sensor is not currently practical. The existing GPS network provides accuracy to within several meters, which is unacceptable for localizing human-scale mobile robots as well as miniature mobile robots such as desk robots and the body-navigating nanorobots of the future. Furthermore, GPS technologies cannot function indoors or in obstructed areas and are thus limited in their workspace.

But, looking beyond the limitations of GPS, localization implies more than knowing one's absolute position in the Earth's reference frame. Consider a robot that is interacting with humans. This robot may need to identify its absolute position, but its relative position with respect to target humans is equally important. Its localization task can include identifying humans using its sensor array, then computing its relative position to the humans. Furthermore, during the cognition step a robot will select a strategy for achieving its goals. If it intends to reach a particular location, then localization may not be enough. The robot may need to acquire or build an environmental model, a map, that aids it in planning a path to the goal. Once again, localization means more than simply determining an absolute pose in space; it means building a map, then identifying the robot's position relative to that map.

Clearly, the robot's sensors and effectors play an integral role in all the above forms of localization.
It is because of the inaccuracy and incompleteness of these sensors and effectors that localization poses difficult challenges. This section identifies important aspects of this sensor and effector suboptimality.

5.2.1 Sensor noise

Sensors are the fundamental robot input for the process of perception, and therefore the degree to which sensors can discriminate the world state is critical. Sensor noise induces a limitation on the consistency of sensor readings in the same environmental state and, therefore, on the number of useful bits available from each sensor reading. Often, the source of sensor noise problems is that some environmental features are not captured by the robot's representation and are thus overlooked.

For example, a vision system used for indoor navigation in an office building may use the color values detected by its color CCD camera. When the sun is hidden by clouds, the illumination of the building's interior changes because of the windows throughout the building. As a result, hue values are not constant. The color CCD appears noisy from the robot's perspective, as if subject to random error, and the hue values obtained from the CCD camera will be unusable unless the robot is able to note the position of the sun and clouds in its representation. Illumination dependence is only one example of the apparent noise in a vision-based sensor system. Picture jitter, signal gain, blooming, and blurring are all additional sources of noise, potentially reducing the useful content of a color video image.

Consider the noise level (i.e., apparent random error) of ultrasonic range-measuring sensors (e.g., sonars) as discussed in section 4.1.2.3. When a sonar transducer emits sound toward a relatively smooth and angled surface, much of the signal will coherently reflect away, failing to generate a return echo. Depending on the material characteristics, a small amount of energy may return nonetheless. When this level is close to the gain threshold of the sonar sensor, then the sonar will, at times, succeed and, at other times, fail to detect the object. From the robot's perspective, a virtually unchanged environmental state will result in two different possible sonar readings: one short and one long.

The poor signal-to-noise ratio of a sonar sensor is further confounded by interference between multiple sonar emitters. Often, research robots have between twelve and forty-eight sonars on a single platform. In acoustically reflective environments, multipath interference is possible between the sonar emissions of one transducer and the echo detection circuitry of another transducer. The result can be dramatically large errors (i.e., underestimation) in ranging values due to a set of coincidental angles. Such errors occur rarely, less than 1% of the time, and are virtually random from the robot's perspective.

In conclusion, sensor noise reduces the useful information content of sensor readings. Clearly, the solution is to take multiple readings into account, employing temporal fusion or multisensor fusion to increase the overall information content of the robot's inputs.
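As a concrete illustration of temporal fusion, the following sketch (the readings and the outlier threshold are made up for illustration, not taken from the text) fuses repeated readings from a single sonar. The median makes the estimate robust to the occasional spurious short or long echo described above.

```python
import statistics

def fuse_sonar_readings(readings, outlier_threshold=0.5):
    """Temporal fusion of repeated sonar readings from one transducer.

    The median is robust against the occasional spurious short or long
    echo; readings far from the median are discarded before averaging.
    """
    median = statistics.median(readings)
    inliers = [r for r in readings if abs(r - median) < outlier_threshold]
    return sum(inliers) / len(inliers)

# Ten readings of the same wall at ~2.0 m; two are multipath outliers.
readings = [2.02, 1.98, 2.01, 4.85, 2.00, 1.99, 0.42, 2.03, 1.97, 2.01]
print(f"fused range: {fuse_sonar_readings(readings):.2f} m")  # ~2.00 m
```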
5.2.2 Sensor aliasing

A second shortcoming of mobile robot sensors causes them to yield little information content, further exacerbating the problem of perception and, thus, localization. The problem, known as sensor aliasing, is a phenomenon that humans rarely encounter. The human sensory system, particularly the visual system, tends to receive unique inputs in each unique local state. In other words, every different place looks different. The power of this unique mapping is only apparent when one considers situations where it fails to hold. Consider moving through an unfamiliar building that is completely dark. When the visual system sees only black, one's localization system quickly degrades. Another useful example is that of a human-sized maze made from tall hedges. Such mazes have been created for centuries, and humans find them extremely difficult to solve without landmarks or clues because, without visual uniqueness, human localization competence degrades rapidly.

In robots, the nonuniqueness of sensor readings, or sensor aliasing, is the norm and not the exception. Consider a narrow-beam rangefinder such as an ultrasonic or infrared rangefinder. This sensor provides range information in a single direction without any additional data regarding material composition such as color, texture, and hardness. Even for a robot with several such sensors in an array, there are a variety of environmental states that would trigger the same sensor values across the array. Formally, there is a many-to-one mapping from environmental states to the robot's perceptual inputs. Thus, the robot's percepts cannot distinguish from among these many states. A classic problem with sonar-based robots involves distinguishing between humans and inanimate objects in an indoor setting. When facing an apparent obstacle in front of itself, should the robot say "Excuse me" because the obstacle may be a moving human, or should the robot plan a path around the object because it may be a cardboard box? With sonar alone, these states are aliased and differentiation is impossible.

The problem posed to navigation because of sensor aliasing is that, even with noise-free sensors, the amount of information is generally insufficient to identify the robot's position from a single percept reading. Thus techniques must be employed by the robot programmer that base the robot's localization on a series of readings and, thus, on sufficient information to recover the robot's position over time.
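The following toy example (the corridor map and observation encoding are invented for illustration) makes this concrete: a single reading is consistent with many aliased positions, but a short series of readings taken while moving narrows the candidates to one.

```python
# Each corridor cell is a door ('D') or wall ('W'); the robot observes
# one cell per step while moving forward along this hypothetical 1D map.
MAP = "WDWWDWWWDDW"

def consistent_positions(observations):
    """Return start cells whose forward path matches all observations."""
    n, k = len(MAP), len(observations)
    return [i for i in range(n - k + 1)
            if all(MAP[i + j] == observations[j] for j in range(k))]

print(consistent_positions("D"))      # [1, 4, 8, 9]  -> aliased
print(consistent_positions("DWW"))    # [1, 4]        -> fewer candidates
print(consistent_positions("DWWWD"))  # [4]           -> unique
```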
5.2.3 Effector noise

The challenges of localization do not lie with sensor technologies alone. Just as robot sensors are noisy, limiting the information content of the signal, so robot effectors are also noisy. In particular, a single action taken by a mobile robot may have several different possible results, even though from the robot's point of view the initial state before the action was taken is well known. In short, mobile robot effectors introduce uncertainty about future state. Therefore the simple act of moving tends to increase the uncertainty of a mobile robot. There are, of course, exceptions. Using cognition, the motion can be carefully planned so as to minimize this effect, and indeed sometimes to actually result in more certainty. Furthermore, when the robot's actions are taken in concert with careful interpretation of sensory feedback, it can compensate for the uncertainty introduced by noisy actions using the information provided by the sensors.

First, however, it is important to understand the precise nature of the effector noise that impacts mobile robots. It is important to note that, from the robot's point of view, this error in motion is viewed as an error in odometry, or the robot's inability to estimate its own position over time using knowledge of its kinematics and dynamics. The true source of error generally lies in an incomplete model of the environment. For instance, the robot does not model the fact that the floor may be sloped, the wheels may slip, and a human may push the robot. All of these unmodeled sources of error result in inaccuracy between the physical motion of the robot, the intended motion of the robot, and the proprioceptive sensor estimates of motion.

In odometry (wheel sensors only) and dead reckoning (also heading sensors) the position update is based on proprioceptive sensors. The movement of the robot, sensed with wheel encoders or heading sensors or both, is integrated to compute position. Because the sensor measurement errors are integrated, the position error accumulates over time. Thus the position has to be updated from time to time by other localization mechanisms. Otherwise the robot is not able to maintain a meaningful position estimate in the long run.

In the following we concentrate on odometry based on the wheel sensor readings of a differential-drive robot only (see also [4, 57, 58]). Using additional heading sensors (e.g., gyroscope) can help to reduce the cumulative errors, but the main problems remain the same.

There are many sources of odometric error, from environmental factors to resolution:
• Limited resolution during integration (time increments, measurement resolution, etc.);
• Misalignment of the wheels (deterministic);
• Uncertainty in the wheel diameter and in particular unequal wheel diameter (deterministic);
• Variation in the contact point of the wheel;
• Unequal floor contact (slipping, nonplanar surface, etc.).

Some of the errors might be deterministic (systematic); thus they can be eliminated by proper calibration of the system. However, there are still a number of nondeterministic (random) errors which remain, leading to uncertainties in position estimation over time. From a geometric point of view one can classify the errors into three types:
1. Range error: integrated path length (distance) of the robot's movement → sum of the wheel movements
2. Turn error: similar to range error, but for turns → difference of the wheel motions
3. Drift error: difference in the error of the wheels leads to an error in the robot's angular orientation

Over long periods of time, turn and drift errors far outweigh range errors, since their contribution to the overall position error is nonlinear. Consider a robot whose position is initially perfectly well known, moving forward in a straight line along the x-axis. The error in the y-position introduced by a move of d meters will have a component of d·sin(Δθ), which can be quite large as the angular error Δθ grows. Over time, as a mobile robot moves about the environment, the rotational error between its internal reference frame and its original reference frame grows quickly. As the robot moves away from the origin of these reference frames, the resulting linear error in position grows quite large. It is instructive to establish an error model for odometric accuracy and see how the errors propagate over time.
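A quick numeric illustration (the heading errors and distances are chosen arbitrarily for the example) shows how fast the lateral component d·sin(Δθ) grows with distance traveled:

```python
import math

# Why drift error dominates: a heading error of a few degrees becomes a
# large lateral (y) error of roughly d * sin(dtheta) after driving d meters.
for dtheta_deg in (1, 2, 5):
    dtheta = math.radians(dtheta_deg)
    for d in (1, 10, 100):
        print(f"heading error {dtheta_deg:>2} deg, distance {d:>3} m "
              f"-> lateral error {d * math.sin(dtheta):6.2f} m")
# 5 degrees of drift after 100 m already yields ~8.7 m of lateral error.
```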
5.2.4 An error model for odometric position estimation

Generally the pose (position) of a robot is represented by the vector

$$p = \begin{bmatrix} x \\ y \\ \theta \end{bmatrix} \qquad (5.1)$$

For a differential-drive robot the position can be estimated starting from a known position by integrating the movement (summing the incremental travel distances). For a discrete system with a fixed sampling interval Δt the incremental travel distances (Δx; Δy; Δθ) are

$$\Delta x = \Delta s \cos(\theta + \Delta\theta/2) \qquad (5.2)$$

$$\Delta y = \Delta s \sin(\theta + \Delta\theta/2) \qquad (5.3)$$

$$\Delta\theta = \frac{\Delta s_r - \Delta s_l}{b} \qquad (5.4)$$

$$\Delta s = \frac{\Delta s_r + \Delta s_l}{2} \qquad (5.5)$$

where
(Δx; Δy; Δθ) = path traveled in the last sampling interval;
Δs_r; Δs_l = traveled distances for the right and left wheel respectively;
b = distance between the two wheels of the differential-drive robot.

Figure 5.3 Movement of a differential-drive robot (velocity v(t), angular velocity ω(t), heading θ in the inertial frame X_I).

Thus we get the updated position p':

$$p' = \begin{bmatrix} x' \\ y' \\ \theta' \end{bmatrix} = p + \begin{bmatrix} \Delta s \cos(\theta + \Delta\theta/2) \\ \Delta s \sin(\theta + \Delta\theta/2) \\ \Delta\theta \end{bmatrix} = \begin{bmatrix} x + \Delta s \cos(\theta + \Delta\theta/2) \\ y + \Delta s \sin(\theta + \Delta\theta/2) \\ \theta + \Delta\theta \end{bmatrix} \qquad (5.6)$$

By using the relations for (Δs; Δθ) of equations (5.4) and (5.5) we further obtain the basic equation for odometric position update (for differential-drive robots):

$$p' = f(x, y, \theta, \Delta s_r, \Delta s_l) = \begin{bmatrix} x \\ y \\ \theta \end{bmatrix} + \begin{bmatrix} \dfrac{\Delta s_r + \Delta s_l}{2}\cos\left(\theta + \dfrac{\Delta s_r - \Delta s_l}{2b}\right) \\ \dfrac{\Delta s_r + \Delta s_l}{2}\sin\left(\theta + \dfrac{\Delta s_r - \Delta s_l}{2b}\right) \\ \dfrac{\Delta s_r - \Delta s_l}{b} \end{bmatrix} \qquad (5.7)$$

As we discussed earlier, odometric position updates can give only a very rough estimate of the actual position. Owing to integration errors of the uncertainties of p and the motion errors during the incremental motion (Δs_r; Δs_l), the position error based on odometry integration grows with time.

In the next step we will establish an error model for the integrated position p' to obtain the covariance matrix Σ_p' of the odometric position estimate. To do so, we assume that at the starting point the initial covariance matrix Σ_p is known. For the motion increment (Δs_r; Δs_l) we assume the following covariance matrix Σ_Δ:

$$\Sigma_\Delta = \mathrm{covar}(\Delta s_r, \Delta s_l) = \begin{bmatrix} k_r|\Delta s_r| & 0 \\ 0 & k_l|\Delta s_l| \end{bmatrix} \qquad (5.8)$$

where Δs_r and Δs_l are the distances traveled by each wheel, and k_r, k_l are error constants representing the nondeterministic parameters of the motor drive and the wheel-floor interaction. As you can see, in equation (5.8) we made the following assumptions:
• The two errors of the individually driven wheels are independent;⁵
• The variances of the errors (left and right wheels) are proportional to the absolute value of the traveled distances (Δs_r; Δs_l).

5. If there is more knowledge regarding the actual robot kinematics, the correlation terms of the covariance matrix could also be used.

These assumptions, while not perfect, are suitable and will thus be used for the further development of the error model. The motion errors are due to imprecise movement because of deformation of the wheels, slippage, unequal floor, errors in encoders, and so on. The values for the error constants k_r and k_l depend on the robot and the environment and should be experimentally established by performing and analyzing representative movements.

If we assume that p and Δ_rl = (Δs_r; Δs_l) are uncorrelated and the derivation of f (equation [5.7]) is reasonably approximated by the first-order Taylor expansion (linearization), we conclude, using the error propagation law (see section 4.2.2),

$$\Sigma_{p'} = \nabla_p f \cdot \Sigma_p \cdot \nabla_p f^{\,T} + \nabla_{\Delta_{rl}} f \cdot \Sigma_\Delta \cdot \nabla_{\Delta_{rl}} f^{\,T} \qquad (5.9)$$

The covariance matrix Σ_p is, of course, always given by the Σ_p' of the previous step, and can thus be calculated after specifying an initial value (e.g., 0).
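A minimal sketch of the odometric position update of equation (5.7), assuming per-interval wheel travel distances Δs_r, Δs_l from the encoders and wheelbase b; the example values are illustrative only.

```python
import math

def odometry_update(pose, ds_r, ds_l, b):
    """Return the updated pose p' = f(x, y, theta, ds_r, ds_l)."""
    x, y, theta = pose
    ds = (ds_r + ds_l) / 2.0        # equation (5.5)
    dtheta = (ds_r - ds_l) / b      # equation (5.4)
    return (x + ds * math.cos(theta + dtheta / 2.0),   # equation (5.2)
            y + ds * math.sin(theta + dtheta / 2.0),   # equation (5.3)
            theta + dtheta)

# Example: integrate 100 sampling intervals of a gentle left curve.
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = odometry_update(pose, ds_r=0.0105, ds_l=0.0095, b=0.5)
print(pose)  # ~1 m traveled, heading increased by ~0.2 rad
```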
Using equation (5.7) we can develop the two Jacobians, F_p = ∇_p f and F_Δrl = ∇_Δrl f:

$$F_p = \nabla_p f = \begin{bmatrix} \dfrac{\partial f}{\partial x} & \dfrac{\partial f}{\partial y} & \dfrac{\partial f}{\partial \theta} \end{bmatrix} = \begin{bmatrix} 1 & 0 & -\Delta s \sin(\theta + \Delta\theta/2) \\ 0 & 1 & \Delta s \cos(\theta + \Delta\theta/2) \\ 0 & 0 & 1 \end{bmatrix} \qquad (5.10)$$

$$F_{\Delta_{rl}} = \begin{bmatrix} \dfrac{1}{2}\cos\left(\theta + \dfrac{\Delta\theta}{2}\right) - \dfrac{\Delta s}{2b}\sin\left(\theta + \dfrac{\Delta\theta}{2}\right) & \dfrac{1}{2}\cos\left(\theta + \dfrac{\Delta\theta}{2}\right) + \dfrac{\Delta s}{2b}\sin\left(\theta + \dfrac{\Delta\theta}{2}\right) \\ \dfrac{1}{2}\sin\left(\theta + \dfrac{\Delta\theta}{2}\right) + \dfrac{\Delta s}{2b}\cos\left(\theta + \dfrac{\Delta\theta}{2}\right) & \dfrac{1}{2}\sin\left(\theta + \dfrac{\Delta\theta}{2}\right) - \dfrac{\Delta s}{2b}\cos\left(\theta + \dfrac{\Delta\theta}{2}\right) \\ \dfrac{1}{b} & -\dfrac{1}{b} \end{bmatrix} \qquad (5.11)$$

The details for arriving at equation (5.11) are

$$F_{\Delta_{rl}} = \nabla_{\Delta_{rl}} f = \begin{bmatrix} \dfrac{\partial f}{\partial \Delta s_r} & \dfrac{\partial f}{\partial \Delta s_l} \end{bmatrix} \qquad (5.12)$$

$$F_{\Delta_{rl}} = \begin{bmatrix} \dfrac{\partial \Delta s}{\partial \Delta s_r}\cos\left(\theta + \dfrac{\Delta\theta}{2}\right) - \dfrac{\Delta s}{2}\sin\left(\theta + \dfrac{\Delta\theta}{2}\right)\dfrac{\partial \Delta\theta}{\partial \Delta s_r} & \dfrac{\partial \Delta s}{\partial \Delta s_l}\cos\left(\theta + \dfrac{\Delta\theta}{2}\right) - \dfrac{\Delta s}{2}\sin\left(\theta + \dfrac{\Delta\theta}{2}\right)\dfrac{\partial \Delta\theta}{\partial \Delta s_l} \\ \dfrac{\partial \Delta s}{\partial \Delta s_r}\sin\left(\theta + \dfrac{\Delta\theta}{2}\right) + \dfrac{\Delta s}{2}\cos\left(\theta + \dfrac{\Delta\theta}{2}\right)\dfrac{\partial \Delta\theta}{\partial \Delta s_r} & \dfrac{\partial \Delta s}{\partial \Delta s_l}\sin\left(\theta + \dfrac{\Delta\theta}{2}\right) + \dfrac{\Delta s}{2}\cos\left(\theta + \dfrac{\Delta\theta}{2}\right)\dfrac{\partial \Delta\theta}{\partial \Delta s_l} \\ \dfrac{\partial \Delta\theta}{\partial \Delta s_r} & \dfrac{\partial \Delta\theta}{\partial \Delta s_l} \end{bmatrix} \qquad (5.13)$$

and with

$$\Delta s = \frac{\Delta s_r + \Delta s_l}{2} \; ; \quad \Delta\theta = \frac{\Delta s_r - \Delta s_l}{b} \qquad (5.14)$$

$$\frac{\partial \Delta s}{\partial \Delta s_r} = \frac{1}{2} \; ; \quad \frac{\partial \Delta s}{\partial \Delta s_l} = \frac{1}{2} \; ; \quad \frac{\partial \Delta\theta}{\partial \Delta s_r} = \frac{1}{b} \; ; \quad \frac{\partial \Delta\theta}{\partial \Delta s_l} = -\frac{1}{b} \qquad (5.15)$$

we obtain equation (5.11).
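The whole error model can now be written down directly. The sketch below (NumPy for the matrix products; the values of k_r, k_l, and b are illustrative) propagates the pose covariance through equations (5.8) through (5.11):

```python
import math
import numpy as np

def propagate_covariance(sigma_p, theta, ds_r, ds_l, b, k_r, k_l):
    """One step of the error propagation law of equation (5.9)."""
    ds = (ds_r + ds_l) / 2.0
    dtheta = (ds_r - ds_l) / b
    a = theta + dtheta / 2.0
    # F_p, equation (5.10)
    F_p = np.array([[1.0, 0.0, -ds * math.sin(a)],
                    [0.0, 1.0,  ds * math.cos(a)],
                    [0.0, 0.0,  1.0]])
    # F_drl, equation (5.11)
    F_d = np.array([
        [0.5*math.cos(a) - ds/(2*b)*math.sin(a),
         0.5*math.cos(a) + ds/(2*b)*math.sin(a)],
        [0.5*math.sin(a) + ds/(2*b)*math.cos(a),
         0.5*math.sin(a) - ds/(2*b)*math.cos(a)],
        [1.0/b, -1.0/b]])
    # Sigma_delta, equation (5.8)
    sigma_d = np.diag([k_r * abs(ds_r), k_l * abs(ds_l)])
    # Equation (5.9)
    return F_p @ sigma_p @ F_p.T + F_d @ sigma_d @ F_d.T

# Straight-line motion from a perfectly known start (Sigma_p = 0):
sigma = np.zeros((3, 3))
for _ in range(100):
    sigma = propagate_covariance(sigma, theta=0.0, ds_r=0.01, ds_l=0.01,
                                 b=0.5, k_r=1e-4, k_l=1e-4)
print(np.sqrt(np.diag(sigma)))  # sigma_y grows much faster than sigma_x
```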
Figures 5.4 and 5.5 show typical examples of how the position errors grow with time. The results have been computed using the error model presented above.

Figure 5.4 Growth of the pose uncertainty for straight-line movement: Note that the uncertainty in y grows much faster than in the direction of movement. This results from the integration of the uncertainty about the robot's orientation. The ellipses drawn around the robot positions represent the uncertainties in the x,y direction (e.g., 3σ). The uncertainty of the orientation θ is not represented in the picture, although its effect can be indirectly observed.

Figure 5.5 Growth of the pose uncertainty for circular movement (r = const): Again, the uncertainty perpendicular to the movement grows much faster than that in the direction of movement. Note that the main axis of the uncertainty ellipse does not remain perpendicular to the direction of movement.

Once the error model has been established, the error parameters must be specified. One can compensate for deterministic errors by properly calibrating the robot. However, the error parameters specifying the nondeterministic errors can only be quantified by statistical (repetitive) measurements. A detailed discussion of odometric errors and a method for calibration and quantification of deterministic and nondeterministic errors can be found in [5]. A method for on-the-fly odometry error estimation is presented in [105].

5.3 To Localize or Not to Localize: Localization-Based Navigation versus Programmed Solutions

Figure 5.6 depicts a standard indoor environment that a mobile robot navigates. Suppose that the mobile robot in question must deliver messages between two specific rooms in this environment: rooms A and B. In creating a navigation system, it is clear that the mobile robot will need sensors and a motion control system. Sensors are absolutely required to avoid hitting moving obstacles such as humans, and some motion control system is required so that the robot can deliberately move. It is less evident, however, whether or not this mobile robot will require a localization system.

Localization may seem mandatory in order to successfully navigate between the two rooms. It is through localizing on a map, after all, that the robot can hope to recover its position and detect when it has arrived at the goal location. It is true that, at the least, the robot must have a way of detecting the goal location. However, explicit localization with reference to a map is not the only strategy that qualifies as a goal detector.

An alternative, espoused by the behavior-based community, suggests that, since sensors and effectors are noisy and information-limited, one should avoid creating a geometric map for localization. Instead, this community suggests designing sets of behaviors that together result in the desired robot motion. Fundamentally, this approach avoids explicit reasoning about localization and position, and thus generally avoids explicit path planning as well.

Figure 5.6 A sample environment (rooms A and B).

This technique is based on a belief that there exists a procedural solution to the particular navigation problem at hand. For example, in figure 5.6, the behavioralist approach to navigating from room A to room B might be to design a left-wall-following behavior and a detector for room B that is triggered by some unique cue in room B, such as the color of the carpet. Then the robot can reach room B by engaging the left-wall follower with the room B detector as the termination condition for the program. The architecture of this solution to a specific navigation problem is shown in figure 5.7.

The key advantage of this method is that, when possible, it may be implemented very quickly for a single environment with a small number of goal positions. It suffers from some disadvantages, however. First, the method does not directly scale to other environments or to larger environments. Often, the navigation code is location-specific, and the same degree of coding and debugging is required to move the robot to a new environment. Second, the underlying procedures, such as left-wall-follow, must be carefully designed to produce the desired behavior. This task may be time-consuming and is heavily dependent on the specific robot hardware and environmental characteristics.

Third, a behavior-based system may have multiple active behaviors at any one time. Even when individual behaviors are tuned to optimize performance, this fusion and rapid switching between multiple behaviors can negate that fine-tuning. Often, the addition of each new incremental behavior forces the robot designer to retune all of the existing behaviors again to ensure that the new interactions with the freshly introduced behavior are all stable.

Figure 5.7 An architecture for behavior-based navigation: behaviors such as communicate data, discover new area, detect goal position, avoid obstacles, and follow right/left wall are combined by a coordination/fusion stage (e.g., fusion via vector summation) to drive the actuators.
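A minimal sketch of the procedural solution described above. The `Robot` object, its sensor methods, and `CARPET_B_COLOR` are hypothetical, invented for illustration; they are not part of any real library or of the text.

```python
CARPET_B_COLOR = (0.8, 0.2, 0.2)  # hypothetical hue signature of room B

def in_room_b(robot) -> bool:
    """Goal detector: triggered by the unique cue, not by localization."""
    r, g, b = robot.read_floor_color()
    return abs(r - CARPET_B_COLOR[0]) < 0.1 and g < 0.3 and b < 0.3

def follow_left_wall(robot, gap=0.4):
    """One control step keeping the left wall at roughly `gap` meters."""
    if robot.left_range() > gap:
        robot.drive(speed=0.2, turn=+0.3)   # drift toward the wall
    else:
        robot.drive(speed=0.2, turn=-0.3)   # veer away from the wall

def go_to_room_b(robot):
    while not in_room_b(robot):             # termination condition
        follow_left_wall(robot)             # active behavior
    robot.stop()
```

Note that no map, pose estimate, or path plan appears anywhere in this program, which is precisely the point of the behavior-based approach.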
Figure 5.8 An architecture for map-based (or model-based) navigation: perception, localization/map-building, cognition/planning, and motion control connect the sensors to the actuators.

In contrast to the behavior-based approach, the map-based approach includes both localization and cognition modules (see figure 5.8). In map-based navigation, the robot explicitly attempts to localize by collecting sensor data, then updating some belief about its position with respect to a map of the environment. The key advantages of the map-based approach for navigation are as follows:
• The explicit, map-based concept of position makes the system's belief about position transparently available to the human operators.
• The existence of the map itself represents a medium for communication between human and robot: the human can simply give the robot a new map if the robot goes to a new environment.
• The map, if created by the robot, can be used by humans as well, achieving two uses.

The map-based approach will require more up-front development effort to create a navigating mobile robot. The hope is that the development effort results in an architecture that can successfully map and navigate a variety of environments, thereby amortizing the up-front design cost over time.

Of course, the key risk of the map-based approach is that an internal representation, rather than the real world itself, is being constructed and trusted by the robot. If that model diverges from reality (i.e., if the map is wrong), then the robot's behavior may be undesirable, even if the raw sensor values of the robot are only transiently incorrect.

In the remainder of this chapter, we focus on a discussion of map-based approaches and, specifically, the localization component of these techniques. These approaches are particularly appropriate for study given their significant recent successes in enabling mobile robots to navigate a variety of environments, from academic research buildings, to factory floors, and to museums around the world.

5.4 Belief Representation

The fundamental issue that differentiates various map-based localization systems is the issue of representation. There are two specific concepts that the robot must represent, and each has its own unique possible solutions. The robot must have a representation (a model) of the environment, or a map. What aspects of the environment are contained in this map? At what level of fidelity does the map represent the environment? These are the design questions for map representation.

The robot must also have a representation of its belief regarding its position on the map. Does the robot identify a single unique position as its current position, or does it describe its position in terms of a set of possible positions? If multiple possible positions are expressed in a single belief, how are those multiple positions ranked, if at all? These are the design questions for belief representation.

Decisions along these two design axes can result in varying levels of architectural complexity, computational complexity, and overall localization accuracy. We begin by discussing belief representation. The first major branch in a taxonomy of belief representation systems differentiates between single-hypothesis and multiple-hypothesis belief systems. The former covers solutions in which the robot postulates its unique position, whereas the latter enables a mobile robot to describe the degree to which it is uncertain about its position. A sampling of different belief and map representations is shown in figure 5.9.

Figure 5.9 Belief representation regarding the robot position (1D, probability P versus position x) in continuous and discretized (tessellated) maps. (a) Continuous map with single-hypothesis belief, e.g., single Gaussian centered at a single continuous value. (b) Continuous map with multiple-hypothesis belief, e.g., multiple Gaussians centered at multiple continuous values. (c) Discretized (decomposed) grid map with probability values for all possible robot positions, e.g., Markov approach. (d) Discretized topological map with probability values for all possible nodes (topological robot positions, nodes A through G), e.g., Markov approach.

5.4.1 Single-hypothesis belief

The single-hypothesis belief representation is the most direct possible postulation of mobile robot position.
Given some environmental map, the robot's belief about position is expressed as a single unique point on the map. In figure 5.10, three examples of a single-hypothesis belief are shown using three different map representations of the same actual environment (figure 5.10a). In figure 5.10b, a single point is geometrically annotated as the robot's position in a continuous 2D geometric map. In figure 5.10c, the map is a discrete, tessellated map, and the position is noted at the same level of fidelity as the map cell size. In figure 5.10d, the map is not geometric at all but abstract and topological. In this case, the single hypothesis of position involves identifying a single node i in the topological graph as the robot's position.

The principal advantage of the single-hypothesis representation of position stems from the fact that, given a unique belief, there is no position ambiguity. The unambiguous nature of this representation facilitates decision-making at the robot's cognitive level (e.g., path planning). The robot can simply assume that its belief is correct, and can then select its future actions based on its unique position.

Just as decision-making is facilitated by a single-position hypothesis, so updating the robot's belief regarding position is also facilitated, since the single position must, by definition, be updated to a new, single position. The challenge with this position update approach, which ultimately is the principal disadvantage of single-hypothesis representation, is that robot motion often induces uncertainty due to effector and sensor noise. Therefore, forcing the position update process to always generate a single hypothesis of position is challenging and, often, impossible.

5.4.2 Multiple-hypothesis belief

In the case of multiple-hypothesis beliefs regarding position, the robot tracks not just a single possible position but a possibly infinite set of positions. In one simple example originating in the work of Jean-Claude Latombe [21, 99], the robot's position is described in terms of a convex polygon positioned in a 2D map of the environment. This multiple-hypothesis representation communicates the set of possible robot positions geometrically, with no preference ordering over the positions. Each point in the map is simply either contained by the polygon and, therefore, in the robot's belief set, or outside the polygon and thereby excluded. Mathematically, the position polygon serves to partition the space of possible robot positions. Such a polygonal representation of the multiple-hypothesis belief can apply to a continuous, geometric map of the environment [35] or, alternatively, to a tessellated, discrete approximation to the continuous environment.

It may be useful, however, to incorporate some ordering on the possible robot positions, capturing the fact that some robot positions are likelier than others. A strategy for representing a continuous multiple-hypothesis belief state along with a preference ordering over possible positions is to model the belief as a mathematical distribution.
For example, [50, 142] notate the robot's position belief using an X,Y point in the 2D environment as the mean µ plus a standard deviation parameter σ, thereby defining a Gaussian distribution. The intended interpretation is that the distribution at each position represents the probability assigned to the robot being at that location. This representation is particularly amenable to mathematically defined tracking functions, such as the Kalman filter, that are designed to operate efficiently on Gaussian distributions.

Figure 5.10 Three examples of single hypotheses of position using different map representations: (a) real map with walls, doors, and furniture; (b) line-based map → around 100 lines with two parameters; (c) occupancy grid-based map → around 3000 grid cells of size 50 x 50 cm; (d) topological map using line features (Z/S lines) and doors → around 50 features and 18 nodes.

Figure 5.11 Example of multiple-hypothesis tracking (courtesy of W. Burgard [49]), showing the path of the robot and the belief states at positions 2, 3, and 4. The belief state that is largely distributed becomes very certain after moving to position 4. Note that darker coloring represents higher probability.

An alternative is to represent the set of possible robot positions, not using a single Gaussian probability density function, but using discrete markers for each possible position. In this case, each possible robot position is individually noted along with a confidence or probability parameter (see figure 5.11). In the case of a highly tessellated map this can result in thousands or even tens of thousands of possible robot positions in a single belief state.
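A sketch of the discrete-marker belief just described (the 1D map size, sensor model, and door locations are invented for illustration): every cell of a tessellated map carries a probability, and a single noisy observation reweights all cells at once, typically leaving a multimodal belief rather than a single point.

```python
import numpy as np

n_cells = 100
belief = np.full(n_cells, 1.0 / n_cells)    # uniform: position unknown

def measurement_update(belief, likelihood):
    """Bayes update: reweight every marker by the observation likelihood."""
    posterior = belief * likelihood
    return posterior / posterior.sum()       # renormalize to sum to 1

# Hypothetical observation: a door is seen, and doors exist near cells
# 20, 50, and 80 -- the updated belief has three near-equal peaks.
likelihood = np.full(n_cells, 0.01)
for door in (20, 50, 80):
    likelihood[door - 2 : door + 3] = 0.9
belief = measurement_update(belief, likelihood)
print(np.argsort(belief)[-3:])               # cells near the three doors
```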
The key advantage of the multiple-hypothesis representation is that the robot can explicitly maintain uncertainty regarding its position. If the robot only acquires partial information regarding position from its sensors and effectors, that information can conceptually be incorporated in an updated belief.

A more subtle advantage of this approach revolves around the robot's ability to explicitly measure its own degree of uncertainty regarding position. This advantage is the key to a class of localization and navigation solutions in which the robot not only reasons about reaching a particular goal but reasons about the future trajectory of its own belief state. For instance, a robot may choose paths that minimize its future position uncertainty. An example of this approach is [141], in which the robot plans a path from point A to point B that takes it near a series of landmarks in order to mitigate localization difficulties. This type of explicit reasoning about the effect that trajectories will have on the quality of localization requires a multiple-hypothesis representation.

One of the fundamental disadvantages of multiple-hypothesis approaches involves decision-making. If the robot represents its position as a region or set of possible positions, then how shall it decide what to do next? Figure 5.11 provides an example. At position 3, the robot's belief state is distributed among five hallways separately. If the goal of the robot is to travel down one particular hallway, then given this belief state, what action should the robot choose? The challenge occurs because some of the robot's possible positions imply a motion trajectory that is inconsistent with some of its other possible positions. One approach that we will see in the case studies below is to assume, for decision-making purposes, that the robot is physically at the most probable location in its belief state, then to choose a path based on that current position. But this approach demands that each possible position have an associated probability. In general, the right approach to such decision-making problems would be to decide on trajectories that eliminate the ambiguity explicitly.

But this leads us to the second major disadvantage of multiple-hypothesis approaches. In the most general case, they can be computationally very expensive. When one reasons in a 3D space of discrete possible positions, the number of possible belief states in the single-hypothesis case is limited to the number of possible positions in the 3D world. Consider this number to be N. When one moves to an arbitrary multiple-hypothesis representation, then the number of possible belief states is the power set of N, which is far larger: 2^N. Thus explicit reasoning about the possible trajectory of the belief state over time quickly becomes computationally untenable as the size of the environment grows.

There are, however, specific forms of multiple-hypothesis representations that are somewhat more constrained, thereby avoiding the computational explosion while allowing a limited type of multiple-hypothesis belief. For example, if one assumes a Gaussian distribution of probability centered at a single position, then the problem of representation and tracking of belief becomes equivalent to Kalman filtering, a straightforward mathematical process described below. Alternatively, a highly tessellated map representation combined with a limit of ten possible positions in the belief state results in a discrete update cycle that is, at worst, only ten times more computationally expensive than a single-hypothesis belief update. Other ways to cope with the complexity problem, while remaining precise and computationally cheap, are hybrid metric-topological approaches [145, 147] or multi-Gaussian position estimation [35, 60, 81].

In conclusion, the most critical benefit of the multiple-hypothesis belief state is the ability to maintain a sense of position while explicitly annotating the robot's uncertainty about its own position. This powerful representation has enabled robots with limited sensory information to navigate robustly in an array of environments, as we shall see in the case studies below.
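To preview the Gaussian special case in its simplest form, here is a 1D sketch (noise values chosen arbitrarily): because the whole belief is just a mean and a variance, the predict and update steps are each two lines.

```python
def kalman_predict(mean, var, motion, motion_var):
    """Motion blurs the belief: the mean shifts, the variance grows."""
    return mean + motion, var + motion_var

def kalman_update(mean, var, z, z_var):
    """A measurement z sharpens the belief: the variance always shrinks."""
    k = var / (var + z_var)                 # Kalman gain
    return mean + k * (z - mean), (1 - k) * var

mean, var = 0.0, 1.0                        # initial belief
mean, var = kalman_predict(mean, var, motion=1.0, motion_var=0.25)
mean, var = kalman_update(mean, var, z=1.2, z_var=0.1)
print(mean, var)                            # belief near 1.2, var < 0.1
```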
5.5 Map Representation

The problem of representing the environment in which the robot moves is a dual of the problem of representing the robot's possible position or positions. Decisions made regarding the environmental representation can have impact on the choices available for robot position representation. Often the fidelity of the position representation is bounded by the fidelity of the map.

Three fundamental relationships must be understood when choosing a particular map representation:
1. The precision of the map must appropriately match the precision with which the robot needs to achieve its goals.
2. The precision of the map and the type of features represented must match the precision and data types returned by the robot's sensors.
3. The complexity of the map representation has direct impact on the computational complexity of reasoning about mapping, localization, and navigation.

In the following sections, we identify and discuss critical design choices in creating a map representation. Each such choice has great impact on the relationships listed above and on the resulting robot localization architecture. As we shall see, the choice of possible map representations is broad. Selecting an appropriate representation requires understanding all of the trade-offs inherent in that choice as well as understanding the specific context in which a particular mobile robot implementation must perform localization. In general, the environmental representation and model can be roughly classified as presented in chapter 4, section 4.3.

5.5.1 Continuous representations

A continuous-valued map is one method for exact decomposition of the environment. The position of environmental features can be annotated precisely in continuous space. Mobile robot implementations to date use continuous maps only in 2D representations, as further dimensionality can result in computational explosion.

A common approach is to combine the exactness of a continuous representation with the compactness of the closed-world assumption. This means that one assumes that the representation will specify all environmental objects in the map, and that any area in the map that is devoid of objects has no objects in the corresponding portion of the environment. Thus, the total storage needed in the map is proportional to the density of objects in the environment, and a sparse environment can be represented by a low-memory map.
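A sketch of such a continuous map under the closed-world assumption (the room geometry is made up for illustration): only the objects actually present are stored, as exact 2D wall segments, so storage grows with object density rather than with the area of the environment.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """A wall segment with exact, continuous-valued endpoints (meters)."""
    x1: float
    y1: float
    x2: float
    y2: float

# A sparse room needs only a handful of segments, however large it is;
# everywhere not covered by a segment is assumed free (closed world).
world_map = [
    Segment(0.0, 0.0, 10.0, 0.0),    # south wall
    Segment(10.0, 0.0, 10.0, 8.0),   # east wall
    Segment(10.0, 8.0, 0.0, 8.0),    # north wall
    Segment(0.0, 8.0, 0.0, 0.0),     # west wall
    Segment(4.0, 0.0, 4.0, 5.0),     # interior partition
]
print(f"{len(world_map)} segments for an 80 m^2 room")
```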
