Journal of Intelligent and Robotic Systems 24: 235–251, 1999. © 1999 Kluwer Academic Publishers. Printed in the Netherlands.
Path Planning in Unknown Environment With Obstacles Using Virtual Window

M. A. MANSOR and A. S. MORRIS
Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield, U.K.

(Received: 2 October 1997; accepted: 9 August 1998)

Abstract. A virtual window has been applied for real-time path planning of a mobile robot in the presence of unknown obstacles. The path planner projects a vertical plane ahead of the robot, and calculations of the space ahead and any intersections are done with respect to this window. A radial basis function is used to decipher the information on the window and to relay it to the planner. Using this technique, path planning through an unknown space can be performed which avoids collisions with obstacles present in the space. A simple method of defining an obstacle or many obstacles is also proposed by using rectangles of different sizes.

Key words: path planning, vertical plane, unknown space, collision avoidance, virtual window, radial basis function, obstacles.
1. Introduction

The aim of a path planner is to take a robot safely through a space. This can be either a known space or an unknown space. Ideally, the path produced should be as efficient as possible and the processing time as short as possible. A great deal of work has been reported on path planning where the traversed space is known a priori, using potential fields and similar methods (Balding [1], Khatib [6], Hou et al. [3], Zhu et al. [14]). Unfortunately, the number of research papers describing path planning in unknown or uncertain environments is much smaller. McCarthy [8] and Ramírez-Serrano et al. [10] employed ultrasonic sensors to detect obstacles in front of the robot in an unknown environment with a fuzzy logic controller. The sonar returns an image of the space ahead of the robot but, for multiple obstacles, this involves a large amount of computation, which precludes real-time implementation. Skewis et al. [12] applied three different types of algorithm, namely Bug2, Tarry and VisBug, and implemented them on a mobile platform moving in the cluttered environment of a laboratory. Kamon et al. [4] applied a variation of the Bug (Lumelsky et al. [7]) and Bug2 algorithms called DistBug. As the algorithms applied by both sets of authors were designed to follow the boundary of an obstacle whenever one was met, they do not give an optimal solution. Further, as the algorithms were designed to navigate a point robot in an operational or task
workspace, the implementation of the algorithms on a practical robot with real dimensions would be somewhat time-consuming. Zhang et al. [13] proposed a path planning method based on a fuzzy logic controller. The method employed did not have the flexibility of being able to go under an obstacle when a passable tunnel exists. A major problem with all of the classical path planning algorithms discussed so far is the large computational requirement, which prevents real-time implementation in workspaces with unknown obstacles. The virtual window approach developed to overcome this computational limitation is described in Section 2. Section 3 describes some simulation runs using the virtual window approach in different configurations and environments, and Section 4 provides a discussion and conclusion on the work.
2. Theoretical Background

The basis of the novel path planning method described here is the use of a vertical plane acting as a virtual window for a mobile robot. This virtual window is positioned ahead of the robot in the direction of motion. All the computation required in planning the path of the robot and avoiding any obstacles is done with respect to this virtual window, and the computation is fast enough to permit real-time implementation. The image in the window, formed by the intersection of the virtual window and the space that it occupies, is used to guide the mobile robot to its goal position. This is described in detail in Section 2.1. The path planning algorithms for the work were written in the Matlab programming language. Matlab has a good capability for manipulating the matrix used to define the virtual window into a form that allows easier computation by the path planner algorithms, especially in calculating any intersections between the window and the obstacles. A detailed description of the path planner and a brief account of the calculation of intersections using a geometric analysis method are given in Section 2.2. Furthermore, matrices were also used extensively in defining objects, especially obstacles, within the workspace of the mobile robot. They provide an easy way to store information about the locations and sizes of the obstacles, do not take up much space within the programs, and lend themselves to easier calculation of intersections. Section 2.3 describes in detail the procedures used in defining the obstacles. A block diagram of the system is given in Figure 1. The dimensions of the robot, that is, its length, width and height, are not limited to any particular values; in this implementation, for ease of computation, the base is assumed to be square with sides of 2 units, and the height is 4 units (Figure 2).
Two sizes of the virtual window were implemented, one 7 units wide and the other 9 units, with both having a height of 4 units corresponding to the height of the robot.
Figure 1. A block diagram of the planner.
Figure 2. The mobile robot with its virtual window.
The reasons for the two window widths will be made clear in Section 2.1. The robot was also assumed to be able to move forward and backward, move sideways to the left or right, and make a 90° turn to the left or right. This mimics the flexibility of movement of human beings. Sideways and turning movements are defined according to the position of the virtual window. During a sideways movement, the axis of the virtual window is perpendicular (normal) to the direction of motion; that is, the virtual window remains in its original position and only the direction of motion of the mobile robot changes. When the mobile robot makes a turn, on the other hand, the virtual window changes position; that is, it turns towards the new direction of motion. Figure 3 shows the proposed wheel arrangement of the mobile robot. This is specially designed to avoid the steering angle ambiguities associated with the usual type of mobile robot base. It consists of two sets of wheels mounted on orthogonal axes that are raised and lowered to provide forward/backward and sideways movement as required. Such a wheel arrangement has been shown (Skewis et al. [12]) to allow accurate adherence to the planned path, so that exact knowledge of the base position is maintained during motion.

2.1. THE DESIGN OF THE WINDOW
The virtual window as defined in this work is a two-dimensional vertical plane of a predefined size which is projected or positioned at a specified distance in front of the robot in its direction of movement in order to aid in the planning of the robot’s
Figure 3. The layout of the wheels for the mobile base. The small circles are the fixed legs.
path. All the required computation in planning the path of the robot and avoiding any obstacles is done with respect to this window. The image of the intersection of this vertical plane (or virtual window) with the space at that point forms the field of vision of the robot, and the information contained in it is used to determine the path the robot will take towards its goal. One way of implementing the virtual window would be to have a laser beam scanning a vertical plane the size of the window at a fixed distance from the robot, much as an electron beam paints a picture on the screen of a cathode ray tube, as shown in Figure 4. The laser system can be set to give a signal if a reflected beam is received within a specified time, indicating that there is an obstruction at or within a specified distance in front of the robot. If no reflected beam is received within the specified time, then the forward path of the beam is unobstructed up to the specified distance. A sonar system could be implemented similarly instead of a laser system. However, a laser system is preferred due to the low divergence of the laser beam, which gives more precise scanning of the space in front of it. Further, a sonar system can only be implemented for short-range signals due to the higher divergence of sound and, because of the slower speed of sound, the maximum sampling rate will also be lower than with a laser system. In this implementation, the distance of the virtual window from the robot is fixed at 1 unit in front of the robot, that is, 2 units ahead of the centre of its body, as the robot moves only one unit at each time step. This placement of the virtual window safeguards the robot from colliding with obstacles, especially when it makes a turn. When the robot turns about the centre of its body, the longest distance from the centre to any point of its body is the half-diagonal of the base (√2 = 1.4142), which is still less than 2 units, and this thus ensures
Figure 4. A simple implementation of a laser range finder as the ‘eye’ of a robot.
that there is no collision with obstacles, provided there is no intersection between the window and obstacles that would jeopardise the safety of the mobile robot. In theory, the virtual window in front of the mobile robot does not need to be at a fixed position; rather, it can be moved either closer to or further away from the robot depending on:

1. The robot having a higher speed, and thus the need to process the image further ahead than normal. In this case, the virtual window needs to be moved further away from the robot, just as a human being looks further ahead when driving a car faster, for example.
2. The need to know what is ahead of the robot for its safety and for planning the path before actually traversing it, especially in the case of an unknown environment. This action can be compared to a human being standing still and looking ahead in order to judge the path in front and decide whether to proceed or turn.

The only constraint imposed on this positioning of the virtual window is the capability of the sensor used to create it. Apart from the position of the virtual window, its dimensions, that is, its width and height, can also be varied. An increase in the width of the window corresponds to an increase in its field of vision and thus gives the path
Figure 5. The figure shows the design and coding of the windows: (a) 4 × 7 and (b) 4 × 9.
planner more information to avoid getting stuck. Two virtual windows, having the same height but different widths, were implemented to show and compare the benefit of a wider field of vision. There are two steps to the virtual window method of path planning as implemented in this research. Initially, an image is formed by the intersection of the virtual window, at a position in front of the mobile robot, with the space at that point. The next step is to read this image in order to output a reaction. This can be done by dividing the virtual window into horizontal and vertical grids and then calculating the width and height of intersections between the window and an obstacle by summing up the grids to give a row of data, as described in detail below. This data can then be passed through a logic function or procedure, for example an if...else construct, to obtain a corresponding reaction to the image. As can be seen, this computation process is quite lengthy. Thus, in this research it was decided to apply an artificial neural network which reads, or 'identifies', the image and outputs an appropriate reaction. An artificial neural network can greatly reduce the computation effort once the training session is over (Rao [11]). Figure 5 shows the two virtual windows with the coding representing the information within them. In a practical implementation, training patterns consisting of the image in the virtual window (that is, the image of free space, non-free space or a combination of both) along with the appropriate reactions would be used to train the artificial neural network. In this study, though, as only simulations were done, a different method was used to convert the image formed into training patterns for the artificial neural network, as described below*:

* The numbers in brackets indicate the values for the virtual window with the bigger width (the 4 × 9 window).
1. The four elements in the first column, starting from the left side, are summed to get a value. This value is then divided by 5 and 0.05 added. For the example shown, this value is 0.85 (underlined).
2. The four elements in each of the other 6 (8) columns (the virtual window being made up of 7 (9) columns and 4 rows) are similarly summed, giving values of 0.85, 0.85, 0.05, 0.85, 0.85 and 0.85 respectively. (N.B. The sums of each column were divided by 5 rather than 4 to ensure that the final values were smaller than 1, and 0.05 was then added to ensure a value greater than 0. This keeps the values within the specified range of 0 to 1.)
3. A similar process is applied to the first row. The seven (nine) elements of the first row (the top row, or row 1) are summed to get a value. This value is then divided by 10 and 0.05 added. In this example, the value is 0.65 (underlined).
4. The seven (nine) elements in each of the other 3 rows are similarly summed, giving values of 0.65, 0.65 and 0.65 respectively. (N.B. The sums of each row were divided by 10 rather than 7 (or 9), and 0.05 was then added, to ensure that the values were within the specified range, as explained in the note following step {2} above.)
5. These eleven (thirteen) values are then arranged in a single row, with the first seven (nine) elements consisting of the column values as calculated in {1} and {2} above, and the final four elements consisting of the row values as calculated in {3} and {4} above. This final single row of values is then paired with a specified reaction to be used as one of the training patterns.
6. The same procedure, steps {1} to {5} above, is applied to other combinations of free space and non-free space with their corresponding reactions, to be used as training patterns. This gives a total of 11 (13) inputs for every output or reaction.
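Steps 1 to 5 can be sketched in Python (an illustration, not the authors' MATLAB code), reproducing the Figure 5a example in which every cell of the 4 × 7 window is occupied except column 4:

```python
import numpy as np

def encode_window(window):
    """Convert a binary occupancy image of the virtual window (rows x
    columns, 1 = non-free space) into the single training row: one
    value per column (sum/5 + 0.05) followed by one value per row
    (sum/10 + 0.05), as in steps 1-5 above."""
    window = np.asarray(window, dtype=float)
    col_vals = window.sum(axis=0) / 5.0 + 0.05   # steps 1-2
    row_vals = window.sum(axis=1) / 10.0 + 0.05  # steps 3-4
    return np.concatenate([col_vals, row_vals])  # step 5

# the Figure 5a example: all cells occupied except column 4
w = np.ones((4, 7))
w[:, 3] = 0
print(encode_window(w))
# -> [0.85 0.85 0.85 0.05 0.85 0.85 0.85 0.65 0.65 0.65 0.65]
```

Note that the divisors 5 and 10 only guarantee values strictly between 0 and 1 for the window heights and widths used here.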
This output is the action that needs to be taken by the mobile robot given the image in the window. There are 4 basic reactions that can be taken by the mobile robot when using the smaller virtual window (the 4 × 7 window):

1. No Go (1): This value indicates that there is no free space ahead of the robot and it needs to turn either left or right, depending on which direction will bring it nearer to its goal;
2. Go (2): This indicates that the robot can move forward, as the free space ahead is at least large enough for the robot to go through;
3. Right (3): The robot needs to move sideways to the right, as there is free space to the right;
4. Left (4): This is similar to 3 except that the robot moves sideways towards the left.

This is in contrast to the 4 × 9 window, which has a total of 7 reactions or commands available. The reactions are:

1. No Go (1): This value indicates that there is no free space ahead of the robot and it needs to turn either left or right, depending on which direction will bring it nearer to its goal;
2. Ahead/Left/Right (2): This value indicates that the robot has freedom in its movement: it can go forward, or move sideways to the left or right. This is only possible when the image in the virtual window is free of obstacles. The sideways movement of the mobile robot depends on which direction will bring it nearer to its goal;
3. Ahead/Turn left (3): This indicates that the mobile robot can either move forward or turn left. This movement is taken when an obstacle intersects columns 8 and 9 of the virtual window (Figure 5);
4. Ahead/Turn right (4): This is similar to {3} above and indicates that the mobile robot can either move forward or turn right. This movement is taken when an obstacle intersects columns 1 and 2 of the virtual window (Figure 5);
5. Sideways left (5): When the intersection with an obstacle on the right side of the virtual window involves more than just columns 8 and 9, the mobile robot can only move sideways to the left;
6. Sideways right (6): When the intersection with an obstacle on the left side of the virtual window involves more than just columns 1 and 2, the mobile robot can only move sideways to the right;
7. Go (7): This indicates that the robot can only move forward, as the free space ahead is only just large enough for the robot to go through.

As an example, for Figure 5a the row of data is [0.85 0.85 0.85 0.05 0.85 0.85 0.85 0.65 0.65 0.65 0.65] and its reaction is [0.2] for No Go, while for Figure 5b the row of data is [0.85 0.85 0.85 0.85 0.85 0.05 0.05 0.05 0.05 0.55 0.55 0.55 0.55] and its reaction or output is [0.6] for Sideways right.
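The mapping from reaction codes to scaled network outputs can be sketched as below. The divisors are an inference from the worked examples (No Go = code 1 → 0.2 for the 4 × 7 window, Sideways right = code 6 → 0.6 for the 4 × 9 window), not something the paper states explicitly:

```python
# reaction vocabularies for the two windows, from the lists above
REACTIONS_4x7 = {1: "No Go", 2: "Go", 3: "Right", 4: "Left"}
REACTIONS_4x9 = {1: "No Go", 2: "Ahead/Left/Right", 3: "Ahead/Turn left",
                 4: "Ahead/Turn right", 5: "Sideways left",
                 6: "Sideways right", 7: "Go"}

def target(code, window_width):
    # Scale the integer reaction code into (0, 1) as the network's
    # training target. The divisors (5 for the 4 x 7 window, 10 for
    # the 4 x 9 window) are inferred from the paper's examples.
    return code / (5.0 if window_width == 7 else 10.0)

print(target(1, 7), target(6, 9))   # the two worked examples: 0.2 and 0.6
```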
2.2. THE PATH PLANNING ALGORITHM
The algorithms for the work were all written using Matlab. This has a good capability for manipulating the matrices that are used to define the window so that they can be easily processed by the planner. After every step that the robot takes, the planner calls the function imge which “paints” an image of the space at the position occupied by the virtual window. This image is passed to the radial basis function to be decoded. Its output is a code corresponding to the reaction that the robot should take. This process is repeated at every step. This is similar to what a human would do when walking through an unknown area. As the algorithm is applied as such, the robot ‘looks at’ the space ahead before every step is taken and then decides on the appropriate action of proceeding or turning or otherwise. In a practical implementation, there will be no need to do any calculation of the image as a laser (described earlier), camera or sonar will give the image of the window to the neural network. However, in this computer simulation, there is a need to calculate the image and the collision or intersection in order to be able to paint an image of the space occupied by the window at that point. The image is formed by calculating the intersection of the window with the space at that point.
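The per-step loop the planner follows — paint the window image, decode it, act, repeat — can be sketched in Python as follows. This is a toy illustration only: the `sense` and `decode` stand-ins are hypothetical, only the 'Go' reaction is handled, and the paper's own implementation is the MATLAB `imge` function feeding a radial basis network:

```python
def step_toward(pos, goal):
    # one-unit axis-aligned step toward the goal (the robot moves
    # one unit at each time step)
    if pos[0] != goal[0]:
        return (pos[0] + (1 if goal[0] > pos[0] else -1), pos[1])
    return (pos[0], pos[1] + (1 if goal[1] > pos[1] else -1))

def plan_path(start, goal, sense, decode, max_steps=100):
    """At each step: 'paint' the window image (the role of imge),
    decode it into a reaction (the radial basis network's role),
    and apply the corresponding move."""
    pos, path = start, [start]
    for _ in range(max_steps):
        if pos == goal:
            break
        image = sense(pos)        # stand-in for the imge function
        reaction = decode(image)  # stand-in for the RBF network
        if reaction == "Go":
            pos = step_toward(pos, goal)
        else:
            break                 # a full planner would turn or sidestep here
        path.append(pos)
    return path

# toy run in empty space: the window always reads all-free, decode says Go
free = lambda pos: [[0] * 7 for _ in range(4)]
always_go = lambda img: "Go"
print(plan_path((0, 0), (3, 2), free, always_go))
# -> [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2)]
```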
Figure 6. Two straight lines parallel to each other.
The method adopted is a variant of the geometric analysis method (Bowyer et al. [2], Mortenson [9]). Consider two line segments KL and MN in a plane (Figure 6). The two lines are parallel to each other, as the motion of the mobile robot is assumed to be parallel to either the x- or the y-axis, with only 90° turns to the left or right. Points on the two lines can then be represented as:

M′ = M + s  and  K′ = K + t,    (1)

where M′ and K′ can be any points on lines MN and KL respectively. If the lines intersect, then M′ and K′ will have the same value, thus giving:

t = (M − K) + s.    (2)

From Equation (2), an image of the intersection can be painted on the window by iterating s from 0 to (N − M) and checking whether t falls between 0 and (L − K). This image is then given to the radial basis function for decoding. As can be seen, no collision calculations are done; the planner reacts to what it 'sees' in front of it. Due to the different widths of the windows, and thus the different amounts of information contained, two different path planner algorithms were written. As will be shown in the next section, the algorithm using the wider window (the 4 × 9 window) is better able to plan a path without getting stuck. This is because of the extra commands that can be included in the algorithm to exploit the extra information.
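A minimal Python sketch of how Equation (2) paints the intersection onto the window (an illustration of the iteration over s, not the authors' MATLAB code; the segment endpoints are treated as integer cell co-ordinates):

```python
def paint_intersection(K, L, M, N):
    """Mark the cells of a window segment spanning [K, L] that an
    obstacle segment [M, N] overlaps. Following Eq. (2), each
    obstacle point M + s maps onto the window at t = (M - K) + s,
    and is painted only when 0 <= t <= (L - K)."""
    cells = [0] * (L - K + 1)
    for s in range(N - M + 1):
        t = (M - K) + s
        if 0 <= t <= L - K:
            cells[t] = 1
    return cells

# window cells occupy x = 10..16 (7 cells); obstacle spans x = 13..20
print(paint_intersection(10, 16, 13, 20))
# -> [0, 0, 0, 1, 1, 1, 1]
```

The resulting row of 0s and 1s is exactly the per-step intersection pattern shown later in Figures 10c/10d and Table I.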
2.3. DEFINING OBSTACLES
The data that define each obstacle are organised in the program as a single row with six columns. The first three columns give the x, y and z co-ordinate of the bottom left front corner of the obstacle. The remaining three columns are filled with the length, depth and height of the obstacle. Another obstacle can be defined likewise and its data added as the next row. Thus many obstacles in a workspace can be defined within a single matrix for easy computation during the simulation. A single complex obstacle can also be defined by a combination of simple cubes as shown in Figure 7.
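A sketch, in Python rather than the paper's MATLAB, of the obstacle matrix and a simple point-in-block query built on it. The `occupied` helper is hypothetical, added only to show how the row layout is used; the two rows are taken from matrix (3) later in this section:

```python
import numpy as np

# each row defines one rectangular block:
# [x, y, z of the bottom left front corner, length, depth, height]
obstacles = np.array([
    [17,  6, 0, 10,  5, 15],
    [15, 11, 0, 11,  4, 15],
])

def occupied(p, obstacles):
    """True if the point p = (x, y, z) lies inside any obstacle block."""
    for x, y, z, l, d, h in obstacles:
        if x <= p[0] <= x + l and y <= p[1] <= y + d and z <= p[2] <= z + h:
            return True
    return False

print(occupied((20, 8, 5), obstacles))   # inside the first block -> True
print(occupied((0, 0, 0), obstacles))    # outside both blocks -> False
```

Adding another obstacle is just appending another six-column row, which is what makes this representation convenient during simulation.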
Figure 7. Definition of obstacles.
Figure 8. A top view of an obstacle broken up into rectangles.
Figure 9. The path taken by robot to avoid collision with obstacle.
Figure 8 shows an odd-shaped obstacle that has been broken up into smaller rectangles so that it can be stored in the program as a matrix, shown below, to be processed by the path planner:
 x    y    z    l    d    h
17    6    0   10    5   15
15   11    0   11    4   15
14   15    0    2   12   15
16   15    0    3    9   15
19   15    0    4    5   15
12   18    0    2    9   15
 8   20    0    4   12   15
                              (3)
The obstacle is assumed to have a height of 15 units. A path was planned from start to goal for the mobile robot by decomposing the obstacle into many small rectangles. Figure 9 shows the points needed to be traversed by the robot in order to get to its goal position.
Figure 10. The path taken by the robot to avoid collision with an obstacle having a tunnel ((a) and (b)). (c) and (d) show the respective intersections of the window with the obstacle, while (e) shows the sign convention used to denote robot orientation.
3. Simulations and Results

3.1. 4 × 7 WINDOW

Figures 10a and 10b show two paths taken by a robot in an unknown space. The space contained an obstacle with a tunnel, but this obstacle was unknown to the robot. The 'unknown' obstacle is drawn in the diagram for the sake of clarity to the reader, but the robot's path was actually planned without the planner knowing about the environment a priori. Figures 10c and 10d show the intersections between the window and the obstacle. Each row in each table indicates a step taken by the robot: a 1 indicates an intersection, while a 0 indicates that no intersection has
occurred. Thus, where a row shows '0000000', there is no intersection between the window and the obstacle, and where it shows '1111111', there is complete intersection. The θ value next to each box shows the orientation of the robot at that particular position, using the convention of Figure 10e.

3.2. 4 × 9 WINDOW

Even though the algorithm works well with window 1 (4 × 7), there are cases where the robot gets stuck, such as when the start and goal positions are at (15,20) and (50,22) respectively. The robot became stuck at (13,22) (Figure 11). Table I shows the co-ordinates traversed and the respective orientations for up to 14 steps. As before, a 1 indicates intersection while a 0 indicates no intersection. As can be seen, the robot is 'confused' as to the direction in which it should proceed at that point. This is due to the limited options (only 4 options or codes, as shown in Figure 5a) available for its reaction. In Figure 12, a similar simulation was carried out with the robot moving between the same start and goal positions but with a wider field of vision (the 4 × 9 window 2). This time the robot had no problem navigating through the unknown environment to the goal position. Other path planning tests with different combinations of obstacles were also carried out, as shown in Figure 13. The times taken to run the path planning simulations on an 80386 PC (40 MHz) were 51.02 s, 52.89 s, 64.98 s and 62.39 s respectively for the paths from start to goal shown in Figures 13a, 13b, 13c and 13d, which translates to a computation time of about 0.6 s per step. (N.B. This time does not include the time taken for the robot to move along each path segment after each 'next path segment' has been computed.) Figure 13a shows the mobile robot taking
Figure 11. The path taken by the robot with window 1 and stuck at (13,22).
Table I. Orientations and intersections of the robot at every position

 x    y    θ     Intersections
15   20    0     0 0 0 0 0 0 0
14   20    0     0 0 0 1 1 1 1
13   20    0     0 0 0 0 1 1 1
12   20    0     0 0 0 0 0 1 1
12   21    0     0 0 0 0 0 0 1
12   22    0     0 0 0 0 0 0 1
12   22  -90     0 0 0 0 0 0 1
13   22  -90     0 0 0 0 0 0 0
13   22    0     1 1 1 1 1 1 1
12   22    0     0 0 0 0 0 1 1
12   22  -90     0 0 0 0 0 0 1
13   22  -90     0 0 0 0 0 0 0
13   22    0     1 1 1 1 1 1 1
12   22    0     0 0 0 0 0 1 1
Figure 12. The path taken by the robot with window 2.
an initial left turn before proceeding to its goal. This is due to the image having free space on the left side of the virtual window. Figure 13b shows the mobile robot going through the tunnel, while Figure 13c shows it going between two obstacles. Figure 13d is similar to Figure 13c, but because the gap between the two obstacles is too small to accommodate the mobile robot, it passes them by.

4. Discussion and Conclusions

A novel collision-free path planning procedure based on a virtual window has been presented. Essentially this virtual window is projected ahead in the direction of
Figure 13. Four figures showing path taken in different combinations of obstacles.
motion or intended motion, to aid in the planning of the robot's path. Collision avoidance computations are done with respect to the window. In the simulations, it was shown that path planning can be successfully implemented using the virtual window method. It is no longer necessary to calculate the potential field around all objects or to transform them into configuration space. It has also been shown that the path planner algorithm using the virtual window can take a mobile robot through a tunnel if such a path is possible. Furthermore, the algorithm is applicable unchanged in both known and unknown environments, because only the image in the window is needed for the collision avoidance and path planning calculations. The method described mimics the way a human would move from a start position to a destination. The robot gets a 2D image of the space ahead and processes the information in that image; based on the image, it makes a decision on its next movement. As the path planner only processes a 2D image of the space in front of it, the computation time is minimised. If required, the path computation times shown in Section 3 could be further improved by compiling the path planner algorithm rather than interpreting the program, as was done in this case. There is also scope for optimising the code in the algorithm by utilising the built-in matrix operations in MATLAB (Luke [5]). An additional reduction in computation time for the path planner algorithm could be achieved if a programming language such as C were used instead of MATLAB.
In this research, an artificial neural network (a radial basis function neural network) was applied to decode the image in the virtual window. Unlike a human being who, on seeing an object or obstacle, recognises the object (its size, position and identity) and takes an appropriate action, the aim of the artificial neural network implemented with the virtual window was simply to identify the space within the window and to output an appropriate action. With this in mind, a potential field method could also be utilised within the virtual window to repel the window, and thus the mobile robot, away from obstacles. Fuzzy logic could also be applied to guide the mobile robot around an obstacle based on the information in the virtual window. In this form, the virtual window can be regarded as a representational tool to aid in the path planning of the mobile robot. As mentioned before, the advantage of the virtual window is that its computation time is minimal, as only a small portion of the environment is processed. This does not reduce the efficiency of the path planner, as the mobile robot may not need to traverse some of the spaces within the environment in order to reach its goal. In fact, it can be considered a waste of time to calculate the configuration space of objects or obstacles that are not within the path of the mobile robot. An ancillary outcome of the work has been the development of a satisfactory way of decomposing complex and odd-shaped obstacles. The obstacles were decomposed into many small rectangles of different sizes to ease the computation for collision avoidance. The method works well, as shown in Figures 8 and 9. For smoother path generation, the obstacle can be decomposed further into smaller rectangles, enabling the planner to plan a path closer to the obstacle without colliding with it.

References

1. Balding, N. W.: Real-time path planning for robot arms, PhD Thesis, University of Durham, UK, 1987.
2. Bowyer, A. and Woodwark, J.: A Programmer's Geometry, Butterworths, London, 1983.
3. Hou, E. S. H. and Zheng, D.: Mobile robot path planning based on hierarchical hexagonal decomposition and artificial potential fields, J. of Robotic Systems 11(7) (1994), 605–614.
4. Kamon, I. and Rivlin, E.: Sensory-based motion planning with global proofs, IEEE Trans. on Robotics and Automation 13(6) (1997), 609–615.
5. Luke, N.: The investigation of a method for use in the simulation of variable speed time delay processes within MATLAB, Development and Testing Report 1, Department of Automatic Control and Systems Engineering, University of Sheffield, 1998.
6. Khatib, O.: Real-time obstacle avoidance for manipulators and mobile robots, Int. J. Robotics Res. 5(1) (1986), 91–98.
7. Lumelsky, V. J.: Path-planning strategies for a point mobile automaton moving amidst obstacles of arbitrary shape, Algorithmica 2 (1987), 403–430.
8. McCarthy, W. E.: A fuzzy-logic controller for an autonomous vehicle operating in an unknown environment, MSc Thesis, Dept. of Mechanical Engineering, University of Nevada, USA, 1994.
9. Mortenson, M. E.: Computer Graphics: An Introduction to the Mathematics and Geometry, Heinemann Newnes, Oxford, 1989.
10. Ramírez-Serrano, A. and Boumedine, M.: Real-time navigation in unknown environments using fuzzy logic and ultrasonic sensing, in: Proc. of the IEEE Int. Symp. on Intelligent Control, Dearborn, MI, September 15–18, 1996, pp. 26–30.
11. Rao, D. H.: Neural networks in robotics and control: some perspectives, in: Int. Conf. on Industrial Automation and Control, 1995, pp. 451–456.
12. Skewis, T. and Lumelsky, V.: Experiments with a mobile robot operating in a cluttered unknown environment, J. of Robotic Systems 11(4) (1994), 281–300.
13. Zhang, M., Peng, S., and Meng, Q.: Neural network and fuzzy logic techniques based collision avoidance for a mobile robot, Robotica 15 (1997), 627–632.
14. Zhu, D. and Latombe, J.-C.: New heuristic algorithms for efficient hierarchical path planning, IEEE Trans. on Robotics and Automation 7(1) (1991), 9–20.