A Combination of Terrain Prediction and Correction for Search and Rescue Robot Autonomous Navigation. International Journal of Advanced Robotic Systems, Vol. 6, No. 3 (2009), ISSN 1729-8806, pp. 207-214.
A Combination of Terrain Prediction and Correction for Search and Rescue Robot Autonomous Navigation

Yan Guo¹, Aiguo Song¹, Jiatong Bao¹, Hongru Tang² and Jianwei Cui¹
¹School of Instrument Science and Engineering, Southeast University, China
²School of Energy and Power Engineering, Yangzhou University, China
Corresponding author e-mail: a.g.song@seu.edu.cn, y.guo@seu.edu.cn

Abstract: This paper presents a novel two-step autonomous navigation method for search and rescue robots. A vision-based algorithm is proposed for terrain identification, predicting the safest path with a support vector regression machine (SVRM) trained off-line on texture and color features. A correction algorithm then refines the prediction from vibration information collected while the robot travels, using the judgment function given in this paper. A region with a faulty prediction is corrected with the real traversability value and used to update the SVRM. Experiments demonstrate that this method helps the robot find the optimal path and protects it from traps caused by the mismatch between the prediction and the real environment.

Keywords: mobile robot, image analysis, terrain prediction/correction, navigation

1. Introduction

The search and rescue robot is a special type of mobile robot, applied in search and rescue work after natural or man-made disasters such as earthquakes, hurricanes, debris flows and mine collapses. Most search and rescue robots are teleoperated, but more and more research has focused on autonomous navigation, which means running from the start point to a specified point autonomously and safely. This capability reflects the intelligence level of a search and rescue robot. Working in an unstructured environment, the robot has to sense its surroundings and decide how to reach the target safely; the main difficulties are obstacles, such as rocks and vegetation, and untraveled regions. For this reason it is beneficial for a search and rescue robot to know which path is the safest.

To find the safest path, ladar sensors have been used to segment the ground surface from vegetation, rocks and trunks (Talukder, A., Manduchi, R., Rankin, A., Matthies, L., 2002) (Hebert, M., Vandapel, N., 2003) (Vandapel, N., Huber, D., Kapuria, A., Hebert, M., 2004) (Manduchi, R., Castano, A., Taluker, A., Matthies, L., 2005). This method depends on the positions of the targets, in other words the distance between the obstacles and the robot. It can only identify whether and where obstacles are; it cannot describe their types and has no awareness of whether an obstacle is fatal or traversable. Vision-based terrain classification and prediction have received growing attention in recent years (Bellutta, P., Manduchi, L., Matthies, K., Owens, K., Rankin, A., 2000) (Castano, R., Manduchi, L., Fox, J., 2001). Approaches using natural features such as color, shape and texture divide the terrain into two classes, travelable and non-travelable (Angelova, A., Matthies, L., Helmick, D., Perona, P., 2007). Robot-terrain interaction parameters associated with training images have been used to visually forecast terrain traversability (Seraji, H., 1999) (Kim, D., Sang, M. O., James, M. R., 2007) (Poppingga, J., Birk, A., Pathak, K., 2008). All these methods try to build a perfect prediction of the terrain in front of the robot, but it is hard to guarantee that the prediction is always correct. For example, a road and a pool of water both covered with leaves can hardly be distinguished from ladar data or visual images alone.

Other mobile robot research groups focus on vibration-based methods. Vibration-based terrain classification was first suggested by Iagnemma and Dubowsky (Iagnemma, K., Dubowsky, S., 2002). This method collects vibration data while the robot is running, reduces the dimensionality of the data by Principal Component Analysis (PCA) and uses Linear Discriminant Analysis (LDA) for classification (Brooks, C. A., Iagnemma, K., 2005). A Support Vector Machine (SVM) has also been used to classify terrain for mobile robots (Weiss, C., Fröhlich, H., Zell, A., 2006). However, all these methods recognize the terrain type from vibration data only while the robot is covering the region of interest. The robot thus only knows the terrain type of regions it has already passed through or is standing on, and has no means to assess the path ahead.

We propose an alternative approach consisting of two steps for the autonomous navigation of a search and rescue robot. First, a vision-based method identifies the terrain and predicts the safest path. In an off-line training phase, a Support Vector Regression Machine (SVRM) is trained on features extracted from images in our terrain database. Once the SVRM is trained, images newly collected while the robot runs can be evaluated online to resolve the optimal solution for the safest path. Second, while following the prediction, the robot collects vibration data to judge the abovementioned prediction. If the judgment signals a fault, the robot has sufficient reason to consider the previous prediction incorrect and must stop. For the region with the faulty prediction, the features and the real traversability are collected as a new data point and added to the training database. The SVRM is then recalculated and updated, and the robot recomputes the prediction with the up-to-date SVRM.
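This predict-travel-judge-correct cycle can be summarized as a simple loop. The sketch below is purely illustrative: every function name is a placeholder for a component developed later in the paper, not the authors' implementation.

```python
def navigation_cycle(predict, traverse, judge, dataset, retrain, max_steps=10):
    """Schematic predict-travel-judge-correct loop (illustrative stubs only)."""
    for _ in range(max_steps):
        path = predict()                   # vision-based prediction of the safest path
        features, real_T = traverse(path)  # drive along the path, measure vibration
        if judge(real_T):                  # prediction consistent with reality
            return path
        # Fault detected: store the region's features and real traversability,
        # then retrain the SVRM before re-planning.
        dataset.append((features, real_T))
        retrain(dataset)
    return None
```

The loop terminates as soon as one planned path survives the vibration-based judgment; each failed judgment grows the training database by one point.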
The rest of this paper is organized as follows. In section 2 we describe our approach to terrain prediction. In section 3 the method for correcting the prediction of section 2 is developed. Section 4 presents our experimental results. Section 5 concludes the paper and suggests future work.

2. Terrain prediction

As humans, we recognize paths with our vision: we find the optimal path from the images captured by our eyes, depending on experience established in the past. Following this idea, we extract several features from the images captured by the onboard camera, and the optimal path is determined by a classifier applied to the extracted features. The classifier is trained with the method of support vector regression.

2.1 Features Extraction

The color and texture features are considered significant for the images captured by the onboard camera. The entries of the feature representation are the following (Gonzalez, R. C., Woods, R. E., Eddins, S. L., 2005):

The average value r of the red content in the image, the average value g of the green content in the image, and the average value b of the blue content in the image.

The mean m of the gray image, a measurement of average intensity:

m = \sum_{i \in H} z_i \, p(z_i)   (1)

The standard deviation \sigma of the gray image, a measurement of average contrast:

\sigma = \sqrt{\sum_{i \in H} (z_i - m)^2 \, p(z_i)}   (2)

The smoothness R of the gray image, a measurement of the relative smoothness of the intensity in a region. R \in [0, 1]; R = 0 for a region of constant intensity, and R approaches 1 for regions with large excursions in the values of their intensity levels:

R = 1 - \frac{1}{1 + \sum_{i \in H} (z_i - m)^2 \, p(z_i)}   (3)

The third moment \mu_3, a measurement of the skewness of a histogram. \mu_3 = 0 for symmetric histograms, positive for histograms skewed to the right about the mean, and negative for histograms skewed to the left:

\mu_3 = \sum_{i \in H} (z_i - m)^3 \, p(z_i)   (4)

The uniformity U, a measurement of the uniformity of the intensity histogram, maximal when all the gray levels are equal:

U = \sum_{i \in H} p^2(z_i)   (5)

The entropy e, a measurement of randomness over all gray levels of the intensity histogram:

e = -\sum_{i \in H} p(z_i) \log_2 p(z_i)   (6)

In equations (1)~(6), H is the set of intensity levels, z_i is the random variable indicating intensity, and p(z_i) is the histogram of the intensity levels. Using these nine features, we create the raw training and test vector v describing the feature information of each image:

v = (r, g, b, m, \sigma, R, \mu_3, U, e)   (7)

To describe the traversability of the terrain the robot covers, the standard deviations of the angular accelerations of roll and pitch are adopted. As shown in Fig. 1, \varphi is the roll and \theta is the pitch. The traversability is

T = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\varphi_i - E(\varphi)\right)^2 \cdot \frac{1}{N}\sum_{i=1}^{N}\left(\theta_i - E(\theta)\right)^2}   (8)

where N is the sample number. The traversability T represents the difficulty with which the robot passes through the region.

Fig. 1. The coordinate of the robot
Fig. 2. The optimal regions and path

2.2 SVRM Training

The Support Vector Regression Machine (SVRM) belongs to the family of kernel methods. The key idea is to transfer the nonlinear problem to a high-dimensional feature space, in which an approximately linear relationship between inputs and targets can be found, through a mapping based on a kernel function. SVRM is a convex quadratic optimization, so its solution is globally optimal.

Given the l dataset points \{(v_1, T_1), (v_2, T_2), \ldots, (v_l, T_l)\}, where v_i \in R^9 is the i-th input and T_i \in R is the i-th target output, the standard format of SVRM (Vapnik, V., 1998) is:

\min_{w, b, \xi, \xi^*} \; \frac{1}{2} w^T w + C \sum_{i=1}^{l} \xi_i + C \sum_{i=1}^{l} \xi_i^*   (9)

subject to
w^T \phi(v_i) + b - T_i \le \varepsilon + \xi_i,
T_i - w^T \phi(v_i) - b \le \varepsilon + \xi_i^*,
\xi_i, \xi_i^* \ge 0, \quad i = 1, \ldots, l
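As a concrete sketch, the feature vector of equation (7) can be computed with NumPy as below. This is an illustrative implementation, not the authors' code; in particular, normalizing the smoothness and the third moment by (L-1)² follows the Gonzalez, Woods and Eddins text cited above, an assumption that matches the magnitudes seen in Table 1.

```python
import numpy as np

def image_features(rgb):
    """Nine-entry feature vector v of equation (7) for one sub-region image.

    rgb: H x W x 3 uint8 array. Smoothness R and third moment mu3 are
    normalized by (L-1)^2 as in Gonzalez, Woods & Eddins (2005).
    """
    L = 256
    r, g, b = (rgb[..., c].mean() for c in range(3))

    gray = rgb.mean(axis=2).astype(np.uint8)
    # Normalized intensity histogram p(z_i) over the L gray levels.
    p = np.bincount(gray.ravel(), minlength=L) / gray.size
    z = np.arange(L, dtype=float)

    m = (z * p).sum()                               # (1) mean intensity
    var = ((z - m) ** 2 * p).sum()
    sigma = np.sqrt(var)                            # (2) standard deviation
    R = 1 - 1 / (1 + var / (L - 1) ** 2)            # (3) smoothness
    mu3 = ((z - m) ** 3 * p).sum() / (L - 1) ** 2   # (4) third moment
    U = (p ** 2).sum()                              # (5) uniformity
    e = -(p[p > 0] * np.log2(p[p > 0])).sum()       # (6) entropy (bits)
    return np.array([r, g, b, m, sigma, R, mu3, U, e])
```

For a region of constant intensity this gives sigma = 0, R = 0, U = 1 and e = 0, as the definitions above require.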
The dual is:

\min_{\alpha, \alpha^*} \; \frac{1}{2} (\alpha - \alpha^*)^T Q (\alpha - \alpha^*) + \varepsilon \sum_{i=1}^{l} (\alpha_i + \alpha_i^*) + \sum_{i=1}^{l} T_i (\alpha_i - \alpha_i^*)

subject to \sum_{i=1}^{l} (\alpha_i - \alpha_i^*) = 0, \quad 0 \le \alpha_i, \alpha_i^* \le C, \quad i = 1, \ldots, l   (10)

where Q_{ij} = K(v_i, v_j). The approximate function is:

f(v) = \sum_{i=1}^{l} (\alpha_i^* - \alpha_i) K(v_i, v) + b   (11)

In off-line training, the image of one region is captured by the onboard camera and the features shown in equation (7) are extracted. Then the robot traverses this region while the onboard Inertia Measurement Unit (IMU) records the roll and pitch angular accelerations. The standard deviations of these angular accelerations, in other words the real traversability of the region, are calculated with equation (8). Following this method, the training dataset points (v, T) are collected for several different regions, and the result of the training is used in the prediction of the optimal path. As the SVRM implementation we use LIBSVM (Chang, C.C., Lin, C.J., 2009).

2.3 Optimal Path

The robot takes a photo of the terrain ahead and divides it into M×N sub-regions. The features of each sub-region image are extracted with the abovementioned method, and the trained SVRM then calculates the traversability prediction from these features. When the traversability predictions of all the sub-regions have been obtained, the optimal regions are taken to be the sub-regions with the highest traversability prediction value in each row, and the optimal path is the one covering these optimal regions, as shown in Fig. 2.

3. Correction of prediction

The optimal path developed in section 2 is a prediction of the terrain ahead, depending on the experience the robot received beforehand through off-line training. Obviously, the prediction cannot match the real situation completely: because the robot's experience is limited, some traps cannot be identified from image features alone. We therefore develop a method to correct the error between the prediction and the real environment.

Slip is one of the fatal situations for a search and rescue robot: it results in the loss of traveling ability and failure to complete the task. The slip is defined as follows:

S(\tau) = \frac{r\omega - \int_{\tau-1}^{\tau} a_Y \, dt - v_{\tau-1}}{r\omega}   (12)

where r is the radius of the driving wheel, \omega is the angular velocity measured with the onboard encoder, a_Y is the acceleration along the Y axis measured with the IMU, \tau is the sample time, and v_{\tau-1} is the actual velocity of the robot at the last sample time. S \in [0, 1]; S = 0 means no slip occurred and S = 1 means complete slip between the robot and the ground.

We develop a judgment function to assess the error between the prediction and the real traversability, using the parameters X, X* and S, which denote the real traversability measured with the onboard IMU, the traversability prediction, and the slip. The judgment function is:

f(X, X^*, S) = -K(X, X^*) + \beta \phi(S)   (13)

Here K(X, X^*) is a kernel function; we use the radial basis function \exp(-\|X - X^*\|^2 / 2\sigma^2). \phi(S) is a response function and \beta is a scale coefficient; we use the sign function \mathrm{sgn}(S - \alpha), where \alpha is the threshold of slip. Thus

f(X, X^*, S) = -\exp(-\|X - X^*\|^2 / 2\sigma^2) + \beta \, \mathrm{sgn}(S - \alpha)   (14)

The threshold of the judgment function consists of a kernel function threshold and a response function threshold:

f_{threshold}(X, X^*, S) = -K_{threshold}(X, X^*) + \beta \phi_{threshold}(S)   (15)

Because the value of \phi(S) is either 1 or -1, and safe travel requires \phi(S) = -1, we take \phi_{threshold}(S) = -1. The threshold of the judgment function then depends only on the threshold of the kernel function and the scale coefficient:

f_{threshold}(X, X^*, S) = -\left(K_{threshold}(X, X^*) + \beta\right)   (16)

When the robot travels along the optimal path, it keeps measuring the slip and the real traversability and calculates the error using equation (14).
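A minimal sketch of the slip estimate (12) and the judgment function (14)/(16) follows. The function names are illustrative, the handling of the boundary case S = α is an assumption, and the default parameters are the values used later in section 4.

```python
import math

def slip(r, omega, v_actual):
    """Slip estimate of equation (12): the normalized mismatch between the
    track speed r*omega (from the encoder) and the robot's actual velocity
    v_actual (v_{tau-1} plus the integrated IMU Y-axis acceleration)."""
    return (r * omega - v_actual) / (r * omega)

def judgment(X, X_star, S, sigma=0.1, alpha=0.2, beta=0.2):
    """Judgment function of equation (14)."""
    K = math.exp(-((X - X_star) ** 2) / (2 * sigma ** 2))
    phi = 1.0 if S > alpha else -1.0  # sgn(S - alpha); S == alpha treated as no slip
    return -K + beta * phi

# Equation (16) with K_threshold = 0.8 and beta = 0.2 gives f_threshold = -1.
f_threshold = -(0.8 + 0.2)
```

When prediction and reality match (X ≈ X*, little slip), the kernel term is near 1 and the response term is -β, so f stays below the threshold; serious slip flips the response term to +β and pushes f above it.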
If the result of the judgment function is below the given threshold, the prediction and the real situation match. Otherwise, the prediction does not describe the situation of the region and is inaccurate. To keep the robot from losing control on dangerous terrain, our strategy is that the robot must stop and move to the block beside the current one. Whether it moves left or right depends on the location of the next optimal region: we require minimizing the distance between the new block and the next optimal region. For the region with the faulty prediction, the features and the real traversability are collected as a new data point and added to the training database. The SVRM is recalculated and updated, and the robot recomputes the prediction with the up-to-date SVRM, as shown in Fig. 3.

4. Experiment

The proposed method has been applied to a field terrain test for the purpose of autonomous navigation. The search and rescue robot we designed ourselves (Guo, Y., Bao, J.T., Song, A.G., 2009), shown in Fig. 4, is used in the experiment. The robot is driven with tracks and carries a CCD camera on top and an IMU (Crossbow VG400) inside.

4.1 Off-line Training

We use the robot's onboard camera to photograph the terrain and extract the features described in section 2 to form the feature vectors. The robot then traverses the terrain shown in the photos and collects the standard deviations of the angular accelerations of pitch and roll. The feature vectors and the standard deviations compose the training points. We collected 300 training points for the off-line training; part of them are shown in Table 1. For the parameters of the SVRM, σ = 0.1, C ∈ [0.1, 1] and ε ∈ [0.01, 0.5]. The figure of the mean squared error for different C and ε is shown in Fig. 5. We obtain the optimal parameters by searching this mesh for the point with the minimal mean squared error. The optimal parameter point is (C, ε) =
(1, 0.5).

Fig. 3. Error correction of the prediction
Fig. 4. The picture of the search and rescue robot
Fig. 5. Mean squared error for different values of C and ε

No.  r       g      b      m       σ      R       μ3     U        e       T
1    91.56   69.69  62.35  75.31   34.02  0.0175  0.515  0.00933  6.975   0.7812
2    104.70  84.37  77.11  89.78   40.42  0.0245  0.596  0.00778  7.240   0.7761
3    119.50  94.88  85.80  101.36  33.28  0.0167  0.424  0.0094   8.988   0.8977
4    121.44  98.72  90.76  104.55  33.71  0.0172  0.395  0.0091   7.019   0.9056
5    110.45  83.36  72.22  90.22   32.02  0.0155  0.388  0.0100   6.918   0.8813
6    97.28   69.67  58.53  76.89   30.42  0.0140  0.478  0.0113   6.787   0.8673
7    83.14   58.55  46.23  64.34   23.33  0.0083  0.164  0.0135   6.479   0.8096
8    71.19   51.79  41.33  56.33   22.24  0.0075  0.147  0.0143   6.400   0.7834
9    63.48   45.98  36.33  50.42   22.48  0.0077  0.209  0.0147   6.361   0.6897
10   58.19   48.48  43.31  51.29   26.50  0.0107  0.153  0.0112   6.6425  0.5485
…

Table 1. The partial training points

4.2 Autonomous Navigation

The start point and the man-specified goal points are marked in the picture of the experiment field, shown in Fig. 6, which is covered with rubble and sand. The robot should travel from the start point to the first goal point and then to the second goal point. First, the robot turns around to face the goal, and the camera carried on the robot captures an image of the scene ahead, shown in Fig. 7. The image is linearly divided into 5×5 sub-images as the optimal region candidates, and noise is removed from each sub-image with a Gaussian filter. Features are extracted from the sub-images using the algorithm of section 2, and the feature vectors are sent to the off-line trained SVRM to calculate the traversability prediction T; the result is shown in Fig. 8. Searching the mesh of prediction results, the sub-images with the highest traversability prediction in each row are picked as the optimal regions. All the optimal regions compose the optimal path.
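The row-wise selection just described can be sketched as follows, assuming the predictions are arranged in an M×N array (an illustrative helper, not the authors' code):

```python
import numpy as np

def optimal_regions(T_pred):
    """Section 2.3 selection rule: in an M x N grid of traversability
    predictions, the optimal region of each row is the sub-region with the
    highest value; the optimal path is the sequence of these regions."""
    return [(row, int(np.argmax(T_pred[row]))) for row in range(T_pred.shape[0])]
```

For the 5×5 grid of the experiment this returns five (row, column) pairs, one optimal region per row of sub-images.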
The optimal path is shown in Fig. 9, where the regions in black boxes are the optimal regions. From the optimal regions selected using the color and texture features, we find that the result of this prediction algorithm is approximately identical to the path a human would choose from experience. Moreover, this algorithm avoids interference from obstacles whose color or texture features differ from those of the surrounding terrain.

Fig. 6. The experiment field
Fig. 7. The image in front of the robot
Fig. 8. The result of prediction
Fig. 9. The optimal path based on prediction

Receiving the optimal path, the robot travels to the goal along this path by inertial navigation; the process is shown in Fig. 10. During traveling, the angular accelerations of roll and pitch are measured by the onboard IMU and shown in Fig. 11, and the real traversability of the current region is computed from these data. For the slip estimation, the real velocity is measured by the IMU; it is the actual speed of the robot and is shown in Fig. 12 with a blue line. The measured velocity is calculated from the angular velocity given by the encoder and describes the speed of the robot's driving tracks; it is shown in Fig. 12 with a red line. According to the definition in equation (12), the slip estimate is then obtained.

Fig. 10. Navigation based on the optimal path, including (a), (b), (c), (d)
Fig. 11. The acceleration of roll and pitch measured by the IMU
Fig. 12. The real velocity and measured velocity
Fig. 13. The result of the judgment function

To judge the error between the traversability prediction and the real traversability, we use the judgment function in equation (14) with the parameters σ = 0.1, α = 0.2 and β = 0.2. The result of the judgment function is shown in Fig. 13. For the safety of navigation, K_threshold(X, X*) = 0.8 and f_threshold(X, X*, S) = −1, based on the values of K_threshold(X, X*) and β.
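Taken together, the stop-and-sidestep move of section 3 and the training-database update can be sketched as below. The names and the left-favoring tie-break are illustrative assumptions, not the authors' implementation.

```python
def sidestep(current_col, next_opt_col, n_cols):
    """Choose the neighbouring block (left or right of the current column)
    that minimizes the distance to the next optimal region's column."""
    left = max(current_col - 1, 0)
    right = min(current_col + 1, n_cols - 1)
    return left if abs(left - next_opt_col) <= abs(right - next_opt_col) else right

def correct(dataset, v_fault, T_real, retrain):
    """Append the mispredicted region's features and real traversability as a
    new training point, then refit the SVRM via the supplied callback."""
    dataset.append((v_fault, T_real))
    retrain(dataset)
```

After `correct` runs, the recaptured image is evaluated with the refitted model, which is exactly the replanning step reported below.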
From the result of the judgment function, we find that its value stays under the threshold while the robot travels over the regions shown in Fig. 10(a)(b)(c): there the traversability prediction and the real traversability are approximately identical. However, when the robot travels over the region shown in Fig. 10(d), the sand in this region causes serious slip and the response function φ(S) becomes active, so the value of the judgment function immediately jumps over the threshold. This is the region with the faulty prediction. Its real traversability value and its color and texture features are collected as a new data point and added to the training database, and the SVRM is recalculated and updated. Because the threshold was exceeded, the robot stops and moves to the block on its right. The camera recaptures an image and the optimal path is recalculated with the up-to-date SVRM. The robot then travels safely to the final goal along the new optimal path, as shown in Fig. 14. We repeated this experiment ten times in the same environment, and the success rate reached 90%.

Fig. 14. Recaptured image and new optimal path

5. Conclusion

In this paper, we propose an alternative two-step approach for the autonomous navigation of a search and rescue robot. First, to find features related to the difficulty of traveling, we pick nine color and texture features from the image as feature vectors. The Support Vector Regression Machine (SVRM) is trained to find the relationship between the traveling difficulty and the features; using the off-line trained SVRM, the traversability prediction is calculated and the optimal path is developed. Second, while traveling along the optimal path, the real traversability is obtained from the vibration information measured by the onboard IMU, and the slip of the robot is recognized from the real velocity measured by the IMU and the measured velocity calculated from the encoder's angular velocity. We develop a judgment function combining the traversability prediction, the real traversability and the slip to find prediction faults; it protects the robot from traps caused by prediction error. For the region with a faulty prediction, the features and the real traversability are collected as a new data point and added to the training database, and the SVRM is recalculated and updated. Our method thus resolves the problem that a prediction algorithm cannot check its own result while the robot follows the prediction. The experiment demonstrates that this method is effective. However, limited by the performance of the robot's embedded computer system, the processing speed of the algorithm is not yet fast enough to allow the robot to travel at high speed. In the future, we will continue working to increase the algorithm's efficiency and decrease its execution time.

Acknowledgement

This research is made possible with support from the Project under the Science Innovation Program of the Chinese Education Ministry (No. 708045).

References

Talukder, A., Manduchi, R., Rankin, A., Matthies, L. (2002). Fast and Reliable Obstacle Detection and Segmentation for Cross-country Navigation. IEEE Intelligent Vehicles Symposium, Versailles, France, 2002.
Hebert, M., Vandapel, N. (2003). Terrain Classification Techniques from Ladar Data for Autonomous Navigation. Collaborative Technology Alliances Conference, 2003.
Vandapel, N., Huber, D., Kapuria, A., Hebert, M. (2004). Natural Terrain Classification using 3-D Ladar Data. IEEE International Conference on Robotics and Automation, New Orleans, USA, 2004.
Manduchi, R., Castano, A., Taluker, A., Matthies, L. (2005). Obstacle Detection and Terrain Classification for Autonomous Off-road Navigation. Autonomous Robots, Vol. 18, pp. 81-102,
2005.
Bellutta, P., Manduchi, L., Matthies, K., Owens, K., Rankin, A. (2000). Terrain Perception for Demo III. IEEE Intelligent Vehicles Symposium, Dearborn, USA, 2000.
Castano, R., Manduchi, L., Fox, J. (2001). Classification Experiments on Real-World Textures. Workshop on Empirical Evaluation in Computer Vision, Kauai, USA, 2001.
Angelova, A., Matthies, L., Helmick, D., Perona, P. (2007). Fast Terrain Classification Using Variable-Length Representation for Autonomous Navigation. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Minneapolis, USA, 2007.
Seraji, H. (1999). Traversability Index: A New Concept for Planetary Rovers. IEEE International Conference on Robotics and Automation, Detroit, USA, 1999.
Kim, D., Sang, M. O., James, M. R. (2007). Traversability Classification for UGV Navigation: A Comparison of Patch and Superpixel Representations. IEEE International Conference on Robotics and Automation, San Diego, USA, 2007.
Poppingga, J., Birk, A., Pathak, K. (2008). Hough Based Terrain Classification for Realtime Detection of Drivable Ground. Journal of Field Robotics, Vol. 25, No. 1, pp. 67-88, 2008.
Iagnemma, K., Dubowsky, S. (2002). Terrain Classification for High-Speed Rough-Terrain Autonomous Vehicle Navigation. SPIE Conference on Unmanned Ground Vehicle Technology IV, 2002.
Brooks, C. A., Iagnemma, K. (2005). Vibration-Based Terrain Classification for Planetary Exploration Rovers. IEEE Transactions on Robotics, Vol. 21, No. 6, pp. 1185-1191, 2005.
Weiss, C., Fröhlich, H., Zell, A. (2006). Vibration-Based Terrain Classification Using Support Vector Machines. IEEE International Conference on Intelligent Robots and Systems, Beijing, China, 2006.
Gonzalez, R. C., Woods, R. E., Eddins, S. L. (2005). Digital Image Processing Using MATLAB. Prentice Hall, Upper Saddle River, NJ, 2005.
Vapnik, V. (1998). Statistical Learning Theory. Wiley, New York, NY, 1998.
Chang, C.C., Lin, C.J. (2009). LIBSVM: A Library for Support Vector
Machines. http://www.csie.ntu.edu.tw/~cjlin/libsvm, 2009.
Guo, Y., Bao, J.T., Song, A.G. (2009). Design and Implementation of a Semi-autonomous Search Robot. IEEE International Conference on Mechatronics and Automation, Changchun, China, 2009.