1. Trang chủ
  2. » Ngoại Ngữ

algorithms and a framework for indoor robot mapping in a noisy environment using clustering in spatial and hough domains

20 3 0

Đang tải... (xem toàn văn)

Tài liệu hạn chế xem trước, để xem đầy đủ mời bạn chọn Tải xuống

THÔNG TIN TÀI LIỆU

Thông tin cơ bản

Định dạng
Số trang 20
Dung lượng 22,6 MB

Nội dung

International Journal of Advanced Robotic Systems ARTICLE Algorithms and a Framework for Indoor Robot Mapping in a Noisy Environment using Clustering in Spatial and Hough Domains Regular Paper Ankit A Ravankar1*, Yohei Hoshino 2, Abhijeet Ravankar1, Lv Jixin1, Takanori Emaru1 and Yukinori Kobayashi1 Graduate School of Engineering, Hokkaido University, Sapporo, Japan Kitami Institute of Technology, Kitami, Hokkaido, Japan *Corresponding author(s) E-mail: ravankar@mech-hm.eng.hokudai.ac.jp Received 23 April 2014; Accepted 25 November 2014 DOI: 10.5772/59992 © 2015 The Author(s) Licensee InTech This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited Abstract Map generation by a robot in a cluttered and noisy envi‐ ronment is an important problem in autonomous robot navigation This paper presents algorithms and a frame‐ work to generate 2D line maps from laser range sensor data using clustering in spatial (Euclidean) and Hough domains in noisy environments The contributions of the paper are: (1) it shows the applicability of density-based clustering methods and mathematical morphological techniques generally used in image processing for noise removal from laser range sensor data; (2) it presents a new algorithm to generate straight-line maps by applying clustering in the spatial domain; (3) it presents a new algorithm for robot mapping using clustering in a Hough domain; and (4) it presents a new framework to load, delete, install or update appropriate kernels in the robot remotely from the server The framework provides a means to select the most appropriate kernel and fine-tune its parameters remotely from the server based on online feedback, which proves to be very efficient in dynamic environments with noisy conditions The accuracy and performance of the techni‐ ques presented in this paper are discussed with conven‐ tional line segment-based EKF-SLAM and the results are compared Keywords Robot mapping, clustering, noise reduction, Hough transform Introduction Autonomous navigation by a mobile robot in an unknown environment is an important problem in mobile robotics The robot mainly perceives the outer world by utilizing measurement sensors attached to it These sensors (typi‐ cally laser range finders, ultrasonic sensors, cameras and 3D sensors) give an estimate of the robot‘s position in the environment and incrementally build the map of the environment This problem is often referred to as ’simulta‐ neous localization and mapping’ (SLAM), in which a robot Int J Adv Robot Syst, 2015, 12:27 | doi: 10.5772/59992 simultaneously localizes its position in the environment and builds a map of it Hence, it becomes important for the robot to have an accurate map While sensors such as wheel encoders measure the relative robot displacement, external sensor data can be used to correlate subsequent robot positions to get the relative pose or estimate, improving upon the odometry data However, in reality, sensors are prone to errors and generate noise in the data This noise then accumulates as error over time and gives the wrong estimate or else the wrong map The environments in which the robot navigates are mostly dynamic, which adds to the complexity of the mapping The main task of the robot is to incrementally build the map of the workspace as it discov‐ ers the environment, and to avoid obstacles, in order to reach its final goal Various approaches to solving this problem have been discussed by researchers An earliest attempt to solve the SLAM problem was proposed in work given in [40], in which uncertainty models were introduced to solve the localization and mapping simultaneously The trajectories and positions of landmarks in the environment are estimated without any need for prior knowledge of the location of the landmark To get a complete perception of the region, the scan data from the sensors must be matched and fused in order to get a desirable map of the region [19] In [45], various methods for solving SLAM are discussed in detail, including the extended Kalman filter SLAM (EKFSLAM) as well as particle filters Many have looked to solve SLAM using visual sensors, like cameras [4] Recently, 3D sensors, like MS-Kinect, have also been employed to map surroundings [37] There are two common techniques used in mobile robotics for mapping: (1) point-based matching, and (2) feature-based matching In the first technique, measured data points are directly used for mapping, such as in [20] where they utilize range data to localize in a polygonal environment Using an iterative least squares minimiza‐ tion technique, they proposed a matching algorithm between point images and target models in an a priori map A study by [16] proposed a similar self-localiza‐ tion approach, but not necessarily in a polygonal environment Their algorithm iteratively improves the alignment between two scans by minimizing the dis‐ tance between them A major drawback of using pointbased matching is that it requires a large amount of memory to handle and the storage grows disproportion‐ ately for large environments, which in turn reduces performance The other technique for working with scan data involves utilizing the raw measurement points into geometric features by transforming the raw scans into meaningful features These features can then be used for matching and scan corrections In comparison to pointbased matching, feature-based matching techniques require less memory, are more efficient and provide more information about the environment In addition, the extracted features, such as corners and line segments, can be subsequently used as building blocks for localization or mapping Large data can be scaled down to simple Int J Adv Robot Syst, 2015, 12:27 | doi: 10.5772/59992 geometric features, even for large environments Be‐ cause of these reasons, feature-based matching has been extensively researched and employed into SLAM Feature extraction, feature tracking and efficient mapping algorithms work with much less data and are more stable, specially when dealing with noise In most scenarios in which mobile robots work (both indoors and outdoors), planar surfaces are common and they are typically modelled by line segments Line segments are the simplest geometric primitives used to describe structur‐ al environments The work in [48] presents an extensive survey and summary of line feature extraction algo‐ rithms from range sensors for indoor mapping, includ‐ ing split and merge [17], incremental [33], a Hough transform [41, 15, 25], line regression [27], RANSAC [31] and an expectation-maximization algorithm [13, 43] This paper only focuses on the mapping problem, using laser range finder (LRF) data in noisy environments The present work is a larger extension of our previous work [2, 3], in which we proposed mapping using clustering techniques The authors have extended the work by proposing new noise removal algorithms and mapping techniques in this paper For accurate map building, we first tackle the problem of removing noise from the LRF data The noise results in the generation of inaccurate maps and it must be removed Noise reduction in SLAM has been discussed in [6, 36, 18] This paper propose using densitybased clustering algorithms and mathematical morpholog‐ ical kernels like dilation and erosion to remove noise These techniques have mainly been used in processing images our results show that they are also effective for noise removal from LRF data For noise removal using mathe‐ matical morphological techniques, this paper provides various kernels which can be selected according to the nature of the noise present in the LRF data We also propose a new algorithm for extracting line segments using cluster‐ ing in a Hough domain This algorithm can detect line segments effectively in cluttered narrow environments with noisy data The algorithms proposed in this paper are mainly cluster‐ ing-based Clustering is the main technique used in exploratory data mining, and is a common technique for statistical data analysis used in many fields, including machine learning, pattern recognition, image analysis, information retrieval and bioinformatics [1] Clustering has been applied to SLAM as well In [28], the authors proposed a nested dissection algorithm which leads to a cluster tree for a multilevel sub-map-based SLAM In [17], a combined Kalman filtering and fuzzy clustering algorithm is pro‐ posed The work in [8] proposed a convolution-based clustering algorithm and a Hough transform-based line tracker using a laser scanner In [49], a technique for online segment-based map building in an unknown indoor environment from sonar sensor observations is proposed An enhanced adaptive fuzzy clustering algorithm (EAFC) along with noise clustering (NC) is proposed to extract and classify the line segments in order to construct a complete map for an unknown environment Recent works like [9] have proposed a method called ’dis‐ tance-based convolution clustering’ (DCC), in which the robot‘s scanned points are grouped into clusters using a convolution operation They also proposed a new algo‐ rithm to detect lines by appending a cluster of points using a combined Hough transform and line-tracking algorithms Another work by [32] presented clustering methods to convert data points into point clusters and line segments They used a rank order clustering (ROC) technique where prior information on a number of clusters is not required A similarity index matrix (SIM) using fuzzy membership is used to extract and merge suitable line segments Mean‐ while, in [9], a ’reduced Hough transform line tracker’ (REHOLT) is discussed, where voting in the accumulator space in the Hough domain is reduced by first fitting the scan points by a line tracking algorithm and then obtaining a single line using the Hough transform method It requires tuning the parameters obtained from the line tracking algorithm prior to getting a good estimate of the position of the line Our technique for line segment extraction does not require such prior adjustment and can detect single straight lines from the scanned data points A random window randomized Hough transform (RWRHT) is proposed in [5] to find line segments from an image The ’randomized Hough transform’ (RHT) method is based on the fact that a single parameter item can be determined uniquely with a pair of edge points [14] Such point pairs are selected randomly, the parameter point is solved from the line equation and the corresponding cell is accumulated in the accumulator space The authors extended the RHT by clustering a variable bandwidth mean shift algorithm Cluster modes are selected as the set of base lines Next, projections on the edge points onto the corresponding base lines are grouped to obtain the line segments 1(a) shows the differential drive robot (Model:Plat-F1, Japan Systems Design) used during the experiments The sensor used for the experiment is the scanning LRF (URG-04LX) manufactured by Hokuyo Co Ltd shown in Figure 1(b) The specifications of the laser sensor have been summarized in Table (a) (b) Figure1 Description of theof robot androbot sensorand used for the experiments: Figure Description the sensor used for (a) the a differential (a) drive (Plat-F1)drive with arobot laser sensor, and with (b) Hokuyo experiments: a robot differential (Plat-F1) a laser URG-04LX specifications sensor, and (b) Hokuyo URG-04LX specifications the scanned data A random window randomized Laser rangepoints sensor URG-04LX Hough transform (RWRHT) is proposed in [? Distance range 60 mm ~ 4000 mm] to find line segmentsAccuracy from an image The 20 ~’randomized 1000 mm: ±25 mmHough transform’ (RHT) method is based 1000 on the fact ~ 4000 mm:that ±2.5%a single parameterDistance item can be determined uniquely with a pair of resolution 3.8 mm ((4000-60)mm / 1024) edge points [? ] Such point pairs are selected randomly, Scan angle 240 deg the parameter point is solved from thedegline and Angular resolution 1.875 (240 equation deg / 128) the corresponding cell is accumulated in the accumulator Table Laser range sensor specifications space The authors extended the RHT by clustering a variable bandwidth mean shift algorithm Cluster modes robot coordinate frame system is defined as shown in We also present a framework for robot map building to are The selected as the set of base lines Next, projections on the Figure The global and local coordinate system for the load, delete or fine-tune the parameters of the map building edge points onto the corresponding base lines are grouped robot are represented by O − xy and O ' − x 'y ', respectively applications in the robot This can be done by a remote user to obtain the line segments based on feedback The framework allows temporary maps to be seen on the server, and this live update enables the user to change the parameters of the map-building appli‐ cations of the robot Thus, the framework provides flexi‐ bility for changing or tuning the software according to the nature of the noise present in the environment by the user based on feedback The diameter of the wheels is d and the tread is given by ', which The midpoint is fixed at point is thebuilding origin We walso present aof w framework for Orobot map O ' −parameters x 'y ' fixed onofthethe of the local coordinate systemthe robot to load, delete or fine-tune map y ' perpendicular The robot moves in the the building applications in direction the robot This can beto done y ' axis The angle is the angle between by direction a remotex '.user basedθ on feedback The the framework and temporary the x axis maps in a to counter-clockwise direction allows be seen on the server, andThe this of the wheel ) and the right wheel liverotational update angle enables theleft user to (ϕ change the parameters L of the applications of the robot Thus, the is defined in the forward direction of motion The (ϕR )map-building framework provides flexibility for changing or tuning the angular rotation of the robot is given by ω The velocities software according to robot the nature present vL andofvRthe of the wheels of the arenoise expressed as: in the environment by the user based on feedback Finally, we present experimental results in both artificial and real environments and discuss maps obtained by the proposed algorithms The proposed algorithm is compared with traditional line segment-based EKF-SLAM and the results are discussed Finally, we presentvRexperimental = f&Rd , vL = f&Ldresults in both artificial (1) and real environments and discuss maps obtained by the proposed algorithms The proposed algorithm is Here, ϕ˙ represents the differential operator with respect to This section describes the robot model and the LRF used compared with traditional line segment-based EKF-SLAM t From Figure 2(a), considering the circular motion time for the experiments The odometry model of the robot is and the results are discussed Figure the a direct (Mode experi scann Ltd sh sensor The ro Figure robot The d w Th origin robot to the axis a rotatio (φR ) i angul of the Here, time t and th Robot Odometry derived in order to obtain the accurate position of the robot and the curvature radius ρ , we can define the velocity v as: Δx and Δy represent the displacement in the x and y Robot Odometry directions respectively, and Δθ represents the angular v = rw (2) This section describes the robot model and the LRF displacement in the counter-clockwise direction Figure used for the experiments The odometry model of the robot is derived inLvorder to obtain position Ankit A Ravankar, Yohei Hoshino , Abhijeet Ravankar, Jixin, Takanori Emaruthe andaccurate Yukinori Kobayashi: Algorithms and a Framework for Indoor Robot Mapping a Noisy Environment using∆y Clustering in Spatial Hough Domainsin ofinthe robot ∆x and represent theand displacement the x and y directions respectively, and ∆θ represents Table Laser range sensor specifications On th writte Using are ob On the other hand, using the tread w , vL and vR can be Here, assuming v and ω as constant values during time [t0,t1), integrating Equations (5) and (6) in the interval written as: [t0,t1] yields the following equations: ổ wử vR = ỗ r + ÷ w , 2ø è ỉ wư vL = ç r - ÷ w 2ø è (3) ( t1 - t0 )w = ( D (t ) - D (t )) - ( D (t ) - D (t )) , (7) ( t1 - t0 ) v = ( D (t ) - D (t )) + ( D (t ) - D (t )) , (8) r r l l w r r l l where the displacement of the left and right wheels at time t is Dl (t ) and Dr (t ) From Equations (7) and (8), the robot’s displacement and orientation with respect to O − xy are given by: (a) Robot Model for the a laser omized to find Hough single pair of domly, on and mulator ering a modes on the rouped uilding e map e done mework nd this meters us, the ing the t in the rtificial ned by thm is SLAM e LRF of the osition ment in resents LX mm mm 5% (b) Robot Global Coordinate System Figure Robot coordinate frame system Figure Robot coordinate frame system Using Equations (1), (2) and (3), the following relationships the are angular obtained: displacement in the counter-clockwise direction Figure 1(a) shows the differential drive robot (Model:Plat-F1, Japan Systems Design) used during the - v ) for the experiment is the ( v used experiments The sensor (4) w= R L , scanning LRF (URG-04LX) wmanufactured by Hokuyo Co Ltd shown in Figure 1(b) The specifications of the laser sensor have been summarized in Table ( v + vL ) , is defined as shown(5) The robot coordinatev frame in = R system Figure The global and local coordinate system for the robot are represented by O − xy and O − x y , respectively The diameter of the wheels is d and the tread is given by and: w The midpoint of w is fixed at point O , which is the origin of the local coordinate system O − x y fixed on the w(v + v ) robot The robot moves y perpendicular r = inRthe Ldirection (6) to the direction x The 2angle ( vR - θvLis) the angle between the y axis and the x axis in a counter-clockwise direction The rotational angle of the left wheel (φL ) and the right wheel (φR ) is defined in the forward direction of motion The angular rotation of the robot is given by ω The velocities of the wheels of the robot v L and v R are expressed as: v R = φ˙R d, v L = φ˙L d ( D (t ) - D (t )) - ( D (t ) - D (t )) , (9) Dx = ( D ( t ) - D ( t ) ) + ( D ( t ) - D ( t ) ) cosq ( t0 ) , (10) Dy = ( D ( t ) - D ( t ) ) + ( D ( t ) - D ( t ) ) sinq ( t0 ) (11) Dq = r r r l l w r l l r r l l Assuming that Δθ is sufficiently small, from Equations (9), (10) and (11), we get the displacement from time [t0,t1] The sensor data at time t0 is represented at O − xy and the sensor data at time t1 is represented by O ' − x 'y ' In the next section, these parameters (Δx , Δy , Δθ ) will be used to define algorithms for map generation (1) Here, φ˙ represents the differential operator with respect to time t From Figure 2(a), considering the circular motion and the curvature radius ρ, we can define the velocity v as: v = ρω (a) (2) (b) (c) On the other hand, using the tread w, v L and v R can be written Figure 3.as: (a) Online Mode, (b) Offline Mode, and (c) Mixed Mode The red dots represent the centroid obtained after clustering Figure (a) Online Mode, (b) Offline Mode, and (c) Mixed Mode The red dots represent the centroid obtained after clustering w w v R = (ρ + )ω, v L = (ρ − )ω (3) (D t1 ) Spatial − Dr (t0Domain )) + ( Dl (t1 )2 − Dl (t0 )) r (the Clustering in ing Clustering (ex k -means clustering) is sensitive to noise data segmentation techniques described in Section 3.1 (t1 − Equations t0 ) v = , (8) Using (1), (2) and (3), the following relationships [35],each which may result∆s int , inaccurate cluster formation In step-interval the raw LRF data obtained areThis obtained: section describes LRF data clustering in the spatial is segmented Later, to techniques fornoise noise removal Therefore, it is necessary first remove from the LRFor (v left − vand where the displacement of the wheels at time L ) right domain for robot mapping ω = R A spatial , domain represents (4) clustering are apply applied to each segment data and then clustering Noise removal is discussed w t isLRF Dl (tdata ) andinDspace Fromx,y Equations (7) for andthe (8),2D the robot’s r ( t ) (i.e., coordinates case) with in Section and Sectionthree 3.2.2,modes while clustering using k displacement and orientation with respect to O − xy are We define3.2.1 the following for map generation the direct manipulation of the LRF data during its process‐ ( v + v ) given by: L after the noise reduction and clustering of the raw LRF v= R , (5) data to generate maps: Int J Adv Robot Syst, 2015, 12:27 |2doi: 10.5772/59992 and: ∆θ = ( Dr (t1 ) − Dr (t0 )) − ( Dl (t1 ) − Dl (t0 )) , (9) w(v Rw+ v L ) Online Mode: In Online Mode, the robot continuously ρ= (6) 2( v R − v L ) applies noise reduction and clustering on the data ( Dr (t1 ) − Dr (t0 )) + ( Dl (t1 ) − Dl (t0 )) ∆x =assuming cos θ (t0 time ), Here, v and ω as constant values during as soon as they are available with each scan after means and fuzzy- c means in the spatial domain is dis‐ cussed in Section 3.3 shown in Figure 3(b)-(i) obtained at each step-interval Δstn = tn − tn−1 Here, N smax is set to ’9’ Unlike Online Mode, two operations of noise removal and clustering are applied to the accumulated LRF data after × Δst A robot with a LRF mounted on it moves in small steps, which are the robot’s displacement in step-time Δstn = tn − tn−1 step-intervals Figure 3(b)-(i) shows the accumulated LRF data with N smax = Figure 3(b)-(ii) shows the within the two time intervals tn−1 and tn From Equation (11), for the robot’s movement perpendicular to the x -axis, θ = 90 degrees and displacement Δx = 0, while displacement Δy is given by: Dy = ( D (t ) - D (t )) + ( D (t ) - D (t )) , r n r n -1 l n l n -1 (12) Once the data is read from the LRF, it is grouped into segments S i ,i > 0, as shown in Figure 4, using the data segmentation techniques described in Section 3.1 In each step-interval Δst , the raw LRF data obtained is segmented Later, techniques for noise removal or clustering are applied to each segment We define the following three modes for map generation after the noise reduction and clustering of the raw LRF data to generate maps: results of the noise removal Figure 3(b)-(iii) shows nine centroids obtained after the clustering of the offline data Notice that the number of clusters of offline data may vary For example, Figure 3(b)-(iv) shows five centroids obtained after the clustering of noise-free data shown in Figure 3(b)-(ii) However, if the robot changes it’s direction (i.e., for a change in θ ), noise removal and clustering are applied after n × Δst , Online Mode: In Online Mode, the robot continuously applies noise reduction and clustering on the data as soon as they are available with each scan after the stepinterval Δst Figure 3(a) shows the working of the robot in Online Mode The raw LRF data measured at each step-interval Δstn = tn − tn−1 is shown in Figure 3(a)-(i) In even if n < N smax, to eliminate data processing for the special case when θ changes during accumulation Mixed Mode: In Mixed Mode, some of the LRF data processing takes place in Online Mode, whereas others take place in Offline Mode For example, noise reduc‐ tion and clustering can take place in Online Mode and Offline Mode, respectively This is shown in Figure 3(c) The LRF data is scanned at each step-interval as shown in Figure 3(c)-(i) Noise is removed after each step-interval, as shown in Figure 3(c)-(ii) However, clustering is performed on accumulated (offline) data after × Δst step-intervals ( N smax =3) Three clusters are made offline, as shown in Figure 3(c)-(iii), whereas two clusters are shown in Figure 3(c)-(iv) each step-interval Δst , two operations are performed First, noise is removed by the methods discussed in Section 3.2.1 and/or Section 3.2.2 The noise removed at each Δst is shown in Figure 3(a)-(ii) Later, in the same step-interval, clustering is performed, as descri‐ bed in Section 3.3, to obtain single or multiple clusters This is shown in Figure 3(a)-(iii) with the red dots representing the centroids of individual clusters Generally, any given single cluster (and hence a single centroid) obtained in each step-interval as raw LRF data in Δst is small However, multiple clusters (and hence multiple centroids) can also be obtained if the raw LRF in Δst is large enough The data which has been processed once in a step-interval Δstn is marked as ‘processed’ and is not processed again in the next step-interval Δstn+1 Thus, in Δstn+1, noise removal and clustering is applied on newly obtained data and the process is repeated Offline Mode: In this mode, raw LRF data is accumu‐ Figure LRF data segmentation LRF data segmentation lated until a predefined N smax time-steps (Δst ),Figure and noise reduction and clustering is applied on the accumulated (offline) LRF data after N smax × Δst steps for 3.1 LRF Data Segmentation the offline data Notice that the number of clusters of a given θ This is shown in Figure 3(b) Offline Mode is Prior to extracting lines, the LRF data is pre-processed into offline data may vary For example, Figure 3(b)-(iv) useful when step-time Δst produces very little data, distinct segments The LRF scans the environment and shows fiven centroids obtainedas:after the clustering of which is inefficient for clustering The raw LRF data is acquires scan points, represented noise-free data shown in Figure 3(b)-(ii ) However, if Ankit A Ravankar, Yohei Hoshino , Abhijeet Ravankar, Lv Jixin, Takanori Emaru and Yukinori Kobayashi: the inrobot changes using it’s Clustering direction (i.e., a change in θ), Algorithms and a Framework for Indoor Robot Mapping a Noisy Environment in Spatial andfor Hough Domains noise removal and clustering are applied after n × ∆st , even if n < Nsmax , to eliminate data processing for the 3(c) The LRF data is scanned at each step-interval as shown in Figure 3(c)-(i ) Noise is removed after each step-interval, as shown in Figure 3(c)-(ii ) However, clustering is performed on accumulated (offline) data after × ∆st step-intervals (Nsmax = 3) Three clusters are made offline, as shown in Figure 3(c)-(iii ), whereas two clusters are shown in Figure 3(c)-(iv) Pi = éë xi , yi ùû = éëri cos ji , ri sin ji ùû , i = 1,2,L , n 3.1 LRF Data Segmentation (13) Prior torextracting lines, the LRF datathisscanned pre-processed into where point with i is the distance between the i distinct segments The LRF scans the environment and respect to the origin of the LRF, and φi is the angle of the acquires n scan points, represented as: i th scan point with respect to the X L axis of the LRF frame Pi =by[ xOi , −yX cosFigure ϕi , ri sin (13) given ϕi ], i = 1, 2, · · · , n i ] L=Y[Lri in method detects these breakpoints by finding the Euclidean distance between the two consecutive points Pi and Pi+1 : Pi+1 − Pi < Dth (14) ABD uses the distance threshold Dth via the following: sin(∆φa ) + 3σr (15) Dth = ri sin(λ − ∆φa ) breakpoints by finding the Euclidean distance between the two consecutive points Pi and Pi+1 : where ∆φa is the angular resolution of the LRF and σr is the residual variance to encompass the stochastic behaviour of the sequence of the scanned points pi and the related noise P - P < Dth associated with rii + Thei deviation is provided by (14) the LRF manufacturer and λ is an auxiliary parameter The value of λ has been found to perform well when setting the value as λ = 10◦ where ri is the distance between the ith scanned point with respect to the origin of the LRF, and ϕi is the angle of the After getting the breakpoints, the clusters are split using the iterative end point fitting (IEPF) algorithm [? ] and segments Si are obtained IEPF is a recursive algorithm for line extraction and for finding clusters The principle of the IEPF algorithm is shown in Figure For a cluster of points given by Ni , it uses a virtual line Liv between two end points of the cluster It then calculates the distance of every point d j to the line Liv and finds the maximum distance dim If dim > DT , the IEPF algorithm splits the cluster of points N i into two subsets N i1 and N i2 , where DT is a predefined splitting threshold The process is iterated to Figure LRF data segmentation Figure The principle behind the IEPF algorithm Figure principle behind the IEPF algorithm each6 The subset until no new subset can be found The value the offline data Notice that the number of clusters of of D is generally chosen to be small, but long-range ithT scan point with respect to the X L axis the LRFfor frame uses the distance threshold Dth viaofthe following: offline data may vary For example, Figure 3(b)-(ivABD ) sensors given the by O data − X L Ygets as the range increases and it noisy L in Figure shows five centroids obtained after the clustering of Geometry of a scanned point and a corresponding line the splitting A the modified iterative end point fitting Initially, we adopted standard split and merge noise-free data shown in Figure 3(b)-(ii ) However, if affects Figure Figure Geometry of a scanned point and a corresponding line technique [? ] for the data without line the robot changes it’s direction (i.e., for a change in θ), sinsegmenting Dfa extractionDfor = itsrrobustness and speed + s r In the split process, (15) noise removal and clustering are applied after n × ∆st , th i sin l - Dfa detector we used the adaptive breakpoint (ABD) method Initially, we adopted theif standard and merge techni‐ for the even n < Nsmax , split to eliminate data processing to detect any discontinuity in the data [? ? ] This can make special case when θ changes during accumulation que [48] for segmenting the data without line extraction for www.intechopen.com : inferences about the possible presence of discontinuities speed Mixed Mixed Mode, ofMapping thethe LRF data its robustness and In theInsplit process, we used Algorithms and aMode: Framework for Indoorsome Robot in a Noisy Environment Clustering Spatial and world, Hough Domains inΔϕ a sequence ofusing validresolution range data.in In the real where is the angular of the LRF and σ is the place in Online Mode, whereas others r such adiscontinuity accounts for open doors, windows, adaptive breakpointprocessing detectortakes (ABD) method to detect any take place in Offline Mode For example, noise reduction residual variance to encompass the stochastic behaviour of open passages and other areas which it is important to discontinuity in theand dataclustering [17, 10] can Thistake canplace makeininferences Online Mode and recognize.ofA point distance-based (PDBS)noise the sequence the scanned points psegmentation and the related i Offline Mode, respectively This is shown in Figure about the possible presence of discontinuities in a sequence method detects these breakpoints by finding the Euclidean The LRF data is scanned at each step-interval as associated with ri The deviation is provided byPi+the LRF distance between the two consecutive points Pi and of valid range data.3(c) In the real world, such discontinuity 1: shown in Figure 3(c)-(i ) Noise is removed after each manufacturer and λ is an auxiliary parameter The value of accounts for open step-interval, doors, windows, passages as shownopen in Figure 3(c)-(ii )and However, Pi+1 − Pi < Dth (14) is performed on accumulated (offline) data λ has been found to perform well when setting the value other areas which clustering it is important to recognize A point after × ∆st step-intervals (Nsmax = 3) Three clusters ABD uses the distance threshold Dth via the following: distance-based segmentation (PDBS) method detects these as λ = 10 are made offline, as shown in Figure 3(c)-(iii ), whereas sin(∆φa ) two clusters are shown in Figure 3(c)-(iv) + 3σr (15) Dth = ri sin(λ − ∆φa ) ( 3.1 LRF Data Segmentation Prior to extracting lines, the LRF data is pre-processed into distinct segments The LRF scans the environment and acquires n scan points, represented as: Pi = [ xi , yi ] = [ri cos ϕi , ri sin ϕi ], i = 1, 2, · · · , n (13) ( ) ) where ∆φa is the angular resolution of the LRF and σr is the residual variance to encompass the stochastic behaviour of the sequence of the scanned points pi and the related noise associated with ri The deviation is provided by the LRF manufacturer and λ is an auxiliary parameter The value of λ has been found to perform well when setting the value as λ = 10◦ ith where ri is the distance between the scanned point with respect to the origin of the LRF, and ϕi is the angle of the After getting the breakpoints, the clusters are split using the iterative end point fitting (IEPF) algorithm [? ] and segments Si are obtained IEPF is a recursive algorithm for line extraction and for finding clusters The principle of the IEPF algorithm is shown in Figure For a cluster of points given by Ni , it uses a virtual line Liv between two end points of the cluster It then calculates the distance of every point d j to the line Liv and finds the maximum distance dim If dim > DT , the IEPF algorithm splits the cluster (a) of points N i into(b) two subsets N i1 and N i2 , where DT is a predefined splitting threshold The process is iterated to Figure (a) Raw laser scan of the environment with noise, and (b) the DBSCAN result each subset until no new subset can be found The value Figure (a) Raw laser scan of the environment with noise, and (b) the DBSCAN result of DT is generally chosen to be small, but for long-range sensors the data gets noisy as the range increases and it Geometry of aclusters scanned point a corresponding affects the splitting A modified iterative point fitting After getting theFigure breakpoints, the areand split using line IEPF algorithm in point Figure Foraend acore cluster of points Border point :isAshown border is6.not point but falls the iterative end point fitting (IEPF) algorithm [42] and segments S i are obtained IEPF is a recursive algorithm for www.intechopen.com line extraction and for finding clusters The principle of the within neighbourhood a core point In points Figure 9, given by N i ,the it uses a virtual line Lofvi between two end point B is a border point A border point can fall within of the cluster It then calculates the distance of every point : i the neighbourhood of several core points i dj intoa Noisy the line L v and finds theinmaximum distance Algorithms and a Framework for Indoor Robot Mapping Environment using Clustering Spatial and Hough Domainsdm If Noise point : A noise point is any point that is neither a core point nor a border point and which does not satisfy Int J Adv Robot Syst, 2015, 12:27 | doi: 10.5772/59992 the and MinPts criteria In Figure 9, C is a noise point This section shows the results of applying DBSCAN clustering on LRF data to remove noise Figure Test environment consequently decreased - step by step - until a minimum n artificial setting ratio that accounts for is achieved, which gives an optimum noise reduction ce of the LRF and the user’s experience Figure shows the test environment where the robot was ance of the measurement at range r0 The moved to collect the data points Figure 7(a) shows the is set to restrict the loosing threshold x raw LRF data of the environment without removing noise eal performance of the LRF Figure shows Figure 7(b) shows the result of DBSCAN applied on the on of the raw dataalgorithm using splits the the process In Figure 9, point A is a core point for the Є radius and dmi > DTLRF , the IEPF cluster of points N i raw data , i1S2 , · · · i2, S5 ) MinPts ≤ e to obtaininto segments (S two subsets N and N , where D is a predefined T splitting threshold The process is iterated to each subset Border point: A border point is not a core point but falls removal using Mathematical Morphology DT is Noise until no new subset can be found The value of 3.2.2 within the neighbourhood of a core point In Figure 9, point val in the Spatial Domain B is a border point A border point can fall within the generally chosen to be small, but for long-range sensors the Morphology is a branch of mathematics which deals with neighbourhood of several core points data gets noisy as the range increases and it affects the a wide range of operators and which is particularly useful splitting A modified iterative end point fitting (MIEPF) Noise point: A noise point is any point that is neither a core algorithmof is utilized to set the with value of DT [29] The for valuethe analysis of binary images This section shows the spatial clustering applications noise point nor a border point and which does not satisfy the є is set as: applicability of these LRF data The basic ? ? ] is based on density reachability and MinPts criteria.techniques In Figure 9, C ison a noise point idea is to each point Px,y obtained the LRF as efined as the numberD of points (MinPts) Thistreat section shows the results of applyingfrom DBSCAN (16) T = 3σr + єa(ρh − r0) a pixel clustering Initially, the data entire gridnoise of size height × width of fied radius A centre-based approach to on LRF to remove the setup environment is supposed to be null (i.e., zero) ecify a point as: (1) within the interior of where єa is an artificial setting ratio that accounts for the (core point), (2) onofthe edge performance the LRF andof thethe user’sdense experience σr is the point), or (3) in a sparsely occupied region variance of the measurement at range r0 The value of DT 1.5 NOISE POINT he neighbourhood of a point refers to allbased the on the real is set to restrict the loosing threshold =1 C the LRF.9Figure shows the the segmentation he specifiedperformance radius ofFigure illustrates 0.5 the raw LRF data usingin the2D process described above to re, a borderofand a noise point data 0 max obtain segments (S ,S , ⋯ ,S ) Y B A -0.5 These points form the interior of the CORE POINT BORDER POINT 3.2 Noise Removal in the Spatial Domain ed cluster A point is a core point if -1 DBSCAN r of points3.2.1 within a given neighbourhood MinPts = -1.5 e point as determined by the distance -2 -1 Density-based spatial clustering of applications with noise X d a user specified distance parameter, , (DBSCAN) [34, 1, 39] is based on density reachability ’Den‐ ertain threshold, MinPts Figure 9, point sity’ is defined as theIn number of points (MinPts) within a Figure Noise point, border point and core point Figure Noise point, border point and core point є AMinPts centre-based to density can radius point for thespecified radius and ≤ approach The values of є and N MinPts are set by training the robot and specify a point as: (1) within the interior of a dense region looking into the noise characteristics The robot is trained (core point), (2) on the edge of the dense region (border in an actual environment based on user feedback For a point), or (3) in a sparsely occupied region (noise point) (a) (b) assigned to є , fixed value of N MinPts , a maximum value iswww.intechopen.com TheNo, neighbourhood me, 2013, Vol No:2013 of a point refers to all the points within the specified radius є Figure illustrates the concept of a i.e., є = єmax The value of є is consequently decreased - step border and scan a noiseof point 2D data Figure 7.core, (a) aRaw laser theinenvironment with noise,by and theaDBSCAN step(b) - until minimum result є is achieved, which gives an optimum noise reduction Figure shows the test environ‐ ment where the robot was moved to collect the data points Border point : Aofborder point is not a co Figure 7(a) shows the raw LRF data the environment without removing within noise Figure shows the result ofof a core p the7(b) neighbourhood DBSCAN applied on the raw data point B is a border point A border poi the Mathematical neighbourhood of several core poi 3.2.2 Noise removal using Morphology Noise point : A noise point is any point Morphology is a branch of mathematics which deals with core point noris aparticularly border point a wide range of operators and which useful and whic for the analysis of binary images This section shows the and MinPts criteria the In Figure 9, C applicability of these techniques on LRF data The basic idea is to treat each point P x, y obtained from the LRF as a This section shows the results of app pixel Initially, the entire grid of size height × width of the clustering ontoLRF data remove noise setup environment is supposed be null (i.e., to zero) The Figure Test environment Figure Test environment points in the grid where the LRF data have a value greater The values of we apply and anNerosion MinPts are set Core point: These points form the interior of the densitythan zero are treated as one Next, (MIEPF)based algorithm is utilized to set the value of D [? ] cluster A point is a core point if the number of points T algorithm overrobot and scanned looking the untreated data.into Detailsthe of thenoise char within a given erosion technique can be found in various sources [26, The value is set as: neighbourhood around the point as deter‐ robot is trained in an actual11].environmen mined by the distance function and a user specified Erosion basically takes the logical AND operator of feedback For a fixed value of NMinP distance parameter, є , exceeds a certain threshold, MinPts neighbouring data (which can be or 1) to output the DT = 3σr0 + a (ρh − r0 ) (16) value is assigned to , i.e., = max T -u is achieved, which gives an optimum Figure shows the test environment whe moved to collect the data points Figur Ankit A Ravankar, Yohei Hoshino , Abhijeet Ravankar, Lv Jixin, Takanori Emaru and Yukinori Kobayashi: consequently decreased - step by7 step Algorithms andsetting a Frameworkratio for Indoor Robotaccounts Mapping in a Noisy artificial that forEnvironment using Clustering in Spatial and Hough Domains where a is an the performance of the LRF and the user’s experience σr0 is the variance of the measurement at range r0 The resultant data In order to effectively apply an erosion technique over raw sensor data, this paper describes different kinds of kernels, which can be selected by the user according to the characteristics of noise Each kernel is characterized by three parameters given by a logical vector: V = ëé ±y H ±y V ±y C ûù T (17) Here, H , V and C represent the horizontal, vertical and crosswise (or diagonal) directions, respectively Figure 10 shows the kernels DENSE × 3, PLUS × 4, PLUS × and STAR × (a) (b) (c) (d) an erosion algorithm over the untreated scanned data Details of the erosion technique can be found in various sources [? ? ] Erosion basically takes the logical AND operator of neighbouring data (which can be or 1) to output the resultant data In order to effectively apply an erosion technique over raw sensor data, this paper describes different kinds of kernels, which can be selected B S denotes the to symmetric of B , i.e.: of noise Each bywhere the user according the characteristics kernel is characterized by three parameters given by a logical vector: BS = x Î E| - x Î B { } T ±2 where (17) ψV ψC Kernel Type Here,Fig.No H,V and C representψHthe horizontal, vertical and Fig 10(a)(or DENSE ×3 ±1 ±1 ±1 Figure crosswise diagonal) directions, respectively Fig 10(b)the kernels PLUS × DENSE ±1 ×3, PLUS ±1×4, PLUS0×8 and 10 shows Fig 10(c) PLUS × ±2 ±2 STAR×8 STAR × A⊕ (23) V = ±ψH ±ψV ±ψC Fig 10(d) noise case o faster Some perfo ±2 ±2 The raw LRF data (denoted by A) is viewed as a subset of Table Definition of various kernels with parameters the integer grid E = Zdim where dim = The raw data is probed with a predefined shape (structuring element, The kernels shown in Figure 10 are defined by the three denoted by B) shown in Figure 10(b), which is a binary parameters (ψH , ψV , ψC ) summarized in Table It is datum itself2.whose shape is defined by with vector ψi where i ∈ Table Definition of various kernels parameters × will element remove more noise×8 thatexample, the kernelthe DENSE H,obvious V, C For structuring Fig.No Kernel Type ψH ψB=PLUS V ψC × × than the kernel PLUS Similarly, the kernel STAR (Figure 10(c)) is defined Fig 10(a) by: DENSE×3 ±1 ±1 ±1 will remove more noise than the kernel PLUS × Since Fig 10(b) PLUS×4 ±1 ±1 some actual LRF data also the noise ψi = ±2, i ∈gets ( H,removed V ), ψ =during (18) Fig 10(c) PLUS×8 C ±2 ±2 removal process, it is beneficial to actually try out and train Fig 10(d) STAR×8 ±2 ±2 ±2 The k param obvio than will r some remov the ro This which show PLUS Notic which which the robot using different kernels in the actual environment This can easily be done using the proposed framework, Once where Bz is the translation of vector B by vector z: 2discussed in detail in Section Figure 11(b) shows which is applie E = Z ; B ={(−2, 0), (−1, 0), (0, 0), (1, 0), (2, 0), (19) × the results of noise reduction by Erosion using the PLUS follow B = b + z | b ∈ B , ∀ ∈ E (21) { } (0, −1z), (0, −2), (0, 1), (0, 2z)} The raw LRF data (denoted by A) is viewed as a subset of kernel on noisy LRF data shown in Figure 11(a) Notice = data E = ℤdim thepoints integer where The raw is The in grid the grid where thedim LRF havedata a value thatErosion some of may the actual data the also actual gets removed, whichwith the 3.3 K also[?LRF remove data along defined by by ]: probed withzero a predefined (structuring greater than are treatedshape as one Next, element, we apply Erosion cannoise be is recovered applying dilation (the result of which in the Although this may seem to be a drawback, B ) shownover in Figure 10(b), which is a binary an denoted erosionby algorithm the untreated scanned data is shown in Figure 11(c)) After for the AcaseBof = LRF E| Bzit⊆actually A} or helps A Bin=data reduction A−b (20) {z ∈data datumofitself whose shape is defined by found vector ψ Details the erosion technique can be ini where various noise Figure 10 Structuring element kernel type: (a) DENSE×3, (b) PLUS×4, (c) Figure 10 Structuring element kernel type: (a) DENSE×3, (b) PLUS×8, (d) STAR×8 The central data point indicated in green is the point PLUS ×4, (c) PLUS×8, (d) STAR×8 The central data point indicated to be processed in green is the point to be processed sources ] example, Erosion the basically takeselement the logical AND i ∈ H ,V[?,C ?For B =PLUS structuring operator of neighbouring × (Figure 10(c)) is defineddata by: (which can be or 1) to output the resultant data In order to effectively apply an erosion technique over data, this (18) paper y i = ±2, i Ỵ =0 ( H , Vraw ) ,y C sensor describes different kinds of kernels, which can be selected by the user according to the characteristics of noise Each E =characterized Z ; B = {( -2,0),(by -1,0),(0,0),(1,0),(2,0), kernel is three parameters given by a (19) logical vector: (0, -1),(0, -2),(0,1),(0,2)} V = ±ψ Erosion is defined by [38]:H ±ψV ±ψC T (17) faster computation of clustering described in Section 3.3 b∈ B real-t Some of the actual data lost by erosion can be obtained by a pre performing a dilate operation, which is defined by: the m the nu S A ⊕ B = z ∈ E|( B )z ∩ A = or A ⊕ B = Ab (22) Mode, b∈ B forme be gr where BS denotes the symmetric of B, i.e.: k-mea BS = { x ∈ E| − x ∈ B} (23) descr prese 11 Raw laser data of with a wall with (b)operation, the Figure 11 kernels (a)(a) Raw laser data of a wall noise, (b) noise, the erode The shown in Figure 10 are defined byerode the three k-mea Here, H,V and C represent the horizontal, vertical and Figure and (c) theand dilate(c) operation on (b) operation, the dilate operation on (b) parameters (ψ , ψ , ψ ) summarized in Table It is which | A ! B = z Ỵ E B Í A or A ! B = A H V C } crosswise (or {diagonal) directions, respectively Figure I -b z (20) bỴB obvious that the kernel DENSE × will remove more 10 shows the kernels DENSE×3, PLUS×4, PLUS×8 and Once the noise has been removed, clustering can then be noise such than the kernel PLUS×4 Similarly, the kernel STAR×8 STAR×8 applied in the spatial domain, which is discussed in the where Bz is the translation of vector B by vector z : will remove more noise than the kernel PLUS×8 Since section The raw LRF data (denoted by A) is viewed as a subset of following some actual LRF data also gets removed during the noise dim where dim = The raw data www.intechopen.com the integer gridB E= = Z removal process,and it isa beneficial try out and train (21) {b + z |b Ỵ B} , "z Ỵ E Algorithms Framework to foractually Indoor Robot Mapping in a Noisy E z K-means Fuzzy C-means Clustering is probed with a predefined shape (structuring element, 3.3 the robotand using different kernels in the actual environment denoted by B) shown in Figure 10(b), which is a binary This can easily be done using the proposed framework, Erosion may also remove the actual data along with the removing the noise, clustering is applied on the noisedatum itself whose shape is defined by vector ψi where i ∈ After which is discussed in detail in Section ?? Figure 11(b) noise Although this may seem to be a drawback, in the case free LRF data Clustering can be performed on real-time H, V, C For example, the structuring element B=PLUS×8 shows the results of noise reduction by Erosion using the of LRF data it actually helps in data reduction for the faster LRF data after each step in Online Mode or after a predefined (Figure 10(c)) is defined by: PLUS×4 kernel on noisy LRF data shown in Figure 11(a) computation of clustering described in Section 3.3 Some of N smaxsteps in Offline Mode The choice of the mode of Notice that some of the actual LRF data also gets removed, the actual dataψlost by erosion can be obtained by perform‐ (18) operation of the robot will also determine the number of i = ±2, i ∈ ( H, V ), ψC = which can be recovered by applying dilation (the result of ing a dilate operation, which is defined by: clusters at Figure each step For Online Mode, the whichperformed is shown in 11(c)) { ( ) A Å B = z Ỵ E| BS z } ầ A ỉ or A B = U Ab E = Z2 ; B ={(−2, 0), (−1, 0), (0, 0), (1, 0bỴ)B, (2, 0), (22) (19) following section (0, −1), (0, −2), (0, 1), (0, 2)} number of clusters (and hence the centroid) formed at each the noise been removed, clustering stepOnce is one, while forhas Offline Mode it can be greater thancan one then be in the spatial domain, in our test environment) Either kwhich -meansisordiscussed fuzzy c - in the (≤ 4applied Int J Adv Robot Syst, 2015, 12:27 | doi: 10.5772/59992 3.3 K-means and Fuzzy C-means Clustering Erosion is defined by [? ]: A B = {z ∈ E| Bz ⊆ A} or A B= A−b (20) After removing the noise, clustering is applied on the noise-free LRF data Clustering can be performed on nearest mean The k-means algorithm aims to minimize an objective function (a squared error function): means clustering is applied A brief description of k -means k n ( j) and fuzzy c -means clustering Q = ∑ is p − c j here (24) ∑presented j =1 i =1 i k -means clustering [21,46, 1] is a method of cluster analysis ( j) which aims of data error pointsdistance into k measured clusters, Here,to partition p − c aisset a squared j i such that each observation belongs to the cluster with the ( j) between a data point pi belonging to the jth cluster and k -means algorithm aims minimize an nearest mean The a cluster-centre c j for n data points fromtotheir respective objectivecluster function (a squared error function): centres The fuzzy c-means clustering [? ? ] (or ’FCM’) is an k n extension of k-means clustering FCM is similar to k-means Q= p( j ) - c j data point in the data set (24)is clustering except thati every j =1 i =1 given membership by a weight ranging from to 1, and each data point in the set belongs to every cluster with a weight not belong at all, and formeasured absolutely pi( j) − cj (02for is does a squared error distance belongs) The objective function that FCM optimizes is: åå Here, between a data point pi( j) belonging to the j th cluster and a N C cluster-centre Qcjmfor = n data uijmpoints pi − c jfrom , i =1 j =1 ∑∑ cluster centres their ≤ m ≤respective ∞ (25) where m ∈ R clustering and m ≥ 1, uij 1] denotes the degree of c -means [22, (or ’FCM’) is an The fuzzy membership of pi in the cluster j, pi is the ith r-dimensional extension of k -means clustering FCM is similar to k -means measured data, c j is the r-dimensional centre of the cluster, clustering that every pointthe insimilarity the databetween set is andexcept · is the norm whichdata expresses given membership a weight ranging from to 1, and any measuredby data and the centre each data point the set belongs to every cluster a The fuzzyin partitioning of the data set is carried outwith by the weight (0 for does not belong at objective all, and function for absolutely iterative optimization of the Qm , with membership uij andthat cluster centres c j by: is: belongs).updated The objective function FCM optimizes uij =N C Q m = ååCuijm pi p-i −ccjj i =1 ∑ j =1 k =1 , cj = ∑iN=1 uijm pi , m1−2 £1 m £ ¥ ∑iN=1 uijm (26) (25) pi − c k k +1 k uijmembership < ν where when max uij −of ij degree ∈ ℝiteration 1, uij denotes where mthe and m ≥stops the ν is a termination criterion between and 1, whereas k j , pi is the r -dimensional i thThe cluster of pi in the denotes the iteration steps process alwaysmeasured converges to a local minimum or Qm The implementation of the k-means and fuzzy c-means algorithms is straightforward using Equations (24), (25) and 12(a) and Figure 12(b) of shows the result cj isFigure the r -dimensional centre the cluster, and ⋅ data,(26) of applying k-means clustering and c-means clustering on is the norm which expresses the similarity between any noise-free LRF data with 52 clusters, respectively The measured data and the centre different colours applied to the LRF data clearly show the different clusters and have no other significance The The fuzzy partitioning the data set isincarried centroids obtained are alsoof shown as circles the case out of by the iterativeclustering optimization of the objective Qm, with k-means and rectangles in the casefunction of c-means clustering in Figure 12(a)uand Figure 12(b), respectively updated membership c and cluster centres by: ij Later, these centroids are joined to form the map j 3.4 Generating Straight-line Maps m p ij i i = If the centroids 12(a) and 12(b) are , uij = shown in Figure c = j joined, exact straight-line maps2may not be Nobtained as the uijm cases, it æ on ö m -1 slope Inåsome pi -the c j same centroids notC lie ỗ ữ i =1 may be desirable to obtain straight-line maps This can be k =1 ỗ pi - c k ữ solved by applying è SVD to øthe centroids or by taking a (26) Hough transform of the centroids A brief description of both approaches is given in the following sections k +1 k the iteration stops when maxij {| uij − uij |} < ν where ν is a termination and 1, whereas k denotes 3.4.1 Applying criterion SVD to thebetween Centroids of0 the Clusters the iteration steps The process alwaysSVD converges A straight line can be obtained by applying [? ] on theto a local minimum or Qism.composed of the centroids (c1 , c2 · · · cn ) matrix A which obtained after the clustering of the LRF data for a given θ: The implementation of the  k -means and fuzzy c -means c1 algorithms is straightforward  c1  using Equations (24), (25)   A = (27)result of and (26) Figure 12(a) and Figure 12(b) shows the · cn and c -means clustering on applying k -means clustering noise-free LRF data with 52 clusters, respectively The The SVD ofcolours a matrixapplied A is expressed different to theas: LRF data clearly show the different clusters and The A = have USV T no other significance (28) centroids obtained are also shown as circles in the case of k T where U and V are orthonormal matrices consisting of -means clustering and rectangles the of c -means the left and right singular vectors, andin S is thecase diagonal clustering 12(a) and Figure respectively matrix whichinis Figure formed by the singular values12(b), arranged in decreasing order of the on-diagonal follows: Later, these centroids are joinedelements to formasthe map U = [ u1 · · · u n ] (a) N åu (29) (b) Figure 12 Figure (a) Results clustering k-meansusing algorithm with 52 clusters,with and52 (b)clusters, results ofand clustering using c-means clustering algorithm with 52 12.of (a) Results using of clustering a k-means algorithm (b) results of fuzzy clustering using a fuzzy c-means clusters algorithm with 52 clusters 3.4 Generating Straight-line Maps Short Journal Name, 2013, Vol No, No:2013 If the centroids shown in Figure 12(a) and 12(b) are joined, exact straight-line maps may not be obtained as the centroids not lie on the same slope In some cases, it may be desirable to obtain straight-line maps This can be solved by applying SVD to the centroids or by taking a Hough transform of the centroids A brief description www.intechopen.com of both approaches is given in the following sections 3.4.1 Applying SVD to the Centroids of the Clusters A straight line can be obtained by applying SVD [30] on the matrix A which is composed of the centroids (c1,c2 ⋯ cn ) obtained after the clustering of the LRF data for a given θ : Ankit A Ravankar, Yohei Hoshino , Abhijeet Ravankar, Lv Jixin, Takanori Emaru and Yukinori Kobayashi: Algorithms and a Framework for Indoor Robot Mapping in a Noisy Environment using Clustering in Spatial and Hough Domains é c1 ù ê ú c A = ê 1ú ê×ú ê ú êëcn úû The equation of the line in the Hough space (ρ,θ ) can be expressed as: (27) The SVD of a matrix A is expressed as: A = USV T (28) where U and V T are orthonormal matrices consisting of the left and right singular vectors, and S is the diagonal matrix which is formed by the singular values arranged in decreasing order of the on-diagonal elements as follows: U = éëu1 Lun ùû édiag (s 1s Ls n ) ù (a) S=ê ú (s ³ s L ³ s n ) ëê ûú (29) diag(σ σ2 · · · σn )T T U U Vσ =nI), where The matrices U and V 1satisfy (σ=1 I≥and σ2 ·V· · ≥ (30) S= the matrix I represents the identity matrix V = [ v1 · · · v n ] (31) The SVD calculations consist of evaluating eigenvalues and TV = T The matrices U and V satisfy U T Ueigenvectors = I and V of A T A The A T AI, eigenvectors of AA and where the matrix I represents the identity matrix make up the columns of V and the eigenvectors of AA T Theup SVD calculations of evaluating eigenvalues make the columns of Uconsist Moreover, the singular values T and A T A The eigenvectors and eigenvectors of AA in S Tare square roots of the eigenvalues from AA T orof A A make up the columns of V and the eigenvectors of A T AA A T make up the columns of U Moreover, the singular values in S are the square of the eigenvalues from AA T After computing SVDroots of the centroids, the minimum T or A A (near-zero) singular values (σ ) of the matrix S are ignored computing theisSVD of the the minimum andAfter set to zero This similar to centroids, the technique used in (near-zero) singularanalysis values (σ)The of the aresmaller ignored principal component with Sthe Snewmatrix and set to zero This is similar to the technique used in singular values ignored is then used to recalculate the principal component analysis The Snew with the smaller matrix Anew consisting of points on the regression line as: singular values ignored is then used to recalculate the matrix Anew consisting of points on the regression line as: Anew = USnew V T (32) (32) 3.4.2 Applying a Hough Transform to the Centroids of the Clusters 3.4.2 Applying a Hough Transform to the Centroids of the Clusters A Hough transform is a technique for line extraction from A Hough is a are technique forinline binary imagestransform and its details available [12, extraction 41] The from binary images and its details are available in [? motivating idea behind the the Hough transform technique ? ] The motivating idea behind the the Hough for line detection is that each input measurement (centroids transform technique for line detection is that each in this case) indicates its(centroids contribution to acase) globally consis‐ input measurement in this indicates its tentcontribution solution Into the spatial domain, the straight line be a globally consistent solution In thecan spatial expressed domain,as: the straight line can be expressed as: y+ =cmx , + c, y = mx (33) (33) The equation of the line in the Hough space (ρ, θ) can be expressed as: ρ = x cos θ + y sin θ (34) 10 Int J Adv Robot Syst, 2015, 12:27 | doi: 10.5772/59992 Algorithm Straight-line generation from centroids using a Hough transform 2: (30) The Hough transform algorithm is applied on the centroids obtained from k-means or fuzzy c-means clustering After calculating the highest votes in the (ρ, θ) space, an inverse Hough transform is applied which gives the line passing through the maximum number (34) The Hough transform algorithm is applied on the centroids obtained from k -means or fuzzy c -means clustering After calculating the highest votes in the (ρ,θ ) space, an inverse Hough transform is applied which gives the line passing through the maximum number of centroids The Hough transform and inverse Hough Transform algorithm applied on the centroids is described in Algorithm 1: Figure 13 Straight-line generation from centroids using SVD and a Hough transform The blue points represent the centroids, the red line represents the SVD line, while the green line represents éë v1Hough V =the L ùû transform the line obtained from (31) Anew = USnew V T r = x cosq + y sin q 3: 4: 5: 6: 7: 8: 9: 10: 11: 12: 13: 14: 15: 16: 17: 18: 19: 20: 21: 22: 23: 24: 25: 26: 27: 28: 29: 30: 31: 32: 33: 34: Pi ← { p1 , p2 , p3 , · · · , pn }/ ∗ centroids ∗ / V (ρ, θ ) ← 0; / ∗ clear vote − accumulator ∗ / Max_θ; / ∗ max θ ∗ / W; / ∗ max x size ∗ / / ∗ Convert points P( xi , yi ) to V (ρi , θi ) ∗ / for ∀ xi ∈ Pi for ∀yi ∈ Pi if P( xi , yi ) == Bright then / ∗ Eval : P( xi , yi ) → V (ρi , θi ) ∗ /; for ≤ θ ≤ Max_θ ρ ← xi × cos(θ ) + yi × sin(θ ) / ∗ increment vote − accumulator ∗ / V (ρi , θi ) → V (ρi , θi ) + 1; end for end if end for end for / ∗ Find Max V (ρi , θi ) ∗ / Tρ,θ ← for ∀ρi ∈ Vi for ∀θi ∈ Vi if Tρ,θ < Vρi ,θi then Tρ,θ ← Vρi ,θi ; end if end for end for / ∗ Extract line f rom Tρ,θ ∗ / for ∀ xi ≤ W ρ ← Tρ θ ← Tθ (ρ− xi ×cos(θ )) yi ← sin(θ ) Line xi ,yi ← Bright end for Draw Line xi ,yi wide band of LRF data The red line represents the line Figure 13 shows resultsSVD of straight-line using obtained after the applying The greengeneration line represents the and line the obtained using theonHough transform is SVD Houghbytransform the centroids The It blue clear represent from Figure that SVD generates the regression points the ?? centroids of clusters obtained after the line, reduction whereas the Hough transform generates a lineband which k -means noise and clustering of a wide of passes through the maximum number of centroids Since LRF data The red line represents the line obtained after the number of clusters nc is significantly less than the applying SVD The green line represents the line obtained actual number of data points N (i.e., nc

Ngày đăng: 08/11/2022, 15:03

TÀI LIỆU CÙNG NGƯỜI DÙNG

TÀI LIỆU LIÊN QUAN