A Simulated Autonomous Car

Iain David Graham Macdonald
Master of Science
School of Informatics
University of Edinburgh
2011

Abstract

This dissertation describes a simulated autonomous car capable of driving on urban-style roads. The system is built around TORCS, an open source racing car simulator. Two real-time solutions are implemented: a reactive prototype using a neural network, and a more complex deliberative approach using a sense, plan, act architecture. The deliberative system uses vision data fused with simulated laser range data to reliably detect road markings. The detected road markings are then used to plan a parabolic path and compute a safe speed for the vehicle. The vehicle uses a simulated global positioning/inertial measurement sensor to guide it along the desired path, with the throttle, brakes, and steering being controlled using proportional controllers. The vehicle is able to reliably navigate the test track, maintaining a safe road position at speeds of up to 40km/h.

Acknowledgements

I would like to thank all of the lecturers who have taught me over the past year, each of whom contributed to this thesis in some way. Particular thanks must go to my supervisor, Prof Barbara Webb, for agreeing to supervise this project and for her advice and encouragement throughout, and to Prof Bob Fisher for his many useful suggestions.

Declaration

I declare that this thesis was composed by myself, that the work contained herein is my own except where explicitly stated in the text, and that this work has not been submitted for any other degree or professional qualification except as specified.

(Iain David Graham Macdonald)

Table of Contents

CHAPTER 1 INTRODUCTION
  1.1 PURPOSE
  1.2 MOTIVATION
  1.3 OBJECTIVES
  1.4 DISSERTATION OUTLINE
CHAPTER 2 BACKGROUND
  2.1 INTRODUCTION
    2.1.1 Motivation
    2.1.2 A Brief History of Autonomous Vehicles
  2.2 THE URBAN CHALLENGE
    2.2.1 The Challenge
  2.3 BOSS
    2.3.1 Route Planning
    2.3.2 Intersection Handling
  2.4 JUNIOR
    2.4.1 Localisation
    2.4.2 Obstacle Detection
  2.5 ODIN
    2.5.1 Path Planning
    2.5.2 Architecture
  2.6 DISCUSSION
  2.7 CONCLUSION
CHAPTER 3 SIMULATION SYSTEM
  3.1 ARCHITECTURE
  3.2 TRACK SELECTION
  3.3 CAR SELECTION
CHAPTER 4 REACTIVE PROTOTYPE
    4.1.1 Image Processing
    4.1.2 Training
    4.1.3 Results
    4.1.4 Evaluation
CHAPTER 5 DELIBERATIVE APPROACH
  5.1 SUMMARY
  5.2 GROUND TRUTH DATA
  5.3 SENSING
    5.3.1 Some Initial Experiments
    5.3.2 The MIT Approach
    5.3.3 Road Geometry Modelling
    5.3.4 Lane Marking Verification
    5.3.5 Lane Marking Classification
  5.4 PLANNING
    5.4.1 Trajectory Calculation
    5.4.2 Speed Selection
  5.5 ACTING
    5.5.1 Speed Control
    5.5.2 Steering Control
CHAPTER 6 EVALUATION
  6.1 LANE MARKING DETECTION AND CLASSIFICATION
  6.2 TRAJECTORY PLANNING
    6.2.1 Generation of Trajectory Points
    6.2.2 Flat Ground Assumption
    6.2.3 Non-continuous Path
    6.2.4 Look-ahead Distance
  6.3 PHYSICAL PERFORMANCE
    6.3.1 Path Following
    6.3.2 Speed in Bends
    6.3.3 G Force Analysis
    6.3.4 Maximum Speed
  6.4 REAL-TIME PERFORMANCE
CHAPTER 7 CONCLUSION
  7.1 SUMMARY
  7.2 FUTURE WORK AND CONCLUSION
BIBLIOGRAPHY

Chapter 1 Introduction

"The weak point of the modern car is the squidgy organic bit behind the wheel."
Jeremy Clarkson

1.1 Purpose

This dissertation describes a simulated autonomous car capable of navigating on urban-style roads at variable speeds whilst staying in-lane. The simulation uses TORCS, an open source racing car simulator which is known for its accurate vehicle dynamics [23]. Two solutions to the problem are provided: firstly, a reactive approach using a neural network and, secondly, a deliberative approach inspired by the recent DARPA Urban Challenge with separate sensing, planning, and control stages. In particular, the latter system fuses vision and simulated laser range data to reliably detect road markings to guide the vehicle.

1.2 Motivation

The recent DARPA Grand Challenges and the work of teams such as Tartan Racing [8] and Stanford Racing [9] have demonstrated that the goal of fully autonomous vehicles may be within reach. The development of such vehicles has the potential to save many of the thousands of lives that are lost in collisions every year [1]. Due to the need for autonomous vehicles to interact in an environment populated by human drivers and pedestrians, there is a clear safety risk in the development process. This risk can act as a barrier to development, as any autonomous vehicle must prove a sufficient level of safety before it is able to enter such an environment. One approach to this problem, as seen in the DARPA challenges, is to create controlled environments that are representative of the intended operating environment. Whilst the benefits of this approach are clear, the logistics and the expense of organising these events mean that they are likely to remain rare. Another approach is to simulate the environment, thereby eliminating the safety risk and reducing costs. Simulations have the additional benefit of being able to test performance under unusual circumstances, and they allow algorithms to be optimised and improved. Such simulations were an essential part of the development process for the entrants in the DARPA events. However, the development of accurate simulations of complex systems and environments is a non-trivial task and may be beyond the budgets of many researchers. This project, therefore, explores the low-cost alternative of using an open source racing simulator to develop a simulated autonomous car.

1.3 Objectives

The goal of this project was to develop a simulated autonomous vehicle capable of driving on urban-style roads. The vehicle must be capable of navigating around a test track in a safe and controlled manner. Specifically, the vehicle must remain in the correct lane and drive at an appropriate speed. Although the environment is simulated, the intention is to approach the project as though the vehicle is real. With that in mind, the vehicle should only make use of information that would be available in the real world, and the system must run in real-time. The project looks to the recent DARPA Urban Challenge for inspiration.

1.4 Dissertation Outline

The remainder of this document is structured as follows: Chapter 2 provides a brief history of autonomous vehicle research and discusses some of the techniques used by state of the art vehicles. Chapter 3 describes the simulator and system architecture. Chapter 4 describes a sub-project which was undertaken to establish the feasibility of the main project; here, a neural network is used to control a simulated autonomous vehicle. Chapter 5 forms the main body of the dissertation and describes a deliberative approach using image processing, data fusion, planning, and control techniques to solve the problem. Chapter 6 provides the results of experiments performed on the completed system as an evaluation. Finally, Chapter 7 offers a summary of the work undertaken, conclusions, and suggestions for future work.
Chapter 2 Background

2.1 Introduction

This chapter examines the current state of the art in driverless cars. It focuses on the 2007 DARPA Urban Challenge, a competition held to promote research in the field and the main inspiration behind this project. The motivation behind autonomous vehicles is discussed, followed by a retrospective that places the Urban Challenge in context. The challenges posed by the Urban Challenge are described, and the vehicles which finished in the top three positions, Boss, Junior, and Odin, are examined. As this project aims to build a complete autonomous system, different aspects of each vehicle are examined, giving a broad overview of the field. Although some of the techniques described do not relate directly to this project, they have helped shape it and represent the aspirations of the project had more time been available.

2.1.1 Motivation

The benefits of autonomous vehicles can be split into three main categories: safety, efficiency, and lifestyle. The World Health Organisation estimates that 1.2 million people are killed on the roads each year and a further 40 million are injured. Human errors, such as distraction or inappropriate speed, are the primary cause of road accidents [1]. The hope is that lives can be saved through the use of technology. Additionally, the US government has mandated that 1/3 of US military ground vehicles must be unmanned by 2015 [4]. There is increasing pressure on manufacturers to produce more economical vehicles, and on governments to maintain an efficient road network. The number of vehicles on our roads increases every year, with further congestion as the result. For human drivers, the recommended safe inter-car gap is two seconds, but autonomous vehicles may be able to reduce this, increasing the capacity of the road network. The car has been a significant force for social change, improving the mobility of the population. Access to this mobility will increase the quality of life for certain groups, such as the elderly and disabled, who cannot drive themselves. For others, being released from time spent behind the wheel will simply allow that time to be put to better use [1].

2.1.2 A Brief History of Autonomous Vehicles

This section provides a history of the development of autonomous vehicles from the 1980s to the present. In the early 1980s, pioneer Ernst Dickmanns began developing what can be considered the first real robot cars. He developed a vision system which used saccadic camera movements to focus attention on the most relevant visual input. Probabilistic techniques such as extended Kalman filters were used to improve robustness in the presence of noise and uncertainty. By 1987 his vehicle was capable of driving at high speeds, albeit on empty streets [5]. In the late 80s Dickmanns participated in the European Prometheus project (PROgraMme for a European Traffic of Highest Efficiency and Unprecedented Safety). With an estimated investment of around a billion dollars in today's money, the Prometheus project laid the foundation for most subsequent work in the field. By the mid-90s, the project had produced vehicles capable of driving on highways at speeds of 80km/h in busy traffic [6]. Techniques such as tracking other vehicles, convoy driving, and autonomous passing were developed. Another pioneer in the field was Dean Pomerleau, who developed ALVINN
(Autonomous Land Vehicle in a Neural Network) in the early 90s [7]. ALVINN was notable for its ability to learn to drive on new road types with only a few minutes' training from a human driver. After the successes of the 80s and early 90s, progress seems to have plateaued in the late 90s. It was not until DARPA (Defense Advanced Research Projects Agency) launched the first of its Grand Challenges that interest in the field was renewed. In 2004, DARPA offered a $1 million prize to the first autonomous vehicle capable of negotiating a 150 mile course through the Mojave Desert. For the first time, the vehicles were required to be fully autonomous, with no humans allowed in the vehicle during the competition. By this time, GPS systems were widely available, significantly improving navigational abilities. Despite several high profile teams, and …

Chapter 6 Evaluation

Whilst the process of merging the trajectory-sets operated correctly, the resulting vehicle motion was significantly worse than before. With n set to between two and ten, the vehicle oscillated wildly and control was quickly lost. No investigation was made as to why the technique failed, but it is possibly due to the inaccuracy of trajectory points far in the distance. By using historic data in this manner, inaccurate points persist and are allowed to influence the trajectory more than they ought to. One resolution to this would be to weight the trajectory sets, with older data providing less influence. A better approach, however, would be to use connected splines rather than parabolas for path generation. Doing so would guarantee that consecutive paths are continuous.

Figure 6-4 Three consecutive parabolic paths that are not continuous. The red circles indicate the start point of each path.

6.2.4 Look-ahead Distance

The look-ahead distance is a measure of how far into the distance the system can see and is equivalent to the length of the path generated for the vehicle to follow. To measure this, every path generated during the course of a single lap was recorded. Figure 6-5 shows a histogram of the individual path lengths. The distribution is multimodal, with peaks around the mean (20.70m), the 9.0m mark, the 14.0m mark, and the 40.0m mark. The peaks at 9.0m and 14.0m are likely to represent the paths through bends, due to the reduced look-ahead distance. The peak at 40.0m is explained by the software truncating any longer paths. Although it is difficult to make direct comparisons, the figure of 20.7m compares well with the 10-15m look-ahead distance of Odin [10].

If we assume that the average value of 20.70m is representative of the path length on a straight section of road, we can compute the maximum safe speed of the vehicle. To do this, we apply the constraint that the vehicle must be able to stop within the distance for which it has a valid path. Taking 20.70m as a maximum stopping distance gives an approximate maximum speed of 63km/h in good conditions [27].

Figure 6-5 Histogram of path lengths over a single lap.
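The stopping-distance calculation behind the 63km/h figure comes from an online calculator [27] whose assumptions are not reproduced here. A minimal sketch, assuming friction-limited braking on dry asphalt (a friction coefficient of roughly 0.75) and ignoring reaction time, gives a comparable figure from the 20.70m look-ahead distance; the coefficient and the neglected reaction time are assumptions of this sketch, not values from the dissertation.

```cpp
#include <cmath>
#include <cstdio>

// Maximum speed from which the vehicle can stop within the look-ahead distance,
// assuming friction-limited braking: d = v^2 / (2 * mu * g)  =>  v = sqrt(2 * mu * g * d).
// The friction coefficient (0.75) and zero reaction time are assumptions for this
// sketch, not values taken from the dissertation or from reference [27].
double maxSpeedKmh(double lookAheadM, double mu = 0.75, double g = 9.81)
{
    const double v = std::sqrt(2.0 * mu * g * lookAheadM);  // metres per second
    return v * 3.6;                                          // convert to km/h
}

int main()
{
    // For the 20.70 m mean path length this prints roughly 63 km/h.
    std::printf("max safe speed: %.1f km/h\n", maxSpeedKmh(20.70));
    return 0;
}
```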
6.3 Physical Performance

This section evaluates the system's ability to act according to the instructions issued by the AI controller.

6.3.1 Path Following

As the width of the lanes is constant and the desired path aims to be along the centre of the lane, the centre of the lane can be used as a baseline for the evaluation of the actual path taken by the vehicle. As the vehicle progresses around the track, the lateral offset from the centre of the lane is recorded. Figure 6-6 shows the results for a single lap. The maximum lateral offset experienced during the lap was 118cm. Given that the width of the vehicle is 200cm and the width of the lane is 470cm, we can conclude that at no time does the vehicle cross the centre line. The system, therefore, meets the project objective of maintaining a safe position on the road.

Comparing the performance of the autonomous vehicle with the expert driver when driving at a constant 20km/h, we can see that the expert driver strays less from the correct path. As both use the same proportional steering control method to follow their respective paths, we can conclude that the large lateral offsets experienced by the autonomous vehicle are the result of poor path planning rather than poor path following. By way of comparison, the figure shows lateral offset corrections experienced by Junior during the Urban Challenge [9]. Note that these offsets are what Junior would have experienced had it been relying solely on its GPS/IMU system rather than its probabilistic localisation system. Performance, therefore, seems broadly comparable with Junior's GPS/IMU based system.

Figure 6-6 Lateral offsets of the vehicle. Top left, autonomous vehicle travelling at variable speed. Top right, autonomous vehicle travelling at constant 20km/h. Bottom left, expert vehicle travelling at constant 20km/h. Bottom right, lateral offset corrections.

Interestingly, the autonomous driver has peaks around 20cm for both constant and variable speed. The fact that the histogram does not peak at zero suggests that there may be some bias in the calculation of the vehicle's path. A small error in the estimation of distance between the lane markings will result in the trajectory points not forming a single line. In such a case, the lane marking that contributes the most points is able to cause bias in the calculated path. This point was not investigated further.¹

¹ The cause of this bias was later confirmed as being due to an error in the coding of the road width. However, due to time constraints, the experiments in this chapter were not re-run.

6.3.2 Speed in Bends

The following figure shows the commanded speed of the vehicle as it progresses around the track. The commanded speed is the speed that the autonomous driver wishes the vehicle to maintain and can take the values 20, 30, or 40km/h. Upon receiving a new speed command, the vehicle's speed controller will accelerate or decelerate as required.

Figure 6-7 Commanded vehicle speeds around the track. Red = 20km/h, green = 30km/h, and blue = 40km/h. The left image shows the whole track (travelling clockwise); the right image shows detail of a complex set of bends (travelling top to bottom).

Figure 6-7 illustrates the system's ability to correctly select the appropriate speed based on the curvature of the planned trajectory. In many cases, the vehicle is commanded to reduce speed for a bend prior to entering the bend. The system, therefore, exhibits predictive behaviour and successfully meets the project objective of maintaining a speed appropriate to the current road geometry.
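The speed selection rule itself is described in section 5.4.2, which is not part of this preview, so the exact mapping from path curvature to the 20, 30, and 40km/h commands is not visible here. The sketch below shows one plausible rule, choosing the highest command speed that keeps lateral acceleration on the planned parabola under an assumed comfort limit of 0.2g; both the rule and the limit are illustrative assumptions rather than the dissertation's actual method.

```cpp
#include <cmath>
#include <cstdio>

// Curvature of the planned parabola y = a*x^2 + b*x + c at longitudinal offset x:
// kappa = |y''| / (1 + y'^2)^(3/2), with y' = 2*a*x + b and y'' = 2*a.
double parabolaCurvature(double a, double b, double x)
{
    const double slope = 2.0 * a * x + b;
    return std::fabs(2.0 * a) / std::pow(1.0 + slope * slope, 1.5);
}

// Pick the highest of the three command speeds whose lateral acceleration v^2 * kappa
// stays below an assumed comfort limit. The 0.2 g bound is an illustrative assumption,
// not a value taken from the dissertation.
double commandSpeedKmh(double kappa, double aLatMax = 0.2 * 9.81)
{
    const double speeds[] = {40.0, 30.0, 20.0};            // km/h, as used on the test track
    for (double kmh : speeds) {
        const double v = kmh / 3.6;                        // convert to m/s
        if (v * v * kappa <= aLatMax) return kmh;
    }
    return 20.0;                                           // never command less than the minimum
}

int main()
{
    const double kappa = parabolaCurvature(0.01, 0.0, 0.0);   // bend of radius ~50 m
    std::printf("kappa = %.3f 1/m -> command %.0f km/h\n", kappa, commandSpeedKmh(kappa));
    return 0;
}
```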
6.3.3 G Force Analysis

As mentioned earlier, the vehicle motion can be jerky whilst navigating bends. This prompted me to compare the lateral g-forces experienced by the expert driver with those of the autonomous driver. Figure 6-8 shows the expert driver in red and the autonomous driver in blue; both were driving at 20km/h. The difference between the two is striking.

Looking at the expert driver, the flat sections correspond to straight sections of track, the positive areas correspond to right-hand bends, and the negative areas are left-hand bends. A close-up view of the behaviour in a straight section and a bend are also provided. On the straight sections, the expert driver experiences virtually no lateral force, with only slight oscillations around zero. In contrast, the autonomous driver experiences significantly higher forces. The peak g-force experienced by the expert driver during the straight section shown is 0.02g (std dev 0.005) compared to 0.29g (std dev 0.068) for the autonomous driver; a factor of almost 15. The typical behaviour as the expert driver enters a bend is for the g-force to transition from near zero to an almost constant value for the duration of the bend before returning to zero on exit. In contrast, the autonomous driver continues to oscillate, with both positive and negative g-forces being experienced throughout the bend. The peak force during the bend shown is 0.23g (std dev 0.07) for the expert driver and 0.97g (std dev 0.17) for the autonomous driver. The likely cause of this is the non-continuous path problem described in section 6.2.3.

Figure 6-8 Comparison of lateral g-force between expert (red) and autonomous (blue) drivers. Top, complete lap. Bottom left, straight road. Bottom right, road bends to the right.

6.3.4 Maximum Speed

One of the benefits of using simulations is the ability to test a system to the point of failure without suffering the consequences of such a failure in the real world. It is, therefore, possible to determine the maximum speed at which the vehicle remains controllable by increasing the speed until control is lost. For this experiment, the vehicle was driven, under autonomous control, along an almost straight section of track. As the vehicle progressed, its lateral position relative to the centre of the lane was recorded. The process was then repeated at increasing speeds until the vehicle was unable to stay within the lane boundary. Figure 6-9 shows the results from runs at 20km/h, 100km/h and 110km/h. At 20km/h the vehicle maintains a steady path along the track. At 100km/h the vehicle oscillates severely along the track but, with a peak lateral offset of only 39cm, it remains well within the boundaries of the lane. At 110km/h the vehicle becomes unstable, with the oscillations increasing in amplitude until control is lost and the vehicle leaves the road. In this case, the vehicle drove onto a grassy surface with low friction, causing it to skid many metres from the track. It is, therefore, possible to state that the maximum straight-line speed at which the vehicle is controllable is around 100km/h. This easily exceeds the 48km/h limit set during the Urban Challenge. However, in practice, the maximum safe speed would be lower than this and the maximum comfortable speed even lower still.

Figure 6-9 Maximum controllable speed. Blue = 20km/h, black = 100km/h, and red = 110km/h.
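The steering law referred to throughout this chapter is the proportional controller of section 5.5.2, which also falls outside this preview; the high-speed oscillations described above are characteristic of such a purely error-driven law. The sketch below assumes the error signal is the lateral offset from the planned path and uses an illustrative gain; the dissertation's actual error term and tuning may differ.

```cpp
#include <algorithm>
#include <cstdio>

// Minimal proportional steering sketch. The gain and the use of lateral offset as the
// error signal are illustrative assumptions, not the dissertation's tuning.
struct PSteer {
    double kp      = 0.2;   // proportional gain (assumed)
    double maxLock = 1.0;   // normalised steering limit, [-1, 1] as in TORCS-style controls

    double command(double lateralOffsetM) const {
        // Steer against the offset and saturate at full lock.
        return std::clamp(-kp * lateralOffsetM, -maxLock, maxLock);
    }
};

int main()
{
    PSteer steer;
    // A 0.5 m offset from the desired path yields a small corrective command of -0.10;
    // the controller only reacts once an error exists, hence the oscillations at speed.
    std::printf("steer command: %.2f\n", steer.command(0.5));
    return 0;
}
```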
6.4 Real-time Performance

Table 6-1 gives typical processing times for the main stages of the processing pipeline. The system runs on a standard Windows laptop with an Intel i7 quad core processor. The total cycle time is around 42ms, giving a frame rate of 24Hz. Both TORCS and the AI controller consume ~12% of total processor capacity, giving a total processor load of ~25%. There is, therefore, ample capacity for more computationally intensive algorithms to be incorporated.

Table 6-1 Pipeline performance
- Pre-processing (grey-scale conversion, binary image for verification): ~1 ms
- Matched filters: 24 ms
- Point cloud processing (vertical feature identification, data fusion / image masking): ~3 ms
- Line tracking (line segments, distance transform, classification, parabola fitting): 14 ms
- Total: 42 ms

Chapter 7 Conclusion

7.1 Summary

This dissertation has examined the impact of the recent DARPA challenges, particularly the Urban Challenge, on the field of autonomous vehicles. Advances in sensor technologies such as LIDAR and GPS, along with ever-increasing processing power, have brought the goal of autonomous vehicles closer. Two approaches to autonomous vehicle control have been simulated using TORCS. The first, a reactive approach drawing on the work of pioneer Dean Pomerleau, served as a feasibility study for the project. The second, a deliberative approach with separate sensing, planning, and acting stages, forms the main body of the dissertation.

The deliberative approach was heavily influenced by the work of the MIT racing team and, in particular, the vision system described by Huang. The system extends Huang's approach by using twin matched filters to extract road markings from a forward facing camera. Simulated LIDAR data is fused with the image data in order to remove noise caused by roadside barriers. In contrast to Huang's approach, road markings are modelled using variable length line segments, and a simple verification process is used to remove false-positive detections. The verified lane markings are then projected onto the ground plane and classified as left, right, or centre markings according to their position relative to the vehicle's planned trajectory. Prior knowledge of the road width is used to convert the lane markings into a parabolic path for the vehicle. The curvature of this path is used to compute an appropriate speed for the vehicle. A separate process using simulated GPS/IMU data is used to guide the vehicle along the desired path at the desired speed. The vehicle's steering, throttle, and brakes are controlled using PI controllers.
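The fitting step that turns lane-centre trajectory points into a parabolic path is not shown in this preview (the bibliography cites the GNU Scientific Library [28], which provides least-squares routines that could be used for the purpose). The sketch below illustrates the idea with a plain normal-equations fit over hypothetical trajectory points; it is an illustrative stand-in, not the dissertation's code.

```cpp
#include <array>
#include <cstdio>
#include <vector>

struct Point { double x, y; };  // x ahead of the vehicle, y lateral, both in metres

// Least-squares fit of y = a*x^2 + b*x + c to the lane-centre (trajectory) points,
// solved via the 3x3 normal equations with Gaussian elimination.
std::array<double, 3> fitParabola(const std::vector<Point>& pts)
{
    double m[3][4] = {};                       // augmented normal-equation matrix
    for (const Point& p : pts) {
        const double basis[3] = {p.x * p.x, p.x, 1.0};
        for (int r = 0; r < 3; ++r) {
            for (int c = 0; c < 3; ++c) m[r][c] += basis[r] * basis[c];
            m[r][3] += basis[r] * p.y;
        }
    }
    for (int i = 0; i < 3; ++i) {              // forward elimination
        for (int r = i + 1; r < 3; ++r) {
            const double f = m[r][i] / m[i][i];
            for (int c = i; c < 4; ++c) m[r][c] -= f * m[i][c];
        }
    }
    std::array<double, 3> abc{};               // back substitution -> {a, b, c}
    for (int i = 2; i >= 0; --i) {
        double s = m[i][3];
        for (int c = i + 1; c < 3; ++c) s -= m[i][c] * abc[c];
        abc[i] = s / m[i][i];
    }
    return abc;
}

int main()
{
    // Hypothetical lane-centre points a few metres apart, curving gently to one side.
    std::vector<Point> pts = {{2, 0.02}, {6, 0.18}, {10, 0.52}, {14, 1.00}, {18, 1.65}};
    auto [a, b, c] = fitParabola(pts);
    std::printf("y = %.4f x^2 + %.4f x + %.4f\n", a, b, c);
    return 0;
}
```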
Analysis of the system's performance shows that the lane detection and classification system operates very well, with 91.9% of all lane markings being correctly detected and classified. The system is able to generate a path for the vehicle such that the vehicle maintains a safe position and an appropriate speed at all times. Furthermore, the vehicle is able to navigate the test track reliably, having completed well over 100 autonomous laps at speeds of up to 40km/h without failure. The vehicle remains controllable at speeds of up to 100km/h on straight roads. However, analysis has also shown that the vehicle experiences considerably higher lateral g-forces than the benchmark expert driver. The primary cause of this problem is the fact that consecutive paths generated by the system are not continuous. The result is a very slight but definite change in direction when a new path is generated, which is particularly apparent in bends. The vehicle also suffers from oscillations on the approach to changes in road gradient. This is the result of the flat ground-plane assumption not holding in these circumstances.

7.2 Future Work and Conclusion

The system successfully meets the project objective of maintaining a safe road position and speed. Furthermore, the broader goal of using techniques that were seen in the Urban Challenge has been met by combining vision, LIDAR, and GPS/IMU data. This project implemented a camera based vision system despite traditional vision systems playing a secondary role in the Urban Challenge. However, it is worth noting that although vehicles such as Junior use lasers to detect road markings, the process still makes use of intensity images. Therefore, the image processing techniques described in this thesis are still of relevance. The vision system implemented here operates at around 25Hz, which is faster than necessary. This is demonstrated by the neural network's vision/control cycle operating as low as 11Hz and the state-of-the-art road detection system employed by Boss operating at around 10Hz [8]. Furthermore, the average path generated by the system is sufficient for ~1.8s of driving, suggesting that frame rates less than 10Hz may still be capable of maintaining vehicle control.

Despite the overall success, each part of the system could be improved. Indeed, as it stands, the system represents the bare minimum functionality expected of an autonomous vehicle. The most important improvement would be to remedy the non-continuous path problem to reduce the lateral g-forces. Using a more sophisticated curve model such as cubic splines would be a vast improvement.

The project attempted to simulate LIDAR data and, to some extent, succeeded in this. However, the approach taken was a compromise between finding a useful data representation and a practical means of generating it. Initial fears regarding insufficient processing power were unfounded and, given a little more time, a ray-tracing approach could have been implemented. Such an approach would allow the sensor to be extended to detect dynamic objects in the environment rather than only static features. This would pave the way for researching interactions with other vehicles, something of particular importance in an urban environment.

The use of simple PI controllers to steer the vehicle also has its drawbacks. The technique is only capable of responding to deviations once they occur, i.e. the controller steers to minimise an existing error. A better approach, commonly used in the Urban Challenge, would be to implement a kinematic bicycle model (see section 2.5.2.3). Such a model would allow the motion of the vehicle to be predicted, allowing it to steer to prevent deviations from the desired path.
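For reference, the standard kinematic bicycle model mentioned above propagates position and heading from speed and steering angle. A minimal forward-Euler sketch is given below; the wheelbase and time step are assumed values, not taken from TORCS or the dissertation.

```cpp
#include <cmath>
#include <cstdio>

// Vehicle state for the standard kinematic bicycle model.
struct State { double x, y, heading, speed; };

// One forward-Euler step of:
//   x'     = v * cos(theta)
//   y'     = v * sin(theta)
//   theta' = (v / L) * tan(delta)
// The wheelbase L and the step size are illustrative assumptions.
State step(State s, double steerRad, double dt, double wheelbase = 2.6)
{
    s.x       += s.speed * std::cos(s.heading) * dt;
    s.y       += s.speed * std::sin(s.heading) * dt;
    s.heading += s.speed / wheelbase * std::tan(steerRad) * dt;
    return s;
}

int main()
{
    const double steer = 2.0 * 3.141592653589793 / 180.0;   // constant 2 degree steering angle
    State s{0.0, 0.0, 0.0, 20.0 / 3.6};                     // starting at 20 km/h
    for (int i = 0; i < 50; ++i) s = step(s, steer, 0.02);  // predict 1 s ahead
    std::printf("predicted pose after 1 s: x=%.2f m, y=%.2f m, heading=%.3f rad\n",
                s.x, s.y, s.heading);
    return 0;
}
```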
The use of a simulator was, of course, essential to this project. Given the low driving speeds used in the project, the dynamic capabilities of TORCS easily provided adequate realism in this respect. The fact that the roadside barriers caused false-positive marking detections in exactly the same manner described by Huang confirms that TORCS provides good visual realism. However, as the project focussed on a single track, the style and quality of the road markings were highly consistent. This lack of variety does not represent reality and is a weakness of the project. It could be addressed by using multiple tracks and, indeed, some informal testing was done on other tracks. The results of this were very encouraging, but there were problems when the lane markings differed significantly from those of the test track.

Although it was not the purpose of this project to compare the reactive approach with the deliberative approach, it is worth finishing by commenting on the difference between the two. Superficially, the two systems are equivalent in terms of functionality, with each being able to navigate the test track. The key difference is that the reactive approach operates as a black box, with the output being a direct function of the input, whereas the deliberative approach decouples perception and control. This allows the control system to operate at higher frequencies than the perception system and also facilitates functionality improvements. The deliberative approach is aware of the road layout and of the vehicle's position relative to the road. Consequently, functionality such as driving on the other side of the road is an almost trivial modification, simply requiring the trajectory points to be placed accordingly. By contrast, the neural network would require retraining to perform this task. However, that is not to suggest that the deliberative approach is inherently better than the reactive one. Indeed, as described in Chapter 2, state of the art vehicles such as Odin are combining the approaches in novel ways.

To conclude, the project successfully met its objective of producing a real-time simulation of an autonomous car and has demonstrated that low-cost simulators such as TORCS can be useful for research and development in the field.

Bibliography

[1] A. Broggi, A. Zelinsky, M. Parent, C. Thorpe: Intelligent Vehicles. In: Springer Handbook of Robotics, Springer, Chapter 51.
[2] G. Dudek, M. Jenkin: Inertial Sensors, GPS, and Odometry. In: Springer Handbook of Robotics, Springer, Chapter 20.
[3] B. Schwarz: LIDAR: Mapping the World in 3D. In: Nature Photonics, Volume 4, July 2010.
[4] DARPA: Urban Challenge Rules, http://www.darpa.mil/grandchallenge/rules.asp, October 27, 2007.
[5] E. Dickmanns, A. Zapp: Autonomous High Speed Road Vehicle Guidance by Computer Vision. In: Triennial World Congress of the International Federation of Automatic Control, Volume IV, July 1987.
[6] E. Dickmanns, R. Behringer, D. Dickmanns, T. Hildebrandt, M. Maurer, J. Schiehlen, F. Thomanek: The Seeing Passenger Car VaMoRs-P. In: Proceedings of the International Symposium on Intelligent Vehicles, Paris, France, 1994.
[7] D. Pomerleau: ALVINN: An Autonomous Land Vehicle in a Neural Network. In: Advances in Neural Information Processing Systems, Morgan Kaufmann, San Mateo, CA, 1989.
[8] C. Urmson, J. Anhalt, D. Bagnell et al.: Autonomous Driving in Urban Environments: Boss and the Urban Challenge. Springer Tracts in Advanced Robotics, Springer, Volume 56, 2009.
[9] S. Thrun, M. Montemerlo, J. Becker et al.: Junior: The Stanford Entry in the Urban Challenge. Springer Tracts in Advanced Robotics, Springer, Volume 56, 2009.
[10] C. Reinholtz, D. Hong, A. Wicks et al.: Odin: Team VictorTango's Entry in the DARPA Urban Challenge. Springer Tracts in Advanced Robotics, Springer, Volume 56, 2009.
[11] P. Currier, D. Hong, A. Wicks: VictorTango Architecture for Autonomous Navigation in the DARPA Urban Challenge. Journal of Aerospace Computing, Information, and Communication, Volume 5, 2008.
[12] S. Thrun, M. Montemerlo, H. Dahlkamp et al.: Stanley, the Robot That Won the DARPA Grand Challenge. Journal of Field Robotics, 23(9), 661-692, 2006.
[13] R. R. Murphy: Introduction to AI Robotics. The MIT Press, 2000.
[14] B. J. Pratz, Y. Papelis, R. Pillat, G. Stein, D. Harper: A Practical Approach to Robotic Design for the DARPA Urban Challenge. Springer Tracts in Advanced Robotics, Springer, Volume 56, 2009.
[15] D. Loiacono, J. Togelius, P. L. Lanzi, L. Kinnaird-Heether, S. M. Lucas, M. Simmerson, D. Perez, R. G. Reynolds, Y. Saez: The WCCI 2008 Simulated Car Racing Competition. IEEE Symposium on Computational Intelligence and Games, pp. 119-126, 15-18 Dec 2008.
[16] A. M. Muad, A. Hussain, S. A. Samad, M. M. Mustaffa, B. Y. Majlis: Implementation of Inverse Perspective Mapping Algorithm for the Development of an Automatic Lane Tracking System. TENCON 2004, 2004 IEEE Region 10 Conference, Vol. A, pp. 207-210, 21-24 Nov 2004.
[17] M. Aly: Real Time Detection of Lane Markers in Urban Streets. IEEE Intelligent Vehicles Symposium 2008, pp. 7-12, 4-6 June 2008.
[18] P. F. Felzenszwalb, D. P. Huttenlocher: Distance Transforms of Sampled Functions. Cornell Computing and Information Science TR2004-1963, 2004.
[19] A. S. Huang: Lane Estimation for Autonomous Vehicles using Vision and LIDAR. PhD Thesis, Massachusetts Institute of Technology, 2009.
[20] J. Leonard, J. How, S. Teller, M. Berger, S. Campbell, G. Fiore, L. Fletcher, E. Frazzoli, A. Huang, S. Karaman, O. Koch, Y. Kuwata, D. Moore, E. Olson, S. Peters, J. Teo, R. Truax, M. Walter, D. Barrett, A. Epstein, K. Maheloni, K. Moyer, T. Jones, R. Buckley, M. Antone, R. Galejs, S. Krishnamurthy, J. Williams: A Perception-Driven Autonomous Urban Vehicle. Journal of Field Robotics, Vol. 25, No. 10, pp. 727-774, 2008.
[21] M. A. Fischler, R. C. Bolles: Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Communications of the ACM, Vol. 24, pp. 381-395, June 1981.
[22] J. E. Bresenham: Algorithm for Computer Control of a Digital Plotter. IBM Systems Journal, Vol. 4, No. 1, pp. 25-30, 1965.
[23] E. Onieva, D. A. Pelta, J. Alonso, V. Milanes, J. Perez: A Modular Parametric Architecture for the TORCS Racing Engine. IEEE Symposium on Computational Intelligence and Games (CIG 2009), pp. 256-262, 7-10 Sept 2009.
[24] S. Russell, P. Norvig: Artificial Intelligence: A Modern Approach. Prentice Hall, 1995, p. 578.
[25] P. Currier: Development of an Automotive Ground Vehicle Platform for Autonomous Urban Operations. Master's thesis, Virginia Polytechnic Institute and State University, Blacksburg, VA, 2008.
[26] Wikipedia: Ziegler-Nichols Method, http://en.wikipedia.org/wiki/ZieglerNichols_method, 14 August 2011.
[27] Vehicle Stopping Distance Calculator, http://www.csgnetwork.com/stopdistcalc.html, 14 August 2011.
[28] GNU Scientific Library, http://www.gnu.org/s/gsl/, 14 August 2011.
[29] TORCS Source Code, http://sourceforge.net/projects/torcs/, 14 August 2011.
