
Dynamic Vision for Perception and Control of Motion - Ernst D. Dickmanns, Part 5


[...] control in an active vision system. By extending these types of explicit representations to all processes for perception, decision-making, and mission planning as well as mission performance and monitoring, a very flexible overall system will result. These aspects have been discussed here to motivate the need both for smooth parts of mission performance, with nice continuity conditions alleviating perception, and for sudden changes in behavior, where sticking to the previous mode would lead to failure (or possibly disaster).

Efficient dynamic vision systems have to take advantage of continuity conditions as long as they prevail; however, they always have to watch out for discontinuities in motion, both of the subject's body and of other objects observed, to be able to adjust readily. For example, a vehicle following the rightmost lane on a road can be tracked efficiently using a simple motion model. However, when an obstacle suddenly appears in this lane, for example, a ball or an animal running onto the road, there may be a harsh reaction to one side. At this moment, a new motion phase begins, and it cannot be expected that the filter tuning for optimal tracking remains the same. The vision process for tracking (similar to the bouncing-ball example in Section 2.3.2) thus has two distinct phases, which should be handled in parallel.

3.4.6.1 Smooth Evolution of a Trajectory

Continuity models and low-pass filtering components make it easy to track phases of a dynamic process in an environment without special events. Measurement values with high-frequency oscillations are attributed to noise, which has to be eliminated in the interpretation process. The natural sciences and engineering have compiled a wealth of models for different domains. The methods described in this book have proven to be well suited for handling these cases on networks of roads. However, in road traffic environments, continuity is interrupted every now and then by the initiation of new behavioral components by subjects, and perhaps by the weather.

3.4.6.2 Sudden Changes and Discontinuities

Parameter settings that are optimal for smooth pursuit lead to unsatisfactory tracking performance in cases of sudden change. The onset of a harsh braking maneuver of a car or a sudden turn may lead to loss of tracking, or at least to a strong transient in the estimated motion, especially if delay times in the visual perception process are large. If the onsets of these discontinuities could be predicted well, switching the model or the tracking parameters at the right time would yield much better results. The example of a bouncing ball has already been mentioned.

In road traffic, the compulsory introduction of braking (stop) lights serves the same purpose of indicating a sudden change in the underlying behavioral mode (deceleration). Braking lights have to be detected by vision for defensive driving; this event has to trigger a new motion model for the car being observed. The level of braking is not yet indicated by the intensity of the braking lights. Some studies are under way for the new LED braking lights to couple the number of LEDs lighting up to the level of braking applied; this could help find the right deceleration magnitude for the hypothesis of the observed braking vehicle and thus reduce transients.
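A minimal sketch of such event-triggered retuning, assuming a one-dimensional constant-velocity Kalman filter that tracks the range to a lead vehicle; the class layout, noise levels, and the -4 m/s² deceleration hypothesis are illustrative assumptions, not values from the book:

```python
import numpy as np

class LongitudinalTracker:
    """Tracks [range, range rate] of a lead vehicle at 25 Hz.
    A detected brake light switches the process model to a braking
    hypothesis with inflated process noise (illustrative values)."""

    def __init__(self, dt=0.04):                      # 40 ms video cycle
        self.dt = dt
        self.x = np.array([30.0, 0.0])                # range (m), range rate (m/s)
        self.P = np.diag([4.0, 4.0])
        self.F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity model
        self.H = np.array([[1.0, 0.0]])               # vision measures range only
        self.R = np.array([[0.25]])                   # measurement noise (m^2)
        self.q_smooth, self.q_braking = 0.1, 5.0      # process noise levels
        self.q, self.a_hyp = self.q_smooth, 0.0

    def on_brake_light(self, decel=-4.0):
        """Event from image processing: switch to the braking mode."""
        self.q, self.a_hyp = self.q_braking, decel

    def predict(self):
        dt = self.dt
        Q = self.q * np.array([[dt**4 / 4, dt**3 / 2],
                               [dt**3 / 2, dt**2]])   # white-acceleration noise
        u = self.a_hyp * np.array([0.5 * dt**2, dt])  # deceleration hypothesis
        self.x = self.F @ self.x + u
        self.P = self.F @ self.P @ self.F.T + Q

    def update(self, z):
        y = z - float(self.H @ self.x)                # innovation
        S = float(self.H @ self.P @ self.H.T) + self.R[0, 0]
        K = (self.P @ self.H.T).ravel() / S           # Kalman gain
        self.x = self.x + K * y
        self.P = (np.eye(2) - np.outer(K, self.H.ravel())) @ self.P
```

Keeping the smooth and the braking filter running in parallel, as suggested above for the bouncing ball, would amount to instantiating both modes and selecting the one with the smaller innovations.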
Sudden onsets of lateral maneuvers are supposed to be preceded by warning lights blinking on the proper side. However, the reliability of behavior according to this convention is rather low in many parts of the world. As a general scheme in vision, it can be concluded that piecewise smooth sections and local discontinuities have to be recognized and treated with proper methods, both in the 2-D image plane (object boundaries) and on the time line (events).

3.4.6.3 A Capability Network for Locomotion

The capability network shows how more complex behaviors depend on more basic ones, and finally on the actuators available. The timing (temporal sequencing) of their activation has to be learned by testing and corresponding feedback of errors occurring in the real world. Figure 3.28 shows the capability network for locomotion of a wheeled ground vehicle. Note that some of the parameters determining the trigger point for activation depend on visual perception and on other measurement values. The challenges of system integration will be discussed in later chapters, after the aspects of knowledge representation have been treated.

Figure 3.28. Network of behavioral capabilities of a road vehicle: Longitudinal and lateral control are fully separated only on the hardware level, with three actuators (throttle, brakes, and steering rate λ-dot); many basic skills are realized by diverse parameterized feed-forward and feedback control schemes. On the upper level, abstract schematic capabilities as triggered from "central decision" are shown [Maurer 2000; Siedersberger 2004]. (Figure labels include skills such as keep speed, accelerate, decelerate, keep distance, constant steering angle, turn λ to zero, turn λ to λ_com, drive circular arc, keep lane, keep course; and schematic capabilities such as halt, stand still, approach, stop in front of obstacle, avoid obstacle, road running, waypoint navigation, drive at distance y along a guide line, turn to heading.)
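A sketch of how such a dependency network might be represented in software; the node names follow Figure 3.28, while the class layout and the availability test are illustrative assumptions, not the book's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Capability:
    """Node in a capability network: a behavior plus the more basic
    capabilities (ultimately actuators) it depends on."""
    name: str
    level: str                                   # 'actuator' | 'skill' | 'schematic'
    depends_on: list["Capability"] = field(default_factory=list)
    available: bool = True

    def activatable(self) -> bool:
        # A capability can be triggered only if it and its whole
        # substructure down to the actuators are available.
        return self.available and all(d.activatable() for d in self.depends_on)

# Fragment of the longitudinal branch of Figure 3.28:
throttle = Capability("throttle", "actuator")
brakes = Capability("brakes", "actuator")
keep_speed = Capability("keep speed", "skill", [throttle])
decelerate = Capability("decelerate", "skill", [throttle, brakes])
keep_distance = Capability("keep distance", "skill", [throttle, brakes])
approach = Capability("approach", "schematic", [keep_distance, decelerate])
halt = Capability("halt", "schematic", [decelerate])

brakes.available = False          # e.g., a reported actuator failure
print(approach.activatable())     # -> False: 'approach' needs the brakes
```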
3.5 Situation Assessment and Decision-Making

Subjects differ from objects (proper) in that they have perceptual impressions from the environment and the capability of decision-making with respect to their control options. For subjects, a control term appears in the differential-equation constraints on their motion, which allows them to influence this motion; this makes subjects basically different from objects.

If decisions on control selection are not implicitly given in the code implementing subject behavior, but may be made according to some explicit goal criteria, something like free will appears in the behavior decision process of the subject. Because of these fundamentally new properties, subjects require separate methods for knowledge representation and for combining this knowledge with actual perception to achieve their goals in an optimal fashion (however defined). The collection of all facts of relevance for decision-making is called the situation. Assessing it is especially difficult if other subjects, who may also behave at will to achieve their goals, form part of this process; their behaviors are usually unknown but may sometimes be guessed by reasoning as for one's own decision-making.

Some expectations for the future behavior of other subjects can be derived from trying to understand the situation as it might look to oneself when put in the position of the other subject. At the moment, this is beyond the state of the art of autonomous systems. But the methods under development for the subject's own decision-making will open up this avenue. In the long run, the capability of assessing the situation of other subjects may be a decisive factor in the development of really intelligent systems. Subjects may group together, striving for common goals; this interesting field of group behavior under real-world constraints lies even further in the future than individual behavior. But there is no doubt that the methods will become available in the long run.

3.6 Growth Potential of the Concept, Outlook

The concept of subjects characterized by their capabilities in sensory perception, in data processing (drawing on large knowledge bases for object/subject recognition and situation assessment), in decision-making and planning, as well as in behavior generation is very general. Through an explicit representation of these capabilities, avenues may open up for developing autonomous agents with new mental capabilities of learning and of cooperation in teams. In preparation for this long-term goal, representing humans with all their diverse capabilities in this framework should be a good exercise. This is especially valuable for mixed teams of humans and autonomous vehicles, as well as for generating intelligent behavior of these vehicles in environments abounding with human activity, which will be the standard case in traffic situations.

In road traffic, other subjects frequently encountered (at least in rural environments) beside humans are four-legged animals of different sizes: horses, cattle, sheep, goats, deer, dogs, cats, etc.; birds and poultry are two-legged animals, many of which are able to fly.

Because of the eminent importance of humans and four-legged animals in any kind of road traffic, autonomous vehicles should in the long run be able to understand the motion capabilities of these living beings. This is out in the future right now; the final section of this chapter shows an approach and first results, developed in the early 1990s, for recognition of humans. This field has seen many activities since the early work of Hogg (1984) and has grown into a special area of technical vision; two recent papers with application to road traffic are [Bertozzi et al. 2004; Franke et al. 2005].

3.6.1 Simple Model of Human Body as Traffic Participant

Elaborate models for the motion capabilities of human bodies are available in different disciplines of physiology, sports, and computer animation [Alexander 1984; Bruderlin, Calvert 1989; Kroemer 1988]. Humans as traffic participants, with the behavioral modes of walking, running, riding bicycles or motorbikes, as well as modes for transmitting information by waving their arms (possibly with additional instruments), show a much reduced set of stereotypical movements. Kinzel (1994a, b) therefore selected the articulated body model shown in Figure 3.29 to represent humans in traffic activities in connection with the 4-D approach to dynamic vision. Visual recognition of moving humans becomes especially difficult due to the vast variety of clothing encountered and of objects carried. For normal Western-style clothing, the cyclic activities of the extremities are characteristic of humans in motion. Motion of the limbs should be separated from body motion, since they usually behave in different modes and at different eigenfrequencies.
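As an illustration of this separation by eigenfrequency, the sketch below splits a simulated knee-angle signal into a slowly varying body component and a cyclic limb component and estimates the gait frequency; the signal model and all parameter values are invented for the example, not Kinzel's data:

```python
import numpy as np

dt, T = 0.04, 8.0                         # 25 Hz video rate, 8 s observation
t = np.arange(0.0, T, dt)

# Invented signal: slow body motion plus ~1.5 Hz cyclic knee motion
body = 5.0 * np.sin(2 * np.pi * 0.1 * t)            # slow trend (deg)
limb = 50.0 * np.sin(2 * np.pi * 1.5 * t)           # gait cycle (deg)
knee_angle = body + limb + np.random.normal(0, 2.0, t.size)

# Separate the modes with a moving-average low-pass filter (~1 s window)
win = int(1.0 / dt)
kernel = np.ones(win) / win
body_est = np.convolve(knee_angle, kernel, mode="same")
limb_est = knee_angle - body_est

# Dominant eigenfrequency of the cyclic limb component via FFT
spectrum = np.abs(np.fft.rfft(limb_est * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, dt)
print(f"estimated gait frequency: {freqs[spectrum.argmax()]:.2f} Hz")
```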
Figure 3.29. Simple generic model for human shape with 22 degrees of freedom, after [Kinzel 1994]. (Figure labels: body segments include head with neck, upper and lower torso, upper arms, lower arms, hands, upper legs, lower legs, and feet; joints include shoulders, elbows, waist, hips, knees, and neck.)

Limbs tend to be used in typical cyclic motion, while the body moves more steadily. The rotational movements of limbs may be in the same or in opposite directions, depending on the style and the phase of grasping or running. Figure 3.30 shows early results achieved with the lower part of the body model from Figure 3.29: cyclic motion of the upper leg (hip angle, amplitude ≈ 60°, upper graph) and of the lower leg (knee angle, amplitude ≈ 100°, bottom graph) has been recognized roughly in a computer simulation with real-time image sequence evaluation and tracking.

Figure 3.30. Quantitative recognition of motion parameters of a human leg while running: simulation with real image sequence processing, after [Kinzel 1994].

At that time, microprocessor resources were not sufficient to do this onboard a car in real time (at least a factor of 5 was missing). In the meantime, computing power has increased by more than two orders of magnitude per processor, and human gesture recognition has attracted quite a bit of attention. Also, the widespread activities in computer animation and with humanoid robots, especially the demanding challenge of the humanoid RoboCup league, have lately advanced this field considerably.

From the last-mentioned field, and from the analysis of sports as well as dancing activities, there will be pressure toward automatically recognizing human(-oid) motion. This field can be considered to be developing on its own; application within semi-autonomous road or autonomous ground vehicles will be more or less a side product. The knowledge base for these application areas of ground vehicles has to be developed as a specific effort, however. In the case of construction sites or accident areas with human traffic regulation, future (semi-)autonomous vehicles should also have the capability of properly understanding regulatory arm gestures and of behaving properly in these unusual situations. Recognizing grown-up people and children wearing various clothing and riding bicycles or carrying bulky loads will remain a challenging task.

3.6.2 Ground Animals and Birds

Beside humans, two superclasses of other animals play a role in rural traffic: four-legged animals of various sizes and with various styles of running, and birds (from crows, hens, geese, and turkeys to ostriches), most of which can fly and run or hop on the ground. This wide field of subjects has hardly been touched for technical vision systems. In principle, there is no basic challenge for successful application of the 4-D approach. In practice, however, a huge volume of work lies ahead until technical vision systems will perceive animals reliably.

4 Application Domains, Missions, and Situations

In the previous chapters, the basic tools have been treated for representing objects and subjects with homogeneous coordinates in a framework of the real 3-D world and with spatiotemporal models for their motion.
Their application in combination with procedural computing methods will be the subject of Chapters 5 and 6. The result will be an estimated state of single objects/subjects for the point "here and now" during the visual observation process. These methods can be applied multiple times in parallel to n objects in different image regions representing different spatial angles of the world around the set of cameras.

Vision is not supposed to be a separate exercise of its own but to serve some purpose in a task or mission context of an acting individual (subject). For deeper understanding of what is being seen and perceived, the goals of egomotion and of other moving subjects, as well as the future trajectories of objects tracked, should be known, at least vaguely. Since there is usually no information exchange between oneself and other subjects, their future behavior can only be hypothesized based on the given situation and the behavioral capabilities of the subjects observed. However, out of the set of all objects and subjects perceived in parallel, generally only a few are of direct relevance to one's own plans of locomotion.

To be efficient in perceiving the environment, special attention, and thus perceptual resources and computing power for understanding, should be concentrated on the most important objects/subjects. The knowledge needed for this decision is quite different from that needed for visual object and state recognition. The decision has to take into account the mission plan and the likely behavior of other subjects nearby, as well as the general environmental conditions (such as the quality of visual perception, the weather conditions and likely friction coefficient for maneuvering, and the surface structure). In addition, the sets of rules for traffic regulation valid in the part of the world where the vehicle is operating have to be taken into account.
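A toy sketch of such attention focusing, ranking tracked objects by an illustrative relevance score built from time-to-collision and lateral offset to the planned path; the score weights and object fields are assumptions made for the example:

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    ident: str
    range_m: float          # distance along own path (m)
    closing_speed: float    # m/s, > 0 means approaching
    lateral_offset: float   # m from the planned trajectory

def relevance(obj: TrackedObject) -> float:
    """Higher score = more perceptual resources deserved (toy heuristic)."""
    ttc = obj.range_m / obj.closing_speed if obj.closing_speed > 0 else float("inf")
    ttc_term = 1.0 / max(ttc, 0.5)                     # urgency from time-to-collision
    path_term = 1.0 / (1.0 + abs(obj.lateral_offset))  # proximity to planned path
    return 2.0 * ttc_term + path_term                  # weights are arbitrary

tracks = [
    TrackedObject("oncoming car", 120.0, 50.0, -3.5),
    TrackedObject("lead car", 35.0, 2.0, 0.0),
    TrackedObject("parked truck", 60.0, 25.0, 6.0),
]
# Concentrate high-resolution tracking on the top-ranked objects
for obj in sorted(tracks, key=relevance, reverse=True):
    print(f"{obj.ident:14s} relevance = {relevance(obj):.2f}")
```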
4.1 Structuring of Application Domains

To survey where the small regime on which the rest of the book concentrates fits into the overall picture, first (contributions to) a loosely defined ontology for ground vehicles will be given. Appendix A shows a structured proposal which, of course, is only one of many possible approaches. Here, only some aspects of certain missions and application domains are discussed to motivate the items selected for presentation in this book. An all-encompassing and complete ontology for ground vehicles would be desirable but has not yet been assembled.

Of the general environmental conditions grouped under A.1, up to now only a few have been perceived explicitly by sensing, relying on the human operator to take care of the rest. More autonomous systems have to have perceptual capabilities and knowledge bases available to recognize more of them by themselves. Contrary to humans, intelligent vehicles will have much more extensive access to satellite navigation (such as GPS now, or Galileo in the future). In combination with digital maps and geodetic information systems, this will allow them improved mission planning and global orientation.

Obstacle detection, both on roads and in cross-country driving, has to be performed by local perception, since temporal changes are in general too fast to be reliably represented in databases; this will presumably remain so in the future. In cross-country driving, beside the vertical surface profiles in the planned tracks for the wheels, the support qualities of the ground for wheels and tracks also have to be estimated from visual appearance. This is a very difficult task, and decisions should always be on the safe side (avoid entering uncertain regions).

Representing national traffic rules and regulations (Appendix A.1.1) is a straightforward task; their ranges of validity (national boundaries) have to be stored in the corresponding databases. One of the most important facts is the general rule of right- or left-hand traffic. Only a few traffic signs like stop and one-way are globally valid. With speed signs (usually a number on a white field in a red circle), the corresponding unit has to be inferred from the country one is in (km/h in continental Europe, mph in the United Kingdom or the United States, etc.).

Lighting conditions (Appendix A.1.2) affect visual perception directly. The dynamic range of light intensity in bright sunshine with snow and harsh shadows on dark ground can be extremely large (more than six orders of magnitude may be encountered). Special high-dynamic-range cameras (HDRC) have been developed to cope with this situation. The development is still going on, and one has to find the right compromise in the price-performance trade-off. To perceive the actual situation correctly, representing the recent time history of lighting conditions and of potential disturbances from the environment may help. Weather conditions (e.g., blue skies) and time of day, in connection with the set of buildings in the vicinity of the planned trajectory (tunnel, underpass, tall houses, etc.), may allow us to estimate expected changes, which can be counteracted by adjusting camera parameters or viewing directions. The most pleasant weather condition for vision is an overcast sky without precipitation.

In normal visibility, contrasts in the scene are usually good. Under foggy conditions, contrasts tend to disappear with increasing distance. The same is true at dusk or dawn, when the light intensity level is low. Features linked to intensity gradients tend to become unreliable under these conditions. To better understand results in state estimation of other objects from image sequences (Chapters 5 and 6), it is therefore advantageous to monitor average image intensities as well as maximal and minimal intensity gradients. This may be done over entire images, but computing these characteristic values in parallel for certain image regions (such as the sky or larger shaded regions) gives more precise results.

It is recommended to keep a steady representation of intensity statistics and their trends in the image sequence available: averages and variances of maximum and minimum image intensities and of maximum and minimum intensity gradients in representative regions. When surfaces are wet and the sun comes out, light reflections may lead to highlights. Water surfaces (like puddles) rippled by wind may exhibit relatively large glaring regions, which have to be excluded from image interpretation for meaningful results. Driving toward a low-standing sun under these conditions can make vision impossible. When there are multiple light sources, as at night in an urban area, regions with stable visual features have to be found that allow tracking and orientation while avoiding highlighted regions.

Headlights of other vehicles may also become hard to deal with in rainy conditions. Backlights and stoplights when braking are relatively easy to handle but require color cameras for proper recognition. In RGB color representation, stop lights are most efficiently found in the R-image, while the flashing blue lights of ambulances or police cars are most easily detected in the B-channel. Yellow or orange lights for signaling intentions (turn direction indicators) require evaluation of several RGB channels or just the intensity signal. Stationary flashing lights at construction sites (light sequencing, looking like a hopping light) for indicating an unusual traffic direction require good temporal resolution and correlation with subject-vehicle perturbations to be perceived correctly.
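A minimal sketch of this channel-based search, assuming 8-bit RGB frames as numpy arrays; the thresholds and the channel-dominance tests are illustrative values, not from the book:

```python
import numpy as np

def stop_light_candidates(rgb: np.ndarray) -> np.ndarray:
    """Boolean mask of pixels that may belong to braking lights.

    rgb: (H, W, 3) uint8 image. Searches the R-channel for bright,
    red-dominant pixels (illustrative thresholds)."""
    r = rgb[..., 0].astype(np.int16)
    g = rgb[..., 1].astype(np.int16)
    b = rgb[..., 2].astype(np.int16)
    return (r > 180) & (r - g > 60) & (r - b > 60)

def blue_light_candidates(rgb: np.ndarray) -> np.ndarray:
    """Analogous search in the B-channel for flashing blue lights."""
    r = rgb[..., 0].astype(np.int16)
    b = rgb[..., 2].astype(np.int16)
    return (b > 180) & (b - r > 60)

# A flashing blue light would additionally be confirmed by checking
# the on/off period of the candidate mask over consecutive frames.
```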
Recognition of weather conditions (Appendix A.1.3) is especially important when they affect the interaction of the vehicle with the ground (acceleration and deceleration through friction between tires and surface material). Recognizing rain, hail, and snow conditions, and adjusting behavior to them, may prevent accidents through cautious driving. Slush and loose or wet dirt or gravel on the road may have similar effects and should thus be recognized. Heavy winds and gusts can have a direct effect on driving stability; however, they are not directly visible, only through secondary effects like dust or leaves whirling up, or moving grass surfaces, plants, or branches of trees. Advanced vision systems should be able to perceive these weather conditions (maybe supported by inertial sensors directly feeling the accelerations on the body). Recognizing fine shades of texture may be a capability needed for achieving this; at present, this is beyond the performance level of microprocessors available at low cost, but the next decade may open up this avenue.

Roadway recognition (Appendix A.2) has been developed to a reasonable state since recursive estimation techniques and differential-geometry descriptions were introduced two decades ago. For freeways and other well-kept, high-speed roads (Appendices A.2.1 and A.2.2), lane and road recognition can be considered state of the art. Additional developments are still required for surface state recognition, for understanding the semantics of lane markings, arrows, and other lines painted on the road, and for detailed perception of the infrastructure along the road. This concerns repeating poles with different reflecting lights on both sides of the roadway, whose meaning may differ from one country to the next, guiderails on road shoulders, and the many different kinds of traffic and navigation signs, which have to be distinguished from advertisements. On these types of roads there is usually only unidirectional (one-way) traffic, and navigation has to be done by proper lane selection.

On ordinary state roads with two-way traffic (Appendix A.2.3), the perceptual capabilities required are much more demanding. Checking free lanes for passing has to take into account oncoming traffic, with high speed differences between vehicles, and the type of central lane markings. With speeds of up to 100 km/h allowed in each direction, the relative speed can be close to 60 m/s (or 2.4 m per video cycle of 40 ms). A 4-second passing maneuver thus requires about a 250 m look-ahead range, way beyond what is found in most of today's vision systems. With the resolution required for object recognition and the perturbation level in pitch due to nonflat ground, inertial stabilization of the gaze direction seems mandatory.
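The look-ahead figure follows directly from the numbers given; a quick check (the 4 s maneuver duration is the value assumed in the text):

```python
v_rel = 2 * 100 / 3.6          # two vehicles at 100 km/h each -> 55.6 m/s (~60 m/s)
per_cycle = 60.0 * 0.040       # distance closed per 40 ms video cycle -> 2.4 m
look_ahead = 60.0 * 4.0        # 4 s passing maneuver -> 240 m, i.e., about 250 m
print(f"{v_rel:.1f} m/s, {per_cycle:.1f} m per cycle, {look_ahead:.0f} m look-ahead")
```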
These types of roads may be much less well kept. Lane markings may be reduced to a central line indicating by its type whether passing is allowed (dashed line) or not (solid line). To the sides of the road there may be potholes to be avoided; sometimes these may be found even on the road itself.

On all of these types of road, for short periods after (re-)construction, there may be no lane markings at all. In these cases, vehicles and drivers have to orient themselves by the road width and by the distance from "their" side of the sealed surface. "Migrating construction sites," such as for lane marking, may be present and have to be dealt with properly. The same is true for maintenance work or for grass cutting in the summer.

Unmarked country roads (Appendix A.2.4) are usually narrow, and oncoming traffic may require slowing down and touching the road shoulders with the outer wheels. The road surface may not be well kept, with patches of dirt and high-spatial-frequency surface perturbations. The most demanding item, however, may be the many different kinds of subjects on the road: people and children walking, running, and bicycling, carrying different types of loads, or guarding animals. Wild animals range from hares to deer (even moose in northern countries) and birds feeding on carcasses.

On unsealed roads (Appendix A.2.5), where the speed driven is usually much slower, in addition to the items mentioned above, the vertical surface structure becomes of increasing interest due to its unstable nature. Tracks impressed into the surface by heavily loaded vehicles can easily develop, and the likelihood of potholes (even large ones into which wheels of usual size will fit) requires stereovision for recognition, probably with sequential view fixation on especially interesting areas.

Driving cross-country, tracks (Appendix A.2.6) can alleviate the task in that they show where the ground is sufficiently solid to support a vehicle. However, due to nonhomogeneous ground properties, vertical curvature profiles of high spatial frequency may have developed; these have to be recognized to adjust speed so that the vehicle is not bounced around, losing ground contact. After a period of rain, when the surface tends to be softer than usual, it has to be checked whether the tracks are so deep that the vehicle would touch the ground with its body when the wheels sink into the track. Especially tracks filled with water pose a difficult challenge for decision-making.

In Appendix A.2.7, the infrastructure items for all types of roads are collected to show the gamut of figures and objects which a powerful vision system for traffic application should be able to recognize. Some of these are, of course, specific to certain regions of the world (or countries). There have to be corresponding data [...]

[...] has not allowed applying this in real time onboard vehicles; the next decade should allow tackling this task for better and more robust scene understanding.

5.1.2 Fields of View, Multi-focal Vision, and Scales

In dealing with real-world tasks of surveillance and motion control, very often coverage of the environment with a large field of view is needed only nearby.
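The trade-off behind multi-focal vision can be made concrete with pinhole-camera arithmetic; the numbers below (a 3.5 m lane width, and the 750-pixel focal length used in Figure 5.4 below) are chosen for illustration:

```python
import math

f_pel = 750.0                        # focal length in pixels (as in Figure 5.4)
lane = 3.5                           # lane width in meters (illustrative)

for dist in (6.0, 20.0, 100.0):
    width_pel = f_pel * lane / dist                   # image width of the lane
    angle_deg = 2 * math.degrees(math.atan(lane / (2 * dist)))
    print(f"L = {dist:5.1f} m: lane ~ {width_pel:5.0f} pel wide, {angle_deg:4.1f} deg")
# Nearby objects need a wide field of view; distant ones need resolution.
# Hence wide-angle and tele cameras are combined in multi-focal vision.
```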
For a vehicle driving at finite speed, [...]

Figure 5.1. Structured knowledge base for three stages of visual dynamic scene understanding in expectation-based, multi-focal, saccadic (EMS) vision. (Recoverable block labels from the figure: behavior decision for gaze and attention; optimization of gaze direction and sequencing; gaze control; time histories of state variables for objects/subjects of special interest; scene tree representation of all objects tracked (homogeneous coordinate transformations); dynamic object database (DOB, distributed system-wide, time-stamped); top-down feature extraction in specified fields of view.)

The results of all of these single-object recognition processes have to be presented to the situation assessment level in unified form, so that relative motion between objects and movements of subjects can be appreciated [...]

[...] up in a decreasing number of rows with distance.

Figure 5.4. Mapping of a horizontal slice at distance L/H (from Zu at L/H - 0.5 to Zo at L/H + 0.5) into the image plane; focal length f = 750 pixel, optical axis horizontal, camera elevation H:

L/H:        4      5      7     10     20     30
Zo / pel: 167    136    100     71   36.6   24.6
Zu / pel: 214    167    115     79   38.5   25.4
ΔZ / pel:  47     31     15      8    1.9    0.8

Confining regional [...]
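The table follows from the pinhole relation z = f·H/L for a camera at height H with horizontal optical axis, where z is the image row below the horizon; a quick reconstruction for a ground slice one camera-height deep:

```python
f = 750.0                                  # focal length in pixels (Figure 5.4)

print(" L/H    Zo/pel   Zu/pel   dZ/pel")
for l_over_h in (4, 5, 7, 10, 20, 30):
    z_far = f / (l_over_h + 0.5)           # far edge of the slice (Zo)
    z_near = f / (l_over_h - 0.5)          # near edge of the slice (Zu)
    print(f"{l_over_h:4d} {z_far:8.1f} {z_near:8.1f} {z_near - z_far:8.1f}")
# Matches Figure 5.4: e.g., L/H = 4 -> Zo ≈ 167, Zu ≈ 214, ΔZ ≈ 47 pixels;
# at L/H = 30 a slice one camera-height deep covers less than one image row.
```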
[...] characteristics of their limbs and wings will change to a large extent.

4.3.3 Rule Systems for Decision-Making

Perception systems for driver assistance or for autonomous vehicle guidance will need very similar sets of rules for the perception part (maybe specialized to some task of special interest). Once sufficient computing power for visual scene analysis and understanding is affordable, the information contained in the image streams anyway can be fully exploited, since both kinds of application will gain from deeper understanding of the motion processes observed. This tends to favor three separate rule bases in a modular system: the first one, for perception (control of gaze direction and attention), has to be available for both types of systems. In addition, [...]

[...] representations of perceptual and behavioral capabilities of subjects are a precondition for this performance level. Tables 3.1 and 3.3 list the most essential capabilities and behavioral modes needed for road traffic participants. Based on data in the ring buffer of the DOB for each subject observed, this background knowledge now allows guessing the intentions of the other subject. This qualitatively new information [...]

[...] information on the object recognized. This, in turn, may allow much more efficient recognition and visual tracking of objects by attention focusing over time and in image regions of special interest (window concept). According to these considerations, the rest of Chapter 5 will be organized as follows: In Section 5.1.2, proper scaling of fields of view in multi-focal vision and in selecting scales for [...]

[...] http://iris.usc.edu/Vision-Notes/bibliography/contents.html; detection and analysis of edges and lines (Chapter 5 there); 2-D feature analysis, extraction, and representations (Chapter 7); Chapter 3 there gives a survey on books (3.2), collections, overviews, and surveys. [...]

5.2.1 Generic Types of Edge Extraction Templates

A substantial reason for the efficiency of the methods developed at UniBwM for edge feature [...]

[...] out of range for machine vision. However, detecting and recognizing moving volumes (partially) filled with massive bodies is in the making and will become available soon for real-time application. Avoiding these areas with a relatively large safety margin may be sufficient for driver assistance and even for autonomous driving. Some nice results for assistance in recognizing humans crossing in front of [...]
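As a generic illustration of the edge-extraction templates named in Section 5.2.1 above, the sketch below correlates an image row with an oriented ternary mask of +1/0/-1 elements; this common mask form and all values here are assumptions, not necessarily the specific UniBwM templates:

```python
import numpy as np

# Ternary edge template: responds to a dark-to-bright transition.
# Width and layout of the mask are illustrative.
mask = np.array([-1, -1, 0, 1, 1], dtype=float)

def edge_positions(row: np.ndarray, threshold: float = 40.0) -> np.ndarray:
    """Return column indices where the mask response exceeds a threshold.

    row: 1-D array of gray values along a search path in the image."""
    response = np.correlate(row.astype(float), mask, mode="valid")
    # Keep all responses above threshold as edge candidates
    # (a real system would select local maxima of the response).
    idx = np.flatnonzero(response > threshold)
    return idx + len(mask) // 2          # center the mask position

# Example: a synthetic row with one step edge at column 10
row = np.concatenate([np.full(10, 50.0), np.full(10, 150.0)])
print(edge_positions(row))              # columns near the step
```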
