
Dynamic Vision for Perception and Control of Motion - Ernst D. Dickmanns Part 16 pps




DOCUMENT INFORMATION

Basic information

Number of pages: 30
File size: 446.88 KB

Content

14.6 Experimental Results of Mission Performance

…feature extraction is suppressed during saccadic motion. [In this case, the saccade was performed rather slowly and lighting conditions were excellent, so that almost no motion blur occurred in the image (small shutter times), and feature extraction could well have been done.] The white curve at the left side of the road indicates that the internal model fits reality well.

The sequence of saccades performed during the approach to the crossing can be seen from the sequence of graphs in Figure 14.14 (a) and (b): The saccades are started at time ≈ 91 s; at this time, the crossroad hypothesis has been inserted into the scene tree by mission control, which expects it from coarse navigation data (object ID for the crossroad was 2358; subfigure (e)). At that time, it had not yet been visually detected. Gaze control computed visibility ranges for the crossroad [see graphs (g) and (h)], in addition to those for the road driven [graphs (i) and (j), lower right]. Since these visibility ranges do not overlap, saccades were started. Eleven saccades are made within 20 s (till time 111).

The "saccade bit" (b) signals to the rest of the system that all processes should not use images while it is "1"; so they continue their operation based only on predictions with the dynamic models and the last best estimates of the state variables. Which objects receive attention can be seen from graph (e), bottom left: Initially, it is only the road driven; the wide-angle cameras look in the near range (local, object ID = 2355) and the telecamera in the far range (distant, ID number 2356). When the object crossroad is inserted into the scene tree (ID number 2358) with unknown parameters width and angle (but with default values to be iterated), determination of their precise values and of the distance to the intersection is the goal of performing saccades.

At around t = 103 s, the distance to the crossroad starts being published in the DOB [graph (f), top right]. During the period of performing saccades (91–111 s), the decision process for gaze control, BDGA, continuously determines "best viewing ranges" (VR) for all objects of interest [graphs (g) to (j), lower right in Figure 14.14]. Figure 14.14 (g) and (h) indicate under which pan (platform yaw) angles the crossroad can be seen [(g) for optimal, (h) for still acceptable mapping]. Graph (i) shows the allowable range of gaze directions so that the road being driven can be seen in the far look-ahead range (+2° to −4°), while (j) does the same for the wide-angle cameras (±40°). During the approach to the intersection, the amplitude of the saccades increases from 10 to 60° [Figure 14.14 (a), (g), (h)].
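The saccade bit described above simply gates the measurement update of every tracking process: while it is set, states are propagated with the dynamic models only. The following is a minimal sketch of that gating logic, assuming a generic one-dimensional predictor/corrector (an alpha-beta filter); the names `Tracker`, `predict`, and `correct` are illustrative and not the actual EMS-vision interfaces.

```python
# Sketch: gating image-based corrections with a "saccade bit".
# Assumption: a 1-D constant-velocity state (position, velocity) and an
# alpha-beta filter as a stand-in for the real recursive estimator.

class Tracker:
    def __init__(self, pos=0.0, vel=0.0, alpha=0.85, beta=0.005):
        self.pos, self.vel = pos, vel        # last best state estimate
        self.alpha, self.beta = alpha, beta  # correction gains

    def predict(self, dt):
        """Propagate the state with the dynamic model only."""
        self.pos += self.vel * dt

    def correct(self, measured_pos, dt):
        """Blend the prediction with an image-based measurement."""
        residual = measured_pos - self.pos   # prediction error
        self.pos += self.alpha * residual
        self.vel += (self.beta / dt) * residual

def track_cycle(tracker, dt, saccade_bit, measurement=None):
    tracker.predict(dt)
    if saccade_bit or measurement is None:
        return tracker.pos                   # prediction only during a saccade
    tracker.correct(measurement, dt)         # normal predictor/corrector cycle
    return tracker.pos

if __name__ == "__main__":
    trk = Tracker(pos=10.0, vel=-1.0)
    for bit, z in [(0, 9.6), (1, None), (1, None), (0, 8.1)]:
        print(round(track_cycle(trk, dt=0.04, saccade_bit=bit, measurement=z), 3))
```

During a saccade the loop degenerates to pure prediction, which is why motion blur or missing features do no harm as long as the dynamic models and the last best state estimates are adequate.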
For decision-making in the gaze control process, a quality criterion "information gain" has been defined in [Pellkofer 2003]; the total information gain by a visual mode takes into account the number of objects observed, the individual information gain through each object, and the need of attention for each object. The procedure is too involved to be discussed in detail here; the interested reader is referred to the original work, well worth reading (in German, however). The evolution of this criterion "information input" is shown in graphs (c) and (d). Gaze object 0 (road nearby) contributes a value of 0.5 (60 to 90 s) in roadrunning, while gaze object 1 (distant road) contributes only about 0.09 [Figure 14.14 (d)]. When an intersection for turning off is to be detected, the information input of the telecamera jumps by about a factor of 4, while that of the wide-angle cameras (road nearby) is reduced by ~20% (at t = 91 s).

When the crossroad is approached closely, the road driven loses significance for larger look-ahead distances, and the gaze direction for crossroad tracking becomes turned so much that the amplitudes of saccades would have to be very large. At the same time, fewer boundary sections of the road driven in front of the crossing will be visible (because of approaching the crossing), so that the information input for the turnoff maneuver comes predominantly from the crossroad and from the wide-angle cameras in the near range (gaze object 0). At around 113 s, therefore, the scene tree is rearranged, and the former crossroad with ID 2358 becomes two objects for gaze control and attention: ID 2360 is the new local road in the near range, and ID 2361 stands for the distant road perceived by the telecamera [Figure 14.14 (e)]. This rearrangement takes some time (graphs at lower right), and the best viewing ranges to the former crossroad (now the reference road) make a jump according to the intersection angle.

While the vehicle turns into the crossroad, the small field of view of the telecamera forces the gaze direction to be close to the new road direction; correspondingly, the pan angle of the cameras relative to the vehicle decreases while staying almost constant relative to the new reference road, i.e., the vehicle turns underneath the platform head [Figure 14.14 (i) and (a)]. On the new road, the information input from the near range is computed as 0.8 [Figure 14.14 (c)] and that from the distant road as 0.4 [Figure 14.14 (d)]. Since the best visibility ranges for the new reference road overlap [Figure 14.14 (i) and (j)], no saccades have to be performed any longer.

Figure 14.14. Complex viewing behavior for performing a turnoff after recognizing the crossroad, including its parameters: width and relative orientation to the road section driven (see text). [Panel labels over a 60–140 s time axis: camera head pan angle [deg]; saccade bit; information input and number of saccades for gaze object 0 (local road) and gaze object 1 (distant road); crossroad object ID; distance to crossroad; pan visibility ranges (VR) [deg]: best and second-best VR of the crossroad, best VR of the distant road, best VR of the local road.]

Note that these gaze maneuvers are not programmed as a fixed sequence of procedures; rather, parameters in the knowledge base for behavioral capabilities, as well as the actual state variables and road parameters perceived, determine how the maneuver will evolve. The actual performance with test vehicle VaMoRs can be seen from the corresponding video film.
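The trigger for saccades in this episode is essentially geometric: saccades are scheduled while the best viewing ranges of the attended objects do not overlap, and they stop as soon as a single pan angle serves all of them. A small sketch of that decision follows, with made-up interval data; the real BDGA process additionally weighs the information gain of each viewing mode.

```python
# Sketch: decide whether one gaze (pan) direction can serve all attended
# objects, or whether saccades between their viewing ranges are needed.
# The interval values in the example are made up for illustration.

def common_viewing_range(ranges):
    """Intersect pan-angle intervals [deg]; return None if they do not overlap."""
    lo = max(r[0] for r in ranges)
    hi = min(r[1] for r in ranges)
    return (lo, hi) if lo <= hi else None

def gaze_schedule(best_vr):
    """best_vr: dict mapping object name -> (min_pan, max_pan) in degrees."""
    common = common_viewing_range(list(best_vr.values()))
    if common is not None:
        # One fixed gaze direction suffices: no saccades needed.
        return {"mode": "fixate", "pan": 0.5 * (common[0] + common[1])}
    # Otherwise alternate gaze between the centers of the individual ranges.
    targets = [0.5 * (lo + hi) for lo, hi in best_vr.values()]
    return {"mode": "saccades", "pan_sequence": targets}

if __name__ == "__main__":
    # Distant road ahead vs. a crossroad far off to the side: no overlap.
    print(gaze_schedule({"distant road": (-4.0, 2.0), "crossroad": (25.0, 40.0)}))
    # After turning onto the new road, the ranges overlap again.
    print(gaze_schedule({"local road": (-40.0, 40.0), "distant road": (-4.0, 2.0)}))
```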
14.6.6 On- and Off-road Demonstration with Complex Mission Elements

While the former sections have shown single, though complex, behavioral capabilities to be used as maneuvers or mission elements, in this section, finally, a short mission for demonstration is discussed that requires some of these capabilities. The mission includes some other capabilities in addition, too complex to be detailed here in the framework of driving on networks of roads. The mission was the final demonstration in front of an international audience in 2001 for the projects in which expectation-based, multifocal, saccadic (EMS) vision has been developed over 5 years, with half a dozen PhD students involved.

Figure 14.15 shows the mission schedule to be performed on the taxiways and adjacent grass surfaces of the former airport Neubiberg, on which UniBwM is located. The start is from rest, with the vehicle casually parked by a human on a single-track road with no lane markings. This means that no special care has been taken in positioning and aligning the vehicle on the road. Part of this road is visible in Figure 14.16 (right, vertical center). The inserted picture has been taken from the position of the ditch in Figure 14.15 (top right); the lower gray stripe in Figure 14.16 is from the road between labels 8 and 9.

Figure 14.15. Schedule of the mission to be performed in the final demonstration of the project, in which the third-generation visual perception system according to the 4-D approach, EMS vision, has been implemented (see text).

In phase 1 (see digit with dot at lower right), the vehicle had to approach the intersection in the standard roadrunning mode. On purpose, no digital model of the environment had been stored in the system; the mission was to be performed relying on information such as is given to a human driver. At a certain distance in front of the intersection (specified by an imprecise GPS waypoint), the mission plan ordered taking the next turnoff to the left. The vehicle then had to follow this road across the T-junction (2); the widening of the road after some distance should not interfere with driving behavior. At point 3, a section of cross-country driving, guided by widely spaced GPS waypoints, was initiated. The final leg of this route (5) would intersect with a road (not specified by a GPS waypoint!). This road had to be recognized by vision and turned onto to the left, through a (drivable) shallow ditch at its side. This perturbed maneuver turned out to be a big challenge for the vehicle.

In the following mission element, the vehicle had to follow this road through the tightening section (near 2) and across the two junctions (one on the left and one on the right). At point 9, the vehicle had to turn off to the left onto another grass surface, on which again a waypoint-guided mission part had to be demonstrated. However, on the nominal path there was a steep, deep ditch as a negative obstacle, which the vehicle was not able to traverse. This ditch had to be detected and bypassed in a proper manner, and the vehicle was to return onto the intended path given by the GPS waypoints of the original plan (10).

Figure 14.16. VaMoRs ready for mission demonstration 2001. The vehicle and road sections 1 and 8 (Figure 14.15) can be seen in the inserted picture. Above this picture, the gaze control platform is seen with five cameras mounted; there was a special pair of parallel stereo cameras in the top row for using hard- and software of Sarnoff Corporation in a joint project 'Autonav' between Germany and the USA.

Except for bypassing the ditch, the mission was successfully demonstrated in 2001; the ditch was detected, and the vehicle stopped correctly in front of it. In 2003, a shortened demo was performed with mission elements (1, 8, 9, and 10) and a sharp right turn from 1 to 8.
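A mission of this kind is handed to the system as an ordered list of mission elements, each with its own guidance source (road perception or GPS waypoints). Purely as an illustration of such an encoding, with hypothetical field names and labels rather than the format actually used on VaMoRs, the 2001 demonstration could be written down roughly as:

```python
# Hypothetical encoding of the demonstration mission as ordered mission elements.
# Element names, field names, and waypoint labels are illustrative only.

mission_plan = [
    {"element": "roadrunning", "until": "gps_waypoint_before_intersection"},
    {"element": "turnoff", "direction": "left"},
    {"element": "roadrunning", "note": "cross T-junction (2); ignore road widening"},
    {"element": "cross_country", "waypoints": ["wp3", "wp4", "wp5"]},
    {"element": "detect_road_and_turn_onto_it", "direction": "left",
     "note": "road not given by a waypoint; recognize by vision, cross shallow ditch"},
    {"element": "roadrunning", "note": "tightening section, two junctions"},
    {"element": "turnoff", "direction": "left", "at": "point 9"},
    {"element": "cross_country", "waypoints": ["wp10"],
     "note": "detect and bypass negative obstacle (ditch), return to nominal path"},
]

for number, element in enumerate(mission_plan, start=1):
    print(number, element["element"], element.get("note", ""))
```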
In the meantime, the volume of the special processor system (Pyramid Vision Technology) for full frame-rate, real-time stereo perception had shrunk from about 30 liters in 2001 to a plug-in board for a standard PC (board size about 160 × 100 mm). Early ditch detection was achieved, even with taller grass in front of the ditch partially obscuring the small image region of the ditch, by combining the 4-D approach with stereovision. Photometric obstacle detection with our vision system turned out to be advantageous for early detection; keep in mind that even a ditch 1 m wide covers only a very small image region from larger distances for the aspect conditions given (relatively low elevation above the ground). When closing in, stereovision delivered the most valuable information. The video "Mission performance" fully covers this abbreviated mission with saccadic perception of the ditch (Figure 14.3) and avoiding it around the right-hand corner, which is view-fixated during the initial part of the maneuver [Pellkofer 2003; Siedersberger 2004; Hofmann 2004]. Later on, while returning onto the trajectory given by GPS waypoints, the gaze direction is controlled according to Figure 14.2.

15 Conclusions and Outlook

Developing the sense of vision for (semi-)autonomous systems is considered an animation process driven by the analysis of image sequences. This is of special importance for systems capable of locomotion which have to deal with the real world, including animals, humans, and other subjects. These subjects are defined as capable of some kind of perception, decision-making, and performing some actions. Starting from bottom-up feature extraction, tapping knowledge bases in which generic knowledge about 'the world' is available leads to the 'mental' construction of an internal spatiotemporal (4-D) representation of a framework that is intended to duplicate the essential aspects of the world sensed.

This internal (re-)construction is then projected into images with the parameters that the perception and hypothesis generation system have come up with. A model of perspective projection underlies this "imagination" process. With the initial internal model of the world installed, a large part of future visual perception relies on feedback of prediction errors for adapting model parameters so that discrepancies between prediction and image analysis are reduced, at best to zero. Especially in this case, but also for small prediction errors, the process observed is considered to be understood. Bottom-up feature analysis is continued in image regions not covered by the tracking processes with prediction-error feedback. There may be a variable number N of these tracking processes running in parallel.

The best estimates for the relative (3-D) state and open parameters of the objects/subjects hypothesized for the point in time "now" are written into a "dynamic object database" (DOB) updated at the video rate (the short-term memory of the system). These object descriptions in physical terms require several orders of magnitude less data than the images from which they have been derived. Since the state variables have been defined in the sense of the natural sciences/engineering so that they fully decouple the future evolution of the system from past time history, no image data need be stored for understanding temporal processes.
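The prediction-error feedback described above (predict the state with the dynamic model, project the internal 3-D model into the image by perspective projection, compare with extracted features, and feed the error back into the state estimate) can be caricatured in a few lines. The sketch below uses a single scalar state, the range to an object of known width, and a pinhole projection; the gains, the model, and the numbers are illustrative assumptions, not the actual 4-D implementation.

```python
# Sketch of one prediction-error feedback cycle of the 4-D idea, reduced to a
# scalar state: range to an object whose physical width is assumed known.

FOCAL_LENGTH_PX = 800.0   # assumed pinhole focal length
OBJECT_WIDTH_M = 1.0      # assumed known object width

def predict(range_m, closing_speed_mps, dt):
    """Dynamic model: the gap shrinks with the closing speed."""
    return range_m - closing_speed_mps * dt

def project(range_m):
    """Perspective projection: expected image width of the object in pixels."""
    return FOCAL_LENGTH_PX * OBJECT_WIDTH_M / range_m

def update(range_pred, measured_width_px, gain=0.3):
    """Feed the prediction error (measured in image space) back into the state."""
    residual = measured_width_px - project(range_pred)
    # A wider appearance than predicted means the object is closer than predicted.
    d_projection_d_range = -FOCAL_LENGTH_PX * OBJECT_WIDTH_M / range_pred ** 2
    return range_pred + gain * residual / d_projection_d_range

if __name__ == "__main__":
    rng = 40.0                            # initial range hypothesis [m]
    for width_px in [21.0, 22.0, 23.5]:   # "measured" image widths (made up)
        rng = predict(rng, closing_speed_mps=5.0, dt=0.4)
        rng = update(rng, width_px)
        print(round(rng, 2))
```

Only the few physical parameters of such an internal model, not the images themselves, need to be kept from cycle to cycle, which is the point made above about the data reduction achieved by the DOB.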
The knowledge elements in the background database contain the temporal aspects from the beginning through dynamic models (differential equation constraints for temporal evolution). These models make a distinction between state and control variables. State variables cannot change instantaneously; they have to evolve over time and thus are the elements of continuity. This temporal continuity alleviates image sequence understanding as compared to the differencing approach favored initially in computer science and AI, in which consecutive single images are first analyzed bottom-up. Control variables, on the contrary, are those components in a dynamic system that can be changed at any time; they allow influencing the future development of the system. (However, there may be other system parameters that can be adjusted under special conditions: For example, at rest, engine or suspension system parameters may be tuned; but they are not control variables steadily available for system control.)

The control variables thus defined are the central hub for intelligence. The claim is that all "mental" activities are geared to the challenge of finding the right control decisions. This is not confined to the actual time or a small temporal window around it. With the knowledge base playing such an important role in (especially visual) perception, expanding and improving the knowledge base should be a side aspect of any control decision. In the extreme, this can be condensed into the formulation that intelligence is the mental framework developed for arriving at the best control decisions in any situation.

Putting control time histories as novel units into the center of natural and technical (not "artificial") intelligence also allows easy access to events in, and maneuvers on, an extended timescale. Maneuvers are characterized by specific control time histories leading to finite state transitions. Knowledge about them allows decoupling behavior decision from control implementation without losing the advantages possible at both ends. Minimal delay time and direct feedback control based on special sensor data are essential for good control actuation. On the other hand, knowledge about larger entities in space and time (like maneuvers) is essential for good decision-making, taking environmental conditions, including possible actions from several subjects, into account. Since these maneuvers have a typical timescale of seconds to minutes, time delays of several tenths of a second for grasping and understanding complex situations are tolerable on this level.
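The distinction between state and control variables becomes concrete in a discrete-time dynamic model: state variables are carried from step to step and can change only through the model, whereas control variables may be set anew at every step. The minimal sketch below uses a kinematic vehicle model with state (x, y, heading, speed) and controls (steering angle, acceleration); the specific model, its parameters, and the "maneuver" driving it are illustrative assumptions, not taken from the book.

```python
import math

# State variables: evolve only through the dynamic model (elements of continuity).
# Control variables: may be changed at any time step (the inputs decided upon).

WHEELBASE_M = 3.5  # assumed value for a van-sized vehicle

def step(state, control, dt):
    x, y, psi, v = state          # state: position, heading, speed
    steer, accel = control        # controls: steering angle, acceleration
    x += v * math.cos(psi) * dt
    y += v * math.sin(psi) * dt
    psi += v / WHEELBASE_M * math.tan(steer) * dt
    v += accel * dt
    return (x, y, psi, v)

def turnoff_left(t):
    """A maneuver as a control time history: steer left for a few seconds."""
    steer = math.radians(15.0) if 1.0 <= t < 5.0 else 0.0
    return (steer, 0.0)

if __name__ == "__main__":
    state, dt = (0.0, 0.0, 0.0, 5.0), 0.1
    for k in range(80):
        state = step(state, turnoff_left(k * dt), dt)
    x, y, psi, _ = state
    print(round(x, 1), round(y, 1), round(math.degrees(psi), 1))
```

A maneuver in this sense is exactly such a stereotypical control time history, stored generically with a few parameters and leading to a finite state transition (here, a change of heading by roughly 90 degrees).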
So, the approach developed allows a synthesis between the conceptual worlds of "Cybernetics" [Wiener 1948] and the "Artificial Intelligence" of the last quarter of the last century. Figure 15.1 shows the two fields in caricaturized form as separate entities. Systems dynamics at the bottom is concentrated on control input to actuators, either feed-forward control time histories from previous experience or feedback with direct coupling of control to measured values; there is a large gap to the artificial intelligence world on top. In the top part of the figure, arrows have been omitted for immediate reuse in the next figure; filling these in mentally should pose no problem to the reader. The essential part of the gap stems from neglecting temporal processes grasped by differential equations (or transition matrices as their equivalent in discrete time). This has let the fundamental difference between control and state variables in the real world be mediated away by computer states, where the difference is absent. Strictly speaking, it is hidden in the control effect matrix (if in use).

Figure 15.1. Caricature of the separate worlds of system dynamics (bottom) and Artificial Intelligence (top). [Block labels: sensors, actuators, numerical computations, behaviors (primitive); high-level planning, goals, actions, evaluations, symbolic representations.]

Figure 15.2 is intended to show that many of the techniques developed in the two separate fields can be used in the unified approach; some may even need no or very little change. However, an interface in common terminology has to be developed. In the activities described in this book, some of the methods needed for the synthesis of the two fields mentioned have been developed, and their usability has been demonstrated for autonomous guidance of ground vehicles. However, very much remains to be done in the future; fortunately, the constraints encountered in our work due to limited computing power and communication bandwidth are about to vanish, so that the prospects for this technology look bright.

Figure 15.2. The internal 4-D representation of 'the world' (central blob) provides links between the 'systems dynamics' and the AI approach to intelligence in a natural way. The fact that all 'measurement values' derived from vision have no direct physical links to the objects observed (no wires, only light rays) enforces the creation of an 'internal world'. [Block labels: situations, landmarks, objects, characteristic feature groupings; recognition, skilled behaviors, basic capabilities, '4-D', mission elements; mode switching, transitions; generic feed-forward control time histories u_t = g_t(t, x); feedback control laws u_x = g_x(x); feature extraction (intelligently controlled, bottom-up); object hypothesis generation (top-down); local (differential) and global (integral) 4-D processes; sensors, actuators, numerical computations, behaviors (primitive); high-level planning, goals, actions, evaluations, symbolic representations.]

Taking into account that about 50 million ground vehicles are built every year and that more than 1 million human individuals are killed by ground vehicles every year worldwide, it seems mandatory that these vehicles be provided with a sense of vision allowing them to contribute to reducing the latter number. The ideal goal of a zero death toll seems unreachable (at least in the near future) and is unrealistic for open-minded individuals; however, this should not be taken as an excuse for not developing what can be achieved with these new types of vehicles with a sense of vision, on any sensor basis whatever.

Providing these vehicles with real capabilities for perceiving and understanding motion processes of several objects and subjects in parallel and under perturbed conditions will put them in a better position to achieve the goal of a minimal accident rate. This includes recognition of intentions through observation of the onset of maneuvering, such as sudden lane changes without signaling by blinking. In this case, a continuous buildup of lateral speed in the direction of one's own lane is the critical observation. To achieve this "animation capability", the knowledge base has to include "maneuvers" with stereotypical trajectories and time histories.
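The "critical observation" named above, a sustained buildup of lateral speed toward one's own lane, lends itself to a very small detector. The following sketch estimates lateral velocity from the tracked lateral offset of another vehicle and raises an intention flag once the drift toward the ego lane persists; the thresholds, rates, and window length are made-up values, not taken from the book.

```python
# Sketch: flag a probable cut-in from a sustained buildup of lateral speed
# toward the ego lane. Offsets are lateral positions of the observed vehicle
# relative to the ego-lane center [m]; positive means "in the neighboring lane".

def lateral_speeds(offsets, dt):
    return [(b - a) / dt for a, b in zip(offsets, offsets[1:])]

def cut_in_suspected(offsets, dt, speed_threshold=-0.3, min_consecutive=4):
    """True if the vehicle drifts toward the ego lane (negative lateral speed)
    faster than speed_threshold for min_consecutive consecutive cycles."""
    run = 0
    for v_lat in lateral_speeds(offsets, dt):
        run = run + 1 if v_lat < speed_threshold else 0
        if run >= min_consecutive:
            return True
    return False

if __name__ == "__main__":
    dt = 0.1  # 10 Hz state estimates from the tracker
    steady = [3.50, 3.50, 3.49, 3.51, 3.50, 3.50, 3.49, 3.50]
    cutting_in = [3.50, 3.45, 3.38, 3.30, 3.21, 3.12, 3.02, 2.92]
    print(cut_in_suspected(steady, dt), cut_in_suspected(cutting_in, dt))
```

In a real system the lateral offset and its rate would come from the tracked state in the DOB rather than from finite differences, but the decision logic, watching for a persistent trend instead of a single noisy value, stays the same in spirit.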
On the other hand, the system also has to understand what typical standard perturbations due to disturbances are, reacting to them with feedback control. This allows, first, making distinctions in visual observations and, second, noticing environmental conditions by their effects on other objects/subjects. Developing all these necessary capabilities is a wide field of activities with work for generations to come. The recent evolution of the capability network in our approach [Siedersberger 2004; Pellkofer 2003] may constitute a starting point for more general developments. Figure 15.3 shows a proposal as an outlook; the part realized is a small fraction on the lower levels, confined to ground vehicles. Especially the higher levels, with proper coupling down to the engineering levels of automotive technology (or other specific fields), need much more attention.

Figure 15.3. Differentiation of capability levels (vertical, at left side) and categories of capabilities (horizontal, at top): Planning happens at the higher levels only, in internal representations. In all other categories, both the hardware available (lowest level) and the ways of using it by the individual play an important role. The uppermost levels of social interaction and learning need more attention in the future. [Recovered labels: categories are gaze control, locomotion control, planning, and scene understanding (supported by computer simulation and graphics). The levels range from hardware and basic software preconditions, through elementary skills (automated capabilities) in each category (underlying actuator software; gaze-control and vehicle-control basic skills; data preprocessing: smoothing, feature extraction; photometric properties, 1-D to 3-D shape elements, dynamic motion elements) and goal-oriented specific capabilities in each category (maneuvers, special feedback modes; perception, 'here and now': online data interpretation in the context of preconceived models; visualizing the appearance of general shapes and motion processes with basic generic models), to imagination ('extended presence': interpretation of longer-term object motion and subject maneuvers), situation assessment in the light of the actual mission element to be performed, global and local mode switching and replanning, performance of mission elements by coordinated behaviors, goal-oriented use of all capabilities in the mission context, understanding the social situation and one's own role in it, communication and behaving as a member of a team, and learning based on generalized performance criteria and values (yet to be developed).]

Appendix A Contributions to Ontology for Ground Vehicles

A.1 General Environmental Conditions

A.1.1 Distribution of ground on Earth to drive on (global map)
Continents and islands on the globe
Geodetic reference system, databases
Specially prepared roadways: road maps
Cross-country driving, types of ground: geometric description (3-D); support qualities for tires and tracks
Ferries linking continents and islands
National traffic rules and regulations
Global navigation system availability
A.1.2 Lighting conditions as a function of time
Natural lighting by sun (and moon)
Sun angle relative to the ground for a given location and time
Moon angle relative to the ground for a given location and time
Headlights of vehicles
Lights for signaling intentions/special conditions
Urban lighting conditions
Special lights at construction sites (incl. flashes)
Blinking blue lights

A.1.3 Weather conditions
Temperatures (effects on friction of tires)
Winds
Bright sunshine / fully overcast / partially cloudy
Rain / hail / snow
Fog (visibility ranges)
Combinations of the items above
Road surface conditions (weather dependent): dry / wet / slush / snow (thin, heavy, deep tracks) / ice
Leaf cover (dry – wet) / dirt cover (partial – full)

A.2 Roadways

A.2.1 Freeways, motorways, Autobahnen, etc.
Defining parameters, lane markings
Limited-access parameters
Behavioral rules for specific vehicle types
Traffic and navigation signs
Special environmental conditions

A.2.2 Highways (State-), high-speed roads
Defining parameters, lane markings (like above)

A.2.3 Ordinary state roads (two-way traffic) (like above)

A.2.4 Unmarked country roads (sealed)

A.2.5 Unsealed roads

A.2.6 Tracks

A.2.7 Infrastructure along roadways
Line markers on the ground, parking strip, arrows, pedestrian crossings
Road shoulder, guide rails
Regular poles (reflecting, ~1 m high) and markers for snow conditions

A.3 Vehicles (as objects without driver/autonomous system; wheeled vehicles, vehicles with tracks, mixed wheels and tracks)

A.3.1 Wheeled vehicles
Bicycle: motorbike, scooter; bicycle without a motor (different sizes for grown-ups and children)
Tricycle
Multiple (even) number of wheels: cars, vans/microbuses, pickups/sport-utility vehicles, trucks, buses, recreation vehicles, tractors, trailers

A.3.2 Vehicles with tracks

A.3.3 Vehicles with mixed tracks and wheels

A.4 Form, Appearance, and Function of Vehicles (shown here for cars as one example; similar for all classes of vehicles)

A.4.1 Geometric size and 3-D shape (generic with parameters)

A.4.2 Subpart hierarchy
Lower body, wheels, upper body part, windshields (front and rear)
Doors (side and rear), motor hood, lighting groups (front and rear)
Outside mirrors

A.4.3 Variability over time, shape boundaries (aspect conditions)

A.4.4 Photometric appearance (function of aspect and lighting conditions)
Edges and shading, color, texture

A.4.5 Functionality (performance with human or autonomous driver)
Factors determining size and shape
Performance parameters (as in test reports of automotive journals; engine power, power train)
Controls available [throttle, brakes, steering (e.g., "Ackermann")]
Tank size and maximum range
Range of capabilities for standard locomotion: acceleration from standstill; moving into a lane with flowing traffic; lane keeping (accuracy); observing traffic regulations (max. speed, passing interdiction) […]
… decision-making capabilities

A.5 Form, Appearance, and Function of Humans (similar structure as above for cars, plus modes of locomotion)

A.6 Form, Appearance, and Likely Behavior of Animals (relevant in road traffic: four-legged animals, birds, snakes)

A.7 General Terms for Acting Subjects in Traffic
Subjects: Contrary to objects (proper), having passive bodies and no capability of self-controlled acting, "subjects" are defined as objects with the capability of sensing and self-decided control actuation. Between sensing and control actuation …

… histories perform better than others. It is essential knowledge for good or even optimal control of dynamic systems to know in which situations to perform what type of maneuver with which set of parameters; usually, the maneuver is defined by certain time histories of (coordinated) control input. The unperturbed trajectory corresponding to this nominal feed-forward control is also known, either …

… dynamics and control engineering: Feed-forward control components U_ff are derived from a deeper understanding of the process controlled and the maneuver to be performed. They are part of the knowledge base of autonomous dynamic systems (derived from systems engineering and optimal control theory). They are stored in generic form for classes of maneuvers. Actual application is triggered from an instance for behavior decision and implemented …
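How such feed-forward components might be stored in generic form for classes of maneuvers and then instantiated on a behavior decision can be sketched as follows; the lane-change profile, its parameters, and the class name are illustrative assumptions, not the control laws actually used on the vehicles.

```python
import math

# Sketch: a maneuver class stored generically as a parameterized feed-forward
# control time history u_ff(t); a behavior decision instantiates it with
# situation-dependent parameters. Profile and numbers are illustrative.

class LaneChangeManeuver:
    """Generic lane change: smooth steering pulse, antisymmetric in time."""

    def __init__(self, duration_s, peak_steer_rad):
        self.T = duration_s
        self.peak = peak_steer_rad

    def u_ff(self, t):
        """Feed-forward steering angle [rad] at time t since maneuver start."""
        if not 0.0 <= t <= self.T:
            return 0.0
        # One full sine period: steer out, then counter-steer back.
        return self.peak * math.sin(2.0 * math.pi * t / self.T)

# The behavior decision triggers an instance with situation-dependent parameters.
maneuver = LaneChangeManeuver(duration_s=4.0, peak_steer_rad=math.radians(3.0))

if __name__ == "__main__":
    for t in [0.0, 1.0, 2.0, 3.0, 4.0]:
        print(t, round(math.degrees(maneuver.u_ff(t)), 2))
```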
… applications. For linear (linearized) systems, linking the control output to the entire set of state variables allows specifying the eigenmodes at will (in the range of validity of the linear models). In output feedback, adding components proportional to the derivative (D) and/or integral (I) of the signal allows improving speed of response (PD) and long-term accuracy (PI, PID). Combined feed-forward and feedback control: For counteracting …
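The combination alluded to in the last fragment, a stored feed-forward time history doing the bulk of the work while feedback corrects the remaining, unpredicted deviations, can be sketched generically; the PID gains, the toy plant, and the maneuver profile below are placeholders, not values from the book.

```python
# Sketch: combined feed-forward and feedback control, u = u_ff(t) + u_fb(error).
# The first-order "plant", the maneuver profile, and all gains are made up.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def __call__(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def nominal(t):
    """Nominal (unperturbed) trajectory and the matching feed-forward command."""
    setpoint = 1.0 if t >= 1.0 else 0.0
    return setpoint, setpoint   # for this toy plant, u_ff equals the setpoint

def plant(y, u, dt, disturbance=0.0):
    """Toy first-order plant: dy/dt = -y + u + disturbance."""
    return y + (-y + u + disturbance) * dt

if __name__ == "__main__":
    dt, y = 0.05, 0.0
    feedback = PID(kp=2.0, ki=1.0, kd=0.05, dt=dt)
    for k in range(120):
        t = k * dt
        setpoint, u_ff = nominal(t)
        u = u_ff + feedback(setpoint - y)     # feed-forward plus feedback correction
        d = 0.3 if 3.0 <= t < 4.0 else 0.0    # an unpredicted perturbation
        y = plant(y, u, dt, d)
    print("final output ~", round(y, 3))
```

The feed-forward part encodes the maneuver knowledge; the feedback part needs no such knowledge and only reacts to the measured deviation, which reflects the division of labor between behavior decision and direct control argued for above.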
