Virtual and Remote Control Tower: Research, Design, Development and Validation

Research Topics in Aerospace

Norbert Fürstenau, Editor

Virtual and Remote Control Tower: Research, Design, Development and Validation

More information about this series at http://www.springer.com/series/8625

Editor: Norbert Fürstenau, Institute of Flight Guidance, German Aerospace Center (DLR), Braunschweig, Germany

ISSN 2194-8240, ISSN 2194-8259 (electronic), Research Topics in Aerospace
ISBN 978-3-319-28717-1, ISBN 978-3-319-28719-5 (eBook)
DOI 10.1007/978-3-319-28719-5
Library of Congress Control Number: 2016940305

© Springer International Publishing Switzerland 2016. This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. Printed on acid-free paper. This Springer imprint is published
by Springer Nature. The registered company is Springer International Publishing AG Switzerland.

Foreword: On the Origins of the Virtual Tower

It's a pleasure to write a personal account of the origins of the virtual air traffic control tower as reflected in our work at the NASA Ames Research Center. This type of air traffic display is now sometimes called the remote tower, but I think there is a significant difference between the two. The virtual tower is actually a much more radical proposal and has only in the last few years become clearly possible at a reasonable cost. But, as I discuss later, whether it provides any additional benefit beyond the remote tower depends strongly on the specific content and application. The Ames work on the virtual tower can be traced to a meeting I had with my boss, Tom Wempe, to whom I first reported in the late 1970s. I was a National Research Council (NRC) postdoc working for him, studying pilots' eye movements looking at a newly proposed Cockpit Display of Traffic Information. This display was an electronic moving map intended for use in commercial aircraft cockpits to aid air traffic avoidance and to help pilots accept automatic avoidance commands. When Tom not so subtly hinted that "It would be good for me to be known around here as a displays person rather than an eye movement person," I got the point. This was the first time I had ever been explicitly directed to work on something specific. Even in grad school at McGill University, I never got specific direction; part of the education there was to be able to figure out for yourself what was important to work on. So when Tom got even more specific and pointed out that "We were having trouble coming up with a good way to depict vertical separation on the 2D planview map" and that he would like me to work on this problem, I really began to worry. I didn't want to work on a display!
So in some desperation I suggested, "Well, why don't we make it look like a view out the window?" At the time I drew on his blackboard a sketch of what a pilot might see out the forward window. And Tom said, "OK, why don't you work on that." But I had absolutely no idea what I would do or how I would do it. I proposed that I should try to find some interested colleagues for this project in Professor Larry Stark's lab at Berkeley, and the next week at his lab meeting suggested we find a student to work on the project. He had a new student named Michael McGreevy who was interested in the Bioelectronics Option of a graduate engineering program. He turned out to be perfect: he was an engineer with a background in art who was also interested in computer graphics, which he was then studying in a class by Brian Barsky. We began a multiyear collaboration in which we worked on the design, implementation, and testing of a perspective format for a Cockpit Display of Traffic Information (CDTI). What interested me particularly were the perceptual phenomena associated with interpreting an accurate geometric projection of the relative position and direction of targets that might be presented on a pilot's display of surrounding aircraft. Mike was beginning to program the Evans and Sutherland Picture System, and we initiated a design collaboration to investigate the geometric and symbolic elements that would be needed to make a perspective CDTI suitable for a cockpit. The goal was to make a traffic display useable at a glance. Before our project, all CDTIs were plan-view. The perspective CDTI was eventually called VERT; it ultimately was evaluated with respect to a conventional plan-view CDTI called INTRUD (Ellis et al. 1987). From the design and testing of prototypes, we learned many things. For example, a "God's-eye" view from behind and slightly offset was better than a forward, egocentric view as if directly out the cockpit. But most interesting was
that we found, from systematic testing of pilots' direction judgments, an apparent perceptual distortion we called the "telephoto" bias. It was as if, when spatially interpreting the display, the users were seeing through a telephoto lens, so that their visual attention would not be correctly directed out the window for visual contact with traffic. It turned out that theoretical models developed from work with Mike (McGreevy and Ellis 1986), and later Arthur Grunwald (Grunwald et al. 1988), and still later Gregory Tharp (Tharp and Ellis 1990), provided several alternative but related techniques we could use to distort the display for better spatial interpretability. It should be noted that considerable effort went into the initial design of the three-dimensional symbolic content of the perspective CDTI. In this design process, we learned that many of the difficulties of spatially interpreting perspective displays can be removed by appropriate design of geometry and symbology. Consequently, it became apparent that simple performance comparisons of perspective versus plan-view formats could be misleading: symbology can be introduced to remove interpretive difficulties with the perspective format. For example, segmented vertical reference lines can remove spatial ambiguities due to the geometric projection. Later, in the early 1980s, after being hired as a Civil Servant at Ames, Mike McGreevy became interested in jumping into the data space of the maneuvering aircraft as seen on a CDTI, as if it were a virtual environment. He began a series of projects to develop a head-mounted display for visualization of a variety of data spaces and environments. This was the birth of "VR" at NASA in 1985. The very first real-world digital content viewed in this way was a complex pattern of interacting air traffic called the "Atlanta Incident," a series of worrisome close encounters of aircraft generally within the Atlanta TRACON. Despite the very poor visual and dynamic quality of the early NASA HMDs, which was not reflected in the contemporary accounts of the work in the press, the reincarnation of Ivan Sutherland's "Ultimate Display" was clearly demonstrated with these air traffic data. I was generally not directly involved with development of the virtual environment displays at Ames until the early 1990s, when I began to work on the relationship of objective measures of system performance to virtual environment system usability. We studied, for example, full system latency and countermeasures for it such as predictive filtering. My principal collaborator for this work was Bernard "Dov" Adelstein. The visual environments we studied at the time for our scientifically motivated design work were generally not particularly visually interesting, so it became strategically and programmatically important to show realistic possible uses of the display format for applications that would interest NASA. Since we were receiving support from both space and aeronautics programs at Headquarters, I felt we needed two separate demonstration environments. The "space" one was a fly-around of the Shuttle Orbiter with the task of identifying damaged tiles. The "aeronautics" one was a visualization of simulated aircraft landing at SFO. Initially, we used synthesized trajectories but later replaced them with recordings of live approach and landing data from DFW, which was provided by Ronald Reisman. I called our display a virtual tower in that the head-mounted display user would appear to be immersed in the traffic pattern. I was surprised how much attention this second demo attracted. One possible reason was the high visual and very high dynamic fidelity we achieved for the 1990s, attracting attention outside our agency. This time, however, the popular representations of our system's performance were more accurate. However, I ultimately became concerned that advocacy for a virtual tower would involve way too much
technological push, so rather than pursuing a line of system development, I sought to back up and investigate the visual aspects of tower operation. I wanted to better understand the visual requirements for tower operations beyond the visual detection, recognition, and identification functions that seemed to circumscribe the visual concerns of the FAA when it came to visual tower operation. Better understanding of the visual features used by tower controllers would help establish performance requirements for either virtual or remote towers. Two of our papers as well as six chapters in this volume ("Visual Features Used by Airport Tower Controllers: Some Implications for the Design of Remote or Virtual Towers," "Detection and Recognition for Remote Tower Operations," "Videopanorama Frame Rate Requirements Derived from Visual Discrimination of Deceleration during Simulated Aircraft Landing," "Which Metrics Provide the Insight Needed? A Selection of Remote Tower Evaluation Metrics to Support a Remote Tower Operation Concept Validation," "Model-Based Analysis of Two-Alternative Decision Errors in a Videopanorama-Based Remote Tower Work Position," and "The Advanced Remote Tower System and Its Validation," including the quasi-operational shadow-mode validation) address this concern. The virtual tower history sketched above describes work leading to a virtual tower that could essentially be worn on a controller's head as a totally immersing virtual environment. Such a format isolates its users from their immediate physical environment and probably only makes operational sense when compactness, low power consumption, and portability are important. In fact, this head-worn display format might be appropriate for use by Forward Air Controllers on a battlefield. These soldiers have a job somewhat similar to an air traffic controller's, though their goal may be different. In fact, a version of such an application called the Forward Air
Controller Training Simulator (FACSIM) was developed at TNO, The Hague. But now, as can be seen in the following volume, the time for a virtual, or more properly labeled remote, tower has come. The sensors, communications links, rendering software, and aircraft electronics needed for implementation of a practical system all seem to be in place. As will be evident from the following chapters, much of the system integration work needed to complete such systems is afoot.

Moffett Field, CA
Stephen R. Ellis

References

Ellis SR, McGreevy MW, Hitchcock R (1987) Perspective traffic display format and airline pilot traffic avoidance. Hum Factors 29:371–382
Grunwald A, Ellis SR, Smith SR (1988) Spatial orientation in pictorial displays. IEEE Trans Syst Man Cybern 18:425–436
McGreevy MW, Ellis SR (1986) The effect of perspective geometry on judged direction in spatial information instruments. Hum Factors 28:439–456
McGreevy MW, Ellis SR (1991) Format and basic geometry of a perspective display of air traffic for the cockpit. NASA TM 86680, Ames Research Center, Moffett Field, CA
Tharp GK, Ellis SR (1990) The effects of training on errors of perceived direction in perspective displays. NASA TM 102792, NASA Ames Research Center, Moffett Field, CA

Preface

The paradigmatic symbol in air traffic control (ATC), essentially unchanged since the beginning of commercial air traffic early last century, is the characteristic control tower with its large tilted windows, situated at an exposed location and rising high above the airport. Besides the impressive 360° panoramic far view out of its windows, it provides the tower controller an aura of competence and power. It actually hides the fact that tower controllers, as employees of the air navigation service provider (ANSP), are members of a larger team of collaborating colleagues at different locations, including the apron, approach, and sector controllers, not all of them enjoying the exciting view out of the tower windows
(for more details, see Sect. in chapter "Introduction and Overview"). Only the apron controllers supervising the traffic on the movement area in front of the gates, mostly as employees of the airport operator, enjoy a similar panorama, although usually from a lower tower. The topic of this book, Virtual and Remote Control Tower, questions the necessity of the established direct out-of-windows view for aerodrome traffic control. It describes research toward an alternative work environment for tower and apron controllers: the Virtual Control Tower. It is probably no exaggeration to assert that this book is about a paradigm change in air traffic control, where paradigm in this context means a generally accepted way of thinking and acting in an established field of technology. As explained already by Steve Ellis in the Foreword to this volume, Virtual and Remote Tower refers to the idea of replacing the traditional aerodrome traffic control tower by a sensor-based control center which eliminates the need for a physical tower building. For small low-traffic airports, the main topic of this book, the out-of-windows view will be reconstructed by a high-resolution videopanorama which may be located anywhere on the airport, or even hundreds of kilometers away at a different location. This concept quite naturally leads to a new type of aerodrome control center which allows for remote control of several airports from a single distant location. It is understandable that many tower controllers are not really happy with this revolutionary idea, viewing videos instead of enjoying the reality behind the windows. The detailed research toward the Virtual Tower presented in

Appendix B: Signal Detection Theory and Bayes Inference

Fig. B.1 Measured response matrices (probabilities) visualized within Venn diagrams as the relative size of the respective areas. Here a concrete example is shown, taken from the experimental results of chapter "Model Based Analysis of Two-Alternative
Decision Errors in a Videopanorama-Based Remote Tower Work Position" (visual discrimination task: gear up or down of approaching aircraft) for quantifying RTO performance. Areas correspond to the probabilities of Eq. (B.3). Dotted lines indicate standard errors of the mean.

Corresponding to Eq. (B.2), the probability of the causating world state S_i for response R_j is obtained by inversion of the conditional response probability p(R_j|S_i) due to an event observation (e.g., the observer's response probability approximated by the hit rate H of a decision experiment), given a certain precondition, e.g., the a priori knowledge of one of the two possible world states (situations) S_1, S_2. This a priori knowledge on S_i is known through the experimental design (relative numbers N_1, N_2 of situations S_1, S_2). The corresponding a priori probability p(S_i) is multiplied by the likelihood of the observed evidence, p(R_j|S_i)/p(R_j), e.g., for calculating via the Bayes theorem the risk of an unexpected situation S_i (false conclusion on the world state) as cause of the subjective observation (erroneous perception) R_j if i ≠ j:

p(S_i|R_j) = [p(R_j|S_i) / p(R_j)] · p(S_i)   (B.4)

with situations and responses S_i, R_j; i, j = 1, 2. We may choose R_1 = signal detected (or alternative 1), R_2 = noise detected: no signal (or alternative 2). The subjects' response probability is p(R_j) = p(R_j|S_i)·p(S_i) + p(R_j|S_j)·p(S_j), and p(R_i|S_i) + p(R_j|S_i) = H + M = CR + FA = 1 (i.e., for a given experimentally determined world state (situation), the subject's decision is either correct or incorrect). The design of the experiment with N(S_1) = N_1, N(S_2) = N_2, N = N_1 + N_2 yields for the prior probabilities p(S_1) + p(S_2) = 1. For practical purposes, it is quite often convenient to use Bayes odds as a relative measure instead of probabilities. For this purpose, Eq. (B.4) may be written with the likelihood ratio (e.g., the ratio of hit rate to false alarm rate) defined by

LR_ji(R_j) = p(R_j|S_j) / p(R_j|S_i)   (B.5)

The Bayes inference of Eq. (B.4) can then be expressed using LR:

p(S_i|R_j) = p(R_j|S_i)·p(S_i) / [p(R_j|S_i)·p(S_i) + p(R_j|S_j)·p(S_j)] = p(S_i) / [p(S_i) + LR_ji(R_j)·p(S_j)]   (B.6)

with the prior odds for a two-state world (derived from the known world states with our experimental ratio N_1/N_2) given by

O^Pr_ji = p(S_j)/p(S_i) = p(S_j)/(1 − p(S_j))   (B.7)

Analogously, the posterior odds (the ratio of world-state probabilities as modified by the hypothesis due to the observed evidence-based response R_j) are given by

O^Po_ij(R_j) = p(S_i|R_j) / p(S_j|R_j)   (B.8)

With Eq. (B.6), we obtain from Eq. (B.8) the posterior odds for the world state i contrary to prediction (an unexpected situation, given the decision derived from the perceived evidence) as:

O^Po_ij(R_j) = [p(R_j|S_i)·p(S_i)/p(R_j)] / [p(R_j|S_j)·p(S_j)/p(R_j)] = LR_ij(R_j) · O^Pr_ij   (B.9)

Signal Detection Theory

Within psychophysics, signal detection theory (SDT) plays an important role in quantifying decision making, in particular for two-alternative experiments. The standard paradigm is to discriminate a signal embedded in a noisy background from the noise without a signal. In the RTO context (see main chapters), we have used this method to discriminate aircraft landing with sufficient braking deceleration (the signal case) from landing with too weak braking, leading to runway overrun (noise). Another example was the discriminability of flight maneuvers for quantifying RTO performance as compared to the standard tower work condition. The unique feature of SDT is its capability to separate the intrinsic "detector" sensitivity or system discriminability of the observer from his subjective preference to judge more conservatively (avoiding false alarms at the cost of missing some correct ones, i.e., increasing the number of misses) or more liberally (a preference for identifying as many signals as possible at the cost of increasing the FA rate).
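The Bayes inversion of Eqs. (B.4)–(B.9) can be sketched numerically as follows. This is a minimal illustration with invented numbers (hit rate, false-alarm rate, design priors), not data from the book; the function name is illustrative:

```python
# Hypothetical illustration of Eqs. (B.4)-(B.9): posterior probability and
# posterior odds of the unexpected world state S1 (signal), given the
# response R2 ("no signal"), from a hit rate, a false-alarm rate, and the
# a priori probabilities fixed by the experimental design.

def posterior_and_odds(hit, fa, p_s1):
    """Risk that the true state is S1 although the response was R2.

    hit  = p(R1|S1), fa = p(R1|S2): conditional response probabilities.
    p_s1 = a priori probability of S1 (known from the design ratio N1/N2).
    """
    p_s2 = 1.0 - p_s1
    miss = 1.0 - hit                   # p(R2|S1): H + M = 1 per world state
    cr = 1.0 - fa                      # p(R2|S2): CR + FA = 1 per world state
    p_r2 = miss * p_s1 + cr * p_s2     # total probability of response R2
    post = miss * p_s1 / p_r2          # Eq. (B.4)/(B.6): p(S1|R2)
    odds = (miss * p_s1) / (cr * p_s2) # Eq. (B.9): likelihood ratio * prior odds
    return post, odds

# Invented example values: H = 0.9, FA = 0.2, equal priors.
p, o = posterior_and_odds(hit=0.9, fa=0.2, p_s1=0.5)
# p(S1|R2) = 0.1*0.5 / (0.1*0.5 + 0.8*0.5) = 1/9; posterior odds = 1/8
```

Note that the posterior odds equal post/(1 − post), consistent with Eq. (B.8): with equal priors the prior odds are 1, so the posterior odds reduce to the likelihood ratio of Eq. (B.5).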
Parametric Discriminability d′ and Subjective Criterion c

For this purpose, it has to be assumed that the observer's internal response or familiarity with the two alternative signals (S2, noise; S1, stimulus + noise) is distributed according to a Gaussian density. Discriminability d′ and decision criterion c are then defined by means of the z scores (integration limits) of the inverse cumulative densities. This is visualized with the two density functions in Fig. B.2. The subjective criterion at position c of the familiarity axis of the two possible random events depicts the integration limits, separating the H and M areas of S1 (right density function) on the one hand and the CR and FA areas of the S2 density on the other. The inverse of the normalized cumulative Gaussian densities f1(x) for S1 (situation signal + noise) and f2(x) for S2 (situation noise), i.e., the z scores of the hit (H) and false alarm (FA) rates, defines a linear relationship with discriminability d′ = (μ1 − μ2)/σ as intersection with the z(H) axis:

z(H) = d′ + z(FA)   (B.10)

H and FA are taken as estimates of the indicated areas in the density functions of Fig. B.2, with respective integration limits or z scores (inverse Φ⁻¹ of the cumulative normalized probability density f(x)) defined by criterion c. If a sufficient number of data (hit rate H, false alarm rate FA) are given, e.g., between subjects with different confidence ratings, a linear regression may be performed in order to determine the distance between the means μ1, μ2 of the two densities f1, f2 as intersection with the z(H) axis.

Fig. B.2 Gaussian density assumption of the observer's internal random response (or familiarity) x to noise (S2) and noise + signal (S1) stimuli; the criterion c separates the H and M areas of S1 and the CR and FA areas of S2.

If the variances of the two Gaussian densities cannot be assumed equal as a precondition, e.g., σ_Signal = σ1 ≠ σ2 = σ_Noise, Eq. (B.10) can be shown to change
as follows [e.g., Metz et al. (1998)]:

z(H) = (μ1 − μ2)/σ1 + (σ2/σ1) · z(FA)   (B.11)

From (B.10) it follows that, with two equal-variance Gaussian densities for the subjective (internal) response or familiarity with situations S1, S2, the discriminability d′ is defined as the difference between normalized mean values:

d′ := (μ1 − μ2)/σ = Φ⁻¹(H) − Φ⁻¹(FA) = z(H) − z(FA)   (B.12)

measured in units of standard deviations between the signal means. Φ is the Gaussian probability integral (cumulative density) of density f(x), where x is the subjective response or familiarity with situations S1 (signal + noise) and S2 (noise). Correspondingly, the criterion value c is obtained as

c := −0.5 · (z(H) + z(FA))   (B.13)

In Fig. B.2, c separates the M from the H area and the CR from the FA area. Due to the independence of the two alternative events S1, S2 (with independently normalized densities f(S1), f(S2)), the results of the response matrix are unambiguously represented by the (FA, H) data pair. As a standard graph of SDT, the so-called receiver operating characteristic (ROC) unambiguously characterizes the observer in this experiment via his discriminability d′ and decision criterion c. A single data point in (FA, H) ROC space, typically an average over many runs and/or participants of an experiment (representing the average of, e.g., 100 decisions), is unambiguously characterized by a pair of (isosensitivity d′, isobias c) parametrized ROC curves. In this way, the same conditional probabilities p(R1|S1) = H, p(R1|S2) = FA that were used with the Bayes inference for calculating the risk of a world state contrary to expectation (S_i ≠ R_j) can be employed for deriving an unbiased discriminability value for the observer/decision maker. Examples of ROC curves calculated with the above equations for concrete experimental data are presented in chapters "Videopanorama Frame Rate Requirements Derived from Visual Discrimination of Deceleration during Simulated Aircraft Landing" and "Model Based Analysis of Two-Alternative Decision Errors in a
Videopanorama-based Remote Tower Work Position." Each point (H, FA) on an ROC curve is unambiguously determined by the criterion c, separating the subjective yes/no, signal/noise, and world state 1/2 decision thresholds. It follows that c is unambiguously characterized by the ROC curve slope, which decreases with more liberal decisions, i.e., gathering more hits H at the cost of allowing more false alarms FA when c shifts to the right (decreases). Because the criterion corresponds to the integration boundary c of the two densities f(S1), f(S2) in Fig. B.2, it can be expressed through the likelihood ratio [see Eq. (B.5) for the discrete case] via the probability densities:

l(c) = f(c|S1) / f(c|S2)   (B.14)

which in fact equals the slope of the ROC curve at c. For details, see Green and Swets (1988). If sufficient data are available, they may be used for deriving optimum d′ and c via data fitting. Quite often, however, the amount of data is limited, and a single average pair (⟨FA⟩, ⟨H⟩) is used for deriving an unambiguous pair of d′- and c-parametrized ROC curves crossing at this (FA, H) coordinate. Figure B.3 depicts a series of d′-parametrized ROC curves and shows how different discriminability values can be attributed to three series of measurements (red, green, and blue (H, FA) data points from the frame rate experiments described in chapter "Videopanorama Frame Rate Requirements Derived from Visual Discrimination of Deceleration during Simulated Aircraft Landing"), in this case by using the average of each set of four points.

Fig. B.3 Series of d′-parametrized ROC curves with three sets of example data points (red: 6, green: 12, blue: 24 Hz) from chapter "Videopanorama Frame Rate Requirements Derived from Visual Discrimination of Deceleration during Simulated Aircraft Landing" (frame rate experiments). An unambiguous discriminability parameter is obtained for each set via its average. Axis titles indicate calculation of the ROC curves
via the cumulative probability densities of noise and signal (n = S2, s = S1), with criterion cr (c) as integration boundary.

In chapter "Videopanorama Frame Rate Requirements Derived from Visual Discrimination of Deceleration during Simulated Aircraft Landing," Fig. 11 plots the unambiguously (d′, c)-parametrized curve pairs, intersecting at the single average data point of each data set, represented by the crosses with error bars (standard errors). They correspond to three groups of subjects with three different experimental conditions (in that case, different frame rates) used for generating three average pairs (⟨FA⟩, ⟨H⟩). In this way, within the experimental uncertainty, three different pairs of isosensitivity/isobias curves are attributed to the measured average responses. Quite often, the limited set of measured data is not sufficient for verifying the Gaussian density precondition with regard to the familiarity or subjective response to signal plus noise vs. noise without signal. In this case, a nonparametric variant may be advantageous for calculating discriminability and decision bias. Such a method, based on the area under the ROC curve, is described in the next section.

Nonparametric Discriminability A and Subjective Bias b

This method is based on an estimate of the average area under ROC curves. For the estimate, the possible areas for the sets of straight lines enclosing all proper ROC curves (with non-negative slope) through any specific (FA, H) point are determined, as depicted in Fig. B.4. Proper ROC curves must lie within areas A1, A2. Different formulas for the average area A as discriminability index and a corresponding index b as nonparametric subjective bias were derived in the literature, but only recently was a final correct version published by Zhang and Mueller (2005).

Fig. B.4 Proper ROC curves through a point p = (F, H) must lie within areas A1, A2 (axes: false alarm rate vs. hit rate). Redrawn after Zhang and Mueller
(2005).

The isosensitivity and isobias curves are calculated directly from the measured conditional probabilities H, FA, and are given by the Zhang and Mueller formulas as follows for the A isopleth:

A = 3/4 + (H − FA)/4 − FA·(1 − H)                if FA ≤ 0.5 ≤ H
A = 3/4 + (H − FA)/4 − FA/(4H)                   if FA ≤ H < 0.5      (B.15)
A = 3/4 + (H − FA)/4 − (1 − H)/(4·(1 − FA))      if 0.5 < FA ≤ H

and for the associated measure of decision bias, which is based on the slope of the constant-discriminability A isopleths and which corresponds to the likelihood ratio:

b = (5 − 4H)/(1 + 4FA)                                    if FA ≤ 0.5 ≤ H
b = (H² + H)/(H² + FA)                                    if FA ≤ H < 0.5      (B.16)
b = ((1 − FA)² + (1 − H))/((1 − FA)² + (1 − FA))          if 0.5 < FA ≤ H

A further advantage of the discriminability index A is its limited range of values, 0.5 ≤ A ≤ 1, as compared to the unbounded parametric index d′. Figure 14 (chapter "Videopanorama Frame Rate Requirements Derived from Visual Discrimination of Deceleration during Simulated Aircraft Landing") and Fig. (chapter "Model Based Analysis of Two-Alternative Decision Errors in a Videopanorama-based Remote Tower Work Position") illustrate the application of this method with the example of the increase of discriminability of moving objects on a videopanorama with video frame rate. The position of the group-average (H, FA) results (large crosses) on the A isopleth determines the corresponding decision bias b, which in this case indicates conservative decision making, i.e., avoiding false alarms.

References

Birbaumer N, Schmidt R (2010) Biologische Psychologie, 7th edn. Springer Medizin Verlag, Heidelberg
Gobrecht H (ed) (1978) Lehrbuch der Experimentalphysik III: Optik. Walter de Gruyter, Berlin
Hecht E, Zajac A (1974) Optics. Addison-Wesley, Reading, MA
Mahmoudzadeh Vaziri G (2013) Projekt RAiCe: Modulationsübertragungsfunktion (MTF) der Panoramakameras. Internal Report IB-112-2013/9, Institute of Flight Guidance, German Aerospace Center (DLR), Braunschweig
Quick MTF: an image quality test application (2013) Retrieved from
Online reference: www.quickmtf.com
Robert CP (2001) The Bayesian choice. Springer, New York
Green DM, Swets JA (1988) Signal detection theory and psychophysics, reprint edn. Peninsula Publishing, Los Altos Hills, CA
Macmillan NA, Creelman CD (2005) Detection theory: a user's guide. Psychology Press, New York
Zhang J, Mueller ST (2005) A note on ROC analysis and non-parametric estimate of sensitivity. Psychometrika 70(1):203–212
Metz CE, Benjamin AH, Shen J-H (1998) Maximum likelihood estimation of receiver operating characteristic (ROC) curves from continuously-distributed data. Stat Med 17:1033–1053
https://www.norsys.com/ (2014)

Index

[The book's index is truncated in this extract: entries from "A, 254, 328" through "Demosaicing, 203," with page references to the full volume.]
Density functions, 325 Depth of view, 271 Detectability, 133, 169 Detectability Experiments, 318 Detection, 28, 56, 58, 62, 265, 271, 274, 276 Detection distance, 318 Detection theory, 231 Diffraction effects, 216, 316 Diffraction fringes, 316 Discriminability, 117, 131, 133, 197, 251–253, 325, 326 Discriminability A, 328–329 Display γ-setting, 320 Distributional assumptions, 133 Dynamic range, 169 Dynamic visual features, 45 Dynamical range, 198, 217 E Effective Resolution, 318 Effective video resolution, 318 Electromagnetic compatibility, 214 Electromagnetic emission, 214 Index Emission band, 214 Equal variance, 326 Equation of motion, 119, 126 Equivalent Visual Operations, 23 ER classification, 237 Erroneous perception, 323 Error prediction, 128 Error probability, 256 Error risk, 250 Ethernet, 172 European Commission, 264 European Operational Concept Validation Methodology (E-OCVM), 70, 270 European standard, 70 Event-detection, 184 Events, 127, 182 Event-time, 182 Evidence, 322 Executive controller, 76 Expense of realization, 234, 235 Experimental design, 229, 245–246, 323 Expert Judgement, 270, 274–275 Exponential fit, 134 Exponential model, 134 Exponentially, 132 Exposure times, 316 Extrapolation, 132 Eye, 313 Eye tracking, 82, 98 F FA, 325 False alarm rates, 134 False alarms, 123, 322 Familiarity, 131, 325 Familiarity-axis, 325 Far view, 9, 88 Farfield, 316 Fast-time simulations, 70 Feasibility, 234, 235 Feature tracking, 208 Field trial, 224, 229 Field-of-view, 170, 314 Flight maneuvers, 244 Flight plan, 182 Flight status, 26 Flight tests, 182 Flight-maneuvers, 244 Flow Analysis, 206–208 f-number, 315 Focal width, 169, 314 333 Fourier transformation, 317 FOV, 197, 314 Frame rate conditions, 119 Frame rate requirements, 124 Frame rates, 116, 131, 172 Frame-rate requirements, 12 Frequency range, 214 G Gamma adjustment, 217 Gamma correction, 314 γ-values, 217, 319 Gaussian densities, 131, 325 Geometrical optics, 313 GigE Vision, 203 Grabber, 202 Ground 
executive, Group-average, 134 H H, 325 HD-cameras, 314 Head-down-times, 96 Head-mounted, Head-mounted stereoscopic displays (HMD), Head tracking, Head-up display, Heuristic decision basis, 136 Heuristics, 28 High resolution video panorama, 168 High speed turnoff, 131 High-fidelity simulation, 88 Hit, 123 Hit rates, 134, 322 HITL simulations, 71 Holographic projection screen, Human eye, 319 Human factors evaluation, 158 experts, 145, 149, 151 Human Machine Interface (HMI), 264 Human observer, 319 Human perception, 317 Human performance case, 70 Human performance model, 72 Human-in-the-loop, 188 Human-in-the-loop simulations, 11 Human-machine interface, 145 aesthetics, 156 Hypothesis, 322 334 I Identification, 28, 56, 60 Image compression, 181 Image contrast, 64 Image generator, 75 Image optimization, 215–219 Image processing, 178, 206 Image size, 313 Incidents, 264 Independent probabilities, 322 Independent variable, 229 Information data handling system (IDVS), 196, 201 Information fusion, 209 Information processing, 256–258 Information requirements, 167, 256 Information sources, 195, 248 Infrared filter, 211 Infrared spectrum, 211 Instrumental Flight Rules, 4, 90 Integration boundary, 327 Interferences, 215 Internal response, 325 Inversion, 322 IP-theory, 257 Iris control, 316 Isobias, 326, 329 Isobias-curves, 131 Isopleths, 133, 134, 254 Isosensitivity, 131, 326, 329 J Judge, 57 Judgment, 56, 60, 62 Just-noticeable difference (JND), 46 K Kalman-filtering, Kell-factor, 62 L Landing profile, 48 Landmarks, 136 Latency, 6, 7, 169, 213 Leesburg Executive Airport, 310 Lens distortions, 316 Lens equation, 170 Levels of analysis, 26 Levels of information, 201 Likelihood bias, 133 Likelihood ratio, 324, 327, 329 Index Likelihoods, 127 Line pair, 317 Line targets, 317 Linear extrapolation, 132 Linear model, 185 Linear regression, 185, 318 Linear relationship, 320 Live trials, 70 Logistic function, 256 Long-distance connection, 202 LR, 324 Luminance, 217, 317 Luminance 
sensitivity, 319 M M, 325 Magnification, 229 MANTEA, 28 Master Controller, 141 MasterMAN, 142, 143, 145–158 Maturity, 275 Maximum discriminability, 125 Memory effect, 129 Metrics, 223, 237 Microphones, 172 Minimal decision time, 256 Minimum video frame rate, 136 Misses, 322 Mode-S, 205, 251 Mode-S transponder, 13, 254 Modulation transfer characteristic, 248 Modulation transfer function (MTF), 169, 171, 315, 317 Movement detection, 178, 205 MTF50, 318 MTF-calculation, 317 Multi-Airport Control, 140–143 Multi remote tower, 81 N Network bandwidth, 203 Network interface cards, 203 Newton, 313 NEXTGen, 10 Night operation, 60 Night time, 271 Noisy background, 324 Nominal deceleration, 120 Non-answers, 246 Nonlinear fitting, 257 Nonparametric, 133–136, 328 Nonparametric discriminability, 125, 254–255 Index Non-Towered Airports, 302–303 Normality, 135 Nyquist limit, 169, 317 O Object classes, 208 Object detectability, 197 Object detection, 317 Object size, 313 Object tracking, 178, 207 Objective data, 234 Observability conditions, 195 Observation, 323 Obstacles, 33 Off-Nominal Events, 290 Operational importance, 34 Operational validity, 88 feasibility, 100 usability, 100 Optical axis, 314 Optical flow analysis, 206 Optical requirements, 64 Optical see-through, 6, 177 OTW view, 229 Out-of-windows view, 81, 196, 313 Overshoot, 126 P Panorama, 265, 267, 270, 271, 275 Panorama camera system, 198 Panorama replay, 172, 184, 187 Pan-tilt zoom (PTZ), 172, 227, 269, 270, 273, 275, 277, 314 Pan-tilt zoom camera, 6, 172, 244 Parallel events, 72 Paraxial, 313 Participants, 243 Passive shadow mode, 270, 275 Passive shadow mode test, 243 PCT/IP, 256 Pen touch-input display, 175 Pen-input interaction, 244 Perceived resolution, 319 Perceived speed, 49 Perceptual discriminations, 46 Perceptual properties, 319 Performance, 237 Performance asymptotes, 135 Performance measurements, 229 Performance models, 28 Perspective, 23 Perspective view, 22 335 Petri nets, 28 Physiological Optics, 319 
Picture-in-Picture (PIP), 269, 273 Pilot intent, 33 Pixel resolution, 215, 314 Planning tool, 142, 154, 158 Plan-view, 22 Point-spread function, 316 Position data, 185 Posterior odds, 324 Prediction, 126 Prediction errors, 136 Predictive, 23 Preset viewing directions, 244 Prior odds, 324 Prior probabilities, 324 Procedural Control, 55 Procedures, 28 Projection system, 75 Proper ROC-curves, 328 Prototype RTO-Video panorama, 200 Psychophysics, 12, 324 PTZ camera, 227 Q Questionnaire, 230 R Radiant power, 319 Radiation optics, 319 RAiCe, 11 RaiCon, 11 Random event, 321 Range, 57 RApTOR, 10 Rates of motion, 31 Reaction times, 233 Real-time simulations, 70, 188 Receiver operating characteristics (ROC), 131, 251, 326 Recognition, 28, 56, 59, 60, 62, 64, 197, 265 Reference frame, 126 Remote control, 152 Control Centre, 141 controller, 141 tower, 8, 141 Remote controller model, 72 Remote Tower (RTO) video panorama, 118 Remote Tower Center, 8, 80, 164, 168, 189 Remote tower concept validation, 238 Remote tower console, 75 336 Remote tower demonstration, 310–311 Remote tower metrics (RTM), 224, 226, 228, 229 Remote tower operation (RTO), 164, 242 Remotely Operated Towers (ROT), 265 Requirement specifications, 195 Research aircraft, 224, 229 Resolution, 54, 55, 61–64, 271, 274–276 Resolution angle, 185 Response alternatives, 322 Response bias, 125 Response matrices, 123, 246, 322 Response times, 120, 122 Retinal scanning, RGB, 211 RGB images, 202 Risk, 326 ROC space, 326 ROC-curve, 326 Router, 202 RS scores, 238 RTMs Score, 236 RTO Center, 76 RTO-Center, 72 RTO-CWP, 200, 244 Runway coordinates, 120 Runway excursion, 128 Runway Visual Range, 267, 272 S Safety, 57, 272, 276 Safety assessment, 269 Safety related metrics, 232 Sampling rate, 49 Scenario retrieval, 72 Scenarios, 181, 229 Select Services Concept, 304–305 Sensor system, 244 Separation, 26 SESAR, 10 SFO, 25 Shadow-mode, 70, 229 Shadow-mode validation, 12 Shooter game, 132 Shooting score, 132 Signal detection theory 
(SDT), 12, 131, 251, 324 Simulation engine, 188 Simulation environment, 188 Situation, 322 Situational awareness, 265, 269 Smart cameras, 217 SNT Configurations, 287–288 SNT facility, 280 Index SNT Walkthrough, 281 Software-grabber, 203 Spacial frequencies, 317 Spatial frequency, 317 Spatial separation, 27 Spectral sensitivity distribution, 319 Spectrum, 214 Speed, 318 Staffed NextGen Tower (SNT), 10, 280 Standard instrument departures (SID), 26 Standard metrics, 238 STARS, 26 Start-up, State space diagram, 126 Stereoscopic, Stevens function, 320 Stimulus–response matrices, 123 Stitching, 174 Stop times, 119 Subject groups, 119 Subjective, 317 Subjective criterion, 325 Subjective measures, 246 Surprise events, 128 Surveillance function, 28 Survey software, 228 Synchronization, 77, 79 Synchronized questioning, 233 Synthetic, 266, 272, 276 Synthetic vision, 8, 83 T Ta, 256 Task analysis, 56 Task definition, 55 Task requirements, 132 Task sequences, 245 Technical optics, 319 Thermal imager, 211 Thermal imaging, 211 Thermal signatures, 211 Thin lens equation, 313 360 panorama, 199 Threshold, 256 Time pressure, 256–258 Time Pressure Theory, 237 Time-measurement, 184 Time-of-detection, 318 Time–pressure theory, 231 Timing precision, 73 Tower control, 4, 31, 55–56 Tower executive, Tower position, 126 Tower simulator, 75 Free ebooks ==> www.Ebook777.com Index 337 Tower-Lab, 78, 243 Track, 267, 272, 273, 275, 276 Tracking, 205 Traffic circle, 232 Traffic events, 73 Training, 264, 274 Trajectories, 182 Trajectory planning, 33 Transponder, 205 Triangulation, 179 Two-alternative, 322 Two-alternative decision, 243 Two-alternative events, 245 TWR-CWP, 244 U Unbiased discriminability, 326 Unexpected situation, 323 Usability, 143–144, 150 evaluation, 143–157 evaluator (see Human Factors Experts) heuristics, 146–149 principles, 146–149 problem, 150, 155 severity rating, 149–150 User Experience (UX), 144–145, 158 User-Centred Design, 140, 143 V Validation, 232, 269–270, 275 
Validation concept, 223 Validation exercises, 223, 231 Velocity profiles, 46 Venn-diagrams, 123, 322 Verbal Protocol Analysis, 282 Verification, 213–219 Vertical FOV, 195 VFR traffic, 13 VICTOR, 141 Video frame rate, 189 Video game, 132 Video latency, 213 Video panorama, 75, 229, 244 Video reconstruction, 320 Video replay, 318 Video resolution, 314 Video see-through, 7, 177 Video stream, 202 Video-observation, 318 Viewing angle coordinates, 126 Viewing distance, 118 Viewing geometry, 46 Virtual holography, Virtual joystick, 204, 244 Virtual reality, 5, 83 Virtual tower, Visibility, 55, 271–273 Visibility condition, 56 Visibility Enhancement Technology (VET), 265, 273, 276 Visibility limitations, 248 Visionary Projects competition, Visual acuity, 56 Visual attention, 31 Visual contact, 24, 29, 31 Visual contact rates, 37 Visual cues, 23 Visual decelerations, 45 Visual detection, 56 Visual environment, 23 Visual features, 29, 31, 38, 116, 136 Visual flight rules, 4, 90 Visual functions, 26 Visual identification, 56 Visual information, 28, 166 Visual judgment, 56, 62 Visual observation, 55 Visual performance, 28 Visual psychophysics, 37 Visual recognition, 56 Visual resolution, 169, 227, 248, 313 Visual short-term memory, 134 Visual stimuli, 28 Visual tasks, 26, 225 Visual velocity, 34, 49 Visual working memory, 136 W Wavelength, 316 Weather, 267, 272, 276 Weather conditions, 197 Weber fraction, 46 Weber’s Law, 46 Weber-Fechner law, 319 Work and task analyses, 164 Work environments, 75 Workload, 72, 256, 272, 274 World states, 249, 321 Z Zoom, 172 Zoom camera, 314 Zoom factors, 204, 244, 314 Zoom function, 197 z-scores, 131, 325 www.Ebook777.com ... from LFV and Saab/Sweden contributed chapters “Detection and Recognition for Remote Tower Operations” and “The Advanced Remote Tower System and Its Validation on the basics of detection and recognition... 