[...] and

\[
\mathbf{R}(\upsilon_p) =
\begin{bmatrix}
\cos\upsilon_p & 0 & \sin\upsilon_p \\
0 & 1 & 0 \\
-\sin\upsilon_p & 0 & \cos\upsilon_p
\end{bmatrix},
\tag{53}
\]

respectively. Hence, the full rotation can be represented by

\[
\mathbf{R}_p(\chi_p,\upsilon_p) = \mathbf{R}(\chi_p)\,\mathbf{R}(\upsilon_p),
\tag{54}
\]

such that

\[
\boldsymbol{\varepsilon}(t) = \mathbf{R}_p(\chi_p,\upsilon_p)^{\mathsf{T}}\bigl(\mathbf{p}(t)-\mathbf{p}_p(\varpi)\bigr),
\tag{55}
\]

where \(\boldsymbol{\varepsilon}(t) = [s(t), e(t), h(t)]^{\mathsf{T}} \in \mathbb{R}^3\) represents the along-track, cross-track, and vertical-track errors relative to \(\mathbf{p}_p(\varpi)\), decomposed in the path-fixed reference frame. The path-following control objective is identical to (32), and \(\boldsymbol{\varepsilon}(t)\) can be reduced to zero by assigning an appropriate steering law to the velocity of \(\mathbf{p}(t)\), as well as a purposeful collaborative behavior to \(\mathbf{p}_p(\varpi)\). Specifically, the steering law involves

\[
\chi_r(e) = \arctan\!\left(-\frac{e(t)}{\Delta}\right),
\tag{56}
\]

which is equivalent to (21) with \(\Delta > 0\), used to shape the convergence behavior toward the xz-plane of the path-fixed frame, and

\[
\upsilon_r(e,h) = \arctan\!\left(\frac{h(t)}{\sqrt{e(t)^2+\Delta^2}}\right),
\tag{57}
\]

used to shape the convergence behavior toward the xy-plane of the path-fixed frame, see Fig. 8.

Fig. 8. The main variables associated with steering for regularly parameterized 3D paths.

Also, \(\mathbf{p}_p(\varpi)\) moves collaboratively toward the direct projection of \(\mathbf{p}(t)\) onto the x-axis of the path-fixed reference frame by

\[
\dot{\varpi} = \frac{U(t)\cos\chi_r(e)\cos\upsilon_r(e,h)+\gamma\,s(t)}{\bigl|\mathbf{p}_p'(\varpi)\bigr|},
\tag{58}
\]

where \(\gamma > 0\) and \(\bigl|\mathbf{p}_p'(\varpi)\bigr| = \sqrt{x_p'(\varpi)^2 + y_p'(\varpi)^2 + z_p'(\varpi)^2}\). In sum, four angular variables (50), (51), (56), and (57) are used to specify the 3D steering law required for path-following purposes. Fortunately, these variables can be compactly represented by the azimuth angle

\[
\chi(\chi_p,\upsilon_p,\chi_r,\upsilon_r) = \operatorname{atan2}\bigl(f(\chi_p,\upsilon_p,\chi_r,\upsilon_r),\,g(\chi_p,\upsilon_p,\chi_r,\upsilon_r)\bigr),
\tag{59}
\]

where

\[
f(\chi_p,\upsilon_p,\chi_r,\upsilon_r) = \cos\chi_p\sin\chi_r\cos\upsilon_r - \sin\chi_p\sin\upsilon_p\sin\upsilon_r + \sin\chi_p\cos\upsilon_p\cos\chi_r\cos\upsilon_r
\tag{60}
\]

and

\[
g(\chi_p,\upsilon_p,\chi_r,\upsilon_r) = -\sin\chi_p\sin\chi_r\cos\upsilon_r - \cos\chi_p\sin\upsilon_p\sin\upsilon_r + \cos\chi_p\cos\upsilon_p\cos\chi_r\cos\upsilon_r,
\tag{61}
\]

and the elevation angle

\[
\upsilon(\upsilon_p,\chi_r,\upsilon_r) = \arcsin\bigl(\sin\upsilon_p\cos\chi_r\cos\upsilon_r + \cos\upsilon_p\sin\upsilon_r\bigr).
\tag{62}
\]

Through the use of trigonometric addition formulas, it can be shown that (59) is equivalent to (19) in the 2D case, i.e., when \(\upsilon_p = \upsilon_r = 0\).

5.1 Path parameterizations
Applicable (arc-length) parameterizations of straight lines and helices are now given.

5.1.1 Parameterization of straight lines
A spatial straight line can be parameterized by \(\varpi \in \mathbb{R}\) as

\[
x_p(\varpi) = x_f + \varpi\cos\alpha\cos\beta
\tag{63}
\]
\[
y_p(\varpi) = y_f + \varpi\sin\alpha\cos\beta
\tag{64}
\]
\[
z_p(\varpi) = z_f - \varpi\sin\beta,
\tag{65}
\]

where \(\mathbf{p}_f = [x_f, y_f, z_f]^{\mathsf{T}} \in \mathbb{R}^3\) represents a fixed point on the path (relative to which \(\varpi\) is defined), \(\alpha \in S\) represents the azimuth angle of the path, while \(\beta \in S\) represents the elevation angle of the path (both corresponding to the direction of increasing \(\varpi\)).

5.1.2 Parameterization of helices
A helix can be parameterized by \(\varpi \in \mathbb{R}\) as

\[
x_p(\varpi) = x_c + r_c\cos\!\left(\frac{\varpi}{\sqrt{2}\,r_c}\right)
\tag{66}
\]
\[
y_p(\varpi) = y_c + \lambda\,r_c\sin\!\left(\frac{\varpi}{\sqrt{2}\,r_c}\right)
\tag{67}
\]
\[
z_p(\varpi) = z_c - \frac{\varpi}{\sqrt{2}},
\tag{68}
\]

where \(\mathbf{p}_c = [x_c, y_c, z_c]^{\mathsf{T}} \in \mathbb{R}^3\) represents the origin of the helix center (relative to which \(\varpi\) is defined), \(r_c > 0\) represents the radius of the horizontally-projected circle of the helix, and \(\lambda \in \{-1, 1\}\) decides in which direction this horizontally-projected circle is traced; \(\lambda = -1\) for anti-clockwise motion and \(\lambda = 1\) for clockwise motion. Here, an increase in \(\varpi\) corresponds to movement in the negative direction of the z-axis of the stationary frame.
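As a practical illustration, the sketch below evaluates the error transformation (55), the lookahead steering angles (56), (57) and (59)-(62), and the collaborative path-point speed (58) for the straight-line parameterization (63)-(65). It is only a minimal numerical sketch: the class and variable names are not taken from the chapter, and the path is assumed to be arc-length parameterized so that |p_p'(ϖ)| = 1.

```java
// Minimal numerical sketch of the 3D lookahead steering law, eqs. (55)-(62), and the
// collaborative path-point speed (58), evaluated for the straight-line path (63)-(65).
// Class and variable names are illustrative; the path is arc-length parameterized,
// so |p_p'(varpi)| = 1 in (58).
public final class LookaheadSteering3D {

    /** Desired azimuth/elevation of the velocity, and the path-variable rate. */
    public static final class Command {
        public final double chi, upsilon, varpiDot;
        Command(double chi, double upsilon, double varpiDot) {
            this.chi = chi; this.upsilon = upsilon; this.varpiDot = varpiDot;
        }
    }

    /**
     * @param p     vehicle position {x, y, z} in the stationary frame
     * @param pf    fixed point {xf, yf, zf} on the straight-line path
     * @param alpha path azimuth angle (rad), i.e. chi_p
     * @param beta  path elevation angle (rad), i.e. upsilon_p
     * @param varpi current value of the path variable
     * @param U     vehicle speed; delta is the lookahead distance (> 0); gamma > 0
     */
    public static Command steer(double[] p, double[] pf, double alpha, double beta,
                                double varpi, double U, double delta, double gamma) {
        // Path point, eqs. (63)-(65).
        double[] pp = {
            pf[0] + varpi * Math.cos(alpha) * Math.cos(beta),
            pf[1] + varpi * Math.sin(alpha) * Math.cos(beta),
            pf[2] - varpi * Math.sin(beta)
        };
        double chiP = alpha, upsP = beta;

        // Path-fixed errors eps = R_p(chi_p, ups_p)^T (p - p_p), eq. (55):
        // undo the azimuth rotation first, then the elevation rotation.
        double dx = p[0] - pp[0], dy = p[1] - pp[1], dz = p[2] - pp[2];
        double x1 =  Math.cos(chiP) * dx + Math.sin(chiP) * dy;
        double y1 = -Math.sin(chiP) * dx + Math.cos(chiP) * dy;
        double s  =  Math.cos(upsP) * x1 - Math.sin(upsP) * dz;   // along-track
        double e  =  y1;                                          // cross-track
        double h  =  Math.sin(upsP) * x1 + Math.cos(upsP) * dz;   // vertical-track

        // Lookahead steering angles, eqs. (56)-(57).
        double chiR = Math.atan(-e / delta);
        double upsR = Math.atan(h / Math.sqrt(e * e + delta * delta));

        // Desired azimuth (59)-(61) and elevation (62) in the stationary frame.
        double f =  Math.cos(chiP) * Math.sin(chiR) * Math.cos(upsR)
                  - Math.sin(chiP) * Math.sin(upsP) * Math.sin(upsR)
                  + Math.sin(chiP) * Math.cos(upsP) * Math.cos(chiR) * Math.cos(upsR);
        double g = -Math.sin(chiP) * Math.sin(chiR) * Math.cos(upsR)
                  - Math.cos(chiP) * Math.sin(upsP) * Math.sin(upsR)
                  + Math.cos(chiP) * Math.cos(upsP) * Math.cos(chiR) * Math.cos(upsR);
        double chi = Math.atan2(f, g);
        double ups = Math.asin(Math.sin(upsP) * Math.cos(chiR) * Math.cos(upsR)
                             + Math.cos(upsP) * Math.sin(upsR));

        // Collaborative path-point speed, eq. (58), with |p_p'(varpi)| = 1.
        double varpiDot = U * Math.cos(chiR) * Math.cos(upsR) + gamma * s;

        return new Command(chi, ups, varpiDot);
    }
}
```

For the helix (66)-(68), only the computation of the path point and of the path angles χ_p(ϖ) and υ_p(ϖ) would change; the steering computation itself stays the same.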
6. Conclusions
This work has given an overview of guidance laws applicable to motion control of AUVs in 2D and 3D. Specifically, the scenarios considered have included target tracking, where only instantaneous information about the target motion is available, as well as path scenarios, where spatial information is available a priori. For target-tracking purposes, classical guidance laws from the missile literature were reviewed, in particular line of sight, pure pursuit, and constant bearing. For the path scenarios, enclosure-based and lookahead-based guidance laws were presented. Relations between the guidance laws have been discussed, as well as interpretations toward saturated control.
5. Enhanced Testing of Autonomous Underwater Vehicles using Augmented Reality & JavaBeans
Benjamin C. Davis and David M. Lane
Ocean Systems Laboratory, Heriot-Watt University, Scotland

1. Introduction
System integration and validation of embedded technologies have always been a challenge, particularly in the case of autonomous underwater vehicles (AUVs). The inaccessibility of the remote environment and the cost of field operations have been the main obstacles to the maturity and evolution of underwater technologies.
Additionally, the analysis of embedded technologies is hampered by data-processing and analysis time lags, caused by the low-bandwidth data communications with the underwater platform. This makes real-world monitoring and testing challenging for the developer/operator, who is unable to react quickly or in real time to the remote platform's stimuli.

This chapter discusses the different testing techniques useful for unmanned underwater vehicles (UUVs) and gives example applications where necessary. Later sections go into more detail about a novel framework called the Augmented Reality Framework (ARF) and its applications in improving pre-real-world testing facilities for UUVs. To begin with, some background is given on Autonomous Underwater Vehicles (AUVs) and on current testing techniques and their uses.

An AUV (Healey et al., 1995) is a type of UUV. The difference between AUVs and remotely operated vehicles (ROVs) is that AUVs employ intelligence, such as sensing and automatic decision making, allowing them to perform tasks autonomously, whilst ROVs are controlled remotely by a human, with communications running down a tether. AUVs can operate for long periods of time without communication with an operator, as they run a predefined mission plan; an operator can design missions for multiple AUVs and monitor their progress in parallel, whereas ROVs require at least one pilot per vehicle, controlling it continuously. The cost of using AUVs should therefore be drastically reduced compared with ROVs, provided the AUV technology is mature enough to execute the task as well as an ROV would. AUVs have no tether or physical connection with surface vessels, and are therefore free to move without restriction around or inside complex structures. AUVs can also be smaller and have lower-powered thrusters than ROVs because they do not have to drag a tether behind them; tethers can be thousands of metres long for deep-sea missions and consequently very heavy. In general, AUVs require less infrastructure than ROVs: ROVs usually require a large ship and crew to operate, whereas AUVs are easier to deploy and recover.

In general, autonomous vehicles (Zyda et al., 1990) can go where humans cannot or do not want to; in more relaxed terms, they are suited to doing "the dull, the dirty, and the dangerous". One of the main driving forces behind AUV development is the automation of potentially tedious tasks which take a long time to do manually and therefore incur large expenses. These include oceanographic surveys, oil/gas pipeline inspection, cable inspection and the clearing of underwater minefields. Such tasks can be monotonous for humans and can also require expensive ROV pilot skills. AUVs are well suited to labour-intensive or repetitive tasks, and can perform their jobs faster and with higher accuracy than humans. The ability to venture into hostile or contaminated environments also makes AUVs particularly useful and cost-efficient.

AUVs highlight a more specific problem: underwater vehicles are expensive because they have to cope with the extremely high pressures of the deepest oceans (the pressure increases by 1 atmosphere every 10 m). The underwater environment itself is both hazardous and inaccessible, which increases the cost of operations due to the necessary safety precautions. Therefore the cost of real-world testing, the later phase of the testing cycle, is particularly high in the case of UUVs.
Couple this with poor communications with the remote platform (due to slow acoustic methods) and debugging becomes very difficult and time-consuming. This incurs huge expenses or, more likely, places large constraints on the amount of real-world testing that can feasibly be done. For environments which are hazardous or inaccessible, such as sea, air and space, it is paramount that unnecessary real-world testing be avoided at all costs. Ideally, mixed-reality testing facilities should be available for pre-real-world testing of the platform. However, owing to the expense of creating specific virtual-reality testing facilities, adequate pre-real-world tests are not always carried out. This leads to failed projects crippled by costs or, worse, a system which is unreliable due to inadequate testing.

Different testing mechanisms can be used to keep real-world testing to a minimum. Hardware-in-the-loop (HIL), Hybrid Simulation (HS) and Pure Simulation (PS) are common pre-real-world testing methods. However, the testing harness created is usually very specific to the platform. This creates a problem when the user requires testing of multiple heterogeneous platforms in heterogeneous environments: normally this requires many platform-specific test harnesses, and creating them is often time-consuming and expensive. Therefore, many integration tests are left until real-world trials, which is less than ideal.

Real-world testing is not always feasible due to the high cost involved, so it is beneficial to test the systems in a laboratory first. One method of doing this is via pure simulation (PS) of data for each of the platform's systems. This is not a very realistic scenario, as it does not test the actual system as a whole and only focuses on individual systems within a vehicle. The problem with PS alone is that system integration errors can go undetected until later stages of development, since that is when different modules are first tested working together. This can lead to problems late in the testing cycle, by which time they are harder to detect and more costly to rectify. Therefore, as many tests as possible should be done in a laboratory.

A thorough testing cycle for a remote platform would include HIL, HS and PS testing scenarios. For example, an intuitive testing harness for HIL or HS would include: a 3D virtual world with customisable geometry and terrain, allowing for operator observation; a sensor simulation suite providing exteroceptive sensor data which mimics the real-world data interpreted by the higher-level systems; and a distributed communication protocol allowing real systems to be swapped for simulated ones running in different locations (a minimal sketch of these ingredients is given below). Thorough testing of the remote platform is usually left until the later stages of development because creating a test harness for every platform can be complicated and costly. Therefore, when considering a testing harness, it is important that it is re-configurable and very generic in order to accommodate all required testing scenarios. The ability to extend the testing harness with specialised modules is also important, so that it can be used to test specialised systems. In short, a dynamic, extendible testing framework is required that allows the user to create modules in order to produce the testing scenario quickly and easily for their intended platform and environment.
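As a rough illustration of the three harness ingredients just listed, the sketch below shows one shape such a harness could take in Java. The interfaces, class names and method signatures are illustrative assumptions only; they are not ARF's actual API.

```java
// Illustrative sketch (not ARF's API): the three harness ingredients named above,
// kept behind small interfaces so that a real device and its simulated stand-in
// can be swapped without touching the higher-level systems under test.
import java.util.List;
import java.util.function.Consumer;

interface VirtualWorld {                          // 3D world with customisable geometry
    void setVehiclePose(double[] positionNED, double[] attitudeRPY);
    List<double[]> objectsNear(double[] positionNED, double radius);
}

interface RangeSensor {                           // common view of a real or simulated sonar
    double[] scan();                              // one set of range returns
}

final class SimulatedSonar implements RangeSensor {
    private final VirtualWorld world;
    private final double maxRange;

    SimulatedSonar(VirtualWorld world, double maxRange) {
        this.world = world;
        this.maxRange = maxRange;
    }

    @Override
    public double[] scan() {
        // A real implementation would ray-cast against the virtual world;
        // a fixed stub keeps the sketch short.
        return new double[] { maxRange, maxRange, maxRange };
    }
}

interface Transport {                             // distributed link between real and simulated ends
    void publish(String topic, double[] data);
    void subscribe(String topic, Consumer<double[]> handler);
}
```

Because the systems under test only ever see the RangeSensor and Transport interfaces, a real sonar driver and the SimulatedSonar above can be exchanged without modifying them, which is exactly the kind of swapping the distributed protocol is meant to support.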
2. Methods of testing
Milgram's Reality-Virtuality continuum (Takemura et al., 1994), shown in Figure 1, depicts the continuum from reality to virtual reality and all the hybrid stages in between. The hybrid stages between real and virtual are known as augmented reality (Behringer et al., 2001) and augmented virtuality. The hybrid-reality concepts are built upon by the ideas of Hardware-in-the-loop (HIL) and Hybrid Simulation (HS). Figure 1 shows how the different types of testing map onto the different types of mixed reality in the continuum. There are four different testing types:
1. Pure Simulation (PS) (Ridao et al., 2004) - testing of a platform's modules on an individual basis, before they are integrated onto the platform with other modules.
2. Hardware-in-the-loop (HIL) (Lane et al., 2001) - testing of the real, integrated platform carried out in a laboratory environment. Exteroceptive sensors such as sonar or video, which interact with the intended environment, may have to be simulated to fool the robot into thinking it is in the real world. This is very useful for integration testing, as the entire system can be tested as a whole, allowing any system integration errors to be detected in advance of real-world trials.
3. Hybrid Simulation (HS) (Ridao et al., 2004; Choi & Yuh, 2001) - testing of the platform in its intended environment in conjunction with some simulated sensors driven from a virtual environment. For example, virtual objects can be added to the real world and the exteroceptive sensor data altered so that the robot thinks that something in the sensor dataset is real. This type of system is used if some higher-level modules are not yet reliable enough to be trusted to behave as intended using real data. Consequently, fictitious data is used instead, augmented with the real data, and input to the higher-level systems; thus, if a mistake is made, it does not damage the platform. An example of this is discussed in Section 4.2.
4. Real-world testing - this is the last stage of testing. When all systems are trusted, the platform is ready for testing in the intended environment. All implementation errors should have been fixed in the previous stages, otherwise this stage is very costly. For this stage to be as useful as possible, the system designers and programmers need reliable, intuitive feedback, in a virtual environment, about what the platform is doing; otherwise problems can be very hard to see and diagnose.

ARF provides functionality across all stages of the continuum, allowing virtually any testing scenario to be realised. For this reason it is referred to as a mixed-reality framework. In the case of Augmented Reality, simulated data is added to the real-world perception of some entity. For example, sonar data on an AUV could be altered so that it contains fictitious objects, i.e. objects which are not present in the real world but which are present in the virtual world. This can be used to test the higher-level systems of an AUV, such as obstacle detection (see the obstacle detection and avoidance example in Section 4.2). A virtual world is used to generate synthetic sensor data, which is then mixed with the real-world data. The virtual world has to be kept in precise synchronization with the real world; this is commonly known in Augmented Reality as the registration problem. The accuracy of registration depends on the accuracy of the position/navigation systems onboard the platform.
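To make the sonar example concrete, the sketch below shows one way synthetic returns from a purely virtual object could be merged into a real scan. The beam model, the spherical obstacle and all names are illustrative assumptions, not ARF's implementation; note that the vehicle position supplied by the navigation system enters the computation directly, which is where registration accuracy matters.

```java
// Hedged sketch of hybrid-simulation style data mixing: wherever a virtual sphere
// would produce a shorter echo than the physical sonar did, the real range is
// replaced, so higher-level modules see the fictitious obstacle as if it were real.
public final class SonarAugmenter {

    /** A fictitious spherical obstacle placed in the world frame (illustrative). */
    public static final class VirtualSphere {
        final double x, y, radius;
        public VirtualSphere(double x, double y, double radius) {
            this.x = x; this.y = y; this.radius = radius;
        }
    }

    /**
     * @param realRanges ranges from the physical sonar, one per beam (metres)
     * @param beamAngles world-frame bearing of each beam (radians)
     * @param vx         vehicle x position from the navigation system
     * @param vy         vehicle y position from the navigation system
     * @param obstacle   virtual object to inject into the data
     */
    public static double[] augment(double[] realRanges, double[] beamAngles,
                                   double vx, double vy, VirtualSphere obstacle) {
        double[] out = realRanges.clone();
        double dx = obstacle.x - vx;
        double dy = obstacle.y - vy;
        for (int i = 0; i < out.length; i++) {
            // 2D ray-circle test: distance along the beam and offset across it.
            double along  =  dx * Math.cos(beamAngles[i]) + dy * Math.sin(beamAngles[i]);
            double across = -dx * Math.sin(beamAngles[i]) + dy * Math.cos(beamAngles[i]);
            if (along > 0 && Math.abs(across) < obstacle.radius) {
                double hit = along - Math.sqrt(obstacle.radius * obstacle.radius - across * across);
                if (hit > 0 && hit < out[i]) {
                    out[i] = hit;             // the virtual echo is closer, so it wins
                }
            }
        }
        return out;
    }
}
```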
Registration is a well-known problem with underwater vehicles when trying to match different sensor datasets to one another for visualisation. Accurate registration is paramount for displaying the virtual objects in the correct position in the simulated sensor data.

Fig. 1. Reality Continuum combined with Testing Types.

Augmented Virtuality is the opposite of augmented reality: instead of taking the robot's or person's perspective, it takes the virtual world's perspective - the virtual world is augmented with real-world data. For example, real data collected by an AUV's sensors is rendered in real time in the virtual world in order to recreate the real world in virtual reality. This can be used for Online Monitoring (OM) and operator training (TR) (Ridao et al., 2004). It allows an AUV/ROV operator to see how the platform is situated in the remote environment, thus increasing situational awareness.

In Hybrid Simulation the platform operates in the real environment in conjunction with some sensors being simulated in real time by a synchronized virtual environment. As in Augmented Reality, the virtual environment is kept in synchronization using position data transmitted from the remote platform. Simulated sensors are attached to the virtual platform and moved around in synchronization with the real platform; they collect data from the virtual world and transmit it back to the real systems on the remote platform, which then interpret this data as if it were real. It is important that the simulated data is very similar to the real data, so that the higher-level systems cannot distinguish between the two. In summary, the real platform's perception of the real environment is being augmented with virtual data; hence HS is inherently Augmented Reality. An example of a real scenario where AR testing procedures are useful is obstacle detection and avoidance by an AUV in the underwater environment (see the obstacle detection and avoidance example in Section 4.2).

Hardware-in-the-Loop (HIL) is another type of mixed-reality testing technique. This type of testing allows the platform to be tested in a laboratory instead of in its intended environment. This is achieved by simulating all required exteroceptive sensors using a virtual environment. Virtual sensor data is then sent to the real platform's systems in order to fool them; in essence, this is simply virtual reality for robots. Concurrently, the outputs of [...]

[...] which scenarios can be created using ARF against scenarios created using the same JavaBean classes but programmed by hand.

Fig. 11. Scenario creation times: ARF vs. Netbeans.

For a fair test, the exact same JavaBean [...]

[...] Other applications of the DELPHÍS multi-agent architecture have been demonstrated using ARF. These include a potential scenario for the MOD Grand Challenge, which involves both surface and air vehicles working together to find targets in a village and inspect them. DELPHÍS was tested using simulated air vehicles, rather than underwater vehicles, [...]
[...] ARF is capable of identifying which Beans are Java3DBeans and therefore knows how to deal with them. The only real difference between Java3DBeans and JavaBeans is that Java3DBeans are added to the 3D virtual world part of ARF, whereas plain JavaBeans are only added as objects to the ARF BeanBoard (which keeps track of all objects). However, Java3DBeans can still communicate with any other objects in ARF's [...]

[...] flexibility of the communications protocol. Other external communications protocols are easily implemented by extending the event passing currently used by ARF's JavaBeans. ARF provides a new type of JavaBean which allows the user to create a 3D environment out of JavaBeans. It is called a Java3DBean and is based on Java3D and JavaBeans. Java3DBeans inherently have all the functionality [...]

[...] advantage that they are Java3D Scenegraph nodes. This gives them extra features and functionality, such as 3D geometry and behaviours, and the ability to interact with other Java3DBeans within the virtual world. ARF provides a user interface which extends the JavaBean PropertySheet, allowing Java3DBeans to be configured in the same way. The user is able to construct the 3D environment using Java3DBeans and decide which [...]

[...] XML file. ARF provides many utility JavaBeans and Java3DBeans which the user can use directly or extend. These include:
• Geometric shapes for building scenes
• Mesh file loaders for importing VRML, X3D, DXF and many more 3D file types
• Input listeners for controlling 3D objects with input devices (keyboard, mouse, joystick)
• Behaviours for making 3D objects do things such as animations
• Camera control [...]

[...] together the different components. Figure 3 shows the ARF graphical user interface which the user uses to create their virtual environment. The 3D virtual environment is built using the scenegraph displayed in the top left of Figure 3.

Fig. 3. ARF GUI Overview.

ARF provides many programming libraries which allow a developer to create their own components. In addition, ARF has many components [...]

[...] ARF provides a collection of special utility components for the user. These components are also provided in the programming API, to be extended by the programmer to create their own JavaBeans. Java3DBeans are merely extensions to Java3D objects which adhere to the JavaBean programming conventions.
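A minimal sketch of what a Java3DBean-style component might look like is given below: a Java3D scenegraph node that also follows the JavaBean conventions (a public no-argument constructor plus getter/setter pairs), so that a generic property sheet can configure it by reflection. The class and property names are illustrative and not taken from ARF; the classic javax.media.j3d packages are assumed.

```java
import javax.media.j3d.Transform3D;
import javax.media.j3d.TransformGroup;
import javax.vecmath.Vector3d;

// Illustrative only: a scenegraph node exposing its translation as bean properties,
// so the same property-sheet machinery that edits plain JavaBeans can edit it too.
public class PositionedNodeBean extends TransformGroup {

    private double x, y, z;                       // bean properties

    public PositionedNodeBean() {                 // JavaBean: public no-argument constructor
        setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);
    }

    public double getX() { return x; }
    public double getY() { return y; }
    public double getZ() { return z; }

    public void setX(double x) { this.x = x; update(); }
    public void setY(double y) { this.y = y; update(); }
    public void setZ(double z) { this.z = z; update(); }

    private void update() {                       // push the property values into the scenegraph
        Transform3D t = new Transform3D();
        t.setTranslation(new Vector3d(x, y, z));
        setTransform(t);
    }
}
```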