Ethics and Security Automata

Can security automata (robots and AIs) make moral decisions to apply force on humans correctly? If they can make such decisions, ought they to be used to do so? Will security automata increase or decrease aggregate risk to humans? What regulation is appropriate? Addressing these important issues, this book examines the political and technical challenges of the robotic use of force. The book presents accessible, practical examples of the 'machine ethics' technology likely to be installed in military and police robots, and also in civilian robots with everyday security functions such as childcare. By examining how machines can pass 'reasonable person' tests to demonstrate measurable levels of moral competence and display the ability to determine the 'spirit' as well as the 'letter of the law', the author builds upon existing research to define conditions under which robotic force can and ought to be used to enhance human security. The scope of the book is thus far broader than 'shoot to kill' decisions by autonomous weapons, and should attract readers from the fields of ethics, politics, and legal, military and international affairs. Researchers in artificial intelligence and robotics will also find it useful.

Sean Welsh obtained his undergraduate degree in Philosophy at the University of New South Wales and underwent postgraduate study at the University of Canterbury. He has worked extensively in software development for British Telecommunications, Telstra Australia, Volante e-business, Fitch Ratings, James Cook University, 24 Hour Communications and Lumata. He also worked for a short time as a political advisor to Warren Entsch, the Federal Member for Leichhardt in Australia. Sean's articles on robot ethics have appeared in the Conversation, CNN, the Sydney Morning Herald, the New Zealand Herald and the Australian Broadcasting Corporation.

Emerging Technologies, Ethics and International Affairs
Series Editors: Steven Barela, Jai C. Galliott, Avery Plaw, and Katina Michael

This series examines the crucial ethical, legal and public policy questions arising from or exacerbated by the design, development and eventual adoption of new technologies across all related fields, from education and engineering to medicine and military affairs. The books revolve around two key themes:

• Moral issues in research, engineering and design
• Ethical, legal and political/policy issues in the use and regulation of technology

This series encourages submission of cutting-edge research monographs and edited collections with a particular focus on forward-looking ideas concerning innovative or as yet undeveloped technologies. Whilst there is an expectation that authors will be well grounded in philosophy, law or political science, consideration will be given to future-orientated works that cross these disciplinary boundaries. The interdisciplinary nature of the series editorial team offers the best possible examination of works that address the 'ethical, legal and social' implications of emerging technologies. For a full list of titles, please see our website: www.routledge.com/EmergingTechnologies-Ethics-and-International-Affairs/book-series/ASHSER-1408

Most recent titles:
Commercial Space Exploration: Ethics, Policy and Governance, Jai Galliott
Healthcare Robots: Ethics, Design and Implementation, Aimee van Wynsberghe

Forthcoming titles:
Ethics and Security Automata: Policy and Technical Challenges of the Robotic Use of Force, Sean Welsh
Experimentation beyond the Laboratory: New Perspectives on Technology in Society, edited by Ibo van de Poel, Lotte Asveld and Donna Mehos
Ethics and Security Automata
Policy and Technical Challenges of the Robotic Use of Force
Sean Welsh

First published 2018 by Routledge, Park Square, Milton Park, Abingdon, Oxon OX14 4RN, and by Routledge, 711 Third Avenue, New York, NY 10017. Routledge is an imprint of the Taylor & Francis Group, an informa business.

© 2018 Sean Welsh

The right of Sean Welsh to be identified as author of this work has been asserted by him in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988. All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

British Library Cataloguing-in-Publication Data: A catalogue record for this book is available from the British Library.

Library of Congress Cataloging-in-Publication Data
Names: Welsh, Sean, 1963– author.
Title: Ethics and security automata : policy and technical challenges of the robotic use of force / Sean Welsh.
Description: Abingdon, Oxon ; New York, NY : Routledge, an imprint of the Taylor & Francis Group, an Informa Business, [2018] | Series: Emerging technologies, ethics and international affairs | Includes bibliographical references and index.
Identifiers: LCCN 2017021957 | ISBN 9781138050228 (hbk) | ISBN 9781315168951 (ebk)
Subjects: LCSH: Police—Equipment and supplies—Moral and ethical aspects. | Security systems—Moral and ethical aspects.
Classification: LCC HV7936.E7 W45 2017 | DDC 174/.93632—dc23
LC record available at https://lccn.loc.gov/2017021957

ISBN: 978-1-138-05022-8 (hbk)
ISBN: 978-1-315-16895-1 (ebk)

Typeset in Times New Roman by Apex CoVantage, LLC

Contents

List of figures
List of tables
Introduction: Security automata and world peace; Aim; The business case for the reasonable robot; Three levels of testing; Machine ethics and ethics proper; Differences between humans and robots as moral agents; Advantages of the ethical robot; Objections to the ethical robot; Objections to machine ethics; Ethical scope; Technical scope; What machine ethics hopes to accomplish; Main contributions; Outline; References
1 Concepts: Logic concepts; Moral concepts; AI concepts; Software development concepts; Robotics concepts; References
2 Method: Test-driven development as a method of machine ethics; Key details of the method; Aims of the method; References
3 Requirements: Specific norm tests; Reasonable person tests; Turing machine; Human-readable representations; References
4 Solution design: Top-down and bottom-up; Quantitative and qualitative; Hybrids; Overview; The ethical governor; The practice conception of ethics; Triple theory; A deontic calculus of worthy richness; Preferred approach to normative system design; References
5 Development – specific norm tests: Deontic predicate logic; Introduction to test cases; Speeding camera (speeding); Speeding camera (not speeding); Speeding camera (emergency services vehicle); Speeding camera (emergency); Bar robot (normal); Bar robot (minor); Bar robot (out of stock); Bar robot (two customers); Bar robot (two robots); Drone (tank in the desert); Drone (tank next to hospital); Drone (two tanks); Drone (two drones); Safe haven (warning zone); Safe haven (kill zone); Safe haven (no-fly zone); References
6 Development – knowledge representation: Formalizing reasonable person tests; References
7 Development – basic physical needs cases: Normative bedrock; Basic needs vs instrumental needs; Needs vs wants; A note on basic social needs; Postal rescue (one letter); Spacesuit breach; Hab malfunction; Postal rescue (ten million and one letters); Transmitter room; Moral controversy; Trolley problem critics; Classic trolley problems; Stipulation of correct answers to classic trolley problems; Choices, consequences and evaluations; The doctrine of double effect; Alternatives to the doctrine of double effect; Variations on the classic trolley problems; Footbridge (employee variation); Switch (one trespasser five workers); Switch (five trespassers five workers variant A); Switch (five trespassers five workers variant B); Switch (one worker five trespassers variant); Swerve; No firm stipulation; References
8 Development – fairness and autonomy cases: Rawls on rightness and fairness; Dive boat; Landlord; Note on the relation of need to contract in Dive Boat and Landlord; Gold mine (wages); Gold mine (profit sharing); Formalizing fairness; The rocks (majority); Rehabilitating Rawls; The Viking at the Door; The criteria of right and wrong; The problem of standard motivation; Mars rescue; Summary; References
9 Moral variation: Cultural and moral relativism; Globalized moral competence in social robots; Moral controversy; Switch (minority); The rocks (minority); Patient appeal; Amusement Ride (patient dispute); Summary; References
10 Testing: Testing symbol grounding; Testing specific norms; Testing the prioritization of clashing duties (reasonable person tests); Risks of robotic moral failure; Mitigating risks of robotic moral failure; Reference
11 Production: Regulating moral competence in security automata; Regulating security automata more generally; Sizing normative systems; Towards the reasonable robot; Summary; Future research; References
Index

Figures

1.1 Prover GUI setup to prove Socrates is mortal
1.2 Prover proof screen
1.3 Kinds of moral theory as described by virtue ethicists
1.4 Kinds of moral theory according to Reader
1.5 A Turing machine with Morse code translation rules
2.1 Wff in, imperative out
4.1 Tech trajectory
4.2 A directed graph
4.3 A directed graph in Neo4j
4.4 State-act transition as a directed graph
6.1 State-act-state transition graph
6.2 Causal inference as a graph
6.3 Beating animals and cruelty
6.4 Cat is in the class Animal
6.5 Puss is in the class Cat
6.6 Consequences of beating animals
6.7 Lying to Fred consequences
7.1 Causal sequence deriving from Submerged(x) and evaluation
7.2 Double effect in Cave (blow up fat man)
7.3 Double effect in Cave (do nothing)
7.4 Addition of graphs to represent doctrine of double effect in Cave
7.5 Addition of graphs to represent doctrine of double effect in Hospital
7.6 Amended graphs for Switch
7.7 Footbridge amended for doctrine of double effect
7.8 Addition of graphs for agony and mistrust to Hospital
9.1 Reactive duty – Gran not allowed on ride
9.2 The case for Gran (at first sight)
9.3 N-rules
9.4 Supererogatory means to get Gran on the ride
10.1 Mesh architecture for a database-driven web application
Production

… after the initial processes of designing, building and testing a weapon and before the autonomous weapon is configured and activated. The end point of the policy loop is "signing off" on a configuration for an autonomous weapon just before activation. Activation refers to the point at which a human decides to turn a configured weapons system on and enable autonomous targeting operations. The firing loop refers to the select, confirm/abort and engage phases. In "human in the loop" firing architectures, a human can be required to confirm a select decision prior to an engage decision being passed to the actuators. In "human on the loop" firing architectures, a human can intervene to abort a robot-made decision to fire or to override a robot-made decision not to fire. The design of the drone in Arkin (2009) requires a single operator to abort a fire decision and two operators to override a no-fire decision. Deactivation refers to the point at which a human decides to turn a weapons system off or orders it to cease firing and return to base.

One might refer to these five points as opportunities for "meaningful human control" or "effective legal control" in "the wider loop" of autonomous weapons. On the basis of this notion of "meaningful human control in the wider loop," some will defend certain "no human in the firing loop" systems provided there are humans in the policy loop. On this analysis, the anti-personnel mines at Fort McAllister in 1864 had humans in the policy loop, in that Major Anderson, the Confederate commander, directed his men to lay these devices in a certain place and activate them, knowing the lethal effect they would have on General Sherman's forces. There were no humans in the firing loop, as the Confederates could not intervene manually to confirm or abort "decisions" by the mines to detonate. Obviously, the "decision" was based on following a very simple "step here and you die" rule, which is just as simple as the Speeding Camera case. However, I would take the view that the number of symbols in the rule is not the essential moral point. The essential moral point is that the lethal decision has been delegated to a machine that will operate according to a human-defined rule (or set of rules) that is triggered by sensor data. It does not seem greatly important to me whether the robots harm humans according to simple rules based on one or two symbols grounded in sensor data (like the Speeding Camera case) or according to complex rules based on multiple symbols grounded in sensor data (as in the Drone and Safe Haven cases). As Leveringhaus (2016) observes, the question is not so much responsibility as risk. If the robots can perform the military mission as well as or better than humans with less risk of targeting error, then to justify a ban one has to fall back on a moral claim that there is a fundamental human right not to be killed by a machine (even if this machine is reliably operating according to human-defined and approved targeting rules).

There seems to be a general diplomatic consensus that an AI should not have a Skynet-like ability to come up with its own targeting rules and execute them on the fly without any human review of policy or firing decisions. One might, in certain circumstances, tolerate "full autonomy" in the firing loop, but one cannot, in my view, tolerate "full autonomy" over the entire "wider loop" of an autonomous weapon.
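The "human in the loop" and "human on the loop" firing architectures, and the wider-loop control points (policy sign-off, activation, deactivation) described above, can be sketched in code. The following is only an illustrative sketch: the class and function names are hypothetical, it is not the design of any fielded system or of the book's own software, and the one-operator abort / two-operator override asymmetry simply mirrors the description of Arkin's (2009) drone design given above.

```python
from dataclasses import dataclass
from enum import Enum


class Mode(Enum):
    HUMAN_IN_THE_LOOP = "in"   # a human must confirm each fire decision
    HUMAN_ON_THE_LOOP = "on"   # the robot engages unless a human intervenes


@dataclass
class Engagement:
    target_id: str
    robot_decision: str        # "fire" or "no-fire", proposed by the targeting system
    confirmations: int = 0     # human confirmations (in-the-loop mode)
    abort_votes: int = 0       # operators aborting a fire decision (on-the-loop mode)
    override_votes: int = 0    # operators overriding a no-fire decision (on-the-loop mode)


@dataclass
class WiderLoop:
    policy_signed_off: bool = False   # humans reviewed and approved the targeting rules
    activated: bool = False           # a human turned autonomous targeting on
    deactivated: bool = False         # a human ordered cease fire / return to base
    mode: Mode = Mode.HUMAN_IN_THE_LOOP

    def may_engage(self, e: Engagement) -> bool:
        # No engagement at all without humans in the wider loop.
        if not (self.policy_signed_off and self.activated) or self.deactivated:
            return False
        if self.mode is Mode.HUMAN_IN_THE_LOOP:
            # A robot fire decision is actuated only after human confirmation.
            return e.robot_decision == "fire" and e.confirmations >= 1
        # Human on the loop: one operator suffices to abort a fire decision;
        # two operators are required to override a no-fire decision.
        if e.robot_decision == "fire":
            return e.abort_votes == 0
        return e.override_votes >= 2


loop = WiderLoop(policy_signed_off=True, activated=True, mode=Mode.HUMAN_ON_THE_LOOP)
print(loop.may_engage(Engagement("t1", "fire")))                       # True: no abort
print(loop.may_engage(Engagement("t1", "fire", abort_votes=1)))        # False: aborted
print(loop.may_engage(Engagement("t2", "no-fire", override_votes=1)))  # False: needs two overrides
```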
Personally, I would insist on an absolute minimum requirement for human review and approval of any targeting policies initiated by an AI. Targeting policies initiated by a human are typically reviewed by staff officers and approved by senior commanders. Even if an advanced war-fighting AI was capable of deciding on a targeting strategy (as distinct from human staff officers), I would still expect the output of such an AI to be reviewed and approved by humans in much the same way as a human staff officer's targeting plan would be reviewed and discussed by other officers, who might object to elements of the plan and suggest revisions.

The requirement for human-readable moral code articulated in the Requirements chapter assumes that human processes of review and approval in the policy loop will need inspectable and transparent moral code for all normative decisions made by robots and autonomous systems. This requirement for inspectable code need not be confined to military robots and autonomous weapons; it applies to any robot making security decisions involving human beings, or indeed any decision regarding human well-being.

Following and initiating rules

An AI might initiate a rule. That is, an AI might discover a rule from patterns recognized in training data or observational data of the real world. Alternatively, an AI might create a rule as a way of working through options of normative constraints by exploring a theoretical model. Either way, in the cases where the AI discovers or creates rules, the AI can be said to be initiating a rule as distinct from following a rule keyed in by humans. This distinction between rule initiation and rule following is morally and technically significant. A machine-learning AI of sufficient sophistication could, in theory, initiate policy rather than merely follow policy. I do not object to AIs initiating policy. I object to AIs executing policy that they have initiated without any intermediate step of human review and approval. In the same way as humans initiating policy rules (e.g. laws) are subject to review and approval (e.g. in legislative assemblies that debate such measures and vote on them), so AIs initiating policy rules (e.g. targeting rules) should be subject to review and approval (e.g. by staff officers, military lawyers, policy advisors) before such rules are installed in war-fighting machines and before they are activated by humans. I see no case for a belligerent AI making and executing inscrutable and unreviewed targeting policy on the fly. There is a role for AI in policy development, but AI-generated policy should be subject to human review in the same way as human-generated policy is.

As already noted, Galliott (2015) distinguishes between operational autonomy and moral autonomy. Operational autonomy corresponds to the "operation without a human operator for a protracted period of time" definition in Bekey (2005) and the "no human finger on the trigger" definition assumed in Arkin (2009). On this definition, operationally autonomous weapons have existed since 1863 and thus cannot be "pre-emptively" banned; they would be "retrospectively" banned. Such weapons include anti-tank mines and naval mines. The oldest autonomous weapons are "off the [firing] loop" weapons such as these. An operationally autonomous version of Patriot with the human removed from the loop is quite capable of shooting down enemy planes and missiles with a high degree of accuracy.
Indeed, once activated, Phalanx and C-RAM shoot down targets without waiting for a human on the loop to confirm each targeting selection. Humans can intervene to stop the robot from firing, but due to the high speed of air targets, the design assumption is that events will be moving too fast for human control over individual attacks. For example, it would not be realistic to expect a human operator to hit a confirm button 20 times for an incoming barrage of 20 artillery shells or rockets fired by the enemy simultaneously. At some point, the human has to trust the machine.

An autonomous drone built to the design of Arkin (2009) could be operationally autonomous. Once launched it would have a human on the loop. If the human did nothing to override the robot decisions, then the robot would be "deciding" to select and engage targets. Thus, a robot might well be "deciding to kill" humans. This is a major point of division. There is no technical reason why such a weapon could not be assigned to hunt tanks in the desert or warships at sea with existing technology. As described in the Drone and Safe Haven scenarios, such delegation is technically possible. However, the fact that it is technically possible does not establish that it is morally acceptable. Just because one can delegate lethal decisions to machines does not prove that one ought to delegate lethal decisions to machines.

Humanitarian argument for autonomous weapons

Arkin (2010) argues that the use of robots in battle may reduce collateral damage (i.e. civilian casualties and damage to civilian property). He presents six points in support of his claim that in future operationally autonomous robots might be able to out-perform humans in combat conditions with respect to full compliance with IHL and minimizing collateral damage:

1) Conservative action. Robots do not have to preserve themselves. They can wait to be fired upon before returning fire. They can act in a self-sacrificing manner to reduce the risk of collateral damage.
2) Better sensors. Even today, robots in some cases have better sensors than humans. For example, certain robots can perceive the trajectory of bullets and missiles in real time. Thus, they can respond to sniper fire knowing exactly where the sniper has fired from.
3) No emotions clouding judgement. Robots have no feelings of fear, panic, anger or revenge. Unlike humans, they can be completely rational in battle.
4) No psychological vulnerability to "scenario fulfilment." Humans can commit to a certain model of a situation and refuse to consider new information except insofar as it confirms existing beliefs. Robots will be able to change their representations, and thus their action selections, when new data is sensed.
5) Better information integration. Robots will be able to integrate information from multiple sources faster than humans. On this basis, they will be able to make better combat decisions.
6) Ethical umpire. In mixed human/robot teams, robots will be able to monitor humans for violations of rules of engagement and IHL. They will be able to report such violations. The mere presence of such systems could improve battlefield performance.

In the same paper, Arkin points out the well-known failings of human combatants. He sums up the case for the use of ethical autonomy in unmanned systems as follows:

[B]attlefield atrocities, if left unchecked may become progressively worse, with the progression of standoff weapons and increasing use of technology. Something must be done to restrain the technology itself, above and beyond the human limits of the warfighters themselves. This is the case for the use of ethical autonomy in unmanned systems. (p. 338)
The constraining rules shown in the Drone and Safe Haven cases illustrate how the technology could be designed to restrain itself.

Fundamental arguments against autonomous weapons

Even so, there are those who still maintain that delegating lethal decisions to machines is fundamentally objectionable on moral grounds. Arguments have been made asserting there is a right to dignity even more fundamental than the right to life, and that when decisions are being made in real time to take human life, the right to dignity requires that a human confirm such decisions (Sparrow 2012; Heyns 2015; Lin 2015). Such a view would restrict autonomous weapons to "human in the firing loop" designs.

There are other more pragmatic reasons for requiring "human in the firing loop" architectures. There is the operational risk of fratricide due to programming or configuration error, or indeed hacking by adversaries (Scharre 2016). Having mixed human/robot teams where humans work in close collaboration with robots (via telepiloting) might reduce the risks of targeting error to levels lower than those with robots alone or humans alone. Regardless of whether a robot or human has "final say" in a real-time targeting decision, if time permits, the robustness of the decision procedure will be enhanced by human/robot teaming. Humans and robots have very different strengths and weaknesses. A human/robot team can have the best of both worlds (Defense Science Board 2012).

Treaty instrument

For these reasons, it seems to me that "no human on the firing loop" architectures require exceptional justifications on the grounds of military necessity. They should not be the "default" option for autonomous weapons. However, some missions have an extremely fast operational tempo requiring split-second decision making (e.g. certain forms of air war such as anti-missile systems). Telepiloting is not always viable (e.g. with uninhabited underwater vehicles, UUVs). Missions against sophisticated adversaries may carry a high risk of cyber-jamming that would eliminate reliable telepiloting. Numerous "no human on the firing loop" weapons such as anti-tank mines and anti-ship mines exist. These are accepted as lawful and widely fielded. Thus, there are many cases in which "no human on the firing loop" autonomy can be justified.

That said, even if "no human in the firing loop" architectures may be justified in some cases, I see no justification for "no human in the wider loop" architectures at all. One might permit an AI to initiate targeting rules, but such rules should be reviewed and approved by humans. At a minimum, there should always be "humans in the wider loop" at multiple points (i.e. review and approval of policy, activation, deactivation). Unless there is a strong case to the contrary, a "human in the firing loop" architecture will be better in terms of accountability than a "human on the firing loop" architecture. However, in cases of fast operational tempo (e.g. the existing Phalanx and C-RAM systems), there is a case for "human on the firing loop" architectures. Humans should always be able to deactivate autonomous weapons that are "running amok." If such weapons are fielded on particular missions where deactivation is not practicable (e.g. penetrating the heavily defended Faraday cage in the drug lord's subterranean bunker), they should at least have a time limit on their offensive functionality (this is similar to the technical requirements on landmines defined in Protocol II of the CCW).
Humans should always be held responsible for the actions of autonomous weapons by signing off on the configuration at activation. Humans should always approve targeting rules, even if they are AI initiated. Should the international community see fit, these normative requirements could be enshrined in a treaty instrument. For example, a Protocol VI regulating autonomous weapons could be added to the existing Convention on Certain Conventional Weapons, or a new treaty could be drafted.

Regulating security automata more generally

Outside the realm of autonomous weapons, security automata in everyday civilian life will need regulation as well. Policy makers will need to make decisions regarding liability for the actions of robots and perhaps consider mandatory insurance schemes to cover losses due to erroneous programming, misconfiguration, sensor failure, actuator failure and so on. Such mandatory insurance could be modelled on existing arrangements in many jurisdictions to cover damage resulting from car accidents.

There are a great many things that can go wrong with a morally competent social robot, but unlike many people, I do not suppose that these challenges are fundamentally new and cannot be met by the application of existing legal concepts of tort and product liability. Some have supposed that there is a "responsibility gap" or "accountability gap" in the use of robots. However, I favour the view that the problem with robots is more a "problem of many hands" than a question of there being a "gap" that is incapable of being covered. Should a robot R made by A, from components assembled by B, C and D, with programming done by E, F and G, configured by H and insured by I, do wrongful injury to the person of Z, then Z's counsel will, no doubt, litigate and bring an action naming A, B, C, D, E, F, G, H and I as co-respondents. Suing R, a machine that has only operational autonomy and no moral autonomy, strikes me as pointless in the short term. Some have speculated that robots could be incorporated and made legal persons. The robot would be its own company and become a "legal person." A sufficiently advanced robot might even conduct its own litigation. Such futuristic speculations are interesting but well beyond the scope of this book. In the short term, I imagine forms of insurance will continue to evolve to cover the liability risks of fielding robots in civil society, similar to the risks associated with other forms of complex machinery.

Sizing normative systems

A particular normative competence (as in, say, Speeding Camera) requires that the robot have a particular normative vocabulary. The size of the normative vocabulary is a metric for the general competence of the normative system. This metric has three key components.

The first component is the number of symbols. Some symbols need to be grounded in sensor data: Speeding(x) can be grounded in sensor data from a radar gun. Some items of normative vocabulary might remain beyond the state of the art of symbol grounding for some time; for example, Stylish(x), Beautiful(x) and Delicious(x) would be very difficult for robots to ground in raw sensor data. Other symbols do not need to be grounded in sensor data; for example, logical connectives do not need such grounding.

The second component is the number of rules that use the symbols. These rules might take the form of a graph-based knowledge representation.
Metrics for the normative competence of the system can be derived from the number of nodes, edges and properties in the graph database that makes up the knowledge base of the normative system. Some of these symbols will be grounded in sensor data. Others (e.g. taxonomic or ontological symbols) will be stored in the robot's cognition (e.g. in a database). For the more challenging normative problems, there need to be rules that deal with clashes between rules.

The third component is the number of imperatives the robot's actuators have. While strictly speaking imperatives are just another symbol, they have the special characteristic of crossing the line between cognition and actuation. These can also be represented in the graph-based knowledge representation. Thus, the normative vocabulary of the speeding camera includes one sensor-grounded symbol, Speeding(x), one imperative (if one implements the maxim as per option B above), basic logical connectives (which you might not bother counting) and variables for a human x and a robot u, plus a binary predicate for DUTY that relates an imperative term to an agent term (which one might treat the same as basic logical connectives and not bother counting). Thus, we could say the speeding camera has a normative vocabulary metric of 4: Speeding(), x, u, issueTicket().

Overall, the size of the normative vocabulary thus conceived gives a metric of the level of competence. With precise definition, such a metric would be countable and could be used for sizing normative systems projects in much the same way as the "data attribute movements" defined in COSMIC (2015) can be used for sizing web applications projects. For truly accurate sizing, the count of representations needs to be based on refactored code where opportunities for code reuse have been identified and the code refactored to make it as elegant and coherent as possible. Naturally, such metrics are merely indicators of functional size. Functional size, in turn, is merely an indicator of moral competence. The final assessment of the moral competence of a security robot, or a social robot with security functions, will be made by humans interacting with it.
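A minimal sketch of this counting rule, in Python, may make it concrete. The data structure and names below are hypothetical illustrations, not code from the book; the count simply follows the convention just described (grounded symbols, variables and imperatives are counted; connectives and the DUTY predicate are not).

```python
from dataclasses import dataclass


@dataclass
class NormativeVocabulary:
    grounded_symbols: list[str]    # predicates grounded in sensor data, e.g. Speeding(x)
    variables: list[str]           # agent/patient variables, e.g. x (human), u (robot)
    imperatives: list[str]         # imperative terms wired to actuators, e.g. issueTicket(x)
    connectives: list[str]         # logical connectives: present but not counted
    deontic_predicates: list[str]  # e.g. DUTY: treated like connectives, not counted

    def size(self) -> int:
        """Functional-size metric: count only the items the convention counts."""
        return len(self.grounded_symbols) + len(self.variables) + len(self.imperatives)


speeding_camera = NormativeVocabulary(
    grounded_symbols=["Speeding"],
    variables=["x", "u"],
    imperatives=["issueTicket"],
    connectives=["->", "&", "-"],
    deontic_predicates=["DUTY"],
)

assert speeding_camera.size() == 4   # Speeding(), x, u, issueTicket()
```

A more realistic version would count nodes, edges and properties in the graph database itself, but the principle is the same: a countable proxy for functional size, not a measure of wisdom.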
Towards the reasonable robot

This work has aimed to show that a robot can pass many "reasonable person" tests. By so doing, a social robot can reach measurable levels of moral competence. This has been illustrated with test cases that relate mostly to the preservation of human security, construed as avoiding death and physical pain and suffering. It has been made clear that while a robot cannot possess "full virtue" exactly like a human, it can demonstrate levels of moral competence in specific domains close to that of the "reasonable person," the well-known "passenger on the Clapham Omnibus" frequently evoked in the common law. A robot can have "two-thirds virtue" – the ability to select the right action for the right reasons – without any feelings. Even so, while a robot may have moral competence, this does not entail that it has moral responsibility.

Reasonable person tests involve making a correct decision when two or more moral principles clash. For example, in the classic trolley problems, the clashing principles were "save life" and "don't kill." To pass these reasonable person tests, the robot first has to pass symbol grounding tests and specific norm tests. These tests are not wide-ranging and generic like the Turing Test but particular and specific.

I have not proceeded on the assumption that there is a "correct moral theory" and sought to implement it in code. Rather, a method of test-driven development has been used to discover criteria of right and wrong and decision procedures sufficient to pass a range of test cases. This method has proceeded by describing moral dilemmas in terms of a situation report and a choice of action between two (or more) options. The test has been to select the correct imperative action as output on the basis of the input of the situation report. Rules have been taken from various moral theories (deontology, utilitarianism, virtue ethics, needs theory and contractualism) and used to develop "moral code" that can pass the tests. The code incorporates a graph-based knowledge representation and a deontic predicate logic. The logic has been simple and conservative, being no more than a deontic application of predicate logic. More elaborate reasoning choices characteristic of the "standard modal approach to deontic logic" have not been used. Instead, it has been argued that the "heavy lifting" in a "deontic calculus of worthy richness" should be done in the graphs of the knowledge representation rather than in the reasoning. The graphs of the knowledge representation include causal, state-act-state transition, classification and evaluation relations. These have been combined to evaluate options and provide correct answers for reasonable person tests.

The passing of reasonable person tests does not make a robot a person. It merely shows that a programmed artefact can pass similarly worded tests to those passable by a reasonable human. Such tests can be automated, version-controlled and checked out to run against versions of the knowledge representation in quality assurance processes. Such tests could be used for functional testing, regression testing and even load testing. Obviously, as more tests were added, the knowledge representation would expand as the normative vocabulary expanded.
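The kind of automated test described above can be expressed with ordinary software test tooling. The sketch below is a pytest-style illustration only: the select_action function and the dictionary encoding of the situation report are hypothetical stand-ins for the book's Neo4j/Prover9 implementation (not reproduced in this excerpt), and the hard-coded rule is a crude double-effect-style heuristic rather than the book's decision procedure. The expected outputs follow the commonly stipulated answers to Switch and Footbridge.

```python
# test_reasonable_person.py -- illustrative sketch; run with `pytest`.

def select_action(report: dict) -> str:
    """Toy decision procedure for a clash between "save life" and "don't kill":
    saving more lives wins only if the killing is a side effect of the act,
    not the means by which the lives are saved."""
    if report.get("act_kills") and not report.get("killing_is_side_effect"):
        return "doNothing"
    if report.get("lives_saved", 0) > report.get("lives_lost", 0):
        return report["candidate_act"]
    return "doNothing"


def test_switch_majority():
    report = {"candidate_act": "divertTrolley", "lives_saved": 5, "lives_lost": 1,
              "act_kills": True, "killing_is_side_effect": True}
    assert select_action(report) == "divertTrolley"


def test_footbridge():
    report = {"candidate_act": "pushMan", "lives_saved": 5, "lives_lost": 1,
              "act_kills": True, "killing_is_side_effect": False}
    assert select_action(report) == "doNothing"
```

Version-controlling such tests and re-running them whenever the knowledge representation changes gives exactly the regression safety net the text describes.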
How large would the normative vocabulary have to be to achieve human-level moral competence across all domains? This is difficult to say exactly, but a typical human-level vocabulary might contain perhaps five to ten thousand words. However, there would need to be a great number of relations among these words: perhaps hundreds or thousands of relations per word. Given the variable number of words in rules and the relations among rules, it is fairly easy to arrive at a very rough estimate of the functional size of human moral competence in the order of billions. It so happens that the number of neurons in the human brain is of a similar order of magnitude. One supposes they all contribute one way or another to solving the problems of action selection that enable humans to survive, thrive and flourish.

It is a long way from the moral competence of Speeding Camera, with its functional size of 4, to the functional size of billions required for moral competence at human levels. Attaining such levels of moral competence in robots is a huge and daunting project, requiring a great many symbols and a great many tests. In the short term, I think it wiser to aspire to build robots with less ambitious functional size. Rather than try to build "human-level" moral competence across all domains, it is easier to build robots that are more like superintelligent insects with particularly high levels of competence in relatively restricted but still reasonably general domains. A key feature of moral competence in multiple domains will be code reuse.

All that said, sizing metrics should only be taken as indicative of the moral competence of the robot. Counts of such things as the number of grounded symbols and the number of maxims will provide countable indicators of competence. Such counts should not be taken as definitive of competence. They are a sizing metric and nothing more. I do not suppose that a robot with two-thirds virtue will be as wise or virtuous as a human with full virtue. Even so, I conceive of the ethical robot, correctly programmed, as being a useful servant that will free human beings from many dull, dirty and dangerous tasks.

Summary

To sum up, in this book I have expanded the design of Arkin's "ethical governor" to be capable of passing "reasonable person tests" as well as "specific norm tests." Arkin's design copes with what I have termed "specific norm tests." It does not cope with clashing duties such as those in the classic trolley problems. The design presented shows how Arkin's design can be expanded to cope with clashing duties and thus to pass what I have termed "reasonable person tests." Three levels of testing have been defined: symbol grounding tests, specific norm tests and reasonable person tests. Following a suggestion of Madl and Franklin (2015), the software method of test-driven development has been applied to machine ethics and demonstrated in detail.

A novel addition to the analysis of the "classic" trolley problems (Switch, Footbridge, Cave and Hospital) has been presented. Existing analyses distinguish between killing and letting die, employ the doctrine of double effect, or appeal to remote effects to solve the trolley problems. I have added an analysis that can solve trolley problems with reference to collective intentionality, risk assumption and negative desert of the patients.

A dialect of first order logic has been developed that formalizes the reasoning of the robotic ethical agent. Deontic predicate logic (DPL) is based on the ethical analysis of Soran Reader combined with the logical analyses of Héctor-Neri Castañeda and Charles Pigden. Avoiding the paradoxes of standard deontic logic, DPL formalizes deontic concepts with binary predicates.
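The book's notation is not reproduced in this excerpt, so the following is only an illustrative first-order rendering of the kind of rule DPL expresses, using the Speeding Camera vocabulary introduced earlier (a human x, a robot agent u, the sensor-grounded predicate Speeding, the imperative term issueTicket, and the binary DUTY predicate relating an imperative term to an agent term):

\[
\forall x \,\bigl( \mathrm{Speeding}(x) \rightarrow \mathrm{DUTY}\bigl(u,\; \mathrm{issueTicket}(x)\bigr) \bigr)
\]

Here the deontic notion is carried by the two-place DUTY predicate rather than by a modal operator prefixed to a whole proposition.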
In particular, agents, patients and acts are explicitly represented in the logic rather than being subsumed into propositional variables. DPL is simply a dialect of first order logic. As such, it is interoperable with graph-based knowledge representations (Croitoru, Oren et al. 2012).

A graph-based knowledge representation (KR) has been developed that enables the robot to select correct moral action. It links evidential criteria to action selection rules that seek valued goals. Reasonable person tests require classification relations, causal relations, evaluation relations and state-act-state transition relations to be formalized. Proof-of-concept examples of code that solve difficult and ethically interesting problems ("spikes") have been built using Neo4j and Prover9. The graph database Neo4j implements a graph-based KR. FOL reasoning is implemented in Prover9.

Some metrics for sizing the "moral competence" of a social robot have been proposed. These are analogous to the sizing metrics used in the Common Software Measurement International Consortium (COSMIC 2015). The functional size of the normative system is taken to be a tentative indicator of moral competence. It is not taken to be definitive of moral competence.

The chief contribution to knowledge of this book lies in its integration of moral analysis, moral programming and moral testing. Primarily a contribution to machine ethics, it has sought to show how current robots can pass a series of "reasonable person" tests using existing technology. Through a detailed discussion of what kinds of moral decisions robots can make, it has sought to illuminate discussions about what kinds of moral decisions robots ought to make with respect to human security.

Future research

By starting with what I take to be fundamental to moral action selection, need and fairness, I hope to have laid solid foundations for future work that will move beyond the narrow (but essential) scope of security to broader moral concerns such as social justice and human flourishing. I am acutely aware that I have excluded from the scope of this project much of what we consider to be moral competence in human beings. Matters such as basic social needs, wants, exploratory behaviour, a fuller treatment of human autonomy, how to define (and shape) the character of a virtuous human agent and how robots might contribute to the flourishing of human beings are topics reserved for future research. It seems to me that such matters can be addressed by the test-driven development method of machine ethics in much the same way as basic physical needs and fairness have been addressed here.

References

Arkin, R. C. (2009) Governing Lethal Behaviour in Autonomous Robots. Boca Raton, CRC Press.
Arkin, R. C. (2010) "The Case for Ethical Autonomy in Unmanned Systems." Journal of Military Ethics 9(4): 332–341.
Bekey, G. A. (2005) Autonomous Robots: From Biological Inspiration to Implementation and Control. Cambridge, MA, MIT Press.
British Standards Institution (2016) Robots and Robotic Devices: Guide to the Ethical Design and Application of Robots and Robotic Systems. London, BSI Standards Ltd.
COSMIC (2015) "The COSMIC Functional Size Measurement Method." Retrieved 30th Aug. 2015, from http://cosmic-sizing.org/publications/introduction-to-the-cosmicmethod-of-measuring-software/
Croitoru, M., N. Oren, S. Miles and M. Luck (2012) "Graphical Norms via Conceptual Graphs." Knowledge-Based Systems 29: 31–43.
Defense Science Board (2012) "The Role of Autonomy in DoD Systems." Retrieved from http://fas.org/irp/agency/dod/dsb/autonomy.pdf
EPSRC (2010) "Principles of Robotics." Retrieved 19th Jan. 2017, from www.epsrc.ac.uk/research/ourportfolio/themes/engineering/activities/principlesofrobotics/
Galliott, J. (2015) "Responsibility and War Machines: Towards a Forward-Looking and Functional Account." In J. White and R. Searle (eds), Rethinking Machine Ethics in the Age of Ubiquitous Technology. Hershey, PA, IGI Global: 152–165.
Heyns, C. (2015) "Panel on Human Rights and Lethal Autonomous Weapons Systems (LAWS): Comments by Christof Heyns, United Nations Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions (as Finalised after the Meeting)." Retrieved 28th May 2015, from http://unog.ch/80256EDD006B8954/(httpAssets)/1869331AFF45728BC1257E2D0050EFE0/$file/2015_LAWS_MX_Heyns_Transcript.pdf
Leveringhaus, A. (2016) Ethics and Autonomous Weapons. London, Palgrave Macmillan.
Lin, P. (2015) "The Right to Life and the Martens Clause." Retrieved 21st Apr. 2015, from http://unog.ch/80256EDD006B8954/(httpAssets)/2B52D16262272AE2C1257E2900419C50/$file/24+Patrick+Lin_Patrick+SS.pdf
Madl, T. and S. Franklin (2015) "Constrained Incrementalist Moral Decision Making for a Biologically Inspired Cognitive Architecture." In R. Trappl (ed), A Construction Manual for Robots' Ethical Systems. London, Springer: 137–153.
Scharre, P. (2016) "Autonomous Weapons and Operational Risk." Retrieved 1st Mar. 2016, from www.cnas.org/autonomous-weapons-and-operational-risk#.VtSsW_l97IU
Sparrow, R. (2012) "Can Machines Be People? Reflections on the Turing Triage Test." In P. Lin, K. Abney and G. Bekey (eds), Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, MA, MIT Press: 301–316.

Index

ABILITY predicate 99
access consciousness 10
a-conscious see access consciousness
activity problem 47
act-utilitarianism 91
affective computing 11
agency
agentive operator 78
AI see artificial intelligence
Amusement Ride test case 192–199
Arkin, Ronald C.: ethical governor 70–75
artificial intelligence 48–51
artificial moral agent see normative system
artillery threats 115
Asimov's Three Laws 125
autonomous weapons see lethal autonomous weapons systems
autonomy: moral 3–4; operational 3; philosophical 3; robotic
Bar Robot test cases: minor 99–100; normal 97–99; out of stock 100–101; two customers 101–102; two robots 102–103
basic needs see needs
blood on hands 152
bottom-up approach to machine ethics 68
capability sets 48
care theory 42–43
Castañeda, Hector-Neri 78–81
categorical imperative 34, 148
causal relations 119
Cave test case see classic trolley problems
CCW see Convention on Certain Conventional Weapons
classic trolley problems: Cave 140; critics 140; Footbridge 141–142; Footbridge (employee) 153–154; Hospital 141; Switch (five trespassers five workers A) 155–156; Switch (five trespassers five workers B) 156–157; Switch (majority) 141; Switch (minority) 189–191; Switch (one trespasser five workers) 154–155; Switch (one worker five trespassers) 157–158
classification relations 119–120
code forks 139, 189
codifiability of ethics 17–18
concepts 28–57; artificial intelligence 48–51; logic 28–31; moral 31–48; robotics 56–57; software development 51–56
conceptual graphs: causal relations 119; classification relations 119–120; evaluation relations 120–122; state-act-state relations 118–119
consequentialism 35–36
contract: relation to need 167; see also moral theory, contractualism
Convention on Certain Conventional Weapons 1–3; Protocol VI 210
cultural relativism 188
deontic calculus of worthy richness 78–81
deontic predicate logic 85
deontology 33; Kantian 33–35; Rossian 35
desert: risk assumption 150
desires vs reasons 183
development: basic physical needs cases 124–162; fairness and autonomy cases 163–187
Dive Boat test case 164–165
doctrine of double effect 145
double effect see doctrine of double effect
Drone test cases: tank in the desert 103–105; tank next to hospital 105–107; two drones 107–108
duty: deliberative 97; DUTY predicate 89; linking reactive duty to fundamental principles 196–197; prima facie 96; pro tanto 96; reactive 97; supererogatory 198–199
Economic Maximin test case 173
ethical egoism 41
ethical governor 70–75
ethical robot see normative system
ethical scope see scope
evaluation relations 120–121
explanatory gap 11
fairness: formalizing 170–171; Rawls on rightness and fairness 163–164
feeling-motivation vs rule-motivation 12
floor-constraint principle 175–177
formula of universal law 148–149
globalized moral competence in social robots 188–189
Gold Mine test cases: profit sharing 169–170; wages 167–169
graph-based knowledge representation see conceptual graphs
graphs see conceptual graphs
Hab malfunction test case 135–136
Hospital test case see classic trolley problems
human moral agents 3; differences between humans and robots 9–15
hunger strikers 127
imperative see logic; Speeding Camera test case
instrumental needs see needs
is-ought problem 47
Kant, Immanuel 33–35, 71, 76, 148, 177–184, 189–190; see also deontology; The Viking at the Door test case
Kant's law 85, 100
killing vs letting die 152
knowledge representation and reasoning 49–50
Landlord test case 165–166
LAWS see lethal autonomous weapons systems
lethal autonomous weapons systems 1–4; arguments against LAWS 209–210; arguments for LAWS 208–209
lexical ordering 137
liability 160
local veil of ignorance 174–175
logic 28–31
logicist approach 78
low altitude air threats 115
machine ethics: definition 1; test-driven development 60–62; what it hopes to accomplish 20–22
machine learning 66–67
Mars Rescue test case 184–186
maximin argument 172–174
meaningful human control of autonomous weapons 205–207
Medical Maximin test case 172
mesh architecture 203–204
method see test-driven development
mines
missiles
modal logic 78
moral agents: humans vs robots 9–15
moral code 5, 61, 88
moral controversy 139, 189
moral force 122–123, 149
moral relativism 188
moral theory 31–38; activity problem 47; aims 31; capability sets 48; care theory 42–43; consequentialism/utilitarianism 35–36; contractualism 47; deontology 33; ethical egoism 41; is-ought problem 47; Kantian deontology 33–35; needs theory 43–45, 75–77; original position 47, 172–176; particularism 38–40; practice conception of ethics 45–47; primary social goods 48; Rossian deontology 35; social contract 47; tiers 32; triple theory 48; veil of ignorance 47, 172–176; virtue ethics 36–38; v-rules 37–38; v-traits 37–38
moral variation 188–199
needs: basic physical needs 124–162; basic social needs 128–129; basic vs instrumental 125–127; needs vs wants 127–128
needs theory 43–47, 75–77; limitations 77
negative desert 160
normal modal operator 78
normative bedrock 124–125
normative system: of artificial moral agent 82; of ethical governor 82; of ethical robot 16; preferred solution design for 81–83; sizing 211
objections to machine ethics 17–18
objections to the ethical robot 16–17
one-caring 11
operational risk 115–116
OPPOSED predicate 85–86
OPTIMAL predicate 99
original position 47, 172–176
Parfit, Derek 5, 6, 21, 47–48, 65, 70, 77, 83, 88, 90–91, 128, 139–141, 148, 172–176, 188–190
particularism 38–40
patient appeal 192
p-conscious see phenomenal consciousness
phenomenal consciousness 10
Phineas Gage 16
Pigden, Charles 23, 89, 214
Postal Rescue test cases: one letter 129–133; ten million and one letters 136–137
practice conception of ethics 45–47, 75–77
primary social goods 48
problem of standard motivation 183–184
proposition see logic
proposition-imperative duality 80–81
Protocol IV see Convention on Certain Conventional Weapons
Protocol VI see Convention on Certain Conventional Weapons
Prover9 29–31
qualitative reasoning 68
quantitative reasoning 68
Rawls, John: lexical ordering 137; maximin argument 172–174; on rightness and fairness 163–164; see also original position; veil of ignorance
reactive duty 97; linking to more fundamental principles 196–198
Reader, Soran 5, 23, 43–47, 70, 75–77, 81–83, 126, 139–140, 214; see also needs theory
reasonable person: in civil law 6; in common law 6; see also reasonable person tests
reasonable person tests 5, 7–9, 118–123
reasonable robot 6, 212–214
reasoning: hybrid 69; qualitative 68; quantitative 68
reasons vs desires 183
refer up to human on the loop 192
regulation: of LAWS 210; of security automata more generally 210–211
risk assumption 150
risks of robotic moral failure 202–204
roboethics see robot ethics
robot ethics: definition
robot: autonomy 3; desires 10; hedonic circuits 4; intentions 10; phenomenology 4; theory of mind 10
robotics: actuators 57; cognition 58; sensors 57
robot moral agent 7; delegated agency 13; explanatory gap 11; full 7; one-caring 11; tabula rasa 15, 21; transparent 11; two-thirds virtue
Rocks see The Rocks test cases
The Rocks test cases: majority 171–172; minority 191–192
rules: rule following vs rule initiation 207–208; rule-motivation vs feeling-motivation 12
rule-utilitarianism 90
Safe Haven test cases: kill zone 112–114; no-fly zone 114–115; warning zone 108–112
satisfice the minimum 177
satismin see satisfice the minimum
Scanlon, T. M. 6, 16, 47–48, 138, 163, 171–175, 183; see also moral theory, contractualism; Transmitter Room test case
science of morals 91
scientific ethics 91
scope: ethical 18–19; technical 19–20
security automata: definition
social contract see moral theory, contractualism
software development 51–56; analysis 52; deployment 55; design 53; development 54; enhancement 56; maintenance 56; quality assurance 54; requirements 52
Spacesuit Breach test case 133–135
specific norm tests 5, 7–9
Speeding Camera test cases: emergency 95–97; emergency services vehicle 94–95; not speeding 93–94; speeding 88–93
standard motivation 183–184
state-act-state transition relations 118–119
STIT operator see agentive operator
stubbing code 19
supererogatory action 198–199
Swerve test case 158–161
Switch test cases: Switch (five trespassers five workers A) 155–156; Switch (five trespassers five workers B) 156–157; Switch (majority) 141; Switch (minority) 189–191; Switch (one trespasser five workers) 154–155; Switch (one worker five trespassers) 157–158
tabula rasa see robot moral agent
technical scope see scope
test-driven development: details of the method 61–62; as a method of machine ethics 60–61; refactor 62; scope 61; simplicity 61; spikes 62; structure 61; stub 62
testing 54–55, 200–204; functional 55; load 55; penetration 55; reasonable person 202; regression 55; specific norm 201–202; symbol grounding 201; useability 55; user acceptance 55
top-down approach to machine ethics 68
Transmitter Room test case 138–139
triple theory 48, 77
trolley problems see classic trolley problems
Turing machine 50–51
utilitarianism 35–36
veil of ignorance 47, 172–176
Viking at the Door test case see The Viking at the Door test case
The Viking at the Door test case 177–184
virtue ethics 36–38; v-rules 37–38; v-traits 37–38
well formed formula 30, 61
wff see well formed formula

Ngày đăng: 20/01/2020, 08:14

Mục lục

  • Introduction

    • Security automata and world peace

    • The business case for the reasonable robot

    • Three levels of testing

    • Machine ethics and ethics proper

    • Differences between humans and robots as moral agents

    • Advantages of the ethical robot

    • Objections to the ethical robot

    • Objections to machine ethics

    • What machine ethics hopes to accomplish

    • 2 Method

      • Test-driven development as a method of machine ethics

      • Key details of the method

      • Aims of the method

      • 4 Solution design

        • Top-down and bottom-up

        • The practice conception of ethics

        • A deontic calculus of worthy richness

        • Preferred approach to normative system design

        • 5 Development – specific norm tests

          • Deontic predicate logic

          • Introduction to test cases

          • Speeding camera (not speeding)

          • Speeding camera (emergency services vehicle)

Tài liệu cùng người dùng

Tài liệu liên quan