Analysing the Behaviour of Neural Networks


A Thesis by Stephan Breutel, Dipl.-Inf.
In Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy
Queensland University of Technology, Brisbane
Center for Information Technology Innovation
March 2004

Copyright © Stephan Breutel, MMIV. All rights reserved. The author hereby grants permission to the Queensland University of Technology, Brisbane to reproduce and distribute publicly paper and electronic copies of this thesis document in whole or in part.

Keywords: Artificial Neural Network, Annotated Artificial Neural Network, Rule-Extraction, Validation of Neural Networks, Polyhedra, Forward-propagation, Backward-propagation, Refinement Process, Non-linear Optimization, Polyhedral Computation, Polyhedral Projection Techniques

Abstract

A new method is developed to determine a set of informative and refined interface assertions satisfied by functions that are represented by feed-forward neural networks. Neural networks have often been criticized for their low degree of comprehensibility, and it is difficult to have confidence in software components that have no clear and valid interface description. Precise and understandable interface assertions for a neural-network-based software component are required for safety-critical applications and for integration into larger software systems. The interface assertions we are considering are of the form "if the input x of the neural network is in a region A of the input space, then the output f(x) of the neural network will be in the region B of the output space", and vice versa. We are interested in computing refined interface assertions, which can be viewed as the computation of the strongest pre- and postconditions a feed-forward neural network fulfills. Unions of polyhedra (polyhedra are the generalization of convex polygons to higher-dimensional spaces) are well suited for describing arbitrary regions of higher-dimensional vector spaces; additionally, polyhedra are closed under affine transformations. Given a feed-forward neural network, our method produces an annotated neural network, where each layer is annotated with a set of valid linear-inequality predicates. The main challenges in the computation of these assertions are the solution of a non-linear optimization problem and the projection of a polyhedron onto a lower-dimensional subspace.
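To make the flavour of such an assertion concrete, the sketch below forward-propagates an axis-parallel box (the crudest polyhedral region) through a small sigmoid network using interval arithmetic. This is only a minimal illustration, assuming NumPy; the 2-3-1 architecture and the weights are invented for the example, and the thesis's Validity Polyhedral Analysis propagates and refines full polyhedra rather than boxes, yielding much tighter regions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def propagate_box(lo, hi, W, b):
    # Interval arithmetic for the affine part: a positive weight couples
    # an output's lower bound to the input's lower bound, a negative
    # weight couples it to the input's upper bound (and vice versa).
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    net_lo = W_pos @ lo + W_neg @ hi + b
    net_hi = W_pos @ hi + W_neg @ lo + b
    # The sigmoid is monotone, so applying it to the bounds of the
    # net input gives sound bounds on the activations.
    return sigmoid(net_lo), sigmoid(net_hi)

# Hypothetical 2-3-1 network.
W1 = np.array([[1.0, -2.0], [0.5, 0.5], [-1.0, 1.0]])
b1 = np.array([0.0, -0.5, 0.2])
W2 = np.array([[2.0, -1.0, 0.5]])
b2 = np.array([-0.3])

# Input region A: the box [-0.1, 0.1] x [-0.1, 0.1].
lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
lo, hi = propagate_box(lo, hi, W1, b1)
lo, hi = propagate_box(lo, hi, W2, b2)
print(lo, hi)  # output region B: f(A) is guaranteed to lie in [lo, hi]
```

The printed bounds constitute a valid (if loose) interface assertion: every input in A produces an output in B.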
Contents

List of Figures
List of Tables
List of Listings

1 Introduction
  1.1 Motivation and Significance
  1.2 Notations and Definitions
  1.3 Software Verification and Neural Network Validation
  1.4 Annotated Artificial Neural Networks
  1.5 Highlights and Organization of this Dissertation
  1.6 Summary of this Chapter

2 Analysis of Neural Networks
  2.1 Neural Networks
  2.2 Validation of Neural Network Components
    2.2.1 Propositional Rule Extraction
    2.2.2 Fuzzy Rule Extraction
    2.2.3 Region-based Analysis
  2.3 Overview of Discussed Neural Network Validation Techniques and Validity Polyhedral Analysis

3 Polyhedra and Deformations of Polyhedral Facets under Sigmoidal Transformations
  3.1 Polyhedra and their Representation
  3.2 Operations on Polyhedra and Important Properties
  3.3 Deformations of Polyhedral Facets under Sigmoidal Transformations
  3.4 Summary of this Chapter

4 Nonlinear Transformation Phase
  4.1 Mathematical Analysis of Non-Axis-Parallel Splits of a Polyhedron
  4.2 Mathematical Analysis of a Polyhedral Wrapping of a Region
    4.2.1 Sequential Quadratic Programming
    4.2.2 Maximum Slice Approach
    4.2.3 Branch and Bound Approach
    4.2.4 Binary Search Approach
  4.3 Complexity Analysis of the Branch and Bound and the Binary Search Method
  4.4 Summary of this Chapter

5 Affine Transformation Phase
  5.1 Introduction to the Problem
  5.2 Backward Propagation Phase
  5.3 Forward Propagation Phase
  5.4 Projection of a Polyhedron onto a Subspace
    5.4.1 Fourier-Motzkin
      5.4.1.1 A Variation of Fourier-Motzkin
    5.4.2 Block Elimination
    5.4.3 The S-Box Approximation
      5.4.3.1 Projection of a Face
      5.4.3.2 Determination of Facets of the Projection
      5.4.3.3 Further Improvements of the S-Box Method
    5.4.4 Experiments
  5.5 Further Considerations about the Approximation of the Image
  5.6 Summary of this Chapter

6 Implementation Issues and Numerical Problems
  6.1 The Framework
  6.2 Numerical Problems
  6.3 Summary of this Chapter

7 Evaluation of Validity Polyhedral Analysis
  7.1 Overview and General Procedure
  7.2 Circle Neural Network
  7.3 Benchmark Data Sets
    7.3.1 Iris Neural Network
    7.3.2 Pima Neural Network
  7.4 SP500 Neural Network
  7.5 Summary of this Chapter

8 Conclusion and Future Work
  8.1 Contributions of this Thesis
  8.2 Fine Tuning of VPA
  8.3 Future Directions and Validation Methods for Kernel Based Machines

A Overview of Used Symbols
B Linear Algebra Background
Bibliography

Appendix A: Overview of Used Symbols

- P: the previous layer; S: the subsequent layer
- a hyperplane, defined by a direction vector and a constant; its positive half-space (the points whose dot product with the direction vector is at least the constant) and its negative half-space (at most the constant)
- a polyhedron, defining a set of points by linear inequalities
- the projection of a polyhedron onto a subspace
- the approximation of a polyhedron, given by a (non-linear) region
- the output vector of a function and the input vector to a function
- the activation vector and the net input vector
- the weight matrix and the bias vector
- the sigmoid transfer function
- the subspace orthogonal to the kernel of the transformation matrix
- the affine transformation
- the polyhedron in y-space that we want to back-propagate through the transfer-function layer
- the polyhedron in x-space, i.e. the polyhedral approximation of the region R
- the true reciprocal image of the polyhedron in x-space, called the region R
- the wrapping box of a polyhedron, i.e. the smallest axis-parallel hypercube containing it (see the sketch after this list)
- the boxes in x-space and y-space used for intersection detection after the k-th iteration
- the maximum of the cost function in x-space
- the point in y-space obtained when linearly optimizing constrained to the y-box, and its corresponding point in x-space; in the linear case this would already be the optimal point
- the shift line segment used for positioning the hyperplane in the binary search method
- the rate of change between two consecutive values of the volume of a box
- a small positive real number
- the box operator, yielding the smallest hypercube containing a region or a polyhedron
- a boolean operator comparing whether two expressions are equivalent
- the interval of all t with a ≤ t ≤ b
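The wrapping box admits a direct computation: each bound of the box is obtained by minimizing or maximizing one coordinate over the polyhedron, i.e. by 2n linear programs for an n-dimensional polyhedron. The following is a minimal sketch, assuming SciPy's linprog and a hypothetical triangular polyhedron; it illustrates the operator, not the thesis's own implementation.

```python
import numpy as np
from scipy.optimize import linprog

def wrapping_box(G, h):
    # Smallest axis-parallel box containing {x : G x <= h}:
    # one LP per bound, minimizing +x_i and -x_i respectively.
    n = G.shape[1]
    lo, hi = np.empty(n), np.empty(n)
    free = [(None, None)] * n  # linprog otherwise defaults to x >= 0
    for i in range(n):
        c = np.zeros(n)
        c[i] = 1.0
        lo[i] = linprog(c, A_ub=G, b_ub=h, bounds=free).fun
        hi[i] = -linprog(-c, A_ub=G, b_ub=h, bounds=free).fun
    return lo, hi

# Hypothetical polyhedron: the triangle x >= 0, y >= 0, x + y <= 1.
G = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
h = np.array([0.0, 0.0, 1.0])
print(wrapping_box(G, h))  # (array([0., 0.]), array([1., 1.]))
```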
Appendix B: Linear Algebra Background

We recall a few important properties of linear transformations that are relevant for the algorithm computing the image of a polyhedron under an affine transformation.

Let A be an m × n matrix, i.e. a matrix with m rows and n columns. We can view A as the representation of a linear transformation from R^n to R^m.

The kernel (sometimes called the null-space) of A is the set of all vectors that are mapped to the zero vector under A: ker(A) = {x ∈ R^n : Ax = 0}. The kernel is a subspace of the domain, i.e. of R^n. In the following we also write K for the subspace ker(A).

The image (sometimes called the range) of A is the set of all vectors reachable by an input vector when multiplied with A: Im(A) = {y ∈ R^m : y = Ax for some x ∈ R^n}. The image is a subspace of R^m.

Let V be the subspace of R^n orthogonal to K. The restriction of A to V is a linear bijection between V and Im(A).

Proof:
(i) Injectivity: let x1, x2 ∈ V with Ax1 = Ax2. Then A(x1 − x2) = 0, so x1 − x2 ∈ K. But x1 − x2 also lies in V, and V ∩ K = {0}, hence x1 = x2.
(ii) Surjectivity: let y ∈ Im(A), i.e. y = Ax for some x ∈ R^n. Decompose x = xV + xK with xV ∈ V and xK ∈ K. Then y = Ax = AxV + AxK = AxV, with xV ∈ V.

In other words, any vector x ∈ V gets mapped to a distinct vector y ∈ Im(A), and every vector of Im(A) is reached this way.
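A quick numerical illustration of this fact, assuming NumPy and an arbitrary rank-deficient matrix chosen for the example: an orthonormal basis of V, the orthogonal complement of the kernel, can be read off the singular value decomposition, and only the component of x lying in V matters to A.

```python
import numpy as np

# Hypothetical 3x3 matrix of rank 2 (second row = 2 * first row),
# so the kernel is one-dimensional.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])

U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
V_basis = Vt[:rank].T   # orthonormal basis of V = ker(A)^perp
K_basis = Vt[rank:].T   # orthonormal basis of the kernel K

x = np.array([1.0, -2.0, 0.5])
x_V = V_basis @ (V_basis.T @ x)  # orthogonal projection of x onto V

# A ignores the kernel component: A x equals A x_V.
assert np.allclose(A @ x, A @ x_V)

# Injectivity of the restriction of A to V: full rank on the basis of V.
assert np.linalg.matrix_rank(A @ V_basis) == rank
print("restriction of A to V is a bijection onto Im(A)")
```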
