Intelligent Systems for Engineers and Scientists
Second Edition

Adrian A. Hopgood

CRC Press
Boca Raton · London · New York · Washington, D.C.

Library of Congress Cataloging-in-Publication Data

Hopgood, Adrian A.
Intelligent systems for engineers and scientists / Adrian A. Hopgood. 2nd ed.
p. cm.
Includes bibliographical references and index.
ISBN 0-8493-0456-3
1. Expert systems (Computer science) 2. Computer-aided engineering. I. Title.
QA76.76.E95 H675 2000
006.3′3′02462—dc21 00-010341

This book contains information obtained from authentic and highly regarded sources. Reprinted material is quoted with permission, and sources are indicated. A wide variety of references are listed. Reasonable efforts have been made to publish reliable data and information, but the author and the publisher cannot assume responsibility for the validity of all materials or for the consequences of their use.

Neither this book nor any part may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, microfilming, and recording, or by any information storage or retrieval system, without prior permission in writing from the publisher.

The consent of CRC Press LLC does not extend to copying for general distribution, for promotion, for creating new works, or for resale. Specific permission must be obtained in writing from CRC Press LLC for such copying. Direct all inquiries to CRC Press LLC, 2000 N.W. Corporate Blvd., Boca Raton, Florida 33431.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation, without intent to infringe.

Visit the CRC Press Web site at www.crcpress.com

© 2001 by CRC Press LLC

No claim to original U.S. Government works
International Standard Book Number 0-8493-0456-3
Library of Congress Card Number 00-010341
Printed in the United States of America
Printed on acid-free paper

Preface

"Intelligent systems" is a broad term, covering a range of computing techniques that have emerged from research into artificial intelligence. It includes symbolic approaches — in which knowledge is explicitly expressed in words and symbols — and numerical approaches such as neural networks, genetic algorithms, and fuzzy logic. In fact, many practical intelligent systems are a hybrid of different approaches. Whether any of these systems is really capable of displaying intelligent behavior is a moot point. Nevertheless, they are extremely useful and they have enabled elegant solutions to a wide variety of difficult problems.

There are plenty of other books available on intelligent systems and related technologies, but I hope this one is substantially different. It takes a practical view, showing the issues encountered in the development of applied systems. I have tried to describe a wide range of intelligent systems techniques, with the help of realistic problems in engineering and science. The examples included here have been specifically selected for the details of the techniques that they illustrate, rather than merely to survey current practice.

The book can be roughly divided into two parts. Chapters 1 to 10 describe the techniques of intelligent systems, while Chapters 11 to 14 look at four broad categories of applications. These latter chapters explore in depth the design and implementation issues of applied systems, together with their advantages and difficulties.
The four application areas have much in common, as they all concern automated decision making, while making the best use of the available information.

The first edition of this book was published as Knowledge-Based Systems for Engineers and Scientists. It was adopted by the Open University for its course T396: Artificial Intelligence for Technology and, as a result, I have received a lot of useful feedback. I hope that this new edition addresses the weaknesses of the previous one, while retaining and building upon its strengths. As well as updating the entire book, I have added new chapters on intelligent agents, neural networks, optimization algorithms (especially genetic algorithms), and hybrid systems. A new title was therefore needed to reflect the broader scope of this new edition. Intelligent Systems for Engineers and Scientists seems appropriate, as it embraces both the explicit knowledge-based models that are retained from the first edition and the implicit numerical models represented by neural networks and optimization algorithms.

I hope the book will appeal to a wide readership. In particular, I hope that students will be drawn toward this fascinating area from all scientific and engineering subjects, not just from the computer sciences. Beyond academia, the book will appeal to engineers and scientists who either are building intelligent systems or simply want to know more about them.

The first edition was mostly written while I was working at the Telstra Research Laboratories in Victoria, Australia, and subsequently finished upon my return to the Open University in the UK. I am still at the Open University, where this second edition was written. Many people have helped me, and I am grateful to them all. The following all helped either directly or indirectly with the first edition (in alphabetical order): Mike Brayshaw, David Carpenter, Nicholas Hallam, David Hopgood, Sue Hopgood, Adam Kowalzyk, Sean Ogden, Phil Picton, Chris Price, Peter Richardson, Philip Sargent, Navin Sullivan, Neil Woodcock, and John Zucker. I am also indebted to those who have helped in any way with this new edition. I am particularly grateful to Tony Hirst for his detailed suggestions for inclusion and for his thoughtful comments on the drafts. I also extend my thanks to Lars Nolle for his helpful comments and for supplying Figures 7.1, 7.7, and 8.18; to Jon Hall for his comments on Chapter 5; to Sara Parkin and Carole Gustafson for their careful proofreading; and to Dawn Mesa for making the publication arrangements. Finally, I am indebted to Sue and Emily for letting me get on with it. Normal family life can now resume.

Adrian Hopgood
www.adrianhopgood.com
Email me: adrian.hopgood@ntu.ac.uk

The author

Adrian Hopgood earned his BSc from Bristol University, PhD from Oxford University, and MBA from the Open University. After completing his PhD in 1984, he spent two years developing applied intelligent systems for Systems Designers PLC. He subsequently joined the academic staff of the Open University, where he has established his research in intelligent systems and their application in engineering and science. Between 1990 and 1992 he worked for Telstra Research Laboratories in Australia, where he contributed to the development of intelligent systems for telecommunications applications. Following his return to the Open University he led the development of the course T396 – Artificial Intelligence for Technology. He has further developed his interests in intelligent systems and pioneered the development of the blackboard system, ARBS.
For Sue and Emily

Contents

Chapter one: Introduction
1.1 Intelligent systems
1.2 Knowledge-based systems
1.3 The knowledge base
1.4 Deduction, abduction, and induction
1.5 The inference engine
1.6 Declarative and procedural programming
1.7 Expert systems
1.8 Knowledge acquisition
1.9 Search
1.10 Computational intelligence
1.11 Integration with other software
References
Further reading

Chapter two: Rule-based systems
2.1 Rules and facts
2.2 A rule-based system for boiler control
2.3 Rule examination and rule firing
2.4 Maintaining consistency
2.5 The closed-world assumption
2.6 Use of variables within rules
2.7 Forward-chaining (a data-driven strategy)
2.7.1 Single and multiple instantiation of variables
2.7.2 Rete algorithm
2.8 Conflict resolution
2.8.1 First come, first served
2.8.2 Priority values
2.8.3 Metarules
2.9 Backward-chaining (a goal-driven strategy)
2.9.1 The backward-chaining mechanism
2.9.2 Implementation of backward-chaining
2.9.3 Variations of backward-chaining
2.10 A hybrid strategy
2.11 Explanation facilities
2.12 Summary
References
Further reading

Chapter three: Dealing with uncertainty
3.1 Sources of uncertainty
3.2 Bayesian updating
3.2.1 Representing uncertainty by probability
3.2.2 Direct application of Bayes' theorem
3.2.3 Likelihood ratios
3.2.4 Using the likelihood ratios
3.2.5 Dealing with uncertain evidence
3.2.6 Combining evidence
3.2.7 Combining Bayesian rules with production rules
3.2.8 A worked example of Bayesian updating
3.2.9 Discussion of the worked example
3.2.10 Advantages and disadvantages of Bayesian updating
3.3 Certainty theory
3.3.1 Introduction
3.3.2 Making uncertain hypotheses
3.3.3 Logical combinations of evidence
3.3.4 A worked example of certainty theory
3.3.5 Discussion of the worked example
3.3.6 Relating certainty factors to probabilities
3.4 Possibility theory: fuzzy sets and fuzzy logic
3.4.1 Crisp sets and fuzzy sets
3.4.2 Fuzzy rules
3.4.3 Defuzzification
3.5 Other techniques
3.5.1 Dempster–Shafer theory of evidence
3.5.2 Inferno
3.6 Summary
References
Further reading

Chapter four: Object-oriented systems
4.1 Objects and frames
4.2 An illustrative example
4.3 Introducing OOP
4.4 Data abstraction
4.4.1 Classes
4.4.2 Instances
4.4.3 Attributes (or data members)
4.4.4 Operations (or methods or member functions)
4.4.5 Creation and deletion of instances
4.5 Inheritance
4.5.1 Single inheritance
4.5.2 Multiple and repeated inheritance
4.5.3 Specialization of methods
4.5.4 Browsers
4.6 Encapsulation
4.7 Unified Modeling Language (UML)
4.8 Dynamic (or late) binding
4.9 Message passing and function calls
4.9.1 Pseudovariables
4.9.2 Metaclasses
4.10 Type checking
4.11 Further aspects of OOP
4.11.1 Persistence
4.11.2 Concurrency
4.11.3 Overloading
4.11.4 Active values and daemons
4.12 Frame-based systems
4.13 Summary
References
Further reading

Chapter five: Intelligent agents
5.1 Characteristics of an intelligent agent
5.2 Agents and objects
5.3 Agent architectures
5.3.1 Logic-based architectures
5.3.2 Emergent behavior architectures
5.3.3 Knowledge-level architectures
5.3.4 Layered architectures
5.4 Multiagent systems
5.4.1 Benefits of a multiagent system
5.4.2 Building a multiagent system
5.4.3 Communication between agents
5.5 Summary
References
Further reading

Chapter six: Symbolic learning
6.1 Introduction
6.2 Learning by induction
6.2.1 Overview
6.2.2 Learning viewed as a search problem
6.2.3 Techniques for generalization and specialization
6.3 Case-based reasoning (CBR)
6.3.1 Storing cases
6.3.2 Retrieving cases
[…]

…failure might be monitored, where "failure" occurs if the cell concentration Cc drifts beyond prescribed limits. The longer the time to failure, the better the performance of the controller. Woodcock et al. [11] consider this approach to be midway between supervised and unsupervised learning (Chapters 6 and 8), as the controller receives an indication of its performance but not a direct comparison between its output and the desired output.

Figure 14.13 The BOXES learning algorithm, derived from [12]. In the example shown here, performance is gauged by time to failure. (The flowchart cycles as follows: determine the state S of the system; find the box (i, j) that contains S; note the time; select and execute action +1 or −1 according to which score is higher; pause so that the cycle time equals ∆t. On failure, the scores of every visited box are updated, all counters are cleared, and the run restarts.)

For each box, a score is stored for both the +1 action and the −1 action. These scores are a measure of "degree of appropriateness" and are based on the average time between selecting the control action in that particular box and the next failure. The learning strategy of Michie and Chambers [12] for bang-bang control is shown in Figure 14.13. During a run, a single box may be visited N times. For each box, the times (t1, …, ti, …, tN) at which it is visited are recorded. At the end of a run, i.e., after a failure, the time tf is noted and the +1 and −1 scores for each visited box are updated. Each score is based on the average time to failure after that particular control action had been carried out, i.e., the lifetime l. The lifetimes are modified by a usage factor n, a decay factor D, a global lifetime lg, a global usage factor ng, and a constant E, thereby yielding a score. These modifications ensure that, for each box, both alternative actions have the chance to demonstrate their suitability during the learning process and that recent experience is weighted more heavily than old experience. The full updating procedure is as follows:

$$l_g = D\,l_g + t_f \tag{14.7}$$

$$n_g = D\,n_g + 1 \tag{14.8}$$

For each box where score(+1) > score(−1):

$$l(+1) = D\,l(+1) + \sum_{i=1}^{N} (t_f - t_i) \tag{14.9}$$

$$n(+1) = D\,n(+1) + N \tag{14.10}$$

$$\mathrm{score}(+1) = \frac{l(+1) + E\,l_g / n_g}{n(+1) + E} \tag{14.11}$$

For each box where score(−1) > score(+1):

$$l(-1) = D\,l(-1) + \sum_{i=1}^{N} (t_f - t_i) \tag{14.12}$$

$$n(-1) = D\,n(-1) + N \tag{14.13}$$

$$\mathrm{score}(-1) = \frac{l(-1) + E\,l_g / n_g}{n(-1) + E} \tag{14.14}$$

Figure 14.14 The cart-and-pole control problem.

After a controller has been run to failure and the scores associated with the boxes have been updated, the controller becomes competent at balancing in only a limited part of the state-space. In order to become expert in all regions of state-space, the controller must be run to failure several times, starting from different regions in state-space. The BOXES algorithm has been used for control of a bioreactor as described, and also for balancing a pole on a mobile cart (Figure 14.14). The latter is a similar problem to the bioreactor, but the state is described by four rather than two variables. The boxes are four-dimensional and difficult to represent graphically. In principle, the BOXES algorithm can be applied to state-space with any number of dimensions.
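Read as pseudocode, the updating procedure maps directly onto a few lines of Python. The sketch below is a minimal illustration, not the implementation from the book; the values chosen for the decay factor D and the constant E are assumptions.

```python
import random

# Minimal sketch of the BOXES score update (Equations 14.7-14.14).
# D (decay factor) and E (weighting constant) are assumed values.
D = 0.99
E = 50.0

class Box:
    def __init__(self):
        self.l = {+1: 0.0, -1: 0.0}      # lifetimes l(+1), l(-1)
        self.n = {+1: 0.0, -1: 0.0}      # usage factors n(+1), n(-1)
        self.score = {+1: 0.0, -1: 0.0}
        self.visits = []                 # times t_i at which the box was entered
        self.action = +1                 # action executed in this box during the run

def choose_action(box):
    """Select the bang-bang action with the higher score (random on a tie)."""
    if box.score[+1] == box.score[-1]:
        return random.choice([+1, -1])
    return +1 if box.score[+1] > box.score[-1] else -1

def update_scores(boxes, t_f, g):
    """Run-to-failure update; g holds the global lifetime l_g and usage n_g."""
    g["l"] = D * g["l"] + t_f            # Equation 14.7
    g["n"] = D * g["n"] + 1.0            # Equation 14.8
    for box in boxes:
        if not box.visits:
            continue                     # box not visited during this run
        a = box.action
        box.l[a] = D * box.l[a] + sum(t_f - t_i for t_i in box.visits)   # 14.9 / 14.12
        box.n[a] = D * box.n[a] + len(box.visits)                        # 14.10 / 14.13
        box.score[a] = (box.l[a] + E * g["l"] / g["n"]) / (box.n[a] + E) # 14.11 / 14.14
        box.visits.clear()               # clear all counters before the restart
```

A training session simply alternates runs driven by choose_action with calls to update_scores, restarting the plant from a different region of state-space each time.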
The cart-and-pole problem, shown in Figure 14.14, has been used extensively as a benchmark for intelligent controllers. A pole is attached by means of a hinge to a cart that can move along a finite length of track. The cart and the pole are restricted to movement within a single plane. The controller attempts to balance the pole while keeping the cart on the length of track by applying a force to the left or right. If the force has a fixed magnitude in either direction, this is another example of bang-bang control. The four state variables are the cart's position $y_1$ and velocity $\dot{y}_1$, and the pole's angle $y_2$ and angular velocity $\dot{y}_2$. Failure occurs when $y_1$ or $y_2$ breaches constraints placed upon them. The constraint on $y_1$ represents the limited length of the track.

Rather than use a BOXES system as an intelligent controller per se, Sammut and Michie [13] have used it as a means of eliciting rules for a rule-based controller. After running the BOXES algorithm on a cart-and-pole system, they found clear relationships between the learned control actions and the state variables. They expressed these relationships as rules and then proceeded to use analogous rules to control a different black box simulation, namely, a simulated spacecraft. The spacecraft was subjected to a number of unknown external forces, but the rule-based controller was tolerant of these. Similarly, Woodcock et al.'s BOXES controller [11] was virtually unaffected by random variations superimposed on the control variables.

One of the attractions of the BOXES controller is that it is a fairly simple technique, and so an effective controller can be built quite quickly. Woodcock et al. [11] rapidly built their controller and a variety of black box simulations using the Smalltalk object-oriented language (see Chapter 4). Although both the controller and simulation were developed in the same programming environment, the workings of the simulators were hidden from the controller. Sammut and Michie also report that they were able to quickly build their BOXES controller and the rule-based controller that it inspired [13].
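For readers who want to experiment, the cart-and-pole dynamics are straightforward to simulate. The following sketch uses the commonly published equations of motion for this benchmark; the physical constants, time step, and failure bounds are illustrative assumptions, not values from the book.

```python
import math

# Assumed constants: gravity, cart mass, pole mass, pole half-length,
# bang-bang force magnitude, and integration time step.
GRAVITY, M_CART, M_POLE, HALF_LEN, FORCE_MAG, DT = 9.8, 1.0, 0.1, 0.5, 10.0, 0.02
Y1_LIMIT = 2.4                   # track half-length constraint on y1 (assumed)
Y2_LIMIT = math.radians(12)      # angle constraint on y2 (assumed)

def step(state, action):
    """Advance (y1, y1_dot, y2, y2_dot) one time step under action = +1 or -1."""
    y1, y1_dot, y2, y2_dot = state
    force = action * FORCE_MAG   # bang-bang: fixed magnitude, either direction
    total = M_CART + M_POLE
    temp = (force + M_POLE * HALF_LEN * y2_dot**2 * math.sin(y2)) / total
    y2_acc = (GRAVITY * math.sin(y2) - math.cos(y2) * temp) / (
        HALF_LEN * (4.0 / 3.0 - M_POLE * math.cos(y2)**2 / total))
    y1_acc = temp - M_POLE * HALF_LEN * y2_acc * math.cos(y2) / total
    # Simple Euler integration of the state variables
    return (y1 + DT * y1_dot, y1_dot + DT * y1_acc,
            y2 + DT * y2_dot, y2_dot + DT * y2_acc)

def failed(state):
    """Failure occurs when y1 or y2 breaches its constraint."""
    y1, _, y2, _ = state
    return abs(y1) > Y1_LIMIT or abs(y2) > Y2_LIMIT
```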
14.7.2 Fuzzy BOXES

Woodcock et al. [11] have investigated the suggestion that the performance of a BOXES controller might be improved by using fuzzy logic to smooth the bang-bang control [14]. Where different control actions are associated with neighboring boxes, it was proposed that states lying between the centers of the boxes should be associated with intermediate actions. The controller was trained as described above in order to determine appropriate bang-bang actions. After training, the box boundaries were fuzzified using triangular fuzzy sets. The maximum and minimum control actions (bang-bang) were normalized to +1 and −1, respectively, and intermediate actions were assigned a number between these extremes.

Consider again the bioreactor, which is characterized by a two-dimensional state-space. If a particular state S falls within the box (i, j), then the corresponding control action is uij. This can be stated as an explicit rule:

IF state S belongs in box (i, j) THEN the control action is uij

If we consider Cn and Cc separately, this rule can be rewritten:

IF Cn belongs in interval i AND Cc belongs in interval j THEN the control action is uij

The same rule can be applied in the case of fuzzy BOXES, except that now it is interpreted as a fuzzy rule. We know from Equation 3.36 (in Chapter 3) that:

µ(Cn belongs in interval i AND Cc belongs in interval j) = min[µ(Cn belongs in interval i), µ(Cc belongs in interval j)]

Figure 14.15 Fuzzy membership functions for boxes in the bioreactor state space (adapted from [11]).

Thus, if the membership functions for Cn belongs in interval i and Cc belongs in interval j are both triangular, then the membership function for state S belongs in box (i, j), denoted by µij(S), is a surface in state space in the shape of a pyramid (Figure 14.15). As the membership functions for neighboring pyramids overlap, a point in state space may be a member of more than one box. The control action uij for each box to which S belongs is scaled according to the degree of membership µij(S). The normalized sum of these actions is then interpreted as the defuzzified action u0:

$$u_0 = \frac{\sum_i \sum_j \mu_{ij}(S)\, u_{ij}}{\sum_i \sum_j \mu_{ij}(S)}$$

This is equivalent to defuzzification using the centroid method (Chapter 3), if the membership functions for the control actions are assumed to be symmetrical about a vertical line through their balance points.
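This computation can be sketched in Python as follows, assuming triangular membership functions whose feet are the neighboring interval centers; the grid of centers and the trained actions in the usage example are illustrative only.

```python
def memberships(x, centres):
    """Triangular membership of x in each fuzzy interval. Neighbouring
    centres act as the feet of each triangle; the outermost triangles are
    given symmetric feet beyond the ends of the range (assumed convention)."""
    mu = []
    for i, c in enumerate(centres):
        left = centres[i - 1] if i > 0 else c - (centres[1] - centres[0])
        right = centres[i + 1] if i < len(centres) - 1 else c + (centres[-1] - centres[-2])
        if left < x <= c:
            mu.append((x - left) / (c - left))
        elif c < x < right:
            mu.append((right - x) / (right - c))
        else:
            mu.append(0.0)
    return mu

def fuzzy_action(cn, cc, cn_centres, cc_centres, u):
    """Defuzzified action u0: the trained bang-bang actions u[i][j],
    weighted by mu_ij(S) and normalized."""
    mu_n = memberships(cn, cn_centres)
    mu_c = memberships(cc, cc_centres)
    num = den = 0.0
    for i, mn in enumerate(mu_n):
        for j, mc in enumerate(mu_c):
            mu_ij = min(mn, mc)          # fuzzy AND, as in Equation 3.36
            num += mu_ij * u[i][j]
            den += mu_ij
    return num / den if den else 0.0

# Example: a 3x3 grid of trained bang-bang actions over (Cn, Cc)
u = [[-1, -1, +1], [-1, +1, +1], [+1, +1, +1]]
print(fuzzy_action(0.45, 0.3, [0.2, 0.5, 0.8], [0.2, 0.5, 0.8], u))
```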
Woodcock et al. have tested their fuzzy BOXES controller against the cart-and-pole and bioreactor simulations (described above), both of which are adaptive control problems. They have also tested it in a servo control application, namely, reversing a tractor and trailer up to a loading bay. In none of these examples was there a clear winner between the nonfuzzy and the fuzzy BOXES controllers. The comparison between them was dependent on the starting position in state-space. This was most clearly illustrated in the case of the tractor and trailer. If the starting position was such that the tractor could reverse the trailer in a smooth sweep, the fuzzy controller was able to perform best because it was able to steer smoothly. The nonfuzzy controller, on the other hand, was limited to using only full steering lock in either direction. If the starting condition was such that full steering lock was required, then the nonfuzzy controller outperformed the fuzzy one.

14.8 Neural network controllers

Neural network controllers tackle a similar problem to BOXES controllers, i.e., controlling a system using a model that is automatically generated during a learning phase. Two distinct approaches have been adopted by Valmiki et al. [15] and by Willis et al. [16]. Valmiki et al. have trained a neural network to associate particular sets of state variables directly with particular action variables, in an analogous fashion to the association of a box in state-space with a control action in a BOXES controller. Willis et al. adopted a less direct approach, using a neural network to estimate the values of those state variables that are critical to control but cannot be measured on-line. The estimated values are then fed to a PID controller as though they were real measurements. These two approaches are discussed separately below.

14.8.1 Direct association of state variables with action variables

Valmiki et al. have applied a neural network to a control problem that had previously been tackled using rules and objects, namely, the control of a glue dispenser [17]. As part of the manufacture of mixed-technology circuit boards, surface-mounted components are held in place by a droplet of glue. The glue is dispensed from a syringe by means of compressed air. The size of the droplet is the state variable that must be controlled, and the change in the air pressure is the action variable.

Valmiki et al. have built a 6–6–5 multilayer perceptron (Figure 14.16), where specific meanings are attached to values of 0 and 1 on the input and output nodes. Five of the six input nodes represent ranges for the error in the size of the droplet. The node corresponding to the measured error is sent a 1, while the other four nodes are sent a 0. The sixth input node is set to 0 or 1 depending on whether the error is positive or negative. Three of the five output nodes are used to flag particular actions, while the other two are a coded representation of the amount by which the air pressure should be changed, if a change is required.

Figure 14.16 Using a neural network to map state variables directly to action variables (based on the glue-dispensing application of Valmiki et al. [15]).

The three action flags on the output are:

• decrease (0) or increase (1) pressure;
• do something (0) or nothing (1) (overrides the increase/decrease flag);
• no warning (0) or warning (1) if the error in the droplet size is large.

The training data were generated by hand. Since the required mapping of input states to outputs was known in advance — allowing training data to be drawn up — the problem could have been tackled using rules. One advantage of a neural network approach is that an interpolated meaning can be attached to output values that lie between 0 and 1. However, the same effect could also be achieved using fuzzy rules. This would have avoided the need to classify the state variables according to crisp sets. Nevertheless, Valmiki et al.'s experiment is important in demonstrating the feasibility of using a neural network to learn to associate state variables with control actions. This is useful where rules or functions that link the two are unavailable, although this was not the case in their experiment.
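As a concrete illustration, the input and output conventions of Figure 14.16 can be coded as below. The numerical error ranges and the two-node binary magnitude code are assumptions made for the sketch; the extract does not give the actual values or coding used by Valmiki et al.

```python
# Hypothetical error ranges for the five range nodes (units of droplet size).
ERROR_RANGES = [(0.0, 0.1), (0.1, 0.2), (0.2, 0.4), (0.4, 0.7), (0.7, float("inf"))]

def encode_state(error):
    """Six input nodes: five one-hot range flags for |error|, plus a sign
    flag (1 if the droplet is too small, 0 if too big)."""
    mag = abs(error)
    nodes = [1 if lo <= mag < hi else 0 for lo, hi in ERROR_RANGES]
    nodes.append(1 if error < 0 else 0)
    return nodes

def decode_action(outputs):
    """Five output nodes: increase/decrease flag, act/nothing flag, warning
    flag, and two nodes coding the size of the pressure change."""
    increase = outputs[0] > 0.5          # increase (1) / decrease (0)
    nothing = outputs[1] > 0.5           # nothing (1) overrides the flag above
    warning = outputs[2] > 0.5           # large droplet-size error
    steps = 2 * round(outputs[3]) + round(outputs[4])   # assumed 2-bit code
    change = 0 if nothing else (steps if increase else -steps)
    return {"pressure_change": change, "warning": warning}
```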
14.8.2 Estimation of critical state variables

Willis et al. [16] have demonstrated the application of neural network controllers to industrial continuous and batch-fed fermenters, and to a commercial-scale high-purity distillation column. Each application is characterized by a delay in obtaining the critical state variable (i.e., the controlled variable), as it requires chemical or pathological analysis. The neural network allows comparatively rapid estimation of the critical state variable from secondary state variables. The use of a model for such a purpose is discussed in Section 11.4.4. The only difference here is that the controlled plant is modeled using a neural network. The estimated value for the critical state variable can be sent to a PID controller (see Section 14.2.5) to determine the action variable (Figure 14.17). As the critical variable can be measured off-line in each case, there is no difficulty in generating training sets of data. Each of the three applications demonstrates a different aspect to this problem. The chemotaxis learning algorithm was used in each case (see Section 8.4.3).

Figure 14.17 Using a neural network to estimate values for critical state variables.

The continuous fermentation process is dynamic, i.e., the variables are constantly changing, and a change in the value of a variable may be just as significant as the absolute value. The role of a static neural network, on the other hand, is to perform a mapping of static input variables onto static output variables. One way around this weakness is to use the recent history of state variables as input nodes. In the continuous fermentation process, two secondary state variables were considered. Nevertheless, six input nodes were required, since the two previously measured values of the variables (at times t − ∆t and t − 2∆t) were used as well as the current values (Figure 14.18).

Figure 14.18 Using time histories of state variables in a neural network (based on the continuous fermentation application of Willis et al. [16]).

In contrast, the batch fermentation process should move smoothly and slowly through a series of phases, never reaching equilibrium. In this case, the time since the process began, rather than the time history of the secondary variable, was important. Thus, for this process there were only two input nodes, the current time and a secondary state variable.

In the methanol distillation process, an alternative approach was adopted to the problem of handling dynamic behavior. As it is known that the state variables must vary continuously, sudden sharp changes in any of the propagated values can be disallowed. This is achieved through a simple low-pass digital filter. There are several alternative forms of digital filter (see, for example, [18]), but Willis et al. used the following:

$$y(t) = \alpha\, y(t-1) + (1 - \alpha)\, x(t), \qquad 0 \le \alpha \le 1 \tag{14.15}$$

where x(t) and y(t) are the input and output of the filter, respectively, at time t. The filter ensures that no value of y(t) can be greatly different from its previous value, and so high-frequency fluctuations are eliminated. Such a filter was attached to the output side of each neuron (Figure 14.19), so that the unfiltered output from the neuron was represented by x(t), and the filtered output was y(t). Suitable values for the parameter α were learned along with the network weightings. Willis et al. were able to show improved accuracy of estimation and tighter control by incorporating the digital filter into their neural network.

Figure 14.19 Dealing with changing variables by using low-pass filters (LPF) attached to each neuron (based on the industrial distillation application of Willis et al. [16]).

Further improvements were possible by comparing the estimated critical state variable with the actual values, as these became known. The error was then used to adjust the output of the estimator. There were two feedback loops, one for the PID controller and one for the estimator (Figure 14.20).

Figure 14.20 Feedback control of both the plant and the neural network estimator.
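Equation 14.15 amounts to a first-order recursive filter. Below is a minimal sketch of such a filter attached to a neuron output; in the application described, a parameter value per filter was learned along with the network weights, whereas here it is simply supplied.

```python
class LowPassFilter:
    """First-order recursive filter: y(t) = alpha*y(t-1) + (1-alpha)*x(t)."""
    def __init__(self, alpha, y0=0.0):
        assert 0.0 <= alpha <= 1.0       # Equation 14.15 requires 0 <= alpha <= 1
        self.alpha = alpha
        self.y = y0                      # previous output y(t-1)

    def __call__(self, x):
        # A large alpha weights the previous output heavily, so no value of
        # y(t) can differ greatly from its predecessor; high-frequency
        # fluctuations in the propagated value are thereby suppressed.
        self.y = self.alpha * self.y + (1.0 - self.alpha) * x
        return self.y

# Example: smoothing an oscillating (noisy) neuron output
lpf = LowPassFilter(alpha=0.9)
for x in [0.0, 1.0, 0.0, 1.0, 0.0]:
    print(round(lpf(x), 3))
```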
14.9 Statistical process control (SPC)

14.9.1 Applications

Statistical process control (SPC) is a technique for monitoring the quality of products as they are manufactured. Critical parameters are monitored and adjustments are made to the manufacturing process before any products are manufactured that lie outside of their specifications. The appeal of SPC is that it minimizes the number of products that are rejected at the quality control stage, thereby improving productivity and efficiency. Since the emphasis of SPC lies in monitoring products, this section could equally belong in Chapter 11.

SPC involves inspecting a sample of the manufactured products, measuring the critical parameters, and inferring from these measurements any trends in the parameters for the whole population of products. The gathering and manipulation of the statistics is a procedural task, and some simple heuristics are used for spotting trends. The monitoring activities, therefore, lend themselves to automation through procedural and rule-based programming. Depending on the process, the control decisions might also be automated.

14.9.2 Collecting the data

Various statistics can be gathered, but we will concentrate on the mean and standard deviation* of the monitored parameters. Periodically, a sample of consecutively manufactured products is taken, and the critical parameter x is measured for each item in the sample. The sample size n is typically in the range 5–10. In the case of the manufacture of silicon wafers, thickness may be the critical parameter. The mean $\bar{x}$ and standard deviation $\sigma$ for the sample are calculated. After several such samples have been taken, it is possible to arrive at a mean of means $\bar{\bar{x}}$ and a mean of standard deviations $\bar{\sigma}$. The values $\bar{\bar{x}}$ and $\bar{\sigma}$ represent the normal, or set-point, values for $\bar{x}$ and $\sigma$, respectively. Special set-up procedures exist for the manufacturing plant to ensure that $\bar{\bar{x}}$ corresponds to the set-point for the parameter x.

* The range of sample values is often used instead of the standard deviation.

Bounds called control limits are placed above and below these values (Figure 14.21). Inner and outer control limits, referred to as warning limits and action limits, respectively, may be set such that:

$$\text{warning limits for } \bar{x} = \bar{\bar{x}} \pm \frac{2\bar{\sigma}}{\sqrt{n}}$$

$$\text{action limits for } \bar{x} = \bar{\bar{x}} \pm \frac{3\bar{\sigma}}{\sqrt{n}}$$

Action limits are also set on the values of $\sigma$ such that:

$$\text{upper action limit for } \sigma = C_U\,\bar{\sigma}$$

$$\text{lower action limit for } \sigma = \frac{\bar{\sigma}}{C_L}$$

where suitable values for $C_U$ and $C_L$ can be obtained from standard tables for a given sample size n. Note that both $C_U$ and $C_L$ are greater than 1.

Figure 14.21 Control limits (action and warning) applied to: (a) sample means ($\bar{x}$); (b) sample standard deviations ($\sigma$).

The heuristics for interpreting the sample data with respect to the control limits are described in Section 14.9.3, below. Any values of $\bar{x}$ that lie beyond the action limits indicate that a control action is needed. The tolerance that is placed on a parameter is the limit beyond which the product must be rejected. It follows that if the tolerance is tighter than the action limits, then the manufacturing plant is unsuited to the product, and attempts to use it will result in a large number of rejected products irrespective of SPC.
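The limit calculations can be sketched as follows. In practice $C_U$ and $C_L$ would be looked up in standard tables for the chosen sample size n, so the default values below are placeholders only.

```python
import statistics
from math import sqrt

def control_limits(samples, c_upper=1.8, c_lower=1.8):
    """Derive warning and action limits from an initial series of samples.
    samples: a list of samples, each a list of n measurements of x."""
    n = len(samples[0])
    means = [statistics.mean(s) for s in samples]
    sigmas = [statistics.stdev(s) for s in samples]
    grand_mean = statistics.mean(means)      # mean of means
    sigma_bar = statistics.mean(sigmas)      # mean of standard deviations
    se = sigma_bar / sqrt(n)
    return {
        "x_warning": (grand_mean - 2 * se, grand_mean + 2 * se),
        "x_action": (grand_mean - 3 * se, grand_mean + 3 * se),
        "sigma_action": (sigma_bar / c_lower, c_upper * sigma_bar),
    }

# Example with three samples of size 5 (e.g., silicon wafer thickness)
limits = control_limits([[9.9, 10.1, 10.0, 9.8, 10.2],
                         [10.0, 10.1, 9.9, 10.0, 10.1],
                         [9.8, 10.0, 10.2, 10.1, 9.9]])
print(limits["x_action"])
```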
14.9.3 Using the data

As the data are gathered, a variety of heuristics can be applied. Some typical ones are reproduced below:

IF a single $\bar{x}$ value lies beyond an action limit THEN a special disturbance has occurred that must be investigated and eliminated

IF there are $\bar{x}$ values beyond both action limits THEN the process may be deteriorating

IF two consecutive values of $\bar{x}$ lie beyond a warning limit THEN the process mean may have moved

IF eight consecutive values of $\bar{x}$ lie on an upward or downward trend THEN the process mean may be moving

IF seven consecutive values of $\bar{x}$ lie all above or all below $\bar{\bar{x}}$ THEN the process mean may have moved

IF there are $\sigma$ values beyond the upper action limit THEN the process may be deteriorating

IF eight consecutive values of $\sigma$ lie on an upward trend THEN the process may be deteriorating

IF there are $\sigma$ values beyond the lower action limit THEN the process may have improved and attempts should be made to incorporate the improvement permanently

The conclusions of these rules indicate a high probability that a control action is needed. They cannot be definite conclusions, as the evidence is statistical. Furthermore, it may be that the process itself has not changed at all, but instead some aspect of the measuring procedure has altered. Each of the above rules calls for investigation of the process to determine the cause of any changes, perhaps using case-based or model-based reasoning (Chapters 6 and 11).
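Two of these heuristics are shown below as executable checks on the running series of sample means; the remaining rules follow the same pattern.

```python
def trend_rule(means, run=8):
    """IF eight consecutive sample means lie on an upward or downward trend
    THEN the process mean may be moving."""
    if len(means) < run:
        return False
    last = means[-run:]
    pairs = list(zip(last, last[1:]))
    return all(a < b for a, b in pairs) or all(a > b for a, b in pairs)

def same_side_rule(means, grand_mean, run=7):
    """IF seven consecutive sample means lie all above or all below the mean
    of means THEN the process mean may have moved."""
    if len(means) < run:
        return False
    last = means[-run:]
    return all(m > grand_mean for m in last) or all(m < grand_mean for m in last)
```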
14.10 Summary

Intelligent systems for control applications draw upon the techniques used for interpreting data (Chapter 11) and planning (Chapter 13). Frequently, the stages of planning are interleaved with execution of the plans, so that the controller can react to changes in the controlled plant as they occur. This contrasts with the classical planning systems described in Chapter 13, where the world is treated as a static "snapshot." As control systems must interact with a dynamic environment, time constraints are placed upon them. There is often a trade-off between the quality of a control decision and the time taken to derive it. In most circumstances it is preferable to perform a suboptimal control action than to fail to take any action within the time limits.

The control problem can be thought of as one of mapping a set of state variables onto a set of action variables. State variables describe the state of the controlled plant, and action variables, set by the controller, are used to modify the state of the plant. Adaptive controllers attempt to maintain one or more critical state parameters at a constant value, minimizing the effects of any disturbance. In contrast, servo controllers attempt to drive the plant to a new state, which may be substantially different from its previous state. The problems of adaptive and servo control are similar, as both involve minimizing the difference, or error, between the current values of the state variables and the desired values.

An approximate distinction can be drawn between low-level "reflex" control and high-level supervisory control. Low-level control often requires little intelligence and can be most effectively coded procedurally, for instance, as the sum of proportional, integral, and derivative (PID) terms. Improvements over PID control can be made by using fuzzy rules, which also allow some subtleties to be included in the control requirements, such as bounds on the values of some variables. Fuzzy rules offer a mixture of some of the benefits of procedures and crisp rules. Like crisp rules, fuzzy rules allow a linguistic description of the interaction between state and action variables. On the other hand, like an algebraic procedure, fuzzy rules allow smooth changes in the state variables to bring about smooth changes in the action variables. The nature of these smooth changes is determined by the membership functions that are used for the fuzzy sets.

Any controller requires a model of the controlled plant. Even a PID controller holds an implicit model in the form of its parameters, which can be tuned to specific applications. When a model of the controlled plant is not available, it is possible to build one automatically using the BOXES algorithm or a neural network. Both can be used to provide a mapping between state variables and action variables. They can also be used in a monitoring capacity, where critical state variables (which may be difficult to measure directly) are inferred from secondary measurements. The inferred values can then be used as feedback to a conventional controller.

If a plant is modeled with sufficient accuracy, then predictive control becomes a possibility. A predictive controller has two goals: to tackle the immediate control needs and to minimize future deviations, based on the predicted behavior.

References

1. Bennett, M. E., "Real-time continuous AI," IEE Proceedings–D, vol. 134, pp. 272–277, 1987.
2. Franklin, G. F., Powell, J. D., and Emami-Naeini, A., Feedback Control of Dynamic Systems, 3rd ed., Addison-Wesley, 1994.
3. Sripada, N. R., Fisher, D. G., and Morris, A. J., "AI application for process regulation and process control," IEE Proceedings–D, vol. 134, pp. 251–259, 1987.
4. Leitch, R., Kraft, R., and Luntz, R., "RESCU: a real-time knowledge based system for process control," IEE Proceedings–D, vol. 138, pp. 217–227, 1991.
5. Laffey, T. J., Cox, P. A., Schmidt, J. L., Kao, S. M., and Read, J. Y., "Real-time knowledge-based systems," AI Magazine, pp. 27–45, Spring 1988.
6. Lesser, V. R., Pavlin, J., and Durfee, E., "Approximate processing in real-time problem solving," AI Magazine, pp. 49–61, Spring 1988.
7. Hopgood, A. A., "Rule-based control of a telecommunications network using the blackboard model," Artificial Intelligence in Engineering, vol. 9, pp. 29–38, 1994.
8. Taunton, J. C. and Haspel, D. W., "The application of expert system techniques in on-line process control," in Expert Systems in Engineering, Pham, D. T. (Ed.), IFS Publications / Springer-Verlag, 1988.
9. Hopgood, A. A., Phillips, H. J., Picton, P. D., and Braithwaite, N. S. J., "Fuzzy logic in a blackboard system for controlling plasma deposition…
