Chapter 3.1
Distributed Control Systems
Dobrivoje Popovic
University of Bremen, Bremen, Germany

1.1 INTRODUCTION

The evolution of plant automation systems, from very primitive forms up to the contemporary complex architectures, has closely followed the progress in instrumentation and computer technology that, in turn, has given the impetus to vendors to update their system concepts in order to meet the user's growing requirements. This has directly encouraged users to enlarge the automation objectives in the field and to embed them into the broad objectives of the process, production, and enterprise level. The integrated automation concept [1] has been created to encompass all the automation functions of the company. This was viewed as an opportunity to optimally solve some interrelated problems such as the efficient utilization of resources, production profitability, product quality, human safety, and environmental demands.

Contemporary industrial plants are inherently complex, large-scale systems requiring complex, mutually conflicting automation objectives to be simultaneously met. Effective control of such systems can only be made feasible using adequately organized, complex, large-scale automation systems like the distributed computer control systems [2] (Fig. 1). This has for a long time been recognized in steel production plants, where 10 million tons per annum are produced, based on the operation of numerous work zones and the associated subsystems like:

Iron zone with coke oven, pelletizing and sintering plant, and blast furnace
Steel zone with basic oxygen and electric arc furnace, direct reduction, and continuous casting plant, etc.
Mill zone with hot and cold strip mills, plate bore, and wire and wire rod mill.

To this, the laboratory services and the plant care control level should be added, where all the required calculations and administrative data processing are carried out, statistical reviews prepared, and market prognostics data generated.
Typical laboratory services are the:

Test field
Quality control
Analysis laboratory
Energy management center
Maintenance and repair department
Control and computer center

and typical utilities:

Gas and liquid fuel distribution
Oxygen generation and distribution
Chilled water and compressed air distribution
Water treatment
Steam boiler and steam distribution
Power generation and dispatch.

Copyright © 2000 Marcel Dekker, Inc.

The control and management of complex plants is further complicated by the permanent necessity of adapting to changing demands, particularly due to quality variations in the raw materials and the fact that, although the individual subsystems are specific batch-processing plants, they are firmly incorporated into the downstream and upstream processes of the main plant. This implies that the integrated plant automation system has to control, coordinate, and schedule the total plant production process.

On the other hand, the complexity of the hierarchical structure of plant automation is further expanding because the majority of the individual subplants involved are themselves hierarchically organized, like the ore yard, coke oven, sintering plant, BOF/LD (Basic Oxygen Furnace LD-Converter) converter, electric arc furnace, continuous casting, etc.

Onshore and offshore oil and gas fields represent another typical example of distributed, hierarchically organized plants requiring similar automation concepts. For instance, a typical onshore oil and gas production plant consists of a number of oil and gas gathering and separation centers, serving a number of remote degassing stations, where the crude oil and industrial gas is produced to be distributed via long-distance pipelines. The gas production includes gas compression, dehydration, and purification of liquid components.
The remote degassing stations, usually unmanned and completely autonomous, have to be equipped with both multiloop controllers and remote terminal units that periodically transfer the data, status, and alarm reports to the central computer. These stations should be able to continue to operate even when the communication link to the central computer fails. This is also the case with the gathering and separation centers, which have to be equipped with independent microcomputer-based controllers [3] that, when the communication link breaks down, automatically start running a preprogrammed, failsafe routine. An offshore oil and gas production installation usually consists of a number of bridge-linked platforms for drilling and production, each platform being able to produce 100,000 or more barrels of crude oil per day and an adequate quantity of compressed and preprocessed gas. Attached to the platforms, beside the drilling modules, are also the water treatment and mud handling modules, power generation facilities, and other utilities.

In order to acquire, preprocess, and transfer the sensing data to the central computer and to obtain control commands from there, a communication link is required, and at the platform a supervisory control and data acquisition (SCADA) system. An additional link is required for interconnection of the platforms for exchange of coordination data.

Figure 1 Distributed computer control system.

Finally, a very illustrative example of a distributed, hierarchically organized system is the power system, in which the power-generating and power-distributing subsystems are integrated. Here, in the power plant itself, different subsystems are recognizable, like air, gas, combustion, water, steam, cooling, turbine, and generator subsystems.
The subsystems are hierarchically organized and functionally grouped into:

Drive-level subsystem
Subgroup-level subsystem
Group-level subsystem
Unit-level subsystem.

1.2 CLASSICAL APPROACH TO PLANT AUTOMATION

Industrial plant automation has in the past undergone three main development phases:

Manual control
Controller-based control
Computer-based control.

The transitions between the individual automation phases have been so gradual that even modern automation systems still integrate all three types of control.

At the dawn of the industrial revolution, and for a long time after, the only kind of automation available was the mechanization of some operations on the production line. Plants were mainly supervised and controlled manually. Using primitive indicating instruments installed in the field, the plant operator was able to adequately manipulate the likewise primitive actuators, in order to conduct the production process and avoid critical situations.

The application of real automatic control instrumentation was, in fact, not possible until the 1930s and 1940s, with the availability of pneumatic, hydraulic, and electrical process instrumentation elements such as sensors for a variety of process variables, actuators, and the basic PID controllers. At this initial stage of development it was possible to close the control loop for flow, level, speed, pressure, or temperature control in the field (Fig. 2). In this way, plants steadily became more and more equipped with field control instrumentation, widely distributed throughout the plant, able to indicate, record, and/or control individual process variables. In such a constellation, the duty of the plant operator was to periodically monitor the indicated measured values and to preselect and set the controlling set-point values.
Yet the real breakthrough in the role of the plant operator in industrial automation was achieved in the 1950s by introducing electrical sensors, transducers, actuators, and, above all, by placing the plant instrumentation in the central control room of the plant. In this way, the possibility was given to supervise and control the plant from one single location, using some monitoring and command facilities. In fact, the introduction of automatic controllers mainly shifted the responsibility of the plant operator from manipulating the actuating values to adjusting the controllers' set-point values. In this way the operator became a supervisory controller.

Figure 2 Closed-loop control.

In the field of plant instrumentation, the particular evolutionary periods have been marked by the respective state of the art of the available instrumentation technology, so that an instrumentation period is identifiable that is:

Pneumatic and hydraulic
Electrical and electronic
Computer based.

The period of pneumatic and hydraulic plant instrumentation was, no doubt, technologically rather primitive, because the instrumentation elements used were of low computational precision. They have, nevertheless, been highly reliable and, above all, explosion proof, so that they are still in use today, at least in the appropriate control zones of the plant.

Essential progress in industrial plant control has been made by introducing electrical and electronic instrumentation, which has enabled the implementation of advanced control algorithms (besides PID, also cascade, ratio, nonlinear, etc. control) and considerably facilitated automatic tuning of control parameters. This has been made possible particularly through the computer-based implementation of individual control loops (Fig. 3).
The idea of centralization of plant monitoring and control facilities was implemented by introducing the concept of a central control room in the plant, in which the majority of plant control instrumentation, with the exception of sensors and actuators, is placed. For connecting the field instrumentation elements to the central control room, pneumatic and electrical data transmission lines have been installed within the plant. The operation of the plant from the central control room is based on the indicating, recording, and alarm elements situated there, as well as, for better local orientation, on the use of plant mimic diagrams. The plant mimic diagrams have proven so useful that they are still in use today.

Microcomputers, usually programmed to solve some data acquisition and/or control problems in the field, have been connected, along with other instrumentation elements, to the facilities of the central control room, where the plant operators are in charge of centralized plant monitoring and process control.

Closed-loop control is essential for keeping the values of process variables at the prescribed set-point values, in spite of internal and external disturbing influences, particularly when the control parameters are optimally tuned to the process parameters. In industrial practice, the most favored approach to control parameter tuning is the Ziegler–Nichols method, the application of which is based on some simplified relations and some recommended tables as a guide for determination of the optimal step transition of the loop while keeping its stability margin within some given limits. The method is basically applicable to stationary, time-invariant processes for which the values of the relevant process parameters are known; the control parameters of the loop can then be tuned offline. This cannot always hold, so the control parameters have to be optimally tuned using a kind of trial-and-error approach, called the Ziegler–Nichols test.
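Once the test has yielded the apparent dead time L and the "reaction rate" R of the loop, the classical Ziegler–Nichols reaction-curve table converts these two numbers directly into controller settings. The sketch below shows that mapping; the rule constants are the commonly published table values, while the function name and argument conventions are illustrative, not from the text:

```python
def ziegler_nichols_open_loop(R, L, controller="PID"):
    """Classical Ziegler-Nichols reaction-curve tuning rules.

    R : reaction rate, i.e., the steepest slope of the open-loop step
        response, normalized by the size of the actuator step applied
    L : apparent dead time (pure delay) of the loop, in seconds

    Returns (K_p, T_R, T_D): proportional gain, reset time, rate time.
    """
    if R <= 0 or L <= 0:
        raise ValueError("R and L must be positive")
    if controller == "P":
        return 1.0 / (R * L), float("inf"), 0.0
    if controller == "PI":
        return 0.9 / (R * L), L / 0.3, 0.0
    if controller == "PID":
        return 1.2 / (R * L), 2.0 * L, 0.5 * L
    raise ValueError("unknown controller type: %r" % controller)
```

These settings are a starting point for the trial-and-error refinement the text describes, not a guaranteed optimum; the rules deliberately trade a large stability margin for a fast, quarter-amplitude-damped response.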
The Ziegler–Nichols test is an open-loop test through which the pure delay of the loop and its "reaction rate" can be determined, based on which the optimal controller tuning can be undertaken.

Figure 3 Computer-based control loop.

1.3 COMPUTER-BASED PLANT AUTOMATION CONCEPTS

Industrial automation has generally been understood as an engineering approach to the control of systems such as power, chemical, petrochemical, cement, steel, water and wastewater treatment, and manufacturing plants [4,5]. The initial automation objectives were relatively simple, reduced to automatic control of a few process variables or a few plant parameters. Over the years, there has been an increasing trend toward simultaneous control of more and more (or of all) process variables in larger and more complex industrial plants. In addition, the automation technology has had to provide a better view of the plant and process state, required for better monitoring and operation of the plant, and for improvement of plant performance and product quality. The close cooperation between the plant designer and the control engineer has, again, directly contributed to the development of better instrumentation, and opened perspectives to implement larger and more complex production units and to run them at full capacity while guaranteeing high product quality. Moreover, the automation technology is presently used as a valuable tool for solving crucial enterprise problems, interrelating the simultaneous solution of process and production control problems along with the accompanying financial and organizational problems.

Generally speaking, the principal objectives of plant automation are to monitor the information flow and to manipulate the material and energy flow within the plant in the sense of an optimal balance between product quality and the economic factors.
This means meeting a number of contradictory requirements such as [3]:

Maximal use of production capacity at the highest possible production speed, in order to achieve maximal production yield of the plant
Maximal reduction of production costs by:
  Energy and raw material savings
  Saving of labor costs by reducing the required staff and staff qualification
  Reduction of required storage and inventory space and of transport facilities
  Using low-price raw materials while achieving the same product quality
Maximal improvement of product quality to meet the highest international standards, while keeping the quality constant over the production time
Maximal increase of reliability, availability, and safety of plant operation by extensive plant monitoring, backup measures, and explosion-proofing provisions
Exact compliance with governmental regulations concerning environmental pollution, the ignorance of which incurs financial penalties and might provoke social protest
Market-oriented production and customer-oriented production planning and scheduling, in the sense of just-in-time production and the shortest response to customer inquiries.

Severe international competition in the marketplace and steadily rising labor, energy, and raw material costs force enterprise management to introduce advanced plant automation that simultaneously includes office automation, required for computer-aided market monitoring, customer services, production supervision and delivery terms checking, accelerated order processing, extensive financial balancing, etc. This is known as integrated enterprise automation and represents the highest automation level [1].

The use of dedicated computers to solve locally restricted automation problems was the initial computer-based approach to plant automation, introduced in the late 1950s and largely used in the 1960s.
At that time the computer was viewed, mainly due to its low reliability and relatively high costs, not so much as a control instrument but rather as a powerful tool to solve some special, clearly defined problems of data acquisition and data processing, process monitoring, production recording, material and energy balancing, production reporting, alarm supervision, etc. This versatile capability of computers has also opened the possibility of their application to laboratory and test field automation.

As a rule, dedicated computers have been individually applied to partial plant automation, i.e., for automation of particular operational units or subsystems of the plant. Later on, one single large mainframe computer was placed in the central control room for centralized, computer-based plant automation. Using such computers, the majority of indicating, recording, and alarm-indicating elements, including the plant mimic diagrams, have been replaced by corresponding application software.

The advent of larger, faster, more reliable, and less expensive process control computers in the mid 1960s even encouraged vendors to place the majority of plant and production automation functions into a single central computer; this was possible due to the enormous progress in computer hardware and software, process and man–machine interfaces, etc.

However, in order to increase the reliability of the central computer system, some backup provisions have been necessary, such as backup controllers and logic circuits for automatic switching from the computer to the backup controller mode (Fig. 4), so that in the case of computer failure the controllers take over the last set-point values available in the computer and freeze them in the latches available for this purpose. The values can later be manipulated by the plant operator in a way similar to conventional process control.
In addition, computer producers have been working on more reliable computer system structures, usually in the form of twin and triple computer systems. In this way, the availability of a central control computer system could be increased to the required value of at least 99.95% of production time per year. Moreover, the troubleshooting and repair time has been dramatically reduced through online diagnostic software, preventive maintenance, and the modularity of twin-computer hardware, so that the number of really needed backup controllers has been reduced to a small number of the most critical ones.

The situation suddenly changed once microcomputers were increasingly exploited to solve control problems. The 8-bit microcomputers, such as Intel's 8080 and Motorola's MC 6800, designed for bytewise data processing, proved to be appropriate candidates for the implementation of programmable controllers [6]. Moreover, the 16- and 32-bit microcomputer generation, to which Intel's 8088 and 8086, Motorola's 68000, Zilog's Z8000, and many others belong, gained a relatively high respect within the automation community. They have worldwide been seen as an efficient instrumentation tool, extremely suitable for solving a variety of automation problems in a rather simple way. Their high reliability has placed them at the core of digital single-loop and multiloop controllers, and has finally set the future trend in building automation systems: transferring more and more programmed control loops from the central computer into microcomputers distributed in the field. Consequently, the duties left to the central computer have been less and less in the area of process control, but rather in the areas of higher-level functions of plant automation such as plant monitoring and supervision.

Figure 4 Backup controller mode.
This was the first step towards splitting up the functional architecture of a computer-based automation system into at least two hierarchical levels (Fig. 5):

Direct digital control
Plant monitoring and supervision.

The strong tendency to see process and production control as a unit, typical of the 1970s, soon accelerated further architectural extension of computer-based automation systems by introducing an additional level on top of the process supervisory level: the production scheduling and control level. Later on, the need was identified for building centralized data files of the enterprise, to better exploit the available production and storage resources within the production plant. Finally, it was identified that direct access to the production and inventory files helps optimal production planning, customer order dispatching, and inventory control.

In order to integrate all these strongly interrelated requirements into one computer system, computer users and producers have come to the agreement that the structure of a computer system for integrated plant and production automation should be hierarchical, comprising at least the following hierarchical levels:

Process control
Plant supervision and control
Production planning and plant management.

This structure has also been professionally implemented by computer producers, who have launched an abundant spectrum of distributed computer control systems, e.g.:

ASEA MASTER (ASEA)
CENTUM (Yokogawa)
CONTRONIC P (Hartmann & Braun)
DCI 4000 (Fischer & Porter)
HIACS 3000 (Hitachi)
LOGISTAT CP 80 (AEG-Telefunken)
MOD 300 (Taylor Instruments)
PLS (Eckardt)
PMS (Ferranti)
PROCONTROL I (BBC)
PROVOX (Fisher Controls)
SPECTRUM (Foxboro)
TDC 3000 (Honeywell)
TELEPERM M (Siemens)
TOSDIC (Toshiba).

1.4 AUTOMATION TECHNOLOGY

Development of distributed computer control systems evidently depends on the development of their essential parts: hardware, software, and communication links.
Thus, to better conceive the real capabilities of modern automation systems, it is necessary to review the technological level and the potential application possibilities of the individual parts as constituent subsystems.

Figure 5 Hierarchical systems level diagram.

1.4.1 Computer Technology

For more than 10 years, the internal, bus-oriented Intel 80x86 and Motorola 680x0 microcomputer architectures have been the driving agents for the development of a series of powerful microprocessors. However, the real computational power of processors came along with the innovative design of RISC (reduced instruction set computer) processors. Consequently, the RISC-based microcomputer concept soon outperformed the mainstream architecture. Today, the most frequently used RISC processors are the SPARC (Sun), Alpha (DEC), R4X00 (MIPS), and PA-RISC (Hewlett-Packard).

Nevertheless, although powerful, the RISC processor chips have not found a firm domicile within mainstream PCs, but rather have become the core part of workstations and similar computational facilities. Their relatively high price has decreased their market share, compared to mainstream microprocessor chips. Yet the situation has recently improved with the introduction of emulation possibilities that enable compatibility among different processors, so that RISC-based software can also run on conventional PCs. In addition, new microprocessor chips with the RISC architecture for new PCs, such as the PowerPC 601 and the like, also promote the use of RISCs in automation systems. Besides, the appearance of portable operating systems and the rapid growth of the workstation market contribute to the steady decrease of the price-to-performance ratio and thus to the acceptance of RISC processors for real-time computational systems.
For process control applications, of considerable importance was the Intel initiative to repeatedly modify its 80x86 architecture, which underwent an evolution in five successive phases, represented through the 8086 (a 0.5 MIPS, 29,000-transistor processor), the 80286 (a 2 MIPS, 134,000-transistor processor), the 80386 (an 8 MIPS, 275,000-transistor processor), and the 80486 (a 37 MIPS, 1.2-million-transistor processor), up to the Pentium (a 112 and more MIPS, 3.1-million-transistor processor). Currently, even an over 300 MIPS version of the Pentium is commercially available.

Breaking the 100 MIPS barrier, up to then monopolized by the RISC processors, the Pentium has secured a threat-free future in the widest field of applications, relying on existing systems software such as Unix, DOS, Windows, etc. This is a considerably lower requirement than writing new software to fit the RISC architecture. Besides, the availability of very advanced system software, such as operating systems like Windows NT, and of real-time and object-oriented languages, has essentially enlarged the application possibilities of PCs in direct process control, for which there is a wide choice of software tools, kits, and toolboxes powerfully supporting computer-aided control system design on PCs. Real-time application programs developed in this way can also run on the same PCs, so that PCs have finally become a constitutional part of modern distributed computer systems [7].

For distributed, hierarchically organized plant automation systems, of vital importance are the computer-based process-monitoring stations, the human–machine interfaces representing human windows into the process plant.
The interfaces, mainly implemented as CRT-based color monitors with a connected keyboard, joystick, mouse, lightpen, and the like, are associated with the individual plant automation levels to function as:

Plant operator interfaces, required for plant monitoring, alarm handling, failure diagnostics, and control interventions
Production dispatch and production-monitoring interfaces, required for plant production management
Central monitoring interfaces, required for sales, administrative, and financial management of the enterprise.

Computer-based human–machine interfaces have functionally improved upon the features of the conventional plant monitoring and command facilities installed in the central control room of the plant, and have completely replaced them there. The underlying philosophy of the new plant-monitoring interfaces (that only those plant instrumentation details and only the process variables selected by the operator are presented on the screen) releases the operator from the visual saturation present in conventional plant-monitoring rooms, where a great number of indicating instruments, recorders, and mimic diagrams is permanently present and has to be continuously monitored. In this way the plant operator can concentrate on monitoring only those process variables requiring immediate intervention.

There is still another essential aspect of process monitoring and control that justifies abandoning the conventional concept of a central control room, where the indicating and recording elements are arranged according to the location of the corresponding sensors and/or control loops in the plant. This arrangement hampers the operator from intervening accordingly in a multialarm case, because the plant operator then has to simultaneously monitor and operationally interrelate the alarmed, indicated, and required command values situated at relatively large mutual distances.
Using screen-oriented displays, the plant operator can, upon request, simultaneously display a large number of process and control variables in any constellation. This kind of presentation can even, guided by the situation in the field, be automatically triggered by the computer.

It should be emphasized that the concept of modern human interfaces has been shaped over years of cooperation between vendor designers and users. During this time, the interfaces have evolved into flexible, versatile, intelligent, user-friendly workplaces, widely accepted in all industrial sectors throughout the world. The interfaces provide the user with a wide spectrum of beneficial features, such as:

Transparent and easily understandable display of alarm messages in chronological sequence, which blink, flash, and/or change color to indicate the current alarm status
Display scrolling on the advent of new alarm messages, while handling the previous ones
Mimic diagram displays showing different details of different parts of the plant by paging, rolling, zooming, etc.
Plant control using mimic diagrams
Short-time and long-time trend displays
Real-time and historical trend reports
Vertical multicolor bars, representing values of process and control variables, alarm limit values, operating restriction values, etc.
Menu-oriented operator guidance with multipurpose help and support tools.

1.4.2 Control Technology

The first computer control application was implemented as direct digital control (DDC), in which the computer was used as a multiloop controller to simultaneously implement tens and hundreds of control loops. In such a computer system, conventional PID controllers have been replaced by the respective PID control algorithms, implemented in programmed digital form in the following way.
The controller output y(t), based on the control error e(t), i.e., the difference between the set-point value SPV and the measured process variable, is defined as

y(t) = K_p [ e(t) + (1/T_R) ∫₀ᵗ e(τ) dτ + T_D de(t)/dt ]

where K_p is the proportional gain, T_R the reset time, and T_D the rate time of the controller.

In the computer, the digital PID control algorithm is based on discrete values of the measured process variables at equidistant sampling instants t_0, t_1, ..., t_n, so that one has mathematically to deal with differences and sums instead of with derivatives and integrals. Therefore, the discrete version of the above algorithm has to be developed by first differentiating the above equation, getting

y'(t) = K_p [ e'(t) + (1/T_R) e(t) + T_D e''(t) ]

where e'(t) and e''(t) are the first and second derivatives of e(t), and y'(t) the first derivative of y(t). The derivatives can be approximated at each sampling point by

y'(k) ≈ [y(k) - y(k-1)] / Δt
e'(k) ≈ [e(k) - e(k-1)] / Δt
e''(k) ≈ [e'(k) - e'(k-1)] / Δt

to result in

[y(k) - y(k-1)] / Δt = K_p { [e(k) - e(k-1)] / Δt + e(k)/T_R + T_D [e(k) - 2e(k-1) + e(k-2)] / Δt² }

or in

y(k) = y(k-1) + K_p (1 + Δt/T_R + T_D/Δt) e(k) + K_p (-1 - 2T_D/Δt) e(k-1) + K_p (T_D/Δt) e(k-2)

This is known as the positional PID algorithm, which delivers the new output value y(k) based on its previous value y(k-1) and on some additional calculations in which the values of e(t) at three successive samplings are involved. The corresponding velocity version is

Δy(k) = y(k) - y(k-1)

Better results can be achieved using the "smoothed" derivative

e'(k) = (1/n) Σ_{i=0}^{n-1} [e(k-i) - e(k-i-1)] / Δt

or the "weighted" derivative
e'(k) = [ Σ_{i=0}^{n-1} W_i (e(k-i) - e(k-i-1)) / Δt ] / Σ_{i=0}^{n-1} W_i

in which the weighting factors W_i are selected so that W_i ≤ W_0 and

Σ_{i=0}^{n-1} W_i = 1

In this case the final digital form of the PID algorithm is given by

y(k) = y(k-1) + b_0 e(k) + b_1 e(k-1) + b_2 e(k-2) + b_3 e(k-3) + b_4 e(k-4)

with

b_0 = K_p (1/6 + Δt/T_R + T_D/(6Δt))
b_1 = K_p (1/2 + T_D/(3Δt))
b_2 = K_p (-1/2 - T_D/Δt)
b_3 = K_p (-1/2 + T_D/(3Δt))
b_4 = K_p T_D/(6Δt)

Another form of the discrete PID algorithm, used in the first DDC implementations, was

y(k) = K_p { e(k) + (1/T_R) Σ_{i=0}^{k} e(i) Δt + T_D [e(k) - e(k-1)] / Δt }

Due to the sampling, the exact values of the measured process variables are known only at the sampling instants. Information about the signal values between the sampling instants is lost. In addition, the requirement to hold the sampled value constant between two sampling instants delays the value by half of the sampling period, so that the choice of a large sampling period is equivalent to the introduction of a relatively long delay into the process dynamics. Consequently, the control loop will respond very slowly to changes in the set-point value, which makes it difficult to properly manage urgent situations.

The best sampling time Δt to be selected for a given control loop depends on the control algorithm applied and on the process dynamics. Generally, the shorter the sampling time, the better the approximation of the continuous closed-loop system by its digital equivalent, although this does not always hold. For instance, the choice of sampling time has a direct influence on the pole displacement of the original (continuous) system, whose discrete version can in this way become unstable, unobservable, or uncontrollable. For systems having only real poles and which are controlled by a sampled-version algorithm, it is recommended to choose the sampling time between 1/6 and 1/3 of the smallest time constant of the system.
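The positional algorithm derived above maps directly onto a few lines of code: the controller only has to remember its previous output and the two previous error samples. A minimal sketch (class and variable names are illustrative, not from the text):

```python
class DiscretePID:
    """Recursive (positional) PID from the backward-difference derivation:

        y(k) = y(k-1) + c0*e(k) + c1*e(k-1) + c2*e(k-2)

    with c0 = Kp*(1 + dt/T_R + T_D/dt), c1 = Kp*(-1 - 2*T_D/dt),
    and  c2 = Kp*(T_D/dt).
    """

    def __init__(self, Kp, T_R, T_D, dt):
        self.c0 = Kp * (1.0 + dt / T_R + T_D / dt)
        self.c1 = Kp * (-1.0 - 2.0 * T_D / dt)
        self.c2 = Kp * (T_D / dt)
        self.e1 = self.e2 = 0.0   # e(k-1) and e(k-2)
        self.y = 0.0              # y(k-1), the previous controller output

    def update(self, setpoint, measurement):
        """Compute y(k) from the new error sample e(k) = SPV - PV."""
        e = setpoint - measurement
        self.y += self.c0 * e + self.c1 * self.e1 + self.c2 * self.e2
        self.e2, self.e1 = self.e1, e
        return self.y
```

With T_D = 0 the recursion reduces to a PI controller whose output grows by K_p·Δt·e/T_R per sample under a constant error, which is a convenient sanity check. A production version would additionally clamp the output to the actuator range and add anti-windup, which the text's basic derivation does not cover.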
Some practical recommendations plead for sampling times of 1 to 1.5 sec for liquid flow control, 3 to 5 sec for pressure control, and 20 sec for temperature control.

Input signal quantization, which is due to the limited accuracy of the analog-to-digital converters, is an essential factor influencing the quality of a digital control loop. The quantization can here produce a limit cycle within the frame of the quantization error made. The use of analog-to-digital converters with a resolution higher than the accuracy of the measuring instruments makes this influence less relevant. The same holds for the quantization of the output signal, where the resolution of the digital-to-analog converter is far higher than the resolution of the positioning elements (actuators) used. In addition, due to the low-pass behavior of the system to be controlled, the quantization errors of the output values of the controller have no remarkable influence on the control quality. Also, the problem of the influence of measurement noise on the accuracy of a digital controller can be solved by analog or digital prefiltering of the signals, before introducing them into the control algorithm.

Although the majority of distributed control systems achieve a higher level of sophistication by placing more emphasis on the strategy in the control loops, some major vendors of such systems are already using artificial intelligence technology [8] to implement knowledge-based controllers [9], able to learn online from control actions and their effects [10,11]. Here, particularly, rule-based expert controllers and fuzzy-logic-based controllers have been successfully used in various industrial branches. The controllers enable using a knowledge base around the PID algorithm to make the control loop perform better and to cope with process and system irregularities, including system faults [12].
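The idea of wrapping a knowledge base around the PID algorithm can be made concrete with a toy rule-based supervisor that watches the behaviour of the control error and retunes the loop gain accordingly. Everything below, the rules, thresholds, and scaling factors, is an illustrative assumption, not a method from the text; real expert controllers use far richer rule sets:

```python
def supervise_gain(Kp, error, d_error,
                   big_error=10.0, stalled=0.1, jumpy=5.0):
    """Toy rule-based supervisor for a PID loop's proportional gain.

    error   : current control error e(k)
    d_error : change of the error since the last sample, e(k) - e(k-1)

    Rules (illustrative assumptions):
      - large error that is barely moving -> response is sluggish, raise gain
      - error changing very fast          -> loop is jumpy, lower gain
      - otherwise                         -> leave the tuning alone
    """
    if abs(error) > big_error and abs(d_error) < stalled:
        return Kp * 1.2
    if abs(d_error) > jumpy:
        return Kp * 0.8
    return Kp
```

Such a supervisor runs at a much slower rate than the control loop itself and only nudges the tuning; the underlying PID algorithm still performs the sample-by-sample control.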
For example, Foxboro has developed the self-tuning controller EXACT, based on a pattern-recognition approach [4]. The controller uses direct performance feedback, monitoring the controlled process variable to determine the action

Copyright © 2000 Marcel Dekker, Inc.

[...]

... current-to-voltage conversion
Voltage-to-frequency and frequency-to-voltage conversion
Input signal preprocessing (filtering, smoothing, etc.)
Signal range switching
Input/output channel selection
Galvanic isolation

In addition, the signal format and/or digital signal representation also has to be adapted, using:

Analog-to-digital and digital-to-analog conversion
Parallel-to-serial and serial-to-parallel ...

[...]

Automatica 27(4): 599-609, 1991
30 PJ Gawthrop. Self-tuning PID controllers: algorithms and implementation. IEEE Trans Autom Control 31(3): 201-209, 1986
31 L Sha, SS Sathaye. A systematic approach to designing distributed real-time systems. IEEE Computer 26(9): 68-78, 1993
32 MS Shatz, JP Wang. Introduction to distributed software engineering. IEEE Computer 20(10): 23-31, 1987
33 D Popovic, G Thiele, M Kouvaras, ...

[...] ... control algorithms like feed-forward, predictive, deadbeat, state-feedback, self-tuning, nonlinear, and multivariable control.

Control loop configuration [33] is a two-step procedure, used for determination of:

The structure of the individual control loops, in terms of the functional modules used and of their interlinkage, required for implementation of the desired overall characteristics of the loop under configuration, ...

[...] ... specified implementation of control algorithms. Typical functions of this group are:

Lead/lag
Dead time
Differentiator
Integrator
Moving average
First-order digital filter
Sample-and-hold
Velocity limiter

Basic control algorithms mainly include the PID algorithm and its numerous versions, e.g.:

PID-ratio
PID-cascade
PID-gap
PID-auto-bias
PID-error-squared
I, P, PI, ...
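Two of the PID variants listed above can be sketched as simple error shapers placed in front of the basic algorithm (a sketch only; the function names and signatures are assumptions, not a product API):

```python
# Error shapers for two PID variants from the list above (sketch).

def error_squared(e):
    """PID-error-squared: feed e*|e| to the algorithm (sign is kept)."""
    return e * abs(e)

def gap(e, deadband):
    """PID-gap: ignore errors inside a deadband around the set point."""
    return 0.0 if abs(e) <= deadband else e
```

The error-squared form makes the loop gentle near the set point and aggressive far from it (common for level control), while the gap form suppresses actuator activity caused by small, noisy errors.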
and C-implementation of a microcomputer-based programmable multi-loop controller. J Microcomputer Applications 12: 159-165, 1989
34 MR Tolhurst, ed. Open Systems Interconnection. London: Macmillan Education, 1988
35 L Hutchison. Local Area Network Architectures. Reading, MA: Addison-Wesley, 1988
36 W Stallings. Handbook of Computer Communications Standards. Indianapolis, IN: Howard W. Sams & Company, 1987
37 ... Harston, R Pap, eds. Handbook of Neural Computing Applications. New York: Academic Press, 1990
19 A Ray. Distributed data communication networks for real-time process control. Chem Eng Commun 65(3): 139-154, 1988
20 D Popovic, ed. Analysis and Control of Industrial Processes. Braunschweig, Germany: Vieweg-Verlag, 1991
21 PH Laplante. Real-Time Systems Design and Analysis. New York: IEEE Press, 1993

[...] ... executed in an interlocked mode, in which a number of real-time tasks are executed synchronously, in either time-driven or event-driven mode. Two outstanding examples of process-oriented languages are:

Ada, able to support the implementation of complex, comprehensive system automation software in which, for instance, the individual software packages generated by the members of a programming team are integrated in a cooperative, ...

[...] ... to the design of software, e.g. [38]:

Modular, freely configurable software should be used, with a rich library of well-tested and online-verified modules.
The available loop and display panels should be relatively simple, transparent, and easy to learn.
A sufficient number of diagnostic, check, and test functions for online and offline system monitoring and maintenance should be provided.
Special software provisions ...
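The time-driven execution mode mentioned above can be illustrated with a minimal cyclic executive; the task names and periods here are invented for the sketch:

```python
# Minimal cyclic executive illustrating time-driven task execution
# (toy sketch; real systems add priorities, deadlines, interlocking).

def cyclic_schedule(tasks, n_ticks):
    """tasks: list of (name, period_in_ticks); returns a run log."""
    log = []
    for tick in range(n_ticks):
        for name, period in tasks:
            if tick % period == 0:
                log.append((tick, name))
    return log
```

Event-driven tasks, by contrast, would be released by interrupts or message arrivals rather than by the tick counter.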
hand, control system designers often specify the relative stability of a system; that is, they specify some measure of how close a system is to being unstable. In the remainder of this chapter, stability and relative stability for linear time-invariant systems, both continuous-time and discrete-time, and stability for nonlinear systems, both continuous-time and discrete-time, will be defined. Following ...

[...] ... positive integers m and n are as defined in Eq. (1). For a discrete-time system modeled by a difference equation as in Eq. (2), a transfer function can be developed by taking the one-sided z-transform of Eq. (2), ignoring all initial-value terms, and forming the ratio of the z-transform of the output to the z-transform of the input. The result is ...

Impulse Responses ... Transfer functions ...

[...]

The recent development of FIELDBUS, the international ...

[...] ... performance and the state of the art of the production schedule
Provides the plant management with extensive up-to-date reports, including the statistical and historical reviews of production and ...
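The relation above between a difference equation and its transfer function H(z) = B(z)/A(z) can be made concrete by driving the recursion with a unit impulse; the coefficients in the usage check below are illustrative:

```python
# Impulse response of a discrete-time system
#   y(k) + a1*y(k-1) + ... = b0*u(k) + b1*u(k-1) + ...
# i.e. H(z) = B(z)/A(z) with a[0] = 1 (sketch).

def impulse_response(b, a, n):
    """Simulate the recursion for n steps with a unit impulse input."""
    u = [1.0] + [0.0] * (n - 1)     # unit impulse
    y = []
    for k in range(n):
        acc = sum(b[i] * u[k - i] for i in range(len(b)) if k - i >= 0)
        acc -= sum(a[j] * y[k - j] for j in range(1, len(a)) if k - j >= 0)
        y.append(acc)
    return y
```

For example, H(z) = 1/(1 - 0.5 z^{-1}), i.e. b = [1] and a = [1, -0.5], yields the geometric impulse response 1, 0.5, 0.25, ...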