
Handbook of Reliability, Availability, Maintainability and Safety in Engineering Design – Part 26


3.3 Analytic Development of Reliability and Performance in Engineering Design

Boolean truth table (extract):

Entry  R1 R2 R3 R4 R5  Success or failure  Prob. of success
1      0  0  0  0  0   F                   –
2      0  0  0  0  1   F                   –
3      0  0  0  1  0   F                   –
4      0  0  0  1  1   F                   –
5      0  0  1  0  0   F                   –
6      0  0  1  0  1   S                   0.01008
7      0  0  1  1  0   F                   –
8      0  0  1  1  1   S                   0.00252
9      0  1  0  0  0   F                   –
10     0  1  0  0  1   S                   0.03888
11     0  1  0  1  0   S                   0.03888
12     0  1  0  1  1   S                   0.00972
13     0  1  1  0  0   F                   –
14     0  1  1  0  1   S                   0.00432
15     0  1  1  1  0   S                   0.00432
16     0  1  1  1  1   S                   0.00108
17     1  0  0  0  0   F                   –
18     1  0  0  0  1   F                   –
19     1  0  0  1  0   S                   0.01008
20     1  0  0  1  1   S                   0.00252
etc.

g) Bayesian Updating Procedure in Reliability Evaluation

The elements of a Bayesian reliability evaluation are similar to those for a discrete process, considered in Eq. (3.179) above, i.e.:

P(c|f) = P(c) · P(f|c) / P(f) .

However, the structure differs because the failure rate, λ, as well as the reliability, R_S, are continuous-valued. In this case, the Bayesian reliability evaluation is given by the formulae

P(λ_i | β_i, μ_i, γ_i) = P(λ_i) · P(β_i, μ_i, γ_i | λ_i) / P(β_i, μ_i, γ_i) ,   (3.203)

where:

P(R_S | β_i, μ_i, γ_i) = P(R_S) · P(β_i, μ_i, γ_i | R_S) / P(β_i, μ_i, γ_i)   (3.204)

and:

P(λ_i | t) = [λ^j · t^(j−1) / (j − 1)!] · e^(−λt)

P(R_S | a, b) = [(a + b + 1)! / (a! b!)] · R_S^a · (1 − R_S)^b

where:
j = number of components with the same λ,
t = operating time for determining λ and R_S,
a = the number of survivals out of j,
b = the number of failures out of j (i.e. j − a).

For both the failure rate λ and the reliability R_S, the probability P(β_j, μ_j, γ_j) may be either continuous or discrete, whereas the probabilities P(λ_j) for failure and P(R_S) for reliability are always continuous. Therefore, the prior and posterior distributions are always continuous, whereas the marginal distribution, P(β_j, μ_j, γ_j), may be either continuous or discrete.
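The continuous posterior for R_S above can be evaluated numerically. The sketch below (an illustration, not from the handbook) codes the density P(R_S | a, b) = [(a+b+1)!/(a! b!)] · R_S^a · (1 − R_S)^b and checks that it integrates to one; the sample values a = 8 survivals and b = 2 failures are assumed for illustration only.

```python
from math import factorial

def posterior_density(r, a, b):
    """P(R_S | a, b): beta-form posterior for reliability R_S,
    given a survivals and b failures out of j = a + b components."""
    norm = factorial(a + b + 1) / (factorial(a) * factorial(b))
    return norm * r**a * (1.0 - r)**b

# Numerical check that the density integrates to 1 over [0, 1]
# (trapezoidal rule on a fine grid).
n = 10_000
grid = [i / n for i in range(n + 1)]
vals = [posterior_density(r, 8, 2) for r in grid]
integral = sum((vals[i] + vals[i + 1]) / 2 for i in range(n)) * (1 / n)

# Posterior mean of R_S; for this density it equals (a+1)/(a+b+2) = 0.75
post_mean = sum(grid[i] * vals[i] for i in range(n + 1)) / sum(vals)
```

The posterior mean here sits below the raw survival fraction a/j = 0.8, reflecting the pull of the uniform prior implicit in this beta form.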
Thus, in the case of expert judgment, new estimate values in the form of a likelihood function are incorporated into a Bayesian reliability model in a conventional way, representing updated information in the form of a posterior (a posteriori) probability distribution that depends upon a prior (a priori) probability distribution that, in turn, is subject to the estimated values of the Weibull parameters. Because the prior distribution and that for the new estimated values represented by a likelihood function are conjugate to one another (refer to Eq. 3.179), the mixing of these two distributions, by way of Bayes' theorem, ultimately results in a posterior distribution of the same form as the prior.

h) Updating Expert Judgment

The initial predictions of reliability made during the conceptual design phase may be quite poor, with large uncertainties. Upon review, experts can decide which parts or processes to change, where to plan for tests, what prototypes to build, what vendors to use, or the type of what-if questions to ask in order to improve the design's reliability and reduce uncertainty. Before any usually expensive actions are taken (e.g. building prototypes), what-if cases are calculated to predict the effects on estimated reliability of such proposed changes or tests. These cases can involve changes in the structure, structural model, experts' estimates, and the terms of the reliability model, as well as effects of proposed test data results. Further breakdown of systems into component failure modes may be required to properly map these changes and to modify proposed test data in the reliability model (Booker et al. 2000). Because designs are under progressive development or undergoing configuration change during the engineering design process, new information continually becomes available at various stages of the process.
Design changes may include adding, replacing or eliminating processes and/or components in the light of new engineering judgment. Incorporating these changes and new information into the existing reliability estimates is referred to as the updating process. New information and data from different sources or of different types (e.g. tests, engineering judgment) are merged by combining uncertainty distribution functions of the old and new sources. This merging usually takes the form of a weighting scheme (Booker et al. 2000), (w_1 f_1 + w_2 f_2), where w_1 and w_2 are weights, and f_1 and f_2 are functions of parameters, random variables, probability distributions, reliabilities, etc. Experts often provide the weights, and sensitivity analyses are performed to demonstrate the effects of their choices. Alternatively, Bayes' theorem can be used as a particular weighting scheme, providing weights for the prior and the likelihood through application of the theorem. Bayesian combination is, in effect, Bayesian updating. If the prior and likelihood distributions overlap, then Bayesian combination will produce a posterior distribution with a smaller variance than if the two were combined via other methods, such as a linear combination of random variables. This is a significant advantage of using Bayes' theorem. Because test data at the early stages of engineering design are lacking, initial reliability estimates, R_0(t, λ, β), are developed from expert judgment, and form the prior distribution for the system (as indicated in Fig. 3.40 above). As the engineering design develops, data and information may become available for certain processes (e.g. systems, assemblies, components), and this would be used to form likelihood distributions for Bayesian updating.
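The linear weighting scheme and its Bayesian alternative can be compared numerically. In the sketch below (the beta parameters and weights are illustrative values, not the handbook's), two reliability estimates are pooled as w_1 f_1 + w_2 f_2 and contrasted with the conjugate beta-binomial update, showing the smaller posterior variance noted above.

```python
def beta_mean_var(a, b):
    """Mean and variance of a Beta(a, b) distribution."""
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

# Two sources of reliability information (illustrative parameters):
m1, v1 = beta_mean_var(8, 2)   # prior / expert estimate
m2, v2 = beta_mean_var(6, 2)   # new information expressed as a beta

# Linear weighting scheme: w1*f1 + w2*f2 (equal weights assumed here);
# mixture moments follow from the law of total variance.
w1 = w2 = 0.5
mix_mean = w1 * m1 + w2 * m2
mix_var = w1 * (v1 + m1**2) + w2 * (v2 + m2**2) - mix_mean**2

# Bayesian combination: conjugate update of the Beta(8, 2) prior
# with the new data read as 6 successes and 2 failures.
bayes_mean, bayes_var = beta_mean_var(8 + 6, 2 + 2)
```

With these overlapping distributions the Bayesian posterior variance comes out at roughly half the mixture variance, which is the advantage the text describes.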
All of the distribution information in the items at the various levels must be combined upwards through the system hierarchy levels, to produce final estimates of the reliability and its uncertainty at various levels along the way, until reaching the top process or system level. As more data and information become available and are incorporated into the reliability calculation through Bayesian updating, they will tend to dominate the effects of the experts' estimates developed through expert judgment. In other words, R_i(t, λ, β) formulated from i = 1, 2, 3, ..., n test results will look less and less like R_0(t, λ, β) derived from initial expert estimates. Three different combination methods are used to form the following (updated) expert reliability estimate of R_1(t, λ, β):

• For each prior distribution that is combined with data or a likelihood distribution, Bayes' theorem is used to produce a posterior distribution.
• Posterior distributions within a given level are combined according to the model configuration (e.g. multiplication of reliabilities for systems/sub-systems/equipment in series) to form the prior distribution of the next higher level (Fig. 3.40).
• Prior distributions at a given level are combined within the same systems/sub-systems/equipment to form the combined prior (for that level), which is then merged with the data (for that system/sub-system/equipment). This approach is continued up the levels until a process-level posterior distribution is developed.

For general updating, test data and other new information can be added to the existing reliability calculation at any level and/or for any process, system or equipment. These data/information may be applicable only to a single failure mode at equipment level. When new data or information become available at a higher level (e.g.
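The series-combination step (multiplying reliabilities up a level) can be sketched by Monte Carlo sampling from the sub-system posteriors. The beta parameters below are illustrative and not tied to any particular system in the handbook.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Posterior reliability distributions for two sub-systems in series,
# represented as beta distributions (illustrative parameters).
N = 200_000
sys_samples = [
    random.betavariate(8, 2) * random.betavariate(18, 4)  # series: R = R1 * R2
    for _ in range(N)
]

# Summary of the propagated system-level reliability distribution
sys_mean = sum(sys_samples) / N
sys_samples.sort()
p5 = sys_samples[int(0.05 * N)]   # 5th percentile
p95 = sys_samples[int(0.95 * N)]  # 95th percentile
```

For independent sub-systems the mean of the product is the product of the means (here 0.8 × 18/22 ≈ 0.654), and the sampled percentiles feed the next level up as an empirical prior.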
sub-system) for a reliability calculation at step i, it is necessary to back-propagate the effects of this new information to the lower levels (e.g. assembly or component). The reason is that at some future step, i + j, updating may be required at the lower level, and its effect propagated up the systems hierarchy. It is also possible to back-propagate by apportioning either the reliability or its parameters to the lower hierarchy levels according to their contributions (criticality) at the higher systems level. The statistical analysis involved with this back propagation is difficult, requiring techniques such as fault-tree analysis (FTA) (Martz and Almond 1997). While it can be shown that, for well-behaved functions, certain solutions are possible, they may not be unique. Therefore, constraints are placed on the types of solutions desired by the experts. For example, it may be required that, regardless of the apportioning used to propagate downwards, forward propagation maintain the original results at the higher systems level. General updating is an extremely useful decision tool for asking what-if questions and for planning resources, such as pilot test facilities, to determine if the reliability requirements can be met before actually manufacturing and/or constructing the engineered installation. For example, the reliability uncertainty distributions obtained through simulation are empirical, with no particular distribution form but, due to their asymmetric nature and because their range is from 0 to 1, they often appear to fit well to beta distributions. Thus, consider a beta distribution of the following form, for 0 ≤ x ≤ 1, a > 0, b > 0:

Beta(x | a, b) = [Γ(a + b) / (Γ(a) Γ(b))] · x^(a−1) · (1 − x)^(b−1) .   (3.205)

The beta distribution has important applications in Bayesian statistics, where probabilities are sometimes looked upon as random variables, and there is therefore a need for a relatively flexible probability density (i.e.
the distribution can take on a great variety of shapes), which assumes non-zero values in the interval from 0 to 1. Beta distributions are used in reliability evaluation as estimates of a component's reliability, with a continuous distribution over the range 0 to 1.

Characteristics of the Beta Distribution

The mean or expected value The mean, E(x), of the two-parameter beta probability density function (p.d.f.) is given by

E(x) = a / (a + b) .   (3.206)

The mean a/(a + b) depends on the ratio a/b. If this ratio is constant but the values for both a and b are increased, then the variance decreases and the p.d.f. tends to the unit normal distribution.

The median The beta distribution (as with all continuous distributions) has measures of location termed percentage points, X_p. The best known of these percentage points is the median, X_50, the value for which there is as much chance that a random variable will be above it as below it. For a successes in n trials, the lower confidence limit u, at confidence level s, is expressed as a percentage point on a beta distribution. The median ū of the two-parameter beta p.d.f. is given by

ū = 1 − F(u_50 | a, b) .   (3.207)

The mode The mode, or value with maximum probability, ů, of the two-parameter beta p.d.f. is given by

ů = (a − 1) / (a + b − 2)   for a > 1, b > 1
ů = 0 and 1                 for a < 1, b < 1
ů = 0                       for a < 1, b ≥ 1 and for a = 1, b > 1
ů = 1                       for a ≥ 1, b < 1 and for a > 1, b = 1   (3.208)

ů does not exist for a = b = 1. If a < 1, b < 1, there is a minimum value or antimode.

The variance Moments about the mean describe the shape of the beta p.d.f. The variance v is the second moment about the mean, and is indicative of the spread or dispersion of the distribution. The variance v of the two-parameter beta p.d.f. is given by

v = ab / [(a + b)^2 (a + b + 1)] .   (3.209)

The standard deviation The standard deviation σ_T of the two-parameter beta p.d.f.
is the positive square root of the variance, v, which indicates how close one can expect the value of a random variable to be to the mean of the distribution, and is given by

σ_T = [ab / ((a + b)^2 (a + b + 1))]^(1/2) .   (3.210)

Three-parameter beta distribution function The probability density function, p.d.f., of the three-parameter beta distribution function is given by

f(Y) = (1/c) · [Γ(a + b) / (Γ(a) Γ(b))] · (Y/c)^(a−1) · (1 − Y/c)^(b−1) ,   (3.211)

for 0 ≤ Y ≤ c and 0 < a, 0 < b, 0 < c. From this general three-parameter beta p.d.f., the standard two-parameter beta p.d.f. can be derived with the transform x = Y/c.

In the case where a beta distribution is fitted to a reliability uncertainty distribution, R_i(t, λ, β), resulting in certain values for parameters a and b, the experts would want to determine what would be the result if they had the components manufactured under the assumption that most would not fail. Taking advantage of the beta distribution as a conjugate prior for the binomial data, the combined component reliability distribution R_j(t, λ, β) would also be a beta distribution. For instance, the beta expected value (mean), variance and mode, together with the fifth percentile for R_j, can be determined from a reliability uncertainty distribution, R_j(t, λ, β). As an example, a beta distribution represents a reliability uncertainty distribution, R_1(t, λ, β), with values for parameters a = 8 and b = 2. The beta expected value (mean), variance and mode, together with the fifth-percentile value for R_1, are:

R_1(t, λ, β), number of successes a = 8 and number of failures b = 2:
Distribution mean: 0.80
Distribution variance: 0.0145
Distribution mode: 0.875
Beta coefficient (E-value): 0.5709

The expert decision to have the components manufactured under the assumption that most will not fail depends upon the new component reliability distribution.
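These figures can be reproduced directly from Eqs. (3.206)–(3.209). The sketch below computes the mean, variance and mode of Beta(a = 8, b = 2) and recovers the 5th-percentile E-value (quoted as 0.5709) by bisecting the CDF; the closed-form binomial-sum CDF used here is an assumption valid only for whole-number a and b.

```python
from math import comb

a, b = 8, 2

mean = a / (a + b)                          # Eq. (3.206)
var = a * b / ((a + b) ** 2 * (a + b + 1))  # Eq. (3.209)
mode = (a - 1) / (a + b - 2)                # Eq. (3.208), case a > 1, b > 1

def beta_cdf(x, a, b):
    """Beta CDF for integer a, b via the binomial-sum identity."""
    n = a + b - 1
    return sum(comb(n, j) * x**j * (1 - x) ** (n - j) for j in range(a, n + 1))

def percentile(p, a, b, tol=1e-10):
    """Invert the CDF by bisection on [0, 1]."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if beta_cdf(mid, a, b) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

p5 = percentile(0.05, a, b)   # the 5th-percentile "E-value"
```

Running this gives mean 0.80, variance ≈ 0.0145, mode 0.875 and a 5th percentile of ≈ 0.5709, matching the quoted values.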
The new reliability distribution would also be a beta distribution, R_2(t, λ, β), with modified values for the parameters: a = 8 + the number of successful prototypes, and b = 2 + the number unsuccessful. Assume that, for five and ten manufactured components, the expectation is that one and two will fail respectively:

For five components, R_2(t, λ, β), a = 8 + 5 and b = 2 + 1:
Distribution mean: 0.8125
Distribution variance: 0.0089
Distribution mode: 0.8571
Beta coefficient (E-value): 0.6366

For ten components, R_3(t, λ, β), a = 8 + 10 and b = 2 + 2:
Distribution mean: 0.8182
Distribution variance: 0.0065
Distribution mode: 0.85
Beta coefficient (E-value): 0.6708

The expected value improves slightly (from 0.8125 to 0.8182) but, more importantly, the 5th-percentile E-value improves from 0.57 to 0.67, which is an incentive to invest in the components. The general updating cycle can continue throughout the engineering design process. Figure 3.48 depicts tracking of the reliability evaluation throughout a system's design, indicating the three percentiles (5th, median or 50th, and 95th) of the reliability uncertainty distribution at various points in time (Booker et al. 2000). The individual data points begin with the experts' initial reliability characterisation R_0(t, λ, β) for the system and continue with the events associated with the general updates, R_i(t, λ, β), as well as the what-if cases and incorporation of test results. As previously noted, asking what-if questions and evaluating the effects on reliability provides valuable information for engineering design integrity, and for modifying designs based on prototype tests before costly decisions are made.
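The prototype update above is a pure conjugate calculation, sketched below with the parameter increments exactly as the text gives them (a = 8 + 5, b = 2 + 1 and a = 8 + 10, b = 2 + 2). Only the summary statistics are computed; the quoted E-values would additionally require the 5th percentile of each beta.

```python
def beta_update(a, b, successes, failures):
    """Conjugate beta-binomial update: prior Beta(a, b) plus test outcomes.
    Returns the posterior mean, variance and mode."""
    a2, b2 = a + successes, b + failures
    mean = a2 / (a2 + b2)
    var = a2 * b2 / ((a2 + b2) ** 2 * (a2 + b2 + 1))
    mode = (a2 - 1) / (a2 + b2 - 2)
    return mean, var, mode

# Expert prior: a = 8 successes, b = 2 failures.
# Five prototypes (parameter increments as quoted in the text):
m5, v5, mo5 = beta_update(8, 2, successes=5, failures=1)
# Ten prototypes:
m10, v10, mo10 = beta_update(8, 2, successes=10, failures=2)
```

This reproduces the quoted means (0.8125 and 0.8182), variances (≈ 0.0089 and ≈ 0.0065) and modes (≈ 0.8571 and 0.85).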
[Fig. 3.48 Tracking reliability uncertainty (Booker et al. 2000): reliability evaluation for critical equipment ref. SBS00125, tracked monthly over a 12-month period, showing the 90% uncertainty bounds, the median estimate, and test data.]

Graphs such as Fig. 3.48 are constructed for all the hierarchical levels of critical systems to monitor the effects of updating for individual processes. Graphs are constructed for these levels at the desired prediction time values (i.e. monthly, 3-monthly, 6-monthly and annual) to determine if reliability requirements are met at these time points during the engineering design process, as well as during the manufacturing/construction/ramp-up life cycle of the process systems. These graphs capture the results of the experts' efforts to improve reliability and to reduce uncertainty. The power of the approach is that the roadmap developed leads to higher reliability and reduced uncertainty, and the ability to characterise all of the efforts to achieve improvement.

i) Example of the Application of Fuzzy Judgment in Reliability Evaluation

Consider an assembly set with series components that can influence the reliability of the assembly. The components are subject to various failures (in this case, the potential failure condition of wear), potentially degrading the assembly's reliability. For different component reliabilities, the assembly reliability will be variable. Figure 3.49 shows membership functions for three component condition sets, {A = no wear, B = moderate wear, C = severe wear}, which are derived from minimum (best), most likely (median) and maximum (worst) estimates. Figure 3.50 shows membership functions for performance-level sets, corresponding to responses {a = acceptable, b = marginal, c = poor}. Three if-then rules define the condition/performance relationship:

• If condition is A, then performance is a.
• If condition is B, then performance is b.
• If condition is C, then performance is c.

[Fig. 3.49 Component condition sets for membership functions: membership (0 to 1.2) plotted against X-component condition (0 to 20) for sets A, B and C.]

[Fig. 3.50 Performance-level sets for membership functions: membership (0.0 to 1.0) plotted against Y-performance level (−100 to 900) for sets a, b and c.]

Referring to Fig. 3.49, if the component condition is x = 4.0, then x has membership of 0.6 in A and 0.4 in B. Using the rules, the defined component condition membership values are mapped to performance-level weights. Following fuzzy system methods, the membership functions for performance-level sets a and b are combined, based on the weights 0.6 and 0.4. This combined membership function can be used to form the basis of an uncertainty distribution for characterising performance for a given condition level. An equivalent probabilistic approach involving mixtures of distributions can be developed with the construction of the membership functions (Laviolette et al. 1995). In addition, linear combinations of random variables provide an alternative combination method when mixtures produce multi-modality results, which can be undesirable from a physical interpretation standpoint (Smith et al. 1998). Departing from standard fuzzy systems methods, the combined performance membership function can be normalised so that it integrates to 1.0. The resulting function, f(y|x), is the uncertainty distribution for performance, y, corresponding to the situation where component condition is equal to x. The cumulative distribution function of the uncertainty distribution, F(y|x), can now be developed. If performance must exceed some threshold, T, in order for the system to meet certain design criteria, then the reliability of the system for the situation where component condition is equal to x can be expressed as R(x) = 1 − F(T|x). A specific threshold of T corresponds to a specific reliability of R(4.0) (Booker et al. 1999).
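The fuzzy inference step can be sketched numerically. The triangular membership functions, performance supports and threshold below are assumed shapes chosen only to reproduce the worked memberships (0.6 in A and 0.4 in B at x = 4.0); they are illustrative, not the handbook's actual curves.

```python
def tri(y, lo, peak, hi):
    """Triangular membership function (assumed shape)."""
    if lo < y <= peak:
        return (y - lo) / (peak - lo)
    if peak < y < hi:
        return (hi - y) / (hi - peak)
    return 0.0

# Condition memberships at x = 4.0 (reproduces 0.6 in A, 0.4 in B)
x = 4.0
mu_A = max(0.0, 1.0 - x / 10.0)     # set A: no wear, declining over 0-10
mu_B = tri(x, 0.0, 10.0, 20.0)      # set B: moderate wear, peak at 10

# Performance-level sets a and b (assumed supports on the 0-900 scale)
ys = list(range(0, 901))
combined = [mu_A * tri(y, 0, 100, 300) + mu_B * tri(y, 200, 400, 600)
            for y in ys]

# Normalise so the combined function integrates to 1.0 (unit step in y)
total = sum(combined)
f = [v / total for v in combined]   # f(y | x = 4.0)

# Cumulative distribution F(y|x) and reliability against a threshold T
F, c = [], 0.0
for v in f:
    c += v
    F.append(c)

T = 250                 # assumed design threshold
R = 1.0 - F[T]          # R(x) = 1 - F(T | x)
```

The same machinery, re-run for other values of x, yields the family of conditional reliabilities R(x) the text goes on to discuss.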
In the event that the uncertainty in wear, x, is characterised by some distribution, G(x), the results of repeatedly sampling x from G(x) and calculating F(y|x) produce an 'envelope' of cumulative distribution functions. This 'envelope' represents the uncertainty in the degradation probability that is due to uncertainty in the level of wear. The approximate distribution of R(x) can be obtained from such a numerical simulation.

3.4 Application Modelling of Reliability and Performance in Engineering Design

In Sect. 1.1, the five main objectives that need to be accomplished in pursuit of the goal of the research in this handbook are:

• the development of appropriate theory on the integrity of engineering design for use in mathematical and computer models;
• determination of the validity of the developed theory by evaluating several case studies of engineering designs that have been recently constructed, that are in the process of being constructed, or that have yet to be constructed;
• application of mathematical and computer modelling in engineering design verification;
• determination of the feasibility of a practical application of intelligent computer automated methodology in engineering design reviews through the development of the appropriate industrial, simulation and mathematical models.

The following models have been developed, each for a specific purpose and with specific expected results, in part achieving these objectives:

• RAMS analysis model, to validate the developed theory on the determination of the integrity of engineering design.
• Process equipment models (PEMs), for application in dynamic systems simulation modelling to initially determine mass-flow balances for preliminary engineering designs of large integrated process systems, and to evaluate and verify process design integrity of complex integrations of systems.
• Artificial intelligence-based (AIB) model, in which relatively new artificial intelligence (AI) modelling techniques, such as the inclusion of knowledge-based expert systems within a blackboard model, have been applied in the development of intelligent computer automated methodology for determining the integrity of engineering design.

The first model, the RAMS analysis model, will now be looked at in detail in this section of Chap. 3. The RAMS analysis model was applied to an engineered installation, an environmental plant, for the recovery of sulphur dioxide emissions from a metal smelter to produce sulphuric acid. This model is considered in detail with specific reference to the inclusion of the theory on reliability as well as performance prediction, assessment and evaluation, during the conceptual, schematic and detail design phases respectively. Eighteen months after the plant was commissioned and placed into operation, failure data were obtained from the plant's distributed control system (DCS) operation and trip logs, and analysed with a view to matching the RAMS theory, specifically of systems and equipment criticality and reliability, with real-time operational data. The matching of theory with real-time data is studied in detail, with specific conclusions. The RAMS analysis computer model (ICS 2000) provides a 'first-step' approach to the development of an artificial intelligence-based (AIB) model with knowledge-based expert systems within a blackboard model, for automated continual design reviews throughout the engineering design process.
Whereas the RAMS analysis model is basically implemented and used by a single engineer for systems analysis, or at most a group of engineers linked via a local area network focused on general plant analysis, the AIB blackboard model is implemented by multi-disciplinary groups of design engineers who input specific design data and schematics into their relevant knowledge-based expert systems. Each designed system or related equipment is evaluated for integrity by remotely located design groups communicating either via a corporate intranet or via the internet. The measures of integrity are based on the theory for predicting, assessing and evaluating reliability, availability, maintainability and safety requirements for complex integrations of engineering systems. Consequently, the feasibility of practical application of the AIB blackboard model in the design of large engineered installations has been based on the successful application of the RAMS analysis computer model in several engineering design projects, specifically in large 'super projects' in the metals smelting and processing industries. Furthermore, where only the conceptual and preliminary design phases were considered with the RAMS analysis model, all the engineering design phases are considered in the AIB blackboard model, to include a complete range of methodologies for determining the integrity of engineering design. Implementation of the RAMS analysis model was considered sufficient in reaching a meaningful conclusion as to the practical application of the AIB blackboard model.

3.4.1 The RAMS Analysis Application Model

The RAMS analysis model was used not only for plant analysis to determine the integrity of engineering design but also for design reviews, as verification and evaluation of the commissioning of designed systems for installation and operation.
The RAMS analysis application model was initially developed for analysis of the integrity of engineering design in an environmental plant for the recovery of sulphur dioxide emissions from a metal smelter to produce sulphuric acid.
