Designing Capable and Reliable Products: Episode 5

The variability risks table for the redesign is shown in Figure 2.45 and the Conformability Matrix in Figure 2.46. Clearly, machining the critical faces on the impact extruded components has reduced the risks associated with conforming to the ±0.2 mm tolerance for the plunger displacement. The associated potential cost of failure has reduced significantly to a little over £3000. However, there is an additional cost associated with the extra machining process, which adds to the overall product cost. Since it is likely that a secondary machining process will take place on the body thread anyway, the case for turning these critical faces may be further justified. Although machining these faces will raise the cost of the component slightly, this must be secondary to satisfying the overriding customer requirement of meeting the plunger displacement tolerance. As highlighted by the difference in the potential failure costs, the redesign scheme must be chosen for further design development. Of course, other design schemes could also be explored, but the initial design shown here is of inherently poor quality and therefore must be rejected.

Figure 2.46 Conformability matrix for the solenoid end assembly redesign

2.8 Summary

Decisions made during the design stage of the product development process account for a large proportion of the problems that incur failure costs in production and service. It is possible to relate these failure costs back to the original design intent, where variability, and the lack of understanding of variability, is a key failure cost driver. The correct choice of tolerance on a dimensional characteristic can be crucial for the correct functioning of the product in service, and tolerance selection can make a large contribution to the overall costs of the product, both in production and in quality loss. Process capability indices are not generally specified by designers, and subsequently the impact of design decisions on the production department cannot be fully understood, because tolerances alone do not contain enough information.
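For reference, the indices in question, Cp and Cpk, can be written down in a few lines. The sketch below uses the standard definitions and assumed process figures rather than data from the case study.

```python
# Standard process capability indices for a characteristic with a bilateral
# tolerance (nominal +/- t). Cp measures spread only; Cpk also penalises an
# off-centre mean. All numbers below are assumed for illustration.
def process_capability(mean, std_dev, nominal, tol):
    usl, lsl = nominal + tol, nominal - tol              # specification limits
    cp = (usl - lsl) / (6 * std_dev)                     # potential capability
    cpk = min(usl - mean, mean - lsl) / (3 * std_dev)    # capability including centring
    return cp, cpk

# e.g. a dimension of 10 +/- 0.2 mm produced at mean 10.04 mm with sigma = 0.08 mm
cp, cpk = process_capability(mean=10.04, std_dev=0.08, nominal=10.0, tol=0.2)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")                 # Cp = 0.83, Cpk = 0.67
```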
Variability in component manufacture has proved difficult to predict in the early stages of the design process, and there are many influencing factors that the designer may not necessarily be able to anticipate. The material and geometrical configuration of the design, and their compatibility with the manufacturing process, are the main variability drivers. Although design rules and general manufacturing capability information are available, they are rarely presented in a useful or practical form, especially when innovative design is required. There is a need to set realistic tolerances and to anticipate the variability associated with the design, in order to help reduce failure costs later in the product's life-cycle. The CA methodology is useful in this respect. It comprises three sections: the Component Manufacturing Variability Risks Analysis, the Component Assembly Variability Risks Analysis and the determination of the Effects of Non-conformance through the Conformability Map.

The Component Manufacturing Variability Risks Analysis models the important design/manufacture interface issues which reflect the likely process capability that can be achieved. Included is the assessment of tolerance, geometry, material and surface roughness variability in component manufacture. Quantitative and qualitative manufacturing knowledge is used to support various aspects of the analysis and is taken from a wide range of sources. The concept of an ideal design allows the analysis to generate risk indices, where values greater than unity indicate a potential for increased variation in production. A simple cost-tolerance relationship is used in the Process Capability Maps, developed for over 60 manufacturing process/material combinations. The maps are subsequently employed to determine the process capability estimates for the component characteristics analysed. Through empirical studies, a close correlation between the process capability estimates from the Component Manufacturing Variability Risks Analysis and shop-floor process capability has been observed.

Most literature tends to focus only on tolerance stack analysis when assessing the capability of assemblies. The variability of the actual assembly operations is rarely considered, yet assembly capability does not rely solely on the tolerances accumulating throughout the assembly, but also on the feasibility and inherent technical capability of the assembly operations performed, whether manually or automatically. Developers and practitioners of DFA techniques reason that an assembly with a high assembly efficiency is a better quality product. A high assembly efficiency naturally leads to fewer parts in the assembly and, therefore, fewer quality problems to tackle in production. This outcome is not due to any specific analysis process in the DFA technique to address variability, and there still exists a need for analysing the assembly capability of designs, rather than taking a production cost driven approach. A useful technique for facilitating an assembly risks analysis is the declaration of a sequence of assembly for the components. Through such a diagram, each component in the assembly, and therefore the potential areas for assembly risk, can be logically mapped through the product design. The Component Assembly Variability Risks Analysis has the purpose of better understanding the effects of a component's assembly situation on variability, by quantifying the risks that the various assembly operations inherently exhibit. The analysis processes are supported by expert knowledge and presented in charts. Again, the theory is that an ideal component assembly situation exists where the assembly risk is unity. Using the charts to reflect the handling process risk, the fitting process risk and the risks associated with additional assembly and joining processes, the assembly situation of the component is questioned, accruing penalties at each stage if the design has increased potential for variability.
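The accrual of penalties can be pictured with a short sketch. This is purely illustrative: the handling and fitting risk values would come from the CA charts, and the multiplicative combination rule shown here is an assumption of the sketch, not a statement of the methodology.

```python
# Illustrative only: accrue assembly variability penalties from an ideal value
# of 1.0. The chart-derived risk values below are hypothetical.
def assembly_variability_risk(handling_risk, fitting_risk, additional_risks=()):
    risk = handling_risk * fitting_risk
    for extra in additional_risks:   # e.g. a joining or later assembly operation
        risk *= extra
    return risk

qa = assembly_variability_risk(handling_risk=1.1, fitting_risk=1.6, additional_risks=[1.3])
print(round(qa, 2))   # 2.29, greater than unity, so increased potential for variability
```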
Current quality-cost models are useful for identifying general trends in a long-term improvement programme, but are of limited use in identifying the failure costs associated with actual design decisions. A link between the costs that can typically be expected in practice, due to failure or non-conformance of the product in production or service, and the probability of fault occurrence is made using FMEA through the Conformability Map. The underlying concept assumes that the more severe a failure is, the more it will cost when it occurs. The quality-cost model embedded in the Conformability Map allows the designer to assess the level of acceptability, special control or unacceptability for non-safety critical and safety critical component characteristics in the design, by determination of the process capability measures from the previous two stages of the analysis. The Conformability Map also allows failure isocosts (percentages of total product cost), and therefore the total failure cost, to be estimated with knowledge of the likely product cost and production volume. The nature of the underlying cost models limits the accuracy of the failure cost estimates in absolute terms, so they are most useful for evaluating and comparing design schemes for their potential quality loss. The model can alternatively be employed to set capability targets for characteristics so that they incur allowable failure costs dependent on the failure severity of the product.

Through performing an analysis using CA, many modes of application have been highlighted. This has resulted from the way that the CA design performance measures allow a non-judgemental 'language' to develop within the design team. CA has also been found not to inhibit the design process, but to provide a structured analysis with which to trace design decisions. The knowledge embedded within CA also allows the designer to generate process capable solutions and to open up discussion with suppliers. The analysis is currently facilitated through the use of a paper-based assessment. This has many benefits, including improved team working, and provides a more unconstrained approach than if the analysis were computer based. It also allows the knowledge to be readily visible and available at any time for the designer to scrutinize and manipulate if they choose to do so. The potential benefits of using CA in the early stages of the product development process have been found to be:

- Early awareness of potential design problems through a systematic analysis
- More process capable designs with regard to their manufacture and assembly
- Reduced internal/external failure costs
- Reduced lead times
- Focused discussions with suppliers.

Finally, the main benefit as far as competitive business performance is concerned is the potential for reduction in failure costs. Studies using CA very early in the development process of a number of projects have indicated that the potential failure costs were all reduced through an analysis. This is shown in Figure 2.47, where the potential failure cost reduction is shown as the difference between pre-CA and post-CA application by the teams analysing the product designs.

Figure 2.47 Influence of the team-based application of CA on several product introduction projects

3 Designing capable assembly stacks

3.1 Introduction

The analysis of process capable tolerances on individual component dimensions at the design stage has already been explored in Chapter 2. An important extension to this work is the identification and assignment of capable tolerances on individual component dimensions within an assembly stack forming some specified assembly tolerance, where typically the number of component tolerances in the assembly stack is two or more.* This engineering task is called tolerance allocation, and it is a design function performed before any parts have been produced or tooling ordered. It involves three main activities. First, deciding what tolerance limits to place on the critical clearances/fits for the assembly, based on performance/functional requirements. Next, creating an assembly model to identify which dimensions characterize the final assembly dimension. And finally, deciding how much of the assembly tolerance to assign to each of the components in the assembly (Chase and Parkinson, 1991). A minimal sketch of such an assembly model is given below.

* See Appendix VIII for an approach to solve clearance/interference problems with two tolerances.
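As an illustration of the second activity, a one-dimensional assembly model can be as simple as a signed chain of component dimensions. The component names and values below are invented for the sketch; they are not taken from the solenoid case study.

```python
# A 1-D assembly model: each component dimension acts in a + or - direction on
# the final assembly dimension (e.g. a clearance). Values are hypothetical.
stack = [
    ("housing bore depth", 50.00, +1),
    ("plunger length",     35.00, -1),
    ("spacer thickness",    5.00, -1),
    ("end cap boss",        9.80, -1),
]

nominal_gap = sum(value * sign for _, value, sign in stack)
print(f"Nominal assembly dimension = {nominal_gap:.2f} mm")   # 0.20 mm
```

Tolerance allocation then decides how much of the tolerance on this final dimension each entry in the chain may consume.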
Tolerance analysis, on the other hand, is used to determine the assembly tolerance from knowledge of the tolerances on each component. Tolerance analysis does not readily lend itself to a 'designing for quality' philosophy because the required functional parameter, the assembly tolerance, is not required as an input. However, by setting a target tolerance on the assembly from the customer specification and for functional performance, as in tolerance allocation, we can establish the component tolerances needed to keep the failure cost of the assembly to an acceptable level. This way, there is no need to calculate a failure cost associated with the assembly tolerance as would be done using CA; an acceptably low failure cost is actually built in at the design stage, using manufacturing knowledge to optimize the capability of the individual components. In essence, the philosophy is for customer, for function and for conformance.

Today's high technology products and growing international competition require knowledgeable design decisions based on realistic models that include manufacturability requirements. A suitable and coherent tolerance allocation methodology can be an effective interface between the customer, design and manufacturing. However, there are several issues with regard to tolerance allocation that must be addressed (Chase and Greenwood, 1988):

- Engineering design and manufacturing need to communicate their needs effectively
- The choice of tolerance model must be both realistic and applicable as a design tool
- The role of advanced statistical and optimization methods in the tolerance model must be established
- Sufficient data on process distributions and costs must be collated to characterize manufacturing processes for advanced tolerance models.

The effective use of the capability data and knowledge that forms part of CA is beneficial in the design of capable assembly stacks, providing the necessary information that has previously been lacking. The aim of this chapter is to present a methodology for the allocation of capable component tolerances within assembly stack problems (one dimensional) and to optimize these with respect to cost/risk criteria in obtaining a functional assembly tolerance through their synthesis. The methodology, and the demonstration software presented to aid an analysis, called CAPRAtol, form part of the CAPRA methodology (CApability and PRobabilistic Design Analysis).

3.2 Background

Proper assignment of tolerances is one of the least well-understood engineering tasks (Gerth, 1997). Assignment decisions are often based on insufficient data or incomplete models (Wu et al., 1988). The precise assignment of the component tolerances for their combined effect is multifarious and is dictated by a number of factors, including:

- Number of components in the stack
- Functional performance of the assembly tolerance
- Level of capability assigned to each component tolerance
- Component assemblability
- Manufacturing processes available
- Accuracy of process capability data
- Assumed characteristic distributions and degree of skew and shift
- Cost models used
- Allowable costs (both production and quality loss)
- Tolerance stack model used
- Optimization method.

Some of the above points are worth expanding on. Tolerances exist to maintain product performance levels.
Engineers know that tolerance stacking, or accumulation, in assemblies controls the critical clearances and interferences in a design, such as lubrication paths or bearing mounts, and that these affect performance (Vasseur et al., 1992). Tolerances also influence the selection of manufacturing processes and determine the assemblability of the final product (Chase and Greenwood, 1988). The first concern in allocating tolerances should therefore be to guarantee the proper functioning of the product, and so to satisfy technical constraints. In general design practice, the final assembly specifications are usually derived from customer requirements (Lin et al., 1997). The functional assembly tolerance is a specification of the design and maintains integrity with its mating assemblies only when this tolerance is realized within a suitable level of capability.

The random manner in which the inherent inaccuracies within a process are generated produces a pattern of variation for the dimension that resembles the Normal distribution, as discussed in Chapter 2. As a first supposition, then, in the optimization of a tolerance stack with 'n' components, it is assumed that each component follows a Normal distribution, therefore giving an assembly tolerance with a Normal distribution. It is also a good approximation that if the number of components in the stack is greater than 5, the final assembly characteristic will form a Normal distribution regardless of the individual component distributions, due to the central limit theorem (Mischke, 1980).

Shift or drift is a critical factor in an assembly model, as is the determination of the variability for each component tolerance. It has been estimated that, over a very large number of batches produced, the mean of a tolerance distribution could be expected to drift by about 1.5 times the standard deviation, due to tool wear, differences in raw material or change of suppliers (Evans, 1975; Harry and Stewart, 1988). The degree of shift inherent within the process, or the shift over time, should also be accounted for in the tolerance stack model, as its omission can severely affect the precision of the results. Figure 3.1(a) shows the effect that a dominant component distribution, prone to shift, in a stack of 'n' components has on the overall assembly distribution. It is more than likely in this case that the assembly tolerance distribution will also be shifted. As dominance reduces in any one component, the probability of the assembly distribution being shifted from the target is much lower, and it tends to average out to a Normal centred distribution, as shown in Figure 3.1(b). The same is true for skew or kurtosis of the distribution (Chase and Parkinson, 1991).

Figure 3.1 Effect of component distribution shift scenarios on the final assembly distribution
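The two scenarios of Figure 3.1 can be reproduced with a small Monte Carlo sketch. The standard deviations and the 1.5 sigma shift below are assumed values chosen only to illustrate the behaviour; they are not data from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000   # simulated assemblies

def assembly_samples(sigmas, shifts):
    """Sum of n component deviations from nominal, each Normal(shift, sigma)."""
    parts = [rng.normal(loc=m, scale=s, size=N) for s, m in zip(sigmas, shifts)]
    return np.sum(parts, axis=0)

# (a) one dominant component, shifted by 1.5 of its own standard deviations
a = assembly_samples(sigmas=[0.10, 0.01, 0.01, 0.01, 0.01],
                     shifts=[1.5 * 0.10, 0, 0, 0, 0])
# (b) no dominant component, the same kind of shift on one of five similar parts
b = assembly_samples(sigmas=[0.02] * 5,
                     shifts=[1.5 * 0.02, 0, 0, 0, 0])

print(f"(a) assembly shift = {a.mean() / a.std():.2f} assembly standard deviations")  # about 1.5
print(f"(b) assembly shift = {b.mean() / b.std():.2f} assembly standard deviations")  # about 0.7
```

With a dominant component the assembly inherits almost the full shift, whereas in the balanced stack it is diluted, which is the behaviour Figure 3.1 describes.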
Designers seldom have sufficient data with which to specify the variability of the manufacturing processes. In practice, most designers do not worry about the true behaviour of the process and compensate for the lack of knowledge with large process capability indices, as discussed in Chapter 2. The precise capability of a process cannot be determined before statistical control of the actual process has been established. Therefore, during the product design phase, the designer must use the best available process capability data for similar processes (Battin, 1988).

A good tolerance allocation model should maximize component tolerances for a specified assembly tolerance in order to keep production costs low (Wu et al., 1988). Any concessions to this should be made in meeting the functional assembly tolerance, in order to keep the failure costs low. This can only be achieved by optimization of the component tolerances against important technical and/or economic criteria. Design optimization techniques are useful for minimizing an objective function such as production cost or quality loss (Chase and Parkinson, 1991), the optimal statistical tolerances being those that minimize the aggregate of production cost plus quality loss (Vasseur et al., 1992). Optimization of the tolerances assigned to the tolerance stack to satisfy the assembly tolerance is required because of the risk of assigning impractical and costly values. The tolerance stack models, presented later, do not themselves optimize the tolerances, and additional methods are required. The actual optimization usually takes the form of minimization or maximization of the results from the cost models through computer coded algorithms. Common optimization methods given in the literature include Lagrange multipliers, linear and non-linear programming, geometric programming and genetic algorithms (Chase and Parkinson, 1991; Lin et al., 1997; Wu et al., 1988). The combination of the cost model and the optimization method then gives an augmented model from which the allocation of the component tolerances is optimized for competitive results. Optimization methods have also been extended to include procedures that select the most cost-effective manufacturing process for each component tolerance in the assembly stack (Chase and Parkinson, 1991).

Research into tolerance allocation in assembly stacks is by no means new. A current theme is towards an optimization approach using complex routines and/or cost models (Lin et al., 1997; Jeang, 1995). Advanced methods are also available, such as Monte Carlo simulation and the Method of Moments (Chase and Parkinson, 1991; Wu et al., 1988). The approach presented here is based on empirical process capability measures using simple tolerance models, cost analogies and optimization procedures. Using the methodology described, a number of design schemes can be quickly compared and the most capable, in relative terms, selected as the final design solution. The optimization of the allocated tolerances is based on achieving the lowest assembly standard deviation for the largest possible component tolerances. Aiming for tolerances with low standard deviations also makes them robust against any further unforeseen variations while still at the design stage. A low standard deviation translates into a lower probability of encountering assembly problems, which in turn means higher manufacturing confidence, lower costs, shorter cycle times and, perhaps most importantly, enhanced customer satisfaction (Harry and Stewart, 1988). It is important to predict the probability of successful assembly of the parts so that the tolerance specifications can be re-evaluated and modified if necessary, in order to increase the probability of success and lower the associated production costs (Lee et al., 1997).
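To show the flavour of such an optimization, the sketch below allocates four tolerances by minimizing a simple reciprocal cost-tolerance model, subject to the root sum of squares of the tolerances not exceeding the assembly tolerance. The cost weights, the tolerance bounds and the reciprocal cost model itself are assumptions of the sketch; this is not the CAPRAtol routine.

```python
import numpy as np
from scipy.optimize import minimize

A = np.array([1.0, 2.0, 1.5, 0.5])   # assumed relative cost weights per component
t_assembly = 0.40                    # required bilateral assembly tolerance (mm)

def production_cost(t):
    return np.sum(A / t)             # tighter tolerances cost more

def stack_slack(t):
    return t_assembly - np.sqrt(np.sum(t ** 2))   # >= 0 when the statistical stack is met

result = minimize(production_cost, x0=np.full(4, 0.05),
                  bounds=[(0.005, 0.5)] * 4,
                  constraints=[{"type": "ineq", "fun": stack_slack}],
                  method="SLSQP")
print(np.round(result.x, 3))         # the largest (cheapest) tolerances meeting the stack
```

A quality loss term, or a more realistic cost-tolerance curve for each manufacturing process, could be added to the objective in the same way.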
3.3 Tolerance stack models

Many references can be found reporting on the mathematical/empirical models used to relate individual tolerances in an assembly stack to the functional assembly tolerance. See the following references for a discussion of some of the various models developed (Chase and Parkinson, 1991; Gilson, 1951; Harry and Stewart, 1988; Henzold, 1995; Vasseur et al., 1992; Wu et al., 1988; Zhang, 1997). The two most well-known models are highlighted below. In all cases, the linear one-dimensional situation is examined for simplicity. In general, tolerance stack models are based on either the worst case or the statistical approach, including those given in the references above.

The worst case model (see equation 3.1) assumes that each component dimension is at its maximum or minimum limit and that the sum of these equals the assembly tolerance (this model was initially presented in Chapter 2). The tolerance stack equations are given in terms of bilateral tolerances on each component dimension, which is a common format when analysing tolerances in practice. The worst case model is:

\sum_{i=1}^{n} t_i = t_a \quad (3.1)

where:
t_i = bilateral tolerance for the ith component characteristic
t_a = bilateral tolerance for the assembly stack.

The statistical model makes use of the fact that the probability of all the components being at the extremes of their tolerance range is very low (see equation 3.2). The statistical model is given by:

z_a \left[ \sum_{i=1}^{n} \left( \frac{t_i}{z_i} \right)^2 \right]^{0.5} = t_a \quad (3.2)

where:
z_a = assembly tolerance standard deviation multiplier
z_i = ith component tolerance standard deviation multiplier.

Equation 3.2 is essentially the root of the sum of the squares of the standard deviations of the tolerances in the stack, which equals the standard deviation of the assembly tolerance, hence its other name, the Root Sum Square or RSS model. This can be represented by:

\left[ \sum_{i=1}^{n} \sigma_i^2 \right]^{0.5} = \sigma_a \quad (3.3)

where:
\sigma_a = assembly tolerance standard deviation
\sigma_i = ith component tolerance standard deviation.

The statistical model is potentially unappealing to designers because a defective assembly can result even if all components are within specification, although the probability of this occurring may be low. The worst case model is, therefore, more popular as a safeguard (Gerth, 1997), although it has been argued that it results in tighter tolerances that are often ignored by manufacturing when the design goes into production. From the above considerations and models, we will now develop the relationships used in the CAPRAtol methodology. The tolerance model developed in addressing the assembly stack problem is based on the statistical model in equation 3.2, which is generally an accurate predictor (Wu et al., 1988).
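A minimal sketch of the two models follows, with assumed tolerance values; the z values are the standard deviation multipliers of equation 3.2 (z = 3 means the tolerance spans three standard deviations).

```python
import math

def worst_case_assembly_tolerance(tolerances):
    """Equation 3.1: the bilateral component tolerances simply sum up."""
    return sum(tolerances)

def rss_assembly_tolerance(tolerances, z_components, z_assembly):
    """Equation 3.2: t_a = z_a * sqrt(sum((t_i / z_i)**2))."""
    return z_assembly * math.sqrt(sum((t / z) ** 2
                                      for t, z in zip(tolerances, z_components)))

tols = [0.10, 0.25, 0.05, 0.08]   # assumed bilateral component tolerances (mm)
print(round(worst_case_assembly_tolerance(tols), 3))            # 0.48
print(round(rss_assembly_tolerance(tols, [3.0] * 4, 3.0), 3))   # 0.285
```

For the same component tolerances the worst case model gives an assembly tolerance of 0.48 mm while the statistical model gives about 0.29 mm, which is why the statistical model permits larger, cheaper component tolerances for a given assembly requirement.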
3.4 A methodology for assembly stack analysis

3.4.1 Application of the process capability estimates from CA

In Chapter 2, the Component Manufacturing Variability Risk, qm, was effectively used to predict the process capability measures, Cpk and Cp, for individual component tolerances. This risk index therefore becomes useful in the allocation of capable tolerances and in the analysis of their distributions in the assembly stack problem. The key element of the CA methodology for determining the tolerance risk is the use of the process capability maps for the manufacturing processes in question. Figure 3.2 shows a process capability map for turning/boring and the equations modelling the risk contours. The contours indicate the level of risk associated with the achievement of a tolerance on a dimensional characteristic. The valid tolerance range of a production operation represents its accuracy improving capability. Within this valid range, a tighter tolerance or higher accuracy demand leads to higher manufacturing costs, and a looser tolerance or lower accuracy demand leads to lower manufacturing costs (Dong, 1997).

[...]

… statistical definitions of Cpk and Cp respectively:

C_{pk} \approx \frac{4}{q_m^2} \quad (3.4)

and

C_{pk} = \frac{|\mu - L_n|}{3\sigma} \quad (3.5)

C_p \approx \frac{4}{q_m^{4/3}} \quad (3.6)

and

C_p = \frac{t}{3\sigma} \quad (3.7)

where:
\mu = mean
L_n = nearest tolerance limit
q_m = component manufacturing variability risk
\sigma = standard deviation
t = bilateral tolerance.

It is difficult to determine |\mu - L_n|/3\sigma in equation 3.5 without statistical process …
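Taken together with the RSS model of equation 3.3, the excerpt above gives enough to chain a quick stack estimate. The tolerances and qm values below are assumed, and the step sigma = t/(3 Cpk) conservatively uses the shifted index Cpk in place of Cp from equation 3.7.

```python
import math

components = [
    # (bilateral tolerance t in mm, qm from the process capability map) -- assumed values
    (0.10, 1.7),
    (0.25, 1.7),
    (0.05, 3.0),
]

sigmas = []
for t, qm in components:
    cpk = 4.0 / qm ** 2             # equation 3.4: capability estimated from the risk index
    sigmas.append(t / (3.0 * cpk))  # implied standard deviation (conservative use of Cpk)

sigma_a = math.sqrt(sum(s ** 2 for s in sigmas))   # equation 3.3: RSS of component sigmas
t_a = 0.45                                         # required bilateral assembly tolerance (mm)
print(f"sigma_a = {sigma_a:.3f} mm, assembly Cpk ~ {t_a / (3 * sigma_a):.2f}")
# -> sigma_a = 0.075 mm, assembly Cpk ~ 2.00
```

In this example the third component, with its high qm, contributes about a quarter of the assembly variance despite having the smallest tolerance, which is the kind of comparison the analysis is intended to expose.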
… ductile and brittle materials and various loading conditions (values shown in brackets from 1905, without brackets from 1965) (Su = ultimate tensile strength, Sy = yield strength):

Type of loading | Steel (ductile metals), based on Su | Steel, based on Sy | Cast iron (brittle metals), based on Su
Dead load | 3 to 4 (3) | 1.5 to 2 | …
Repeated, one direction/mild shock | 6 (5) | … | …
Repeated, reversed/mild shock | 8 (8) | … | …
Shock | 10 to 15 (12) | … | …

… design techniques can be found, although NASA has found the deterministic approach to be adequate for some structural analysis (NASA, 1995). Non-complex and/or non-critical applications in mechanical design can also make use of probabilistic design techniques and justify a more in-depth approach …

Figure 4.1 The 'true' margin of safety (adapted from Furman, 1981 and Nixon, 1958)

… methods with integrated manufacturing knowledge, through user-friendly platforms, in order to design capable products. Comparing and evaluating assembly stacks as shown in the case study above is an essential way of identifying the capability of designs and indicating areas for redesign …

4 Designing reliable products

4.1 Deterministic versus probabilistic design

For many years, designers have applied so …

… approaching ±1.5 standard deviations, where in fact Cpk = Cp − 0.5 (Harry, 1987), but on average an offset determined by the equation:

C_{pk} = 0.93 C_p - 0.19 \quad (3.17)

Figure 3.3 Empirical relationship between Cp and Cpk showing the degree of process shift expected for the components analysed

Therefore, Cpk as predicted by qm can be used to measure the potential shift …

… have been reviewed and the application has been demonstrated via an industrial case study. The data used to determine the tolerances and standard deviations has a realistic base and can provide reliable results in the design problem of allocating optimum capable tolerances in assembly stacks. The CAPRAtol method uses empirical capability data for a number of processes, including material and geometry effect …

1 | 1.2 | 1 | 0.010 | 1.7 | 0.002 | 0.06 | 0.04
2 | 1.2 | 1.1 | 0.250 | 1.7 | 0.060 | 37.55 | 37.70
3 | 1 | 1 | 0.085 | 1.7 | 0.020 | 4.34 | 4.19
4 | 1.2 | 1.7 | 0.300 | 1.7 | 0.072 | 54.07 | 54.28
5 | 1 | 1 | 0.004 | 1.7 | 0.001 | 0.01 | 0.01
6 | 1 | 1 | 0.080 | 1.7 | 0.019 | 3.85 | 3.78

Assembly tolerance, ta = ±0.408 mm
Assembly standard deviation, σa = 0.098 mm
Assembly tolerance Cpk [≈ ta/(3σa)] = 1.39

The solenoid design will first be analysed in a 'paper-based' approach, followed …

… the true standard deviation, but an estimate to measure the potential shift in the distribution. The standard deviation multiplier, z, is the ratio of the tolerance and the standard deviation, for one half of the distribution in this case:

z = \pm \frac{t}{\sigma} \quad (3.10)

Therefore, combining equations 3.9 and 3.10 gives an estimate for z_i in equation 3.2, and for bilateral tolerances for the specific case of the ith …

… Severity Rating (S) … to be capable. The bilateral tolerance stack model including a factor for shifted component distributions is given below. It is derived by substituting equations 3.11 and 3.18 into equation 3.2. This equation is similar to that derived in Harry and Stewart (1988), but uses the qm estimates for Cpk and a target Cpk for the assembly tolerance …

… going to be capable, and if not, which components require redesigning. In addition to understanding the statistical tolerance stack models and the FMEA process in developing a process capable solution …
