Materials Selection and Design (2010), Part 15

120 254 0

Đang tải... (xem toàn văn)

Tài liệu hạn chế xem trước, để xem đầy đủ mời bạn chọn Tải xuống

THÔNG TIN TÀI LIỆU

Thông tin cơ bản

Định dạng
Số trang 120
Dung lượng 2,33 MB

Nội dung

unique parts, the domains of cost estimation expand dramatically. So, although domain limitation is necessary for cost-estimate accuracy, it is not a panacea.

Database Commonality. Estimating the costs of a complex product through various phases of development and production requires organization of large amounts of data. If the data for design, manufacturing, and cost are linked, there is database commonality. It has been found (Ref 3) that database commonality results in dramatic reductions in cost and schedule overruns in military programs. In the same study, domain limitation was found to be essential in achieving database commonality. Having database commonality with domain limitation implies that the links between the design and specific manufacturing processes, with their associated costs, are understood and delineated. Focusing on specific manufacturing processes allows one to collect and organize data on where and how costs arise in those processes. With this focus, the accuracy of cost estimates can be determined, provided that uniform methods of estimation are used and that, over time, the cost estimates are compared with the actual costs as they arise in production. In this manner, the accuracy of complex cost estimates can be established and improved.

In present engineering and design practice, many organizations do not have adequate database commonality, and the accuracy of their cost estimates is not well known. Database commonality requires an enterprise-wide description of cost-dominant manufacturing processes, a way of tracking actual costs for each part, and a way of delivering this information in an appropriate format to designers and cost estimators. Most "empirical methods" of cost estimation, which are based on industrywide studies of statistical correlation of cost, may or may not apply to the experience of a specific firm (see the discussion in the sections that follow).

Costs are "rolled up" for a product when all elements of its cost are accounted for. The criteria for cost estimation using database commonality are simple: speed (how long it takes to roll up a cost estimate on a new design), accuracy (the standard deviation of the estimate, based on comparison with actual costs), and risk (the probability distribution of the cost estimate; for example, what fraction of the time the estimate will be more than 30% too low). One excellent indicator of database commonality is the roll-up time criterion. World-class cost-estimation roll-up times are minutes to fractions of days. Organizations with such rapid roll-up times have significantly smaller cost and schedule overruns on military projects (Ref 3).

Cost allocation is another general issue. Cost allocation refers to the process by which the components of a design are assigned target costs. The need for cost allocation is clear: how else would an engineer working on a large project know how much the part being designed should cost? And if the cost is unknown and the target cost is not met, there will be time delays, and hence costs incurred, due to unnecessary design iteration. It is generally recognized that having integrated product teams (IPTs) is good industrial practice. Integrated product teams should allocate costs at the earliest stages of a development program, and cost estimates should be performed concurrently with the design effort throughout the development process.
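These criteria can be made concrete. The following is a minimal sketch, not from the source, that scores a history of rolled-up estimates against realized costs using the accuracy and risk criteria just described; the function and variable names are our own:

```python
import statistics

def estimate_quality(estimates, actuals):
    """Score past cost estimates against the accuracy and risk criteria.

    estimates, actuals: paired rolled-up estimates and realized costs
    for past parts or programs (at least two pairs).
    """
    rel_errors = [(e - a) / a for e, a in zip(estimates, actuals)]
    accuracy = statistics.stdev(rel_errors)  # standard deviation of relative error
    # Risk criterion from the text: fraction of estimates more than 30% too low
    risk = sum(err < -0.30 for err in rel_errors) / len(rel_errors)
    return accuracy, risk

print(estimate_quality([100, 90, 120, 60], [105, 110, 100, 95]))
```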
Clearly, estimating costs at early stages in a development program, for example when the concept of the product is being assessed, requires quite different tools than when most or all of the details of the design are specified. Various tools that can be used to estimate cost at different stages of the development process are described later in this section.

Elements of Cost. There are many elements of cost. The simplest to understand is the cost of material. For example, if a part is fabricated from 10 lb of aluminum and that grade of aluminum costs $2/lb, the material cost is $20. The estimate becomes only slightly more complex if, as in the case of some aerospace components, some 90% of the material is machined away; the sale of the scrap material is then deducted from the material cost.

Tooling and fixtures are the next easiest items to understand. If tools are used for only one product, and the lifetime of the tool is known or can be estimated, then only the design and fabrication cost of the tool is needed. Estimates of the fabrication costs for tooling are of the same form as those for the fabricated parts. The design cost estimate raises a difficult and general problem: cost capture (Ref 4). For example, tooling design costs are often classified as overhead, even though the cost of tools relates to design features. In many accounting systems, manufacturing costs are assigned "standard values," and variances from the standard values are tabulated. This accounting methodology does not, in general, allow the cost engineer to determine the actual costs of various design features of a part. In the ledger entries of many accounting systems, there is no allocation of costs to specific activities, that is, no activity-based costing (ABC) (Ref 5). In such cases there are no data to support design cost estimates.

Direct labor for products or parts that have a high yield in manufacturing normally has straightforward cost estimates, based on statistical correlation with direct labor for past parts of a similar kind. However, for parts that require a large amount of rework the situation is more complex, and the issues of cost capture and the lack of ABC arise again. Rework may be an indication of uncontrolled variation in the manufacturing process. The problem is that rework and its supervision may be classified, in whole or in part, as manufacturing overhead. For these reasons, the true cost of rework may not be well known, and so the data to support cost estimates for rework may be lacking.

The portions of overhead associated with the design and production of a product are particularly difficult to estimate, due to the lack of ABC and the problem of cost capture. For products built in large volumes, of simple or moderate complexity, cost estimates of overheads are commonly done in the simplest possible way: the duration of the project and the level of effort are used to estimate the overhead. This practice does not lead to major errors because the overhead is a small fraction of the unit cost of the product. For highly engineered, complex products built in low volume, overhead cost estimation is very difficult, and the problem of cost capture is also very serious (Ref 4).

Machining costs are normally related to the machine time required and a capital asset model for the machine, including depreciation, training, and maintenance. With a capital asset model, the focus of the cost estimate is the time to manufacture.
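As a concrete illustration of the material-cost element, the sketch below nets a scrap credit against the purchased stock. The scrap price in the second call is an assumed figure for illustration, not a value from the source:

```python
def material_cost(stock_lb, price_per_lb, machined_fraction=0.0, scrap_per_lb=0.0):
    """Net material cost: purchased stock minus credit for scrap sold back."""
    scrap_credit = stock_lb * machined_fraction * scrap_per_lb
    return stock_lb * price_per_lb - scrap_credit

print(material_cost(10, 2.00))               # the $20 example in the text
print(material_cost(100, 2.00, 0.90, 0.50))  # 90% machined away; $0.50/lb scrap assumed
```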
A similar discussion holds for assembly costs: with a suitable capital asset model, the focus of the cost estimate is the time to assemble the product (Ref 1).

Methods of Cost Estimation. Three methods of cost estimation are discussed in the following sections of this article. The first is parametric cost estimation. Starting from the simplest description of the product, an estimate of its overall cost is developed. One might think that such estimates would be hopelessly inaccurate because so little is specified about the product, but this is not the case. The key to this method is a careful limitation of the domain of the estimate (see the previous section). The example developed below estimates the weight of an aircraft; the cost of the aircraft is then calculated using dollars per pound typical of the aircraft type. Parametric cost estimation is the generally accepted method of cost estimation in the concept assessment phases of a development program. The accuracy is surprisingly good, about 30% (provided that recent product-design evolution has not been extensive).

The second method of cost estimation is empirically based: one identifies specific design features and then uses statistical correlation with the costs of past designs to estimate the cost of the new design. This empirical method is by far the most common in use. For the empirical method to work well, the features of the product for which the estimate is made should be unambiguously related to features of prior designs, and the costs of prior designs unambiguously related to design features. Common practice is to account for only the major features of a design and to ignore details. Empirical methods are very useful in generating a rough ranking of the costs of different designs and are commonly used for that purpose (Ref 1, 6, 7). However, there are deficiencies inherent in the empirical methods commonly used. The mapping of design features to manufacturing processes to costs is not one-to-one; the same design feature may be made in many different ways. This difficulty, the feature mapping problem, discussed in Ref 4, limits the accuracy of empirical methods and makes the assessment of risk very difficult. The problem, implicit in all empirical methods, is that the data upon which the cost correlation is based may assume the use of manufacturing methods that do not apply to the new design. It is extraordinarily difficult to determine the implicit assumptions made about manufacturing processes in a prior empirical correlation. A commonly stated accuracy goal of empirical cost estimates is 15 to 25%, but very little data have been published on the actual accuracy of such estimates when applied to new designs.

The final method discussed in this article is based on the recent development called complexity theory. A mathematically rigorous definition of complexity in design has been formulated (Ref 8). In brief, complexity theory offers some improvements over traditional empirical methods: there is a rational way to assess the risk in a design, and there are ways of making the feature mapping explicit rather than implicit. Perhaps the most significant improvement is the capability to capture the cost impact of essentially all the design detail in a cost estimate. This allows designers and cost estimators to explore, in a new way, methods to achieve cost savings in complex parts and assemblies.
References cited in this section

1. G. Boothroyd, P. Dewhurst, and W. Knight, Product Design for Manufacture and Assembly, Marcel Dekker, 1994, Chapt. 1
3. D.P. Hoult and C.L. Meador, "Cost Awareness in Design: The Role of Database Commonality," SAE 96008, Society of Automotive Engineers, 1996
4. D.P. Hoult and C.L. Meador, "Methods of Integrating Design and Cost Information to Achieve Enhanced Manufacturing Cost/Performance Trade-Offs," SAVE International Conference Proceedings, Society of American Value Engineers, 1996, p 95-99
5. H.T. Johnson and R.S. Kaplan, Relevance Lost: The Rise and Fall of Management Accounting, Harvard Business School Press, 1991
6. G. Boothroyd, Assembly Automation and Product Design, Marcel Dekker, 1992
7. P.F. Ostwald, American Machinist Cost Estimator, Penton Educational Division, Penton Publishing, 1988
8. D.P. Hoult and C.L. Meador, "Predicting Product Manufacturing Costs from Design Attributes: A Complexity Theory Approach," No. 960003, Society of Automotive Engineers, 1996

Manufacturing Cost Estimating

David P. Hoult and C. Lawrence Meador, Massachusetts Institute of Technology

Parametric Methods

An example for illustrating parametric cost estimation is that of aircraft. In Ref 9, Roskam, a widely recognized researcher in this field, describes a method to determine the size (weight) of an aircraft. Such a calculation is typical of parametric methods. To determine cost from weight, one would typically correlate the (inflation-adjusted) costs of past aircraft of similar complexity with their weight. Thus weight is a surrogate for cost at a given level of complexity. Most parametric methods are based on such surrogates. For another simple example, consider that large coal-fired power plants, based on a steam cycle, cost about $1500/kW to build. So, if the year the plant is to be built (for inflation adjustment) and its kilowatt output are known, a parametric cost estimate can be readily obtained.

Parametric cost estimates have the advantage that little needs to be known about the product to produce the estimate. Thus, parametric methods are often the only ones available in the initial (concept assessment) stages of product development.

The first step in a parametric cost estimation is to limit the domain of application. Roskam correlates statistical data for a dozen types of aircraft and fifteen subtypes. The example he uses to explain the method is a twin-engine, propeller-driven airplane. The mission profile of this machine is given in Fig. 1 (Ref 9).

Fig. 1 Mission profile

Inspection of the mission specifications and Fig. 1 shows that only a modest amount of information about the airplane is given. In particular, nothing is specified about the detailed design of the machine! The task is to estimate the total weight, W_TO, or the empty weight, W_E, of the airplane. Roskam argues that the total weight is equal to the sum of the empty weight; the fuel weight, W_F; the payload and crew weight, W_PL + W_crew; and the trapped fuel and oil, which is modeled as a fraction, M_tfo, of the total weight. M_tfo is a small constant, typically 0.001 to 0.005. Thus the fundamental equation for aircraft weight is:

W_TO = W_E + W_F + W_PL + W_crew + M_tfo · W_TO   (Eq 1)

Roskam's basic idea is that there is an empirical relationship between aircraft empty and total weights, which he finds to be:

log10 W_E = (log10 W_TO − A)/B   (Eq 2)

The coefficients A and B depend on which of the dozen types and fifteen subtypes of aircraft fit the description in Table 1 and Fig. 1.
It is at this point that the principle of domain limitation first enters. For the example used by Roskam, the correlation used to determine A = 0.0966 and B = 1.0298 for the twin-engine, propeller-driven aircraft spans a range of empty weights from 1000 to 7000 lb.

Table 1 Mission specification for a twin-engine, propeller-driven airplane

1. Payload: six passengers at 175 lb each (including the pilot) and 200 lb total baggage
2. Range: 1000 statute miles with maximum payload
3. Reserves: 25% of mission fuel
4. Cruise speed: 250 knots at 75% power at 10,000 ft and at takeoff weight
5. Climb: 10 min to 10,000 ft at takeoff weight
6. Takeoff and landing: 1500 ft ground run at sea level, standard day; landing at 0.95 of takeoff weight
7. Powerplants: piston/propeller
8. Certification base: FAR 23

The method determines the weight of fuel required as follows. The mission fuel, W_F, can be broken down into the weight of the fuel used and the reserve fuel:

W_F = W_Fres + W_Fused   (Eq 3)

Roskam models the reserve fuel as a fraction of the fuel used (see Table 1). The fuel used is modeled as a fraction of the total weight and depends on the phase of the mission, as described in Fig. 1. For mission phases that are not fuel intensive, a fixed ratio of the weight at the end of the phase to that at the beginning of the phase is given. These ratios are specific to the type of aircraft. For fuel-intensive phases, in this example the cruise phase, there is a relationship among the lift/drag ratio of the aircraft, the engine fuel efficiency, and the propeller efficiency. Again, these three parameters are specific to the type of aircraft. Once the fuel fraction of the total weight is determined, either by a cruise calculation or by the ratios of phase-end to phase-start weights, the empty weight can be written in terms of the total weight. Then Eq 2 is used to find the total weight of the aircraft. For the problem posed, Roskam obtains an estimated total weight of 7900 lb. The accuracy, which can be estimated from the scatter in the correlation used to determine the coefficients A and B, is about 30%. For details of the method Roskam uses to obtain the solution, refer to Ref 9.

Some limitations of the parametric estimating method are of general interest. For example, if the proposed aircraft does not fit any of the domains of the estimating model, the approach is of little use. Such an example might be the V-22 tiltrotor (Ref 10), which flies like a fixed-wing machine but tilts its rotors and propellers, allowing the craft to hover like a helicopter during takeoff and landing. Such a machine might be considered outside the domain of Roskam's estimating model. The point is not that the model is inadequate (the V-22 is more recent than Roskam's 1986 article), but that the limited product knowledge in the early stages of development makes it difficult to determine whether a cost estimate for the V-22 fits in a well-established domain.

Conversely, even complex machines, such as aircraft, are amenable to parametric cost estimates with fairly good accuracy, provided they are within the domain of the cost model. In the same article, Roskam presents data for transport jets, such as those used by airlines. It should be emphasized that the weight (and hence cost) of such machines, with more than one million unique parts, can be roughly estimated by parametric methods.
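The sizing logic of Eq 1 and 2 can be sketched in a few lines of Python. This is an illustration, not Roskam's actual procedure: the mission fuel is collapsed into a single assumed fraction of total weight (Roskam derives it phase by phase from Fig. 1), with the value chosen so the iteration reproduces the roughly 7900 lb answer quoted above:

```python
import math

A, B = 0.0966, 1.0298      # Roskam's coefficients, twin-engine propeller class
M_TFO = 0.002              # trapped fuel and oil fraction (0.001-0.005 in the text)
W_PAYLOAD = 6 * 175 + 200  # six occupants at 175 lb plus 200 lb baggage (Table 1)

def empty_weight(w_to):
    """Empirical empty-weight relation, Eq 2: log10 W_E = (log10 W_TO - A) / B."""
    return 10 ** ((math.log10(w_to) - A) / B)

def total_weight(fuel_fraction, w_guess=5000.0, tol=0.1):
    """Fixed-point iteration on Eq 1, with W_F modeled as fuel_fraction * W_TO."""
    w = w_guess
    for _ in range(200):
        w_next = empty_weight(w) + fuel_fraction * w + W_PAYLOAD + M_TFO * w
        if abs(w_next - w) < tol:
            break
        w = w_next
    return w

# A mission-fuel fraction near 0.22 (an assumed value here) reproduces the
# ~7900 lb total weight Roskam obtains for this mission.
print(round(total_weight(0.218)))
```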
Of course, cost is not the same as weight or, for that matter, any other engineering parameter. The details of the manufacturing process, inventory control, design change management, and so forth all play a role in the relationship between weight and cost. The more complex the machine, the more difficult it is to determine whether the domain of the parametric cost-estimating model is the same as that of the product being estimated.

References cited in this section

9. J. Roskam, Rapid Sizing Method for Airplanes, J. Aircraft, Vol 23 (No. 7), July 1986, p 554-560
10. The Bell-Boeing V-22 Osprey entered Low Rate Initial Production with the MV-22 contract signed June 7, 1996, Tiltrotor Times, Vol 1 (No. 5), Aug 1996

Manufacturing Cost Estimating

David P. Hoult and C. Lawrence Meador, Massachusetts Institute of Technology

Empirical Methods of Cost Estimation

Almost all the cost-estimating methods published in the literature are based on correlating cost with some feature or property of the part to be manufactured. Two examples are presented. The first is from the book by Boothroyd, Dewhurst, and Knight (Ref 1), hereafter referred to as BDK. Chapter 9 of this book is devoted to "Design for Sheet Metalworking," and the first part of that chapter covers estimates of the costs of the dies used for sheet metal fabrication. This example was chosen because the work of these authors is well recognized. (Boothroyd and Dewhurst Inc. sells widely used software for design for manufacture and design for assembly.) In this chapter of the book, the concept of "complexity" of stamped sheet metal parts arises. The complexity of mechanical parts is discussed in the section "Complexity Theory" in this article.

Example 1: Cost Estimates for Sheet Metal Parts. Sheet metal comes in some 15 standard gages, ranging in thickness from 0.38 to 5.08 mm. It is commonly available in steel, aluminum, copper, and titanium. Typical prices for these materials are $0.80 to $0.90/lb for low-carbon steel, $6.00 to $7.00/lb for stainless steel, $3.00/lb for aluminum, $10.00/lb for copper, and $20.00/lb for titanium. It is typically shipped in large coils or large sheets. Automobiles and appliances use large amounts of steel sheet metal. Aluminum sheet metal is used in commercial aircraft manufacture, but in lesser amounts due to the smaller number of units produced.

Sheet metal is fabricated by shearing and forming operations, carried out by dies mounted in presses. Presses have beds that range in size from 50 by 30 cm to 210 by 140 cm (20 by 12 in. to 82 by 55 in.). Press forces range from 200 to 4500 kN (45 to 1000 kips). Speeds range from 100 strokes/min down to 15 strokes/min in the larger sizes. Dies typically have four components: a basic die set; a punch, held by the die set, which shears or forms the metal; a die plate, through which or on which the punch acts; and a stripper plate, which removes the scrap at the end of the fabrication process.

BDK estimate the basic die set cost (C_ds, in U.S. dollars) as scaling with usable area (A_u, in cm²):

C_ds = 120 + 0.36 · A_u   (Eq 4)

The coefficients in Eq 4 arise from correlating about 50 data points of die set cost with usable area. The tooling elements (the punch, die plate, and stripper plate) are estimated with a point system as follows. Let the complexity of the part to be fabricated be X_p. Suppose that the part profile has a perimeter P (cm), and that W (cm) and L (cm) are the overall width and length of the smallest rectangle that surrounds the punch.
The complexity of the part is taken to be:

X_p = (P/L)(P/W)   (Eq 5)

The assessment of how part complexity affects cost arises repeatedly in cost estimating. The subject is discussed at length in the section "Complexity Theory" below.

From the data of BDK, the basic time to manufacture the die set (M, in hours) can be estimated by the following steps. Define the basic manufacturing points (M_po) as:

M_po = 30 + 0.56 · X_p   (Eq 6)

Note that the manufacturing time increases a bit less than linearly with part complexity. This is consistent with the section "Complexity Theory." BDK go on to add two correction factors to M_po. The first is a correction factor for plate size and part complexity, f_LW. From BDK data it is found that:

f_LW = 1 + 0.0276 · L · W   (Eq 7)

The second correction factor accounts for the die plate thickness. BDK cite Nordquist (Ref 11), who gives a recommended die plate thickness, h_d (in mm), as:

h_d = 9 + 2.5 · ln[(U/U_ms) · V · h²]   (Eq 8)

where U is the ultimate tensile stress of the sheet metal, U_ms is the ultimate tensile stress of mild steel (a reference value), V is the required production volume, and h is the thickness (in mm) of the metal to be stamped. BDK recommend the second correction factor to be:

f_d = 0.5 + 0.02 · h_d, or f_d = 0.75, whichever is greater   (Eq 9)

The corrected labor hours, M_p, are then estimated as:

M_p = f_d · f_LW · M_po   (Eq 10)

The cost of the die is the corrected labor hours times the labor rate of the die fabricator, plus the basic die set cost from Eq 4.

As is typical of empirical cost-estimating methods, the BDK method takes into account several factors that clearly influence die cost: the production volume, the strength of the material (relating to how durable the die needs to be), the die size, and the complexity of the part. However, the specific forms of the equations are chosen as convenient representations of the data at hand. (As, indeed, are Eq 6 and 7, derived by fitting BDK data.) The die cost risk (i.e., the uncertainty of the resulting estimate of die cost) is unknown, because it is not known how the model equations would change with different manufacturing processes or different die design methods.

It is worth noting that only some features of the part design enter the cost estimate: the length and width of the punch area, the perimeter of the part to be made, the material, and the production volume. Thus, the product and die designers do not need to be complete in all details to make a cost estimate. Hence, the estimate can be made earlier in the product-development process, and cost trades between different designs can be made at an early stage in the product-development cycle with empirical methods.
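The relations in Eq 4 to 10 chain together into a single die-cost calculation, sketched below. The labor rate is a placeholder, and lacking a separate figure for the usable die set area in Eq 4, the sketch falls back on the punch envelope L × W; both are assumptions, and the example part at the end is invented:

```python
import math

def die_cost(P, L, W, U_ratio, V, h, labor_rate=40.0, A_u=None):
    """Die cost from the BDK relations, Eq 4 to 10.

    P: part perimeter, cm; L, W: punch envelope length and width, cm
    U_ratio: U / U_ms, sheet ultimate stress relative to mild steel
    V: production volume; h: sheet thickness, mm
    labor_rate ($/h) and the default usable area A_u are assumed values.
    """
    X_p = (P / L) * (P / W)                         # part complexity, Eq 5
    M_po = 30 + 0.56 * X_p                          # basic manufacturing hours, Eq 6
    f_LW = 1 + 0.0276 * L * W                       # plate size/complexity correction, Eq 7
    h_d = 9 + 2.5 * math.log(U_ratio * V * h ** 2)  # die plate thickness (mm), Eq 8
    f_d = max(0.5 + 0.02 * h_d, 0.75)               # thickness correction, Eq 9
    M_p = f_d * f_LW * M_po                         # corrected labor hours, Eq 10
    if A_u is None:
        A_u = L * W                                 # crude stand-in for usable die set area
    C_ds = 120 + 0.36 * A_u                         # basic die set cost ($), Eq 4
    return C_ds + labor_rate * M_p

# A 20 x 10 cm low-carbon steel part, 1 mm thick, 60 cm perimeter, 100,000 units:
print(round(die_cost(P=60, L=20, W=10, U_ratio=1.0, V=100_000, h=1.0)))
```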
Example 2: Assembly Estimate for Riveted Parts. The American Machinist Cost Estimator (Ref 7) is a very widely used tool for empirical cost estimation. It contains data on 126 different manufacturing processes, and a spreadsheet format is used throughout for the cost analysis. One example is an assembly process. It is proposed to rivet the aluminum frame used on a powerboat. The members of the frame are made from 16-gage aluminum. The buttonhead rivets, which are sized according to recommendations in Ref 12, are 5/16 in. in diameter and conform to ANSI standards. Figure 2 shows the part.

Fig. 2 Powerboat frame assembly

There are 20 rivets in the assembly, five large members of the frame, and five small brackets. Chapter 21 in Ref 7 includes six tables for setup, handling, pressing in the rivets, and riveting. A simple spreadsheet (for the first unit) might look like Table 2. The pieces are placed in a frame, the rivets are inserted, and the rivets are set. The total cycle time for the first unit, including setup, is 18.6 min.

Table 2 Spreadsheet example for assembly of frame (Fig. 2)

Source(a)   Process description               Table time, min   Setup, min
21.2-S      Setup                                               15
21.2-1      Get 5 frame members from skid     1.05
21.2-1      Get 5 brackets from bench         0.21
21.2-2      Press in hardware (20 rivets)     1.41
21.2-3      Set 20 rivets                     0.93
            Total cycle time                  3.60              15

(a) Tables in Ref 7, Chapter 21

There are several points to mention here. First, the thickness of the material and the size of the rivets play no direct part in this simple calculation; the methods of Ref 7 do not include such details. Yet common sense suggests that some of the details must count. For example, if the rivet holes are sized to have a very small clearance, the time for the "press in hardware" task, where the rivets are placed in the rivet holes, would increase. In like manner, if the rivets fit more loosely in the rivet holes, the cycle time for this task might decrease. The point of this elementary discussion is that there is some implied tolerance for each of the steps in the assembly process. In fact, one can deduce the tolerance from the standard specification of the rivets: from Ref 12, the tolerance on 5/16 in. diameter buttonhead rivets is 0.010 in., so the tolerance of the hole would be about the same size.

The second point is that there are 30 parts in this assembly. How the parts are stored and how they are placed in the riveting jig or fixture determines how fast the process is done. With experience, the process gets faster. There is a well-understood empirical model for process learning. The observation, repeated in many different industries, is that inputs decrease by a fixed percentage each time the number of units produced doubles. So, for example, if L_i is the labor in minutes for the ith unit produced, and L_0 is the labor for the first unit, then:

L_i = L_0 · i^b   (Eq 11)

The exponent b measures the slope of the learning curve. Learning-curve effects were first observed and documented in the aircraft industry, where a typical rate of improvement might be 20% between doubled quantities. This establishes an 80% learning function; that is, the unit labor falls to a fraction s = 0.80 of its previous value with each doubling, corresponding to b = log2(s) ≈ −0.322. Because this example is fabricated from aluminum, with rivets typical of aircraft construction, it is easy to work out that the 32nd unit will require 32.7% of the time (6.1 min) of the first unit (18.6 min).

Learning occurs in any well-managed manual assembly process. With automated assembly, "learning" occurs only when improvements are made to the robot used. In either case, there is evidence that, over substantial production runs and considerable periods of time, the improvement is a fixed percentage between doubled quantities. That is, if there is a 20% improvement between the tenth and twentieth units, there will likewise be a 20% improvement between the hundredth and two hundredth units. The cost engineer should remember that, according to this rule, the percentage improvement from one unit to the next is a steeply falling function. After all, at the hundredth unit, it takes another hundred units to achieve the same improvement as arose between the 10th and 20th units (Ref 13).
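In code, the learning-curve arithmetic of Eq 11 for this example is a one-liner (a direct transcription, with the exponent written out):

```python
import math

def unit_time(first_unit_min, unit_number, learning_rate=0.80):
    """Unit time under Eq 11: L_i = L_0 * i**b, with b = log2(learning_rate)."""
    b = math.log2(learning_rate)  # 80% curve -> b = -0.322
    return first_unit_min * unit_number ** b

print(unit_time(18.6, 32))  # ~6.1 min, i.e. 32.7% of the first unit's 18.6 min
```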
Ostwald, "American Machinist Cost Estimator," Penton Educational Division, Penton Publishing, 1988 11. W.N. Nordquist, Die Designing and Estimating, 4th ed., Huebner Publishing, 1955 12. E. Oberg, F.D. Jones, and H.L. Horton, Machinery's Handbook, 22nd ed., Industrial Press, 1987, p 1188- 1205 13. G.J. Thuesen, and W.J. Fabrycky, Engineering Economy, Prentice Hall, 1989, p 472-474 Manufacturing Cost Estimating David P. Hoult and C. Lawrence Meador, Massachusetts Institute of Technology Complexity Theory Up to now this article has dealt with the cost-estimation tools that do not require a complete description of the part or assembly to make the desired estimates. What can be said if the design is fully detailed? Of course, one could build a prototype to get an idea of the costs, and this is often done, particularly if there is little experience with the manufacturing methods to be used. For example, suppose there is a complex wave feed guide to be fabricated out of aluminum for a modern radar system. The part has some 600 dimensions. One could get a cost estimate by programming a numerically controlled milling machine to make the part, but is there a simpler way to get a statistically meaningful estimate of cost, while incorporating all of the design details? The method that fulfills this task is complexity theory. There has been a long search for the "best" metric to measure how complex a given part or assembly is. The idea of using dimensions and tolerances as a metric comes from Wilson (Ref 14). The idea presented here is that the metric is a sum of log (d i /t i ), where d i is the ith dimension and t i is its associated tolerance (i ranges over all the dimensions needed to describe the part). According to complexity theory, how complex a part is, I, is measured by: (Eq 12) Originally, the log function was chosen from an imperfect analogy with information theory. It is now understood that the log function arises from a limit process in which tolerance goes to zero while a given dimension remains fixed. In this limit, if good engineering practice is followed, that is, if the accuracy of the machine making the part is not greatly different than the accuracy required of the part, and if the "machine" can be modeled like a first-order damped system, then it can be shown that the log function is the correct metric. Because of historical reasons, the log is taken to the base 2, and I is measured in bits. Thus Eq 12a is written: (Eq 12a) There are two main attractions of the complexity theory. First, I will include all of the dimensions required to describe the part. Hence, the metric captures all of the information of the original design. For assemblies, the dimensions and tolerances refer to the placement of each part in the assembly, and second, the capability of making rigorous statements of how I effects costs. In Ref 8 it is proven that if the part is made by a single manufacturing process, the average time (T) to fabricate the part is: T = A · I, A = const (Eq 13) Again, in many cases, the coefficient A must be determined empirically from past manufacturing data. The same formula applies to assemblies made with a single process, such as manual labor. The extension to multiple processes is given in Ref 8. A final aspect of complexity theory worth mentioning is risk. Suppose a part with hundreds of dimensions is to be made on a milling machine. The exact sequence in which each feature of the part is cut out will determine the manufacturing time. 
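A minimal sketch of Eq 12a and Eq 13 follows; the three-dimension part and the value of the coefficient A are invented for illustration (A must in practice be fitted to past manufacturing data):

```python
import math

def complexity_bits(dims_tols):
    """Eq 12a: I = sum over i of log2(d_i / t_i), in bits."""
    return sum(math.log2(d / t) for d, t in dims_tols)

def mean_fab_time(dims_tols, A):
    """Eq 13: average fabrication time T = A * I for a single process."""
    return A * complexity_bits(dims_tols)

# Hypothetical part: three dimensions with their tolerances (same units)
part = [(100.0, 0.1), (25.0, 0.05), (10.0, 0.01)]
I = complexity_bits(part)               # ~9.97 + 8.97 + 9.97 = ~28.9 bits
print(I, mean_fab_time(part, A=0.5))    # A = 0.5 min/bit, an assumed value
```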
A final aspect of complexity theory worth mentioning is risk. Suppose a part with hundreds of dimensions is to be made on a milling machine. The exact sequence in which each feature of the part is cut determines the manufacturing time, and there are a large number of such sequences, each corresponding to some value of A. Hence there is a collection of values of A, whose mean corresponds to the average time to fabricate the part; that is the meaning of Eq 13. It can be shown that the standard deviation of the manufacturing time is:

σ_T = σ_A · I   (Eq 14)

where σ_T is the standard deviation of the manufacturing time and σ_A is the standard deviation of the coefficient A. σ_A can be determined from past data.

These results have a simple interpretation. Parts or assemblies with tighter (smaller) tolerances take longer to make or assemble, because with dimensions fixed, the log functions increase as the tolerances decrease. More complex parts (larger I) take longer to make (Eq 13), and more complex parts carry more cost risk (Eq 14). These trends are well known to experienced engineers.

In Ref 8, a large number of parts from three types of manufacturing processes were correlated according to Eq 13. The results for the manual lathe process are typical of all the processes studied there. Figure 3 shows the correlation of time with I, the dimension information, measured in bits. An interesting fact, shown in Fig. 4, is that the accuracy of the estimate is no different from that of an experienced estimator.

Fig. 3 Manufacturing time and dimension information for the lathe process (batch size 3 to 6 units)

Fig. 4 Accuracy comparison for the lathe process

In Ref 8, the coefficient A is shown to depend on machine properties such as speed, operating range, and time to reach steady-state speed. Can one estimate its value from first principles? It turns out that for manual processes one can make rough estimates of the coefficient. The idea is based on the basic properties of human performance, known as Fitts' law. Fitts and Posner reported the maximum human information capacity for discrete, one-dimensional positioning tasks to be about 12 bits/s (Ref 15). Other experiments have reported 8 to 15 bits/s for assembly tasks (Ref 16). The rivet insertion process discussed previously in this article is an example. The tolerance of the holes for the rivets is estimated to be 0.010 in., the same as the handbook value for the tolerance on the barrel of the rivet (Ref 12). Then d/t = 0.312/0.010 = 31.2, and log2(d/t) ≈ 4.96 bits for each insertion. The initial rate of insertion (Ref 7) was [...]
one coefficient for hand assembly of small parts, A ≈ 1.6 bits/s, and no lookup tables. One can make small changes in design, for example, change the screw size, and get an indication of the change in assembly time. If one started with a preliminary design, the assembly-time estimate would grow more accurate as more details of the design itself, and of the manufacturing process to build the design, became known.
[...] in the time it takes an assembler to pick up and orient each part before the part is assembled. Jigs, trays, and so forth that reduce this pick-and-place orientation effort would save assembly [...]
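Pulling these pieces together, the following sketch shows how Eq 13 and 14 and the Fitts-law rate combine into a time-and-risk estimate for the riveted assembly. It uses the 1.6 bits/s quoted above as the effective hand-assembly rate and assumes, purely for illustration, a σ_A equal to 20% of A:

```python
import math

def time_and_risk(I_bits, A_mean, A_sigma):
    """Eq 13 and Eq 14: mean fabrication/assembly time and its standard deviation."""
    return A_mean * I_bits, A_sigma * I_bits

# Rivet insertion from Example 2: d/t = 0.312/0.010, so ~4.96 bits per insertion
bits_per_insertion = math.log2(0.312 / 0.010)

# At an effective hand-assembly rate of ~1.6 bits/s, each insertion takes
# roughly bits / rate, about 3.1 s.
print(bits_per_insertion / 1.6)

# For the 20-rivet assembly, with an assumed sigma_A of 20% of A:
A = 1 / 1.6                                       # seconds per bit
T, sigma_T = time_and_risk(20 * bits_per_insertion, A, 0.2 * A)
print(T, sigma_T)                                 # ~62 s mean, ~12 s standard deviation
```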
