Approximately 30 layers of elements close to the ‘wave-maker plane’ and the ship are moved, and the Navier–Stokes/VOF equations are integrated using the arbitrary Lagrangian–Eulerian frame of reference. The LNG tanks are assumed to be 80% full. This leads to an interesting interaction between the sloshing inside the tanks and the drifting ship. The mesh had approximately nelem=2,670,000 elements, and the integration to 3 minutes of real time took 20 hours on a PC (3.2 GHz Intel P4, 2 Gbytes RAM, Linux OS, Intel compiler). Figure 19.12(b) shows the evolution of the flowfield, and Figures 19.12(c) and (d) the body motion. Note the change in position of the ship, as well as the roll motion.

19.2.8.5. Drifting fleet of ships

This example shows the use of interface capturing to predict the effects of drift and shielding in waves for a group of ships. The ships are the same LNG tankers as used in the previous example, but the tanks are considered full. The boundary conditions and mesh size distribution are similar to the ones used in the previous example. The ships are treated as free, floating objects subject to the hydrodynamic forces of the water. The surface nodes of the ships move according to a 6-DOF integration of the rigid-body motion equations. Approximately 30 layers of elements close to the ‘wave-maker plane’ and the ships are moved, and the Navier–Stokes/VOF equations are integrated using the arbitrary Lagrangian–Eulerian frame of reference. The mesh had approximately 10 million elements and the integration to 6 minutes of real time took 10 hours on an SGI Altix using six processors (1.5 GHz Intel Itanium II, 8 Gbytes RAM, Linux OS, Intel compiler). Figures 19.13(a)–(d) show the evolution of the flowfield and the position of the ships. Note how the ships in the back are largely unaffected by the waves as they are ‘blocked’ by the ships in front, and how these ships cluster together due to wave forces.

Figure 19.13. (a)–(d): LNG tanker fleet: evolution of the free surface

19.2.9. PRACTICAL LIMITATIONS OF FREE SURFACE CAPTURING

Free surface capturing has been used to compute violent free surface flows with overturning waves and changes in topology. Even though in principle free surface capturing is able to compute all interface problems, some practical limitations do remain. The first and foremost is accuracy. For smooth surfaces, free surface fitting can yield far more accurate results with fewer gridpoints. This is even more pronounced for cases where a free surface boundary layer is present, as it is very difficult to generate anisotropic grids for the free surface capturing cases.

20 OPTIMAL SHAPE AND PROCESS DESIGN

The ability to compute flowfields implicitly implies the ability to optimize shapes and processes. The change of shape in order to obtain a desired or optimal performance is denoted as optimal shape design. Due to its immense industrial relevance, the relative maturity (accuracy, speed) of flow solvers and increasingly powerful computers, optimal shape design has elicited a large body of research and development (Newman et al. (1999), Mohammadi and Pironneau (2001)). The present chapter gives an introduction to the key ideas, as well as the main techniques used to optimize shapes and processes.

20.1. The general optimization problem

In order to optimize a process or shape, a measure of quality is required.
This is given by one – or possibly many – so-called objective functions I, which are functions of design variables or input parameters β, as well as field unknowns u (e.g. a flowfield),

I(β, u(β)) → min,   (20.1)

and is subject to a number of constraints.

- PDE constraints: these are the equations that describe the physics of the problem being considered, and may be written as R(u) = 0. (20.2)
- Geometric constraints: g(β) ≥ 0. (20.3)
- Physical constraints: h(u) ≥ 0. (20.4)

Examples for objective functions are:

- inviscid drag (e.g. for trans/supersonic airfoils): I = ∫ p n_x dΓ;
- prescribed pressure (e.g. for supercritical airfoils): I = ∫ (p − p0)² dΓ;
- weight (e.g. for structures): I = ∫ ρ dΩ;
- uniformity of magnetic field (electrodynamics): I = ∫ (B − B0)² dΩ.

Examples for PDE constraints R(u) are all the commonly used equations that describe the relevant physics of the problem:

- fluids: Euler/Navier–Stokes equations;
- structures: elasticity/plasticity equations;
- electromagnetics: Maxwell equations;
- etc.

Examples for geometric constraints g(β) are:

- wing cross-sectional area (stress, fuel): A > A0;
- trailing edge thickness (cooling): w > w0;
- width (manufacturability): w > w0;
- etc.

Examples for physical constraints h(u) are:

- a constrained negative pressure gradient to avoid separation: s · ∇p > pg0;
- a constrained pressure to avoid cavitation: p > p0;
- a constrained shear stress to avoid blood haemolysis: |τ| < τ0;
- a constrained stress to avoid structural failure: |σ| < σ0;
- etc.

Before proceeding, let us define process and shape optimization with a higher degree of precision. Within shape optimization, we can clearly define three different optimization options (Jakiela et al. (2000), Kicinger et al. (2005)): topological optimization (TOOP), shape optimization (SHOP) and sizing optimization (SIOP). These options mirror the typical design cycle (Raymer (1999)): preliminary design, detailed design and final design. With reference to Figure 20.1, we can define the following.

Figure 20.1. Different types of optimization: (a) topology; (b) shape; (c) sizing

- Topological optimization. The determination of an optimal material layout for an engineering system. TOOP has a considerable theoretical and empirical legacy in structural mechanics (Bendsoe and Kikuchi (1988), Jakiela et al. (2000), Kicinger et al. (2005), Bendsoe (2004)), where the removal of material from zones where low stress levels occur (i.e. no load bearing function is being realized) naturally leads to the common goal of weight minimization. For fluid dynamics, TOOP has been used for internal flow problems (Borrvall and Peterson (2003), Hassine et al. (2004), Moos et al. (2004), Guest and Prévost (2006), Othmer et al. (2006)).
- Shape optimization. The determination of an optimal contour, or shape, for an engineering system whose topology has been fixed. This is the classic optimization task for airfoil/wing design, and has been the subject of considerable research and development during the last two decades (Pironneau (1985), Jameson (1988, 1995), Kuruvila et al. (1995), Reuther and Jameson (1995), Reuther et al.
(1996), Anderson and Venkatakrishnan (1997), Elliott and Peraire (1997, 1998), Mohammadi (1997), Nielsen and Anderson (1998), Medic et al. (1998), Reuther et al. (1999), Nielsen and Anderson (2001), Mohammadi and Pironneau (2001), Dreyer and Martinelli (2001), Soto and Löhner (2001a,b, 2002)).
- Sizing optimization. The determination of an optimal size distribution for an engineering system whose topology and shape have been fixed. A typical sizing optimization in fluid mechanics is the layout of piping systems for refineries. Here, the topology and shape of the pipes are considered fixed, and one is only interested in an optimal arrangement of the diameters.

For all these types of optimization (TOOP, SHOP, SIOP) the parameter space is defined by a set of variables β. In order for any optimization procedure to be well defined, the set of design variables β must satisfy some basic conditions (Gen (1997)):

- non-redundancy: any process, shape or object can be obtained by one and only one set of design variables β;
- legality: any set of design variables β can be realized as a process, shape or object;
- completeness: any process, shape or object can be obtained by a set of design variables β; this guarantees that any process, shape or object can be obtained via optimization;
- causality (continuity): small variations in β lead to small changes in the process, shape or object being optimized; this is an important requirement for the convergence of optimization techniques.

Admittedly, at first sight all of these conditions seem logical and easy to satisfy. However, it has often been the case that an over-reliance on ‘black-box’ optimization has led users to employ ill-defined sets of design variables.

20.2. Optimization techniques

Given the vast range of possible applications, as well as their immediate benefit, it is not surprising that a wide variety of optimization techniques have emerged. In the simplest case, the parameter space β is tested exhaustively. An immediate improvement is achieved by testing in detail only those regions where ‘promising minima’ have been detected. This can be done by emulating the evolution of life via ‘survival of the fittest’ criteria, leading to so-called genetic algorithms. With reference to Figure 20.2, for smooth functions I one can evaluate the gradient I,β and change the design in the direction opposite to the gradient. In general, such gradient techniques will not be suitable to obtain globally optimal designs, but can be used to quickly obtain local minima. In the following, we consider in more detail recursive exhaustive parameter scoping, genetic algorithms and gradient-based techniques. Here, we already note the rather surprising observation that with optimized gradient techniques and adjoint solvers the computational cost to obtain an optimal design is comparable to that of obtaining a single flowfield (!).

Figure 20.2. Local minimum via gradient-based optimization

20.2.1. RECURSIVE EXHAUSTIVE PARAMETER SCOPING

Suppose we are given the optimization problem

I(β, u(β)) → min.   (20.5)

In order to normalize the design variables, we define a range β_i^min ≤ β_i ≤ β_i^max for each design variable. An instantiation is then given by

β_i = (1 − α_i) β_i^min + α_i β_i^max,   (20.6)

implying I(β) = I(β(α)). By working only with the α_i, an abstract, non-dimensional, bounded ([0, 1]) setting is achieved, which allows for a large degree of commonality among various optimization algorithms.
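As a small illustration of this normalized setting, the sketch below (in Python; not part of the original text) maps a point α ∈ [0, 1]^N to physical design variables via (20.6) and evaluates a cost function there. The variable ranges and the quadratic cost are purely hypothetical stand-ins for a real CFD-based objective.

import numpy as np

# hypothetical design-variable ranges beta_min(i) <= beta(i) <= beta_max(i)
beta_min = np.array([0.1, -1.0, 5.0])
beta_max = np.array([0.5,  1.0, 9.0])

def instantiate(alpha):
    # equation (20.6): map normalized variables alpha in [0,1]^N to physical beta
    return (1.0 - alpha)*beta_min + alpha*beta_max

def cost(beta):
    # placeholder objective I(beta); in practice this would trigger a CFD run
    return np.sum((beta - np.array([0.3, 0.2, 7.0]))**2)

alpha = np.array([0.5, 0.6, 0.5])   # a point in the abstract [0,1]^N setting
print(cost(instantiate(alpha)))     # I(beta(alpha))

All optimization algorithms discussed below can then operate on α alone, regardless of the physical meaning or units of the underlying β.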
The simplest (and most expensive) way to solve (20.1) is to divide each design parameter into regular intervals, evaluate the cost function for all possible combinations, and retain the best. Assuming n_d subdivisions per design variable and N design variables, this amounts to n_d^N cost function evaluations. Each one of these cost function evaluations corresponds to one (or several) CFD runs, making this technique suitable only for problems where N is relatively small. An immediate improvement is achieved by restricting the number of subdivisions n_d to a manageable number, and then shrinking the parameter space recursively around the best design. While significantly faster, such a recursive procedure runs the risk of not finding the right minimum if the (unknown) local ‘wavelength’ of non-smooth functionals is smaller than the interval size chosen for the exhaustive search (see Figure 20.3).

Figure 20.3. Recursive exhaustive parameter scoping

The basic steps required for the recursive exhaustive algorithm can be summarized as follows.

Ex1. Define:
     - Parameter space size for α: [0, 1];
     - Nr. of intervals n_d (interval length h = 1/n_d);
Ex2. while: h > h_min:
Ex3.   Evaluate the cost function I(β(α)) for all possible combinations of α_i;
Ex4.   Retain the combination α_opt with the lowest cost function;
Ex5.   Define new search range: [α_opt − h/2, α_opt + h/2];
Ex6.   Define new interval size: h := h/n_d;
     end while

20.2.2. GENETIC ALGORITHMS

Given the optimization problem (20.1), a simple and very general way to proceed is by copying what nature has done in the course of evolution: try variations of β and keep the ones that minimize (i.e. improve) the cost function I(β, u(β)). This class of optimization techniques is called genetic algorithms (Goldberg (1989), Deb (2001), De Jong (2006)) or evolutionary algorithms (Schwefel (1995)). The key elements of these techniques are:

- a fitness measure, given by I(β, u(β)), to measure different designs against each other;
- chromosome coding, to parametrize the design space given by β;
- population size, required to achieve a robust design;
- selection, to decide which members of the present/next generation are to be kept/used for reproductive purposes; and
- mutation, to obtain ‘offspring’ not present in the current population.

The most straightforward way to code the design variables into chromosomes is by defining them to be functions of the parameters 0 ≤ α_i ≤ 1. As before, an instantiation is given by

β_i = (1 − α_i) β_i^min + α_i β_i^max.   (20.7)

The population required for a robust selection needs to be sufficiently large. A typical choice for the number of individuals in the population M as compared to the number of chromosomes (design variables) N is

M > O(2N).   (20.8)
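The chromosome coding and population sizing just described can be sketched as follows. This is only an illustrative Python fragment: M = 3N is chosen to respect (20.8), and the placeholder cost stands in for the decode-and-solve step implied by (20.7).

import numpy as np

rng = np.random.default_rng(0)

def initial_population(N, cost, M=None):
    # chromosome coding: each individual is a point alpha in [0,1]^N, cf. (20.7)
    # population size M > O(2N), cf. (20.8); here M = 3N is used
    M = 3*N if M is None else M
    alphas = rng.random((M, N))
    fitness = np.array([cost(a) for a in alphas])
    order = np.argsort(fitness)             # best (lowest cost) individuals first
    return alphas[order], fitness[order]

# hypothetical cost working directly on the normalized variables alpha;
# in practice cost(alpha) would decode alpha to beta via (20.7) and run a CFD solve
cost = lambda a: np.sum((a - 0.7)**2)
pop, fit = initial_population(N=4, cost=cost)
print(fit[0], pop[0])                       # current best individual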
Given a population and a fitness measure associated with each individual, the next generation has to be determined. Depending on the life cycle and longevity of the species, as well as the climatic and environmental conditions, several successful strategies have emerged in nature. For many insect species, the whole population dies and is replaced when a new generation is formed. Denoting by µ the parent population and by λ the offspring population, this complete replacement strategy is written as (µ, λ). Larger mammals, as well as many birds, reptiles and fish, live long enough to produce several offspring at different times during their lifetime. In this case the offspring population consists of a part that is kept from the parent population, as well as new individuals that are fit enough to compete. This partial replacement strategy is written as (µ + λ).

In order to achieve a monotonic improvement in designs, the (µ + λ) strategy is typically used, and a percentage of ‘best individuals’ of each generation is kept (typical value c_k = O(10%)). Furthermore, a percentage of ‘worst individuals’ is not admitted for reproductive purposes (typical value c_c = O(75%)). Each new individual is generated by selecting (randomly) a pair i, j from the allowed list of individuals and combining the chromosomes randomly. Of the many possible ways to combine chromosomes, we mention the following.

(a) Chromosome splicing. A random crossover point l is selected from the design parameters. The chromosomes for the new individual that fall below l are chosen from i, the rest from j:

α_k = α_k^i, 1 ≤ k ≤ l;   α_k = α_k^j, l < k ≤ N.   (20.9)

(b) Arithmetic pairing. A random pairing factor −ξ < γ < 1 + ξ is selected and applied to all variables of the chromosomes in a uniform way. The chromosomes for the new individual are given by

α = (1 − γ)α^i + γ α^j.   (20.10)

Note that γ may lie outside [0, 1] (a typical value is ξ = 0.2). This is required, as otherwise the only way to reach locally outside the chromosome interval given by the pair i, j (or the present population) is via mutation, which is a slow and therefore expensive process.

(c) Random pairing. The arithmetic pairing can be randomized even further by choosing a different proportionality factor γ_k for each design variable. We then obtain

α_k = (1 − γ_k)α_k^i + γ_k α_k^j.   (20.11)

Note that chromosome splicing and arithmetic pairing constitute particular cases of random pairing. The differences between these pairings can be visualized by considering the 2-D search space shown in Figure 20.4. If we have two points x_1, x_2 which are being paired to form a new point x_3, then chromosome splicing, arithmetic pairing and random pairing lead to the regions shown in Figure 20.4. In particular, chromosome splicing only leads to two new possible point positions, arithmetic pairing to points along the line connecting x_1, x_2, and random pairing to points inside the extended bounding box given by x_1, x_2.

Figure 20.4. Regions for possible offspring from x_1, x_2

A population that is not modified continuously by mutations tends to become uniform, implying that the optimization may end in a local minimum. Therefore, a mutation frequency c_m = O(0.25/N) has to be applied to the new generation, modifying chromosomes randomly.

The basic steps required per generation for genetic algorithms can be summarized as follows.

Ga1. Evaluate the fitness function I(β(α)) for all individuals;
Ga2. Sort the population in ascending (descending) order of I;
Ga3. Retain the c_k best individuals for the next generation;
Ga4. while: Population incomplete
     - Select randomly a pair i, j from the allowed (c_c) list;
     - Obtain random pairing factors −0.2 < γ_k < 1.2;
     - Obtain the chromosomes for the new individual: α_k = (1 − γ_k)α_k^i + γ_k α_k^j;
     end while

For cases with a single, defined optimum, one observes that:

- the best candidate does not change over many generations – only the occasional mutation will yield an improvement, and thereafter the same pattern of unchanging best candidate will repeat itself;
- the top candidates (e.g.
top 25% of the population) become uniform, i.e. the genetic pool collapses.

Such a behaviour is easy to detect, and much faster convergence to the defined optimum can be achieved by ‘randomizing’ the population. If the chromosomes of any two individuals i, j are such that

d_ij = |α^i − α^j| < ε,   (20.12)

the difference (distance) d_ij is artificially enlarged by adding/subtracting a random multiple of ε to one of the chromosomes. This process is repeated for all pairs i, j until none of them satisfies (20.12). As the optimum is reached, one again observes that the top candidate remains unchanged over many generations. The reason for this is that an improvement in the cost function can only be obtained with variations that are smaller than ε. When such a behaviour is detected, the solution is to reduce ε and continue. Typical reduction factors are 0.1–0.2. Given that 0 < α < 1, a stopping criterion is automatically achieved for such cases: when the value of ε is smaller than a preset threshold, convergence has been achieved.

The advantages of genetic algorithms are manifold: they represent a completely general technique, able to go beyond local minima, and hence are suitable for ‘rough’ cost functions I with multiple local minima. Genetic algorithms have been used on many occasions for shape optimization (see, e.g., Gage and Kroo (1993), Crispin (1994), Quagliarella and Cioppa (1994), Quagliarella (1995), Doorly (1995), Periaux (1995), Yamamoto and Inoue (1995), Vicini and Quagliarella (1997, 1999), Obayashi (1998), Obayashi et al. (1998), Zhu and Chan (1998), Naujoks et al. (2000), Pulliam et al. (2003)). On the other hand, the number of cost function evaluations (and hence field solutions) required is of O(N²), where N denotes the number of design parameters. The speed of convergence can also be strongly dependent on the crossover, mutation and selection criteria.

Given the large number of instantiations (i.e. detailed, expensive CFD runs) required by genetic algorithms, considerable efforts have been devoted to reducing this number as much as possible. Two main options are possible here:

- keep a database of all generations/instantiations, and avoid recalculation of regions already visited/explored;
- replace the detailed CFD runs by approximate models.

Note that both of these options differ from the basic modus operandi of natural selection. The first case would imply selection from a semi-infinite population without regard to the finite life span of organisms. The second case replaces the actual organism by an approximate model of the same.

20.2.2.1. Tabu search

By keeping in a database the complete history of all individuals generated and evaluated so far, one is in a position to reject immediately offspring that:

- are too close to individuals already in the database;
- fall into regions populated by individuals whose fitness is low.

The regions identified as unpromising are labelled as ‘forbidden’ or ‘tabu’, hence the name.

20.2.2.2. Approximate models

As stated before, considerable efforts have been devoted to the development of approximate models. The key idea is to use these (cheaper) models to steer the genetic algorithm into the promising regions of the parameter space, and to use the expensive CFD runs as seldom as possible (Quagliarella and Chinnici (2005)). The approximate models can be grouped into the following categories. [...]
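Returning to the database option of section 20.2.2.1, the following Python sketch illustrates one possible tabu-style rejection test. The distance threshold, the ‘worst 25%’ cutoff and the placeholder fitness are assumptions made here for illustration only, not values taken from the text.

import numpy as np

class TabuFilter:
    # database of all individuals evaluated so far: (alpha, fitness) pairs
    def __init__(self, d_min=0.05, bad_quantile=0.75):
        self.alphas, self.fitness = [], []
        self.d_min = d_min                  # "too close" threshold (assumed value)
        self.bad_quantile = bad_quantile    # worst 25% of the database is 'tabu'

    def add(self, alpha, fit):
        self.alphas.append(np.asarray(alpha))
        self.fitness.append(fit)

    def reject(self, alpha):
        if not self.alphas:
            return False
        d = np.array([np.linalg.norm(alpha - a) for a in self.alphas])
        f = np.array(self.fitness)
        i_near = int(d.argmin())
        too_close = d[i_near] < self.d_min                      # first rejection criterion
        bad_cut = np.quantile(f, self.bad_quantile)
        near_bad = (d[i_near] < 2.0*self.d_min) and (f[i_near] > bad_cut)  # second criterion
        return too_close or near_bad

# usage: evaluate the expensive cost function only for offspring that pass the filter
tabu = TabuFilter()
for alpha in np.random.default_rng(1).random((20, 3)):
    if not tabu.reject(alpha):
        tabu.add(alpha, np.sum((alpha - 0.5)**2))   # placeholder for a CFD-based fitness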
differences (Haftka (1985), Papay and Walters (1995), Hou et al. (1995), Besnard et al. (1998), Newman et al. (1999), Hino (1999), Miyata and Gotoda (2000), Tahara et al. (2000)). For each βi, vary its value by a small amount ∆βi, recompute the cost function I and measure the gradient with respect to βi:

I,βi = [I(β + ∆βi) − I(β)] / ∆βi.   (20.23)

This implies ... (1973, 1974, 1985), Jameson (1988, 1995), Kuruvila et al. (1995), Anderson and Venkatakrishnan (1997), Elliott and Peraire (1997, 1998), Mohammadi (1997), Nielsen and Anderson (1998), Medic et al. (1998), Reuther et al. (1999), Mohammadi and Pironneau (2001), Dreyer and Martinelli (2001), Soto and Löhner (2001, 2002)). Consider a variation in the objective function I and the PDE constraint R:

δI = I,β δβ + ...

- Obtain λ_min: α = α − λ_min I,α0;
- Evaluate I2 = I(β(α));
- If I2 > I0 ⇒ reduce λ (goto Sd1);
- Set λ = λ_min;
- do: icont = 1, mcont   ! continuation steps
    - α3 = α − λ I,α0;
    - Evaluate I3 = I(β(α3));
    - If I3 > I2: exit loop;
    - Replace I2 = I3, α = α3;
  enddo   (20.28)

The convergence behaviour of genetic and gradient-based techniques can be illustrated by considering ... design variables.

Figure 20.7. Convergence history and function evaluations (cost function vs. number of GA generations/FD steps; number of function evaluations vs. number of design variables, for GA and FD with N = 5, 10, 20, 40)

20.3. Adjoint solvers

... (20.47)

where γ is the ratio of specific heats. Denoting u = v1, v = v2, w = v3, q = u² + v² + w² and c1 = γ − 1, c2 = c1/2, c3 = 3 − γ, the Jacobian matrices are given by

        [ 0                1                   0        0        0   ]
        [ −u² + c2 q       c3 u                −c1 v    −c1 w    c1  ]
A_x =   [ −uv              v                   u        0        0   ]
        [ −uw              w                   0        u        0   ]
        [ −(γe − c1 q)u    γe − c2 q − c1 u²   −c1 uv   −c1 uw   γ u ]
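As a consistency check on the Jacobian above, the following NumPy sketch (not part of the original text) builds A_x in closed form and compares it against a finite-difference derivative of the x-flux of the compressible Euler equations, in the spirit of (20.23); the test state U0 is an arbitrary choice.

import numpy as np

gamma = 1.4

def flux_x(U):
    # x-flux of the compressible Euler equations for U = (rho, rho*u, rho*v, rho*w, rho*e)
    rho, ru, rv, rw, re = U
    u, v, w = ru/rho, rv/rho, rw/rho
    p = (gamma - 1.0)*(re - 0.5*rho*(u*u + v*v + w*w))
    return np.array([ru, ru*u + p, ru*v, ru*w, u*(re + p)])

def A_x(U):
    # closed-form Jacobian dF_x/dU, matching the matrix given in the text
    rho, ru, rv, rw, re = U
    u, v, w = ru/rho, rv/rho, rw/rho
    e = re/rho
    q  = u*u + v*v + w*w
    c1 = gamma - 1.0
    c2 = 0.5*c1
    c3 = 3.0 - gamma
    return np.array([
        [0.0,                 1.0,                      0.0,     0.0,     0.0    ],
        [-u*u + c2*q,         c3*u,                     -c1*v,   -c1*w,   c1     ],
        [-u*v,                v,                        u,       0.0,     0.0    ],
        [-u*w,                w,                        0.0,     u,       0.0    ],
        [-(gamma*e - c1*q)*u, gamma*e - c2*q - c1*u*u,  -c1*u*v, -c1*u*w, gamma*u]])

# finite-difference check in the spirit of (20.23)
U0 = np.array([1.0, 0.3, 0.1, -0.2, 2.5])
eps = 1e-6
A_fd = np.column_stack([(flux_x(U0 + eps*np.eye(5)[i]) - flux_x(U0))/eps for i in range(5)])
print(np.max(np.abs(A_fd - A_x(U0))))   # should be of the order of eps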
concurrently using the concept of non-dominated individuals (Goldberg (1989), Deb (2001)). Given multiple objectives Ii(β, u(β)), i = 1, m, the objective vector of individual k is partially less than the objective vector of individual l if

Ii(β_k, u(β_k)) ≤ Ii(β_l, u(β_l)), i = 1, m,  and  ∃j : Ij(β_k, u(β_k)) < Ij(β_l, u(β_l)).   (20.14)

All individuals that ...

- networks, that are trained to reproduce the input–output relation obtained from the cost functions evaluated so far (Papila et al. (1999));
- proper orthogonal decompositions (LeGresley and Alonso (2000));
- kriging (Simpson et al. (1998), Kumano et al. (2006));
- tessellations;
- etc.

20.2.2.3. Constraint handling

The handling of constraints is remarkably simple for genetic algorithms. For any individual that does ...

(p − p0)² dΓ,   (20.58)

where p0 denotes the prescribed pressure. From (20.39) and (20.44) this implies

n · v = 2(p − p0).   (20.59)

As before, the normal adjoint velocity is prescribed while the tangential adjoint velocity is free to change. No condition is required for the adjoint pressure.

(c) Prescribed velocity. This condition is given by Iv0 = (v − v0)² ... directly from the original code (Griewank and Corliss (1991), Berz et al. (1996)). For example,

u = v*w  ⇒  du = dv*w + v*dw,
u = v/w  ⇒  du = dv/w - v*dw/(w*w), etc.

Several efforts have been reported in this area, most notably the Automatic DIfferentiation of FORtran (ADIFOR) (Bischof et al. (1992), Hou et al. (1995)) and Odyssee (Rostaing et al. (1993)).

(b) Finite differences. Perhaps the ...

        [ 0                0        1                   0        0   ]
        [ −uv              v        u                   0        0   ]
A_y =   [ −v² + c2 q       −c1 u    c3 v                −c1 w    c1  ]
        [ −vw              0        w                   v        0   ]
        [ −(γe − c1 q)v    −c1 uv   γe − c2 q − c1 v²   −c1 vw   γ v ]

        [ 0                0        0        1                   0   ]
        [ −uw              w        0        u                   0   ]
A_z =   [ −vw              0        w        v                   0   ]   (20.48)
        [ −w² + c2 q       −c1 u    −c1 v    c3 w                c1  ]
        [ −(γe − c1 q)w    −c1 uw   −c1 vw   γe − c2 q − c1 w²   γ w ]

20.3.3.2. Incompressible Euler/Navier–Stokes