International Journal of Computer Integrated Manufacturing, Vol. 23, No. 6, 2010

International Journal of Computer Integrated Manufacturing, Vol. 23, No. 6, June 2010, 487–499

Sources of variability in the set-up of an indoor GPS

Carlo Ferri (a)(*), Luca Mastrogiacomo (b) and Julian Faraway (c)

(a) Via XI Febbraio 40, 24060 Castelli Calepio, BG, Italy; (b) DISPEA, Politecnico di Torino, Corso Duca degli Abruzzi 24, Torino 10129, Italy; (c) Department of Mathematical Sciences, University of Bath, Bath BA2 7AY, UK

(Received 17 June 2009; final version received January 2010)

*Corresponding author. Email: info@carlo.comyr.com
ISSN 0951-192X print/ISSN 1362-3052 online. © 2010 Taylor & Francis. DOI: 10.1080/09511921003642147. http://www.informaworld.com

An increasing demand for extended flexibility with respect to model types and production volumes in the manufacture of large-size assemblies has generated a growing interest in reducing the deployment of jigs and fixtures during assembly operations. A key factor enabling and sustaining this reduction is the constantly expanding availability of instruments for the dimensional measurement of large-size products. However, the increasing complexity of these measurement systems and of their set-up procedures may hinder the final users in their effort to assess whether the performance of these instruments is adequate for pre-specified inspection tasks. In this paper, mixed-effects and fixed-effects linear statistical models are proposed as a tool to assess quantitatively the effect of set-up procedures on the uncertainty of measurement results. This approach is demonstrated on a Metris Indoor GPS system (iGPS). The main conclusion is that more than 99% of the variability in the considered measurements is accounted for by the number of points used in the bundle adjustment procedure during the set-up phase. Also, different regions of the workspace have significantly different error standard deviations and a significant effect on the transient duration of measurement. This is expected to affect adversely the precision and unbiasedness of measurements taken with Indoor GPS when tracking moving objects.

Keywords: large scale metrology; large volume metrology; distributed coordinate measuring systems; Indoor GPS; iGPS; uncertainty

Introduction

During the last decades, research efforts in coordinate-measuring systems for large-size objects have led to a broadening of the range of instruments commercially available (cf. Estler et al. 2002). These coordinate measurement instruments can be grouped into two categories: centralised and distributed systems (Maisano et al. 2008). A centralised instrument is a measuring system constituted by a single hardware element that, in performing a measurement, may require one or more ancillary devices such as, typically, a computer. An example of a centralised instrument is a laser tracker that makes use of a spherically mounted reflector (SMR) to measure the spatial coordinates of a point and that needs to be connected to a monitor of environmental conditions and to a computer. A distributed instrument is a collection of separate independent elements whose separately gathered measurement information needs to be processed jointly in order for the system to determine the coordinates of a point. A single element of the system typically cannot provide measurements of the coordinates of a point when standing alone. Precursors of these apparatuses can be identified in wireless indoor networks of sensors for automatic detection of object location (cf. Liu et al. 2007). These networks can be deployed for inspection tasks in manufacturing operations once their trueness has been increased.
The term trueness is defined in BS ISO 5725-1:1994 as 'the closeness of agreement between the average value obtained from a large series of test results and an accepted reference value' (Section 3.7).

When inspecting parts and assemblies having large dimensions, it is often more practical or convenient to bring the measuring system to the part rather than vice versa, as is typically the case on a smaller scale. Therefore, instruments for the inspection of large-size objects are usually portable. In performing a measurement task, a single centralised instrument, say a laser tracker, can then be deployed in a number of different positions, which can also be referred to as stations. By measuring some fixed points when changing station, the work envelope of the instrument can be significantly enlarged, enabling a single centralised instrument to be used for the inspection of parts significantly larger than its original work envelope. To illustrate this concept, Figure 1(a) displays the top view of three geometrical solids: a cylinder, a cube and an octahedron (specifically, a hexagonal prism).

Figure 1. Centralised and distributed measurement systems.

These solids are inspected by a single centralised instrument, such as a laser tracker, which is moved across different positions (1, 2, … in the figure), from each of which the coordinates of the points P1, P2 and P3 are also measured. In this respect, a single centralised system appears comparable with a distributed system, whose inherent multi-element nature enables work envelopes of any size to be covered, provided that a sufficient number of elements are chosen. This characteristic of a measuring system of adapting itself to suit the scale of a measuring task is often referred to as scalability (cf. Liu et al. 2007). The concept above can therefore be synthesised by saying that a centralised system is essentially scalable in virtue of its portability, whereas a distributed system is scalable due to its intrinsic modularity.

With a single centralised instrument, measurement tasks within a working envelope, however extended, cannot be performed concurrently but only serially. Each measurement task to be performed at a certain instant in time needs a dedicated centralised instrument. This is shown in Figure 1(a), where the cylinder is measured at the current instant with the instrument in position 2, whereas the hexagonal prism is going to be measured at a future instant, when the instrument will be placed in the next position. With a distributed system this limitation does not hold: concurrent measurement tasks can be performed provided that each of the concurrent tasks has a sensor or subgroup of sensors of the distributed instrument dedicated to it at a specific instant. In Figure 1(b), the same three objects considered in the case of a centralised instrument are concurrently inspected using a distributed system constituted by six signal-transmitting elements (1, 2, …, 6) and three probes, each carrying two signal-receiving elements whereby the coordinates of the probe tips are calculated. This characteristic of distributed systems is especially advantageous when concurrently tracking the position of multiple large-size components during assembly operations. The sole way of performing the same concurrent operation with a centralised system would require the availability and use of more than a single centralised instrument (laser tracker, for instance), with potentially detrimental economic consequences for the manufacturing organisation in terms of increased fixed assets, maintenance costs and increased complexity of the logistics.
A number of different distributed systems have been developed recently, some as prototypes for research activities (cf., for instance, Priyantha et al. 2000; Piontek et al. 2007), some others with a level of maturity sufficient for them to be made commercially available (cf., for instance, Welch et al. 2001; Maisano et al. 2008). In this second case, the protection of intellectual property (IP) rights prevents users' transparent access to the details of the internal mechanisms and of the software implemented in the systems. This may constitute a barrier to a full characterisation of the performance of the equipment. This investigation endeavours to provide better insight into the performance of such systems by using widespread statistical techniques. The main objective is therefore not to criticise or evaluate the specific instrument considered hereafter, but to demonstrate the use of techniques that may be beneficially deployed also on other distributed systems. In particular, the effect of discretionary set-up parameters on the variability and stability of the measurement results has been analysed. In the next section the main characteristics of the Metris iGPS, the instrument considered, are described. A cone-based mathematical model of the system is then presented, followed by a description of the experimental set-up and an analysis of the results of the tests. Conclusions are drawn thereafter.
Physical description of the instrument

The instrument used in this study is the iGPS (alias indoor GPS) manufactured by Metris. The description of the system provided in this section is derived from publicly available information. The elements constituting the system are a set of two or more transmitters, a number of wireless sensors (receivers) and a unit controlling the overall system and processing the data (Hedges et al. 2003; Maisano et al. 2008). Transmitters are placed in fixed locations within the volume where measurement tasks are performed. Such a volume is also referred to as a workspace. Each transmitter has a head rotating at a constant angular velocity, which is different for each transmitter, and radiates three light signals: two infrared fan-shaped laser beams generated by the rotating head, and one infrared strobe signal generated by light-emitting diodes (LEDs). The LEDs flash at constant time intervals, ideally in all directions but practically in a multitude of directions. Each of these time intervals is equal to the period of revolution of the rotating head on which the LEDs are mounted. For any complete revolution of the rotating head, a single flash is emitted virtually in all directions. In this way, the LED signals received by a generic sensor from a transmitter constitute a periodic train of pulses in the time domain, where each pulse is symmetric (cf. Hedges et al. 2003, column 6). The rotating fan-shaped laser beams are tilted by two pre-specified opposite angles, φ1 and φ2 (e.g. −30° and +30°, respectively), from the axis of rotation of the head. These angles are also referred to as slant angles. The fact that the angular velocity of the head is different for different transmitters enables each transmitter to be distinguished (Sae-Hau 2003).
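As an aside to this last point, the short R sketch below shows one way a received strobe-pulse train could be matched to a table of nominal head rotation rates in order to tell transmitters apart. It is only an illustrative sketch: the rotation rates, the names and the nearest-match rule are assumptions made here for illustration, not values or logic taken from the iGPS.

```r
# Hypothetical nominal rotation rates (Hz) of four transmitter heads: each head spins at
# its own constant angular velocity, so the strobe period identifies the transmitter.
nominal_hz <- c(T1 = 40.0, T2 = 41.7, T3 = 43.3, T4 = 45.0)

identify_transmitter <- function(pulse_times_s) {
  period <- mean(diff(pulse_times_s))              # observed strobe period (s)
  names(which.min(abs(1 / nominal_hz - period)))   # nearest nominal period wins
}

identify_transmitter(c(0.000, 0.024, 0.048, 0.072))  # period 0.024 s -> "T2" (about 41.7 Hz)
```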
A schematic representation of a transmitter at the instant t1, when the first fanned beam L1 intersects the sensor in position P, and at the instant t2, when the second fanned beam L2 passes through P, is shown in Figure 2, where two values for the slant angles are also shown. Ideally, the shape of each of the fanned beams should be adjustable to adapt to the characteristics of the measurement tasks within a workspace. Although two beams are usually mounted on a rotating head, configurations with four beams per head have also been reported (Hedges et al. 2003, column 5). To differentiate between the two fanned beams on a transmitter, their time position relative to the strobe signal is considered (see Figure 2).

Figure 2. Schema of a transmitter at the instants t1 and t2 when the first and second fanned beams, respectively, intersect the position (P) of a sensor. −30° and +30° are two arbitrary values of the slant angle.

The fanned beams are often reported as planar (Liu et al. 2008; Maisano et al. 2008), as depicted in Figure 2. Yet, the same beams, when emitted from the source, typically have a conical shape that is first deformed into a column via a collimating lens and then into a fan shape via a fanning lens (Hedges et al. 2003, column 6). It is believed that only an ideal chain of deformations would transform completely and perfectly the initial conical shape into a plane. For these reasons, the final shape of the beam is believed to preserve traces of the initial shape and to be more accurately modelled with a portion of a conical surface rather than a plane. Each of the two conical surfaces is then represented by a vector, called a cone vector, that is directed from the apex to the centre of the circular directrix of the cone. The angle between a cone vector and any of the generatrices on the cone surface is called the cone central angle. This angle is designated by α1 and α2 for the first and the second beams, respectively. The apex of each cone lies on the axis of rotation of the spinning head.

Figure 3. Schema of a shaped laser beam with two portions of conical surfaces to show the central angle α2 and the slant angle φ2.

In Figure 3, a schema of the portion of conical surface representing a rotating laser beam is displayed. In this figure, two portions of conical surfaces are shown to illustrate α2 and φ2 (φ2 > 0, having established counterclockwise angle measurements around the x-axis as positive). The angular separation between the optical axes of the two laser modules in the rotating head, when observed from the direction of the rotational axis of the spinning head, is denoted by θoff. The rotation of the head causes each of the cone surfaces, and therefore their cone vectors, to revolve around the same axis. The angular position of the cone vector at a generic instant is denoted by θ1(t) and θ2(t) for the first and second fanned beams, respectively. These angles are also referred to as scan angles and are defined relative to the strobe LED synchronisation signal, as illustrated below.

Wireless sensors are made of one or more photodetectors and a wireless connection for the transmission of the positional information to the central controlling unit. The use of the photodetectors enables the conversion of a received signal (stroboscopic LED, first fanned laser, second fanned laser) into the instant of time of its arrival (t0, t1 and t2 in Figure 2). The time intervals between these instants can then be converted into measurements of scan angles from the knowledge of the angular velocity of the head of each transmitter (ω in Figure 2). It is expected that θ1 = ω·(t1 − t0) and that θ2 = ω·(t2 − t0).
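A minimal R sketch of this timing-to-angle conversion follows; it simply applies the two expressions above, and all numerical values in it are hypothetical.

```r
# Convert pulse arrival times into scan angles: theta_k = omega * (t_k - t0),
# where omega is the (known) angular velocity of the transmitter head in rad/s.
scan_angles <- function(t0, t1, t2, omega) {
  c(theta1 = omega * (t1 - t0),
    theta2 = omega * (t2 - t0))
}

omega <- 2 * pi * 40                                        # e.g. a head spinning at 40 rev/s
scan_angles(t0 = 0, t1 = 0.004, t2 = 0.009, omega = omega)  # scan angles in radians
```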
At the instant t0 when the LED signal reaches the generic position P, the same LED signal also flashes in any direction. Therefore, at the very same instant t0, the LED fires also in the reference direction from which the angles in the plane of rotation are measured (i.e. θ1 = θ2 = 0). In this study, any plane orthogonal to the axis of rotation is referred to as a plane of rotation. For any spherical coordinate system having the rotational axis of the transmitter as the z-axis and the apex common to the aforementioned conical surfaces as the origin, the angle θ1 swept by the cone vector of the first fanned beam in the time interval t1 − t0 is connected with the azimuth of P measured from any possible reference direction x established in the xy-plane, which is the plane of rotation passing through the common apex of the conical surfaces. From a qualitative point of view, the elevation (or the zenith) of P can be related to the quantity ω·(t2 − t1). By analogy with Figure 2, it is argued that, also in the case of conical fan-shaped beams, when the elevation (or zenith) of P is increasing (decreasing), the time interval t2 − t1 is also increasing. Vice versa, the reason why one time interval t2 − t1 is larger than another can only be found in the fact that the position of the sensor in the first case has a higher elevation than in the second.

In the most typical configuration, two receivers are mounted on a wand or a bar in calibrated positions. A tip of the wand constitutes the point whose location is calculated based on the signals received by the two sensors. When the receivers are mounted on a bar, the bar is often referred to as a vector bar. If such a receiver-mounted bar is short, say with a length between 100 and 200 mm, it is called a mini vector bar. These devices are equipped with firmware providing processing capabilities. The firmware enables the computation of the azimuth and elevation of the wand or bar tip for each of the spherical reference systems associated with each of the transmitters in the system. This firmware is called a position computation engine (PCE). A vector bar therefore acts as a mobile instrument for probing points, as shown in the schema of Figure 1(b). More recently, receiving instruments with four sensors have been developed, enabling the user to identify both the position of the tip and the orientation of the receiving instrument itself.

The role of the bundle adjustment algorithms in the indoor GPS

The computation of the azimuth and elevation of the generic position P in the spherical reference system of a generic transmitter enables the direction of the oriented straight line l from the origin (the apex of the cones) to P to be identified. However, it is not possible to determine the location of P on l; in other words, it is not possible to determine the distance of P from the origin. Therefore, at least a second transmitter is necessary to estimate the position of P in a reference system {Uref} arbitrarily predefined by the user. In fact, assuming that the position and orientation of the ith and jth transmitters in {Uref} are known, the coordinates of the generic point on li and on lj can be transformed from the spherical reference systems of the transmitters to the common reference system {Uref} (cf. Section 2.3 in Craig 1986). Then, P can be estimated with some nonlinear least squares procedure, which minimises the sum of the squared distances between the estimate of the coordinates of P in {Uref} and the generic points on li and lj. Only in an ideal situation would li and lj intersect. In any measurement result, the azimuth and elevation are only known with uncertainty (cf. Sections 2.2 and 3.1 in JCGM 2008). Very little likelihood exists that these measured values for li and lj coincide with the 'true' unknown measurands. The same very little likelihood applies therefore to the existence of an intersection between li and lj. When adding a third, kth transmitter, qualitative geometrical intuition supports the idea that the distances of the optimal P from each of the lines li, lj and lk are likely to be less variable, until approaching and stabilising around a limit that can be considered typical of the measurement technology under investigation. Increasing the number of transmitters is therefore expected to reduce the variability of the residuals. The estimation of the coordinates of P, when the position of the transmitters is known, is often referred to as a triangulation problem (Hartley and Sturm 1997; Savvides et al. 2001).
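The idea of this triangulation step can be illustrated with the R sketch below. It is not the algorithm built into the system: it assumes that each transmitter contributes a ray with known origin (the cone apex expressed in {Uref}) and a direction derived from the measured azimuth and elevation, and it estimates P as the ordinary least-squares point minimising the sum of squared distances to the rays, a linear special case of the nonlinear procedures mentioned above.

```r
# Least-squares triangulation of a point from two or more rays.
# origins, dirs: n x 3 matrices, one row per transmitter; direction rows need not be unit length.
triangulate <- function(origins, dirs) {
  A <- matrix(0, 3, 3)
  b <- numeric(3)
  for (i in seq_len(nrow(origins))) {
    u <- dirs[i, ] / sqrt(sum(dirs[i, ]^2))  # unit direction of ray i
    Proj <- diag(3) - outer(u, u)            # projector onto the plane orthogonal to u
    A <- A + Proj                            # normal equations: (sum_i Proj_i) P = sum_i Proj_i o_i
    b <- b + Proj %*% origins[i, ]
  }
  drop(solve(A, b))                          # point minimising the summed squared distances
}

# Hypothetical example: two transmitters whose rays nearly, but not exactly, intersect.
o <- rbind(c(0, 0, 2), c(8, 0, 2))
u <- rbind(c(1, 1, -0.1), c(-1, 1, -0.1))
triangulate(o, u)
```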
If the position and orientation of the transmitters in {Uref} are not known, they need to be determined before the actual usage of the measurement system. To identify the position and orientation of a transmitter in {Uref}, six additional parameters need to be estimated (cf. Section 2.2 in Craig 1986). This more general engineering problem is often referred to as three-dimensional (3D) reconstruction and occurs in areas as diverse as surveying networks (Wolf and Ghilani 1997), photogrammetry and computer vision (Triggs et al. 2000; Lourakis and Argyros 2009). The estimation of three-dimensional point coordinates together with transmitter positions and orientations, to obtain a reconstruction which is optimal under a pre-specified objective function and an assumed error structure, is called bundle adjustment (BA). The objective or cost function describes the fitting of a mathematical model of the measurement procedure to the experimental measurement data. Most often, but not necessarily, this results in minimising the sum of the squares of the deviations of the measurement data from their values predicted with nonlinear functions of the unknown parameters (Triggs et al. 2000; Lourakis and Argyros 2009). A range of general-purpose optimisation algorithms, such as, for instance, those of Gauss–Newton and Levenberg–Marquardt, can be used to minimise the nonlinear objective function. Alternatively, significantly increased efficiency can be gained if these algorithms are adjusted to account for the sparsity of the matrices arising in the mathematical description of 3D reconstruction problems (Lourakis and Argyros 2009).

In the measurement system investigated, a BA algorithm is run in a set-up phase whereby the position and orientation of each transmitter in {Uref} are determined. Therefore, during the subsequent deployment of the system (measuring phase), the coordinates of a point are calculated using the triangulation methods mentioned above. However, as is typically encountered in commercial measurement systems, the BA algorithms implemented in the system are not disclosed completely to the users. This makes it difficult for both users and researchers to devise analytical methods to assess the effects of these algorithms on the measuring system. In this investigation, consideration is given to experimental design and statistical techniques to estimate the effect that decisions taken when running the built-in BA algorithm exert on measurement results.

Experimental set-up

Four transmitters were mounted on tripods and placed at a height of about two metres from floor level. The direction of the rotational axis of each transmitter spinning head was approximately vertical. Each of the four transmitters was placed at a corner of an approximate square of side about eight metres. A series of six different target fields, labelled I, II, III, IV, V and VI and consisting of 8, 9, 10, 11, 12 and 13 targets respectively, was considered during the BA procedure. Each of these fields was obtained by adding one target to the previous field, so that the first eight targets are common to all the fields, the first nine targets are common to the last five fields, and so on. A schema of this experimental configuration is shown in Figure 4. All the fields were about 1.2 m above floor level. The target positions were identified using an isostatic support mounted on a tripod, which was moved across the workspace. A set of the same isostatic supports was also available on a carbon-fibre bar that was used to provide the BA algorithm built into the system with a requested measurement of length (i.e. to scale the system). A distance of 1750 mm between two isostatic supports on the carbon-fibre bar was measured on a coordinate-measuring machine (CMM). The carbon-fibre bar was then placed in the central region of the workspace. The coordinates of the two targets 1750 mm apart were measured with the iGPS, and their 1750 mm distance was used to scale the system in all the target fields considered. In this way, the scaling procedure is not expected to contribute to the variability of the measurement results even when different target fields are used in the BA procedure.

Figure 4. Target fields I, II, III, IV, V and VI.

Figure 5 shows an end of the vector bar used in this set-up (the large sphere in the figure) while coupled with an isostatic support (the three small spheres) during the measurement of a target position on the carbon-fibre bar. The BA algorithm was run on each of these six target fields, so that six different numerical descriptions of the same physical positions and orientations of the transmitters were obtained. Six new target locations were then identified using the isostatic supports on the carbon-fibre bar mentioned above. Using the output of the BA executions, the spatial coordinates of these new target locations were measured. The approximate position of the six targets relative to the transmitters is shown in the schema of Figure 6. Each target measurement consisted of placing the vector bar in the corresponding isostatic support and holding it for about 30 s. This enabled the measurement system to collect and store about 1200 records of target coordinates in {Uref} for each of the six targets. In this way, however, the number of records for each target is different, because the measurement procedure was timed manually and could not be controlled precisely enough to prevent this from occurring.

Figure 5. Isostatic support identifying a target.

Figure 6. Target field when running the instrument.

Results

Each of the six target positions displayed in Figure 6 and labelled 1, 2, 3, 4, 5, 6 was measured using each of the six BA set-ups I, II, …, VI, giving rise to a grouping structure of 36 measurement conditions (cells). When measuring a target location, its three Cartesian coordinates in {Uref} are obtained. To reduce the complexity of the analysis from three-dimensional to mono-dimensional, instead of these coordinates the distance of the targets from the origin of {Uref} is considered.
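This data reduction can be sketched in R as follows. The data frame and its column names are hypothetical stand-ins for the stored records; for each record the distance from the origin of {Uref} is computed and a single result per target/set-up cell is then obtained by averaging, as discussed in the next section.

```r
# Hypothetical records of target coordinates in {Uref} (in mm) for two targets under one BA set-up.
set.seed(1)
records <- data.frame(
  target = factor(rep(1:2, each = 4)),
  setup  = factor("I"),
  x = rnorm(8, mean = 3000, sd = 0.05),
  y = rnorm(8, mean = 4000, sd = 0.05),
  z = rnorm(8, mean = 1200, sd = 0.05)
)

records$d <- with(records, sqrt(x^2 + y^2 + z^2))          # target-origin distance per record

aggregate(d ~ target + setup, data = records, FUN = mean)  # one measurement result per cell
```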
Central to this investigation is the estimation of the effect on the target–origin distance due to the choice of a different number of target points when running the BA algorithm. The target locations 1, 2, …, 6 do not identify points on a spherical surface, so they are at different distances from the origin of {Uref}, regardless of any possible choice of such a reference system. These target locations therefore contribute to the variability of the measurements of the target–origin distance, whereby the detection of a potential contribution of the BA set-ups to the same variability can be hindered. To counteract this masking effect, the experiment was carried out by selecting first a target location and then randomly assigning all the BA set-ups for that location to the sequence of tests. This was repeated for all six target positions. Such an experimental strategy introduces a constraint on a completely random assignment of the 36 measurement conditions to the run order. In the literature (cf. Chapter 27 in Neter et al. 1996, Chapter 16 in Faraway 2005, and Faraway 2006), this strategy is referred to as a randomised complete block design (RCBD). The positions of the targets 1, 2, …, 6 constitute a blocking factor identifying an experimental unit or block, within which the BA set-ups are tested. The BA set-ups I, II, …, VI constitute a random sample of all the possible set-ups that differ only in the choice of the location and number of points selected when running the BA algorithm during the system set-up phase. On the other hand, the analysis of the obvious contribution to the variability of the origin–target distance when changing the location of the targets would not add any interesting information to this investigation.
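The randomisation scheme just described can be sketched in R as follows; the labels and the seed are arbitrary, and the sketch only reproduces the structure of the run order, not the actual sequence used in the experiment.

```r
# Randomised complete block design: each target location is a block within which the
# order of the six BA set-ups is randomised independently.
set.seed(2)
setups <- c("I", "II", "III", "IV", "V", "VI")
run_order <- do.call(rbind, lapply(1:6, function(target_location) {
  data.frame(target = target_location, setup = sample(setups))
}))
head(run_order, 12)   # the first two blocks of the resulting test sequence
```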
These considerations lead to describing the experimental data of the RCBD with a linear mixed-effects statistical model, which is first defined and then fitted to the experimental data.

5.1 Mixed-effects models

The distance dij of the ith (i = 1, …, 6) target from the origin measured when using the jth (j = I, …, VI) BA procedure is modelled as the sum of four contributions: a general mean μ, a fixed effect τi due to the selection of the ith target point, a random effect bj due to the assignment of the jth BA set-up and a random error εij due to all those sources of variability, inherent in any experimental investigation, that it is not possible or convenient to control. This is described by the equation

$d_{ij} = \mu + \tau_i + b_j + \varepsilon_{ij}$.    (1)

In Equation (1) and hereafter, the Greek symbols are parameters to be estimated and the Latin symbols are random variables. In particular, the bj's have zero mean and standard deviation σb; the εij's have zero mean and standard deviation σ. The εij's are assumed to be independent and normally distributed random variables, i.e. εij ~ N(0, σ²). The same applies to the bj's, namely bj ~ N(0, σb²). The εij's and the bj's are also assumed to be independent of each other. Under these assumptions, the variance of dij, namely σd², is given by the equation

$\sigma_d^2 = \sigma_b^2 + \sigma^2$.    (2)

Using the terminology of the 'Guide to the expression of uncertainty in measurement' (cf. Definition 2.3 in JCGM 2008), σd is the standard uncertainty of the result of the measurement of the origin–target distance.

As pointed out in the previous section, the number of determinations of the target–origin distance that have been recorded is different for each of the 36 measurement conditions. For simplicity of the analysis, the number of samples gathered in each of these conditions has been made equal by neglecting the samples in excess of the original minimum sample size over all the cells. This resulted in considering 970 observations in each cell. The measurement result provided by the instrument in each of these conditions, and used as a realisation of the response variable dij in Equation (1), is then defined as the sample mean of these 970 observations. There is a single measurement result in each of the 36 cells. The parameters of the model, i.e. μ, τi, σb and σ, have been estimated by the restricted maximum likelihood (REML) method as implemented in the lme() function of the package nlme of R, the free software environment for statistical computing and graphics (cf. R Development Core Team 2009). More details about the REML method and the package nlme are presented in Pinheiro and Bates (2000).
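A self-contained sketch of such a fit is given below. The data are simulated, with arbitrary values for μ, the τi's, σb and σ, so only the structure of the call mirrors the model of Equation (1); it is a sketch of the approach described above, not a reproduction of the authors' analysis.

```r
library(nlme)

# Simulated stand-in for the 36 cell means (arbitrary values, in mm): one distance per
# combination of target (block, fixed effect) and BA set-up (random effect), as in Equation (1).
set.seed(3)
dat <- expand.grid(target = factor(1:6),
                   setup  = factor(c("I", "II", "III", "IV", "V", "VI")))
tau <- rnorm(6, sd = 300)     # target-to-target differences in the true distances
b   <- rnorm(6, sd = 0.16)    # random effects of the BA set-ups
dat$d <- 5000 + tau[dat$target] + b[dat$setup] + rnorm(nrow(dat), sd = 0.015)

# Initial mixed-effects model of Equation (1), fitted by REML (the default in lme()).
m0 <- lme(d ~ target, random = ~ 1 | setup, data = dat)
summary(m0)   # the 'setup' intercept SD estimates sigma_b; the residual SD estimates sigma
ranef(m0)     # BLUPs of the b_j, the quantities plotted later in Figure 8(a)
```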
The RCBD assumes that there is no interaction between the block factor (target locations) and the treatment (BA set-up). This hypothesis is necessary so that the variability within a cell, represented by the variance σ² of the random errors, can be estimated when only one experimental result is present in each cell. In principle, such an estimation is enabled by considering the variation of the deviations of the data from their predicted values across all the cells. This would estimate the variability of an interaction effect, if one were present. If an interaction between target locations and BA set-ups actually exists, the estimate σ̂ of σ provided in this study would account for both interaction and error variability jointly, and it would not be possible to separate the two components. Therefore, from a practical point of view, the more the hypothesis of no interaction is violated, the more σ̂ overestimates σ.

After fitting the model, an assessment of the assumptions on the errors has been performed on the realised residuals, i.e. the deviations of the experimental results from the results predicted by the fitted model for the corresponding cells (ε̂ij = dij − d̂ij). The realised residuals plotted against the positions of the targets do not appear consistent with the hypothesis of constant variance of the errors. In fact, as shown in Figure 7(a), the variability of the realised residuals standardised by σ̂, namely (dij − d̂ij)/σ̂, seems different in different target locations. For this reason, an alternative model of the data has been considered which accounts for the variance structure of the errors. This alternative model is defined as the initial model (see Equation (1)), bar the variance of the errors, which is modelled as different in different target locations, namely:

$\sigma_i = \sigma_{\mathrm{new}} \times \delta_i, \qquad \delta_1 = 1$.    (3)

From Equation (3) it follows that σnew is the unknown parameter describing the error standard deviation in target position 1, whereas the δi's (i = 2, …, 6) are the ratios between the error standard deviation in the ith target position and that in the first. The alternative model has been fitted using one of the variance-function classes provided in the package nlme together with the function lme(), so that σnew and the δi's are also optimised jointly with the other model parameters (μ, τi and σb) by the application of the REML method (Section 5.2 in Pinheiro and Bates 2000). For the alternative model, diagnostic analyses of the realised residuals were not in denial of its underlying assumptions. The standardised realisations of the residuals, i.e. (dij − d̂ij)/σ̂i, when plotted against the target locations (Figure 7(b)), no longer appear to exhibit different variances in different target locations, as was the case in the initial model (Figure 7(a)). The same standardised realisations were also found not to exhibit any significant departure from normality.

Figure 7. Realisations of the standardised residuals (dimensionless) grouped by target positions for the initial and the alternative mixed-effects models.

The fact that all the target fields have more than 50% of the targets in common, together with the fact that each field has been obtained by recursively adding a single target to the current field, may cause the experimenters to expect that the measurement results obtained when different target fields have been used in the BA procedure have some degree of correlation. If that were the case, then the experimental results should be in denial of the assumed independence of the random effects bj's. The random effects, like the errors, are unobservable random variables. Yet, algorithms have been developed to predict the realisations of these unobservable random effects on the basis of the experimental results and their assumed model (the equations above with the pertinent description). The predictor used in this investigation is referred to as the best linear unbiased predictor (BLUP). It has been implemented in nlme and is described, for instance, in Pinheiro and Bates (2000). The predicted random effects b̂j's for the model and the measurement results under investigation are displayed in Figure 8(a). To highlight a potential correlation between predicted random effects relative to target fields that differ by only one target, the b̂j+1's have been plotted against the b̂j's in Figure 8(b) (j = 1, …, 5). From a graphical examination of the diagrams of Figure 8 it can be concluded that, in contrast with what the procedure for establishing the target fields may lead the experimenter to expect, the measurement results do not appear to support a violation of the hypothesis of independence of the random effects. Similar values for the BLUPs, and therefore similar conclusions, can be drawn also for the initial mixed-effects model (the BLUPs for the initial model have not been reported for brevity).

As suggested in Pinheiro and Bates (2000) (Section 5.2, in particular), to support the selection between the initial and the alternative model, a likelihood ratio test (LRT) has been run using the generic function anova() implemented in R. A p-value of 0.84% led to the rejection of the simpler initial model (8 parameters to be estimated) when compared with the more complex alternative model (8 + 5 parameters to be estimated). The same conclusion would hold if the selection decision were made on the basis of the Akaike information criterion (AIC), also provided in the output of anova() (see Pinheiro and Bates 2000 for more about the AIC).
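Continuing the sketch above (the data frame dat and the fit m0 from the previous block are assumed), the alternative model of Equation (3) and the comparison just described can be reproduced with the varIdent() variance function of nlme and the generic anova(). varIdent() is one plausible choice among the variance-function classes mentioned above; the paper does not state which class was actually used.

```r
library(nlme)

# Alternative model: same fixed and random effects, but a different error standard deviation
# in each target location, sigma_i = sigma_new * delta_i with delta_1 = 1 (Equation 3).
m1 <- update(m0, weights = varIdent(form = ~ 1 | target))

summary(m1)                                             # sigma_new and the other parameters
coef(m1$modelStruct$varStruct, unconstrained = FALSE)   # estimated delta_i for targets 2-6

# Likelihood ratio test and AIC comparison between the initial and the alternative model;
# both fits use REML with identical fixed effects, so the comparison is legitimate.
anova(m0, m1)

# Standardised residuals by target location, analogous to Figure 7(b).
plot(m1, resid(., type = "pearson") ~ fitted(.) | target)
```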
This model selection bears significant practical implications. From a practitioner's point of view, in fact, selection of the alternative model means that the random errors have significantly different variances when measuring targets in different locations of the workspace. The workspace is not homogeneous: there are regions where the variability of the random errors is significantly lower than in others. This also means that a measurement task can potentially be designed so that this measuring system performs it satisfactorily in some regions of its workspace but not in others. REML estimates of the parameters that have practical implications are as follows:

$\hat{\sigma}_b = 160.7\ \mu\mathrm{m}$;    (4)

$\hat{\sigma} = 14.28\ \mu\mathrm{m};\quad \hat{\delta}_2 = 0.2625;\ \hat{\delta}_3 = 0.8599;\ \hat{\delta}_4 = 0.3706;\ \hat{\delta}_5 = 0.1260;\ \hat{\delta}_6 = 0.5446$.    (5)

Figure 8. BLUPs of the random effects for each target field and the graphically insignificant autocorrelation between BLUPs of random effects associated with consecutive target fields.

Figure. Random locations of the same item storage throughout the system.

Figure 5. Three different configurations of out-spiral conveyors (Type A, B and C, respectively).

Table 1. The impact of the three variables through simulation.

Test | Conveyor speed (m/s) | Sequencing algorithm | Output spiral conveyor | Time for first item out (s) | Time for last item out (s) | Gap between the first item and the last item (s)
1 | 0.4 | NN | Type A | 50 | 367 | 317
2 | 0.4 | NN | Type B | 43 | 243 | 200
3 | 0.4 | NN | Type C | 43 | 174 | 131
4 | 0.4 | SL | Type A | 34 | 239 | 205
5 | 0.4 | SL | Type B | 35 | 176 | 141
6 | 0.4 | SL | Type C | 35 | 128 | 93
7 | 0.6 | NN | Type A | 35 | 247 | 212
8 | 0.6 | NN | Type B | 30 | 164 | 134
9 | 0.6 | NN | Type C | 30 | 118 | 88
10 | 0.6 | SL | Type A | 25 | 167 | 142
11 | 0.6 | SL | Type B | 25 | 125 | 100
12 | 0.6 | SL | Type C | 25 | 91 | 66

… achieving this, a material-handling solution should be constructed based on optimal algorithms in order to generate the shortest pick-up sequence and route for the material-handling equipment; this will maximise the efficiency and minimise the total material-handling cost in the warehouse.

Modelling a novel distribution system capability

The designed warehouse described above can be modelled as a discrete event system in which the system's behaviour is stochastic rather than deterministic, as a customer order arrives randomly with a random order size each time. Computer simulation was used to provide an aid for evaluating alternative designs by giving a visual insight into material flows, work-in-process levels and possible bottlenecks which may occur in the designed system. The simulation model was also used to observe dynamic changes of system performance by altering a number of system variables. These variables include the number of collection points, the different configurations of spiral conveyors shown in Figure 5, varying SKU levels, different loading/unloading tote sequencing algorithms and varying speeds of conveyors. Several issues were considered when modelling the warehousing system:

(1) Determining the resources and their characteristics that mostly affect the system performance;
(2) Formulating a description that represents these resources and their relationships;
(3) Determining the performance measures of interest under given scenarios.

Figure 6. The arrival time as a function of the number of item deliveries under different testing scenarios.

To simplify the simulation, it was assumed that the picking system of item supply and the replenishment system occur independently. The simulation results do not include the effect of the picking and replenishment systems in this case study. The modelling and simulation process was divided into three steps: (1) system model development; (2) experimental frame development; (3) data analysis.

The system model describes the physical elements and their logical interrelationships by placing and interconnecting a thread of simulation modules with specific rules, to form a model of the system from its engineering description. The experimental frame defines the experimental conditions, including the analyst's specification, under which the model is run to generate specific output data. The experimental conditions were
specified externally through a developed platform of user-friendly MS Excel worksheets, which interact with the developed simulation model using Witness Once the system model and the experimental frame have been defined, they can be linked and executed by Witness to generate output data files, which can be automatically exported to be displayed on external MS Excel worksheets in the form of both statistical data records and plots for analysis Table shows a study on the impact of system performance by comparing simulation results based on the alternative designs using three major variables These variables are two different conveyor speeds (0.4 m/s and 0.6 m/s), two different sequencing algorithms: nearest-neighbour (NN) and shortest-leg (SL), and three different types of output spiral conveyors (Type A, B and C) One of key design tasks in achieving a better system performance is to reduce the waiting time of all items per order delivered from the first arrival of an item(s) to the last arrival of an item(s) at a pick location As indicated in Table 1, the system, which is set at the same conveyor speed and the same sequencing algorithm, will have the longest waiting time using Type A spiral conveyor and the shortest waiting time using Type C spiral conveyor With the design using the SL sequencing algorithm, the system will have a shorter waiting time compared to the design using the NN sequencing algorithm at the same conveyor speed By increasing the overall conveyor speed, the waiting time can be significantly reduced; this is a particular case with the design using the SL sequencing algorithm and Type C spiral conveyor The comparison can be further illustrated in Figure 6(a), 6(b), 6(c) and 6(d) respectively Each figure shows the distribution of arrival times as a function of the varying number of items per order delivered from the first arrival to the last arrival at an assigned pick location for a specified customer The system parameters for each testing scenario are shown in Table It can be seen in Figure 6(a) that the arrival time is widely distributed in a range between 30 s and 240 s through test using Type A spiral conveyor The gap between the first arrival time and the last arrival time can be narrowed through test using Type B spiral conveyor shown in Figure 6(b); and this gap can be further narrowed through Test using Type C spiral conveyor shown in Figure 6(c) By jointly studying all the tests, the best result was obtained from Test 12 shown in Figure 6(d), it indicates that the arrival time is normally distributed approximately between 20 s and 90 s, in which about 95% arrival times take place between 30 s and 80 s for each customer to collect all the items Discussion and conclusions This paper draws a framework of designing a newgeneration warehouse that is expected to cope with the increasing challenges in logistics in future The simulation model and results were used to illustrate the level of capability that the conceptual system may offer A case study investigation with a major UK mail order company has indicated potential advantages in staffing levels, response rates and volume throughput capability This capability can be enhanced by implementing a number of emerging technologies such as RFID tags and wireless communication networks on its existing automated distribution centres A mechanism by integrating these IT technologies into the proposed warehousing system is presented in this paper The new design has shown advantages by being more compact, requiring greatly 
reduced numbers of staff and yet outperforming the design of the existing warehouse at the company Despite these potential improvements, the research to date suggests that the application of bar-coding systems will not be replaced by RFID systems overnight The widespread adoption of using RFID-based systems hinges on a significant drop in costs and development of standards for RFIDrelated products and communication networks (Cao et al 2009; Jun et al 2009) A costing study of a modular automated warehousing system to replace a conventional automated warehousing system at the mail order company has also been researched The design developed during this exercise was estimated to be approximately 15% more expensive to build and install for the new system Acknowledgements The authors wish to thank Weijun Li, previously at the University of Bath, for his contribution to this project The authors also gratefully acknowledge the extensive support provided by the industrial partners to this project The work was partially carried out at the IdMRC, Department of Mechanical Engineering, University of Bath, UK References Ashayeri, J and Gelders, L.F., 1985 Warehouse design optimisation European Journal of Operational Research, 21 (3), 285–294 Baker, P and Halim, Z., 2007 An exploration of warehouse automation implementation: cost, service and flexibility issues Supply Chain Management, 12 (2), 129–138 International Journal of Computer Integrated Manufacturing Berenyi, Z and Charaf, H., 2008 Retrieving frequent walks from tracking data in RFID-equipped warehouses In: 2008 Conference on Human System Interactions, Crakow, Poland, and 2, 669–673 Brusey, J and McFarlane, D.C., 2009 Effective RFID-based object tracking for manufacturing International Journal of Computer Integrated Manufacturing, 22 (7), 638–647 Cao, H., Folan, P., Mascolo, J., and Browne, J., 2009 RFID in product lifecycle management: a case in the automotive industry International Journal of Computer Integrated Manufacturing, 22 (7), 616–637 Chang, T.H., Fu, H.P., and Hu, K.Y., 2007 A two-sided picking model of M-AS/RS with an aisle-assignment algorithm International Journal of Production Research, 45 (17), 3971–3990 Chow, H.K.H., Choy, K.L., and Lee, W.B., 2005 Design of a RFID-based resource management system for warehouse operation In: 3rd IEEE International Conference on Industrial Informatics (INDIN), Perth, Australia, 785–790 Connolly, C., 2008 Warehouse management technologies Sensor Review, 28 (2), 108–114 Gan, O.P., Zhang, J.B., Ng, T.J., and Wong, M.M., 2006 Discrete event system modelling and dynamic service allocation for RFID networks In: 32nd Annual Conference of the IEEE-Industrial-Electronics-Society, Paris, France, 07–10, 2127–2132 Gu, J., Goetschalckx, M., and McGinnis, L.F., 2007 Research on warehouse operation: a comprehensive review European Journal of Operational Research, 177, 1–21 Huang, G.Q., Zhang, Y.F., and Jiang, P.Y., 2008 RFIDbased wireless manufacturing for real-time management of job shop WIP inventories International Journal of Advanced Manufacturing Technology, 36 (7–8), 752–764 Huang, G.Q., Wright, P.K., and Newman, S.T., 2009 Wireless manufacturing: a literature review, recent development, and case studies International Journal of Computer Integrated Manufacturing, 22 (7), 1–16 Jordan-Smith, M., 2008 Looking back Warehouse, 17 (1), 20–21 573 Jun, H.B., Shin, J.H., Kim, Y.S., Kirtsis, D., and Xirouchakis, P., 2009 A framework for RFID applications in product lifecycle management International Journal of 
Computer Integrated Manufacturing, 22 (7), 595–615 Kator, C., 2008 Top 20 systems suppliers, Modern materials handling Available online at: http://www.mmh.com/ article/CA6546213.html Lee, Y.M., Cheng, F., and Leung, Y.T., 2004 Exploring the impact of RFID on supply chain dynamics In: Proceedings of the 2004 Winter Simulation Conference, 1–2, 1145– 1152 Lian, X.Q., Zhang, X.L., Weng, Y.F., and Duan, Z.G., 2007 Warehouse logistics control and management system based on RFID In: 2007 IEEE International Conference on Automation and Logistics, Jinan, China, 1–6, 2907– 2912 Maropoulos, P., Chauve, M., and Da-Cunha, C., 2008 Review of trends in production and logistic networks and supply chain evaluation In: 1st International Conference on Dynamics in Logistics, Bremen, Germany, 39–55 Martinez-Sala, A.S., Egea-Lopez, E., Garcia-Sanchez, F., and Garcia-Haro, J., 2009 Tracking of returnable packaging and transport units with active RFID in the grocery supply chain Computers in Industry, 60 (3), 161–171 Poon, T.C., et al., 2009 A RFID case-based logistics resource management system for managing order-picking operations in warehouses Expert Systems with Applications, 36 (4), 8277–8301 Preuveneers, D and Berbers, Y., 2009 Modelling human actors in an intelligent automated warehouse In: 2nd International Conference in Digital Human Modelling, San Diego, USA, 5620, 285–294 Ramudhin, A., et al., 2008 A generic framework to support the selection of an RFID-based control system with application to the MRO activities of an aircraft engine manufacturer Production Planning and Control, 19 (2), 183–196 Te Lindert, M., 2007 Een order in een minuut TransportþOpslag, (8), 18–20 International Journal of Computer Integrated Manufacturing Vol 23, No 6, June 2010, 574–583 An automatic method of measuring foot girths for custom footwear using local RBF implicit surfaces Yihua Dinga, Jianhui Zhaoa*, Ravindra S Goonetillekeb, Shuping Xiongc, Zhiyong Yuana, Yuanyuan Zhanga and Chengjiang Longa a Computer School, Wuhan University, Wuhan, Hubei, 430079, PR, China; bDepartment of Industrial Engineering and Logistics Management, Hong Kong University of Science and Technology, Hong Kong; cDepartment of Industrial Engineering and Logistics Management, Shanghai Jiao Tong University, Shanghai, 200240, PR China (Received September 2009; final version received February 2010) Three-dimensional point cloud data of a foot are used to determine the critical dimensions for making custom footwear However, automatic and accurate measurement of dimensions, especially girths, is an issue of concern to many designers and footwear developers Existing methods for measuring girths are primarily based on points or generated triangles, but their accuracy is heavily dependent on the density of the point cloud data In this paper we present the use of the Radial Basis Function (RBF) surface modelling technique for measuring girths as it has the advantage of being able to operate on unorganised three-dimensional points, so that the generated surface passes through every scanned point, while repairing incomplete meshes To overcome the high computational expense of the RBF method, local surface recovery, octree division and combination, inverse power method and improved Cholesky factorisation are used The girth measurements obtained from adopting these approaches are compared against the existing measurement methods Experimental results demonstrate that the local RBF implicit surface can provide more stable and accurate measurements using relatively 
less time, proving its value in custom footwear manufacture Keywords: 3D scan; point cloud; foot; RBF; girth; anthropometry; footwear; customisation; scanning Introduction Computer-based methods have gained popularity in the shoe making industry (Jatta et al 2004, Paris and Handley 2004, Vigano et al 2004) primarily owing to the availability of three-dimensional (3D) digitalisation technologies for automatic measurement of critical dimensions Lengths, widths, heights and girths, are very important for the manufacture of custom footwear and hence it is no surprise that many researchers have developed algorithms to automatically determine them Bunch et al (1988) used a 3D digitiser to obtain the coordinates of 34 landmarks on the foot surface from which a series of foot dimensions were calculated using a computer program Similarly, Liu et al (1999) obtained the coordinates of 26 points on the surface of the foot and leg using an electromagnetic digitising device, and thereafter computed 23 variables that comprised heights, lengths, widths and angles Many others (Butdee 2002, Klassen et al 2004, Yahara et al 2005, Hu et al 2007) have used differing techniques to obtain dimensions such as foot length, foot width, rear foot width, contact area, arch height, and arch angle Compared with the linear dimensions, girth measures *Corresponding author Email: jianhuizhao@whu.edu.cn ISSN 0951-192X print/ISSN 1362-3052 online Ó 2010 Taylor & Francis DOI: 10.1080/09511921003682648 http://www.informaworld.com are more difficult to determine as the digitised points are only discrete samples of the continuous surface To overcome the discreteness, Xu et al (2004) constructed triangular meshes from the unorganised point cloud, generated continuous multiple lines, and determined the girth as the sum of all the line lengths The weakness of this method is that the accuracy is heavily dependent on the density of triangles Witana et al (2006) proposed a somewhat different technique by projecting the 3D points on the foot surface nearby the plane of measurement onto one two-dimensional (2D) plane, and then using a convex hull of the points to determine the girth This method was aimed at simulating a tape measurement of the foot wherein there are apertures between the tape and the skin surface The method works on point cloud data but the measurement can be affected by the density of the 3D points as well Zhao et al (2008) generated a smooth surface from the point cloud data using Non-Uniform Rational B-Splines (NURBS), and then the intersections between the NURBS surface and the tape plane was determined The length of the intersection curve was taken as the simulated girth measurement International Journal of Computer Integrated Manufacturing However, the NURBS method has two limitations: control points of the NURBS surface should first be ordered and thus the method cannot be directly used on unorganised 3D points; the NURBS surface only approximates, but does not pass through each control point even though the scanned points are on the surface of the foot Therefore, a more suitable surface modelling approach, without such limitations, should be developed for the determination of girths from 3D points As an important interpolation tool for point cloud data, the implicit representation of objects’ shapes with Radial Basis Function (RBF) offers a unified framework for several problems such as surface reconstructing, smoothing and blending, and has attracted attention The RBF methods are a series of exact interpolation 
techniques, conceptually similar to fitting a rubber membrane through the measured sample values while minimising the total curvature of the surface The thin-plate spline RBF (Carr et al 1997) was used to deal with the problem of interpolating incomplete surfaces in 3D medical graphics, and the defects of a skull were repaired Then they applied polyharmonic RBFs to reconstruct smooth, manifold surfaces from point cloud data and to repair incomplete meshes (Carr et al 2001) Ohtake et al (2004) developed an adaptive RBF fitting procedure for highquality approximation of a set of points The computational time and the number of approximation centres depended not only on the size of the dataset but also on the geometric complexity They further proposed a hierarchical approach to 3D scattered data interpolation and approximation with compactly supported RBFs (Ohtake et al 2005) Their method can integrate the best qualities of scattered data fitting with locally and globally supported basis functions To generate smooth and seamless models from sparse, noisy, non-uniform, and low-resolution vision-based data sets, Dinh et al (2002) proposed a method of surface reconstruction Their method is based on a 3D implicit surface formulated as a sum of weighted RBFs, and the reconstructed surface is locally detailed yet globally smooth because RBFs are used to achieve multiple orders of smoothness For recovering surfaces from scattered 3D points, Marie et al (2006) proposed an approach combining generalised RBFs and Voronoi-based surface reconstruction Since RBF solutions are global in nature, processing millions of points may be beyond the capabilities of most present day personal computers (Ohtake et al 2003, Tobor et al 2004, Qiang et al 2007) Therefore, improvements to computational speed and evaluating RBFs are highly desirable especially with large chunks of data, and methods (Corrigan and Dinh 2005, Ho et al 2005, Ho et al 2006, Sui et al 2008, Lin et al 2009) are emerging 575 including intricate problem decomposition, speed up with GPU, and so on In this paper, RBF was used to reconstruct a surface from which the girths can be determined Considering that only partial surface of the object really needs to be recovered for measurement, we propose an approach to generate the RBF-based local implicit surface and thus avoid whole surface recovery, thereby reducing computational time To further decrease the computing cost, we present several related methods to improve the efficiency of RBF implementation The efficacy of the RBF method is thereafter compared with methods used before The paper is organised as follows Two published methods are introduced in Section 2, RBF-based reconstruction of local implicit surface is proposed in Section 3, approaches for efficiency improvements are presented in Section 4, girth simulation using RBF surface is described in section 5, and then the experimental results and analyses are presented in section Existing methods for girth measurement Girth measurements can be obtained in many different ways Two methods that have been reported are given below, and the results of these two methods are compared with the proposed RBF method (1) Project all 3D points nearby the measuring tape plane and then use the 2D convex hull of the projected points to simulate the girth measurement; (2) Compute the intersection lines between the tape plane and 3D triangles generated from scanned points, and then use the line segments to simulate girth measurement 2.1 Method with 2D convex hull 
Suppose the 3D tape plane, C, is defined by three control points in 3D space: c1(x1, y1, z1), c2(x2, y2, z2) and c3(x3, y3, z3). The distance between each point of the scanned point cloud and plane C is computed, and the 3D points on the surface of the foot near the measuring tape are projected onto a 2D plane. The convex hull of the projected 2D points is then obtained and used to simulate the girth measure. As illustrated in Figure 1, the convex hull of a set of 2D points is determined as follows:

(1) Find the point with the minimum Y value and take it as the starting point;
(2) Test each of the other points and find the one which makes the largest right-hand turn (with respect to the starting point) as the second point on the hull;
(3) Repeat the test for each of the other points (including the starting point) and find the one which makes the largest right-hand turn (with respect to the previous point on the hull) as the next point on the hull;
(4) Stop when the starting point is selected again;
(5) Save the T points that were found and set them as the boundary points of the convex hull, p1, p2, ..., pT.

Figure 1. 2D convex hull for projected points.

The girth can then be calculated as the sum of the Euclidean distances between neighbouring points on the convex hull:

\text{girth} = \sum_{i=2}^{T} \| p_i - p_{i-1} \| + \| p_T - p_1 \|    (1)

The above method is similar to a manual tape measurement, as apertures tend to be present when a tape is stretched over the foot surface, as shown on the right of Figure 1. It is clear that the density of the point cloud data is an important determinant of measurement accuracy.
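As an illustration of the procedure above, the following minimal Python sketch computes the 2D convex hull of the projected points with a gift-wrapping (Jarvis-march) style selection of right-hand turns and then sums the edge lengths as in Equation (1). The function names and the use of NumPy are our own assumptions rather than part of the original implementation.

import numpy as np

def convex_hull_2d(points_2d):
    # Gift-wrapping hull of the projected points (steps (1)-(5) above).
    pts = np.asarray(points_2d, dtype=float)
    n = len(pts)
    start = int(np.argmin(pts[:, 1]))          # step (1): point with minimum Y
    hull = [start]
    current = start
    while True:
        candidate = (current + 1) % n
        for j in range(n):
            if j == current:
                continue
            a = pts[candidate] - pts[current]
            b = pts[j] - pts[current]
            # a negative z-component of the 2D cross product means pts[j] lies
            # to the right of current->candidate, i.e. a larger right-hand turn
            if a[0] * b[1] - a[1] * b[0] < 0:
                candidate = j
        current = candidate
        if current == start:                   # step (4): back at the starting point
            break
        hull.append(current)
    return pts[hull]                           # the T boundary points p1..pT

def girth_from_hull(hull_pts):
    # Equation (1): sum of neighbouring distances plus the closing edge.
    edges = np.linalg.norm(np.diff(hull_pts, axis=0), axis=1).sum()
    return edges + np.linalg.norm(hull_pts[-1] - hull_pts[0])

The same girth_from_hull routine can also be applied to the intersection points of the triangular-mesh method described next, once those points have been projected onto the tape plane.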
2.2 Method with 3D triangular mesh

To generate a triangular mesh from unorganised 3D points, the IPD algorithm (Lin et al. 2004) can be used, as it considers the intrinsic nature of the point cloud. Starting with a seed triangle, the algorithm grows a partially reconstructed triangular mesh by selecting a new point based on the degree of sampling uniformity. The recovered mesh is thus essentially an approximate minimum-weight triangulation of the point cloud, constrained to lie on a two-dimensional manifold. For any edge of a triangle, let its two endpoints be M and N and let their distances to the tape plane C be M.val and N.val; the intersection point pi of the edge and plane C is then calculated as

p_i = (N.\text{val} \cdot M - M.\text{val} \cdot N) / (N.\text{val} - M.\text{val})    (2)

There are four possible cases of intersection between any one triangle and the tape plane, as shown in Figure 2. Only two of them, case 3 and case 4, are useful for girth measurement.

Figure 2. Four cases of intersection.

After the intersection segments have been determined, they are processed in the following way:

(1) Take one endpoint of a randomly selected segment as point p1 and take the other endpoint of the segment as p2;
(2) Check each of the other segments and find the segment with one endpoint at p2, then take the other endpoint of that segment as p3;
(3) Repeat the test on the other segments until the endpoint p1 is found for the second time;
(4) Save the T points found as the boundary points for the girth, p1, p2, ..., pT.

These T points can then be used to calculate the girth value using Equation (1), which simulates a tape measurement with contact along the surface. Alternatively, they can be projected onto one 2D plane and their 2D convex hull used to simulate the measurement with the tape stretched around the surface. It is clear that girth measurement with a 3D triangular mesh is also affected by the density of the data, whether the 3D points or the generated triangles.

3. RBF-based local surface recovery

Considering that only the local surface of the object is actually needed for girth measurement, a constrained partial surface recovery using RBF is proposed. The border, or edge, of the partial 3D point set has to be considered for the constrained surface during RBF-based reconstruction; this differs from entire-surface recovery, where the whole model surface is closed and thus has no border or edge. The method for constrained surface recovery therefore includes the following steps: transformation of the partial point cloud using PCA, generation of off-surface points, RBF construction from the points, and output of the local surface using a marching cube method.

3.1 Partial point cloud transformation

When only part of the 3D point cloud needs to be reconstructed for further processing, the edges of these points have to be located to generate the proper shape. For a 3D object with different locations in space, there are differing ranges along the X, Y and Z axes of the normal coordinate system. These can result in differing bounding boxes for the 3D object, and thus differing precisions in the definition of the boundary. To define a more precise border for the partial point cloud, the components of the points are computed in order of significance with the help of the Principal Component Analysis (PCA) method. Suppose there are N points in the partial point cloud, each point is represented by pi(xi, yi, zi), i ∈ [1, N], and the centre O(xa, ya, za) is calculated as the average of the N points. The covariance matrix (Johnson and Wichern 2002) of the partial point cloud is defined as

COV = \frac{1}{N} \begin{bmatrix}
\sum_{i=1}^{N}(x_i - x_a)^2 & \sum_{i=1}^{N}(x_i - x_a)(y_i - y_a) & \sum_{i=1}^{N}(x_i - x_a)(z_i - z_a) \\
\sum_{i=1}^{N}(y_i - y_a)(x_i - x_a) & \sum_{i=1}^{N}(y_i - y_a)^2 & \sum_{i=1}^{N}(y_i - y_a)(z_i - z_a) \\
\sum_{i=1}^{N}(z_i - z_a)(x_i - x_a) & \sum_{i=1}^{N}(z_i - z_a)(y_i - y_a) & \sum_{i=1}^{N}(z_i - z_a)^2
\end{bmatrix}    (3)

The eigenvalues of the covariance matrix are sorted in descending order, and the eigenvector corresponding to the highest eigenvalue is taken as the principal component of the data set. The ordered eigenvectors are used to set up a local PCA coordinate system with point O as the origin. The PCA coordinate system is then transformed to be coincident with the global coordinate system. In this way, the transformed partial point cloud is most dominant along the X and Y dimensions and least significant along the Z dimension of the global coordinate system.
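The PCA alignment of Section 3.1 can be summarised by the following Python sketch (NumPy-based; the helper name and the returned values are our own choices, not the paper's).

import numpy as np

def pca_align(points):
    # points: (N, 3) partial point cloud. Returns the cloud expressed in a
    # local PCA frame whose axes are ordered by decreasing variance, so the
    # transformed cloud spreads mainly along X and Y and least along Z.
    pts = np.asarray(points, dtype=float)
    centre = pts.mean(axis=0)                  # centre O(xa, ya, za)
    centred = pts - centre
    cov = centred.T @ centred / len(pts)       # covariance matrix, Equation (3)
    eigvals, eigvecs = np.linalg.eigh(cov)     # ascending order for symmetric matrices
    order = np.argsort(eigvals)[::-1]          # sort eigenvalues in descending order
    axes = eigvecs[:, order]                   # columns: principal axes
    return centred @ axes, centre, axes        # coordinates in the PCA frame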
3.2 Generation of off-surface points

The range of the 3D partial point cloud can be defined by its six extreme coordinate values xmin, xmax, ymin, ymax, zmin and zmax. The size R of the cubic bounding box used to subdivide the partial points is computed from Equation (4). The partial point cloud can then be divided into a number of bounding boxes. For a point pi, the bounding box containing it is found first; the points in the same box and in the six connected boxes are then added together to form a local neighbourhood for pi.

R = \sqrt{ \frac{(x_{max} - x_{min})(y_{max} - y_{min})(z_{max} - z_{min})}{N} }    (4)

Applying Equation (3) to all the points in the local neighbourhood of point pi, the plane determined by the two eigenvectors corresponding to the two larger eigenvalues is taken as the fitted surface, and the eigenvector with the smallest eigenvalue is taken as the surface normal at point pi. Since the calculated normals of the partial point cloud may point towards either the inside or the outside of the reconstructed surface, an adjustment based on breadth-first search is used to make them consistent: take the point with the maximum z value as the root point with normal nr; visit each neighbour point of the root with normal nt and, if nr · nt < 0, reverse nt; for each visited point, process its unvisited neighbour points in the same way until all the points have been handled.

For one point pi of the partial point cloud, its off-surface point is defined as the position moved along its normal, ni, or along the opposite direction, −ni, by a predefined distance d. A signed distance function, f, is therefore defined as in Equation (5). In this work, d is defined as 1/10 of the minimum edge of the minimum spanning tree of the 3D partial point cloud. Such a definition of d ensures that the generated off-surface points do not intersect with other parts of the implicit surface to be produced.

f(p_i) = \begin{cases} 0, & p_i \text{ on the surface} \\ d, & p_i \text{ inside the surface} \\ -d, & p_i \text{ outside the surface} \end{cases}    (5)

3.3 RBF construction

Suppose the number of 3D points (including the original points and their off-surface points) is n; the RBF used for surface reconstruction is then defined as

f(p_i) = \sum_{j=1}^{n} w_j \phi(r_{ij}) + P(p_i)    (6)

where P(p_i) = c_0 + c_1 x_i + c_2 y_i + c_3 z_i, \phi(r) = r^3 and r_{ij} = \| p_i - p_j \|. The triharmonic spline is chosen as the basis function because it is a good choice for fitting functions of three variables. Since Equation (6) is linear with respect to the unknown weights w_j and the coefficients of P(p_i), a linear system can be formulated as given in Equation (7). The computed weights w_j and coefficients of P(p_i) are then used to construct the RBF.

\begin{bmatrix}
\phi(r_{11}) + \lambda_1 & \cdots & \phi(r_{1n}) & 1 & x_1 & y_1 & z_1 \\
\vdots & \ddots & \vdots & \vdots & \vdots & \vdots & \vdots \\
\phi(r_{n1}) & \cdots & \phi(r_{nn}) + \lambda_n & 1 & x_n & y_n & z_n \\
1 & \cdots & 1 & 0 & 0 & 0 & 0 \\
x_1 & \cdots & x_n & 0 & 0 & 0 & 0 \\
y_1 & \cdots & y_n & 0 & 0 & 0 & 0 \\
z_1 & \cdots & z_n & 0 & 0 & 0 & 0
\end{bmatrix}
\begin{bmatrix} w_1 \\ \vdots \\ w_n \\ c_0 \\ c_1 \\ c_2 \\ c_3 \end{bmatrix}
=
\begin{bmatrix} f(p_1) \\ \vdots \\ f(p_n) \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}    (7)

where λ is a parameter (often called the regularisation parameter) that weighs data fitting against the smoothness of the surface. The surface can be allowed to pass close to, but not necessarily through, the known data points by setting λ > 0; when λ = 0, the function interpolates the data points. In this paper, in order to ensure that the generated surface passes through every scanned point, we set λ = 0.
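The following sketch assembles and solves the linear system of Equation (7) for the triharmonic basis of Equation (6). It is a minimal NumPy illustration under the paper's choice λ = 0; the function names, the dense direct solve and the returned evaluator are our own assumptions.

import numpy as np

def fit_rbf(points, values, lam=0.0):
    # points: (n, 3) on- and off-surface points; values: their signed
    # distances f(p_i) from Equation (5); lam: regularisation parameter.
    p = np.asarray(points, dtype=float)
    n = len(p)
    r = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
    A = r ** 3 + lam * np.eye(n)               # phi(r) = r^3 plus lambda on the diagonal
    P = np.hstack([np.ones((n, 1)), p])        # rows (1, x_i, y_i, z_i)
    M = np.vstack([np.hstack([A, P]),
                   np.hstack([P.T, np.zeros((4, 4))])])   # the (n+4) x (n+4) matrix of Equation (7)
    rhs = np.concatenate([np.asarray(values, dtype=float), np.zeros(4)])
    sol = np.linalg.solve(M, rhs)
    w, c = sol[:n], sol[n:]                    # weights w_j and coefficients c_0..c_3

    def evaluate(q):                           # f(q) from Equation (6)
        q = np.atleast_2d(q)
        rq = np.linalg.norm(q[:, None, :] - p[None, :, :], axis=-1)
        return rq ** 3 @ w + c[0] + q @ c[1:]

    return evaluate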
3.4 Output of the constrained surface

The marching cube (MC) method (Lorensen and Cline 1987) is usually used to generate triangles for implicit surfaces. Each of the eight vertices of a unit cube is taken as input to Equation (6) and its value is calculated; from the eight results, and using Equation (5), it can be decided whether the implicit surface passes through the unit cube. Since an RBF-based implicit surface is unbounded in nature, it is very appropriate, together with MC, for shape reconstruction of an entire object or for interpolation of an incomplete mesh. For partial surface recovery, however, constraints have to be added to define the edges. As the partial point cloud has been transformed with the help of the PCA method, its plot on the XY plane relates to the most significant of the three components. Therefore, when a unit cube is processed in the MC method, a cylinder centred on it is defined to help decide whether the cube lies within the surface edge along the Z axis, as illustrated in Figure 3. The height of the cylinder is greater than the range of Z values of the partial point cloud, while the circle radius of the cylinder is defined adaptively in the following way:

(1) Take the maximum edge of the minimum spanning tree of the partial point cloud as the initial radius of the cylinder;
(2) If the number of points inside the cylinder is not less than N/10, where N is the number of partial points, go to Step 5;
(3) Double the cylinder radius;
(4) Go to Step 2;
(5) Stop.

Figure 3. Cylinder centred on one unit cube.

Two extreme coordinate values, czmin and czmax, are then found from the points within the cylinder, and they are used to define the surface edge along the Z axis. As shown on the right of Figure 3, unit cubes A and B share the same cylinder, but A is inside the surface edge and is thus used for surface generation, while B is discarded as it lies outside the surface edge. In the same way, surface edges along the X and Y axes can be defined as well; together with the surface edge along the Z axis, they help produce a constrained shape for the partial point cloud when the implicit surface is generated using the MC method.
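A minimal sketch of the adaptive cylinder test is given below (our own function name and data layout; the initial radius is assumed to be supplied as the maximum edge of the minimum spanning tree, as in step (1)).

import numpy as np

def cylinder_z_limits(cube_centre_xy, points, initial_radius):
    # Steps (1)-(5): grow the cylinder radius until it holds at least N/10
    # points, then report the Z range (czmin, czmax) of those points.
    pts = np.asarray(points, dtype=float)
    dist_xy = np.linalg.norm(pts[:, :2] - np.asarray(cube_centre_xy, dtype=float), axis=1)
    radius = float(initial_radius)
    while np.count_nonzero(dist_xy <= radius) < len(pts) / 10.0:
        radius *= 2.0                          # steps (3)-(4): double the radius and retry
    inside = dist_xy <= radius
    return radius, pts[inside, 2].min(), pts[inside, 2].max()

A unit cube whose Z coordinate falls outside the returned [czmin, czmax] interval (cube B in Figure 3) is then discarded before the marching cube evaluation.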
4. Efficiency improvements for RBF

Since computation time is a drawback of RBF-based surface reconstruction, several methods are explored for improving the efficiency. They concern how to decide the proper number of points in one group, taking the group as the unit for surface recovery, and how to find suitable approaches for the solution of the covariance matrix and the linear system.

4.1 Octree-based division and combination

The computational cost of RBF surface reconstruction is related to the number of points to be processed (Carr et al. 2001). Processing a large number of points at once takes a long time, whereas much time is saved if the points are divided into groups and dealt with one group at a time. Therefore, space subdivision with an octree-based self-adaptive method is adopted to further reduce the time of RBF recovery. The minimum cubic bounding box of the partial point cloud and its off-surface points is defined as the root of the octree. The root is then subdivided into smaller cubes of equal size as its child cells, and the subdivision is applied recursively to the cells until each cell satisfies the requirements. The rules for octree-based adaptive subdivision are as follows:

(1) if the number of points in the current cell is more than a predefined maximum value, nmax, whose suitable value is around 300 based on a series of experiments, the cell is a non-leaf cell and is subdivided into smaller cells;
(2) if the number of points in the current cell is no more than nmax, the cell is taken as a leaf cell without further subdivision;
(3) if the number of points in the current cell is zero, the cell is marked as NULL and no further subdivision is needed.

Besides the upper limit on the number of points in the leaf cells of the octree, there should be a lower limit as well. If the number of points in a leaf cell is too few, its linear system may be homogeneous, and the surface to be reconstructed will be undetermined. Even if the linear system consists of non-homogeneous linear equations, a leaf cell with too few points may not have a unique solution. A system of non-homogeneous linear equations can be represented by

A x = b    (8)

where A is the (n + 4) × (n + 4) matrix of Equation (7). The necessary and sufficient conditions for Equation (8) to have a unique solution are that b can be represented by the column vectors of matrix A and that the column vectors of matrix A are linearly independent of each other, i.e.

r((A, b)) = r(A) = \text{number of columns of } A    (9)

The relationship between the number of points in one leaf cell and the rank of matrix A is listed in Table 1. From Table 1, it can be found that Equation (8) has a unique solution only if the number of points n ≥ 4, i.e. only then can one definite surface be reconstructed from the points in the leaf cell.

Table 1. Relationship between the number of points and the rank of matrix A.
Number of points    r(A)      r(A, b)    Full rank    Equation (9)
n (n < 4)           < n + 4   ≤ n + 4    False        False
n (n ≥ 4)           n + 4     n + 4      True         True

Computation precision also has to be considered when two neighbouring points are very close, especially for each sampling point and the off-surface point used for the RBF. If a sampling point lies in a leaf cell, its off-surface point usually lies in the same leaf cell; as shown in Figure 4, the points in a leaf cell therefore normally occur in pairs. It is thus more suitable to replace the condition n ≥ 4 with n ≥ 8 in RBF recovery. In this paper the predefined minimum threshold for the number of points in each leaf cell is nmin = 10.

Figure 4. The usual situations of 1–4 points in one leaf cell.

After octree division, the number of points in some leaf cells may be less than nmin, so a combination method is used to deal with the problem. The leaf cells are checked one by one, and a leaf cell whose number of points is less than nmin is combined with one of its neighbouring leaf cells: based on the topological relationships, the neighbouring leaf cell with the least number of points is found and combined with the current leaf cell. The combination operation does not actually merge the two leaf cells into one; instead, the points in both are used during RBF reconstruction. If the number of points in the two combined leaf cells is still less than nmin, the combination operation is performed recursively. After these self-adaptive operations of division and combination, there is a suitable number of points (no more than nmax and no less than nmin) in each leaf cell of the octree. The 3D points in each leaf cell are then used for the RBF reconstruction, i.e. the calculation of the unknown weights and coefficients from the linear equations of the points.
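The subdivision rules (1)-(3) and the nmin check can be sketched as follows (a simplified NumPy-based illustration; the cell representation, the half-open interval test and the recursion scheme are our own assumptions, and the neighbour-combination step is only indicated, not implemented). The root call would be build_octree(points, np.arange(len(points)), lo, hi) with lo and hi the corners of the minimum cubic bounding box.

import numpy as np

def build_octree(points, indices, lo, hi, nmax=300):
    # Rule (3): an empty cell is marked NULL (None here).
    if len(indices) == 0:
        return None
    # Rule (2): a cell with at most nmax points becomes a leaf.
    if len(indices) <= nmax:
        return {"leaf": True, "points": indices}
    # Rule (1): otherwise split the cube into eight equal children.
    mid = (lo + hi) / 2.0
    children = []
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                clo = np.where([dx, dy, dz], mid, lo)
                chi = np.where([dx, dy, dz], hi, mid)
                sub = points[indices]
                # half-open test; points exactly on the global upper boundary
                # would need special handling in a full implementation
                mask = np.all((sub >= clo) & (sub < chi), axis=1)
                children.append(build_octree(points, indices[mask], clo, chi, nmax))
    return {"leaf": False, "children": children}

def needs_combination(leaf, nmin=10):
    # Leaves with fewer than nmin points borrow the points of the neighbouring
    # leaf with the fewest points before their RBF system is assembled.
    return leaf is not None and leaf["leaf"] and len(leaf["points"]) < nmin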
4.2 Solutions of the covariance matrix and the linear system

For RBF reconstruction, the surface normal of each point pi needs to be computed so that its off-surface point can be obtained. The covariance matrix of the points in the local neighbourhood of pi is therefore set up, and the eigenvector corresponding to the smallest eigenvalue of the matrix is taken as the surface normal at pi. As the covariance matrix is symmetric positive semi-definite and only the smallest eigenvalue needs to be calculated, the inverse power method is adopted.

For the unknown weights and coefficients of the RBF, the linear system consisting of a set of linear equations also needs to be solved. LU decomposition is usually used; its time complexity is (2/3)N^3 + O(N^2) and its space complexity is N^2 + O(N). When the triharmonic spline, a good choice for fitting functions of three variables, is selected as the basis function of the RBF reconstruction, the matrix of the linear system is real, symmetric and positive definite. The method of improved Cholesky factorisation is therefore employed to calculate the solution, with time complexity (1/6)N^3 + O(N^2) and space complexity (1/2)N^2 + O(N). This assures savings in both time and space.
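The intent of Section 4.2 can be illustrated with the following sketch. The normal estimation uses NumPy's symmetric eigensolver for brevity (the paper uses the inverse power method), and the linear solve uses SciPy's Cholesky routines; the paper's improved factorisation is not reproduced, and the code falls back to a general solver if the assembled matrix is not numerically positive definite.

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def surface_normal(neigh_points):
    # Smallest-eigenvalue eigenvector of the local covariance matrix.
    centred = neigh_points - neigh_points.mean(axis=0)
    cov = centred.T @ centred / len(neigh_points)
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    return eigvecs[:, 0]

def solve_symmetric_system(M, rhs):
    # Factor the symmetric matrix once and reuse the factor for the solve;
    # this roughly halves the work and storage compared with a general LU solve.
    try:
        factor = cho_factor(M, lower=True)
        return cho_solve(factor, rhs)
    except np.linalg.LinAlgError:
        # Not numerically positive definite: fall back to a general LU-based solve.
        return np.linalg.solve(M, rhs)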
5. Girth measurement using the RBF surface

Anthropometric measurements are generally taken on the surface of a local region, so surface reconstruction of the complete point cloud is unnecessary. As shown in Figure 5, only the points on the tape are useful for determining the foot girth; choosing the necessary 3D points first and then performing partial reconstruction therefore reduces computation time. To determine the necessary points for the tape plane C, defined by three 3D control points c1, c2 and c3, the distance between each point of the point cloud and plane C is calculated. The points whose distance is no more than a threshold value D, corresponding to half the tape breadth, are selected and used to make up the point set S. To facilitate the computation, a local coordinate system is set up from the control points c1, c2 and c3, and it is then transformed, together with point set S, to be coincident with the global coordinate system, as illustrated in Figure 5.

Figure 5. The points used for girth measurement.

With the proposed approach for RBF-based surface reconstruction, the constrained implicit surface for point set S is generated. To calculate the girth value, the measurement simulation is performed in the following way:

Step 1. For each point of point set S, calculate its 3D distance to tape plane C;
Step 2. Compare the distance with the threshold value d, which is the maximum edge of the minimum spanning tree of point set S;
Step 3. Take the points whose distance is no more than the threshold value d as the initial intersection points (point set I), illustrated as the black points on the right of Figure 5;
Step 4. For any point pi of point set I, find its K neighbours, i.e. repeatedly increase the radius r of the circle centred on pi until the number of points in the circle is no less than K, as shown in Figure 6;
Step 5. Fit a straight line, m, to point pi and its K neighbours using the least squares method;
Step 6. Generate two off-line points for point pi, pM and pN, along the opposite directions of the normal of the fitted line;
Step 7. Calculate the exact intersection point pi' between the tape plane C and the implicit surface of point set S, corresponding to the initial intersection point pi, by

p_i' = p_M + (\text{isovalue} - V_M)(p_N - p_M) / (V_N - V_M)    (10)

where pM and pN are the two off-line points of pi, VM and VN are their RBF values from Equation (6), and isovalue controls the relative position (inside, outside or on) of pi with respect to the implicit surface; here isovalue is set to 0 so that point pi' lies exactly on the implicit surface;
Step 8. The girth is measured using the exact intersection points, i.e. the 2D convex hull method in conjunction with Equation (1) is used to simulate the tape stretched around the surface, or the points are ordered to simulate the tape in contact along the surface.

Figure 6. Off-surface points of point pi.
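Step 7 reduces to a single linear interpolation along the segment between pM and pN; a minimal sketch is given below (our own function name; rbf_value stands for the evaluator of Equation (6)).

import numpy as np

def refine_intersection(p_m, p_n, rbf_value, isovalue=0.0):
    # Equation (10): place the refined point where the interpolated RBF value
    # equals the isovalue (0 on the implicit surface).
    p_m = np.asarray(p_m, dtype=float)
    p_n = np.asarray(p_n, dtype=float)
    v_m = float(rbf_value(p_m))
    v_n = float(rbf_value(p_n))
    return p_m + (isovalue - v_m) * (p_n - p_m) / (v_n - v_m)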
6. Experimental results

The performance of three methods was compared. To be realistic, all methods simulated a tape measure stretched around the surface; for surfaces with no concave regions, there is no difference between measuring around the surface and in contact along the surface.

Method 1: project the 3D points near the tape plane and then use their 2D convex hull to simulate the girth;
Method 2: calculate the intersection points between the tape plane and the triangular mesh generated from the scanned points, and then use the intersection points to simulate the girth;
Method 3: the proposed method, which calculates the intersection points between the tape plane and the generated local RBF implicit surface of the 3D points, and then uses the intersection points to simulate the girth.

6.1 Experiments on measurement accuracy

Accuracy is a measure of closeness to the true value. To test the accuracy, computer-generated 3D points of a cylinder are used. As shown in Figure 7, the surface points of the cylinder, which has a radius of 40 mm, are generated with differing densities: 1200 points in Model 1, 900 points in Model 2 and 450 points in Model 3. Based on the equation C = 2πr, the circumference of the cylinder is 2 × 3.1416 × 40 ≈ 251.33 mm.

Figure 7. Model cylinders with differing point density.

The deviations of the measured cylinder circumference for the different methods and data densities are given in Table 2. The results show that, in general, the accuracy decreases as the data density decreases, but it is also clear that Method 3 gives the measurement closest to the real value.

Table 2. Deviations of the measured cylinder circumference for the different methods (deviation = calculated − measured; unit: mm).
Algorithm    Model 1    Model 2    Model 3
Method 1     0.03       0.11       0.46
Method 2     0.02       0.07       0.23
Method 3     0.01       0.03       0.03

A test is also performed with foot lasts. Here the scanned points of the last are processed at differing point densities: as shown in Figure 8, Model 1 has 17040 points, Model 2 has 4260 points and Model 3 has 1080 points. Unlike the cylinder, the true value of the girth is not available, so the girths measured on the same tape plane are compared across the different methods and models. From the experimental results listed in Table 3, it can be seen that the proposed method has higher precision. For example, compared with Model 1, about 93.66% of the points are omitted in Model 3, yet the girth value measured by the RBF method changes by only around 0.51%.

Figure 8. Points of a last with different densities.

Table 3. Measured results of last girth for the different methods (unit: mm).
Algorithm    Model 1    Model 2    Model 3
Method 1     248.64     243.80     239.85
Method 2     248.75     247.41     244.91
Method 3     248.90     248.48     247.63

6.2 Experiments on computing expense

The disadvantage of RBF surface modelling is its high computational cost, and hence the computation time is evaluated as well. All the experiments were performed on a PC with a 1.81 GHz AMD processor and 512 MB of RAM. The times needed to find the short heel girth, instep girth and ball girth of a foot were recorded. The number of partial points and the time for measurement (in milliseconds) are listed in Table 4. The average time for any girth measurement is less than 1 second, which is quite satisfactory in applications such as footwear making.

Table 4. Computational cost of the girth measurements (unit: ms).
                            Short heel girth    Instep girth    Ball girth
Number of partial points    9090                7112            7044
Time for measurement        943                 720             693

7. Conclusion

Existing algorithms and methods are suitable for girth measurement when the point density of the scanned data is high, but their accuracy decreases with decreasing point density. Even though parametric surfaces such as NURBS can describe a continuous surface, they cannot be used directly on unorganised 3D points, and the generated surface only approximates the scanned points. A better method for automatic girth measurement is therefore desirable and is indeed very valuable for custom footwear manufacture. In this paper an RBF-based implicit surface modelling approach is proposed for girth measurement. The approach can be used on unorganised 3D points without any prior processing, and the generated surface passes through each sample point; it therefore provides more stable and accurate measurements. The efficacy of the method was evaluated using a regular cylinder and a shoe last. To reduce computation time, only the required local surface is reconstructed; octree-based division and combination are used to speed up the RBF reconstruction, and the inverse power method and improved Cholesky factorisation are used to solve the covariance matrix and the linear system of equations. The girth can be calculated in a relatively short time, and the measurements are stable and precise when compared with the other methods.

Acknowledgements

The work was supported by NSFC (No. 60603079), the Research Grants Council of Hong Kong (613008), NSFC (No. 70971084), the 985 Project of Cognitive and Neural Information Science, Wuhan University (No. 904273258), and the Open Fund of the Shanghai Key Lab of Advanced Manufacturing Environment (KF200901).

References

Bunch, R.P., 1988. Foot measurement strategies for fitting athletes. Journal of Testing and Evaluation, 16 (4), 407–411.
Butdee, S., 2002. Hybrid feature modeling for sport shoe sole design. Computers and Industrial Engineering, 42 (2–4), 271–279.
Carr, J.C., et al., 2001. Reconstruction and representation of 3D objects with radial basis functions. In: Proceedings of SIGGRAPH 2001, 67–76.
Carr, J.C., Fright, W.R., and Beatson, R.K., 1997. Surface interpolation with radial basis functions for medical imaging. IEEE Transactions on Medical Imaging, 16 (1), 96–107.
Corrigan, A. and Dinh, H.Q., 2005. Computing and rendering implicit surfaces composed of radial basis functions on the GPU. In: Proceedings of the international workshop on volume graphics, New York, June.
Dinh, H.Q., Turk, G., and Slabaugh, G., 2002. Reconstructing surfaces by volumetric regularization using radial basis functions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24 (10), 1358–1371.
Ho, S.L., et al., 2005. A response surface methodology based on improved compactly supported radial basis function and its application to rapid optimizations of electromagnetic devices. IEEE Transactions on Magnetics, 41 (6), 2111–2117.
Ho, S.L., et al., 2006. A fast global optimizer based on improved CS-RBF and stochastic optimal algorithm. IEEE Transactions on Magnetics, 42 (4), 1175–1178.
Hu, H., et al., 2007. Anthropometric measurement of the Chinese elderly living in the Beijing area. International Journal of Industrial Ergonomics, 37 (4), 303–311.
Jatta, F., et al., 2004. A roughing/cementing robotic cell for custom made shoe manufacture. International Journal of Computer Integrated Manufacturing, 17 (7), 645–652.
Johnson, R.A. and Wichern, D.W., 2002. Applied multivariate statistical analysis. Upper Saddle River, NJ: Prentice Hall.
Klassen, E., et al., 2004. Analysis of planar shapes using geodesic paths on shape spaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26 (3), 372–383.
Lin, H., Tai, C., and Wang, G., 2004. A mesh reconstruction algorithm driven by intrinsic property of point cloud. Computer-Aided Design, 36 (1), 1–9.
Lin, Y.X., et al., 2009. Dual-RBF based surface reconstruction. The Visual Computer, 25 (5–7), 599–607.
Liu, W., et al., 1999. Accuracy and reliability of a technique for quantifying foot shape, dimensions and structural characteristics. Ergonomics, 42 (2), 346–358.
Lorensen, W.E. and Cline, H.E., 1987. Marching cubes: a high resolution 3D surface construction algorithm. Computer Graphics, 21 (3), 163–169.
Marie, S., et al., 2006. Reconstruction with Voronoi centered radial basis functions. In: Proceedings of the eurographics symposium on geometry, 51–60.
Ohtake, Y., et al., 2003. Multi-level partition of unity implicits. In: Proceedings of SIGGRAPH 2003, 463–470.
Ohtake, Y., Belyaev, A., and Seidel, H.P., 2004. 3D scattered data approximation with adaptive compactly supported radial basis functions. In: Proceedings of shape modeling international 2004, 31–39.
Ohtake, Y., Belyaev, A., and Seidel, H.P., 2005. 3D scattered data interpolation and approximation with multilevel compactly supported RBFs. Graphical Models, 67 (3), 150–165.
Qiang, W., et al., 2007. Surface rendering for parallel slice of contours from medical imaging. Computing in Science and Engineering, (1), 32–37.
Paris, I. and Handley, D., 2004. CAD usage and knowledge base technology in shoe design and development. International Journal of Computer Integrated Manufacturing, 17 (7), 595–600.
Sui, Y.K., Li, S.P., and Guo, Y.Q., 2008. An efficient global optimization algorithm based on augmented radial basis function. International Journal for Simulation and Multidisciplinary Design Optimization, (1), 49–55.
Tobor, I., Reuter, P., and Schlick, C., 2004. Efficient reconstruction of large scattered geometric datasets using the partition of unity and radial basis functions. Journal of WSCG, 12 (1–3), 467–474.
Vigano, G., et al., 2004. Virtual reality as a support tool in the shoe life cycle. International Journal of Computer Integrated Manufacturing, 17 (7), 653–660.
Witana, C.P., et al., 2006. Foot measurements from 3-dimensional scans: a comparison and evaluation of different methods. International Journal of Industrial Ergonomics, 36 (9), 789–807.
Xu, C., et al., 2004. The design and implementation for personalized shoe last CAD system. Journal of Computer-Aided Design and Computer Graphics, 16 (10), 1437–1441.
Yahara, H., et al., 2005. Estimation of anatomical landmark position from model of 3-dimensional foot by the FFD method. Systems and Computers in Japan, 36 (6), 1–13.
Zhao, J., et al., 2008. Computerized girth determination for custom footwear manufacture. Computers and Industrial Engineering, 54 (3), 359–373.



Table of contents

  • Cover

  • Sources of variability in the set-up of an indoor GPS

  • A real-time simulation grid for collaborative virtual assembly of complex products

  • Mastering demand and supply uncertainty with combined product and process configuration

  • An integrated system for on-line intelligent monitoring and identifying process variability and its application

  • A knowledge-commercialised business model for collaborative innovation environments

  • A new-generation automated warehousing capability

  • An automatic method of measuring foot girths for custom footwear using local RBF implicit surfaces

