
The Next Wave in Computing, Optimization, and Decision Technologies (Springer, 2005)


THE NEXT WAVE IN COMPUTING, OPTIMIZATION, AND DECISION TECHNOLOGIES

OPERATIONS RESEARCH/COMPUTER SCIENCE INTERFACES SERIES

Series Editors:
Professor Ramesh Sharda, Oklahoma State University
Prof. Dr. Stefan Voß, Universität Hamburg

Other published titles in the series:

Greenberg / A Computer-Assisted Analysis System for Mathematical Programming Models and Solutions: A User's Guide for ANALYZE
Greenberg / Modeling by Object-Driven Linear Elemental Relations: A User's Guide for MODLER
Brown & Scherer / Intelligent Scheduling Systems
Nash & Sofer / The Impact of Emerging Technologies on Computer Science & Operations Research
Barth / Logic-Based 0-1 Constraint Programming
Jones / Visualization and Optimization
Barr, Helgason & Kennington / Interfaces in Computer Science & Operations Research: Advances in Metaheuristics, Optimization, & Stochastic Modeling Technologies
Ellacott, Mason & Anderson / Mathematics of Neural Networks: Models, Algorithms & Applications
Woodruff / Advances in Computational & Stochastic Optimization, Logic Programming, and Heuristic Search
Klein / Scheduling of Resource-Constrained Projects
Bierwirth / Adaptive Search and the Management of Logistics Systems
Laguna & González-Velarde / Computing Tools for Modeling, Optimization and Simulation
Stilman / Linguistic Geometry: From Search to Construction
Sakawa / Genetic Algorithms and Fuzzy Multiobjective Optimization
Ribeiro & Hansen / Essays and Surveys in Metaheuristics
Holsapple, Jacob & Rao / Business Modelling: Multidisciplinary Approaches — Economics, Operational and Information Systems Perspectives
Sleezer, Wentling & Cude / Human Resource Development and Information Technology: Making Global Connections
Voß & Woodruff / Optimization Software Class Libraries
Upadhyaya et al. / Mobile Computing: Implementing Pervasive Information and Communications Technologies
Reeves & Rowe / Genetic Algorithms—Principles and Perspectives: A Guide to GA Theory
Bhargava & Ye / Computational Modeling and Problem Solving in the Networked World: Interfaces in Computer Science & Operations Research
Woodruff / Network Interdiction and Stochastic Integer Programming
Anandalingam & Raghavan / Telecommunications Network Design and Management
Laguna & Martí / Scatter Search: Methodology and Implementations in C
Gosavi / Simulation-Based Optimization: Parametric Optimization Techniques and Reinforcement Learning
Koutsoukis & Mitra / Decision Modelling and Information Systems: The Information Value Chain
Milano / Constraint and Integer Programming: Toward a Unified Methodology
Wilson & Nuzzolo / Schedule-Based Dynamic Transit Modeling: Theory and Applications

THE NEXT WAVE IN COMPUTING, OPTIMIZATION, AND DECISION TECHNOLOGIES

Edited by

BRUCE GOLDEN, University of Maryland
S. RAGHAVAN, University of Maryland
EDWARD WASIL, American University

Springer

eBook ISBN: 0-387-23529-9
Print ISBN: 0-387-23528-0

©2005 Springer Science + Business Media, Inc.
Print ©2005 Springer Science + Business Media, Inc., Boston

All rights reserved. No part of this eBook may be reproduced or transmitted in any form or by any means, electronic, mechanical, recording, or otherwise, without written consent from the Publisher.

Created in the United States of America

Visit Springer's eBookstore at http://ebooks.springerlink.com and the Springer Global Website Online at http://www.springeronline.com

Contents

Preface

Part I: Networks

On the Complexity of Delaying an Adversary's Project (Gerald G. Brown, W. Matthew Carlyle, Johannes O. Royset, and R. Kevin Wood)
A Note on Eswaran and Tarjan's Algorithm for the Strong Connectivity Augmentation Problem (S. Raghavan)
Part II: Integer and Mixed Integer Programming

Generating Set Partitioning Test Problems with Known Optimal Integer Solutions (Edward K. Baker, Anito Joseph, and Brenda Rayco)
Computational Aspects of Controlled Tabular Adjustment: Algorithm and Analysis (Lawrence H. Cox, James P. Kelly, and Rahul J. Patil)
The SYMPHONY Callable Library for Mixed Integer Programming (Ted K. Ralphs and Menal Güzelsoy)

Part III: Heuristic Search

Hybrid Graph Heuristics Within a Hyper-Heuristic Approach to Exam Timetabling Problems (Edmund Burke, Moshe Dror, Sanja Petrovic, and Rong Qu)
Metaheuristics Comparison for the Minimum Labelling Spanning Tree Problem (Raffaele Cerulli, Andreas Fink, Monica Gentili, and Stefan Voß)
A New Tabu Search Heuristic for the Site-Dependent Vehicle Routing Problem (I-Ming Chao and Tian-Shy Liou)
A Heuristic Method to Solve the Size Assortment Problem (Kenneth W. Flowers, Beth A. Novick, and Douglas R. Shier)
Heuristic Methods for Solving Euclidean Non-Uniform Steiner Tree Problems (Ian Frommer, Bruce Golden, and Guruprasad Pundoor)
Modeling and Solving a Selection and Assignment Problem (Manuel Laguna and Terry Wubbena)
Solving the Time Dependent Traveling Salesman Problem (Feiyue Li, Bruce Golden, and Edward Wasil)
The Maximal Multiple-Representation Species Problem Solved Using Heuristic Concentration (Michelle M. Mizumori, Charles S. ReVelle, and Justin C. Williams)

Part IV: Stochastic Modeling

Fast and Efficient Model-Based Clustering with the Ascent-EM Algorithm (Wolfgang Jank)
Statistical Learning Theory in Equity Return Forecasting (John M. Mulvey and A. J. Thompson)
Sample Path Derivatives for (s, S) Inventory Systems with Price Determination (Huiju Zhang and Michael Fu)

Part V: Software and Modeling

Network and Graph Markup Language (NAGML) - Data File Formats (Gordon H. Bradley)
Software Quality Assurance for Mathematical Modeling Systems (Michael R. Bussieck, Steven P. Dirkse, Alexander Meeraus, and Armin Pruessner)
Model Development and Optimization with Mathematica (János Pintér and Frank J. Kampas)
Verification of Business Process Designs Using MAPS (Eswar Sivaraman and Manjunath Kamath)
ALPS: A Framework for Implementing Parallel Tree Search Algorithms (Yan Xu, Ted K. Ralphs, Laszlo Ladányi, and Matthew J. Saltzman)

Part VI: Classification, Clustering, and Ranking

Tabu Search Enhanced Markov Blanket Classifier for High Dimensional Data Sets (Xue Bai and Rema Padman)
Dance Music Classification Using Inner Metric Analysis (Elaine Chew, Anja Volk (Fleischer), and Chia-Ying Lee)
Assessing Cluster Quality Using Multiple Measures - A Decision Tree Based Approach (Kweku-Muata Osei-Bryson)
Dispersion of Group Judgments (Thomas L. Saaty and Luis G. Vargas)

Preface

This book is the companion volume to the Ninth INFORMS Computing Society Conference (ICS 2005), held in Annapolis, Maryland, from January 5 to 7, 2005. It contains 25 high-quality research articles that focus on the interface between operations research/management science (OR/MS) and computer science (CS). The articles in this book were each carefully reviewed and revised accordingly. We thank the authors and a small group of academics and practitioners for serving as referees.
The book is divided into six sections. The first section contains two papers on network models. The second section focuses on integer and mixed integer programming. The third section contains papers in which heuristic search and metaheuristics are applied. Three papers using stochastic modeling comprise the fourth section. In the fifth section, the unifying theme is software and modeling. The sixth section contains four papers on classification, clustering, and ranking. Taken collectively, these articles are indicative of the state-of-the-art in the interface between OR/MS and CS and of the high-caliber research being conducted by members of the INFORMS Computing Society.

We thank the University of Maryland, American University, and George Mason University for sponsoring ICS 2005. In addition, we thank the authors for their hard work and professionalism and Stacy Calo for her invaluable help in producing this book. Finally, we note, with great pride, that two of us (BG and EW) have attended each and every one of the nine ICS conferences. The three of us hope to attend many more.

BRUCE GOLDEN, S. RAGHAVAN, AND EDWARD WASIL

CONCLUSION

In this paper we have presented a formal approach for evaluating cluster output that involves the use of decision tree induction and multi-criteria decision analysis. This research problem is an important one that has not been adequately addressed in the clustering literature (e.g., Ankerst et al., 1999; Jain et al., 1999). Jain et al. (1999) describe cluster validity as the assessment of the set of clusters that are generated by the given clustering algorithm. They note that there are three approaches for assessing validity: (1) external assessment, which involves comparing the generated set of clusters with an a priori structure, typically provided by some experts; (2) internal assessment, which attempts to determine if the generated set of clusters is "intrinsically appropriate" for the data; and (3) relative assessment, which involves comparing two sets of clusters based on some measures (e.g., Jain and Dubes, 1988; Dubes, 1993) and measuring their relative performance.

Our multi-criteria DT-based approach could be considered to have some relationship to these three types of approaches:

External: Our approach does not require the decision-makers to provide an a priori clustering structure, but does require them to provide preference and value structures, which are used to indirectly evaluate and compare the sets of clusters.

Internal: We assess each set of clusters by assessing the associated DTs. At least two of our performance measures (i.e., accuracy and stability) would appear to provide some indication as to whether the given set of clusters is "intrinsically appropriate" for the data.

Relative: Our objective is to select the most appropriate set of clusters. Since we are never sure which combination of algorithm and parameter values is the most appropriate, we experimented with multiple combinations. However, it is almost impossible to experiment with all possible combinations, and so the set of clusters that we select as the best one is relative to our set of experimental combinations.

Acknowledgements

This research was supported in part by a grant from the 2004 Summer Research Program of the School of Business of Virginia Commonwealth University. I also wish to thank the anonymous referees for their valuable comments that enabled me to improve the quality of this paper.

REFERENCES

Ankerst, M., Breunig, M., Kriegel, H.-P., and Sander, J. (1999) "OPTICS: Ordering Points To Identify the Clustering Structure", Proceedings of ACM SIGMOD'99 International Conference on the Management of Data, pp. 49-60, Philadelphia, PA.
Banfield, J. and Raftery, A. (1992) "Identifying Ice Floes in Satellite Images", Naval Research Reviews 43, pp. 2-18.
Ben-Dor, A. and Yakhini, Z. (1999) "Clustering Gene Expression Patterns", Proceedings of the 3rd Annual International Conference on Computational Molecular Biology (RECOMB 99), pp. 11-14, Lyon, France.
Bezdek, J. (1981) Pattern Recognition with Fuzzy Objective Function Algorithms, Plenum Press, New York, NY.
Bock, H. (1996) "Probability Models in Partitional Cluster Analysis", Computational Statistics and Data Analysis 23, pp. 5-28.
Bohanec, M. and Bratko, I. (1994) "Trading Accuracy for Simplicity in Decision Trees", Machine Learning 15, pp. 223-250.
Bryson, N. (1995) "A Goal Programming Method for Generating Priority Vectors", Journal of the Operational Research Society 46, pp. 641-648.
Bryson, N., Mobolurin, A., and Ngwenyama, O. (1995) "Modelling Pairwise Comparisons on Ratio Scales", European Journal of Operational Research 83, pp. 639-654.
Bryson, N. (K-M) and Joseph, A. (2000) "Generating Consensus Priority Interval Vectors for Group Decision Making in the AHP", Journal of Multi-Criteria Decision Analysis 9:4, pp. 127-137.
Cristofor, D. and Simovici, D. (2002) "An Information-Theoretical Approach to Clustering Categorical Databases Using Genetic Algorithms", Proceedings of the SIAM DM Workshop on Clustering High Dimensional Data, pp. 37-46, Arlington, VA.
Dave, R. (1992) "Generalized Fuzzy C-Shells Clustering and Detection of Circular and Elliptic Boundaries", Pattern Recognition 25, pp. 713-722.
Dhillon, I. (2001) "Co-Clustering Documents and Words Using Bipartite Spectral Graph Partitioning", Proceedings of the 7th ACM SIGKDD, pp. 269-274, San Francisco, CA.
Dubes, R. (1993) "Cluster Analysis and Related Issues", in Handbook of Pattern Recognition & Computer Vision, C. Chen, L. Pau, and P. Wang, Eds., World Scientific Publishing Co., Inc., River Edge, NJ, pp. 3-32.
Fisher, D. (1987) "Knowledge Acquisition via Incremental Conceptual Clustering", Machine Learning 2, pp. 139-172.
Han, J. and Kamber, M. (2001) Data Mining: Concepts and Techniques, Morgan Kaufmann, New York, NY.
Huang, Z. (1997) "A Fast Clustering Algorithm to Cluster Very Large Categorical Data Sets in Data Mining", Proceedings SIGMOD Workshop on Research Issues on Data Mining and Knowledge Discovery, Tech. Report 97-07, UBC, Dept. of CS.
Jain, A. and Dubes, R. (1988) Algorithms for Clustering Data, Prentice-Hall Advanced Reference Series, Prentice-Hall, Inc., Upper Saddle River, NJ.
Jain, A. and Flynn, P. (1993) Three Dimensional Object Recognition Systems, Elsevier Science Inc., New York, NY.
Jain, A., Murty, M., and Flynn, P. (1999) "Data Clustering: A Review", ACM Computing Surveys 31:3, pp. 264-323.
Kim, H. and Koehler, G. (1995) "Theory and Practice of Decision Tree Induction", Omega 23:6, pp. 637-652.
Liu, B., Yiyuan, X., and Yu, P. (2000) "Clustering through Decision Tree Construction", Proceedings of the Ninth International Conference on Information and Knowledge Management (CIKM'00), pp. 20-29.
Murphy, P. and Aha, D. (1994) UCI Repository of Machine Learning Databases, University of California, Department of Information and Computer Science.
Murtagh, F. (1983) "A Survey of Recent Advances in Hierarchical Clustering Algorithms which Use Cluster Centers", Computer Journal 26, pp. 354-359.
Osei-Bryson, K.-M. (2004) "Evaluation of Decision Trees: A Multi-Criteria Approach", Computers & Operations Research 31:11, pp. 1933-1945.
Saaty, T. (1980) The Analytic Hierarchy Process: Planning, Priority Setting, Resource Allocation, McGraw-Hill, New York.
Saaty, T. (1989) "Group Decision Making and the AHP", in B. Golden, E. Wasil, and P. Harker (Eds.), The Analytic Hierarchy Process: Applications and Studies, pp. 59-67.
Ward, J. (1963) "Hierarchical Grouping to Optimize an Objective Function", J. Am. Stat. Assoc. 58, pp. 236-244.

DISPERSION OF GROUP JUDGMENTS
The Geometric Expected Value Operator

THOMAS L. SAATY and LUIS G. VARGAS
Joseph M. Katz Graduate School of Business, University of Pittsburgh

Abstract: To achieve a decision with which the group is satisfied, the group members must accept the judgments, and ultimately the priorities. This requires that (a) the judgments be homogeneous, and (b) the priorities of the individual group members be compatible with the group priorities. There are three levels in which the homogeneity of group preference needs to be considered: (1) for a single paired comparison (monogeneity), (2) for an entire matrix of paired comparisons (multigeneity), and (3) for a hierarchy or network (omnigeneity). In this paper we study monogeneity and the impact it has on group priorities.

Keywords: reciprocal uniform distribution, geometric mean, geometric dispersion, group cohesiveness, group liaison, principal right eigenvector, beta distribution

INTRODUCTION

In all facets of life groups of people get together to make decisions. The group members may or may not be in agreement about some issues, and that is reflected in how homogeneous the group is in its thinking. In the AHP, groups make decisions by building a hierarchy together and providing judgments expressed on a 1 to 9 discrete scale having the reciprocal property. Condon et al. (2003) mention that there are four different ways in which groups estimate weights in the AHP: "consensus, vote or compromise, geometric mean of the individual judgments, and weighted arithmetic mean." The first three deal with judgments of individuals while the last deals with the priorities derived from the judgments.

To achieve a decision with which the group is satisfied, the judgments, and ultimately the priorities, must be accepted by the group members. This requires that (a) the judgments be homogeneous, and (b) the priorities of the individual group members be compatible with the group priorities. There are three levels in which the homogeneity of group preference needs to be considered: (1) for a single paired comparison (monogeneity), (2) for an entire matrix of paired comparisons (multigeneity), and (3) for a hierarchy or network (omnigeneity).

Monogeneity relates to the dispersion of the judgments around their geometric mean. The geometric mean of group judgments is the mathematical equivalent of consensus if all the members are considered equal; otherwise one would use the weighted geometric mean. Aczel and Saaty (1983) showed that the geometric mean is the only mathematically valid way to synthesize reciprocal judgments that preserves the reciprocal condition. If the group judgments for a single paired comparison are too dispersed, i.e., they are not close to their geometric mean, the resulting geometric mean may not be used as the representative judgment for the group.

Multigeneity relates to the compatibility index of the priority vectors. The closeness of two priority vectors $w$ and $v$ can be tested through their compatibility index (Saaty, 1994), given by $\frac{1}{n^2} e^{T}(W \circ V^{T})e$, where $W = (w_i/w_j)$, $V = (v_i/v_j)$, $e = (1, \ldots, 1)^{T}$, and $\circ$ is the Hadamard or elementwise product. Note that for a reciprocal matrix $A$ with principal eigenvalue $\lambda_{\max}$ and corresponding right eigenvector $w$, $\frac{1}{n} e^{T}(A \circ W^{T})e = \lambda_{\max}$. Thus, one can test the compatibility of each individual's vector with that derived from the group judgments. A homogeneous group should have compatible individuals.
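To make the index concrete, here is a minimal Python sketch; it is our illustration rather than the authors' code, and the function name and test vectors are hypothetical:

```python
import numpy as np

def compatibility_index(w, v):
    """Saaty's compatibility index of two positive priority vectors:
    (1/n^2) e^T (W o V^T) e, where W and V are the ratio matrices."""
    w = np.asarray(w, dtype=float)
    v = np.asarray(v, dtype=float)
    n = len(w)
    W = np.outer(w, 1.0 / w)   # W[i, j] = w_i / w_j
    V = np.outer(v, 1.0 / v)   # V[i, j] = v_i / v_j
    return (W * V.T).sum() / n**2

# Identical vectors give exactly 1; nearby vectors give values slightly above 1.
print(compatibility_index([0.5, 0.3, 0.2], [0.5, 0.3, 0.2]))    # 1.0
print(compatibility_index([0.5, 0.3, 0.2], [0.45, 0.35, 0.2]))  # ~1.01
```

The index equals 1 exactly when the two vectors are proportional and grows as they diverge, which is what makes it usable as a closeness test.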
It is clear that homogeneity at the paired comparisons level implies compatibility at the group level, but the converse is not always true. At the hierarchy or network level, it appears that it is more meaningful to speak of compatibility than of homogeneity. The main thrust of this paper is to study monogeneity.

Dispersion in judgments leads to violations of Pareto Optimality at the pairwise comparison level and/or for the entire matrix from which priorities are derived. Ramanathan and Ganesh (1994) explored two methods of combining judgments in hierarchies, but these methods violated the Pareto Optimality Principle for pairwise comparisons (Saaty and Vargas, 2003), and hence they incorrectly concluded that the geometric mean violates Pareto Optimality. Pareto Optimality at the pairwise level is not sufficient to ensure Pareto Optimality at the priority level. Fundamentally, Pareto Optimality means that if all individuals prefer A to B then so should the group. The group may be homogeneous in some paired comparisons and heterogeneous in others, thus violating Pareto Optimality. The degree of violation of Pareto Optimality can be measured by computing compatibility along the rows, which yields a vector of compatibility values.

What does one do when a group is not homogeneous in all its comparisons? Lack of homogeneity (heterogeneity) on some issues may lead to breaking up the group into smaller homogeneous groups. How should one separate the group into homogeneous subgroups? Since homogeneity relates to dispersion around the geometric mean, and dispersion itself involves uncertainties, how much of the dispersion is innate and how much is noise that, once filtered out, lets one speak of true homogeneity? In other words, how does one separate random considerations from committed beliefs? Dispersion at the single paired comparison level affects the priorities obtained by each group member individually and could lead to violating Pareto Optimality. Should one combine or synthesize the priorities of the individuals to obtain the group priority, or should one combine their judgments?
Here we develop a way to test monogeneity, i.e., how homogeneous the judgments of the members of a group are for each judgment they give in response to paired comparisons. This is done by deriving a measure of the dispersion of the judgments based on the geometric mean. Computing the dispersion around the geometric mean requires a multiplicative approach rather than the usual additive expected value used to calculate moments around the arithmetic mean. This leads to a new multiplicative or geometric expected value, used to define the concept of geometric dispersion. The geometric dispersion of a finite set of values is given by the geometric mean of the ratios of the values to their geometric mean, if the ratio is greater than 1, or the reciprocal, if the ratio is less than or equal to 1. This measure of the variability of the judgments around the geometric mean allows us to (a) determine if the geometric mean of the judgments of a group can be used as the synthesized group judgment, (b) if the geometric mean cannot be used, divide the group into subgroups according to their geometric dispersion, and (c) measure the variability of the priorities corresponding to the matrix of judgments synthesized for the group.

In general, unless a group decides through consensus which judgment to assign in response to a paired comparison, the individual members may give different judgments. We need to find out whether the dispersion of this set of judgments is a normal occurrence in the group behavior. To do this, we compare the dispersion of the group with the dispersion of a group providing random responses to the paired comparison. Thus, we assume that an individual's pairwise comparison judgments about homogeneous elements are random, expressed on the discrete scale 1/9, 1/8, ..., 1/2, 1, 2, ..., 9 of seventeen equally likely values. A sample consists of a set of values selected at random from the set of seventeen values, one for each member of the group. It is the dispersion of this sample of numbers around its geometric mean that concerns us. This dispersion can be considered a random variable with a distribution. Because treating the judgments as discrete variables becomes an intractable computational problem as the group size increases, we assume that the judgments belong to a continuous random distribution. For example, if there are five people each choosing one of the 17 numbers in the scale 1/9, ..., 1, ..., 9, there are $17^5$ possible combinations, of which 20,417 are different. Thus, the dispersion of each sample from its geometric mean takes a large number of values, for which one needs to determine the frequency and thus the probability distribution. To deal with this complexity, we use the continuous generalization instead. This allows us to fit probability distributions to the geometric dispersion for groups of arbitrary size. Once we have the continuous distribution of the geometric dispersion, the parameters that characterize this distribution are a function of the number of individuals n in the group.

To use the geometric mean of a set of judgments given by several individuals in response to a single pairwise comparison as the representative judgment for the entire group, the dispersion of the set of judgments from the geometric mean must be within some prescribed bounds. To determine these bounds, we use the probability distribution of the sample geometric dispersion mentioned above. We can then find how likely the observed value of the sample geometric dispersion is. This is done by computing the cumulative probability below the observed value of the sample dispersion in the theoretical distribution of the dispersion. If it is small, then the observed value is less likely to be random, and we can then infer that the geometric dispersion of the group is "small" and the judgments can be considered homogeneous at that specified level. On the other hand, if the dispersion is unacceptable, then we could divide the group of individuals into subgroups representing similarity in judgment.

The remainder of the paper is structured as follows. In section 2 we give a summary of the geometric expected value concept and its generalization to the continuous case, which leads to the concept of the product integral. In section 3 we define the geometric dispersion of a positive random variable and apply it to the judgments of groups. In section 4 we approximate the distribution of the group geometric dispersion. In section 5 we sketch how groups could be divided into subgroups if the geometric dispersion is large, and in section 6 we show the impact of the dispersion of a group's judgments on the priorities associated with their judgments.

2. GENERALIZATION OF THE GEOMETRIC MEAN TO THE CONTINUOUS CASE

Let X be a random variable. Given a sample $\{x_1, x_2, \ldots, x_n\}$ from this random variable, the sample geometric mean is given by $\bar{x}_g = \left(\prod_{k=1}^{n} x_k\right)^{1/n}$. Let us assume that not all the values are equally likely, and that their absolute frequencies are equal to $n_1, n_2, \ldots, n_m$ with $\sum_{i=1}^{m} n_i = n$. Then the sample geometric mean is given by $\prod_{i=1}^{m} x_i^{n_i/n}$, where the ratios $n_i/n$ estimate the probabilities $p_i = P[X = x_i]$. Thus the geometric expected value of a discrete random variable X is given by

$$G[X] = \prod_{i} x_i^{P[X = x_i]}. \qquad (1)$$

In the continuous case, because $P[X = x] = 0$ for all x, we need to use intervals rather than points, and hence we obtain

$$G[X] = \exp\left(\int \ln x \, dF(x)\right). \qquad (2)$$

Equation (2) is known as the product integral (Gill and Johansen, 1990). If X is defined in the interval $[a, b]$ with density $f$, we have $\ln G[X] = \int_a^b \ln x \, f(x)\,dx$. In general, we have $G[X] = \exp\left(\int_{\Omega} \ln x \, f(x)\,dx\right)$, where $\Omega$ is the domain of the variable X and $f$ is its probability density function.
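As a quick worked illustration of the product integral (ours, not the paper's): for X uniform on [0, 1],

$$\ln G[X] = \int_0^1 \ln x \, dx = \big[x \ln x - x\big]_0^1 = -1, \qquad G[X] = e^{-1} \approx 0.368,$$

which lies below the arithmetic mean 1/2, as the arithmetic mean-geometric mean inequality requires.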
3. THE GEOMETRIC DISPERSION OF A POSITIVE RANDOM VARIABLE

Using the geometric expected value, we define a measure of dispersion similar to the standard deviation. Let $\Delta_X$ be the geometric dispersion of a positive random variable X, given by

$$\Delta_X = G\!\left[\max\!\left(\frac{X}{G[X]}, \frac{G[X]}{X}\right)\right] = \exp\left(E\big[\,\big|\ln X - \ln G[X]\big|\,\big]\right).$$

For a positive random variable, $\Delta_X \ge 1$. It is now possible to write $X = G[X] \cdot Z$, where the variable $Z = X/G[X]$ has a geometric mean equal to 1 and a geometric dispersion equal to $\Delta_X$.

3.1 Geometric Dispersion of Group Judgments

Let $X_k$, k = 1, 2, ..., n, be the independent identically distributed random variables associated with the judgments. Let $\{X_k, k = 1, 2, \ldots, n\}$ be continuous random variables distributed according to a reciprocal uniform distribution, i.e., the variable $Y_k = \ln X_k$ is a uniform random variable defined in the interval $[-\ln 9, \ln 9]$. The probability density function (pdf) of $Y_k$ is $g(y) = \frac{1}{2\ln 9}$, $y \in [-\ln 9, \ln 9]$, and hence the pdf of $X_k$ is given by $f(x) = \frac{1}{2x\ln 9}$, $x \in [1/9, 9]$. The sample geometric dispersion is given by:

$$d_n = \left(\prod_{k=1}^{n} \max\!\left(\frac{x_k}{\bar{x}_g}, \frac{\bar{x}_g}{x_k}\right)\right)^{1/n}, \qquad \bar{x}_g = \left(\prod_{k=1}^{n} x_k\right)^{1/n}.$$

Let $x_{(1)} \le x_{(2)} \le \cdots \le x_{(n)}$ be the order statistics corresponding to the sample $\{x_k, k = 1, 2, \ldots, n\}$. Conditioning on the index of the largest order statistic that is less than or equal to the sample geometric mean (Galambos, 1978), the density function of the sample geometric dispersion can be written as a convex combination of the density functions of variables that are ratios of products of reciprocal uniform variates. There are closed form expressions for the density function of the geometric dispersion for a group consisting of three or fewer individuals, but for larger groups the derivation is cumbersome and not much precision is gained from it. Instead, we approximate the distribution by simulation.
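The sample geometric dispersion is easy to compute directly. The following minimal Python sketch is our illustration (the paper itself gives no code); it reproduces the dispersion value of the six judgments used in the worked example of the next section:

```python
import numpy as np

def geometric_dispersion(judgments):
    """Sample geometric dispersion: the geometric mean of the ratios of
    the values to their geometric mean, reciprocated when below 1.
    Uses max(r, 1/r) == exp(|ln r|) to vectorize the computation."""
    x = np.asarray(judgments, dtype=float)
    log_ratios = np.log(x) - np.log(x).mean()   # ln(x_k / geometric mean)
    return np.exp(np.abs(log_ratios).mean())

print(geometric_dispersion([2, 3, 7, 9, 1, 2]))  # ~1.9052, as in section 4
```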
4. APPROXIMATIONS OF THE GEOMETRIC DISPERSION OF GROUP JUDGMENTS

We computed the geometric dispersion of 20,000 randomly generated samples under the assumption that the judgments are distributed according to a continuous reciprocal uniform distribution. We did this for groups consisting of 4, 5, ..., 15, 20, 25, 30, 35, 40, 45, and 50 individuals. We found that as the group size increases, the geometric dispersion tends to become gamma distributed, with density function given by

$$f(d) = \frac{(d-1)^{\alpha-1}\, e^{-(d-1)/\beta}}{\beta^{\alpha}\, \Gamma(\alpha)}, \qquad d \ge 1,$$

a gamma distribution with location parameter equal to 1, since the geometric dispersion is never less than 1. The parameters $\alpha$ and $\beta$ of these gamma distributions are given in Table 1. To extend these models to groups of any size, we fit regression models to the parameters of the gamma distributions. Regression models of the shape and the scale parameters versus n appear to be surprisingly robust (R-squared = 99.9741 and R-squared = 99.981, respectively). In addition, the average and variance of the geometric dispersion can also be estimated from the parameters of these models: mean = exp(1.03505 − 1.01298/n) (R-squared = 99.8463), with a corresponding model for the variance (R-squared = 99.9706). Note that as n tends to infinity, the average geometric dispersion tends to 2.81524 (99% C.I. (2.79228, 2.8384)) and the variance tends to zero (99% C.I. (1.44E-9, 2.31E-9)).

We now have the basis for a statistical test to decide whether the dispersion of a group can be considered larger than usual, i.e., whether the probability of obtaining the observed value of the sample geometric dispersion exceeds a pre-specified significance level (e.g., 5 percent) in the distribution of the group geometric dispersion. For example, for a group of size 6 whose judgments on a given issue are equal to {2, 3, 7, 9, 1, 2}, the geometric dispersion of the group is equal to 1.9052169. The average geometric dispersion is estimated to be equal to exp(1.03505 − 1.01298/6) = 2.378. Taking the usual significance level of 5 percent, we observe that the p-value corresponding to the sample geometric dispersion is small: it is rare to observe values of the geometric dispersion smaller than the sample value. Hence the geometric dispersion of the group is not unusually large, which in turn implies that the geometric mean can be used as the representative preference judgment for the entire group.
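The test can also be mimicked by straightforward simulation, in the spirit of the paper's own approximation. The sketch below (ours, with hypothetical function names) estimates the p-value P[D <= observed] by sampling random reciprocal-uniform groups:

```python
import numpy as np

def dispersion(x):
    """Geometric dispersion along the last axis (see the sketch above)."""
    logs = np.log(np.asarray(x, dtype=float))
    return np.exp(np.abs(logs - logs.mean(axis=-1, keepdims=True)).mean(axis=-1))

def dispersion_p_value(judgments, trials=20_000, seed=0):
    """Monte Carlo estimate of P[D <= observed] when the log-judgments
    of a random group are uniform on [-ln 9, ln 9] (reciprocal uniform)."""
    rng = np.random.default_rng(seed)
    n = len(judgments)
    random_groups = np.exp(rng.uniform(-np.log(9), np.log(9), size=(trials, n)))
    return float((dispersion(random_groups) <= dispersion(judgments)).mean())

# Paper's example: six judgments {2, 3, 7, 9, 1, 2}, dispersion ~1.905.
print(dispersion_p_value([2, 3, 7, 9, 1, 2]))  # small => judgments homogeneous
```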
5. GROUP MEMBER CLASSIFICATION BY THE GEOMETRIC DISPERSION

Let us assume that $\{a_k, k = 1, 2, \ldots, n\}$ is a group of judgments and let $a_{(1)} \le a_{(2)} \le \cdots \le a_{(n)}$ be their order statistics. If the p-value of the group's geometric dispersion satisfies $p \le \alpha$ (where $\alpha$ is usually taken to be equal to 0.05), then the geometric mean can be used as a representative of the group judgment. On the other hand, if $p > \alpha$, then the group needs to discuss the paired comparisons further in an attempt to reach consensus.

To determine which members of a group disagree the most, and hence make the geometric dispersion large, we find the p-values corresponding to the geometric dispersions of subgroups of the judgments obtained by successively deleting judgments from the group. Let $p_k$ denote the p-value of the k-th such subgroup. We give the following results without proof because of space limitations.

Lemma 1: $p_k$ is a non-decreasing function of k.

Theorem 3: Given a set of judgments with corresponding ordered geometric dispersions, if $p_k \ge \alpha$ for some k, then $p_j \ge \alpha$ for all $j \ge k$.

Definition: A group of judgments $\{a_k, k = 1, 2, \ldots, n\}$ is said to be homogeneous (at level $\alpha$) if its geometric dispersion passes this test.

Definition: A member of a group of judgments is said to be a liaison of the group if the group is not homogeneous after the elimination of the corresponding judgment from the set of judgments.

The Liaison Theorem: Given a group of n judgments, a liaison does not exist if and only if all subgroups of cardinality (n − 1) are homogeneous.

The existence of a liaison means that we may be able to divide a group into two subgroups whose preferences differ, and for which the geometric mean cannot be used as the representative group judgment. This is the subject of further study.

6. GEOMETRIC DISPERSION AND PRIORITY VARIATION

To study the relationship that exists between the geometric dispersion of a group and the dispersion of the corresponding eigenvectors, we find the range of variability of each component of the eigenvector for given sets of group judgments. This is done by first finding the distribution of the eigenvector components for random reciprocal matrices whose entries are distributed according to reciprocal uniform distributions.

Theorem 4: For a random reciprocal matrix with entries distributed according to a reciprocal uniform distribution, the components of the random variable corresponding to the principal right eigenvector are distributed according to a beta distribution.

Write each entry of the group judgment matrix as $a_{ij} = g_{ij}\,\varepsilon_{ij}$, where $g_{ij}$ is the geometric mean of the individual judgments and $\varepsilon_{ij}$ is a perturbation factor whose geometric mean is 1. Let us assume that the reciprocal matrix of geometric means is consistent, i.e., $g_{ij} g_{jk} = g_{ik}$. Then the principal right (pr-) eigenvector of the judgment matrix is given by the Hadamard product of the pr-eigenvector of the matrix of geometric means and the pr-eigenvector of the matrix of perturbations, whose entries are ratios of reciprocal uniform variables. Since the geometric dispersion of $\varepsilon_{ij}$ and that of $1/\varepsilon_{ij}$ is the same, bounding the dispersion of the entries of the perturbation matrix bounds the dispersion of the entries of the pr-eigenvector.

For example, consider a group of five people providing paired comparisons of four elements. The geometric dispersion of each set of judgments and the corresponding p-values (see Table 2) show that the judgments (1,3), (1,4) and (3,4) have large geometric dispersion. This leads to large dispersion in the values of the eigenvector components (see Table 3a) and a violation of Pareto Optimality. Reducing the dispersion of those judgments leads to less dispersed eigenvectors that satisfy Pareto Optimality (see Table 3b).
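This section relies throughout on principal right eigenvectors of reciprocal matrices. Since the paper's example matrices are not reproduced here, the sketch below (ours) shows the standard power-iteration computation on a hypothetical consistent 3x3 matrix:

```python
import numpy as np

def principal_right_eigenvector(A, iters=100):
    """Power iteration for the principal right eigenvector of a positive
    reciprocal matrix, normalized to sum to 1 (an AHP priority vector)."""
    w = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(iters):
        w = A @ w
        w /= w.sum()
    return w

# Hypothetical reciprocal matrix standing in for a matrix of group
# geometric means; it is consistent, so the eigenvector is exact.
A = np.array([[1.0,  2.0, 4.0],
              [0.5,  1.0, 2.0],
              [0.25, 0.5, 1.0]])
print(principal_right_eigenvector(A))  # [4/7, 2/7, 1/7]
```

Power iteration converges for any positive matrix by the Perron-Frobenius theorem, which is why it is the usual workhorse for deriving AHP priorities.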
CONCLUSIONS

In this paper we put forth a framework to study group decision-making in the context of the AHP. A principal component of this framework is the study of the homogeneity of judgments provided by the group. We developed a new measure of the dispersion of a set of judgments from a group for a single paired comparison, and illustrated the impact that this dispersion has on the group priorities.

References

Aczel, J. and T. L. Saaty, 1983, Procedures for synthesizing ratio judgments. Journal of Mathematical Psychology 27: 93-102.
Condon, E., B. Golden and E. Wasil, 2003, Visualizing group decisions in the analytic hierarchy process. Computers & Operations Research 30: 1435-1445.
Galambos, J., 1978, The Asymptotic Theory of Extreme Order Statistics. New York, J. Wiley.
Gill, R. D. and S. Johansen, 1990, A survey of product-integration with a view toward application in survival analysis. The Annals of Statistics 18(4): 1501-1555.
Ramanathan, R. and L. S. Ganesh, 1994, Group preference aggregation methods employed in the AHP: An evaluation and an intrinsic process for deriving members' weightages. European Journal of Operational Research 79: 249-269.
Saaty, T. L., 1994, Fundamentals of Decision Making. Pittsburgh, PA, RWS Publications.
Saaty, T. L. and L. G. Vargas, 2003, The possibility of group choice: Pairwise comparisons and merging functions. Working Paper, The Joseph M. Katz Graduate School of Business, University of Pittsburgh, Pittsburgh, PA.
