The author is grateful to Dr Abby Kendrick for her support and guidance throughout the project.

Table of Contents

Abstract
I. INTRODUCTION
II. BODY
Literature review
Practices of university ranking
Purposes of university rankings
Impact of university rankings on stakeholders
Issues raised in university rankings
5.1 Ranking and the university's role and mission
5.2 Problems in the ranking process
Recommendations
III. CONCLUSION
References

Abstract

This report presents the initial purpose of ranking universities, the effects of rankings on stakeholders, the most recent developments in world university rankings, and some issues surrounding rankings. It can be concluded that, despite the many shortcomings that remain, national and international university ranking systems continue to exist and exert far-reaching influence in society. On the one hand, rating agencies should improve their systems of criteria and measurement methods and protect their objectivity and integrity in the implementation process. On the other hand, stakeholders should be more conscious in their use of ranking results, and attention should be paid to the negative effects of rankings, such as the temptation to chase high positions by targeting indicators and pursuing achievement in quantity only.

I. INTRODUCTION

Ranking is based on multiple criteria that not all organizations apply in the same way, so a university can be ranked differently from one system to another. First of all, it can be said that university rankings are shaped by the needs of customers. They aim to provide comparative information on which school is best, and in which areas, in order to support the decision to choose a particular school, or to provide a holistic and objective overview of how universities rank in the world. At the same time, competition in the education field has drawn institutions' attention to their place in the rankings, and they have begun to see it as a goal, improving performance and quality to attract students. However, these powerful effects have had unintended consequences for the recognition, evaluation, and selection of universities. In particular, ranking results have broadened their influence and had increasingly powerful, multi-dimensional impacts on everyone involved, from the macro to the micro level: the state, national policy planners, managers, researchers, lecturers, employers, parents, and students. Because of these impacts, university rankings continue to shape the direction and strategy of all stakeholders, from the state to the individual. Despite the many shortcomings that remain, national and international university ranking systems will continue to exist and significantly influence society. The problem, then, is that people need to use rankings in the right way and make decisions as consciously as possible. For students, choosing a university is always a difficult choice with a profound effect on their future, as well as on the development of society in general. This article aims to analyze the contested issues around ranking systems and then suggest some recommendations. To support this aim, the paper is divided into three parts with the following main ideas: practices of university ranking, purposes of university rankings, the impact of university rankings on stakeholders, and issues raised in university rankings.

II. BODY

Literature review

There are two popular ranking approaches: overall ranking of the entire university, and ranking by specialty.

Overall ranking of the entire university

These rankings usually look at the university as a whole, regardless of its particular disciplines, fields of study, or organizational structure. The first attempt at international university ranking was Asiaweek's, undertaken in 1997. This ranking was based on academic reputation as voted by the schools themselves, competitiveness in attracting students, faculty resources, research achievements, and financial resources. Most of the data was provided by the universities themselves when they agreed to participate in Asiaweek's survey, except that
citation data was taken from independent sources. After four years of implementation, the ranking generated a wide variety of opinions, both supportive and opposed. Finally, after the fourth edition in 2000, Asiaweek decided to stop the ranking because too many universities in the region were withdrawing or refusing to cooperate (Salmi & Saroyan, 2007).

However, it was not until 2003 that the race between university rankings exploded onto a global scale, when the Academic Ranking of World Universities (ARWU) was published by the Institute of Higher Education of Shanghai Jiao Tong University, China. The initial purpose of this ranking was to determine the global position of the leading Chinese universities, but it went on to become one of the most influential and most widely used rankings in the world in later years. ARWU uses criteria that evaluate output, especially in scientific research. Specifically, it has four criteria and six indicators with different weights, relating to the numbers of Nobel Prizes and Fields Medals, the number of articles published in Nature and Science, and the numbers of indexed articles and highly cited researchers, based on data from Thomson Reuters' Science Citation Index-Expanded (SCIE) and Social Science Citation Index (SSCI) (formerly of the Institute for Scientific Information, ISI). For each indicator, the highest-scoring school gets 100 and the other schools are scored as a percentage of the highest score; the indicator scores are then weighted to calculate the final total for each school, with the top school again scoring 100 and the others scaled in proportion to it. Final scores are sorted in descending order, and each school's position corresponds to the number of schools above it in the ranking (Liu, Cheng & Liu, 2005).

One year after ARWU was first announced, the Times Higher Education (THES) rankings also came out, in partnership with Quacquarelli Symonds (QS). This THES / THE-QS
ranking was maintained from 2004 to 2009 with a rationale based on the "four pillars" that make up the value of a world-leading university: high-quality research; high-quality teaching; a high rate of graduate employment; and an "international perspective", or level of internationalization (Baty, 2009). Since the end of 2009, THE has moved on to work with Thomson Reuters to develop a new and more rigorous ranking. The core activities of a university are divided into four categories: scientific research, economic activity and creativity, international diversity, and academic indexes. Whereas the old tables offered only six indicators, with a very high weight on the reputation survey, the new rankings are expected to give more priority to research-activity indicators, drawing on Thomson Reuters' scientific publishing databases, and especially to reduce dependence on any single index by increasing the value of multidisciplinary indicators closely related to the functions and mission of universities. Thereby, THE hopes to create a more rigorous, balanced, and transparent ranking (Baty, 2010).

After THE changed its ratings, QS selected a new partner, US News, and produced a new ranking table, "World's Best Universities", in 2009. From ranking only universities in the United States, US News thus began to reach a global scale. The criteria and weights of this ranking are based on both internal and external assessments of the university and remain basically the same as when QS cooperated with THE, including the results of academic peer review and an employer survey, the student-teacher ratio, the number of article citations, and the numbers of international students and lecturers. In addition to listing the 400 highest-scoring schools in the world that year, the table also includes continental sub-tables covering Asia and the Middle East, Australia and New Zealand, Britain, Europe, and Canada (Morse, 2010).

Ranking by specialty

Although the tendency
to see the whole university as a collective ranking subject is widespread, it is not the only approach to university assessment and ranking. For a long time, a number of rankings have explored the lower levels of universities, often faculties or departments, referred to as programs, and produced results that consider factors specific to the specialized field. There are many tables of this kind, especially in economics and commerce, but the most notable are the rankings implemented jointly by the Centrum für Hochschulentwicklung (CHE) and the Deutscher Akademischer Austauschdienst (DAAD) (Usher & Savino, 2006).

In terms of methodology, CHE argues that the first group of people using the rankings are high school graduates. What they need to choose is a specialized program rather than a university, so rankings should focus on a single program instead of the whole institution. Moreover, experience shows that even within one university there is usually a large difference between one program and another, so an overall average assessment of the school is meaningless for the students of a specific program. Furthermore, no theoretical basis or practical experience demonstrates the validity of a ranking based on a composite value built from individual quantitative indicators. Instead, focusing on the individuality of its target audience of future students, with their differing purposes and expectations, CHE lets users combine different metrics themselves to build tables according to their own wishes. The most notable feature of the CHE is its process for developing the set of rating criteria, with an indicator system that covers many different aspects of a training program from the perspective of the rankings' users. These indicators reflect a wide range of information, from general to detailed, in line with CHE's multidimensional perspective, and have two main characteristics: they are taken from many different sources, and they include both objective facts and subjective judgements. In addition, CHE maintains a quality-assurance mechanism by sending the collected data back to the universities and programs being rated for review, addition, and correction. At the same time, every year an advisory panel is established for the majors under review, to examine and evaluate the indicators and methods used as well as the validity of the findings (Federkeil, 2002; Dyke, 2005).

Sharing CHE's perspective on the role of the specialty in university rankings, but with different data collection and processing, Van Raan and his colleagues at CWTS, University of Leiden (Netherlands), developed the "Leiden Ranking" based on citation statistics from the Web of Science (Thomson Reuters). This ranking covers about 1,000 universities, each with roughly 700 articles per year in the Web of Science. As a "bibliometrics" study, CWTS set up an indicator system different from ARWU's, although both use similar data sources (Thomson Reuters, formerly known as ISI). The most important feature of this ranking is the "crown indicator": citations per publication normalized by the mean citation impact of the field at international level (CPP/FCSm), a concept developed by the CWTS research team in recent years to assess the impact of research papers published in international scientific journals while allowing for the specificity of each subject area (Van Raan, 2003; Opthof & Leydesdorff, 2010).

Even the tables that follow the overall trend are taking more interest in these specialized characteristics. ARWU, for example, has since 2007 added rankings of the top universities in five broad areas of study (Natural Sciences and Mathematics; Engineering, Technology and Computer Science; Life Sciences and Agriculture; Clinical Medicine and Pharmacology; Social Sciences). Recently, the rankings produced by QS in collaboration with US News have also been divided into five categories: Arts and Humanities; Engineering and IT; Life Sciences and Biomedical Sciences; Natural Sciences and Physics; and Social Sciences.
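The ARWU scoring arithmetic described above (rescale each indicator so the top school gets 100, take a weighted sum, then rescale the totals so the top school again gets 100) is simple enough to sketch in a few lines. The following Python sketch mirrors that arithmetic only; the school names, indicator names, weights, and raw numbers are invented for illustration and are not ARWU's actual indicators or data.

```python
# Hypothetical raw indicator values per school (invented for the example).
raw_scores = {
    "School A": {"alumni_awards": 30, "papers": 900,  "citations": 5000},
    "School B": {"alumni_awards": 10, "papers": 1200, "citations": 4000},
    "School C": {"alumni_awards": 0,  "papers": 600,  "citations": 2500},
}

# Assumed indicator weights for this sketch (sum to 1).
weights = {"alumni_awards": 0.3, "papers": 0.4, "citations": 0.3}

def rank(raw, weights):
    """Return (school, final score) pairs in descending order of score."""
    # Step 1: the best raw value on each indicator defines the 100-point mark.
    best = {i: max(vals[i] for vals in raw.values()) for i in weights}
    # Step 2: each school's indicator becomes a percentage of the best value,
    # and the weighted sum gives a provisional total.
    totals = {
        school: sum(w * 100 * vals[i] / best[i] for i, w in weights.items())
        for school, vals in raw.items()
    }
    # Step 3: rescale so the top school's final score is exactly 100.
    top = max(totals.values())
    return sorted(
        ((school, 100 * t / top) for school, t in totals.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

for position, (school, score) in enumerate(rank(raw_scores, weights), start=1):
    print(position, school, round(score, 1))
    # prints: 1 School A 100.0 / 2 School B 82.2 / 3 School C 38.9
```

Note how the size effect criticized later in this report is already visible in this small sketch: every indicator is an absolute count, so a larger institution scores higher on raw volume alone.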
Practices of university ranking

University rankings list schools in a hierarchy based on a set of indicators. Rankings can rest on subjective assessments of quality gathered through statistics and surveys of educators, professionals, teachers, students, and others. There has been much debate since the late 1990s on the advantages and disadvantages of ranking universities in the United States. Kevin Carey argues that U.S. News & World Report's rating system is just a list of criteria reflecting the outward appearance of reputable universities. In his view the rating system is not perfect: rather than focusing on the basics of student education and training, the magazine's ranking is based on three factors: reputation, wealth, and school exclusivity. Carey thinks there are many more important things parents and students must know about school choice, such as how students learn and how likely they are to get a degree.

As mentioned above, the ranking of universities in the West was born to serve consumers, so the rankings were classified by major so that users could compare them easily. The USNWR rating criteria are academic credentials, student selectivity, faculty quality, financial resources, graduation rates, and alumni satisfaction. USNWR's results are mainly based on two sources of information: the opinions of graduates, who often gather a great deal of information before deciding to study at a particular school, and the evaluations of other universities' managers. The pragmatism of Westerners is evident in the practicality of these criteria: for USNWR, the "consumers" of higher education are of the greatest interest. The category rankings are likewise used to compare the information needed to choose a school. Similarly, the THES rankings are aimed at the selection of international schools, so the indicators THES uses also attach importance to the international character and prestige of the school. These include: the results of surveys of teachers and scientists from different countries (40%), the evaluation of employers (10%), the presence of international lecturers and scientists (5%) and of international students (5%), the ratio of teachers to students (20%), and research citations per faculty member (20%).

However, even for criteria that could be measured by quantitative methods, such as academic credibility or student selectivity, the USNWR system still relies on individuals' evaluations as its sources of information. This indicates that the national university rankings were not originally scholarly, and that their results are meaningful only as references. As the THES system expanded into international rankings, more attention was paid to quantitative methods and to strengthening the scientific basis of the ranking, though it is still not possible to explain, for example, why peer review carries such a huge weight (40%) compared with the employers' rating (10%). The way the Western systems work teaches a lesson about purposefulness: it should be stated explicitly what the purpose of a rating is and at whom it is aimed, because this answer determines the method and usefulness of the rankings, as well as how the ranking tables are to be interpreted.

Purposes of university rankings

Initially, university rankings were shaped by the need to serve users: to provide them with comparative information on what is best for them, so that they can apply to the school that best suits their needs and abilities. This is essentially the same as buying a car, because Western society has accepted education as a service and study as a form of investment in one's future living. This explains why the original rating systems were created by the press to serve their readers, and were not academic. It is also why university ranking systems were originally national ratings and only later expanded to an international scale. Three factors have impacted university rankings and changed their purpose, nature, and scale. Firstly, the globalization process in university education makes
people want to compare the quality of universities in different countries, and this has led to the formation of international university ranking systems. Secondly, competition in the higher education sector has prompted schools to pay attention to their positions in the rankings and to see them as a competitive goal for attracting students, together with reviewing and improving their own activities. Thirdly, the development of the knowledge economy has made governments increasingly aware of the importance of higher education; many countries regard their positions in international tables as an important indicator of their level of higher education development and of their competitiveness.

Impact of university rankings on stakeholders

A number of events show the great impact of university rankings on higher education and on stakeholders such as the state, the school, faculty, students, and the public; some say rankings have changed the landscape of higher education. First of all, rankings affect the prestige of schools and of their individual leaders. The head of the University of Malaya, Malaysia's oldest and largest public university, was sacked when its ranking in THES 2006 dropped by more than 80 places, even though the fall was explained by a change in the way THES counted international students for this school rather than by its performance. Some schools worry about their position so much that they refuse to participate in rankings at all, as when the University of Tokyo declared in 1999 that it would not provide data for Asiaweek's annual ranking (Bacani, 1999). Rankings also have an impact on governments, especially in allocating funds to education. This is clearest in China: in order to have universities comparable to Harvard or Cambridge, the Chinese state selected a number of key schools for special investment to help them reach world-class standards of excellence, a project commonly known as Project 985. Recently, Switzerland has been considering introducing an elite school system and increasing resources for these institutions to improve quality (Australian Education International, 2007). The UK government has set up the Research Assessment Exercise (RAE), and New Zealand has established a Performance-Based Research Fund (PBRF), to ensure that excellence in research activities is encouraged and rewarded.

University rankings also influence scientists' and teachers' choice of workplace. Young teachers dream of working in prestigious schools, not just to share in the school's reputation, but because the top schools are associated with the names of top scientists, from whom they can learn much of what the development of their scientific careers requires. For employers, the ranking of schools also has an impact on hiring decisions. A survey of employers in the UK and US showed that the prestige of the school a candidate attended is one of the top eight characteristics employers consider when choosing a candidate (Smith, 2006). As competition for jobs becomes increasingly fierce, graduation from a reputable school on one's resume is clearly not a bad way to impress and to show one's potential. University rankings definitely influence students' decision-making, which is well documented (Bhandari, 2002; Federkeil, 2002; Filinov & Ruchkina, 2002; Vaughn, 2002). The research of Roberts and Thomson (2007) shows a strong correlation between a school's high position in the rankings and the quality of its entering students. An Australian audit agency working with Monash University interviewed many students and found that Monash's reputation and ranking position were the most important factor in their decision to study there (AUQA, 2006). This is deepening the gap between schools: the best schools attract good students, while the unknown ones get what is left, as the Western proverb says, "Garbage in, garbage out!" Because of these impacts, university rankings continue to affect the strategic direction of
universities. Many schools have built their mission statements around rankings: "to become one of the best universities in the world" (Monash University, 2005), "one of the truly great universities of the world" (The Ohio State University, 2007), "one of the top-ranked schools in the 21st century" (Seoul National University, 2006). Such goals then govern the school's other policies on finance, personnel, expertise, equipment, and service activities.

Issues raised in university rankings

5.1 Ranking and the university's role and mission

Since their birth in Europe in the Middle Ages, universities have been centers of human society's development. Higher education researchers now agree that the university is the place where humanity's knowledge is created, stored, and spread. At different times, and in different political, economic, and social contexts, these three functions have developed under different paradigms: the "universal teaching of knowledge" (Newman, 1852); the Humboldtian combination of research and teaching (Husén, 1991); and the "triple helix" model of university-business-government, with three basic missions for the university (teaching, researching, and transferring knowledge to society) (Etzkowitz, 2003). In the context of globalization in particular, universities are under the market pressure of supply and demand and must gradually adopt measures or models of organization and management drawn from enterprises (Denman, 2005). Therefore, to measure the quality of a university it is necessary to use a variety of assessment methods and to consider the multiple aspects of its mission and reason for existence, not just a single viewpoint (Montesinos et al., 2008). As noted above, the vast majority of university rankings have indexes focused on evaluating two groups of capacities: research (through the number of scientific articles published in specialized journals, the number of citations of those articles, scientific research awards, etc.) and teaching (through criteria that allow teaching results to be estimated or judged: numbers of students, the student-teacher ratio, funds invested in teaching activities, the percentage of graduates, etc.). However, there are very few indicators that measure the university's "third mission": the transfer of technology and knowledge to society.

5.2 Problems in the ranking process

Without denying the strong pull of university rankings in recent years, most of these tables are unconvincing, especially to academics, because their implementation methodology contains many uncertainties.

Errors in bibliometrics

The assessment of research capacity based on publications and citations is known as the "bibliometric" method. Despite the outstanding advantages of professional journals with international peer-review systems, the publication and citation of scientific articles, as well as the collection of the data, are still subject to both subjective and objective factors that lead to inaccurate results, with error rates in citation statistics of up to nearly one third (Moed, 2002; Van Raan, 2005). This is to say nothing of the fact that there is almost no firm scientific basis for a link between the level of citation and the scientific quality of a published work. In addition, there is another common form of error in bibliometric measurement: the misattribution of author affiliations. When publishing research results, scientists declare their own institutions, and for non-English-speaking countries the risk of error is high owing to inaccurate translation. Likewise, some authors name only the member unit where they work directly rather than the official name of the university, and some universities share the same name. The rate of these various error types in bibliometric data is very significant (Van Raan, 2005; Guillevin, 2007).

Inequality in citation information

Current rankings almost always use bibliographic information derived from large databases such as ISI / Web of Science, Scopus, etc.
That leads to an unequal playing field. First, in terms of specialization: because these databases lean toward the natural sciences and biomedical sciences, the losing parties are the technical and technological schools and those in the social sciences, humanities, and arts. The greatest inequality, however, concerns language: most of the works indexed in these databases are in English, while scientific publishing in other languages such as German, French, Russian, Spanish, Portuguese, and Japanese also contributes importantly to global science (Ren, Zu & Wang, 2002; Van Raan, 2005). Inequality is also expressed in the tendency to "favor" US authors, who always occupy the highest positions in the annual rankings and in citation sources. Over-reliance on purely scientometric criteria that merely count citations is very dangerous when assessing the current scientific level of institutions, countries, and continents. Europe has indeed fallen behind the United States in scientific research to some extent, but not by as much as some recent ranking tables suggest (Ren, Zu & Wang, 2002; Van Raan, 2005; Altbach, 2006).

Unconvincing rating criteria

This can be called the biggest problem of most international university rankings today, and ARWU is the most criticized on this point. According to many researchers, ARWU's use of indicators and its determination of weights are unreasonable, its published results are not reproducible, and the deviation from other data sources can be very high. The size effect, of a university and of a whole education system, has a decisive influence on the ARWU ranking. Moreover, both the set of four criteria and the six rating indicators of ARWU are closely interrelated, and this lack of independence between the criteria affects the objective accuracy of the ranking results (Altbach, 2006; Enserink, 2007; Salmi & Saroyan, 2007). In addition, a different kind of criterion used in rankings is the result of surveys of different subjects (faculty, students, leaders, employers) that aim to measure their evaluation of training programs or universities. The problem is that reputation surveys of this kind carry the risk of a lack of objectivity, especially where institutions assess themselves (Salmi & Saroyan, 2007; Van Dyke, 2008).

Unreasonable rating units

The last source of uncertainty in the tables is the unit of comparison used when ranking. As mentioned above, it is usually impossible to form a complete picture of everything in one university. Each school has its own strengths, mission, organization, and socio-economic conditions, and students study in their own programs, so comparing them all by the same criteria cannot guarantee objectivity and fairness (Federkeil, 2002; Altbach, 2006; Guillevin, 2007; Montesinos et al., 2008). Moreover, differences in educational systems between countries also lead to differences in rating. For example, the US education system is demand-driven, and its rankings often aim to meet the information needs of the users of educational services, who freely choose the information that best suits their needs and expectations. European universities, meanwhile, operate on the basis of their capacity to provide education services (supply-driven), and rankings are treated as a basis for evaluating the quality of training and research. Therefore, when rankings go global, beyond the framework of any one system, it is difficult to compare schools and majors well without regard to the differences in organizational systems, and it is not easy to analyze performance indicators in a way that fairly assesses a university's performance. On this issue, no element of the tables has so far been persuasive (Clarke, 2005; Jobbins, 2005; Kivinen & Hedman, 2008).

Recommendations

Given these methodological weaknesses, differences in viewpoint, and the lack of objective, adequate, and reliable data sources discussed above, it can be seen that it
will still be a long time before one or more of these tables wins consensus from all sides. However, the growing influence of university rankings around the world is unquestionable. The problem is that, in the face of this rapid spread, people must either ignore the rankings altogether or seek to analyze and understand their limits in order to use them most appropriately (Altbach, 2006; Salmi & Saroyan, 2007). So what should be done with these rankings? The answer is that the purpose of using them must be clearly determined, to avoid falling into misuse or misinterpretation of the results. Salmi and Saroyan (2007) have recommended that:

- For universities, rankings can help in developing strategic plans and in improving the quality of teaching and research. The point is not to set a target score or a position to reach in an international ranking, but rather to analyze the specific indicators in order to understand the ranking results, reflect on strengths and weaknesses, and work out action strategies to correct them and gradually improve the quality of teaching, training, and research.

- For the authorities, the tables should be used only to encourage a culture of quality. In countries that do not have a quality assessment or quality assurance system, rankings may serve as an instrument for measuring quality; internationally, they can act as a quality-control tool for foreign students where no quality assurance organizations exist. When building a national ranking system, it is necessary to let schools participate, develop the methods, define common goals, and create collective responsibility and a spirit of sharing.

- For students and their families, employers, and the public, rankings should be seen only as a source of information and a multi-dimensional discussion channel. In particular, it is important to focus on multi-criteria tables instead of those that convert all indicators into a single criterion, since only such rankings can provide information relevant to a particular training program, or to the subject-matter and occupational skills that employers need when recruiting.

III. CONCLUSION

As early as the 1990s, in the United States and other Western countries, there were fierce criticisms of university ranking systems, primarily because the rating criteria did not reflect the true quality of performance, being too biased toward research with little regard for training, and because the measurement methods of such systems had a number of inadequacies, so that their reliability was only relative. All of these constraints have been discussed at length and can be improved, but no amount of improvement can satisfy the very different concerns of different stakeholders; hence one school may rank high in one table and low in another. Since 2006, an international group of university ranking experts has developed a guide, the Berlin Principles on Ranking of Higher Education Institutions, which sets out the need to clarify the purpose of a rating, its users, and its sources of information. As ratings have become increasingly popular, studies of the role of ratings in higher education, and of stakeholders' perceptions of and behavior toward them, have become more common.

Despite their many shortcomings, national and international university ranking systems continue to exist and to exert a profound influence on society. They persist because they meet a real need, whether academic or commercial. While the agencies or organizations implementing ratings should improve their criteria and measurement systems and protect their objectivity and integrity in the implementation process, stakeholders should be more alert in using ranking results. Users of the tables should remember that a school ranked highest in a chart is only the best on the listed criteria; it does not follow that all of its activities are the best. There are many things that are extremely important but difficult to measure, and that are therefore not expressed in the criteria of any ranking, such as the
school culture: something intangible yet real, a healthy set of relationships among the individuals of an organization, grounded in respect for truth, talent, justice and human dignity.

References:

Aguillo IF, Granadino B, Ortega JL, Prieto JA. 2006. Scientific research activity and communication measured with cybermetrics indicators. Journal of the American Society for Information Science and Technology, 57(10): 1296-1302.
Aguillo IF, Ortega JL, Fernandez M. 2008. Webometric ranking of world universities: introduction, methodology, and future developments. Higher Education in Europe, 33(2-3): 233-244.
Altbach PG. 2006. The dilemmas of ranking. International Higher Education, (42): 2-3.
Baty P. 2009. Rankings 09: Talking points. Times Higher Education, October 2009. Retrieved from: http://www.timeshighereducation.co.uk/story.asp?storycode=408562 [Accessed 15/04/2017].
Baty P. 2010. THE unveils broad, rigorous new rankings methodology. Times Higher Education, June 2010. [Accessed 20/04/2017].
Clark M. 2005. Quality assessment lessons from Australia and New Zealand. Higher Education in Europe, 30(2): 183-197.
Denman B. 2005. Comment définir l'université du XXIe siècle ? Politiques et gestion de l'enseignement supérieur, 17(2): 9-30.
Enserink M. 2007. Who ranks the university rankers?
Science, 317(5841): 1026-1028.
Etzkowitz H. 2003. Innovation in innovation: the triple helix of university-industry-government relations. Social Science Information, 42(3): 293-337.
Federkeil G. 2002. Quelques aspects de la méthodologie de classement – le classement du CHE des universités allemandes. L'Enseignement supérieur en Europe, 27(4): 389-398.
Guillevin L. 2007. Classement de Shanghai : valorisons nos universités en améliorant le classement des publications. La presse médicale, 36(12 I): 1709-1711.
Harvey L, Green D. 1993. Defining quality. Assessment and Evaluation in Higher Education, 18(1): 9-34.
Husén T. 1991. The idea of a university: changing roles, current crisis and future challenges. 2nd UNESCO/NGO Collective Consultation on Higher Education, Paris, France, 8-11 April. UNESCO New Papers on Higher Education, ED.91/WS/11, 48 p.
Jobbins D. 2005. Moving to a global stage: A media view. Higher Education in Europe, 30(2): 137-145.
Kivinen O, Hedman J. 2008. World-wide university rankings: A Scandinavian approach. Scientometrics, 74(3): 391-408.
Liu NC, Cheng Y, Liu L. 2005. Academic ranking of world universities using scientometrics: A comment to the "Fatal Attraction". Author's reply. Scientometrics, 64(1): 101-112.
Moed HF. 2002. The impact-factors debate: The ISI's uses and limits. Nature, 415(6873): 731-732.
Montesinos P, Carot JM, Martinez JM, Mora F. 2008. Third mission ranking for world class universities: beyond teaching and research. Higher Education in Europe, 33(2-3): 259-271.
Morse R. 2010. World's Best Universities: the methodology. U.S. News & World Report. [Accessed 15/04/2017].
Newman JH. 1852. Discourse: Theology a branch of knowledge. In: The idea of a university. London: Longmans, Green, and Co., 1907, p. 19-42 [new impression]. [Accessed 22/04/2017].
Opthof T, Leydesdorff L. 2010. Caveats for the journal and field normalizations in the CWTS ("Leiden") evaluations of research performance. Journal of Informetrics, 4(3): 423-430.
Ren SL, Zu GA, Wang HF. 2002. Statistics
hide impact of non-English journals. Nature, 415(6873): 732.
Sadlak J. 2006. Validity of university ranking and its ascending impact on higher education in Europe. Bridges, vol. 12, December 2006. Retrieved January 3, 2007.
Salmi J, Saroyan A. 2007. Les palmarès d'universités comme moyens d'action : usages et abus. Politiques et gestion de l'enseignement supérieur, 19(2): 33-74.
United Nations Children's Fund (UNICEF). 2000. Defining quality in education. Working Paper Series, Document No. UNICEF/PD/ED/00/02. Presented at the meeting of The International Working Group on Education, Florence, Italy, June 2000.
Usher A, Savino M. 2006. A world of difference: a global survey of university league tables. Canadian Education Report Series. Toronto, ON: Educational Policy Institute.
Van Dyke N. 2005. Vingt ans de registres de résultats universitaires. L'enseignement supérieur en Europe, 30(2): 7-32.
Van Dyke N. 2008. Self- and peer-assessment disparities in university ranking schemes. Higher Education in Europe, 33(2-3): 285-293.
Van Raan AF. 2003. The use of bibliometric analysis in research performance assessment and monitoring of interdisciplinary scientific developments. Technikfolgenabschätzung, Theorie und Praxis [Technology Assessment - Theory and Practice], 12(1): 20-29.
Van Raan AF. 2005. Fatal attraction: Conceptual and methodological problems in the ranking of universities by bibliometric methods. Scientometrics, 62(1): 133-143.
Australian Universities Quality Agency (AUQA). 2006. Report of an audit of Monash University.
Australian Education International (AEI). 2007. Switzerland considers establishing elite universities.
Bacani C. 1999. Asia's Top Universities. Retrieved August 16, 2006, from http://www.asiaweek.com/asiaweek/universities/index99.html
Bhandari N. 2006. Question of rank. Retrieved January 18, 2007, from http://ww.smh.com.au
Birchard K. 2006. A group of Canadian universities says it will boycott a popular magazine ranking.
Filinov NB, Ruchkina S. 2002. The ranking of higher education institution in Russia:
Some methodological problems. Higher Education in Europe, 27(4): 407-421.
OECD. 2006. Education policy analysis: Focus on higher education 2005-2006. Retrieved January.
Roberts D, Thomson L. 2007. Reputation management for universities: University league tables and the impact on student recruitment. Leeds, UK: The Knowledge Partnership.
Taylor P, Braddock R. n.d. International university ranking systems and the idea of university excellence. Sydney, Australia: Asia Pacific Research Institute. Retrieved January 3, 2007.
Marginson S, van der Wende M. To Rank or To Be Ranked: The Impact of Global Rankings. http://www.international.ac.uk/resources/rankings.pdf
University of Malaya 100 years. 2005, September 29. New Straits Times, p. 11.
Vaughn J. 2002. Accreditation, commercial rankings, and new approaches to assessing the quality of university research and education programmes in the United States. Higher Education in Europe, 27(4): 433-441.