
Measuring quality, measuring difference: International rankings of higher education

ProQuest Dissertations and Theses, 2011
Author: Peter Gregory Ghazarian
International ranking systems offer higher education institutions (HEIs) an opportunity to establish a global reputation. However, seeking that recognition comes at a significant cost. By focusing on particular indicators in the ranking systems, HEIs and governments may neglect other aspects of higher education. When choosing certain indicators over others, policymakers are confronted with an opportunity cost when allocating resources to improve rank. The nature of this cost and the relative importance of the indicators remain unclear. This study seeks to (1) contrast the policy pressures from international rankings against regional dialogues on higher education policy, (2) determine the interaction between the ranking indicators of HEIs in Continental Europe, East Asia, and the Anglo-Saxon world, (3) reveal the relative importance of indicators as predictors for the overall rank of HEIs in these regions, (4) provide suggestions as to how HEIs could implement strategies to improve their standings in the rankings, and (5) consider how these findings from the ranking systems compare with regional trends in higher education policy dialogue.

TABLE OF CONTENTS

1 INTRODUCTION
2 LITERATURE REVIEW
    International Higher Education Rankings
    Reactions to the Rankings
    Criticism of the Rankings
        Technical and Methodological Concerns
        As Consumer Information
        The Incomparability of HEIs
        Skewed Perceptions of Higher Education
    Changing Practices
    In the Context of Higher Education Policy
3 METHODS
    Overview of the Study
    A Question of Indicators
    A Question of Policy
    Researcher Perspective
    Design of Ranking Analysis
    Design of Policy Analysis
4 RANKING ANALYSIS
    Indicator-to-Indicator Correlations
        Anglo-Saxon World
        Continental Europe
        East Asia
    Indicator and ARWU Total Score Regressions
        Anglo-Saxon World
        Continental Europe
        East Asia
    Recursive Partitioning Analysis
        Anglo-Saxon World
        Continental Europe
        East Asia
    Discussion of Findings
        Anglo-Saxon World
        Continental Europe
        East Asia
5 POLICY ANALYSIS
    Quality in the International Rankings
    Higher Education Policy Dialogue in the Anglo-Saxon World
        Economics
        Access
        Pedagogy
        Quality Assurance
        Marketization
        Internationals
    Higher Education Policy Dialogue in Continental Europe
        The Bologna Process
        Economics
        Culture
        Lifelong Learning
        HEI Independence
        Internationalization
        Quality Assurance
        Pedagogy
    Higher Education Policy Dialogue in East Asia
        Economics
        Public-Private Partnership
        Tiered Investment
        Access
        Internationalization
        Quality Assurance
        Pedagogy
    Discussion of Findings
        Teaching Quality
        University Mission
        Student Satisfaction
        Learning Outcomes
        Student Characteristics
        Staff Characteristics
        Research Facilities & Teaching Facilities
        Funding
        Curriculum
        Research Volume
        Research Quality
        Graduation Rate & Retention Rate
        Reputation
6 CONCLUSION
    The Political Influence of International Rankings in HEIs
    The Political Influence of International Rankings in Policy Dialogue
        Anglo-Saxon World
        Continental Europe
        East Asia
        Internationals & Internationalization
    Suggestions for Further Research
        Funding
        Internationalization
        Human Capital Theory
APPENDIX
REFERENCES
VITA

LIST OF TABLES

Table 1: Indicators and Weights for the ARWU & THES
Table 2: The Global Regions
Table 3: Elements of Higher Education Quality
Table 4: Anglo-Saxon R-Squares for Indicators and ARWU Total Score
Table 5: Continental European R-Squares for Indicators and ARWU Total Score
Table 6: East Asian R-Squares for Indicators and ARWU Total Score
Table 7: Elements of Higher Education Quality in International Rankings
Table 8: Comparative Elements of Higher Education Quality in International Rankings and Regional Policy Dialogue

LIST OF FIGURES

Figure 1: Input-Process-Output Framework of Quality in Higher Education
Figure 2: Revised Finnie-Usher Model of Higher Education Quality

1 INTRODUCTION

The first modern ranking of universities by US News & World Report in 1983 set into motion a new trend in higher education. Despite initial criticism, ranking systems for higher education institutions (HEIs) have blossomed. Inspired by the comparative nature of the Organization for Economic Cooperation and Development's (OECD) Programme for International Student Assessment (PISA) (Marginson 2009), HEI rankings have expanded to become international assessments. The most notable of the international rankings are the Academic Ranking of World Universities (ARWU) by Shanghai Jiao Tong University's Institute of Higher Education and the London Times Higher Education Supplement (THES). These ranking systems rely on weighted indicators of an HEI's research, students and faculty, reputation, and awards. The rankings produce their results by compiling these indicators. Although rankings are imperfect as measures of HEI quality, they have an important role in the marketization of higher education (Dill 2003; Slaughter and Rhoades 2004; Wedlin 2008). The rankings function as a higher education stock exchange. They intensify competition between HEIs by accentuating even slight differences. They inform consumers about higher education, influence institutional marketing strategies, and prevent institutional complacency by pressuring HEIs to improve their standings (Buela-Casal et al. 2007). In doing so, they play into the global trend for greater accountability in higher education (Salmi 2009) by providing comparative and publicly available data on HEIs.

The transition from national to international ranking systems results from the growth of transient populations. Between 1999 and 2007, cross-border[1] student mobility increased by 53% (UNESCO 2009). As student mobility increases, international rankings become much more relevant to higher education (Jobbins 2005). When cross-border students seek employment, unknown foreign HEIs can be referenced against more familiar institutions in the rankings. This international comparability facilitates the future mobility of an HEI's graduates. In a world of multinational corporations, domestic students are also influenced by the international prestige of the HEIs they attend. The consequences of attending a particular institution extend beyond the domain of education. HEIs produce economic and status value for their students (Marginson 2009). Contributing to this production of value, international rankings place countries, individuals, and HEIs within a global hierarchy. This view of higher education has led to the popular use of terms such as "world class" and "world-class universities" (Buela-Casal 2007, 350).

The concept of a world-class university has a relatively long, but unclear history. Despite the popularity of the term, it lacks a clear definition. Although the concept predates international rankings, the rankings increasingly define what it means to be world-class (Deem et al. 2008). In fact, the creators of the ARWU admit that a significant part of their motivation in setting up their system of ranking was to measure the gap between local Chinese HEIs and top-ranking "world-class" HEIs (Liu and Cheng 2005, 127). Based on the results of the early ARWU and THES rankings, the ranking methodologies seem to equate "world-class" with the characteristics common to the traditionally elite HEIs in the United States and United Kingdom: large, comprehensive, English-speaking, research-intensive and science-heavy (Altbach 2006). Thus, rankings as a measure of quality in higher education are more political than technical: certain interests are met at the expense of others (Skolnik 2010). This inherent bias underlies much of the criticism of ranking systems. Yet with no real competition and by merit of broad media exposure, HEI rankings have come to exert great influence over public perceptions of HEI prestige. In other words, "built-in bias... does not rob rankings of their power" and their ability to capture public attention creates pressure on the institutions they evaluate (Marginson and van der Wende 2007, 309). Public acceptance of HEI rankings, in turn, forces the hand of institutions to also accept the rankings and their measurements of HEI quality.

Ranking systems offer benefits and drawbacks for HEIs. On one hand, the rankings create a system that pressures HEIs in a particular direction. Institutions that wish to do well must focus resources on the indicators specified by the ranking systems. Rather than allowing HEIs to pick and choose statistics to publish in glossy brochures, the rankings call attention to uniform indicators. As a result, HEIs may begin to focus specifically on that which is measured to determine their rank. Through this pressure, rankings exert influence beyond their intended audience of higher education consumers and start to have sway over higher education policy.

[1] As in Li and Bray (2007), the term 'cross-border' students includes both international students and those that cross a controlled border within one country (i.e. mainland Chinese students attending HEIs in Macau or Hong Kong).

The HEI-government relationship is no longer as simple as it may have been in the past. For public officials with little time and the need for a lot of information, rankings serve as a convenient lens through which to evaluate HEIs (Dill and Soo 2005; Marginson and van der Wende 2007). This can reframe the way in which a government concerns itself with higher education by introducing a third-party evaluator of HEIs. The rankings complicate the relationships between HEIs and governments by idealizing a particular image of higher education (Deem et al. 2008) that may not be in tune with the ideal at every HEI. There can be dissonance between the reality of HEI organizational structure and culture as compared to what policymakers would like to see. Thus, the rankings can create pressure on HEIs through both higher education consumers and public policy.

On the other hand, the rankings offer the possibility of a level playing field for HEIs. New or relatively unheard-of HEIs can establish themselves by improving the defined aspects of their institutions measured by the indicators. This system could facilitate a shift away from "the old elite" (Hazelkorn 2008) by providing clear parameters for HEIs to target. Instead of resisting rankings, HEIs have transformed them into a marketing tool, and ultimately a driver for institutional reform (West 2009). Although the indicators may not be representative of HEI quality, an institution can reap the benefits of improved rank by focusing on improving them. Therein lies the attraction of these international ranking systems to HEIs; they are an opportunity to establish a global reputation. However, seeking that recognition comes at a significant cost. By focusing on particular indicators, HEIs may neglect other aspects

of their institutions in order to conform to an international standard. Even within that standard, HEIs are confronted with an opportunity cost when allocating resources to improve ranking indicators. The nature of this cost and the relative importance of the indicators remain unclear. This study seeks to (1) contrast the policy pressures from international rankings against regional dialogues on higher education policy, (2) determine the interaction between the ranking indicators of HEIs in Continental Europe, East Asia, and the Anglo-Saxon world, (3) reveal the relative importance of indicators as predictors for the overall rank of HEIs in these regions, (4) provide suggestions as to how HEIs could implement strategies to improve their standings in the rankings, and (5) consider how these findings from the ranking systems compare with regional trends in higher education policy dialogue.

2 LITERATURE REVIEW

International Higher Education Rankings

ARWU and THES attempt to bring coherency to the national ranking systems that preceded them. A lack of agreement on acceptable indicators led to the establishment of unrelated systems around the world (Usher and Savino 2007). ARWU and THES seek to establish international comparability between HEIs by standardizing the measures by which HEIs are judged. The indicators classify and measure six elements of higher education quality. As organized by Usher and Savino (2007), these include:

1. Characteristics of accepted students (i.e. secondary school performance, standardized examination scores, international and ethnic diversity of the student body, institutional selectivity);
2. Learning inputs in terms of resources and staff (i.e. faculty/student ratio, staff qualifications, scholarship availability, total expenditure);
3. Learning outputs (i.e. graduation rates, retention rates);
4. Final outcomes in the form of alumni accomplishments (i.e. employment outcomes);
5. Research (i.e. number of citations and publications, research awards); and
6. Reputation (i.e. peer appraisal, employer appraisal).

Differences across national contexts prevent many indicators from being used in international rankings. The process of broadening indicators to international comparability has occurred differently for each international ranking system.

The ARWU and THES share research-intensive universities as the ideal model for their rankings because of their international recognition among HEIs (Marginson and van der Wende 2007). Yet the amount of value each of these ranking systems ascribes to particular aspects of that model differs considerably. The ARWU focuses extensively on research output because of the international comparability and verifiability of the data (Liu and Cheng 2005). THES rankings rely, to a large degree, on employer opinions and peer appraisal. The process of compiling an indicator of reputation involves collecting voluntarily submitted survey data from academics, academic administrators, and employers. The exact breakdown of these ranking systems' indicators and their weights is listed in Table 1.

Table 1: Indicators and Weights for the ARWU & THES

ARWU Indicators                                                      Weight
  Alumni Nobel Prizes and Fields Medals                                 10%
  Faculty Nobel Prizes and Fields Medals                                20%
  Highly-cited Researchers                                              20%
  Papers Published in Nature & Science                                  20%
  Papers Indexed in the Science Citation Index-expanded
    and the Social Science Citation Index                               20%
  Per-capita Performance                                                10%

THES Indicators                                                      Weight
  Peer Review of HEI Reputation                                         40%
  International Reputation among Recruiters                             10%
  Citations in Thomson's Scientific Database (2004-6)
    or Scopus (2007-9) per Faculty Member                               20%
  Student-to-faculty Ratio                                              20%
  Number of International Students                                       5%
  Number of International Faculty                                        5%
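The compilation both systems describe is, at bottom, a weighted sum: each indicator is normalized to a common scale, multiplied by its published weight, and summed into a total score. A minimal sketch of that arithmetic follows; the weights are those published for the ARWU in Table 1, while the indicator scores and the example institution are invented purely for illustration.

```python
# Sketch of the weighted-indicator aggregation described above.
# Weights are the published ARWU weights from Table 1; the indicator
# scores (normalized 0-100) and the example institution are invented.

ARWU_WEIGHTS = {
    "alumni_awards":  0.10,  # Alumni Nobel Prizes and Fields Medals
    "faculty_awards": 0.20,  # Faculty Nobel Prizes and Fields Medals
    "highly_cited":   0.20,  # Highly-cited Researchers
    "nature_science": 0.20,  # Papers published in Nature & Science
    "indexed_papers": 0.20,  # SCI-expanded and SSCI indexed papers
    "per_capita":     0.10,  # Per-capita performance
}

def total_score(indicators: dict, weights: dict) -> float:
    """Weighted sum of normalized (0-100) indicator scores."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return sum(weights[k] * indicators[k] for k in weights)

# An invented institution: strong on publication volume, weak on prizes.
example = {
    "alumni_awards": 40.0, "faculty_awards": 55.0, "highly_cited": 60.0,
    "nature_science": 50.0, "indexed_papers": 70.0, "per_capita": 45.0,
}
print(round(total_score(example, ARWU_WEIGHTS), 1))  # 55.5
```

One consequence of this linear form is that shifting a point of weight from one indicator to another changes an institution's total by the weight shift times the difference between the two indicator scores, which is why even modest reweighting by compilers can reorder closely ranked HEIs.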

While research makes up 60% of the ARWU rankings, it is a mere 20% in THES. Meanwhile, THES assigns 50% of its weight to survey data from "Peer Review" and Employer Perceptions of HEI Reputation. ARWU's heavy concentration on research indicators comes at the cost of not considering other aspects of higher education quality. Meanwhile, THES assumes impartial and informed employers and academics can provide meaningful information about HEI quality (Usher and Savino 2007). These issues only begin to touch on the criticism of the ranking systems and provide insight into the nature of the rankings. With the selection of indicators and the assignment of their respective weights, bias enters the equation. This bias may reflect the status quo of a particular cultural view of HEIs or simply the idiosyncratic views of the ranking compilers. For instance, in order to earn public acceptance, newly founded ranking systems might work with indicators to ensure that traditionally prestigious HEIs rank well. With no natural default for what ought to be measured, rankings can and do reflect particular perspectives on higher education quality.

Reactions to the Rankings

Initially, HEIs resisted the ranking systems. Examples of such resistance were quite common throughout the world. In the 1990s, Japanese HEIs resisted diversification from the single indicator of standardized test score selectivity; the HEIs refused to release information that would make other indicators a viable option (Yonezawa, Nakatsui, and Kobayashi 2002). National newspapers in the country wanted to build up ranking systems

based on measurements of faculty, students, and learning at the HEIs. However, the HEIs refused to share information and prevented compilers from including those indicators. The Japanese HEIs were not alone in their resistance to scrutiny. In China, HEIs also attempted to obstruct the publication of rankings. A number of HEIs actively sought government intervention to stop the perceived threat of budding national ranking systems (Wang 2009). The institutions directly appealed to the government to censure the rankings on the grounds of defamation. In both of these cases, HEIs actively resisted external evaluation.

In other contexts, early rankings encountered a different kind of resistance. In the United Kingdom and United States, the opacity of early methodologies and their lack of coherency led academics to dismiss the rankings. Bowden (2000, 58) clearly expresses this attitude, stating that:

    until [rankings provide personalized program-by-program results], league tables really ought to be regarded purely as a source of "infotainment"; that is, as they stand, although they do supply us with a certain amount of "information" (albeit limited) about universities, they also provide those both within and outside of higher education with an unquestionable amount of "entertainment."

The early ranking systems' flawed methodologies left them open to ridicule. Yet academics made the easy mistake of assuming that because the rankings were unsound, they were harmless. Rather than argue against the publication of the early rankings, academics more often simply ignored them.

Such resistance and avoidance, however, had little effect on public opinion of the rankings. The combination of increasingly competitive admissions and rising tuition fees fueled public demand for information on HEIs. The lack of any strong, publicized and organized resistance to rankings over time secured them a sense of public credibility (Marginson and van der Wende 2007).

Criticism of the Rankings

The rise of rankings of HEIs has been accompanied by a considerable amount of criticism on the part of academics. Even the THES website concedes it is "rather crude to reduce universities to a single number" (Times Higher Education 2010). The act of ranking HEIs depends on potentially meaningless differences. That the different numerical values assigned to universities must in most cases be statistically insignificant raises serious questions about the validity of any system attempting to "rank" HEIs (Buela-Casal et al. 2007). Criticisms of HEI ranking systems can be classified into "three categories of concern," consisting of "technical and methodological process ... usefulness of the results as consumer information [and] comparability of complex institutions with different goals and missions" (Hazelkorn 2007, 90-91). Additionally, it is important to consider a fourth, emerging concern in the literature: the alarm over the consequences of ranking systems on HEIs and higher education governance.

Technical and Methodological Concerns. Criticism of technical and methodological process focuses on the choice of indicators, the assignment of their

weights, and methodological opacity. Critics warn that compilers wield too much control over the results when establishing a methodology for a ranking system. A common criticism focuses on whether the weights assigned to particular indicators are justified or are assigned arbitrarily (Buela-Casal et al. 2007; Usher and Savino 2007). The weighting of indicators remains a delicate issue, as even slight changes to indicator weights could alter the ranking results. Weights provide an easy route by which the bias of ranking compilers can significantly influence HEI showings. Compilers possess similar power over the choice of indicators included in the rankings. Another common argument concentrates on this control; specifically, concern emerges over the possible decision to include weak or biased indicators (Marginson 2009). For instance, there are more Nobel Laureates and Fields Medalists in the United States and United Kingdom than in all other countries combined (BBC News 2010). Thus, the inclusion of Nobel Prizes and Fields Medals within the ARWU might be considered a biased indicator. The process of assigning weights to indicators and then choosing which indicators to measure reveals how the views of the compilers impose upon the final results of the ranking systems.

Other technical criticism focuses on the absence of potentially significant indicators. Despite their methodological differences, the international rankings tend to produce a similar list of the very top HEIs. Usher and Savino (2007) suggest the possibility of a "lurking indicator" that would account for the considerable variation beyond 9th place across the various rankings. These critics argue that the differences in the rankings past the 9th place are the result of an essential, but missing indicator. They suggest that if such an

indicator could be isolated and included within the rankings, it would reconcile the differences between their results. Another problem emerges out of the possible interaction between rank and indicators over time. Researchers have also called attention to the cyclical nature of ranking systems that attempt to measure HEI reputation. Guarino, Ridgeway, Chun, and Buddin (2005) argue that if reputation is included as an indicator (as in THES), the rankings will only perpetuate the dominance of a select number of schools at the top by building upon the very reputations they are intended to measure. These criticisms highlight methodological weaknesses in the ranking systems that still may need to be addressed.

Concerns also emerge over lapses in the quality of data collection. The processes that make up the collection of original survey data are of particular interest. Marginson (2009) points out that the 1% response rate by experts contacted to provide an opinion of HEIs for the THES peer review indicator raises serious methodological concerns. The sample of survey respondents is highly unlikely to be representative of the entire academic community. Consequently, the reputation data compiled for the indicator would probably not reflect the actual views of the academic community.

As Consumer Information. Some critics move beyond the process by which rankings are compiled and attack the stated goal of ranking systems as a means of informing consumers of higher education. For instance, the emphasis on quality of research over quality of teaching in rankings (Buela-Casal et al. 2007) indicates that rankings may not be the best source of information on HEIs for those planning to attend

undergraduate programs. The exact nature of the relationship between quality in research and quality in teaching remains a topic of debate. Healey (2005) suggests that linking quality research to results in student learning requires certain abilities on the part of a lecturer. That is to say, simply because a lecturer performs well as a researcher does not necessarily mean she/he is an effective teacher. Thus, critics argue that indicators directly measuring teaching quality and the ability to translate faculty research into learning for students would be more suitable for informing consumers of higher education. Such an approach would focus on the services that an HEI provides to the student rather than on the production of research.

Further criticism emerges from the disconnection between the perspectives of ranking compilers and the consumers of rankings. Cremonini, Westerheijden, and Enders (2008) argue that the rankings have limited usefulness to student-consumers because of differences in attitudes and values. A university that ranks well primarily as a result of its strong business program may be meaningless to a future student of the natural sciences. Similarly, cross-border students may find that schools that rank high lack the cultural readiness to successfully host students from other cultures and cannot provide them with the same level of opportunity provided to domestic students (Agnew and VanBalkom 2009).

The Incomparability of HEIs. Not all HEIs operate around the same organizational structure and therefore cannot easily be compared. According to the cultures they serve, institutions often differ in their approach and organization. Alternative models to the traditional university include, but are not limited to, small and

specialized HEIs, polytechnic institutes, and liberal arts colleges (Kyvik 2004). These alternative approaches to higher education are not without merit. Thus, many critics of higher education rankings argue that these HEIs should not be penalized for their diversity of approach. However, in forcing all HEIs into a comparable set of indicators drawn from one idealized model of higher education, international rankings do just that.

Skewed Perceptions of Higher Education. A final set of concerns that merits consideration deals with the unintended consequences of rankings. The political force of rankings extends far beyond higher education consumers. In many cases, the rankings are a force for change in higher education. This influence emerges as international rankings expand beyond their original intentions and acquire more meaning than the indicators used actually represent (Higher Education Funding Council for England, HEFCE 2008). This can have serious consequences for HEIs, because of the subsequent transformation of their goals, changes in the way in which they are managed, and the impact on those institutions that fail to perform according to those indicators outlined by the rankings. Changes to the goals of HEIs have significant impact on the academic community and its relationship to greater society. The competitive, hierarchical nature of rankings interferes with the role of HEIs as the source of "open source knowledge production" in the knowledge economy (Marginson 2009, 11). Pitting HEIs against one another in a zero-sum game can disrupt the efficient allocation of limited resources, actually impairing HEI productivity. If HEIs focus too heavily on increasing volume of publications and per-capita output, difficult but important long-term studies may be

ignored in favor of projects seen to have a quicker return. In their increasing concern for status, HEIs risk drifting away from their primary research and teaching goals. Meanwhile, important regional and local institutions often suffer as a result of international rankings. The rankings can draw attention to national champions engaged in the "world-class" competition and consequently drain resources away from the excluded institutes. Hazelkorn (2009, 19) draws attention to the fact that unranked HEIs may be "ignored, marginalized or by-passed," ultimately suffering as a result of the publication of rankings. By focusing excessively on the excellence of particular institutions, the rankings draw attention away from an effort to build a strong system of higher education. In funneling resources to those HEIs at the top, the rankings could ultimately be to the detriment of education for greater society. Regardless, the onslaught of criticism of ranking systems has done much to shape their reform. Even so, compilers often have not been able to address the root of all these concerns.

Changing Practices

In response to this growing policy influence, a number of academics began to focus on ranking systems and their methodologies. The increasing involvement of the academic community has served as one of the driving factors behind their reform. From the Warsaw International Meeting on Ranking in 2002 to the Institute for Higher Education Policy and UNESCO European Centre for Higher Education meeting in Washington, D.C. in 2004, scholars devoted to ranking systems made great strides in improving the systems' methodological rigor and establishing greater transparency (Merisotis and Sadlak 2005). A push towards clear methodological processes followed these

conferences. For instance, previously opaque and unknown methodological processes gave way to publicly available information on ranking methodologies. Both the ARWU and THES websites clearly define the indicators used, their relative weight, and the processes by which they are collected. A second push for the improvement of ranking systems has concentrated on their divorce from private interests. Responsibility for compiling the rankings has moved away from mass media publications towards independent, non-profit research centers. This shift reflects the growing significance of rankings as an instrument of HEI evaluation (Buela-Casal et al. 2007) and an attempt to establish their credibility. Though the fundamental issue of design bias remains, changes have addressed concerns over methodology. Compilers of the rankings have opened the inner workings of their methodologies to the public and have placed control over rankings into the hands of independent organizations.

In the Context of Higher Education Policy

As in other realms of public policy, having a flawed account of HEI quality is ultimately seen as better than having nothing at all (Sadlak et al. 2008). Flawed policy instruments can be refined and perfected over time. As Wang (2009) describes in the case of China, ranking systems have the potential to hold HEIs accountable to society by forcing them to be evaluated against their peers. The rankings allow consumers, policymakers, and academics to perceive higher education beyond just word of mouth and personal experience. Although the systems are not perfect, they are an improvement in the sense that they synthesize information on an issue of public concern.

In East Asia, their continued publication has earned the public's attention and has given rankings influence over national policymakers and HEIs. In spite of the early resistance of HEIs to the emergence of the ranking systems, their popularity with parents and students has pressured academia to accept them (Van Dyke 2005). As the rankings compile and report data back to the public, the systems begin to shape public opinion. This process often translates into policy influence as public perceptions of HEIs and HEI policy extend beyond students and parents and into the policymaking arena (Sadlak et al. 2008). From the perspective of East Asian governments, the rankings exert pressure on higher education policy. This can result in direct pressure from governments on HEIs. As one Japanese HEI leader explains, "The government wants a first class university for international prestige... Rankings are becoming important to present Japan attractively and getting good students and good workers as the population declines. That's the government's motivation" (Hazelkorn 2009, 12). Higher education policy fits within the broader framework of public policy plans for the future of a country. Governments cannot afford a weak showing, as it would bode poorly for national reputation, elected policymakers, and economic forecasts. As a result of downward pressure, East Asian HEIs have begun transforming higher education at the institutional level. That effort has included developing English-language international programs and pressing for changes to the traditional role of the professor. In Japan, these rankings-driven initiatives have taken the form of financial incentives to professors for research and an emphasis on internationalizing higher

Full document contains 152 pages