Abstract
The major aim of this contribution is to summarize the theoretical underpinnings of the most popular rankings, as they were identified within the discourse on “world class universities” during the first decade of the 21st century. This period is chosen because the discourse thereafter did not elicit further key issues to be discussed. Additionally, other dimensions of the critique will be briefly touched upon, notably methodological issues. In this framework, attention will also be paid to the handling of the humanities and social sciences within such rankings. Finally, the question will be raised where we stand now, some years after this period of heated controversies about the rankings of “world class universities”: Are rankings “here to stay” and thus likely to go on more or less unchanged by the fundamental critique of their specific theoretical underpinnings, often called “ideological underpinnings”? Do we observe changes in the ranking mainstream? Can we expect new accents of higher education policy which will successfully call for another “map” of the higher education system?
Introduction
Rankings of “world class universities” became a major issue of higher education policy debates worldwide in the first decade of the 21st century. University rankings had already played a role and been a theme of debate in various countries over a long period. These ranking discourses can be viewed as part of a wider range of widely spread discourses about the quantitative-structural configuration of higher education, notably about the extent as well as the modes of its diversity. Since about the 1980s, increasing attention has been paid in many countries to the top of the growing national higher education systems – to the top of institutional aggregates such as individual institutions of higher education or their sub-units – and to ranking lists compiled on the basis of available or easily accessible quantitative data. More recently, worldwide rankings of the top universities moved into the limelight of the search for the “highest” and the “best”. We might argue that the discourse about noteworthy diversity within higher education became increasingly “vertical”, “top-heavy”, strongly based on seemingly accurate quantitative information, and international. Also, interest grew in measuring vertical differences in higher education no longer on the basis of broad classifications, which often were formally consolidated in laws or other formal regulations, but rather informally, with the intention of presenting even tiny vertical differences as enormously important.
Reports naming the specific positions of individual universities within lists of top universities – nationally or worldwide – look like accurate presentations of detailed data. This type of information is so tempting for many observers that they take the appropriateness of the concepts and the validity of the data for granted. A close look, however, reveals that rankings are driven by very specific conceptual frameworks, which are bound to lead to enormous normative controversies if readers are not overwhelmed by the beauty of the data and by the charm of simple claims as regards their validity.
The major aim of this contribution is to summarize the theoretical underpinnings of the most popular rankings, as they were identified within the discourse on “world class universities” during the first decade of the 21st century. This period is chosen because the discourse thereafter did not elicit further key issues to be discussed. Additionally, other dimensions of the critique will be briefly touched upon, notably methodological issues. In this framework, attention will also be paid to the handling of the humanities and social sciences within such rankings.
Finally, the question will be raised where we stand now, some years after this period of heated controversies about the rankings of “world class universities”: Are rankings “here to stay” and thus likely to go on more or less unchanged by the fundamental critique of their specific theoretical underpinnings, often called “ideological underpinnings”? Do we observe changes in the ranking mainstream? Can we expect new accents of higher education policy which will successfully call for another “map” of the higher education system?
The discourse about the theoretical underpinnings of rankings will not be presented here in the manner of a typical academic article, pointing out the individual authors who raised an issue first or most prominently. Rather, the major arguments will be summarized, and a list of major comprehensive publications will be presented.
Contexts and predecessors
Long traditions of reputational distinctions
Underscoring the quality and the reputation of individual universities or of their disciplinary units and programmes has a long tradition. Paying more attention to “vertical” differences – more or less, higher or lower, better or worse – than to “horizontal” differences – substantive profiles, schools of academic thought, areas of specialization, etc. – seems to have been taken for granted everywhere.
In some countries, it was taken for granted that the reputation of individual universities differs substantially. Names of individual institutions – or just the name of the hosting city – such as “Oxford” and “Cambridge”, “Harvard” and “Chicago”, “Tokyo” and “Kyoto” were dropped on the assumption that rumors about quality were quite valid. In other countries, where, in contrast, a similar quality of all major universities was appreciated, the reputation of some individual institutions tended to be underscored as well: fairly ancient universities, such as the University of Heidelberg, the University of Leiden or the University of Uppsala, or fairly large universities in capital cities, such as the University of Paris Sorbonne or the University of Vienna.
Early rankings in highly stratified systems
Ranking studies – i.e. vertically sorted lists of individual institutions of higher education or of their sub-units or programmes based on detailed quantitative measurement of quality, reputation, etc. – first surfaced in the United States in the 1920s. Additionally, scholastic ability testing of high school students was introduced in the U.S. in the 1930s, which enabled the most prestigious universities to become more demanding in their admission policies and which led to rankings of universities according to selectivity at entry. After World War II, ranking of universities according to the individual disciplines of their graduate programmes became the most popular mode of university ranking. Additionally, an institutional classification became popular in the U.S.: research universities I, research universities II, comprehensive universities, four-year colleges, etc. Altogether, various analyses came to the conclusion that reputational differences between universities and their departments were important in the minds of only about a quarter of American students, but they played a role for the whole system of doctoral training, academic careers and research promotion.
Ranking studies also developed at a relatively early stage in Japan. Over the years, several hundred rankings surfaced annually – published each year in a book by the most highly reputed newspaper in Japan – across an incredible variety of criteria and methods of data collection. Rankings of universities according to the difficulty of being admitted and according to the most promising employment immediately after graduation became the most popular, whereby the analyses did not just address the top or the top half, but all institutions, and the rankings were conceived as a crucial basis of information and guidance for all high school students in Japan.
In contrast, ranking studies remained rare or non-existent in countries in which vertical differences between universities in terms of quality and reputation were considered marginal. The Federal Republic of Germany often was named as an example. The principle of striving for a more or less equal level of quality at all universities was reinforced by the fact that those who had successfully passed the final exam of academic secondary education were entitled to enroll at any German university in almost any field of study, and they could switch from one university to another at any time during the course of study. Vertical differences between individual universities also were kept in bounds in Germany by the reinforcement of inter-institutional mobility of scholars, e.g. by regulations that the first employment in a professorial position had to be at a university different from that of the preceding junior academic position, and by the fact that professors could be promoted or could get an increased salary only if they received a “call” from another university. Thus, it does not come as a surprise that superficial ranking studies did not surface in Germany until the 1970s, and that systematic ranking studies were undertaken only from the late 1980s onwards.
Efforts to diversify higher education in the 1960s and 1970s
During the 1960s and 1970s, the pattern of the higher education system – usually understood as a national higher education system – became a key issue of higher education policy in many economically advanced countries. Expansion of higher education – notably in terms of a substantial increase of student enrolment – reinforced the view that higher education is likely to and has to become more diverse. The concept of “elite”, “mass” and “universal” higher education presented by the U.S. higher education researcher Martin Trow was named most often in this debate as a possibly appropriate conceptual framework for the future development of higher education. Accordingly, an emergence of “mass higher education” as a second sector of higher education was to be expected in each country when the enrolment rate in higher education reached about 15 percent of the respective age group: Mass higher education could serve the different motives, talents and career perspectives of the additional students in a more targeted way and concurrently could help preserve the characteristics of “elite” higher education for the traditional students and for preferential conditions of research. Similarly, “universal higher education” could emerge in the future, when the enrolment rate reached 50 percent.
The discourse about the need for diversification in the process of expansion of higher education focused – as the ranking discourse would some decades later – on “vertical” diversity, i.e. on distinctions between “higher” and “lower” quality. But in contrast to the ranking discourse, the diversification discourse emphasized the need to pay attention to the lower ranks of the higher education system: how they differ or ought to differ from the top in order to serve the academic system and society in the best possible way. Terms such as “non-university higher education” and “tertiary education” spread. Most countries strove for a formal diversification of types of programmes or institutions, i.e. one manifested through laws or other regulations – in contrast to the informal diversification through the subsequent rankings published by individual media or individual scholars. Various countries opted for the establishment of a second sector of higher education institutions – British “polytechnics” and German “Fachhochschulen” often were named as examples – whereby these institutions were expected to differ from traditional universities not only “vertically”, but also “horizontally”, for example through a stronger emphasis placed on “applied” research and on “applied” or “practice-oriented” study programmes. In other countries, levels of study programmes – e.g. master, bachelor or even shorter programmes – were viewed as key elements of diversification. When most European countries eventually agreed in the Bologna Declaration of 1999 to strive for a “convergent” structure of higher education, similar levels of study programmes and degrees were underscored as the major formal dimension of diversity, but individual countries remained free to additionally keep differences between types of higher education institutions.
Increasing interest in quality differences at the top from the 1980s onwards
During the 1980s, interest in vertical differences within higher education began to play a strong role also in those countries in which a steep reputational hierarchy among universities had not traditionally existed. In some countries, concern spread that some universities looked weak compared to the mainstream of universities. In other countries, attention was paid primarily to the excellence of the top. Ranking studies of various kinds were undertaken and stirred up the national higher education policy discourse.
This happened at a time when increasing attention was paid to the quality of research in higher education and to the role universities play in stimulating technological innovation and economic growth, and in a period when the international scene was increasingly interpreted as being shaped by competition between political systems and world regions. Moreover, activities to evaluate teaching and research in higher education spread concurrently in many countries.
The emergence of rankings of “world class universities”
During the 1990s and the early years of the 21st century, ranking studies became more sophisticated. More importantly, attention shifted toward ranking studies which left the traditional national frame and presented worldwide reputational hierarchies. The rankings of U.S. News and World Report as well as the Times Higher Education Supplement-QS World University Ranking paved the way. And university rankings eventually became a key theme of higher education policy when the “Academic Ranking of World Universities” by Shanghai Jiao Tong University was published from 2003 onwards. Around 2010, several dozen international rankings could be noted.
It was the time when internationalisation of higher education became a key issue of higher education policy in various respects. Concurrently, the managerial power of university leadership was substantially strengthened in many countries, combined with the expectation that individual universities were free to and ought to pursue specific strategies. Finally, competition between countries, between individual institutions and between individual scholars was increasingly advocated through changes of the regulatory system.
Common theoretical underpinnings of rankings
The three major rankings of “world class universities” named above and many similar rankings differ as far as the detailed criteria of quality and reputation as well as the methods of data collection and measurement are concerned. The arguments of most proponents and most critics, however, suggest that a certain mainstream of rankings has emerged based on common or at least similar conceptual frameworks. Seven such major theoretical underpinnings are most obvious and will be explained subsequently.
The importance of the top
First, rankings direct our attention to the top of national higher education systems or to the top of higher education worldwide. “League tables” of 500 “world class universities” comprise possibly less than five percent of higher education worldwide. In contrast to the above-named “diversity” discourse, according to which the characteristics of all sectors and levels are important, only a small segment of the higher education system seems to matter in the world of university rankings. Terms such as “elite universities”, “excellent universities”, “top universities”, “high-quality universities” or “world class universities” seem to suggest that there is a relatively clear distinction between the top and the not-so-important “rest”, and that just the top deserves attention – even though the methodology of rankings merely describes positions without underscoring distinctions between positions or clear borderlines.
The importance of the top is underscored in many explanations and justifications of rankings. Attention paid notably to the top might be viewed as typical for higher education in the “achievement society”: if everybody in academia strives for the highest possible quality, one certainly likes to be informed about the settings most conducive to success. Ranking proponents, however, most frequently justify the primary attention to the top by referring to the impact of academia: the generation, preservation and dissemination of systematic knowledge in higher education is viewed as increasingly important for the “knowledge society”, i.e. the society substantially shaped in many respects by systematic knowledge, or for the “knowledge economy”, i.e. the economy in which growth is increasingly dependent on systematic knowledge. Accordingly, top knowledge is viewed as being crucial for a desirable society. This “elite knowledge society” view is in contrast to “mass knowledge society” views, according to which the enormous growth of enrolment – to more than half of the age group in many countries – leads to the most relevant changes of society: to a society driven by the “wisdom of the many”.
The importance of vertical distinctions in a steeply diversified system
Second, rankings take for granted that “vertical” distinctions in the higher education system are very important and therefore have to be measured. We should know whether a university is “high” and not “low” according to certain criteria, and correspondingly is “excellent” and not “ordinary” or even “good” and not “bad”.
The major rankings address several dimensions of activities or successes, but these measures can be aggregated to a single score or a single rank, because the aim is not to underscore “horizontal” distinctions, i.e. substantive distinctions which are not “high” or “low” per se. No emphasis is placed in the most popular rankings on the distinction between a “research university” and a “teaching university”. No attention is paid to specific characteristics and institutional “profiles”, e.g. specific support for disadvantaged students, a regional or international emphasis, theory vs. application, or traditional high-tech vs. “sustainable development”.
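The aggregation logic can be illustrated with a minimal sketch in Python. The indicator names, weights and values below are hypothetical and do not reproduce any actual ranking's methodology; the point is only that a weighted sum collapses multi-dimensional institutional profiles into a single vertical order:

```python
# Minimal sketch of how a composite ranking score can be formed.
# Indicator names, weights, and values are hypothetical; real rankings
# each use their own indicators and weighting schemes.

INDICATOR_WEIGHTS = {
    "research_output": 0.40,    # e.g. publication counts
    "citations": 0.30,          # e.g. normalized citation impact
    "reputation_survey": 0.20,  # e.g. peer reputation score
    "faculty_awards": 0.10,     # e.g. major prizes won by staff
}

def composite_score(indicators: dict[str, float]) -> float:
    """Aggregate several 0-100 indicator values into one weighted score."""
    return sum(INDICATOR_WEIGHTS[name] * value for name, value in indicators.items())

universities = {
    "University A": {"research_output": 90, "citations": 85, "reputation_survey": 95, "faculty_awards": 70},
    "University B": {"research_output": 75, "citations": 92, "reputation_survey": 60, "faculty_awards": 40},
}

# Sorting by the single aggregated score produces the familiar "league table":
# all horizontal differences between profiles disappear into one vertical order.
for rank, (name, score) in enumerate(
        sorted(((n, composite_score(i)) for n, i in universities.items()),
               key=lambda pair: pair[1], reverse=True), start=1):
    print(f"{rank}. {name}: {score:.1f}")
```

Whatever the weights chosen, the sorted output suppresses all horizontal information about institutional profiles.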
Most interpretations of rankings by their proponents and advocates suggest that a “highly” or “steeply” stratified system of higher education is most desirable. It is taken for granted and considered desirable that the publication of rankings will evoke an increasing allocation of funds to the top universities and thus an increasing inequality of resources between higher education institutions.
The university or its academic units as crucial units of performance
Third, when attention in the ranking discourse moved to the worldwide arena around the turn of the century, most ranking lists put the university, i.e. the institution as a whole, into the limelight. In previous decades, countries emphasizing vertical diversity of higher education had experienced in their national ranking discourses that frequent references were made to the exceptionality of Oxford University, Harvard University, Tokyo University, etc., but attention tended to be paid primarily to institutional sub-units: for example, whether the doctoral programmes in physics were most excellent at Harvard, or whether admission to higher education in chemistry was most demanding for students eventually studying at Tokyo University.
Irrespective of this distinction: ranking is institutional ranking – either of the whole institution of higher education or of its sub-units. It is not a ranking of a country, not a ranking of a discipline (at most of a discipline-based unit or programme), and it is not a ranking of individual scholars.
Rankings of whole universities are based on the conviction that there is a substantial “cross-fertilization” among the departments of a university: A university assembling highly talented academics in the area of physics, for example, is most likely to be academically highly productive in psychology as well. Critics of university ranking, in contrast, argue that universities as a rule are highly heterogeneous institutions with marginal “cross-fertilization” across departments. And rankings of departments similarly are based on the belief that individual scholars have a strong impact on the academic quality of their local colleagues.
The concentration paradigm
Fourth, the core concept of ranking studies is the assumption that the quality of academic “performance” is highest if the “best” academics within a country – or worldwide – are concentrated in a few select institutional units – in a few sub-units of universities or in a few universities. Both proponents and critics of rankings take for granted that those universities or sub-units which belong to a highly stratified national higher education system, and in which the most talented scholars try to be located, are likely to be highly positioned in a national or international “league table”. But proponents of rankings do not consider this a sufficient justification for league tables: they do not just want to say “the winner takes it all”, and they expect rankings not merely to list the winners. Rather, proponents of rankings as a rule argue that national higher education systems as a whole are more successful if they concentrate their academic talents within a few institutions. For example, the fact that a sizeable number of highly ranked universities in the world are U.S. universities is often interpreted as an indicator that the U.S. surpasses other countries academically in general.
This concentration paradigm is based on the previously named assumption that individual academics are more likely to be academically successful if they are surrounded in their academic unit by scholars of a high academic caliber: “cross-fertilization” through regular physical vicinity and its consequences for communication and cooperation among talented colleagues is taken for granted. But it goes a step further in claiming that the location of the best in a few institutions leads to a higher quality of the system as a whole.
Critics of ranking challenge these assumptions. They underscore that communication and cooperation among academics located at different institutions often is considered to be more important than communication and cooperation with local colleagues: We hear that “invisible colleges” and “virtual vicinity” can be highly influential. Additionally, critics point out that research on higher education often has provided evidence of the creativity of intra-institutional diversity: Cooperation among scholars of different profiles and different levels of quality often works, and research on student performance often has shown that the diversity of students can be stimulating – thus justifying policies of “diversity management” within higher education institutions.
Finally, and most importantly, available information on the academic productivity of countries calls into question the assumption that national higher education systems are academically more successful if most of their talents are highly concentrated at the top of a steep institutional hierarchy. For example, according to reports regularly published by the European Commission on the state of national higher education and research systems, the “academic productivity” of a country, measured for example per 100,000 scholars or per one million inhabitants, seems to be somewhat higher in only moderately stratified higher education systems (e.g. the Netherlands, Sweden, Switzerland, etc.) than in somewhat more steeply stratified systems (e.g. the United Kingdom) and clearly higher than in very steeply stratified systems (e.g. the United States and Japan).
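The normalization at stake can be made concrete with a small sketch; the figures below are invented placeholders, not the Commission's data:

```python
# Sketch of per-capita "academic productivity" normalization.
# All figures are invented placeholders for illustration only.

country_data = {
    #             publications, population (millions)
    "Country X": (45_000, 17.5),    # moderately stratified system
    "Country Y": (420_000, 330.0),  # steeply stratified system
}

for country, (publications, population_millions) in country_data.items():
    per_million = publications / population_millions
    print(f"{country}: {per_million:,.0f} publications per million inhabitants")

# A country hosting many top-ranked universities can still show a lower
# per-capita productivity than a flatter system once size is controlled for.
```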
The worldwide arena as a reference
Fifth, as already mentioned, while most rankings up to the 1990s had a national focus, attention has been paid since about the turn of the century notably to worldwide rankings. Experts point out that “internationalisation” of higher education became more visible in many respects in economically advanced countries from about the 1980s and 1990s onwards. Thus, it does not come as a surprise that ranking proponents increasingly perceive the top universities as considering themselves to compete no longer primarily with other universities of the same country for academic success, but rather with other top universities all over the world.
It has to be added here that national ranking studies seem to continue and even to grow alongside the emergence of international rankings. The increasing perception of higher education as an arena of worldwide competition for academic success, thus, has not limited the opportunities for universities to identify themselves on national competition maps. Moreover, the global rankings tend to be described and interpreted with national or even nationalistic undercurrents: on which positions are “our” universities? In comparison to which countries are “we” academically superior or inferior?
Rankings as drivers of competition
Sixth, the proponents of rankings, as a rule, consider the rankings not merely as a tool for information and transparency. Rather, rankings are viewed by their proponents as instruments reinforcing competition among scholars, institutions of higher education and countries.
There is a widespread belief that rankings help to increase “healthy competition”. Competition for enhancing one's position in rankings seems to be part of a “virtuous circle”: achievement in higher education will altogether grow if efforts grow to enhance the position of one's university in rankings.
The critics, instead, are convinced that rankings result in a “circulus vitiosus”: “over-competition” might be caused, extrinsic motivation is likely to grow, utilitarian behavior will spread through seeking success according to the measures employed in ranking studies, the inclination to cheat might grow, and altogether insufficient attention is paid to the goals of enhancing creativity in teaching and research.
Moreover, the international competition among universities for the highest possible reputation often is viewed as not really being meritocratic, i.e. as underscoring the really best ones. Many academics in countries with a pronounced reputational hierarchy of universities do not believe that even impressive efforts of their own suffice to be academically successful; rather, they view the support of the “halo” effect of a famous university as indispensable: you must be “beatified” by a famous university to become a famous scholar! Finally, the fame of a university is not clearly based on its current quality; rather, myths and imperfections of communication ensure that the reputation of a university persists and can be “sold” successfully for a long time, even though the quality has already gradually faded.
Rankings as instruments of increasing vertical diversity
Seventh, the information provided by rankings is expected to have an impact not only on academics' attitudes and activities, but also on the overall institutional configuration of higher education. Increased competition to be at the top and to strengthen the top is expected to lead to an overall increase of vertical distinctions between institutions of higher education.
Views might vary as to whether increasing competition in higher education as such is likely to lead to a steeper or flatter institutional hierarchy. But we note that decisions have been made in many countries to provide more generous funds to top universities in order to increase their opportunity of excelling in worldwide rankings. In such cases, rankings have stimulated policies of a targeted steepening of vertical institutional diversity. And, obviously, as already pointed out, most proponents of rankings believe that a steepening of quality differences between universities is a desirable impact of rankings.
Data-related rationales and biases of rankings
Most ranking studies rely on easily accessible and easily quantifiable data. They are based on the conviction that such “indicators” offer fairly valid information about the character of higher education. Altogether, research on science has shown that each indicator employed in the assessment of the vertical dimensions of higher education has certain weaknesses, but that the various indicators correlate positively with each other. Thus, a relatively long list of indicators tends to be viewed as ensuring a satisfactory level of validity in pinpointing the quality of higher education institutions.
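This claim can be examined with standard statistical tools. The following sketch, using fabricated data, shows how several indicators can correlate positively simply because they proxy the same latent dimension – a long indicator list may then look “valid” while still measuring only one thing:

```python
# Sketch: checking how strongly hypothetical quality indicators correlate.
# The data matrix is invented for illustration; rows are universities,
# columns are indicators (e.g. publications, citations, funding).

import numpy as np

rng = np.random.default_rng(0)
quality = rng.normal(size=50)                  # latent "quality" per university
indicators = np.column_stack([
    quality + rng.normal(scale=0.5, size=50),  # indicator 1: noisy proxy
    quality + rng.normal(scale=0.8, size=50),  # indicator 2: noisier proxy
    rng.normal(size=50),                       # indicator 3: unrelated measure
])

print(np.corrcoef(indicators, rowvar=False).round(2))
# Indicators 1 and 2 correlate because they proxy the same latent trait;
# a list of such proxies looks "valid" while capturing one dimension only.
```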
The choice of criteria and the modes of data handling in ranking studies cannot be viewed as just being an operational affair. Rather, conceptual underpinnings play an important role.
As individual rankings vary from each other in the choice of indicators and measures of academic quality and reputation, it is not possible to provide a more or less complete list of the data-related rationales of rankings. Eight rationales, however, can be named as typical for the majority of international rankings.
Dominance of research
First, most rankings of “world class universities” put prime emphasis on the quality of research, even though they tend to claim that they take into account all functions of higher education. Thus, good quality of teaching and learning often is inferred if a high quality of research is measured. This is in contrast to the widespread view in many countries that academics have to be professionally competent not only in research, but also in teaching, and that teaching is a demanding task of its own.
Neglect of the “third mission” of higher education
Second, the “third mission” of higher education – i.e. the increasing expectation in recent years that higher education has to play an active role in shaping society directly – is not addressed in most university rankings at all. Thus, universities and scholars taking care of the “third mission” are likely to be losers in rankings, because efforts put in this direction tend to be labelled implicitly in rankings as being in vain: those adhering to the “third mission” concept are likely to put less energy into the activities measured in rankings.
Low regard of the impact of higher education
Third, impact is hardly taken into account. Emphasis in rankings often is placed on input, process and output measures. Although public attention worldwide increasingly has turned to the impact of academia, i.e. to the effect of academia upon the outside world – for example on the “knowledge society” – impact measures have remained scarce in rankings.
This does not come as a surprise: impact measures, as a rule, are not easily available. Substantial analytical work would be needed to collect the information. Moreover, we note more frequent controversies about appropriate impact criteria than about appropriate output criteria. Last but not least, it is more difficult to show that the university really was the cause of the assumed impact, as other factors come into play. For example, the author of this contribution has shown in a research project that the income of former university students some years after graduation varies far more according to the wealth of the region where graduates are employed than according to the reputation of the university they graduated from. But in the discourse on rankings, we often observe that proponents of rankings just take for granted that high input, process and output scores are bound to have high impact.
Scarcity of measures of relevance
Fourth, social relevance tends to be neglected. We note similarly that most measures employed in ranking studies aim at indicating academic quality. However, universities are expected to strive both for high academic quality and for social relevance. In fact, many top universities claim publicly that they serve both equally and thus downplay the frequently noted tensions between quality and relevance. Again, this might have not only normative, but also operational reasons: measuring social relevance seems to be too complicated and too controversial to become a normal component of ranking studies.
Biased notions of the internationality of universities
Fifth, international features of higher education tend to be addressed in most ranking studies in a biased way. “Internationalisation” of higher education – a highly regarded strategic aim of most academically highly reputed universities – often is measured in rankings merely in terms of large proportions of foreign staff and of foreign students. This option certainly is based on the convincing assumption that many internationally mobile students, young scholars and mature scholars like to move to famous universities abroad. But taking proportions of inward-mobile students and scholars as the major indications of a high level of internationality is questionable in many respects. For example, many universities do not consider high proportions of foreign students and scholars as solely beneficial, because the heterogeneity of their backgrounds might endanger academic quality.
Moreover, the emphasis placed on foreign scholars and students ignores the efforts of many universities to reinforce the international experience of their own country's academics and students. For example, the European ministers in charge of higher education came to the conclusion in the Leuven/Louvain-la-Neuve Communiqué of 2009 that the single highest target of the so-called Bologna Process is to raise the proportion of a country's own students who have had international experience in the course of study, through spending part or all of the study period on study or related experience in another country. From this point of view, universities striving primarily for high proportions of incoming students and scholars from other countries might even be characterized as harboring parochial academic philosophies.
Moreover, a high level of internationality of higher education could be measured successfully only with a wide range of criteria – for example international cooperation in research, international curricular thrust of study programmes (e.g. “internationalisation at home”), activities to reinforce international understanding, etc.
Lingua franca bias
Sixth, there seems to be quite a substantial “lingua franca bias”. As international rankings measure “research productivity” or “academic productivity”, as a rule, through texts published in the English language, they are likely to rate universities of English-speaking countries more favorably than universities of other countries. This is to be expected, because many scholars in other countries spend at least part of their energy communicating with academia and society in their own country and thus look less successful as regards publication in the English language.
Additionally, many analyses have shown that widely employed computerized lists of so-called “international peer-reviewed journals” include various national – “parochial” – U.S. journals, thus making higher rankings of U.S. universities even more likely. For example, the European Science Foundation was quite active in recommending lists of journals in all major disciplines which really could be viewed as international, but it did not succeed in counteracting the widespread bias of including U.S. national journals in the major computerized information systems of academic publications.
Science and engineering bias
Seventh, most rankings favor universities strong in science and engineering disciplines and take less care of the humanities and social sciences. For example, measures of resources employed in rankings, as a rule, do not take into consideration that the amount of resources needed for academic productivity tends to vary substantially between disciplines and thus would need a respective counter-weighting in rankings of whole universities. Moreover, disciplines vary substantially according to the extent to which “academic productivity” can be viewed as being documented in journals, notably in English-language international peer-reviewed journals.
All these factors contribute to a lower weight of the humanities and the social sciences. In many countries, we note that more than half of the students are enrolled in the humanities and social sciences and almost half of the academic staff is active in these disciplines, but the resources provided for them (research funds, junior academic staff, other staff, etc.) tend to amount to less than a quarter. Moreover, we note that academics in the humanities and social sciences share to a lesser extent the norms which guide the rankings: they concentrate their publications less strongly on journal articles, publish less in the international lingua franca, etc. Therefore, a counter-weighting of the disciplines would be indispensable if one really wanted to measure the quality of the university as a whole.
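What such a counter-weighting could look like can be sketched as follows; the field baselines and figures are invented for illustration and do not reflect any actual ranking's procedure:

```python
# Sketch of discipline counter-weighting: normalize each department's
# output by its field's typical output before averaging, so that
# book-oriented fields (humanities) are not penalized against
# journal-oriented fields (sciences). Field baselines are invented.

FIELD_BASELINE = {     # hypothetical average journal articles
    "physics": 12.0,   # per academic and year in each field
    "history": 1.5,
    "sociology": 3.0,
}

def field_normalized_score(output_per_academic: dict[str, float]) -> float:
    """Average each field's output relative to its own field baseline."""
    ratios = [output_per_academic[f] / FIELD_BASELINE[f] for f in output_per_academic]
    return sum(ratios) / len(ratios)

# Without normalization, the physics department dominates the average;
# with it, each discipline is judged against its own publication culture.
university = {"physics": 10.0, "history": 2.0, "sociology": 3.0}
print(f"field-normalized score: {field_normalized_score(university):.2f}")
```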
Some years ago, a public debate started in Japan as to whether top universities should even abolish the humanities and social sciences in order to climb to higher positions in the international rankings. Such thoughts are likely to surface if success in rankings is so highly regarded that it justifies every possible opportunistic strategy. Fortunately, however, this public debate in Japan disappeared after a while.
Neglect of the opus magnum
Eighth, a traditionally important element of the output of academic work – notably of the humanities and social sciences – is completely neglected in most rankings: the opus magnum. Traditionally, books authored by academics and book-size dissertation texts were considered important means of encouraging complex academic encounters. The clear dominance of journal articles in rankings, in contrast, encourages academics to package intellectual endeavors into small pieces. This is another example of rankings not striving to measure academic quality neutrally, but rather redirecting the notion of academic quality.
The consequences: ranking-driven higher education systems?
As already pointed out, international rankings were not only established with the aim of providing a quite accurate map of top universities worldwide. Rather, proponents and advocates clearly intended to manipulate higher education through the provision of “information”, e.g. to stimulate competition within the top sectors of national higher education systems or to steepen the vertical diversity of higher education worldwide. Rankings of “world class universities” intend to change higher education in the direction of the conceptual underpinnings of the rankings.
Experts analyzing ranking issues, as a rule, point out, as already discussed above, first, that the ranking philosophy has been quite successful in influencing the strategic concepts of many top universities worldwide. Major strategic papers of top universities nowadays talk at length about the ranking issue. Second, rankings seem to have had a visible influence on higher education policies in many countries. We often note that certain higher education policies are publicly justified as tools to enhance the worldwide role of the higher education system of the respective country through exceptional support of top universities.
A close look, however, reveals that the institutional strategies and national policies stimulated by the international ranking scene vary substantially. A well-known example of this enormous variety is the German “Exzellenz-Initiative”. Proponents of rankings often have claimed that this German national policy, starting in 2005, is a typical example of a worldwide higher education policy trend: it shows that even countries traditionally preferring a flat institutional hierarchy of universities have turned towards a policy of reinforcing a steep stratification of higher education. But a close look shows that the German Excellence Initiative has a much flatter vertical institutional hierarchy in mind than had already existed in various other countries before the ranking philosophy and policy gained momentum. For example, more funds were provided at the start of the German Excellence Initiative for inter-institutional research and doctoral training consortia than for individual top universities. Funds for top universities were made available for 10 of the about 70 medium-sized and large universities in Germany, i.e. for quite a large proportion rather than for a tiny top; the funds for excellent research and doctoral programme consortia were distributed even more widely. The funds were provided for a short period and were relatively small compared to the historically grown financial advantage of top universities in countries with a highly stratified higher education system. In sum: the German policy aimed at making a very moderately vertical higher education system slightly more vertical, but not at moving towards a high degree of verticality.
Third, most analysts point out that increasing emphasis placed on research has been a major trend of higher education in economically advanced countries in the first and the second decade of the 21st century. It is widely assumed that the ranking philosophy contributed to it.
Fourth, experts often point out that rankings have contributed to an increased mood of competition in higher education. Operational effects are also often named: more reviewed journals and publications in reviewed journals, more publications in the English language, more doctoral dissertations based on journal essays, a relative decline of support for the humanities and social sciences, more attention paid to scandalous cheating of researchers, more “sexy” interpretations of the ranking scenario in the public media, etc.
But the consequences of rankings are not so clear in various other respects: does the ranking scenario have stronger effects in terms of “healthy” or “unhealthy” competition? Has ranking turned out to be the enemy or the friend of academic creativity? Did ranking contribute to an over-emphasis on academic quality at the expense of relevance, and an over-emphasis on the value of good processes and output of higher education at the expense of attention paid to the impact of academic work? Do rankings reinforce or relativize academic meritocracy? Do rankings offer flexibility for horizontal diversity, or does the intention of universities to climb up in “league tables” lead to so much imitation that higher education serves the diverse needs of the knowledge society too narrowly?
Last but not least, there are indications that rankings have failed completely in some respects. Most prominently, the notion that highly stratified national higher education systems with considerable numbers of highly ranked universities ensure a higher overall academic productivity than moderately stratified systems seems not only to have been wrong from the outset, but also not to have become more convincing over time.
We certainly ought not to be astonished that rankings are so widely accepted in spite of their questionable data quality. But the consequences of rankings for academia and for the relationships between higher education and society are the most salient issue.
Causes of popularity
Since university rankings have become very popular, we have to reflect on the causes of this popularity. On the one hand, many advocates of rankings are convinced that the validity of the conceptual underpinnings and of the actual measurements is crucial. Accordingly, the university ranking “fad” is based on the correct insight that the top of the higher education system is far more important than the rest, that steep vertical stratification of higher education systems is beneficial, and that higher education is most successful if most talents are concentrated in a few institutions.
On the other hand, those challenging these assumptions have to speculate about the possible causes: is the lobby of the winners of rankings – i.e. countries with highly stratified higher education systems and top universities – so strong that they easily keep the ranking myths alive? Is identifying and celebrating the “best” so “sexy” that the media like to go on investing in university rankings and disseminating their results – treating Nobel Prize winners like Oscar winners and the “university number one” like “Miss Universe”?
The author of this chapter would like to present a third hypothesis: rankings as announcements of the best are so popular exactly because of the enormous difficulties generally faced in identifying and measuring academic quality. Academia is in need of symbolic activities to praise the best in order to calm the sense of uncertainty as regards academic quality.
We have neither a concept nor a method to single out the most convincing explanation. Therefore, we will continue to ask: why have university rankings become so popular?
Alternative scenarios
Proponents of the theoretical and political underpinnings of the mainstream of international rankings certainly have hoped that national higher education policies and the institutional strategies of top universities would succeed in driving scholars, institutions of higher education and national policies in the directions described above. These underlying higher education policy “fads” certainly were successful in some respects, but not so successful in others.
In describing the characteristics of rankings above, much has been said about the overt and hidden rationales and about the widely assumed effects of university rankings. As those rationales and the desirability of their effects have remained controversial, alternative policies and activities should not be overlooked. Some examples might be named:
- Efforts became popular in Europe to establish a “U-Multirank” system: not a single ranking ought to measure top universities according to academic quality, but several rankings should pay attention to different major objectives and criteria, e.g. internationality, regional relevance, etc.
- Rankings of the academic quality of national higher education systems instead of rankings of individual universities were proposed, because the major competition widely is viewed to be between countries rather than between universities, and because countries could succeed with the help of diverse institutional configurations.
- Striving to create and reinforce “flagship universities” is often considered a more promising alternative, because this concept encourages individual universities to develop unique settings of excellence rather than striving to be successful according to common criteria of success.
- National policies often are proposed to put more emphasis on encouraging horizontal diversity of higher education and, thus, to encourage individual institutions to develop specific “profiles” in order to ensure that a broadening range of societal needs in the “knowledge society” is taken care of.
- Strategies such as “diversity management” are based on the belief that individual universities do not need to strive for a relatively homogeneous talent pool of academics and students in order to be successful, as the ranking concepts suggest. Instead, intra-institutional diversity is viewed as creative.
- The basic assumption of rankings that we are on the move towards an “elite knowledge society” often is challenged. Knowledge widely dispersed in society and the “wisdom of the many” often is viewed to be more important.
- Finally, there are concepts calling on higher education to focus its efforts on major thematic issues rather than being primarily concerned about institutional configurations. The call for “higher education for sustainable development” is the most prominent example.
So far, we cannot be certain whether university rankings “are here to stay”. Nor do we know whether other concepts about the functions of higher education and about the configuration of the higher education system can move into the limelight instead.
Key literature on stratification of higher education and university rankings
Commission of the European Communities (2010). Progress towards the common European objectives in education and training (2010/2011) – Indicators and benchmarks. Brussels: Commission of the European Communities (SEC (2011)526; Commission Staff Working Document).
De Corte, E. (Ed.) (2003). Excellence in higher education. London: Portland Press.
Dill, D., & Soo, M. (2005). Academic quality, league tables, and public policy: A cross-national analysis of university rankings. Higher Education, 49(4), 495–533. https://doi.org/10.1007/s10734-004-1746-8.
Hazelkorn, E. (2011). Rankings and the reshaping of higher education: The battle for world-class excellence. Houndmills: Palgrave Macmillan.
Hazelkorn, E. (2017). Global rankings and the geopolitics of higher education: Understanding the influence and impact of rankings on higher education, policy and society. Abingdon: Routledge.
Kehm, B. M., & Erkkilä, T. (2014). Editorial: The ranking game. European Journal of Education, 49(1), 3–11. https://doi.org/10.1111/ejed.12062.
Kehm, B. M., & Stensaker, B. (Eds.) (2009). University rankings, diversity, and the new landscape of higher education. Rotterdam and Taipei: Sense Publishers.
Kováts, G. (2015). ‘New’ rankings on the scene: The U21 ranking of national higher education systems and U-Multirank. In A. Curaj, L. Matei, R. Pricopie, J. Salmi, & P. Scott (Eds.), The European higher education area: Between critical reflections and future policies (pp. 301–320). Cham: Springer.
Kwiek, M. (2019). Changing European academics: A comparative study of social stratification, work patterns and research productivity. London and New York: Routledge.
Marginson, S. (2008). The new world order of higher education: Research rankings, outcome measures and institutional classifications. Melbourne, Victoria: University of Melbourne, Centre for the Study of Higher Education.
Rostan, M., & Vaira, M. (Eds.) (2011). Questioning excellence in higher education. Rotterdam: Sense.
Sadlak, J., & Liu, N. C. (Eds.) (2007). The world-class university and ranking: Aimed beyond status. Cluj: Cluj University Press.
Shavit, Y., Arum, R., & Gamoran, A. (Eds.) (2007). Stratification in higher education: A comparative study. Stanford, CA: Stanford University Press.
Shin, J. C., & Kehm, B. M. (Eds.) (2013). Institutionalization of world-class universities in global competition. Dordrecht, Heidelberg, New York and London: Springer.
Shin, J. C., Toutkoushian, R. K., & Teichler, U. (Eds.) (2011). University rankings: Theoretical basis, methodology and impacts on global higher education. Dordrecht: Springer.
Teichler, U. (2008). Diversification? Trends and explanations of the shape and the size of higher education. Higher Education, 56(3), 349–379. https://doi.org/10.1007/s10734-008-9122-8.
Teichler, U. (2020). Higher education system. Differentiation, horizontal and vertical. In J. C. Shin, & P. Teixeira (Eds.), Encyclopedia of international higher education systems and institutions. Dordrecht: Springer. https://doi.org/10.1007/978-94-017-9553-1_36-1.
Trow, M. (1974). Problems in the transition from elite to mass higher education. In Policies for higher education (pp. 51–101). Paris: OECD.
Usher, A., & Savino, M. (2006). A world of difference: A global survey of academic league tables. Toronto: Educational Policy Institute.
Van Vught, F. (Ed.) (2009). Mapping the higher education landscape. Dordrecht: Springer.