  • 1 Economic and Social Research Institute, Dublin, Ireland
  • 2 Institute for Environmental Studies, Vrije Universiteit, Amsterdam, The Netherlands
  • 3 Department of Spatial Economics, Vrije Universiteit, Amsterdam, The Netherlands
  • 4 Department of Economics, Trinity College, Dublin, Ireland
Open access

Abstract

Performance measures of individual scholars tend to ignore the context. I introduce contextualised metrics: cardinal and ordinal pseudo-Shapley values that measure a scholar's contribution to (perhaps power over) her own school and her market value to other schools should she change job. I illustrate the proposed measures with business scholars and business schools in Ireland. Although conceptually superior, the power indicators imply a ranking of scholars within a school that is identical to the corresponding conventional performance measures. The market value indicators imply an identical ranking within schools and a very similar ranking between schools. The ordinal indices further contextualise performance measures and thus deviate further from the corresponding conventional indicators. As the ordinal measures are discontinuous by construction, a natural classification of scholars emerges. Averaged over schools, the market values offer little extra information over the corresponding production and impact measures. The ordinal power measure indicates the robustness or fragility of an institution's place in the rank order. It is only weakly correlated with the concentration of publications and citations.

Introduction

Measuring the academic quality of schools and scholars is now routine practice. Scholars are ranked according to their output or citations to their work, typically with some correction for quality. Rankings apart, scholars are evaluated without context, while in reality scholars are affiliated with schools and are part of communities. Schools are ranked according to some aggregation of the scores of their members (Kalaitzidakis et al. 2003; Nederhof 2008; Nederhof and Noyons 1992; Zhu et al. 1991).1 However, some institutions are carried by a single, exceptional individual, while other institutions have a large number of good researchers (Crewe 1988). This has implications for the power relations within an institution and for the robustness of its ranking to job mobility. The contextualised performance indicators for individual scholars proposed in this article, when aggregated to schools, measure fragility and robustness of the performance of schools. The proposed indices thus complement standard measures of academic quality.

The proposed measures are variations of standard ones in economics. The starting point is the Shapley value (Shapley 1953), which measures the average contribution of an agent to any coalition. Instead of any coalition, I use the existing schools. Value is measured cardinally (i.e., the score of the school) and ordinally (i.e., the rank of the school). As far as I know, I am the first to propose this.2

I use the normalised Herfindahl–Hirschman Index (HHI) to measure concentration (Herfindahl 1951; Hirschman 1964).3 While not new (Cox and Chung 1991; Rubin and Chang 2003), this is not common in the scientometric literature.4

While the new indicators constitute the main contribution of this article, the application is interesting too. I illustrate the new measures using data on business schools and business scholars in Ireland. Business research is rarely evaluated, at least in the academic literature (Harzing 2005; Hodder and Hodder 2010; Hogan 1990; Kao and Pao 2009; Vieira and Teixeira 2010).

This article continues as follows. Section 2 specifies the indicators and presents the data. Section 3 applies this to business schools and business scholars in Ireland and discusses the results. Section 4 concludes.

Methods and data

Methods

Let us consider the number of publications P as an indicator for the production of a scholar, and the number of citations C as an indicator for the quality or impact of the research. These are standard indicators, but they are devoid of any context: the performance of a scholar depends only on her own publications and citations.5 When individual researchers are ranked according to these indicators, a context emerges—but individuals are treated as independent of one another. In reality, however, scholars group together in schools.

Shapley (1953) introduced a measure for the contribution of an agent (scholar) to a coalition (school). The Shapley value of an agent equals the average contribution of that agent to any coalition. I here define the value of a coalition as the average number of publications or citations. The contribution of a researcher is then the change in the coalition's value should this researcher leave or join the coalition. I only consider existing institutions, rather than any coalition, and therefore refer to this indicator as a pseudo-Shapley value. Following the convention in the analysis of cartels (d’Aspremont et al. 1983), I further assume that scholars make decisions independently of other scholars about whether to change affiliation or retire.

The pseudo-Shapley value $S_r$ for the number of publications $p_r$ of a researcher $r$ in institution 1 is defined as:

$$S_r = \left( \bar{p}_1 - \frac{n_1 \bar{p}_1 - p_r}{n_1 - 1} \right) + \frac{1}{I-1} \sum_{i=2}^{I} \left( \frac{n_i \bar{p}_i + p_r}{n_i + 1} - \bar{p}_i \right) \qquad (1)$$

where $n_i$ is the number of members of institution $i$, $I$ is the number of institutions, and $\bar{p}_i$ is the average number of publications per researcher at institution $i$.6 Institutions can be ordered at will, so that (1) defines $S_r$ for a researcher in any school. The pseudo-Shapley value $S_r$ is the contribution of an individual scholar to the research performance of a school, averaged over all schools. It is increasing in the number of publications, and decreasing in the number of scholars.

(1) splits the pseudo-Shapley value into two components. The first component is the actual indicator value for the school minus the indicator value in case scholar r departs.7 The second component is the indicator value in case scholar r joins another school minus the actual indicator value, averaged for the other schools.

Note that, in both components, we use the indicator with the scholar minus the indicator without the scholar. Therefore, the pseudo-Shapley value is independent of the current affiliation of the researcher.

The two components are more insightful than the pseudo-Shapley value itself. The first component is a measure of the power of an individual scholar over her school. The second component is a measure of the contribution a scholar would make to the competition, that is, a measure of the market value.

Let us therefore define two new indicators, power $P$ and market value $M$, as follows:

$$P_r = \bar{p}_1 - \frac{n_1 \bar{p}_1 - p_r}{n_1 - 1} = \frac{p_r - \bar{p}_1}{n_1 - 1} \qquad (2)$$

$$M_r = \frac{1}{I-1} \sum_{i=2}^{I} \left( \frac{n_i \bar{p}_i + p_r}{n_i + 1} - \bar{p}_i \right) = \frac{1}{I-1} \sum_{i=2}^{I} \frac{p_r - \bar{p}_i}{n_i + 1} \qquad (3)$$

Note that $P_r + M_r = S_r$: (2) equals the first component of (1), and (3) its second component; in both cases, the expressions have been simplified. Equations (1)–(3) are defined for publication numbers, but the same definitions hold for citation numbers, quality-weighted publication numbers, or indeed any individual performance indicator that is averaged to assess school performance.
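For concreteness, the two components can be computed directly from their verbal definitions: the home school's average with the scholar minus its average without her (power), and the average gain in the other schools' scores should she join (market value). The sketch below uses hypothetical toy data, not the Irish dataset:

```python
def power_and_market(schools, home, p):
    """Cardinal power and market value of a scholar with score p
    (publications, say) currently at school `home`.

    Power (eq. 2): home average with the scholar minus home average
    without her.  Market (eq. 3): the rise in another school's average
    if she joined, averaged over all other schools."""
    avg = lambda v: sum(v) / len(v)
    without = list(schools[home])
    without.remove(p)                     # home school without the scholar
    power = avg(schools[home]) - avg(without)
    gains = [avg(schools[s] + [p]) - avg(schools[s])
             for s in schools if s != home]
    market = avg(gains)
    return power, market

# Hypothetical toy data: one outstanding scholar (8) in school A
schools = {"A": [8, 2, 2], "B": [3, 3], "C": [1, 1, 1, 1]}
power, market = power_and_market(schools, "A", 8)
```

Here power is 2: school A's average drops from 4 to 2 if the star leaves. The market value averages the gain she would bring to B and C.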

In (1)–(3), the contribution of a scholar is defined as her contribution to an indicator score of a school. One may also consider the contribution of a researcher to the rank of any existing institution.8 This could be referred to as an ordinal pseudo-Shapley value9 and it is split into an ordinal power indicator and an ordinal market indicator.

In fact, the ordinal power and market indicators contain more information than the ordinal Shapley values. The ordinal Shapley value measures the change in rankings should a scholar leave a school to join another. This would push one school up in the rankings, and another one down—while the ranking of third schools may be affected too. Indeed, the net change in rankings of all schools is zero.
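The ordinal computation can be sketched as follows (the helper names are hypothetical, not from the paper): enumerate the I−1 departure scenarios, re-rank the schools after each hypothetical move, and average the rank changes. The toy data mirror the worked example in the appendix:

```python
def school_ranks(avgs):
    """Rank schools by average score; rank 1 is best."""
    order = sorted(avgs, key=avgs.get, reverse=True)
    return {s: i + 1 for i, s in enumerate(order)}

def ordinal_power_market(schools, home, p):
    """Ordinal power: places the home school drops when a scholar with
    score p departs, averaged over all destination scenarios.
    Ordinal market value: places the other schools gain, averaged over
    scenarios and then over schools."""
    avgs = {s: sum(v) / len(v) for s, v in schools.items()}
    base = school_ranks(avgs)
    others = [s for s in schools if s != home]
    remaining = list(schools[home])
    remaining.remove(p)
    drops, gains = [], {s: [] for s in others}
    for dest in others:
        new = dict(avgs)
        new[home] = sum(remaining) / len(remaining)
        new[dest] = (sum(schools[dest]) + p) / (len(schools[dest]) + 1)
        r = school_ranks(new)
        drops.append(r[home] - base[home])      # places dropped by home
        for s in others:
            gains[s].append(base[s] - r[s])     # places gained by s
    power = sum(drops) / len(drops)
    market = sum(sum(g) / len(g) for g in gains.values()) / len(others)
    return power, market

# Star scholar (100) in school A, as in the appendix's worked example
schools = {"A": [100, 10, 10], "B": [11, 10, 10], "C": [10, 10, 9]}
opower, omarket = ordinal_power_market(schools, "A", 100)
# opower → 1.5, omarket → 0.75
```

Note the discontinuity: small changes in scores leave the rank order, and hence both ordinal indicators, unchanged.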

In the application below, for individual scholars, I report the number of publications, the number of citations, the cardinal power and market indicators (based on both publications and citations), and the ordinal power and market indicators (based on both publications and citations).

For schools, I report the average number of publications and citations, and the average cardinal and ordinal power and market indicators. Note that the average cardinal power indicators are included for completeness only: the cardinal power index is an individual's contribution to the average, so its average is zero. A high value of the average ordinal power measure indicates that a few scholars contribute most of the publications or citations of a school, because individual scholars would have low power over their school if all their colleagues performed equally well.

I compare the school's power indicator to the HHI, which is a standard measure for the concentration of market share. The HHI is defined as the sum of the squared shares:

$$\mathit{HHI} = \sum_{i=1}^{n} \left( \frac{p_i}{\sum_{j=1}^{n} p_j} \right)^2 \qquad (4)$$

where $p_i$ is the number of publications (say) of scholar $i$ and $n$ is the number of scholars (including the ones that did not publish). The HHI lies between $1/n$ and one. The HHI would be one if a single scholar authored all publications of a school (monopoly). The HHI would be $1/n$ if all scholars in a school published the same number of papers. Because institutions differ in size, I use the normalized HHI:

$$\mathit{HHI}^{*} = \frac{\mathit{HHI} - 1/n}{1 - 1/n} \qquad (5)$$

HHI* lies between zero and one, regardless of $n$.
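A minimal sketch of the normalised HHI, taking a list of per-scholar publication counts; note the normalisation is undefined for a one-person school (n = 1):

```python
def normalized_hhi(pubs):
    """Normalised Herfindahl-Hirschman Index of a list of publication
    counts: 0 if all scholars publish equally, 1 under monopoly.
    Undefined (division by zero) when there is a single scholar."""
    n = len(pubs)
    total = sum(pubs)
    hhi = sum((p / total) ** 2 for p in pubs)   # eq. (4)
    return (hhi - 1 / n) / (1 - 1 / n)          # eq. (5)

normalized_hhi([5, 0, 0, 0])   # → 1.0 (monopoly)
normalized_hhi([3, 3, 3])      # → 0.0 (equal shares, up to rounding)
```

Between the extremes, partial concentration yields intermediate values, e.g. `normalized_hhi([2, 1, 1])` is 0.0625.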

There is no reason to assume that the HHI and the power indicator will yield similar results. The HHI is quadratic in a scholar's number of publications and inversely proportional to the square of the school's publications. The power indicator also increases with a scholar's publications and decreases with her school's number, but the relationships are linear.

Data

I illustrate the above indicators with business scholars and business schools on the island of Ireland. Business schools are hard to define. Some universities have an entity called "school of business", but other institutions mix business studies with other disciplines, or spread business studies over a number of schools. Table 1 shows, for each of the 11 institutions,10 the schools (colleges, faculties) and their departments (schools, groups). There are two contentious issues. First, it is difficult to draw a line between business studies and economics. The topics are closely connected and often taught together. This study includes those economists who teach in business schools, but excludes other economists. The other issue is tourism. In two institutions, tourism studies are part of the business school, while elsewhere tourism studies are placed in other departments or indeed in a department of their own. Here, tourism scholars are included if they teach in a business school.

Table 1

Business schools assessed in this study

Acronym | Institution | School and departments
DCU | Dublin City University | Business School
  • Accounting
  • Economics, finance and entrepreneurship
  • Human resources management
  • Management
  • Marketing
DIT | Dublin Institute of Technology | College of Business
  • Accounting and finance
  • Management
  • Marketing
  • Retail and services management
NCI | National College of Ireland | School of Business
NUIG | National University of Ireland at Galway | School of Business and Economics
  • Economics
  • Accountancy, finance and information systems
  • Management
  • Marketing
NUIM | National University of Ireland at Maynooth | School of Business
  • Management
NUIM | | School of Economics, Finance and Accounting
QUB | Queen's University Belfast | Management School
TCD | Trinity College Dublin | School of Business
UCC | University College Cork | Faculty of Commerce
  • Accounting and finance
  • Business information systems
  • Food business and development
  • Management and marketing
UCD | University College Dublin | School of Business
  • Accountancy
  • Management information systems
  • Industrial relations and human resources
  • Marketing
  • Management
  • Banking and finance
  • Corporate governance
UL | University of Limerick | Business School
  • Accounting and finance
  • Economics
  • Management and marketing
  • Personnel and employment relations
UU | University of Ulster | Business School
  • Accounting
  • Business, retail and financial services
  • Hospitality and tourism management
  • International business
  • Management
  • Marketing, entrepreneurship and strategy

For this study, business scholars are scholars who are employed in the business schools as defined in Table 1. People were identified as listed on the websites in early September 2010. There is no reason to believe that these lists are accurate. Indeed, several errors were uncovered (and corrected) during the data vetting process (see below). However, it is the only source of information available.

There are a total of 748 business scholars in Ireland. In addition, business schools employ administrative staff, teaching and research assistants, and PhD students—all of whom were excluded (if so identified). Business schools also have a large number of adjunct faculty—typically, senior business people who teach a few classes a year—while some business schools also host research staff from companies. These people were excluded too.

748 is a substantial number of scholars, each of whom has to be assessed individually.11 For that reason, a simple method is used. Data were collected from Scopus12 only. Scopus has a much broader coverage than the Web of Science13 for recent years (but limited coverage before 1996). As Irish business scholars tend to be relatively young and tend to publish outside the core journals, Scopus is a more appropriate source of data. See also Vieira and Gomes (2009). Nonetheless, some journals are not covered in Scopus, including a number of particular importance to business scholars in Ireland (e.g., Administration, Irish Journal of Management, Irish Marketing Journal, Irish Marketing Review). Google Scholar14 (and thus Publish or Perish (Harzing 2010)15 and Scholarometer16) has a wider coverage than Scopus and would thus be more appropriate for business scholars (Mingers and Lipitakis 2010). However, Google Scholar suffers from a lack of quality control on publications and citations.17

Four statistics were gathered from Scopus: year of first publication, number of publications, number of citations, and h-index (Hirsch 2005). People's names, affiliations, specializations, degrees, ranks, and sex were also recorded. Here, I only use publications, citations, and affiliations.

The data are available at: http://hdl.handle.net/1902.1/15802 .

The data have been cross-checked with CVs when online. Three preliminary versions of the data were published at IrishEconomy,18 with an explicit invitation to correct data where needed. Heads of departments were all notified of the exercise and invited to comment. This vetting process led to substantial changes in the data—people and indeed departments were added; administrative, adjunct and trainee staff were removed; and publication and citation records were corrected.

The actual performance of business schools and scholars is discussed in detail in a companion paper (Tol 2010). I here focus on the information contained in the new indicators.

Results

Scholars

Equation (2) implies that the power indicator is linear in the number of publications or citations, with a slope equal to the inverse of the number of schools times the number of scholars in a school. The negative intercept equals the average number of publications or citations in the school, divided by the number of scholars in the school minus one and by the number of schools. That is, both the intercept and the slope differ between schools but are identical within a school. Because of this, the power ranking within a school is the same as the publication/citation ranking within that school.

Figure 1 shows the cardinal power indicators for the 748 business scholars in Ireland, plotted against the number of publications and citations. Power over the own school increases with production and influence, but the rate differs between schools. TCD is the smallest school and therefore has the steepest slope. UU and DIT are the largest schools, and their slopes are thus the shallowest. Very productive or influential TCD scholars therefore have greater power than equally productive or influential UU scholars.

Fig. 1 Cardinal power indicators for publications (top panel) and citations (bottom panel) as a function of the number of publications and citations, respectively

Citation: Scientometrics 90(3); doi:10.1007/s11192-011-0555-y

Figure 2 shows the ordinal power indices. The pattern is very different from that in Fig. 1. Rank changes are integer, so average rank changes are discontinuous. Like the cardinal power indicators, the ordinal power indicators increase in the number of publications and citations. However, the context is much more important. While some TCD scholars have a great impact on the score of their small school, TCD is so far ahead of the other business schools that only one scholar could, by departing, affect its ranking on publications, and none could affect TCD's ranking on citations. DCU's performance, on the other hand, is similar to that of some of its competitors, and a number of DCU scholars could affect its ranking by departure.

Fig. 2 Ordinal power indicators for publications (top panel) and citations (bottom panel) as a function of the number of publications and citations, respectively

Equation (3) implies that market value is linear in the number of publications or citations. The intercept depends on the average number of publications or citations in all schools but one's own, and on the number of scholars in those schools. The slope depends on the number of scholars in the other schools. One would therefore expect all scholars to lie on roughly the same line. Figure 3 confirms this: market values rise linearly with the number of either publications or citations, and there is little difference between schools. The slope varies between 0.0165 for TCD (the smallest school) and 0.0202 for DIT (the largest school). This is intuitive: scholars from larger schools have fewer outside opportunities and thus a greater impact. Differences are small, however.

Fig. 3 Cardinal market value indicators for publications (top panel) and citations (bottom panel) as a function of the number of publications and citations, respectively

Figure 4 shows the ordinal market value indicators. The indicator is discontinuous. Market value increases with the number of publications and citations. However, scholars from some schools need to publish more, or be cited more, to command the same market value as scholars from other schools. If a scholar moves from one school to another, the impact on the ranking depends on her productivity or influence, on the initial score and size of both schools, and on their position relative to the other schools. With so many variables, the pattern is highly non-linear and hard to interpret or predict.

Fig. 4 Ordinal market value indicators for publications (top panel) and citations (bottom panel) as a function of the number of publications and citations, respectively

Consider, for instance, the four scholars (one at NUIG, two at QUB, and one at UU) with an ordinal market value based on publications of 0.64. Their publication numbers range from 31 to 49. The discretisation of the ranking groups them together for a single market value. There is another QUB scholar with an ordinal market value of 0.73 and 46 publications. He scores better than his QUB fellows with 35 and 37 papers. He also scores better than the UU scholar with 49 publications. The reason is that ranks change if the QUB scholar moves to UU; but not if the UU scholar moves to QUB. The market value indicator captures part of the context in a way that simple publication or citation numbers cannot.

Schools

Table 2 shows the performance indicators for the 11 business schools in Ireland. The smallest school employs only 20 scholars, the largest 147. Average publication numbers range from 0.2 to 10.9 published papers per scholar, and average citations from 0.7 to 63.3 citations per scholar.

Table 2

Indicators for production (publications) and impact (citations) for eleven research-oriented business schools in Ireland

School | Staff | Pub Ave | Pub Market Card | Pub Market Ord | Pub HHI | Pub Power Card(a) | Pub Power Ord | Cit Ave | Cit Market Card | Cit Market Ord | Cit HHI | Cit Power Card(a) | Cit Power Ord
TCD | 20 | 10.9 | 0.126 | 0.173 | 0.1068 | 0 | 0.000 | 63.25 | 0.733 | 0.123 | 0.1206 | 0 | 0.000
QUB | 62 | 7.3 | 0.052 | 0.107 | 0.0289 | 0 | 0.000 | 35.94 | 0.165 | 0.063 | 0.0556 | 0 | 0.072
UCD | 79 | 5.7 | 0.018 | 0.086 | 0.0502 | 0 | 0.015 | 33.39 | 0.112 | 0.045 | 0.0472 | 0 | 0.070
NUIG | 56 | 4.9 | 0.003 | 0.078 | 0.0276 | 0 | 0.002 | 25.55 | −0.051 | 0.029 | 0.0706 | 0 | 0.002
NUIM | 31 | 3.7 | −0.023 | 0.038 | 0.0676 | 0 | 0.097 | 32.45 | 0.093 | −0.015 | 0.1993 | 0 | −0.481
UL | 72 | 4.0 | −0.016 | 0.051 | 0.0284 | 0 | 0.008 | 16.26 | −0.244 | 0.015 | 0.0752 | 0 | 0.001
DCU | 63 | 3.3 | −0.032 | 0.079 | 0.0286 | 0 | 0.218 | 19.46 | −0.177 | 0.033 | 0.1037 | 0 | 0.092
UU | 130 | 3.1 | −0.035 | 0.008 | 0.0590 | 0 | 0.001 | 18.84 | −0.190 | 0.031 | 0.1092 | 0 | 0.025
UCC | 67 | 3.2 | −0.032 | 0.008 | 0.0356 | 0 | −0.284 | 9.04 | −0.394 | 0.001 | 0.0521 | 0 | 0.000
DIT | 147 | 0.3 | −0.093 | −0.072 | 0.0612 | 0 | 0.007 | 0.70 | −0.567 | 0.000 | 0.1197 | 0 | 0.000
NCI | 21 | 0.2 | −0.095 | −0.074 | 0.2440 | 0 | 0.000 | 2.86 | −0.522 | 0.004 | 0.9003 | 0 | 0.043

Staff: number of teaching and research staff, excluding PhD students and adjunct faculty. Ave: average number of publications/citations. Market: average market value of staff, cardinal (Card) or ordinal (Ord). Power: average power value of staff. HHI: Herfindahl–Hirschman Index

(a) Zero by definition

The average cardinal power indicator is zero by definition. The average ordinal power indicator ranges from −0.28 to +0.22 for publications and from −0.48 to +0.09 for citations. A negative value indicates that the school's rank would improve if the average scholar departed. A positive value indicates that the departure of a scholar would lead, on average, to a lower rank; a scholar could use this threat to exert power. There are also schools with an average power index of zero, that is, the departure of a scholar would not on average affect the ranking. Zero values are more prevalent near the top and bottom of the publication and citation ranks, because there ranks can change in one direction only.

Table 2 also shows the Herfindahl–Hirschman Indices which range from 0.03 to 0.24 for publications and from 0.05 to 0.90 for citations. Citations are more concentrated than publications, except for UCD.

Figure 5 plots the concentration indices against the power indices. Although there is a suggestion of some relationship—greater concentration and greater power seem to go together—the two indicators clearly measure different things. NCI, for instance, has an extremely high concentration of citations. However, as NCI is ranked last on citations, highly-cited researchers at NCI cannot exert any influence over NCI's rank. Publications are less concentrated at NCI, but as NCI is ranked 10th, the (threatened) departure of a highly productive scholar would (potentially) affect NCI's publication rank. Thus, while the HHI measures concentration, the power index contextualizes this and measures whether exceptional scholars can exert influence.

Fig. 5 The HHI of concentration of publications (left axis) and citations (right axis) as a function of the average power indicator

DCU scores highest on the ordinal power indicator for publications. The departure of either of its top two scholars would see DCU drop 1.8 places on average (from 7th to 9th in most cases, and from 7th to 8th in some). There are another ten scholars whose departure would cause DCU's publication rank to fall; the departure of any of the other 51 scholars, however, would not cause a change in rank. The large size of the department also means that the HHI is relatively small: although a few individuals stand out for their number of publications, their share in the total output of the department is nonetheless small.

UCC scores lowest on the ordinal power indicator for publications. It would rise in the rankings if any of its 25 worst performers were to leave, but only the departure of its 5 best performers would lead to a drop in ranking. NUIM scores lowest on the ordinal power indicator for citations. The departure of any of its 18 worst performers would improve its ranking, while the departure of only two scholars would lead to a drop in rank.

The ordinal power indicator thus identifies schools with potential problems. In one school, two scholars can sway the rankings; in two other schools, a ranking-conscious head of department may ask certain people to leave.

Table 2 further shows the average market value indicators, both cardinal and ordinal. Figure 6 plots the market value indices against the number of publications and citations. Cardinal market value is linear in these numbers. This is true for the individual scores (cf. Fig. 3), and therefore also for the average scores. Ordinal market value tends to increase with them, but a richer pattern emerges because ordinal values take context into account. Figure 4 shows that the association between publications/citations and market value differs between scholars at different schools. Figure 6 confirms this at the school level.

Fig. 6 The average market value (cardinal left axis, ordinal right axis) as a function of the number of publications (top panel) and citations (bottom panel)

Discussion and conclusion

Current performance measures of individual scholars ignore the context of scholarship. I introduce pseudo-Shapley values that measure a scholar's contribution to (or power over) her own school and her value to other schools should they hire her. I illustrate the proposed measures with business scholars and business schools in Ireland. Although conceptually superior, the power indicators lead to a ranking of scholars within a school that is identical to that of conventional performance measures, while the market value indicators lead to an identical ranking within schools and a very similar ranking between schools. I introduce both cardinal and ordinal indicators. The ordinal indices further contextualise performance measures and thus deviate more from conventional indicators. Furthermore, as the ordinal measures are discontinuous by construction, a natural classification of scholars emerges. Averaged over schools, the ordinal and particularly the cardinal market values offer little extra information over the average publication and citation numbers. The ordinal power measure gives, for the first time, an indication of the robustness or fragility of an institution's place in the rank order. It is only weakly correlated with the Herfindahl–Hirschman concentration index of publications and citations.

The proposed measures open up new avenues for research. Do scholars prefer to work in schools in which they are powerful, or would they rather work in a place where their average colleague outperforms them (so that they maximise learning)? This cannot be answered with a cross-section (as used in this article); panel data are required to separate cause and effect. Is the ranking of fragile schools (according to the measures proposed here) indeed more volatile? This would again require data for multiple years.

The measures themselves can be further refined too. In particular, I used publication and citation numbers for scholars, and their averages for schools. The mathematics is therefore rather straightforward. The power and market value indicators could also be defined from more complex performance measures, such as the h-index (Hirsch 2005). Furthermore, I assume that scholars are individuals, and negotiate as such with their schools. It is not uncommon, however, for a team of scholars to move from one school to another. Pseudo-Shapley values naturally generalise to this case—indeed, the actual Shapley value is defined for any coalition—but this was not considered here. All this is deferred to future research.

1. Of course, university departments have wider responsibilities than research (Coronini and Mangematin 1999).

2. A search on Shapley in abstracts of papers published in Scientometrics returns nothing.

3. See Lee (2010) for a discussion of alternative concentration measures.

4. A search on HHI in abstracts of papers published in Scientometrics returned one paper on research performance (Yang et al. 2010).

5. de Witte and Rogge (2010) introduce a metric for publication efficiency which combines individual and environmental characteristics.

6. Equation (1) follows from the recursive property of averages: the average of $n$ numbers satisfies $\bar{p}^{(n)} = \frac{(n-1)\,\bar{p}^{(n-1)} + p_n}{n}$.

7. Note that in the definition of the Shapley value, actors are not evaluated relative to an existing coalition structure. This is another reason why the proposed measure is a pseudo-Shapley value.

8. This is intuitively clear, but notationally messy. I therefore skip the formalization. The computations are illustrated in the "Appendix" section.

9. I therefore refer to the indicators S, P, and M as cardinal pseudo-Shapley values.

10. There are also a number of business schools that only teach. These are excluded from the current study.

11. Note that the database contains another 124 individuals who were erroneously included.

17. For instance, Publish or Perish returns over 500 papers for the current author, whose CV counts fewer than 200 publications.

I am grateful to all who helped to improve the database by checking their entries. I had useful discussions on this subject with Frances Ruane. An anonymous referee had excellent comments.

Appendix: A worked example

Table 3 illustrates the computation of the cardinal and ordinal power and market indices. The columns to the left consider School A. Initially, there are three researchers in School A, two average ones and one outstanding. School A has a higher average score than Schools B and C, and ranks first. This is shown in the top rows of Table 3.

Table 3

An illustration of the computation the cardinal and ordinal power and market indicators for an extraordinary scholar (left) and an ordinary one (right)

Extraordinary scholar (initially in School A):

 | A | B | C
Initial situation
1 | 100 | 11 | 10
2 | 10 | 10 | 10
3 | 10 | 10 | 9
Avg | 40.00 | 10.33 | 9.67
Rank | 1 | 2 | 3
Move from School A to B
1 | – | 11 | 10
2 | 10 | 10 | 10
3 | 10 | 10 | 9
4 | – | 100 | –
Avg | 10.00 | 32.75 | 9.67
Rank | 2 | 1 | 3
Move from School A to C
1 | – | 11 | 10
2 | 10 | 10 | 10
3 | 10 | 10 | 9
4 | – | – | 100
Avg | 10.00 | 10.33 | 32.25
Rank | 3 | 2 | 1
Cardinal indicators for school
dScore | 30.00 | 11.21 | 11.29
Power | 30.00 | |
Market (average of B and C) | | 11.25 |
Ordinal indicators for school
dRank | 1.50 | 0.50 | 1.00
Power | 1.50 | |
Market (average of B and C) | | 0.75 |

Ordinary scholar (initially in School B):

 | A | B | C
Initial situation
1 | 100 | 11 | 10
2 | 10 | 10 | 10
3 | 10 | 10 | 9
Avg | 40.00 | 10.33 | 9.67
Rank | 1 | 2 | 3
Move from School B to A
1 | 100 | 11 | 10
2 | 10 | – | 10
3 | 10 | 10 | 9
4 | 10 | – | –
Avg | 32.50 | 10.50 | 9.67
Rank | 1 | 2 | 3
Move from School B to C
1 | 100 | 11 | 10
2 | 10 | – | 10
3 | 10 | 10 | 9
4 | – | – | 10
Avg | 40.00 | 10.50 | 9.75
Rank | 1 | 2 | 3
Cardinal indicators for school
dScore | −3.75 | −0.17 | 0.04
Power | | −0.17 |
Market (average of A and C) | −1.85 | |
Ordinal indicators for school
dRank | 0.00 | 0.00 | 0.00
Power | | 0.00 |
Market (average of A and C) | 0.00 | |

The Power indicator is formed from the changes in the home school's score (dScore) and rank (dRank), respectively; the Market indicator is formed from the changes in the other schools' scores and ranks.

The second set of rows shows what would happen to the schools' scores if the outstanding scholar moved to School B, and the third set of rows what would happen if she left for School C. In both cases, the score of School A would drop from 40 to 10; the average drop of 30 is therefore the cardinal measure of the outstanding scholar's power over her original School A. Averaged over the two scenarios, a move would raise the score of School B by 11.21 and that of School C by 11.29; the mean of these, 11.25, is the cardinal measure of market value.

If the outstanding scholar leaves for School B, School A falls from first to second place, and if she moves to School C, School A falls to third place. The average drop is 1.5 places, and this is the ordinal measure of power.

School B would rise to first place in one case and stay in second place in the other; School C would rise to first place in one case and stay in third place in the other. The average rank change (over cases and schools) is 0.75. This is the ordinal measure of market value.
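The averaging just described can be written compactly; the notation below is mine (the original deliberately skips the formalization). With \(s_l\) the average score of School \(l\), \(s_l^{(k)}\) that score after the scholar leaves her home School \(j\) for School \(k\), \(r_l\) the corresponding ranks, and \(n\) the number of schools:

```latex
P = \frac{1}{n-1} \sum_{k \neq j} \left( s_j - s_j^{(k)} \right),
\qquad
M = \frac{1}{(n-1)^2} \sum_{k \neq j} \sum_{l \neq j} \left( s_l^{(k)} - s_l \right)
```

The ordinal versions replace the score changes by the rank changes \(r_j^{(k)} - r_j\) and \(r_l - r_l^{(k)}\). With \(n = 3\), this yields \(P = 30\), \(M = 11.25\), and ordinal values of 1.5 and 0.75 for the outstanding scholar, as above.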

The right-hand columns of Table 3 repeat the same computations, but now for an average scholar initially in School B. The cardinal indicators are relatively small and the ordinal ones are zero. This is as one would expect: a scholar who is close to the average of her own school, and of all schools, exerts little influence over the rankings.
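As a check, both panels of Table 3 can be reproduced mechanically. The sketch below is my own illustration, not code from the paper; the function name `move_indicators` and the sign conventions (a loss for the home school and a gain for the other schools count as positive) are assumptions consistent with the table.

```python
from statistics import mean

def move_indicators(schools, home, score):
    """Cardinal and ordinal power and market indicators for one scholar.

    schools : dict mapping school name -> list of member scores
    home    : the scholar's current school
    score   : the scholar's own score (must appear in schools[home])
    """
    def avg(d):
        return {s: mean(v) for s, v in d.items()}

    def rank(a):
        # Rank 1 = highest average score; ties share a rank.
        return {s: 1 + sum(a[t] > a[s] for t in a) for s in a}

    base_avg = avg(schools)
    base_rank = rank(base_avg)
    others = [s for s in schools if s != home]
    power_c, power_o = [], []    # home school's loss, per scenario
    market_c, market_o = [], []  # other schools' gains, per scenario
    for dest in others:
        # Hypothetical move: remove the scholar from home, add her to dest.
        moved = {s: list(v) for s, v in schools.items()}
        moved[home].remove(score)
        moved[dest].append(score)
        new_avg = avg(moved)
        new_rank = rank(new_avg)
        power_c.append(base_avg[home] - new_avg[home])   # drop in score
        power_o.append(new_rank[home] - base_rank[home]) # drop in rank
        for s in others:
            market_c.append(new_avg[s] - base_avg[s])    # gain in score
            market_o.append(base_rank[s] - new_rank[s])  # gain in rank
    return {"power": mean(power_c), "ordinal power": mean(power_o),
            "market": mean(market_c), "ordinal market": mean(market_o)}
```

For the outstanding scholar, `move_indicators({"A": [100, 10, 10], "B": [11, 10, 10], "C": [10, 10, 9]}, "A", 100)` returns a power of 30, a market value of 11.25, and ordinal values of 1.5 and 0.75, matching Table 3.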

References

  • Coronini, R., & Mangematin, V. (1999). From individual scientific visibility to collective competencies: The example of an academic department in the social sciences. Scientometrics, 45(1), 55–80.
  • Cox, R. A. K., & Chung, K. H. (1991). Patterns of research output and author concentration in the economics literature. Review of Economics and Statistics, 73(4), 740–747.
  • Crewe, I. (1988). Reputation, research and reality: The publication records of UK departments of politics, 1978–1984. Scientometrics, 14(3–4), 235–250.
  • d'Aspremont, C., Jacquemin, A., Gabszewicz, J. J., & Weymark, J. A. (1983). On the stability of collusive price leadership. Canadian Journal of Economics, 16(1), 17–25.
  • De Witte, K., & Rogge, N. (2010). To publish or not to publish? On the aggregation and drivers of research performance. Scientometrics, 85(3), 657–680.
  • Harzing, A.-W. (2005). Australian research output in economics and business: High output, low impact? Australian Journal of Management, 30(2), 183–200.
  • Harzing, A.-W. (2010). The publish or perish book: Your guide to effective and responsible citation analysis. Melbourne: Tarma Software Research.
  • Herfindahl, O. C. (1951). Concentration in the steel industry. PhD thesis, Department of Economics, Columbia University.
  • Hirsch, J. E. (2005). An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences, 102(46), 16569–16572.
  • Hirschman, A. O. (1964). The paternity of an index. American Economic Review, 54(5), 761.
  • Hodder, A. P. W., & Hodder, C. (2010). Research culture and New Zealand's performance-based research fund: Some insights from bibliographic compilations of research outputs. Scientometrics, 84(3), 887–901.
  • Hogan, T. J. (1990). A measure of accounting faculties and doctoral programs. Scientometrics, 19(3–4), 207–221.
  • Kalaitzidakis, P., Mamuneas, T. P., & Stengos, T. (2003). Rankings of academic journals and institutions in economics. Journal of the European Economic Association, 1(6), 1346–1366.
  • Kao, C., & Pao, H. L. (2009). An evaluation of research performance in management of 168 Taiwan universities. Scientometrics, 78(2), 261–277.
  • Lee, G. J. (2010). Assessing publication performance of research units: Extensions through operational research and economic techniques. Scientometrics, 84(3), 717–734.
  • Mingers, J., & Lipitakis, E. A. E. C. (2010). Counting the citations: A comparison of Web of Science and Google Scholar in the field of business and management. Scientometrics, 85(2), 613–625.
  • Nederhof, A. J. (2008). Policy impact of bibliometric rankings of research performance of departments and individuals in economics. Scientometrics, 74(1), 163–174.
  • Nederhof, A. J., & Noyons, E. C. M. (1992). Assessment of the international standing of university departments' research: A comparison of bibliometric methods. Scientometrics, 24(3), 393–404.
  • Rubin, R. M., & Chang, C. F. (2003). A bibliometric analysis of health economics articles in the economics literature: 1991–2000. Health Economics, 12(5), 403–414.
  • Shapley, L. S. (1953). A value for n-person games. In H. W. Kuhn & A. W. Tucker (Eds.), Contributions to the theory of games, volume II (pp. 307–317). Princeton: Princeton University Press.
  • Tol, R. S. J. (2010). The research output of business schools and business scholars in Ireland. Working Paper 364, Economic and Social Research Institute, Dublin.
  • Vieira, E. S., & Gomes, J. A. N. F. (2009). A comparison of Scopus and Web of Science for a typical university. Scientometrics, 81(2), 587–600.
  • Vieira, P. C., & Teixeira, A. A. C. (2010). Are finance, management, and marketing autonomous fields of scientific research? An analysis based on journal citations. Scientometrics, 85(3), 627–646.
  • Yang, S., Ma, F., Song, Y., & Qiu, J. (2010). A longitudinal analysis of citation distribution breadth for Chinese scholars. Scientometrics, 85(3), 755–765.
  • Zhu, J., Meadows, A. J., & Mason, G. (1991). Citations and departmental research ratings. Scientometrics, 21(2), 171–179.
