Round University Ranking Methodology
Round University Ranking (RUR) is designed to compare the performance of Higher Education Institutions (HEIs) across the globe according to 20 indicators distributed across 4 areas. RUR indicators are grouped in accordance with the principal university missions:
• Teaching;
• Research;
• International Diversity;
• Financial Sustainability.
BASIC PRINCIPLES IN METHODOLOGY DEVELOPMENT:
After a careful evaluation we chose Institutional Profiles, an annually updated database within the InCites online platform provided by Thomson Reuters. The database contains more than a hundred unique indicators, which makes it possible to choose appropriate indicators for any area of university activity.
The next step was to design a system for distributing weights among indicators. This task is performed in two stages:
• Selection of weights for both indicator groups and indicators;
• Mapping of indicators within their areas/groups.
We started designing our methodology from the smallest percentages, assigning 10% to each of the "International Diversity" and "Financial Sustainability" areas, which fixed 2% for each of their indicators. With 20% allocated to these two areas, 80% remained to be distributed between the Teaching and Research groups. Since teaching and research are the two primary missions of every university, we divided the weights evenly: each area received 40%, with 8% for each of the 10 indicators within those groups. We believe this approach is consistent with the practice of the national ranking systems currently used in many countries, and that our methodology, shown in the table below, is free from internal inconsistencies.
|#|Indicator|Weight|
|---|---|---|
|1|Academic staff per students|8%|
|2|Academic staff per bachelor degrees awarded|8%|
|3|Doctoral degrees awarded per academic staff|8%|
|4|Doctoral degrees awarded per bachelor degrees awarded|8%|
|5|World teaching reputation|8%|
|6|Citations per academic and research staff|8%|
|7|Doctoral degrees awarded per admitted PhD|8%|
|8|Normalized citation impact|8%|
|9|Papers per academic and research staff|8%|
|10|World research reputation|8%|
|11|Share of international academic staff|2%|
|12|Share of international students|2%|
|13|Share of international co-authored papers|2%|
|14|International teaching reputation|2%|
|15|Share of international bachelor degrees awarded|2%|
|16|Institutional income per academic staff|2%|
|17|Institutional income per students|2%|
|18|Papers per research income|2%|
|19|Research income per academic and research staff|2%|
|20|Research income per institutional income|2%|
Table I. Round University Ranking methodology
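As a quick arithmetic check, the weights in Table I can be verified to sum to 100%. A minimal sketch in Python:

```python
# Sanity check of the Table I weighting scheme: ten Teaching/Research
# indicators at 8% each, plus ten International Diversity / Financial
# Sustainability indicators at 2% each, should total 100%.
weights = [8] * 10 + [2] * 10   # percentages for indicators 1-20
print(sum(weights))             # 100
```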
I. Teaching (40%)
In our view, teaching is one of the two major missions of higher education institutions and, at the same time, the hardest to measure. For example, how can one evaluate a professor's talent and the inspiration he or she spreads, except through subjective surveys? The only alternative is to use formalized indicators such as the number of faculty divided by the number of students. After careful selection, we chose the following five indicators for the Teaching area:
1. Academic staff per students (8%)
This indicator describes how many teachers there are per student at a university. It is a common indicator in university rankings, for objective reasons: as already mentioned, measuring the quality of education is more difficult than measuring research, because the educational process is less quantifiable. We proceed from the assumption that the more teachers there are per student, the higher the quality of education; in other words, a high faculty/student ratio generally means that a teacher can spend more time with each student.
Both components, "Academic staff" and "Students", are measured as Full Time Equivalent (FTE) values, not headcounts. Thus, the data in RUR University Profiles may not match the headcount figures found in other information resources.
2. Academic staff per bachelor degrees awarded (8%)
This indicator specifies the number of academic staff per undergraduate degree awarded in a given year. It is, in fact, a special case of "Academic staff per students". Since the undergraduate level is assumed to be the core level of higher education, it was decided to count undergraduate programs separately, in a dedicated indicator.
3. Doctoral degrees awarded per academic staff (8%)
This indicator shows the number of PhD-level degrees (or their equivalent) per academic staff member. The more doctorates an institution graduates, the higher the expected level of top-tier education at that institution. Of course, a high ratio of doctoral graduates to faculty is not a 100% guarantee of educational quality. Nevertheless, it indicates a trend: the more PhD degrees awarded, the higher the quality of teaching is likely to be.
4. Doctoral degrees awarded per bachelor degrees awarded (8%)
This indicator shows the ratio between the output of PhD-level and undergraduate students. The resulting value reflects the extent to which the university is focused on serious, fundamental training. A high proportion of doctoral students demonstrates that the university has sufficient high-level faculty, resources and time to prepare doctoral students, who in turn act as mentors for undergraduate students.
5. World teaching reputation (8%)
Teaching reputation identifies how well the institution is known in the global academic community. The raw data for this indicator is obtained from the Thomson Reuters Academic Reputation Survey. Participation in the survey is strictly by invitation; in other words, universities are not allowed to submit lists of recommended contacts to take part in it. Respondents are asked to nominate up to 15 universities they consider best in terms of teaching quality. Around 10,000 academics from around the world take part annually, making it one of the most advanced surveys in higher education assessment.
II. Research (40%)
Research is the second key component of university activity in the RUR Rankings system, carrying 40% of the total weight, the same as teaching. All of the following indicators relate, in one way or another, to research productivity, measured on the basis of publication data indexed in the Web of Science Core Collection and the InCites platform built on it.
6. Citations per academic and research staff (8%)
For this indicator, citations accumulated over a two-year period are counted for publications from a single year. For example, for the 2014 ranking, publications from 2012 were taken into account, with a 2012–2013 citation window. Only the "Article" and "Review" publication types are considered.
A short citation window makes the overall ranking and the research indicators more responsive: universities can demonstrate the dynamics of their research efficiency faster.
The number of citations is divided by the combined number of academic and research staff. "Research staff" refers only to the part of the university staff that does research and does not teach. It would be more accurate to count citations for researchers and teachers separately, but in practice attributing citations separately to academic staff and research staff is almost impossible, which is why we count them as one entity. Finally, note that the citation count in this indicator is an absolute value; it is not normalized by subject area, as it is in the Normalized citation impact indicator.
7. Doctoral degrees awarded per admitted PhD (8%)
This indicator is the ratio of the number of doctoral-level degrees awarded to the number of PhD (or equivalent) students admitted in the same year. It thus identifies the percentage of successfully defended theses at a given university. The higher the percentage, the more likely it is that work with PhD-level students at the institution is well organized, and that an adequate system of incentives and motivation is in place for both supervisors and PhD students. In contrast, a low percentage of thesis defenses indicates relatively lower student motivation and poorer conditions for scientific work.
8. Normalized citation impact (8%)
Normalized Citation Impact (NCI) compares the average citation rate of a given institution with the world average citation rate for the same year, subject area and publication type.
The number of citations for NCI is counted over a six-year period. For example, NCI for the 2014 ranking was calculated from citations across 2008–2013 to publications from 2008–2012. This wider window, compared with the Citations per academic and research staff indicator, makes NCI a long-term indicator that shows an institution's research efficiency over a relatively wide time interval.
Here are several examples to illustrate the functioning of the NCI indicator:
1) Suppose the ratio of citations to publications in the Optics subject area at University X is 1.52, counted for the "article" publication type. What does this number mean?
2) We can answer by comparing University X's citations/publications ratio with the world average: the average world citation ratio in Optics, over the same period and for the same publication types, must be compared with the actual University X ratio.
3) For Optics, the world citations/publications ratio (for "article"-type publications) over the 2008–2013 period is 1.32. Thus, the NCI of University X in Optics is 1.52 / 1.32 ≈ 1.15; in other words, University X performs in this field at 115% of the world average.
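The worked example above can be reproduced in a few lines of Python (a hedged sketch; the function name is illustrative, not part of any RUR or InCites API):

```python
def normalized_citation_impact(institution_ratio, world_ratio):
    """NCI: an institution's citations-per-paper ratio divided by the
    world average for the same period, subject area and publication type."""
    return institution_ratio / world_ratio

# University X in Optics: 1.52 citations per "article";
# world average in Optics over the same window: 1.32.
nci = normalized_citation_impact(1.52, 1.32)
print(round(nci, 2))  # 1.15, i.e. 115% of the world average
```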
9. Papers per academic and research staff (8%)
This indicator reflects the organization's scientific productivity: the ratio of the number of publications to the number of teachers and researchers. Unlike indicators #6 and #8, it measures the total volume of research output rather than its impact.
Only publications from the year two years before the ranking's publication year are considered; for example, 2012 publications are counted for the 2014 ranking. Only the "article" and "review" publication types are taken into account.
10. World research reputation (8%)
This indicator shows the extent to which a university's research influences the academic community around the world. Answers about research quality are collected within the same reputation survey as teaching quality. Respondents are asked to indicate up to 15 higher education institutions they consider most significant in research.
III. International diversity (10%)
The internationalization of a university shows its involvement in the international educational and scientific arena and its attractiveness to students and teachers from all over the world. This group is given 10%, a small weight compared with the Teaching and Research areas, because universities have limited ability to influence their level of internationalization: it depends mostly on the country in which the institution is located.
For example, universities in Switzerland are, on average, more international than those in Germany, and German universities are more international than Russian ones. Likewise, the international diversity of universities in Japan or South Korea will always be more moderate than in Western Europe or North America. This is due to the cultural peculiarities of each society, not to the quality of teaching and research at the university.
This is why the International Diversity group as a whole has a weight of 10%, compared with the 8% carried by any single Teaching or Research indicator.
11. Share of international academic staff (2%)
This indicator shows the share of foreign staff in the total number of teachers (both citizens and non-citizens of the country). Note that "international" in the context of this group's indicators means a person holding citizenship of a country other than the one where he or she works.
A high level of internationalization characterizes the conditions under which the teaching staff works. The number of teachers is counted as Full Time Equivalent and may differ significantly from the university's total headcount.
12. Share of international students (2%)
This indicator shows the attractiveness and competitiveness of the university in the global arena: the higher the share of foreign students, the more attractive the university is to them. By "student" we mean any person enrolled in an academic degree program and studying toward a diploma; students in short non-degree programs are therefore excluded from the calculation of this indicator. The value is determined by summing the Full Time Equivalent (FTE) numbers of students at the undergraduate and graduate levels.
13. Share of international co-authored papers (2%)
The indicator shows the share of publications with at least one foreign co-author in the organization's total publications. "Foreign co-author" is defined geographically: an author located in a country different from the one where the university is situated. The two authors may nonetheless be compatriots.
The indicator shows the breadth of the university's external relations and the degree of its involvement in the global scientific community. However, it depends strongly on the region where the country or institution is located: in the Benelux Union, for example, the internationalization ratio of publications will always be higher than in any Eastern European country.
14. International teaching reputation (2%)
This indicator reflects the university's teaching reputation outside the macro-region to which it belongs. The macro-regions include:
• North America
• South America
This indicator is a special case of the World teaching reputation indicator, which reflects the university's reputation across the whole world, including its own region. The international variant characterizes the university's influence in the world more accurately, since a significant share of global reputation may come from high regard within the university's own region.
15. Share of international bachelor degrees awarded (2%)
The indicator determines the percentage of international students among those admitted to first-year undergraduate programs, out of the total number of newly admitted students. There are two reasons for treating this metric as a separate indicator rather than folding it into the Share of international students indicator. First, the undergraduate degree is the basis of higher education, its first and most widespread level; as a result, undergraduate students prevail in the student contingent of most Russian universities rated in RUR.
IV. Financial Sustainability (10%)
The financial capacity of a university says a lot about the level of teachers' and researchers' salaries, the scholarship packages provided to students, the quality of equipment and campus, etc.
However, a university's financial capacity depends almost entirely on the economy of the country where it is located: funding for universities in Switzerland will always, to one degree or another, exceed funding in South America, for example. Therefore, despite the objective importance of this group, we have allocated it 10% of the overall evaluation, the same as internationalization.
16. Institutional income per academic staff (2%)
This indicator determines the overall university budget per teacher. It is a rather rough measure, since the total budget typically includes investments in major projects such as the construction or renovation of buildings and purchases of expensive equipment. Thus, the average per-person figure derived from a nominally large budget may be comparatively low.
On the other hand, if at University X the amount per academic staff member is $500,000 while at University Y it is only $20,000, it is obvious that the potential for qualitative growth at the former is clearly higher than at the latter.
In sum, this indicator, like the other indicators in this group, reflects the resources and capabilities of the university.
17. Institutional income per students (2%)
This indicator measures the gross budget of the university divided by the number of students. At first glance, calculating the budget per teacher and per student should give the same picture; in fact, the difference can be very significant. For example, a university may have a large number of students but relatively few faculty: the budget per student will then be relatively low, while the budget per teacher will be rather high.
18. Papers per research income (2%)
The indicator shows the number of papers published per one million USD of research income at a given university. The higher the ratio, the more efficient and economical the university's research. Of course, such an indicator cannot measure the quality of research; that is the prerogative of citation-based scientometric indicators. Here we are talking only about the cost efficiency of research.
19. Research income per academic and research staff (2%)
This indicator is calculated by dividing the research budget by the number of teachers and researchers, reflecting the level of investment in research. Funding from both private and public sources is taken into account.
20. Research income per institutional income (2%)
The indicator shows the ratio of the research budget to the organization's gross budget. The higher the ratio, the more research-oriented the university is.
The initial RUR data are diverse and qualitatively disparate. For example, which number is bigger: 0.32 teachers per student, or $97,132 per teacher? To make such values comparable, the dataset for each of the 20 RUR indicators is brought to a 100-point scale.
To do this, the universities' original values are ranked in descending order, and each value is assigned a score equal to its percentile: a score of X indicates that X percent of the objects in the ranking lie below that object. The maximum is 100 points, showing that 100% of the remaining objects in the sample lie below the object scored 100.
Here is an example. Suppose a sample of 1,000 higher education institutions. The 1st university, with the maximum value, gets 100 points, the 2nd gets 99.9, the third 99.8, and so on. Since no RUR ranking uses a sample of exactly this size, the percentile "step" is calculated separately in each case.
Thus, before awarding percentile points to universities, a so-called percentile "matrix" is created: a fixed step is subtracted from the score of every subsequent object in the sample. The percentile matrix therefore depends strongly on the number of objects.
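The percentile scoring described above can be sketched as follows, assuming a uniform step of 100/n between adjacent ranks (the exact step rule used by RUR is not specified here, so treat this as an illustration):

```python
def percentile_scores(values):
    """Rank raw indicator values in descending order and map each rank
    to a percentile score: the top value gets 100, and each subsequent
    rank loses one step of 100/n points (assumed uniform step)."""
    n = len(values)
    step = 100.0 / n
    order = sorted(range(n), key=lambda i: values[i], reverse=True)
    scores = [0.0] * n
    for rank, idx in enumerate(order):  # rank 0 is the maximum value
        scores[idx] = 100.0 - rank * step
    return scores

# With 1,000 objects the step is 0.1: the top value scores 100,
# the second 99.9, the third 99.8, and so on.
```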
The percentile system achieves the following:
• First, it neutralizes the impact of abnormally high values. Within each dataset there can be a group of values standing far above the rest, particularly for reputation indicators. A proportional calculation, in which all values are scaled relative to the maximum in the population (100 points), would lead to a dramatic reduction of scores right after the top ten. University grades would lose their meaning beyond the first hundred, or even earlier, as clusters of dozens of universities with identical scores would form automatically, leaving nothing to do but group universities by 10, 25, 50, and so on.
• Secondly, the system indicates the position of a university in relation to the other objects in the general population.
• Thirdly, the university’s score on a particular indicator depends on the total number of universities in the general population.
Particularities of the percentile calculation:
• If several universities have equal values on indicator X, they are assigned the arithmetic mean of the corresponding percentile-matrix values. Suppose 37 universities score 0.01 on indicator X. Then, based on the pre-calculated percentile matrix, the arithmetic mean is computed for these 37 positions and each of the 37 objects receives that value. This is why, on closer examination, one can see dozens of institutions with the same score; it is most typical for the reputation indicators (where many universities receive 0.01% or 0.02% of respondents' votes) and the International Diversity indicators.
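The tie rule can be sketched as follows, again assuming a uniform 100/n percentile step; `scores_with_ties` is a hypothetical helper for illustration, not RUR production code:

```python
def scores_with_ties(values):
    """Percentile scores where tied raw values receive the arithmetic
    mean of the percentile-matrix values their positions would get."""
    n = len(values)
    step = 100.0 / n                    # assumed uniform step
    order = sorted(range(n), key=lambda i: values[i], reverse=True)
    scores = [0.0] * n
    rank = 0
    while rank < n:
        # collect the run of objects sharing the same raw value
        run = [order[rank]]
        while rank + len(run) < n and \
                values[order[rank + len(run)]] == values[order[rank]]:
            run.append(order[rank + len(run)])
        # mean of the scores those positions would have received
        mean = sum(100.0 - (rank + k) * step for k in range(len(run))) / len(run)
        for idx in run:
            scores[idx] = mean
        rank += len(run)
    return scores

# e.g. two universities tied on 0.01 share the mean of their two slots
```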
At the next stage, once each institution has been assigned a score, the following procedures are performed:
• The scores are multiplied by their coefficients (0.08 in the Teaching and Research groups, 0.02 in International Diversity and Financial Sustainability, and 0.2 for every indicator in the Dimension Rankings);
• The results of the multiplications are added together;
• The objects are sorted in descending order, and the ranking is ready. However, since no institution can actually score 100 points (getting 100 points on each of the 20 indicators is practically impossible for a single university), the highest value is taken as 100 and the other values are rescaled pro rata.
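The steps above can be sketched in Python for the Overall Ranking weights (0.08 and 0.02); a hedged illustration of the aggregation, not the production RUR calculation:

```python
# Overall Ranking weights: indicators 1-10 (Teaching, Research) at 0.08,
# indicators 11-20 (Int. Diversity, Financial Sustainability) at 0.02.
WEIGHTS = [0.08] * 10 + [0.02] * 10

def overall_scores(indicator_scores):
    """indicator_scores: one list of 20 percentile scores (0-100) per
    university. Returns weighted totals rescaled so the leader gets 100."""
    totals = [sum(w * s for w, s in zip(WEIGHTS, scores))
              for scores in indicator_scores]
    top = max(totals)
    return [100.0 * t / top for t in totals]  # pro-rata rescaling
```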
RANKINGS BY DIMENSION AREAS
Apart from the main Overall Ranking, calculated from 20 indicators according to the methodology described above, the RUR includes a system of additional rankings that echo the main groups of indicators:
• Teaching Ranking
• Research Ranking
• International Diversity Ranking
• Financial Sustainability Ranking
Each ranking is calculated on the same dataset as the Overall Ranking; the difference lies in the number of indicators and their weights. Each dimension ranking is calculated from the 5 indicators of its area, each weighted at 20%. Schematically, the methodology of these rankings is presented in the table below:
|#|Indicator|Weight|
|---|---|---|
| |RUR Teaching|100%|
|1|Academic staff per students|20%|
|2|Academic staff per bachelor degrees awarded|20%|
|3|Doctoral degrees awarded per academic staff|20%|
|4|Doctoral degrees awarded per bachelor degrees awarded|20%|
|5|World teaching reputation|20%|
| |RUR Research|100%|
|6|Citations per academic and research staff|20%|
|7|Doctoral degrees awarded per admitted PhD|20%|
|8|Normalized citation impact|20%|
|9|Papers per academic and research staff|20%|
|10|World research reputation|20%|
| |RUR International Diversity|100%|
|11|Share of international academic staff|20%|
|12|Share of international students|20%|
|13|Share of international co-authored papers|20%|
|14|International teaching reputation|20%|
|15|Share of international bachelor degrees awarded|20%|
| |RUR Financial Sustainability|100%|
|16|Institutional income per academic staff|20%|
|17|Institutional income per students|20%|
|18|Papers per research income|20%|
|19|Research income per academic and research staff|20%|
|20|Research income per institutional income|20%|
Table II. RUR rankings by dimension areas