Monday, March 05, 2007

Top Universities Ranked by Research Impact

The THES-QS World University Rankings, and their bulky offspring, the Guide to the World's Top Universities (London: QS Quacquarelli Symonds), are strange documents, full of obvious errors and repeated contradictions. Thus, the Guide presents student-faculty ratios that are completely different from those used in the top 200 rankings published in the THES, even while declaring how robust such a measure is. We also notice that, for each of the top 100 universities, the Guide provides a figure for research impact, that is, the number of citations divided by the number of papers. In other words, it indicates how interesting other researchers found the research of each institution. These figures completely undermine the credibility of the "peer review" as a measure of research expertise.

The table below is a re-ranking of the THES top 100 universities for 2006 by research impact and therefore by overall quality of research. This is by no means a perfect measure. For a start, the natural sciences and medicine do a lot more citing than other disciplines, and this might favor some universities more than others. Nonetheless it is very suggestive, and it is so radically different from the THES-QS peer review and the overall ranking that it provides further evidence of the invalidity of the latter.

Cambridge and Oxford, ranked second and third by THES-QS, only manage to achieve thirtieth and twenty-first places for research impact.

Notice that, in comparison with their research impact scores, the following universities are overrated by THES-QS: Imperial College London, Ecole Normale Superieure, Ecole Polytechnique, Peking, Tsing Hua, Tokyo, Kyoto, Hong Kong, Chinese University of Hong Kong, National University of Singapore, Nanyang Technological University, Australian National University, Melbourne, Sydney, Monash, Indian Institutes of Technology and Indian Institutes of Management.

The following are underrated by THES-QS: Washington University in St Louis, Pennsylvania State University, University of Washington, Vanderbilt, Case Western Reserve, Boston, Pittsburgh, Wisconsin, Lausanne, Erasmus, Basel, Utrecht, Munich, Wageningen and Birmingham.

The number on the left is the ranking by research impact, that is, the number of citations divided by the number of papers. The number after each university is its research impact, and the number in brackets is its overall position in the THES-QS 2006 rankings.

1 Harvard 41.3 (1 )
2 Washington St Louis 35.5 (48 )
3 Yale 34.7 (4 )
4 Stanford 34.6 (6 )
5 Caltech 34 (7 )
6 Johns Hopkins 33.8 (23 )
7 UC San Diego 33 (44)
8 MIT 32.8 (4)
9= Pennsylvania State University 30.8 (99)
9= Princeton 30.8 (10)
11 Chicago 30.7 (11)
12= Emory 30.3 (56)
12= Washington 30.3 (84)
14 Duke 29.9 (13 )
15 Columbia 29.7 (12 )
16 Vanderbilt 29.4 (53)
17 Lausanne 29.2 (89 )
18 University of Pennsylvania 29 (26)
19 Erasmus 28.3 (92)
20 UC Berkeley 28 (8)
21= UC Los Angeles 27.5 (31)
21= Oxford 27.5 (3)
23 Case Western Reserve 27.4 (60)
24 Boston 27.2 (66)
25 Pittsburgh 27.1 (88 )
26 Basel 26.7 (75 )
27= New York University 26.4 (43)
27= Texas at Austin 26.4 (32 )
29 Geneva 26.2 (39 )
30= Northwestern 25.8 (42 )
30= Cambridge 25.8 (2)
32 Dartmouth College 25.6 (61)
33 Cornell 25.5 (15 )
34 Rochester 25.1 (48 )
35 Michigan 25 (29)
36 University College London 24.9 (25 )
37 Brown 24.1 (54)
38 McGill 23.6 (21)
39 Edinburgh 23.4 (33 )
40 Toronto 23 (27 )
41 Amsterdam 21.6 (69 )
42 Wisconsin 21.5 (79 )
43= Utrecht 21.4 (95)
43= Ecole Normale Superieure Lyon 21.4 (72)
45 ETH Zurich 21.2 (24 )
46 Heidelberg 20.8 (58 )
47 British Columbia 20.6 (50 )
48 Carnegie Mellon 20.5 (35 )
49= Imperial College London 20.4 (9)
49= Ecole Normale Superieure Paris 20.4 (18 )
51 King’s College London 20.1 (48 )
52 Bristol 20 (64)
53= Trinity College Dublin 19.9 (78 )
53= Copenhagen 19.9 (54 )
53= Glasgow 19.9 (81 )
56 Munich 19.8 (98)
57 Technical University Munich 19.4 (82 )
58= Birmingham 19.1 (90)
58= Catholic University of Louvain 19.1 (76 )
60 Tokyo 18.7 (19)
61 Illinois 18.6 (77 )
62 Osaka 18.4 (70)
63 Wageningen 18.1 (97 )
64 Kyoto 18 (29 )
65 Australian National University 17.9 (16 )
66 Vienna 17.9 (87)
67 Manchester 17.3 (40 )
68 Catholic University of Leuven 17 (96)
69= Melbourne 16.8 (22 )
69= New South Wales 16.8 (41 )
71 Nottingham 16.6 (85 )
72 Sydney 15.9 (35)
73= Pierre-et-Marie-Curie 15.7 (93 )
73= Monash 15.7 (38)
75 Otago 15.5 (79 )
76 Queensland 15.3 (45)
77 Auckland 14.8 (46 )
78= EPF Lausanne 14.3 (64 )
78= Macquarie 14.3 (82)
78= Leiden 14.3 (90 )
81 Eindhoven University of Technology 13.4 (67)
82= Warwick 13.3 (73 )
82= Delft University of Technology 13.3 (86)
84 Ecole Polytechnique 13.2 (37 )
85 Hong Kong 12.6 (33 )
86 Hong Kong Uni Science and Technology 12.2 (58)
87 Chinese University of Hong Kong 11.9 (50 )
88 Seoul National University 10.9 (63)
89 National University of Singapore 10.4 (19 )
90 National Autonomous University of Mexico 9.8 (74)
91 Peking 8 (14)
92 Lomonosov Moscow State 6 (93 )
93 Nanyang Technological University 5.6 (61)
94 Tsing Hua 5.4 (28 )
95 LSE 4.4 (17 )
96 Indian Institutes of Technology 3 (57 )
97 SOAS 2.5 (70 )
98 Indian Institutes of Management 1.9 (68)
Queen Mary London -- (99 )
Sciences Po -- (52)
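
For anyone who wants to reproduce the re-ranking, it is a straightforward sort on citations per paper. Below is a minimal sketch in Python, using just a handful of entries from the table above (the full table would be keyed in the same way, and the ranks printed apply only to the subset shown):

```python
# Re-rank universities by research impact (citations per paper).
# Each tuple is (university, citations per paper, THES-QS 2006 overall rank);
# only a few illustrative entries from the table above are included here.
data = [
    ("Harvard", 41.3, 1),
    ("Washington St Louis", 35.5, 48),
    ("Yale", 34.7, 4),
    ("Oxford", 27.5, 3),
    ("Cambridge", 25.8, 2),
]

# Sort by impact, highest first, and show the new rank beside the THES-QS rank.
for rank, (name, impact, thes_rank) in enumerate(
        sorted(data, key=lambda row: row[1], reverse=True), start=1):
    print(f"{rank:>2}  {name:<22} {impact:>5}  (THES-QS: {thes_rank})")
```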

Thursday, March 01, 2007

THES-QS Bias Chart

LevDau has been kind enough to reproduce my post "More Problems with Method" and to add a couple of very interesting graphs. What he has done is to calculate a bias ratio: the number of THES-QS reviewers reported on the topuniversities site divided by the number of highly cited researchers listed by Thomson ISI. The higher the number, the more the THES-QS review is biased towards that country; the lower the number, the more it is biased against it. Some countries do not appear because they had nobody at all on the highly cited list.

If we chose a less rigorous definition of research expertise, such as the number of papers published rather than the number of highly cited researchers, then the bias might be somewhat reduced. It would certainly not, however, be removed. In any case, if we are talking about the gold standard of ranking, then the best researchers would surely be the most qualified to judge the merits of their peers.

Bias in the THES-QS peer review (Selected Countries)

Iran 25
India 23.27
Singapore 23
Pakistan 23
China 19
Mexico 17
South Korea 9
Taiwan 3.22
Australia 1.82
Hong Kong 1.79
Finland 1.53
New Zealand 1.47
France 1
UK 0.86
Israel 0.77
Germany 0.43
Japan 0.22
USA 0.14
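
The ratios in this chart can be reproduced from the respondent counts and the Thomson ISI highly cited counts given in the post "More Problems with Method" below. A minimal sketch, using a few of those figures (countries with no highly cited researchers are skipped, since the ratio is undefined):

```python
# Bias ratio = number of THES-QS survey respondents located in a country
# divided by that country's number of ISI highly cited researchers.
respondents  = {"India": 256, "Australia": 191, "UK": 378, "Japan": 53, "USA": 532}
highly_cited = {"India": 11,  "Australia": 105, "UK": 439, "Japan": 246, "USA": 3825}

for country, n_resp in respondents.items():
    n_cited = highly_cited.get(country, 0)
    if n_cited == 0:
        continue  # ratio undefined where there are no highly cited researchers
    print(f"{country}: {n_resp / n_cited:.2f}")
# India comes out at 23.27 and the USA at 0.14, matching the chart above.
```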

Monday, February 26, 2007

Is it Really a Matter of Global Opinion?

The THES-QS rankings of 2005 and 2006 are heavily weighted towards the so-called peer review, which receives 40% of the total ranking score. No other section gets more than 20%. The "peer review" is supposed to be a survey of research-active academics from around the world. One would therefore expect it to be based on a representative sample of the international research community, or "global opinion", as THES claimed in 2005. It is, however, nothing of the sort.

The review was based on e-mails sent to people included in a database purchased from World Scientific Publishing Company. This is a publishing company that was founded in 1981. It now has 200 employees at its main office in Singapore. There are also subsidiary offices in New Jersey, London, Hong Kong, Taipei, Chennai, Beijing and Singapore. It claims to be the leading publisher of scientific journals and books in the Asia-Pacific region.


World Scientific has several subsidiaries. These include Imperial College (London) Press, which publishes books and journals on engineering, medicine, information technology, environmental technology and management; Pan Stanford Publishing of Singapore, which publishes in such fields as nanoelectronics, spintronics, biomedical engineering and genetics; and KH Biotech Services Singapore, which specialises in biotechnology, pharmaceuticals, food and agriculture, along with consultancy, training and conference organisation services. It also distributes books and journals produced for The National Academies Press (based in Washington, D.C.) in most countries in Asia (but not in Japan).

World Scientific has particularly close links with China, especially with Peking University. Its newsletter of November 2005 reports that:

"The last few years has seen the rapid growth of China's economy and academic sectors. Over the years, World Scientific has been actively establishing close links and putting down roots in rapidly growing China."

Another report describes a visit from Chinese university publishers:

”In August 2005, World Scientific Chairman Professor K. K. Phua, was proud to receive a delegation from the China University Press Association. Headed by President of Tsinghua University Press Professor Li Jiaqiang, the delegation comprised presidents from 18 top Chinese university publishing arms. The parties exchanged opinions on current trends and developments in the scientific publishing industry in China as well as Singapore. Professor Phua shared many of his experiences and expressed his interest in furthering collaboration with Chinese university presses. “

World Scientific has also established very close links with Peking University:

”World Scientific and Peking University's School of Mathematical Sciences have, for many years, enjoyed a close relationship in teaching, research and academic publishing. To further improve the close cooperation, a "World Scientific - Peking University Work Room" has been set up in the university to serve the academic communities around the world, and to provide a publishing platform to enhance global academic exchange and cooperation. World Scientific has also set up a biannual "World Scientific Scholarship" in the Peking School of Mathematical Sciences. The scholarship, totaling RMB 30,000 per annum and administered by the university, aims to reward and encourage students and academics with outstanding research contributions.”

Here are some of the titles published by the company:

China Particuology
Chinese Journal of Polymer Science
Asia Pacific Journal of Operational Research
Singapore Economic Review
China: An International Journal
Review of Pacific Basin Financial Markets and Policies
Asian Case Research Journal

It should be clear by now that World Scientific is active mainly in the Asia-Pacific region, with an outpost in London. It seems more than likely that its database, which might be the list of subscribers or its mailing list, would be heavily biased towards the Asia-Pacific region. This goes a long way towards explaining why Chinese, Southeast Asian and Australasian universities do so dramatically better on the peer review than they do on the citations count or any other measure of quality.

I find it inconceivable that QS were unaware of the nature of World Scientific when they purchased the database and sent out the e-mails. To claim that the peer review is in any sense an international survey is absurd. QS have produced what may one day become a classic example of how bad sampling technique can destroy the validity of a survey.

Monday, February 19, 2007

Inflatable Australian Universities

Professor Simon Marginson of the University of Melbourne has made some very appropriate comments to The Age about the THES-QS rankings and Australian universities.

Professor Marginson told The Age that a lack of transparency in the rankings method means that universities could be damaged through no fault of their own.

'"Up to now, we in Australian universities have done better out of the Times rankings than our performance on other indicators would suggest," he said. "But it could all turn around and start working against us, too."
The Times rankings are volatile because surveys of employers and academics are open to manipulation, subjectivity and reward marketing over research, Professor Marginson said.'

The admitted and extraordinarily low response rate to the THES-QS "peer review", combined with the overrepresentation of Australian "research-active academics" among the respondents, is sufficient to confirm Professor Marginson's remarks about the rankings.

Friday, February 16, 2007

More Problems with Method

Another problem with the peer review section of the THES-QS World University Rankings is that it is extremely biased against certain countries and biased in favour of certain others. Here is an incomplete list of countries where respondents to the peer review survey are located and the number of respondents.

USA 532
UK 378
India 256
Australia 191
Canada 153
Malaysia 112
Germany 103
Indonesia 93
Singapore 92
China 76
France 56
Japan 53
Mexico 51
Thailand 37
Israel 36
Iran 31
Taiwan 29
South Korea 27
Hong Kong 25
New Zealand 25
Pakistan 23
Finland 23
Nigeria 20


How far does the above list reflect the distribution of research expertise throughout the world? Here is a list of the same countries with the number of academics listed in Thomson ISI Highly Cited Researchers.


USA 3,825
UK 439
India 11
Australia 105
Canada 172
Malaysia 0
Germany 241
Indonesia 0
Singapore 4
China (excluding Hong Kong) 4
France 56
Japan 246
Mexico 3
Thailand 0
Israel 47
Iran 1
Taiwan 9
South Korea 3
Hong Kong 14
New Zealand 17
Pakistan 1
Finland 15
Nigeria 0


The number of highly cited scholars is not a perfect measure of research activity -- for one thing, some disciplines cite more than others -- but it does give us a broad picture of the research expertise of different countries.

The peer review is outrageously biased against the United States, extremely biased against Japan, and very biased against Canada, Israel and European countries such as France, Germany, Switzerland and the Netherlands.


On the other hand, there is a strong bias towards China (less so Taiwan and Hong Kong), India, Southeast Asia and Australia.

Now we know why Cambridge does so much better in the peer review than Harvard despite an inferior research record, why Peking University is apparently among the best in the world, why there are so many Australian universities in the top 200, and why the world's academics supposedly cite Japanese researchers copiously but cannot bring themselves to vote for them in the peer review.

Thursday, February 15, 2007

Something Needs Explaining

QS Quacquarelli Symonds have published additional information on their web site concerning the selection of the initial list of universities and the administration of the "peer review". I would like to focus on just one issue for the moment, namely the response rate to the e-mail survey. Ben Sowter of QS had already claimed to have surveyed more than 190,000 academics to produce the review. He had said:

"Peer Review: Over 190,000 academics were emailed a request to complete our online survey this year. Over 1600 responded - contributing to our response universe of 3,703 unique responses in the last three years. Previous respondents are given the opportunity to update their response." (THES-QS World University Rankings _ Methodology)

This is a response rate of about 0.8%, less than one per cent. I had assumed that the figure of 190,000 was a typographical error and that it should have been 1,900. A response rate of over 80% would have been on the high side, but perhaps respondents were highly motivated by being included in the ranks of "smart people" or by the chance of winning a BlackBerry organiser.

However, the new information provided appears to suggest that QS did indeed survey such a large number.

"So, each year, phase one of the peer review exercise is to invite all previous reviewers to return and update their opinion. Then we purchase two databases, one of 180,000 international academics from the World Scientific (based in Singapore) and another of around 12,000 from Mardev - focused mainly on Arts & humanities which is poorly represented in the former.
We examine the responses carefully and discard any test responses and bad responses and look for any non-academic responses that may have crept in. "
(Methodology -- The Peer Review)

There is a gap between "we purchase" and "we examine the responses", but the implication is that about 192,000 academics were sent e-mails.

If this is the case then we have an extraordinarily low response rate, probably a record in the history of survey research. Kim Sheehan, in an article in the Journal of Computer-Mediated Communication, reports that 31 studies of e-mail surveys show a mean response rate of about 37%. Response rates have been declining in recent years, but even in 2004 the mean response rate was about 24%.

Either QS did not send out so many e-mails, or there was something wrong with the database, or something else went wrong. Whatever it is, such a low response rate is in itself enough to render a survey invalid. An explanation is needed.

Wednesday, February 14, 2007

Congratulations to the Technical University Munich

The Technical University of Munich has pulled off a major feat. It has been awarded not one but two places among the world's top 100 universities in the THES-QS book, Guide to the World's Top Universities. The Guide has also managed to move a major university several hundred miles.


In 2006 the THES-QS world university rankings placed the Technical University of Munich in 82nd place and the University of Munich at 98th.

The new THES-QS Guide has profiles of the top 100 universities. On page 283 and in 82nd place we find the Technical University Munich. Its address is given as "Germany". How very helpful. The description is clearly that of the Technical University and so is the data in the factfile.


On page 313 the Technical University Munich appears again, now in 98th place. The description is identical to that on page 283, but the information in the factfile is different and appears to refer to the (Ludwig-Maximilians) University of Munich. The university is given an address in Dortmund, in a completely different state, and the web site appears to be that of the University of Munich.

Turning to the directory, we find that "Universitat Munchen" is listed, again with an address in Dortmund, and the Technische Universitat Munchen is on page 409, without an address. This time the data for the two universities appears to be correct.

Sunday, February 11, 2007

A Robust Measure

There is something very wrong with the THES-QS Guide to the World’s Top Universities, recently published in association with Blackwell’s of London. I am referring to the book’s presentation of two completely different sets of data for student faculty ratio.

In the Guide, it is claimed that this ratio “is a robust measure and is based on data gathered by QS from universities or from national bodies such as the UK’s Higher Education Statistics Agency, on a prescribed definition of staff and students” (p 75).


Chapter 9 of the book consists of the ranking of the world's top 200 universities originally published in the THES in October 2006. The rankings consist of an overall score for each university and scores for various components, one of which is the number of students per faculty. This component accounted for 20% of the total ranking. Chapter 11 consists of profiles of the top 100 universities which, among other things, include data on the student-faculty ratio. Chapter 12 is a directory of over 500 universities which in most cases also gives the student-faculty ratio.

Table 1 below shows the top ten universities in the world according to the faculty-student score in the university rankings, which is indicated in the middle column. It is possible to reconstruct the process by which the scores in the THES rankings were calculated by referring to QS's topuniversities site, which provides information, including numbers of students and faculty, about each university in the top 200, as well as more than 300 others.

There can be no doubt that the data on the web site is that from which the faculty-student score was calculated. Thus Duke has, according to QS, 11,106 students and 3,192 faculty, or a ratio of 3.48 students per faculty, which was converted to a score of 100. Harvard has 24,648 students and 3,997 faculty, a ratio of 6.17, which was converted to a score of 56. MIT has 10,320 students and 1,253 faculty, a ratio of 8.24, converted to a score of 42, and so on. There seems, incidentally, to have been an error in calculating the score for Princeton. The right-hand column in Table 1 shows the ratio of students per faculty, based on the data provided in the rankings, for the ten universities with the best score on this component.
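
The published scores appear to follow a simple rule: the university with the lowest ratio (Duke) gets 100, and every other university gets 100 multiplied by Duke's ratio divided by its own. A minimal sketch, assuming that scaling rule (it is inferred from the published numbers, not documented by QS):

```python
# Reconstruct the faculty-student scores from the student and faculty counts
# reported by QS. The scaling rule is inferred from the published figures:
# the lowest ratio is set to 100, and every other score is 100 * best / ratio.
counts = {            # (students, faculty) as reported on the QS site
    "Duke":    (11106, 3192),
    "Harvard": (24648, 3997),
    "MIT":     (10320, 1253),
}

ratios = {name: students / faculty for name, (students, faculty) in counts.items()}
best = min(ratios.values())          # Duke's 3.48 students per faculty

for name, ratio in ratios.items():
    print(f"{name}: ratio {ratio:.2f}, score {100 * best / ratio:.0f}")
# Prints ratios of roughly 3.48, 6.17 and 8.24 and scores of 100, 56 and 42,
# matching the figures quoted above.
```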

Table 1

1. Duke .......... 100 .......... 3.48
2. Yale .......... 93 .......... 3.74
3. Eindhoven University of Technology .......... 92 .......... 3.78
4. Rochester .......... 91 .......... 3.82
5. Imperial College London .......... 88 .......... 4.94
6. Sciences Po Paris .......... 86 .......... 4.05
7= Tsing Hua, PRC .......... 84 .......... 4.14
7= Emory .......... 84 .......... 4.14
9= Geneva .......... 81 .......... 4.30
9= Wake Forest .......... 81 .......... 4.30

Table 2 shows the eleven best universities ranked for students per faculty according to the profile and directory in the Guide. It may need to be revised after another search. You will notice immediately that there is no overlap at all between the two lists. The student faculty ratio in the profile and directory is indicated in the right hand column.

Table 2

1. Kyungpook National University, Korea .......... 0
2. University of California at Los Angeles (UCLA) .......... 0.6
3= Pontificia Universidade Catolica do Rio de Janeiro, Brazil .......... 3.8
3= Ecole Polytechnique Paris .......... 3.8
5. Ljubljana, Slovenia .......... 3.9
6= Kanazawa, Japan .......... 4.0
6= Oulu, Finland .......... 4.0
8= Edinburgh .......... 4.1
8= Trento, Italy .......... 4.1
10= Utrecht, Netherlands .......... 4.3
10= Fudan, PRC .......... 4.3

The figures for Kyungpook and UCLA are obviously simple data entry errors. The figure for the Ecole Polytechnique might not be grotesquely wrong if part-timers were included. But I remain very sceptical about such low ratios for universities in Brazil, China, Finland and Slovenia.

Someone who was looking for a university with a commitment to teaching would end up with dramatically different results depending on whether he or she checked the rankings or the profile and directory. A search of the first would produce Duke, Yale, Eindhoven and so on. A search of the second would produce (I'll assume even the most naïve student would not believe the ratios for Kyungpook and UCLA) Ecole Polytechnique, Ljubljana, Kanazawa and so on.

Table 3 below compares the figures for student faculty ratio derived from the rankings on the left with those given in the profile and directory sections of the Guide, on the right.

Table 3

Duke .......... 3.48 .......... 16.7
Yale .......... 3.74 .......... 34.3
Eindhoven University of Technology .......... 3.78 .......... 31.1
Rochester .......... 3.82 .......... 7.5
Imperial College London .......... 4.94 .......... 6.6
Sciences Po, Paris .......... 4.05 .......... 22.5
Tsing Hua .......... 4.14 .......... 9.3
Emory .......... 4.14 .......... 9.9
Geneva .......... 4.30 .......... 8.4
Wake Forest .......... 4.30 .......... 16.1
UCLA .......... 10.20 .......... 0.6
Ecole Polytechnique, Paris .......... 5.4 .......... 3.8
Edinburgh .......... 8.3 .......... 4.1
Utrecht .......... 13.9 .......... 4.3
Fudan .......... 19.3 .......... 4.3

There seems to be no relationship whatsoever between the ratios derived from the rankings and those given in the profiles and directory.

Logically, there are three possibilities. The ranking data is wrong. The directory data is wrong. Both are wrong. It is impossible for both to be correct.

In a little while, I shall try to figure out where QS got the data for both sets of statistics. I am beginning to wonder, though, whether they got them from anywhere.

To call the faculty student score a robust measure is ridiculous. As compiled and presented by THES and QS, it is as robust as a pile of dead jellyfish.

Friday, February 09, 2007

Guide to the World’s Top Universities

Guide to the World’s Top Universities: Exclusively Featuring the Official Times Higher Education Supplement QS World University Rankings. John O’Leary, Nunzio Quacquarelli and Martin Ince (QS Quacquarelli Symonds Limited/Blackwell Publishing 2006)

Here are some preliminary comments on the THES-QS guide. A full review will follow in a few days.

The Times Higher Education Supplement and QS Quacquarelli Symonds have now produced a book, published in association with Blackwell's. The book incorporates the 2006 world university rankings of 200 universities and the rankings by peer review of the top 100 universities in disciplinary areas. It also contains chapters on topics such as choosing a university, the benefits of studying abroad and tips for applying to university. There are profiles of the top 100 universities in the THES-QS rankings and a directory containing data about over 500 universities.

The book is attractively produced and contains a large amount of information. A superficial glance would suggest that it would be a very valuable resource for anybody thinking about applying to university or anybody comparing universities for any reason. Unfortunately, this would be a mistake.

There are far too many basic errors. Here is a list, almost certainly incomplete. Taken individually they may be trivial but collectively they create a strong impression of general sloppiness.

“University of Gadjah Mada” (p91). Gadjah Mada was a person not a place.

In the factfile for Harvard (p119) the section Research Impact by Subject repeats information given in the previous section on Overall Research Performance.

The factfile for Yale (p 127) reports a Student Faculty Ratio of 34.3, probably ten times too high.

The directory (p 483) provides data about something called the "Official University of California, Riverside". No doubt someone was cutting and pasting from the official university website.

Zurich, Geneva, St Gallen and Lausanne are listed as being in Sweden (p 462-3)

Kyungpook National University, Korea, has a Student faculty Ratio of 0:1. (p 452)

New Zealand is spelt New Zeland (p441)

There is a profile for the Indian Institutes of Technology [plural] (p 231) but the directory refers to only one in New Delhi (p 416).

Similarly, there is a profile for the Indian Institutes of Management [plural] (p 253) but the directory refers to one in Lucknow (p416)

On p 115 we find the “University of Melbourneersity”

On p 103 there is a reference to "SUNY" (State University of New York) that does not specify which of the four university centres of the SUNY system is referred to.

Malaysian universities are given the bahasa rojak (salad language) treatment and are referred to as University Putra Malaysia and University Sains Malaysia. (p437-438)

UCLA has a student faculty ratio of 0.6:1 (p483)

There will be further comments later.



Monday, February 05, 2007

The Rise of Seoul National University

One remarkable feature of the THES-QS world university rankings has been the rise of the Seoul National University (SNU) in the Republic of Korea from 118th place in 2004 to 93rd in 2005 and then to 63rd in 2006. This made SNU the eleventh best university in Asia in 2006 and placed it well above any other Korean university.

This was accomplished in part by a rise in the peer review score from 39 to 43. Also, SNU scored 13 on the recruiter rating compared with zero in 2005. However, the most important factor seems to have been an improvement in the faculty-student score from 14 in 2005 to 57 in 2006.

How did this happen? If we are to believe QS, it was because of a remarkable expansion in the number of SNU's faculty. In 2005, according to QS's topgraduate site, SNU had a total of 31,509 students and 3,312 faculty, or 9.51 students per faculty. In 2006, again according to QS, SNU had 30,120 students and 4,952 faculty, a ratio of 6.08. The numbers provided for students seem reasonable. SNU's site refers to 28,074 students, and it is not implausible that QS's figures included some categories, such as non-degree, part-time or off-campus students, that were not counted by SNU.

The number of faculty is, however, another matter. The SNU site refers to 28,074 students and 1,927 full time equivalent faculty members. There are also “1,947 staff members”. It is reasonable to assume that the latter are non-teaching staff such as technicians and librarians.

Further down the SNU site, things begin to get confusing. As of 1st April 2006, according to the site, there were 3,834 "teaching faculty" and 1,947 "educational staff". Presumably these are the same as the earlier 1,947 "staff members".

The mystery now is how 1,927 full time equivalent faculty grew to 3,834 teaching faculty. The latter figure would seem to be completely wrong if only because one would expect teaching faculty to be fewer than total faculty.

Since 1,927 full time equivalent faculty plus 1,947 staff members adds up to 3,874, a little more than 3,834, it could be that "faculty" and "staff" were combined to produce a total for "teaching faculty".

Another oddity is that SNU has announced on this site that it has a student-faculty ratio of 4.6. I am baffled as to how this particular statistic was arrived at.

QS should, I suppose, get some credit for not accepting this thoroughly implausible claim. Its ratio of 6.08 is, however, only slightly better and depends on accepting a figure of 4,952 faculty. Unless somebody has been fabricating data out of very thin air, the most plausible explanation I can think of is that QS constructed the faculty statistic from a source that did something like taking the already inflated number of teaching faculty and then adding the professors. Perhaps the numbers were obtained in the course of a telephone conversation over a bad line.

And the real ratio? On the SNU site there is a "visual statistics page" that refers to 1,733 "faculty members" in 2006. This seems plausible. Also, just have a look at what the MBA Dauphine-Sorbonne-Renault programme, which has partnerships with Asian and Latin American universities, says:

"Founded in 1946, Seoul National University (SNU) marked the opening of the first national university in modern Korean history. As an indisputable leader of higher education in Korea, SNU has maintained the high standard of education in liberal arts and sciences. With outstanding cultural and recreational benefits, SNU offers a wide variety of entertainment opportunities in the college town of Kwanak and in the city of Seoul.
SNU began with one graduate school and nine colleges, and today SNU has 16 colleges, 3 specialized graduate schools , 1 graduate school, 93 research institutes, and other supporting facilities, which are distributed over 2 campuses.
Currently, SNU has a student enrollment of approximately 30,600 degree candidates, including 27,600 undergraduates and 3,000 graduates. Also SNU has approximately 700 foreign students from 70 different countries. Maintaining the faculty of student ratio of 1:20, over 1,544 faculty and around 30 foreign professors are devoted to teaching SNU students to become leaders in every sector of Korean Society.
With the ideal of liberal education and progressive visions for the 21st century, SNU will continue to take a leading position as the most prestigious, research-oriented academic university in South Korea. " (my italics)

A student-faculty ratio of around 20 seems far more realistic than the 4.6 claimed by SNU or QS's 6.08. An explanation would seem to be in order from both SNU and QS.
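
For what it is worth, the competing sets of figures imply very different ratios. A quick sketch, using only the numbers quoted above (the Dauphine figures are the approximate ones given in the passage):

```python
# Student-faculty ratios implied by the various figures quoted above for SNU.
figures = {
    "QS 2005 (topgraduate site)": (31509, 3312),
    "QS 2006 (topuniversities)":  (30120, 4952),
    "SNU site (FTE faculty)":     (28074, 1927),
    "Dauphine programme note":    (30600, 1544),
}

for source, (students, faculty) in figures.items():
    print(f"{source}: {students / faculty:.1f} students per faculty")
# The QS figures give 9.5 and 6.1; SNU's own figures give about 14.6 and the
# Dauphine note about 19.8 -- nowhere near the claimed 4.6.
```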

Monday, January 29, 2007

More About Duke

On January 23rd I wrote to John O’Leary, Editor of the Times Higher Education Supplement concerning the data for Duke University in the 2006 world university rankings. I had already pointed out that the 2006 data appeared to be inaccurate and that, since Duke had the best score in the faculty–student section against which the others were benchmarked, all the scores in this section and therefore all the overall scores were inaccurate. There has to date been no reply. I am therefore publishing this account of how the data for Duke may have been constructed.

It has been clear for some time that the score given to Duke for this section of the rankings and the underlying data reported on the web sites of THES’s consultants, QS Quacquarelli Symonds, were incorrect and that Duke should not have been the highest scorer in 2006 on this section. Even the Duke administration has expressed its surprise at the published data. What has not been clear is how QS could have come up with data so implausible and so different from those provided by Duke itself. I believe I have worked out how QS probably constructed these data, which have placed Duke at the top of this part of the rankings so that it has become the benchmark for every other score.

In 2005 Duke University made an impressive ascent up the rankings from 52nd to 11th. This rise was due in large part to a remarkable score for faculty-student ratio. In that year Duke was reported by QS on their topgraduates site to have a total of 12,223 students, comprising 6,248 undergraduates and 5,975 postgraduates, and 6,244 faculty, producing a ratio of 1.96 students per faculty. The figure for faculty was clearly an error, since Duke itself claimed to have only 1,595 tenure and tenure-track faculty, and was almost certainly caused by someone entering the number of undergraduate students at Duke, 6,244 in the fall of 2005, into the space for faculty on the QS database. In any case, someone should have pointed out that large non-specialist institutions, no matter how lavishly they are funded, simply do not have fewer than two students per faculty.

In 2006 the number of faculty and students listed on QS’s topuniversities site was not so obviously incredible and erroneous but was still quite implausible.

According to QS, there were in 2006 11,106 students at Duke, of whom 6,301 were undergraduates and 4,805 postgraduates. It is unbelievable that a university could reduce the number of its postgraduate students by over a thousand, based on QS’s figures, or about two thousand, based on data on the Duke web site, in the course of a single year.

There were in 2006, according to QS, 3,192 faculty at Duke. This is not quite as incredible as the number claimed in 2005 but is still well in excess of the number reported on the Duke site.

So where did the figures which placed Duke at the top of the faculty-student ratio component in 2006 come from? The problem evidently faced by whoever compiled the data is that the Duke site has not updated its totals of students and faculty since the fall of 2005, but it has provided partial information about admissions and graduations, which was apparently used in an attempt to estimate enrollment for the fall of 2006.

If you look at the Duke site you will notice that there is some information about admissions and graduations. At the start of the 2005-2006 academic year (the "class of 2009"), 1,728 undergraduates were admitted, and between July 1st, 2005 and June 30th, 2006, 1,670 undergraduate degrees were conferred.

So, working from the information provided by Duke about undergraduate students, we have:

6,244 - 1,670 + 1,728 = 6,302

The QS site indicates 6,301 undergraduate students in 2006.

It seems likely that the number of undergraduates in the fall of 2006 was calculated by adding the number of admissions in the fall of 2005 (it should actually have been the fall of 2006) to the number enrolled in the fall of 2005 and deducting the number of degrees conferred between July 2005 and June 2006. The total thus obtained differs by one digit from that listed by the QS site. This is most probably a simple data entry error. The total obtained by this method would not of course be completely valid since it did not take account of students leaving for reasons other than receiving a degree. It would, however, probably be not too far off the correct number.

The number of postgraduate students is another matter. It appears that there was a botched attempt to use the same procedure to calculate the number of graduate students in 2006. The problem, though, was that the Duke site does not indicate enrollment of postgraduate students in that year. In the fall of 2005 there were 6,844 postgraduate students. Between July 2005 and June 2006 2,348 postgraduate and professional degrees were awarded, according to the Duke site. This leaves 4,496 postgraduate students. The QS topuniversities site reports that there were 4,805 postgraduates in 2006. This is a difference of 309.

So where did the extra 309 postgraduates come from? Almost certainly the answer is provided by the online Duke news of September 6, 2006 which refers to a total of 1,687 first year undergraduate students composed of 1,378 entering the Trinity College of Arts and Science (Trinity College is the old name of Duke retained for the undergraduate school) and 309 undergraduates entering the Pratt School of Engineering. The total number of admissions is slightly different from the number given on the Duke main page but this may be explained by last minute withdrawals or a data entry error.

So it looks like someone at QS took the number of postgraduate students in 2005, deducted the number of degrees awarded, added the students admitted to the Pratt School of Engineering in the fall of 2006, and came up with the total of 4,805 in 2006. This is way off the mark because the 309 students admitted to the School of Engineering are not postgraduates, as is evident from their inclusion in the class of 2010, and no postgraduate admissions of any kind were counted. The result is that Duke appears, erroneously, to have lost about 2,000 postgraduate students between 2005 and 2006.

The undergraduate and postgraduate students were then apparently combined on the QS site to produce a total of 11,106 students, or about 1,000 less than QS reported in 2005 and about 2,000 less than indicated by Duke for that year.

What about the number of faculty? Here, QS's procedure appears to get even dodgier. The Duke site refers to 1,595 tenure and tenure-track faculty. The QS site refers to 3,192 faculty. Where does the difference come from? The answer ought to be obvious, and I am embarrassed to admit that it took me a couple of hours to work it out. 1,595 multiplied by 2 is 3,190, or exactly 2 less than QS's figure. The slight difference is probably another data entry error or perhaps an earlier error of addition.

The Duke site contains a table of faculty classified according to school – Arts and Sciences, Engineering, Divinity and so on adding up to 1,595 and then classified according to status – full, associate and assistant professors, again adding up to 1,595. It would seem likely that someone assumed that the two tables referred to separate groups of faculty and then added them together.

So, having reduced the number of students by not including postgraduate admissions, and having doubled the number of faculty by counting them twice, QS seem to have come up with a ratio of 3.48 students per faculty. This gave Duke the best score for this part of the ranking, against which all other scores were calibrated. The standardized score of 100 should in fact have been given to Yale, assuming, perhaps optimistically, that its ratio has been calculated correctly.
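
The whole suspected reconstruction boils down to a few lines of arithmetic. A sketch, using only the Duke figures cited above (this is my reconstruction of what QS may have done, not a documented procedure):

```python
# Suspected reconstruction of QS's 2006 student and faculty figures for Duke,
# using only numbers taken from the Duke pages cited above.

# Undergraduates: fall 2005 enrolment - degrees conferred + fall 2005 admissions
undergrads = 6244 - 1670 + 1728      # = 6302; QS lists 6301 (one digit off)

# Postgraduates: fall 2005 enrolment - postgraduate degrees awarded, plus the
# 309 Pratt School undergraduate admissions apparently counted as postgraduates
postgrads = 6844 - 2348 + 309        # = 4805; exactly the QS figure

# Faculty: the 1,595 tenure and tenure-track faculty apparently counted twice
faculty = 1595 * 2                   # = 3190; QS lists 3192

students = undergrads + postgrads    # = 11107; QS lists 11106
print(students, faculty, round(11106 / 3192, 2))   # QS's figures give 3.48 students per faculty
```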

It follows that every score for the faculty student ratio is incorrect and therefore that every overall score is incorrect.

If there is another explanation for the unbelievable Duke statistics I would be glad to hear it. But I think that if there is going to be a claim that an official at Duke provided information that is so obviously incorrect, then the details of the communication should be provided. If the information was obtained from another source, although I do not see any way that it could have been, that source should be indicated. Whatever the source of the error, someone at QS ought to have checked the score of the top university in each component and should have realized immediately that major universities do not reduce the number of their students so dramatically in a single year and keep it secret. Nor is it plausible that a large general university could have a ratio of 3.48 students per faculty.

To show that this reconstruction of QS’s methods is mistaken would require nothing more than indicating the source of the data and an e-mail address or citation by which it could be verified.

Friday, January 12, 2007

And Then There Were None

Something very odd has been going on at the University of Technology Sydney (UTS), if we can believe QS Quacquarelli Symonds, THES's consultants.

In 2005, according to QS, UTS had a faculty of 866, of whom 253 were international. The latter figure is definitely not real information but simply represents 29% of the total faculty, which is QS's estimate, or guess, for Australian universities in general. This should have given UTS a score of 53 for the international faculty component of the 2005 world university rankings, although the score actually given was 33, presumably the result of a data entry error. UTS was ranked 87th in the 2005 rankings.

In 2006, according to QS, the number of faculty at UTS increased dramatically to 1,224. However, the number of international faculty dropped to precisely zero. Partly as a result of this UTS's position in the rankings fell to 255.

Meanwhile, UTS itself reports that it has 2,576 full time equivalent faculty.

How Long is an Extended Christmas Break?

On the 21st of December I received a message from John O'Leary, Editor of THES, saying that he had sent my questions about the world university rankings to QS and that he hoped to get back to me in the new year, since UK companies often have an extended Christmas break.

Assuming it started on December 25th, the break has now lasted for 18 days.

Monday, January 01, 2007

A Disgrace in Every Sense of the Word

That is what the Gadfly, a blog run by four Harvard undergraduates, thinks of the THES world rankings. Here is a quotation:

"The Times Higher Education Supplement (THES) just released their global rankings, and it’s an utter scandal. Rife with errors of calculation, consistency and judgment, it is a testament not only to this ridiculous urge to rank everything but also to the carelessness with which important documents can be compiled."

The post concludes:

"One cannot help but think that the THES rankings are a British ploy to feel good about Oxford and Cambridge, the former of which is having a hard time pushing through financial reforms. Both are really universities who should be doing better, and are not. It may explain why Cambridge ups Harvard on the THES peer review, despite the fact that it lags behind Harvard under almost every other criteria, like citations per faculty, and citations per paper in specific disciplines."

Bangor is Very Naughty


Bangor University in Wales has apparently been fiddling with its exam results in order to boost its position in university rankings (not, this time, the THES world rankings). One wonders how much more of this sort of thing goes on. Anyway, here is an extract from the report in the THES. Contrary to what many people in Asia and the US think, the THES and the Times are separate publications.


And congratulations to Sam Burnett.


Bangor University was accused this week of lowering its academic standards with a proposal to boost the number of first-class degrees it awards.

According to a paper leaked to The Times Higher, the university agreed a system for calculating student results that would mean that about 60 per cent of graduates would obtain either a first or an upper-second class degree in 2007, compared with about 52 per cent under the current system.

The paper, by pro vice-chancellor Tom Corns, says that the university's key local rival, Aberystwyth University, "awarded 6.7 per cent more first and upper-second class degrees than we did". At the time, this helped place Bangor eight positions below Aberystwyth in The Times 2005 league table of universities.

He says: "We must redress the balance with all expedition", meaning the reforms are likely to take effect for 2007 graduates rather than for the 2007 entry cohort.

The move prompted heavy criticism this week. Alan Smithers, director of the Centre for Education and Employment Research at Buckingham University, said: "Hitherto, universities have been trusted to uphold degree standards, but such behaviour calls into question the desirability of continuing to allow them free rein in awarding their own degrees. Perhaps there should be an independent regulatory body."

He suggested that a body such as the Qualifications and Curriculum Authority, which regulates schools' exam awards, could be set up for higher education.

Sam Burnett, president of Bangor student union, said that Bangor had been "very naughty".

"The issue isn't about the system that should be in place... University figures seem to have identified the quickest way to boost Bangor up the league tables and will cheapen degrees in the process. Maybe it would be easier just to add 5 per cent to everyone's scores next July."

Thursday, December 21, 2006

Reply to Open Letter

John O'Leary, editor of the THES, has replied to my open letter:


Dear Mr Holmes

Thank you for your email about our world rankings. As you have raised a number of detailed points, I have forwarded it to QS for their comments. I will get back to you as soon as those have arrived but I suspect that may be early in the New Year, since a lot of UK companies have an extended Christmas break.

Best wishes

John O'Leary
Editor
The Times Higher Education Supplement


If nothing else, perhaps we will find out how long an extended Christmas break is.



Saturday, December 16, 2006

Open Letter to the Times Higher Education Supplement

This letter has been sent to THES

Dear John O’Leary
The Times Higher Education Supplement (THES) world university rankings have acquired remarkable influence in a very short period. It has, for example, become very common for institutions to include their ranks in advertising or on web sites. It is also likely that many decisions to apply for university courses are now based on these rankings.

Furthermore, careers of prominent administrators have suffered or have been endangered because of a fall in the rankings. A recent example is that of the president of Yonsei University, Korea, who has been criticised for the decline of that university in the THES rankings compared to Korea University (1) although it still does better on the Shanghai Jiao Tong University index (2). Ironically, the President of Korea University seems to have got into trouble for trying too hard and has been attacked for changes designed to promote the international standing, and therefore the position in the rankings, of the university. (3) Another case is the Vice-Chancellor of Universiti Malaya, Malaysia, whose departure is widely believed to have been linked to a fall in the rankings between 2004 and 2005, which turned out to be the result of the rectification of a research error.

In many countries, administrative decisions and policies are shaped by the perception of their potential effect on places in the rankings. Universities are stepping up efforts to recruit international students or to pressure staff to produce more citable research. Also, ranking scores are used as ammunition for or against administrative reforms. Recently, we saw a claim that Oxford's performance renders any proposed administrative change unnecessary (4).

It would then be unfortunate for THES to produce data that is in any way misleading, incomplete or affected by errors. I note that the publishers of the forthcoming book that will include data on 500+ universities include a comment by Gordon Gee, Chancellor of Vanderbilt University, that the THES rankings are "the gold standard" of university evaluation (5). I also note that on the website of your consultants, QS Quacquarelli Symonds, readers are told that your index is the best (6).

It is therefore very desirable that the THES rankings should be as valid and as reliable as possible and that they should adhere to standard social science research procedures. We should not expect errors that affect the standing of institutions and mislead students, teachers, researchers, administrators and the general public.

I would therefore like to ask a few questions concerning three components of the rankings that add up to 65% of the overall evaluation.

Faculty-student ratio
In 2005 there were a number of obvious, although apparently universally ignored, errors in the faculty-student ratio section. These include ascribing inflated faculty numbers to the Ecole Polytechnique in Paris, the Ecole Normale Superieure in Paris, the Ecole Polytechnique Federale in Lausanne, Peking (Beijing) University and Duke University, USA. Thus, Ecole Polytechnique was reported on the site of QS Quacquarelli Symonds (7), your consultants, to have 1,900 faculty and 2,468 students, a ratio of 1.30 students per faculty; Ecole Normale Superieure 900 faculty and 1,800 students, a ratio of two students per faculty; Ecole Polytechnique Federale 3,210 faculty and 6,530 students, a ratio of 2.03; Peking University 15,558 faculty and 76,572 students, a ratio of 4.92; and Duke 6,244 faculty and 12,223 students, a ratio of 1.96.
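
For the record, the ratios quoted in the preceding paragraph follow directly from those figures; a quick check:

```python
# Students per faculty implied by the 2005 figures reported on the QS site.
reported = {                                  # (faculty, students) as listed by QS
    "Ecole Polytechnique":          (1900, 2468),
    "Ecole Normale Superieure":     (900, 1800),
    "Ecole Polytechnique Federale": (3210, 6530),
    "Peking University":            (15558, 76572),
    "Duke":                         (6244, 12223),
}

for name, (faculty, students) in reported.items():
    print(f"{name}: {students / faculty:.2f} students per faculty")
# Prints 1.30, 2.00, 2.03, 4.92 and 1.96 -- ratios that no large,
# non-specialist institution could plausibly achieve.
```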

In 2006 the worst errors seem to have been corrected although I have not noticed any acknowledgement that any error had occurred or explanation that dramatic fluctuations in the faculty-student ratio or the overall score were not the result of any achievement or failing on the part of the universities concerned.

However, there still appear to be problems. I will deal with the case of Duke University, which this year is supposed to have the best score for faculty-student ratio. In 2005 Duke, according to the QS Topgraduates site, had, as I have just noted, 6,244 faculty and 12,223 students, giving it a ratio of about one faculty member to two students. This is quite implausible and most probably resulted from a data entry error, with an assistant or intern confusing the number of undergraduates listed on the Duke site, 6,244 in the fall of 2005, with the number of faculty. (8)

This year the data provided are not so implausible but they are still highly problematical. In 2006 Duke according to QS has 11,106 students but the Duke site refers to 13,088. True, the site may be in need of updating but it is difficult to believe that a university could reduce its total enrollment by about a sixth in the space of a year. Also, the QS site would have us believe that in 2006 Duke has 3,192 faculty members. But the Duke site refers to 1,595 tenure and tenure track faculty. Even if you count other faculty, including research professors, clinical professors and medical associates the total of 2,518 is still much less than the QS figure. I cannot see how QS could arrive at such a low figure for students and such a high figure for faculty. Counting part timers would not make up the difference, even if this were a legitimate procedure, since, according to the US News & World Report (America’s Best Colleges 2007 Edition), only three percent of Duke faculty are part time. My incredulity is increased by the surprise expressed by a senior Duke administrator (9) and by Duke's being surpassed by several other US institutions on this measure, according to the USNWR.

There are of course genuine problems about how to calculate this measure, including the question of part-time and temporary staff, visiting professors, research staff and so on. However, it is rather difficult to see how any consistently applied conventions could have produced your data for Duke.

I am afraid that I cannot help but wonder whether what happened was that data for 2005 and 2006 were entered in adjacent rows in a database for all three years and that the top score of 100 for Ecole Polytechnique in 2005 was entered into the data for Duke in 2006 – Duke was immediately below the Ecole in the 2005 rankings – and the numbers of faculty and students worked out backwards. I hope that this is not the case.

-- Could you please indicate the procedures that were employed for counting part-timers, visiting lecturers, research faculty and so on?
-- Could you also indicate when, how and from whom the figures for faculty and students at Duke were obtained?
-- I would like to point out that if the faculty-student ratio for Duke is incorrect then so are all the scores for this component, since the scores are indexed against the top scorer, and therefore all the overall scores. Also, if the ratio for Duke is based on an incorrect figure for faculty, then Duke’s score for citations per faculty is incorrect. If the Duke score does turn out to be incorrect would you consider recalculating the rankings and issuing a revised and corrected version?


International faculty
This year the university with the top score for international faculty is Macquarie, in Australia. On this measure it has made a giant leap forward from 55 to 100 (10).

This is not, I admit, totally unbelievable. THES has noted that in 2004 and 2005 it was not possible to get data for Australian universities about international faculty. The figures for Australian universities for these years therefore simply represent an estimate for Australian universities as a whole with every Australian university getting the same, or almost the same, score. This year the scores are different suggesting that data has now been obtained for specific universities.

I would like to digress a little here. On the QS Topgraduate website the data for 2005 gives the number of international faculty at each Australian university. I suspect that most visitors to the site would assume that these represent authentic data and not an estimate derived from applying a percentage to the total number of faculty. The failure to indicate that these data are estimates is perhaps a little misleading.

Also, I note that in the 2005 rankings the international faculty score for the Australian National University is 52, for Monash 54, for Curtin University of Technology 54 and for the University of Technology Sydney 33. For the other thirteen Australian and New Zealand universities it is 53. It is most unlikely that if data for these four universities were not estimates they would all differ from the general Australasian score by just one digit. It is likely then that in four out of seventeen cases there have been data entry errors or rounding errors. This suggests that it is possible that there have been other errors, perhaps more serious. The probability that errors have occurred is also increased by the claim, uncorrected for several weeks at the time of writing, on the QS Topuniversities site that in 2006 190,000 e-mails were sent out for the peer review.

This year the Australian and New Zealand universities have different scores for international faculty. I am wondering how they were obtained. I have spent several hours scouring the Internet, including annual reports and academic papers, but have been unable to find any information about the numbers of international faculty in any Australian university.

-- Can you please describe how you obtained this information? Was it from verifiable administrative or government sources? It is crucially important that the information for Macquarie be correct because, if it is not, then once again all the scores for this section are wrong.

Peer Review
This is not really a peer review in the conventional academic sense but I will use the term to avoid distracting arguments. My first concern with this section is that the results are wildly at variance with data that you yourselves have provided and with data from other sources. East Asian and Australian and some European universities do spectacularly better on the peer review, either overall or in specific disciplinary groups, than they do on any other criteria. I shall, first of all, look at Peking University (which you usually call Beijing University) and the Australian National University (ANU).

According to your rankings, Peking is in 2006 the 14th best university in the world (11). It is 11th on the general peer review, which according to your consultants explicitly assesses research accomplishment, and 12th for science, 20th for technology, 8th for biomedicine, 17th for social science and 10th for arts and humanities.

This is impressive, all the more so because it appears to be contradicted by the data provided by THES itself. On citations per paper Peking is 77th for science and 76th for technology. This measure is an indicator of how a research paper is regarded by other researchers. One that is frequently cited has aroused the interest of other researchers. It is difficult to see how Peking University could be so highly regarded when its research has such a modest impact. For biomedicine and social sciences Peking did not even do enough research for the citations to be counted.

If we compare overall research achievements with the peer review we find some extraordinary contrasts. Peking does much better on the peer review than the California Institute of Technology (Caltech), by a score of 70 to 53, but for citations per faculty Peking's score is only 2 compared with Caltech's 100.

We find similar contrasts when we look at ANU. It was 16th overall and had an outstanding score on the peer review, ranking 7th on this criterion. It was also 16th for science, 24th for technology, 26th for biomedicine, 6th for social science and 6th for arts and humanities.

However, the scores for citations per paper are distinctly less impressive. On this measure, ANU ranks 35th for science, 62nd for technology and 56th for social science. It does not produce enough research to be counted for biomedicine.

Like Peking, ANU does much better than Caltech on the peer review, with a score of 72, but its research record is far less distinguished, with a score of 13 against Caltech's 100.

I should also like to look at the relative position of Cambridge and Harvard. According to the peer review Cambridge is more highly regarded than Harvard. Not only that, but its advantage increased appreciably in 2006. But Cambridge lags behind Harvard on other criteria, in particular citations per faculty and citations per paper in specific disciplinary groups. Cambridge is also decidedly inferior to Harvard and a few other US universities on most components of the Shanghai Jiao Tong index (12).

How can a university that has such an outstanding reputation perform so consistently less well on every other measure? Moreover, how can its reputation improve so dramatically in the course of two years?

I see no alternative but to conclude that much of the remarkable performance of Peking University, ANU and Cambridge is nothing more than an artifact of the research design. If you assign one third of your survey to Europe and one third to Asia on economic rather than academic grounds, and then allow or encourage respondents to nominate universities in those areas, you are going to have large numbers of universities nominated simply because they are the best of a mediocre bunch. Is ANU really the sixth best university in the world for social science, and Peking the tenth best for arts and humanities, or is it just that there are so few competitors in those disciplines in their regions?

There may be more. The performance on the peer review of Australian and Chinese universities suggests that a disproportionate number of e-mails were sent to and received from these places even within the Asia-Pacific region. The remarkable improvement of Cambridge between 2004 and 2006 also suggests that a disproportionate number of responses were received from Europe or the UK in 2006 compared to 2005 and 2004.

Perhaps there are other explanations for the discrepancy between the peer review scores of these universities and their performance on other measures. One is that citation counts favour English-speaking researchers and universities while the peer review does not; this might explain the scores of Peking University, but not those of Cambridge and ANU. Perhaps Cambridge has a fine reputation based on past glories, but this would not apply to ANU, and why should such a wave of nostalgia sweep the academic world between 2004 and 2006? Perhaps citation counts favour the natural sciences and do not reflect accomplishments in the humanities, but the imbalances here seem to apply across the board in all disciplines.

There are also references to some very suspicious procedures. These include soliciting more responses in 2004 in order to get more universities from certain areas, and, in 2006, a reference to weighting responses from certain regions. Also puzzling is the remarkable closing of the gap between high- and low-scoring institutions between 2004 and 2005. Thus in 2004 the mean peer review score of all universities in the top 200 was 105.69 against a top score of 665, while in 2005 it was 32.82 against a top score of 100.
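The compression is easy to see if each year's mean is expressed as a fraction of that year's top score. The short Python sketch below does only that, using the figures just quoted; nothing else is assumed.

```python
# Express the mean peer review score as a fraction of the top score in
# each year, using the figures quoted above.
peer_review = {2004: {"mean": 105.69, "top": 665}, 2005: {"mean": 32.82, "top": 100}}

for year, s in peer_review.items():
    print(year, round(s["mean"] / s["top"], 3))
# 2004 -> 0.159, 2005 -> 0.328: relative to the top scorer, the average
# score of a top-200 university roughly doubled in a single year.
```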

I would therefore like to ask these questions.

-- Can you indicate the university affiliation of your respondents in 2004, 2005 and 2006?
-- What was the exact question asked in each year?
-- How exactly were the respondents selected?
-- Were any precautions taken to ensure that the survey was completed only by those to whom it was sent?
-- How do you explain the general inflation of peer review scores between 2004 and 2005?
-- What exactly was the weighting given to certain regions in 2006 and to whom exactly was it given?
-- Would you consider publishing raw data showing the number of nominations that universities received from outside their own regions, and therefore the genuine extent of their international reputations?

The reputation of the THES rankings would be enormously increased if there were satisfactory answers to these questions. Even if errors have occurred it would surely be to THES’s long-term advantage to admit and to correct them.

Yours sincerely
Richard Holmes
Malaysia


Notes
(1) http://times.hankooki.com/lpage/nation/200611/kt2006110620382111990.htm
(2) http://ed.sjtu.edu.cn/ranking.htm
(3) http://english.chosun.com/w21data/html/news/200611/200611150020.html
(4) http://www.timesonline.co.uk/article/0,,3284-2452314,00.html
(5) http://www.blackwellpublishing.com/more_reviews.asp?ref=9781405163125&site=1
(6) http://www.topuniversities.com/worlduniversityrankings/2006/faqs/
(7) http://www.topgraduate.com
(8) http://www.dukenews.duke.edu/resources/quickfacts.html
(9) http://www.dukechronicle.com
(10) http://www.thes.co.uk
(11) http://www.thes.co.uk
(12) http://ed.sjtu.edu.cn/ranking.htm

Analysis of the THES Rankings

The Australian has an outstanding short article on the THES rankings by Professor Simon Marginson of the University of Melbourne here.

An extract is provided below.

Methodologically, the index is open to some criticism. It is not specified who is surveyed or what questions are asked. The student internationalisation indicator rewards entrepreneurial volume building but not the quality of student demand or the quality of programs or services. Teaching quality cannot be adequately assessed using student-staff ratios. Research plays a lesser role in this index than in most understandings of the role of universities. The THES rankings reward a university's marketing division better than its researchers. Further, the THES index is too easily open to manipulation. By changing the recipients of the two surveys, or the way the survey results are factored in, the results can be shifted markedly.

This illustrates the more general point that rankings frame competitive market standing as much or more than they reflect it.

Results have been highly volatile. There have been many sharp rises and falls, especially in the second half of the THES top 200 where small differences in metrics can generate large rankings effects. Fudan in China has oscillated between 72 and 195, RMIT in Australia between 55 and 146. In the US Emory has risen from 173 to 56 and Purdue fell from 59 to 127. They must have let their THES subscriptions lapse.

Second, the British universities do too well in the THES table. They have done better each successive year. This year Cambridge and Oxford suddenly improved their performance despite Oxford's present problems. The British have two of the THES top three and Cambridge has almost closed the gap on Harvard. Yet the British universities are manifestly under-funded and the Harvard faculty is cited at 3 1/2 times the rate of its British counterparts. It does not add up. But the point is that it depends on who fills out the reputational survey and how each survey return is weighted.

Third, the performance of the Australian universities is also inflated.

Despite a relatively poor citation rate and moderate staffing ratios they do exceptionally well in the reputational academic survey and internationalisation indicators, especially that for students. My university, Melbourne, has been ranked by the 3703 academic peers surveyed by the THES at the same level as Yale and ahead of Princeton, Caltech, Chicago, Penn and University of California, Los Angeles. That's very generous to us but I do not believe it.

Friday, December 08, 2006

Comments on the THES Rankings

Professor Thomas W. Simon, a Fulbright scholar, has this to say about the THES rankings on a Malaysian blog.


Legitimate appearance
Quacquarelli Symonds (QS), the corporate engine behind the THES rankings, sells consulting for business education, job fairs, and other services. Its entrepreneurial, award winning CEO, Nunzio Quacquarelli, noted that Asian universities hardly ever made the grade in previous ranking studies. Curiously, on the last QS rankings, Asian universities improved dramatically. They accounted for nearly 30% of the top 100 slots. Did QS choose its methodology to reflect some desired trends?

QS has managed to persuade academics to accept a business ploy as an academic report. Both The Economist and THES provide a cover by giving the rankings the appearance of legitimacy. Shockingly, many educators have rushed to get tickets and to perform for the magic show, yet it has no more credibility than a fairy tale and as much objectivity as a ranking of the world's most scenic spots.

Major flaws
It would be difficult to list all the flaws, but let us consider a few critical problems. Scientists expose their data to public criticism whereas QS has not published its raw data or detailed methodologies. The survey cleverly misuses scholarly terms to describe its methodology. THES calls its opinion poll (of over 2,000 academic experts) “peer review”, but an opinion poll of 2,000 self-proclaimed academic experts bears no resemblance to scholars submitting their research to those with expertise in a narrow field. Further, from one year to the next, QS unapologetically changed the weighting (from 50% peer review to 40%) and added new categories (employer surveys, 10%).

He concludes:

Concerned individuals should expose the QS/THES scandal. Some scholars have done scathing critiques of pieces of the survey, however, they now should launch an all out attack. Fortunately, this survey is only few years old. Let us stop it before it grows to maturity and finds a safe niche in an increasingly commercialised global world.

WE ARE CASH COWS

This is the title of a letter to the New Straits Times of Kuala Lumpur (print edition 1/12/06) from David Chan Weng Chong, a student at a university in Sydney. He writes:

In recent years, the Australian government has slashed funding to universities. University education has suffered because of the lack of resources and expertise.

Because of this drying up of government funds, almost all universities are using foreign students as cash cows. Foreign students contribute about A$6 billion (RM17 billion) each year to the economy.

We hope the Australian government will ensure we get our money's worth by increasing spending in higher education.

One wonders how many Asian students have been attracted to Australian universities by excellent scores on the THES rankings and how many are as disillusioned as David.

Perhaps internationalisation is not always an indicator of excellence. It could be nothing more than a response to financial problems.

Wednesday, November 15, 2006

Oxford and Cambridge and THES

George Best used to tell a story about being asked by a waiter in a five-star hotel, "Where did it all go wrong?" Best, who was signing a bill for champagne, with his winnings from the casino scattered around the room and Miss World waiting for him, remarked, "He must have seen something that I'd missed." It looks as though The Times Higher Education Supplement (THES) has seen something about Oxford and Cambridge that everybody else has missed.

The THES world university rankings have proved to be extraordinarily influential. One example is criticism of the president of Yonsei University in Korea for his institution's poor performance on the rankings.

Another is the belief of Terence Kealey, Vice-Chancellor of the University of Buckingham, that since Oxford and Cambridge are, according to THES, the best universities in the world apart from Harvard, they are in no need of reform. He argues that Oxford should reject proposals for administrative change since Oxford and Cambridge are the best-run universities in the world.


Oxford's corporate madness
by Terence Kealey
THIS YEAR'S rankings of world universities reveal that Oxford is one of the three best in the world. The other two are Cambridge and Harvard.

It is obvious that Oxford and Cambridge are the best managed universities in the world when you consider that Harvard has endowments of $25 billion (many times more than Oxford or Cambridge's); that Princeton, Yale and Stanford also have vast endowments; and that US universities can charge huge fees which British universities are forbidden to do by law.

Kealey evidently has complete confidence in the reliability of the THES rankings, and if they were indeed reliable he would have a very good point. But if they are not, the rankings will have done an immense disservice to British higher education by promoting a false sense of superiority and encouraging the rejection of reforms that might reverse a steady decline.

Let's have a look at the THES rankings. On most components the record of Oxford and Cambridge is undistinguished. For international faculty, international students and faculty-student ratio they have scores of 54 and 58, 39 and 43, and 61 and 64 respectively, compared with top scores of 100, although these scores are perhaps not very significant and are easily manipulated. More telling is the score for citations per faculty, a measure of the significance of an institution's research output. Here the record is rather miserable, with Oxford and Cambridge coming behind many institutions, including the Indian Institutes of Technology, Helsinki and the University of Massachusetts at Amherst.

I would be the first to admit that the latter measure has to be taken with a pinch of salt. Science and technology are more citation-heavy than the humanities and social sciences, which would help to explain why the Indian Institutes of Technology apparently do so well, but the figures are still suggestive.

Of course, this measure depends on the number of faculty as well as the number of citations. If there has been an error in counting the number of faculty then the citations per faculty score will also be affected. I am wondering whether something like that happened to the Indian Institutes. THES refers to the Institutes in the plural, but its consultants, QS, refer to a single institute and provide a link to the institute in Delhi. Can we be confident that QS did not count the faculty of Delhi alone but the citations of all the IITs?
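Purely to illustrate the mechanism I am worried about, the following sketch uses invented numbers to show how dividing the citations of all the IITs by the faculty of Delhi alone would inflate the citations-per-faculty figure.

```python
# Invented numbers, for illustration only: the effect of counting citations
# for all the Indian Institutes of Technology but faculty for Delhi alone.
citations_all_iits = 70_000
faculty_all_iits = 3_500
faculty_delhi_only = 500

print(citations_all_iits / faculty_all_iits)    # 20.0  -- consistent counting
print(citations_all_iits / faculty_delhi_only)  # 140.0 -- mismatched counting, a sevenfold inflation
```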

When we look at the data provided by THES for citations per paper, a measure of research quality, we find that the record of Oxford and Cambridge is equally unremarkable. For science, Oxford is 20th and Cambridge 19th. For technology, Oxford is 11th and Cambridge 29th. For biomedicine, Oxford is 7th and Cambridge 9th. For social sciences, Oxford is 19th and Cambridge 22nd.

The comparative performance of Oxford and Cambridge is just as unimpressive when we look at the data provided by Shanghai Jiao Tong University. Cambridge is 2nd on alumni and awards, getting credit for Nobel prizes awarded early in the last century, but 15th for highly cited researchers, 6th for publications in Nature and Science and 12th for citations in the Science Citation Index and Social Science Citation Index. Oxford is 9th for awards, 20th for highly cited researchers, 7th for papers in Nature and Science and 13th for citations in the SCI and SSCI.

So how did Oxford and Cambridge do so well in the overall THES rankings? It was solely because of the peer review. Even on the recruiter ratings they were only 8th and 6th. On the peer review, Cambridge was first and Oxford second. How is this possible? How can reviewers give such a high rating to universities whose research in most fields is inferior in quality to that of a dozen or more US universities, and which now produce relatively few Nobel prize winners, citations or papers in leading journals?

Perhaps, like the waiter in the George Best story, the THES reviewers have seen something that everybody else has missed.

Or is it simply a product of poor research design? I suspect that QS sent out a disproportionate number of surveys to European researchers and also to those in East Asia and Australia. We know that respondents were invited to pick universities in geographical areas with which they were familiar. This in itself is enough to render the peer review invalid as a survey of international academic opinion, even if we could be sure that an appropriate selection procedure was used.

It is surely time for THES to provide more information about how the peer review was conducted.

Wednesday, November 08, 2006

Is Korea University's Rise the Result of a THES Error?

It looks as though the Times Higher Education Supplement (THES) world university rankings will soon claim another victim. The president of Yonsei University, Republic of Korea, Jung Chang-young, has been criticised by his faculty. According to the Korea Times:


The school has been downgraded on recent college evaluation charts, and Jung has been held responsible for the downturn.

Associations of professors and alumni, as well as many students, are questioning the president's leadership. Jung's poor results on the survey were highlighted by the school's rival, Korea University, steadily climbing the global education ranks.

When Jung took the position in 2004, he stressed the importance of the international competitiveness of the university. "Domestic college ranking is meaningless, and I will foster the school as a world-renowned university," he said during his inauguration speech.

However, the school has moved in the other direction. Yonsei University ranked behind Korea University in this year's JoongAng Daily September college evaluation for the first time since 1994. While its rival university saw its global rank jump from 184th last year to 150th this year, Yonsei University failed to have its name listed among the world's top 200 universities in the ranking by the London based The Times.


It is rather interesting that Yonsei University was ahead of Korea University (KU) on a local evaluation between 1995 and 2005 while lagging far behind on the THES rankings in 2005 and 2006. In 2005 Korea University was 184th in the THES rankings while Yonsei was 467th. This year Korea University was 150th and Yonsei 486th.

It is also strange that Yonsei does quite a bit better than KU on most parts of the Shanghai Jiao Tong rankings. Both get zero for alumni and awards, but Yonsei does better on highly cited researchers (7.7 against 0), articles in Nature and Science (8.7 against 1.5) and the Science Citation Index (46.4 against 42.6), while being slightly behind on size (16 against 16.6). Overall, Yonsei is in the 201-300 band and KU in the 301-400 band.

So why has KU done so well on the THES rankings while Yonsei is languishing almost at the bottom? It is not because of research: KU gets a score of precisely 1 in both 2005 and 2006 and, anyway, Yonsei does better for research on the Shanghai index. One obvious contribution to KU's outstanding performance is the faculty-student ratio. KU had a score of 15 on this measure in 2005 and of 55 in 2006, when the top-scoring university was supposedly Duke, with a ratio of 3.48.

According to QS Quacquarelli Symonds, the consultants who prepared the data for THES, Korea University has 4,407 faculty and 28,042 students, giving a ratio of 6.36 students per faculty member.

There is something very odd about this. Just last month the president of KU said that his university had 28 students per faculty member and was trying hard to get the ratio down to 12 students per faculty member. Didn't he know that, according to THES and QS, KU had done that already?

President Euh also noted that in order for Korean universities to provide better education and stand higher among the world universities' ranking, the faculty-student ratio should improve from the current 1:28 (in the case of Korea University) to 1:12, the level of other OECD member nations. He insisted that in order for this to be realized, government support for overall higher education should be increased from the current level of 0.5% of GNP to 1% of GNP to be in line with other OECD nations.

It is very unlikely that the president of KU has made a mistake. The World of Learning 2003 indicates that KU had 21,685 students and 627 full-time teachers, a ratio of about 1:35, suggesting that KU has been making steady progress in this respect over the last few years.

How then did QS arrive at the remarkable ratio of 6.36? I could not find any relevant data on the KU web site. The number of students on the QS site seems reasonable, suggesting a substantial but plausible increase over the last few years, but 4,407 faculty seems quite unrealistic. Where did this figure come from? Whatever the answer, it is quite clear that KU's faculty-student score is grossly inflated, and so therefore is its total score. If Duke had a score of 100 and a ratio of 3.48 (see archives), then KU's score for faculty-student ratio should have been, by my calculation, 12 and not 55. Its overall score, after calibrating against top-scoring Harvard's, would then have been 20.3 and not 32.2. This would have left KU well outside the top 200.
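For what it is worth, the arithmetic behind that recalculation can be set out as a rough Python sketch. Two things in it are my assumptions rather than anything published by QS: that each component score is simply proportional to the top ratio divided by the university's ratio, and the value used for Harvard's raw weighted total (set here to about 72, which is roughly what is needed to land near 20). The component weights are those generally reported for the 2006 exercise: peer review 40%, recruiter review 10%, faculty-student ratio 20%, citations per faculty 20%, international faculty 5% and international students 5%.

```python
# A rough reconstruction of the recalculation above. The proportional
# scoring rule and Harvard's raw weighted total are assumptions; the
# component weights are those generally reported for the 2006 rankings.

DUKE_RATIO = 3.48    # students per faculty member, QS's top scorer
KU_RATIO = 28        # the ratio KU's own president quotes
FS_WEIGHT = 0.20     # assumed weight of the faculty-student indicator

corrected_fs = 100 * DUKE_RATIO / KU_RATIO
print(round(corrected_fs, 1))   # about 12, against the published score of 55

def corrected_overall(published, old_fs, new_fs, harvard_raw_total):
    """Adjust a published (Harvard = 100) overall score when one component changes."""
    return published - 100 * FS_WEIGHT * (old_fs - new_fs) / harvard_raw_total

# With Harvard's raw weighted total taken to be roughly 72, KU's overall
# score falls from 32.2 to about 20 -- well outside the top 200.
print(round(corrected_overall(32.2, 55, corrected_fs, 72), 1))
```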

Incidentally, Yonsei's faculty-student ratio, according to QS and its own web site, is 34.25, quite close to KU's self-admitted ratio.

It appears that the criticism directed at Yonsei's president is perhaps misplaced, since KU's position in the rankings is the result of a QS error. Without that error, Yonsei might have been ahead of KU, or at least not too far behind.