Wednesday, September 27, 2006

Undeserved Reputations?

As well as producing an overall ranking of universities last year, the Times Higher Education Supplement (THES) also published disciplinary rankings. These comprised the world's top 50 universities in arts and humanities, social sciences, science, technology and biomedicine.

The publication of the disciplinary rankings was welcomed by some universities that did not score well on the general rankings but were at least able to claim that they had got into the top fifty for something.

But there are some odd things about these lists. They are based exclusively on peer review. For all but one list (arts and humanities), THES provides data on the number of citations per paper, although this is not used to rank the universities. Citations per paper are a measure of the quality of the published research, since other researchers will normally only cite interesting work. It is noticeable that the relationship between the peer reviewers' opinion of a university and the quality of its research is not particularly strong. For example, in science Cambridge comes top, but its average number of citations per paper is 12.9. This is excellent (I believe the average scientific paper is cited only about once), but Berkeley, Harvard, MIT, Princeton, Stanford, Caltech, ETH Zurich, Yale, Chicago, UCLA, the University of California at Santa Barbara, Columbia, Johns Hopkins and the University of California at San Diego all do better.

It is, of course, possible that the reputation of Cambridge rests on the amount of research it produces rather than its overall quality, or that the overall average disguises a few research superstars who contribute to its reputation, and that is what the peer review reflects. But the size of the gap between the subjective score of the peer review and the objective one of the citation count is still a little puzzling.

Another odd thing is that for many universities there is no score at all for citations per paper. Apparently this is because they did not produce enough papers to be counted, although what they did produce might have been of high quality. But how could they acquire a reputation that puts them in a top 50 while producing so little research?

There are 45 universities that got into a disciplinary top 50 without a score for citations. Of these, 25 are in countries where QS, THES's consultants, have offices, and ten are located in the very cities where QS has an office. Of the 11 universities (the seven Indian Institutes of Technology count as one) that got into more than one top 50 list, no fewer than eight are in countries where QS has an office: Monash, the China University of Science and Technology, Tokyo, the National University of Singapore, Beijing (Peking University), Kyoto, New South Wales and the Australian National University. Four of the eleven are in cities -- Beijing, Tokyo, Singapore and Sydney -- where QS has an office.

So it seems that proximity to a QS office can count for as much as quantity or quality of research. I suspect that QS chose its peer reviewers from people they knew from meetings, seminars or MBA tours, or from people who had been personally recommended to them. Whatever happened, this suggests another way to get a boost in the rankings -- start a branch campus in Singapore or Sydney, show up at any event organised by QS, and get onto the reviewers' panel.

Tuesday, September 12, 2006

More on the THES Peer Review

There are some odd things about the peer review section of the Times Higher Education Supplement (THES) world universities ranking. If you compare the scores for 2004 and 2005 you will find an extremely high correlation, well over .90, between the two sets of figures. (You can check this simply by typing the data into an SPSS file.) This suggests that the two sets of scores might not really be independent.
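For anyone without SPSS, the same check takes a few lines of Python. This is only a sketch: the score lists below are placeholders, not the actual THES figures, which run to a couple of hundred universities.

```python
# A minimal sketch of the correlation check described above, done in Python
# rather than SPSS. The score lists are placeholders, not the real THES data.
from scipy.stats import pearsonr

peer_2004 = [665, 510, 480, 472, 455, 300, 150]  # hypothetical 2004 peer review scores
peer_2005 = [100,  98,  95,  93,  92,  60,  35]  # hypothetical 2005 peer review scores

r, p_value = pearsonr(peer_2004, peer_2005)
print(f"Pearson correlation: {r:.3f} (p = {p_value:.4f})")
```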

THES has admitted this. It has said that in 2005 the ratings of the 2004 reviewers were combined with those of an additional and larger set of reviewers. Even so, I am not sure that this is sufficient to explain such a close association.

But there is something else that is, or ought to be, noticeable. If you look at the figures one by one (doing some quick conversions, because in 2004 the top-scoring University of California at Berkeley gets 665 in this category while in 2005 Harvard is top with 100) you will notice that everybody except Berkeley goes up. The biggest improvement is the University of Melbourne, but some European and other Australian universities also do much better than average.

How is it possible that all universities can improve compared to the 2004 top scorer, with some places showing a much bigger improvement than others, while the correlation between the two scores remains very high?

I've received information recently about the administration of the THES peer review that might shed some light on this.

First, it looks as though QS, THES's consultants, sent out a list of universities divided into subject and geographical areas from which respondents were invited to choose. One wonders how the original list was chosen.

Next, in the second survey of 2005 those who had done the survey a year earlier received their submitted results and were invited to make additions and subtractions.

So, it looks as if in 2005 those who had been on the panel in 2004 were given their submissions for 2004 and asked if they wanted to make any changes. What about the additional peers in 2005? I would guess that they were given the original list and asked to make a selection but it would be interesting to find out for certain.

I think this takes us a bit further in explaining why there is such a strong correlation between the two years. The old reviewers for the most part probably returned their lists with a few changes, and probably added more than they withdrew. This would help to explain the very close correlation between 2004 and 2005 and the improvements for everyone except Berkeley. Presumably, hardly anybody added Berkeley to their 2004 lists, while a few added Harvard and others.

There is still a problem, though. The improvement in peer review scores between 2004 and 2005 is much greater for some universities than for others, and it does not appear to be random. Of the 25 universities with the greatest improvements, eight are located in Australia and New Zealand, including Auckland, and seven in Europe, including Lomonosov Moscow State University in Russia. For Melbourne, Sydney, Auckland and the Australian National University there are some truly spectacular improvements. Melbourne goes up from 31 to 66, Sydney from 19 to 53, Auckland from 11 to 45 and the Australian National University from 32 to 64. (Berkeley's score of 665 in 2004 was converted to 100 and the other scores adjusted accordingly.)
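The conversion in the parentheses is just a proportional rescaling against the top scorer. A minimal sketch, in which every raw score apart from Berkeley's 665 is invented:

```python
# Rescaling 2004 peer review scores so that the top scorer (Berkeley, 665)
# becomes 100, putting the 2004 figures on the same scale as 2005.
# All raw scores here apart from Berkeley's are invented for illustration.
raw_2004 = {"Berkeley": 665, "University A": 210, "University B": 125}

top = max(raw_2004.values())
rescaled_2004 = {name: round(100 * score / top) for name, score in raw_2004.items()}
print(rescaled_2004)  # {'Berkeley': 100, 'University A': 32, 'University B': 19}
```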

How can this happen? Is it plausible that Australian universities underwent such a dramatic improvement in the space of just one year? Or is it a product of a flawed survey design? Did QS just send out a lot more questionnaires to Australian and European universities in 2005?

One more thing might be noted. I have heard of one case where a respondent passed the message from QS on to others in the same institution, at least one of whom apparently managed to submit a response to the survey. If this sort of thing was common in some places, and if it was accepted by QS, it might explain why certain universities did strikingly better in 2005.

THES will, let's hope, be a lot more transparent about how they do the next ranking.

Friday, September 08, 2006

More on the Rise of Ecole Polytechnique

I have already mentioned the remarkable rise of the Ecole Polytechnique (EP), Paris, in the Times Higher Education Supplement (THES) world university rankings to 10th place in the world and first in Continental Europe. This was largely due to what looked like a massive increase in the number of teaching staff between 2004 and 2005. I speculated that what happened was that QS, THES's consultants, had counted part-time faculty in 2005 but not in 2004.

The likelihood that this is what happened is confirmed by data from QS themselves. Their website provides some basic information about EP, and there are two different sets of figures for the numbers of faculty and students on its page. At the top it says the ecole has 2,500 students and 380 faculty members. At the bottom there is a box, DATAFILE, which indicates that it has 1,900 faculty and 2,468 students.

In 2004, the top-scoring university in the faculty-student ratio category was Ecole Normale Superieure (ENS), another French grande ecole. According to QS's current data, ENS has 1,800 students and 900 faculty, or two students per faculty member. If the numbers of faculty and students at ENS remained the same between 2004 and 2005, then EP's score for faculty-student ratio would have gone from several times lower than ENS's in 2004 (23 out of 100) to quite a bit higher (100, the new top score) in 2005.

Going back to QS's figures, the first set of data gives 6.58 students per faculty member and the second 1.30.

EP's dramatic improvement is most probably explained by their using the first set of figures, or something similar, in 2004 and the second set, or something similar, in 2005.
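The arithmetic can be laid out as follows. Indexing each ratio linearly against ENS is a simplification of whatever scaling THES actually uses (it does not reproduce EP's published 2004 score of 23 exactly), so the indexed figures are illustrative only.

```python
# Rough reconstruction of the arithmetic above, using the two sets of figures
# on QS's page for EP and QS's figures for ENS. The linear indexing against
# the top scorer is a simplification; treat the indexed scores as illustrative.
ep_first  = {"students": 2500, "faculty": 380}   # figures at the top of the page
ep_second = {"students": 2468, "faculty": 1900}  # figures in the DATAFILE box
ens       = {"students": 1800, "faculty": 900}   # QS's current ENS figures

def faculty_per_student(u):
    return u["faculty"] / u["students"]

print(2500 / 380)    # about 6.58 students per faculty member
print(2468 / 1900)   # about 1.30 students per faculty member

# Index each ratio against ENS, the 2004 top scorer, out of 100.
for label, uni in [("first set", ep_first), ("second set", ep_second)]:
    score = 100 * faculty_per_student(uni) / faculty_per_student(ens)
    print(label, round(score))  # roughly 30 with the first set, over 150 with the second
```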

The main difference between the two is the number of faculty: 380 compared to 1,900. Most probably, the 1,500-plus difference represents part-timers. Once again, I would be happy to hear another explanation. I am sure they are a lot more distinguished than the adjuncts and graduate assistants who do far too much teaching in American universities, but should they really be counted as equivalent to full-time teaching faculty?

The next question is why nobody else has noticed this.

Tuesday, September 05, 2006

So That's how They Did It

For some time I've been wondering how the panels for the Times Higher Education Supplement (THES) World University Ranking peer review in 2004 and 2005 were chosen. THES have been very coy about this, telling us only how many reviewers were involved, the continents they came from and the broad disciplinary areas. What they have not done is give any information about exactly how these experts were selected, how they were distributed between countries, what the response rate was, exactly what questions were asked, whether respondents were allowed to pick their own universities, how many universities they could pick and so on. In short, we are given none of the information that would be required from even the most lackadaisical writer of a doctoral dissertation.

Something interesting has appeared on websites in Russia and New Zealand. Here are the links. The first is from the Special Astrophysical Observatory of the Russian Academy of Science http://www.sao.ru/lib/news/WScientific/WSci4.htm

The second is from the University of Auckland, New Zealand
http://www.aus.ac.nz/branches/auckland/akld06/AUS-SP.pdf.

The document is a message from QS, the consultants used by THES for their ranking exercise, soliciting respondents for the 2005 peer review. It begins with a quotation from Richard Sykes, Rector of Imperial College, London: "you need smart people to recognise smart people".

As if being acknowledged as a smart person who can recognise smart people were not enough, anyone spending five minutes filling out an online form will qualify for a bunch of goodies, comprising a discount on attending the Asia Pacific Leaders in Education Conference in Singapore, a one-month trial subscription to the THES, a chance to win a stand at the World Grad School Tour, a chance to qualify for a free exhibition table at "these prestigious events" and a chance to win a BlackBerry personal organiser.

It is quite common in social science research to pay survey participants for their time and trouble, but this might be a bit excessive. It could also bias the response rate. After all, not everybody is going to get very excited about going to those prestigious events. But some people might, and they are more likely to be in certain disciplines and in certain places than others.

But the most interesting thing is the bit at the top of the Russian page. The message was addressed not to any particular person but simply to "World Scientific Subscriber". World Scientific is an online collection of scientific journals. One wonders whether QS had any way of checking who they were getting replies from. Was it the head of the Observatory or some exploited graduate student whose job was to check the e-mail? Also, did they send the survey to all World Scientific subscribers, or just to some of them, or only to those in Russia or Eastern Europe?

So now you know what to do if you want to get on the THES panel of peer reviewers. Subscribe to World Scientific and, perhaps, a few other online subscription services, or work for an institution that does. With a bit of luck you will be recognised as a real smart person and get a chance to vote your employer and your alma mater into the Top 300 or 200.
The Fastest Way into the THES Top 200

In a little while the latest edition of the THES rankings will be out. There will be protests from those who fail to make the top 200, 300 or 500 and much self-congratulation from those included. Also, of course, THES and QS, THES’s consultants, directly or indirectly, will make a lot of money from the whole business.

If you search through the web you will find that QS and THES have been quite busy over the last year or so promoting their rankings and giving advice about how to get into the top 200. Some of their advice is not very helpful. Thus, Nunzio Quacquarelli, director of QS, told a seminar in Kuala Lumpur in November 2005 that producing more quality research was one way of moving up in the rankings. This is not necessarily a bad thing, but it will be at least a decade before any quality research can be completed, written up, submitted for publication, revised, finally accepted, published, and then cited by another researcher whose work goes through the same processes. Only then will research start to push a university into the top 200 or 100 by boosting its score for citations per faculty.

Something less advertised is that once a university has got onto the list of 300 universities (so far this has been decided by peer review) there is a very simple way of boosting its position in the rankings. It is also not unlikely that several universities have already realized this.

Pause for a minute and review the THES methodology. They gave a weighting of 40 per cent to a review of universities by other academics, 10 per cent to a rating by employers, 20 per cent to the ratio of faculty to students, 10 per cent to the proportion of international faculty and students, and 20 per cent to the number of citations per faculty. In 2005 the top-scoring institution in each category was given a score of 100 and the scores of the others were calibrated accordingly.
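To make the arithmetic concrete, here is a minimal sketch of how those weightings combine into an overall score. The component scores for the hypothetical university are invented.

```python
# How the 2005 THES weightings described above combine into an overall score.
# The component scores below are invented for illustration.
weights = {
    "peer_review": 0.40,            # review by other academics
    "employer_rating": 0.10,        # rating by employers
    "faculty_student_ratio": 0.20,
    "international": 0.10,          # international faculty and students
    "citations_per_faculty": 0.20,
}

scores = {                          # hypothetical 0-100 component scores
    "peer_review": 60,
    "employer_rating": 45,
    "faculty_student_ratio": 23,
    "international": 30,
    "citations_per_faculty": 15,
}

overall = sum(weights[k] * scores[k] for k in weights)
print(round(overall, 1))  # 39.1
```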

Getting back to boosting ratings, first take a look at the 2004 and 2005 scores for citations per faculty. Comparison is a bit difficult because the top scorer (MIT in both cases) was given a score of 400 in 2004 but 100 in 2005. What immediately demands attention is that there are some very dramatic changes between 2004 and 2005.

For example, Ecole Polytechnique in Paris fell from 14.75 (dividing the THES figures by four because top-ranked MIT was given a score of 400 in 2004) to 4, ETH Zurich from 66.5 to 8, and McGill in Canada from 21 to 8.

This is, at first sight, a bit strange. The figures are supposed to refer to ten-year periods, so that in 2005 citations for the earliest year would be dropped and those for another year added. You would not expect very much change from year to year since the figures for 2004 and 2005 overlap a great deal.

But it is not only citations that we have to consider. The score is actually based on citations per faculty member. So, if the number of faculty goes up and the number of citations remains the same then the score for citations per faculty goes down.

This in fact is what happened to a lot of universities. If we look at the score for citations per faculty and then the score for faculty-student ratio there are several cases where they change proportionately but in opposite directions.

So, going back to the three examples given above, between 2004 and 2005 Ecole Polytechnique went up from 23 to 100, becoming the top scorer for faculty-student ratio, ETH Zurich from 4 to 37, and McGill from 23 to 42. Notice that the rise in the faculty-student ratio score is roughly proportional to the fall in the citations-per-faculty score.

I am not the first person to notice the apparent dramatic collapse of research activity at ETH Zurich. Norbert Staub in ETH Life International was puzzled by it. It looks as though it was not that ETH Zurich stopped doing research but that it apparently acquired something like eight times as many teachers.

It seems pretty obvious that what happened to these institutions is that the apparent number of faculty went up between 2004 and 2005. This led to a rise in the score for faculty student ratio and a fall in the number of citations per faculty.

You might ask, so what? If a university goes up on one measure and goes down on another surely the total score will remain unchanged.

Not always. THES indexed the scores to the top-scoring university, so that in 2005 the top scorer gets 100 for both faculty-student ratio and citations per faculty. But the gap between the top university for faculty-student ratio and run-of-the-mill places in, say, the second hundred is much less than it is for citations per faculty. For example, take a look at the faculty-student scores of the universities starting at position 100: 15, 4, 13, 10, 23, 16, 13, 29, 12, 23. Then look at the scores for citations per faculty: 7, 1, 8, 6, 0, 12, 9, 14, 12, 7.

That means that many universities can, like Ecole Polytechnique, gain much more by raising their faculty-student ratio score than they lose by lowering their citations per faculty. Not all, of course: ETH Zurich suffered badly as a result of this faculty inflation.
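The asymmetry can be shown with a toy calculation. The raw numbers are invented, chosen so the indexed scores resemble those of a university around position 100 quoted above, and the linear indexing against each category's top scorer is a simplification of whatever THES actually does.

```python
# Toy illustration of "faculty inflation". All raw numbers are invented and the
# linear scaling against each category's top scorer is a simplification.

def indexed(value, top_value):
    """Scale a raw value against the category's top scorer, out of 100."""
    return 100 * value / top_value

students, citations = 20000, 5250
top_faculty_per_student = 0.5     # hypothetical category leader: one teacher per two students
top_citations_per_faculty = 50.0  # hypothetical category leader for citations per faculty

for label, faculty in [("before", 1500), ("after doubling reported faculty", 3000)]:
    fs_score = indexed(faculty / students, top_faculty_per_student)
    cites_score = indexed(citations / faculty, top_citations_per_faculty)
    weighted = 0.2 * fs_score + 0.2 * cites_score   # both measures carry 20 per cent
    print(label, round(fs_score), round(cites_score), round(weighted, 1))
# before:  faculty-student 15, citations 7, weighted contribution 4.4
# after:   faculty-student 30, citations 4, weighted contribution 6.7
```

Doubling the reported faculty doubles the faculty-student score but only halves the citations score, so on these invented numbers the university gains more than it loses.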

So what is going on? Are we really to believe that in 2005 Ecole Polytechnique quadrupled its teaching staff, ETH Zurich increased its teaching staff eightfold and McGill nearly doubled its own? This is totally implausible. The only explanation that makes any sort of sense is that either QS or the institutions concerned counted their teachers differently in 2004 and 2005.

The likeliest explanation for Ecole Polytechnique's remarkable change is simply that in 2004 only full-time staff were counted, while in 2005 part-time staff were counted as well. It is well known that many staff of the Grandes Ecoles of France are employed by neighbouring research institutes and universities, although exactly how many is hard to find out. If anyone can suggest any other explanation, please let me know.

Going through the rankings we find quite a few universities affected by what we might call “faculty inflation”: EPF Lausanne goes from 13 to 64 on the faculty-student score, Eindhoven from 11 to 54, the University of California at San Francisco from 39 to 91, Nagoya from 19 to 35, and Hong Kong from 8 to 17.

So, having got through the peer review, this is how to get a boost in the rankings. Just inflate the number of teachers and deflate the number of students.

Here are some ways to do it. Wherever possible, hire part-time teachers but don’t differentiate between full-time and part-time staff. Announce that every graduate student is a teaching assistant, even if they just have to do a bit of marking, and count them as teaching staff. Make sure anyone who leaves is designated emeritus or emerita and kept on the books. Never sack anyone but keep him or her suspended. Count everybody in branch campuses and off-campus programmes. Classify all administrative appointees as teaching staff.

It will also help to keep the official number of students down. A few possible ways: do not count part-time students, do not count branch campuses, and count enrolment at the end of the semester, after some students have dropped out.

Wednesday, August 30, 2006

Comparing the Newsweek and THES Top 100 Universities

It seems to be university ranking season again. Shanghai Jiao Tong University has just come out with their 2006 edition and it looks like there will be another Times Higher Education Supplement (THES) ranking quite soon. Now, Newsweek has joined in with its own list of the world’s top 100 universities.

The Newsweek list is, for the most part, not original but it does show something extremely interesting about the THES rankings.

What Newsweek did was to combine bits of the THES and Shanghai rankings (presumably for 2005, although Newsweek does not say). They took three components from the Shanghai index, the number of highly cited researchers, the number of articles in Nature and Science, and the number of articles in the ISI Social Sciences and Arts and Humanities Indices (the SJTU ranking actually also includes the Science Citation Index), and gave them a weighting of 50 per cent. Then they took four components from the THES rankings: percentage of international faculty, percentage of international students, faculty-student ratio and citations per faculty. They also added a score derived from the number of books in the university library.
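A sketch of what that sort of combination looks like in practice. Newsweek does not spell out the split of the weights in the passage above, so the equal splits below, and the component scores, are assumptions for illustration only.

```python
# Combining components from two rankings, roughly as described above.
# The split of the 50 per cent among the Shanghai components, and the weights
# on the THES components and the library score, are assumptions; Newsweek's
# actual weights may differ.
weights = {
    # Shanghai components (50 per cent in total, split equally here)
    "highly_cited_researchers": 50 / 3,
    "nature_science_articles": 50 / 3,
    "ssci_ahci_articles": 50 / 3,
    # THES components and library holdings (remaining 50 per cent, split equally here)
    "international_faculty_pct": 10,
    "international_students_pct": 10,
    "faculty_student_ratio": 10,
    "citations_per_faculty": 10,
    "library_volumes": 10,
}

def combined_score(component_scores):
    """Weighted sum of normalised (0-100) component scores."""
    return sum(weights[k] * component_scores[k] for k in weights) / 100

example = {k: 50 for k in weights}        # a hypothetical university scoring 50 on everything
print(round(combined_score(example), 1))  # 50.0
```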

Incidentally, it is a bit irritating that Newsweek, like some other commentators, refers to the THES as The Times of London. The THES has in fact long been a separate publication and is no longer even owned by the same company as the newspaper.

The idea of combining data from different rankings is not bad, although Newsweek does not indicate why they assign the weightings that they do. It is a shame, though, that they keep THES’s data on international students and faculty and faculty-student ratio, which do not show very much and are probably easy to manipulate.

Still, it seems that this ranking, as far as it goes, is probably better than either the THES or the Shanghai ones, considered separately. The main problem is that it includes only 100 universities and therefore tells us nothing at all about the thousands of others.

The Newsweek ranking is also notable for what it leaves out. It does not include the THES peer review, which accounted for 50 per cent of the ranking in 2004 and 40 per cent in 2005, or the rating by employers, which contributed 10 per cent in 2005. If we compare the top 100 universities in the THES ranking with Newsweek’s top 100, some very interesting patterns emerge. Essentially, the Newsweek ranking tells us what happens if we take the THES peer review out of the equation.

First, a lot of universities have a much lower position on the Newsweek ranking than they do on the THES’s, and some disappear from the former altogether. But the decline is not random by any means. All four French institutions suffer a decline. Of the 14 British universities, two go up, two stay in the same place and ten go down. Altogether 26 European universities fall and five (three of them from Switzerland) rise.

The four Chinese (PRC) universities in the THES top 100 disappear altogether from the Newsweek top 100 while most Asian universities decline. Ten Australian universities go down and one goes up.


There are some truly spectacular tumbles. They include Peking University (which THES likes to call Beijing University), the best university in Asia and number 15 in the world according to THES, which is out altogether. The Indian Institutes of Technology have also gone. Monash falls from 33 to 73, Ecole Polytechnique in Paris from 10 to 43, and Melbourne from 19 to 53.

So what is going on? Basically, it looks as though the function of the THES peer and employer reviews was to allow universities from Australia, Europe, especially France and the United Kingdom, and Asia, especially China, to do much better than they would on any other possible measure or combination of measures.

Did THES see something that everybody else was missing? It is unlikely. The THES peer reviewers are described as experts in their fields and as research-active academics. They are not described as experts in teaching methodology or as involved in teaching or curricular reform. So it seems that this is supposed to be a review of the research standing of universities, not of teaching quality or anything else. And for some countries it is quite a good one. For North America, the United Kingdom, Germany, Australia and Japan, there is a high correlation between the scores for citations per faculty and the peer review. For other places it is not so good. There is no correlation between the peer review and citations for Asia overall, China, France, and the Netherlands. For the whole of the THES top 200 there is only a weak correlation.
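The country-by-country check described here is easy to reproduce once the two columns of scores are in a table. The data frame below is a placeholder standing in for the actual THES top 200 figures.

```python
# Sketch of the country-by-country correlation check described above.
# The data frame is a placeholder, not the actual THES top 200 data.
import pandas as pd

df = pd.DataFrame({
    "country": ["US"] * 4 + ["France"] * 4,
    "peer_review":           [100, 85, 70, 60, 75, 60, 50, 40],
    "citations_per_faculty": [ 95, 80, 60, 50, 20, 45, 10, 35],
})

# Correlation for the whole table, then country by country.
print(round(df["peer_review"].corr(df["citations_per_faculty"]), 2))
for country, group in df.groupby("country"):
    r = group["peer_review"].corr(group["citations_per_faculty"])
    print(country, round(r, 2))
# In this toy data the US rows correlate strongly and the French rows hardly at all.
```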

So a high score on the peer review does not necessarily reflect a high research profile and it is hard to see that it reflects anything else.

It appears that the THES peer review, and therefore the ranking as a whole, was basically a kind of ranking gerrymandering in which the results were shaped by the method of sampling. QS took about a third of its peers each from North America, Europe and Asia and then asked them to name the top universities in their own geographic areas. No wonder we have large numbers of European, Asian and especially Australian universities in the top 200. Had the THES surveyed an equal number of reviewers from Latin America and Africa (“major cultural regions”?) the results would have been different. Had they asked reviewers to nominate universities outside their own countries (surely quality means being known in other countries or continents?) they would have been even more different.

Is it entirely a coincidence that the regions disproportionately favoured by the peer review -- the UK, France, China and Australia -- are precisely those where QS, the consultants who carried out the survey, have offices, and precisely those regions most active in the production of MBAs and the lucrative globalised trade in students, teachers and researchers?

Anyway, it will be interesting to see if THES is going to do the same sort of thing this year.