Saturday, February 26, 2011

Surveys and Citations

I have just finished calculating the correlation between the scores for the academic survey and for citations per faculty in the 2010 QS World University Rankings.

Since the survey asked about research, and since citations are supposed to be a robust indicator of research excellence, we would expect a high correlation between the two.

It is in fact .391, which is on the low side. There could be valid reasons why it is so low. Citations, by definition, must follow publication, which follows research, which in turn is preceded by proposals and a variety of bureaucratic procedures. A flurry of citations might be indicative of the quality of research begun a decade ago. The responses to the survey might, on the other hand, be based on the first signs of research excellence long before the citations start rolling in.

Still, the correlation does not seem high enough. At first glance one would suspect that the survey is faulty, but it could be that citations no longer mean very much as a measure of excellence.

It would be very interesting to calculate the correlation between the score for research reputation on the Times Higher Education WUR and its citation indicator.

We would expect the THE survey to be more valid, since the basic qualification for inclusion in the survey is being the corresponding author of an article included in the ISI indexes, whereas for QS it is signing up for a journal published by World Scientific. But it can no longer be assumed that authorship of an article means very much. Does it always require more initiative and interest to get on the list of co-authors than to sign up for an online subscription?

It should also be noted that there is an overlap between the two surveys as both are supplemented with arts and humanities respondents from the Mardev mailing lists.

I have calculated the correlation between the citations indicator (normalised average citations per paper) in the THE 2010 rankings and the research indicator, which combines volume (4.5% of the total score), income (6%) and reputation (19.5%).

This is .562, quite a bit better than the QS correlation. However, the research indicator combines a survey with other data.
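For anyone who wants to check calculations like these, here is a minimal sketch of the arithmetic, assuming the indicator scores have been copied from the published tables into two arrays (the values below are placeholders, not actual ranking data):

```python
import numpy as np

# Placeholder scores for a handful of universities; in practice these
# would be the published indicator scores, one pair per institution.
survey_scores = np.array([100.0, 98.2, 91.5, 88.7, 76.3, 70.2])
citation_scores = np.array([98.4, 71.0, 90.2, 65.8, 80.1, 85.5])

# Pearson product-moment correlation between the two indicators
r = np.corrcoef(survey_scores, citation_scores)[0, 1]
print(f"correlation = {r:.3f}")
```

Squaring the coefficient is a useful reality check: a correlation of .391 means the survey scores account for only about 15 per cent of the variance in the citations scores.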

It would be very interesting if THE and/or Thomson Reuters released the scores of the individual components of the research indicator.

Wednesday, February 23, 2011

Reputation, reputation, reputation!

As the world (or some of it) waits for the ranking survey forms to appear in its mailboxes, both THE and QS are promoting their surveys.

According to Phil Baty of THE:

"But in our consultation with the sector, there was strong support for the continued use of reputation information in the world rankings. Some 79 per cent of respondents to a survey by our rankings data provider Thomson Reuters rated reputation as a “must have” or “nice to have” measure. We operate in a global market where reputation clearly matters."

He then indicates several ways in which the THE survey is an improvement over the THE-QS, now QS, survey.

"We received a record 13,388 usable responses in just three months, making the survey the biggest of its kind in the world.


We promised a transparent approach. The methodology and survey instrument were published in full and this week, the thousands of academics who took part in the survey were sent a detailed report on the respondent profile. It makes reassuring reading:


• Responses were received from 131 countries"

It would, however, be interesting to see the number of respondents from each country. There are some people who wonder whether THE's sampling technique means that Singapore got the lion's share of responses in Southeast Asia.

Also, will THE publish the scores for the reputation surveys? At the moment they are bundled in with the other teaching and research indicators. What is the correlation between the score for research reputation and the citations indicator? Is there any sign that Alexandria, Bilkent or Hong Kong Baptist University have reputations that match their scores for research impact?

Meanwhile, QS also has an item on its survey. They find a similar demand for data on reputation.

"An impressive 79% of respondents, voted reputation for research as one of their top three criteria, with 60% choosing international profile of faculty, essentially another indicator of international reputation for research. This is in stark contrast to the 26% and 30% that prioiritised citations as a key measure.



Furthermore, when breaking these results out by broad faculty area, we can see consistent support across disciplines for the reputation measure but a marked dip in support for citations as a measure amongst respondents in the Arts & Humanities area – which tends to be the area least recognized by traditional measures of research output."
Comment on Internationalisation

International Focus, the newsletter of the UK HE International Unit, has an article by Jane Knight on myths of internationalisation. The second myth is:

"Myth two rests on a belief that the more international a university is – in terms of students, faculty, curriculum, research, agreements, network memberships – the better its reputation is.

This is tied to the false notion that a strong international reputation is a proxy for quality. Cases of questionable admission and exit standards for universities highly dependent on the revenue and ‘brand equity’ of international students are concrete evidence that internationalisation does not always translate into improved quality or high standards.

This myth is further complicated by the quest for higher rankings on a global or regional league table such as the Times Higher Education or Academic World Ranking of Universities (AWRU). It is highly questionable whether the league tables accurately measure the internationality of a university and more importantly whether the international dimension is always a robust indicator of quality."
 
Also, it is much easier to be international in Switzerland or Singapore than in Central China or the Midwest of the US.

Tuesday, February 22, 2011

Penn State Law School

Malcolm Gladwell has an article in the current New Yorker about the US News and World Report college rankings. There is quite a lot there that I would like to discuss in another post. For the moment, I will just comment on an anecdote about the appearance of a non-existent law school in a ranking.

Gladwell describes how Thomas Brennan, who edits a well-known ranking of law schools, once sent out a questionnaire to other lawyers asking them to rank law schools and found that Penn State was, as Brennan is quoted as recalling, ranked around fifth. This was strange, since there was no law school at Penn State until quite recently (1997 or 2000, according to different sources).

This immediately struck me as odd, since I remember a similar story about the Princeton Law School, which does not exist and which was also supposed to have made an appearance in a ranking. The Princeton story is very probably apocryphal and might have begun with a comment by the dean of New York University Law School in the Dartmouth Law Journal that Princeton would appear in the top twenty law schools if a questionnaire asked about it.

This story was plausible since it was an apparent example of the halo effect with Princeton's general excellence being reflected in the perception of a school that did not exist.

The problem with Brennan's account as retold by Gladwell, which does not appear to be supported by documentary evidence, is that it requires that many lawyers not only mistakenly thought that Penn State had a law school (getting it mixed up with the University of Pennsylvania?) but were also in error about the general quality of the university. Penn State is nowhere near being a top ten or even a top fifty school.

Could this be another academic legend?

Sunday, February 20, 2011

Impact Assessment

The use of citations as a measure of research quality was highlighted by the remarkable performance of Alexandria University, Bilkent University, Hong Kong Baptist University and others in the 2010 Times Higher Education World University Rankings. As THE and Thomson Reuters review their methodology, perhaps they could take note of this post in Francis' World Inside Out, that refers to a paper by Arnold and Fowler.

'“Goodhart’s law warns us that “when a measure becomes a target, it ceases to be a good measure.” The impact factor has moved in recent years from an obscure bibliometric indicator to become the chief quantitative measure of the quality of a journal, its research papers, the researchers who wrote those papers and even the institution they work in. The impact factor for a journal in a given year is calculated by ISI (Thomson Reuters) as the average number of citations in that year to the articles the journal published in the preceding two years. It is widely used by researchers deciding where to publish and what to read, and by tenure and promotion committees laboring under the assumption that publication in a higher impact-factor journal represents better work. However, it has been widely criticized on a variety of grounds (it does not determine a paper’s quality, it is a crude and flawed statistic, etc.). Impact factor manipulation can take numerous forms. Let us follow Douglas N. Arnold and Kristine K. Fowler, “Nefarious Numbers,” Notices of the AMS 58: 434-437, March 2011 [ArXiv, 1 Oct 2010].

Editors can manipulate the impact factor by means of the following practices: (1) “canny editors cultivate a cadre of regulars who can be relied upon to boost the measured quality of the journal by citing themselves and each other shamelessly;” (2) “authors of manuscripts under review often were asked or required by editors to cite other papers from the journal; this practice borders on extortion, even when posed as a suggestion;” and (3) editors “raise their journals’ impact factors” by “publishing review items with large numbers of citations to the journal.” “The damage these unscientific practices wreak upon the scientific literature has raised occasional alarms. A counterexample should confirm the need for alarm.” '
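The arithmetic being gamed here is simple. A toy version of the two-year impact factor as defined in the passage above, with invented numbers:

```python
def impact_factor(citations_this_year: int, articles_prev_two_years: int) -> float:
    """Two-year impact factor: citations received in a given year to
    articles published in the preceding two years, divided by the
    number of those articles."""
    return citations_this_year / articles_prev_two_years

# e.g. a journal whose 400 articles from 2008-09 drew 1,200 citations
# in 2010 would have a 2010 impact factor of 3.0
print(impact_factor(1200, 400))
```

Note that all three practices quoted above work on the numerator, which is why a journal's impact factor can be inflated without any change in the quality of what it publishes.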



Looking East

Shanghai is planning to persuade two Ivy League schools, Cornell and Columbia, to set up branch campuses there. It already has a branch of New York University.

Would anybody like to make a prediction when a new Oxford or Cambridge college will be established in Shanghai (or Singapore or Hong Kong)?

Or when an entire American university will move to China?

More Dumbing Down

DePaul University will make it optional for applicants to submit SAT or ACT scores. Instead, they can write short essays that demonstrate non-cognitive traits such as "commitment to service", "leadership" and "ability to meet long-term goals".

The university says:

'"Admissions officers have often said that you can't measure heart," said Jon Boeckenstedt, associate vice president for enrollment management. "This, in some sense, is an attempt to measure that heart."



Mr. Boeckenstedt expects the change to encourage applicants with high grade-point averages but relatively low ACT and SAT scores to apply—be they low-income students, underrepresented minorities, or otherwise. Moreover, he and his colleagues believe the new admissions option will allow them to better select applicants who are most likely to succeed—and graduate.'

DePaul's administrators are being extremely naive if they think that these attributes cannot be easily coached or faked. Bluntly, how much effort does it take to teach a student what to say in one of these essays compared with squeezing a few more points out of the SAT?

Wednesday, February 16, 2011

Another US News Ranking

This one is about the schools where congressmen received their bachelor's degrees.

Here are the top 10. What might be more interesting is the party affiliation of the congressmen: D = Democrat, R = Republican, I = Independent.

1. Harvard: D 13, R 2
2. Stanford: D 9, R 2
3. Yale: D 8, R 1, I 1
4. UCLA: D 6, R 3
5= Georgetown: D 5, R 2
5= Florida: D 2, R 5
5= Georgia: D 1, R 6
5= Wisconsin-Madison: D 6, R 1
9. North Carolina-Chapel Hill: D 5, R 1
10= Brigham Young: R 5
10= George Washington: D 2, R 5
10= Louisiana State: D 1, R 4
10= Berkeley: D 4, R 1
10= Missouri: D 4, R 1
10= Tennessee: D 2, R 3

The Fortune 500

US News has produced a ranking of US universities according to the number of degrees awarded to the CEOs of the Fortune 500, the largest American corporations by gross revenue.

Here are the top five.

1. Harvard
2. Columbia
3. University of Pennsylvania
4. University of Wisconsin-Madison
5. Dartmouth College

Sunday, February 13, 2011

Ranking Education Schools

US News and World Report, publisher of America's Best Colleges, is teaming up with the National Council on Teacher Quality to produce a rating of teacher preparation programs.

Many Education deans are strongly opposed. See here.

We are all equal

I have come across an interesting article, "The equality of intelligence", by Nina Power in The Philosophers' Magazine. It is one of a series, "Ideas of the century" (I am not sure which one).

Power, whose dissertation is entitled From Theoretical Antihumanism to Practical Humanism: The Political Subject in Sartre, Althusser and Badiou and who is a senior lecturer at Roehampton University, refers to the work of Jacques Rancière,

"who never tires of repeating his assertion that equality is not just something to be fought for, but something to be presupposed, is, for me, one of the most important ideas of the past decade. Although Rancière begins the discussion of this idea in his 1987 text The Ignorant Schoolmaster, it is really only in the last ten years that others have taken up the idea and attempted to work out what it might mean for politics, art and philosophy. Equality may also be something one wishes for in a future to come, after fundamental shifts in the arrangement and order of society. But this is not Rancière’s point at all. Equality is not something to be achieved, but something to be presupposed, universally. Everyone is equally intelligent."

Just in case you thought she was kidding:

"In principle then, there is no reason why a teacher is smarter than his or her student, or why educators shouldn’t be able to learn alongside pupils in a shared ignorance (coupled with the will to learn). The reason why we can relatively quickly understand complex arguments and formulae that have taken very clever people a long time to work out lends credence to Rancière’s insight that, at base, nothing is in principle impossible to understand and that everyone has the potential to understand anything."


Power seems to be living in a different universe from those of us in the academic periphery. Perhaps she is actually pulling a Sokalian stunt, but I suspect not. This sort of thing might be funny to many of us, but it seems to be taken seriously in departments of education around the world. Just take a look at the model teaching philosophy statements found on the Internet.

Another example of her writing is Sarah Palin: Castration as Plenitude. Presumably that is potentially understandable by everybody.

Friday, February 11, 2011

More on Citations

A column in the THE by Phil Baty indicates that there might be some change in the research impact indicator in the forthcoming THE World University Rankings. It is good that THE is considering changes, but I have a depressing feeling that Thomson Reuters, who collect the citations data, will have more weight in this matter than anyone or anything else.

Baty refers to a paper by Simon Pratt, who manages the data for Thomson Reuters and THE:

"The issue was brought up again this month in a paper to the RU11 group of 11 leading research universities in Japan. It was written by Simon Pratt, project manager for institutional research at Thomson Reuters, which supplies the data for THE’s World University Rankings.

Explaining why THE’s rankings normalise citations data by discipline, Pratt highlights the extent of the differences. In molecular biology and genetics, there were more than 1.6 million citations for the 145,939 papers published between 2005 and 2009, he writes; in mathematics, there were just 211,268 citations for a similar number of papers (140,219) published in the same period.

Obviously, an institution with world-class work in mathematics would be severely penalised by any system that did not reflect such differences in citations volume."

This is correct, but perhaps we should also consider whether the number of citations to papers in genetics is telling us something about the value that societies place on genetics rather than on mathematics, and perhaps that is something that should not be ignored.
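As a back-of-the-envelope illustration of what normalisation by discipline does, here is a sketch using the world averages implied by Pratt's own figures (the function and labels are mine, for illustration only; the actual Thomson Reuters procedure is finer-grained, normalising by year as well as field):

```python
# World citations per paper, 2005-2009, from the figures quoted above
FIELD_AVERAGE = {
    "molecular biology & genetics": 1_600_000 / 145_939,  # roughly 11 cites/paper
    "mathematics": 211_268 / 140_219,                     # roughly 1.5 cites/paper
}

def normalised_impact(citations: int, field: str) -> float:
    """Citations relative to the world average for the paper's field."""
    return citations / FIELD_AVERAGE[field]

# Fifteen citations is a modest score for a genetics paper but an
# outstanding one for a mathematics paper.
print(round(normalised_impact(15, "molecular biology & genetics"), 2))  # ~1.37
print(round(normalised_impact(15, "mathematics"), 2))                   # ~9.96
```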


Also, in the real world, are there many universities that are excellent in a single field, defined as narrowly as theoretical physics or applied mathematics, while being mediocre or worse in everything else? Anyone who thinks that Alexandria is the fourth-best university in the world for research impact because of its uncontested excellence in mathematics should take a look here.

There are also problems with normalising by region. Precisely what the regions are for the purposes of this indicator is not stated. If Africa is a region, does this mean that Alexandria got another boost, one denied to other Middle Eastern universities? Is Istanbul in Europe and Bilkent in Asia? Does Singapore get an extra weighting because of the poor performance of its Southeastern neighbours?

There are two other aspects of the normalisation that are not foregrounded in the article. First, TR apparently normalise by year as well. In some disciplines it is rare for a paper to be cited within a year of publication; in others it is commonplace. An article classified as being in a low-citation field would get a massive boost if, in addition, it had a few citations within months of publication.

Remember also that the scores are averages. A small total number of publications means an immense advantage for a university that has a few highly cited articles in low-citation fields and is located in a normally unproductive region. Alexandria's remarkable success was due to the convergence of four favourable factors: credit for publishing in a low-citation sub-discipline, the frequent citation of recently published papers, being located in a continent whose scholars are not generally noticed and, finally, the selfless cooperation of hundreds of faculty who graciously refrained from sending papers to ISI-indexed journals.
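A toy calculation shows how strong this averaging effect can be. The numbers below are invented, but the mechanism is the one described above: with a small denominator, a handful of outlier papers swamps everything else.

```python
def mean_normalised_impact(paper_scores):
    """Average of per-paper normalised impacts (1.0 = world average)."""
    return sum(paper_scores) / len(paper_scores)

# A large research university: 10,000 papers, solidly above average
big_university = [1.1] * 10_000

# A small institution: 20 ordinary papers plus 5 outliers, e.g. quickly
# cited articles in a low-citation field in a low-output region
small_institution = [0.5] * 20 + [40.0] * 5

print(round(mean_normalised_impact(big_university), 2))     # 1.1
print(round(mean_normalised_impact(small_institution), 2))  # 8.4
```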

Alexandria University may not be open for the rest of this year and may not take part in the second THE WUR exercise. One wonders, though, how many universities around the world could benefit from these four factors and how many are getting ready to submit data to Thomson Reuters.

Monday, February 07, 2011

Training for Academics

The bureaucratisation of higher education continues relentlessly. Times Higher Education reports on moves to make all UK academics undergo compulsory training. This is not a totally useless idea: a bit of training in teaching methodology would do no harm at all for the unprepared graduate assistants, part-timers and new PhDs who make up an increasing proportion of the workforce in European and American universities.

But the higher education establishment has more than this in mind:

"Plans to revise the UK Professional Standards Framework were published by the HEA in November after the Browne Review called for teaching qualifications to be made compulsory for new academics.

The framework, which was first published in 2006, is used to accredit universities' teaching-development activities, but the HEA has admitted that many staff do not see it as "relevant" to their career progression.

Under the HEA's proposals, the updated framework says that in future, all staff on academic probation will have to complete an HEA-accredited teaching programme, such as a postgraduate certificate in higher education. Postgraduates who teach would also have to take an HEA-accredited course.

A "sector-wide profile" on the number of staff who have reached each level of the framework would be published by the HEA annually.

Meanwhile, training courses would have to meet more detailed requirements."

A comment by "agreed" indicates just what is likely to happen:
 
"I did one of these courses a couple of years ago. I learnt nothing from the 'content' that I couldn't have learnt in a fraction of the time by reading a book. The bulk of the course was an attempt to compel all lecturers to adopt fashionable models of teaching with no regard to the need for students to learn content. The example set by the lecturers on the course was appalling: ill-prepared, dogmatic, and lacking in substance. A failure to connect with the 'students' and a generally patronising tone was just one of the weaknesses. Weeks of potentially productive time were taken up by jumping through hoops and preparing assignments. This is not an isolated case; I know of several other such courses in other institutions that were equally shambolic. I'm all for improving the quality of teaching, but this is nonsensical. The only real benefit was the collegial relations with academics from other departments, forged through common bonds of disgust and mockery aimed at this ridiculous enterprise (presumably designed to justify the continued employment of failed academics from other disciplines, given the role of teaching the rest of us how to teach)."

Thursday, February 03, 2011

Comparing Rankings 2

Number of Indicators

A ranking that contained only a single indicator would not be very interesting. Provided that the indicators actually measure different things, rankings with many indicators contain more information. On the other hand, the more indicators there are, the more likely it is that some will be redundant.

At the moment, the THE World University Rankings are in first place with 13 indicators and Paris Mines Tech is last with only one. We should note, however, that the THE indicators are combined into 5 super-indicators and scores are given only for the latter.

So we have the following order.

1. THE World University Rankings: 13 indicators (scores are given for only 5 indicator groups)
2. HEEACT: 8 indicators
3= Academic Ranking of World Universities (Shanghai): 6 indicators
3= QS World University Rankings: 6 indicators
5. Leiden: 5 indicators (strictly speaking, 5 separate rankings)
6= Webometrics: 4 indicators
6= Scimago Institutions Ranking: 4 indicators (1 used for ranking)
8. Paris Mines Tech: 1 indicator