iitypeii

iitypeii is the work of two current-affairs nitpickers ([C] and [M]) and guests ([G]), giving their perspective and insight on publicly available statistics, the connections (and lack thereof) between them, and the erroneous conclusions drawn from them...

Are university rankings a bad idea? [C+M]

University rankings are used extensively by all sorts of people to make important choices - from prospective undergraduates choosing where to study, to human resources departments judging who has the better academic credentials, to university bureaucrats deciding how much to pay themselves.

It shouldn't come as a great surprise to anyone that there is a subjective element in university rankings, and that the results may vary somewhat with the methodology chosen. But how much? How sensitive are the rankings to the methodology chosen? Can the ranking of a given institution be artificially inflated by simple tinkering?

For our comparison, we will consider three of the most influential UK rankings from the perspective of a potential mathematics undergraduate (if she were simply to google for them) - The Complete University Guide (CUG), the Guardian University Guide (GUG) [Guardian Newspaper], and, if we consider UK universities in a worldwide context, the QS University Rankings (QS) [Quacquarelli Symonds]. These rankings all use differing methodologies, and each provides a mathematics ranking in addition to an overall ranking.

Now, suppose she has managed to narrow down her choice of university destination to the following short-list. How should she choose between them?

The universities above are sorted in increasing order of their average position across the three rankings.

Reassuringly, Cambridge, Oxford, Warwick and Imperial - anecdotally ranked by mathematicians as the strongest UK departments (so-called COWI) - appear at or near the top of the short-list regardless of the ranking chosen. Beyond COWI, however, the positions on the list vary wildly.

You might think that the inconsistency in position is caused by the rankings using fundamentally different methodologies. In fact, each methodology is broadly the same: a collection of metrics is compiled for every university (for instance student satisfaction, graduate employability, or research quality), and these are weighted and combined into a single score. The scores are then sorted in decreasing order to produce a ranking.
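The recipe above can be sketched in a few lines of code. The universities, metrics and weightings below are entirely made up for illustration - but note how the same underlying data produces two different orderings depending on the weights chosen:

```python
# Hypothetical metric scores (out of 100) for three fictional universities.
metrics = {
    "Uni A": {"satisfaction": 90, "employability": 70, "research": 60},
    "Uni B": {"satisfaction": 70, "employability": 85, "research": 80},
    "Uni C": {"satisfaction": 78, "employability": 80, "research": 70},
}

def rank(weights):
    """Combine each university's metrics into a weighted score, then sort descending."""
    scores = {
        uni: sum(weights[m] * value for m, value in vals.items())
        for uni, vals in metrics.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# Two equally plausible weightings, two different league tables:
teaching_led = rank({"satisfaction": 0.5, "employability": 0.3, "research": 0.2})
research_led = rank({"satisfaction": 0.2, "employability": 0.3, "research": 0.5})

print(teaching_led)  # ['Uni A', 'Uni C', 'Uni B']
print(research_led)  # ['Uni B', 'Uni C', 'Uni A']
```

A "teaching-led" weighting puts Uni A first; shifting weight towards research puts Uni A last. Nothing about the universities changed - only the arbitrary choice of weights.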

So there are at least three possible categories of weakness in a ranking - the metrics chosen, how they are weighted, and the purpose for which the final ranking is used. These will be discussed in our forthcoming posts - stay tuned and subscribe!

"If it cannot be expressed in figures, it is not science, it is opinion."
Robert Anson Heinlein