World University Rankings: what I’ve learned

For the past eight months I've been working as a Data Scientist in the Times Higher Education's World University Rankings team, helping to take the analysis behind the rankings in-house (it was previously all done by Reuters) and improving it along the way. So what did I learn?

Firstly, a common theme I've found across various sectors is that, despite now living in the so-called "Data Age", it is still very difficult to source good worldwide data – data on the UK is far easier to find! As a result, while some of the data used for the rankings is globally available, much of the rest has to be supplied by the universities themselves. So being as certain as possible about the input data is a "must" for such a relied-upon ranking; a lot of sense-checking has now been built into the process, including year-on-year checks and outlier validation checks.
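To give a flavour of what those checks might look like in practice, here's a minimal pandas sketch; the column names and thresholds are my own hypothetical examples, not THE's actual rules.

```python
# Hypothetical sketch of the sense-checks described above; the column
# names and thresholds are illustrative assumptions, not THE's rules.
import pandas as pd

def flag_suspect_values(df: pd.DataFrame) -> pd.DataFrame:
    """Flag year-on-year jumps and statistical outliers for manual review."""
    out = df.copy()
    # Year-on-year check: flag any change of more than 25% vs last year.
    out["yoy_change"] = (out["value_2015"] - out["value_2014"]) / out["value_2014"]
    out["yoy_flag"] = out["yoy_change"].abs() > 0.25
    # Outlier check: flag values more than 3 standard deviations from the mean.
    z_scores = (out["value_2015"] - out["value_2015"].mean()) / out["value_2015"].std()
    out["outlier_flag"] = z_scores.abs() > 3
    return out

# Made-up example: a metric reported by three fictional institutions.
submissions = pd.DataFrame({
    "institution": ["Uni A", "Uni B", "Uni C"],
    "value_2014": [14.2, 16.0, 15.1],
    "value_2015": [14.5, 24.0, 15.3],  # Uni B's 50% jump gets flagged
})
print(flag_suspect_values(submissions)[["institution", "yoy_flag", "outlier_flag"]])
```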

Despite these difficulties in sourcing & verifying data, this year THE managed to assess many more institutions across the world than ever before. As part of this, I contributed (using Tableau) to some of the analysis pieces published in the THE magazine, for example the infographic and world map here, depicting the spread of universities included: www.timeshighereducation.com/student/news/best-universities-world-revealed-world-university-rankings-2015-2016

[Image: feature from the THE magazine, displaying the spread of universities now analysed]

There is also now much greater transparency about how the results are compiled – part of the reason for taking the analysis in-house was to have full control of, and access to, the data, and to allow universities themselves to explore the dataset and see how they could best improve their student offering compared to peers. The methodology itself is on the THE website (https://www.timeshighereducation.com/news/ranking-methodology-2016) so I won't repeat it here.

You might notice that the indicators inherently cover a good mix of time periods – reputation (prestige built up over the past), factual attributes (the present), and funding (investment in future research). Including all of these outlooks makes the weighting of the measures ever more important, to ensure, for example, that a university with a brilliant past is not highly ranked in today's tables solely because of its performance in a historic indicator.

As Duncan Ross pointed out in his article on kilo-papers (https://duncan3ross.wordpress.com/2015/08/18/its-not-just-the-hadron-collider-thats-large-super-colliders-and-super-papers/), however, another great difficulty with university rankings is that there is no "right" answer to calibrate against. Ideally you'd test the weightings of the various indicators by comparing different combinations against real historical data – but you can't do that here. So, in the absence of a more scientific way to determine the weightings, in 2010 each main pillar was given equal weight (30%), and the weights of the indicators within each pillar were allocated in consultation with a panel of university academics.
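As a toy illustration of how such weightings feed into an overall score (the pillar names and weights below are simplified stand-ins, not THE's published methodology), the combination is just a weighted sum:

```python
# Toy example of combining pillar scores into an overall score.
# The pillars and weights are simplified stand-ins, not THE's methodology.
PILLAR_WEIGHTS = {
    "teaching": 0.30,
    "research": 0.30,
    "citations": 0.30,
    "other": 0.10,  # remaining indicators, lumped together for simplicity
}

def overall_score(pillar_scores: dict[str, float]) -> float:
    """Weighted sum of per-pillar scores (each assumed to be on a 0-100 scale)."""
    assert abs(sum(PILLAR_WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weight * pillar_scores[pillar]
               for pillar, weight in PILLAR_WEIGHTS.items())

print(overall_score({"teaching": 85.0, "research": 90.0,
                     "citations": 78.0, "other": 60.0}))  # 81.9
```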

Finally, a great many enquiries were received as to why THE doesn't rank universities individually all the way down the list (unlike some competitors). Universities further down the table aren't lumped together simply because it's too hard, or less interesting, towards the bottom – there is a valid statistical rationale. Part of each university's score comes from a reputation survey, in which a selected sample of academics are asked to name the 10 universities they deem to be the best (in any order). Such a survey naturally produces a heavy tail:

[Chart: the long tail of the reputation survey data]
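For intuition, here's a hypothetical simulation (not THE's data) of such a survey: each respondent names 10 universities, preferences are skewed towards a handful of famous names, and the resulting vote counts show exactly this kind of heavy tail.

```python
# Hypothetical simulation of a reputation-style survey; the distribution
# and all numbers are illustrative, not drawn from THE's actual data.
import numpy as np

rng = np.random.default_rng(42)
n_universities, n_respondents = 1000, 10_000

# Zipf-like popularity: university i is roughly 1/i as likely to be named.
popularity = 1.0 / np.arange(1, n_universities + 1)
popularity /= popularity.sum()

votes = np.zeros(n_universities, dtype=int)
for _ in range(n_respondents):
    named = rng.choice(n_universities, size=10, replace=False, p=popularity)
    votes[named] += 1

sorted_votes = np.sort(votes)[::-1]
print(sorted_votes[:5])   # a few universities hoover up most of the votes...
print(sorted_votes[-5:])  # ...while the long tail gets barely any
```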

THE investigated the confidence limits of this reputation data, using the 95% confidence interval for each score, p ± 1.96·√(p(1−p)/n), where p is the proportion of votes and n is the sample size. As you might expect, further down the ranks some of these confidence intervals start to overlap: the maximum possible score of (say) the 502nd university was greater than the minimum possible score of the 501st, so the two ranks could theoretically (though with low likelihood) be in reverse order.
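A minimal sketch of that overlap check, using the normal-approximation interval above (the proportions and sample size are made-up for illustration):

```python
# Sketch of the confidence-interval overlap check described above;
# the proportions and sample size are made-up illustrative numbers.
from math import sqrt

def ci_95(p: float, n: int) -> tuple[float, float]:
    """95% confidence interval for a proportion p from a sample of size n."""
    half_width = 1.96 * sqrt(p * (1 - p) / n)
    return (p - half_width, p + half_width)

def ranks_could_swap(p_higher: float, p_lower: float, n: int) -> bool:
    """True if the intervals overlap, i.e. the true order might be reversed."""
    lower_bound_higher, _ = ci_95(p_higher, n)
    _, upper_bound_lower = ci_95(p_lower, n)
    return upper_bound_lower > lower_bound_higher

# Two universities far down the table with nearly identical vote shares:
print(ranks_could_swap(p_higher=0.012, p_lower=0.011, n=10_000))  # True
```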

Of course, reputation represents only two of the thirteen indicators used, so by the time it is combined with the other, more precise indicators, the above effect is much less pronounced. Even so, THE prudently bands its results after the 200th individually-ranked university: beyond that point the scoring is simply not statistically reliable enough to say with sufficient certainty that the 501st university places above the 502nd, so both are allocated the rank "501st-600th".
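Here's a sketch of how that banding might be implemented; I've assumed uniform 100-wide bands after the 200th place for simplicity, whereas the bands THE actually publishes need not all be the same width.

```python
# Illustrative banding function: individual ranks up to a cut-off,
# then fixed-width bands. Uniform 100-wide bands are an assumption
# made for simplicity, not THE's exact published band boundaries.
def display_rank(rank: int, cutoff: int = 200, band_width: int = 100) -> str:
    """Return an exact rank up to the cut-off, and a rank band beyond it."""
    if rank <= cutoff:
        return str(rank)
    # Snap to the start of the band containing this rank, e.g. 501 -> 501-600.
    band_start = cutoff + ((rank - cutoff - 1) // band_width) * band_width + 1
    return f"{band_start}-{band_start + band_width - 1}"

for r in (1, 200, 201, 350, 501, 502):
    print(r, "->", display_rank(r))
# 1 -> 1, 200 -> 200, 201 -> 201-300, 350 -> 301-400, 501 -> 501-600, 502 -> 501-600
```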

[Chart: the banding ranges for the World University Rankings results]

This was just one of the analyses THE performed in its overhaul of the rankings, which are now (in my view) the most considered of the worldwide university rankings available.

