The failings of the Rankings and why I don’t trust them…

India has announced that foreign universities which figure in the top 500 in the world will be able to enter into twinning arrangements with Indian universities. That sounds very good to anyone who assumes that getting a trusted list of the “top 500” is easy. And it is, if we simply accept the preference that Sibal’s office has shown for the Times Higher Education Supplement (THES) rankings and the Shanghai rankings.

  • If we take the full top 500 from both of the above listings, we are talking of not 500 but roughly 700-800 universities that can enter into twinning arrangements, because the two lists only partly overlap (a back-of-the-envelope sketch follows this list). So the first real clarification is that we are not opening up the Indian education system to the top 500 in the world, but to the 700-800 or so institutions that appear in the top 500 of at least one of the two rankings.
  • Secondly, I just hope the Ministry of HRD is aware that what it has been looking at as the THES rankings over the years was actually the QS-THES rankings. Two years ago QS, which compiled the rankings for THES, split off to start releasing its own separate rankings, so there is no ONE QS-THES ranking any more. There are two separate rankings.
  • Single-discipline degree-awarding institutions are often left out of (or severely disadvantaged by) the ranking mechanism, and so are non-research institutions. Does this mean that London Business School or INSEAD cannot enter into arrangements with Indian providers because they are not in the top 500 of the university rankings? Or are we prepared to also consider the top MBA rankings? If so, we increase the list of potential foreign providers who can partner with Indian institutions to 1,000 or so.
  • There is also the whole business of how the rankings are compiled: they are either perception-based, or, where several indicators are taken into consideration, the cooperation of the education providers is so critical that often only inadequate ranking results come out. And, mind you, many universities now hire staff whose job is basically to get the check boxes ticked off so that the institution can figure in the rankings, whether or not it deserves to.
  • Finally, my gut reason to dismiss the rankings as anything beyond simple guides is that the results differ so much between them. If they were all rankings of the same institutions on the same merits, they should be the same or very similar. If they are meant to be critiques of the institutions, then the variance can be justified, but not otherwise. And the fact that institutions can drop or improve so significantly in these lists within a single year makes the rankings themselves questionable.
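
As a back-of-the-envelope check on the first point above, the size of the combined pool is simple inclusion-exclusion over the two lists. A minimal sketch in Python, where the overlap figure is an assumption for illustration, not published data:

```python
# Back-of-the-envelope: combining two top-500 lists.
thes_top = 500      # size of the THES/QS top-500 list
shanghai_top = 500  # size of the Shanghai top-500 list
overlap = 250       # ASSUMED number of universities appearing in both lists

# Inclusion-exclusion: |A ∪ B| = |A| + |B| - |A ∩ B|
eligible = thes_top + shanghai_top - overlap
print(eligible)  # 750 -- an overlap of 200-300 gives the 700-800 range above
```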

A very interesting blog post on rankings, titled 12 QUESTIONABLE FACTORS BEHIND TRADITIONAL UNIVERSITY RANKINGS, was shared with me, and I provide a cut-paste from the link (with a small illustration of the weighting problem after the list).

  1. Rankings can be highly arbitrary.

    Is it more important to care for your heart or your brain in order to be healthy? There is no easy answer for that, as you can’t live without either, yet when ranking colleges, certain types of data about essential academic services are weighted in just that manner. One factor is often determined to be more important than another, regardless of whether or not that factor really makes a marked difference to the quality of education at the school. Essentially, much of what goes into ranking schools is arbitrary, to one degree or another, which can be seen by the fluctuations in rankings anytime new metrics are applied.

  2. Not all factors can possibly be considered.

    Rankings can hardly hope to take into account every factor that influences the quality of education a school offers. In fact, some important issues like teaching, community engagement, mission, innovation, and social and economic impact aren’t even taken into account at all in university rankings, despite many being major reasons a student would want or not want to attend a school. Rankings often focus on a few key pieces of data (generally related to research output and funding) that don’t offer a well-rounded picture of a school’s strengths and weaknesses.

  3. Money plays a big role.

    Some of the biggest factors that determine a school’s ranking are directly correlated with the money those schools have, either through endowments, donations, government funding, or research income. This often results in some pretty skewed rankings, with schools that aren’t sitting on millions ranked well below their wealthier counterparts. Rankings also often use the number of articles, research funding, and academic salaries to assess quality, which, again, favors schools that simply have more money in the first place. While colleges and universities with bigger budgets can offer more amenities, facilities, and other opportunities than schools with smaller spending power, these things are not necessarily correlated with a better education or long-term success.

  4. Rankings don’t actually measure the quality of education.

    At the heart of a great college education are great professors who help teach, inspire, and mentor students, yet rankings don’t have a reliable way of accurately measuring how much students are really learning at any school. In fact, some don’t really address teaching at all, focusing on factors like the income of academics or the number of Ph.D. students assigned to each. While the number of students in a class and individual attention can improve education (and are measurable factors), the thing that really matters when choosing a college — the education you get — can’t be measured, ranked, or even compared between students because it’s highly individual. Rankings can account for a number of factors, but they cannot predict which students will flourish in a given environment, nor which schools will truly offer the best educational experience. Even worse, most don’t include any measure of teaching effectiveness whatsoever.

  5. There are clear biases.

    Ranking systems, especially those attempting to measure quality on a global scale, have some clear and unfortunate biases. Schools located in English-speaking nations like America, Australia, Canada, and the UK are often offered major advantages in the metrics used to rank schools, making it nearly impossible for schools outside of these areas to break into the list of top schools. Rankings are also often biased toward certain types of research, namely that in STEM fields that often produces a greater number of articles per year and brings in much higher levels of funding than that in the humanities or social sciences. These factors mean that students aren’t getting a clear picture of what schools really have to offer, and which might truly offer the best experience, resources, and support throughout a college education.

  6. Criteria change frequently, making it hard to compare across years or between ranking systems.

    One of the major criticisms of some of the large ranking systems is that they are inconsistent, frequently changing their criteria or ranking methodology so that it’s nearly impossible to make comparisons with rankings in past years or with other ranking systems. This isn’t helpful to students, who may be unsure what is being measured when schools are ranked, nor to the schools themselves who may see large drops between one year and another due to factors entirely out of their control. It also makes it difficult to see which schools are truly making improvements and which are simply reaping the benefits of a new ranking methodology.

  7. They put the chicken before the egg.

    Is a student who attends a top-ranked school more successful simply because he or she went to that school? Not necessarily. Talented, intelligent individuals are more likely to be successful as a whole, regardless of where they go to school. College rankings simply can’t account for individual ability of students, and all factors being equal, students who attend a top-tier school aren’t necessarily more likely to net big salaries and career success than those who attend a lower ranked school. Yet rankings, and the colleges at the top of those rankings, would often have you believe otherwise. That isn’t to say that a great college education can’t help students along in their careers, but it is only one in an incredibly complex set of factors that leads to long-term success and can’t be boiled down to simple cause and effect.

  8. Top colleges aren’t a good match for all students.

    College-age kids and their parents want the best, and rankings imply that schools at the top of the list offer just that. While this may be true in some cases, it isn’t the case for every student. Not all students will flourish at top-tier schools. Some may get more out of courses taken at a small college, others may find a better program in their major elsewhere, and some may simply appreciate an educational environment that’s less competitive than that offered at many top schools. There are a variety of reasons why a school that ranks well may not offer the best experience for a given student, which is why rankings, even the best systems, should only make up a small part of the decision to attend a specific school.

  9. Rankings rely on peer assessments.

    A hefty portion of the rankings for colleges and universities rely on peer assessments. These come from academics and officials at other schools, who are asked to give their honest opinion about the services, education, and academics at a given school. It’s not hard to see some potential problems with this system. Sometimes these assessments are left blank for schools that officials don’t know enough about to rate accurately, meaning the data for certain schools may be richer than that for others, giving much more weight to a particularly high or low assessment. Additionally, individuals outside of a school can hardly be expected to have an accurate sense of life on campus and can only make assumptions based on what they do know about a school, which may or may not be true to reality. And, sadly, there’s always the possibility that individuals will give unnecessarily harsh ratings of other schools, in an attempt to promote their own institutions.

  10. Not all universities participate.

    Ranking systems don’t really mean much unless they include all schools in a given geographic area. Yet many schools refuse to participate in ranking systems or do not provide the full set of data required to accurately assess them on a given metric. One notable example is Reed College, which has questioned the validity of rankings since they first appeared and refuses to participate in the U.S. News & World Report rankings. Reed not only believes that rankings are highly arbitrary and often useless, but also maintains that it wants to pursue its own educational philosophy, not one dictated by an outside force. Over the years, colleges like Stanford, Alma, Barnard, Sarah Lawrence, Kenyon, Gettysburg, and St. John’s have also boycotted rankings, and in 2007 dozens of public and private colleges refused to participate in the peer assessment survey required by the U.S. News rankings.

  11. Some schools lie.

    Students aren’t the only ones who cheat to get ahead; colleges and universities do it, too, especially when it comes to rankings that can potentially cost them big money and connections. In 1994, The Wall Street Journal published an exposé about colleges that were flagrantly manipulating data in order to move up in the rankings. Unfortunately, little has changed in the years since. In 2011, Claremont McKenna College was found to have been inflating the reading and math SAT scores it reported for the previous five years. It’s unlikely that Claremont McKenna was the only school doing this kind of shady reporting; others just haven’t been caught yet, calling into question the validity of rankings as a whole.

  12. Most rankings are far too broad.

    Is it really possible to compile a list of the best universities in the world? While these kinds of rankings are becoming increasingly popular, their usefulness is questionable. Critics of university ranking systems feel that smaller, more specific rankings are much more accurate in determining where schools really rank. It makes sense, as a top-tier medical research school can hardly be fairly compared with a school that is known for its writing programs. In 2009, Forbes ranked West Point as the best college in the U.S., but it isn’t hard to figure out that that school, no matter how prestigious, might not be the best match for every degree program or student.
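
Points 1 and 6 above come down to simple arithmetic: most rankings reduce to a weighted sum of metric scores, so changing the weights can reorder institutions even when nothing about the institutions themselves has changed. A minimal sketch, with invented names, scores, and weights that are not drawn from any real ranking:

```python
# Toy illustration: the same two institutions swap places when the
# weights assigned to each metric change. All values are invented.
scores = {
    "University A": {"research": 0.9, "teaching": 0.5},
    "University B": {"research": 0.6, "teaching": 0.9},
}

def rank(weights):
    """Order institutions by the weighted sum of their metric scores."""
    def total(name):
        return sum(w * scores[name][metric] for metric, w in weights.items())
    return sorted(scores, key=total, reverse=True)

print(rank({"research": 0.7, "teaching": 0.3}))  # ['University A', 'University B']
print(rank({"research": 0.3, "teaching": 0.7}))  # ['University B', 'University A']
```

Neither ordering is more “correct” than the other; the choice of weights is the ranking.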


10 Comments

  1. Very astute article, Mr. Singh. Many exceptional institutions never appear on these rankings. A recent US National Science Foundation study found that a super-abundance of PhD awardees received their undergraduate educations at small, elite liberal arts colleges in the US (e.g. Oberlin, Harvey Mudd, Reed, Wellesley). These institutions never appear in the global rankings, yet they are the preferred recruiting ground for doctoral programs run by the likes of Chicago, Harvard, Stanford, etc. One would think that the Indian government could come up with a more rigorous standard for evaluation than commercial rankings.

    Mitch Leventhal, PhD (Chicago)
    Vice Chancellor, State University of New York


    1. Thanks, Mr Leventhal. Your comments make perfect sense, and thanks also for taking the time to post your thoughts… Keep up the good work that you are doing with AIRC too. The problem with the Indian Government is that it has good intentions but often has poor advisors… It has shortlisted two rankings to decide which foreign universities can partner with Indian institutions, but forgot that those two rankings were themselves severely criticized by top Indian institutions, including the IITs, since the Indian system is different and the ranking methodology disadvantaged them. In India, under each university there are some 50 colleges, each with varying standards; hence one ranking for one university does not make sense. Ironically, if the foreign universities too say that they will partner only with Indian institutions in a top-500 list, we will be left with only 1-2 universities in India that are eligible. After all, partnerships are only between equals…


  2. Excellent points, Ravi. What amazes me is that an institution like the London School of Economics is using a private ranking done by an Indian magazine (and we know how ‘scientifically’ these rankings are done… the less said the better) as a parameter in their admission criteria for students from India. They did when I last checked, in September 2011, when I was aghast to see the kind of people getting in.


    1. I fully agree that Indian rankings are very incomplete. There are Indian rankings that show IIPM amongst the better institutions even though the Government of India does not recognize its status and it cannot offer degrees. I was looking at one publication’s ranking of schools and checked its parameters, and it appears they ranked only on perception… Wow… we can all have perception-based rankings…


    1. The magazines and websites make a lot of money from the rankings. In India, take a look at the India Today college-rankings issue and you will see every second page carrying an advert for a B-grade or C-grade private institution…


  3. Can Indian universities not do their own detailed research and evaluation and decide which institutions they want to have an agreement with? They could proactively search for institutions as per their needs, or be reactive in their approach and investigate whoever approaches them.
    Not sure why the government needs to set an overarching policy here.


    1. The universities normally do that. The Government should focus on more pressing issues instead of getting involved in finding shortcuts. They have not been able to pass the Foreign Education Bill and are not likely to get it passed anytime soon. They make so many announcements, and very few of them ever become regulations. Sibal often has good intentions but nothing more; he tends to be unable to build consensus… Look at the current issues around IIT admissions as an example.


  4. Hi Ravi. I can see why the Indian government may have taken such an approach. While it is a very broad approach, and possibly not fully thought out, what they may be seeking is to keep out the usual culprits who enter into collaborations for financial rather than academic gain. I do agree that Indian universities should be able to choose the right partner for themselves, but who would monitor this to ensure that the relationships formed are genuine, two-way, and of benefit to the students themselves? There is so much that needs to be thought out.

