India has announced that foreign universities which figure in the top 500 in the world will be able to enter into twinning arrangements with Indian universities. This sounds very good to anyone who assumes that getting a trusted list of the "top 500" is easy. And yes, it is, if we simply accept the preference that Sibal's office has shown for the Times Higher Education Supplement (THES) rankings and the Shanghai rankings.
- If we take the full top 500 from both of the above lists, we are talking not of 500 but of roughly 700-800 universities that could enter into twinning arrangements. Hence the first real clarification: we are not opening up the Indian education system to the top 500 in the world, but to the 700-800 or so institutions that appear in the top 500 of at least one of the two rankings.
- Secondly, I just hope the Ministry of HRD is aware that what it has been looking at as the THES rankings over the years was actually the QS-THES rankings. Two years ago QS, which conducted the rankings for THES, split off to start releasing its own separate rankings, so there is no longer ONE QS-THES ranking. There are two separate rankings.
- Non-multidisciplinary degree-awarding institutions are often left out of (or severely disadvantaged in) the ranking mechanism, as are non-research institutions. Does this mean that London Business School or INSEAD cannot enter into arrangements with Indian providers because they are not in the top 500 of the university rankings? Or are we prepared to also consider the top 500 of the MBA rankings, in which case the list of potential foreign providers who can partner with Indian institutions grows to 1,000 or so?
- And finally, rankings are either purely perception-based or, where several indicators are taken into consideration, so dependent on the cooperation of the education providers that often only inadequate ranking results come out. And many universities now hire staff whose job is basically to get the check boxes ticked off so that the institution can figure in the rankings, even when it may not deserve to.
- My gut instinct to dismiss the rankings as anything more than simple guides comes from the fact that the results differ so much between ranking systems. If they were all rankings of the same institutions, they should be the same or very similar; if they are meant to be critiques of the institutions, then the variance can be justified, but not otherwise. The fact that institutions can drop or rise so significantly in these lists within a single year itself makes the rankings questionable.
A very interesting blog post on rankings, titled 12 QUESTIONABLE FACTORS BEHIND TRADITIONAL UNIVERSITY RANKINGS, was shared with me, and I provide a cut-paste from the link below.
Rankings can be highly arbitrary.
Is it more important to care for your heart or your brain in order to be healthy? There is no easy answer to that, as you can’t live without either, yet when ranking colleges, certain types of data about essential academic services are weighted in just that manner. One factor is often determined to be more important than another, regardless of whether or not that factor really makes that much of a marked difference to the quality of education at the school. Essentially, much of what goes into ranking schools is arbitrary, to one degree or another, which can be seen by the fluctuations in rankings anytime new metrics are applied.
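The arbitrariness described above can be made concrete with a small sketch: the same two (entirely invented) schools, scored on the same underlying data, swap positions the moment the weights change. Every name and number here is hypothetical, chosen only to illustrate the point.

```python
# Hypothetical data: two invented schools measured on two factors.
schools = {
    "School A": {"research": 90, "teaching": 60},
    "School B": {"research": 60, "teaching": 90},
}

def rank(weights):
    """Return school names ordered by weighted score, best first."""
    scores = {
        name: sum(weights[factor] * value for factor, value in metrics.items())
        for name, metrics in schools.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# Identical data, different (arbitrary) weightings, opposite verdicts.
print(rank({"research": 0.7, "teaching": 0.3}))  # ['School A', 'School B']
print(rank({"research": 0.3, "teaching": 0.7}))  # ['School B', 'School A']
```

Nothing about either school changed between the two calls; only the weighting did, which is exactly the kind of methodology shift that makes institutions jump or fall in published rankings from one year to the next.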
Not all factors can possibly be considered.
Rankings can hardly hope to take into account every factor that influences the quality of education a school offers. In fact, some important issues like teaching, community engagement, mission, innovation, and social and economic impact aren’t even taken into account at all in university rankings, despite many being major reasons a student would want or not want to attend a school. Rankings often focus on a few key pieces of data (generally related to research output and funding) that don’t offer a well-rounded picture of a school’s strengths and weaknesses.
Rankings favor schools with money.
Some of the biggest factors that determine a school’s ranking are directly correlated with the money those schools have, whether through endowments, donations, government funding, or research income. This often results in some pretty skewed rankings, with schools that aren’t sitting on millions ranked well below their wealthier counterparts. Rankings also often use the number of articles, research funding, and academic salaries to assess quality, which again favors schools that simply have more money in the first place. While colleges and universities with bigger budgets can offer more amenities, facilities, and other opportunities than schools with smaller spending power, these things are not always directly correlated with a better education or long-term success.
Rankings don’t actually measure the quality of education.
At the heart of a great college education are great professors who help teach, inspire, and mentor students, yet rankings don’t have a reliable way of accurately measuring how much students are really learning at any school. In fact, some don’t really address teaching at all, focusing on factors like the income of academics or the number of Ph.D. students assigned to each. While the number of students in a class and individual attention can improve education (and are measurable factors), the thing that really matters when choosing a college — the education you get — can’t be measured, ranked, or even compared between students because it’s highly individual. Rankings can account for a number of factors, but they cannot predict which students will flourish in a given environment, nor which schools will truly offer the best educational experience. Even worse, most don’t include any measure of teaching effectiveness whatsoever.
There are clear biases.
Ranking systems, especially those attempting to measure quality on a global scale, have some clear and unfortunate biases. Schools located in English-speaking nations like America, Australia, Canada, and the UK are often offered major advantages in the metrics used to rank schools, making it nearly impossible for schools outside of these areas to break into the list of top schools. Rankings are also often biased toward certain types of research, namely that in STEM fields that often produces a greater number of articles per year and brings in much higher levels of funding than that in the humanities or social sciences. These factors mean that students aren’t getting a clear picture of what schools really have to offer, and which might truly offer the best experience, resources, and support throughout a college education.
Criteria change frequently, making it hard to compare over time or between rankings.
One of the major criticisms of some of the large ranking systems is that they are inconsistent, frequently changing their criteria or ranking methodology so that it’s nearly impossible to make comparisons with rankings in past years or with other ranking systems. This isn’t helpful to students, who may be unsure what is being measured when schools are ranked, nor to the schools themselves who may see large drops between one year and another due to factors entirely out of their control. It also makes it difficult to see which schools are truly making improvements and which are simply reaping the benefits of a new ranking methodology.
They put the chicken before the egg.
Is a student who attends a top-ranked school more successful simply because he or she went to that school? Not necessarily. Talented, intelligent individuals are more likely to be successful as a whole, regardless of where they go to school. College rankings simply can’t account for individual ability of students, and all factors being equal, students who attend a top-tier school aren’t necessarily more likely to net big salaries and career success than those who attend a lower ranked school. Yet rankings, and the colleges at the top of those rankings, would often have you believe otherwise. That isn’t to say that a great college education can’t help students along in their careers, but it is only one in an incredibly complex set of factors that leads to long-term success and can’t be boiled down to simple cause and effect.
Top colleges aren’t a good match for all students.
College-age kids and their parents want the best, and rankings imply that schools at the top of the list offer just that. While this may be true in some cases, it isn’t the case for every student. Not all students will flourish at top-tier schools. Some may get more out of courses taken at a small college, others may find a better program in their major elsewhere, and some may simply appreciate an educational environment that’s less competitive than that offered at many top schools. There are a variety of reasons why a school that ranks well may not offer the best experience for a given student, which is why rankings, even the best systems, should only make up a small part of the decision to attend a specific school.
Rankings rely on peer-assessments.
A hefty portion of the rankings for colleges and universities relies on peer assessments. These come from academics and officials at other schools, who are asked to give their honest opinion about the services, education, and academics at a given school. It’s not hard to see some potential problems with this system. Sometimes these assessments are left blank for schools that officials don’t know enough about to accurately rate, meaning data for certain schools may be richer than that for others, giving much more weight to a particularly high or low assessment. Additionally, individuals outside of a school can hardly be expected to have an accurate sense of life on campus and can only make assumptions based on what they do know about a school, which may or may not be true to reality. And, sadly, there’s always the possibility that individuals will give unnecessarily harsh ratings of other schools in an attempt to promote their own institutions.
Some schools refuse to participate.
Ranking systems don’t really mean much unless they include all schools in a given geographic area. Yet many schools refuse to participate in ranking systems or do not provide the full set of data required to accurately assess them on a given metric. One notable example is Reed College, which has questioned the validity of rankings since they first appeared and refuses to participate in the U.S. News & World Report rankings. The school not only believes that rankings are highly arbitrary and often useless, but maintains that it wants to pursue its own educational philosophy, not one dictated by an outside force. Over the years, colleges like Stanford, Alma, Barnard, Sarah Lawrence, Kenyon, Gettysburg, and St. John’s have also boycotted rankings, and in 2007 dozens of public and private colleges refused to participate in the peer assessment survey required by the U.S. News rankings.
Some schools lie.
Students aren’t the only ones who cheat to get ahead; colleges and universities do it, too, especially when it comes to rankings that can potentially cost them big money and connections. In 1994, The Wall Street Journal published an exposé about colleges that were flagrantly manipulating data in order to move up in the rankings. Unfortunately, in the years since then little has changed. In 2011, Claremont McKenna College was found to have been inflating its students’ reading and math SAT scores for the previous five years. It’s unlikely that Claremont McKenna was the only school doing this kind of shady reporting; others just haven’t been caught yet, drawing into question the validity of rankings as a whole.
Most rankings are far too broad.
Is it really possible to compile a list of the best universities in the world? While these kinds of rankings are becoming increasingly popular, their usefulness is questionable. Critics of university ranking systems feel that smaller, more specific rankings are much more accurate in determining where schools really rank. It makes sense, as a top-tier medical research school can hardly be fairly compared with a school that is known for its writing programs. In 2009, Forbes ranked West Point as the best college in the U.S., but it isn’t hard to figure out that that school, no matter how prestigious, might not be the best match for every degree program or student.