Official explains why IITs are not on list of world's top 200 universities

IndoCarib

India's top engineering institutes, the IITs, have failed to find a place in the world's top 200 universities, and the reason, according to an IIT director, is that they didn't pay the huge price to be even considered for a global ranking.

"They (a ranking agency) charge 1.5 lakh US dollars per IIT and even the ministry is aware of this," said Indraneil Manna, the director of IIT-Kanpur, "One agency proposed that we pay 4.5 lakh dollars to be evaluated. It was quite a large sum."


Without naming any ranking agency, Mr Manna claimed that since the IITs refused to subscribe, their place in the global rankings suffered as it was based on outdated data available freely on the Internet.


The top IIT official, who is part of a five-member committee of directors set up to explore global ranking systems, said, "We don't shy away from scrutiny. If the exercise is transparent and objective and most importantly their data interpretation is precise then it's worth spending that much money."


The IITs have always maintained that they don't want to be part of the global rat race, but the slide in rankings seems to have rattled the premier institutes.

In the prestigious QS rankings of the world's 500 best universities released on September 10, IIT Delhi ranked at 222, IIT Bombay at 233, IIT Kanpur at 295, IIT Madras at 313 and IIT Kharagpur at 346.

The world's top three were MIT, Harvard and Britain's Cambridge.

"It hurts our reputation, so now we have been forced to explore the ranking system," Mr Manna said, "We never felt the need to chase a ranking system, but the President raised a concern that IITs are not making it to any global list, so we felt the need to address that concern."

The IITs, which count among their alumni some of the world's best brains, have faced criticism from many former students about a decline in the quality of teaching and research in the institutes.

Official explains why IITs are not on list of world's top 200 universities | NDTV.com
 
IITs celebrated in the Dilbert cartoon series. Don't take an IITian lightly :lol:

[Image: dilbert_IIT4.jpg]
:omghaha:
 
The IIT entrance exam is one of the toughest exams to crack, and IITians are rated among the best.

Leave those Western-biased rankings and PISA pi$$ tests aside. Better to concentrate on our own standards, which are quite good :cheers:
 
LOL, I can't believe IIT academics can come up with such a ludicrous excuse. Rankings are based on empirical data, like research output, citations, etc. Whether you subscribe or not won't make a difference to the amount of research you're producing or your citations per paper.

QS's ranking is far from prestigious; it is considered crap by graduate researchers. Try the Times Higher Education ranking or the Academic Ranking of World Universities.
 
Data sources

The information used to compile the World University Ranking comes partly from the online surveys carried out by QS, partly from Scopus, and partly from an annual information-gathering exercise carried out by QS itself. QS collects data from universities directly, from their web sites and publications, and from national bodies such as education ministries and the National Center for Education Statistics in the US and the Higher Education Statistics Agency in the UK.
Aggregation

The data are aggregated into columns according to each institution's Z score, an indicator of how far removed that institution is from the average. Between 2004 and 2007 a different system was used whereby the top university for any measure was scaled as 100 and the others received a score reflecting their comparative performance. According to QS, this method was dropped because it gave too much weight to some exceptional outliers, such as the very high faculty/student ratio of the California Institute of Technology. In 2006, the last year before the Z score system was introduced, Caltech was top of the citations per faculty score, receiving 100 on this indicator, because of its highly research and science-oriented approach. The next two institutions on this measure, Harvard and Stanford, each scored 55. In other words, 45 per cent of the possible difference between all the world's universities was between the top university and the next one (in fact two) on the list, leaving every other university on Earth to fight over the remaining 55 per cent.

Likewise in 2005, Harvard was the top university and MIT was second with 86.9, so that 13 per cent of the total difference between all the world's universities lay between first and second place. In 2011, the University of Cambridge was top and the second institution, Harvard, got 99.34. So the Z score system allows the full range of available difference to be used in a more informative way.
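
To make the difference between the two scaling schemes concrete, here is a minimal Python sketch; the citations-per-faculty figures are made up for illustration (not real QS data), and it simply contrasts the old "scale the top score to 100" approach with Z-score normalisation as described above:

```python
import statistics

def scale_to_top(scores):
    """Pre-2007 style: the best raw score becomes 100 and the rest are
    scaled proportionally, so one extreme outlier compresses everyone else."""
    top = max(scores.values())
    return {name: round(100 * value / top, 1) for name, value in scores.items()}

def z_scores(scores):
    """Post-2007 style: each score is expressed in standard deviations from
    the mean, so positions are measured against the field as a whole."""
    values = list(scores.values())
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return {name: round((value - mean) / sd, 2) for name, value in scores.items()}

# Hypothetical citations-per-faculty figures: one extreme outlier plus a
# tightly packed field, loosely mimicking the Caltech example above.
citations = {"Outlier Tech": 200, "Univ A": 110, "Univ B": 108,
             "Univ C": 105, "Univ D": 100, "Univ E": 95}

print(scale_to_top(citations))  # the outlier-to-runner-up gap eats ~45 of the 100 points
print(z_scores(citations))      # each score is now relative to the mean and typical spread
```

Under the first scheme most of the 0-100 range is consumed by the outlier's lead, exactly as in the Caltech example; under the second, the remaining institutions are differentiated relative to the average and the overall spread.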
Classifications

In 2009, a column of classifications was introduced to provide additional context to the rankings tables. Universities are classified by size, defined by the size of the student body; comprehensive or specialist status, defined by the range of faculty areas in which programs are offered; and research activity, defined by the number of papers published in a five-year period.
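
As a rough illustration of how such a classification column could be derived, here is a small Python sketch along the three axes just described; the numeric cut-offs are purely hypothetical placeholders, since the text above does not give QS's actual thresholds:

```python
def classify(students: int, faculty_areas: int, papers_5yr: int) -> dict:
    """Assign QS-style classification labels along the three axes described
    above. The numeric cut-offs are invented for illustration only."""
    if students > 30000:
        size = "XL"
    elif students > 12000:
        size = "L"
    elif students > 5000:
        size = "M"
    else:
        size = "S"

    scope = "Comprehensive" if faculty_areas >= 5 else "Specialist"

    if papers_5yr > 10000:
        research = "Very high"
    elif papers_5yr > 3000:
        research = "High"
    else:
        research = "Medium or low"

    return {"size": size, "scope": scope, "research activity": research}

# Example: a mid-sized, engineering-focused, research-active institute.
print(classify(students=8000, faculty_areas=2, papers_5yr=6000))
# -> {'size': 'M', 'scope': 'Specialist', 'research activity': 'High'}
```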
Fees

In 2011, QS began publishing average fees data for the universities it ranks. These are not used as an indicator in the rankings, but are clearly of immense interest and reveal much about a university's self-image and market position.

QS publishes domestic and international fees for undergraduate and postgraduate study.

General criticisms


Many are concerned with the use or misuse of survey data.

Since the split from Times Higher Education, further concerns about the methodology QS uses for its rankings have been brought up by several experts. Simon Marginson, professor of higher education at University of Melbourne and a member of the THE editorial board, in the article "Improving Latin American universities' global ranking" for University World News on 10 June 2012, said: "I will not discuss the QS ranking because the methodology is not sufficiently robust to provide data valid as social science." [35]

In an article for the New Statesman entitled "The QS World University Rankings are a load of old baloney", David Blanchflower, a leading labour economist, said: "This ranking is complete rubbish and nobody should place any credence in it. The results are based on an entirely flawed methodology that underweights the quality of research and overweights fluff... The QS is a flawed index and should be ignored." [36]

In an article titled The Globalisation of College and University Rankings and appearing in the January/February 2012 issue of Change magazine, Philip Altbach, professor of higher education at Boston College and also a member of the THE editorial board, said: "The QS World University Rankings are the most problematical. From the beginning, the QS has relied on reputational indicators for half of its analysis … it probably accounts for the significant variability in the QS rankings over the years. In addition, QS queries employers, introducing even more variability and unreliability into the mix. Whether the QS rankings should be taken seriously by the higher education community is questionable."[37]

The QS World University Rankings have been criticised by many for placing too much emphasis on peer review, which receives 40 percent of the overall score. Some people have expressed concern about the manner in which the peer review has been carried out.[38] In a report,[39] Peter Wills from the University of Auckland, New Zealand wrote of the Times Higher Education-QS World University Rankings:

But we note also that this survey establishes its rankings by appealing to university staff, even offering financial enticements to participate (see Appendix II). Staff are likely to feel it is in their greatest interest to rank their own institution more highly than others. This means the results of the survey and any apparent change in ranking are highly questionable, and that a high ranking has no real intrinsic value in any case. We are vehemently opposed to the evaluation of the University according to the outcome of such PR competitions.

QS points out that no survey participant, academic or employer, has been offered a financial incentive to respond, and that academics cannot vote for their own institution.

Although THES-QS introduced several changes in methodology in 2007 which were aimed at addressing these criticisms,[40] the ranking has continued to attract criticism. In an article[41] in the peer-reviewed journal BMC Medicine authored by several scientists from the US and Greece, it was pointed out:

If properly performed, most scientists would consider peer review to have very good construct validity; many may even consider it the gold standard for appraising excellence. However, even peers need some standardized input data to peer review. The Times simply asks each expert to list the 30 universities they regard as top institutions of their area without offering input data on any performance indicators. Research products may occasionally be more visible to outsiders, but it is unlikely that any expert possesses a global view of the inner workings of teaching at institutions worldwide. Moreover, the expert selection process of The Times is entirely unclear. The survey response rate among the selected experts was only <1% in 2006 (1,600 of 190,000 contacted). In the absence of any guarantee for protection from selection biases, measurement validity can be very problematic.

Alex Usher, vice president of Higher Education Strategy Associates in Canada, commented:

Most people in the rankings business think that the main problem with The Times is the opaque way it constructs its sample for its reputational rankings - a not-unimportant question given that reputation makes up 50% of the sample. Moreover, this year's switch from using raw reputation scores to using normalized Z-scores has really shaken things up at the top-end of the rankings by reducing the advantage held by really top universities - University of British Columbia (UBC) for instance, is now functionally equivalent to Harvard in the Peer Review score, which, no disrespect to UBC, is ludicrous. I'll be honest and say that at the moment the THES Rankings are an inferior product to the Shanghai Jiao Tong's Academic Ranking of World Universities.

Academicians have also been critical of the use of the citation database, arguing that it undervalues institutions which excel in the social sciences. Ian Diamond, former chief executive of the Economic and Social Research Council and now vice-chancellor of the University of Aberdeen and a member of the THE editorial board, wrote to Times Higher Education in 2007, saying:[42]

The use of a citation database must have an impact because such databases do not have as wide a cover of the social sciences (or arts and humanities) as the natural sciences. Hence the low position of the London School of Economics, caused primarily by its citations score, is a result not of the output of an outstanding institution but the database and the fact that the LSE does not have the counterweight of a large natural science base.

The most recent criticism of the old system came from Fred L. Bookstein, Horst Seidler, Martin Fieder and Georg Winckler, writing in the journal Scientometrics on the unreliability of QS's methods:

Several individual indicators from the Times Higher Education Survey (THES) data base - the overall score, the reported staff-to-student ratio, and the peer ratings - demonstrate unacceptably high fluctuation from year to year. The inappropriateness of the summary tabulations for assessing the majority of the "top 200" universities would be apparent purely for reason of this obvious statistical instability regardless of other grounds of criticism. There are far too many anomalies in the change scores of the various indices for them to be of use in the course of university management.[43]

QS World University Rankings - Wikipedia, the free encyclopedia
 
Whether the rankings are fair or not, the IITs do seem to have suffered some sort of decline in standards.
 
Exactly what I said: no graduate-level students take it seriously. QS is BS.
 
Another main reason is that the IITs are not universities.
 
Rankings are irrelevant as long as the IITs keep churning out quality graduates for the betterment of the nation. As far as ranking is concerned, even a village guy like me from a small-town engineering college can get into one of the world's top 100 universities without much effort, but couldn't get into an IIT. So where's the exclusivity?
 
People expect too much from the IITs; they give you good-quality engineers, that's all.
 
LOL... this ranking system is so flawed. It's all about "procedures", curriculum design and course-load design, and has nothing to do with the quality or number of publications. I went to one of Europe's toughest institutes, and guess what happened? When such organisations came for "ranking", they told our university that they couldn't place our departments on the list because the "courses were too tough" and the curriculum needed more humanities courses in the mix... LMFAO... they wanted to put roughly 20% humanities courses into an engineering curriculum, what a bunch of morons. It's all topi drama.

The other litmus test: we used to get exchange students from MIT, McGill, UP, etc. every semester, and guess what? We used to whoop their arses!!! :lol:
 