
Last week, Macquarie Business School posted an announcement on LinkedIn (www.linkedin.com/posts/macquarie-business-school) that The Economist's Which MBA? had ranked the Macquarie MBA number 2 in Australia and number 41 in the world.
This is a great result, and as an alumnus, and an adjunct lecturer during the period assessed, I could not be more pleased. I said as much in my comment on the LinkedIn post. Unfortunately, at least one person at MQBS thought I was taking a “pot-shot” at the school. Quite the opposite. I am very proud of the quality of the MGSM MBA program and of my small part in it. But I am critical of the misleading and problematic MBA rankings, and of the way they reduce a business school’s reputation to a soundbite: the ranking number itself. Here’s why.
1. Rankings exist to sell magazines
Let’s be clear about this: rankers rank to sell magazines. A “successful” ranking is one that has convinced its audience that the show is reality itself. To do this, the ranking must appear plausible, and therefore it needs to look “sciency”. So rankings make a big deal about methodology. But what happens if that methodology changes from one year to the next, or if universities choose to participate in one year and opt out the next?
First, the methodologies of the various rankings (The Economist, Times, QS, etc.) are each different and can change without notice. These magazines compete to increase readership and revenue, and therefore need to differentiate to win. Each year, each magazine will change its methodology if that helps establish competitive advantage. As more magazines enter the rankings game, the competitive pressure to demonstrate continual improvement grows, and with it the impulse to impose more, and different, criteria and rules each year.
Second, the status, and plausibility, of any game ultimately relies on the participation of the “usual” players. People will not buy a ranking that does not include the world’s top 10, such as Harvard, Wharton, Stanford, INSEAD and London Business School. This is the case with the recent 2021 Economist Which MBA? ranking.
As MBA News reported on 28 January 2021, “More than 60 of the world’s biggest business schools including Harvard, Wharton and Stanford did not participate in this year’s [Economist: Which MBA] ranking due to challenges of gathering reliable data.” In my LinkedIn comment I asked, “Does this mean that Business Schools are beginning to realise that the rankings industry isn't worth it?” I do hope they are.
2. Rankings are comparisons
The participation of the usual players matters for a ranking’s plausibility because rankings are a zero-sum comparison in which players can never be considered equal. Even where players tie, the ranking is still a hierarchy of relative performance on criteria set by the magazine and adjudicated by its editors.
So, no matter how well a school performs in absolute terms, there will always be the same number of possible positions, and a school can only perform better at the expense of others. As sociologist Jelena Brankovic observed, “You can only go up if someone else goes down. You can’t improve unless someone else is worse off. That’s the world rankings create, although rankers may insist they ‘only measure’ it.”
Rankers will claim that they are just neutral arbiters presenting “objective reality” with “hard numbers.” But this is misleading. As Kevin Corley and Dennis Gioia of Pennsylvania State University wrote in their 2000 paper, The Rankings Game: Managing Business School Reputation, “Rankings produce a social reality in its own right… and the magazine publishers have a vested interest in wielding substantial control over the nuances of the game.”
3. Rankings do not measure quality
Magazines work hard to persuade their readers that comparative performance is an objective reality, and rankings suggest a precision that close scrutiny of the data does not support. Rankings are derived from scores which are, in turn, weighted aggregates of component measures. Universities can have very similar scores (and hence very similar performance), yet their discrete positions in the ranking suggest a greater difference in performance than really exists (Longden 2011).
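A minimal sketch of how this happens (the weights, indicators, school names and scores below are all invented for illustration; they are not any magazine’s actual methodology):

```python
# Minimal sketch: how weighted aggregate scores can turn near-identical
# performance into seemingly distinct ranks. All weights, indicators,
# school names and scores are invented for illustration only.

# Hypothetical component weights (sum to 1.0)
weights = {"salary": 0.4, "network": 0.3, "diversity": 0.2, "faculty": 0.1}

# Hypothetical normalised component scores (0-100) for four schools
schools = {
    "School A": {"salary": 90.1, "network": 85.0, "diversity": 80.2, "faculty": 88.0},
    "School B": {"salary": 90.0, "network": 85.1, "diversity": 80.0, "faculty": 88.2},
    "School C": {"salary": 89.9, "network": 85.2, "diversity": 80.1, "faculty": 88.3},
    "School D": {"salary": 89.8, "network": 84.9, "diversity": 80.3, "faculty": 87.9},
}

def composite(scores):
    """Weighted aggregate of component scores."""
    return sum(weights[k] * v for k, v in scores.items())

ranked = sorted(schools, key=lambda s: composite(schools[s]), reverse=True)
for rank, name in enumerate(ranked, start=1):
    print(f"{rank}. {name}  composite = {composite(schools[name]):.2f}")

# Prints four "distinct" ranks even though the top three composites
# differ by only a few hundredths of a point -- well within any
# plausible margin of error in the underlying data.
```

The published table shows only the rank order; the near-ties in the underlying composite scores disappear from view.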
Even though the rankings are an artificial measure of reputation, people are influenced by them, and their very prominence has crowded out any other, more comprehensive view of reputation. Australian business schools now have to compete actively for students, and academics, from around the world. As indicators of quality, the rankings have come to be perceived as the single most useful gauge of a school’s ability (or inability) to compete in this global marketplace.
But are they?
If a business school ranks number 41 in the world, it is clearly better than one ranked 141. Yes? But is the 41st-ranked school worse than the school ranked 40th, or 35th, or 30th? How? In what ways? Using principal components analysis of the indicators that underpin the rankings, Johnes (2018) found that “one composite index does not adequately reflect the information contained in the data set. In addition, the differences between universities…might actually be very slight, yet the rankings suggest to the laypeople who use them that distinctions in performance are potentially large.”
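Johnes’s point can be illustrated with simulated data (a minimal sketch only; the study itself analysed real ranking indicators, and the factor structure below is invented):

```python
# Minimal sketch of Johnes's (2018) point: when ranking indicators do not
# all measure one underlying thing, a single composite index discards
# real information. Data here is simulated, not from the study.
import numpy as np

rng = np.random.default_rng(42)
n_schools, n_indicators = 100, 6

# Simulate indicators driven by TWO independent qualities
# (say, research strength vs. teaching strength), plus noise.
research = rng.normal(size=(n_schools, 1))
teaching = rng.normal(size=(n_schools, 1))
loadings_r = np.array([[1.0, 0.9, 0.8, 0.1, 0.0, 0.2]])
loadings_t = np.array([[0.1, 0.0, 0.2, 1.0, 0.9, 0.8]])
X = (research @ loadings_r + teaching @ loadings_t
     + 0.3 * rng.normal(size=(n_schools, n_indicators)))

# Principal components analysis via SVD on standardised data
Z = (X - X.mean(axis=0)) / X.std(axis=0)
_, s, _ = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("Share of variance per principal component:", np.round(explained, 2))

# The first component captures only about half the variance here:
# collapsing six indicators into one composite rank throws the rest away.
```

When the indicators reflect more than one underlying quality, no single weighted index, and therefore no single rank order, can summarise them without losing information.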
If the rankings’ rules and criteria change each year, and if some universities participate in one year and not the next, how can anyone make sense of relative movement up or down the ranking in this zero-sum comparison game? Business schools can improve in many ways, but there is little to no evidence that climbing a ranking is one of them. Indeed, as Goodhart’s law has it, “when a measure becomes a target, it ceases to be a good measure.”
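A toy example of the participation problem (invented names and scores): a school’s rank can “improve” without any change in its own performance, simply because stronger schools opt out.

```python
# Minimal sketch: rank 'improvement' driven purely by participant churn.
# All school names and scores are invented for illustration.

year_1 = {"Alpha School": 95, "Beta School": 94, "Gamma School": 93, "Our School": 80}
year_2 = {"Gamma School": 93, "Our School": 80}  # Alpha and Beta opt out

def rank_of(school, field):
    """Position of `school` when the field is sorted by score, best first."""
    ordered = sorted(field, key=field.get, reverse=True)
    return ordered.index(school) + 1

print("Year 1 rank:", rank_of("Our School", year_1))  # 4
print("Year 2 rank:", rank_of("Our School", year_2))  # 2 -- same score, better rank
```

The same ambiguity works in reverse: a genuine improvement can be masked by new entrants or new criteria.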
4. Rankings are a game
A game with ambiguous and constantly changing rules and players leads to a tacit understanding among the players that you can only win if you, too, are continuously changing. Without any certainty as to which criteria will be added to the next survey, an MBA director’s only option is to emphasise some notable change as evidence of continual improvement. This adds further ambiguity to an already-ambiguous game, as each school constructs a self-portrayal of ‘always moving, always doing something.’
In addition, the transparency of the rankings means that any school can easily identify its strengths and weaknesses. This may help in making genuine improvements, but it also presents an opportunity for manipulation: behaviour can be altered in ways that benefit the ranking without benefiting actual performance.
Many of the measures of performance are under the control of the school, and there is anecdotal evidence that some measures are susceptible to ‘cheating’, with some schools influencing data in order to raise their rankings (Hazelkorn 2015). Examples include lowering standards, which leads to ‘grade inflation’, and hosting lavish events at which students and alumni are pressured to provide favourable responses to boost survey performance (Newman 2008).
Perhaps the worst outcome of a focus on the rankings is that it makes schools much more homogeneous. Rankings, particularly the international ones, are biased towards research activity (Dill 2009), which can lead business schools to shift their mission from teaching to research excellence (Shin and Toutkoushian 2011). Elite, research-intensive universities are usually the highly ranked ones, and so they become, often unsuitably, the benchmarks for lower-ranked business schools.
However, I suspect most of us know that the game has progressed to the point where it no longer matters whether the rankings are valid. The rankings have taken on a life of their own, and this makes them even more powerful as the choice criteria facing prospective students become ever more ambiguous.
But diversity permits more choice for students and ultimately helps to differentiate a business school in ways that rankings may not.
When I chose MGSM for my MBA (pre-rankings), I did so because MGSM was, I believed, positively distinct in important ways that suited my needs and ambitions. As an alumnus I have held closely to the fact that my MBA was different and, in my perception, better than others. That difference became part of how I saw and presented myself.
MGSM’s difference was part of my professional differentiation. I strongly believe that MGSM is a great business school producing superior graduates, and there is no reason why this cannot remain true today and tomorrow, irrespective of what the Financial Times, QS or The Economist may wish to assert in their rankings as they attempt to supplant substance with image.
In my comment under the MQBS post on LinkedIn I sent my “congratulations and well deserved” to the school on coming 2nd in Australia and 41st in the world, but I also intended to caution readers that rankings are misleading and a poor measure of a business school’s quality.
Randal Tame
MGSM Alumni Association