The ranking of graduate business programmes is a relatively new practice, one that began as a means of increasing magazine sales. Today the ranking of MBA programmes is a hot issue in management education circles. Questions about how rankings are formulated, what they measure, and what they mean are constantly being asked. Given the attention rankings have received and the controversy they have generated, a discussion of the topic is warranted.

Most early B-school rankings relied primarily on business school deans, faculty members, or top executives to rate the different institutions, even though the raters often had only vague knowledge of most of the programmes. This crude methodology was pointless at best and grossly misleading at worst. The very first such rankings were the Cartter Report (1976), the MBA Magazine ranking (1977), and the Ladd-Lipset survey (1977).

The "black box" approach to ranking appeared soon after. The approach only considered what gets in or what comes out, quantitatively, of an MBA programme; you do not consider the educational experience itself. The first ranking of this type was released by U.S. News & World Report. Here is how U.S. News originally devised these rankings: overall rank was based upon four indicators of "academic" quality.

- Student Selectivity: GPA, GMAT, acceptance rate (25% of overall rank)

- Graduation Rate (5% of overall rank)

- Placement Success: employment after graduation, median salary, etc. (30% of overall rank)

- Reputation: surveys of business school deans (20% of overall rank) and of CEOs (20% of overall rank)
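
To make the weighting concrete, here is a minimal sketch in Python of how such a weighted-sum score could be computed. Only the weights come from the description above; the indicator names, the 0-100 normalisation, and the sample numbers are illustrative assumptions, not U.S. News's actual data.

```python
# Hypothetical sketch of the original U.S. News weighting scheme described
# above. Only the weights are taken from the article; indicator names and
# the 0-100 normalisation are assumptions made for illustration.

WEIGHTS = {
    "student_selectivity": 0.25,  # GPA, GMAT, acceptance rate
    "graduation_rate": 0.05,
    "placement_success": 0.30,    # employment after graduation, median salary
    "reputation_deans": 0.20,
    "reputation_ceos": 0.20,
}

def overall_score(indicators: dict[str, float]) -> float:
    """Combine normalised indicator scores (0-100) into one overall score."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must total 100%
    return sum(WEIGHTS[name] * indicators[name] for name in WEIGHTS)

# Example: a school strong on placement but weaker on selectivity.
print(overall_score({
    "student_selectivity": 72.0,
    "graduation_rate": 90.0,
    "placement_success": 85.0,
    "reputation_deans": 68.0,
    "reputation_ceos": 75.0,
}))  # prints the combined score (about 76.6)
```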

Of course, the U.S. News method was open to criticism (couldn't schools misreport the numbers they supplied? And how did U.S. News choose and weight the indicators in the first place?). But those rankings, read with a clear understanding of what they measured, were indeed a giant step forward and an excellent introduction to more advanced forms of assessment.

Business schools must listen to their customers

If business schools want their products (MBA degrees) to succeed in the marketplace, they must listen to their customers (students and their future employers) and understand their needs. The next step in the reengineering process is therefore the installation of continuous feedback mechanisms. Schools must determine what tools and skills students bring to class, what they take from it, and to what end. By building these feedback mechanisms into the curriculum, schools reinforce the notion that change is not only inevitable but desirable.

At most places, however, the barriers to change are formidable. Business schools are often as badly organised as the worst corporations, with faculties that refuse to do things differently. This is particularly true in America, where there are serious questions about whether the business-as-usual, research-oriented, academic, discipline-based paradigm is really working. It is less of a problem in Europe: the best European schools have long established close ties with business. Lacking the huge endowments of many American institutions, the Europeans were forced to put greater emphasis on executive education.

Until Businessweek began to rate business schools on customer satisfaction in 1988, most rankings had been based largely on the reputation of a school's professors and their published work in academic journals. Businessweek adopted a new and more direct approach, surveying both the graduates of top schools and the corporate recruiters who hired them to determine the best business schools. In effect, the survey measured how well the schools were serving their two markets: students and their ultimate employers. The graduate poll was mailed to a random sample of more than 5,000 MBAs from about 40 of the most prominent schools; graduates were asked to assess characteristics such as the quality of the teaching, the curriculum, the environment, and the placement office at their school. The recruiter poll was mailed to about 400 companies that hired graduates off the campuses of the best schools.
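
As a rough illustration of this two-market idea, here is a small Python sketch that combines graduate and recruiter poll scores into a single ranking. The equal 50/50 weighting, the 0-10 scale, and the school names are assumptions made for the example; the article does not specify Businessweek's actual formula.

```python
# Hypothetical two-market satisfaction ranking in the spirit of the
# Businessweek approach described above. The 50/50 weighting and 0-10
# scale are assumptions, not Businessweek's published methodology.

def satisfaction_rank(schools: dict[str, tuple[float, float]]) -> list[str]:
    """Rank schools by the mean of graduate and recruiter poll scores."""
    combined = {
        name: 0.5 * grad + 0.5 * recruiter
        for name, (grad, recruiter) in schools.items()
    }
    return sorted(combined, key=combined.get, reverse=True)

# Illustrative poll results: (graduate poll, recruiter poll), each 0-10.
polls = {
    "School A": (8.7, 9.1),
    "School B": (9.2, 7.8),
    "School C": (8.1, 8.4),
}
print(satisfaction_rank(polls))  # -> ['School A', 'School B', 'School C']
```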

What do rankings mean?

The MBA is not simply an education; it is also a credential. As a credential, its utility can be defined purely in terms of its market value. Brand names matter. An MBA degree from a top school is increasingly a crucial accomplishment within the business community. A top MBA degree is a signaling device: it tells the marketplace that a student was good enough to earn it. It invites recognition from, and facilitates entry into, many kinds of organisations that might otherwise be oblivious to an individual's talent. Elite programmes protect and sustain this great demand for their graduates by deliberately limiting the size of their entering classes.

It is only natural for competition among graduate business schools to exist in a free market. Such rivalry is no doubt healthy and drives the progressive evolution of graduate management education. But competition is not at issue; it is the measurement of competition that invites controversy. If applicants are to be rated on numerical scales (e.g., GMAT scores, GPA), then it seems justified for graduate business schools themselves to be ranked. But just as the use of quantitative measures to assess the "quality" of an applicant has its limits, so does the use of rankings to gauge the "quality" of a business school. The number of TV sets tuned to a particular network, the number of runs scored by a given team, the number of votes cast for a given candidate: all of these are measurable gauges of performance. Unfortunately, the "quality" of a business school cannot be determined so easily. And yet this is precisely what rankings try to accomplish.

Questions such as who does the evaluating and what criteria are used are often raised in discussions of rankings. Should employers do the ranking, or faculty members and students? Should academic quality be the overriding measure, or the number of job offers after graduation? What about faculty research productivity, or the average starting salary of graduates? The fact of the matter is that no single source of information, such as a survey of deans, is sufficiently informed to assess the quality of business schools; and no single metric, such as an average GMAT score, can possibly do justice to the excellence that different schools may offer.

This is not to discredit rankings entirely. At the very least they are important because they have real impact: they influence the manner in which a school is perceived by students, deans, faculty members, and recruiting organisations. Beyond this, rankings (within certain parameters) do have validity and applicability. How, then, should business school rankings be properly interpreted? They should be viewed as a general guide, not as a precise measurement of quality. It matters little whether a school is ranked fifth, eighth, or tenth; what matters is whether it is consistently among the top-ranked institutions. Ultimately, if you need to choose among these top-rated schools, you should go beyond the rankings and delve into the numerous particularities of each individual programme.