Business School Quality
An interview with Hubert Silly
Founder of MBA Center
Rankings are an enigma
The idea of ranking graduate business programs is relatively recent; it began as a means of increasing magazine sales. The ranking of MBA programs is a hot issue in management education circles. Questions regarding how rankings are formulated, what they measure, and what they mean are constantly circulating. Given the attention rankings have received and the controversy they have generated, a full discussion of this topic is necessary.
Most early B-school rankings relied primarily on business school deans, faculty members, or top executives to rate the different institutions, even though those professionals had only a vague knowledge of most of the programs. This very crude methodology was pointless at best and grossly misleading at worst. The first such rankings were the Cartter Report (1976), the MBA Magazine ranking (1977), and the Ladd-Lipset survey (1977).
The "black box" approach to ranking appeared soon after. Such an approach considers only what goes into an MBA program and what comes out of it, quantitatively; it does not consider the educational experience itself. The first serious ranking of this type came from U.S. News & World Report. Here is how U.S. News originally devised its rankings: overall rank was based on four indicators of "academic" quality.
- Student Selectivity: GPA, GMAT, acceptance rate (25% of overall rank)
- Graduation Rate (5% of overall rank)
- Placement Success: median salary after graduation, etc. (30% of overall rank)
- Reputation: business school deans (20% of overall rank), CEOs (20% of overall rank)
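The four indicators above amount to a weighted sum whose weights total 100%. A minimal sketch of that arithmetic is below; the sub-scores and their 0–100 normalization are hypothetical illustrations, not U.S. News's actual (unpublished here) scaling:

```python
# Weights from the U.S. News scheme described above (they sum to 1.0).
WEIGHTS = {
    "student_selectivity": 0.25,
    "graduation_rate": 0.05,
    "placement_success": 0.30,
    "reputation_deans": 0.20,
    "reputation_ceos": 0.20,
}

def composite_score(subscores: dict) -> float:
    """Weighted sum of normalized sub-scores (each assumed on a 0-100 scale)."""
    return sum(WEIGHTS[k] * subscores[k] for k in WEIGHTS)

# Purely illustrative numbers for a hypothetical school:
school = {
    "student_selectivity": 90,
    "graduation_rate": 95,
    "placement_success": 85,
    "reputation_deans": 80,
    "reputation_ceos": 75,
}
print(round(composite_score(school), 2))  # 83.75
```

Schools would then be sorted by this composite to produce the published ordering; note how heavily the result depends on the chosen weights, which is precisely the criticism raised below.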
Of course, the U.S. News approach was open to criticism (couldn't schools lie when reporting their numbers? And how did U.S. News choose and weight its indicators anyway?). But those rankings, if you knew exactly what they meant, were indeed a giant step forward and an excellent introduction to a more advanced form of assessment.
Business Schools must listen to their customers
If business schools want their products (MBA degrees) to succeed in the marketplace, they must listen to their customers (students and their future employers) and understand their needs. The next step in the reengineering process is therefore the installation of continuous feedback mechanisms. Schools must determine what tools and skills students bring to class, what they take away from it, and why. By building these feedback mechanisms into the curriculum, you reinforce the notion that change is not only inevitable, but desirable. At most places, however, the barriers to change are formidable. Business schools are often as badly organized as the worst corporations, with tenured faculties that refuse to do things differently. This is particularly true in America: there is a lot of questioning about whether the business-as-usual, research-oriented, academic, discipline-based paradigm is really working. This is not so much a problem in Europe: the best European schools have long had close ties with business. Lacking the huge endowments of many American institutions, the Europeans were forced to put greater emphasis on executive education.
Until Business Week began to rate business schools on customer satisfaction in 1988, many rankings had been based largely on the reputation of a school's professors and their published work in academic journals. Business Week adopted a strikingly new and efficient approach, surveying both the graduates of top schools and the corporate recruiters who hired them to determine the best business schools. In effect, the survey measured how well the schools were serving their two markets: students and their ultimate employers. The graduate poll was randomly mailed to more than 5,000 MBAs from about 40 of the most prominent schools. MBA graduates were asked to assess such characteristics as the quality of the teaching, curriculum, environment, and placement offices at their schools. The poll of corporate recruiters was mailed to about 400 companies that recruited these graduates off the campuses of the best schools. Again, it is very easy to criticize the methodology used to obtain and crunch the numbers. But when you consider Business Week's rankings alongside those of U.S. News, you start to get a very clear picture of what the best programs are.
What do rankings mean?
The MBA is not simply an education, it is also a credential. As a credential, its utility can be defined purely in terms of its market value. Brand names matter. An MBA from a top school is increasingly a crucial mark of legitimacy within the business community. A top MBA is a signaling device. It tells the marketplace that a student was good enough to get in. It invites the recognition of, and facilitates entry into, many kinds of organizations that might otherwise be oblivious to an individual's talent. Elite programs are able to protect and ensure this great demand for their graduates by wisely and purposely limiting the size of their entering classes.
Certainly, it is only natural in a "free market" for there to exist competition among graduate business schools. Such rivalry is no doubt healthy and results in the progressive evolution of graduate management education. Competition is not at issue. It is the measure of competition that invites controversy. If applicants are to be rated on numerical scales (e.g., GMAT scores, GPA), then it seems justified for graduate business schools themselves to be ranked. But, just as there are real limitations to the use of quantitative measures to assess the "quality" of an applicant, so there are limitations to the use of rankings to gauge the "quality" of a business school. The number of TV sets tuned to a particular network, the number of runs scored by a given team, the number of votes cast for a given candidate: all are measurable gauges of performance. Unfortunately, the "quality" of a business school cannot so easily be quantified. And yet this is precisely what rankings presume to accomplish.
An inevitable criticism leveled against rankings pertains to who does the evaluation and what criteria are used. Should employers do the ranking? Faculty members? Students? Should academic quality be the overriding measure? The number of job offers after graduation? Faculty research productivity? Average starting salary of graduates? The fact of the matter is, no one source of information (such as deans) is well enough informed to assess the quality of business schools; and no single evaluative dimension (such as average GMAT scores) can possibly reflect the many kinds of excellence that different schools may offer.
This is not to fully discredit rankings. At the very least they are important because they have real impact. They influence the manner in which a school is perceived by students, deans, faculty members, and recruiting organizations. Beyond this, rankings (within certain parameters) do have validity and applicability. How then should business school rankings be properly interpreted? As a general indication, not a precise measurement, of quality. It really matters little if a school is ranked number 5 or number 8 or number 10. These differences are hardly reflective of anything substantive. What matters is whether a school is consistently among the top scorers. Ultimately, if you need to differentiate among these top-rated schools, you should go beyond the rankings and delve into the numerous particularities of each individual program.
More important than the global rankings are the specialty rankings of business schools. A candidate seeking a future career in finance, for instance, would be well advised to choose a school ranked highly in finance, even if that school is not in the "Top Ten" in the overall rankings. In this case, these departmental rankings (which are usually overshadowed by the composite rankings) are more helpful. Business schools view themselves as specific brands that can appeal to a particular segment of the MBA market. Only the largest, wealthiest programs can sustain overall competitive strengths across all niches. Thus, graduate business schools attempt to define one or two disciplines in which they can excel as part of building a core product. The rankings, then, can assist candidates in evaluating the relative strength of the different products.