by Dr. Watson Scott Swail, President & Senior Research Scientist
CLICK HERE for an audio version on Spotify
Yesterday, a jury in Pennsylvania found a former dean of the Temple University School of Business guilty of fraud related to falsifying information for the US News & World Report Best Colleges rankings, arguably the largest and most utilized ranking system in the world. According to InsideHigherEd.com, the indictment of Moshe Porat included evidence of misreporting the percentage of students who submitted GMAT scores to the institution.
The maximum penalty for Porat is 25 years in prison and up to $500,000 in fines. Two others, a business professor and a finance/accounting manager, pled guilty earlier this year and each face up to five years in prison and up to $500,000 in fines.
This should surprise no one, and it is yet another reason to seriously reconsider why we value college rankings so much (see my AACRAO Journal article from 2011 on this issue). In the end, rankings come down, perhaps surreptitiously, to reputational or perceptional value: what people “think” a university is worth. But US News & World Report, The Times Higher Education’s World University Rankings, and QS World University Rankings all combine a hodgepodge of available data (and, for US News, information required directly from the colleges) into an almost Rube Goldberg system for rank-ordering institutions by sector. In the end, they count mostly the same things and produce similar outcomes, with only minor differences.
I believe that college rankings are fueled by a Macbethian appetite for information about colleges, one that leads mostly affluent parents on a quest for admission to the highest-ranked institution their children can be admitted to and they can afford. It is why the truly selective institutions, though only a small fraction of the higher education arena, attract nearly all of the attention; the focus remains on those in the top echelons of the rankings. Institutions that place in the mid-200s, for instance, don’t really benefit from rankings, because potential clients are only interested in the top 10 or 25; internationally, arguably the top 100.
But this appetite of the few has propelled an insatiable desire for more and more, and that is what has manifested over the past several decades. Without delving into the details (prior Swail Letters have done that; see links below), the US Ivy League institutions and top private colleges are well represented in the international rankings, as are many of our top land grant institutions (the UC institutions, for instance). This appetite is why several people, including celebrities like Felicity Huffman and Lori Loughlin, went to prison in 2019 for an admissions scandal in which parents would, evidently, do almost anything to get their kids into the “right” institution (want to go sailing?). Arizona State University president Michael Crow attributed this to a “crisis of access to these social-status-granting institutions.” That’s what rankings do, in many ways: create a social status that has always existed but is now “proven” by “research.” At Temple University, the dean in question was paid up to $600,000/year to build the reputation of the university. That’s big business.
If the appetite for this information is evident among parents and students, it is immense within the higher education industry. Institutions clearly understand that their “full pay” students (or their parents) are looking at the rankings, and the ability to attract full pays is what balances the use of institutional need-based aid. It is, to a degree, a Robin Hood mentality: take from the rich to provide for the poor. Although it doesn’t always work that way; sometimes it is take from the rich and give to other affluent students to create a more outstanding class. Thus, these full-pay students are very attractive to an institution. And, as the jury found yesterday, sometimes very attractive to deans of business schools.
But to think that Temple University is a one-off would be misleading and dangerous. I argue that institutions “massage” their data very carefully before giving it to US News & World Report. In my own analysis, I find that numbers in the rankings do not always match what I find in either college-published data or IPEDS (the Integrated Postsecondary Education Data System). Sometimes this is simple error (IPEDS presents many definitional challenges, for instance), but I also see institutions cherry-picking their own data, perhaps even creating it. The well-worn example of manipulating institutional data has always been the “percentage of alumni who give back to the institution.” We know that some institutions created a $1 surcharge on graduating students, defined as a “gift,” so that 100 percent of their graduates “gave” back to the institution. This is a strategy created by the ranking systems: the desire for more, even if fraudulent.
In a perfect world, I suppose it would be nice to rank institutions in a pristine order. It is a human condition to want things to fit into an organized taxonomy. Nature doesn’t create these systems; humans do, out of our social and cognitive need for categorization. But to what purpose here? Our knowledge of institutions comes either from our experience (our alma maters) or from perception (what we’ve read or been told). In fact, perception data is used within the ranking systems; they literally ask professors and administrators which institutions they think are the best. In the end, there isn’t anything wrong with that, as long as we call it what it is: perception rankings. But it leaves us with the base question: for what purpose?
To use two well-worn analogies: the rankings train has long left the station, and the genie isn’t going back in the bottle. Still, I think we would be better off pushing institutions off the rankings wagon and onto the competency wagon, using real data from degree programs and student progress to detail how well an institution meets certain competencies. No, it isn’t easy work, and it would require some level of standardization. But nothing worthwhile is easy, right?
Related Swail Letters: