The world of economics is in a tizzy about the quality of research and the little devils that can ruin the best of intentions. It all started with a research paper published in 2010 by professors Carmen Reinhart and Kenneth Rogoff of Harvard University, who, using published data, found that nations with a debt-to-gross-domestic-product ratio greater than 90 per cent saw their average growth rate fall to minus 0.1 per cent.

This finding was of great interest to policy makers: in the aftermath of the global economic crisis, many countries were considering taking on more debt in order to increase their spending and stimulate their economies. The research gave a fillip to the opposite argument, that what was needed was austerity.

Other studies that examined the same issue of debt also seemed to support the findings and warned highly indebted countries against taking on even more debt.

Now, a paper published earlier this month by three authors, Thomas Herndon, Michael Ash and Robert Pollin, of the University of Massachusetts Amherst (my alma mater, I may proudly add), has cast doubt on the Reinhart-Rogoff findings. The study grew out of the efforts of the first author, a graduate student, who tried to replicate the Reinhart-Rogoff study as an assignment for a course.

Findings questioned

Finding that his results did not match, he sought and obtained the original spreadsheet from Reinhart and Rogoff, and found simple miscalculations and data exclusions in their study. Correcting these errors, Herndon et al found that average growth for the high-debt countries was 2.2 per cent, and not negative as the original study claimed. A major reversal of the conclusions.

This ‘incident’ allows us to draw some lessons. One is the importance of being able to replicate research: using the same or similar data to study the same questions and seeing whether the same results are obtained. The other is the importance of being able to access the data sets of other scholars.

This is quite often impossible, for researchers will hide behind confidentiality and various other excuses and not share their data. This is especially so if the data are not publicly available but have been collected as primary data through surveys, and so on.

Gatekeepers such as journal editors and reviewers ask those who submit a manuscript for a lot of detail about the research methodology and analysis, to get a sense of the reliability and validity of the data, the analysis and the conclusions. But even they rarely have the time or the inclination to ask for the raw data and run the analysis themselves.

Replicability issues

In general, research in the sciences differs from that in the social sciences in the matter of replicability. If chemical A mixed with chemical B and heated to temperature C results in D, and this is published, anyone can try it out and see whether it is true. In most social sciences, this is difficult to do, and even when possible, is often not attempted.

The other problem that complicates social science research is the difficulty of establishing cause-and-effect relationships between variables: many variables can affect the situation, and it is often impossible to hold the other intervening variables constant, or to control for them. Thus, conclusions come with various caveats about the weaknesses of the study and the limits of generalisability.

In the business/management field, there is an ongoing debate on the issue of relevance of research. When primary data are collected through surveys, replicability becomes almost impossible.

In short, research conclusions rarely contribute to cumulative knowledge. Can’t blame the business folks for ignoring much academic business research!

Research vs education

In an article published online at the Knowledge@Wharton website in February this year, Larry Zicklin, a former chairman of a financial services firm and a finance professor, argued that with university reward systems driving faculty to publish, much of the research done is irrelevant. Students end up paying for it through higher tuition fees, since research-driven universities lower teaching loads to allow their faculty to be engaged in research.

He comments on a faculty member who is highly regarded for his research in the area of finance: “I’ve worked for 50 years and I didn’t have a clue as to what his most recent articles were about… (His research was directed at) the community of scholars who write for one another but not for their students and certainly not for business executives who are interested in practical ideas that might actually work.”

A particularly tough rhetorical question Zicklin asks is: If business research is useful, why are corporations so reluctant to pay for it?

The message from all this is that educational institutions should not join the game of pursuing research merely because it helps their reputation or ranking.

The majority of institutions should realise that they may make a better contribution to society by focusing on education, which is not the same as research, and which involves paying attention to teaching and making education affordable to those who need it.

Those institutions that do wish to pursue the research route should take care that the research they encourage in their scholars is relevant and applicable.

(The author is professor of International Business and Strategic Management at Suffolk University, Boston, US.)