So, will money managers struggle to survive?

Maybe. A newly published research paper is threatening an existential crisis of sorts for the money managers of the world. It argues that most published studies on why certain stocks do well, or on why some investment strategies succeed, are inherently faulty.

Who’s come up with this?

Two finance professors (from Duke University and the University of Oklahoma) and a marketing professor (from Texas A&M) released their paper, titled ‘…And the Cross-section of Expected Returns’, late last month through the US-based National Bureau of Economic Research. The authors wanted to adapt a 2005 medical research paper (John Ioannidis’s ‘Why Most Published Research Findings Are False’), which concluded that most medical research cannot be trusted because of statistical inconsistencies, to the world of finance and see whether its conclusions held good here as well.

And do they?

Their answer would be yes.

How?

The problem, they say, is that the way statistical tools are currently used in research to establish a relationship between A and B makes it difficult to say which relationships actually exist and which are merely products of the researcher’s imagination, or worse, of plain random chance.

How does this happen?

The researchers believe the problem lies with a style of analysis called ‘multiple testing’. For instance, a fund manager may want to find out why certain stocks outperformed the benchmark over a certain period. He might look for a certain factor (or what statisticians call a ‘variable’) that the stocks have in common and then subject that factor to tests to look for a significant relationship.

Meaning what?

The rule of thumb in statistics for calling a relationship between two items significant is to test it at the 95 per cent confidence level; that is, to show that if the two items were really unrelated, a result this strong would turn up by chance no more than 5 per cent of the time. Which still leaves a five per cent chance that the result is random and the two items have nothing to do with each other.
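To make this concrete, here is a minimal sketch in Python of what one such test looks like. The factor, sample size and numbers are hypothetical, not drawn from the paper:

```python
# A hypothetical single-factor test: does a stock characteristic
# line up with subsequent returns?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
factor = rng.normal(size=200)                   # 200 made-up stock observations
returns = 0.3 * factor + rng.normal(size=200)   # returns genuinely driven by the factor

r, p_value = stats.pearsonr(factor, returns)
# At the conventional 5 per cent level, p < 0.05 counts as "significant"
print(f"correlation={r:.2f}, p-value={p_value:.4f}, significant={p_value < 0.05}")
```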

But isn’t 95 per cent good enough?

It is, when you stop with just one variable. The problem arises when the fund manager doesn’t stop the test with one variable but goes on to test hundreds of others simultaneously on a computer.

The 5 per cent chance of a relationship being random compounds with each test: run 100 independent tests and the odds of at least one false ‘discovery’ are about 99 per cent (1 minus 0.95 raised to the 100th power). So a relationship the fund manager believes is reliable 95 per cent of the time might, in fact, be utterly useless.
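A simple simulation makes the point. This is a sketch with made-up numbers, not the authors’ own code: test 100 purely random ‘factors’ against purely random returns, and about five of them will look significant at the 5 per cent level.

```python
# Sketch: why testing many factors at the 5 per cent level
# almost guarantees false positives. Every factor here is pure noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_obs, n_factors = 200, 100
returns = rng.normal(size=n_obs)         # random "returns" with no real drivers

false_positives = 0
for _ in range(n_factors):
    factor = rng.normal(size=n_obs)      # a random candidate factor
    _, p = stats.pearsonr(factor, returns)
    if p < 0.05:                         # looks "significant" at the 5% level
        false_positives += 1

print(f"{false_positives} of {n_factors} noise factors look significant")
# Expect about 5; the chance of at least one is 1 - 0.95**100, roughly 99.4%
```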

What we would call dumb luck.

And what statisticians call a type one error, or a false positive — that is, seeing a relationship where none exists.

So my money manager’s secret strategy is as good as mine?

Or even worse. Campbell R Harvey, the Duke University professor and the paper’s lead author, said in a later interview that this is most likely why the majority of actively managed mutual funds do worse than the benchmark over the long term: their fund managers have placed their faith in ‘secret’ strategies that don’t really exist.

Does he have an alternative?

Harvey and his co-authors do offer an alternative, meant to be a more stringent test of statistical significance. But there still is the caveat at the fag end of the paper: That “statistical evidence can only get us this far in terms of getting rid of the false discoveries”.
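The stricter test builds on standard multiple-testing corrections; the paper examines adjustments such as Bonferroni’s and suggests that a newly proposed factor should clear a t-statistic of roughly 3.0 rather than the usual 2.0. Here is a minimal sketch of the Bonferroni idea, with hypothetical p-values not taken from the paper:

```python
# Sketch of the Bonferroni correction: with m tests at overall level
# alpha, each individual finding must clear p < alpha/m to count.
def bonferroni_significant(p_values, alpha=0.05):
    m = len(p_values)
    return [p < alpha / m for p in p_values]

# Three hypothetical p-values; the per-test threshold is 0.05/3 (about 0.0167),
# so the middle finding no longer counts as a discovery.
print(bonferroni_significant([0.001, 0.03, 0.004]))  # -> [True, False, True]
```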

The secret could be that there really is no secret!

A weekly column that helps you ask the right questions
