Human foibles
Suckers for the supporting story

Investors are bombarded daily with the opinions of analysts, advisors, pundits, journalists and bloggers, purportedly based on conclusions drawn from statistical analysis. Typical would be an opinion that the stock market is likely to advance next year. The reason given might be the assertion that when the S&P 500 has advanced more than 20% in a year, it is likely to advance the following year. The proffered evidence might be that, historically, the S&P 500 has advanced 20% or more in a year 17 times, and that 14 of those occasions were followed by an advance the following year. What credence can we give to this prediction? The answer is none. With a sample of only 17 occasions, there is no statistical basis for reaching any conclusion, nor is there any valid basis in inductive reasoning.
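To see why, here is a minimal Python sketch. The 70% base rate of up years is an illustrative assumption of mine, not a figure from any study; the question the sketch asks is how surprising 14 up years out of 17 would be if the "20%+ signal" carried no information at all.

```python
from math import comb

# The pundit's claimed figures: 17 years with a 20%+ gain,
# 14 of them followed by an up year.
k, n = 14, 17

# Assumption (mine, for illustration): roughly 70% of ALL years
# are up years anyway, signal or no signal.
base_rate = 0.70

# Exact one-sided binomial p-value: the chance of seeing 14 or more
# up years out of 17 purely from the base rate.
p_value = sum(comb(n, i) * base_rate**i * (1 - base_rate)**(n - i)
              for i in range(k, n + 1))
print(f"p-value = {p_value:.2f}")  # ~0.20, entirely consistent with chance
```

On that assumption, a result like 14 out of 17 would turn up about one time in five even if a big up year told you nothing at all about the next year. The sample is simply too small to distinguish signal from base rate.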
Investors are not trained statisticians, yet we are showered with comments like this from pundits. How can we sort the wheat from the chaff?
Five steps
The first step is to recognize statistics-based inductive reasoning for what it is. This is easier said than done.
Second, be particularly alert to sample size;
Third, be alert to sample period;
Fourth, be alert to survivorship bias;
Fifth, beware of all generalizations and statistical conclusions. Try to develop an instinctive skepticism to all opinions expressed by analysts, advisors, pundits, journalists and bloggers especially when they are about the future.
The bottom line: any conclusion drawn from an analysis of data should cause investors to immediately raise their guard.
It’s often a problem with sample size
The problem can come up in even the most sophisticated analysis. Here's an example you've probably heard of: the suggested outperformance of emerging markets. But emerging stock markets have far fewer listed stocks than developed market exchanges. The sample size in emerging markets is small, and for this reason alone their results will be more variable than those of a larger market. How could that make a difference? See the Gates Foundation blunder below.
Sample period and survivorship bias
As for claims about company size and performance, we need to ask hard questions about the sample period. There are times in history when smaller stocks outperform and other times when they underperform. Draw your statistical sample from one period or the other and the answer will be different. As well, there may be survivorship bias in the data. For example, if you treat the historic performance of today's ETFs or mutual funds as your sample, your sample omits the ETFs and mutual funds that have gone out of business.
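A toy simulation shows how survivorship bias flatters the record. The numbers here are invented for illustration; the point is the mechanism, not the magnitudes.

```python
import random

random.seed(1)  # reproducible illustration

# 1,000 hypothetical funds each earn a random annual return
# (mean 6%, standard deviation 15% -- invented figures).
returns = [random.gauss(0.06, 0.15) for _ in range(1000)]

# Suppose the bottom quartile of funds closes and disappears
# from today's databases.
cutoff = sorted(returns)[len(returns) // 4]
survivors = [r for r in returns if r > cutoff]

print(f"True average return of all funds: {sum(returns) / len(returns):.1%}")
print(f"Average return of survivors:      {sum(survivors) / len(survivors):.1%}")
# The survivors' average comes out several points higher, purely
# because the failures were deleted from the sample.
```

Measured only on survivors, the "average fund" looks better than any investor's realistic experience at the time.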
These ‘studies’ (a nice-sounding word) are usually accompanied by a narrative, a story: in the first case, that emerging economies contain greater opportunities for high growth; in the second, that smaller companies are more nimble and are growing from a smaller base.
Let me tell you a story
If you think the ‘experts’ always get it right and can be relied on, read on:
Two statisticians, Howard Wainer and Harris Zwerling, wrote an essay about an investment of approximately $1.7 billion made by the Gates Foundation, which was intended to implement the findings of a study showing the characteristics of the most successful schools. As described by Kahneman, referencing the Wainer/Zwerling essay: “One of the conclusions of this research is that the most successful schools, on average, are small. In a survey of 1,662 schools in Pennsylvania, for instance, 6 of the top 50 were small, which is an overrepresentation by a factor of 4.
These data encouraged the Gates Foundation to make a substantial investment in the creation of small schools, sometimes by splitting large schools into smaller units. At least half a dozen other prominent institutions, such as the Annenberg Foundation and the Pew Charitable Trust, joined the effort as did the U.S. Department of Education’s Smaller Learning Communities Program.” It is easy to construct a supporting narrative: one can readily imagine smaller schools giving more personal attention to students.
The problem is that the statistical conclusion is mistaken. The data established no causal connection between smaller schools and better student outcomes. Had the researchers asked instead about the characteristics of the worst schools, they would have found that bad schools also tended to be smaller than average. The small schools were not better on average; they were simply more variable in their outcomes.
Kahneman explains the statistical problem caused by the different sizes of the schools. He asks us to imagine two very patient helpers drawing red and white marbles from a large urn that contains half red marbles and half white marbles. “Jack draws 4 marbles on each trial, Jill draws 7. They both record each time they observe a homogeneous sample – all white or all red. If they go on long enough, Jack will observe such extreme outcomes more often than Jill – by a factor of 8 (the expected percentages are 12.5% and 1.56%).” The drawing of the marbles is purely random; there is no causation at work in the outcome. That Jack sees more all-red or all-white samples is a purely mathematical consequence. The fact is that in statistics, “small samples yield extreme results more often than large samples do” (Kahneman, Thinking, Fast and Slow, 2011, p. 110). This seems innocuous enough, but it seems even highly trained researchers can get it wrong.
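Kahneman's percentages are easy to verify. The probability that all n marbles in a draw are the same colour is 2 × 0.5 to the power n. Here is a short Python check; the simulation setup is my own construction, built to match the example.

```python
from random import choice

# Exact probability that all n draws from a 50/50 urn match:
# P(all red) + P(all white) = 2 * 0.5**n
for n in (4, 7):
    print(f"n = {n}: exact = {2 * 0.5**n:.4%}")
# n = 4: 12.5000% (Jack)   n = 7: 1.5625% (Jill) -- a factor of 8

# A quick simulation tells the same story.
def homogeneous_rate(n, trials=100_000):
    hits = 0
    for _ in range(trials):
        draws = [choice("RW") for _ in range(n)]
        if len(set(draws)) == 1:  # all red or all white
            hits += 1
    return hits / trials

print(homogeneous_rate(4))  # ~0.125
print(homogeneous_rate(7))  # ~0.0156
```

Nothing about the urn favours Jack; his samples are simply smaller, so extreme (homogeneous) results turn up more often. Substitute small schools for Jack and the Gates Foundation blunder follows.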
In many of these cases the proffered answer or opinion may be right or it may be wrong. It’s a case of ‘not proven’. And a supportive narrative doesn’t make the case.
Discussion
Small samples, short or cherry-picked sample periods, and survivorship bias are problems in statistics. But, more than that, they exemplify a problem in our ordinary reasoning: drawing conclusions from minimal evidence. We have a tendency to overgeneralize; this is called faulty inductive reasoning. A related problem is our tendency to find cause-and-effect relationships without adequate evidence. This can happen in different ways. We can make the cognitive error of seeing a pattern in a short series of random events, or we can use correlation as a stand-in for cause.
Conclusion
Phoney statistics are a common source of the erroneous conclusions that we, and the people we rely on, draw to explain the world, the economy or the markets, and to make forecasts or predictions. We rely on them at our peril.
+++++++++++++++
Other posts on investment psychology
This post, which is really about faulty inductive reasoning, is part of a series on investment psychology. Readers are invited to read Investment psychology explainer for Mr. Market – introduction. This will give you a better understanding of some of the terms and ideas and give you links to other posts in the series.
Beyond the series of posts on investment psychology, there is also the Motherlode, Part 2: Human Foibles and Investment Decision Making.
And specifically, to look further into faulty inductive reasoning, check out Chapter 16. We Overgeneralize and Find Causes.
+++++++++++++++
You can reach me by email at rodney@investingmotherlode.com
+++++++++++++++
Check out the Tags Index on the right side of the Home page that goes from ‘accounting goodwill’ to ‘wisdom of crowds’. This will give readers access to a host of useful topics.
+++++++++++++++
There is also a Table of Contents for the whole Motherlode when you click on the Motherlode tab.
Want to dig deeper into the principles behind successful investing?
Click here for the Motherlode – introduction.
If you like this blog, tell your friends about it.
And don’t hesitate to provide comments or share on Twitter and Facebook.