I grew up a baseball fan. I fell in love with the numbers that document the game. I relished how people like Bill James exposed the truth behind the mythology of the sport. While I'm certainly no mathematician, over time I developed a sense of how numbers tell a story. I also saw how most people misunderstood what the statistics were saying. Throughout my career at Morningstar, I've likewise realized that most investors lack a basic numerical grounding and are therefore vulnerable to misleading statistical analysis. I've even seen industry professionals fall prey to flawed use of stats. Unfortunately, these flaws are on clear display in "The Morningstar Mirage," The Wall Street Journal's recent high-profile piece on Morningstar and our star ratings.
The great irony is that The Wall Street Journal's own numbers show the efficacy of the Morningstar Ratings, but its writers fail to grasp this insight. As seen in the chart reproduced below, under the Journal's chosen methodology, 5-star funds earn better future star ratings than 4-star funds, which in turn beat 3-star funds, which beat 2-star funds, which beat 1-star funds. (Five-star funds also meaningfully outperform 1-star funds over the subsequent three- and five-year periods.) A rational take on these numbers would be that the stars add value: Picking higher-rated funds leads to better future results.
Chart: "How the WSJ did its analysis of Morningstar Ratings." Graphic source: The Wall Street Journal; data source: Morningstar
That's no small accomplishment. Take any group of 10,000 or so things, be they sports teams, collectibles, or mutual funds; put them into five buckets and rank those buckets. Do you think that 10 years later the subsequent performance of those buckets will land in the anticipated order, with a meaningful difference between the top and bottom groups? You might think so, but you're probably kidding yourself: by chance alone, five buckets finish in any one particular order just once in 120 tries. To predict the order of finish is a significant accomplishment. Yet rather than praising this achievement, the Journal faulted the fact that the average subsequent rating for 5-star funds declined while the average future rating of 1-star funds improved, as shown below.
Graphic source: The Wall Street Journal; data source: Morningstar
Well, that's the nature of numbers. Five-star funds can't go to six; they can only decline. One-star funds can't go to zero; they can only be buried or improve. The numbers statistically must move toward the mean.
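Both of the Journal's exhibits fall out of a single mechanism, which a toy simulation makes plain. The sketch below is illustrative only, not Morningstar's methodology: each fund gets a persistent "skill" plus fresh one-period luck, funds are bucketed on period-one results, and we then check period two.

```python
import random
import statistics

# Minimal sketch, not Morningstar's methodology: each fund has a
# persistent "skill" plus fresh one-period luck. Parameters are invented.
random.seed(42)
N = 10_000
skill = [random.gauss(0, 1) for _ in range(N)]
period1 = [s + random.gauss(0, 2) for s in skill]  # luck swamps skill
period2 = [s + random.gauss(0, 2) for s in skill]  # same skill, new luck

# Rank funds into five buckets on period-1 results (plain quintiles here;
# Morningstar's actual distribution is 10/22.5/35/22.5/10).
order = sorted(range(N), key=lambda i: period1[i], reverse=True)
for b, stars in enumerate((5, 4, 3, 2, 1)):
    bucket = order[b * N // 5:(b + 1) * N // 5]
    p1 = statistics.mean(period1[i] for i in bucket)
    p2 = statistics.mean(period2[i] for i in bucket)
    print(f"{stars} stars: period 1 avg {p1:+.2f} -> period 2 avg {p2:+.2f}")
```

The buckets finish period two in the anticipated order, because skill persists, yet every bucket's average collapses toward the middle, because luck doesn't. One mechanism, both of the Journal's findings.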
The Fault in Our Stars?
But beyond the laws of numbers, this move to the middle is the very nature of capitalism. Good ideas get replicated, and the competitive advantages of 5-star funds tend to diminish. Similarly, 1-star managers either change their stripes or get fired. How ironic that The Wall Street Journal, of all places, finds fault with Morningstar's ratings for documenting the effects of capitalism in a competitive market like the U.S. money management industry! The fact that the numbers naturally cluster toward the middle, however, doesn't negate the stars' meaningful ability to rank-order five broad buckets of funds 10 years in advance.
There's another oddity in the analysis. Notice that the subsequent average star ratings are all 3 stars or lower. Since ratings average 3 stars across the whole fund universe, the missing above-average ratings must have gone to funds outside the original cohort; many subsequent leaders, in other words, are newer funds that didn't exist at the time of the initial ratings, ones that came along later and delivered good returns. That's fine, but it can hardly be considered a flaw in the earlier ratings that they didn't anticipate the arrival of these new funds and somehow save the top slots for funds not yet launched. In sum, the very numbers the Journal generates suggest that the star rating performs exactly as Morningstar suggests: It's not a full-fledged conclusion, but as a first-stage screen, it meaningfully tilts the odds in investors' favor. That's a benefit that should not be taken lightly.
A second problem with the Journal's statistical analysis of Morningstar's work comes in its casual dismissal of the efficacy of our newer Analyst Ratings. The Journal notes that Gold-rated funds delivered subsequent performance of 3.4 stars, while Silver-rated funds generated 3.3-star performance and Bronze-rated funds 3.0 stars. Had it claimed that the five-year time horizon of this analysis was too short to be meaningful, that might have been a fair concern; instead, it took the position that the differences weren't meaningful enough. How so? The difference in future performance between a bucket of funds with an average expense ratio of 0.75% and one of 0.25% isn't large, either; sometimes, over some periods, the higher-expense bucket will even prevail. Does anybody wish to argue that those extra 50 basis points aren't worth bothering about?
I didn't think so.
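To put a number on those 50 basis points, here's a minimal sketch; the $10,000 starting value, 6% gross return, and 30-year horizon are my illustrative assumptions, not figures from the Journal's analysis.

```python
# Hypothetical illustration: $10,000 compounding at an assumed 6% annual
# gross return for 30 years; only the 0.75% and 0.25% expense ratios
# come from the comparison above.
def final_value(start, gross, expense, years):
    """Compound `start` at the gross return net of the expense ratio."""
    return start * (1 + gross - expense) ** years

cheap = final_value(10_000, 0.06, 0.0025, 30)   # ~$53,500
costly = final_value(10_000, 0.06, 0.0075, 30)  # ~$46,400
print(f"0.25% fund: ${cheap:,.0f}")
print(f"0.75% fund: ${costly:,.0f}")
print(f"cost of 50 basis points: ${cheap - costly:,.0f}")  # ~$7,100
```

Under those assumptions, the pricier bucket ends with roughly 13% less money. Small tilts compound.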
The Importance of Tilting the Odds
The Journal's dismissal of small advantages is precisely the human dynamic that casinos and many financial-services companies use to exploit their customers. Casinos know we'll overlook the small tilt in the odds that favors the house, not recognizing that those small edges can lead to huge profits. We gladly forfeit that advantage and think we're getting away with free drinks, when in reality we're being played. Financial-services companies take small fees and floats from investors who are blind to the transfer. We think a 1.50% expense ratio doesn't matter when looking at 10% to 20% recent gains on a fund. We're wrong. Over time, little things mean a lot. The consequence of ignoring these tilts is their riches and our loss.
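The casino's arithmetic is easy to replicate. Here's a toy simulation, assuming an even-money bet that wins 18 times out of 38, American-roulette-style; the bet size and counts are arbitrary choices of mine.

```python
import random

# Toy casino model: an even-money bet that wins with probability 18/38,
# giving the house an edge of about 5.3%. All counts are arbitrary.
random.seed(7)
p_win, players, bets = 18 / 38, 1_000, 1_000

avg_loss = 0.0
for _ in range(players):
    bankroll = 0.0
    for _ in range(bets):
        bankroll += 1.0 if random.random() < p_win else -1.0
    avg_loss -= bankroll / players

print(f"house edge per $1 bet: {1 - 2 * p_win:.1%}")          # 5.3%
print(f"average loss after {bets} $1 bets: ${avg_loss:.2f}")  # ~$52.63
```

No single spin feels expensive, yet a thousand of them reliably cost each player about $53. The same logic runs in investors' favor when a rating supplies even a modest tilt.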
The Journal's article adopts the free-drink mindset, only in reverse. Rather than casually dismissing the casino's advantages, it casually dismisses the advantages accrued by investors. The Journal's measurement shows that the star ratings pointed in the right direction for that measurement period. (Such results always vary by time.) It also shows a similar pattern for the Morningstar Analyst Ratings, which unlike the star ratings incorporate the analysts' viewpoints, and which unlike the star ratings are intended to be predictive. The Analyst Ratings to date have gone even further than the stars in improving investors' odds. It would seem foolish to dismiss that information.
The True Story
As always, a general precept is best understood by delving into the specifics. Let’s return to the Journal’s results, as documented in this article’s first chart. The numbers show that 14% of 5-star funds, on average, went on to deliver 5-star performance over the next 10 years. At first blush, this result seems damning. It looks like an 86% failure rate. But the case has been framed incorrectly. The proper way of looking at the issue is: Does picking from the list improve your odds over picking randomly? If it does, it is value added; if it doesn't, it's not.
In this case, one would expect any randomly selected fund to have a 10% chance of generating 5-star performance in the future, as Morningstar awards 5-star ratings to 10% of funds. (The ratings distribution is 10% for 5 stars, 22.5% for 4 stars, 35% for 3 stars, 22.5% for 2 stars, and 10% for 1 star.) If choosing from the pool of 5-star funds gives a 14% chance of holding a future 5-star fund, then doing so has increased an investor's odds by 40% in relative terms. Moreover, many of the former 5-star funds went on to deliver very desirable 4-star performance. That's a sizable win, but oddly The Wall Street Journal's writers present that performance as a disservice to investors. The Journal's numbers demonstrate that the stars improve investors' odds, but its writers overlook these benefits.
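Spelled out, the arithmetic behind that figure is simply

$$\frac{0.14 - 0.10}{0.10} = 0.40,$$

a 4-percentage-point absolute gain on a 10% base rate, or a 40% relative improvement in the odds.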
Conclusion
Investors have many parties trying to take a slice of their money, but few forces trying to tilt the odds in their favor. By the Journal's own analysis, Morningstar's ratings move the needle in the right direction, while costing investors nothing and being widely available. If that is a sin, then perhaps Wall Street needs more sinners.