Also, I don't think he recognizes the extent to which the Heat's large margin of victory comes in part from having run up the score on bad teams. Because he doesn't collapse runaway scores, their margin is inflated.
I am sorry, but this doesn't make any sense. If you "collapse" runaway scores, then the whole "margin of victory" concept becomes rather pointless.
No it doesn't. This is what some of the better NCAA college football computer polls do. For instance, the Lakers beat the Cavs this year by 55 points. They could have lost the next nine games by five points each and still have a positive winning margin for those 10 games. But can a team that loses nine in a row be called a good team?
This is especially relevant for Hollinger's system, because he gives more weight to the last 25% of games, and so for 12 games or so, the Lakers' 55-point victory skewed the results of the rankings. Once that game dropped off, the Lakers fell somewhat dramatically in the rankings.
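For concreteness, here is that arithmetic as a quick sketch. It only uses plain averages over a rolling window; it does not attempt to reproduce Hollinger's actual weighting.

```python
# A quick sketch of the Lakers arithmetic above: one 55-point blowout
# followed by nine 5-point losses still leaves a positive average margin.
margins = [55] + [-5] * 9                     # blowout first, then nine straight losses

avg_with_blowout = sum(margins) / len(margins)        # (55 - 45) / 10 = +1.0

# Once the blowout drops out of a rolling 10-game window, the picture flips.
window_after_dropoff = margins[1:] + [-5]             # the +55 game is gone
avg_after_dropoff = sum(window_after_dropoff) / len(window_after_dropoff)   # -5.0

print(avg_with_blowout, avg_after_dropoff)            # 1.0 -5.0
```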
It isn't as relevant when you control for wins and losses as well.
Either way -- I understand what you're saying, but "collapsing" stuff is an extremely arbitrary way to correct for something which can be fixed by using the statistical technique that's designed specifically to deal with outliers.
I think capping wins (or losses) seems reasonable. There's probably not much of a difference between winning by 20 and winning by 35, but there's a big difference between winning by 10 and winning by 35 (assuming 10 is above the median).
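As a rough sketch, capping just means clamping each game's margin before it goes into the average; the cap of 20 below is only the number used in the example above, not a value from anyone's actual system.

```python
# Minimal sketch of "capping": clamp each game's point margin to [-cap, +cap]
# before averaging. The cap of 20 is purely illustrative.
def capped_margin(margin: int, cap: int = 20) -> int:
    return max(-cap, min(cap, margin))

print(capped_margin(35))    # 20  -- a 35-point win counts the same as a 20-point win
print(capped_margin(10))    # 10  -- smaller margins are untouched
print(capped_margin(-55))   # -20 -- blowout losses are clamped symmetrically
```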
How about the difference between games that were won by 15 and 20? I agree that the effect being sought is reasonable. The chosen method, however, is fundamentally unsound.
There is no good underlying reason to "cap" at any given value. Ultimately, caps are arbitrary, and arbitrary rules create data noise.
To prevent skewing the average margin of victory too badly. How many blowout wins a team has is likely a better predictor than how badly it wins those games.
I don't understand why anyone would bother to open this can of worms, when they could simply take the median value. In this case, from the set of (+55, -5, -5, -5, -5) you will get a margin of victory of -5, effectively minimizing the importance of the outlier game.
With the median there would be no difference between margins of 3,3,5,7,7 and 3,3,5,23,23. I think that's too drastic. You want the blowouts to count, but not necessarily overwhelm the results.
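To put numbers on that, here is a short comparison of how the plain mean, the median, and a capped mean (cap of 20, purely illustrative) treat the example margin sets from the last two posts.

```python
import statistics

# The three summaries being debated, applied to the made-up example margins above.
sets = {
    "blowout-heavy": [55, -5, -5, -5, -5],
    "steady wins":   [3, 3, 5, 7, 7],
    "two blowouts":  [3, 3, 5, 23, 23],
}

def capped_mean(margins, cap=20):
    return statistics.mean(max(-cap, min(cap, m)) for m in margins)

for name, margins in sets.items():
    print(name,
          "mean:", round(statistics.mean(margins), 1),
          "median:", statistics.median(margins),
          "capped mean:", round(capped_mean(margins), 1))

# The median rates "steady wins" and "two blowouts" identically (5 vs 5),
# while the capped mean still rewards the blowouts, just less than the raw mean.
```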
I think the problem is that one method is regressive and the other is... I don't know what it's called, but we'll call it "progressive."
Using straight-up margin of victory is regressive in the sense that margin-of-victory rankings, when superimposed over several seasons' data, are both a relatively good predictor of a given team's actual win/loss record and, more importantly, a better predictor of that team's win/loss record going forward. It doesn't deal with anything subjective at all; it uses data to predict future data.
The problem with the NCAA method (of which I was unaware) is that it is NOT a method used to predict future independent results; rather, the NCAA ranking methods are computer programs used to take data and create a ranking list that the programmer SUBJECTIVELY agrees with. In other words, it is susceptible to fallible human ideas/concepts of what makes a good/bad team.
For example, we may come to the table with a preconceived notion that teams that often lose to very good teams cannot themselves be very good teams, and therefore create a computer ranking system that punishes teams for losing to good teams/rewards teams for beating good teams, independent of other results. HOWEVER, this is often done without any study or evidence as to whether or not losing to good teams/beating good teams is actually a good predictor of a given team's ability to win future games against good teams (everything I've read says that it is irrelevant).
To sum:
The NCAA method sounds like a system where people (with biases) pick numbers and criteria to enter into a system so that the system will spew out a ranking that jibes with their subtle, perhaps subconscious, ranking of how good teams are. In this way I called it "progressive": you are creating your own reality, in a way.
Using basic margin of victory, however, is somewhat "regressive": it looks back in history at actual concrete measures of how good a team is (wins/losses) and looks for the independent variables that most closely predicted those wins and losses over large fields of data; margin of victory does this fairly well, and hence is a relatively good predictor of future results.
I would love to see a paper or something written about a different method that outperforms margin of victory in terms of actually matching win/loss results, and not just "seeming" better because it passes our heavily biased "smell tests."
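In the absence of such a paper, here is a rough sketch of the kind of test that claim calls for: summarize each team's first-half-of-season margins two different ways and see which summary correlates better with the team's second-half win rate. The game data below is randomly generated placeholder data, so the printed numbers mean nothing; real historical game logs would be needed for an actual answer.

```python
import random
import statistics

# Rough sketch: does first-half average margin predict second-half win rate
# better than an alternative summary (here, a capped mean)? The "seasons"
# below are random placeholders, NOT real results.
random.seed(0)

def fake_season(strength, games=82):
    """Placeholder game margins for a team of a given average strength."""
    return [round(random.gauss(strength, 12)) for _ in range(games)]

def win_rate(margins):
    return sum(m > 0 for m in margins) / len(margins)

def capped_mean(margins, cap=20):
    return statistics.mean(max(-cap, min(cap, m)) for m in margins)

teams = {f"team {i}": fake_season(strength=random.gauss(0, 4)) for i in range(30)}

rows = []
for name, margins in teams.items():
    first, second = margins[:41], margins[41:]
    rows.append((statistics.mean(first), capped_mean(first), win_rate(second)))

# statistics.correlation (Python 3.10+) gives Pearson's r for each predictor.
mov, capped, future_wins = zip(*rows)
print("plain MOV  vs future win rate:", round(statistics.correlation(mov, future_wins), 3))
print("capped MOV vs future win rate:", round(statistics.correlation(capped, future_wins), 3))
```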