That's not really what he's saying. He's actually saying that a steal has the same PREDICTIVE power as 9.1 points:
Well, yes and no. To be honest, he seems a bit confused about the underlying math. My last post was referring to the way they generate that 9.1 figure, which comes from a descriptive (not predictive) model. What the author is trying to do is use the observed parametric relationships between steals and points (and rebounds, etc.) to generate a predictive model. It is ... well, it isn't particularly good statistical practice. It wouldn't be accepted in many well-respected statistical journals, for example.
To be clear, the 9.1 figure in and of itself is not predictive; it is simply comparative. However, after making some assumptions about the nature of the data, one can use that comparative figure to make predictions. Again, I am splitting hairs here, and to a casual observer it may not be clear that there is a real distinction to be made here, but it is actually a massively significant one.
Here's a non-basketball example. If you measure the heights and weights of 5,000 people, you may find that there is some linear relationship between height and weight. Say that for each unit increase in height, you observe a 0.5 unit increase in weight. So the relationship between height and weight is 0.5 in your observed data set. This is a descriptive statistic, and a comparative one. It isn't inherently predictive. However, one can still make predictions based on it ... you may guess that if you sample another 5,000 people you will find something reasonably close to 0.5. You are basically making the assumption that your measure is unbiased, and thus given an infinitely large sample is indistinguishable from whatever the "true" (and unmeasured) relationship between height and weight is.
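The height/weight example can be sketched with simulated data. Everything here is made up for illustration (the 0.5 slope, the means, and the noise levels are all assumptions): the point is just that a slope fit descriptively on one sample carries over, under the unbiasedness assumption, to a fresh sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed "true" relationship for illustration: weight = 0.5 * height + noise.
def sample_population(n):
    height = rng.normal(170, 10, n)              # arbitrary units
    weight = 0.5 * height + rng.normal(0, 5, n)  # noisy linear relationship
    return height, weight

# Descriptive step: fit the slope on one observed sample of 5,000 people.
h1, w1 = sample_population(5_000)
slope = np.polyfit(h1, w1, 1)[0]

# Predictive step: assuming the estimate is unbiased, it should land
# near the same value on a second, independent sample of 5,000 people.
h2, w2 = sample_population(5_000)
slope2 = np.polyfit(h2, w2, 1)[0]

print(round(slope, 2), round(slope2, 2))  # both close to 0.5
```

The fitted slope describes the first sample; nothing more. Treating it as a prediction about the second sample is an extra step that leans entirely on the unbiasedness assumption.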
It's important to remember that these stats are drawn retroactively to explain wins and losses. Point differentials and scoring expectations are well understood in terms of correlation to win/loss records. It therefore makes sense to use points as a sort of currency to assess the predictive value of other key stats, but the analysis has to be limited to just that.
I'm not sure what you mean when you say the analysis has to be limited to just "that"?
In this sense, the goal of the researcher/statistician has to be to hypothesize as to WHY a stat's predictive value (in terms of points or any other metric/currency) is what it is.
To be fair to the statisticians, they only have to establish that the stat DOES have predictive value, not WHY it does. Essentially, the way statistical tests are set up is the opposite of how most people think about a problem. The test starts from the null hypothesis that the measured stat has NO predictive value whatsoever. None. The test is then executed under that assumption. For another non-mathematical example, it's like you start up a car's engine, measure the heat the engine is generating, and try to prove the engine ISN'T running. Since you measure some positive amount of heat being generated, you then reject your assumption that the engine isn't running. It's sort of weird, but it is a mathematical imperative for reasons that I won't get into.
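A minimal sketch of that backwards setup, using made-up steals/margin numbers and a permutation test (one of the simpler ways to run a null hypothesis test). The null says steals carry no information about margin at all; if that were true, shuffling the steals column would change nothing, so we compare the observed correlation against the shuffled ones.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed data for illustration: per-game steals and point margin,
# with a real relationship deliberately built in by construction.
steals = rng.poisson(8, 200).astype(float)
margin = 2.0 * (steals - steals.mean()) + rng.normal(0, 10, 200)

# Null hypothesis: steals have NO predictive value for margin.
# Under the null, permuting steals shouldn't matter, so the observed
# correlation should look like a typical shuffled correlation.
observed = np.corrcoef(steals, margin)[0, 1]
perms = np.array([
    np.corrcoef(rng.permutation(steals), margin)[0, 1]
    for _ in range(2_000)
])
p_value = np.mean(np.abs(perms) >= abs(observed))

print(observed, p_value)  # small p-value: reject the "no value" null
```

Note what the test does and doesn't do: it rejects "steals tell you nothing," full stop. It says nothing about WHY steals carry that information.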
Saying that a steal has the same predictive power as 9.1 points is not the same thing as saying that a steal itself has the same proportional impact as 9.1 points, because a steal is correlated with and/or represents a favorable defensive matchup that limits points scored in other defensive possessions.
Yes. I agree. But, to be fair, the same thing can be said for points. That is, it represents a favorable offensive matchup. The reason the researcher is focusing on steals is because the observed variability in steals production is low relative to the variability in points production. I am not saying I agree with him completely, but there is a certain amount of sense in trying to measure performance with a relatively consistent measurement.