I disagree. What's the use of evaluating annually when you can't be sure which Jeff Green is going to show up on any given night?
I say that he's been inconsistent within each season. There have been excuses for that, but at some point he has to get past them if he's going to live up to his own expectations.
One thing that basketball stat nerds probably already have, but should really make available to the public, is a figure that looks at a player's typical range from night to night. A 'consistency rating'. If one player averages 17 points a night, but that's because of staggered high-scoring and low-scoring games, and another player averages 16 points a night but never really strays more than 3 points from 16 in doing so, isn't the 16 ppg player more valuable?
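Just to make the idea concrete, here's a rough back-of-the-envelope sketch in Python. The game logs are made up and the "consistency score" formula (one minus the coefficient of variation) is just one placeholder way you could define it, not any official stat:

```python
# Rough sketch of a "consistency rating" using made-up game logs
# for two hypothetical players (numbers are illustrative only).
from statistics import mean, stdev

streaky_player = [28, 6, 25, 8, 27, 5, 26, 9, 24, 12]      # ~17 ppg, big swings
steady_player  = [15, 17, 16, 18, 14, 16, 17, 15, 16, 16]  # ~16 ppg, small swings

def consistency_report(games):
    avg = mean(games)
    sd = stdev(games)      # game-to-game spread
    cv = sd / avg          # spread relative to the average
    return avg, sd, cv

for name, games in [("streaky", streaky_player), ("steady", steady_player)]:
    avg, sd, cv = consistency_report(games)
    print(f"{name}: {avg:.1f} ppg, typical spread +/- {sd:.1f}, "
          f"consistency score {1 - cv:.2f}")
```

The streaky player comes out around +/- 10 points a night, the steady one around +/- 1, even though their averages are nearly identical. That gap is exactly what the season-long ppg number hides.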
This is a distinction I was trying to get at. Is Green's play unsatisfying because his game-to-game variance is so high? Or because he "should be" averaging 20/6/3 over the year instead of 15/5/2?
Conceptually, the complaints are about different things, and I've seen both mentioned. One is that he should be better over the course of an entire season. The other is that he should be more consistent from game to game. And some people think he should do both.
But it's not right to ask "what's the use of evaluating annually" as though the annual view were rendered irrelevant by inconsistency. If Green averages 20/7/3 with similar game-to-game inconsistency, that's improvement, right? Or he could keep his averages the same with lower variance. Or both.
I'm with the other posters above in that I'd like to see more variance-based metrics. I'd also like to see how game-to-game consistency relates to player roles and team success. For example, maybe consistency is more valuable for starters, while for 6th men (think Vinnie Johnson) high variance could actually be a good thing. It'd be interesting.