Regarding those stats mmmmm gave, Jeff Green is not middle of the pack. His numbers sit roughly equidistant from the extremes of the data, but of the 40+ players listed, only 10 had more divergent numbers, going by the summed percentages of games well above and well below average. That puts Green in the top 25% or so for inconsistency.
Well, sure, if you want to count the positives against him. By a strict plus-or-minus variance definition, that is correct.
But stepping away from the statistics, what probably matters more to a fan's definition of 'consistency' is avoiding the 'under' performances. I don't think anyone is complaining about the share of Green's games that were 40% over his average. My bet is that what matters is what share of his games are stinkers, and how often that happens compared to other players of similar usage.
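To make the distinction concrete, here is a minimal sketch of the difference between a symmetric spread measure and a downside-only "stinker rate". The per-game scoring line and the ±40% cutoff are my own illustrative assumptions, not numbers from the thread:

```python
# Hedged sketch: symmetric spread vs. counting only the "unders".
# The games list is made-up data; 0.40 is the arbitrary threshold
# discussed above, not anything from the original stats.
from statistics import mean, pstdev

games = [22, 8, 19, 25, 7, 21, 30, 9, 18, 24]  # hypothetical points per game
avg = mean(games)

threshold = 0.40
overs = sum(1 for g in games if g > avg * (1 + threshold))   # great games
unders = sum(1 for g in games if g < avg * (1 - threshold))  # stinkers

print(f"average: {avg:.1f}, std dev: {pstdev(games):.1f}")
print(f"games 40%+ over average:  {overs} ({overs / len(games):.0%})")
print(f"games 40%+ under average: {unders} ({unders / len(games):.0%})")
```

On this toy line the two tails come out asymmetric (one blow-up game, three stinkers), which is exactly the kind of split a symmetric variance number hides.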
EDIT: I'm also not sure these are properly characterized as simple 'variance'. The factors behind a 'poor' game versus a 'great' game aren't necessarily symmetric 'noise' (though that's certainly a big part of it). The fact that most of the players have an asymmetric pair of outlier numbers points to this. And keep in mind that calling them outliers relative to the standard deviation would require them to actually be that; these numbers are not. They are simply counts of performances beyond my arbitrary '40%' threshold. I suspect using the std dev would track this closely anyway. Just more work for little benefit.
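For what it's worth, a quick sketch of that last suspicion, again on made-up data: flag games with the arbitrary ±40%-of-average cutoff, then with a one-standard-deviation cutoff (the 1σ choice is my assumption), and compare which games get flagged:

```python
# Hedged sketch: arbitrary ±40% cutoff vs. a std-dev cutoff.
# Data and the 1-sigma choice are illustrative assumptions; the point
# is only that the two rules tend to flag similar games.
from statistics import mean, pstdev

games = [22, 8, 19, 25, 7, 21, 30, 9, 18, 24]  # hypothetical points per game
avg, sd = mean(games), pstdev(games)

pct_flags = [g for g in games if abs(g - avg) > 0.40 * avg]  # percent rule
sd_flags = [g for g in games if abs(g - avg) > 1.0 * sd]     # sigma rule

print(f"±40% outliers: {sorted(pct_flags)}")
print(f"±1σ outliers:  {sorted(sd_flags)}")
```

On this sample the two rules flag the same games, which is roughly the "tracks closely, more work for little benefit" point.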