I derive two different statistics for characterizing the dispersion in a random variable. One, call it E, is an average of ranges. The other, call it R, is an unbiased estimator of the random variable's parameter. But E is far more popular than R because it's easier to calculate and observe.

Analyzing a particular sample, I find that the 90% confidence interval for E has an error range of +/-20%. On the same sample, the 90% confidence interval for R also has an error range of +/-20%. Based on this, E looks just as "efficient" as R.

However I believe that R is a much more powerful and efficient statistic, because E ignores useful information that R doesn't. How can I quantify this?

I calculated the coefficient of variation for each statistic; on this sample it is 0.12 for E and 0.10 for R. Does that demonstrate that R is a better statistic? If so, what is the plain-language explanation? Or how would that difference manifest itself, given that the confidence intervals are identical?
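To make the comparison concrete, here is the kind of simulation I have in mind: estimate the relative efficiency as the ratio of the two statistics' sampling variances over many repeated samples. This sketch assumes, purely for illustration, a normal population where E is a range-based estimate of sigma (mean subgroup range divided by the d2 constant) and R is the sample standard deviation; the actual E and R in my problem may differ.

```python
import numpy as np

# Illustrative assumption: sigma is the parameter both statistics estimate.
rng = np.random.default_rng(0)
sigma, trials = 1.0, 20000
d2 = 2.326  # E[range]/sigma for normal subgroups of size 5

est_E, est_R = [], []
for _ in range(trials):
    x = rng.normal(0.0, sigma, size=(10, 5))        # 10 subgroups of 5
    rbar = (x.max(axis=1) - x.min(axis=1)).mean()   # mean subgroup range
    est_E.append(rbar / d2)       # range-based estimate of sigma (my "E")
    est_R.append(x.std(ddof=1))   # sample-sd estimate from all 50 points (my "R")

est_E, est_R = np.array(est_E), np.array(est_R)
# Relative efficiency: how much more the range-based estimate scatters.
print("var(E) / var(R) =", est_E.var(ddof=1) / est_R.var(ddof=1))
```

If the printed ratio exceeds 1, then R achieves the same accuracy with proportionally fewer observations, which seems like the quantity I'm after rather than the confidence interval from any single sample.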
