I have a dataset of sample observations, stored as counts within range bins. For example:
min/max count
40/44 1
45/49 2
50/54 3
55/59 4
70/74 1
Now, estimating the mean from this is fairly straightforward: just use the mean (or median) of each bin's range as the observation and the count as the weight, and find the weighted mean:
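With the bin means $x_i$ as the observations and the counts $w_i$ as the weights, that is

$$\mu^* = \frac{\sum_{i=1}^{N} w_i x_i}{\sum_{i=1}^{N} w_i}.$$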
For my test case, this gives me 53.82.
My question now is, what's the correct method of finding the standard deviation (or variance)?
Through my searching, I've found several answers, but I'm unsure which, if any, is actually appropriate for my dataset. I was able to find the following formula both in another question here and in a random NIST document:

$$\sigma^2_w = \frac{\sum_{i=1}^{N} w_i (x_i - \mu^*)^2}{\frac{(N'-1)}{N'}\sum_{i=1}^{N} w_i},$$

which gives a standard deviation of 8.35 for my test case. However, the Wikipedia article on weighted means gives both the formula

$$\sigma^2_w = \frac{\sum_{i=1}^{N} w_i (x_i - \mu^*)^2}{\sum_{i=1}^{N} w_i - \frac{\sum_{i=1}^{N} w_i^2}{\sum_{i=1}^{N} w_i}}$$

and

$$\sigma^2_w = \frac{\sum_{i=1}^{N} w_i (x_i - \mu^*)^2}{\sum_{i=1}^{N} w_i - 1},$$

which give standard deviations of 8.66 and 7.83, respectively, for my test case.
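For reference, a small `R` sketch (my own check, taking the observation for each bin to be $(\min+\max)/2$ and its count as the weight) reproduces these three values:

```r
counts <- c(1, 2, 3, 4, 1)
x <- c(42, 47, 52, 57, 72)   # bin means: (min + max) / 2
w <- counts                  # counts used as weights

mu <- sum(w * x) / sum(w)    # weighted mean, about 53.82
ss <- sum(w * (x - mu)^2)    # weighted sum of squared deviations
N.prime <- sum(w > 0)        # number of bins with nonzero counts

sqrt(ss / ((N.prime - 1) * sum(w) / N.prime))  # about 8.35 (NIST formula)
sqrt(ss / (sum(w) - sum(w^2) / sum(w)))        # about 8.66 (first Wikipedia formula)
sqrt(ss / (sum(w) - 1))                        # about 7.83 (second Wikipedia formula)
```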
Update
Thanks to @whuber, who suggested looking into Sheppard's corrections, and for the helpful comments related to them. Unfortunately, I'm having a difficult time understanding the resources I can find about them (and I can't find any good examples). To recap, though, I understand that the following is a biased estimate of variance:

$$\sigma^2_w = \frac{\sum_{i=1}^{N} w_i (x_i - \mu^*)^2}{\sum_{i=1}^{N} w_i}$$
I also understand that most standard corrections for the bias are for direct random samples of a normal distribution. Therefore, I see two potential issues for me:
- The data are binned random samples (which, I'm pretty sure, is where Sheppard's corrections come in).
- It is not known whether the data are normally distributed (so I'm assuming they're not, which, I'm fairly sure, invalidates Sheppard's corrections).
My updated question is: what is the proper method for dealing with the bias of the "simple" weighted standard deviation/variance formula on a non-normal distribution? Specifically with respect to binned data.
Note: I'm using the following terminology:
- $\sigma^2_w$ is the weighted variance
- $N$ is the number of observations (i.e., the number of bins)
- $N'$ is the number of nonzero weights (i.e., the number of bins with counts)
- $w_i$ are the weights (i.e., the counts)
- $x_i$ are the observations (i.e., the bin means)
- $\mu^*$ is the weighted mean
Answers:
This reply presents two solutions: Sheppard's corrections and a maximum likelihood estimate. Both closely agree on an estimate of the standard deviation: 7.70 for the first and 7.69 for the second (when adjusted to be comparable to the usual "unbiased" estimator).
Sheppard's corrections
"Sheppard's corrections" are formulas that adjust moments computed from binned data (like these) where
the data are assumed to be governed by a distribution supported on a finite interval[a,b]
that interval is divided sequentially into equal bins of common widthh that is relatively small (no bin contains a large proportion of all the data)
the distribution has a continuous density function.
They are derived from the Euler-Maclaurin sum formula, which approximates integrals in terms of linear combinations of values of the integrand at regularly spaced points, and are therefore generally applicable (and not just to Normal distributions).
Although strictly speaking a Normal distribution is not supported on a finite interval, to an extremely close approximation it is. Essentially all its probability is contained within seven standard deviations of the mean. Therefore Sheppard's corrections are applicable to data assumed to come from a Normal distribution.
The first two Sheppard's corrections are:
- Use the mean of the binned data for the mean of the data (that is, no correction is needed for the mean).
- Subtract $h^2/12$ from the variance of the binned data to obtain the (approximate) variance of the data.
Where does $h^2/12$ come from? This equals the variance of a uniform variate distributed over an interval of length $h$. Intuitively, then, Sheppard's correction for the second moment suggests that binning the data (effectively replacing them by the midpoint of each bin) appears to add an approximately uniformly distributed value ranging between $-h/2$ and $h/2$, whence it inflates the variance by $h^2/12$.
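As a quick check of that constant: a uniform variate $U$ on $(-h/2,\,h/2)$ has mean $0$, so

$$\operatorname{Var}(U) = \int_{-h/2}^{h/2} \frac{u^2}{h}\,du = \frac{1}{h}\left[\frac{u^3}{3}\right]_{-h/2}^{h/2} = \frac{h^2}{12}.$$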
Let's do the calculations. I use `R` to illustrate them, beginning by specifying the counts and the bins (see the sketch after the next paragraph).

The proper formula to use for the counts comes from replicating the bin midpoints by the amounts given by the counts; that is, the binned data are equivalent to the expanded sample

42.5, 47.5, 47.5, 52.5, 52.5, 52.5, 57.5, 57.5, 57.5, 57.5, 72.5
Their number, mean, and variance can be directly computed without having to expand the data in this way, though: when a bin has midpoint $x$ and a count of $k$, then its contribution to the sum of squares is $kx^2$. This leads to the second of the Wikipedia formulas cited in the question.
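A minimal sketch of these calculations (variable names are illustrative; it assumes each bin spans a width-5 interval, so that 40/44 runs from 40 to 45 with midpoint 42.5):

```r
counts    <- c(1, 2, 3, 4, 1)
bin.lower <- c(40, 45, 50, 55, 70)
bin.upper <- bin.lower + 5                      # each bin has common width h = 5

mid    <- (bin.lower + bin.upper) / 2           # midpoints 42.5, 47.5, 52.5, 57.5, 72.5
n      <- sum(counts)                           # 11 observations in all
mu     <- sum(counts * mid) / n                 # mean of the binned data
sigma2 <- sum(counts * (mid - mu)^2) / (n - 1)  # variance of the binned data

h <- 5
sqrt(sigma2)           # SD of the binned data, about 7.83
sqrt(sigma2 - h^2/12)  # Sheppard-corrected SD, about 7.70
```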
The mean (`mu`) is $1195/22 \approx 54.32$ (needing no correction) and the variance (`sigma2`) is $675/11 \approx 61.36$. (Its square root is $7.83$, as stated in the question.) Because the common bin width is $h = 5$, we subtract $h^2/12 = 25/12 \approx 2.08$ from the variance and take its square root, obtaining $\sqrt{675/11 - 5^2/12} \approx 7.70$ for the standard deviation.

Maximum Likelihood Estimates
An alternative method is to apply a maximum likelihood estimate. When the assumed underlying distribution has a distribution function $F_\theta$ (depending on parameters $\theta$ to be estimated) and the bin $(x_0, x_1]$ contains $k$ values out of a set of independent, identically distributed values from $F_\theta$, then the (additive) contribution to the log likelihood of this bin is

$$\log \prod_{i=1}^{k} \left(F_\theta(x_1) - F_\theta(x_0)\right) = k \log\left(F_\theta(x_1) - F_\theta(x_0)\right)$$
(see MLE/Likelihood of lognormally distributed interval).
Summing over all bins gives the log likelihood $\Lambda(\theta)$ for the dataset. As usual, we find an estimate $\hat\theta$ which minimizes $-\Lambda(\theta)$. This requires numerical optimization, and that is expedited by supplying good starting values for $\theta$. The following `R` code does the work for a Normal distribution:
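One way to set this up (a sketch, reusing `counts`, `bin.lower`, `bin.upper`, `mu`, and `sigma2` from the sketch above and minimizing the negative log likelihood with `optim`):

```r
# Negative log likelihood of Normal(mu, sigma) data observed only as bin counts.
# theta = c(mu, log(sigma)); optimizing over log(sigma) keeps sigma positive.
neg.log.lik <- function(theta, counts, bin.lower, bin.upper) {
  mu <- theta[1]
  sigma <- exp(theta[2])
  p <- pnorm(bin.upper, mu, sigma) - pnorm(bin.lower, mu, sigma)  # bin probabilities
  -sum(counts * log(p))
}

# Start at the moment estimates computed above and optimize numerically.
start <- c(mu, log(sqrt(sigma2)))
fit <- optim(start, neg.log.lik,
             counts = counts, bin.lower = bin.lower, bin.upper = bin.upper)
c(mu.hat = fit$par[1], sigma.hat = exp(fit$par[2]))
```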
The resulting coefficients are $(\hat\mu, \hat\sigma) = (54.32, 7.33)$.
Remember, though, that for Normal distributions the maximum likelihood estimate of $\sigma$ (when the data are given exactly and not binned) is the population SD of the data, not the more conventional "bias corrected" estimate in which the variance is multiplied by $n/(n-1)$. Let us then (for comparison) correct the MLE of $\sigma$, finding $\sqrt{n/(n-1)}\,\hat\sigma = \sqrt{11/10} \times 7.33 = 7.69$. This compares favorably with the result of Sheppard's correction, which was $7.70$.
Verifying the Assumptions
To visualize these results we can plot the fitted Normal density over a histogram:
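One way to draw this (a sketch, reusing the objects above; `rep` just expands the bin midpoints by their counts so `hist` has raw values to bin):

```r
breaks <- sort(unique(c(bin.lower, bin.upper)))     # 40 45 50 55 60 70 75
values <- rep((bin.lower + bin.upper) / 2, counts)  # midpoints repeated by count
hist(values, breaks = breaks, freq = FALSE,
     main = "Binned data with fitted Normal density", xlab = "Value")
curve(dnorm(x, mean = fit$par[1], sd = exp(fit$par[2])), add = TRUE, lwd = 2)
```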
To some this might not look like a good fit. However, because the dataset is small (only 11 values), surprisingly large deviations between the distribution of the observations and the true underlying distribution can occur.
Let's more formally check the assumption (made by the MLE) that the data are governed by a Normal distribution. An approximate goodness-of-fit test can be obtained from a $\chi^2$ test: the estimated parameters indicate the expected amount of data in each bin, and the $\chi^2$ statistic compares the observed counts to the expected counts. Here is a test in `R` (its output is summarized below):
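A sketch of such a test, assuming the fitted parameters from the optimization above; `chisq.test` with `simulate.p.value = TRUE` reports a simulated p-value rather than relying on the asymptotic $\chi^2$ distribution:

```r
mu.hat    <- fit$par[1]
sigma.hat <- exp(fit$par[2])

# Expected probability of each bin under the fitted Normal; rescale.p = TRUE
# renormalizes these to sum to 1 over the observed bins.
p.hat <- pnorm(bin.upper, mu.hat, sigma.hat) - pnorm(bin.lower, mu.hat, sigma.hat)

chisq.test(counts, p = p.hat, rescale.p = TRUE, simulate.p.value = TRUE, B = 1e5)
```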
The software has performed a permutation test (which is needed because the test statistic does not follow a chi-squared distribution exactly: see my analysis at How to Understand Degrees of Freedom). Its p-value of 0.245, which is not small, shows very little evidence of departure from normality: we have reason to trust the maximum likelihood results.