This is a more general approach to the problem posed by this question. Once we have obtained the asymptotic distribution of the sample variance, we can apply the Delta method to arrive at the corresponding distribution for the standard deviation.
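For concreteness, the Delta-method step alluded to here works as follows (a sketch, with $g(u)=\sqrt u$ and $g'(\sigma^2)=1/(2\sigma)$):

```latex
\sqrt{n}\,\left(s^2 - \sigma^2\right) \xrightarrow{d} N\left(0,\ \mu_4 - \sigma^4\right)
\quad\Longrightarrow\quad
\sqrt{n}\,\left(s - \sigma\right) \xrightarrow{d} N\left(0,\ \frac{\mu_4 - \sigma^4}{4\sigma^2}\right)
```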
Let $X_1,\dots,X_n$ be an i.i.d. sample of non-normal random variables with mean $\mu$ and variance $\sigma^2$. Define the sample mean and the sample variance as
$$\bar x = \frac1n\sum_{i=1}^n X_i, \qquad s^2 = \frac{1}{n-1}\sum_{i=1}^n \left(X_i-\bar x\right)^2.$$
We know that
$$\sqrt{n}\,\left(s^2 - \sigma^2\right)\ \xrightarrow{d}\ N\left(0,\ \mu_4 - \sigma^4\right),$$
where $\mu_4 = E\big[(X_i-\mu)^4\big]$, and we restrict our attention to distributions for which whatever moments need to exist and be finite do exist and are finite.
Does this hold, and how can it be shown?
mathematical-statistics
variance
central-limit-theorem
asymptotics
Alecos Papadopoulos
Answers:
To side-step the dependencies that arise when we consider the sample variance, we write
$$(n-1)s^2 = \sum_{i=1}^n\Big[(X_i-\mu)-(\bar x-\mu)\Big]^2,$$
and after a little manipulation,
$$(n-1)s^2 = \sum_{i=1}^n (X_i-\mu)^2 - n(\bar x-\mu)^2.$$
Therefore
$$\sqrt n\,\left(s^2-\sigma^2\right) = \frac{\sqrt n}{n-1}\sum_{i=1}^n (X_i-\mu)^2 - \sqrt n\,\sigma^2 - \frac{n}{n-1}\sqrt n\,(\bar x-\mu)^2.$$
Manipulating,
$$\sqrt n\,\left(s^2-\sigma^2\right) = \frac{n}{n-1}\cdot\frac1{\sqrt n}\sum_{i=1}^n\Big[(X_i-\mu)^2-\sigma^2\Big] + \frac{\sqrt n}{n-1}\,\sigma^2 - \frac{n}{n-1}\sqrt n\,(\bar x-\mu)^2.$$
The term $n/(n-1)$ becomes unity asymptotically. The term $\frac{\sqrt n}{n-1}\sigma^2$ is deterministic and goes to zero as $n\to\infty$.
We also have $\sqrt n\,(\bar x-\mu)^2 = \big[\sqrt n\,(\bar x-\mu)\big]\cdot(\bar x-\mu)$. The first component converges in distribution to a Normal, the second converges in probability to zero. Then by Slutsky's theorem the product converges in probability to zero.
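To illustrate numerically that this squared-mean term vanishes (my own illustration, not part of the original answer), one can watch its average shrink as $n$ grows, here with Exponential(1) data so that $\mu = 1$:

```python
import numpy as np

rng = np.random.default_rng(1)
reps = 200

# E[ sqrt(n) * (xbar - mu)^2 ] is roughly sigma^2 / sqrt(n),
# so this term should shrink toward zero as n grows.
means = []
for n in (100, 10_000, 100_000):
    x = rng.exponential(scale=1.0, size=(reps, n))
    term = np.sqrt(n) * (x.mean(axis=1) - 1.0) ** 2
    means.append(term.mean())

print(means)  # decreasing toward zero
```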
We are left with the term
$$\frac1{\sqrt n}\sum_{i=1}^n\Big[(X_i-\mu)^2-\sigma^2\Big].$$
Alerted by a lethal example offered by @whuber in a comment to this answer, we want to make certain that $(X_i-\mu)^2$ is not a constant. Whuber pointed out that if $X_i$ is Bernoulli$(1/2)$, then this quantity is a constant. So, excluding variables for which this happens (perhaps other dichotomous variables, not just $0/1$ binary ones), for the rest we have
$$E\big[(X_i-\mu)^2\big]=\sigma^2, \qquad \operatorname{Var}\big[(X_i-\mu)^2\big]=\mu_4-\sigma^4,$$
and so the term under investigation is a standard subject for the classical Central Limit Theorem, and
$$\frac1{\sqrt n}\sum_{i=1}^n\Big[(X_i-\mu)^2-\sigma^2\Big]\ \xrightarrow{d}\ N\left(0,\ \mu_4-\sigma^4\right),$$
which, combined with the previous steps, gives
$$\sqrt n\,\left(s^2-\sigma^2\right)\ \xrightarrow{d}\ N\left(0,\ \mu_4-\sigma^4\right).$$
Note: the above result of course also holds for normally distributed samples, but in that special case we additionally have an exact finite-sample chi-square distributional result.
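As a quick sanity check of this limit (my own addition, not part of the original answer), here is a small Monte Carlo sketch in Python using Exponential(1) data, for which $\mu_4 = 9$ and hence $\mu_4 - \sigma^4 = 8$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 1_000, 10_000

# Exponential(1): mu = 1, sigma^2 = 1, mu_4 = 9, so the limiting
# variance of sqrt(n) * (s^2 - sigma^2) should be mu_4 - sigma^4 = 8.
x = rng.exponential(scale=1.0, size=(reps, n))
s2 = x.var(axis=1, ddof=1)     # unbiased sample variances
z = np.sqrt(n) * (s2 - 1.0)    # centred and scaled statistic

print(z.mean())  # should be near 0
print(z.var())   # should be near 8
```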
You already have a detailed answer to your question, but let me offer another one to go with it. Actually, a shorter proof is possible, based on the fact that the distribution of
$$S^2 = \frac{1}{n-1}\sum_{i=1}^n \left(X_i-\bar X\right)^2$$
does not depend on $E(X)=\xi$, say. Asymptotically, it also does not matter whether we change the factor $\frac{1}{n-1}$ to $\frac{1}{n}$, which I will do for convenience. We then have
$$\sqrt n\left(S^2-\sigma^2\right) = \sqrt n\left(\frac1n\sum_{i=1}^n X_i^2 - \bar X^2 - \sigma^2\right).$$
And now we assume without loss of generality that $\xi=0$, and we notice that
$$\sqrt n\,\bar X^2 = \frac{1}{\sqrt n}\left(\sqrt n\,\bar X\right)^2$$
has probability limit zero, since the second term is bounded in probability (by the CLT and the continuous mapping theorem), i.e. it is $O_p(1)$. The asymptotic result now follows from Slutsky's theorem and the CLT, since
$$\sqrt n\left(\frac1n\sum_{i=1}^n X_i^2 - \sigma^2\right)\ \xrightarrow{d}\ N\left(0,\ \tau^2\right),$$
where $\tau^2 = \operatorname{Var}\{X^2\} = E(X^4) - \big(E(X^2)\big)^2$. And that will do it.
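A hedged numerical sketch of this limit (my own addition): take $X$ standard normal with $\xi = 0$, so $\tau^2 = E(X^4) - (E(X^2))^2 = 3 - 1 = 2$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 1_000, 10_000

# Standard normal: E(X^2) = 1, E(X^4) = 3, hence tau^2 = 2.
x = rng.standard_normal(size=(reps, n))
stat = np.sqrt(n) * ((x**2).mean(axis=1) - 1.0)

print(stat.var())  # should be near tau^2 = 2
```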
The excellent answers by Alecos and JohnK already derive the result you are after, but I would like to note something else about the asymptotic distribution of the sample variance.
It is common to see asymptotic results presented using the normal distribution, and this is useful for stating the theorems. However, practically speaking, the purpose of an asymptotic distribution for a sample statistic is that it allows you to obtain an approximate distribution when $n$ is large. There are lots of choices you could make for your large-sample approximation, since many distributions have the same asymptotic form. In the case of the sample variance, it is my view that an excellent approximating distribution for large $n$ is given by:
$$S_n^2 \sim \frac{\sigma^2}{DF_n}\cdot \chi^2(DF_n),$$
where $DF_n \equiv 2/\mathbb{V}(S_n^2/\sigma^2) = 2n\big/\big(\kappa - \tfrac{n-3}{n-1}\big)$ and $\kappa = \mu_4/\sigma^4$ is the kurtosis parameter. This distribution is asymptotically equivalent to the normal approximation derived from the theorem (the chi-squared distribution converges to normal as the degrees-of-freedom tends to infinity). Despite this equivalence, this approximation has various other properties you would like your approximating distribution to have:
Unlike the normal approximation derived directly from the theorem, this distribution has the correct support for the statistic of interest. The sample variance is non-negative, and this distribution likewise has non-negative support.
In the case where the underlying values are normally distributed, this approximation is actually the exact sampling distribution. (In this case we have $\kappa=3$, which gives $DF_n=n-1$, the standard form used in most texts.) It therefore constitutes a result that is exact in an important special case, while still being a reasonable approximation in more general cases.
Derivation of the above result: Approximate distributional results for the sample mean and variance are discussed at length in O'Neill (2014), and this paper provides derivations of many results, including the present approximating distribution.
This derivation starts from the limiting result in the question:
$$\sqrt n\,\left(S_n^2 - \sigma^2\right)\ \xrightarrow{d}\ N\left(0,\ \sigma^4(\kappa-1)\right).$$
Re-arranging this result we obtain the approximation:
$$S_n^2 \sim N\left(\sigma^2,\ \frac{\sigma^4(\kappa-1)}{n}\right).$$
Since the chi-squared distribution is asymptotically normal, as $DF\to\infty$ we have:
$$\frac{\chi^2(DF)}{DF} \sim N\left(1,\ \frac{2}{DF}\right).$$
Taking $DF_n \equiv 2/\mathbb{V}(S_n^2/\sigma^2)$ (which yields the above formula) gives $DF_n \to 2n/(\kappa-1)$, which ensures that the chi-squared distribution is asymptotically equivalent to the normal approximation from the limiting theorem.
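A tiny numerical sketch of the $DF_n$ formula (the helper name `df_n` is my own, not from O'Neill 2014): for normal data, $\kappa = 3$ recovers $DF_n = n-1$, and for large $n$ the value approaches $2n/(\kappa-1)$:

```python
def df_n(n, kappa):
    """Effective degrees of freedom: 2n / (kappa - (n-3)/(n-1))."""
    return 2.0 * n / (kappa - (n - 3) / (n - 1))

# Normal data (kappa = 3): recovers the textbook DF_n = n - 1.
print(df_n(10, 3.0))    # 9, up to floating point
print(df_n(100, 3.0))   # 99, up to floating point

# Heavy-tailed data (kappa = 9, e.g. Exponential) at large n:
# approaches the asymptotic value 2n / (kappa - 1) = n / 4.
print(df_n(1_000_000, 9.0))
```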