This is not homework, although it looks like it. Any references are welcome. :-)
Scenario: There are $n$ balls and $n$ bins (labeled $1$ to $n$, from left to right). Each ball is thrown independently and uniformly at random into one of the bins. Let $B_i$ be the number of balls in the $i$-th bin. Let $E_\ell$ denote the following event.
For every $i \le \ell$,
$$\sum_{k=1}^{i} B_k < i.$$
That is, the first $i$ bins (the leftmost $i$ bins) contain fewer than $i$ balls, for every $i \le \ell$.
Question: How can we estimate $\sum_{i<n}\Pr(E_i)$ in terms of $n$, as $n$ goes to infinity? A lower bound is preferred. I don't think there is an easily computable formula.
Example: $\Pr(E_1)=\left(1-\frac{1}{n}\right)^{n}$, since $E_1$ is the event that the first bin is empty. Note that $E_{\ell+1}\subseteq E_\ell$.
My guess: I think $\sum_{i<n}\Pr(E_i)=\Theta(\sqrt{n})$ as $n$ goes to infinity. I computed the first few terms of the summation.
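The guess is easy to probe numerically. Below is a minimal Monte Carlo sketch (the helper names are mine); it uses the observation that the events are nested ($E_{\ell+1}\subseteq E_\ell$), so the sum $\sum_{\ell<n}\Pr(E_\ell)$ equals the expected length of the longest prefix on which the condition holds:

```python
import random

def longest_prefix(n, rng):
    """Throw n balls uniformly into n bins; return the largest l such
    that E_l holds (bins 1..i hold fewer than i balls for all i <= l),
    or 0 if E_1 already fails."""
    bins = [0] * n
    for _ in range(n):
        bins[rng.randrange(n)] += 1
    prefix = 0
    for i in range(1, n):
        prefix += bins[i - 1]
        if prefix >= i:        # E_i fails, hence so does every later E_l
            return i - 1
    return n - 1

def estimate_sum(n, trials, seed=0):
    """Monte Carlo estimate of S(n) = sum_{l<n} Pr(E_l).  Since
    E_1 contains E_2 contains ..., the sum is exactly E[longest_prefix]."""
    rng = random.Random(seed)
    return sum(longest_prefix(n, rng) for _ in range(trials)) / trials

for n in (64, 256, 1024):
    print(n, estimate_sum(n, 500))   # divide by sqrt(n) to test the guess
```

Dividing the printed estimates by $\sqrt{n}$ should give roughly constant values if the $\Theta(\sqrt{n})$ guess is right.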
reference-request
co.combinatorics
pr.probability
Peng Zhang
Answers:
EDIT (2014-08-08): As Douglas Zare points out in the comments, the argument below, and specifically the "bridge" between the two probabilities, is invalid. I do not see a simple way to repair it. I will leave the answer here, as I believe it still provides some intuition, but be aware that it is wrong in general.
This will not be a full answer, but I hope it contains enough content that you, or someone more competent than I am, can finish it.
Consider the probability that exactly $k$ balls fall into the first $l$ (of $n$) bins:
$$\binom{n}{k}\left(\frac{l}{n}\right)^{k}\left(1-\frac{l}{n}\right)^{n-k}.$$
Call the probability that fewer than $l$ balls fall into the first $l$ bins $F_l$:
$$F_l = \sum_{k=0}^{l-1}\binom{n}{k}\left(\frac{l}{n}\right)^{k}\left(1-\frac{l}{n}\right)^{n-k} = F\!\left(l-1;\,n,\,\tfrac{l}{n}\right).$$
The probability that the event $E_l$ above occurs is smaller than what we get by treating each of the $F_l$ events as occurring independently, on its own. This gives us a bridge between the two:
$$\Pr(E_l) \le \prod_{i=1}^{l} F_i,$$
where $F(l-1;n,p)$ is the cumulative distribution function for the binomial distribution with $p=\frac{l}{n}$. Reading a few lines down the Wikipedia page, and noting that $l-1\le pn$, we can use a Chernoff bound to get:
$$F_l = F\!\left(l-1;\,n,\,\tfrac{l}{n}\right) \le \exp\!\left(-\frac{1}{2l}\right).$$
Multiplying these bounds together gives
$$\Pr(E_m) \le \prod_{l=1}^{m}\exp\!\left(-\frac{1}{2l}\right) = \exp\!\left(-\frac{H_m}{2}\right) \le \exp\!\left(-\frac{1}{2}\left(\ln m + \gamma + \frac{1}{2(m+1)}\right)\right),$$
where $H_m$ is the $m$'th harmonic number, $\gamma$ is the Euler–Mascheroni constant, and the inequality for $H_m$ is taken from the linked Wolfram MathWorld page.
Not worrying about the $e^{-1/(4(m+1))}$ factor, this finally gives us:
$$\Pr(E_m) \le \frac{e^{-\gamma/2}}{\sqrt{m}}.$$
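The Chernoff step can be sanity-checked with exact binomial tail sums (a sketch using only the standard library; `binom_cdf` is my helper). The claim $F(l-1;n,l/n)\le e^{-1/(2l)}$ follows from the lower-tail bound $\Pr[X\le(1-\delta)\mu]\le e^{-\delta^2\mu/2}$ with $\mu=l$ and $\delta=1/l$:

```python
from math import comb, exp

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), summed directly."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k + 1))

n = 1000
for l in (1, 2, 5, 10, 50):
    # F_l = P(fewer than l of the n balls land in the first l bins)
    F_l = binom_cdf(l - 1, n, l / n)
    assert F_l <= exp(-1 / (2 * l))
    print(l, F_l, exp(-1 / (2 * l)))
```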
Below is a log-log plot of the average of 100,000 instances for $n=2048$, as a function of $m$, with the function $e^{-\gamma/2}/\sqrt{m}$ also plotted for reference:
While the constants are off, the form of the function appears to be correct.
Below is a log-log plot for varying $n$, with each point being the average of 100,000 instances, as a function of $m$:
Finally, getting to the original question you wanted answered: since we know that $\Pr(E_m)\propto\frac{1}{\sqrt{m}}$, we have:
$$S = \sum_{m<n}\Pr(E_m) \propto \sum_{m<n}\frac{1}{\sqrt{m}} = \Theta\!\left(\sqrt{n}\right).$$
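As a quick check on this last step, the partial sums of $1/\sqrt{m}$ really do grow like $\sqrt{n}$, with the ratio tending to $2$:

```python
from math import sqrt

def partial_sum(n):
    """sum_{m=1}^{n} 1 / sqrt(m)"""
    return sum(1 / sqrt(m) for m in range(1, n + 1))

# The ratio partial_sum(n) / sqrt(n) tends to 2, so the partial
# sums are Theta(sqrt(n)), matching the claimed growth of S.
for n in (100, 10_000, 1_000_000):
    print(n, partial_sum(n) / sqrt(n))
```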
As numerical verification, below is a log-log plot of the sum, $S$, versus instance size, $n$. Each point represents the average of the sum over 100,000 instances. The function $x^{1/2}$ has been plotted for reference:
While I see no direct connection between the two, the tricks and final form of this problem have a lot of commonalities with the Birthday Problem as initially guessed at in the comments.
The answer is $\Theta(\sqrt{n})$.
First, let's compute $\Pr(E_{n-1})$.
Let's suppose we throw $n$ balls into $n$ bins, and look at the probability that a bin has exactly $k$ balls in it. This probability comes from the Poisson distribution: as $n$ goes to $\infty$, the probability that there are exactly $k$ balls in a given bin tends to $\frac{1}{e}\frac{1}{k!}$.
Now, let's look at a different way of distributing balls into bins. We throw a number of balls into each bin chosen from the Poisson distribution, and condition on the event that there are $n$ balls total. I claim that this gives exactly the same distribution as throwing $n$ balls into $n$ bins. Why? It is easy to see that the probability of having $k_j$ balls in the $j$th bin is proportional to $\prod_{j=1}^{n}\frac{1}{k_j!}$ in both distributions.
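This equivalence can be verified exactly for a tiny case. The sketch below (helper names mine) enumerates every occupancy vector for $n=4$ and compares the two distributions term by term; the conditioning divides the product of $e^{-1}/k_j!$ terms by $\Pr[\mathrm{Poisson}(n)=n]=e^{-n}n^n/n!$:

```python
from itertools import product
from math import factorial

n = 4

def p_multinomial(ks):
    """P(bin j holds ks[j] balls) when n balls are thrown u.a.r. into n bins."""
    w = factorial(n)
    for k in ks:
        w //= factorial(k)       # multinomial coefficient, built stepwise
    return w / n**n

def p_poisson_conditioned(ks):
    """Independent Poisson(1) counts per bin, conditioned on total = n:
    prod_j e^-1 / k_j!  divided by  P(Poisson(n) = n) = e^-n n^n / n!."""
    w = 1.0
    for k in ks:
        w /= factorial(k)
    return w * factorial(n) / n**n

total = 0.0
for ks in product(range(n + 1), repeat=n):
    if sum(ks) == n:
        assert abs(p_multinomial(ks) - p_poisson_conditioned(ks)) < 1e-12
        total += p_multinomial(ks)
assert abs(total - 1.0) < 1e-12   # sanity: the probabilities sum to 1
```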
So let's consider a random walk where at each step, you go from $t$ to $t+1-k$ with probability $\frac{1}{e}\frac{1}{k!}$. I claim that if you condition on the event that this random walk returns to $0$ after $n$ steps, the probability that the walk always stays above $0$ is the probability that the OP wants to calculate. Why? The height of this random walk after $s$ steps is $s$ minus the number of balls in the first $s$ bins.
If we had chosen a random walk with a probability of $\frac{1}{2}$ of going up or down $1$ on each step, this would be the classical ballot problem, for which the answer is $\frac{1}{2(n-1)}$. This is a variant of the ballot problem which has been studied (see this paper), and the answer is still $\Theta\!\left(\frac{1}{n}\right)$.
I don't know whether there is an easy way to compute the constant in the $\Theta\!\left(\frac{1}{n}\right)$ for this case.
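For what it's worth, $\Pr(E_{n-1})$ can be computed exactly for small $n$ with a dynamic program over prefix ball counts (my own construction, not taken from the paper), which at least lets one watch $n\cdot\Pr(E_{n-1})$ numerically:

```python
from fractions import Fraction
from math import factorial

def prob_E(n):
    """Exact Pr(E_{n-1}) for n balls thrown u.a.r. into n bins:
    the first i bins must hold fewer than i balls for every i <= n-1.
    g maps b -> sum over valid fillings (k_1,...,k_i) with total b of
    prod_j 1/k_j!, where validity means k_1+...+k_j <= j-1 for all j."""
    g = {0: Fraction(1)}
    for i in range(1, n):
        h = {}
        for b, w in g.items():
            for k in range(i - b):          # keep b + k <= i - 1
                h[b + k] = h.get(b + k, Fraction(0)) + w / factorial(k)
        g = h
    # the last bin receives the remaining n - b balls
    total = sum(w / factorial(n - b) for b, w in g.items())
    return total * factorial(n) / Fraction(n) ** n

for n in range(2, 12):
    p = prob_E(n)
    print(n, p, float(n * p))   # n * Pr(E_{n-1}) should stay bounded
```

For instance, $\Pr(E_1)=\frac{1}{4}$ for $n=2$ (both balls must land in the second bin), which matches the program's output.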
The same paper shows that when the random walk is conditioned to end at height $k$, the probability of always staying positive is $\Theta(k/n)$ as long as $k=O(\sqrt{n})$. This fact will let us estimate $\Pr(E_s)$ for any $s$.
I'm going to be a little handwavy for the rest of my answer, but standard probability techniques can be used to make this rigorous.
We know that as $n$ goes to $\infty$, this random walk converges to a Brownian bridge, i.e., Brownian motion conditioned to start and end at $0$. By general probability theorems, for $\epsilon n < s < (1-\epsilon)n$, the random walk is typically $\Theta(\sqrt{n})$ away from the $x$-axis. If it has height $t>0$ at step $s$, the probability that it has stayed above $0$ for the entire time before $s$ is $\Theta(t/s)$. Since $t$ is likely to be $\Theta(\sqrt{n})$ when $s=\Theta(n)$, we have $\Pr(E_s)\approx\Theta(1/\sqrt{n})$. Summing this over the $\Theta(n)$ values of $s$ recovers the answer $\Theta(\sqrt{n})$.
[Edit 2014-08-13: Thanks to a comment by Peter Shor, I have changed my estimate of the asymptotic growth rate of this series.]
My belief is that $\sum_{i<n}\Pr(E_i)$ grows as $\sqrt{n}$ as $n\to\infty$. I do not have a proof, but I think I have a convincing argument.
Let $B_i$ be a random variable that gives the number of balls in bin $i$. Let $B_{i,j}=\sum_{k=i}^{j}B_k$ be a random variable that gives the total number of balls in bins $i$ through $j$ inclusive.
You can now write $\Pr(E_i)=\sum_{b<j}\Pr(E_j\wedge B_{1,j}=b)\,\Pr(E_i\mid E_j\wedge B_{1,j}=b)$ for any $j<i$. To that end, let's introduce the functions $\pi$ and $g_i$.
We can write $\Pr(E_i)$ in terms of $g_i$:
Now, it's clear from the definition of $g_i$ that
where $h_i(n)$ is a polynomial in $n$ of degree $i-1$. This makes some intuitive sense too; at least $n-i+1$ balls will have to be put in one of the $(i+1)$th through $n$th bins (of which there are $n-i$).
Since we're only talking about $\Pr(E_i)$ when $n\to\infty$, only the leading coefficient of $h_i(n)$ is relevant; let's call this coefficient $a_i$. Then
How do we compute $a_i$? Well, this is where I'll do a little handwaving. If you work out the first few $E_i$, you'll see that a pattern emerges in the computation of this coefficient. You can write it as
Now, I wasn't able to derive a closed-form equivalent directly, but I computed the first 20 values of $\Pr(E_i)$:
Now, it turns out that
where $\operatorname{Pois}(i;\lambda)$ is the probability that a random variable $X$ has value $i$ when it's drawn from a Poisson distribution with mean $\lambda$. Thus we can write our sum as
Wolfram Alpha tells me this series diverges. Peter Shor points out in a comment that Stirling's approximation allows us to estimate $\Pr(E_i)$:
Let
Since
our series grows as $\int_1^n \phi(x)\,dx$ (see e.g. Theorem 2). That is,
Exhaustively checking the first few terms (by examining all $n^n$ cases) and a bit of lookup shows that the answer is https://oeis.org/A036276 divided by $n^n$. This implies that the answer is $\sim\frac{1}{2}\sqrt{\frac{\pi n}{2}}$.
More exactly, the answer is:
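For tiny $n$, the exhaustive check is easy to reproduce. This sketch enumerates all $n^n$ throws and averages the number of indices $\ell$ for which $E_\ell$ holds (I have not re-checked the OEIS match here):

```python
from fractions import Fraction
from itertools import product

def exact_sum(n):
    """S(n) = sum_{l<n} Pr(E_l), computed by enumerating all n^n throws.
    Because the events are nested, the number of l with E_l true equals
    the length of the longest prefix where bins 1..i hold < i balls."""
    total = 0
    for throw in product(range(n), repeat=n):    # throw[j] = bin of ball j
        bins = [0] * n
        for b in throw:
            bins[b] += 1
        prefix = 0
        for i in range(1, n):
            prefix += bins[i - 1]
            if prefix >= i:                      # E_i fails here
                total += i - 1
                break
        else:
            total += n - 1
    return Fraction(total, n**n)

for n in range(2, 7):
    s = exact_sum(n)
    print(n, s, float(s))
```

For example, $S(2)=\Pr(E_1)=\frac{1}{4}$ (both balls in the second bin), which the enumeration reproduces.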