BRANCHING BROWNIAN MOTIONS
SHEN JINGYUN
(Bachelor of Science, Shanghai Jiao Tong University)
A THESIS SUBMITTED
FOR THE DEGREE OF MASTER OF SCIENCE
DEPARTMENT OF MATHEMATICS
NATIONAL UNIVERSITY OF SINGAPORE
2012
DECLARATION
I hereby declare that the thesis is my original work and it has been written by me
in its entirety.
I have duly acknowledged all the sources of information which have been used
in the thesis.
This thesis has also not been submitted for any degree in any university previously.
___________________
Shen Jingyun
13 August 2012
Acknowledgements
First and foremost, I would like to show my deepest gratitude to my supervisor,
Dr. Sun Rongfeng, a respectable, responsible and resourceful scholar, who has
provided me with valuable guidance in every stage of the writing of this thesis.
Without his enlightening instruction, impressive kindness and patience, I could not
have completed my thesis. His keen and vigorous academic observation enlightens me
not only in this thesis but also in my future study.
I shall extend my thanks to all my teachers who have helped me to develop the
fundamental and essential academic competence.
Last but not least, I'd like to thank my family, my friends, and my boyfriend,
for their constant encouragement and support.
Table of Contents
Acknowledgements
Summary
List of Tables
List of Figures
Chapter 1 Introduction
1.1 The problem
1.2 The objective
Chapter 2 Bramson's main results
2.1 Results of Kolmogorov, Petrovsky, and Piscounov
2.2 Tightness of the maximal displacement
2.3 The second order term of the maximal displacement
Chapter 3 Other works on the maximal displacement of a branching Brownian motion
3.1 Roberts's result on the distribution of Mt
3.2 Roberts's result on the paths of Mt
Chapter 4 L.-P. Arguin, A. Bovier and N. Kistler's results on the extremal process of branching Brownian motion
4.1 Derivative martingale
4.2 Localization of paths
4.3 The genealogy of extremal particles
4.4 The thinning map
4.5 The limit process
Chapter 5 The work of E. Aidekon, J. Berestycki, E. Brunet and Z. Shi on the branching Brownian motion seen from its tip
Bibliography
Summary
Branching Brownian Motion (BBM) describes a system of particles where each
particle moves according to a Brownian motion and has an exponentially distributed
lifetime.
Branching Brownian motion has a long history. In 1937, Kolmogorov, Petrovsky, and Piscounov proved that the maximal displacement is √2t + o(t) as t → ∞, identifying the first-order term.
In 1977, Bramson proved the second-order expansion √2t − (3/(2√2)) log t + O(1) as t → ∞. He then developed an important relation between branching Brownian motion and the Fisher-KPP equation, based on the work of McKean.
Roberts provided another proof of Bramson's result concerning the median position of the extremal particle in branching Brownian motion, which is much shorter than Bramson's proof.
L.-P. Arguin, A. Bovier and N. Kistler worked on the extremal process of branching Brownian motion and addressed the genealogy of extremal particles. They proved that, in the limit of large time t, extremal particles descend with overwhelming probability from ancestors having split either within a distance of order one from time 0, or within a distance of order one from time t. The results suggest that the extremal process of branching Brownian motion in the limit of large times converges weakly to a cluster point process; the limiting process is a randomly shifted Poisson cluster process.
The work of E. Aidekon, J. Berestycki, E. Brunet and Z. Shi also gives a description of the limit object and a different proof of the convergence of the branching Brownian motion seen from its tip.
This thesis surveys the main results on branching Brownian motion to date. We describe how these results are obtained and mainly focus on the proof strategies.
List of Tables
Table 5.1 Comparison of results in Chapter 4 and Chapter 5
List of Figures
Fig 1.1 A branching Brownian motion in one dimension
Fig 4.1 The upper envelope
Fig 4.2 The entropic envelope and the lower envelope
Fig 4.3 Genealogies of extremal particles
Fig 4.4 Cluster-extrema
Fig 5.1 The points branching off from X1 recently
Fig 5.2 The event At(x, η)
Chapter 1 Introduction
1.1 The problem
Branching Brownian motion has a long history. A general theory of branching
Markov processes was developed in a series of three papers by Ikeda, Nagasawa and
Watanabe in 1968, 1969 [8,9,10].
Branching Brownian Motion (BBM) describes a system of particles where each particle moves according to a Brownian motion and has an exponentially distributed lifetime. It is a continuous-time Markov branching process with death rate 1, producing a random number of offspring at its time of death, where the individual particles undergo independent standard Brownian motions. It is constructed as follows: a single particle performs a standard Brownian motion X, with X(0) = 0, which it continues for an exponential holding time T independent of X, with ℙ[T > t] = e^{−t}. At time T, the particle splits, independently of X and T, into k (≥ 1) offspring with probability p_k, where ∑_{k=1}^∞ p_k = 1, ∑_{k=1}^∞ k p_k = 2, and K = ∑_{k=1}^∞ k(k − 1) p_k < ∞. These particles continue along independent Brownian paths starting at X(T), and are subject to the same splitting rule, with the effect that the resulting tree contains, after an elapsed time t > 0, N(t) particles located at X_1(t), …, X_{N(t)}(t), where N(t) is the random number of particles generated up to that time (note that 𝔼[N(t)] = e^t). We denote by 𝒩(t) = {X_k(t), 1 ≤ k ≤ N(t)} the collection of positions of particles in the branching Brownian motion at time t.
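To make the construction concrete, here is a minimal simulation sketch (not part of the thesis; it assumes binary branching, p_2 = 1, which satisfies ∑_k k p_k = 2, and uses an arbitrary horizon t = 5): each particle lives an Exp(1) lifetime, diffuses as a standard Brownian motion, and is replaced by two independent copies when it dies.

import numpy as np

rng = np.random.default_rng(0)

def bbm(t, x=0.0, s=0.0):
    """Positions {X_k(t)} of a binary branching Brownian motion started
    from a single particle at position x at time s."""
    lifetime = rng.exponential(1.0)                 # P[T > u] = e^{-u}
    if s + lifetime >= t:                           # the particle is still alive at time t
        return [x + rng.normal(0.0, np.sqrt(t - s))]
    y = x + rng.normal(0.0, np.sqrt(lifetime))      # position at the splitting time T
    # binary split: two independent copies continue from (s + lifetime, y)
    return bbm(t, y, s + lifetime) + bbm(t, y, s + lifetime)

positions = bbm(5.0)
print(len(positions), max(positions))               # N(t), with E[N(t)] = e^t, and M_t = max_k X_k(t)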
Fig 1.1 A branching Brownian motion in one dimension [14]
Define Mt = max1≤k≤N(t) Xk (t), and denote
u(t, x) = ℙ[Mt ≤ x],
the distribution function of the maximal displacement of the branching Brownian motion at time t.
For 0 < ε < 1, let mε (t) be chosen to satisfy u(t, mε (t)) = ℙ[Mt ≤ mε (t)] =
ε. The asymptotic behavior of u(t, x) will be studied.
1.2 The objective
Much current research in probability theory is concerned with branching processes, which are essential for modelling systems in which individuals reproduce. Branching Brownian motion is most easily described as a simple model of an evolving population.
A number of different ways of thinking about branching Brownian motions have
emerged over the last ten years, and the principal aim of this thesis is to describe them
in an accessible way. We will mainly focus on the proof strategies and key points
while the details of most proofs might be omitted, and the reader is referred to the
relevant papers.
Chapter 2 Bramson’s main results
In Bramson’s Ph.D. dissertation in 1977 [5], he shows that the position of any
fixed percentile of the maximal displacement of standard branching Brownian motion in one dimension is √2t − (3/(2√2)) log t + O(1) as t → ∞, where the second-order term
was previously unknown. He then developed an important relation between branching
Brownian motions and the Fisher-KPP equation based on the work of McKean [13].
2.1 Results of Kolmogorov, Petrovsky, and Piscounov
Theorem 2.1 [5, page 579-580]: u(t,x) solves the Kolmogorov-Petrovsky-Piscounov
(KPP) equation
u_t = (1/2) u_xx + f(u),
with initial condition
(2.1)    u(0, x) = 1 if x ≥ 0, and u(0, x) = 0 if x < 0,
where
f(u) = ∑_{k=1}^∞ p_k u^k − u,
and
∑_{k=1}^∞ p_k = 1, ∑_{k=1}^∞ k p_k = 2, with p_k ≥ 0.
Proof: If we stop the process at T ∧ t, where T is the time at which the initial particle
splits, then we obtain the following decomposition for u(t, x):
u(t, x) = ℙ[T > t]·ℙ[X(t) ≤ x] + ∫_0^t ℙ[T ∈ ds] ∫_{−∞}^∞ ℙ[X(s) ∈ x − dy] ∑_{k=1}^∞ p_k u^k(t − s, y)
= e^{−t} ℙ[X(t) ≤ x] + ∑_{k=1}^∞ p_k ∫_0^t e^{−s} ds ∫_{−∞}^∞ ℙ[X(s) ∈ x − dy] u^k(t − s, y)
= e^{−t} ℙ[X(t) ≤ x] + ∑_{k=1}^∞ p_k ∫_0^t e^{−(t−s)} ds ∫_{−∞}^∞ ℙ[X(t − s) ∈ x − dy] u^k(s, y).
The last equality follows from the substitution s ↦ t − s. This representation shows that u(t, x) is jointly continuous. Differentiating it with respect to t, with an interchange of limits, yields
∂u(t, x)/∂t = −e^{−t} ℙ[X(t) ≤ x] + e^{−t} ∂ℙ[X(t) ≤ x]/∂t + ∑_{k=1}^∞ p_k u^k(t, x)
− ∑_{k=1}^∞ p_k ∫_0^t e^{−(t−s)} ds ∫_{−∞}^∞ ℙ[X(t − s) ∈ x − dy] u^k(s, y)
+ ∑_{k=1}^∞ p_k ∫_0^t e^{−(t−s)} ds ∫_{−∞}^∞ (∂ℙ[X(t − s) ∈ x − dy]/∂t) u^k(s, y)

= −e^{−t} ℙ[X(t) ≤ x] + e^{−t} (1/2) ∂²ℙ[X(t) ≤ x]/∂x² + ∑_{k=1}^∞ p_k u^k(t, x)
− ∑_{k=1}^∞ p_k ∫_0^t e^{−(t−s)} ds ∫_{−∞}^∞ ℙ[X(t − s) ∈ x − dy] u^k(s, y)
+ ∑_{k=1}^∞ p_k ∫_0^t e^{−(t−s)} ds ∫_{−∞}^∞ (1/2)(∂²ℙ[X(t − s) ∈ x − dy]/∂x²) u^k(s, y)

= −(e^{−t} ℙ[X(t) ≤ x] + ∑_{k=1}^∞ p_k ∫_0^t e^{−(t−s)} ds ∫_{−∞}^∞ ℙ[X(t − s) ∈ x − dy] u^k(s, y)) + ∑_{k=1}^∞ p_k u^k(t, x)
+ (1/2) ∂²/∂x² (e^{−t} ℙ[X(t) ≤ x] + ∑_{k=1}^∞ p_k ∫_0^t e^{−(t−s)} ds ∫_{−∞}^∞ ℙ[X(t − s) ∈ x − dy] u^k(s, y))

= (1/2) ∂²u(t, x)/∂x² + ∑_{k=1}^∞ p_k u^k(t, x) − u(t, x).
Here the second equality holds because (1/2) ∂²/∂x² is the infinitesimal generator of Brownian motion, i.e. ∂ℙ[X(t) ≤ x]/∂t = (1/2) ∂²ℙ[X(t) ≤ x]/∂x².
Kolmogorov, Petrovsky, and Piscounov proved the first-order term of the maximal displacement [11].
Theorem 2.2 (Kolmogorov, Petrovsky, and Piscounov '37) [11]: For fixed ε ∈ (0,1),
m_ε(t) = √2t + o(t), as t → ∞.
Proof: To compute u(t, x) directly,
1 − u(t, x) = ℙ[max_k X_k(t) > x] ≤ 𝔼[#{k: X_k(t) > x}] = e^t ℙ[X(t) > x] = e^t ∫_x^∞ (e^{−y²/(2t)}/√(2πt)) dy,
and we can estimate that
1 − u(t, x + √2t) = (1 + o(1)) e^{−√2 x}/(2√π).
Therefore,
m_ε(t) = √2t + o(t), as t → ∞.
2.2 Tightness of the maximal displacement
In this section, Proposition 2.4 shows that u(t, x) is tight when properly
centered. And the main tool will be Lemma 2.3.
Lemma 2.3 [13, page 326-327]: For 0 < ε < 1, let mε (t) be chosen to satisfy
u(t, mε (t)) = ε, then
u(t, x + mε (t)) is {
increasing in t if x < 0,
decreasing in t if x > 0.
Proof: Fix t_0 > 0 and a > 0, and let v(t, x) = u(t + a, x + b) − u(t, x) with b = m_ε(t_0 + a) − m_ε(t_0). Then
∂v/∂t = (1/2) ∂²v/∂x² + kv
with
k = u(t + a, x + b) + u(t, x) − 1.
By (2.1),
v(0+, x) > 0 if x < 0, and v(0+, x) < 0 if x > 0,
and v(t_0, x_0) = 0 for x_0 = m_ε(t_0). It is to be proved that v(t_0, x) ≤ 0 for x > x_0; then v_x(t_0, x_0) ≤ 0, and the lemma will follow from that. This can be proved by contradiction. Suppose v(t_0, x_1) > 0 for some x_1 > x_0. Then (t_0, x_1) must be connected to {t = 0} × {x < 0} by a continuous curve C along which v > 0, by means of Kac's formula
v(t_0, x) = 𝔼[exp{∫_0^t k(t_0 − s, X(s)) ds} v(t_0 − t, X(t))],
where X is a standard Brownian motion starting at X(0) = x: fixing x = x_1 and looking backwards from t_0, the first root of v along the path defines a stopping time t ≤ t_0 at which the right-hand side vanishes, contradicting v(t_0, x_1) > 0, unless such a curve exists.
Now fix such a curve C and use the formula with x = x_0 and t the passage time to C; then the expectation is positive while the left-hand side vanishes.
So the only way to avoid the contradiction is to admit that v(t_0, x_1) > 0 cannot hold. Hence the proof is finished.
Proposition 2.4 [5, page 571-574]: The following convergence holds uniformly in x
u(t, x + mε (t)) → wε (x) as t → ∞,
where wε (x) is the unique distribution function which solves the ordinary
differential equation
0 = (1/2) w_xx + √2 w_x + f(w).
Proof: Lemma 2.3 implies that limt→∞ u(t, x + mε (t)) = wε (x) exists. According
to the decomposition for u(t, x) in Theorem 2.1, u(t, x) is continuous in x for t > 0.
Therefore, wε (x) is continuous, and hence convergence is uniform in x.
Next we show that w_ε(x) is indeed a distribution function. We prove by contradiction that w_ε(−∞) = 0; the proof that w_ε(∞) = 1 is analogous. Indeed, if w_ε(∞) = α < 1, then w_β(−∞) > 0, where β = (1 + α)/2.
The basic idea is to show that m_ε(t)/t → ∞ as t → ∞, which contradicts Theorem 2.2.
First stop the process at T ∧ 1; for t ≥ 1 we obtain
u(t, x) = e^{−1} ∫_{−∞}^∞ ℙ[X(1) ∈ −dy] u(t − 1, x + y) + ∫_0^1 e^{−s} ds ∫_{−∞}^∞ ℙ[X(s) ∈ −dy] ∑_{k=1}^∞ p_k u^k(t − s, x + y).
Choosing proper M and δ, after some calculation we have that
∫_{−∞}^∞ ℙ[X(1) ∈ −dy] u(t − 1, m_β(t − 1) + (1/2)M + y) ≤ β + 2δ,
and
∫_{−∞}^∞ ℙ[X(s) ∈ −dy] ∑_{k=1}^∞ p_k u^k(t − s, m_β(t − s) + (1/2)M + y) ≤ β − 2δ, for 0 ≤ s ≤ 1.
If we define
m̂_β(t) = inf_{0 ≤ s ≤ 1} m_β(t − s),
then it can be shown that
m̂_β(t) + (1/2)M < m̂_β(t + 1).
Repeating this argument, we obtain
lim_{t→∞} m̂_β(t)/t > (1/2)M.
Letting M → ∞, we conclude that
lim_{t→∞} m̂_β(t)/t = ∞.
So the proof is complete.
2.3 The second order term of the maximal displacement
Theorem 2.5 (Bramson’77) [5, page 574-576]: For fixed ε ∈ (0,1),
m_ε(t) = √2t − (3/(2√2)) log t + b_ε(t),
where bε (t) = O(1) as t → ∞.
Bramson’s proof of Theorem 2.5 is based on the following two propositions: the
upper and lower bound for the maximal displacement.
Proposition 2.6 [5, page 556]: For 0 ≤ y ≤ √t and t ≥ 2,
1 − u(t, √2t − (3/(2√2)) log t + y) ≤ γ(y + 1)² e^{−√2 y},
where γ is independent of t and y.
We can choose γ′ ≥ 1 large enough such that
γ(y + 1)2 e−√2y ≤ γ′ e−y
for all y ≥ 0.
Proposition 2.7 [5, page 568]: For 0 ≤ y ≤ √t and t ≥ 2,
1 − u(t, √2t − (3/(2√2)) log t + y) ≥ δ e^{−√2 y},
where δ > 0 is independent of t and y.
These two propositions together imply that
(2.2)    δ e^{−√2 y} ≤ 1 − u(t, √2t − (3/(2√2)) log t + y) ≤ γ′ e^{−y}
for 0 ≤ y ≤ √t and t ≥ 2.
Recall that u(t, m_ε(t)) = ε, and let y = m_ε(t) − (√2t − (3/(2√2)) log t). By inverting (2.2), we see that
(1/√2) log(δ/(1 − ε)) ≤ m_ε(t) − (√2t − (3/(2√2)) log t) ≤ log(γ′/(1 − ε))
for 1 − δ < ε < 1 − γ′ exp(−√t). As t → ∞, the conclusion holds for 1 − δ < ε < 1.
Note that if 0 < ε < ε_0 < 1, then m_ε(t) ≤ m_{ε_0}(t). As t → ∞,
ε_0 = u(t, m_{ε_0}(t)) = u(t, m_{ε_0}(t) − m_ε(t) + m_ε(t)) ↓ w_ε(m_{ε_0}(t) − m_ε(t)),
so
0 ≤ m_{ε_0}(t) − m_ε(t) ≤ w_ε^{−1}(ε_0) < ∞.
Therefore, if we choose ε_0 > 1 − δ, it follows that, for t ≥ log²(γ′/(1 − ε_0)),
(1/√2) log(δ/(1 − ε_0)) − w_ε^{−1}(ε_0) ≤ m_{ε_0}(t) − w_ε^{−1}(ε_0) − (√2t − (3/(2√2)) log t)
≤ m_ε(t) − (√2t − (3/(2√2)) log t) ≤ log(γ′/(1 − ε)) ≤ log(γ′/(1 − ε_0)),
which implies Theorem 2.5 for 0 < ε ≤ 1 − δ as well.
To prove Proposition 2.6, the probability space Ω is first partitioned into small sets; conditioned on each set, the expected number of particles above a certain curve is estimated. Combining these estimates then yields the upper bound.
Proof of Proposition 2.6 [5, page 559-562]: Let e_i = t − exp{2^{3/2}(i_0 − i)/3}, i ∈ ℕ, where i_0 is chosen so that e_0 ≥ (5/6)t > e_{−1}.
Set
S = inf{s: 0 ≤ s ≤ t, max_{1≤k≤N(t)} X_k(s) − (s/t)(√2t − (3/(2√2)) log t) ≥ L(s) + y},
where L(s) is defined as
L(s) = 0 if 0 ≤ s ≤ 1 or t − 1 ≤ s ≤ t,
L(s) = (3/(2√2)) log s if 1 ≤ s ≤ t/2,
L(s) = (3/(2√2)) log(t − s) if t/2 ≤ s ≤ t − 1.
S is a stopping time. Define the events A_i such that
A_0 = {S ≤ e_0},
A_i = {e_{i−1} < S ≤ e_i} for i = 1, …, i_0,
A_{i_0+1} = {t − 1 < S ≤ t},
A_{i_0+2} = {S = ∞}.
Obviously, ⋃_{i=0}^{i_0+2} A_i = Ω.
Bramson managed to prove that
ℙ[A_0] ≤ c_1 e^{−√2 y},
ℙ[A_i] ≤ c_2 (y + 1)² e^{−√2 y} exp{−2^{1/2}(i_0 + 1 − i)/3},
ℙ[A_{i_0+1}] ≤ c_3 (y + 1)² e^{−√2 y}.
Therefore,
1 − u(t, √2t − (3/(2√2)) log t + y) ≤ ℙ[S < ∞]
≤ (c_1 + c_2 ∑_{i=1}^{i_0} exp{−2^{1/2} i/3} + c_3)(y + 1)² e^{−√2 y} ≤ γ(y + 1)² e^{−√2 y},
where
γ = c_1 + c_2 ∑_{i=1}^∞ exp{−2^{1/2} i/3} + c_3.
Proof of Proposition 2.7 [5, page 568-570]: Let h be the number of branches X_k(t) whose paths satisfy
√2t − (3/(2√2)) log t + y < X_k(t) ≤ √2t − (3/(2√2)) log t + y + 1,
and
X_k(s) < (s/t)(√2t − (3/(2√2)) log t + y + 1) + 1 − L(s) for 0 ≤ s ≤ t,
where L(s) is defined in Proposition 2.6.
Bramson proved in his paper that
𝔼[h] ≥ d_1 e^{−√2 y},
and
𝔼[h²] ≤ d_2 𝔼[h].
Therefore,
1 − u(t, √2t − (3/(2√2)) log t + y) ≥ ℙ[h > 0] ≥ (𝔼[h])²/𝔼[h²] ≥ 𝔼[h]/d_2 ≥ (d_1/d_2) e^{−√2 y} = δ e^{−√2 y},
where δ = d_1/d_2.
Chapter 3 Other works on the maximal displacement of a
branching Brownian motion
3.1 Roberts’s result on the distribution of 𝐌𝐭
Roberts [15] provided another proof of the following result which is much shorter
than Bramson’s proof.
Define
u(t, x) = ℙ[Mt ≤ x].
Recall that u converges to a travelling wave: there exist functions m(t) and w(x) such that
u(t, m(t) + x) → w(x)
uniformly in x as t → ∞.
Theorem 3.1 [15, page 2]: The centering term m(t) satisfies
m(t) = √2t − (3/(2√2)) log t + O(1)
as t → ∞.
As Bramson did, Roberts gives lower and upper bounds for m(t); see the following two propositions.
Proposition 3.2 [15, page 9]: There exist t 0 and a constant δ ∈ (0, ∞) not
depending on t or y such that
ℙ(∃ 1 ≤ k ≤ N(t): X_k(t) ≥ √2t − (3/(2√2)) log t + y) ≥ δ e^{−√2 y}
for all y ∈ [0, √γt] and t ≥ t 0 .
Proposition 3.3 [15, page 10]: There exist t 0 and a constant A ∈ (0, ∞) not
depending on t or y such that
ℙ(∃ 1 ≤ k ≤ N(t): X_k(t) ≥ √2t − (3/(2√2)) log t + y) ≤ A(y + 2)⁴ e^{−√2 y}
for all y ∈ [0, √t] and t ≥ t 0.
Having shown that
δ e^{−√2 y} ≤ 1 − u(t, 2^{1/2} t − 3·2^{−3/2} log t + y) ≤ A(y + 2)⁴ e^{−√2 y},
it can be deduced that m(t) = √2t − (3/(2√2)) log t + O(1).
Bramson proved this by estimating G(t) directly, where G(t) is the number of particles near m(t) at time t. In Roberts's work, he estimated H(t), the number of particles near m(t) that have remained below (s/t) m(t) for all times s < t.
We know that if B_t, t ≥ 0, is a Brownian motion in ℝ³, then its modulus |B_t|, t ≥ 0, is called a Bessel-3 process. Roberts observed that particles behaving in this way look like Bessel-3 processes below the line (s/t) m(t), 0 < s < t. Thanks to this observation, Roberts obtained the upper and lower bounds via estimates on Bessel processes using a change of measure, which is based on the many-to-one lemma and the many-to-two lemma. In this way, Roberts's proofs are much shorter and simpler than Bramson's.
Lemma 3.4 (Simple many-to-one lemma) [6, page 1]: For measurable f,
𝔼[∑_{k=1}^{N(t)} f(X_k(t))] = e^t 𝔼[f(B_t)].
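As a quick numerical sanity check of this identity (a Monte Carlo sketch under the same binary-branching assumption as the simulation in Chapter 1; the test function f(x) = x² and the value t = 2 are arbitrary choices), note that e^t 𝔼[f(B_t)] = e^t t for this f:

import numpy as np

rng = np.random.default_rng(1)

def bbm(t, x=0.0, s=0.0):
    # binary branching Brownian motion, as in the sketch of Chapter 1
    tau = rng.exponential(1.0)
    if s + tau >= t:
        return [x + rng.normal(0.0, np.sqrt(t - s))]
    y = x + rng.normal(0.0, np.sqrt(tau))
    return bbm(t, y, s + tau) + bbm(t, y, s + tau)

t, runs = 2.0, 20000
lhs = np.mean([sum(x**2 for x in bbm(t)) for _ in range(runs)])   # estimates E[sum_k f(X_k(t))]
rhs = np.exp(t) * t                                               # e^t * E[B_t^2] = e^t * t
print(lhs, rhs)                                                   # the two values should be close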
Lemma 3.5 (Simple many-to-two lemma) [6, page 2]: For measurable f and g,
𝔼[∑_{i,j=1}^{N(t)} f(X_i(t)) g(X_j(t))] = e^{2t} 𝔼[e^{T∧t} f(B_t) g(B′_t)],
where
B′_t = B_t if t < T, and B′_t = B_T + W_{t−T} if t ≥ T,
with T exponentially distributed with parameter 2, and W_t, t ≥ 0, a standard Brownian motion independent of B_t.
Lemma 3.6 (General many-to-one lemma) [6, page 7]: Let f_t(·) be a measurable function of t and of the path of a particle up to time t. Fix α > 0 and β ∈ ℝ and define
ζ(t) = (α + βt − ξ_t) e^{βξ_t − β²t/2} 𝟏{ξ_s < α + βs ∀ s ≤ t};
here ξ denotes the path of a distinguished particle (the spine) under a changed measure ℚ, and ζ is the corresponding change-of-measure weight (see [6, page 7] for the precise statement). For two spines ξ¹ and ξ², fix α > 0 and β ∈ ℝ and define ζ^i(t) = (α + βt − ξ^i_t) e^{βξ^i_t − β²t/2} 𝟏{ξ^i_s < α + βs ∀ s ≤ t}; the spines coincide up to a time T, after which (ξ¹_t − ξ¹_T, t > T) and (ξ²_t − ξ²_T, t > T) are independent given T and ξ¹_T.
Proof of Proposition 3.2 [6, page 8-10]: For t > 0 set
β = √2 − (3/(2√2)) (log t)/t + y/t.
Define
H_α(t) = #{1 ≤ k ≤ N(t): X_k(s) ≤ α + βs ∀ s ≤ t, βt − 1 ≤ X_k(t) ≤ βt},
and
B_i = {βt − 1 ≤ ξ^i_t ≤ βt} for i = 1, 2.
For large t,
𝔼[H_α(t)] = e^t ℚ[𝟏_{B_1}/ζ¹(t)] ∼ α² e^{−√2 y} up to constants,
so we can estimate that 𝔼[H_1(t)] ≥ c_1 e^{−√2 y} for some c_1 > 0.
For the second moment of H_1(t),
𝔼[H_1(t)²] = e^t ℚ[𝟏_{B_1}/ζ¹(t)] + ∫_0^t 2 e^{2t−s} ℚ[(ζ¹(s)/(ζ¹(t) ζ²(t))) 𝟏_{B_1∩B_2} | T = s] ds
≤ 𝔼[H_1(t)] + 2 e^{2t − β²t + 2β} ∫_0^t e^{−s} ℚ[(βs + 1 − ξ¹_s) e^{βξ¹_s − β²s/2} 𝟏_{B_1∩B_2} | T = s] ds.
After some calculation we get
𝔼[H_1(t)²] ≤ c_2 𝔼[H_1(t)].
We deduce that
ℙ[H_1(t) ≠ 0] ≥ (𝔼[H_1(t)])²/𝔼[H_1(t)²] ≥ δ e^{−√2 y},
as required.
Proof of Proposition 3.3 [6, page 10-11]: For the upper bound on m(t), the first moment method for H_α(t) is combined with an estimate of the probability that a particle ever moves too far from the origin.
Define
Γ = #{1 ≤ k ≤ N(t): X_k(s) ≤ α + βs + 1 ∀ s ≤ t, βt − 1 ≤ X_k(t) ≤ βt + α}.
Note that Γ is very similar to H_α, and we can estimate that
𝔼[Γ] ≤ c_3 (α + 1)⁴ e^{−√2 y}
for some constant c_3 not depending on t, α or y.
Define also
B = {∃ 1 ≤ k ≤ N(t), s ≤ t: X_k(s) > βs + α}.
We claim that for α ≥ y ≥ 0,
𝔼[Γ|B] ≥ c_4
for some constant c_4 > 0 also not depending on t, α or y.
Then for α ≥ y ≥ 0,
ℙ[B] ≤ 𝔼[Γ] ℙ[B]/𝔼[Γ 𝟏_B] = 𝔼[Γ]/𝔼[Γ|B] ≤ (c_3/c_4)(α + 1)⁴ e^{−√2 y}.
Applying Markov's inequality, we have
ℙ(∃ 1 ≤ k ≤ N(t): X_k(t) ≥ √2t − (3/(2√2)) log t + y) ≤ ℙ[Γ ≥ 1] + ℙ[B] ≤ A(α + 1)⁴ e^{−√2 y},
as required.
3.2 Roberts’s result on the paths of 𝐌𝐭
Theorem 3.8 [6, page 2-3]: The maximum M_t satisfies
(M_t − √2t)/log t → −3/(2√2) in probability,
and
liminf_{t→∞} (M_t − √2t)/log t = −3/(2√2) almost surely,
limsup_{t→∞} (M_t − √2t)/log t = −1/(2√2) almost surely.
This result is the analogue of a result for discrete-time branching random walks given by Hu and Shi [7]. It says that most of the time the extremal particle stays near m(t); occasionally, however, a particle travels much further.
The proof is made up of four parts: the upper and lower bounds in the two statements. Actually, only the lower bound in the second statement requires a substantial amount of work; the other three are simple applications of Proposition 3.3 and the Borel-Cantelli lemma. The reader can refer to [6, page 12-16].
The proof of
limsup_{t→∞} (M_t − √2t)/log t ≥ −1/(2√2) almost surely
is similar to the proof of Proposition 3.2.
Let
β_t = √2 − (1/(2√2)) (log t)/t
and
V(t) = {1 ≤ k ≤ N(t): X_k(s) ≤ β_t s + 1 ∀ s ≤ t, β_t t − 1 ≤ X_k(t) ≤ β_t t}.
Define
I_n = ∫_n^{2n} 𝟏{V(t) ≠ ∅} dt.
Hu and Shi noticed that although the probability that a particle has position much bigger than m(t) at a fixed time t is very small, the probability that there exists a time s between n and 2n such that a particle has position much bigger than m(s) at time s is actually quite large.
Applying the many-to-one lemma and the many-to-two lemma, we get
𝔼[I_n] = ∫_n^{2n} ℙ[V(t) ≠ ∅] dt ≥ c′,
and
𝔼[I_n²] = 2 ∫_n^{2n} ∫_n^t ℙ[V(s) ≠ ∅, V(t) ≠ ∅] ds dt ≤ c″.
So
ℙ[I_n > 0] ≥ ℙ[I_n ≥ 𝔼[I_n]/2] ≥ 𝔼[I_n]²/(4𝔼[I_n²]) ≥ c > 0.
When n is large, at time 2δ log n there are at least n^δ particles, all of which have position at least −2√2 δ log n. By the above, the probability that none of these has a descendant that goes above √2 s − (1/(2√2)) log s − 2√2 δ log n for any s between 2δ log n + n and 2δ log n + 2n is no larger than
(1 − c)^{n^δ}.
The result follows by the Borel-Cantelli lemma since ∑_n (1 − c)^{n^δ} < ∞.
According to Theorem 3.8, we can claim that if there are two branching Brownian motions starting at X(0) = 0, their maximal displacements will alternate in the leading position infinitely often. In other words, every particle born in a branching Brownian motion has a descendant particle in the "lead" at some future time.
Theorem 3.9: Suppose two independent branching Brownian motions (X_1^A(t), ⋯, X_{N^A}^A(t)) and (X_1^B(t), ⋯, X_{N^B}^B(t)) start at X(0) = 0. Then with probability 1 there exist random times t_n ↑ ∞ such that
M^A(t_n) < M^B(t_n)
for all n, where
M^A(t) = max_{1≤i≤N^A(t)} X_i^A(t) and M^B(t) = max_{1≤i≤N^B(t)} X_i^B(t).
For the proof, the reader can refer to [12, page 1057-1058].
Chapter 4 L.-P Arguin, A. Bovier and N. Kistler’s results on
the extremal process of branching Brownian motion
L.-P Arguin, A. Bovier and N. Kistler [2,3,4] addressed the genealogy of
extremal particles. They proved that in the limit of large time t, extremal particles
descend with overwhelming probability from ancestors having split either within a
distance of order one from time 0, or within a distance of order one from time t. The
results suggest that the extremal process of branching Brownian motion in the limit of
large times converges weakly to a cluster point process. The limiting process is a
randomly shifted Poisson cluster process.
4.1 Derivative martingale
The so-called derivative martingale is denoted by
Z(t) ≡ ∑_{k=1}^{N(t)} (√2t − X_k(t)) e^{√2 X_k(t) − 2t}.
To elaborate, first we need to prove that Y(t) = e^{−t} ∑_{k=1}^{N(t)} e^{βX_k(t) − β²t/2} is a positive martingale. Without loss of generality, we assume that N(t) = 1. Recall that e^{βX(t) − β²t/2} is a martingale, where X(t) is a Brownian motion. For any s > 0,
𝔼[Y(t + s)|ℱ_t] = 𝔼[e^{−t−s} ∑_{k=1}^{N(t+s)} e^{βX_k(t+s) − β²(t+s)/2} | ℱ_t]
= e^{−t−s} 𝔼[N(t + s)|ℱ_t] e^{βX(t) − β²t/2} = e^{−t−s} e^s e^{βX(t) − β²t/2} = e^{−t} e^{βX(t) − β²t/2}
= Y(t).
Taking the derivative of Y(t) with respect to β and setting β = √2, we obtain Z(t) after negating so as to make it positive.
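To spell this step out, differentiating term by term gives
∂Y(t)/∂β = e^{−t} ∑_{k=1}^{N(t)} (X_k(t) − βt) e^{βX_k(t) − β²t/2},
which at β = √2 equals ∑_{k=1}^{N(t)} (X_k(t) − √2t) e^{√2 X_k(t) − 2t}; its negation is exactly Z(t) above.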
Lalley and Sellke [12] proved that Z(t) converges almost surely to a strictly positive random variable Z, and established the integral representation
(4.1)    w(x) = 𝔼[e^{−CZe^{−√2 x}}],
for some C > 0, where w(x) is the unique distribution function which solves the o.d.e.
(1/2) w_xx + √2 w_x + w² − w = 0.
4.2 Localization of paths
Arguin et al’s approach towards the genealogy of particles at the edge of
branching Brownian motion is based on characterizing, up to a certain level of
precision, the paths of extremal particles. As a first step towards a characterization, in
this section we conclude that such paths cannot fluctuate too wildly in the upward
direction.
First introduce some notation. For α, γ > 0, set
f_{t,γ}(s) ≝ s^γ for 0 ≤ s ≤ t/2, and f_{t,γ}(s) ≝ (t − s)^γ for t/2 ≤ s ≤ t.
The upper envelope up to time t is defined as
U_{t,γ}(s) ≝ (s/t) m(t) + f_{t,γ}(s),
and the entropic envelope is defined as
E_{t,α}(s) ≝ (s/t) m(t) − f_{t,α}(s).
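For concreteness, the envelope functions can be coded directly (a small sketch, not from [2]; the centering term m(t) is taken from Chapter 2):

import numpy as np

def m(t):
    # centering term from Chapter 2: sqrt(2) t - 3/(2 sqrt(2)) log t
    return np.sqrt(2.0) * t - 3.0 / (2.0 * np.sqrt(2.0)) * np.log(t)

def f(t, s, gamma):
    # f_{t,gamma}(s): s^gamma on [0, t/2], (t - s)^gamma on [t/2, t]
    return np.where(s <= t / 2.0, s**gamma, (t - s)**gamma)

def upper_envelope(t, s, gamma):     # U_{t,gamma}(s)
    return (s / t) * m(t) + f(t, s, gamma)

def entropic_envelope(t, s, alpha):  # E_{t,alpha}(s)
    return (s / t) * m(t) - f(t, s, alpha)

s_grid = np.linspace(0.0, 100.0, 11)
print(upper_envelope(100.0, s_grid, 0.4))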
Theorem 4.1 (Upper Envelope) [2, page 6]: Let 0 < γ < 1/2, and y ∈ ℝ given. For
any ε > 0, there exists ru = ru (γ, y, ε) such that for r ≥ ru and for any t > 3r,
ℙ [∃1 ≤ k ≤ N(t): Xk (s) > y + Ut,γ (s), for some s ∈ [r, t − r]] < ε.
This theorem says that the vast majority of particles lies under the upper
envelope on the time interval [r, t − r], see Figure 4.1.
Fig 4.1 The upper envelope [2, page 6]
Theorem 4.2 (Entropic Repulsion) [2, page 8]: Let 0 < α < 1/2, and let D ⊂ ℝ be a compact set, D̄ ≝ sup{x ∈ D}. For any ε > 0, there exists r_e = r_e(α, D, ε) such that for r ≥ r_e and for any t > 3r,
ℙ[∃ 1 ≤ k ≤ N(t): X_k(t) ∈ m(t) + D, but ∃ s ∈ [r, t − r]: X_k(s) ≥ D̄ + E_{t,α}(s)] < ε.
Theorem 4.3 (Lower Envelope) [2, page 9]: Let 1/2 < β < 1, and let D ⊂ ℝ be a compact set, D̄ ≝ sup{x ∈ D}. For any ε > 0, there exists r_l = r_l(β, D, ε) such that for r ≥ r_l and for any t > 3r,
ℙ[∃ 1 ≤ k ≤ N(t): X_k(t) ∈ m(t) + D, but ∃ s ∈ [r, t − r]: X_k(s) ≤ D̄ + E_{t,β}(s)] < ε.
These two theorems together suggest that the genealogy of particles in m(t) + D at time t lies between the entropic envelope and the lower envelope on the time interval [r, t − r]; see Figure 4.2.
Fig 4.2 The entropic envelope and the lower envelope [2, page 9]
The proofs of Theorems 4.1-4.3 are somewhat technical, relying mainly on the Markov property and properties of the Brownian bridge. The reader is referred to [2, page 17-25].
4.3 The genealogy of extremal particles
Define the genealogical distance
Q_ij(t) ≡ inf{s ≤ t: X_i(s) ≠ X_j(s)},
the time at which the lines of descent of X_i(t) and X_j(t) first branch apart from their common ancestor.
Theorem 4.4 (The genealogy of extremal particles) [3, page 3]: For any compact D ⊂ ℝ,
lim_{r_d, r_g → ∞} sup_{t > 3max{r_d, r_g}} ℙ[∃ i, j ∈ Σ_t(D): Q_ij(t) ∈ (r_d, t − r_g)] = 0,
where Σt (D) ≝ {k ∈ Σt : Xk (t) ∈ m(t) + D}, Σt = {1, ⋯ , N(t)}.
This theorem indicates that extremal particles descend from common ancestors which either branch off "very early" (in the interval (0, r_d)) or "very late" (in the interval (t − r_g, t)). The next theorems will show that, in the middle of the time interval, the extremal particles stay within a small area.
Fig 4.3 Genealogies of extremal particles [3, page 4]
The proof of Theorem 4.4 is based on the localization of the paths.
Proof of Theorem 4.4: Consider the expected number of pairs of particles of branching Brownian motion whose paths satisfy some conditions, say Ξ_{D,t}:
𝔼[#{(i, j), i ≠ j: X_i, X_j ∈ Ξ_{D,t}, Q_t(i, j) ∈ (r_d, t − r_g)}]
= K e^t ∫_{r_d}^{t−r_g} e^{t−s} ds ∫_{−∞}^∞ dμ_s(y) ℙ[X ∈ Ξ_{D,t} | X(s) = y] ℙ[X ∈ Ξ_{D,t}^{s,t−r′} | X(s) = y],
where μ_s is the Gaussian measure of variance s.
We will show the existence of r_0(D, ε) and t_0(D, ε) such that for r > r_0 and t > max{t_0, 3r_d, 3r_g}, the right-hand side is smaller than ε (provided we take r_0 > r′).
The idea is to bound the last two factors. Only the basic steps are listed here; the reader is referred to [3, page 8-11].
ℙ[X ∈ Ξ_{D,t}^{s,t−r′} | X(s) = y]
≤ ℙ[𝔷_{t−s}(s′) ≤ (1 − s′/(t − s)) Z_1 + (s′/(t − s)) Z_2, ∀ 0 ≤ s′ ≤ t − s − r′] ℙ[X(t − s) ≥ m(t) − y + D̲],
ℙ[X ∈ Ξ_{D,t} | X(s) = y] ≤ ℙ[𝔷_t(s) ≤ D̄, ∀ r′ ≤ s ≤ t − r′] ℙ[X(t) ≥ m(t) + D̲],
where 𝔷_t(s) = X(s) − (s/t) X(t), 0 ≤ s ≤ t, is a Brownian bridge, and D̲ ≤ D̄ ∈ ℝ are chosen such that D ⊆ [D̲, D̄] for the compact set D ⊂ ℝ.
It can be proved that
ℙ[X ∈ Ξ_{D,t}^{s,t−r′} | X(s) = y] ≤ κ √(r′) e^{−(t−s)} e^{(3/2)((t−s)/t) log t} f_{t,β}(s) e^{−√2 f_{t,α}(s)} / (t − s)^{3/2},
and
ℙ[X ∈ Ξ_{D,t} | X(s) = y] ≤ κ r′ e^{−t},
for some κ > 0.
Putting these together, we obtain that the expectation is less than ε, as expected. This implies Theorem 4.4 by Markov's inequality and Theorems 4.2-4.3.
4.4 The thinning map
Define
Q̄(t) = {Q̄_ij(t)}_{i,j≤N(t)} ≡ {t^{−1} Q_ij(t)}_{i,j≤N(t)},
where Q_ij(t) is defined in Section 4.3.
Define also the random measure
Ξ(t) = ∑_{k=1}^{N(t)} δ_{X_k(t)−m(t)}.
The pair (Ξ(t), Q̄(t)) admits the following natural thinning.
For any q > 0, the q-thinning of the process (Ξ(t), Q̄(t)), denoted by Ξ^{(q)}(t), is defined recursively as follows:
i_1 = 1, …, i_k = min{j > i_{k−1}: Q̄_{i_l j}(t) < q ∀ l ≤ k − 1};
and
Ξ^{(q)}(t) ≡ (Ξ^{(q)}_k(t), k ∈ ℕ) ≡ (x̄_{i_k}(t), k ∈ ℕ).
Then
(Ξ(t), Q̄(t)) ⟼ Ξ^{(q)}(t)
is called the thinning map.
To explain: each particle alive at time (1 − q)t gives rise to a cluster of particles at time t, and the rule picks one representative from each cluster, namely the one of smallest index. Therefore, the cluster-extrema have no common ancestors in the time interval [(1 − q)t, t]. See Figure 4.4.
Fig 4.4 Cluster-extrema [3, page 5]
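As a small illustration of the thinning map (a sketch, not from [3]; the particle positions are assumed already listed in decreasing order, so that the first index plays the role of i_1 = 1, and the toy matrix Q̄ below is made up for the example):

def q_thinning(positions, Q_bar, q):
    """Select cluster representatives: positions[i] and positions[j] belong to the
    same cluster when the normalized branching time Q_bar[i][j] >= q.
    positions are assumed sorted in decreasing order (index 0 = rightmost particle)."""
    selected = [0]                       # i_1 = 1 in the thesis's 1-based notation
    for j in range(1, len(positions)):
        # keep j only if it branched off every previously selected particle before time q*t
        if all(Q_bar[i][j] < q for i in selected):
            selected.append(j)
    return [positions[i] for i in selected]

# toy example with 4 particles: particles 0,1 share a recent ancestor, as do 2,3
positions = [2.0, 1.5, 0.3, -0.1]
Q_bar = [[1.0, 0.9, 0.1, 0.1],
         [0.9, 1.0, 0.1, 0.1],
         [0.1, 0.1, 1.0, 0.8],
         [0.1, 0.1, 0.8, 1.0]]
print(q_thinning(positions, Q_bar, q=0.5))   # -> [2.0, 0.3]: one representative per cluster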
The main result states that all such thinned processes converge to the same
randomly shifted Poisson Point Process (PPP) with exponential density.
Theorem 4.5 [3, page 6]: The process Ξ^{(q)}(t) converges in law for any 0 < q < 1 to the same limit Ξ^0. In particular,
lim_{r_g→∞} lim_{t→∞} Ξ^{(1 − r_g/t)}(t) = Ξ^0.
Moreover, conditional on Z, the limit of the derivative martingale,
Ξ^0 = PPP(CZ e^{−√2 x} dx),
where C > 0 is the constant appearing in (4.1).
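For intuition about the limit object, here is a small sketch (not from [3]; the values of C, Z and the level a are arbitrary) of how the atoms of PPP(CZe^{−√2x} dx) lying above a level a can be sampled: the total intensity above a is CZe^{−√2a}/√2, so the number of atoms is Poisson with that mean, and given the count the atoms are i.i.d. with the normalized density √2 e^{−√2(x−a)} on (a, ∞).

import numpy as np

rng = np.random.default_rng(2)

def sample_ppp_atoms(C, Z, a, rng=rng):
    """Atoms above level a of a Poisson point process with intensity C*Z*exp(-sqrt(2)*x) dx."""
    mass = C * Z * np.exp(-np.sqrt(2) * a) / np.sqrt(2)   # integral of the intensity over (a, inf)
    n = rng.poisson(mass)
    # given n, atoms are iid with density sqrt(2)*exp(-sqrt(2)*(x - a)) on (a, inf)
    return a + rng.exponential(1.0 / np.sqrt(2), size=n)

print(np.sort(sample_ppp_atoms(C=1.0, Z=0.7, a=-3.0))[::-1])   # atoms, largest first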
Proof: Let ϕ: ℝ → ℝ_+ be measurable, with compact support. We need to show that
lim_{t→∞} 𝔼[exp{−∫ ϕ(x) Ξ^{(q)}(t)(dx)}] = 𝔼[exp{−CZ ∫ (1 − e^{−ϕ(x)}) e^{−√2 x} dx}],
for any r_d/t ≤ q ≤ 1 − r_g/t. This can be obtained by means of the following Lemma 4.6 and Proposition 4.7.
Lemma 4.6 [3, page 7]: For any y ∈ ℝ and ε > 0, there exists r_0(D, ε) such that for r_d, r_g > r_0 and t > 3max{r_d, r_g}, on a set of probability 1 − ε,
Ξ^{(q)}(t)|_{(y,∞)} = Ξ^{(r_d/t)}(t)|_{(y,∞)},
for any r_d/t ≤ q ≤ 1 − r_g/t.
Proposition 4.7 [3, page 8]: With C > 0 and Z the limiting derivative martingale, conditionally on Z,
lim_{r_d→∞} lim_{t→∞} Ξ^{(r_d/t)}(t) = PPP(CZ e^{−√2 x} dx).
To conclude, branching happens at the very beginning, after which particles
continue along independent paths, and start branching again only towards the end.
The branching at the beginning is responsible for the appearance of the derivative
martingale in the large time limit, while the branching towards the end creates the
clusters.
4.5 The limit process
The limiting point process of BBM is constructed as follows. Let Z be the
limiting derivative martingale. Conditionally on Z, consider the Poisson point process
(PPP) of density CZ√2 e^{−√2 x} dx:
P_Z ≡ ∑_{i∈ℕ} δ_{p_i} ≡ PPP(CZ√2 e^{−√2 x} dx).
Let {X_k(t)}_{1≤k≤N(t)} be a branching Brownian motion of length t. Consider the point process of the gaps ∑_k δ_{X_k(t)−M_t} conditioned on the event {M_t − √2t > 0}. Write 𝒟 = ∑_j δ_{Δ_j} for a point process with this law and consider independent, identically distributed copies (𝒟^{(i)})_{i∈ℕ}. The following result states that the extremal process of branching Brownian motion, as a point process, converges as follows:
Theorem 4.8 [4, page 5]: The family of point processes defined in Section 4.4
converges in distribution to a point process, Ξ, given by
Ξ ≡ lim_{t→∞} Ξ_t = ∑_{i,j} δ_{p_i + Δ_j^{(i)}}.
The key ingredient in the proof of Theorem 4.8 is an identification of the
extremal process of branching Brownian motion with an auxiliary process constructed
from a Poisson process, with an explicit density of points in the tail.
Next introduce Theorem 4.9 which plays an important role in proving Theorem
4.8.
The auxiliary point process of interest is the superposition of the iid BBMs with drift, shifted by (1/√2) log Z + η_i:
Π_t ≡ ∑_{i,k} δ_{(1/√2) log Z + η_i + X_k^{(i)}(t) − √2 t}.
Theorem 4.9 (The auxiliary point process) [4, page 6-7]: Let Ξt be the extremal
process of BBM. Then
lim_{t→∞} Ξ_t = lim_{t→∞} Π_t.
Proof of Theorem 4.8 [4, page 28-29]: The Laplace transform of the extremal process of branching Brownian motion is defined as
ψ_t(ϕ) ≡ 𝔼[exp{−∫ ϕ(y) Ξ_t(dy)}],
for ϕ ∈ 𝒞_C(ℝ) nonnegative.
It suffices to show that, for ϕ: ℝ → ℝ_+ continuous with compact support, the Laplace functional of the extremal process of branching Brownian motion satisfies
lim_{t→∞} ψ_t(ϕ) = 𝔼[exp{−CZ ∫_ℝ 𝔼[1 − e^{−∫ ϕ(y+z) 𝒟(dz)}] √2 e^{−√2 y} dy}].
By Theorem 4.9,
lim_{t→∞} ψ_t(ϕ) = lim_{t→∞} 𝔼[exp{−∑_{i,k} ϕ((1/√2) log Z + η_i + X_k^{(i)}(t) − √2 t)}]
= 𝔼[exp{−Z lim_{t→∞} ∫_{−∞}^0 𝔼[1 − exp{−∫ ϕ(y + z) Ξ_t(dz)}] √(2/π) (−y e^{−√2 y}) dy}],
where Ξ_t ≡ ∑_k δ_{X_k(t)−√2 t} is defined for convenience.
Arguin et al. [4, page 28-29] show that the right-hand side converges to
𝔼[exp{−CZ ∫_ℝ 𝔼[1 − e^{−∫ ϕ(y+z) 𝒟(dz)}] √2 e^{−√2 y} dy}],
which proves Theorem 4.8.
Chapter 5 The work of E.Aidekon, J. Berestycki, E. Brunet
and Z. Shi on the branching Brownian motion seen from its
tip
The work of E.Aidekon, J. Berestycki, E. Brunet and Z. Shi [1] also gives a
description of the limit object and a different proof of the convergence of the
branching Brownian motion seen from its tip.
Instead of the previous works, where the authors usually assume a Brownian motion with variance 1 and no drift, Aidekon et al. assume in their work a Brownian motion with drift 2 and variance 2, while the exponential lifetime with parameter 1 remains the same.
First, a derivative martingale is again defined:
Z(t) ≡ ∑_{i=1}^{N(t)} X_i(t) e^{−X_i(t)}.
Note that 𝔼[Z(t)] = 0.
It can be shown that
Z ≔ lim_{t→∞} Z(t)
exists, is finite and strictly positive with probability 1.
Consider the point process of the particles seen from the rightmost:
𝒩̄(t) ≔ 𝒩(t) − m(t) − log(CZ) = {X_i(t) − m(t) − log(CZ), 1 ≤ i ≤ N(t)},
where C is the constant appearing in (4.1).
Theorem 5.1 [1, page 4]: As t → ∞ the pair {𝒩̄(t), Z(t)} converges jointly in distribution to {ℒ, Z}, where ℒ and Z are independent and ℒ is obtained as follows:
(i) Define 𝒫 a Poisson point process on ℝ, with intensity measure e^x dx;
(ii) For each atom x of 𝒫, attach a point process x + 𝒥^{(x)}, where the 𝒥^{(x)} are independent copies of a certain point process 𝒥;
(iii) ℒ is then the superposition of all the point processes x + 𝒥^{(x)}, i.e., ℒ ≔ {x + y: x ∈ 𝒫, y ∈ 𝒥^{(x)}}.
For each s ≤ t, let X_{1,t}(s) be the position at time s of the ancestor of X_1(t), i.e., s ↦ X_{1,t}(s) is the path followed by the rightmost particle at time t. Define
Y_t(s) ≔ X_{1,t}(t − s) − X_1(t), s ∈ [0, t],
the time-reversed path back from the final position X_1(t).
For each t > 0 and for each path X ≔ (X(s), s ∈ [0, t]) that goes from the ancestor to one particle in 𝒩(t), write τ_X^i for the successive splitting times of branching along X (enumerated backwards), 𝒩_{t,X}^{(i)} for the set of all particles at time t which are descended from the particle that branched off X at time τ_X^i, relative to the final position X(t), and τ_{X,j}(t) for the time at which the particle X_j(t) branched off the path of X during [0, t]; then we have
𝒩_{t,X}^{(i)} ≔ {X_j(t) − X(t): τ_{X,j}(t) = τ_X^i}.
Fig 5.1 The points branching off from X1 recently
Then define
𝒥(t, ζ) ≔ ⋃_{i: τ_i^{X_1(t)} > t−ζ} 𝒩_{t,X_1(t)}^{(i)},
the set of particles at time t which have branched off the path X_{1,t} after time t − ζ.
The key result in proving Theorem 5.1 is the following theorem.
Theorem 5.2 [1, page 7-8]: The following convergence holds jointly in distribution:
lim_{ζ→∞} lim_{t→∞} {(Y_t(s), s ∈ [0, t]); 𝒥(t, ζ); X_1(t) − m(t)} = {(Y(s), s ≥ 0); 𝒥; W},
where the random variable W is independent of the pair ((Y(s), s ≥ 0); 𝒥).
Here, (Y, 𝒥) is the limit of the path s ↦ X_{1,t}(t − s) − X_1(t) and of the points that have recently branched off from X_{1,t}. The construction is similar to the thinning map in Chapter 4. The reader can compare Theorem 5.3 to Theorem 4.5.
Next an explicit construction of the decoration point process 𝒥 will be given.
Let B ≔ (B_t, t ≥ 0) be a standard Brownian motion and R ≔ (R_t, t ≥ 0) be a three-dimensional Bessel process started from R_0 = 0 and independent of B. Define T_b ≔ inf{t ≥ 0: B_t = b}. For b > 0, the process Γ^{(b)} is defined as follows:
Γ_s^{(b)} ≔ B_s if 0 ≤ s ≤ T_b, and Γ_s^{(b)} ≔ b − R_{s−T_b} if s ≥ T_b.
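A minimal way to simulate Γ^{(b)} on a time grid (an illustrative sketch, not from [1]; the step size and horizon are arbitrary): run a standard Brownian motion until it first reaches b, then continue with b minus an independent Bessel-3 process started from 0, realized as the modulus of a three-dimensional Brownian motion as noted in Section 3.1.

import numpy as np

def gamma_path(b, dt=1e-3, horizon=10.0, rng=np.random.default_rng(1)):
    """Approximate sample of Gamma^(b): Brownian motion until T_b = inf{t: B_t = b},
    then b - R_{s - T_b}, where R is a Bessel(3) process started at 0."""
    n = int(horizon / dt)
    path = np.empty(n + 1)
    path[0], hit = 0.0, False
    w3 = np.zeros(3)                      # 3d Brownian motion whose modulus gives the Bessel-3 part
    for i in range(n):
        if not hit:
            path[i + 1] = path[i] + np.sqrt(dt) * rng.standard_normal()
            if path[i + 1] >= b:          # (approximate) hitting time T_b on the grid
                path[i + 1], hit = b, True
        else:
            w3 += np.sqrt(dt) * rng.standard_normal(3)
            path[i + 1] = b - np.linalg.norm(w3)   # b - R_{s - T_b}
    return path

print(gamma_path(0.5)[::2000])   # a few values of one sample path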
The law of the backward path Y is described as follows:
ℙ[Y ∈ A] = (1/f(b)) 𝔼[e^{−2∫_0^∞ G_v(σΓ_v^{(b)}) dv} 𝟏{−σΓ^{(b)} ∈ A}],
where b ∈ (0, ∞) is a random variable whose density is given by
ℙ[σb ∈ dx] = (f(x)/c_1) dx,
where
f(x) = 𝔼[e^{−2∫_0^∞ G_v(σΓ_v^{(x)}) dv}]
and
c_1 = ∫_0^∞ 𝔼[e^{−2∫_0^∞ G_v(σΓ_v^{(a)}) dv}] da.
Now, conditionally on the path Y, let π be a Poisson point process on [0, ∞) with density 2(1 − G_t(−Y(t))) dt = 2(1 − ℙ_{Y(t)}(X_1(t) < 0)) dt, where G_t(x) ≔ ℙ_0[X_1(t) ≤ x] = ℙ_x[X_1(t) ≤ 0]. For each point t ∈ π, start an independent branching Brownian motion (X^{Y(t)}(u), u ≥ 0) at position Y(t), conditioned on min X(t) > 0. Then 𝒥 is defined as
𝒥 ≔ ⋃_{t∈π} X^{Y(t)}(t).
Now a good event A_t(x, η), which happens with high probability, is defined. Fix a constant η > 0. For t ≥ 1 and x > 0,
A_t(x, η) ≔ E_1(x, η) ∩ E_2(x, η) ∩ E_3(x, η)
where
E_1(x, η) ≔ {∀ k s.t. |X_k(t) − m(t)| < η: min_{s∈[0,t]} X_{k,t}(s) ≥ −x, min_{s∈[t/2,t]} X_{k,t}(s) ≥ m(t) − x},
E_2(x, η) ≔ {∀ k s.t. |X_k(t) − m(t)| < η: ∀ s ∈ [x, t/2], X_{k,t}(s) ≥ s^{1/3}},
E_3(x, η) ≔ {∀ k s.t. |X_k(t) − m(t)| < η: ∀ s ∈ [x, t/2], X_{k,t}(t − s) − X_k(t) ∈ [s^{1/3}, s^{2/3}]}.
E_1(x, η) is the event that all the paths s ↦ X_{k,t}(s) ending within distance η of m(t) avoid the hashed region on the left. E_2(x, η) is the event that those same paths stay above s^{1/3} between x and t/2. E_3(x, η) is the event that those paths stay between X_k(t) + (t − s)^{1/3} and X_k(t) + (t − s)^{2/3} between t/2 and t − x. See Figure 5.2.
Fig 5.2 The event At (x, η) [1, page 10]
Theorem 5.3 [1, page 9]: Let η > 0. For any ε > 0, there exists x > 0 large enough
such that ℙ[At (x, η)] ≥ 1 − ε for t large enough.
Two further propositions are needed. Proposition 5.4 explains the appearance of the point process, and Proposition 5.5 shows that particles sampled near X_1(t) either have a very recent common ancestor or have branched at the very beginning of the process.
Fix k ≥ 1 and consider ℋk the set of all particles which are the first in their
line of descent to hit the spatial position k. Define Hk ≔ #ℋk .
For each u ∈ ℋ_k, write X_1^u(t) for the maximal position at time t of the particles which are descendants of u, and define the finite point process
𝒫*_{k,t} ≔ (X_1^u(t) − m(t) − log(CZ_k), u ∈ ℋ_k),
where
Z_k ≔ k e^{−k} H_k.
Proposition 5.4 [1, page 11]: The following convergences hold in distribution:
lim_{t→∞} 𝒫*_{k,t} = 𝒫*_{k,∞} ≔ ∑_{u∈ℋ_k} δ_{k + W(u) − log(CZ_k)},
where the W(u) are independent copies of the random variable W, and
lim_{k→∞} {𝒫*_{k,∞}, Z_k} = {𝒫, Z},
where 𝒫 is independent of the random variable Z.
This proposition explains the appearance of the point process 𝒫, which can be
compared to Theorem 4.9.
Proposition 5.5 [1, page 11-12]: Fix η > 0 and define an arbitrary deterministic function k ⟼ ζ(k) which increases to infinity. We have
lim_{k→∞} lim_{t→∞} ℙ[ℬ_{η,k,t}] = 0.
Here the event ℬ_{η,k,t} is defined as
ℬ_{η,k,t} ≔ {∃ i, j ∈ J_η(t): τ_{i,j}(t) ∈ [ζ(k), t − ζ(k)]},
where J_η(t) ≔ {k ∈ ℕ: |X_k(t) − m_t| ≤ η} is the set of indices which correspond to particles near m_t at time t, and τ_{i,j}(t) is the time at which the particles X_i(t) and X_j(t) branched from one another (the same concept as Q_ij(t) in Section 4.3).
This proposition means that there is no branching at intermediate times. The reader can compare it with Theorem 4.4.
Proof of Theorem 5.1 [1, page 10-13]: It can be obtained that
Z = lim_{k→∞} Z_k = lim_{k→∞} k e^{−k} H_k
almost surely. Let 𝒩̄_t^{(k)} be the extremal process seen from the position m_t + log(CZ_k):
𝒩̄_t^{(k)} ≔ 𝒩_t − m(t) − log(CZ_k).
Since
𝒩̄_t^{(k)} ∩ (−η, η) = ⋃_{u∈ℋ_k} (X_{1,t}^u − m(t) − log(CZ_k) + 𝒥_{t,ζ(k)}^{(u)}) ∩ (−η, η),
and we know that, in distribution,
lim_{k→∞} lim_{t→∞} ⋃_{u∈ℋ_k} (X_{1,t}^u − m(t) − log(CZ_k) + 𝒥_{t,ζ(k)}^{(u)}) = lim_{k→∞} ⋃_{x∈𝒫*_{k,t}} (x + 𝒥_{ζ(k)}^{(x)}) = 𝒥,
where the 𝒥_{ζ(k)}^{(x)} are independent copies of 𝒥_{ζ(k)},
we conclude by Proposition 5.4 that, in distribution,
lim_{k→∞} lim_{t→∞} 𝒩̄_t^{(k)} ∩ (−η, η) = 𝒥 ∩ (−η, η).
Hence,
lim_{k→∞} lim_{t→∞} 𝒩̄_t^{(k)} = 𝒥
in distribution.
Since 𝒩̄_t is obtained from 𝒩̄_t^{(k)} by the shift log(CZ) − log(CZ_k), which goes to 0, we have in distribution
lim_{t→∞} 𝒩̄_t = 𝒥,
which leads to the content of Theorem 5.1.
Let us now summarize and compare the results of Chapter 4 and Chapter 5; in each row below, the first expression is from E. Aidekon, J. Berestycki, E. Brunet and Z. Shi, and the second from L.-P. Arguin, A. Bovier and N. Kistler.

Derivative martingale: Z(t) ≡ ∑_{i=1}^{N(t)} X_i(t) e^{−X_i(t)} versus Z(t) ≡ ∑_{k=1}^{N(t)} (√2t − X_k(t)) e^{√2 X_k(t) − 2t}.
Density of the PPP: e^x dx versus CZ√2 e^{−√2 x} dx.
Auxiliary point process: ∑_{u∈ℋ_k} δ_{k + W(u) − log(CZ_k)} versus ∑_{i,k} δ_{(1/√2) log Z + η_i + X_k^{(i)}(t) − √2 t}.
Extremal process: 𝒩_t − m(t) − log(CZ_k) versus ∑_{k=1}^{N(t)} δ_{X_k(t) − m(t)}.

Table 5.1 Comparison of results in Chapter 4 and Chapter 5
These results all show that a certain process, obtained by a correlation-dependent thinning of the extremal particles, converges to a random shift of a Poisson point process with exponential density. The differences between the results are mainly due to the two different normalizations of branching Brownian motion assumed in the two works. Therefore, although the two works give alternative descriptions of the limit object and different proofs of the convergence, they are the same in essence.
Bibliography
[1] Aidekon, E., Berestycki, J., Brunet, E., and Shi, Z. The branching Brownian motion seen from its tip. arXiv:1104.3738v2 [math.PR], 4 May 2011.
[2] Arguin, L.-P., Bovier, A., and Kistler, N. The Genealogy of Extremal Particles of Branching Brownian Motion. arXiv:1008.4386v2 [math.PR], 27 Jun 2010.
[3] Arguin, L.-P., Bovier, A., and Kistler, N. Poissonian Statistics in the Extremal Process of Branching Brownian Motion. arXiv:1010.2376v2 [math.PR], 17 Nov 2010.
[4] Arguin, L.-P., Bovier, A., and Kistler, N. The Extremal Process of Branching Brownian Motion. arXiv:1103.2322v1 [math.PR], 11 Mar 2011.
[5] Bramson, M. D. Maximal Displacement of Branching Brownian Motion. Communications on Pure and Applied Mathematics, Volume 31, Issue 5, 531-581, September 1978.
[6] Harris, S. C. and Roberts, M. I. The many-to-few lemma and multiple spines. arXiv:1106.4761v2 [math.PR], 30 Jun 2011.
[7] Hu, Y. and Shi, Z. Minimal position and critical martingale convergence in branching random walks, and directed polymers on disordered trees. Ann. Probab., 37(2), 742-789, 2009.
[8] Ikeda, N., Nagasawa, M., and Watanabe, S. Branching Markov processes I. J. Math. Kyoto Univ., Volume 8, Number 2, 233-278, 1968.
[9] Ikeda, N., Nagasawa, M., and Watanabe, S. Branching Markov processes II. J. Math. Kyoto Univ., Volume 8, Number 3, 365-410, 1968.
[10] Ikeda, N., Nagasawa, M., and Watanabe, S. Branching Markov processes III. J. Math. Kyoto Univ., Volume 9, Number 1, 95-160, 1969.
[11] Kolmogorov, A. N., Petrovsky, I. G., and Piskunov, N. S. Etude de l'équation de la diffusion avec croissance de la quantité de matière et son application à un problème biologique. Moscou Universitet Bull. Math. 1, 1-25, 1937.
[12] Lalley, S. P. and Sellke, T. A conditional limit theorem for the frontier of a branching Brownian motion. The Annals of Probability, Vol. 15, No. 3, pp. 1052-1061, 1987.
[13] McKean, H. P. Application of Brownian motion to the equation of Kolmogorov-Petrovskii-Piskunov. Communications on Pure and Applied Mathematics, Volume 28, Issue 3, pages 323-331, May 1975.
[14] Roberts, M. I. http://people.bath.ac.uk/mir20/research.html. Matt Roberts' website.
[15] Roberts, M. I. A simple path to asymptotics for the frontier of a branching Brownian motion. arXiv:1106.4771v2 [math.PR], 9 Jul 2011.