Curve Shortening and its Application

Mohamed Faizal
(B.Science (Hons), NUS)

A thesis submitted for the degree of Master of Science

Supervisor: Associate Professor Xu Xingwang

Department of Mathematics
National University of Singapore
2006-2007
Acknowledgements
First and foremost, I would like to thank my mummy for being my support in undertaking this postgraduate course and for being with me even when I was being so unreasonable. To my sister and brother-in-law, thank you for always inviting me over for dinner.

To my amazing supervisor, Associate Professor Xu Xingwang: he showed me truly what it means to do research in mathematics. Sir, I would like to thank you for guiding me and for always being patient with me, even when I really did not understand some ideas. Sir, the way you always say "It's okay" whenever I face a problem amazes me.
I would also like to thank Jiaxuan for being by my side throughout my Master’s
years and for bringing me laughter and the strength to carry on. Thank you for
making me smile when my days have no more cheer. Remember this phrase?
“Things may be tough now but I am sure that the fruits will be good.”
To Ghazali, for helping me with technical issues that I seem to frequently face
during my years in NUS.
Now for all my dear friends who have been with me all these years. Firstly
to my University friends, like Adeline, Eugene, Joan, Kelly, Liyun and Weiliang.
Thanks for all the dinners where we laughed and joked about anything under the
sun.
To Kayjin, Meng Huat and Xiong Dan for helping me whenever I found myself
in dire straits with Mathematics. To my old buddy, Terence. Thank you for
consistently bugging me to continue running those long runs with you. To my
younger “sisters” Weini, Han Ping and Xing En, never give up trying!
To my dear brothers from FCBC, like Eugene, Joshua, John, Vincent, Kelvin
and Harold. Thank you for listening to me grumble about my life.
Finally to Him who is Almighty. Thank You for always guiding me in my life
and always giving me light during my darkest times.
Contents

Acknowledgements
Summary
Author's Contribution

1 Basics in PDEs, Differential Geometry and Robust Statistics
  1.1 Partial Differential Equations
  1.2 Differential Geometry
  1.3 Robust Statistics

2 Geometric Curves
  2.1 Curve evolution and Level set representation of curve evolution
  2.2 Curve shortening
  2.3 Geometric Heat Flow and Preservation

3 Anisotropic Diffusion
  3.1 Anisotropic Diffusion
  3.2 A Robust Statistical View of Anisotropic Diffusion

Bibliography

A Appendix

Index
Summary
This thesis primarily deals with the curve evolution and shortening process and its applications to image processing.

In Chapter 2, there will be an in-depth discussion of the mathematical framework of curve evolution. Curve evolution follows a partial differential equation under which the shape of a closed curve changes over time. In the second section of this chapter, the concept of curve shortening will be introduced. The last section will link the curve shortening process with non-linear heat diffusion; in that section there will be a brief discussion of the preservation of the area and perimeter of a closed curve.

The last chapter of this thesis gives an application of the curve shortening process called anisotropic diffusion. The first section covers the basic theory of anisotropic diffusion, with examples that describe how it behaves. The last section of this chapter explains why anisotropic diffusion works and how certain types work better than others.
Author’s Contribution
The author expanded on the proofs of the various lemmas and theorems found in Chapter 2. The proof of Lemma 2.2.1 was extended from the original paper, with all details shown in full. In Section 2.3, the proofs were given in detail.

In Chapter 3, the author independently tested the various functions c(x) using Matlab programming. Furthermore, the author independently came up with a new function c(x) with the required properties, verified that the properties were met, and finally ran tests using the new function.
List of Figures

2.1  Graphical representation of the inequality and the secant line.
2.2  Reflection of $C_1$ through the origin.
3.1  Image with Gaussian filter of variance = 1.
3.2  Image with Gaussian filter of variance = 25.
3.3  Image with Gaussian filter of variance = 100.
3.4  Notation of discretisation for anisotropic diffusion.
3.5  The image $I_0$ and its edge representation.
3.6  After applying anisotropic diffusion using $c(x) = e^{-x^2/k^2}$.
3.7  Enlarged edge images.
3.8  The difference after applying anisotropic diffusion using $c(x) = \frac{1}{1 + x^2/k^2}$.
3.9  The difference after applying anisotropic diffusion using the Tukey Biweight.
3.10 Enlarged edge images.
3.11 The difference after applying anisotropic diffusion using $c(x)$ defined in (3.9).
3.12 The differences after applying anisotropic diffusion with Tukey Biweight.
3.13 Local neighbourhood of pixels at a boundary.
3.14 $c(x) = e^{-x^2/k^2}$.
3.15 $\psi(x) = x\,e^{-x^2/k^2}$.
3.16 $\rho(x) = \frac{k^2}{2}\left(1 - e^{-x^2/k^2}\right)$.
3.17 $c(x) = \frac{1}{1 + x^2/(2\sigma^2)}$.
3.18 $\psi(x) = \frac{2x}{2 + x^2/\sigma^2}$.
3.19 $\rho(x) = \sigma^2 \log\left(1 + \frac{x^2}{2\sigma^2}\right)$.
3.20 $c(x)$.
3.21 $\psi(x)$.
3.22 $\rho(x)$.
3.23 $c(x) = |\operatorname{erfc}(|x|)\cos(x)|$.
3.24 $\psi(x) = x\,|\operatorname{erfc}(|x|)\cos(x)|$.
3.25 $c(x)$ in a restricted domain of $\left[0, \frac{3\pi}{4}\right]$.
3.26 The edge representation after applying anisotropic diffusion with different $c(x)$ for $k = 100$.
3.27 The edge representation after applying anisotropic diffusion with different $c(x)$ for $k = 1000$.
Chapter 1

Basics in PDEs, Differential Geometry and Robust Statistics

In this chapter, we will give some basic results on partial differential equations, differential geometry and robust statistics. The results in this chapter will be stated without proof and are taken from [Strauss], [Hampel & Stahel] and [Nasraoui].
1.1 Partial Differential Equations

We first define the solution of the heat equation.

Definition 1.1.1. A function $u(x,t)$, with $(x,t) \in \mathbb{R}^2 \times \mathbb{R}^+$, is a solution of the heat equation if for some $c > 0$, $u(x,t)$ satisfies
$$\frac{\partial u}{\partial t} = c\,\Delta u(x,t).$$

Secondly, we define Gaussian filters.
Definition 1.1.2. A Gaussian filter is a filter such that for any $x \in \mathbb{R}^2$,
$$G(x,t) = \frac{e^{-\frac{|x|^2}{2\sigma^2 t}}}{4\pi\sigma^2 t},$$
where $\sigma$, a constant, is the standard deviation of the normal distribution.

Note that the Gaussian filter satisfies the heat equation.
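As a minimal illustrative sketch of this connection (assuming a grayscale image held in a Matlab array im; the kernel half-width and standard deviation below are illustrative choices, not from the text), Gaussian filtering can be carried out by convolving with a normalised Gaussian kernel, which corresponds to running the heat equation up to some time:

% Smooth an image by convolution with a Gaussian kernel; by the remark
% above, this corresponds to diffusing the image with the heat equation.
sigma = 2; w = ceil(3*sigma);            % illustrative parameter choices
[X, Y] = meshgrid(-w:w, -w:w);
G = exp(-(X.^2 + Y.^2)/(2*sigma^2));
G = G / sum(G(:));                       % normalise so intensities are preserved
smoothed = conv2(double(im), G, 'same');

Larger values of sigma correspond to diffusing for a longer time, which is what Figures 3.1-3.3 later illustrate with increasing variance.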
Theorem 1.1.3 (Green's Theorem). Over a simply connected region $D$ with boundary $\partial D$,
$$\iint_D (g_x - f_y)\,dx\,dy = \oint_{\partial D} f(x,y)\,dx + g(x,y)\,dy,$$
where $f$ and $g$ are two functions of $(x,y)$.
In Chapter 3, we will use a numerical scheme derived from a standard numerical scheme for solving differential equations, called the gradient descent method.

Theorem 1.1.4 (Gradient Descent Method). If a vector-valued function $F(x)$ is defined and differentiable in a neighbourhood of a point $a$, then $F(x)$ increases fastest if one goes from $a$ in the direction of the gradient of $F$ at $a$, i.e., $\nabla F(a)$. Thus it follows that if
$$b = a + \gamma\,\nabla F(a)$$
for $\gamma > 0$ small enough, then $F(a) \le F(b)$. The numerical scheme therefore starts at an initial point $x_0$ and considers the sequence $x_0, x_1, x_2, \ldots$ given by
$$x_{n+1} = x_n + \gamma_n\,\nabla F(x_n), \qquad n \ge 0.$$
The sequence $x_n$ will then converge to a local maximum. (As stated, this is the ascent form of the method; to minimise, one moves in the direction of $-\nabla F$ instead.)
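As a small hedged sketch of this iteration in Matlab (the objective and the step size gamma are illustrative assumptions, not from the text), consider maximising the concave function $F(x) = -(x-3)^2$, whose gradient is $-2(x-3)$:

% Gradient ascent iteration x_{n+1} = x_n + gamma * grad F(x_n).
gradF = @(x) -2*(x - 3);      % gradient of the illustrative F(x) = -(x-3)^2
x = 0; gamma = 0.1;
for n = 1:100
    x = x + gamma * gradF(x); % climbs towards the maximiser x = 3
end

After the loop, x is close to 3, the local (here global) maximum of $F$.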
Now we will give results such as the Blaschke Selection Theorem and the Hausdorff metric.

Definition 1.1.5. Let $S$ be a non-empty convex subset of $\mathbb{R}^2$. Then for any given $\delta > 0$, the parallel body $S_\delta$ is defined as
$$S_\delta := \bigcup_{s \in S} K(s, \delta),$$
where $K(s,\delta) = \{x : d(x,s) \le \delta\}$ and $d(\cdot,\cdot)$ is the ordinary Euclidean plane distance.

Definition 1.1.6. Let $S$ and $R$ be non-empty compact convex subsets of $\mathbb{R}^2$. Then the Hausdorff distance between $S$ and $R$ is defined as
$$D(S,R) := \inf\{\delta : S \subset R_\delta \text{ and } R \subset S_\delta\}.$$
We now list some standard definitions that will be needed later.

Definition 1.1.7.

1. A sequence $\{S_i\}$ of compact convex subsets of $\mathbb{R}^2$ is said to converge (in the Hausdorff metric) to the set $S$ if
$$\lim_{i \to \infty} D(S, S_i) = 0.$$

2. Let $\mathcal{C}$ be the collection of non-empty compact subsets of $\mathbb{R}^2$. A subcollection $\mathcal{M}$ of $\mathcal{C}$ is uniformly bounded if there exists a disk $k$ in $\mathbb{R}^2$ such that $m \subset k$ for every $m \in \mathcal{M}$.

3. A collection of subsets $\{S_t\}$ is decreasing if $S_t \subseteq S_\tau$ for all $t \ge \tau$. In the case of curve evolution, the collection of laminae $H(t)$ associated with the curves $C(t)$ is called decreasing if $H(t) \subseteq H(\tau)$ for all $t \ge \tau$.

We now state the well-known Blaschke Theorem.

Theorem 1.1.8 (Blaschke Theorem). Let $\mathcal{M}$ be a uniformly bounded infinite subcollection of $\mathcal{C}$. Then $\mathcal{M}$ contains a sequence that converges to a member of $\mathcal{C}$.
1.2 Differential Geometry

In this section we will cover some basics of planar differential geometry.

Definition 1.2.1. A parameterised planar curve is a map
$$C : I \to \mathbb{R}^2,$$
where $I = [a,b] \subset \mathbb{R}$. Thus for each value $p \in I$, we obtain a point $C(p)$ on the planar curve.

Definition 1.2.2. We define the classical Euclidean length as $\|\cdot\| = \sqrt{\langle \cdot,\cdot \rangle}$, where $\langle \cdot,\cdot \rangle$ is the usual inner product.
There are two important objects when we talk about differential geometry. The first is the concept of arc length.

Definition 1.2.3. Given any $p \in I$, we define the arc length of a parameterised planar curve from a point $p_0$ as
$$s(p) = \int_{p_0}^{p} \sqrt{\left(\frac{dx}{dp}\right)^2 + \left(\frac{dy}{dp}\right)^2}\,dp.$$

We now define the second important concept in differential geometry, the curvature of a planar curve.

Definition 1.2.4. Let $C : I \to \mathbb{R}^2$ be a planar curve parameterised by arc length $s$. We define
$$\kappa(s) = \left\| \frac{d^2 C}{ds^2} \right\|$$
as the curvature of $C$ at $s$.
If we reparameterise the curve using arc length, we get the relationship
$$\frac{dC}{ds} = \frac{dC}{dp}\,\frac{dp}{ds},$$
which leads to
$$\frac{ds}{dp} = \sqrt{\left(\frac{dx}{dp}\right)^2 + \left(\frac{dy}{dp}\right)^2}.$$
Now that we have discussed the two main concepts of planar differential geometry, it is natural to discuss the derivatives of the curve $C(s)$.

Definition 1.2.5. We define the unit tangent $T$ of a curve $C(s)$ by
$$\frac{dC}{ds} = T.$$

Definition 1.2.6. We define the unit normal $N$ of a curve $C(s)$ by
$$\frac{d^2 C}{ds^2} = \kappa(s)\,N.$$

We thus obtain the following Frenet equations:
$$\frac{dT}{ds} = \kappa(s)\,N, \qquad \frac{dN}{ds} = -\kappa(s)\,T.$$

Note that we can consider the inner products involving the tangent $C_s$ and the second derivative $C_{ss}$. These are given as
$$\langle C_s, C_s \rangle = 1 \quad \text{and} \quad \langle C_s, C_{ss} \rangle = 0.$$
1.3 Robust Statistics

In this section, we will study some aspects of robust statistics and its key ideas. But first we loosely define what robust statistics is.

Robust statistics is the study of statistics under deviations from idealised assumptions, encompassing in particular the rejection of outliers.

We give the aims of robust statistics:

• To describe the structure best fitting the bulk of the data.

• To identify deviating data points (outliers) or deviating substructures for further treatment, if desired.

• To identify and give warning about highly influential data points.

We call gross deviations in statistical data gross errors. Gross errors are errors which occur occasionally but have a large impact on the data. They are the most frequent reason for outliers, i.e., data which are far away from the bulk of the data.
There are a few approaches to robust statistics, and we will discuss those based on influence functions.

We begin our theory of robust statistics by first defining linear regression models.

Definition 1.3.1. A linear regression model has the form
$$y_i = \beta_0 + \beta_1 x_{i1} + \ldots + \beta_k x_{ik} + \epsilon_i, \qquad i = 1, 2, \ldots, n,$$
where we have $k$ variables observed on $n$ cases, the $\beta_j$ for $j = 0, \ldots, k$ are the regression coefficients that measure the average effect on the response of a unit change in $x_{ij}$, and $\epsilon_i \sim N(0, \sigma)$.
Note that the $\beta_j$ are unknown and need to be estimated. Classically, the elements of $\beta$ are estimated by the method of least squares, i.e., by minimising
$$\sum_{i=1}^{n} (y_i - \beta_0 - \beta_1 x_{i1} - \ldots - \beta_k x_{ik})^2.$$

However, the least squares estimate is not robust against outliers, so alternative methods were proposed. One is to choose $\beta_0, \ldots, \beta_k$ to minimise
$$\sum_{i=1}^{n} |y_i - \beta_0 - \beta_1 x_{i1} - \ldots - \beta_k x_{ik}|.$$

A general approach is to choose $\beta_0, \ldots, \beta_k$ to minimise
$$\sum_{i=1}^{n} \rho(y_i - \beta_0 - \beta_1 x_{i1} - \ldots - \beta_k x_{ik}),$$
where $\rho$ is some function with the following properties:

1. $\rho(x) \ge 0$ for all $x$ and has a minimum at $0$.

2. $\rho(x) = \rho(-x)$ for all $x$.

3. $\rho(x)$ increases for $x > 0$ but does not grow as fast as an exponential function.

The least squares estimate corresponds to the choice
$$\rho(x) = x^2.$$
Definition 1.3.2. The influence function $\psi(x)$ is the derivative of $\rho(x)$, i.e.,
$$\psi(x) = \rho'(x);$$
for a robust estimator, $\psi$ is required to be bounded.

Definition 1.3.3. A function $\rho(x)$ with the above-mentioned properties is called a robust error norm.
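To make the contrast concrete, the following Matlab sketch plots the quadratic error norm against a robust error norm and its bounded influence function; the particular robust norm is the Lorentzian-type norm used later in Chapter 3, and the parameter values are illustrative assumptions:

% Quadratic norm versus a robust norm: the robust rho grows only
% logarithmically and its influence function psi is bounded.
sigma = 1;
x = linspace(-10, 10, 500);
rho_quad   = (x/sigma).^2;                       % influence 2x/sigma^2, unbounded
rho_robust = sigma^2 * log(1 + x.^2/(2*sigma^2));
psi_robust = 2*x ./ (2 + x.^2/sigma^2);          % bounded influence
plot(x, rho_quad, x, rho_robust, x, psi_robust);
legend('quadratic \rho', 'robust \rho', 'robust \psi');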
Chapter 2

Geometric Curves

In this chapter, we will give a theoretical treatment of geometric curves and their evolution over time. We will then show the relationship between geometric curve evolution and anisotropic diffusion in the next chapter. For this chapter, we use the sources [Gage1], [Gage2] and [Sapiro & Tannenbaum].
2.1 Curve evolution and Level set representation of curve evolution

We consider curves that deform over time. We let
$$C(p,t) : S^1 \times [0,T) \to \mathbb{R}^2 \qquad (2.1)$$
be a family of closed curves, where $t$ parameterises the family and $p$ parameterises the curve.

Definition 2.1.1. We say that $C(p,t)$ is a curve that evolves if it satisfies the PDE
$$\frac{\partial C(p,t)}{\partial t} = \alpha(p,t)\,T(p,t) + \beta(p,t)\,N(p,t), \qquad (2.2)$$
with the initial condition $C(p,0) = C_0(p)$, where $T$ represents the unit tangent direction of the curve and $N$ the unit normal direction. Here $\alpha(p,t)$ is the speed in the tangent direction and $\beta(p,t)$ is the speed in the normal direction.
Lemma 2.1.2. The curve evolution, after reparameterisation, also satisfies
$$\frac{\partial C(p,t)}{\partial t} = \beta(p,t)\,N(p,t).$$

Proof. We will prove this lemma by trying to replace the tangential velocity $\alpha(p,t)$ with a new one. If this alteration does not affect the curve motion, we are done.

We begin by reparameterising the curve to $\tilde{C}(\tilde{p},\tau) = C(p,t)$ such that $\tilde{p} = \tilde{p}(p,\tau)$ and $t = \tau$. By the chain rule we find
$$\frac{\partial \tilde{C}}{\partial \tau}
= \frac{\partial \tilde{C}}{\partial \tilde{p}}\,\frac{\partial \tilde{p}}{\partial \tau}
+ \frac{\partial \tilde{C}}{\partial t}\,\frac{\partial t}{\partial \tau}
= \frac{\partial \tilde{C}}{\partial \tilde{p}}\,\frac{\partial \tilde{p}}{\partial \tau}
+ \alpha(p,t)\,T(p,t) + \beta(p,t)\,N(p,t).$$

We rewrite the first term on the right hand side with respect to the Euclidean arc length $s$:
$$\frac{\partial \tilde{C}}{\partial \tilde{p}}\,\frac{\partial \tilde{p}}{\partial \tau}
= \frac{\partial \tilde{C}}{\partial s}\,\frac{\partial s}{\partial \tilde{p}}\,\frac{\partial \tilde{p}}{\partial \tau}. \qquad (2.3)$$

Recall that $T(p,t) = \frac{\partial \tilde{C}}{\partial s}$, and thus we rewrite (2.3) as
$$T(p,t)\,\frac{\partial s}{\partial \tilde{p}}\,\frac{\partial \tilde{p}}{\partial \tau}.$$

Therefore we get
$$\frac{\partial \tilde{C}}{\partial \tau}
= \left( \frac{\partial s}{\partial \tilde{p}}\,\frac{\partial \tilde{p}}{\partial \tau} + \alpha \right) T(p,t) + \beta(p,t)\,N(p,t).$$

Now all we have to prove is that
$$\frac{\partial s}{\partial \tilde{p}}\,\frac{\partial \tilde{p}}{\partial \tau} + \alpha = 0$$
has a solution. We rewrite this as
$$\frac{\partial \tilde{p}}{\partial t} + \alpha\,\frac{\partial \tilde{p}}{\partial s} = 0,$$
that is,
$$\frac{\partial \tilde{p}}{\partial t} + \frac{\alpha}{s_p}\,\frac{\partial \tilde{p}}{\partial p} = 0,$$
where $s_p$ is known in the old parameterisation. The solution of this equation is known and easily found [cf. [Epstein]].
We can also express this locally.

Corollary 2.1.3 (Local property). Let $y = \gamma(x,t)$ be a local representation of the curve $C(p,t)$. Then
$$\frac{\partial \gamma(x,t)}{\partial t} = \beta(x,t)\,\sqrt{1 + \left(\frac{\partial \gamma(x,t)}{\partial x}\right)^2}.$$

Proof. We represent $C(p,t)$ in the form $(x,y) \in \mathbb{R}^2$. Thus we have
$$\frac{\partial C}{\partial t} = \begin{pmatrix} x_t \\ y_t \end{pmatrix}.$$

Let $y = \gamma(x,t)$, so that
$$\frac{\partial y}{\partial t} = \frac{\partial \gamma}{\partial t} + \frac{\partial \gamma}{\partial x}\,\frac{\partial x}{\partial t}.$$

Rewriting,
$$\frac{\partial \gamma}{\partial t} = \frac{\partial y}{\partial t} - \frac{\partial \gamma}{\partial x}\,\frac{\partial x}{\partial t}
= \left\langle \begin{pmatrix} -\gamma_x \\ 1 \end{pmatrix}, \begin{pmatrix} x_t \\ y_t \end{pmatrix} \right\rangle. \qquad (2.4)$$

Recall that
$$T = \frac{1}{\sqrt{1+\gamma_x^2}} \begin{pmatrix} 1 \\ \gamma_x \end{pmatrix}
\quad \text{and} \quad
N = \frac{1}{\sqrt{1+\gamma_x^2}} \begin{pmatrix} -\gamma_x \\ 1 \end{pmatrix}.$$

Since we know that $C_t = \alpha T + \beta N$, we have
$$\frac{\partial C}{\partial t}
= \frac{1}{\sqrt{1+\gamma_x^2}} \left[ \begin{pmatrix} \alpha \\ \alpha\gamma_x \end{pmatrix} + \begin{pmatrix} -\beta\gamma_x \\ \beta \end{pmatrix} \right]
= \frac{1}{\sqrt{1+\gamma_x^2}} \begin{pmatrix} \alpha - \beta\gamma_x \\ \alpha\gamma_x + \beta \end{pmatrix}. \qquad (2.5)$$

From (2.4),
$$\frac{\partial \gamma}{\partial t}
= \left\langle \begin{pmatrix} -\gamma_x \\ 1 \end{pmatrix}, \frac{\partial C}{\partial t} \right\rangle
= \frac{-\gamma_x(\alpha - \beta\gamma_x) + (\alpha\gamma_x + \beta)}{\sqrt{1+\gamma_x^2}}
= \frac{\beta\gamma_x^2 + \beta}{\sqrt{1+\gamma_x^2}}
= \frac{\beta(1+\gamma_x^2)}{\sqrt{1+\gamma_x^2}}
= \beta(x,t)\,\sqrt{1 + \left(\frac{\partial \gamma(x,t)}{\partial x}\right)^2}.$$
Recall from (2.1) that $C(p,t)$ represents a family of closed curves, where $t$ parameterises the family and $p$ the curve. From Lemma 2.1.2 we have proven that, with a suitable choice of parameter $p$,
$$\frac{\partial C(p,t)}{\partial t} = \beta(p,t)\,N(p,t),$$
with initial condition $C(p,0)$.

We can also define this curve as a level set of a Lipschitz function $u(x,y,t) : \mathbb{R}^2 \times [0,T) \to \mathbb{R}$, in the form
$$L_c := \{(x,y,t) \in \mathbb{R}^2 \times [0,T) : u(x,y,t) = c\}, \qquad (2.6)$$
where $c \in \mathbb{R}$. We define the initial curve of the level set representation as $u_0(x,y)$.
Thus we have to find the evolution of $u(x,y,t)$ such that $C(x,y,t) = L_c(x,y,t)$. We do this to ensure that the curve $C(p,t)$ evolves with the level sets of $u(x,y,t)$. By differentiating the equation $u(x,y,t) = c$ with respect to $t$, we get
$$\frac{\partial}{\partial t}\big(u(x,y,t)\big) = 0,$$
that means
$$\frac{\partial u}{\partial x}\frac{\partial x}{\partial t} + \frac{\partial u}{\partial y}\frac{\partial y}{\partial t} + \frac{\partial u}{\partial t} = 0,$$
i.e.,
$$\left\langle \nabla u, \left( \frac{\partial x}{\partial t}, \frac{\partial y}{\partial t} \right) \right\rangle + \frac{\partial u}{\partial t} = 0.$$

Thus we have
$$\left\langle \nabla u, \frac{\partial L_c}{\partial t} \right\rangle + \frac{\partial u}{\partial t} = 0. \qquad (2.7)$$

Note that $N = -\frac{\nabla u}{\|\nabla u\|}$ (taking $u > c$ outside the curve, so that $N$ is the inward normal), and the level set representation has to equal the curve representation, i.e., $L_c \equiv C$, and thus
$$\frac{\partial L_c}{\partial t} = \frac{\partial C}{\partial t} = \beta N. \qquad (2.8)$$

From (2.7) and (2.8) we get
$$\beta\,\langle N, \nabla u \rangle + \frac{\partial u}{\partial t} = 0.$$
Hence,
$$-\beta \left\langle \frac{\nabla u}{\|\nabla u\|}, \nabla u \right\rangle + \frac{\partial u}{\partial t} = 0,$$
and thus
$$\frac{\partial u}{\partial t} = \beta\,\|\nabla u\|.$$

We shall now show that a closed curve, as it evolves, will become more circular, depending on the choice of $\beta$. Before that we need the following preliminary statements.
Definition 2.1.4. The support function of a closed convex curve, with respect to a chosen origin contained in the area bounded by the curve, is
$$r(s) = \langle C(s), -N(s) \rangle,$$
where $s$ is the Euclidean arc length.

We also use $L$ and $A$ to represent the length of $C(p,t)$ and the area enclosed by $C(p,t)$ respectively. Furthermore, $\kappa(s)$ is the curvature of $C(p,t)$ with respect to the arc length $s$.

The next few lemmas state that the length and the area of $C(p,t)$ can be expressed in terms of $r(s)$.
Lemma 2.1.5. The area enclosed by $C$ is given by
$$A = \frac{1}{2} \int_0^L r(s)\,ds.$$

Proof. We apply Green's Theorem. Using the area formula in $(x,y)$ form, we have
$$A = \frac{1}{2} \int_0^L (x\,y_s - y\,x_s)\,ds
= \frac{1}{2} \int_0^L \langle C, -N \rangle\,ds
= \frac{1}{2} \int_0^L r(s)\,ds.$$
Lemma 2.1.6. The length of $C$ is given by
$$L = \int_0^L r(s)\kappa(s)\,ds. \qquad (2.9)$$

Proof. We use the facts that $\frac{\partial C(s,t)}{\partial s} = T(s)$ and $\frac{\partial^2 C(s,t)}{\partial s^2} = \kappa(s)N(s)$. Thus
$$\int_0^L r(s)\kappa(s)\,ds
= -\int_0^L \kappa(s)\,\langle C, N \rangle\,ds
= -\int_0^L \langle C, \kappa(s)N \rangle\,ds
= -\int_0^L \langle C, C_{ss} \rangle\,ds.$$

We integrate by parts and get
$$\int_0^L r(s)\kappa(s)\,ds = -\Big[\langle C, C_s \rangle\Big]_0^L + \int_0^L \langle C_s, C_s \rangle\,ds.$$
The boundary term vanishes since the curve is closed, and since
$$\langle C_s, C_s \rangle = \|C_s\|^2 = \|T(s)\|^2 = 1,$$
we conclude that $\int_0^L r(s)\kappa(s)\,ds = L$.
Lemma 2.1.7. If $C(s)$ is a closed convex $C^1$ curve which satisfies the inequality
$$\int_0^L r(s)^2\,ds \le \frac{LA}{\pi} \qquad (2.10)$$
for a certain origin inside the lamina enclosed by the convex curve $C(s)$, then the inequality
$$\pi\,\frac{L}{A} \le \int_0^L \kappa(s)^2\,ds \qquad (2.11)$$
is met.

Proof. Recall (2.9) and apply the Cauchy-Schwarz inequality to obtain
$$L = \int_0^L r(s)\kappa(s)\,ds
\le \left( \int_0^L r(s)^2\,ds \right)^{\frac{1}{2}} \left( \int_0^L \kappa(s)^2\,ds \right)^{\frac{1}{2}}
\le \left( \frac{LA}{\pi} \right)^{\frac{1}{2}} \left( \int_0^L \kappa(s)^2\,ds \right)^{\frac{1}{2}}.$$

By squaring both sides we get
$$L^2 \le \frac{LA}{\pi} \int_0^L \kappa(s)^2\,ds,$$
and thus $\pi\,\frac{L}{A} \le \int_0^L \kappa(s)^2\,ds$.
2.2 Curve shortening

In this section we will study curve shortening and show that the curve evolution makes curves more circular. We recall that $C_t = \beta N$. By choosing $\beta = \kappa$ we have
$$\frac{\partial C}{\partial t} = \kappa N.$$

Lemma 2.2.1. Given a closed convex curve $C(s,t)$, the derivative of the isoperimetric ratio is given by
$$\left( \frac{L^2}{A} \right)_t = -2\,\frac{L}{A} \left[ \int_C \kappa(s)^2\,ds - \pi\,\frac{L}{A} \right]. \qquad (2.12)$$
Proof. We recall that we initially defined the closed curve with respect to the parameter $p$, so $C(s)$ can be written as $C(p)$. Furthermore, the Euclidean arc length satisfies $\frac{ds}{dp} = \left\|\frac{\partial C}{\partial p}\right\|$, as seen in Chapter 1.

We begin by computing
$$\left( \left\|\frac{\partial C}{\partial p}\right\|^2 \right)_t
= \left\langle \frac{\partial C}{\partial p}, \frac{\partial C}{\partial p} \right\rangle_t
= 2 \left\langle \frac{\partial C_t}{\partial p}, \frac{\partial C}{\partial p} \right\rangle
= 2 \left\langle \frac{\partial (\kappa N)}{\partial p}, \frac{\partial C}{\partial p} \right\rangle
= 2\kappa \left\langle \frac{\partial N}{\partial p}, \frac{\partial C}{\partial p} \right\rangle,$$
where the term involving $\frac{\partial \kappa}{\partial p} N$ drops out because $N$ is orthogonal to $\frac{\partial C}{\partial p}$.

Recall that
$$\frac{\partial N}{\partial p} = \frac{\partial N}{\partial s}\,\frac{ds}{dp}
= -\kappa\,\frac{\partial C}{\partial s}\,\frac{ds}{dp}
= -\kappa\,\frac{\partial C}{\partial p}.$$

Thus we get
$$\left( \left\|\frac{\partial C}{\partial p}\right\|^2 \right)_t
= -2\kappa^2 \left\langle \frac{\partial C}{\partial p}, \frac{\partial C}{\partial p} \right\rangle
= -2\kappa^2 \left\|\frac{\partial C}{\partial p}\right\|^2. \qquad (2.13)$$

On the other hand, we also have
$$\left( \left\|\frac{\partial C}{\partial p}\right\|^2 \right)_t
= 2 \left\|\frac{\partial C}{\partial p}\right\| \left( \left\|\frac{\partial C}{\partial p}\right\| \right)_t.$$

By (2.13), we get
$$\left( \left\|\frac{\partial C}{\partial p}\right\| \right)_t = -\kappa^2 \left\|\frac{\partial C}{\partial p}\right\|.$$
Now we calculate the derivative of the area $A$. From Lemma 2.1.5, by changing variables, $A$ can be written in terms of the inner product and the parameterisation in $p$ as follows:
$$A = \frac{1}{2} \int_0^L r\,ds
= \frac{1}{2} \int_0^L \langle C, -N \rangle\,ds
= -\frac{1}{2} \int_0^1 \langle C, N \rangle \left\|\frac{\partial C}{\partial p}\right\| dp.$$

Now we compute the derivative of $A$ using the product rule:
$$A_t = -\frac{1}{2} \int_0^1 \left[ \langle C_t, N \rangle \left\|\frac{\partial C}{\partial p}\right\|
+ \langle C, N_t \rangle \left\|\frac{\partial C}{\partial p}\right\|
+ \langle C, N \rangle \left( \left\|\frac{\partial C}{\partial p}\right\| \right)_t \right] dp. \qquad (2.14)$$

Since $C_t = \kappa N$, the derivative of $C_t$ with respect to $p$ is given by
$$C_{tp} = \kappa N_p + \kappa_p N = C_{pt}.$$
To calculate $N_t$ we use the fact that $\frac{\partial C}{\partial s} = T$. Thus we write
$$\frac{\partial C}{\partial p}\,\frac{dp}{ds} = T,
\qquad \text{i.e.,} \qquad
\frac{\partial C}{\partial p} = T \left\|\frac{\partial C}{\partial p}\right\|,$$
and therefore
$$C_{pt} = \left( T \left\|\frac{\partial C}{\partial p}\right\| \right)_t. \qquad (2.15)$$

By taking the inner product of (2.15) with $N$, and using $C_{pt} = C_{tp} = \kappa_p N + \kappa N_p$, we obtain
$$\left\langle \left( T \left\|\frac{\partial C}{\partial p}\right\| \right)_t, N \right\rangle
= \left\langle \frac{\partial \kappa}{\partial p} N + \kappa \frac{\partial N}{\partial p}, N \right\rangle
= \frac{\partial \kappa}{\partial p} + \kappa \left\langle \frac{\partial N}{\partial p}, N \right\rangle
= \frac{\partial \kappa}{\partial p}.$$
Note that since $\langle N, N \rangle = 1$,
$$\langle N, N \rangle_p = 2\,\langle N_p, N \rangle = 0.$$

Since $T$ is orthogonal to $N$, we have
$$\left( \langle T, N \rangle \left\|\frac{\partial C}{\partial p}\right\| \right)_t
= \left\langle \left( T \left\|\frac{\partial C}{\partial p}\right\| \right)_t, N \right\rangle
+ \left\langle T \left\|\frac{\partial C}{\partial p}\right\|, N_t \right\rangle = 0.$$

Thus we get
$$\frac{\partial \kappa}{\partial p} = - \left\langle T \left\|\frac{\partial C}{\partial p}\right\|, N_t \right\rangle.$$

Since $N$ is a unit vector and $N_t$ is perpendicular to $N$,
$$N_t = - \frac{\kappa_p}{\left\|\frac{\partial C}{\partial p}\right\|}\,T. \qquad (2.16)$$
By substituting (2.16) into (2.14) and recalling the evolution equation, we get
$$A_t = -\frac{1}{2} \int_0^1 \left[ \kappa\,\langle N, N \rangle \left\|\frac{\partial C}{\partial p}\right\|
+ \left\langle C, -\frac{\kappa_p}{\left\|\frac{\partial C}{\partial p}\right\|}\,T \right\rangle \left\|\frac{\partial C}{\partial p}\right\|
- \kappa^2 \langle C, N \rangle \left\|\frac{\partial C}{\partial p}\right\| \right] dp$$
$$= -\frac{1}{2} \int_0^1 \left[ \kappa \left\|\frac{\partial C}{\partial p}\right\|
- \frac{\partial \kappa}{\partial p}\,\langle C, T \rangle
- \kappa^2 \langle C, N \rangle \left\|\frac{\partial C}{\partial p}\right\| \right] dp.$$

Now we focus on the second term and integrate it by parts:
$$\int_0^1 \frac{\partial \kappa}{\partial p}\,\langle C, T \rangle\,dp
= \Big[ \kappa\,\langle C, T \rangle \Big]_0^1 - \int_0^1 \kappa\,\langle C, T \rangle_p\,dp
= - \int_0^1 \big( \kappa\,\langle C_p, T \rangle + \kappa\,\langle C, T_p \rangle \big)\,dp, \qquad (2.17)$$
where the boundary term vanishes because the curve is closed. Recall that $C_p = C_s\,s_p$, $T_p = T_s\,s_p$ and $T_s = \kappa N$. Thus (2.17) becomes
$$\int_0^1 \frac{\partial \kappa}{\partial p}\,\langle C, T \rangle\,dp
= - \int_0^1 \kappa \left\langle T \left\|\frac{\partial C}{\partial p}\right\|, T \right\rangle dp
- \int_0^1 \kappa \left\langle C, \kappa N \left\|\frac{\partial C}{\partial p}\right\| \right\rangle dp
= - \int_0^1 \kappa \left\|\frac{\partial C}{\partial p}\right\| dp
- \int_0^1 \kappa^2 \langle C, N \rangle \left\|\frac{\partial C}{\partial p}\right\| dp.$$

We substitute this integral back into $A_t$ and use the fact that $ds = \left\|\frac{\partial C}{\partial p}\right\| dp$ to get
$$2A_t = - \int_0^1 \kappa \left\|\frac{\partial C}{\partial p}\right\| dp
- \int_0^1 \kappa \left\|\frac{\partial C}{\partial p}\right\| dp
- \int_0^1 \kappa^2 \langle C, N \rangle \left\|\frac{\partial C}{\partial p}\right\| dp
+ \int_0^1 \kappa^2 \langle C, N \rangle \left\|\frac{\partial C}{\partial p}\right\| dp
= -2 \int_0^L \kappa\,ds.$$

Since the total curvature of a closed convex curve is $2\pi$, we get $2A_t = -4\pi$, and thus
$$A_t = -2\pi.$$
From (2.16) we have the formula for $N_t$. By differentiating $N_t$ with respect to $p$ we are able to compute $\kappa_t$, which will be used for the derivative of $L$. We take the partial derivative of (2.16) with respect to $p$:
$$N_{tp} = - \left( \frac{\kappa_p}{\left\|\frac{\partial C}{\partial p}\right\|}\,T \right)_p
= - \frac{\partial}{\partial p}\left( \frac{\kappa_p}{\left\|\frac{\partial C}{\partial p}\right\|} \right) T
- \frac{\kappa_p}{\left\|\frac{\partial C}{\partial p}\right\|}\,T_p
= - \frac{\partial}{\partial p}\left( \frac{\kappa_p}{\left\|\frac{\partial C}{\partial p}\right\|} \right) T
- \kappa\kappa_p\,N, \qquad (2.18)$$
since $T_p = \kappa N \left\|\frac{\partial C}{\partial p}\right\|$.

Similarly, let us compute $N_{pt}$. Using $N_p = \frac{\partial N}{\partial s}\,\frac{\partial s}{\partial p} = -\kappa T \left\|\frac{\partial C}{\partial p}\right\|$, we have
$$N_{pt} = \left( -\kappa T \left\|\frac{\partial C}{\partial p}\right\| \right)_t
= -\kappa_t\,T \left\|\frac{\partial C}{\partial p}\right\|
- \kappa \left( T \left\|\frac{\partial C}{\partial p}\right\| \right)_t. \qquad (2.19)$$

We use (2.15) to see that
$$\left( T \left\|\frac{\partial C}{\partial p}\right\| \right)_t = C_{pt} = C_{tp}
= \kappa N_p + \kappa_p N
= \kappa \left( -\kappa T \left\|\frac{\partial C}{\partial p}\right\| \right) + \kappa_p N
= -\kappa^2 T \left\|\frac{\partial C}{\partial p}\right\| + \kappa_p N.$$

Thus
$$N_{pt} = -\kappa_t\,T \left\|\frac{\partial C}{\partial p}\right\|
- \kappa \left( -\kappa^2 T \left\|\frac{\partial C}{\partial p}\right\| + \kappa_p N \right)
= -\kappa_t\,T \left\|\frac{\partial C}{\partial p}\right\|
+ \kappa^3\,T \left\|\frac{\partial C}{\partial p}\right\| - \kappa\kappa_p\,N. \qquad (2.20)$$

Since we know that $N_{tp} = N_{pt}$, we get
$$-\kappa_t\,T \left\|\frac{\partial C}{\partial p}\right\| - \kappa\kappa_p\,N + \kappa^3\,T \left\|\frac{\partial C}{\partial p}\right\|
= - \frac{\partial}{\partial p}\left( \frac{\kappa_p}{\left\|\frac{\partial C}{\partial p}\right\|} \right) T - \kappa\kappa_p\,N. \qquad (2.21)$$

By taking the inner product of (2.21) with $T$, we get
$$-\kappa_t \left\|\frac{\partial C}{\partial p}\right\| + \kappa^3 \left\|\frac{\partial C}{\partial p}\right\|
= - \frac{\partial}{\partial p}\left( \frac{\kappa_p}{\left\|\frac{\partial C}{\partial p}\right\|} \right).$$

Thus we get
$$\kappa_t = \frac{1}{\left\|\frac{\partial C}{\partial p}\right\|}\,\frac{\partial}{\partial p}\left( \frac{\kappa_p}{\left\|\frac{\partial C}{\partial p}\right\|} \right) + \kappa^3. \qquad (2.22)$$
Now for the derivative of $L$. Recall from Lemma 2.1.6 that
$$L = \int_0^L r(s)\kappa(s)\,ds.$$
Recalling Definition 2.1.4 of the support function $r(s)$, we write the length in terms of the inner product and reparameterise to get
$$L = -\int_0^1 \kappa\,\langle C, N \rangle \left\|\frac{\partial C}{\partial p}\right\| dp.$$

Thus, by differentiating with respect to $t$,
$$L_t = -\int_0^1 \left( \kappa\,\langle C, N \rangle \left\|\frac{\partial C}{\partial p}\right\| \right)_t dp
= -\left[ \int_0^1 \kappa_t\,\langle C, N \rangle \left\|\frac{\partial C}{\partial p}\right\| dp
+ \int_0^1 \kappa\,\langle C, N \rangle_t \left\|\frac{\partial C}{\partial p}\right\| dp
+ \int_0^1 \kappa\,\langle C, N \rangle \left( \left\|\frac{\partial C}{\partial p}\right\| \right)_t dp \right]$$
$$= -\left[ \int_0^1 \kappa_t\,\langle C, N \rangle \left\|\frac{\partial C}{\partial p}\right\| dp
+ \int_0^1 \kappa \big( \langle C_t, N \rangle + \langle C, N_t \rangle \big) \left\|\frac{\partial C}{\partial p}\right\| dp
- \int_0^1 \kappa^3\,\langle C, N \rangle \left\|\frac{\partial C}{\partial p}\right\| dp \right].$$

Since $C_t = \kappa N$, this becomes
$$L_t = -\left[ \int_0^1 \kappa_t\,\langle C, N \rangle \left\|\frac{\partial C}{\partial p}\right\| dp
+ \int_0^1 \kappa^2 \left\|\frac{\partial C}{\partial p}\right\| dp
+ \int_0^1 \kappa\,\langle C, N_t \rangle \left\|\frac{\partial C}{\partial p}\right\| dp
- \int_0^1 \kappa^3\,\langle C, N \rangle \left\|\frac{\partial C}{\partial p}\right\| dp \right].$$
We know from (2.16) that $N_t = -\frac{\kappa_p}{\|C_p\|}\,T$ and from (2.22) that $\kappa_t = \frac{1}{\|C_p\|}\,\frac{\partial}{\partial p}\left( \frac{\kappa_p}{\|C_p\|} \right) + \kappa^3$. Then
$$L_t = -\left[ \int_0^1 \frac{\partial}{\partial p}\left( \frac{\kappa_p}{\|C_p\|} \right) \langle C, N \rangle\,dp
+ \int_0^1 \kappa^3\,\langle C, N \rangle\,\|C_p\|\,dp
+ \int_0^1 \kappa^2\,\|C_p\|\,dp
- \int_0^1 \kappa\kappa_p\,\langle C, T \rangle\,dp
- \int_0^1 \kappa^3\,\langle C, N \rangle\,\|C_p\|\,dp \right]$$
$$= - \int_0^1 \frac{\partial}{\partial p}\left( \frac{\kappa_p}{\|C_p\|} \right) \langle C, N \rangle\,dp
- \int_0^1 \kappa^2\,\|C_p\|\,dp
+ \int_0^1 \kappa\kappa_p\,\langle C, T \rangle\,dp.$$

Integrating the first term by parts (the boundary term vanishes as the curve is closed),
$$- \int_0^1 \frac{\partial}{\partial p}\left( \frac{\kappa_p}{\|C_p\|} \right) \langle C, N \rangle\,dp
= \int_0^1 \frac{\kappa_p}{\|C_p\|}\,\langle C_p, N \rangle\,dp
+ \int_0^1 \frac{\kappa_p}{\|C_p\|}\,\langle C, N_p \rangle\,dp.$$

Note that $C_p = T\,\|C_p\|$ and $N_p = N_s\,\|C_p\| = -\kappa T\,\|C_p\|$. Therefore $\langle C_p, N \rangle = 0$ and
$$\int_0^1 \frac{\kappa_p}{\|C_p\|}\,\langle C, N_p \rangle\,dp
= - \int_0^1 \kappa\kappa_p\,\langle C, T \rangle\,dp.$$

Hence
$$L_t = - \int_0^1 \kappa^2\,\|C_p\|\,dp
+ \int_0^1 \kappa\kappa_p\,\langle C, T \rangle\,dp
- \int_0^1 \kappa\kappa_p\,\langle C, T \rangle\,dp
= - \int_0^1 \kappa^2\,\|C_p\|\,dp
= - \int_0^L \kappa^2\,ds.$$

Now we are left with computing the derivative of the isoperimetric ratio:
$$\left( \frac{L^2}{A} \right)_t
= \frac{2L\,L_t\,A - L^2 A_t}{A^2}
= \frac{2AL\left( -\int_0^L \kappa^2\,ds \right) - L^2(-2\pi)}{A^2}
= -2\,\frac{L}{A} \left[ \int_C \kappa^2\,ds - \pi\,\frac{L}{A} \right].$$
We assume that $C(t)$ satisfies the curve evolution equation $C_t = \kappa N$ on a time interval $[0,T)$.

Lemma 2.2.2. We regard $A$ as a function of $t$. If $\lim_{t \to T} A(t) = 0$, then
$$\liminf_{t \to T}\; L(t) \left[ \int_C \kappa^2\,ds - \pi\,\frac{L(t)}{A(t)} \right] \le 0. \qquad (2.23)$$
2.2 Curve shortening
28
Proof. From Lemma 2.2.1, the derivative of the isoperimetric ratio is
L2
A
= −2
t
L
A
κ(s)2 ds − π
C
L
.
A
(2.24)
Suppose on the contrary, there exists a neighbourhood in [t, T ) such that
κ(s)2 ds − π
L
C
L
A
> ,
for any > 0. Then we get
L2
A
−2 .
A
t
(2.25)
Note that
1 dA
A(t) dt
1
.
= −2π
A(t)
(ln A(t))t =
Thus we get
L2
A
with t1
π
t
(ln A)t ,
(2.26)
t < T.
Now we integrate (2.26) to obtain
t
t
L2
A
dt
t
t1
π
t1
2
2
L
L
(t) − (t1 )
A
A
Since we know that
(ln A)t dt,
L2
A
π
ln(A(t)) −
π
ln(A(t1 ))
4π (cf. [OssermanR]) and it is given that lim A(t) = 0
t→T
thus
π
ln(A(t)) −
π
ln(A(t1 )) → −∞.
2.2 Curve shortening
29
This implies that
4π
L2
L2
L2
(t) − (t1 ) + (t1 ) < 0, for a fixed t1
A
A
A
and t sufficiently close to T , which leads to a contradiction.
Lemma 2.2.3. There exists a non-negative functional $F(C)$, defined for all $C^1$ curves, with the property that
$$\int_C r(s)^2\,ds \le \frac{LA\,(1 - F(C))}{\pi}. \qquad (2.27)$$
Moreover, every sequence of closed convex curves $\{C_i\}$ with $\lim_{i \to \infty} F(C_i) = 0$ converges to the unit disk, up to renormalisation.

Proof. The proof of this lemma is broken into three parts. The first part proves that (2.27) is met for convex curves symmetric with respect to the origin, the second part handles the convergence of $C_i$ when $\lim_{i \to \infty} F(C_i) = 0$, and finally we generalise to all convex curves.

For convex curves symmetric with respect to the origin, i.e., such that $(x,y) \in C$ implies $(-x,-y) \in C$, we will define a non-negative functional $E(C)$ such that
$$\int_C r(s)^2\,ds \le \frac{LA\,(1 - E(C))}{\pi}. \qquad (2.28)$$

We define
$$E(C) = 1 + \frac{\pi x_{in} x_{out}}{A} - \frac{2\pi (x_{out} + x_{in})}{L}, \qquad (2.29)$$
where $x_{out}$ is the radius of the smallest circumscribed circle of $C$ and $x_{in}$ is the radius of the largest inscribed circle of $C$. Furthermore, the Bonnesen inequality states that $xL - A - \pi x^2 \ge 0$ for all $x \in [x_{in}, x_{out}]$.

Note that when $x_{in} = x_{out}$, $C$ is a circle, and that the left hand side of the Bonnesen inequality (cf. [Osserman]) is a quadratic function of $x$.
Figure 2.1: Graphical representation of the inequality and the secant line.
We consider the secant line through the two points on the graph of $x \mapsto xL - A - \pi x^2$ given by $(x_{in},\, x_{in}L - A - \pi x_{in}^2)$ and $(x_{out},\, x_{out}L - A - \pi x_{out}^2)$.

We now compute the equation of this secant line, which lies below the graph of $xL - A - \pi x^2$ between the two points. The equation of a line in $\mathbb{R}^2$ through the points $(x_1, y_1)$ and $(x_2, y_2)$ is, in general,
$$\frac{y - y_1}{y_1 - y_2} = \frac{x - x_1}{x_1 - x_2}.$$

Thus we get
$$y - (x_{in}L - A - \pi x_{in}^2)
= \frac{x - x_{in}}{x_{in} - x_{out}} \Big( (x_{in}L - A - \pi x_{in}^2) - (x_{out}L - A - \pi x_{out}^2) \Big),$$
and therefore
$$y = (x - x_{in})\,\frac{x_{out}L - A - \pi x_{out}^2}{x_{out} - x_{in}}
+ (x_{out} - x)\,\frac{x_{in}L - A - \pi x_{in}^2}{x_{out} - x_{in}}.$$

Since the secant line lies below the graph of $xL - A - \pi x^2$ on $[x_{in}, x_{out}]$, we get
$$xL - A - \pi x^2 \ge (x - x_{in})\,\frac{x_{out}L - A - \pi x_{out}^2}{x_{out} - x_{in}}
+ (x_{out} - x)\,\frac{x_{in}L - A - \pi x_{in}^2}{x_{out} - x_{in}}. \qquad (2.30)$$

Simplifying the right hand side of (2.30), we get
$$\frac{(x - x_{in})(x_{out}L - A - \pi x_{out}^2) + (x_{out} - x)(x_{in}L - A - \pi x_{in}^2)}{x_{out} - x_{in}} \qquad (2.31)$$
$$= \frac{1}{x_{out} - x_{in}} \Big( A(x_{in} - x_{out}) + xL(x_{out} - x_{in}) - \pi x (x_{out}^2 - x_{in}^2) + \pi x_{in} x_{out}(x_{out} - x_{in}) \Big)$$
$$= -A + xL - \pi x (x_{out} + x_{in}) + \pi x_{out} x_{in}. \qquad (2.32)$$
We replace $x$ by the support function $r$ in (2.30)-(2.32) and integrate over $[0,L]$ with respect to arc length:
$$L \int_0^L r\,ds - AL - \pi \int_0^L r^2\,ds
\ge -AL + L \int_0^L r\,ds - \pi(x_{out} + x_{in}) \int_0^L r\,ds + \pi x_{out} x_{in} L.$$

Recall that $\int_0^L r(s)\,ds = 2A$. Thus we get
$$LA - \pi \int_0^L r^2\,ds
\ge LA - 2A\pi(x_{out} + x_{in}) + \pi x_{out} x_{in} L
= LA \left( 1 - \frac{2\pi}{L}(x_{out} + x_{in}) + \frac{\pi x_{out} x_{in}}{A} \right)
= LA \cdot E(C),$$
which is (2.28).

If $E(C) = 0$, then
$$0 = LA \cdot E(C) = LA + \pi L x_{in} x_{out} - 2\pi A(x_{in} + x_{out})
= \int_0^L \big( -A + rL - \pi r(x_{in} + x_{out}) + \pi x_{in} x_{out} \big)\,ds.$$
By (2.31)-(2.32), the integrand equals
$$(r - x_{in})\,\frac{x_{out}L - A - \pi x_{out}^2}{x_{out} - x_{in}}
+ (x_{out} - r)\,\frac{x_{in}L - A - \pi x_{in}^2}{x_{out} - x_{in}},$$
which is non-negative, since $x_{in} \le r \le x_{out}$ and, by Bonnesen's inequality, $x_{in}L - A - \pi x_{in}^2 \ge 0$ and $x_{out}L - A - \pi x_{out}^2 \ge 0$. A non-negative integrand with vanishing integral must vanish identically, and this forces $x_{in} = x_{out}$, which indicates that $C$ is a circle.
Now for the second part of the proof. We are given a sequence of convex curves $\{C_i\}$, symmetric with respect to the origin. Let us normalise such curves to $\gamma_i = \sqrt{\frac{\pi}{A_i}}\,C_i$, and denote by $H_i$ the area enclosed by $\gamma_i$. We claim that if $\lim_{i \to \infty} E(C_i) = 0$, then $\lim_{i \to \infty} E(\gamma_i) = 0$.

To prove this, we first define a few curves. We let $C_{out}$ and $\gamma_{out}$ be the smallest circumscribed circles of $C$ and $\gamma$ respectively, and $C_{in}$ and $\gamma_{in}$ the largest inscribed circles of $C$ and $\gamma$ respectively.

Since $\gamma = \sqrt{\frac{\pi}{A}}\,C$, where $A$ is the area bounded by $C$, we have
$$(x_{out})_{\gamma_{out}} = \sqrt{\frac{\pi}{A}}\,(x_{out})_{C_{out}},
\qquad
(x_{in})_{\gamma_{in}} = \sqrt{\frac{\pi}{A}}\,(x_{in})_{C_{in}}.$$
Therefore
$$E(\gamma) = 1 + \frac{\pi\,(x_{in})_{\gamma_{in}}\,(x_{out})_{\gamma_{out}}}{A_\gamma}
- \frac{2\pi\big( (x_{out})_{\gamma_{out}} + (x_{in})_{\gamma_{in}} \big)}{L_\gamma},$$
where $A_\gamma$ and $L_\gamma$ are the area bounded by the normalised curve and the length of the normalised curve $\gamma$, respectively. Since $A_\gamma = \pi$ and $L_\gamma = \sqrt{\frac{\pi}{A}}\,L$, we get
$$E(\gamma) = 1 + \frac{\pi \cdot \frac{\pi}{A}\,(x_{out})_{C_{out}}\,(x_{in})_{C_{in}}}{\pi}
- \frac{2\pi\,\sqrt{\frac{\pi}{A}}\,\big( (x_{in})_{C_{in}} + (x_{out})_{C_{out}} \big)}{\sqrt{\frac{\pi}{A}}\,L}
= 1 + \frac{\pi\,(x_{out})_{C_{out}}\,(x_{in})_{C_{in}}}{A}
- \frac{2\pi\,\big( (x_{in})_{C_{in}} + (x_{out})_{C_{out}} \big)}{L}
= E(C).$$

Therefore, for any sequence $\{C_i\}$ with normalised curves $\{\gamma_i\}$, we have
$$\lim_{i \to \infty} E(C_i) = \lim_{i \to \infty} E(\gamma_i) = 0.$$
Since all the $\gamma_i$ lie within a disk in $\mathbb{R}^2$, by the Blaschke selection theorem there exists a subsequence $\{\gamma_{i_k}\}$ converging to a limit convex set $\gamma_\infty$. We know that $A$, $L$, $x_{out}$ and $x_{in}$ are all continuous functionals of convex sets; thus $E$ is also a continuous functional, and we can say
$$E(\gamma_\infty) = \lim_{i_k \to \infty} E(\gamma_{i_k}) = 0.$$
From the first part, $\gamma_\infty$ is a unit circle. Thus every convergent subsequence $\gamma_{i_k}$ converges to the unit circle, and $\{\gamma_i\}$ converges to the unit circle in the Hausdorff metric.
Finally, for the third part of the proof, we show that any convex curve can be split into two pieces of equal area. If we choose one point $X(s)$ on the convex curve $C$, then we can uniquely determine another point $Y(s)$ such that the line through $X(s)$ and $Y(s)$ splits the area bounded by the curve into two equal parts.

Furthermore, the splitting points can be chosen so that the tangents at the two points are parallel to each other. To prove this, we define a function of the points $X(s), Y(s) \in C$ by
$$f(X(s)) = \langle T_{X(s)} \times T_{Y(s)}, n \rangle,$$
where $T_{X(s)}$ and $T_{Y(s)}$ are the tangents at $X(s)$ and $Y(s)$ respectively, and $n$ represents the positively oriented normal to the plane.

Since $a \times b = -(b \times a)$ for any vectors $a$ and $b$, we have
$$f(Y(s)) = \langle T_{Y(s)} \times T_{X(s)}, n \rangle
= \langle -(T_{X(s)} \times T_{Y(s)}), n \rangle
= -\langle T_{X(s)} \times T_{Y(s)}, n \rangle
= -f(X(s)).$$

Since $f$ is a continuous functional $f : C \to \mathbb{R}$, where $C$ is any closed convex curve, we can apply the Intermediate Value Theorem: if $f(Y(s)) > 0$ then $f(X(s)) < 0$ (the case $f = 0$ being trivial). Thus there is an $s_1$ such that $f(X(s_1)) = 0$, with a unique corresponding $Y(s_1) \in C$. This means that
$$\langle T_{X(s_1)} \times T_{Y(s_1)}, n \rangle = 0.$$

This means either that $T_{X(s_1)} \times T_{Y(s_1)}$ is orthogonal to $n$, or that $T_{X(s_1)} \times T_{Y(s_1)} = 0$. But the cross product of two vectors in the plane is always parallel to $n$; thus $T_{X(s_1)} \times T_{Y(s_1)} = 0$, and therefore $T_{X(s_1)}$ is parallel to $T_{Y(s_1)}$, with opposite direction.
By taking the line segment joining $X(s_1)$ and $Y(s_1)$ as the $x$-axis, we can set the midpoint of that segment as the origin for the curve $C$. We can now split the curve into two open curves, $C_1$ and $C_2$, such that $C_1$ lies above the axis and $C_2$ lies below.

The idea is to consider $C_1$ together with its reflection through the origin, $-C_1$, which together form a closed convex curve symmetric through the origin, as shown in the diagram below.

Figure 2.2: Reflection of $C_1$ through the origin.

This new curve is denoted
$$\xi^{(1)} = C_1 \cup \{-C_1\}.$$
Similarly, for $C_2$ we create a new symmetric convex curve through the origin, denoted
$$\xi^{(2)} = C_2 \cup \{-C_2\}.$$
Note that both new curves bound the same area $A$ as $C$. Now we apply (2.28) to $\xi^{(1)}$ and get
$$2L^{(1)}A - \pi \int_{\xi^{(1)}} r^2\,ds \ge 2L^{(1)}A \cdot E(\xi^{(1)}), \qquad (2.33)$$
where $2L^{(1)}$ is the length of $\xi^{(1)}$. Similarly we apply (2.28) to $\xi^{(2)}$ to get
$$2L^{(2)}A - \pi \int_{\xi^{(2)}} r^2\,ds \ge 2L^{(2)}A \cdot E(\xi^{(2)}), \qquad (2.34)$$
where $2L^{(2)}$ is the length of $\xi^{(2)}$.

We sum (2.33) and (2.34) to get
$$2(L^{(1)} + L^{(2)})A - \pi \left( \int_{\xi^{(1)}} r^2\,ds + \int_{\xi^{(2)}} r^2\,ds \right)
\ge 2L^{(1)}A \cdot E(\xi^{(1)}) + 2L^{(2)}A \cdot E(\xi^{(2)}).$$

Note that $L^{(1)} + L^{(2)} = L$, and by symmetry
$$\int_{C_1 \cup \{-C_1\}} r^2\,ds + \int_{C_2 \cup \{-C_2\}} r^2\,ds
= 2\int_{C_1} r^2\,ds + 2\int_{C_2} r^2\,ds = 2\int_{C} r^2\,ds.$$
Thus, simplifying the above gives
$$2LA - 2\pi \int_C r^2\,ds \ge 2LA \left( \frac{L^{(1)}}{L} E(\xi^{(1)}) + \frac{L^{(2)}}{L} E(\xi^{(2)}) \right),$$
and hence
$$LA - \pi \int_C r^2\,ds \ge LA \left( \frac{L^{(1)}}{L} E(\xi^{(1)}) + \frac{L^{(2)}}{L} E(\xi^{(2)}) \right).$$

Let
$$F(C) = \sup \left( \frac{L^{(1)}}{L} E(\xi^{(1)}) + \frac{L^{(2)}}{L} E(\xi^{(2)}) \right), \qquad (2.35)$$
where the supremum is taken over the splittings constructed above. We normalise each such $C$, with bounded area $A$, to $\gamma = \sqrt{\frac{\pi}{A}}\,C$. We claim that $F(C) = F(\gamma)$. To prove this, we first note that
$$E(\xi^{(1)}) = E(\gamma^{(1)}), \qquad E(\xi^{(2)}) = E(\gamma^{(2)}),$$
by the scale invariance of $E$ shown in the second part. Thus we have
$$F(C) = \sup \left( \frac{L^{(1)}}{L} E(\xi^{(1)}) + \frac{L^{(2)}}{L} E(\xi^{(2)}) \right)
= \sup \left( \frac{L^{(1)}}{L} E(\gamma^{(1)}) + \frac{L^{(2)}}{L} E(\gamma^{(2)}) \right)
= F(\gamma).$$
For the normalised curve of $C_1$, denoted $\gamma_1$, we let $\zeta^{(1)} = \gamma_1 \cup \{-\gamma_1\}$; likewise, for the normalised curve of $C_2$, denoted $\gamma_2$, we let $\zeta^{(2)} = \gamma_2 \cup \{-\gamma_2\}$. Each $\zeta^{(j)}$ bounds area $\pi$, so by the isoperimetric inequality its length satisfies $2L^{(j)} \ge 2\pi$.

If the length of $\gamma$ satisfies $L_\gamma \le K$ for some constant $K$, then
$$F(\gamma) = \sup \left( \frac{L^{(1)}}{L_\gamma} E(\zeta^{(1)}) + \frac{L^{(2)}}{L_\gamma} E(\zeta^{(2)}) \right)
\ge \frac{L^{(1)}}{L_\gamma} E(\zeta^{(1)})
\ge \frac{\pi}{K}\,E(\zeta^{(1)}). \qquad (2.36)$$
Similarly, for $\zeta^{(2)}$ we have
$$F(\gamma) \ge \frac{\pi}{K}\,E(\zeta^{(2)}). \qquad (2.37)$$

It is clear that
$$x_{in}(\gamma) \ge \min\{x_{in}(\zeta^{(1)}), x_{in}(\zeta^{(2)})\} \qquad (2.38)$$
and
$$x_{out}(\gamma) \le \max\{x_{out}(\zeta^{(1)}), x_{out}(\zeta^{(2)})\}. \qquad (2.39)$$

So if we have a sequence $\{\gamma_i\}$ such that $\lim_{i \to \infty} F(\gamma_i) = 0$, then the functional $E$ of the associated symmetric curves tends to zero. Thus, from the second part, the symmetric curves converge to the unit disk, and therefore by (2.38) and (2.39) the $x_{out}$ and $x_{in}$ of the $\gamma_i$ converge; hence $\gamma_i$ converges to the unit disk.
Lemma 2.2.4. For the functional $F(C)$ defined in Lemma 2.2.3, we have
$$\int_C \kappa^2\,ds\,(1 - F(C)) - \pi\,\frac{L}{A} \ge 0, \qquad (2.40)$$
where $C$ is a closed convex $C^2$ curve.

Proof. We recall that
$$L = \int_C \kappa(s)r(s)\,ds
\le \left( \int_C r^2\,ds \right)^{\frac{1}{2}} \left( \int_C \kappa^2\,ds \right)^{\frac{1}{2}}. \qquad (2.41)$$

Thus, by squaring (2.41) and applying (2.27), we get
$$L^2 \le \int_C r^2\,ds \int_C \kappa^2\,ds \le \frac{LA\,(1 - F(C))}{\pi} \int_C \kappa^2\,ds,$$
and thus
$$\pi\,\frac{L}{A} \le \int_C \kappa^2\,ds\,(1 - F(C)).$$
Theorem 2.2.5. Let $C(t)$ be a family of $C^2$ curves satisfying the evolution equation $C_t = \kappa N$ for $0 \le t < T$, such that $\lim_{t \to T} A(t) = 0$. Then
$$\lim_{t \to T} \frac{L^2}{A} = 4\pi.$$
Furthermore, the laminae $H$ bounded by the normalised curves $\gamma = \sqrt{\frac{\pi}{A}}\,C$ converge to the unit disk.

Proof. We rearrange (2.40) to get
$$\int_C \kappa^2\,ds - \pi\,\frac{L}{A} \ge \int_C \kappa^2\,ds\,F(C). \qquad (2.42)$$

By the Cauchy-Schwarz inequality, we know that
$$\left( \int_C \kappa\,ds \right)^2 \le \int_C \kappa^2\,ds \int_C ds.$$
Since $\int_C ds = L$ and $\int_C \kappa\,ds = 2\pi$,
$$4\pi^2 = \left( \int_C \kappa\,ds \right)^2 \le L \int_C \kappa^2\,ds.$$
Thus $L \int_C \kappa^2\,ds \ge 4\pi^2$. Combining this with (2.42), we get
$$L \left[ \int_C \kappa^2\,ds - \pi\,\frac{L}{A} \right] \ge L \int_C \kappa^2\,ds\,F(C) \ge 4\pi^2\,F(C).$$
Since $\lim_{t \to T} A(t) = 0$, by Lemma 2.2.2 there is a sequence $\{C(t_i)\}$ with $t_i \to T$ such that
$$\liminf_{t \to T}\; L(t) \left[ \int_{C(t)} \kappa^2\,ds - \pi\,\frac{L(t)}{A(t)} \right] \le 0.$$

Since $F(C) \ge 0$, this implies that $F(C(t_i))$ approaches zero as $t_i \to T$, as
$$0 \le \liminf_{t \to T} 4\pi^2\,F(C(t_i))
\le \liminf_{t \to T}\; L(t) \left[ \int_{C(t)} \kappa^2\,ds - \pi\,\frac{L(t)}{A(t)} \right] \le 0.$$

From (2.40) we know that
$$L \left[ \int_C \kappa^2\,ds - \pi\,\frac{L}{A} \right] \ge 4\pi^2\,F(C),$$
and since $F(C) \ge 0$ this implies $L \left[ \int_C \kappa^2\,ds - \pi\,\frac{L}{A} \right] \ge 0$. Thus we have
$$\left( \frac{L^2}{A} \right)_t = -2\,\frac{L}{A} \left[ \int_C \kappa(s)^2\,ds - \pi\,\frac{L}{A} \right] \le 0,$$
so the isoperimetric ratio is non-increasing; since $\frac{L^2}{A} \ge 4\pi$ always, and $F(C(t_i)) \to 0$ forces the ratio towards $4\pi$ along the sequence $t_i$, we conclude that $\lim_{t \to T} \frac{L^2}{A} = 4\pi$.

The Bonnesen inequality (cf. [Osserman]) states that
$$L^2 - 4\pi A \ge \pi^2 (x_{out} - x_{in})^2.$$
Rewriting this, we get
$$\frac{L^2}{A} - 4\pi \ge \frac{\pi^2}{A}\,(x_{out} - x_{in})^2, \qquad (2.43)$$
which shows that the outer radii of the normalised curves $\gamma(t)$ are bounded by a constant $R$.

The evolution process shrinks the curve, and the laminae $H(t)$ bounded by the normalised curves $\gamma(t)$ are nested, $H(t_1) \subseteq H(t_2)$ for $t_2 > t_1$; the collection $\{H(t)\}$, $0 \le t < T$, therefore converges to the unit disk.
Chapter 3

Anisotropic Diffusion

3.1 Anisotropic Diffusion

We will now give some known functions $c(\cdot)$ that perform relatively well for anisotropic diffusion on a given image, and also a new function that performs equally well or better. Furthermore, in the next section we will explain why certain choices of function are better than others.

The first function we discuss is the one that Perona and Malik first proposed, given as
$$c(x) = e^{-\frac{x^2}{k^2}}. \qquad (3.6)$$
We notice that this function satisfies all the properties needed for anisotropic diffusion. In particular,
$$\lim_{x \to \infty} c(x) = 0.$$

We begin with a grayscale picture, denoted $I_0$, and its edge representation, shown below.

Figure 3.5: The image $I_0$ and its edge representation.
Now we will apply (3.4) with the function given in (3.6) to the image $I_0$, for 100 iterations with $k = 20$ and $\lambda = 0.25$. A comparison between the images and their edge representations is shown below.
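In terms of the anisodiff routine listed in Appendix A (whose option 1 selects this exponential conduction function and option 2 the rational function (3.7) below), this experiment corresponds to a call of the following form, assuming the grayscale image is held in a Matlab array I0:

% 100 iterations, k = 20, lambda = 0.25, exponential conduction (3.6).
diffused = anisodiff(I0, 100, 20, 0.25, 1);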
Figure 3.6: After applying anisotropic diffusion using $c(x) = e^{-x^2/k^2}$ (original and diffused images, with their edge representations).

Notice that the diffused image in Figure 3.6 has some loss of information: there is some degree of blurring of the image. Furthermore, most of the edges are lost, as we can see from the enlarged version of a portion of the image.
Figure 3.7: Enlarged edge images (original and after anisotropic diffusion).

Notice that a big portion of the edges found in the original edge image was lost after anisotropic diffusion using the function $e^{-x^2/k^2}$. The reason why this function works less effectively than other functions will be given in the next section.
The second function we will cover is related to the Taylor expansion of $e^{-x^2/k^2}$: we let
$$c(x) = \frac{1}{1 + \frac{x^2}{k^2}}. \qquad (3.7)$$

Now we run the Perona-Malik algorithm with 100 iterations, $k = 20$ and $\lambda = 0.25$, and obtain the following images, which we compare with the original image.
Figure 3.8: The difference after applying anisotropic diffusion using $c(x) = \frac{1}{1 + x^2/k^2}$ (original and diffused images, with their edge representations).

Note that for this choice of the function $c(x)$ there is more loss of information, such as spreading of data across the edges of the image as well as further blurring, as seen in Figure 3.8.
Now we consider a function called the Tukey Biweight, defined in the following way:
$$c(x,k) = \begin{cases} \left(1 - \dfrac{x^2}{k^2}\right)^2, & |x| \le k; \\ 0, & \text{otherwise}. \end{cases} \qquad (3.8)$$
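In the vectorised style of the appendix code, the Tukey conduction coefficient could be evaluated as in the sketch below. This is an assumed form (the appendix listing is truncated before its third option), with deltaN and kappa as used there:

% Tukey biweight conduction: zero beyond the threshold kappa.
tukey = @(d, k) (abs(d) <= k) .* (1 - (d/k).^2).^2;
cN = tukey(deltaN, kappa);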
We now apply the Perona-Malik algorithm with 100 iterations, $k = 20$ and $\lambda = 0.25$, obtaining the following diffused images alongside the original image.

Figure 3.9: The difference after applying anisotropic diffusion using the Tukey Biweight (original and diffused images, with their edge representations).
Now we compare the edge-based images for all three functions that have been employed, namely $e^{-x^2/k^2}$, $\frac{1}{1 + x^2/k^2}$ and the Tukey Biweight function.

Figure 3.10: Enlarged edge images for $c(x) = e^{-x^2/k^2}$, $c(x) = \frac{1}{1 + x^2/k^2}$, the Tukey Biweight, and the original edge image.

As noticed, the diffused image using the Tukey Biweight is almost the same as the original image.
Now a new function that works as effectively as the Tukey Biweight will be presented. Let us define it as
$$c(x) = |\operatorname{erfc}(|x|)\cos(x)|, \qquad (3.9)$$
where $\operatorname{erfc}(x)$, known as the complementary error function, is given as
$$\operatorname{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_x^{\infty} e^{-t^2}\,dt.$$

This choice of function is made because we need the property that the function tends towards 0 as $x \to \infty$, which is met by the $\operatorname{erfc}(x)$ factor; the $\cos(x)$ factor smooths out the behaviour of the function.

We first need to ensure that the function tends to zero as $x \to \infty$ and does not contain singular points.
We check that indeed this function satisfies the property, i.e.,
$$\lim_{x \to \infty} c(x) = \lim_{x \to \infty} |\operatorname{erfc}(|x|)\cos(x)| = 0.$$

Since $c(x) = |\operatorname{erfc}(|x|)\cos(x)|$ and $|\cos(x)| \le 1$, we have
$$0 \le |\operatorname{erfc}(|x|)\cos(x)| \le |\operatorname{erfc}(|x|)|.$$

Letting $x \to \infty$, we get
$$0 \le \lim_{x \to \infty} |\operatorname{erfc}(|x|)\cos(x)| \le \lim_{x \to \infty} |\operatorname{erfc}(|x|)|.$$

However, $\operatorname{erfc}(|x|) = \frac{2}{\sqrt{\pi}} \int_{|x|}^{\infty} e^{-t^2}\,dt$, and thus
$$\lim_{x \to \infty} |\operatorname{erfc}(|x|)| = \frac{2}{\sqrt{\pi}} \lim_{x \to \infty} \int_{|x|}^{\infty} e^{-t^2}\,dt = 0,$$
since the integral $\int_0^\infty e^{-t^2}\,dt$ converges. Thus by the Squeeze Theorem we get $\lim_{x \to \infty} c(x) = 0$.
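A hypothetical vectorised form of (3.9), for use inside the appendix loop, is sketched below; the thesis does not list this option in the appendix, and scaling the argument by kappa is an assumption made to match how the other conduction options use kappa:

% New conduction function (3.9), applied to the neighbour differences.
cN = abs(erfc(abs(deltaN/kappa)) .* cos(deltaN/kappa));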
Now we run the Perona-Malik algorithm using the new function defined in (3.9), with $k = 20$, $\lambda = 0.25$ and 100 iterations. We get the following images.

Figure 3.11: The difference after applying anisotropic diffusion using $c(x)$ defined in (3.9) (original and diffused images, with their edge representations).
Notice that the function $c(x) = |\operatorname{erfc}(|x|)\cos(x)|$ works as effectively as the Tukey Biweight.

Now we test whether, when we increase the number of iterations, the Tukey Biweight or the new function works better. Here we keep everything the same with the exception of the number of iterations, which we now set to 1000, and compare our given function against the Tukey Biweight.

Figure 3.12: The differences after applying anisotropic diffusion with the Tukey Biweight and with the function $c(x)$, for 1000 iterations (images and edge representations).

Notice that there is very little change in the edge representations, and both seem to work well. However, in the next section we will show that changing another factor will differentiate the two.
3.2 A Robust Statistical View of Anisotropic Diffusion

In this section, we will analyse the Perona-Malik anisotropic diffusion from the viewpoint of robust statistics. We will cover the four functions $c(x)$ discussed in the previous section and describe why certain functions work better than others.

We shall assume that the input image is a piecewise constant function that has been corrupted by zero-mean Gaussian noise with a small variance.

Consider the intensity difference between pixels $p$ and $s$, i.e., $I_p - I_s$. Within a smooth image region, this difference is small, zero-mean and normally distributed. Therefore an optimal estimator of $I_s$ would minimise the square of the neighbour differences. However, if the image region contains a boundary, then $I_p - I_s$ will not be normally distributed.

Figure 3.13: Local neighbourhood of pixels at a boundary.

Suppose, for example, we have the image region of Figure 3.13, which contains a boundary. If a pixel $p$ lies in the darker region and the pixel $s$ lies in the white region, then to estimate the true intensity value at $s$ we should use only neighbours in the same region: in this situation $I_p - I_s$ will be large, and the neighbour difference $I_p - I_s$ will be considered an outlier.

We wish to recover the image $I$ from its noisy version $I_0$.
This is done via the minimisation problem
$$\min_I \sum_{s \in I} \sum_{p \in \eta_s} \rho(I_p - I_s, \sigma), \qquad (3.10)$$
where $\eta_s$ is the neighbourhood of pixel $s$, $\rho(\cdot)$ is the robust error norm, and $\sigma$ is a scale parameter.

We can solve (3.10) by the gradient descent method:
$$I_s^{n+1} = I_s^n + \frac{\lambda}{|\eta_s|} \sum_{p \in \eta_s} \psi(I_p^n - I_s^n, \sigma), \qquad (3.11)$$
where $\psi(\cdot) = \rho'(\cdot)$, $n$ is the iteration number and $\frac{\lambda}{|\eta_s|}$ is the rate of descent.
Thus the choice of $\rho(\cdot)$ is critical. In order to analyse a robust error norm, we consider its derivative $\psi(\cdot)$.

Suppose we choose a quadratic error norm; then $I_s^{n+1}$ is assigned the mean of the neighbouring values $I_p^n$. If these values come from different populations, i.e., from across edges, the estimate will not be a good representation of either population, and the image will become too blurred.

Consider the influence function of the quadratic error norm $\rho(x,\sigma) = \left(\frac{x}{\sigma}\right)^2$:
$$\rho'(x,\sigma) = 2\,\frac{x}{\sigma}\,\frac{d}{dx}\left(\frac{x}{\sigma}\right) = \frac{2x}{\sigma^2} = \psi(x,\sigma).$$

The derivative of the error norm is proportional to the influence function (cf. Definition 1.3.2), and here the influence function is linear and unbounded. Thus the quadratic error norm gives outliers, i.e., large values of $|\nabla I_{s,p}|$, too much influence.

We therefore need to increase robustness and reject such outliers: $\rho(\cdot)$ must increase less rapidly than a quadratic, and $\psi(x)$, which is proportional to the influence function, must be bounded.
We now formally express the relationship between robust statistics and anisotropic diffusion. Recall that anisotropic diffusion is given as
$$\frac{\partial I(x,y,t)}{\partial t} = \operatorname{div}\big[ c(|\nabla I|)\,\nabla I \big]. \qquad (3.12)$$

We can express the continuous form of the robust minimisation problem (3.10) as
$$\min_I \int_\Omega \rho(|\nabla I|, \sigma)\,d\Omega, \qquad (3.13)$$
where $\Omega$ is the domain of the image and $I_0$ is the initial noisy image. We can minimise (3.13) by the gradient descent method as stated in Chapter 1.

We define the function
$$c(x) := \frac{\rho'(x)}{x}. \qquad (3.14)$$

By setting $x = |\nabla I|$ and applying this to (3.12), we get
$$\frac{\partial I(x,y,t)}{\partial t} = \operatorname{div}\left[ \rho'(|\nabla I|)\,\frac{\nabla I}{|\nabla I|} \right]. \qquad (3.15)$$

Therefore we have obtained a straightforward relation between anisotropic diffusion and robust estimation.
Recall Perona and Malik's discrete formulation of anisotropic diffusion (3.4), i.e.,
$$I_s^{n+1} = I_s^n + \frac{\lambda}{|\eta_s|} \sum_{p \in \eta_s} c(|\nabla I_{s,p}|)\,\nabla I_{s,p},$$
where $\nabla I_{s,p} = I_p - I_s^n$.

First we let $x = |\nabla I_{s,p}|$, and by comparing this with (3.11) we get
$$c(x)\,x = \psi(x),$$
and thus $\psi(x) = \rho'(x)$.

Recall from Section 3.1 that we discussed four possible functions $c(x)$, given below:
$$c(x) = e^{-\frac{x^2}{k^2}}, \qquad (3.16)$$
$$c(x) = \frac{1}{1 + \frac{x^2}{k^2}}, \qquad (3.17)$$
$$c(x) = \begin{cases} \left(1 - \dfrac{x^2}{k^2}\right)^2, & |x| \le k; \\ 0, & \text{otherwise}, \end{cases} \qquad (3.18)$$
$$c(x) = |\operatorname{erfc}(|x|)\cos(x)|. \qquad (3.19)$$
Now we discuss the rationale for why some of these functions work better than others, beginning with their effectiveness in anisotropic diffusion.

We begin with the function $c(x)$ given in (3.16). We first show that the corresponding $\rho(x)$ increases less rapidly than the quadratic error norm $x^2$. By letting $k^2 = 2\sigma^2$, we get $c(x) = e^{-\frac{x^2}{2\sigma^2}}$, and hence $\psi(x) = x\,c(x)$, so that
$$\rho'(x) = \psi(x) = x\,e^{-\frac{x^2}{2\sigma^2}}.$$
Notice that $\psi(x)$ is bounded. We now integrate $\rho'(x)$ (choosing the constant of integration so that $\rho(0) = 0$) and get
$$\rho(x) = \int_0^x t\,e^{-\frac{t^2}{2\sigma^2}}\,dt = \sigma^2 \left( 1 - e^{-\frac{x^2}{2\sigma^2}} \right).$$

Thus we list the functions associated with the first edge stopping function:
$$c(x) = e^{-\frac{x^2}{k^2}}, \qquad
\psi(x) = x\,e^{-\frac{x^2}{k^2}}, \qquad
\rho(x) = \frac{k^2}{2} \left( 1 - e^{-\frac{x^2}{k^2}} \right). \qquad (3.20)$$

Note that in order for the function $c(x)$ to work effectively for anisotropic diffusion, we need $\rho(x)$ to increase less rapidly than the quadratic error norm $x^2$. Moreover, for our $\psi(x) = x\,e^{-x^2/k^2}$, we can see that $\psi(x)$ is bounded for all $x$. We can see this from the diagrams below of the three functions in (3.20).
Figure 3.14: $c(x) = e^{-x^2/k^2}$.

Figure 3.15: $\psi(x) = x\,e^{-x^2/k^2}$.

Figure 3.16: $\rho(x) = \frac{k^2}{2}\left(1 - e^{-x^2/k^2}\right)$.

Thus we can say that this function $c(x)$ works effectively for our anisotropic diffusion. We now carry out the same procedure for the second function, (3.17). Since $k^2 = 2\sigma^2$, we get
$$c(x) = \frac{1}{1 + \frac{x^2}{2\sigma^2}}.$$
We compute the influence function
$$\psi(x) = x\,c(x) = \frac{x}{1 + \frac{x^2}{2\sigma^2}} = \frac{2x}{2 + \frac{x^2}{\sigma^2}}.$$

Now we integrate $\psi(x)$ with respect to $x$ and get
$$\rho(x) = \int \frac{2x}{2 + \frac{x^2}{\sigma^2}}\,dx
= \sigma^2 \int \frac{\frac{2x}{\sigma^2}}{2 + \frac{x^2}{\sigma^2}}\,dx
= \sigma^2 \log\left( 2 + \frac{x^2}{\sigma^2} \right).$$

Thus we list the functions associated with the second edge stopping function:
$$c(x) = \frac{1}{1 + \frac{x^2}{2\sigma^2}}, \qquad
\psi(x) = \frac{2x}{2 + \frac{x^2}{\sigma^2}}, \qquad
\rho(x) = \sigma^2 \log\left( 2 + \frac{x^2}{\sigma^2} \right). \qquad (3.21)$$
We have to ensure that $\rho(x)$ does not increase faster than the $x^2$ curve. We look at the graphs of the functions:

Figure 3.17: $c(x) = \frac{1}{1 + \frac{x^2}{2\sigma^2}}$.

Figure 3.18: $\psi(x) = \frac{2x}{2 + \frac{x^2}{\sigma^2}}$.

Figure 3.19: $\rho(x) = \sigma^2 \log\left(1 + \frac{x^2}{2\sigma^2}\right)$.
The third function we discuss is the Tukey Biweight, (3.18). This function was chosen from robust statistics for several reasons. Firstly, it has the property that $c(x) = 0$ whenever $|x| > k$; this immediately satisfies the requirement for a function used in anisotropic diffusion. Furthermore, its associated $\psi$, which is proportional to the influence function, is bounded.

Let $k^2 = 2\sigma^2$; the functions related to the Tukey Biweight are then
$$c(x) = \begin{cases} \left(1 - \dfrac{x^2}{2\sigma^2}\right)^2, & |x| \le \sqrt{2}\sigma; \\ 0, & \text{otherwise}, \end{cases}$$
$$\psi(x) = \begin{cases} x \left(1 - \dfrac{x^2}{2\sigma^2}\right)^2, & |x| \le \sqrt{2}\sigma; \\ 0, & \text{otherwise}, \end{cases}$$
$$\rho(x) = \begin{cases} \dfrac{x^2}{2} - \dfrac{x^4}{4\sigma^2} + \dfrac{x^6}{24\sigma^4}, & |x| \le \sqrt{2}\sigma; \\ \dfrac{1}{3}\sigma^2, & \text{otherwise}. \end{cases}$$
We check that the Tukey Biweight is in fact a good function to use for anisotropic diffusion, i.e., that $\rho(x)$ does not grow faster than the $x^2$ function. The following graphs show the Tukey Biweight functions.

Figure 3.20: $c(x)$.

Figure 3.21: $\psi(x)$.

Figure 3.22: $\rho(x)$.

Notice that $\psi(x)$ is bounded.
Finally, we discuss the function defined in (3.19). Here we need to ensure that this new function works well as an edge stopping function in anisotropic diffusion, i.e., that the $\rho(x)$ function does not grow faster than the $x^2$ function and that its derivative $\psi(x)$ is bounded. Recall that the influence function is proportional to $\psi(x)$ (cf. Definition 1.3.2).

The functions relating to $c(x) = |\operatorname{erfc}(|x|)\cos(x)|$ are given below:
$$c(x) = |\operatorname{erfc}(|x|)\cos(x)|, \qquad
\psi(x) = x\,|\operatorname{erfc}(|x|)\cos(x)|, \qquad
\rho(x) = \int_a^x s\,|\operatorname{erfc}(|s|)\cos(s)|\,ds,$$
where $a \in \mathbb{R}$.

We need to ensure that $\rho(x)$ does not grow faster than the $x^2$ curve. To do this we show that
$$\lim_{x \to \infty} \frac{\int_a^x s\,|\operatorname{erfc}(|s|)\cos(s)|\,ds}{x^2} = 0.$$

By applying L'Hopital's rule, we get
$$\lim_{x \to \infty} \frac{\int_a^x s\,|\operatorname{erfc}(|s|)\cos(s)|\,ds}{x^2}
= \lim_{x \to \infty} \frac{x\,|\operatorname{erfc}(|x|)\cos(x)|}{2x}
= \lim_{x \to \infty} \frac{|\operatorname{erfc}(|x|)\cos(x)|}{2}.$$

Since we know that $\lim_{x \to \infty} |\operatorname{erfc}(|x|)\cos(x)| = 0$, this shows that $\rho(x)$ does not grow faster than $x^2$.
Now we display the graphs of $c(x)$ and $\psi(x)$.

Figure 3.23: $c(x) = |\operatorname{erfc}(|x|)\cos(x)|$.

Figure 3.24: $\psi(x) = x\,|\operatorname{erfc}(|x|)\cos(x)|$.

Notice that $\psi(x)$ has small bumps due to the $\cos(x)$ factor; we would expect $\rho(x)$ to have similar bumps. We now show $c(x)$ on the restricted domain $\left[0, \frac{3\pi}{4}\right]$.

Figure 3.25: $c(x)$ in the restricted domain $\left[0, \frac{3\pi}{4}\right]$.

Furthermore, $\psi(x)$ is bounded. Thus we have now ensured that all four edge stopping functions are capable of diffusing an image using anisotropic diffusion without significantly crossing the edges.
Now we check which function is better with respect to whether a large value of $k$ affects the diffusion. Throughout this chapter we have tested the effectiveness of our chosen functions $c(x)$ for anisotropic diffusion with $k = 20$. Now we increase $k$ to 100 and then to 1000, keep the rest of the conditions unchanged, and see how the changes in $k$ affect the diffusion.

Below are the edge representations of the image using $k = 100$ for all four functions.
Figure 3.26: The edge representation after applying anisotropic diffusion with different $c(x)$ for $k = 100$.
Now we test our functions with $k = 1000$.

Figure 3.27: The edge representation after applying anisotropic diffusion with different $c(x)$ for $k = 1000$.

Notice that even the Tukey Biweight function starts to fail when $k$ is large enough, whereas the fourth function shows no change after diffusion. Only the fourth function works effectively whatever the value of $k$; hence the fourth function works, in some sense, more effectively than the other three functions.
Bibliography

[Black & Sapiro] Michael Black, Guillermo Sapiro, David H. Marimont & David Heeger, Robust Anisotropic Diffusion, IEEE Transactions on Image Processing, Vol 7, No 3, 421-432, Mar 1998.

[Epstein] C. L. Epstein and M. Gage, "The curve shortening flow", in Wave Motion: Theory, Modelling and Computation, Springer Verlag, New York, 1987.

[Gage1] Michael E. Gage, An Isoperimetric Inequality with Applications to Curve Shortening, Duke Mathematical Journal, Vol 50, No 4, 1225-1229, Dec 1983.

[Gage2] Michael E. Gage, Curve Shortening makes convex curves circular, Inventiones Mathematicae 76, 357-364, 1984.

[Hampel & Stahel] F. R. Hampel, E. M. Ronchetti, P. J. Rousseeuw & W. A. Stahel, Robust Statistics: The Approach Based on Influence Functions, Wiley, 1986.

[Kovesi] Peter Kovesi, MATLAB and Octave Functions for Computer Vision and Image Processing.

[Nasraoui] Olfa Nasraoui, A Brief Overview of Robust Statistics.

[Osserman] Robert Osserman, Bonnesen-Style Isoperimetric Inequalities, The American Mathematical Monthly, Vol 86, No 1, Jan 1979.

[OssermanR] Robert Osserman, Isoperimetric Inequality, Appendix 3 in A Survey of Minimal Surfaces, Dover, New York, 147-148, 1986.

[Sapiro & Tannenbaum] Guillermo Sapiro & Allen Tannenbaum, Area and Length Preserving Geometric Invariant Scale-Space, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol 17, No 1, 67-72, Jan 1995.

[Sapiro] Guillermo Sapiro, Geometric Partial Differential Equations and Image Analysis, Cambridge University Press, 2001.

[Strauss] Walter A. Strauss, Partial Differential Equations: An Introduction, John Wiley & Sons, 1992.
Appendix A
Appendix
Program for anisotropic diffusion, written in MATLAB.
% ANISODIFF - Anisotropic diffusion.
%
% Usage:
%   diff = anisodiff(im, niter, kappa, lambda, option)
%
% Arguments:
%   im     - input image
%   niter  - number of iterations
%   kappa  - conduction coefficient (20-100 ?)
%   lambda - max value of 0.25 for stability
%   option - selects the conduction function
%
% Returns:
%   diff   - diffused image
%
% kappa controls conduction as a function of gradient. If kappa is low,
% small intensity gradients are able to block conduction and hence
% diffusion across step edges. A large value reduces the influence of
% intensity gradients on conduction.
%
% lambda controls the speed of diffusion (you usually want it at a
% maximum of 0.25).
%
% References:
% P. Perona and J. Malik.
% Scale-space and edge detection using anisotropic diffusion.
% IEEE Transactions on Pattern Analysis and Machine Intelligence,
% 12(7):629-639, July 1990.
%
% Peter Kovesi
% School of Computer Science & Software Engineering
% The University of Western Australia
% pk @ csse uwa edu au
% http://www.csse.uwa.edu.au
%
% June 2000  - original version.
% March 2002 - corrected diffusion eqn No 2.

function diff = anisodiff(im, niter, kappa, lambda, option)

if ndims(im) == 3
    error('Anisodiff only operates on 2D grey-scale images');
end

im = double(im);
[rows, cols] = size(im);
diff = im;
for i = 1:niter
    diffl = zeros(rows+2, cols+2);
    diffl(2:rows+1, 2:cols+1) = diff;

    % North, South, East and West differences
    deltaN = diffl(1:rows,   2:cols+1) - diff;
    deltaS = diffl(3:rows+2, 2:cols+1) - diff;
    deltaE = diffl(2:rows+1, 3:cols+2) - diff;
    deltaW = diffl(2:rows+1, 1:cols)   - diff;

    % Conduction
    if option == 1
        cN = exp(-(deltaN/kappa).^2);
        cS = exp(-(deltaS/kappa).^2);
        cE = exp(-(deltaE/kappa).^2);
        cW = exp(-(deltaW/kappa).^2);
    elseif option == 2
        cN = 1./(1 + (deltaN/kappa).^2);
        cS = 1./(1 + (deltaS/kappa).^2);
        cE = 1./(1 + (deltaE/kappa).^2);
        cW = 1./(1 + (deltaW/kappa).^2);
    elseif option == 3
        % [The listing is truncated at this point in the available copy:
        %  option 3 begins with a test on abs(deltaN); the remaining
        %  branches and the update step are not shown.]
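        % ------------------------------------------------------------------
        % Hedged reconstruction, not the original listing: the source copy
        % is truncated above. We assume option 3 is the Tukey biweight
        % (with scale kappa) and option 4 is c(x) = |erfc(|x|)cos(x)| from
        % Section 3.2, applied to the raw differences so that kappa has no
        % effect, consistent with the observation made there.
        % ------------------------------------------------------------------
        cN = ((1 - (deltaN/kappa).^2).^2) .* (abs(deltaN) <= kappa);
        cS = ((1 - (deltaS/kappa).^2).^2) .* (abs(deltaS) <= kappa);
        cE = ((1 - (deltaE/kappa).^2).^2) .* (abs(deltaE) <= kappa);
        cW = ((1 - (deltaW/kappa).^2).^2) .* (abs(deltaW) <= kappa);
    elseif option == 4
        cN = abs(erfc(abs(deltaN)) .* cos(deltaN));
        cS = abs(erfc(abs(deltaS)) .* cos(deltaS));
        cE = abs(erfc(abs(deltaE)) .* cos(deltaE));
        cW = abs(erfc(abs(deltaW)) .* cos(deltaW));
    end

    % Explicit Perona-Malik update step
    diff = diff + lambda*(cN.*deltaN + cS.*deltaS + cE.*deltaE + cW.*deltaW);
end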