7.5 CONTINUOUS AMPLITUDE MEMORYLESS SOURCES
For a continuous-amplitude memoryless source with probability density function $Q(u)$, we consider possibly unbounded single-letter distortion measures $d(u, v)$ which satisfy the bounded variance condition

$$\int_{-\infty}^{\infty} Q(u)\, d^2(u, 0)\, du \le d_0^2 < \infty \tag{7.5.1}$$
7.5.1 Block Coding Theorems for Continuous-Amplitude Sources
Again referring to Fig. 7.3 for the basic block source coding system, let $\mathscr{B} = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_M\}$ be a set of $M$ representation sequences of $N$ user symbols each, which we call a block code of length $N$ and rate $R = (\ln M)/N$ nats per source symbol. For this code the average distortion is now

$$d(\mathscr{B}) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} Q_N(\mathbf{u})\, d(\mathbf{u}|\mathscr{B})\, d\mathbf{u} \tag{7.5.2}$$

where

$$d(\mathbf{u}|\mathscr{B}) = \min_{\mathbf{v} \in \mathscr{B}} d_N(\mathbf{u}, \mathbf{v}) \qquad \text{and} \qquad Q_N(\mathbf{u}) = \prod_{n=1}^{N} Q(u_n)$$
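As a concrete illustration, the average distortion $d(\mathscr{B})$ of (7.5.2) can be estimated by Monte Carlo for a randomly drawn code. The source, distortion measure, code distribution, and all numbers below are our own choices for the sketch, not taken from the text:

```python
import math
import random

random.seed(0)

def d_N(u, v):
    # Per-letter average of the single-letter squared-error distortion
    return sum((a - b) ** 2 for a, b in zip(u, v)) / len(u)

N = 8                                # block length (illustrative)
R = 0.7                              # rate in nats per source symbol
M = int(round(math.exp(R * N)))      # M = e^{NR} codewords

# A random code B: codewords drawn i.i.d. from a zero-mean unit Gaussian
code = [[random.gauss(0.0, 1.0) for _ in range(N)] for _ in range(M)]

# Estimate d(B) = E[ min_{v in B} d_N(u, v) ] over random source blocks u
trials = 50
avg = sum(min(d_N([random.gauss(0.0, 1.0) for _ in range(N)], v) for v in code)
          for _ in range(trials)) / trials
print(avg)
```

Since a single random codeword already gives expected distortion 2 for this pair of Gaussians, the minimum over the whole code necessarily does better.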
In proving coding theorems for block codes we essentially follow our earlier proofs for the discrete memoryless source presented in Sec. 7.2, the main difference being that integrals of probability density functions replace summations of probabilities. As before, we use an ensemble average coding argument by first introducing the conditional probability density function
$$P_N(\mathbf{v}|\mathbf{u}) = \prod_{n=1}^{N} P(v_n|u_n) \tag{7.5.3}$$
and the corresponding marginal probability density on $\mathscr{V}_N$

$$P_N(\mathbf{v}) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} Q_N(\mathbf{u}) P_N(\mathbf{v}|\mathbf{u})\, d\mathbf{u} = \prod_{n=1}^{N} P(v_n) \tag{7.5.4}$$

where

$$P(v) = \int_{-\infty}^{\infty} Q(u) P(v|u)\, du \tag{7.5.5}$$
Proceeding exactly as in Sec. 7.2 [Eqs. (7.2.8) through (7.2.11)], but with summations replaced by integrals, we obtain

$$d(\mathscr{B}) \le \int\!\!\int Q_N(\mathbf{u}) P_N(\mathbf{v}|\mathbf{u})\, d_N(\mathbf{u}, \mathbf{v})[1 - \Phi(\mathbf{u}, \mathbf{v}; \mathscr{B})]\, d\mathbf{u}\, d\mathbf{v} \;+\; \int\!\!\int Q_N(\mathbf{u}) P_N(\mathbf{v}|\mathbf{u})\, d(\mathbf{u}|\mathscr{B})\, \Phi(\mathbf{u}, \mathbf{v}; \mathscr{B})\, d\mathbf{u}\, d\mathbf{v} \tag{7.5.6}$$

where, as in (7.2.10),

$$\Phi(\mathbf{u}, \mathbf{v}; \mathscr{B}) = \begin{cases} 1 & \text{if } d_N(\mathbf{u}, \mathbf{v}) < d(\mathbf{u}|\mathscr{B}) \\ 0 & \text{otherwise} \end{cases}$$
Now, defining (as in 7.2.15)

$$D(P) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} Q(u) P(v|u)\, d(u, v)\, du\, dv \tag{7.5.7}$$
we see that since $1 - \Phi \le 1$, the first integral is easily bounded to yield

$$\int\!\!\int Q_N(\mathbf{u}) P_N(\mathbf{v}|\mathbf{u})\, d_N(\mathbf{u}, \mathbf{v})[1 - \Phi(\mathbf{u}, \mathbf{v}; \mathscr{B})]\, d\mathbf{u}\, d\mathbf{v} \le D(P) \tag{7.5.8}$$
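The single-letter average $D(P)$ of (7.5.7) is straightforward to estimate numerically. Here is a minimal sketch for one hypothetical choice of source and test channel (our choice, not the text's): a unit Gaussian source, an additive-Gaussian conditional density, and squared-error distortion, for which $D(P)$ equals the noise variance exactly:

```python
import random

random.seed(1)

# Hypothetical choice (not from the text): Q = N(0, 1), test channel
# v = u + z with z ~ N(0, s2), squared-error distortion d(u, v) = (u - v)^2.
s2 = 0.25
trials = 200_000
total = 0.0
for _ in range(trials):
    u = random.gauss(0.0, 1.0)
    v = u + random.gauss(0.0, s2 ** 0.5)
    total += (u - v) ** 2
D_P = total / trials   # Monte Carlo estimate of D(P) in (7.5.7)
print(D_P)             # close to s2 = 0.25, since E[(u - v)^2] = E[z^2] = s2
```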
To bound the second term we can no longer appeal to the argument that $d(\mathbf{u}|\mathscr{B})$ is bounded by $d_0$. Instead we use a simple form of the Hölder inequality (App. 3A)

$$\int\!\!\int Q_N(\mathbf{u}) P_N(\mathbf{v}|\mathbf{u})\, d(\mathbf{u}|\mathscr{B})\, \Phi(\mathbf{u}, \mathbf{v}; \mathscr{B})\, d\mathbf{u}\, d\mathbf{v} \le \left[\int\!\!\int Q_N(\mathbf{u}) P_N(\mathbf{v}|\mathbf{u})\, d^2(\mathbf{u}|\mathscr{B})\, d\mathbf{u}\, d\mathbf{v}\right]^{1/2} \left[\int\!\!\int Q_N(\mathbf{u}) P_N(\mathbf{v}|\mathbf{u})\, \Phi(\mathbf{u}, \mathbf{v}; \mathscr{B})\, d\mathbf{u}\, d\mathbf{v}\right]^{1/2} \tag{7.5.9}$$
where we noted that $\Phi^2 = \Phi$. Next we assume that $\mathbf{v}_1 \in \mathscr{B} = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_M\}$ is the all-zeros vector; that is, $\mathbf{v}_1 = \mathbf{0}$. Then

$$d(\mathbf{u}|\mathscr{B}) \le d_N(\mathbf{u}, \mathbf{v}_1) = d_N(\mathbf{u}, \mathbf{0})$$

and

$$\int\!\!\int Q_N(\mathbf{u}) P_N(\mathbf{v}|\mathbf{u})\, d^2(\mathbf{u}|\mathscr{B})\, d\mathbf{u}\, d\mathbf{v} \le \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} Q_N(\mathbf{u})\, d_N^2(\mathbf{u}, \mathbf{0})\, d\mathbf{u} = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} Q_N(\mathbf{u}) \left[\frac{1}{N} \sum_{n=1}^{N} d(u_n, 0)\right]^2 d\mathbf{u} \le d_0^2 \tag{7.5.10}$$
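The last step in (7.5.10) can be checked exactly in a simple instance of our own choosing (not from the text): a unit Gaussian source with squared-error distortion, where $d(u, 0) = u^2$, so the bounded variance condition gives $d_0^2 = E[u^4] = 3$, while the block quantity $E[d_N^2(\mathbf{u}, \mathbf{0})]$ works out in closed form to $1 + 2/N$:

```python
# For Q = N(0, 1) and squared-error distortion, d(u, 0) = u^2, so
# d0^2 = E[d^2(u, 0)] = E[u^4] = 3 (fourth moment of a standard Gaussian).
# With i.i.d. letters, E[d_N^2(u, 0)] = E[((1/N) sum u_n^2)^2]
#                                     = Var((1/N) sum u_n^2) + 1 = 2/N + 1.
d0_sq = 3.0
for N in (1, 2, 10, 100):
    lhs = 2.0 / N + 1.0      # exact value of E[d_N^2(u, 0)] for this source
    assert lhs <= d0_sq      # the bound asserted in (7.5.10)
    print(N, lhs)
```

Note the bound is tight at $N = 1$ and improves toward 1 as $N$ grows.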
where the last inequality follows from (7.5.1). Hence when $\mathbf{v}_1 = \mathbf{0}$, we have

$$d(\mathscr{B}) \le D(P) + d_0 \left[\int\!\!\int Q_N(\mathbf{u}) P_N(\mathbf{v}|\mathbf{u})\, \Phi(\mathbf{u}, \mathbf{v}; \mathscr{B})\, d\mathbf{u}\, d\mathbf{v}\right]^{1/2} \tag{7.5.11}$$
We now proceed to bound the ensemble average of $d(\mathscr{B})$. We consider an ensemble of codes in which $\mathbf{v}_1 = \mathbf{0}$ is fixed and the ensemble weighting for the remaining $M - 1$ codewords is according to a product measure corresponding to independent identically distributed components having common probability density $\{P(v)\colon -\infty < v < \infty\}$.

426 SOURCE CODING FOR DIGITAL COMMUNICATION

Now for any code $\mathscr{B} = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_M\}$, define

$$\mathscr{B}' = \{\mathbf{v}_2, \mathbf{v}_3, \ldots, \mathbf{v}_M\}$$

which is the code without codeword $\mathbf{v}_1 = \mathbf{0}$. Then clearly

$$d(\mathbf{u}|\mathscr{B}) \le d(\mathbf{u}|\mathscr{B}') \tag{7.5.12}$$

and

$$\Phi(\mathbf{u}, \mathbf{v}; \mathscr{B}) \le \Phi(\mathbf{u}, \mathbf{v}; \mathscr{B}') \tag{7.5.13}$$
Hence for any code $\mathscr{B}$, we have from (7.5.11) and (7.5.13)

$$d(\mathscr{B}) \le D(P) + d_0 \left[\int\!\!\int Q_N(\mathbf{u}) P_N(\mathbf{v}|\mathbf{u})\, \Phi(\mathbf{u}, \mathbf{v}; \mathscr{B}')\, d\mathbf{u}\, d\mathbf{v}\right]^{1/2} \tag{7.5.14}$$
Now averaging this over the code ensemble and using the Jensen inequality yields

$$\overline{d(\mathscr{B})} \le D(P) + d_0\, \overline{\left[\int\!\!\int Q_N(\mathbf{u}) P_N(\mathbf{v}|\mathbf{u})\, \Phi(\mathbf{u}, \mathbf{v}; \mathscr{B}')\, d\mathbf{u}\, d\mathbf{v}\right]^{1/2}} \le D(P) + d_0 \left[\int\!\!\int Q_N(\mathbf{u}) P_N(\mathbf{v}|\mathbf{u})\, \overline{\Phi(\mathbf{u}, \mathbf{v}; \mathscr{B}')}\, d\mathbf{u}\, d\mathbf{v}\right]^{1/2} \tag{7.5.15}$$
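The Jensen step in (7.5.15) uses only the concavity of the square root, $\overline{X^{1/2}} \le \bigl(\overline{X}\bigr)^{1/2}$. A quick numerical illustration with an arbitrary positive random variable of our own choosing (an exponential, not from the text):

```python
import math
import random

random.seed(2)

# Jensen's inequality for the concave square-root function:
# E[sqrt(X)] <= sqrt(E[X]) -- the step that moves the ensemble average
# inside the bracket in (7.5.15).
xs = [random.expovariate(1.0) for _ in range(100_000)]
mean_of_root = sum(math.sqrt(x) for x in xs) / len(xs)
root_of_mean = math.sqrt(sum(xs) / len(xs))
print(mean_of_root, root_of_mean)
```

The sample version of the inequality holds for every sample, not just in expectation, by the same concavity argument applied to the empirical distribution.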
The term inside the bracket can now be bounded by following the proof of Lemma 7.2.1 [(7.2.20) through (7.2.22)], replacing summations of probabilities with integrals of probability densities. This yields the bound

$$\int\!\!\int Q_N(\mathbf{u}) P_N(\mathbf{v}|\mathbf{u})\, \overline{\Phi(\mathbf{u}, \mathbf{v}; \mathscr{B}')}\, d\mathbf{u}\, d\mathbf{v} \le e^{-N E(R;\, \rho,\, P)} \tag{7.5.16}$$
where

$$E(R; \rho, P) = -\rho R + E_0(\rho, P)$$

$$E_0(\rho, P) = -\ln \int_{-\infty}^{\infty} \left[\int_{-\infty}^{\infty} Q(u)\, P(v|u)^{1/(1+\rho)}\, du\right]^{1+\rho} dv \qquad -1 < \rho \le 0 \tag{7.5.17}$$
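A numerical sketch of the exponent, assuming the Gallager-function form $E_0(\rho, P) = -\ln \int [\int Q(u) P(v|u)^{1/(1+\rho)}\, du]^{1+\rho}\, dv$ of (7.5.17) and a hypothetical Gaussian source/test-channel pair of our own choosing (not from the text). It checks that $E_0(0, P) = 0$ and that $E(R; \rho, P) = -\rho R + E_0(\rho, P)$ is positive for a rate above the mutual information:

```python
import math

# Hypothetical choice (not from the text): Q = N(0, 1), P(v|u) = N(u, s2).
s2 = 0.25
du = dv = 0.05
grid = [i * du for i in range(-160, 161)]   # covers [-8, 8]

def Q(u):
    return math.exp(-u * u / 2) / math.sqrt(2 * math.pi)

def P(v, u):
    return math.exp(-(v - u) ** 2 / (2 * s2)) / math.sqrt(2 * math.pi * s2)

def E0(rho):
    # Riemann-sum evaluation of the assumed Gallager-function form
    outer = 0.0
    for v in grid:
        inner = sum(Q(u) * P(v, u) ** (1 / (1 + rho)) for u in grid) * du
        outer += inner ** (1 + rho)
    return -math.log(outer * dv)

I_P = 0.5 * math.log(1 + 1 / s2)   # mutual information, about 0.805 nats
R = 1.0                            # a rate above I(P)
rho = -0.2
print(E0(0.0))                     # approximately 0, up to grid error
print(-rho * R + E0(rho))          # E(R; rho, P) > 0 since R > I(P)
```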
The properties of $E_0(\rho, P)$ are the same as those given in Lemma 7.2.2, where now $I(P)$ is

$$I(P) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} Q(u) P(v|u) \ln \frac{P(v|u)}{P(v)}\, du\, dv \tag{7.5.18}$$

the average mutual information. Then it follows from Lemma 7.2.3 that

$$\max_{-1 < \rho \le 0} E(R; \rho, P) > 0 \qquad \text{for } R > I(P) \tag{7.5.19}$$

Combining these extensions of earlier results into (7.5.15) yields

$$\overline{d(\mathscr{B})} \le D(P) + d_0\, e^{-N E(R;\, \rho,\, P)/2} \tag{7.5.20}$$

where

$$\max_{-1 < \rho \le 0} E(R; \rho, P) > 0 \qquad \text{for } R > I(P)$$
At this point we are still free to choose the conditional probability density $\{P(v|u)\colon -\infty < u, v < \infty\}$ to minimize the bound on $\overline{d(\mathscr{B})}$. Suppose the fidelity criterion to be satisfied by the block source coding system is specified as $D(P) \le D$. Then let

$$\mathscr{P}_D = \{P(v|u)\colon D(P) \le D\} \tag{7.5.21}$$

and define

$$E(R, D) = \sup_{P \in \mathscr{P}_D}\; \max_{-1 < \rho \le 0} E(R; \rho, P) \tag{7.5.22}$$
and the rate distortion function

$$R(D) = \inf_{P \in \mathscr{P}_D} I(P) \tag{7.5.23}$$
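A classical closed-form instance of (7.5.23), quoted here for orientation rather than derived in this section: for a memoryless Gaussian source of variance $\sigma^2$ with squared-error distortion, the infimum is achieved and $R(D) = \tfrac{1}{2} \ln(\sigma^2/D)$ nats for $0 < D \le \sigma^2$, with $R(D) = 0$ beyond $\sigma^2$:

```python
import math

# Gaussian source, squared-error distortion:
# R(D) = (1/2) ln(sigma2 / D) nats for 0 < D <= sigma2, else 0
def R_of_D(D, sigma2=1.0):
    return 0.5 * math.log(sigma2 / D) if D < sigma2 else 0.0

print(R_of_D(1.0))    # 0.0: distortion sigma2 is achievable at zero rate
print(R_of_D(0.25))   # ln 2, about 0.693 nats
```

As the formula shows, halving the allowed distortion always costs the same fixed increment of $\tfrac{1}{2}\ln 2$ nats per symbol.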
Applying these to (7.5.20) yields the coding theorem for continuous-amplitude memoryless sources.
Theorem 7.5.1: Source coding theorem For any block length $N$ and rate $R$, there exists a block code $\mathscr{B}$ with average distortion $d(\mathscr{B})$ satisfying

$$d(\mathscr{B}) \le D + d_0\, e^{-N E(R,\, D)/2} \tag{7.5.24}$$

where

$$E(R, D) > 0 \qquad \text{for } R > R(D)$$
PROOF See the proof of Theorem 7.2.1.
We defined $R(D)$ in (7.5.23) as the rate distortion function for a continuous-amplitude memoryless source, where the unbounded single-letter distortion measure satisfies a bounded variance condition. To justify this definition we need to prove a converse theorem. This is easily done using the same basic proof given earlier for the discrete memoryless sources (see Theorem 7.2.3).
Theorem 7.5.2: Converse source coding theorem For any source encoder-decoder pair, it is impossible to achieve average distortion less than or equal to $D$ whenever the rate $R$ satisfies $R < R(D)$.
The proof of the direct coding theorem given in this section certainly applies as well for discrete memoryless sources with unbounded distortion measures as long as the bounded variance condition is satisfied. Similarly, for discrete sources with a countably infinite alphabet we can establish coding theorems similar to Theorem 7.5.1. An example of such a source is one which emits a Poisson random variable and which has a magnitude distortion measure (see Probs. 7.25, 7.26, and 7.27).
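For that Poisson example, the bounded variance condition of the kind used throughout this section is easy to verify directly: with $u \sim \text{Poisson}(\lambda)$ and magnitude distortion $d(u, v) = |u - v|$, we have $d(u, 0) = u$, so $E[d^2(u, 0)] = E[u^2] = \lambda + \lambda^2 < \infty$. A short check (the value $\lambda = 2$ is our own illustrative choice):

```python
import math

# Truncated series for E[u^2] when u ~ Poisson(lam):
# sum over k of k^2 * e^{-lam} * lam^k / k!  ->  lam + lam^2
lam = 2.0
term = math.exp(-lam)          # P(u = 0)
d0_sq = 0.0
for k in range(1, 60):
    term *= lam / k            # P(u = k) from P(u = k - 1)
    d0_sq += k * k * term
print(d0_sq)                   # close to lam + lam^2 = 6.0
```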
Here we have shown that coding theorems can be obtained for continuous-amplitude sources using proofs that are essentially the same as those for discrete memoryless sources with bounded single-letter distortion measures. All of the earlier discussion concerning the relationship between channel and source coding also applies. In fact, the trellis coding theorem can also be extended in this way, as will be shown next.