Proakis J. (2002), Communication Systems Engineering - Solutions Manual (Episode 9)

or, since $N = 2^{\nu}$,
\[
D_{\text{total}} = \frac{x_{\max}^2}{3\cdot 4^{\nu}}\left(1 + 4p_b(4^{\nu}-1)\right)
= \frac{x_{\max}^2}{3N^2}\left(1 + 4p_b(N^2-1)\right)
\]

4)
\[
\mathrm{SNR} = \frac{E[X^2]}{D_{\text{total}}} = \frac{3N^2 E[X^2]}{x_{\max}^2\left(1 + 4p_b(N^2-1)\right)}
\]
If we let $\breve{X} = X/x_{\max}$, then $E[X^2]/x_{\max}^2 = E[\breve{X}^2]$. Hence,
\[
\mathrm{SNR} = \frac{3N^2 E[\breve{X}^2]}{1 + 4p_b(N^2-1)} = \frac{3\cdot 4^{\nu} E[\breve{X}^2]}{1 + 4p_b(4^{\nu}-1)}
\]

Problem 6.57
1) The $\mu$-law compander is
\[
g(x) = \frac{\log\!\left(1 + \mu\frac{|x|}{x_{\max}}\right)}{\log(1+\mu)}\,\mathrm{sgn}(x)
\]
Differentiating (using natural logarithms), we obtain
\[
g'(x) = \frac{1}{\ln(1+\mu)}\,\frac{\mu/x_{\max}}{1 + \mu\frac{|x|}{x_{\max}}}
\]
Since, for the $\mu$-law compander, $y_{\max} = g(x_{\max}) = 1$, we obtain
\[
D \approx \frac{y_{\max}^2}{3\times 4^{\nu}}\int_{-\infty}^{\infty}\frac{f_X(x)}{[g'(x)]^2}\,dx
= \frac{x_{\max}^2[\ln(1+\mu)]^2}{3\times 4^{\nu}\mu^2}\int_{-\infty}^{\infty}\left(1 + \mu^2\frac{|x|^2}{x_{\max}^2} + 2\mu\frac{|x|}{x_{\max}}\right)f_X(x)\,dx
\]
\[
= \frac{x_{\max}^2[\ln(1+\mu)]^2}{3\times 4^{\nu}\mu^2}\left(1 + \mu^2 E[\breve{X}^2] + 2\mu E[|\breve{X}|]\right)
= \frac{x_{\max}^2[\ln(1+\mu)]^2}{3N^2\mu^2}\left(1 + \mu^2 E[\breve{X}^2] + 2\mu E[|\breve{X}|]\right)
\]
where $N^2 = 4^{\nu}$ and $\breve{X} = X/x_{\max}$.

2)
\[
\mathrm{SQNR} = \frac{E[X^2]}{D}
= \frac{E[X^2]}{x_{\max}^2}\,\frac{3N^2\mu^2}{[\ln(1+\mu)]^2\left(\mu^2 E[\breve{X}^2] + 2\mu E[|\breve{X}|] + 1\right)}
= \frac{3\mu^2 N^2 E[\breve{X}^2]}{[\ln(1+\mu)]^2\left(\mu^2 E[\breve{X}^2] + 2\mu E[|\breve{X}|] + 1\right)}
\]

3) Since $\mathrm{SQNR}_{\text{unif}} = 3N^2 E[\breve{X}^2]$, we have
\[
\mathrm{SQNR}_{\mu\text{-law}} = \mathrm{SQNR}_{\text{unif}}\,\frac{\mu^2}{[\ln(1+\mu)]^2\left(\mu^2 E[\breve{X}^2] + 2\mu E[|\breve{X}|] + 1\right)}
= \mathrm{SQNR}_{\text{unif}}\,G(\mu,\breve{X})
\]
where we identify
\[
G(\mu,\breve{X}) = \frac{\mu^2}{[\ln(1+\mu)]^2\left(\mu^2 E[\breve{X}^2] + 2\mu E[|\breve{X}|] + 1\right)}
\]

4) The truncated Gaussian distribution has the PDF
\[
f_X(x) = \frac{K}{\sqrt{2\pi}\,\sigma_x}\,e^{-\frac{x^2}{2\sigma_x^2}},\qquad |x|\le 4\sigma_x
\]
and zero otherwise, where the constant $K$ is such that
\[
K\int_{-4\sigma_x}^{4\sigma_x}\frac{1}{\sqrt{2\pi}\,\sigma_x}\,e^{-\frac{x^2}{2\sigma_x^2}}\,dx = 1
\Longrightarrow K = \frac{1}{1 - 2Q(4)} = 1.0001
\]
Hence,
\[
E[|\breve{X}|] = \frac{K}{\sqrt{2\pi}\,\sigma_x}\int_{-4\sigma_x}^{4\sigma_x}\frac{|x|}{4\sigma_x}\,e^{-\frac{x^2}{2\sigma_x^2}}\,dx
= \frac{2K}{4\sqrt{2\pi}\,\sigma_x^2}\int_{0}^{4\sigma_x}x\,e^{-\frac{x^2}{2\sigma_x^2}}\,dx
= \frac{K}{2\sqrt{2\pi}\,\sigma_x^2}\left[-\sigma_x^2 e^{-\frac{x^2}{2\sigma_x^2}}\right]_{0}^{4\sigma_x}
= \frac{K}{2\sqrt{2\pi}}\left(1 - e^{-8}\right) = 0.1994
\]
In the next figure we plot $10\log_{10}\mathrm{SQNR}_{\text{unif}}$ and $10\log_{10}\mathrm{SQNR}_{\mu\text{-law}}$ versus $10\log_{10}E[\breve{X}^2]$ as the latter varies from $-100$ to $100$ dB. As observed, the $\mu$-law compressor is insensitive to the dynamic range of the input signal for $E[\breve{X}^2] > 1$.

[Figure: SQNR (dB) of the uniform and $\mu$-law quantizers versus $E[\breve{X}^2]$ (dB).]

Problem 6.58
The optimal compressor has the form
\[
g(x) = y_{\max}\left[\frac{2\int_{-\infty}^{x}[f_X(v)]^{1/3}\,dv}{\int_{-\infty}^{\infty}[f_X(v)]^{1/3}\,dv} - 1\right]
\]
where $y_{\max} = g(x_{\max}) = g(1)$.
\[
\int_{-\infty}^{\infty}[f_X(v)]^{1/3}\,dv = \int_{-1}^{1}[f_X(v)]^{1/3}\,dv
= \int_{-1}^{0}(v+1)^{1/3}\,dv + \int_{0}^{1}(-v+1)^{1/3}\,dv
= 2\int_{0}^{1}x^{1/3}\,dx = \frac{3}{2}
\]
If $x\le 0$, then
\[
\int_{-\infty}^{x}[f_X(v)]^{1/3}\,dv = \int_{-1}^{x}(v+1)^{1/3}\,dv
= \int_{0}^{x+1}z^{1/3}\,dz = \frac{3}{4}z^{4/3}\Big|_{0}^{x+1} = \frac{3}{4}(x+1)^{4/3}
\]
If $x>0$, then
\[
\int_{-\infty}^{x}[f_X(v)]^{1/3}\,dv = \int_{-1}^{0}(v+1)^{1/3}\,dv + \int_{0}^{x}(-v+1)^{1/3}\,dv
= \frac{3}{4} + \int_{1-x}^{1}z^{1/3}\,dz = \frac{3}{4} + \frac{3}{4}\left[1 - (1-x)^{4/3}\right]
\]
Hence,
\[
g(x) = \begin{cases}
g(1)\left[(x+1)^{4/3} - 1\right] & -1\le x<0\\[4pt]
g(1)\left[1 - (1-x)^{4/3}\right] & 0\le x\le 1
\end{cases}
\]
The next figure depicts $g(x)$ for $g(1)=1$.

[Figure: the compressor characteristic $g(x)$ for $g(1)=1$, $-1\le x\le 1$.]

Since the resulting distortion is (see Equation 6.6.17)
\[
D = \frac{1}{12\times 4^{\nu}}\left[\int_{-\infty}^{\infty}[f_X(x)]^{1/3}\,dx\right]^{3} = \frac{1}{12\times 4^{\nu}}\left(\frac{3}{2}\right)^{3}
\]
we have
\[
\mathrm{SQNR} = \frac{E[X^2]}{D} = \frac{32}{9}\,4^{\nu}E[X^2] = \frac{32}{9}\,4^{\nu}\cdot\frac{1}{6} = \frac{16}{27}\,4^{\nu}
\]
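As a numerical sanity check of this closed-form result, the short sketch below (an illustration only; it assumes NumPy is available and picks $\nu = 8$ arbitrarily, since no particular $\nu$ is specified) integrates $[f_X]^{1/3}$ and $E[X^2]$ for the triangular PDF on a fine grid and compares $E[X^2]/D$ with $(16/27)\,4^{\nu}$.

```python
import numpy as np

# Sketch: verify SQNR = (16/27) * 4**nu for the triangular PDF of Problem 6.58.
# nu = 8 is an arbitrary illustrative choice, not part of the problem statement.
nu = 8
dx = 1e-5
x = np.arange(-1.0, 1.0, dx) + dx / 2            # midpoints covering [-1, 1]
f = np.where(x < 0, x + 1.0, 1.0 - x)            # triangular PDF f_X(x)

I = np.sum(f ** (1.0 / 3.0)) * dx                # integral of f^(1/3): expect 3/2
Ex2 = np.sum(x**2 * f) * dx                      # E[X^2]: expect 1/6

D = I**3 / (12 * 4**nu)                          # distortion, Equation 6.6.17
print(I, Ex2)                                    # ~1.5 and ~0.1667
print(Ex2 / D, (16 / 27) * 4**nu)                # the two values should agree
```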
Problem 6.59
The sampling rate is $f_s = 44100$ Hz, meaning that we take 44100 samples per second. Each sample is quantized using 16 bits, so the total number of bits per second is $44100\times 16$. For a music piece of duration 50 min $= 3000$ sec, the resulting number of bits per channel (left and right) is
\[
44100\times 16\times 3000 = 2.1168\times 10^{9}
\]
and the overall number of bits is
\[
2.1168\times 10^{9}\times 2 = 4.2336\times 10^{9}
\]

Chapter 7

Problem 7.1
The amplitudes $A_m$ take the values
\[
A_m = (2m-1-M)\frac{d}{2},\qquad m = 1,2,\ldots,M
\]
Hence, the average energy is
\[
E_{\text{av}} = \frac{1}{M}\sum_{m=1}^{M}s_m^2 = \frac{d^2}{4M}E_g\sum_{m=1}^{M}(2m-1-M)^2
= \frac{d^2}{4M}E_g\sum_{m=1}^{M}\left[4m^2 + (M+1)^2 - 4m(M+1)\right]
\]
\[
= \frac{d^2}{4M}E_g\left[4\sum_{m=1}^{M}m^2 + M(M+1)^2 - 4(M+1)\sum_{m=1}^{M}m\right]
= \frac{d^2}{4M}E_g\left[4\,\frac{M(M+1)(2M+1)}{6} + M(M+1)^2 - 4(M+1)\frac{M(M+1)}{2}\right]
\]
\[
= \frac{M^2-1}{3}\,\frac{d^2}{4}\,E_g
\]

Problem 7.2
The correlation coefficient between the $m$th and the $n$th signal points is
\[
\gamma_{mn} = \frac{\mathbf{s}_m\cdot\mathbf{s}_n}{|\mathbf{s}_m||\mathbf{s}_n|}
\]
where $\mathbf{s}_m = (s_{m1}, s_{m2},\ldots,s_{mN})$ and $s_{mj} = \pm\sqrt{E_s/N}$. Two adjacent signal points differ in only one coordinate, say the $k$th, for which $s_{mk}$ and $s_{nk}$ have opposite signs. Hence,
\[
\mathbf{s}_m\cdot\mathbf{s}_n = \sum_{j=1}^{N}s_{mj}s_{nj} = \sum_{j\neq k}s_{mj}s_{nj} + s_{mk}s_{nk}
= (N-1)\frac{E_s}{N} - \frac{E_s}{N} = \frac{N-2}{N}E_s
\]
Furthermore, $|\mathbf{s}_m| = |\mathbf{s}_n| = (E_s)^{1/2}$, so that
\[
\gamma_{mn} = \frac{N-2}{N}
\]
The Euclidean distance between the two adjacent signal points is
\[
d = \sqrt{|\mathbf{s}_m-\mathbf{s}_n|^2} = \sqrt{\left(\pm 2\sqrt{E_s/N}\right)^2} = \sqrt{\frac{4E_s}{N}} = 2\sqrt{\frac{E_s}{N}}
\]

Problem 7.3
a) To show that the waveforms $\psi_n(t)$, $n = 1,2,3$, are orthogonal we have to prove that
\[
\int_{-\infty}^{\infty}\psi_m(t)\psi_n(t)\,dt = 0,\qquad m\neq n
\]
Clearly,
\[
c_{12} = \int_{-\infty}^{\infty}\psi_1(t)\psi_2(t)\,dt = \int_{0}^{4}\psi_1(t)\psi_2(t)\,dt
= \int_{0}^{2}\psi_1(t)\psi_2(t)\,dt + \int_{2}^{4}\psi_1(t)\psi_2(t)\,dt
= \frac{1}{4}\int_{0}^{2}dt - \frac{1}{4}\int_{2}^{4}dt = \frac{1}{4}\times 2 - \frac{1}{4}\times(4-2) = 0
\]
Similarly,
\[
c_{13} = \int_{-\infty}^{\infty}\psi_1(t)\psi_3(t)\,dt = \int_{0}^{4}\psi_1(t)\psi_3(t)\,dt
= \frac{1}{4}\int_{0}^{1}dt - \frac{1}{4}\int_{1}^{2}dt - \frac{1}{4}\int_{2}^{3}dt + \frac{1}{4}\int_{3}^{4}dt = 0
\]
and
\[
c_{23} = \int_{-\infty}^{\infty}\psi_2(t)\psi_3(t)\,dt = \int_{0}^{4}\psi_2(t)\psi_3(t)\,dt
= \frac{1}{4}\int_{0}^{1}dt - \frac{1}{4}\int_{1}^{2}dt + \frac{1}{4}\int_{2}^{3}dt - \frac{1}{4}\int_{3}^{4}dt = 0
\]
Thus, the signals $\psi_n(t)$ are orthogonal.

b) We first determine the weighting coefficients
\[
x_n = \int_{-\infty}^{\infty}x(t)\psi_n(t)\,dt,\qquad n = 1,2,3
\]
\[
x_1 = \int_{0}^{4}x(t)\psi_1(t)\,dt = -\frac{1}{2}\int_{0}^{1}dt + \frac{1}{2}\int_{1}^{2}dt - \frac{1}{2}\int_{2}^{3}dt + \frac{1}{2}\int_{3}^{4}dt = 0
\]
\[
x_2 = \int_{0}^{4}x(t)\psi_2(t)\,dt = \frac{1}{2}\int_{0}^{4}x(t)\,dt = 0
\]
\[
x_3 = \int_{0}^{4}x(t)\psi_3(t)\,dt = -\frac{1}{2}\int_{0}^{1}dt - \frac{1}{2}\int_{1}^{2}dt + \frac{1}{2}\int_{2}^{3}dt + \frac{1}{2}\int_{3}^{4}dt = 0
\]
As observed, $x(t)$ is orthogonal to the signal waveforms $\psi_n(t)$, $n = 1,2,3$, and thus it cannot be represented as a linear combination of these functions.
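The orthogonality relations and zero projections found above can also be checked numerically. The sketch below is illustrative only: the problem's waveforms are not reproduced in this manual, so the piecewise-constant values used here ($\psi_1 = 1/2$ on $[0,2)$ and $-1/2$ on $[2,4)$, $\psi_2 = 1/2$ on $[0,4)$, $\psi_3$ alternating $\pm 1/2$ on unit intervals, and $x(t) = -1, 1, 1, -1$ on successive unit intervals) are assumptions read off from the integrands in the solution.

```python
import numpy as np

# Piecewise-constant values on the unit intervals [0,1), [1,2), [2,3), [3,4).
# These values are inferred from the integrands above (assumptions, since the
# problem statement itself is not reproduced here).
psi1 = np.array([ 0.5,  0.5, -0.5, -0.5])
psi2 = np.array([ 0.5,  0.5,  0.5,  0.5])
psi3 = np.array([ 0.5, -0.5,  0.5, -0.5])
x    = np.array([-1.0,  1.0,  1.0, -1.0])

def inner(a, b):
    # For piecewise-constant signals the inner product is a sum over
    # unit-length intervals (dt = 1 on each interval).
    return float(np.sum(a * b))

print(inner(psi1, psi2), inner(psi1, psi3), inner(psi2, psi3))  # all 0
print(inner(x, psi1), inner(x, psi2), inner(x, psi3))           # all 0
```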
Problem 7.4
a) The expansion coefficients $\{c_n\}$ that minimize the mean square error satisfy
\[
c_n = \int_{-\infty}^{\infty}x(t)\psi_n(t)\,dt = \int_{0}^{4}\sin\frac{\pi t}{4}\,\psi_n(t)\,dt
\]
Hence,
\[
c_1 = \int_{0}^{4}\sin\frac{\pi t}{4}\,\psi_1(t)\,dt
= \frac{1}{2}\int_{0}^{2}\sin\frac{\pi t}{4}\,dt - \frac{1}{2}\int_{2}^{4}\sin\frac{\pi t}{4}\,dt
= -\frac{2}{\pi}\cos\frac{\pi t}{4}\Big|_{0}^{2} + \frac{2}{\pi}\cos\frac{\pi t}{4}\Big|_{2}^{4}
= -\frac{2}{\pi}(0-1) + \frac{2}{\pi}(-1-0) = 0
\]
Similarly,
\[
c_2 = \int_{0}^{4}\sin\frac{\pi t}{4}\,\psi_2(t)\,dt = \frac{1}{2}\int_{0}^{4}\sin\frac{\pi t}{4}\,dt
= -\frac{2}{\pi}\cos\frac{\pi t}{4}\Big|_{0}^{4} = -\frac{2}{\pi}(-1-1) = \frac{4}{\pi}
\]
and
\[
c_3 = \int_{0}^{4}\sin\frac{\pi t}{4}\,\psi_3(t)\,dt
= \frac{1}{2}\int_{0}^{1}\sin\frac{\pi t}{4}\,dt - \frac{1}{2}\int_{1}^{2}\sin\frac{\pi t}{4}\,dt
+ \frac{1}{2}\int_{2}^{3}\sin\frac{\pi t}{4}\,dt - \frac{1}{2}\int_{3}^{4}\sin\frac{\pi t}{4}\,dt = 0
\]
Note that $c_1$ and $c_3$ can be found by inspection, since $\sin\frac{\pi t}{4}$ is even with respect to the $t = 2$ axis while $\psi_1(t)$ and $\psi_3(t)$ are odd with respect to the same axis.

b) The residual mean square error $E_{\min}$ can be found from
\[
E_{\min} = \int_{-\infty}^{\infty}|x(t)|^2\,dt - \sum_{i=1}^{3}|c_i|^2
\]
Thus,
\[
E_{\min} = \int_{0}^{4}\left(\sin\frac{\pi t}{4}\right)^2 dt - \left(\frac{4}{\pi}\right)^2
= \frac{1}{2}\int_{0}^{4}\left(1-\cos\frac{\pi t}{2}\right)dt - \frac{16}{\pi^2}
= 2 - \frac{1}{\pi}\sin\frac{\pi t}{2}\Big|_{0}^{4} - \frac{16}{\pi^2} = 2 - \frac{16}{\pi^2}
\]

Problem 7.5
a) As an orthonormal set of basis functions we consider the set
\[
\psi_1(t) = \begin{cases}1 & 0\le t<1\\ 0 & \text{o.w.}\end{cases}\qquad
\psi_2(t) = \begin{cases}1 & 1\le t<2\\ 0 & \text{o.w.}\end{cases}\qquad
\psi_3(t) = \begin{cases}1 & 2\le t<3\\ 0 & \text{o.w.}\end{cases}\qquad
\psi_4(t) = \begin{cases}1 & 3\le t<4\\ 0 & \text{o.w.}\end{cases}
\]
In matrix notation, the four waveforms can be represented as
\[
\begin{pmatrix}s_1(t)\\ s_2(t)\\ s_3(t)\\ s_4(t)\end{pmatrix} =
\begin{pmatrix}2 & -1 & -1 & -1\\ -2 & 1 & 1 & 0\\ 1 & -1 & 1 & -1\\ 1 & -2 & -2 & 2\end{pmatrix}
\begin{pmatrix}\psi_1(t)\\ \psi_2(t)\\ \psi_3(t)\\ \psi_4(t)\end{pmatrix}
\]
Note that the rank of the transformation matrix is 4 and therefore the dimensionality of the waveforms is 4.

b) The representation vectors are
\[
\mathbf{s}_1 = \begin{pmatrix}2 & -1 & -1 & -1\end{pmatrix}\qquad
\mathbf{s}_2 = \begin{pmatrix}-2 & 1 & 1 & 0\end{pmatrix}\qquad
\mathbf{s}_3 = \begin{pmatrix}1 & -1 & 1 & -1\end{pmatrix}\qquad
\mathbf{s}_4 = \begin{pmatrix}1 & -2 & -2 & 2\end{pmatrix}
\]

c) The distance between the first and the second vector is
\[
d_{1,2} = \sqrt{|\mathbf{s}_1-\mathbf{s}_2|^2}
= \sqrt{\left|\begin{pmatrix}4 & -2 & -2 & -1\end{pmatrix}\right|^2} = \sqrt{25} = 5
\]
Similarly we find that
\[
d_{1,3} = \sqrt{|\mathbf{s}_1-\mathbf{s}_3|^2} = \sqrt{\left|\begin{pmatrix}1 & 0 & -2 & 0\end{pmatrix}\right|^2} = \sqrt{5}\qquad
d_{1,4} = \sqrt{|\mathbf{s}_1-\mathbf{s}_4|^2} = \sqrt{\left|\begin{pmatrix}1 & 1 & 1 & -3\end{pmatrix}\right|^2} = \sqrt{12}
\]
\[
d_{2,3} = \sqrt{|\mathbf{s}_2-\mathbf{s}_3|^2} = \sqrt{\left|\begin{pmatrix}-3 & 2 & 0 & 1\end{pmatrix}\right|^2} = \sqrt{14}\qquad
d_{2,4} = \sqrt{|\mathbf{s}_2-\mathbf{s}_4|^2} = \sqrt{\left|\begin{pmatrix}-3 & 3 & 3 & -2\end{pmatrix}\right|^2} = \sqrt{31}
\]
\[
d_{3,4} = \sqrt{|\mathbf{s}_3-\mathbf{s}_4|^2} = \sqrt{\left|\begin{pmatrix}0 & 1 & 3 & -3\end{pmatrix}\right|^2} = \sqrt{19}
\]
Thus, the minimum distance between any pair of vectors is $d_{\min} = \sqrt{5}$.

Problem 7.6
As a set of orthonormal functions we consider the waveforms
\[
\psi_1(t) = \begin{cases}1 & 0\le t<1\\ 0 & \text{o.w.}\end{cases}\qquad
\psi_2(t) = \begin{cases}1 & 1\le t<2\\ 0 & \text{o.w.}\end{cases}\qquad
\psi_3(t) = \begin{cases}1 & 2\le t<3\\ 0 & \text{o.w.}\end{cases}
\]
The vector representation of the signals is
\[
\mathbf{s}_1 = \begin{pmatrix}2 & 2 & 2\end{pmatrix}\qquad
\mathbf{s}_2 = \begin{pmatrix}2 & 0 & 0\end{pmatrix}\qquad
\mathbf{s}_3 = \begin{pmatrix}0 & -2 & -2\end{pmatrix}\qquad
\mathbf{s}_4 = \begin{pmatrix}2 & 2 & 0\end{pmatrix}
\]
Note that $s_3(t) = s_2(t) - s_1(t)$ and that the dimensionality of the waveforms is 3.

Problem 7.7
The energy of the signal waveform $s'_m(t)$ is
\[
E' = \int_{-\infty}^{\infty}\left|s'_m(t)\right|^2 dt
= \int_{-\infty}^{\infty}\left|s_m(t) - \frac{1}{M}\sum_{k=1}^{M}s_k(t)\right|^2 dt
\]
\[
= \int_{-\infty}^{\infty}s_m^2(t)\,dt + \frac{1}{M^2}\sum_{k=1}^{M}\sum_{l=1}^{M}\int_{-\infty}^{\infty}s_k(t)s_l(t)\,dt
- \frac{1}{M}\sum_{k=1}^{M}\int_{-\infty}^{\infty}s_m(t)s_k(t)\,dt - \frac{1}{M}\sum_{l=1}^{M}\int_{-\infty}^{\infty}s_m(t)s_l(t)\,dt
\]
\[
= E + \frac{1}{M^2}\sum_{k=1}^{M}\sum_{l=1}^{M}E\delta_{kl} - \frac{2}{M}E
= E + \frac{1}{M}E - \frac{2}{M}E = \left(\frac{M-1}{M}\right)E
\]
The correlation coefficient is given by
\[
\gamma_{mn} = \frac{\int_{-\infty}^{\infty}s'_m(t)s'_n(t)\,dt}{\left(\int_{-\infty}^{\infty}|s'_m(t)|^2dt\right)^{1/2}\left(\int_{-\infty}^{\infty}|s'_n(t)|^2dt\right)^{1/2}}
= \frac{1}{E'}\int_{-\infty}^{\infty}\left(s_m(t)-\frac{1}{M}\sum_{k=1}^{M}s_k(t)\right)\left(s_n(t)-\frac{1}{M}\sum_{l=1}^{M}s_l(t)\right)dt
\]
\[
= \frac{1}{E'}\left[\int_{-\infty}^{\infty}s_m(t)s_n(t)\,dt + \frac{1}{M^2}\sum_{k=1}^{M}\sum_{l=1}^{M}\int_{-\infty}^{\infty}s_k(t)s_l(t)\,dt
- \frac{1}{M}\sum_{k=1}^{M}\int_{-\infty}^{\infty}s_n(t)s_k(t)\,dt - \frac{1}{M}\sum_{l=1}^{M}\int_{-\infty}^{\infty}s_m(t)s_l(t)\,dt\right]
\]
\[
= \frac{\frac{1}{M^2}ME - \frac{1}{M}E - \frac{1}{M}E}{\frac{M-1}{M}E} = -\frac{1}{M-1}
\]

Problem 7.8
Assuming that the noise is white, so that $E[n(t)n(v)] = \sigma_n^2\,\delta(t-v)$, we obtain
\[
E[n_1 n_2] = E\left[\int_{0}^{T}s_1(t)n(t)\,dt\int_{0}^{T}s_2(v)n(v)\,dv\right]
= \int_{0}^{T}\int_{0}^{T}s_1(t)s_2(v)E[n(t)n(v)]\,dt\,dv
= \sigma_n^2\int_{0}^{T}s_1(t)s_2(t)\,dt = 0
\]
where the last equality follows from the orthogonality of the signal waveforms $s_1(t)$ and $s_2(t)$.
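The conclusion of Problem 7.8, that correlating white noise against two orthogonal waveforms yields uncorrelated outputs, can be illustrated with a discrete-time Monte Carlo sketch. White noise of spectral height $N_0/2$ is approximated by i.i.d. Gaussian samples of variance $N_0/(2\Delta t)$; the rectangular waveforms $s_1$, $s_2$ and all numerical values below are arbitrary choices for illustration, not taken from the problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete-time approximation of white noise with PSD N0/2:
# i.i.d. samples with variance N0 / (2 * dt).
N0, dt, T = 2.0, 0.01, 2.0
t = np.arange(0.0, T, dt)

# Two orthogonal rectangular waveforms, chosen purely for illustration.
s1 = np.where(t < 1.0, 1.0, 0.0)
s2 = np.where(t >= 1.0, 1.0, 0.0)

trials = 20000
noise = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), size=(trials, t.size))
n1 = noise @ s1 * dt        # n1 = integral of s1(t) n(t) dt, one value per trial
n2 = noise @ s2 * dt        # n2 = integral of s2(t) n(t) dt

print(np.mean(n1 * n2))                              # ~0: outputs are uncorrelated
print(np.var(n1), (N0 / 2) * np.sum(s1**2) * dt)     # each output has variance (N0/2)*energy of s1
```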
Problem 7.9
a) The received signal may be expressed as
\[
r(t) = \begin{cases} n(t) & \text{if } s_0(t) \text{ was transmitted}\\ A + n(t) & \text{if } s_1(t) \text{ was transmitted}\end{cases}
\]
Assuming that the crosscorrelator uses the unit-energy waveform $\psi(t) = 1/\sqrt{T}$, $0\le t\le T$, its sampled output is
\[
r = s_m + n,\qquad m = 0,1
\]
where $s_0 = 0$, $s_1 = A\sqrt{T}$, and the noise term $n$ is a zero-mean Gaussian random variable with variance
\[
\sigma_n^2 = E\left[\frac{1}{\sqrt{T}}\int_{0}^{T}n(t)\,dt\;\frac{1}{\sqrt{T}}\int_{0}^{T}n(\tau)\,d\tau\right]
= \frac{1}{T}\int_{0}^{T}\int_{0}^{T}E[n(t)n(\tau)]\,dt\,d\tau
= \frac{N_0}{2T}\int_{0}^{T}\int_{0}^{T}\delta(t-\tau)\,dt\,d\tau = \frac{N_0}{2}
\]
The probability density function of the sampled output is
\[
f(r|s_0) = \frac{1}{\sqrt{\pi N_0}}e^{-\frac{r^2}{N_0}}\qquad
f(r|s_1) = \frac{1}{\sqrt{\pi N_0}}e^{-\frac{(r-A\sqrt{T})^2}{N_0}}
\]
Since the signals are equally probable, the optimal detector decides in favor of $s_0$ if
\[
\mathrm{PM}(r,s_0) = f(r|s_0) > f(r|s_1) = \mathrm{PM}(r,s_1)
\]
otherwise it decides in favor of $s_1$. The decision rule may be expressed as
\[
\frac{\mathrm{PM}(r,s_0)}{\mathrm{PM}(r,s_1)} = e^{\frac{(r-A\sqrt{T})^2 - r^2}{N_0}} = e^{-\frac{(2r-A\sqrt{T})A\sqrt{T}}{N_0}}
\;\mathop{\gtrless}_{s_1}^{s_0}\; 1
\]
or equivalently
\[
r \;\mathop{\gtrless}_{s_0}^{s_1}\; \frac{1}{2}A\sqrt{T}
\]
The optimum threshold is $\frac{1}{2}A\sqrt{T}$.

b) The average probability of error is
\[
P(e) = \frac{1}{2}P(e|s_0) + \frac{1}{2}P(e|s_1)
= \frac{1}{2}\int_{\frac{1}{2}A\sqrt{T}}^{\infty}f(r|s_0)\,dr + \frac{1}{2}\int_{-\infty}^{\frac{1}{2}A\sqrt{T}}f(r|s_1)\,dr
\]
\[
= \frac{1}{2}\int_{\frac{1}{2}A\sqrt{T}}^{\infty}\frac{1}{\sqrt{\pi N_0}}e^{-\frac{r^2}{N_0}}\,dr
+ \frac{1}{2}\int_{-\infty}^{\frac{1}{2}A\sqrt{T}}\frac{1}{\sqrt{\pi N_0}}e^{-\frac{(r-A\sqrt{T})^2}{N_0}}\,dr
\]
\[
= \frac{1}{2}\int_{\frac{1}{2}\sqrt{\frac{2}{N_0}}A\sqrt{T}}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}\,dx
+ \frac{1}{2}\int_{-\infty}^{-\frac{1}{2}\sqrt{\frac{2}{N_0}}A\sqrt{T}}\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}\,dx
= Q\left(\frac{1}{2}\sqrt{\frac{2}{N_0}}A\sqrt{T}\right) = Q\left(\sqrt{\mathrm{SNR}}\right)
\]
where
\[
\mathrm{SNR} = \frac{\frac{1}{2}A^2T}{N_0}
\]
Thus, on-off signaling requires a factor of two more energy to achieve the same probability of error as antipodal signaling.

Problem 7.10
Since the rate of transmission is $R = 10^5$ bits/sec, the bit interval $T_b$ is $10^{-5}$ sec. The probability of error in a binary PAM system is
\[
P(e) = Q\left(\sqrt{\frac{2E_b}{N_0}}\right)
\]
where the bit energy is $E_b = A^2 T_b$. With $P(e) = P_2 = 10^{-6}$, we obtain
\[
\sqrt{\frac{2E_b}{N_0}} = 4.75 \Longrightarrow E_b = \frac{4.75^2\,N_0}{2} = 0.112813
\]
Thus,
\[
A^2 T_b = 0.112813 \Longrightarrow A = \sqrt{0.112813\times 10^{5}} = 106.21
\]

Problem 7.11
a) For a binary PAM system in which the two signals have unequal probability, the optimum detector is
\[
r \;\mathop{\gtrless}_{s_2}^{s_1}\; \frac{N_0}{4\sqrt{E_b}}\ln\frac{1-p}{p} = \eta
\]
The average probability of error is
\[
P(e) = P(e|s_1)P(s_1) + P(e|s_2)P(s_2) = p\,P(e|s_1) + (1-p)\,P(e|s_2)
= p\int_{-\infty}^{\eta}f(r|s_1)\,dr + (1-p)\int_{\eta}^{\infty}f(r|s_2)\,dr
\]
\[
= p\int_{-\infty}^{\eta}\frac{1}{\sqrt{\pi N_0}}e^{-\frac{(r-\sqrt{E_b})^2}{N_0}}\,dr
+ (1-p)\int_{\eta}^{\infty}\frac{1}{\sqrt{\pi N_0}}e^{-\frac{(r+\sqrt{E_b})^2}{N_0}}\,dr
= p\,\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\eta_1}e^{-\frac{x^2}{2}}\,dx
+ (1-p)\,\frac{1}{\sqrt{2\pi}}\int_{\eta_2}^{\infty}e^{-\frac{x^2}{2}}\,dx
\]
where
\[
\eta_1 = -\sqrt{\frac{2E_b}{N_0}} + \eta\sqrt{\frac{2}{N_0}}\qquad
\eta_2 = \sqrt{\frac{2E_b}{N_0}} + \eta\sqrt{\frac{2}{N_0}}
\]
Thus,
\[
P(e) = p\,Q\left(\sqrt{\frac{2E_b}{N_0}} - \eta\sqrt{\frac{2}{N_0}}\right)
+ (1-p)\,Q\left(\sqrt{\frac{2E_b}{N_0}} + \eta\sqrt{\frac{2}{N_0}}\right)
\]

b) If $p = 0.3$ and $\frac{E_b}{N_0} = 10$, then
\[
P(e) = 0.3\,Q[4.3774] + 0.7\,Q[4.5668]
= 0.3\times 6.01\times 10^{-6} + 0.7\times 2.48\times 10^{-6} = 3.539\times 10^{-6}
\]
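The numbers quoted in Problem 7.11(b) can be reproduced directly from the final expression. The sketch below uses only the Python standard library; setting $N_0 = 1$ is just a normalization choice, since only $E_b/N_0$ enters the result.

```python
import math

def Q(x: float) -> float:
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Problem 7.11(b): p = 0.3, Eb/N0 = 10 (N0 = 1 is only a normalization).
p, EbN0, N0 = 0.3, 10.0, 1.0
Eb = EbN0 * N0
eta = (N0 / (4.0 * math.sqrt(Eb))) * math.log((1.0 - p) / p)   # optimum threshold

a1 = math.sqrt(2.0 * EbN0) - eta * math.sqrt(2.0 / N0)         # ~4.3774
a2 = math.sqrt(2.0 * EbN0) + eta * math.sqrt(2.0 / N0)         # ~4.5668
Pe = p * Q(a1) + (1.0 - p) * Q(a2)
print(a1, a2, Pe)                                              # Pe ~ 3.54e-6
```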
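Similarly, the on-off error probability $Q(\sqrt{\mathrm{SNR}})$ from Problem 7.9 can be checked by simulation. The sketch below uses the scalar model $r = s_m + n$ with $n \sim \mathcal{N}(0, N_0/2)$ and the threshold $A\sqrt{T}/2$ derived above; the particular values of $A\sqrt{T}$ and $N_0$ are arbitrary illustrative choices, not part of the problem.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# Illustrative values (not from the problem statement): A*sqrt(T) and N0 chosen
# so that SNR = A^2 T / (2 N0) is moderate and errors are easy to count.
A_sqrtT, N0 = 2.0, 1.0
snr = A_sqrtT**2 / (2 * N0)

trials = 2_000_000
bits = rng.integers(0, 2, trials)                    # 0 -> s0, 1 -> s1, equally likely
r = bits * A_sqrtT + rng.normal(0.0, math.sqrt(N0 / 2), trials)
decisions = (r > A_sqrtT / 2).astype(int)            # optimum threshold A*sqrt(T)/2

p_err_sim = np.mean(decisions != bits)
p_err_theory = 0.5 * math.erfc(math.sqrt(snr) / math.sqrt(2))   # Q(sqrt(SNR))
print(p_err_sim, p_err_theory)                       # the two should agree closely
```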
