Optical Networks: A Practical Perspective - Part 79
MULTILAYER THIN-FILM FILTERS

The three-cavity filter is described by the sequence G(HL)^5 HLL (HL)^11 HLL (HL)^11 HLL (HL)^5 HG. Again, the values $n_G = 1.52$, $n_L = 1.46$, and $n_H = 2.3$ were used.

Random Variables and Processes

IN MANY PLACES in the book, we use random variables and random processes to model noise, polarization, and network traffic. Understanding the statistical nature of these parameters is essential in predicting the performance of communication systems.

H.1 Random Variables

A random variable $X$ is characterized by a probability distribution function $F_X(x) = P\{X \le x\}$. The derivative of $F_X(x)$ is the probability density function

$$f_X(x) = \frac{dF_X(x)}{dx}.$$

Note that $\int_{-\infty}^{\infty} f_X(x)\,dx = 1$.

In many cases, we will be interested in obtaining the expectation, or ensemble average, associated with this probability function. The expectation of a function $g(X)$ is defined as

$$E[g(X)] = \int_{-\infty}^{\infty} f_X(x) g(x)\,dx.$$

The mean of $X$ is defined to be

$$E[X] = \int_{-\infty}^{\infty} x f_X(x)\,dx,$$

and the mean square (second moment) of $X$ is

$$E[X^2] = \int_{-\infty}^{\infty} x^2 f_X(x)\,dx.$$

The variance of $X$ is defined as

$$\sigma^2 = E[X^2] - (E[X])^2.$$

In many cases, we are interested in determining the statistical properties of two or more random variables that are not independent of each other. The joint probability distribution function of two random variables $X$ and $Y$ is defined as

$$F_{X,Y}(x, y) = P\{X \le x, Y \le y\}.$$

Sometimes we are given some information about one of the random variables and must estimate the distribution of the other. The conditional distribution of $X$ given $Y$ is denoted as

$$F_{X|Y}(x|y) = P\{X \le x \mid Y \le y\}.$$
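The definitions above can be checked numerically by integrating a density on a fine grid. The sketch below uses an exponential density as an illustrative example (the density choice is mine, not from the text) and recovers the mean, second moment, and variance:

```python
import numpy as np

# Illustrative density: exponential, f_X(x) = lam * exp(-lam * x) for x >= 0
lam = 2.0
x = np.linspace(0.0, 50.0, 2_000_001)   # fine grid; the tail beyond 50 is negligible
fx = lam * np.exp(-lam * x)

mean = np.trapz(x * fx, x)               # E[X]     -> 1/lam   = 0.5
second_moment = np.trapz(x**2 * fx, x)   # E[X^2]   -> 2/lam^2 = 0.5
var = second_moment - mean**2            # sigma^2  -> 1/lam^2 = 0.25
```

The same three lines of integration work for any density that decays fast enough for the grid to capture it.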
An important relation between these distributions is given by Bayes' theorem:

$$F_{X|Y}(x|y) = \frac{F_{X,Y}(x, y)}{F_Y(y)}.$$

H.1.1 Gaussian Distribution

A random variable $X$ is said to follow a Gaussian distribution if its probability density function is

$$f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-(x-\mu)^2/2\sigma^2}, \quad -\infty < x < \infty.$$

Here, $\mu$ is the mean and $\sigma^2$ the variance of $X$. In order to compute bit error rates, we will need to compute the probability that a zero-mean, unit-variance Gaussian random variable exceeds $v$, which is given by the function

$$Q(v) = \int_v^{\infty} \frac{1}{\sqrt{2\pi}} e^{-x^2/2}\,dx.$$

This function can be numerically evaluated. For example, $Q(v) \approx 10^{-9}$ if $v = 6$, and $Q(v) \approx 10^{-15}$ if $v = 8$.

Also, if $X$ and $Y$ are jointly distributed zero-mean Gaussian random variables, then it can be proved that

$$E[X^2 Y^2] = E[X^2]E[Y^2] + 2(E[XY])^2. \quad (H.1)$$

H.1.2 Maxwell Distribution

The Maxwellian probability density function is useful to calculate penalties due to polarization-mode dispersion. A random variable $X$ is said to follow a Maxwellian distribution if its probability density function is

$$f_X(x) = \sqrt{\frac{2}{\pi}}\,\frac{x^2}{\alpha^3}\, e^{-x^2/2\alpha^2}, \quad x > 0,$$

where $\alpha$ is a parameter associated with the distribution. The mean and mean-square value of $X$ can be computed as

$$E[X] = 2\alpha\sqrt{\frac{2}{\pi}} \quad \text{and} \quad E[X^2] = 3\alpha^2 = \frac{3\pi}{8}(E[X])^2.$$

Therefore, the variance is

$$\sigma^2 = E[X^2] - (E[X])^2 = \alpha^2\left(3 - \frac{8}{\pi}\right).$$

It can also be shown that $P(X > 3E[X]) \approx 4 \times 10^{-5}$.

H.1.3 Poisson Distribution

A discrete random variable $X$ takes on values from a discrete but possibly infinite set $S = \{x_1, x_2, x_3, \ldots\}$. It is characterized by a probability mass function $P(x)$, which is the probability that $X$ takes on a value $x$. The expectation of a function $g(X)$ is defined as

$$E[g(X)] = \sum_{i:\, x_i \in S} g(x_i) P(x_i).$$

$X$ is a Poisson random variable if

$$P(i) = e^{-r}\,\frac{r^i}{i!}, \quad i = 0, 1, 2, \ldots,$$

where $r$ is a parameter associated with the distribution. It is easily verified that $E[X] = r$ and $\sigma_X^2 = r$.

H.2 Random Processes

Random processes are useful to model time-varying stochastic events. A random process $X(t)$ is simply a sequence of random variables $X(t_1), X(t_2), \ldots$, one for each instant of time.
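The $Q$ function is not elementary, but it can be expressed through the complementary error function as $Q(v) = \tfrac{1}{2}\,\mathrm{erfc}(v/\sqrt{2})$, which is available in the Python standard library. A minimal sketch verifying the values quoted above:

```python
import math

def q_function(v: float) -> float:
    """Gaussian tail probability Q(v) = P(X >= v) for zero-mean, unit-variance X."""
    return 0.5 * math.erfc(v / math.sqrt(2.0))

# Values quoted in the text are order-of-magnitude approximations:
print(q_function(6.0))   # ~9.87e-10, i.e. about 1e-9
print(q_function(8.0))   # ~6.2e-16, i.e. about 1e-15
```

This `erfc`-based form is numerically preferable to `1 - Phi(v)`, which loses all precision once $Q(v)$ drops below machine epsilon.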
The first-order probability distribution function is given by $F(x, t) = P\{X(t) \le x\}$, and the first-order density function by

$$f(x, t) = \frac{\partial F(x, t)}{\partial x}.$$

The second-order distribution function is the joint distribution function

$$F(x_1, x_2, t_1, t_2) = P\{X(t_1) \le x_1, X(t_2) \le x_2\},$$

and the corresponding second-order density function is defined as

$$f(x_1, x_2, t_1, t_2) = \frac{\partial^2 F(x_1, x_2, t_1, t_2)}{\partial x_1 \partial x_2}.$$

The mean of the process is

$$\mu_X(t) = E[X(t)] = \int_{-\infty}^{\infty} x f(x, t)\,dx.$$

The autocorrelation of the process is

$$R_X(t_1, t_2) = E[X(t_1)X(t_2)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_1 x_2 f(x_1, x_2, t_1, t_2)\,dx_1\,dx_2.$$

The autocovariance of the process is defined as

$$L_X(t_1, t_2) = R_X(t_1, t_2) - E[X(t_1)]E[X(t_2)].$$

The random process is wide-sense stationary if it has a constant mean $E[X(t)] = \mu$, and the autocorrelation (and autocovariance) depends only on $\tau = t_1 - t_2$, that is, $R_X(\tau) = E[X(t)X(t+\tau)]$ and $L_X(\tau) = R_X(\tau) - \mu^2$. For a wide-sense stationary random process, the power spectral density is the Fourier transform of the autocovariance and is given by

$$S_X(f) = \int_{-\infty}^{\infty} L_X(\tau) e^{-i2\pi f\tau}\,d\tau.$$

Note that the variance of the random process is given by

$$\sigma_X^2 = L_X(0) = \int_{-\infty}^{\infty} S_X(f)\,df.$$

In many cases, we will represent noise introduced in the system as a stationary random process. In this case, the spectral density is useful to represent the spectral distribution of the noise. For example, in a receiver, the noise $X(t)$ and signal are sent through a low-pass filter with impulse response $h(t)$. The transfer function of the filter, $H(f)$, is the Fourier transform of its impulse response. In this case, the spectral density of the output noise process $Y(t)$ can be expressed as

$$S_Y(f) = S_X(f)|H(f)|^2.$$

Suppose the filter is an ideal low-pass filter with bandwidth $B_e$; that is, $H(f) = 1$ for $-B_e \le f \le B_e$, and $H(f) = 0$ otherwise. The variance of the noise process at its output is simply

$$\sigma_Y^2 = L_Y(0) = \int_{-B_e}^{B_e} S_X(f)\,df.$$

H.2.1 Poisson Random Process

Poisson random processes are used to model the arrival of photons in an optical communication system.
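The relation $\sigma_Y^2 = \int_{-B_e}^{B_e} S_X(f)\,df$ can be checked numerically: pass discrete white noise through an ideal (FFT brick-wall) low-pass filter and compare the output variance with the flat-spectrum prediction. The sampling rate is normalized to 1 and the bandwidth is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1 << 18          # number of samples
sigma2 = 1.0         # input variance; flat PSD of height sigma2 over f in [-0.5, 0.5)
be = 0.1             # filter bandwidth B_e (normalized units, fs = 1)

x = rng.normal(0.0, np.sqrt(sigma2), n)
X = np.fft.fft(x)
f = np.fft.fftfreq(n)            # bin frequencies in [-0.5, 0.5)
X[np.abs(f) > be] = 0.0          # ideal low-pass: H(f) = 1 for |f| <= B_e, else 0
y = np.fft.ifft(X).real

predicted = sigma2 * 2 * be      # integral of the flat S_X(f) over [-B_e, B_e]
print(y.var(), predicted)        # the two agree to within sampling error (~0.2)
```

Halving `be` halves the output variance, exactly as the integral predicts.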
They are also used widely to model the arrival of traffic in a communication network. The model is accurate primarily for voice calls, but it is used for other applications as well, without much real justification. A Poisson process $X(t)$ is characterized by a rate parameter $\lambda$. For any two time instants $t_1$ and $t_2 > t_1$, $X(t_2) - X(t_1)$ is the number of arrivals during the time interval $(t_1, t_2]$. The number of arrivals during this interval follows a Poisson distribution; that is,

$$P(X(t_2) - X(t_1) = n) = e^{-\lambda(t_2 - t_1)}\,\frac{(\lambda(t_2 - t_1))^n}{n!},$$

where $n$ is a nonnegative integer. Therefore, the mean number of arrivals during this time interval is

$$E[X(t_2) - X(t_1)] = \lambda(t_2 - t_1).$$

A Poisson process has many important properties that make it easier to analyze systems with Poisson traffic than other forms of traffic. See [BG92] for a good summary.

H.2.2 Gaussian Random Process

In many cases, we model noise as a wide-sense stationary Gaussian random process $X(t)$, for which, at any two instants of time $t_1 \ne t_2$, the random variables $X(t_1)$ and $X(t_2)$ are jointly distributed zero-mean Gaussian random variables. For such a process, we can use (H.1) and write

$$E[X^2(t)X^2(t+\tau)] = (E[X^2(t)])^2 + 2(E[X(t)X(t+\tau)])^2,$$

that is,

$$E[X^2(t)X^2(t+\tau)] = R_X^2(0) + 2R_X^2(\tau).$$

Further Reading

There are several good books on probability and random processes. See, for example, [Pap91, Gal99].

References

[BG92] D. Bertsekas and R. G. Gallager. Data Networks. Prentice Hall, Englewood Cliffs, NJ, 1992.

[Gal99] R. G. Gallager. Discrete Stochastic Processes. Kluwer, Boston, 1999.

[Pap91] A. Papoulis. Probability, Random Variables, and Stochastic Processes, 3rd edition. McGraw-Hill, New York, 1991.

Receiver Noise Statistics

WE START OUT BY DERIVING an expression for the statistics of the photocurrent in the pin receiver, along the lines of [BL90, RH90]. It is useful to think of the photodetection process in the following way.
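A Poisson process is easy to simulate because its interarrival times are independent exponential random variables with mean $1/\lambda$. The sketch below generates arrivals this way and checks that the mean count over an interval matches $\lambda(t_2 - t_1)$; the rate and interval length are arbitrary illustrative values:

```python
import random

random.seed(1)
lam = 5.0        # arrival rate lambda (arrivals per unit time)
t_len = 2.0      # length of the observation interval (t2 - t1)
trials = 20_000

counts = []
for _ in range(trials):
    t, n = 0.0, 0
    while True:
        t += random.expovariate(lam)   # exponential interarrival time, mean 1/lam
        if t > t_len:
            break
        n += 1
    counts.append(n)

mean_count = sum(counts) / trials
print(mean_count)   # close to lam * t_len = 10
```

The same interarrival construction underlies event-driven simulation of Poisson traffic in network models.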
Each time a photon hits the receiver, the receiver generates a small current pulse. Let $t_k$ denote the arrival times of photons at the receiver. Then the photocurrent generated can be expressed as

$$I(t) = \sum_{k=-\infty}^{\infty} e h(t - t_k), \quad (I.1)$$

where $e$ is the electronic charge and $e h(t - t_k)$ denotes the current impulse due to a photon arriving at time $t_k$. Note that since $e h(t - t_k)$ is the current due to a single electron, we must have

$$\int_{-\infty}^{\infty} e h(t - t_k)\,dt = e.$$

The arrival of photons may be described by a Poisson process, whose rate is given by $P(t)/hf_c$. Here, $P(t)$ is the instantaneous optical power, and $hf_c$ is the photon energy. The rate of generation of electrons may then also be considered to be a Poisson process, with rate

$$\lambda(t) = \frac{\mathcal{R}}{e}\,P(t),$$

where $\mathcal{R} = \eta e/hf_c$ is the responsivity of the photodetector, $\eta$ being the quantum efficiency.

To evaluate (I.1), let us break up the time axis into small intervals of length $\delta t$, with the $k$th interval being $[(k - 1/2)\delta t, (k + 1/2)\delta t)$. Let $N_k$ denote the number of electrons generated during the $k$th interval. Using these notations, we can rewrite (I.1) as

$$I(t) \approx \sum_{k=-\infty}^{\infty} e N_k h(t - k\delta t).$$

Note that since the intervals are nonoverlapping, the $N_k$ are independent Poisson random variables, with rate $\lambda(k\delta t)\delta t$.

We will first compute the mean value and autocorrelation functions of the photocurrent for a given optical power $P(\cdot)$. The mean value of the photocurrent is

$$E[I(t)|P(\cdot)] = \sum_{k=-\infty}^{\infty} e E[N_k] h(t - k\delta t) = \sum_{k=-\infty}^{\infty} e \lambda(k\delta t)\delta t\, h(t - k\delta t).$$

In the limit when $\delta t \to 0$, this can be rewritten as

$$E[I(t)|P(\cdot)] = \int_{-\infty}^{\infty} e \lambda(\tau) h(t - \tau)\,d\tau = \mathcal{R} \int_{-\infty}^{\infty} P(\tau) h(t - \tau)\,d\tau.$$

Likewise, the autocorrelation of the photocurrent can be written as

$$E[I(t_1)I(t_2)|P(\cdot)] = \int_{-\infty}^{\infty} e^2 \lambda(\tau) h(t_1 - \tau) h(t_2 - \tau)\,d\tau + E[I(t_1)|P(\cdot)]\,E[I(t_2)|P(\cdot)]$$
$$= e\mathcal{R} \int_{-\infty}^{\infty} P(\tau) h(t_1 - \tau) h(t_2 - \tau)\,d\tau + \mathcal{R}^2 \int_{-\infty}^{\infty} P(\tau) h(t_1 - \tau)\,d\tau \int_{-\infty}^{\infty} P(\tau) h(t_2 - \tau)\,d\tau.$$

An ideal photodetector generates pure current impulses for each received photon.
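To put numbers on the rate $\lambda(t) = (\mathcal{R}/e)P(t)$: for a microwatt of received power and plausible detector parameters, electrons are generated at a rate of several trillion per second. The wavelength, quantum efficiency, and power below are illustrative choices, not values from the text:

```python
# Physical constants
e = 1.602176634e-19    # electronic charge (C)
h = 6.62607015e-34     # Planck's constant (J s)
c = 2.99792458e8       # speed of light (m/s)

# Illustrative parameters (assumptions, not from the text)
wavelength = 1.55e-6   # m, a common fiber-optic wavelength
eta = 0.8              # quantum efficiency
power = 1e-6           # received optical power P (W)

fc = c / wavelength                   # carrier frequency f_c
responsivity = eta * e / (h * fc)     # R = eta * e / (h * f_c), in A/W
rate = responsivity * power / e       # electron generation rate lambda = R P / e

print(responsivity)   # ~1.0 A/W at 1.55 um with eta = 0.8
print(rate)           # ~6.2e12 electrons per second
```

The huge rate is why the discrete shot-noise process is so well approximated by a continuous noise model at receiver time scales.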
For such a detector, $h(t) = \delta(t)$, where $\delta(t)$ is the impulse function with the properties that $\delta(t) = 0$ for $t \ne 0$ and $\int_{-\infty}^{\infty} \delta(t)\,dt = 1$. For this case, the mean photocurrent becomes

$$E[I(t)|P(\cdot)] = \mathcal{R} P(t), \quad (I.2)$$

and its autocorrelation is

$$E[I(t_1)I(t_2)|P(\cdot)] = e\mathcal{R} P(t_1)\delta(t_2 - t_1) + \mathcal{R}^2 P(t_1)P(t_2).$$

Taking the expectation over the power $P(\cdot)$ as well, the autocovariance of $I(t)$ is then given as

$$L_I(t_1, t_2) = e\mathcal{R}\, E[P(t_1)]\,\delta(t_2 - t_1) + \mathcal{R}^2 L_P(t_1, t_2), \quad (I.3)$$

where $L_P$ denotes the autocovariance of $P(t)$.

I.1 Shot Noise

First let us consider the simple case when there is a constant power $P$ incident on the receiver. For this case, $E[P(\tau)] = P$ and $L_P(s) = 0$, and (I.2) and (I.3) can be written as

$$E[I(t)] = \mathcal{R} P \quad \text{and} \quad L_I(s) = e\mathcal{R} P\,\delta(s),$$

where $s = t_2 - t_1$. The power spectral density of the photocurrent is the Fourier transform of the autocovariance and is given by

$$S_I(f) = \int_{-\infty}^{\infty} L_I(s) e^{-i2\pi f s}\,ds = e\mathcal{R} P.$$

Thus the shot noise current can be thought of as being a white noise process with a flat spectral density as given here. Within a receiver bandwidth of $B_e$, the shot noise power is given by

$$\sigma_s^2 = \int_{-B_e}^{B_e} S_I(f)\,df = 2e\mathcal{R} P B_e.$$

Therefore, the photocurrent can be written as

$$I = \bar{I} + i_s,$$

where $\bar{I} = \mathcal{R} P$ and $i_s$ is the shot noise current with zero mean and variance $\sigma_s^2 = 2e\mathcal{R} P B_e$.
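A quick numerical instance of the shot-noise formula $\sigma_s^2 = 2e\mathcal{R} P B_e$; the responsivity, power, and bandwidth below are illustrative assumptions, not values from the text:

```python
e = 1.602176634e-19   # electronic charge (C)
R = 1.0               # responsivity (A/W), illustrative
P = 1e-6              # received optical power (W), illustrative
Be = 1e9              # receiver bandwidth (Hz), illustrative

mean_current = R * P              # I_bar = R P
shot_var = 2 * e * R * P * Be     # sigma_s^2 = 2 e R P Be
shot_rms = shot_var ** 0.5

print(mean_current)   # 1e-6 A
print(shot_rms)       # ~1.8e-8 A, i.e. roughly 2% of the mean current
```

Because $\sigma_s^2$ grows only linearly in $P$ while the signal power grows as $P^2$, the shot-noise-limited signal-to-noise ratio improves with received power.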
