Some Useful Probability Relationships

Excerpt from the textbook *Principles of Communications: Systems, Modulation, and Noise*, 7th ed., by Ziemer and Tranter (pages 265–269).

Since it is true that $A \cup \bar{A} = S$ and that $A$ and $\bar{A}$ are mutually exclusive, it follows by Axioms 2 and 3 that $P(A) + P(\bar{A}) = P(S) = 1$, or

$$P(\bar{A}) = 1 - P(A) \quad (6.3)$$

A generalization of Axiom 3 to events that are not mutually exclusive is obtained by noting that $A \cup B = A \cup (B \cap \bar{A})$, where $A$ and $B \cap \bar{A}$ are disjoint (this is most easily seen by using a Venn diagram). Therefore, Axiom 3 can be applied to give

$$P(A \cup B) = P(A) + P(B \cap \bar{A}) \quad (6.4)$$

Similarly, we note from a Venn diagram that the events $A \cap B$ and $B \cap \bar{A}$ are disjoint and that $(A \cap B) \cup (B \cap \bar{A}) = B$, so that

$$P(A \cap B) + P(B \cap \bar{A}) = P(B) \quad (6.5)$$

Solving for $P(B \cap \bar{A})$ from (6.5) and substituting into (6.4) yields the following for $P(A \cup B)$:

$$P(A \cup B) = P(A) + P(B) - P(A \cap B) \quad (6.6)$$

This is the desired generalization of Axiom 3.
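As a quick numerical check (not part of the text), Equation (6.6) can be verified by enumerating an equally likely sample space; the die-roll events below are illustrative choices, not from the book:

```python
from fractions import Fraction

# Equally likely sample space: one roll of a fair die.
# The events A and B are illustrative choices, not from the text.
S = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}   # the roll is even
B = {4, 5, 6}   # the roll exceeds 3

def P(event):
    # Equal-likelihood probability: |event| / |S|.
    return Fraction(len(event), len(S))

# Equation (6.6): P(A βˆͺ B) = P(A) + P(B) - P(A ∩ B).
lhs = P(A | B)
rhs = P(A) + P(B) - P(A & B)
print(lhs, rhs)  # 2/3 2/3
```

Note that $A$ and $B$ here are not mutually exclusive ($A \cap B = \{4, 6\}$), which is exactly the case (6.6) handles and Axiom 3 alone does not.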

Now consider two events $A$ and $B$, with individual probabilities $P(A) > 0$ and $P(B) > 0$, respectively, and joint event probability $P(A \cap B)$. We define the *conditional probability* of event $A$ given that event $B$ occurred as

$$P(A|B) = \frac{P(A \cap B)}{P(B)} \quad (6.7)$$

Β²This can be generalized to $P(A \cup B \cup C) = P(A) + P(B) + P(C)$ for $A$, $B$, and $C$ mutually exclusive by considering $B_1 = B \cup C$ to be a composite event in Axiom 3 and applying Axiom 3 twice: i.e., $P(A \cup B_1) = P(A) + P(B_1) = P(A) + P(B) + P(C)$. Clearly, in this way we can generalize this result to any finite number of mutually exclusive events.

Similarly, the conditional probability of event $B$ given that event $A$ has occurred is defined as

$$P(B|A) = \frac{P(A \cap B)}{P(A)} \quad (6.8)$$

Putting Equations (6.7) and (6.8) together, we obtain

$$P(A|B)P(B) = P(B|A)P(A) \quad (6.9)$$

or

$$P(B|A) = \frac{P(B)P(A|B)}{P(A)} \quad (6.10)$$

This is a special case of *Bayes' rule*.

Finally, suppose that the occurrence or nonoccurrence of $B$ in no way influences the occurrence or nonoccurrence of $A$. If this is true, $A$ and $B$ are said to be *statistically independent*. Thus, if we are given $B$, this tells us nothing about $A$, and therefore $P(A|B) = P(A)$.

Similarly, $P(B|A) = P(B)$. From Equation (6.7) or (6.8) it follows that, for such events,

$$P(A \cap B) = P(A)P(B) \quad (6.11)$$

Equation (6.11) will be taken as the definition of statistically independent events.

EXAMPLE 6.3

Referring to Example 6.2, suppose $A$ denotes at least one head and $B$ denotes a match. The sample space is shown in Figure 6.1(b). To find $P(A)$ and $P(B)$, we may proceed in several different ways.

Solution

First, if we use equal likelihood, there are three outcomes favorable to $A$ (that is, HH, HT, and TH) among four possible outcomes, yielding $P(A) = \frac{3}{4}$. For $B$, there are two favorable outcomes in four possibilities, giving $P(B) = \frac{1}{2}$.

As a second approach, we note that, if the coins do not influence each other when tossed, the outcomes on separate coins are statistically independent with $P(H) = P(T) = \frac{1}{2}$. Also, event $A$ consists of any of the mutually exclusive outcomes HH, TH, and HT, giving

$$P(A) = \left(\frac{1}{2}\cdot\frac{1}{2}\right) + \left(\frac{1}{2}\cdot\frac{1}{2}\right) + \left(\frac{1}{2}\cdot\frac{1}{2}\right) = \frac{3}{4} \quad (6.12)$$

by (6.11) and Axiom 3, generalized. Similarly, since $B$ consists of the mutually exclusive outcomes HH and TT,

$$P(B) = \left(\frac{1}{2}\cdot\frac{1}{2}\right) + \left(\frac{1}{2}\cdot\frac{1}{2}\right) = \frac{1}{2} \quad (6.13)$$

again through the use of (6.11) and Axiom 3. Also, $P(A \cap B) = P(\text{at least one head and a match}) = P(\mathrm{HH}) = \frac{1}{4}$.

Next, consider the probability of at least one head given a match, $P(A|B)$. Using Bayes' rule, we obtain

$$P(A|B) = \frac{P(A \cap B)}{P(B)} = \frac{1/4}{1/2} = \frac{1}{2} \quad (6.14)$$

which is reasonable, since given $B$, the only outcomes under consideration are HH and TT, only one of which is favorable to event $A$. Next, finding $P(B|A)$, the probability of a match given at least one head, we obtain

$$P(B|A) = \frac{P(A \cap B)}{P(A)} = \frac{1/4}{3/4} = \frac{1}{3} \quad (6.15)$$

Checking this result using the principle of equal likelihood, we have one favorable event among three candidate events (HH, TH, and HT), which yields a probability of $\frac{1}{3}$. We note that

$$P(A \cap B) \neq P(A)P(B) \quad (6.16)$$

Thus, events $A$ and $B$ are not statistically independent, although the events H and T on either coin are independent.

Finally, consider the probability $P(A \cup B)$. Using (6.6), we obtain

$$P(A \cup B) = \frac{3}{4} + \frac{1}{2} - \frac{1}{4} = 1 \quad (6.17)$$

Remembering that $P(A \cup B)$ is the probability of at least one head, or a match, or both, we see that this includes all possible outcomes, thus confirming the result.
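Every probability in this example can also be checked by enumerating the four-outcome sample space directly; the Python sketch below is an illustrative addition (the helper `P` is not from the text):

```python
from fractions import Fraction
from itertools import product

# The four equally likely outcomes of Figure 6.1(b): HH, HT, TH, TT.
outcomes = list(product("HT", repeat=2))

def P(pred):
    # Probability of the event {outcome : pred(outcome)} under equal likelihood.
    return Fraction(sum(1 for o in outcomes if pred(o)), len(outcomes))

def A(o):
    return "H" in o        # at least one head

def B(o):
    return o[0] == o[1]    # a match

P_A = P(A)                              # 3/4
P_B = P(B)                              # 1/2
P_AB = P(lambda o: A(o) and B(o))       # 1/4, i.e., P(HH)
P_A_given_B = P_AB / P_B                # 1/2, Equation (6.14)
P_B_given_A = P_AB / P_A                # 1/3, Equation (6.15)
P_A_or_B = P_A + P_B - P_AB             # 1, Equation (6.17)
```

Since the sample space is tiny, exact enumeration with `Fraction` is preferable to simulation: it reproduces the equal-likelihood results with no sampling error.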

β– 

EXAMPLE 6.4

This example illustrates the reasoning to be applied when trying to determine if two events are independent. A single card is drawn at random from a deck of cards. Which of the following pairs of events are independent? (a) The card is a club, and the card is black. (b) The card is a king, and the card is black.

Solution

We use the relationship $P(A \cap B) = P(A|B)P(B)$ (always valid) and check it against the relation $P(A \cap B) = P(A)P(B)$ (valid only for independent events). For part (a), we let $A$ be the event that the card is a club and $B$ be the event that it is black. Since there are 26 black cards in an ordinary deck of cards, 13 of which are clubs, the conditional probability $P(A|B)$ is $\frac{13}{26}$ (given that we are considering only black cards, we have 13 favorable outcomes for the card being a club). The probability that the card is black is $P(B) = \frac{26}{52}$, because half the cards in the 52-card deck are black. The probability of a club (event $A$), on the other hand, is $P(A) = \frac{13}{52}$ (13 cards in a 52-card deck are clubs). In this case,

$$P(A|B)P(B) = \frac{13}{26}\cdot\frac{26}{52} \neq P(A)P(B) = \frac{13}{52}\cdot\frac{26}{52} \quad (6.18)$$

so the events are not independent.

For part (b), we let $A$ be the event that a king is drawn, and event $B$ be that it is black. In this case, the probability of a king given that the card is black is $P(A|B) = \frac{2}{26}$ (two cards of the 26 black cards are kings). The probability of a king is simply $P(A) = \frac{4}{52}$ (four kings in the 52-card deck) and $P(B) = P(\text{black}) = \frac{26}{52}$. Hence,

$$P(A|B)P(B) = \frac{2}{26}\cdot\frac{26}{52} = P(A)P(B) = \frac{4}{52}\cdot\frac{26}{52} \quad (6.19)$$

which shows that the events king and black are statistically independent.
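Both parts can be confirmed by enumerating a 52-card deck and testing Equation (6.11) directly; the sketch below is an illustrative addition (the rank and suit labels are assumptions, not from the text):

```python
from fractions import Fraction
from itertools import product

# A standard 52-card deck; the rank and suit labels are assumed here.
ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["clubs", "diamonds", "hearts", "spades"]
deck = list(product(ranks, suits))

def P(pred):
    # Equal-likelihood probability over the 52 cards.
    return Fraction(sum(1 for card in deck if pred(card)), len(deck))

def black(card):
    return card[1] in ("clubs", "spades")

def club(card):
    return card[1] == "clubs"

def king(card):
    return card[0] == "K"

# Part (a): P(club ∩ black) = 1/4, but P(club)P(black) = 1/8 -- dependent.
part_a_independent = P(lambda c: club(c) and black(c)) == P(club) * P(black)
# Part (b): P(king ∩ black) = 1/26 = P(king)P(black) -- independent.
part_b_independent = P(lambda c: king(c) and black(c)) == P(king) * P(black)
print(part_a_independent, part_b_independent)  # False True
```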

β– 

EXAMPLE 6.5

As an example more closely related to communications, consider the transmission of binary digits through a channel as might occur, for example, in computer networks. As is customary, we denote the two possible symbols as 0 and 1. Let the probability of receiving a zero, given a zero was sent, $P(0_r|0_s)$, and the probability of receiving a 1, given a 1 was sent, $P(1_r|1_s)$, be

𝑃 (0π‘Ÿ|0𝑠) = 𝑃 (1π‘Ÿ|1𝑠) = 0.9 (6.20)

Thus, the probabilities $P(1_r|0_s)$ and $P(0_r|1_s)$ must be

𝑃 (1π‘Ÿ|0𝑠) = 1 βˆ’ 𝑃 (0π‘Ÿ|0𝑠) = 0.1 (6.21)

and

𝑃 (0π‘Ÿ|1𝑠) = 1 βˆ’ 𝑃 (1π‘Ÿ|1𝑠) = 0.1 (6.22)

respectively. These probabilities characterize the channel and would be obtained through experimental measurement or analysis. Techniques for calculating them for particular situations will be discussed in Chapters 9 and 10.

In addition to these probabilities, suppose that we have determined through measurement that the probability of sending a 0 is

$$P(0_s) = 0.8 \quad (6.23)$$

and therefore the probability of sending a 1 is

$$P(1_s) = 1 - P(0_s) = 0.2 \quad (6.24)$$

Note that once $P(0_r|0_s)$, $P(1_r|1_s)$, and $P(0_s)$ are specified, the remaining probabilities are calculated using Axioms 2 and 3.

The next question we ask is, β€˜β€˜If a 1 was received, what is the probability, $P(1_s|1_r)$, that a 1 was sent?’’ Applying Bayes' rule, we find that

$$P(1_s|1_r) = \frac{P(1_r|1_s)P(1_s)}{P(1_r)} \quad (6.25)$$

To find𝑃 (1π‘Ÿ), we note that

𝑃 (1π‘Ÿ, 1𝑠) = 𝑃 (1π‘Ÿ|1𝑠)𝑃 (1𝑠) = 0.18 (6.26) and

𝑃 (1π‘Ÿ, 0𝑠) = 𝑃 (1π‘Ÿ|0𝑠)𝑃 (0𝑠) = 0.08 (6.27) Thus,

𝑃 (1π‘Ÿ) = 𝑃 (1π‘Ÿ, 1𝑠) + 𝑃 (1π‘Ÿ, 0𝑠) = 0.18 + 0.08 = 0.26 (6.28) and

𝑃 (1𝑠|1π‘Ÿ) =(0.9)(0.2)

0.26 = 0.69 (6.29)

Similarly, one can calculate $P(0_s|1_r) = 0.31$, $P(0_s|0_r) = 0.97$, and $P(1_s|0_r) = 0.03$. For practice, you should go through the necessary calculations.
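The suggested practice calculations can also be checked numerically; the following sketch (an illustrative addition, not code from the text) applies total probability and Bayes' rule to the channel of Equations (6.20)-(6.24):

```python
# Channel transition and source probabilities from Equations (6.20)-(6.24).
P_0r_given_0s = P_1r_given_1s = 0.9
P_1r_given_0s = P_0r_given_1s = 0.1
P_0s = 0.8
P_1s = 0.2

# Total probability of each received symbol, as in Equation (6.28).
P_1r = P_1r_given_1s * P_1s + P_1r_given_0s * P_0s   # 0.26
P_0r = P_0r_given_0s * P_0s + P_0r_given_1s * P_1s   # 0.74

# Bayes' rule, Equation (6.25), for the four a posteriori probabilities.
P_1s_given_1r = P_1r_given_1s * P_1s / P_1r   # ~0.69
P_0s_given_1r = P_1r_given_0s * P_0s / P_1r   # ~0.31
P_0s_given_0r = P_0r_given_0s * P_0s / P_0r   # ~0.97
P_1s_given_0r = P_0r_given_1s * P_1s / P_0r   # ~0.03
```

Note the asymmetry: because 0s are sent four times as often as 1s, a received 1 is wrong with probability 0.31 even though the channel itself errs only 10% of the time.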

β– 
