Conditional Probability and Independent Events


Conditional Probability

Let A and B be two events defined on the sample space S. The conditional probability of A given B is defined as

P(A|B)=\frac{P(A\cap B)}{P(B)},\text{ for }P(B)>0

The conditional probability assumes that B has occurred and asks for the probability that A has occurred as well. By assuming that B has occurred, we have defined a new sample space S=B and a new probability measure P(\cdot|B). If B=A_{2}\cap A_{3}, then we may write

P(A_{1}|A_{2}\cap A_{3})=\frac{P(A_{1}\cap A_{2}\cap A_{3})}{P(A_{2}\cap A_{3})},\text{ for }P(A_{2}\cap A_{3})>0

We may also define the conditional probability of B given A:

P(B|A)=\frac{P(A\cap B)}{P(A)},\text{ for }P(A)>0
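As a small numerical illustration, the following sketch (in Python; the function name and the probability values are our own, purely hypothetical) evaluates the definition directly:

def conditional(p_a_and_b, p_b):
    # P(A|B) = P(A ∩ B) / P(B), defined only for P(B) > 0
    if p_b <= 0:
        raise ValueError("P(B) must be positive")
    return p_a_and_b / p_b

# hypothetical values: P(A ∩ B) = 0.2, P(B) = 0.5
print(conditional(0.2, 0.5))  # 0.4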

Multiplication Rule

By rearranging the definition of conditional probability, we obtain a formula for the probability that both A and B occur:

P(A\cap B)=P(A)\cdot P(B|A)=P(B)\cdot P(A|B)

and, in analogous fashion,

P(A_{1}\cap A_{2}\cap A_{3})=P(A_{1})\cdot P(A_{2}|A_{1})\cdot P(A_{3}|A_{1}\cap A_{2})

The generalisation for events A_{1},A_{2},\ldots,A_{n} is

P(A_{1}\cap\ldots\cap A_{n})=P(A_{1})\cdot P(A_{2}|A_{1})\cdot P(A_{3}|A_{1}\cap A_{2})\cdot\ldots\cdot P(A_{n}|A_{1}\cap\ldots\cap A_{n-1})
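As a worked example of this chain rule (our own, not from the original text): the probability of drawing three aces in a row from a well-shuffled 52-card deck, without replacement, is P(A_{1})\cdot P(A_{2}|A_{1})\cdot P(A_{3}|A_{1}\cap A_{2})=(4/52)(3/51)(2/50). A short Python check, including a simulation:

import random

# exact value via the chain rule
p = (4 / 52) * (3 / 51) * (2 / 50)
print(p)  # ≈ 0.000181

# Monte Carlo confirmation
deck = ["ace"] * 4 + ["other"] * 48
n, hits = 200_000, 0
for _ in range(n):
    draw = random.sample(deck, 3)      # three cards without replacement
    hits += draw.count("ace") == 3
print(hits / n)  # close to the exact value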

Independent Events

The notion underlying the concept of conditional probability is that a priori information concerning the occurrence of events does, in general, influence the probabilities of other events. (For example, if one knows that someone is a smoker, then one would assign a higher probability to that individual contracting lung cancer.) In general, one would expect

P(A)\neq P(A|B)

The case P(A)=P(A|B) has an important interpretation. If the probability of A occurring remains the same whether or not B has occurred, we say that the two events are statistically (or stochastically) independent. (For example, knowing whether an individual is tall or short does not affect one's assessment of that individual developing lung cancer.) We define stochastic independence of two events A and B by the condition

P(A\cap B)=P(A)\cdot P(B)

which implies that the following conditions hold:

P(A)=P(A|B)
P(B)=P(B|A)
P(A|B)=P(A|\overline{B})
P(B|A)=P(B|\overline{A})

The multiplication condition defining stochastic independence of two events also holds for n independent events:

P(A_{1}\cap\ldots\cap A_{n})=P(A_{1})\cdot\ldots\cdot P(A_{n})

To establish statistical independence of n events, one must ensure that the multiplication rule holds for every subset of the events. That is,

P\left(A_{i_{1}}\cap\ldots\cap A_{i_{m}}\right)=P\left(A_{i_{1}}\right)\cdot\ldots\cdot P\left(A_{i_{m}}\right),\text{ for distinct indices }i_{1},\ldots,i_{m}\in\{1,\ldots,n\}

It is important not to confuse stochastic independence with mutual exclusivity. If two events A and B with P(A)>0 and P(B)>0 are mutually exclusive, then A\cap B=\emptyset and hence P(A\cap B)=P(\emptyset)=0, so that P(A\cap B)\neq P(A)\cdot P(B): mutually exclusive events with positive probabilities are never independent.
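A small simulation can make the product condition concrete. In the sketch below (our illustration; the choice of events is hypothetical), two fair coins are tossed independently, and the empirical P(A\cap B) is compared with P(A)\cdot P(B):

import random

random.seed(1)
n = 100_000
a = b = ab = 0
for _ in range(n):
    first = random.random() < 0.5    # event A: first coin shows heads
    second = random.random() < 0.5   # event B: second coin shows heads
    a += first
    b += second
    ab += first and second

print(ab / n, (a / n) * (b / n))  # both close to 0.25: A and B are independent

# By contrast, A and its complement are mutually exclusive:
# P(A ∩ Ā) = 0 while P(A)·P(Ā) ≈ 0.25, so they cannot be independent.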

Two-Way Cross-Tabulation

In many applications the researcher is interested in associations between two categorical variables. The simplest case arises when one observes two binary variables, i.e. two variables, each with two possible outcomes. For example, suppose that for a randomly selected individual we observe whether or not they smoke and whether or not they have emphysema. Let A be the outcome that the individual smokes and B the outcome that they have emphysema. We can construct a separate sample space \left\{A,\overline{A}\right\} and \left\{B,\overline{B}\right\} for each of the two variables. Alternatively, we can construct the sample space of ordered pairs:

S=\left\{\left(A,B\right),\left(A,\overline{B}\right),\left(\overline{A},B\right),\left(\overline{A},\overline{B}\right)\right\}

In tabulating data of this type, we simply count the number of individuals corresponding to each of the four basic outcomes. No information is lost regarding the two variables individually, because we can always obtain frequencies for both categories of either variable by summing over the two categories of the other variable. For example, to calculate the number of individuals who have emphysema, we add up all those who smoke and have emphysema (i.e., (A,B)) and all those who do not smoke and have emphysema (i.e., (\overline{A},B)). Relative frequencies for categories of the individual variables are called marginal relative frequencies.

Relative frequencies arising from bivariate categorical data are usually displayed by cross-tabulating the categories of the two variables. Marginal frequencies are included as sums of the columns/rows representing the categories of each of the variables. The resulting matrix is called an (r\times c)-contingency table, where r and c denote the number of categories observed for each variable. In our example with two categories for each variable, we have a (2\times2)-contingency table. We may summarize the probabilities associated with each basic outcome in a similar table:

              B                      \overline{B}                      Sum
A             P(A\cap B)             P(A\cap\overline{B})              P(A)
\overline{A}  P(\overline{A}\cap B)  P(\overline{A}\cap\overline{B})   P(\overline{A})
Sum           P(B)                   P(\overline{B})                   P(S)=1
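As an aside, here is a minimal sketch (our own; the observations are made up) of how raw binary data are tallied into such a table, with the marginal sums recovered by summing over the other variable:

from collections import Counter

# hypothetical observations: (smokes, has_emphysema) for each individual
data = [(True, True), (True, False), (False, False),
        (True, True), (False, True), (False, False)]

counts = Counter(data)
n = len(data)

for smokes in (True, False):
    row = [counts[(smokes, emph)] for emph in (True, False)]
    print(smokes, row, sum(row))     # cell counts and the row (marginal) total

# marginal count for "has emphysema": sum over both smoking categories
print(counts[(True, True)] + counts[(False, True)])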

The structure of such a table is particularly helpful in checking for independence between events. Recall that the joint probability of two independent events can be calculated as the product of the probabilities of the two individual events. In this case, we want to verify whether the joint probabilities in the main body of the table are equal to the products of the marginal probabilities. If they are not, then the events are not independent. For example, under independence we would have

P(A)\cdot P(B)=P(A\cap B)

If one replaces the probabilities in the above table with their sample frequencies, then independence implies that the estimated joint probabilities should be approximately equal to the products of the estimated marginal probabilities. Formal procedures for testing independence will be discussed later.

Joint probabilities of two binary variables are arranged in the contingency table below. Are the variables represented by the events \left\{A,\overline{A}\right\} and \left\{B,\overline{B}\right\} (mutually) independent?

              B     \overline{B}   Sum
A             1/3   1/6            1/2
\overline{A}  1/3   1/6            1/2
Sum           2/3   1/3            1

For the multiplication condition of independence to be satisfied, the inner cells of the contingency table must equal the product of their corresponding marginal probabilities. This is true for all four cells:

              B                  \overline{B}       Sum
A             1/3=1/2\cdot2/3    1/6=1/2\cdot1/3    1/2
\overline{A}  1/3=1/2\cdot2/3    1/6=1/2\cdot1/3    1/2
Sum           2/3                1/3                1
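The same check can be automated. A minimal sketch (ours) that verifies the multiplication condition for every cell of the table above:

from fractions import Fraction as F

joint = {("A", "B"): F(1, 3), ("A", "notB"): F(1, 6),
         ("notA", "B"): F(1, 3), ("notA", "notB"): F(1, 6)}

# marginal probabilities: sum the joint probabilities over the other variable
row = {a: sum(p for (i, _), p in joint.items() if i == a) for a in ("A", "notA")}
col = {b: sum(p for (_, j), p in joint.items() if j == b) for b in ("B", "notB")}

# independence: every joint probability equals the product of its marginals
print(all(joint[a, b] == row[a] * col[b] for a, b in joint))  # True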

In this very special example with two binary variables it is, however, not necessary to verify the validity of the multiplication rule for each of the four cells. As we have already seen, stochastic independence of two events implies stochastic independence of their complements. Consequently, if the multiplication condition holds for one of the four cells, it must hold for the other three. This is only true because the only two events to be considered for each variable are complements.

A master and his apprentice produce hand-made screws. The following data were collected over the course of the year 1998:

Total production:            2000 screws
Group 1 (the master):        1400 screws
    good screws:             1162
    faulty screws:            238
Group 2 (the apprentice):     600 screws
    good screws:              378
    faulty screws:            222

What is the probability that a randomly selected screw is not faulty, given that it was produced by the master? To calculate this probability, we use the following notation:

A = {screw is good}
B = {screw produced by the master}
C = {screw produced by the apprentice}

The situation is displayed in the Venn diagram below:

[Venn diagram showing the events A, B, and C]

We would like to calculate P(A|B). This conditional probability is defined as P(A|B)=P(A\cap B)/P(B). The event A\cap B corresponds to the selection of a good screw produced by the master. To calculate P(A\cap B), we divide the number of screws with this property by the total number of screws: P(A\cap B)=1162/2000. The probability P(B) is the ratio of the number of screws produced by the master to the total production: P(B)=1400/2000. Thus we obtain P(A|B)=1162/1400=0.83.
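The same arithmetic in a short sketch (ours; the counts are taken from the production table above):

n_total = 2000          # all screws produced in 1998
n_master = 1400         # screws produced by the master (event B)
n_master_good = 1162    # good screws produced by the master (event A ∩ B)

p_b = n_master / n_total
p_a_and_b = n_master_good / n_total
print(p_a_and_b / p_b)  # P(A|B) = 1162/1400 = 0.83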

We close with the equivalence of the two characterizations of independence. First, we want to show that for any pair of independent events A and B we have P(A)=P(A|B). Assume that the events A and B are independent. Then

P(A|B)=\frac{P(A\cap B)}{P(B)}=\frac{P(A)\,P(B)}{P(B)}=P(A)

Similarly, we can show that P(B|A)=P(B). Conversely, suppose that P(A)=P(A|B); we want to show that this implies the multiplication rule, i.e., that A and B are independent:

\begin{align}
P(A|B) &= \frac{P(A\cap B)}{P(B)} = P(A)\\
P(A\cap B) &= P(A)\cdot P(B)
\end{align}

Indeed, stochastic independence can be defined equivalently in a number of ways.
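A concrete finite example of this equivalence (our own, not from the original text): for one roll of a fair die, let A = {even number} and B = {1,2,3,4}. Then P(A\cap B)=P(\{2,4\})=1/3=P(A)\cdot P(B), and accordingly P(A|B)=P(A):

from fractions import Fraction as F

omega = {1, 2, 3, 4, 5, 6}     # fair die: all outcomes equally likely

def prob(event):
    return F(len(event), len(omega))

A = {2, 4, 6}                  # even number
B = {1, 2, 3, 4}

print(prob(A & B) == prob(A) * prob(B))   # True: multiplication rule holds
print(prob(A & B) / prob(B) == prob(A))   # True: P(A|B) = P(A)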