The notion of independent families of events leads us next to the notion of independent trials . Let \(S\) be a sample description space of a random observation or experiment on which is defined a probability function \(P[\cdot]\) . Suppose further that each description in \(S\) is an n-tuple. Then the random phenomenon which \(S\) describes is defined as consisting of \(n\) trials. For example, suppose one is drawing a sample of size \(n\) from an urn containing \(M\) balls. The sample description space of such an experiment consists of \(n\) -tuples. It is also useful to regard this experiment as a series of trials, in each of which a ball is drawn from the urn. Mathematically, the fact that in drawing a sample of size \(n\) one is performing \(n\) trials is expressed by the fact that the sample description space \(S\) consists of \(n\) -tuples \(\left(z_{1}, z_{2}, \ldots, z_{n}\right)\) ; the first component \(z_{1}\) represents the outcome of the first trial, the second component \(z_{2}\) represents the outcome of the second trial, and so on, until \(z_{n}\) represents the outcome of the \(n\) th trial.

We next define the important notion of an event depending on a trial. Let \(S\) be a sample description space consisting of \(n\) trials, and let \(A\) be an event on \(S\). Let \(k\) be an integer, 1 to \(n\). We say that \(A\) depends on the \(k\)th trial if the occurrence or nonoccurrence of \(A\) depends only on the outcome of the \(k\)th trial. In other words, in order to determine whether or not \(A\) has occurred, one need know only the outcome of the \(k\)th trial. From a more abstract point of view, an event \(A\) is said to depend on the \(k\)th trial if the decision as to whether a given description in \(S\) belongs to the event \(A\) depends only on the \(k\)th component of the description. It should be especially noted that the certain event \(S\) and the impossible event \(\emptyset\) may be said to depend on every trial, since the occurrence or nonoccurrence of these events can be determined without knowing the outcome of any trial.

Example 2A. Suppose one is drawing a sample of size 2 from an urn containing white and black balls. The event \(A\) that the first ball drawn is white depends on the first trial. Similarly, the event \(B\) that the second ball drawn is white depends on the second trial. However, the event \(C\) that exactly one of the balls drawn is white does not depend on any one trial. Note that one may express \(C\) in terms of \(A\) and \(B\) by \(C=A B^{c} \cup A^{c} B\).
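The identity \(C=A B^{c} \cup A^{c} B\) may also be checked mechanically. The following sketch in Python (the encoding of draws by the letters "W" and "B" is ours, purely for illustration) enumerates the four descriptions of the two draws and verifies the identity:

```python
from itertools import product

# Enumerate the four descriptions of the two draws; "W" = white, "B" = black.
S = set(product("WB", repeat=2))

A = {s for s in S if s[0] == "W"}        # first ball white: depends on trial 1
B = {s for s in S if s[1] == "W"}        # second ball white: depends on trial 2
C = {s for s in S if s.count("W") == 1}  # exactly one of the balls is white

# C = A B^c ∪ A^c B, complements being taken within S:
assert C == (A & (S - B)) | ((S - A) & B)
print(sorted(C))  # [('B', 'W'), ('W', 'B')]
```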

Example 2B. Choose a summer day at random on which both the Dodgers and the Giants are playing baseball games, but not with one another. Let \(z_{1}=1\) or 0, depending on whether the Dodgers win or lose their game, and, similarly, let \(z_{2}=1\) or 0, depending on whether the Giants win or lose their game. The event \(A\) that the Dodgers win depends on the first trial of the sample description space \[S = \{(z_1, z_2) : z_1 = 0 \text{ or } 1, \text{ and } z_2 = 0 \text{ or } 1\}.\]

We next define the very important notion of independent trials. Consider a sample description space \(S\) consisting of \(n\) trials. For \(k=1,2, \ldots, n\) let \(\mathscr{A}_{k}\) be the family of events on \(S\) that depend on the \(k\)th trial. We define the \(n\) trials as independent (and we say that \(S\) consists of \(n\) independent trials) if the families of events \(\mathscr{A}_{1}, \mathscr{A}_{2}, \ldots, \mathscr{A}_{n}\) are independent. Otherwise, the \(n\) trials are said to be dependent or nonindependent. More explicitly, the \(n\) trials are said to be independent if (1.11) holds for every set of events \(A_{1}, A_{2}, \ldots, A_{n}\) such that, for \(k=1,2, \ldots, n\), \(A_{k}\) depends only on the \(k\)th trial.
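For readers who wish to experiment, here is a minimal computational sketch of this definition for the case \(n=2\), assuming that (1.11) reduces in this case to \(P[A_{1} A_{2}]=P[A_{1}] P[A_{2}]\); the code checks that relation for every pair of events depending, respectively, on the first and the second trial. The function names and the concluding fair-coin example are our own illustrations, not part of the text.

```python
from itertools import chain, combinations, product

def events(Z):
    """All subsets of the component space Z; by (2.4) below, each such
    subset determines an event depending on the corresponding trial."""
    Z = list(Z)
    return chain.from_iterable(combinations(Z, r) for r in range(len(Z) + 1))

def two_independent_trials(Z1, Z2, P, tol=1e-12):
    """Check whether S = Z1 x Z2, with P mapping each description (z1, z2)
    to its probability, consists of 2 independent trials: P[A1 A2] must
    equal P[A1] P[A2] whenever A1 depends on trial 1 and A2 on trial 2."""
    S = set(product(Z1, Z2))
    prob = lambda E: sum(P[s] for s in E)
    for C1 in map(set, events(Z1)):
        for C2 in map(set, events(Z2)):
            A1 = {s for s in S if s[0] in C1}  # depends on the first trial
            A2 = {s for s in S if s[1] in C2}  # depends on the second trial
            if abs(prob(A1 & A2) - prob(A1) * prob(A2)) > tol:
                return False
    return True

# Two tosses of a fair coin, all four descriptions equally likely:
P = {s: 0.25 for s in product("HT", repeat=2)}
print(two_independent_trials("HT", "HT", P))  # True
```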

If the reader traces through the various definitions made in this chapter, it should become clear that the mathematical definition of the notion of independent trials embodies its intuitive meaning: two trials (of the same or of different experiments) are independent if the outcome of one does not affect the outcome of the other, and are otherwise dependent.

In the foregoing definition of independent trials it was assumed that the probability function \(P[\cdot]\) was already defined on the sample description space \(S\), which consists of \(n\)-tuples. When this is the case, to establish that \(S\) consists of \(n\) independent trials clearly requires the verification of a large number of relations of the form of (1.11). In practice, however, one does not start with a probability function \(P[\cdot]\) on \(S\) and then verify all the relations of the form of (1.11) in order to show that \(S\) consists of \(n\) independent trials. Rather, the notion of independent trials derives its importance from the fact that it provides an often-used method for setting up a probability function on a sample description space. This is done in the following way.

Let \(Z_{1}, Z_{2}, \ldots, Z_{n}\) be \(n\) sample description spaces (which may be alike) on whose subsets, respectively, are defined probability functions \(P_{1}, P_{2}, \ldots, P_{n}\). For example, suppose we are drawing, with replacement, a sample of size \(n\) from an urn containing \(N\) balls, numbered 1 to \(N\). We define (for \(k=1,2, \ldots, n\)) \(Z_{k}\) as the sample description space of the outcome of the \(k\)th draw; consequently, \(Z_{k}=\{1,2, \ldots, N\}\). If the descriptions in \(Z_{k}\) are assumed to be equally likely, then the probability function \(P_{k}\) is defined on the events \(C_{k}\) of \(Z_{k}\) by \(P_{k}\left[C_{k}\right]=N\left[C_{k}\right] / N\left[Z_{k}\right]\).

Now suppose we perform in succession the \(n\) random experiments whose sample description spaces are \(Z_{1}, Z_{2}, \ldots, Z_{n}\), respectively. The sample description space \(S\) of this series of \(n\) random experiments consists of \(n\)-tuples \(\left(z_{1}, z_{2}, \ldots, z_{n}\right)\), which may be formed by taking for the first component \(z_{1}\) any member of \(Z_{1}\), for the second component \(z_{2}\) any member of \(Z_{2}\), and so on, until for the \(n\)th component \(z_{n}\) we take any member of \(Z_{n}\). We introduce a notation to express these facts; we write \(S=Z_{1} \otimes Z_{2} \otimes \cdots \otimes Z_{n}\), which we read "\(S\) is the combinatorial product of the spaces \(Z_{1}, Z_{2}, \ldots, Z_{n}\)". More generally, we define the notion of a combinatorial product event on \(S\). For any events \(C_{1}\) on \(Z_{1}\), \(C_{2}\) on \(Z_{2}\), \(\ldots\), and \(C_{n}\) on \(Z_{n}\) we define the combinatorial product event \(C=C_{1} \otimes C_{2} \otimes \cdots \otimes C_{n}\) as the set of all \(n\)-tuples \(\left(z_{1}, z_{2}, \ldots, z_{n}\right)\) that can be formed by taking for the first component \(z_{1}\) any member of \(C_{1}\), for the second component \(z_{2}\) any member of \(C_{2}\), and so on, until for the \(n\)th component \(z_{n}\) we take any member of \(C_{n}\).

We now define a probability function \(P[\cdot]\) on the subsets of \(S\) . For every event \(C\) on \(S\) that is a combinatorial product event, so that \(C=C_{1} \otimes\) \(C_{2} \otimes \ldots \otimes C_{n}\) for some events \(C_{1}, C_{2}, \ldots, C_{n}\) , which belong, respectively, to \(Z_{1}, Z_{2}, \ldots, Z_{n}\) , we define \[P[C]=P_{1}\left[C_{1}\right] P_{2}\left[C_{2}\right] \cdots P_{n}\left[C_{n}\right]. \tag{2.1}\] 
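In the finite case definition (2.1) translates directly into computation: assign to each \(n\)-tuple the product of its componentwise probabilities and extend to all other events by additivity. The following Python sketch (the name product_probability and the representation of each \(P_{k}\) as a table of single-member probabilities are our own devices) builds such a \(P[\cdot]\) and evaluates it on a single-member event of the coin-and-die experiment treated in Example 2C below:

```python
from fractions import Fraction
from itertools import product

def product_probability(P_components):
    """Build P on S = Z1 x ... x Zn from component probability functions
    P_1, ..., P_n, each given as a dict mapping the descriptions of Z_k to
    their probabilities.  Each n-tuple receives the product of its
    componentwise probabilities; (2.1) then holds for every combinatorial
    product event by additivity."""
    P = {}
    for desc in product(*(list(Pk) for Pk in P_components)):
        p = Fraction(1)
        for Pk, zk in zip(P_components, desc):
            p *= Pk[zk]
        P[desc] = p
    return P

# A coin followed by a fair die (cf. Example 2C below):
P1 = {"H": Fraction(1, 2), "T": Fraction(1, 2)}
P2 = {i: Fraction(1, 6) for i in range(1, 7)}
P = product_probability([P1, P2])
print(P[("H", 5)])  # 1/12
```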

Not every event in \(S\) is a combinatorial product event. However, it can be shown that it is possible to define a unique probability function \(P[\cdot]\) on the events of \(S\) in such a way that (2.1) holds for combinatorial product events.

It may help to clarify the meaning of the foregoing ideas if we consider the special (but nevertheless important) case in which each sample description space \(Z_{1}, Z_{2}, \ldots, Z_{n}\) is finite, of sizes \(N_{1}, N_{2}, \ldots, N_{n}\), respectively. As in section 6 of Chapter 1, we list the descriptions in \(Z_{1}, Z_{2}, \ldots, Z_{n}\): for \(j=1, \ldots, n\), \[Z_{j}=\left\{D_{1}^{(j)}, D_{2}^{(j)}, \ldots, D_{N_{j}}^{(j)}\right\}.\]

Now let \(S=Z_{1} \otimes Z_{2} \otimes \cdots \otimes Z_{n}\) be the sample description space of the random experiment which consists in performing in succession the \(n\) random experiments whose sample description spaces are \(Z_{1}, Z_{2}, \ldots, Z_{n}\), respectively. A typical description in \(S\) can be written \(\left(D_{i_{1}}^{(1)}, D_{i_{2}}^{(2)}, \ldots, D_{i_{n}}^{(n)}\right)\), where, for \(j=1, \ldots, n\), \(D_{i_{j}}^{(j)}\) represents a description in \(Z_{j}\) and \(i_{j}\) is some integer, 1 to \(N_{j}\). To determine a probability function \(P[\cdot]\) on the subsets of \(S\), it suffices to specify it on the single-member events of \(S\). Given probability functions \(P_{1}[\cdot], P_{2}[\cdot], \ldots, P_{n}[\cdot]\) defined on \(Z_{1}, Z_{2}, \ldots, Z_{n}\), respectively, we define \(P[\cdot]\) on the subsets of \(S\) by defining \[P\left[\left\{\left(D_{i_{1}}^{(1)}, D_{i_{2}}^{(2)}, \ldots, D_{i_{n}}^{(n)}\right)\right\}\right]=P_{1}\left[\left\{D_{i_{1}}^{(1)}\right\}\right] P_{2}\left[\left\{D_{i_{2}}^{(2)}\right\}\right] \cdots P_{n}\left[\left\{D_{i_{n}}^{(n)}\right\}\right]. \tag{2.2}\]

Equation (2.2) is a special case of (2.1) , since a single-member event on \(S\) can be written as a combinatorial product event; indeed, \[\left\{\left(D_{i_{1}}^{(1)}, D_{i_{2}}^{(2)}, \ldots, D_{i_{n}}^{(n)}\right)\right\}=\left\{D_{i_{1}}^{(1)}\right\} \otimes\left\{D_{i_{2}}^{(2)}\right\} \otimes \cdots \otimes\left\{D_{i_{n}}^{(n)}\right\}. \tag{2.3}\] 

Example 2C. Let \(Z_{1}=\{H, T\}\) be the sample description space of the experiment of tossing a coin, and let \(Z_{2}=\{1,2, \ldots, 6\}\) be the sample description space of the experiment of throwing a fair die. Let \(S\) be the sample description space of the experiment which consists of first tossing a coin and then throwing a die. What is the probability that in the jointly performed experiment one will obtain heads on the coin toss and a 5 on the die throw? The assumption made by (2.2) is that it is equal to the product of (i) the probability that the outcome of the coin toss will be heads and (ii) the probability that the outcome of the die throw will be a 5; if the coin is assumed fair, this product is \(\frac{1}{2} \cdot \frac{1}{6}=\frac{1}{12}\).

We now desire to show that the probability space, consisting of the sample description space \(S=Z_{1} \otimes Z_{2} \otimes \ldots \otimes Z_{n}\) , on whose subsets a probability function \(P[\cdot]\) is defined by means of (2.1) , consists of \(n\) independent trials. 

We first note that an event \(A_{k}\) in \(S\) , which depends only on the \(k\) th trial, is necessarily a combinatorial product event; indeed, for some event \(C_{k}\) in \(Z_{k}\) \[A_{k}=Z_{1} \otimes \cdots \otimes Z_{k-1} \otimes C_{k} \otimes Z_{k+1} \otimes \cdots \otimes Z_{n}. \tag{2.4}\] 

Equation (2.4) follows from the fact that an event \(A_{k}\) depends on the \(k\) th trial if and only if the decision as to whether or not a description \(\left(z_{1}, z_{2}, \ldots, z_{n}\right)\) belongs to \(A_{k}\) depends only on the \(k\) th component \(z_{k}\) of the description. Next, let \(A_{1}, A_{2}, \ldots, A_{n}\) be events depending, respectively, on the first, second,…, \(n\) th trial. For each \(A_{k}\) we have a representation of the form of (2.4) . We next assert that the intersection may be written as a combinatorial product event: \[A_{1} A_{2} \cdots A_{n}=C_{1} \otimes C_{2} \otimes \cdots \otimes C_{n}. \tag{2.5}\] 

We leave the verification of (2.5) , which requires only a little thought, to the reader. Now, from (2.1) and (2.5) \[P\left[A_{1} A_{2} \cdots A_{n}\right]=P_{1}\left[C_{1}\right] P_{2}\left[C_{2}\right] \cdots P_{n}\left[C_{n}\right], \tag{2.6}\] whereas from (2.1) and (2.4) \begin{align} P\left[A_{k}\right] & =P_{1}\left[Z_{1}\right] \cdots P_{k-1}\left[Z_{k-1}\right] P_{k}\left[C_{k}\right] P_{k+1}\left[Z_{k+1}\right] \cdots P_{n}\left[Z_{n}\right] \tag{2.7} \\ & =P_{k}\left[C_{k}\right]. \end{align} From (2.6) and (2.7) it is seen that (1.11) is satisfied, so that \(S\) consists of \(n\) independent trials.
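A numerical spot check of this argument may be reassuring. In the following sketch the three component spaces, their probability functions, and the events \(C_{k}\) are invented for illustration; the code builds \(P[\cdot]\) by taking products as in (2.2), forms the events \(A_{k}\) of (2.4), and confirms (2.6) and (2.7):

```python
from fractions import Fraction
from itertools import product

# Invented three-trial example: Z1 = Z2 = {H, T}, Z3 = {a, b, c}.
Pk = [
    {"H": Fraction(1, 2), "T": Fraction(1, 2)},
    {"H": Fraction(3, 10), "T": Fraction(7, 10)},
    {"a": Fraction(1, 5), "b": Fraction(3, 10), "c": Fraction(1, 2)},
]
S = list(product(*(list(p) for p in Pk)))
P = {s: Pk[0][s[0]] * Pk[1][s[1]] * Pk[2][s[2]] for s in S}  # as in (2.2)

C = [{"H"}, {"T"}, {"a", "c"}]  # events C_k on Z_k
# A_k as in (2.4): only the k-th component is constrained.
A = [{s for s in S if s[k] in C[k]} for k in range(3)]

prob = lambda E: sum(P[s] for s in E)
lhs = prob(A[0] & A[1] & A[2])              # P[A1 A2 A3], the left side of (2.6)
rhs = prob(A[0]) * prob(A[1]) * prob(A[2])  # each P[A_k] = P_k[C_k], by (2.7)
assert lhs == rhs
print(lhs)  # 49/200, i.e., (1/2)(7/10)(7/10)
```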

The foregoing considerations are not only sufficient to define a probability space that consists of independent trials but are also necessary in the sense of the following theorem, which we state without proof. Let the sample description space \(S\) be a combinatorial product of \(n\) sample description spaces \(Z_{1}, Z_{2}, \ldots, Z_{n}\). Let \(P[\cdot]\) be a probability function defined on the subsets of \(S\). The probability space \(S\) consists of \(n\) independent trials if and only if there exist probability functions \(P_{1}[\cdot], P_{2}[\cdot], \ldots, P_{n}[\cdot]\), defined, respectively, on the subsets of the sample description spaces \(Z_{1}, Z_{2}, \ldots, Z_{n}\), with respect to which \(P[\cdot]\) satisfies (2.6) for every set of \(n\) events \(A_{1}, A_{2}, \ldots, A_{n}\) on \(S\) such that, for \(k=1, \ldots, n\), \(A_{k}\) depends only on the \(k\)th trial (and then \(C_{k}\) is defined by (2.4)).

To illustrate the foregoing considerations, we consider the following example.

Example 2D. A man tosses two fair coins independently. Let \(C_{1}\) be the event that the first coin tossed is a head, let \(C_{2}\) be the event that the second coin tossed is a head, and let \(C\) be the event that both coins tossed are heads. Consider the sample description spaces \(S=\{(H, H),(H, T),(T, H),(T, T)\}\) and \(Z_{1}=Z_{2}=\{H, T\}\). Clearly \(S\) is the sample description space of the outcome of the two tosses, whereas \(Z_{1}\) and \(Z_{2}\) are the sample description spaces of the outcomes of the first and second tosses, respectively. We assume that each of these sample description spaces has equally likely descriptions.

The event \(C_{1}\) may be defined on either \(S\) or \(Z_{1}\). If defined on \(Z_{1}\), \(C_{1}=\{H\}\). If defined on \(S\), \(C_{1}=\{(H, H),(H, T)\}\). The event \(C_{2}\) may in a similar manner be defined on either \(Z_{2}\) or \(S\). However, the event \(C\) can be defined only on \(S\); \(C=\{(H, H)\}\).

The spaces on which \(C_{1}\) and \(C_{2}\) are defined determine the relation that exists among \(C_{1}\), \(C_{2}\), and \(C\). If both \(C_{1}\) and \(C_{2}\) are defined on \(S\), then \(C=C_{1} C_{2}\). If \(C_{1}\) and \(C_{2}\) are defined on \(Z_{1}\) and \(Z_{2}\), respectively, then \(C=C_{1} \otimes C_{2}\).

In order to speak of the independence of \(C_{1}\) and \(C_{2}\) , we must regard them as being defined on the same sample description space. That \(C_{1}\) and \(C_{2}\) are independent events is intuitively clear, since \(S\) consists of two independent trials and \(C_{1}\) depends on the first trial, whereas \(C_{2}\) depends on the second trial. Events can be independent without depending on independent trials. For example, consider the event \(D=\{(H, H),(T, T)\}\) that the two tosses have the same outcome. One may verify that \(D\) and \(C_{1}\) are independent and also that \(D\) and \(C_{2}\) are independent. On the other hand, the events \(D\) , \(C_{1}\) , and \(C_{2}\) are not independent.
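These assertions about \(D\), \(C_{1}\), and \(C_{2}\) can also be confirmed by direct enumeration. The following sketch uses exact rational arithmetic and assigns probability \(\frac{1}{4}\) to each description, as the equally-likely assumption of Example 2D requires:

```python
from fractions import Fraction
from itertools import product

S = list(product("HT", repeat=2))
P = {s: Fraction(1, 4) for s in S}   # two independent tosses of fair coins
prob = lambda E: sum(P[s] for s in E)

C1 = {s for s in S if s[0] == "H"}   # first coin falls heads
C2 = {s for s in S if s[1] == "H"}   # second coin falls heads
D  = {s for s in S if s[0] == s[1]}  # the two tosses have the same outcome

# Pairwise independence holds:
assert prob(D & C1) == prob(D) * prob(C1)  # 1/4 = (1/2)(1/2)
assert prob(D & C2) == prob(D) * prob(C2)
assert prob(C1 & C2) == prob(C1) * prob(C2)
# ...but the three events are not independent as a triple:
assert prob(D & C1 & C2) != prob(D) * prob(C1) * prob(C2)  # 1/4 ≠ 1/8
```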

Exercises

2.1. Consider a man who has made 2 tosses of a die. State whether each of the following six statements is true or false.

Let \(A_{1}\) be the event that the outcome of the first throw is a 1 or a 2.

Statement 1: \(A_{1}\) depends on the first throw.

Let \(A_{2}\) be the event that the outcome of the second throw is a 1 or a 2.

Statement 2: \(A_{1}\) and \(A_{2}\) are mutually exclusive events.

Let \(B_{1}\) be the event that the sum of the outcomes is 7.

Statement 3: \(B_{1}\) depends on the first throw.

Let \(B_{2}\) be the event that the sum of the outcomes is 3.

Statement 4: \(B_{1}\) and \(B_{2}\) are mutually exclusive events.

Let \(C\) be the event that one of the outcomes is a 1 and the other is a 2.

Statement 5: \(A_{1} \cup A_{2}\) is a sub-event of \(C\) .

Statement 6: \(C\) is a sub-event of \(B_{2}\) .


Answer

(1) \(T\); (2) \(F\); (3) \(F\); (4) \(T\); (5) \(F\); (6) \(T\).


2.2. Consider a man who has made 2 tosses of a coin. He assumes that the possible outcomes of the experiment, together with their probabilities, are given by the following table:

\[
\begin{array}{c|cccc}
\text{Sample description } D & (H, H) & (H, T) & (T, H) & (T, T) \\
\hline
P[\{D\}] & \frac{1}{3} & \frac{1}{6} & \frac{1}{6} & \frac{1}{3}
\end{array}
\]

Show that this probability space does not consist of 2 independent trials. Is there a unique probability function that must be assigned to the subsets of the foregoing sample description space in order that it consist of 2 independent trials?

2.3. Consider 3 urns; urn I contains 1 white and 2 black balls, urn II contains 3 white and 2 black balls, and urn III contains 2 white and 3 black balls. One ball is drawn from each urn. What is the probability that among the balls drawn there will be (i) 1 white and 2 black balls, (ii) at least 2 black balls, (iii) more black than white balls?


Answer

(i) \(\frac{32}{75}\); (ii), (iii) \(\frac{44}{75}\).
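These answers can be checked by enumeration, treating the three draws as independent trials with \(P[\cdot]\) given by (2.2); the sketch below (the "W"/"B" encoding is ours) does so:

```python
from fractions import Fraction
from itertools import product

urns = [
    {"W": Fraction(1, 3), "B": Fraction(2, 3)},  # urn I:   1 white, 2 black
    {"W": Fraction(3, 5), "B": Fraction(2, 5)},  # urn II:  3 white, 2 black
    {"W": Fraction(2, 5), "B": Fraction(3, 5)},  # urn III: 2 white, 3 black
]
S = list(product("WB", repeat=3))
P = {s: urns[0][s[0]] * urns[1][s[1]] * urns[2][s[2]] for s in S}  # as in (2.2)

prob = lambda pred: sum(P[s] for s in S if pred(s))
print(prob(lambda s: s.count("B") == 2))  # (i)  32/75
print(prob(lambda s: s.count("B") >= 2))  # (ii) 44/75; (iii) coincides with (ii),
                                          # since more black than white among 3
                                          # balls means at least 2 black
```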


2.4. If you had to construct a mathematical model for events \(A\) and \(B\), as described below, would it be appropriate to assume that \(A\) and \(B\) are independent? Explain the reasons for your opinion.
(i) \(A\) is the event that a subscriber to a certain magazine owns a car, and \(B\) is the event that the same subscriber is listed in the telephone directory.

(ii) \(A\) is the event that a married man has blue eyes, and \(B\) is the event that his wife has blue eyes.

(iii) \(A\) is the event that a man aged 21 is more than 6 feet tall, and \(B\) is the event that the same man weighs less than 150 pounds.

(iv) \(A\) is the event that a man lives in the Northern Hemisphere, and \(B\) is the event that he lives in the Western Hemisphere.

(v) \(A\) is the event that it will rain tomorrow, and \(B\) is the event that it will rain within the next week.

2.5. Explain the meaning of the following statements:

(i) A random phenomenon consists of \(n\) trials.

(ii) In drawing a sample of size \(n\) , one is performing \(n\) trials.

(iii) An event \(A\) depends on the third trial.

(iv) The event that the third ball drawn is white depends on the third trial.

(v) In drawing with replacement a sample of size 6, one is performing 6 independent trials of an experiment.

(vi) If \(S\) is the sample description space of the experiment of drawing with replacement a sample of size 6 from an urn containing balls, numbered 1 to 10, then \(S=Z_{1} \otimes Z_{2} \otimes \ldots \otimes Z_{6}\) , in which \(Z_{j}=\{1,2, \ldots, 10\}\) for \(j=1, \ldots, 6\) .

(vii) If, in (vi), balls numbered 1 to 7 are white and if \(A\) is the event that all balls drawn are white, then \(A=C_{1} \otimes C_{2} \otimes \ldots \otimes C_{6}\) , in which \(C_{j}=\) \(\{1,2, \ldots, 7\}\) for \(j=1, \ldots, 6\) .