Many problems in probability theory involve independent repeated trials of an experiment whose outcomes have been classified in two categories, called “successes” and “failures” and represented by the letters \(s\) and \(f\) , respectively. Such an experiment, which has only two possible outcomes, is called a Bernoulli trial. The probability of the outcome \(s\) is usually denoted by \(p\) , and the probability of the outcome \(f\) is usually denoted by \(q\) , where \[p \geq 0, \quad q \geq 0, \quad p+q=1. \tag{3.1}\]
In symbols, the sample description space of a Bernoulli trial is \(Z=\{s, f\}\) , on whose subsets is given a probability function \(P_{Z}[\cdot]\) satisfying \(P_{Z}[\{s\}]=p\) and \(P_{Z}[\{f\}]=q\) .
Consider now \(n\) independent repeated Bernoulli trials , in which the word “repeated” is meant to indicate that the probabilities of success and failure remain the same throughout the trials. The sample description space \(S\) of \(n\) independent repeated Bernoulli trials contains \(2^{n}\) descriptions, each an \(n\) -tuple \(\left(z_{1}, z_{2}, \ldots, z_{n}\right)\) , in which each \(z_{i}\) is either an \(s\) or an \(f\) . The sample description space \(S\) is finite. However, to specify a probability function \(P[\cdot]\) on the subsets of \(S\) , we shall not assume that all descriptions in \(S\) are equally likely. Rather, we shall use the ideas in section 2.
In order to specify a probability function \(P[\cdot]\) on the subsets of \(S\) , it suffices to specify it on the single-member events \(\left\{\left(z_{1}, \ldots, z_{n}\right)\right\}\) . However, a single-member event may be written as a combinatorial product event; indeed, \(\left\{\left(z_{1}, \ldots, z_{n}\right)\right\}=\left\{z_{1}\right\} \otimes \cdots \otimes\left\{z_{n}\right\}\) . Since it has been assumed that \(P_{Z}[\{s\}]=p\) and \(P_{Z}[\{f\}]=q\) , we obtain the following basic rule.¹
If a probability space consists of \(n\) independent repeated Bernoulli trials, then the probability \(P\left[\left\{\left(z_{1}, \ldots, z_{n}\right)\right\}\right]\) of any single-member event is equal to \(p^{k} q^{n-k}\) , in which \(k\) is the number of successes \(s\) among the components of the description \(\left(z_{1}, \ldots, z_{n}\right)\) .
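This rule is easy to mechanize. The following Python sketch (the function name `sequence_probability` is our own, introduced for illustration) computes \(p^{k} q^{n-k}\) directly from a description written as a string of \(s\)'s and \(f\)'s:

```python
def sequence_probability(outcomes, p):
    """Probability of one particular description (z1, ..., zn) of n
    independent repeated Bernoulli trials, per the rule above:
    p**k * q**(n-k), where k is the number of successes 's'."""
    q = 1.0 - p
    k = outcomes.count("s")
    return p**k * q**(len(outcomes) - k)

# Example 3A below, for a fair coin: heads on the first four tosses,
# tails on the last six.
print(sequence_probability("ssssffffff", p=0.5))  # (1/2)**10 = 0.0009765625
```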
Example 3A . Suppose that a man tosses a possibly unfair coin ten times; its probability of falling heads is \(p\) , which may be any number between 0 and 1, inclusive, depending on the construction of the coin. On each trial a success \(s\) is said to have occurred if the coin falls heads. Let us find the probability of the event \(A\) that the coin will fall heads on the first four tosses and tails on the last six tosses, assuming that the tosses are independent. It is equal to \(p^{4} q^{6}\) , since the event \(A\) is the same as the single-member event \(\{(s, s, s, s, f, f, f, f, f, f)\}\) .
One usually encounters Bernoulli trials by considering a random event \(E\) , whose probability of occurrence is \(p\) . In each trial one is interested only in the occurrence or nonoccurrence of \(E\) . A success \(s\) corresponds to an occurrence of the event \(E\) , and a failure \(f\) corresponds to a nonoccurrence of \(E\) . Thus, for example, one may be tossing darts at a target, and \(E\) may be the event that the target is hit; or one may be tossing a pair of dice, and \(E\) may represent the event that the sum of the dice is 7 (for fair dice, \(p=\frac{1}{6}\) ); or 3 men may be tossing coins simultaneously, and \(E\) may be the event that all of the coins fall heads (for fair coins, \(p=\frac{1}{8}\) ); or a woman may be pregnant, and \(E\) is the event that her child is a boy; or a man may be celebrating his 21 st birthday, and \(E\) may be the event that he will live to be 22 years old.
The Probability of \(\boldsymbol{k}\) Successes in \(\boldsymbol{n}\) Independent Repeated Bernoulli Trials . Frequently, the only fact about the outcome of a succession of \(n\) Bernoulli trials in which we are interested is the number of successes. We now compute the probability that the number of successes will be \(k\) , for each integer \(k=0,1,2, \ldots, n\) . The event “\(k\) successes in \(n\) trials” can happen in as many ways as \(k\) letters \(s\) may be distributed among \(n\) places; this is the same as the number of subsets of size \(k\) that may be formed from a set containing \(n\) members. Consequently, there are \(\binom{n}{k}\) descriptions containing exactly \(k\) successes and \(n-k\) failures. Each such description has probability \(p^{k} q^{n-k}\) . Thus we have obtained a basic formula.
The Binomial Law. The probability, denoted by \(b(k ; n, p)\) , that \(n\) independent repeated Bernoulli trials, with probability \(p\) of success and \(q=1-p\) of failure, will result in \(k\) successes and \(n-k\) failures (in which \(k=0,1, \ldots, n\) ) is given by \[b(k ; n, p)=\binom{n}{k} p^{k} q^{n-k}. \tag{3.2}\]
The law expressed by (3.2) is called the binomial law because of the role the quantities in (3.2) play in the binomial theorem, which states that \[\sum_{k=0}^{n}\binom{n}{k} p^{k} q^{n-k}=(p+q)^{n}=1, \tag{3.3}\] since \(p+q=1\) .
The reader should note that (3.2) is very similar to (3.4) of Chapter 2. However, (3.2) represents the solution to a probability problem that does not involve equally likely descriptions. The importance of this fact is illustrated by the following example. Suppose one is throwing darts at a target. It is difficult to see how one could compute the probability of the event \(E\) that one will hit the target by setting up some appropriate sample description space with equally likely descriptions. Rather, \(p\) may have to be estimated approximately by means of the frequency definition of probability. Nevertheless, even though \(p\) cannot be computed, once one has assumed a value for \(p\) one can compute by the methods of this section the probability of any event \(A\) that can be expressed in terms of independent trials of the event \(E\) .
The reader should also note that (3.2) is very similar to (1.13) . By means of the considerations of section 2, it can be seen that (3.2) and (1.13) are equivalent formulations of the same law.
The binomial law, and consequently the quantity \(b(k ; n, p)\) , occurs frequently in applications of probability theory. The quantities \(b(k ; n, p)\) , \(k=0,1, \ldots, n\) , are tabulated for \(p=0.01 (0.01) 0.50\) and \(n=2(1) 49\) (that is, for all values of \(p\) and \(n\) in the ranges \(p=0.01,0.02,0.03, \ldots\) , 0.50 and \(n=2,3,4, \ldots, 49)\) in “Tables of the Binomial Probability Distribution”, National Bureau of Standards, Applied Mathematics Series 6, Washington, 1950. A short table of \(b(k ; n, p)\) for various values of \(p\) between 0.01 and 0.5 and for \(n=2,3, \ldots, 10\) is given in Table II . It should be noted that values of \(b(k ; n, p)\) for \(p>0.5\) can be obtained from Table II by means of the formula \[b(k ; n, p)=b(n-k ; n, 1-p). \tag{3.4}\]
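For readers who wish to compute rather than consult tables, here is a minimal Python sketch of (3.2); the function name `b` mirrors the text's notation, and the two assertions spot-check (3.3) and (3.4) numerically (spot checks, not proofs):

```python
from math import comb

def b(k, n, p):
    """Binomial law (3.2): probability of k successes in n independent
    repeated Bernoulli trials with success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# The binomial theorem (3.3): the probabilities b(k; n, p) sum to 1.
assert abs(sum(b(k, 10, 0.3) for k in range(11)) - 1.0) < 1e-12

# The symmetry (3.4), useful for tables that stop at p = 0.5:
assert abs(b(7, 10, 0.8) - b(3, 10, 0.2)) < 1e-12
```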
Example 3B . By a series of tests of a certain type of electrical relay, it has been determined that in approximately \(5 \%\) of the trials the relay will fail to operate under certain specified conditions. What is the probability that in ten trials made under these conditions the relay will fail to operate one or more times?
Solution
To describe the results of the ten trials, we write a 10-tuple \(\left(z_{1}, z_{2}, \ldots, z_{10}\right)\) whose \(k\) th component \(z_{k}=s\) or \(f\) , depending on whether the relay did or did not operate on the \(k\) th trial. We next assume that the ten trials constitute ten independent repeated Bernoulli trials, with probability of success \(p=0.95\) at each trial. The probability of no failures in the ten trials is \(b(10 ; 10,0.95)=(0.95)^{10}=b(0 ; 10,0.05)\) . Consequently, the probability of one or more failures in the ten trials is equal to \[1-(0.95)^{10}=1-b(0 ; 10,0.05)=1-0.5987=0.4013.\]
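As a numerical check of this computation (our own illustration):

```python
# Example 3B: probability of at least one failure to operate in ten
# trials, with the relay operating (success) with probability 0.95.
p_all_operate = 0.95**10   # b(10; 10, 0.95) = b(0; 10, 0.05), about 0.5987
print(1 - p_all_operate)   # about 0.4013
```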
Example 3C . How to tell skill from luck. A rather famous personage in statistical circles is the tea-tasting lady whose claims have been discussed by such outstanding scholars as R. A. Fisher and J. Neyman; see J. Neyman, First Course in Probability and Statistics , Henry Holt, New York, 1950, pp. 272–289. “A Lady declares that by tasting a cup of tea made with milk she can discriminate whether the milk or the tea infusion was first added to the cup”. Specifically, the lady’s claim is “not that she could draw the distinction with invariable certainty, but that, though sometimes mistaken, she would be right more often than not”. To test the lady’s claim, she will be subjected to an experiment. She will be required to taste and classify \(n\) pairs of cups of tea, each pair containing one cup of tea made by each of the two methods under consideration. Let \(p\) be the probability that the lady will correctly classify a pair of cups. Assuming that the \(n\) pairs of cups are classified under independent and identical conditions, the probability that the lady will correctly classify \(k\) of the \(n\) pairs is \(\left(\begin{array}{l}n \\ k\end{array}\right) p^{k} q^{n-k}\) . Suppose that it is decided to grant the lady’s claims if she correctly classifies at least eight of ten pairs of cups. Let \(P(p)\) be the probability of granting the lady’s claims, given that her true probability of classifying a pair of cups is \(p\) . Then \(P(p)=\left(\begin{array}{c}10 \\ 8\end{array}\right) p^{8} q^{2}+\left(\begin{array}{c}10 \\ 9\end{array}\right) p^{9} q+p^{10}\) , since \(P(p)\) is equal to the probability that the lady will correctly classify at least eight of ten pairs. In particular, the probability that the lady will establish her claim, given that she is skillful (say, \(p=0.85\) ), is given by \(P(0.85)=0.820\) , whereas the probability that the lady will establish her claim, given that she is merely lucky (that is, \(p=0.50\) ), is given by \(P(0.50)=0.055\) .
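The two values of \(P(p)\) may be verified with a short Python sketch (the function name and its default arguments are ours):

```python
from math import comb

def accept_probability(p, n=10, threshold=8):
    """P(p): probability that the lady classifies at least `threshold`
    of n pairs correctly, so that her claim is granted."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(threshold, n + 1))

print(accept_probability(0.85))  # about 0.820 (skillful lady)
print(accept_probability(0.50))  # about 0.055 (merely lucky lady)
```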
Example 3D . The game of “odd man out” . Let \(N\) distinguishable coins be tossed simultaneously and independently, where \(N \geq 3\) . Suppose that each coin has probability \(p\) of falling heads. What is the probability that either exactly one of the coins will fall heads or that exactly one of the coins will fall tails?
Application : In a game, which we shall call “odd man out”, \(N\) persons toss coins to determine one person who will buy refreshments for the group. If there is a person in the group whose outcome (be it heads or tails) is not the same as that of any other member of the group, then that person is called an odd man and must buy refreshment for each member of the group. The probability asked for in this example is the probability that in any play of the game there will be an odd man. The next example is concerned with how many plays of the game will be required to determine an odd man.
Solution
To describe the results of the \(N\) tosses, we write an \(N\)-tuple \(\left(z_{1}, z_{2}, \ldots, z_{N}\right)\) whose \(k\)th component is \(s\) or \(f\) , depending on whether the \(k\)th coin tossed fell heads or tails. We are then considering \(N\) independent repeated Bernoulli trials, with probability \(p\) of success at each trial. The probability of exactly one success is \(\binom{N}{1} p q^{N-1}\) , whereas the probability of exactly one failure is \(\binom{N}{N-1} p^{N-1} q\) . Consequently, the probability that either exactly one of the coins will fall heads or exactly one of the coins will fall tails is equal to \(N\left(p q^{N-1}+p^{N-1} q\right)\) . If the coins are fair, so that \(p=\frac{1}{2}\) , then the probability is \(N / 2^{N-1}\) . Thus, if five persons play the game of “odd man out” with fair coins, the probability that in any play of the game there will be a loser is \(\frac{5}{16}\) .
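A sketch of this computation in Python (the function name is ours):

```python
def odd_man_probability(N, p=0.5):
    """Probability that one play of "odd man out" among N coin tossers
    produces an odd man: exactly one head or exactly one tail."""
    q = 1 - p
    return N * (p * q**(N - 1) + p**(N - 1) * q)

print(odd_man_probability(5))  # 5/16 = 0.3125 for five fair coins
```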
Example 3E . The duration of the game of “odd man out” . Let \(N\) persons play the game of “odd man out” with fair coins. What is the probability for \(n=1,2, \ldots\) that \(n\) plays will be required to conclude the game (that is, the \(n\)th play is the first play in which one of the players will have an outcome on his coin toss different from those of all the other players)?
Solution
Let us rephrase the problem. (See theoretical exercise 3.3.) Suppose that \(n\) independent plays are made of the game of “odd man out”. What is the probability that on the \(n\)th play, but not on any preceding play, there will be an odd man? Let \(P\) be the probability that on any play there will be an odd man. In example 3D it was shown that \(P=N / 2^{N-1}\) if \(N\) persons are tossing fair coins. Let \(Q=1-P\) . To describe the results of \(n\) plays, we write an \(n\)-tuple \(\left(z_{1}, z_{2}, \ldots, z_{n}\right)\) whose \(k\)th component is \(s\) or \(f\) , depending on whether the \(k\)th play does or does not result in an odd man. Assuming that the plays are independent, the \(n\) plays thus constitute independent repeated Bernoulli trials with probability \(P=N / 2^{N-1}\) of success at each trial. Consequently, the event \(\{(f, f, \ldots, f, s)\}\) of failure at all trials but the \(n\)th has probability \(Q^{n-1} P\) . Thus, if five persons toss fair coins, the probability that four plays will be required to produce an odd man is \((11 / 16)^{3}(5 / 16)\) .
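In Python (again with a function name of our own choosing):

```python
def duration_probability(n, N):
    """Probability that exactly n plays are needed to produce an odd
    man among N players with fair coins: Q**(n-1) * P."""
    P = N / 2**(N - 1)           # odd man on any one play (example 3D)
    return (1 - P)**(n - 1) * P

print(duration_probability(4, 5))  # (11/16)**3 * (5/16), about 0.1015
```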
Various approximations that exist for computing the binomial probabilities are discussed in section 2 of Chapter 6. We now briefly indicate the nature of one of these approximations, namely, that of the binomial probability law by the Poisson probability law.
The Poisson Law. A random phenomenon whose sample description space \(S\) consists of all the integers from 0 onward, so that \(S=\{0,1,2, \ldots\}\) , and on whose subsets a probability function \(P[\cdot]\) is defined in terms of a parameter \(\lambda>0\) by \[P[\{k\}]=e^{-\lambda} \frac{\lambda^{k}}{k !}, \quad k=0,1,2, \ldots, \tag{3.5}\] is said to obey the Poisson probability law with parameter \(\lambda\) . Examples of random phenomena that obey the Poisson probability law are given in section 3 of Chapter 6. For the present, let us show that under certain circumstances the number of successes in \(n\) independent repeated Bernoulli trials, with probability of success \(p\) at each trial, approximately obeys the Poisson probability law with parameter \(\lambda=n p\) .
More precisely, we show that for any fixed \(k=0,1,2, \ldots\) and \(\lambda>0\) , \[\lim _{n \rightarrow \infty}\binom{n}{k}\left(\frac{\lambda}{n}\right)^{k}\left(1-\frac{\lambda}{n}\right)^{n-k}=e^{-\lambda} \frac{\lambda^{k}}{k !}. \tag{3.6}\]
To prove (3.6) , we need only rewrite its left-hand side as \[\frac{1}{k !} \lambda^{k}\left(1-\frac{\lambda}{n}\right)^{n-k} \frac{n(n-1) \cdots(n-k+1)}{n^{k}}.\] Since \(\lim _{n \rightarrow \infty}[1-(\lambda / n)]^{n}=e^{-\lambda}\) , while for fixed \(k\) both \([1-(\lambda / n)]^{-k}\) and \(n(n-1) \cdots(n-k+1) / n^{k}\) tend to 1 as \(n \rightarrow \infty\) , we obtain (3.6) .
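The convergence in (3.6) can be observed numerically; the following Python sketch (with illustrative choices \(\lambda=3\) and \(k=2\) , our own) prints binomial terms approaching the Poisson value:

```python
from math import comb, exp, factorial

lam, k = 3.0, 2
poisson = exp(-lam) * lam**k / factorial(k)   # about 0.2240
for n in (10, 100, 1000, 10000):
    p = lam / n
    binomial = comb(n, k) * p**k * (1 - p)**(n - k)
    print(n, binomial, poisson)  # binomial terms approach the Poisson value
```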
Since (3.6) holds in the limit, we may write that it is approximately true for large values of \(n\) that \[\binom{n}{k} p^{k}(1-p)^{n-k}=e^{-n p} \frac{(n p)^{k}}{k !}. \tag{3.7}\]
We shall not consider here the remainder terms for the determination of the accuracy of the approximation formula (3.7) . In practice, the approximation represented by (3.7) is used if \(p \leq 0.1\) .
Example 3F . It is known that the probability that an item produced by a certain machine will be defective is 0.1. Let us find the probability that a sample of ten items, selected at random from the output of the machine, will contain no more than one defective item. The required probability, based on the binomial law, is \(\left(\begin{array}{c}10 \\ 0\end{array}\right)(0.1)^{0}(0.9)^{10}+\) \(\left(\begin{array}{c}10 \\ 1\end{array}\right)(0.1)^{1}(0.9)^{9}=0.7361\) , whereas the Poisson approximation given by (3.7) yields the value \(e^{-1}+e^{-1}=0.7358\) .
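A numerical check of both figures (our own illustration):

```python
from math import comb, exp, factorial

# Example 3F: no more than one defective among ten items, p = 0.1.
exact = sum(comb(10, k) * 0.1**k * 0.9**(10 - k) for k in range(2))
approx = sum(exp(-1.0) * 1.0**k / factorial(k) for k in range(2))  # lambda = np = 1
print(exact, approx)  # about 0.7361 (binomial) and 0.7358 (Poisson)
```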
Example 3G . Safety testing vaccine. Suppose that at a certain stage in the production process of a vaccine the vaccine contains, on the average, \(m\) live viruses per cubic centimeter and the constant \(m\) is known to us. Consequently, let it be assumed that in a large vat containing \(V\) cubic centimeters of vaccine there are \(n=m V\) viruses. Let a sample of vaccine be drawn from the vat; the sample’s volume is \(v\) cubic centimeters. Let us find for \(k=0,1, \ldots, n\) the probability that the sample will contain \(k\) viruses. Let us write an \(n\)-tuple \(\left(z_{1}, z_{2}, \ldots, z_{n}\right)\) to describe the location of the \(n\) viruses in the vat, the \(j\)th component \(z_{j}\) being equal to \(s\) or \(f\) , depending on whether the \(j\)th virus is or is not located in our sample. The probability \(p\) that a virus in the vat will be in our sample may be taken as the ratio of the volume of the sample to the volume of the vat, \(p=v / V\) , if it is assumed that the viruses are dispersed uniformly in the vat. Assuming further that the viruses are independently dispersed in the vat, it follows by the binomial law that the probability \(P[\{k\}]\) that the sample will contain exactly \(k\) viruses is given by \[P[\{k\}]=\binom{m V}{k}\left(\frac{v}{V}\right)^{k}\left(1-\frac{v}{V}\right)^{m V-k}. \tag{3.8}\]
If it is assumed that the sample has a volume \(v\) less than \(1 \%\) of the volume \(V\) of the vat, then by the Poisson approximation to the binomial law \[P[\{k\}]=e^{-m v} \frac{(m v)^{k}}{k !}. \tag{3.9}\]
As an application of this result, let us consider a vat of vaccine that contains five viruses per 1000 cubic centimeters. Then \(m=0.005\) . Let a sample of volume \(v=600\) cubic centimeters be taken. We are interested in determining the probability \(P[\{0\}]\) that the sample will contain no viruses. This problem is of great importance in the design of a scheme to safety-test vaccine, for if the sample contains no viruses one might be led to pass as virus free the entire contents of the vat of vaccine from which the sample was drawn. By (3.9) we have \[P[\{0\}]=e^{-m v}=e^{-(0.005)(600)}=e^{-3}=0.0498. \tag{3.10}\]
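The following sketch verifies (3.10) and compares it with the exact binomial probability (3.8); the vat volume \(V=100{,}000\) cubic centimeters is our own illustrative assumption, chosen so that \(v / V\) is less than 1 per cent:

```python
from math import exp

# The vat example: m = 0.005 viruses per cc, sample volume v = 600 cc.
m, v = 0.005, 600.0
print(exp(-m * v))        # e**(-3), about 0.0498, as in (3.10)

# Comparison with the exact binomial (3.8), for an assumed vat volume
# V = 100000 cc (our illustrative choice; any V with v/V < 1% is similar):
V = 100000.0
n, p = int(m * V), v / V  # n = mV = 500 viruses, p = v/V = 0.006
print((1 - p)**n)         # about 0.0494, close to the Poisson value
```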
Let us attempt to interpret this result. If we desire to produce virus-free vaccine, we must design a production process so that the density \(m\) of viruses in the vaccine is 0. As a check that the production process is operating properly, we sample the vaccine produced. Now, (3.10) implies that when judging a given vat of vaccine it is not sufficient to rely merely on the sample from that vat, if we are taking samples of volume 600 cubic centimeters, since \(5 \%\) of the samples drawn from vats with virus densities \(m=0.005\) viruses per cubic centimeter will yield the conclusion that no viruses are present in the vat. One way of decreasing this probability of a wrong decision might be to take into account the results of recent safety tests on similar vats of vaccine.
Independent Trials with More Than 2 Possible Outcomes . In the foregoing we considered independent trials of a random experiment with just two possible outcomes. It is natural to consider next the independent trials of an experiment with several possible outcomes, say \(r\) possible outcomes, in which \(r\) is an integer greater than 2. For the sample description space of the outcomes of a particular trial we write \(Z=\left\{s_{1}, s_{2}, \ldots, s_{r}\right\}\) . We assume that we know positive numbers \(p_{1}, p_{2}, \ldots, p_{r}\) , whose sum is 1, such that at each trial \(p_{k}\) represents the probability that \(s_{k}\) will be the outcome of that trial. In symbols, there exist numbers \(p_{1}, p_{2}, \ldots, p_{r}\) such that \[0<p_{k}<1 \quad \text{for } k=1,2, \ldots, r, \qquad p_{1}+p_{2}+\cdots+p_{r}=1. \tag{3.11}\]
Example 3H . Consider an experiment in which two fair dice are tossed. Consider three possible outcomes, \(s_{1}, s_{2}\) , and \(s_{3}\) , defined as follows: if the sum of the two dice is five or less, we say that \(s_{1}\) is the outcome; if the sum of the two dice is six, seven, or eight, we say \(s_{2}\) is the outcome; if the sum of the two dice is nine or more, we say \(s_{3}\) is the outcome. Then \(p_{1}=\frac{5}{18}, p_{2}=\frac{8}{18}, p_{3}=\frac{5}{18}\) .
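These probabilities may be verified by enumeration (our own illustration):

```python
from collections import Counter

# Verify Example 3H by enumerating the 36 equally likely outcomes
# of two fair dice.
counts = Counter()
for d1 in range(1, 7):
    for d2 in range(1, 7):
        total = d1 + d2
        counts["s1" if total <= 5 else "s2" if total <= 8 else "s3"] += 1
print({outcome: n / 36 for outcome, n in counts.items()})
# s1: 10/36 = 5/18, s2: 16/36 = 8/18, s3: 10/36 = 5/18
```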
Let \(S\) be the sample description space of \(n\) independent repeated trials of the experiment described. There are \(r^{n}\) descriptions in \(S\) . The probability \(P\left[\left\{\left(z_{1}, z_{2}, \ldots, z_{n}\right)\right\}\right]\) of any single-member event is equal to \(p_{1}^{k_{1}} p_{2}^{k_{2}} \ldots p_{r}^{k_{r}}\) , in which \(k_{1}, k_{2}, \ldots, k_{r}\) denote, respectively, the number of occurrences of \(s_{1}, s_{2}, \ldots, s_{r}\) among the components of the description \(\left(z_{1}, z_{2}, \ldots, z_{n}\right)\) .
Corresponding to the binomial law, we have the multinomial law: the probability that in \(n\) trials the outcome \(s_{1}\) will occur \(k_{1}\) times, the outcome \(s_{2}\) will occur \(k_{2}\) times, \(\ldots\) , the outcome \(s_{r}\) will occur \(k_{r}\) times, for any nonnegative integers \(k_{j}\) satisfying the condition \(k_{1}+k_{2}+\cdots+k_{r}=n\) , is given by \[\frac{n !}{k_{1} ! k_{2} ! \cdots k_{r} !} p_{1}^{k_{1}} p_{2}^{k_{2}} \cdots p_{r}^{k_{r}} \tag{3.12}\]
To prove (3.12) , one need only note that the number of descriptions in \(S\) that contain \(k_{1}\) \(s_{1}\)’s, \(k_{2}\) \(s_{2}\)’s, \(\ldots\) , \(k_{r}\) \(s_{r}\)’s is equal to the number of ways a set of size \(n\) can be partitioned into \(r\) ordered subsets of sizes \(k_{1}, k_{2}, \ldots, k_{r}\) , respectively, which is equal to \(\binom{n}{k_{1}\, k_{2}\, \cdots\, k_{r}}=n ! /\left(k_{1} ! k_{2} ! \cdots k_{r} !\right)\) . Each of these descriptions has probability \(p_{1}^{k_{1}} p_{2}^{k_{2}} \cdots p_{r}^{k_{r}}\) . Consequently, (3.12) is proved. The name “multinomial law” derives from the role played by the expressions given in (3.12) in the multinomial theorem [see (1.18) of Chapter 2]. The reader should note the similarity between (3.12) and (3.14) of Chapter 2; these two equations are in the same relationship to each other as (3.2) and (3.4) of Chapter 2.
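A Python sketch of the multinomial law (3.12), with a function name of our own choosing, applied to ten trials of the experiment of example 3H:

```python
from math import factorial, prod

def multinomial_probability(counts, probs):
    """Multinomial law (3.12): probability that outcome i occurs
    counts[i] times in n = sum(counts) independent trials."""
    n = sum(counts)
    coefficient = factorial(n) // prod(factorial(k) for k in counts)
    return coefficient * prod(p**k for p, k in zip(probs, counts))

# Ten trials of the Example 3H experiment: s1 three times, s2 four
# times, s3 three times.
print(multinomial_probability([3, 4, 3], [5/18, 8/18, 5/18]))  # about 0.0753
```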
Theoretical Exercises
3.1 . Suppose one makes \(n\) independent trials of an experiment whose probability of success at each trial is \(p\) . Show that the conditional probability that any given trial will result in a success, given that there are \(k\) successes in the \(n\) trials, is equal to \(k / n\) .
3.2 . Suppose one makes \(m+n\) independent trials of an experiment whose probability of success at each trial is \(p\) . Let \(q=1-p\) .
(i) Show that for any \(k=0,1, \ldots, n\) the conditional probability that exactly \(m+k\) trials will result in success, given that the first \(m\) trials result in success, is equal to \(\left(\begin{array}{l}n \\ k\end{array}\right) p^{k} q^{n-k}\) .
(ii) Show that the conditional probability that exactly \(m+k\) trials will result in success, given that at least \(m\) trials result in success, is equal to
\[\frac{\dbinom{m+n}{m+k}\left(\dfrac{p}{q}\right)^{k}}{\displaystyle \sum_{r=0}^{n}\dbinom{m+n}{m+r}\left(\dfrac{p}{q}\right)^{r}}. \tag{3.13}\]
3.3 . Suppose one performs a sequence of independent Bernoulli trials (in which the probability of success at each trial is \(p\) ) until the first success occurs. Show for any integer \(n=1,2, \ldots\) that the probability that \(n\) will be the number of trials required to achieve the first success is \(p q^{n-1}\) . Note: Strictly speaking, this problem should be rephrased as follows.
Consider \(n\) independent Bernoulli trials, with probability \(p\) for success on any trial. What is the probability that the \(n\) th trial will be the first trial on which a success occurs? To show that the problem originally stated is equivalent to the reformulated problem requires the consideration of the theory of a countably infinite number of independent repeated Bernoulli trials; this is beyond the scope of this book.
3.4 . The behavior of the binomial probabilities. Show that, as \(k\) goes from 0 to \(n\) , the terms \(b(k ; n, p)\) increase monotonically, then decrease monotonically, reaching their largest value (i) in the case that \((n+1) p\) is not an integer, when \(k\) is equal to the integer \(m\) satisfying the inequalities
\[(n+1) p-1<m<(n+1) p, \tag{3.14}\]
and (ii) in the case \((n+1) p\) is an integer, when \(k\) is equal to either \((n+1) p-1\) or \((n+1) p\) . Hint: Use the fact that
\[\frac{b(k ; n, p)}{b(k-1 ; n, p)}=\frac{(n-k+1) p}{k q}=1+\frac{(n+1) p-k}{k q}. \tag{3.15}\]
3.5 . Consider a series of \(n\) independent repeated Bernoulli trials at which the probability of success at each trial is \(p\) . Show that in order to have two successive integers, \(k_{1}\) and \(k_{2}\) , between 0 and \(n\) , such that the probability of \(k_{1}\) successes in the \(n\) trials will be equal to the probability of \(k_{2}\) successes in the \(n\) trials, it is necessary and sufficient that \((n+1) p\) be an integer.
3.6 . Show that the probability [denoted by \(P(r+1)\) , say] of at least \((r+1)\) successes in \((n+1)\) independent repeated Bernoulli trials, with probability \(p\) of success at each trial, is equal to \[(r+1)\binom{n+1}{r+1} \int_{0}^{p} x^{r}(1-x)^{n-r} d x. \tag{3.16}\]
Hint: \(P(r+1)\) may be regarded as a function of \(p\) for \(r\) and \(n\) fixed. By differentiation, verify that \[\frac{d}{d p} P(r+1)=\frac{(n+1) !}{r !(n-r) !} p^{r} q^{n-r}\]
3.7 . The behavior of the Poisson probabilities. Show that the probabilities of the Poisson probability law, given by (3.5) , increase monotonically, then decrease monotonically as \(k\) increases, and reach their maximum when \(k\) is the largest integer not exceeding \(\lambda\) .
3.8 . The behavior of the multinomial probabilities. Show that the probabilities of the multinomial probability law, given by (3.12) , reach their maximum at \(k_{1}, k_{2}, \ldots, k_{r}\) satisfying the inequalities, for \(i=1,2, \ldots, r\) , \[n p_{i}-1 \leq k_{i} \leq(n+r-1) p_{i}. \tag{3.17}\]
Hint: Prove first that the maximum is attained at and only at values \(k_{1}, \ldots, k_{r}\) satisfying \(p_{i} k_{j} \leq p_{j}\left(k_{i}+1\right)\) for each pair of indices \(i\) and \(j\) . Add these inequalities for all \(j\) and also for all \(i \neq j\) . (This result is taken from W. Feller, An Introduction to Probability Theory and its Applications , second edition, New York, Wiley, 1957, p. 161, where it is ascribed to P. A. P. Moran.)
Exercises
3.1 . Assuming that each child has probability 0.51 of being a boy, find the probability that a family of 4 children will have (i) exactly 1 boy, (ii) exactly 1 girl, (iii) at least one boy, (iv) at least 1 girl.
Answer
(i) 0.240; (ii) 0.260; (iii) 0.942; (iv) 0.932.
3.2 . Find the number of children a couple should have in order that the probability of their having at least 2 boys will be greater than 0.75.
3.3 . Assuming that each dart has probability 0.20 of hitting its target, find the probability that if one throws 5 darts at a target one will score (i) no hits, (ii) exactly 1 hit, (iii) at least 2 hits.
Answer
(i) 0.328; (ii) 0.410; (iii) 0.262.
3.4 . Assuming that each dart has probability 0.20 of hitting its target, find the number of darts one should throw at a target in order that the probability of at least 2 hits will be greater than 0.60.
3.5 . Consider a family with 4 children, and assume that each child has probability 0.51 of being a boy. Find the conditional probability that all the children will be boys, given that (i) the eldest child is a boy, (ii) at least 1 of the children is a boy.
Answer
(i) 0.133; (ii) 0.072.
3.6 . Assuming that each dart has probability 0.20 of hitting its target, find the conditional probability of obtaining 2 hits in 5 throws, given that one has scored an even number of hits in the 5 throws.
3.7 . A certain manufacturing process yields electrical fuses, of which, in the long run, \(15 \%\) are defective. Find the probability that in a sample of 10 fuses selected at random there will be (i) no defectives, (ii) at least 1 defective, (iii) no more than 1 defective.
Answer
(i) 0.197; (ii) 0.803; (iii) 0.544.
3.8 . A machine normally makes items of which \(5 \%\) are defective. The practice of the producer is to check the machine every hour by drawing a sample of size 10, which he inspects. If the sample contains no defectives, he allows the machine to run for another hour. What is the probability that this practice will lead him to leave the machine alone when in fact it has shifted to producing items of which \(10 \%\) are defective?
3.9 . (Continuation of 3.8). How large a sample should be inspected to insure that if \(p=0.10\) the probability that the machine will not be stopped is less than or equal to 0.01?
Answer
Choose \(n\) such that \((0.90)^{n}<0.01\) ; therefore, choose \(n=44\) .
3.10 . Consider 3 friends who contract a disease; medical experience has shown that \(10 \%\) of people contracting this disease do not recover. What is the probability that (i) none of the 3 friends will recover, (ii) all of them will recover?
3.11 . Let the probability that a person aged \(x\) years will survive 1 year be denoted by \(p_{x}\) , whereas \(q_{x}=1-p_{x}\) is the probability that he will die within a year. Consider a board of directors, consisting of a chairman and 5 members; all of the members are 60, the chairman is 65. Find the probability, in terms of \(q_{60}\) and \(q_{65}\) , that within a year (i) no members will die, (ii) not more than 1 member will die, (iii) neither a member nor the chairman will die, (iv) only the chairman will die. Evaluate these probabilities under the assumption that \(q_{60}=0.025\) and \(q_{65}=0.040\) .
3.12 . Consider a young man who is waiting for a young lady, who is late. To amuse himself while waiting, he decides to take a walk under the following set of rules. He tosses a coin (which we may assume is fair). If the coin falls heads, he walks 10 yards north; if the coin falls tails, he walks 10 yards south. He repeats this process every 10 yards and thus executes what is called a “random walk”. What is the probability that after walking 100 yards he will be (i) back at his starting point, (ii) within 10 yards of his starting point, (iii) exactly 20 yards away from his starting point.
3.13 . Do the preceding exercise under the assumption that the coin tossed by the young man is unfair and has probability 0.51 of falling heads (probability 0.49 of falling tails).
Answer
(i), (ii) 0.2456; (iii) 0.4096.
3.14 . Let 4 persons play the game of “odd man out” with fair coins. What is the probability, for \(n=1,2, \ldots\) , that \(n\) plays will be required to conclude the game (that is, the \(n\) th play is the first play on which 1 of the players will have an outcome on his coin toss that is different from those of all the other players)?
3.15 . Consider an experiment that consists of tossing 2 fair dice independently. Consider a sequence of \(n\) repeated independent trials of the experiment. What is the probability that the \(n\) th throw will be the first time that the sum of the 2 dice is a 7?
Answer
\(5^{n-1} / 6^{n}\) .
3.16 . A man wants to open his door; he has 5 keys, only 1 of which fits the door. He tries the keys successively, choosing them (i) without replacement, (ii) with replacement, until he opens the door. For each integer \(k=1,2, \ldots\) , find the probability that the \(k\)th key tried will be the first to fit the door.
3.17 . A man makes 5 independent throws of a dart at a target. Let \(p\) denote his probability of hitting the target at each throw. Given that he has made exactly 3 hits in the 5 throws, what is the probability that the first throw hit the target? Express your answer in terms as simple as you can.
Answer
\(\frac{3}{5}\) .
3.18 . Consider a loaded die; in 10 independent throws the probability that an even number will appear 5 times is twice the probability that an even number will appear 4 times. What is the probability that an even number will not appear at all in 10 independent throws of the die?
3.19 . An accident insurance company finds that 0.001 of the population incurs a certain kind of accident each year. Assuming that the company has insured 10,000 persons selected randomly from the population, what is the probability that not more than 3 of the company’s policyholders will incur this accident in a given year?
Answer
\(0.010\) .
3.20 . A certain airline finds that 4 per cent of the persons making reservations on a certain flight will not show up for the flight. Consequently, their policy is to sell to 75 persons reserved seats on a plane that has exactly 73 seats. What is the probability that for every person who shows up for the flight there will be a seat available?
3.21 . Consider a flask containing 1000 cubic centimeters of vaccine drawn from a vat that contains on the average 5 live viruses in every 1000 cubic centimeters of vaccine. What is the probability that the flask contains (i) exactly 5 live viruses, (ii) 5 or more live viruses?
Answer
(i) \(0.1755\) ; (ii) \(0.5595\) .
3.22 . The items produced by a certain machine may be classified in 4 grades, \(A, B, C\) , and \(D\) . It is known that these items are produced in the following proportions:
| Grade A | Grade B | Grade C | Grade D |
|---------|---------|---------|---------|
| 0.3 | 0.4 | 0.2 | 0.1 |
What is the probability that there will be exactly 1 item of each grade in a sample of 4 items, selected at random from the output of the machine?
3.23 . A certain door-to-door salesman sells 3 sizes of brushes, which he calls large, extra large, and giant. He estimates that among the persons he calls upon the probabilities are 0.4 that he will make no sale, 0.3 that he will sell a large brush, 0.1 that he will sell an extra large brush, and 0.2 that he will sell a giant brush. Find the probability that in 4 calls he will sell (i) no brushes, (ii) 4 large brushes, (iii) at least 1 brush of each kind.
Answer
(i) 0.0256; (ii) 0.0081; (iii) 0.1008.
3.24 . Consider a man who claims to be able to locate hidden sources of water by use of a divining rod. To test his claim, he is presented with 10 covered cans, 1 at a time; he must decide, by means of his divining rod, whether each can contains water. What is the probability that the diviner will make at least 7 correct decisions just by chance? Do you think that the test described in this exercise is fairer than the test described in exercise 2.14 of Chapter 2? Will it make a difference if the diviner knows how many of the cans actually contain water?
3.25 . In their paper “Testing the claims of a graphologist”, Journal of Personality, Vol. 16 (1947), pp. 192–197, G. R. Pascal and B. Suttell describe an experiment designed to evaluate the ability of a professional graphologist. The graphologist claimed that she could distinguish the handwriting of abnormal from that of normal persons. The experimenters selected 10 persons who had been diagnosed as psychotics by at least 2 psychiatrists. For each of these persons a normal-control person was matched for age, sex, and education. Handwriting samples from each pair of persons were placed in a separate folder and presented to the graphologist, who was able to identify correctly the sample of the psychotic in 6 of the 10 pairs.
(i) What is the probability that she would have been correct on at least 6 pairs just by chance?
(ii) How many correct judgements would the graphologist need to make so that the probability of her getting at least that many correct by chance is \(5 \%\) or less?
Answer
(i) 0.3770; (ii) 9.
3.26 . Two athletic teams play a series of games; the first team to win 4 games wins the series. The World Series is an example. Suppose that 1 of the teams is stronger than the other and has probability \(p\) of winning each game, independent of the outcomes of any other games. Assume that a game cannot end in a tie. Show that the probabilities that the series will end in 4, 5, 6, or 7 games are (i) if \(p=\frac{2}{3}\) : 0.21, 0.296, 0.274, and 0.22, respectively, and (ii) if \(p=\frac{1}{2}\) : 0.125, 0.25, 0.3125, and 0.3125, respectively.
3.27 . Suppose that 9 people, chosen at random, are asked if they favor a certain proposal. Find the probability that a majority of the persons polled will favor the proposal, given that \(45 \%\) of the population favor the proposal.
Answer
\(0.379\) .
3.28 . Suppose that (i) 2, (ii) 3 restaurants compete for the same 10 patrons. Find the number of seats each restaurant should have in order to have a probability greater than \(95 \%\) that it can serve all patrons who come to it (assuming that all patrons arrive at the same time and choose, independently of one another, each restaurant with equal probability).
3.29 . A fair die is to be thrown 9 times. What is the most probable number of throws on which the outcome is (i) a 6, (ii) an even number?
Answer
(i) 1; (ii) 4 or 5.
¹ A reader who has omitted the preceding section may take this rule as the definition of \(n\) independent repeated Bernoulli trials.