Probability Theory as the Study of Mathematical Models of Random Phenomena

To gain some insight into the amount of freedom we have in defining probability functions, it is useful to consider finite sample description spaces. The sample description space \(S\) of a random observation or experiment is said to be finite if it contains only a finite number of descriptions; that is, if the random observation or experiment under consideration possesses only a finite number of possible outcomes.

Consider now a finite sample description space \(S\) of size \(N\). We may then list the descriptions in \(S\). If we denote the descriptions in \(S\) by \(D_{1}, D_{2}, \ldots, D_{N}\), then we may write \(S=\left\{D_{1}, D_{2}, \ldots, D_{N}\right\}\). For example, let \(S\) be the sample description space of the random experiment of tossing two coins; if we define \(D_{1}=(H, H)\), \(D_{2}=(H, T)\), \(D_{3}=(T, H)\), \(D_{4}=(T, T)\), then \(S=\left\{D_{1}, D_{2}, D_{3}, D_{4}\right\}\).
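
As a concrete illustration, the following minimal Python sketch enumerates the descriptions of the two-coin experiment; the encoding of heads and tails as the characters H and T is merely a convenient choice.

```python
from itertools import product

# Each description is an ordered pair recording the two coins in order.
S = list(product("HT", repeat=2))
print(S)       # [('H', 'H'), ('H', 'T'), ('T', 'H'), ('T', 'T')]
print(len(S))  # 4 descriptions, so N = 4
```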

It is shown in section 1 of Chapter 2 that \(2^{N}\) possible events may be defined on a sample description space of finite size \(N\). For example, if \(S=\left\{D_{1}, D_{2}, D_{3}, D_{4}\right\}\), then there are sixteen possible events that may be defined; namely, \(S\), \(\emptyset\), \(\left\{D_{1}\right\}\), \(\left\{D_{2}\right\}\), \(\left\{D_{3}\right\}\), \(\left\{D_{4}\right\}\), \(\left\{D_{1}, D_{2}\right\}\), \(\left\{D_{1}, D_{3}\right\}\), \(\left\{D_{1}, D_{4}\right\}\), \(\left\{D_{2}, D_{3}\right\}\), \(\left\{D_{2}, D_{4}\right\}\), \(\left\{D_{3}, D_{4}\right\}\), \(\left\{D_{1}, D_{2}, D_{3}\right\}\), \(\left\{D_{1}, D_{2}, D_{4}\right\}\), \(\left\{D_{1}, D_{3}, D_{4}\right\}\), \(\left\{D_{2}, D_{3}, D_{4}\right\}\).
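
The count \(2^{N}\) can be verified by direct enumeration; the sketch below lists the subsets of a four-element space by size and confirms that there are sixteen in all (the labels D1, ..., D4 are arbitrary stand-ins for the descriptions).

```python
from itertools import combinations

S = ["D1", "D2", "D3", "D4"]

# An event is any subset of S; collecting the subsets of every size 0, 1, ..., N
# yields 2**N events in all.
events = [set(c) for k in range(len(S) + 1) for c in combinations(S, k)]
print(len(events))            # 16, i.e. 2**4
print(events[0], events[-1])  # set() (the impossible event) and the certain event S
```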

Consequently, to define a probability function \(P[\cdot]\) on the subsets of \(S\), one needs to specify the \(2^{N}\) values that \(P[A]\) assumes as \(A\) varies over the events on \(S\). However, the values of the probability function cannot be specified arbitrarily but must be such that axioms 1 to 3 are satisfied.

There are certain events of particularly simple structure, called the single-member events, on which it will suffice to specify the probability function \(P[\cdot]\) in order that it be specified for all events. A single-member event is an event that contains exactly one description. If an event \(E\) has as its only member the description \(D_{i}\), this fact may be expressed in symbols by writing \(E=\left\{D_{i}\right\}\). Thus \(\left\{D_{i}\right\}\) is the event that occurs if and only if the random situation being observed has description \(D_{i}\). The reader should note the distinction between \(D_{i}\) and \(\left\{D_{i}\right\}\); the former is a description, whereas the latter is an event (which, because of its simple structure, is called a single-member event).

Example 6A. The distinction between a single-member event and a sample description. Suppose that we are drawing a ball from an urn containing six balls, numbered 1 to 6 (or, alternatively, we may be observing the outcome of the toss of a die bearing the numbers 1 to 6 on its sides). As sample description space \(S\), we take \(S=\{1,2,3,4,5,6\}\). The event, denoted by \(\{2\}\), that the outcome of the experiment is a 2 is a single-member event. The event, denoted by \(\{2,4,6\}\), that the outcome of the experiment is an even number is not a single-member event. Note that 2 is a description, whereas \(\{2\}\) is an event.
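
The distinction may also be phrased in terms of membership versus inclusion: a description is a member of \(S\), whereas an event is a subset of \(S\). A small Python sketch of Example 6A makes this explicit (the variable names are illustrative only).

```python
S = {1, 2, 3, 4, 5, 6}      # sample description space for the die (or numbered balls)
description = 2              # 2 is a description, i.e. a member of S
single_member_event = {2}    # {2} is an event, i.e. a subset of S
even_event = {2, 4, 6}       # the event "the outcome is even"; not single-member

print(description in S)          # True: a description is a member of S
print(single_member_event <= S)  # True: an event is a subset of S
print(len(even_event) == 1)      # False: {2, 4, 6} contains three descriptions
```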

A probability function \(P[\cdot]\) defined on \(S\) can be specified by giving its value \(P\left[\left\{D_{i}\right\}\right]\) on each of the single-member events \(\left\{D_{i}\right\}\) corresponding to the members of \(S\). Its value \(P[E]\) on any event \(E\) may then be computed by the following formula:

Formula For Calculating the Probability of Events When the Sample Description Space Is Finite. Let \(E\) be any event on a finite sample description space \(S=\left\{D_{1}, D_{2}, \ldots, D_{N}\right\}\). Then the probability \(P[E]\) of the event \(E\) is the sum, over all descriptions \(D_{i}\) that are members of \(E\), of the probabilities \(P\left[\left\{D_{i}\right\}\right]\); we express this symbolically by writing that if \(E=\left\{D_{i_{1}}, D_{i_{2}}, \ldots, D_{i_{k}}\right\}\), then

\[P[E]=P\left[\left\{D_{i_{1}}\right\}\right]+P\left[\left\{D_{i_{2}}\right\}\right]+\cdots+P\left[\left\{D_{i_{k}}\right\}\right] . \tag{6.1}\] 

To prove (6.1), one need note only that if \(E\) consists of the descriptions \(D_{i_{1}}, D_{i_{2}}, \ldots, D_{i_{k}}\), then \(E\) can be written as the union of the mutually exclusive single-member events \(\left\{D_{i_{1}}\right\},\left\{D_{i_{2}}\right\}, \ldots,\left\{D_{i_{k}}\right\}\). Equation (6.1) follows immediately from (5.8).
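
Read computationally, (6.1) says that once the single-member probabilities are tabulated, the probability of any event is obtained by summation. The following Python sketch is one possible rendering of this recipe; the function name, the dictionary encoding, and the choice of equally likely coin tosses are assumptions made only for the illustration.

```python
def prob(event, single_member_probs):
    """Formula (6.1): P[E] is the sum of P[{D_i}] over the descriptions D_i in E."""
    return sum(single_member_probs[d] for d in event)

# Single-member probabilities for the two-coin space, taking each description
# to be equally likely (an assumption made only for this illustration).
p = {("H", "H"): 0.25, ("H", "T"): 0.25, ("T", "H"): 0.25, ("T", "T"): 0.25}

E = {("H", "H"), ("H", "T")}   # the event "heads on the first coin"
print(prob(E, p))              # 0.5
```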

Example 6B. Illustrating the use of (6.1). Suppose one is drawing a sample of size 2 from an urn containing white and red balls. Suppose that as the sample description space of the experiment one takes \(S=\{(W, W),(W, R),(R, W),(R, R)\}\). To specify a probability function \(P[\cdot]\) on \(S\), one may specify the values of \(P[\cdot]\) on the single-member events by a table:

\[
\begin{array}{c|cccc}
x & (W, W) & (W, R) & (R, W) & (R, R) \\
\hline
P[\{x\}] & \frac{6}{15} & \frac{4}{15} & \frac{4}{15} & \frac{1}{15}
\end{array}
\]

Let \(E\) be the event that the ball drawn on the first draw is white. The event \(E\) may be represented as a set of descriptions by \(E=\{(W, W),(W, R)\}\). Then, by (6.1), \(P[E]=P[\{(W, W)\}]+P[\{(W, R)\}]=\frac{6}{15}+\frac{4}{15}=\frac{10}{15}=\frac{2}{3}\).
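
The arithmetic of Example 6B can be checked with exact fractions; the following sketch (with illustrative variable names) verifies both that the tabulated probabilities sum to 1 and that \(P[E]=\frac{2}{3}\).

```python
from fractions import Fraction

# Single-member probabilities taken from the table of Example 6B.
p = {
    ("W", "W"): Fraction(6, 15),
    ("W", "R"): Fraction(4, 15),
    ("R", "W"): Fraction(4, 15),
    ("R", "R"): Fraction(1, 15),
}
print(sum(p.values()))         # 1, as required of a probability function

E = {("W", "W"), ("W", "R")}   # the ball drawn on the first draw is white
print(sum(p[d] for d in E))    # 2/3
```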