In this section we develop formulas for the probability law of a random variable \(Y\) , which arises as a function \(Y=g\left(X_{1}, X_{2}, \ldots, X_{n}\right)\) of \(n\) jointly distributed random variables \(X_{1}, X_{2}, \ldots, X_{n}\) . All of the formulas developed in this section are consequences of the following basic theorem.

Theorem 9A . Let \(X_{1}, X_{2}, \ldots, X_{n}\) be \(n\) jointly distributed random variables, with joint probability law \(P_{X_{1}, X_{2}, \ldots, X_{n}}[\cdot]\) . Let \(Y=g\left(X_{1}, X_{2}, \ldots, X_{n}\right)\) . Then, for any real number \(y\) , \begin{align} F_{Y}(y) & =P[Y \leq y] \tag{9.1}\\ & =P_{X_{1}, X_{2}, \ldots, X_{n}}\left[\left\{\left(x_{1}, x_{2}, \ldots, x_{n}\right): \quad g\left(x_{1}, x_{2}, \ldots, x_{n}\right) \leq y\right\}\right]. \end{align} 

The proof of theorem 9A is immediate, since the event that \(Y \leq y\) is logically equivalent to the event that \(g\left(X_{1}, \ldots, X_{n}\right) \leq y\) , which is the event that the observed values of the random variables \(X_{1}, X_{2}, \ldots, X_{n}\) lie in the set of \(n\) -tuples \(\left\{\left(x_{1}, \ldots, x_{n}\right): g\left(x_{1}, x_{2}, \ldots, x_{n}\right) \leq y\right\}\) .

We are especially interested in the case in which the random variables \(X_{1}, X_{2}, \ldots, X_{n}\) are jointly continuous, with joint probability density \(f_{X_{1}, X_{2}, \ldots, X_{n}}(x_{1}, x_{2}, \ldots, x_{n})\) . Then (9.1) may be written \[F_{Y}(y)=\underset{\left\{\left(x_{1}, x_{2}, \ldots, x_{n}\right): g\left(x_{1}, x_{2}, \ldots, x_{n}\right) \leq y\right\}}{\iint\cdots\int} f_{X_{1}, X_{2}, \ldots, X_{n}}\left(x_{1}, x_{2}, \ldots, x_{n}\right)\, d x_{1} d x_{2} \cdots d x_{n}. \tag{9.2}\] 

To begin with, let us obtain the probability law of the sum of two jointly continuous random variables \(X_{1}\) and \(X_{2}\) , with a joint probability density function \(f_{X_{1}, X_{2}}(\cdot, \cdot)\) . Let \(Y=X_{1}+X_{2}\) . Then \begin{align} F_{Y}(y) & =P\left[X_{1}+X_{2} \leq y\right]=P_{X_{1}, X_{2}}\left[\left\{\left(x_{1}, x_{2}\right): x_{1}+x_{2} \leq y\right\}\right] \tag{9.3}\\ & =\iint\limits_{\left\{\left(x_{1}, x_{2}\right):x_{1}+x_{2} \leq y\right\}} f_{X_1,X_2}\left(x_{1}, x_{2}\right)\, dx_{1} d x_{2} \\ & =\int_{-\infty}^{\infty} d x_{1} \int_{-\infty}^{y-x_{1}} d x_{2}\, f_{X_{1}, X_{2}}\left(x_{1}, x_{2}\right) \\ & =\int_{-\infty}^{\infty} d x_{1} \int_{-\infty}^{y} d x_{2}^{\prime}\, f_{X_{1}, X_{2}}\left(x_{1}, x_{2}^{\prime}-x_{1}\right). \end{align} By differentiation of the last equation in (9.3), we obtain the formula for the probability density function of \(X_{1}+X_{2}\) : for any real number \(y\) , \[f_{X_{1}+X_{2}}(y)=\int_{-\infty}^{\infty} d x_{1}\, f_{X_{1}, X_{2}}\left(x_{1}, y-x_{1}\right)=\int_{-\infty}^{\infty} d x_{2}\, f_{X_{1}, X_{2}}\left(y-x_{2}, x_{2}\right). \tag{9.4}\] 

If the random variables \(X_{1}\) and \(X_{2}\) are independent, then for any real number \(y\) \[f_{X_{1}+X_{2}}(y)=\int_{-\infty}^{\infty} d x f_{X_{1}}(x) f_{X_{2}}(y-x)=\int_{-\infty}^{\infty} d x f_{X_{1}}(y-x) f_{X_{2}}(x). \tag{9.5}\] 

The mathematical operation involved in (9.5) arises in many parts of mathematics. Consequently, it has been given a name. Consider three functions \(f_{1}(\cdot), f_{2}(\cdot)\) , and \(f_{3}(\cdot)\) , which are such that for every real number \(y\) \[f_{3}(y)=\int_{-\infty}^{\infty} f_{1}(x) f_{2}(y-x)dx; \tag{9.6}\] the function \(f_{3}(\cdot)\) is then said to be the convolution of the functions \(f_{1}(\cdot)\) and \(f_{2}(\cdot)\) , and in symbols we write \(f_{3}(\cdot)=f_{1}(\cdot) * f_{2}(\cdot)\) .

In terms of the notion of convolution, we may express (9.5) as follows. The probability density function \(f_{X_{1}+X_{2}}(\cdot)\) of the sum of two independent continuous random variables is the convolution of the probability density functions \(f_{X_{1}}(\cdot)\) and \(f_{X_{2}}(\cdot)\) of the random variables. 
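
The convolution formula (9.5) also lends itself to a direct numerical check. The following short sketch (in Python, with the grid spacing, the exponential densities, and the evaluation point chosen only for illustration) approximates the convolution integral for two independent exponential random variables with parameter 1 and compares the result with the exact gamma density \(y e^{-y}\) of their sum.

```python
# A minimal numerical sketch of (9.5)-(9.6): the density of X1 + X2 for two
# independent exponential(1) random variables, obtained by discretizing the
# convolution integral and compared with the exact gamma density y*exp(-y).
# The grid spacing and the exponential choice are illustrative assumptions.
import numpy as np

dx = 0.001
x = np.arange(0.0, 20.0, dx)
f1 = np.exp(-x)                     # density of X1 on the grid
f2 = np.exp(-x)                     # density of X2 on the grid

# discrete approximation of the convolution integral (9.6)
f_sum = np.convolve(f1, f2)[: len(x)] * dx

y = 3.0
print(f_sum[int(y / dx)])           # approximately 0.149
print(y * np.exp(-y))               # exact value 3*e^{-3} = 0.1494...
```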

One can prove similarly that if the random variables \(X_{1}\) and \(X_{2}\) are jointly discrete, then the probability mass function of their sum, \(X_{1}+X_{2}\) , for any real number \(y\) is given by \begin{align} p_{X_{1}+X_{2}}(y) & =\sum_{\substack{\text { over all } x \text { such that } \\ p_{X_{1}, X_{2}}(x, y-x)>0}} p_{X_{1}, X_{2}}(x, y-x) \tag{9.7}\\ & =\sum_{\substack{\text { over all } x \text { such that } \\ p_{X_{1}, X_{2}}(y-x, x)>0}} p_{X_{1}, X_{2}}(y-x, x). \end{align} 
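
As a small sketch of the discrete formula (9.7), consider two independent fair dice (an assumption made only for this illustration); summing \(p(x)\, p(y-x)\) over \(x\) reproduces the familiar triangular probability mass function of the sum.

```python
# A sketch of the discrete formula (9.7) for the sum of two independent fair
# dice.  The fair-dice pmf and the use of exact fractions are illustrative
# assumptions, not part of the text.
from fractions import Fraction

p = {k: Fraction(1, 6) for k in range(1, 7)}      # pmf of a single die

def pmf_sum(y):
    # (9.7) with p_{X1,X2}(x1, x2) = p(x1) * p(x2), by independence
    return sum(p[x] * p[y - x] for x in p if (y - x) in p)

for y in range(2, 13):
    print(y, pmf_sum(y))          # 1/36, 2/36, ..., 6/36, ..., 1/36
```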

In the same way that we proved (9.4) we may prove the formulas for the probability density function of the difference, product, and quotient of two jointly continuous random variables:

\begin{align} f_{X-Y}(y) & =\int_{-\infty}^{\infty} d x\ f_{X, Y}(y+x, x)=\int_{-\infty}^{\infty} d x f_{X, Y}(x, x-y) . \tag{9.8}\\ f_{X Y}(y) & =\int_{-\infty}^{\infty} dx\ \frac{1}{|x|} f_{X, Y}\left(\frac{y}{x}, x\right)=\int_{-\infty}^{\infty} d x \frac{1}{|x|} f_{X, Y}\left(x, \frac{y}{x}\right) . \tag{9.9}\\ f_{X / Y}(y) & =\int_{-\infty}^{\infty} d x\ |x| f_{X, Y}(y x, x) . \tag{9.10} \end{align} 

We next consider the function of two variables given by \(g\left(x_{1}, x_{2}\right)=\) \(\sqrt{x_{1}^{2}+x_{2}^{2}}\) and obtain the probability law of \(Y=\sqrt{X_{1}^{2}+X_{2}^{2}}\) . Suppose one is taking a walk in a plane; starting at the origin, one takes a step of magnitude \(X_{1}\) in one direction and then in a perpendicular direction one takes a step of magnitude \(X_{2}\) . One will then be at a distance \(Y\) from the origin given by \(Y=\sqrt{X_{1}^{2}+X_{2}^{2}}\) . Similarly, suppose one is shooting at a target; let \(X_{1}\) and \(X_{2}\) denote the coordinates of the shot, taken along perpendicular axes, the center of which is the target. Then \(Y=\sqrt{X_{1}^{2}+X_{2}^{2}}\) is the distance from the target to the point hit by the shot.

The distribution function of \(Y=\sqrt{X_{1}^{2}+X_{2}^{2}}\) clearly satisfies \(F_{Y}(y)=0\) for \(y<0\) , and for \(y \geq 0\) 

\[F_{Y}(y)=\underset{\left\{\left(x_{1}, x_{2}\right): x_{1}^{2}+x_{2}^{2} \leq y^{2}\right\}}{\displaystyle\iint} f_{X_{1}, X_{2}}\left(x_{1}, x_{2}\right) d x_{1} d x_{2}. \tag{9.11}\] 

We express the double integral in (9.11) by means of polar coordinates. We have, letting \(x_{1}=r \cos \theta, x_{2}=r \sin \theta\) ,

\[F_{Y}(y)=\int_{0}^{2 \pi} d \theta \int_{0}^{y} r d r f_{X_{1}, X_{2}}(r \cos \theta, r \sin \theta). \tag{9.12}\] 

If \(X_{1}\) and \(X_{2}\) are jointly continuous, then \(Y\) is continuous, with a probability density function obtained by differentiating (9.12) with respect to \(y\) . Consequently, \[f_{\sqrt{X_{1}^{2}+X_{2}^{2}}}(y) = \begin{cases} y \displaystyle \int_{0}^{2 \pi} d\theta \, f_{X_{1}, X_{2}}(y \cos \theta, y \sin \theta), & \text{for } y > 0, \\ 0, & \text{for } y < 0, \end{cases} \tag{9.13}\] \[f_{X_{1}^{2}+X_{2}^{2}}(y) = \begin{cases} \frac{1}{2} \displaystyle \int_{0}^{2 \pi} d\theta \, f_{X_{1}, X_{2}}(\sqrt{y} \cos \theta, \sqrt{y} \sin \theta), & \text{for } y > 0, \\ 0, & \text{for } y < 0, \end{cases} \tag{9.14}\] where (9.14) follows from (9.13) and (8.8).

The formulas given in this section provide tools for the solution of a great many problems of theoretical and applied probability theory, as examples 9A to 9F indicate. In particular, the important problem of finding the probability distribution of the sum of two independent random variables can be treated by using (9.5) and (9.7). One may prove results such as the following:

Theorem 9B . Let \(X_{1}\) and \(X_{2}\) be independent random variables. 

(i) If \(X_{1}\) is normally distributed with parameters \(m_{1}\) and \(\sigma_{1}\) and \(X_{2}\) is normally distributed with parameters \(m_{2}\) and \(\sigma_{2}\) , then \(X_{1}+X_{2}\) is normally distributed with parameters \(m=m_{1}+m_{2}\) and \(\sigma=\sqrt{\sigma_{1}^{2}+\sigma_{2}^{2}}\) .

(ii) If \(X_{1}\) obeys a binomial probability law with parameters \(n_{1}\) and \(p\) and \(X_{2}\) obeys a binomial probability law with parameters \(n_{2}\) and \(p\) , then \(X_{1}+X_{2}\) obeys a binomial probability law with parameters \(n_{1}+n_{2}\) and \(p\) .

(iii) If \(X_{1}\) is Poisson distributed with parameter \(\lambda_{1}\) and \(X_{2}\) is Poisson distributed with parameter \(\lambda_{2}\) , then \(X_{1}+X_{2}\) is Poisson distributed with parameter \(\lambda=\lambda_{1}+\lambda_{2}\) .

(iv) If \(X_{1}\) obeys a Cauchy probability law with parameters \(a_{1}\) and \(b_{1}\) and \(X_{2}\) obeys a Cauchy probability law with parameters \(a_{2}\) and \(b_{2}\) , then \(X_{1}+X_{2}\) obeys a Cauchy probability law with parameters \(a_{1}+a_{2}\) and \(b_{1}+b_{2}\) .

(v) If \(X_{1}\) obeys a gamma probability law with parameters \(r_{1}\) and \(\lambda\) and \(X_{2}\) obeys a gamma probability law with parameters \(r_{2}\) and \(\lambda\) , then \(X_{1}+X_{2}\) obeys a gamma probability law with parameters \(r_{1}+r_{2}\) and \(\lambda\) .

A proof of part (i) of theorem 9B is given in example 9A. The other parts of theorem 9B are left to the reader as exercises. A proof of theorem 9B from another point of view is given in section 4 of Chapter 9.
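
The parts of theorem 9B can also be illustrated by simulation. The following sketch (with arbitrarily chosen parameter values and sample size, which are not part of the theorem) compares the empirical probability mass function of the sum of two independent Poisson random variables with the Poisson probability mass function with parameter \(\lambda_{1}+\lambda_{2}\), as part (iii) asserts.

```python
# A quick Monte Carlo illustration of theorem 9B(iii), offered only as a
# sketch: the empirical pmf of X1 + X2 for independent Poisson variables is
# compared with the Poisson pmf with parameter lambda1 + lambda2.  The
# parameter values and sample size are arbitrary assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
lam1, lam2, n = 2.0, 3.5, 200_000
s = rng.poisson(lam1, n) + rng.poisson(lam2, n)

for k in range(4, 8):
    print(k, np.mean(s == k), stats.poisson.pmf(k, lam1 + lam2))
```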

Example 9A . Let \(X_{1}\) and \(X_{2}\) be independent random variables; \(X_{1}\) is normally distributed with parameters \(m_{1}\) and \(\sigma_{1}\) , whereas \(X_{2}\) is normally distributed with parameters \(m_{2}\) and \(\sigma_{2}\) . Show that their sum \(X_{1}+X_{2}\) is normally distributed, with parameters \(m\) and \(\sigma\) satisfying the relations \[m=m_{1}+m_{2}, \quad \sigma^{2}=\sigma_{1}^{2}+\sigma_{2}^{2}. \tag{9.15}\] 

 

Solution

By (9.5), \[f_{X_{1}+X_{2}}(y)=\frac{1}{2 \pi \sigma_{1} \sigma_{2}} \int_{-\infty}^{\infty} d x \exp \left[-\frac{1}{2}\left(\frac{x-m_{1}}{\sigma_{1}}\right)^{2}\right] \exp \left[-\frac{1}{2}\left(\frac{y-x-m_{2}}{\sigma_{2}}\right)^{2}\right]\] By (6.9) of Chapter 4, it follows that \begin{align} f_{X_{1}+X_{2}}(y)=\frac{1}{\sqrt{2 \pi} \sigma} & \exp \left[-\frac{1}{2}\left(\frac{y-m}{\sigma}\right)^{2}\right] \tag{9.16}\\ & \times\left\{\frac{1}{\sqrt{2 \pi} \sigma^{*}} \int_{-\infty}^{\infty} d x \exp \left[-\frac{1}{2}\left(\frac{x-m^{*}}{\sigma^{*}}\right)^{2}\right]\right\}, \end{align} where

 

\[m^{*}=\frac{m_{1} \sigma_{2}^{2}+\left(y-m_{2}\right) \sigma_{1}^{2}}{\sigma_{1}^{2}+\sigma_{2}^{2}}, \quad \sigma^{* 2}=\frac{\sigma_{1}^{2} \sigma_{2}^{2}}{\sigma_{1}^{2}+\sigma_{2}^{2}}\] 

However, the expression in braces in equation (9.16) is equal to 1. Therefore, it follows that \(X_{1}+X_{2}\) is normally distributed with parameters \(m\) and \(\sigma\) , given by (9.15).

Example 9B . The assembly of parts . It is often the case that a dimension of an assembled article is the sum of the dimensions of several parts. An electrical resistance may be the sum of several electrical resistances. The weight or thickness of the article may be the sum of the weights or thicknesses of individual parts. The probability law of the individual dimensions may be known; what is of interest is the probability law of the dimension of the assembled article. An answer to this question may be obtained from (9.5) and (9.7) if the individual dimensions are independent random variables. For example, let us consider two 10 -ohm resistors assembled in series. Suppose that, in fact, the resistances of the resistors are independent random variables, each obeying a normal probability law with mean \(10 \mathrm{ohms}\) and standard deviation \(0.5 \mathrm{ohms}\) . The unit, consisting of the two resistors assembled in series, has resistance equal to the sum of the individual resistances; therefore, the resistance of the unit obeys a normal probability law with mean \(20 \mathrm{ohms}\) and standard deviation \(\left\{(0.5)^{2}+(0.5)^{2}\right\}^{1 / 2}=0.707\) ohms. Now suppose one wishes to measure the resistance of the unit, using an ohmmeter whose error of measurement is a random variable obeying a normal probability law with mean 0 and standard deviation \(0.5 \mathrm{ohms}\) . The measured resistance of the unit is a random variable obeying a normal probability law with mean \(20 \mathrm{ohms}\) and standard deviation \(\sqrt{(0.707)^{2}+(0.5)^{2}}=0.866\) ohms.
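
The arithmetic of example 9B can be summarized in a few lines: variances of independent normal summands add, so standard deviations combine as a root sum of squares. The final line below computes an added illustrative quantity (the probability that the measured resistance falls within 1 ohm of 20 ohms), which is an assumption of this sketch and is not asserted in the example.

```python
# A sketch of the arithmetic in example 9B: independent normal errors add in
# variance, so standard deviations combine as a root sum of squares.  The
# probability computed at the end is an illustrative assumption only.
from math import sqrt
from scipy.stats import norm

sd_unit = sqrt(0.5**2 + 0.5**2)        # 0.707 ohms: two resistors in series
sd_meas = sqrt(sd_unit**2 + 0.5**2)    # 0.866 ohms: adding the meter error
print(sd_unit, sd_meas)

# chance the measured resistance lies within 1 ohm of 20 ohms (illustrative)
print(norm.cdf(21, 20, sd_meas) - norm.cdf(19, 20, sd_meas))   # about 0.75
```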

Example 9C . Let \(X_{1}\) and \(X_{2}\) be independent random variables, each normally distributed with parameters \(m=0\) and \(\sigma>0\) . Then \[f_{X_{1}, X_{2}}(y \cos \theta, y \sin \theta)=\frac{1}{2 \pi \sigma^{2}} e^{-1 / 2\left(\frac{y}{\sigma}\right)^{2}}.\] Consequently, for \(y>0\) \begin{align} f_{\sqrt{X_{1}^{2}+X_{2}^{2}}}(y) & =\frac{y}{\sigma^{2}} e^{-1 / 2\left(\frac{y}{\sigma}\right)^{2}}. \tag{9.17}\\ f_{X_{1}^{2}+X_{2}^{2}}(y) & =\frac{1}{2 \sigma^{2}} e^{-\frac{1}{2 \sigma^{2}} y} \tag{9.18} \end{align} In words, \(\sqrt{X_{1}^{2}+X_{2}^{2}}\) has a Rayleigh distribution with parameter \(\sigma\) , whereas \(X_{1}^{2}+X_{2}^{2}\) has a \(\chi^{2}\) distribution with parameters \(n=2\) and \(\sigma\) .
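
A brief simulation sketch of example 9C (with an illustrative choice of \(\sigma\), sample size, and evaluation point, none of which appear in the text) compares the empirical density of \(\sqrt{X_{1}^{2}+X_{2}^{2}}\) with the Rayleigh density (9.17).

```python
# A simulation sketch of example 9C: for independent N(0, sigma^2) components
# the radius sqrt(X1^2 + X2^2) should follow the Rayleigh density (9.17).
# The value of sigma, the sample size, and the evaluation point are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
sigma, n = 2.0, 500_000
r = np.hypot(rng.normal(0, sigma, n), rng.normal(0, sigma, n))

y, h = 2.5, 0.05
empirical = np.mean((r > y - h) & (r < y + h)) / (2 * h)
exact = (y / sigma**2) * np.exp(-0.5 * (y / sigma)**2)     # equation (9.17)
print(empirical, exact)
```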

Example 9D . The probability distribution of the envelope of narrow-band noise . A family of random variables \(X(t)\) , defined for \(t>0\) , is said to represent a narrow-band noise voltage [see S. O. Rice, “Mathematical Analysis of Random Noise,” Bell System Tech. Jour. , Vol. 24 (1945), p. 81] if \(X(t)\) is represented in the form \[X(t)=X_{c}(t) \cos \omega t+X_{s}(t) \sin \omega t, \tag{9.19}\] in which \(\omega\) is a known frequency, whereas \(X_{c}(t)\) and \(X_{s}(t)\) are independent normally distributed random variables with means 0 and equal variances \(\sigma^{2}\) . The envelope of \(X(t)\) is then defined as \[R(t)=\left[X_{c}^{2}(t)+X_{s}^{2}(t)\right]^{1 / 2}. \tag{9.20}\] In view of example 9C, it is seen that the envelope \(R(t)\) has a Rayleigh distribution with parameter \(\alpha=\sigma\) .

Example 9E . Let \(U\) and \(V\) be independent random variables, such that \(U\) is normally distributed with mean 0 and variance \(\sigma^{2}\) and \(V\) has a \(\chi\) distribution with parameters \(n\) and \(\sigma\) . Show that the quotient \(T=U / V\) has Student’s distribution with parameter \(n\) .

 

Solution

By (9.10), the probability density function of \(T\) for any real number \(y\) is given by

 

\begin{align} f_{T}(y) & =\int_{0}^{\infty} d x x f_{U}(y x) f_{V}(x) \\ & =\frac{K}{\sigma^{n+1}} \int_{0}^{\infty} d x\;x \exp \left[-\frac{1}{2}\left(\frac{y x}{\sigma}\right)^{2}\right] x^{n-1} \exp \left[-\frac{n}{2}\left(\frac{x}{\sigma}\right)^{2}\right] \end{align} 

where

\[K=\frac{2(n / 2)^{n / 2}}{\Gamma(n / 2) \sqrt{2 \pi}}.\] 

By making the change of variable \(u=x \sqrt{\left(y^{2}+n\right)} / \sigma\) , it follows that

\begin{align} f_{T}(y) & =K\left(y^{2}+n\right)^{-(n+1) / 2} \int_{0}^{\infty} d u u^{n} e^{-1 / 2 u^{2}} \\ & =K\left(y^{2}+n\right)^{-(n+1) / 2} 2^{(n-1) / 2} \Gamma\left(\frac{n+1}{2}\right), \end{align} 

from which one may immediately deduce that the probability density function of \(T\) is given by (4.15) of Chapter 4.
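
The conclusion of example 9E can be illustrated by simulation. The sketch below realizes \(V\) as \(\sqrt{(1 / n) \sum_{k=1}^{n} X_{k}^{2}}\) for independent normal \(X_{k}\) (as in exercise 9.2 below) and compares the empirical density of \(T=U / V\) with the Student's \(t\) density with parameter \(n\); the particular parameter values and sample size are illustrative assumptions.

```python
# A Monte Carlo sketch of example 9E: T = U / V, with U normal with mean 0 and
# V an independent chi-distributed variable with parameters n and sigma
# (realized as sqrt of the average of n squared N(0, sigma^2) variables), is
# compared with the Student t density with parameter n.  Parameter values and
# sample size are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sigma, n, m = 1.5, 5, 400_000
u = rng.normal(0, sigma, m)
v = np.sqrt(np.mean(rng.normal(0, sigma, (n, m))**2, axis=0))
t = u / v

y, h = 1.0, 0.05
empirical = np.mean((t > y - h) & (t < y + h)) / (2 * h)
print(empirical, stats.t.pdf(y, df=n))    # the two values should nearly agree
```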

Example 9F . Distribution of the range . A ship is shelling a target on an enemy shore line, firing \(n\) independent shots, all of which may be assumed to fall on a straight line and to be distributed according to the distribution function \(F(x)\) with probability density function \(f(x)\) . Define the range (or span) \(R\) of the attack as the interval between the location of the extreme shells. Find the probability density function of \(R\) .

 

Solution

Let \(X_{1}, X_{2}, \ldots, X_{n}\) be independent random variables representing the coordinates locating the position of the \(n\) shots. The range \(R\) may be written \(R=V-U\) , in which \(V=\operatorname{maximum}\left(X_{1}, X_{2}, \ldots, X_{n}\right)\) and \(U=\) minimum \(\left(X_{1}, X_{2}, \ldots, X_{n}\right)\) . The joint distribution function \(F_{U, V}(u, v)\) is found as follows. If \(u \geq v\) , then \(F_{U, V}(u, v)\) is the probability that simultaneously \(X_{1} \leq v, \ldots, X_{n} \leq v\) ; consequently,

 

\[F_{U, V}(u, v)=[F(v)]^{n} \quad \text { if } u \geq v, \tag{9.21}\] 

since \(P\left[X_{k} \leq v\right]=F(v)\) for \(k=1,2, \ldots, n\) . If \(u < v\) , then \(F_{U, V}(u, v)\) is the probability that simultaneously \(X_{1} \leq v, \ldots, X_{n} \leq v\) , but not simultaneously \(u < X_{1} \leq v, \ldots, u < X_{n} \leq v\) ; consequently,

\[F_{U, V}(u, v)=[F(v)]^{n}-[F(v)-F(u)]^{n} \quad \text { if } u < v. \tag{9.22}\] 

The joint probability density of \(U\) and \(V\) is then obtained by differentiation. It is given by

\begin{align} f_{U, V}(u, v) & = \begin{cases} 0, & \text{if } u > v \tag{9.23} \\ n(n-1)[F(v)-F(u)]^{n-2} f(u) f(v), & \text{if } u < v. \end{cases} \end{align} 

From (9.8) and (9.23) it follows that the probability density function of the range \(R\) of \(n\) independent continuous random variables, whose individual distribution functions are all equal to \(F(x)\) and whose individual probability density functions are all equal to \(f(x)\) , is given by

\begin{align} f_{R}(x) & = \int_{-\infty}^{\infty} d v \, f_{U, V}(v-x, v) \tag{9.24}\\ & = \begin{cases} 0, & \text{for } x < 0 \\ n(n-1)\displaystyle \int_{-\infty}^{\infty} [F(v)-F(v-x)]^{n-2} f(v-x) f(v) \, d v, & \text{for } x > 0. \end{cases} \end{align} 

The distribution function of \(R\) is then given by

\begin{align} F_{R}(x) & = \begin{cases} 0, & \text{if } x < 0 \tag{9.25}\\ n \int_{-\infty}^{\infty} [F(v)-F(v-x)]^{n-1} f(v) \, d v, & \text{if } x \geq 0. \end{cases} \end{align} 

Equations (9.24) and (9.25) can be explicitly evaluated only in a few cases, such as that in which each random variable \(X_{1}, X_{2}, \ldots, X_{n}\) is uniformly distributed on the interval 0 to 1. Then from (9.24) it follows that

\begin{align} f_{R}(x) & = \begin{cases} n(n-1) x^{n-2}(1-x), & \text{if } 0 \leq x \leq 1 \tag{9.26}\\ 0, & \text{elsewhere.} \end{cases} \end{align} 
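
A simulation sketch of (9.26), with the value of \(n\), the sample size, and the evaluation point chosen only for illustration, compares the empirical density of the range of \(n\) independent uniform random variables with \(n(n-1) x^{n-2}(1-x)\).

```python
# A simulation sketch of (9.26): the range of n independent Uniform(0, 1)
# random variables should have density n(n-1) x^(n-2) (1-x) on [0, 1].
# The choices of n, sample size, and evaluation point are illustrative
# assumptions.
import numpy as np

rng = np.random.default_rng(3)
n, m = 5, 300_000
samples = rng.uniform(0, 1, (m, n))
r = samples.max(axis=1) - samples.min(axis=1)

x, h = 0.7, 0.01
empirical = np.mean((r > x - h) & (r < x + h)) / (2 * h)
exact = n * (n - 1) * x**(n - 2) * (1 - x)                 # equation (9.26)
print(empirical, exact)
```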

 

A Geometrical Method for Finding the Probability Law of a Function of Several Random Variables . Consider \(n\) jointly continuous random variables \(X_{1}, X_{2}, \ldots, X_{n}\) , and the random variable \(Y=g\left(X_{1}, X_{2}, \ldots, X_{n}\right)\) . Suppose that the joint probability density function of \(X_{1}, X_{2}, \ldots, X_{n}\) has the property that it is constant on the surface in \(n\) -dimensional space obtained by setting \(g\left(x_{1}, \ldots, x_{n}\right)\) equal to a constant; more precisely, suppose that there is a function of a real variable, denoted by \(f_{g}(\cdot)\) , such that

\[f_{X_{1}, X_{2}, \ldots, X_{n}}\left(x_{1}, \ldots, x_{n}\right)=f_{g}(y), \quad \text { if } g\left(x_{1}, x_{2}, \ldots, x_{n}\right)=y. \tag{9.27}\] 

If (9.27) holds and \(g(\cdot)\) is a continuous function, we obtain a simple formula for the probability density function \(f_{Y}(\cdot)\) of the random variable \(Y=g\left(X_{1}, X_{2}, \ldots, X_{n}\right)\) ; for any real number \(y\) \[f_{Y}(y)=f_{g}(y) \frac{d V_{g}(y)}{dy}, \tag{9.28}\] in which \(V_{g}(y)\) represents the volume within the surface in \(n\) -dimensional space with equation \(g\left(x_{1}, x_{2}, \ldots, x_{n}\right)=y\) ; in symbols,

\[V_{g}(y)=\underset{\left\{\left(x_{1}, x_{2}, \ldots, x_{n}\right): g\left(x_{1}, \ldots, x_{n}\right) \leq y\right\}}{\iint\cdots\int} d x_{1} d x_{2} \cdots d x_{n}. \tag{9.29}\] 

We sketch a proof of (9.28). Let \(B(y ; h)=\left\{\left(x_{1}, x_{2}, \ldots, x_{n}\right): y<\right.\) \(\left.g\left(x_{1}, \ldots, x_{n}\right) \leq y+h\right\}\) . Then, by the law of the mean for integrals,

\begin{align} F_{Y}(y+h)-F_{Y}(y) & =\underset{B(y ; h)} {\iint\cdots \int} f_{X_{1}, \ldots, X_{n}}\left(x_{1}, \ldots, x_{n}\right) d x_{1} \cdots d x_{n} \\ & =f_{X_{1}, \ldots, X_{n}}\left(x_{1}^{\prime}, \ldots, x_{n}^{\prime}\right)\left[V_{g}(y+h)-V_{g}(y)\right] \end{align} 

for some point \(\left(x_{1}^{\prime}, \ldots, x_{n}^{\prime}\right)\) in the set \(B(y ; h)\) . Now, as \(h\) tends to 0 , \(f_{X_{1}, \cdots, X_{n}}\left(x_{1}^{\prime}, \ldots, x_{n}^{\prime}\right)\) tends to \(f_{g}(y)\) , assuming \(f_{g}(\cdot)\) is a continuous function, and \(\left[V_{g}(y+h)-V_{g}(y)\right] / h\) tends to \(d V_{g}(y) / d y\) . From these facts, one immediately obtains \((9.28)\) .

We illustrate the use of (9.28) by obtaining a basic formula, which generalizes example 9C.

Example 9G . Let \(X_{1}, X_{2}, \ldots, X_{n}\) be independent random variables, each normally distributed with mean 0 and variance 1. Let \(Y=\) \(\sqrt{X_{1}^{2}+X_{2}^{2}+\cdots+X_{n}^{2}}\) . Show that \[f_{Y}(y) =\left\{ \begin{aligned} &\frac{y^{n-1} e^{-\frac{1}{2} y^{2}}}{\displaystyle \int_{0}^{\infty} y^{n-1} e^{-\frac{1}{2} y^{2}} dy}, && \text{for } y > 0, \\ &0, && \text{for } y < 0, \end{aligned}\right. \tag{9.30}\] where \(\displaystyle \int_{0}^{\infty} y^{n-1} e^{-y^{2}/2} d y=2^{(n-2) / 2} \Gamma(n / 2)\) . In words, \(Y\) has a \(\chi\) distribution with parameters \(n\) and \(\sigma=\sqrt{n}\) .

 

Solution

Define \(g\left(x_{1}, \ldots, x_{n}\right)=\sqrt{x_{1}^{2}+\ldots+x_{n}^{2}}\) and \(f_{g}(y)=\) \((2 \pi)^{-n / 2} e^{-y^{2}/2}\) . Then (9.27) holds. Now \(V_{g}(y)\) is the volume within a sphere in \(n\) -dimensional space of radius \(y\) . Clearly, \(V_{g}(y)=0\) for \(y<0\) , and for \(y \geq 0\) \[V_{g}(y)=y^{n} \quad \underset{\left\{\left(x_{1}, \cdots, x_{n}\right): x_{1}^{2}+\cdots+x_{n}^{2} \leq 1\right\}}{\iint\cdots\int} d x_{1} \cdots dx_{n},\] so that \(V_{g}(y)=K y^{n}\) for some constant \(K\) . Then \(d V_{g}(y) / d y=n K y^{n-1}\) . By (9.28), \(f_{Y}(y)=0\) for \(y<0\) , and for \(y \geq 0\) , \(f_{Y}(y)=K^{\prime} y^{n-1} e^{-1 / 2 y^{2}}\) for some constant \(K^{\prime}\) . To obtain \(K^{\prime}\) , use the normalization condition \(\displaystyle \int_{-\infty}^{\infty} f_{Y}(y)\, d y=1\) . The proof of \((9.30)\) is complete.
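
The result (9.30) may likewise be checked by simulation. In the sketch below the normalizing constant of (9.30) is written out as \(2^{(n-2) / 2} \Gamma(n / 2)\); the choices of \(n\), sample size, and evaluation point are illustrative assumptions.

```python
# A simulation sketch of (9.30): the density of Y = sqrt(X1^2 + ... + Xn^2)
# for independent standard normal components, compared at one point with the
# closed form y^(n-1) exp(-y^2/2) / (2^((n-2)/2) Gamma(n/2)).  The choices of
# n, sample size, and evaluation point are illustrative assumptions.
import numpy as np
from math import gamma

rng = np.random.default_rng(4)
n, m = 4, 400_000
y_samples = np.sqrt(np.sum(rng.normal(0, 1, (m, n))**2, axis=1))

y, h = 2.0, 0.02
empirical = np.mean((y_samples > y - h) & (y_samples < y + h)) / (2 * h)
exact = y**(n - 1) * np.exp(-0.5 * y**2) / (2**((n - 2) / 2) * gamma(n / 2))
print(empirical, exact)
```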

 

Example 9H . The energy of an ideal gas is \(\chi^{2}\) distributed . Consider an ideal gas composed of \(N\) particles of respective masses \(m_{1}, m_{2}, \ldots, m_{N}\) . Let \(v_{x}^{(i)}, v_{y}^{(i)}, v_{z}^{(i)}\) denote the velocity components at a given time instant of the \(i\) th particle. Assume that the total energy \(E\) of the gas is given by its kinetic energy \[E=\sum_{i=1}^{N} \frac{m_{i}}{2}\left\{\left(v_{x}^{(i)}\right)^{2}+\left(v_{y}^{(i)}\right)^{2}+\left(v_{z}^{(i)}\right)^{2}\right\}.\] 

Assume that the joint probability density function of the \(3 N\) -velocities \(\left(v_{x}^{(1)}, v_{y}^{(1)}, v_{z}^{(1)}, v_{x}^{(2)}, v_{y}^{(2)}, v_{z}^{(2)}, \ldots, v_{x}^{(N)}, v_{y}^{(N)}, v_{z}^{(N)}\right)\) is proportional to \(e^{-E / k T}\) , in which \(k\) is Boltzmann’s constant and \(T\) is the absolute temperature of the gas; in statistical mechanics one says that the state of the gas has as its probability law Gibbs’s canonical distribution. The energy \(E\) of the gas is a random variable whose probability density function may be derived by the geometrical method. For \(x>0\) \[f_{E}(x)=K_{1} e^{-x / k T} \frac{d V_{E}(x)}{d x}\] for some constant \(K_{1}\) , in which \(V_{E}(x)\) is the volume within the ellipsoid in \(3 N\) -dimensional space consisting of all \(3 N\) -tuples of velocities whose kinetic energy satisfies \(E \leq x\) . One may show that \[V_{E}(x)=K_{2} x^{3 N / 2}, \quad \frac{d V_{E}(x)}{d x}=K_{2} \frac{3}{2} N x^{(3 N / 2)-1}\] for some constant \(K_{2}\) , in the same way that \(V_{g}(y)\) is shown in example 9G to be proportional to \(y^{n}\) . Consequently, for \(x>0\) \[f_{E}(x)=\frac{x^{(3 N / 2)-1} e^{-x / k T}}{\displaystyle \int_{0}^{\infty} x^{(3 N / 2)-1} e^{-x / k T} d x}.\] In words, \(E\) has a \(\chi^{2}\) distribution with parameters \(n=3 N\) and \(\sigma^{2}=\) \(k T / 2\) .

We leave it for the reader to verify the validity of the next example.

Example 9I . The joint normal distribution . Consider two jointly normally distributed random variables \(X_{1}\) and \(X_{2}\) ; that is, \(X_{1}\) and \(X_{2}\) have a joint probability density function

\[f_{X_{1}, X_{2}}\left(x_{1}, x_{2}\right)=\frac{1}{2 \pi \sigma_{1} \sigma_{2} \sqrt{1-\rho^{2}}} e^{-Q\left(x_{1}, x_{2}\right)} \tag{9.31}\] for some constants \(\sigma_{1}>0\) , \(\sigma_{2}>0\) , \(-1<\rho<1\) , \(-\infty<m_{1}<\infty\) , \(-\infty<m_{2}<\infty\) , in which the function \(Q(\cdot, \cdot)\) for any two real numbers \(x_{1}\) and \(x_{2}\) is defined by \begin{align} Q\left(x_{1}, x_{2}\right)=\frac{1}{2\left(1-\rho^{2}\right)}\left[\left(\frac{x_{1}-m_{1}}{\sigma_{1}}\right)^{2}-2 \rho\left(\frac{x_{1}-m_{1}}{\sigma_{1}}\right)\right. & \left(\frac{x_{2}-m_{2}}{\sigma_{2}}\right) \\ & \left.+\left(\frac{x_{2}-m_{2}}{\sigma_{2}}\right)^{2}\right] . \end{align} 

The curve \(Q\left(x_{1}, x_{2}\right)=\) constant is an ellipse. Let \(Y=Q\left(X_{1}, X_{2}\right)\) . Then \(P[Y>y]=e^{-y}\) for \(y>0\) .
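
The assertion that \(P[Y>y]=e^{-y}\) can be illustrated by simulation. The sketch below draws jointly normal pairs with assumed parameter values (means, standard deviations, correlation, and sample size are not part of the example) and compares the empirical tail probability of \(Q\left(X_{1}, X_{2}\right)\) with \(e^{-y}\).

```python
# A simulation sketch of the closing claim of example 9I: for jointly normal
# (X1, X2), Y = Q(X1, X2) satisfies P[Y > y] = e^{-y}.  The particular means,
# standard deviations, correlation, and sample size are illustrative
# assumptions.
import numpy as np

rng = np.random.default_rng(5)
m1, m2, s1, s2, rho, n = 1.0, -2.0, 1.5, 0.5, 0.6, 300_000
cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
x1, x2 = rng.multivariate_normal([m1, m2], cov, n).T

z1, z2 = (x1 - m1) / s1, (x2 - m2) / s2
q = (z1**2 - 2 * rho * z1 * z2 + z2**2) / (2 * (1 - rho**2))

y = 1.5
print(np.mean(q > y), np.exp(-y))     # both approximately 0.223
```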

Theoretical Exercises

Various probability laws (or equivalently, probability distributions), which are of importance in statistics, arise as the probability laws of various functions of normally distributed random variables.

9.1 . The \(\chi^{2}\) distribution . Show that if \(X_{1}, X_{2}, \ldots, X_{n}\) are independent random variables, each normally distributed with parameters \(m=0\) and \(\sigma>0\) , and if \(Z=X_{1}^{2}+X_{2}^{2}+\cdots+X_{n}^{2}\) , then \(Z\) has a \(\chi^{2}\) distribution with parameters \(n\) and \(\sigma^{2}\) .

9.2 . The \(\chi\) distribution . Show that if \(X_{1}, X_{2}, \ldots, X_{n}\) are independent random variables, each normally distributed with parameters \(m=0\) and \(\sigma>0\) , then

\[V=\sqrt{\frac{1}{n} \sum_{k=1}^{n} X_{k}^{2}}\] 

has a \(\chi\) distribution with parameters \(n\) and \(\sigma\) .

9.3 . Student’s distribution . Show that if \(X_{0}, X_{1}, \ldots, X_{n}\) are \((n+1)\) independent random variables, each normally distributed with parameters \(m=0\) and \(\sigma>0\) , then the random variable

\[X_{0} / \sqrt{\frac{1}{n} \sum_{k=1}^{n} X_{k}^{2}}\] 

has as its probability law Student’s distribution with parameter \(n\) (which, it should be noted, is independent of \(\sigma\) )!

9.4 . The \(F\) distribution . Show that if \(Z_{1}\) and \(Z_{2}\) are independent random variables, \(\chi^{2}\) distributed with \(n_{1}\) and \(n_{2}\) degrees of freedom, respectively, then the quotient \(n_{2} Z_{1} / n_{1} Z_{2}\) obeys the \(F\) distribution with parameters \(n_{1}\) and \(n_{2}\) . Consequently, conclude that if \(X_{1}, \ldots, X_{m}, X_{m+1}, \ldots, X_{m+n}\) are \((m+n)\) independent random variables, each normally distributed with parameters \(m=0\) and \(\sigma>0\) , then the random variable

\[\frac{(1 / m) \sum_{k=1}^{m} X_{k}^{2}}{(1 / n) \sum_{k=1}^{n} X_{m+k}^{2}}\] 

has as its probability law the \(F\) distribution with parameters \(m\) and \(n\) . In statistics the parameters \(m\) and \(n\) are spoken of as “degrees of freedom.”

9.5 . Show that if \(X_{1}\) has a binomial distribution with parameters \(n_{1}\) and \(p\) , if \(X_{2}\) has a binomial distribution with parameters \(n_{2}\) and \(p\) , and \(X_{1}\) and \(X_{2}\) are independent, then \(X_{1}+X_{2}\) has a binomial distribution with parameters \(n_{1}+n_{2}\) and \(p\) .

9.6 . Show that if \(X_{1}\) has a Poisson distribution with parameter \(\lambda_{1}\) , if \(X_{2}\) has a Poisson distribution with parameter \(\lambda_{2}\) , and \(X_{1}\) and \(X_{2}\) are independent, then \(X_{1}+X_{2}\) is Poisson distributed with parameter \(\lambda_{1}+\lambda_{2}\) .

9.7 . Show that if \(X_{1}\) and \(X_{2}\) are independently and uniformly distributed over the interval \(a\) to \(b\) , then

\begin{align} f_{X_{1}+X_{2}}(y) & =\begin{cases} 0, & \text{for } y < 2a \text{ or } y > 2b \tag{9.32} \\ \dfrac{y - 2a}{(b-a)^{2}}, & \text{for } 2a < y < a+b \\ \dfrac{2b - y}{(b-a)^{2}}, & \text{for } a+b < y < 2b. \end{cases} \end{align} 

9.8 . Prove the validity of the assertion made in example 9I. Identify the probability law of \(Y\) . Find the probability law of \(Z=2\left(1-\rho^{2}\right) Y\) .

9.9 . Let \(X_{1}\) and \(X_{2}\) have a joint probability density function given by (9.31). Show that the sum \(X_{1}+X_{2}\) is normally distributed, with parameters \(m=m_{1}+m_{2}\) and \(\sigma^{2}=\sigma_{1}^{2}+2 \rho \sigma_{1} \sigma_{2}+\sigma_{2}^{2}\) .

9.10 . Let \(X_{1}\) and \(X_{2}\) have a joint probability density function given by equation (9.31), with \(m_{1}=m_{2}=0\) . Show that

\[f_{X_{1} / X_{2}}(y)=\frac{\sigma_{1} \sigma_{2} \sqrt{1-\rho^{2}}}{\pi\left(\sigma_{2}^{2} y^{2}-2 \rho \sigma_{1} \sigma_{2} y+\sigma_{1}^{2}\right)}. \tag{9.33}\] 

If \(X_{1}\) and \(X_{2}\) are independent, then the quotient \(X_{1} / X_{2}\) has a Cauchy distribution.

9.11 . Use the proof of example 9G to prove that the volume \(V_{n}(r)\) of an \(n\) -dimensional sphere of radius \(r\) is given by

\[V_{n}(r)=\frac{\pi^{n / 2}}{\Gamma\left(\frac{n}{2}+1\right)} r^{n}.\]

Prove that the surface area of the sphere is given by \(d V_{n}(r) / d r\) .

9.12 . Prove that it is impossible for two independent identically distributed random variables, \(X_{1}\) and \(X_{2}\) , each taking the values 1 to 6, to have the property that \(P\left[X_{1}+X_{2}=k\right]=\frac{1}{11}\) for \(k=2,3, \ldots, 12\) . Consequently, conclude that it is impossible to weight a pair of dice so that the probability of occurrence of every sum from 2 to 12 will be the same.

9.13 . Prove that if two independent identically distributed random variables, \(X_{1}\) and \(X_{2}\) , each taking the values 1 to 6 , have the property that their sum will satisfy \(P\left[X_{1}+X_{2}=k\right]=P\left[X_{1}+X_{2}=14-k\right]=(k-1) / 36\) for \(k=2\) , \(3,4,5,6\) , and \(P\left[X_{1}+X_{2}=7\right]=\frac{6}{36}\) , then \(P\left[X_{1}=k\right]=P\left[X_{2}=k\right]=\frac{1}{6}\) for \(k=1,2, \ldots, 6\) .

Exercises

9.1 . Suppose that the load on an airplane wing is a random variable \(X\) obeying a normal probability law with mean 1000 and variance 14,400, whereas the load \(Y\) that the wing can withstand is a random variable obeying a normal probability law with mean 1260 and variance 2500. Assuming that \(X\) and \(Y\) are independent, find the probability that \(X<Y\) (that is, that the load encountered by the wing is less than the load the wing can withstand).

 

Answer

\(0.9772\) .

 

In exercises 9.2 to 9.4 let \(X_{1}\) and \(X_{2}\) be independently and uniformly distributed over the interval 0 to 1.

9.2 . Find and sketch the probability density function of (i) \(X_{1}+X_{2}\) , (ii) \(X_{1}-X_{2}\) , (iii) \(\left|X_{1}-X_{2}\right|\) .

9.3 . (i) Maximum \(\left(X_{1}, X_{2}\right)\) , (ii) minimum \(\left(X_{1}, X_{2}\right)\) .

 

Answer

(i) \(2 y\) for \(0<y<1\) ; 0 otherwise. (ii) \(2(1-y)\) for \(0<y<1\) ; 0 otherwise.

 

9.4 . (i) \(X_{1} X_{2}\) , (ii) \(X_{1} / X_{2}\) .

In exercises 9.5 to 9.7 let \(X_{1}\) and \(X_{2}\) be independent random variables, each normally distributed with parameters \(m=0\) and \(\sigma>0\) .

9.5 . Find and sketch the probability density function of (i) \(X_{1}+X_{2}\) , (ii) \(X_{1}-X_{2}\) , (iii) \(\left|X_{1}-X_{2}\right|\) , (iv) \(\left(X_{1}+X_{2}\right) / 2\) , (v) \(\left(X_{1}-X_{2}\right) / 2\) .

 

Answer

(i), (ii) Normal with mean 0, variance \(2 \sigma^{2}\) ; (iii) \(\frac{1}{\sigma \sqrt \pi} e^{-y^{2} / 4 \sigma^{2}}\) for \(y>0\) ; 0 otherwise; (iv), (v) normal with mean 0, variance \(\frac{1}{2} \sigma^{2}\) .

 

9.6 . (i) \(X_{1}^{2}+X_{2}^{2}\) , (ii) \(\left(X_{1}^{2}+X_{2}^{2}\right) / 2\) .

9.7 . (i) \(X_{1} / X_{2}\) , (ii) \(X_{1} /\left|X_{2}\right|\) .

 

Answer

\(\left\{\pi\left(y^{2}+1\right)\right\}^{-1}\) .

 

9.8 . Let \(X_{1}, X_{2}, X_{3}\) , and \(X_{4}\) be independent random variables, each normally distributed with parameters \(m=0\) and \(\sigma^{2}=1\) . Find and sketch the probability density functions of (i) \(X_{3} / \sqrt{\left(X_{1}^{2}+X_{2}^{2}\right) / 2}\) , (ii) \(2 X_{3}^{2} /\left(X_{1}^{2}+X_{2}^{2}\right)\) , (iii) \(3 X_{4}^{2} /\left(X_{1}^{2}+X_{2}^{2}+X_{3}^{2}\right)\) , (iv) \(\left(X_{1}^{2}+X_{2}^{2}\right) /\left(X_{3}^{2}+X_{4}^{2}\right)\) .

9.9 . Let \(X_{1}, X_{2}\) , and \(X_{3}\) be independent random variables, each exponentially distributed with parameter \(\lambda=\frac{1}{2}\) . Find the probability density function of (i) \(X_{1}+X_{2}+X_{3}\) , (ii) minimum \(\left(X_{1}, X_{2}, X_{3}\right)\) , (iii) maximum \(\left(X_{1}, X_{2}, X_{3}\right)\) , (iv) \(X_{1} / X_{2}\) .

 

Answer

(i) Gamma with parameters \(r=3\) and \(\lambda=\frac{1}{2}\) ; (ii) exponential with \(\lambda=\frac{3}{2}\) ;

 

(iii) \(\frac{3}{2} e^{-y / 2}\left(1-e^{-y / 2}\right)^{2}\) for \(y>0 ; 0\) otherwise; (iv) \((1+y)^{-2}\) for \(y>0 ; 0\) otherwise.

9.10 . Find and sketch the probability density function of \(\theta=\tan ^{-1}(Y / X)\) if \(X\) and \(Y\) are independent random variables, each normally distributed with mean 0 and variance \(\sigma^{2}\) .

9.11 . The envelope of a narrow-band noise is sampled periodically, the samples being sufficiently far apart to assure independence. In this way \(n\) independent random variables \(X_{1}, X_{2}, \ldots, X_{n}\) are observed, each of which is Rayleigh distributed with parameter \(\sigma\) . Let \(Y=\operatorname{maximum}\left(X_{1}, X_{2}, \ldots, X_{n}\right)\) be the largest value in the sample. Find the probability density function of \(Y\) .

 

Answer

\(n \frac{y}{\sigma^{2}} e^{-y^{2} / 2 \sigma^{2}}\left(1-e^{-y^{2} / 2 \sigma^{2}}\right)^{n-1}\) for \(y>0\) .

 

9.12 . Let \(v=\left(v_{x}^{2}+v_{y}^{2}+v_{z}^{2}\right)^{1 / 2}\) be the magnitude of the velocity of a particle whose velocity components \(v_{x}, v_{y}, v_{z}\) are independent random variables, each normally distributed with mean 0 and variance \(k T / M ; k\) is Boltzmann’s constant, \(T\) is the absolute temperature of the medium in which the particle is immersed, and \(M\) is the mass of the particle. Describe the probability law of \(v\) .

9.13 . Let \(X_{1}, X_{2}, \ldots, X_{n}\) be independent random variables, uniformly distributed over the interval 0 to 1. Describe the probability law of \(-2 \log \left(X_{1} X_{2} \cdots X_{n}\right)\) . Using this result, describe a procedure for forming a random sample of a random variable with a \(\chi^{2}\) distribution with \(2 n\) degrees of freedom.

9.14 . Let \(X\) and \(Y\) be independent random variables, each exponentially distributed with parameter \(\lambda\) . Find the probability density function of \(Z=X /(X+Y)\) .

9.15 . Show that if \(X_{1}, X_{2}, \ldots, X_{n}\) are independent identically distributed random variables, whose minimum \(Y=\) minimum \(\left(X_{1}, X_{2}, \ldots, X_{n}\right)\) obeys an exponential probability law with parameter \(\lambda\) , then each of the random variables \(X_{1}, \ldots, X_{n}\) obeys an exponential probability law with parameter \((\lambda / n)\) . If you prefer to solve the problem for the special case that \(n=2\) , this will suffice.

Hint : \(Y\) obeys an exponential probability law with parameter \(\lambda\) if and only if \(F_{Y}(y)=1-e^{-\lambda y}\) or 0, depending on whether \(y \geq 0\) or \(y<0\) .

9.16 . Let \(X_{1}, X_{2}, \ldots, X_{n}\) be independent random variables (i) uniformly distributed on the interval -1 to 1, (ii) exponentially distributed with mean 2. Find the distribution of the range \(R=\operatorname{maximum}\left(X_{1}, X_{2}, \ldots, X_{n}\right)\) - minimum \(\left(X_{1}, X_{2}, \ldots, X_{n}\right)\) .

9.17 . Find the probability that in a random sample of size \(n\) of a random variable uniformly distributed on the interval 0 to 1 the range will exceed 0.8.

 

Answer

\(1-n(0.8)^{n-1}+(n-1)(0.8)^{n}\) .

 

9.18 . Determine how large a random sample one must take of a random variable uniformly distributed on the interval 0 to 1 in order that the probability will be more than 0.95 that the range will exceed 0.90.

9.19 . The random variable \(X\) represents the amplitude of a sine wave; \(Y\) represents the amplitude of a cosine wave. Both are independently and uniformly distributed over the interval 0 to 1.

(i) Let the random variable \(R\) represent the amplitude of their resultant; that is, \(R^{2}=X^{2}+Y^{2}\) . Find and sketch the probability density function of \(R\) .

(ii) Let the random variable \(\theta\) represent the phase angle of the resultant; that is, \(\theta=\tan ^{-1}(Y / X)\) . Find and sketch the probability density function of \(\theta\) .

 

Answer

See the answer to exercise 10.3.

 

9.20 . The noise output of a quadratic detector in a radio receiver can be represented as \(X^{2}+Y^{2}\) , where \(X\) and \(Y\) are independently and normally distributed with parameters \(m=0\) and \(\sigma>0\) . If, in addition to noise, there is a signal present, the output is represented by \((X+a)^{2}+(Y+b)^{2}\) , where \(a\) and \(b\) are given constants. Find the probability density function of the output of the detector, assuming that (i) noise alone is present, (ii) both signal and noise are present.

9.21 . Consider 3 jointly distributed random variables \(X, Y\) , and \(Z\) with a joint probability density function

\begin{align} f_{X, Y, Z}(x, y, z) &= \begin{cases} \dfrac{6}{(1+x+y+z)^{4}}, & \text{for } x > 0, \quad y > 0, \quad z > 0 \\ 0, & \text{otherwise.} \end{cases} \end{align} Find the probability density function of the sum \(X+Y+Z\) .

 

Answer

\(3 u^{2} /(1+u)^{4}\) for \(u>0 ; 0\) otherwise.