Consider an equation
$ f\left(x, y, c_{1}, c_{2}, \ldots, c_{n}\right)=0\tag{A} $

in which $x$ and $y$ are variables and $c_{1}, c_{2}, \ldots, c_{n}$ are arbitrary and independent constants. This equation serves to determine $y$ as a function of $x$; strictly speaking, an $n$-fold infinity of functions is so determined, each function corresponding to a particular set of values attributed to $c_{1}, c_{2}, \ldots, c_{n}$. Now an ordinary differential equation can be formed which is satisfied by every one of these functions, as follows.
Let the given equation be differentiated $n$ times in succession, with respect to $x$, then $n$ new equations are obtained, namely,
$ \begin{aligned} &\frac{\partial f}{\partial x}+\frac{\partial f}{\partial y} y^{\prime}=0,\\ &\frac{\partial^2 f}{\partial x^2}+2 \frac{\partial^2 f}{\partial x \partial y} y^{\prime}+\frac{\partial^2 f}{\partial y^2} y^{\prime 2}+\frac{\partial f}{\partial y} y^{\prime \prime}=0,\\ &\qquad\vdots\\ &\frac{\partial^n f}{\partial x^n}+\cdots+\frac{\partial f}{\partial y} y^{(n)}=0, \end{aligned} $

where
$ y^{\prime}=\frac{d y}{d x}, \quad y^{\prime \prime}=\frac{d^2 y}{d x^2}, \quad \ldots, \quad y^{(n)}=\frac{d^n y}{d x^n}. $

Each equation is manifestly distinct from those which precede it;¹ from the aggregate of $n+1$ equations the $n$ arbitrary constants $c_{1}, c_{2}, \ldots, c_{n}$ can be eliminated by algebraical processes, and the eliminant is the differential equation of order $n$:
$ F\left(x, y, y^{\prime}, y^{\prime \prime}, \ldots, y^{(n)}\right)=0. $

It is clear from the very manner in which this differential equation was formed that it is satisfied by every function $y=\phi(x)$ defined by the relation (A). This relation is termed the primitive of the differential equation, and every function $y=\phi(x)$ which satisfies the differential equation is known as a solution.² A solution which involves a number of essentially distinct arbitrary constants equal to the order of the equation is known as the general solution.³ That this terminology is justified will be seen when, in Chapter III., it is proved that one solution of an equation of order $n$, and one only, can always be found to satisfy, for a specified value of $x$, $n$ distinct conditions of a particular type. The possibility of satisfying these $n$ conditions depends upon the existence of a solution containing $n$ arbitrary constants. The general solution is thus essentially the same as the primitive of the differential equation.
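The elimination process just described can be checked mechanically with a computer algebra system. The sketch below uses SymPy; the primitive $y=c_{1} x+c_{2} x^{2}$ is an illustrative choice introduced here, not an example from the text. Differentiating twice and eliminating $c_{1}, c_{2}$ recovers a differential equation of order two.

```python
# A sketch of the elimination process described above, using SymPy.
# The primitive y = c1*x + c2*x**2 is an illustrative choice (not an
# example from the text) with n = 2 arbitrary constants.
import sympy as sp

x = sp.Symbol('x')
c1, c2 = sp.symbols('c1 c2')
y = sp.Function('y')(x)

# The primitive, written as f(x, y, c1, c2) = 0.
f = y - c1*x - c2*x**2

# Differentiate twice with respect to x, giving n = 2 derived equations.
f1 = sp.diff(f, x)
f2 = sp.diff(f1, x)

# Eliminate c1 and c2 between the three equations f = f1 = f2 = 0.
sol = sp.solve([f1, f2], [c1, c2], dict=True)[0]
eliminant = sp.simplify(f.subs(sol))

# Clearing the factor 1/2, the eliminant is the second-order equation
# x**2*y'' - 2*x*y' + 2*y = 0.
print(sp.expand(2*eliminant))
```

Here `solve` inverts the two derived equations for $c_1$ and $c_2$; substituting back into the primitive carries out, step for step, the algebraical elimination described in the text.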
It has been assumed that the primitive actually contains $n$ distinct constants $c_{1}, c_{2}, \ldots, c_{n}$. If there are only apparently $n$ constants, that is to say if two or more constants can be replaced by a single constant without essentially modifying the primitive, then the order of the resulting differential equation will be less than $n$. For instance, suppose that the primitive is given in the form
$ f\{x, y, \phi(a, b)\}=0, $

then it apparently depends upon two constants $a$ and $b$, but in reality upon one constant only, namely $c=\phi(a, b)$. In this case the resulting differential equation is of the first and not of the second order.
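A minimal SymPy illustration of this point (an assumed example, not one from the text): the primitive $y=x+a+b$ apparently contains two constants, but only the combination $c=a+b$ matters, so a single differentiation already frees the equation of both constants and the resulting equation is of the first order.

```python
# An assumed example (not from the text): y = x + a + b apparently
# contains two constants, but only c = a + b matters, so one
# differentiation removes both constants at once and the resulting
# differential equation is first order, not second.
import sympy as sp

x, a, b = sp.symbols('x a b')
y = sp.Function('y')(x)

f = y - x - a - b      # f(x, y, a, b) = 0, apparently two constants
f1 = sp.diff(f, x)     # y' - 1: already free of a and b

assert f1 == sp.diff(y, x) - 1
```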
Again, if the primitive is reducible, that is to say if $f\left(x, y, c_{1}, \ldots, c_{n}\right)$ breaks up into two factors, each of which contains $y$, the order of the resulting differential equation may be less than $n$. For if neither factor contains all the $n$ constants, then each factor will give rise to a differential equation of order less than $n$, and it may occur that these two differential equations are identical, or that one of them admits of all the solutions of the other, and therefore is satisfied by the primitive itself. Thus let the primitive be
$ y^{2}-(a+b) x y+a b x^{2}=0; $

it is reducible and equivalent to the two equations
$ y-a x=0, \quad y-b x=0, $each of which, and therefore the primitive itself, satisfies the differential equation
$ y-x y^{\prime}=0. $
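The factorisation and the common differential equation can be verified directly; the SymPy sketch below (not part of the original text) checks that each linear factor of the reducible primitive satisfies $y-x y^{\prime}=0$.

```python
# A quick SymPy check (not part of the original text) that each factor
# of the reducible primitive y**2 - (a+b)*x*y + a*b*x**2 = 0, namely
# y = a*x and y = b*x, satisfies the first-order equation y - x*y' = 0.
import sympy as sp

x, a, b, y = sp.symbols('x a b y')

# The primitive factors as (y - a*x)*(y - b*x):
primitive = y**2 - (a + b)*x*y + a*b*x**2
assert sp.expand((y - a*x)*(y - b*x) - primitive) == 0

# Each linear factor gives a solution of y - x*y' = 0:
for sol in (a*x, b*x):
    residual = sol - x*sp.diff(sol, x)   # y - x*y' with y = sol
    assert sp.simplify(residual) == 0
```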
0.1 The Differential Equation of a Family of Confocal Conics

Consider the equation
$ \frac{x^{2}}{a^{2}+\lambda}+\frac{y^{2}}{b^{2}+\lambda}=1, $

where $a$ and $b$ are definite constants, and $\lambda$ an arbitrary parameter which can assume all real values. This equation represents a family of confocal conics. The differential equation of which it is the primitive is obtained by eliminating $\lambda$ between it and the derived equation

$ \frac{2 x}{a^{2}+\lambda}+\frac{2 y y^{\prime}}{b^{2}+\lambda}=0. $

From the primitive and the derived equation it is found that

$ a^{2}+\lambda=\frac{x^{2} y^{\prime}-x y}{y^{\prime}}, \quad b^{2}+\lambda=y^{2}-x y y^{\prime}, $

and, eliminating $\lambda$,

$ a^{2}-b^{2}=\frac{x^{2} y^{\prime}-x y}{y^{\prime}}-y^{2}+x y y^{\prime}, $

and therefore the required differential equation is

$ x y y^{\prime 2}+\left(x^{2}-y^{2}-a^{2}+b^{2}\right) y^{\prime}-x y=0; $

it is of the first order and the second degree.
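The elimination can be confirmed with SymPy. On the conic, implicit differentiation gives $y y^{\prime}=-\dfrac{\left(b^{2}+\lambda\right) x}{a^{2}+\lambda}$, so multiplying the differential equation through by $y$ makes every term rational in $x$, and the result should vanish identically; the sketch below checks exactly this.

```python
# A SymPy check (not part of the original text) that every member of
# the confocal family x**2/(a**2+lam) + y**2/(b**2+lam) = 1 satisfies
# x*y*y'**2 + (x**2 - y**2 - a**2 + b**2)*y' - x*y = 0.
import sympy as sp

x = sp.Symbol('x')
a, b, lam = sp.symbols('a b lam', positive=True)
A, B = a**2 + lam, b**2 + lam

y2 = B*(1 - x**2/A)    # y**2 on the conic
yyp = -B*x/A           # y*y', from implicit differentiation of the conic

# The differential equation, multiplied through by y so that each term
# (x*(y*y')**2, (...)*(y*y'), x*y**2) is rational in x:
expr = x*yyp**2 + (x**2 - y2 - a**2 + b**2)*yyp - x*y2
assert sp.simplify(expr) == 0
```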
When an equation is of the first order it is customary to represent the derivative $y^{\prime}$ by the symbol $p$. Thus the differential equation of the family of confocal conics may be written:
$ x y\left(p^{2}-1\right)+\left(x^{2}-y^{2}-a^{2}+b^{2}\right) p=0. $
1. Formation of Partial Differential Equations through the Elimination of Arbitrary Constants

Let $x_{1}, x_{2}, \ldots, x_{m}$ be independent variables, and let $z$, the dependent variable, be defined by the equation

$ f\left(x_{1}, x_{2}, \ldots, x_{m} ; z ; c_{1}, c_{2}, \ldots, c_{n}\right)=0, $

where $c_{1}, c_{2}, \ldots, c_{n}$ are $n$ arbitrary constants. To this equation may be adjoined the $m$ equations obtained by differentiating partially with respect to each of the variables $x_{1}, x_{2}, \ldots, x_{m}$ in succession, namely,
$ \begin{aligned} \frac{\partial f}{\partial x_{1}}+\frac{\partial f}{\partial z} \cdot \frac{\partial z}{\partial x_{1}}&=0,\\ \vdots\qquad &\\ \frac{\partial f}{\partial x_{m}}+\frac{\partial f}{\partial z} \cdot \frac{\partial z}{\partial x_{m}}&=0. \end{aligned} $

If $m \geqslant n$, sufficient equations are now available to eliminate the constants $c_{1}, c_{2}, \ldots, c_{n}$. If $m<n$, these equations are insufficient for the elimination, and the derived equations must themselves be differentiated partially with respect to the variables. This process is continued until enough equations have been obtained to enable the elimination to be carried out. In general, when this stage has been reached, there will be more equations available than there are constants to eliminate, and therefore the primitive may lead not to one partial differential equation but to a system of simultaneous partial differential equations.

1.1 The Partial Differential Equations of all Planes and of all Spheres

As a first example let the primitive be the equation

$ z=a x+b y+c, $

in which $a, b, c$ are arbitrary constants. By a proper choice of these constants, the equation can be made to represent any plane in space except a plane parallel to the $z$-axis. The first derived equations are:

$ \frac{\partial z}{\partial x}=a, \quad \frac{\partial z}{\partial y}=b. $

These are not sufficient to eliminate $a, b$, and $c$, and therefore the second derived equations are taken, namely,

$ \frac{\partial^{2} z}{\partial x^{2}}=0, \quad \frac{\partial^{2} z}{\partial x \partial y}=0, \quad \frac{\partial^{2} z}{\partial y^{2}}=0. $

They are free of arbitrary constants, and are therefore the differential equations required. It is customary to write

$ p=\frac{\partial z}{\partial x}, \quad q=\frac{\partial z}{\partial y}, \quad r=\frac{\partial^{2} z}{\partial x^{2}}, \quad s=\frac{\partial^{2} z}{\partial x \partial y}, \quad t=\frac{\partial^{2} z}{\partial y^{2}}. $

Thus any plane in space which is not parallel to the $z$-axis satisfies simultaneously the three equations

$ r=0, \quad s=0, \quad t=0. $

In the second place, consider the equation satisfied by the most general sphere; it is

$ (x-a)^{2}+(y-b)^{2}+(z-c)^{2}=R^{2}, $

where $a, b, c$ and $R$ are arbitrary constants. The first derived equations are

$ x-a+(z-c) p=0, \quad y-b+(z-c) q=0, $

and the second derived equations are

$ 1+p^{2}+(z-c) r=0, \quad p q+(z-c) s=0, \quad 1+q^{2}+(z-c) t=0. $

When $z-c$ is eliminated, the required equations are obtained, namely,

$ \frac{1+p^{2}}{r}=\frac{p q}{s}=\frac{1+q^{2}}{t}. $

Thus there are two distinct equations. Let $\lambda$ be the value of each of the members of the equations, then

$ r t-s^{2}=\frac{\left(1+p^{2}\right)\left(1+q^{2}\right)-p^{2} q^{2}}{\lambda^{2}}=\frac{1+p^{2}+q^{2}}{\lambda^{2}}. $

Consequently, if the spheres considered are real, the additional condition

$ r t-s^{2}>0 $

must be satisfied.
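The sphere computation can be checked symbolically. In the sketch below (SymPy; the explicit upper-hemisphere parametrisation of $z$ is an assumption introduced only for the check, not part of the text) the derivatives $p, q, r, s, t$ are formed and the two eliminant equations, together with the reality condition, are verified.

```python
# A SymPy check of the sphere example. The upper hemisphere of
# (x-a)**2 + (y-b)**2 + (z-c)**2 = R**2 is taken as an explicit
# function z(x, y) -- an assumed parametrisation for the check only.
import sympy as sp

x, y = sp.symbols('x y')
a, b, c, R = sp.symbols('a b c R', positive=True)

z = c + sp.sqrt(R**2 - (x - a)**2 - (y - b)**2)

p, q = sp.diff(z, x), sp.diff(z, y)
r = sp.diff(z, x, 2)
s = sp.diff(z, x, y)
t = sp.diff(z, y, 2)

# The eliminant equations (1+p**2)/r = p*q/s = (1+q**2)/t:
assert sp.simplify((1 + p**2)*s - p*q*r) == 0
assert sp.simplify((1 + q**2)*s - p*q*t) == 0

# With lambda = -(z - c), r*t - s**2 = (1 + p**2 + q**2)/lambda**2 > 0:
lam = -(z - c)
assert sp.simplify(r*t - s**2 - (1 + p**2 + q**2)/lam**2) == 0
```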
2. A Property of Jacobians

It will now be shown that the natural primitive of a single partial differential equation is a relation into which enter arbitrary functions of the variables. The investigation which leads up to this result depends upon a property of functional determinants or Jacobians. Let $u_{1}, u_{2}, \ldots, u_{m}$ be functions of the independent variables $x_{1}, x_{2}, \ldots, x_{n}$, and consider the set of partial differential coefficients arranged in order thus:

$ \begin{matrix} \dfrac{\partial u_{1}}{\partial x_{1}}, & \dfrac{\partial u_{1}}{\partial x_{2}}, & \ldots, & \dfrac{\partial u_{1}}{\partial x_{n}},\\[2mm] \dfrac{\partial u_{2}}{\partial x_{1}}, & \dfrac{\partial u_{2}}{\partial x_{2}}, & \ldots, & \dfrac{\partial u_{2}}{\partial x_{n}},\\ \vdots & & & \vdots\\ \dfrac{\partial u_{m}}{\partial x_{1}}, & \dfrac{\partial u_{m}}{\partial x_{2}}, & \ldots, & \dfrac{\partial u_{m}}{\partial x_{n}}. \end{matrix} $

Then the determinant of order $p$ whose elements are the elements common to $p$ rows and $p$ columns of the above scheme is known as a Jacobian.⁴ Let all the different possible Jacobians be constructed; then if a Jacobian of order $p$, say

$ \frac{\partial\left(u_{1}, u_{2}, \ldots, u_{p}\right)}{\partial\left(x_{1}, x_{2}, \ldots, x_{p}\right)}, $

is not zero for a chosen set of values $x_{1}=\xi_{1}, \ldots, x_{n}=\xi_{n}$, but if every Jacobian of order $p+1$ is identically zero, then the functions $u_{1}, u_{2}, \ldots, u_{p}$ are independent, but the remaining functions $u_{p+1}, \ldots, u_{m}$ are expressible in terms of $u_{1}, \ldots, u_{p}$.

Suppose that, for values of $x_{1}, \ldots, x_{n}$ in the neighbourhood of $\xi_{1}, \ldots, \xi_{n}$, the functions $u_{1}, \ldots, u_{p}$ are not independent, but that there exists an identical relationship,

$ \Omega\left(u_{1}, u_{2}, \ldots, u_{p}\right)=0. $

Then the equations

$ \frac{\partial \Omega}{\partial u_{1}} \cdot \frac{\partial u_{1}}{\partial x_{r}}+\frac{\partial \Omega}{\partial u_{2}} \cdot \frac{\partial u_{2}}{\partial x_{r}}+\cdots+\frac{\partial \Omega}{\partial u_{p}} \cdot \frac{\partial u_{p}}{\partial x_{r}}=0 \quad(r=1,2, \ldots, p) $

would be satisfied by values of $\dfrac{\partial \Omega}{\partial u_{1}}, \ldots, \dfrac{\partial \Omega}{\partial u_{p}}$ not all zero, and therefore

$ \frac{\partial\left(u_{1}, u_{2}, \ldots, u_{p}\right)}{\partial\left(x_{1}, x_{2}, \ldots, x_{p}\right)}=0 $

identically in the neighbourhood of $\xi_{1}, \ldots, \xi_{n}$, which is contrary to the hypothesis. Consequently, the first part of the theorem, namely, that $u_{1}, \ldots, u_{p}$ are independent, is true.

In $u_{p+1}, \ldots, u_{m}$ let the variables $x_{1}, \ldots, x_{p}, x_{p+1}, \ldots, x_{n}$ be replaced by the new set of independent variables $u_{1}, \ldots, u_{p}, x_{p+1}, \ldots, x_{n}$.
It will now be shown that if $u_{r}$ is any of the functions $u_{p+1}, \ldots, u_{m}$, and $x_{s}$ any one of the variables $x_{p+1}, \ldots, x_{n}$, then, in the new variables, $u_{r}$ is explicitly independent of $x_{s}$, that is,

$ \frac{\partial u_{r}}{\partial x_{s}}=0. $

Let

$ u_{i}=u_{i}\left(x_{1}, \ldots, x_{p}, x_{p+1}, \ldots, x_{n}\right) \quad(i=1,2, \ldots, p, r), $

and let $x_{1}, \ldots, x_{p}$ be replaced by their expressions in terms of the new independent variables $u_{1}, \ldots, u_{p}, x_{p+1}, \ldots, x_{n}$; then, differentiating both sides of each equation with respect to $x_{s}$,

$ \begin{aligned} \frac{\partial u_{i}}{\partial x_{1}} \cdot \frac{\partial x_{1}}{\partial x_{s}}+\cdots+\frac{\partial u_{i}}{\partial x_{p}} \cdot \frac{\partial x_{p}}{\partial x_{s}}+\frac{\partial u_{i}}{\partial x_{s}}&=0 \quad(i=1,2, \ldots, p),\\ \frac{\partial u_{r}}{\partial x_{1}} \cdot \frac{\partial x_{1}}{\partial x_{s}}+\cdots+\frac{\partial u_{r}}{\partial x_{p}} \cdot \frac{\partial x_{p}}{\partial x_{s}}+\frac{\partial u_{r}}{\partial x_{s}}&=\left(\frac{\partial u_{r}}{\partial x_{s}}\right)', \end{aligned} $

where the accent distinguishes the derivative taken in the new variables. The eliminant of $\dfrac{\partial x_1}{\partial x_s}$, $\ldots$, $\dfrac{\partial x_p}{\partial x_s}$ is

$ \frac{\partial\left(u_{1}, \ldots, u_{p}, u_{r}\right)}{\partial\left(x_{1}, \ldots, x_{p}, x_{s}\right)}=\frac{\partial\left(u_{1}, \ldots, u_{p}\right)}{\partial\left(x_{1}, \ldots, x_{p}\right)} \cdot\left(\frac{\partial u_{r}}{\partial x_{s}}\right)'. $

But since, by hypothesis,

$ \frac{\partial\left(u_{1}, \ldots, u_{p}, u_{r}\right)}{\partial\left(x_{1}, \ldots, x_{p}, x_{s}\right)}=0, \quad \frac{\partial\left(u_{1}, \ldots, u_{p}\right)}{\partial\left(x_{1}, \ldots, x_{p}\right)} \neq 0, $

it follows that

$ \left(\frac{\partial u_{r}}{\partial x_{s}}\right)'=0. $

Consequently each of the functions $u_{p+1}, \ldots, u_{m}$ is expressible in terms of the functions $u_{1}, \ldots, u_{p}$ alone, as was to be proved.

3. Formation of a Partial Differential Equation through the Elimination of an Arbitrary Function

Let the dependent variable $z$ be related to the independent variables $x_{1}, \ldots, x_{n}$ by an equation of the form

$ F\left(u_{1}, u_{2}, \ldots, u_{n}\right)=0, $

where $F$ is an arbitrary function of its arguments $u_{1}, u_{2}, \ldots, u_{n}$ which, in turn, are given functions of $x_{1}, \ldots, x_{n}$ and $z$. When for $z$ is substituted its value in terms of $x_{1}, \ldots, x_{n}$, the equation becomes an identity. If therefore $D_{r} u_{s}$ represents the partial derivative of $u_{s}$ with respect to $x_{r}$ when $z$ has been replaced by its value, then

$ \frac{\partial F}{\partial u_{1}} \cdot D_{r} u_{1}+\frac{\partial F}{\partial u_{2}} \cdot D_{r} u_{2}+\cdots+\frac{\partial F}{\partial u_{n}} \cdot D_{r} u_{n}=0 \quad(r=1,2, \ldots, n). $

But $\dfrac{\partial F}{\partial u_{1}}, \ldots, \dfrac{\partial F}{\partial u_{n}}$ are not all identically zero, and therefore the partial differential equation satisfied by $z$ is

$ \left|\begin{matrix} D_{1} u_{1} & D_{1} u_{2} & \ldots & D_{1} u_{n}\\ D_{2} u_{1} & D_{2} u_{2} & \ldots & D_{2} u_{n}\\ \vdots & & & \vdots\\ D_{n} u_{1} & D_{n} u_{2} & \ldots & D_{n} u_{n} \end{matrix}\right|=0. $

3.1. The Differential Equation of a Surface of Revolution

The equation

$ F\left(x^{2}+y^{2}, z\right)=0 $

represents a surface of revolution whose axis coincides with the $z$-axis. In the notation of the preceding section,

$ u_{1}=x^{2}+y^{2}, \quad u_{2}=z, $

and therefore $z$ satisfies the partial differential equation:

$ \left|\begin{matrix} 2 x & \dfrac{\partial z}{\partial x}\\[2mm] 2 y & \dfrac{\partial z}{\partial y} \end{matrix}\right|=0, $

or

$ y \frac{\partial z}{\partial x}-x \frac{\partial z}{\partial y}=0. $

Conversely, this equation is satisfied by

$ z=\phi\left(x^{2}+y^{2}\right), $

where $\phi$ is an arbitrary function of its argument, and is therefore the differential equation of all surfaces of revolution which have the common axis $x=0, y=0$.

3.2. Euler's Theorem on Homogeneous Functions

Let

$ z=\phi(x, y), $

where $\phi(x, y)$ is a homogeneous function of $x$ and $y$ of degree $n$.
Then, since $\phi(x, y)$ can be written in the form

$ x^{n} \psi\left(\frac{y}{x}\right), $

it follows that

$ z x^{-n}=\psi\left(\frac{y}{x}\right). $

In the notation of § 3,

$ u_{1}=\frac{y}{x}, \quad u_{2}=z x^{-n}, $

and therefore $z$ satisfies the partial differential equation:

$ \left|\begin{matrix} -\dfrac{y}{x^{2}} & x^{-n} \dfrac{\partial z}{\partial x}-n x^{-n-1} z\\[2mm] \dfrac{1}{x} & x^{-n} \dfrac{\partial z}{\partial y} \end{matrix}\right|=0, $

and this equation reduces to

$ x \frac{\partial z}{\partial x}+y \frac{\partial z}{\partial y}=n z. $

Similarly, if $u$ is a homogeneous function of the three variables $x, y$ and $z$, of degree $n$,

$ x \frac{\partial u}{\partial x}+y \frac{\partial u}{\partial y}+z \frac{\partial u}{\partial z}=n u. $

This theorem can be extended to any number of variables.

4. Formation of a Total Differential Equation in Three Variables

The equation

$ f(x, y, z)=c $

represents a family of surfaces, and it will be supposed that to each value of $c$ corresponds one, and only one, surface of the family. Now let $(x, y, z)$ be a point on a particular surface and $(x+\delta x, y+\delta y, z+\delta z)$ a neighbouring point on the same surface, then

$ f(x+\delta x, y+\delta y, z+\delta z)-f(x, y, z)=0. $

Assuming that the partial derivatives exist and are continuous, this equation may be written in the form

$ \left(\frac{\partial f}{\partial x}+\epsilon_{1}\right) \delta x+\left(\frac{\partial f}{\partial y}+\epsilon_{2}\right) \delta y+\left(\frac{\partial f}{\partial z}+\epsilon_{3}\right) \delta z=0, $

where $\epsilon_{1}, \epsilon_{2}, \epsilon_{3} \rightarrow 0$, as $\delta x, \delta y, \delta z \rightarrow 0$. Now let $\epsilon_{1}, \epsilon_{2}$ and $\epsilon_{3}$ be made zero and let $d x$, $d y$ and $d z$ be written for $\delta x, \delta y$ and $\delta z$ respectively. Then there results the total differential equation

$ \frac{\partial f}{\partial x} d x+\frac{\partial f}{\partial y} d y+\frac{\partial f}{\partial z} d z=0, $

which has been derived from the primitive by a consistent and logical process. If the three partial derivatives have a common factor $\mu$, so that

$ \frac{\partial f}{\partial x}=\mu P, \quad \frac{\partial f}{\partial y}=\mu Q, \quad \frac{\partial f}{\partial z}=\mu R, $

then if the factor $\mu$ is removed, the equation takes the form

$ P\, d x+Q\, d y+R\, d z=0. $

That there is no inconsistency in the above use of the differentials $d x$, etc., may be verified by considering a particular equation in two variables, namely,

$ f(x, y)=c. $

The above process gives rise to the total differential equation

$ \frac{\partial f}{\partial x} d x+\frac{\partial f}{\partial y} d y=0, $

and thus the quotient of the differentials $d y, d x$ is in fact the differential coefficient $d y / d x$.

Footnotes

1. Needless to say, it is assumed that all the partial differential coefficients of $f$ exist, and that $\dfrac{\partial f}{\partial y}$ is not identically zero.

2. Originally the terms integral (James Bernoulli, 1689) and particular integral (Euler, Inst. Calc. Int., 1768) were used. The use of the word solution dates back to Lagrange (1774), and, mainly through the influence of Poincaré, it has become established.
The term particular integral is now used only in a very restricted sense, cf. Chap. VI. infra.

3. Formerly known as the complete integral or complete integral equation (æquatio integralis completa, Euler). The term integral equation has now an utterly different meaning (cf. $\S$ 3.2, infra), and its use in any other connection should be abandoned.

4. Scott and Mathews, Theory of Determinants, Chap. XIII.