Generalized Stokes Theorem
35.1 INTRODUCTION
35.1.1 Extending Multivariable Calculus to Four Dimensions
Having seen one fundamental theorem (FTC) in dimension \(1\), two theorems (FTLI, GREEN) in dimension \(2\) and three theorems (FTLI, STOKES, GAUSS) in dimension \(3\), we expect \(4\) theorems in dimension \(4\). This is indeed the case, but how do we formulate such a theory? How would you formulate this in \(4\) dimensions, where points have coordinates \((x, y, z, w)\)?

35.1.2 Demystifying Differential Forms: From Functions to Multilinear Maps
Élie Cartan introduced forms. In three dimensions, a \(\boldsymbol{0}\)-form is just a scalar function \[f(x, y, z).\] A \(\boldsymbol{1}\)-form is \[F=P d x+Q d y+R d z,\] where \(P\), \(Q\), \(R\) are scalar functions and \(d x\), \(d y\), \(d z\) are formal expressions. A \(\boldsymbol{2}\)-form is an expression of the form \[F=P \,d y \,d z+Q \,d z \,d x+R \,d x \,d y,\] where \(d x\), \(d y\), \(d z\) are again symbols but satisfy rules like \(d x \,d y=-d y \,d x\), \(d x \,d z=-d z \,d x\) and \(d y \,d z=-d z \,d y\). A \(\boldsymbol{3}\)-form finally is written as \[f \,d x \,d y \,d z,\] where \(d x \,d y \,d z\) is a volume form. Most calculus books treat \(0\)-forms and \(3\)-forms as scalar functions and \(1\)-forms and \(2\)-forms as vector fields. But what is \(d x\)? It is a linear map from \(\mathbb{R}^{3} \rightarrow \mathbb{R}\) which maps a vector \([v_{1}, v_{2}, v_{3}]\) to \(v_{1}\). The expression \(d x \,d y\) is a multi-linear anti-symmetric map from \(\mathbb{R}^{3} \times \mathbb{R}^{3}\) to \(\mathbb{R}\): the object \(d x \,d y\) assigns to two vectors \(v\), \(w\) the determinant of the matrix with \(v\), \(w\), \([0,0,1]^{T}\) as column vectors, which is equal to \[v \times w \cdot k=v_{1} w_{2}-v_{2} w_{1}.\] Switching \(v\) and \(w\) changes the sign so that \(d x \,d y=-d y \,d x\) and especially \(d x \,d x=0\). The object \(d x \,d y \,d z\) is a multi-linear map from \(\mathbb{R}^{3} \times \mathbb{R}^{3} \times \mathbb{R}^{3}\) to \(\mathbb{R}\) assigning to \(3\) vectors \(u\), \(v\), \(w\) the determinant of the matrix in which \(u\), \(v\), \(w\) are the columns. Again, switching two elements changes the sign. For example, \(d x \,d y \,d z=-d x \,d z \,d y\), or \(d x \,d x \,d z=0\).
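The determinant description of \(dx \,dy\) is easy to check numerically. Here is a minimal sketch (the helper name `dx_dy` is our own, not notation from the text), assuming Python with NumPy:

```python
import numpy as np

def dx_dy(v, w):
    """Evaluate the 2-form dx dy on two vectors in R^3: the determinant
    of the matrix with columns v, w and the third basis vector k."""
    k = np.array([0.0, 0.0, 1.0])
    return np.linalg.det(np.column_stack([v, w, k]))

v = np.array([1.0, 2.0, 5.0])
w = np.array([3.0, 4.0, -1.0])

assert np.isclose(dx_dy(v, w), v[0]*w[1] - v[1]*w[0])  # v1 w2 - v2 w1
assert np.isclose(dx_dy(w, v), -dx_dy(v, w))           # dx dy = -dy dx
assert np.isclose(dx_dy(v, v), 0.0)                    # dx dx = 0
```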
35.1.3 Generalizing Scalar and Vector Fields
Breaking away from notions like cross product, we get now objects which can be defined in arbitrary dimensions \(\mathbb{R}^{n}\). A \(\boldsymbol{k}\)-form is a rule which at every point defines a multi-linear and anti-symmetric map to the reals. Let us look at how this is defined in \(4\) dimensions: a \(\boldsymbol{0}\)-form is a scalar function \(f\). It assigns to every point \((x, y, z, w)\) a number \[f(x, y, z, w).\] A \(\boldsymbol{1}\)-form is an expression \[F=P \,d x+Q \,d y+R \,d z+S \,d w\] which can be thought of as a vector field \(F=[P, Q, R, S]\). A \(\boldsymbol{2}\)-form is an expression \[F=A \,d x \,d y+B \,d x \,d z+C \,d x \,d w+P \,d y \,d z+Q \,d y \,d w+R \,d z \,d w.\] It is a field with \(6\) components. A \(\boldsymbol{3}\)-form is an expression \[F=A \,d y \,d z \,d w+B \,d x \,d z \,d w+C \,d x \,d y \,d w+D \,d x \,d y \,d z.\] As it is a field with \(4\) components we can again see it as a "vector field". A \(\boldsymbol{4}\)-form is an expression \[F=f \,d x \,d y \,d z \,d w.\] As it has only one component, we can again think of it as a "scalar function", even though this is, strictly speaking, a lie. A \(4\)-form is a different object than a \(0\)-form.
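The component counts \(1, 4, 6, 4, 1\) for \(k\)-forms with \(k=0,\ldots,4\) are just binomial coefficients; a one-line check in Python:

```python
from math import comb

# number of independent components of a k-form in n = 4 dimensions
counts = [comb(4, k) for k in range(5)]
assert counts == [1, 4, 6, 4, 1]
```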
35.1.4 Application of the Exterior Derivative in Four Dimensions
The exterior derivative produces from a \(k\)-form a \((k+1)\)-form. First define the \(1\)-form \[d f=f_{x} \,d x+f_{y} \,d y+f_{z} \,d z+f_{w} \,d w\] for a \(0\)-form \(f\), then use this for general \(k\)-forms. Given a \(1\)-form \[F=P \,d x+Q \,d y+R \,d z+S \,d w\] define \[\begin{aligned} d F&=(P_{x} \,d x+P_{y} \,d y+P_{z} \,d z+P_{w} \,d w) \,d x\\ &\quad +(Q_{x} \,d x+Q_{y} \,d y+Q_{z} \,d z+Q_{w} \,d w) \,d y\\ &\quad +(R_{x} \,d x+R_{y} \,d y+R_{z} \,d z+R_{w} \,d w) \,d z\\ &\quad +(S_{x} \,d x+S_{y} \,d y+S_{z} \,d z+S_{w} \,d w) \,d w \end{aligned}\] which simplifies to \[\begin{aligned} d F&=(Q_{x}-P_{y}) \,d x \,d y\\ &\quad +(R_{x}-P_{z}) \,d x \,d z\\ &\quad +(S_{x}-P_{w}) \,d x \,d w\\ &\quad +(R_{y}-Q_{z}) \,d y \,d z\\ &\quad +(S_{y}-Q_{w}) \,d y \,d w\\ &\quad +(S_{z}-R_{w}) \,d z \,d w. \end{aligned}\] If \[F=A \,d x \,d y+B \,d x \,d z+C \,d x \,d w+P \,d y \,d z+Q \,d y \,dw+R \,d z \,d w\] is a \(2\)-form, then \[\begin{aligned} d F&=(A_{x} \,dx+A_{y} \,dy+A_{z} \,dz+A_{w} \,dw) \,dx \,dy\\ &\quad +(B_{x} \,dx+B_{y} \,dy+B_{z} \,dz+B_{w} \,dw) \,dx \,dz\\ &\quad + (C_{x} \,dx+C_{y} \,dy+C_{z} \,dz+C_{w} \,dw) \,dx \,dw\\ &\quad + (P_{x} \,dx+P_{y} \,dy+P_{z} \,dz+P_{w} \,dw) \,dy \,dz\\ &\quad + (Q_{x} \,dx+Q_{y} \,dy+Q_{z} \,dz+Q_{w} \,dw) \,dy \,dw\\ &\quad +(R_{x} \,dx+R_{y} \,dy+R_{z} \,dz+R_{w} \,dw) \,dz \,dw \end{aligned}\] simplifies to \[\begin{aligned} dF &= (P_{x}-B_{y}+A_{z}) \,dx \,dy \,dz\\ &\quad +(Q_{x}-C_{y}+A_{w}) \,dx \,dy \,dw\\ &\quad +(R_{x}-C_{z}+B_{w}) \,dx \,dz \,dw\\ &\quad +(R_{y}-Q_{z}+P_{w}) \,dy \,dz \,dw. 
\end{aligned}\] Finally, for \[F=A \,dy \,dz \,dw+B \,dx \,dz \,dw+C \,dx \,dy \,dw+D \,dx \,dy \,dz\] we have \[\begin{aligned} d F&=(A_{x} \,dx+A_{y} \,dy+A_{z} \,dz+A_{w} \,dw) \,dy \,dz \,dw\\ &\quad +(B_{x} \,dx+B_{y} \,dy+B_{z} \,dz+B_{w} \,dw) \,dx \,dz \,dw\\ &\quad +(C_{x} \,dx+C_{y} \,dy+C_{z} \,dz+C_{w} \,dw) \,dx \,dy \,dw\\ &\quad +(D_{x} \,dx+D_{y} \,dy+D_{z} \,dz+D_{w} \,dw) \,dx \,dy \,dz\\ &=(A_{x}-B_{y}+C_{z}-D_{w}) \,dx \,dy \,dz \,dw. \end{aligned}\]
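All of this sign bookkeeping can be automated. Below is a small SymPy sketch (representing a form as a dictionary from sorted index tuples to coefficients, and the helper name `exterior_derivative`, are our own choices, not notation from the text) that reproduces the \(1\)-form computation and confirms \(d \circ d = 0\):

```python
import sympy as sp

x, y, z, w = sp.symbols('x y z w')
coords = [x, y, z, w]

def exterior_derivative(form):
    """form maps sorted index tuples (i1 < ... < ip) to coefficients;
    returns the (p+1)-form dF in the same representation."""
    dF = {}
    for idx, coeff in form.items():
        for i, xi in enumerate(coords):
            if i in idx:
                continue                                 # dx_i dx_i = 0
            new = tuple(sorted(idx + (i,)))
            sign = (-1) ** sum(1 for j in idx if j < i)  # sort dx_i into place
            dF[new] = dF.get(new, 0) + sign * sp.diff(coeff, xi)
    return dF

P, Q, R, S = [sp.Function(n)(x, y, z, w) for n in 'PQRS']
F = {(0,): P, (1,): Q, (2,): R, (3,): S}   # P dx + Q dy + R dz + S dw
dF = exterior_derivative(F)

# the dx dy coefficient is Q_x - P_y, as in the simplification above
assert sp.simplify(dF[(0, 1)] - (sp.diff(Q, x) - sp.diff(P, y))) == 0
# d(dF) = 0: every coefficient of the resulting 3-form vanishes
assert all(sp.simplify(c) == 0 for c in exterior_derivative(dF).values())
```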
35.1.5 Tensors and Stokes Theorem Integration
We can integrate a \((k+1)\)-form \(d F\) over a \((k+1)\)-manifold \(G\) and a \(k\)-form \(F\) over the \(k\)-manifold \(d G\), the boundary of \(G\). We write \(\int_{G} d F\). To see the general Stokes theorem, we need to know what a tensor is. Machine learning can justify introducing the concept.1 Let \(E\) be a space of column vectors and \(E^{*}\) a space of row vectors.
Column vectors are tensors of the type \((1,0)\), row vectors are tensors of the type \((0,1)\), matrices are tensors of the type \((1,1)\). The \(k\)-th Jacobian derivative of a function \(f\) is a tensor of type \((0, k)\). A tensor of type \((0,3)\) for example is a \(3\)-dimensional array of numbers \(A_{i j k}\). It defines a multi-linear map assigning to every triplet of vectors \(u\), \(v\), \(w\) the number \(\sum_{i, j, k} A_{i j k} u^{i} v^{j} w^{k}\).2 A \(k\)-form on a manifold attaches a \((0, k)\) tensor at every point.
35.2 LECTURE
35.2.1 Tensors as Multilinear Maps on Dual Spaces
\(E=\mathbb{R}^{n}=M(n, 1)\) is the space of column vectors. Its dual \(E^{*}=M(1, n)\) is the space of row vectors. To get more general objects we treat vectors as maps. A row vector \(F\) is a linear map \(F: E \rightarrow \mathbb{R}\) defined by \(F(u)=F u\) and a column vector \(F\) defines a linear map \(F: E^{*} \rightarrow \mathbb{R}\) by \(F(u)=u F\). A map \(F\left(x_{1}, \ldots, x_{n}\right)\) of several variables is called multi-linear if it is linear in each coordinate. The set \(T_{q}^{p}(E)\) of all multi-linear maps \(F:(E^{*})^{p} \times E^{q} \rightarrow \mathbb{R}\) is the space of tensors of type \((p, q)\). We have \(T_{0}^{1}(E)=E\) and \(T_{1}^{0}(E)=E^{*}\). The space \(T_{1}^{1}(E)\) can naturally be identified with the space \(M(n, n)\) of \(n \times n\) matrices. Indeed, given a matrix \(A\), a column vector \(v \in E\) and a row vector \(w \in E^{*}\), we get the bi-linear map \(F(v, w)=w A v\). It is linear in \(v\) and in \(w\). In other words, it is a tensor of type \((1,1)\).
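A quick numerical illustration of the identification \(T_1^1(E) \cong M(n,n)\): the map \(F(v, w) = w A v\) is linear in each slot. A minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((3, 3))        # a (1,1)-tensor, stored as a matrix
v1, v2 = rng.standard_normal(3), rng.standard_normal(3)   # column vectors in E
w1, w2 = rng.standard_normal(3), rng.standard_normal(3)   # row vectors in E*
a, b = 2.0, -3.0

def F(v, w):
    """The bilinear map F(v, w) = w A v defined by the matrix A."""
    return w @ A @ v

assert np.isclose(F(a*v1 + b*v2, w1), a*F(v1, w1) + b*F(v2, w1))  # linear in v
assert np.isclose(F(v1, a*w1 + b*w2), a*F(v1, w1) + b*F(v1, w2))  # linear in w
```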
35.2.2 Anti-Symmetric Tensors and k-Forms
Let \(\Lambda^{q}(E)\) be the subspace of \(T_{q}^{0}(E)\) which consists of tensors \(F\) of type \((0, q)\) such that \(F(x_{1}, \ldots, x_{q})\) is anti-symmetric in \(x_{1}, \ldots, x_{q} \in E\): this means \[F(x_{\sigma(1)}, \ldots, x_{\sigma(q)})=(-1)^{\sigma} F(x_{1}, \ldots, x_{q})\] for every permutation \(\sigma\) of \(\{1, \ldots, q\}\), where \((-1)^{\sigma}\) is the sign of the permutation \(\sigma\). The binomial coefficient \(B(n, q)=n ! /(q !(n-q) !)\) counts the number of subsets with \(q\) elements \(i_{1}<\cdots<i_{q}\) of \(\{1, \ldots, n\}\); it is the dimension of \(\Lambda^{q}(E)\), because the forms \(dx_{i_{1}} \cdots dx_{i_{q}}\) with \(i_{1}<\cdots<i_{q}\) form a basis.
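A concrete element of \(\Lambda^{2}(E)\) can be built from two row vectors \(F, G \in E^{*}\) by anti-symmetrization: \((F \wedge G)(u, v)=F(u) G(v)-F(v) G(u)\). In \(\mathbb{R}^{3}\) this pairing matches the cross product (the Binet-Cauchy identity). A small NumPy sketch (the helper name `wedge` is ours):

```python
import numpy as np

def wedge(F, G):
    """Exterior product of two 1-forms (covectors) on R^n: the
    anti-symmetric bilinear map (u, v) -> F(u)G(v) - F(v)G(u)."""
    return lambda u, v: (F @ u)*(G @ v) - (F @ v)*(G @ u)

F = np.array([1.0, 2.0, 3.0])
G = np.array([0.0, 1.0, -1.0])
u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])

FG = wedge(F, G)
assert np.isclose(FG(u, v), -FG(v, u))     # anti-symmetry
# in R^3 the wedge pairing agrees with (F x G) . (u x v)
assert np.isclose(FG(u, v), np.dot(np.cross(F, G), np.cross(u, v)))
```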
35.2.3 Exterior Calculus: Forms, Derivatives, and Integration
The exterior derivative \(d: \Lambda^{p} \rightarrow \Lambda^{p+1}\) is defined for \(f \in \Lambda^{0}\) as \[d f=f_{x_{1}} \,dx_{1}+\cdots+f_{x_{n}} \,dx_{n}\] and \[d(f \,dx_{i_{1}} \cdots \,dx_{i_{p}})=\sum_{i} f_{x_{i}} \,dx_{i} \,dx_{i_{1}} \cdots \,dx_{i_{p}}.\] For \(F=P \,dx+Q \,dy\) for example, it is \[(P_{x} \,dx+P_{y} \,dy) \,dx+(Q_{x} \,dx+Q_{y} \,dy) \,dy=(Q_{x}-P_{y}) \,dx \,dy\] which is the curl of \(F\). If \(r: G \subset \mathbb{R}^{m} \rightarrow \mathbb{R}^{n}\) is a parametrization, then \(S=r(G)\) is an \(\boldsymbol{m}\)-surface and \(\delta S=r(\delta G)\) is its boundary in \(\mathbb{R}^{n}\). If \(F \in \Lambda^{p}(\mathbb{R}^{n})\) is a \(\boldsymbol{p}\)-form on \(\mathbb{R}^{n}\), then \[r^{*} F(x)(u_{1}, \ldots, u_{p})=F(r(x))\Big(d r(x)(u_{1}), d r(x)(u_{2}), \ldots, d r(x)(u_{p})\Big)\] is a \(p\)-form in \(\mathbb{R}^{m}\) called the pull-back of \(F\) under \(r\). Given a \(p\)-form \(F\) and a \(p\)-surface \(S=r(G)\), define the integral \(\int_{S} F=\int_{G} r^{*} F\). The general Stokes theorem is
Theorem 1. \(\int_{S} d F=\int_{\delta S} F\) for an \((m-1)\)-form \(F\) and an \(m\)-surface \(S\) in \(E\).
Proof. As in the proof of the divergence theorem, we can assume that the region \(G\) is simultaneously of the form \[g_{j}(x_{1}, \ldots, \hat{x}_{j}, \ldots x_{m}) \leq x_{j} \leq h_{j}(x_{1}, \ldots, \hat{x}_{j}, \ldots x_{m}),\] where \(1 \leq j \leq m\) and that \(F=\left[0, \ldots, 0, F_{j}, 0, \ldots, 0\right]\). The coordinate-independent definition of \(d F\) reduces the result to the divergence theorem in \(G\). ◻
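The pull-back in the integral definition is mechanical to compute. As an illustration (with the sample parametrization \(r(u, v)=[u v, u-v, u+v]\), which also appears in the examples), the pull-back of the \(2\)-form \(dx \,dy\) under \(r\) evaluates \(dx \,dy\) on the Jacobian columns \(r_u, r_v\), a SymPy sketch:

```python
import sympy as sp

u, v = sp.symbols('u v')
r = sp.Matrix([u*v, u - v, u + v])     # a parametrization r : R^2 -> R^3
J = r.jacobian(sp.Matrix([u, v]))      # columns are dr(e1) = r_u, dr(e2) = r_v

# r*(dx dy)(e1, e2) = dx dy (r_u, r_v): determinant of the first two rows of J
coeff = sp.det(J[0:2, :])              # coefficient of du dv in the pull-back
assert sp.expand(coeff + u + v) == 0   # here r*(dx dy) = -(u + v) du dv
```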
35.3 EXAMPLES
Example 1. For \(n=1\), there are only \(0\)-forms and \(1\)-forms. Both are scalar functions. We write \(f\) for a \(0\)-form and \(F=f \,dx\) for a \(1\)-form. The symbol \(d x\) abbreviates the linear map \(d x(u)=u\). The \(1\)-form assigns to every point the linear map \(f(x) \,dx(u)=f(x) u\). The exterior derivative \(d: \Lambda^{0} \rightarrow \Lambda^{1}\) is given by \(d f(x) u=f^{\prime}(x) u\). Stokes theorem is the fundamental theorem of calculus \(\int_{a}^{b} f^{\prime}(x) \,dx=f(b)-f(a)\).
Example 2. For \(n=2\), there are \(0\)-forms, \(1\)-forms and \(2\)-forms. It is customary to write \(F=P \,dx+Q \,dy\) rather than \(F=[P, Q]\), which is thought of as a linear map \[F(x, y)(u)=P(x, y) u_{1}+Q(x, y) u_{2}.\] A \(2\)-form is also written as \(F=f \,dx \,dy\) or \(F=f \,dx \wedge dy\). Here \(d x \,dy\) means the bi-linear map \(d x \,dy(u, v)=(u_{1} v_{2}-u_{2} v_{1})\). The \(2\)-form defines such a bi-linear map at every point \((x, y)\). The exterior derivative \(d: \Lambda^{0} \rightarrow \Lambda^{1}\) is \[d f(x, y)(u_{1}, u_{2})=f_{x}(x, y) u_{1}+f_{y}(x, y) u_{2}\] which encodes the Jacobian \(d f=[f_{x}, f_{y}]\), a row vector. The exterior derivative of a \(1\)-form \(F=P \,dx+Q \,dy\) is \[d F(x, y)(u, v)=(-1)^{1} P_{y}(x, y) \det([u, v])+(-1)^{2} Q_{x}(x, y) \det([u, v])\] which is \((Q_{x}-P_{y}) \,dx \,dy\). Using coordinates is convenient as \[d F=P_{y} \,dy \,dx+Q_{x} \,dx \,dy=(Q_{x}-P_{y}) \,dx \,dy\] using now that \(d y \,dx=-d x \,dy\).
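For a concrete check of \(dF=(Q_x-P_y)\,dx\,dy\) together with Green's theorem, take the sample \(1\)-form \(F=-y^3\,dx+x^3\,dy\) on the unit disk: then \(Q_x-P_y=3(x^2+y^2)\) and both sides equal \(3\pi/2\). A numerical sketch:

```python
import numpy as np

# boundary side: F(r(t)) . r'(t) with r(t) = [cos t, sin t]
# equals sin(t)^4 + cos(t)^4
t = np.linspace(0.0, 2.0*np.pi, 400001)
g = np.sin(t)**4 + np.cos(t)**4
line = np.sum(0.5*(g[1:] + g[:-1]) * np.diff(t))     # trapezoid rule

# area side: curl 3 r^2 integrated in polar coordinates, weight r dr dtheta
r = np.linspace(0.0, 1.0, 400001)
h = 3.0*r**2 * r
area = 2.0*np.pi * np.sum(0.5*(h[1:] + h[:-1]) * np.diff(r))

assert np.isclose(line, 1.5*np.pi, atol=1e-5)
assert np.isclose(area, 1.5*np.pi, atol=1e-5)
```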
Example 3. For \(n=3\), we write \(F=P \,dx+Q \,dy+R \,dz\) for a \(1\)-form, and \[F=P \,dy \,dz+Q \,dz \,dx+R \,dx \,dy\] for a \(2\)-form. Here \(d y \,dz=d y \wedge dz\) are symbols representing bilinear maps like \(d y \,dz(u, v)=u_{2} v_{3}-u_{3} v_{2}\). As a \(2\)-form has \(3\) components, it can be visualized as a vector field. A \(3\)-form \(f \,dx \,dy \,dz\) defines a scalar function \(f\). The symbol \(d x \,dy \,dz=d x \wedge d y \wedge d z\) represents the map \(d x \,dy \,dz(u, v, w)=\operatorname{det}([u v w])\). The exterior derivative of a \(1\)-form gives the curl because \[\begin{aligned} d(P \,dx+Q \,dy+R \,dz)&=P_{y} \,dy \,dx+P_{z} \,dz \,dx\\ &\quad +Q_{x} \,dx \,dy+Q_{z} \,dz \,dy\\ &\quad +R_{x} \,dx \,dz+R_{y} \,dy \,dz \end{aligned}\] which is \[\begin{aligned} (R_{y}-Q_{z}) \,dy \,dz+(P_{z}-R_{x}) \,dz \,dx+(Q_{x}-P_{y}) \,dx \,dy. \end{aligned}\] The exterior derivative of a \(2\)-form \(P \,dy \,dz+Q \,dz \,dx+R \,dx \,dy\) is \[\begin{aligned} P_{x} \,dx \,dy \,dz+Q_{y} \,dy \,dz \,dx+R_{z} \,dz \,dx \,dy=(P_{x}+Q_{y}+R_{z}) \,dx \,dy \,dz. \end{aligned}\] To integrate a \(2\)-form \[F=x^{2} y z \,dx \,dy+y z \,dy \,dz+x z \,dx \,dz\] over a surface \(r(u, v)=[x, y, z]=[u v, u-v, u+v]\) with \(G=\{u^{2}+v^{2} \leq 1\}\) we end up with integrating \(F(r(u, v)) \cdot r_{u} \times r_{v}\). In order to integrate \(d F\) for a \(1\)-form \(F=P \,dx+Q \,dy+R \,dz\) we can also pull back \(F\) and get \[\iint_{G} \Big(\partial_{u}\big(F(r(u, v)) \cdot r_{v}\big)-\partial_{v}\big(F(r(u, v)) \cdot r_{u}\big)\Big) \,d u \,d v.\]
Example 4. For \(n=4\), we have \(0\)-forms \(f\), \(1\)-forms \[F=P \,dx+Q \,dy+R \,dz+S \,dw\] and \(2\)-forms \[\begin{aligned} F&=F_{12} \,dx \,dy+F_{13} \,dx \,dz+F_{14} \,dx \,dw\\ &\quad +F_{23} \,dy \,dz+F_{24} \,dy \,dw\\ &\quad +F_{34} \,dz \,dw \end{aligned}\] which are objects with \(6\) components. There are then \(3\)-forms \[F=P \,dy \,dz \,dw+Q \,dx \,dz \,dw+R \,dx \,dy \,dw+S \,dx \,dy \,dz\] and finally \(4\)-forms \[f \,dx \,dy \,dz \,dw.\]
35.4 REMARKS
35.4.1 Differential Forms: Modern Approach vs. Classical Methods
Historically, differential forms emerged in 1922 with Élie Cartan. Most textbooks introduce the Grassmannian algebra early and use the language of "chains", the language of algebraic topology. I myself taught the subject in this old-fashioned way too, back in 1995.3 It was Jean Dieudonné who in 1972 freed the general Stokes theorem from chains and first used the coordinate-free pull-back idea. This allowed us in this lecture to formulate the general Stokes theorem from scratch on a single page with all definitions.
35.4.2 Intuitive Approaches to Differential Forms
What is a differential form? We have seen a mathematically precise definition: a differential form is a kind of field: it defines a multi-linear anti-symmetric function that is attached to each point of space. But what is the intuition and what are ways to "visualize" and "see" and "understand" such an object? Here are four paths. Maybe one of them helps:
- Using Stokes one can see a form as a functional \(F\), which assigns to a \(m\)-dimensional oriented surface \(S\) a number \(\int_{S} F \cdot d S\) such that4 \[\int_{-S} F \cdot d S=\int_{S}(-F) \cdot d S=-\int_{S} F \cdot d S.\] This way of thinking about forms matches what we do in the discrete. If we have a \(k\)-form on a graph, then this is a function on \(k\)-dimensional oriented complete subgraphs. Given a graph \(S\) we have \(\int_{S} F \cdot d S=\sum_{x \in S} F(x)\), where the sum is over all \(k\)-dimensional simplices in \(S\).
- One can understand differential forms better using algebra: the Grassmannian algebra. This is done with the help of the tensor product, which induces an exterior product \(F \wedge G\) on \(\Lambda^{p} \times \Lambda^{q} \rightarrow \Lambda^{p+q}\). This product generalizes the cross product \(\Lambda^{1} \times \Lambda^{1} \rightarrow \Lambda^{2}\) which works for \(n=3\) as there, the space of \(1\)-forms \(\Lambda^{1}\) and \(2\)-forms \(\Lambda^{2}\) can be identified. The exterior algebra structure helps to understand \(k\)-forms. We can for example see a \(2\)-form as an exterior product \(F \wedge G\) of two \(1\)-forms. We can think of a \(2\)-form for example as attaching two vectors at a point, identifying two such frames if their orientation and parallelogram areas match.
- A third way comes through physics. We are familiar with manifestations of electromagnetism: we see light, we use magnets to attach papers to the fridge or have magnetic forces keep the laptop lid closed. Electric fields are felt when combing the hair, as we see sparks generated by the high electric field obtained by stripping away the electrons from the head. We use magnetic fields to store information on hard drives and electric fields to store information on a solid-state drive (SSD). Non-visible electro-magnetic fields are used when communicating using cell phones or connecting through Bluetooth or wireless network connections. The electro-magnetic field \(E\), \(B\) is actually a \(2\)-form in \(4\) dimensions. The \(B(4,2)=6\) components are \((E_{1}, E_{2}, E_{3}, B_{1}, B_{2}, B_{3})\).
- A fourth way comes through discretization. When formulating Stokes on a discrete network, everything is much easier: a \(k\)-form is just a function on oriented \(k\)-dimensional complete subgraphs of a network. Start with a graph \(G=(V, E)\) and orient the complete subgraphs arbitrarily. Given a \(k\)-form \(F\), a function on \(k\)-dimensional simplices, its exterior derivative at a \((k+1)\)-dimensional simplex \(x\) is defined as \(d F(x)=\sum_{y \subset x} \sigma(y, x) F(y)\), where the sum is over all \(k\)-dimensional sub-simplices of \(x\) and \(\sigma(y, x)=1\) if the orientation of \(y\) matches the orientation of \(x\) and \(-1\) otherwise. We have for example seen that for a \(1\)-form \(F\), a function on edges, the exterior derivative at a triangle \(x\) is the sum over the \(F\) values of the edges, where we add up the value negatively if the arrow of the edge does not match the orientation of the triangle.
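The discrete exterior derivative of the fourth item fits in a few lines. The sketch below fixes the orientation of each simplex by listing its vertices in increasing order, so that \(\sigma(y, x)\) becomes the usual alternating boundary sign \((-1)^i\) for the face omitting the \(i\)-th vertex (a standard convention, chosen here for concreteness):

```python
def d(F, simplices):
    """Discrete exterior derivative: F maps each sorted k-simplex (a tuple)
    to a number; dF(x) sums F over the k-faces of x with alternating signs."""
    dF = {}
    for x in simplices:
        dF[x] = sum((-1)**i * F[x[:i] + x[i+1:]] for i in range(len(x)))
    return dF

# a 1-form on the edges of a triangle
F = {(0, 1): 5.0, (0, 2): 2.0, (1, 2): 3.0}
dF = d(F, [(0, 1, 2)])
assert dF[(0, 1, 2)] == 3.0 - 2.0 + 5.0      # F(1,2) - F(0,2) + F(0,1)

# d of a gradient vanishes: f on vertices, df on edges, d(df) = 0
f = {(0,): 1.0, (1,): 4.0, (2,): 9.0}
df = d(f, [(0, 1), (0, 2), (1, 2)])
assert d(df, [(0, 1, 2)])[(0, 1, 2)] == 0.0
```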
35.5 APPLICATIONS
35.5.1 Electromagnetic Duality from One-Forms to the Laplacian
An electromagnetic field is determined by a \(1\)-form \(A\) in \(4\)-dimensional space time. The electromagnetic field is \(F=d A\). The first half of the Maxwell equations is \(d F=0\) (the relation \(d \circ d=0\) is seen in the homework). The second half is \(d^{*} F=j\), where \(d^{*}: \Lambda^{p} \rightarrow \Lambda^{p-1}\) is the adjoint and \(j\) is a \(1\)-form encoding both the electric charge and the electric current. We can always gauge with a gradient, replacing \(A\) by \(A+d f\), so that \(d^{*}(A+d f)=0\) (Coulomb gauge). Using \(d^{*} A=0\), the Maxwell equations reduce to the Poisson equation \[L A=(d d^{*}+d^{*} d) A=j,\] where \(L\) is the Laplacian on \(1\)-forms. The electric current \(j\) determines the electromagnetic field \(F\) simply by inverting the Laplacian. This is a bit tricky in the continuum, as the inverse is an integral operator.5 In the discrete it is just the inverse of the matrix \(L\), which by the way is always an invertible \(|E| \times|E|\) matrix if the graph \(G=(V, E)\) is simply connected. And there was light!
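The discrete version of "inverting the Laplacian" takes only a few lines. Below is a minimal sketch for the filled triangle (our own choice of example graph): `d0` is the gradient incidence matrix, `d1` the curl, and \(L=d_0 d_0^{T}+d_1^{T} d_1\) is the Laplacian on \(1\)-forms, an invertible \(|E| \times |E|\) matrix here:

```python
import numpy as np

edges = [(0, 1), (0, 2), (1, 2)]
d0 = np.zeros((3, 3))                  # gradient: edges x vertices
for k, (a, b) in enumerate(edges):
    d0[k, a], d0[k, b] = -1.0, 1.0
d1 = np.array([[1.0, -1.0, 1.0]])      # curl: boundary signs of triangle (0,1,2)

# Laplacian on 1-forms: L = d d* + d* d
L = d0 @ d0.T + d1.T @ d1
assert np.linalg.matrix_rank(L) == 3   # invertible |E| x |E| matrix

j = np.array([1.0, 0.0, -1.0])         # a sample current 1-form on the edges
A = np.linalg.solve(L, j)              # the potential solving L A = j
assert np.allclose(L @ A, j)
```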
EXERCISES
Exercise 1. Given the \(1\)-form \[F(x, y, z, w)=[x^{3}, y^{5}, z^{5}, w^{2}]=x^{3} \,dx+y^{5} \,dy+z^{5} \,dz+w^{2} \,dw\] and the curve \(C: r(t)=[\cos (t), \sin (t), \cos (t), \sin (t)]\) with \(0 \leq t \leq \pi\), find the line integral \(\int_{C} F(r(t)) \cdot d r\).
Exercise 2. Given the \(1\)-form \[F=[x y z, x y, w x, w x y]=x y z \,dx+x y \,dy+w x \,dz+w x y \,dw,\] find the \(\operatorname{curl} d F\). Now find \(\iint_{S} d F\) over the \(2\)-dimensional surface \[S: x^{2}+y^{2} \leq 1, \quad z=1, \quad w=1\] which has as a boundary the curve \(C: r(t)=[\cos (t), \sin (t), 1,1]^{T}\), \(0 \leq t \leq 2 \pi\).
Hint: You can certainly use the Stokes theorem. If you compute both sides of the theorem, you can see how the theorem works. The \(2\)-manifold \(S\) is parametrized by \(r(t, s)=[s, t, 1,1]^{T}\). The wedge \(r_{s} \wedge r_{t}\) has \(6\) components \((r_{s} \wedge r_{t})_{i j}\), of which only \((r_{s} \wedge r_{t})_{12}\) is nonzero. This will match with the \(d F_{12}=P \,dx \,dy\) part of the \(6\)-component \(2\)-form \(d F\) building the curl. We then have to integrate over \(G: s^{2}+t^{2} \leq 1\).
Exercise 3. Given the \(2\)-form \[F=z^{4} x \,dx \,dz+x y z w^{2} \,dy \,dw\] and the \(3\)-sphere \(x^{2}+y^{2}+z^{2}+w^{2}=1\) oriented outwards, what is the integral \(\iiint_{S} d F\)? To compute this \(3\)D integral, you can use the general integral theorem.
Exercise 4. Given the \(3\)-form \[F=x y z \,dx \,dy \,dz+y^{2} z \,dy \,dz \,dw,\] find the divergence \(d F\). Now find the flux of \(F\) through the unit sphere \(x^{2}+\) \(y^{2}+z^{2}+w^{2}=1\) oriented outwards.
Exercise 5.
- Take \(f(x, y, z, w)\). Check that \(F=d f\) satisfies \(d F=0\).
- Take \[F=F_{1} \,dx+F_{2} \,dy+F_{3} \,dz+F_{4} \,dw.\] Compute the \(\operatorname{curl} G=d F\) and check that \(d G=0\).
- Take the \(2\)-form \[\begin{aligned} F&=F_{12} \,dx \,dy+F_{13} \,dx \,dz+F_{14} \,dx \,dw\\ &\quad +F_{23} \,dy \,dz+F_{24} \,dy \,dw\\ &\quad +F_{34} \,dz \,dw. \end{aligned}\] Write down the \(3\)-form \(G=d F\) and check \(d G=0\).
- Take the \(3\)-form \[\begin{aligned} F=F_{1} \,dy \,dz \,dw+F_{2} \,dx \,dz \,dw+F_{3} \,dx \,dy \,dw+F_{4} \,dx \,dy \,dz \end{aligned}\] and compute the \(4\)-form \(G=d F\). Check that \(d G=0\).
- There is a "TensorFlow" library, for example.↩︎
- Albert Einstein would just write \(A_{i j k} u^{i} v^{j} w^{k}\) and not bother about the summation symbol.↩︎
- Caltech notes: https://people.math.harvard.edu/knill/teaching/math109_1995/geometry.webp↩︎
- David Bachman’s text on differential forms: "it is a thing which can be integrated".↩︎
- There are thick books about this like Jackson’s Electromagnetism, the bible of the topic.↩︎