We studied differentials in Section 4.4, where Definition 4.4.2 states that if y=f(x) and f is differentiable, then dy=f'(x)dx\text{.} One important use of this differential is in Integration by Substitution. Another important application is approximation. Let \Delta x = dx represent a change in x\text{.} When dx is small, dy\approx \Delta y\text{,} the change in y resulting from the change in x\text{.} Fundamental in this understanding is this: as dx gets small, the difference between \Delta y and dy goes to 0. Another way of stating this: as dx goes to 0, the error in approximating \Delta y with dy goes to 0.
We extend this idea to functions of two variables. Let z=f(x,y)\text{,} and let \Delta x = dx and \Delta y = dy represent changes in x and y\text{,} respectively. Let \Delta z = f(x+dx,y+dy)-f(x,y) be the change in z over the change in x and y\text{.} Recalling that f_x and f_y give the instantaneous rates of z-change in the x- and y-directions, respectively, we can approximate \Delta z with dz = f_xdx+f_ydy\text{;} in words, the total change in z is approximately the change caused by changing x plus the change caused by changing y. In a moment we give an indication of whether or not this approximation is any good. First we give a name to dz.
Definition 12.4.1. Total Differential.

Let z=f(x,y) be continuous on an open set S\text{.} Let dx and dy represent changes in x and y\text{,} respectively. Where the partial derivatives f_x and f_y exist, the total differential of z is \begin{equation*} dz = f_x(x,y)dx + f_y(x,y)dy. \end{equation*}
Let z=x^4e^{3y}\text{.} Find dz\text{.}
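The solution is a direct application of Definition 12.4.1; a worked version (the excerpt omits the solution), where f_x treats y as a constant and f_y treats x as a constant:

```latex
\begin{equation*}
dz = f_x(x,y)\,dx + f_y(x,y)\,dy = 4x^3e^{3y}\,dx + 3x^4e^{3y}\,dy.
\end{equation*}
```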
We can approximate \Delta z with dz\text{,} but as with all approximations, there is error involved. A good approximation is one in which the error is small. At a given point (x_0,y_0)\text{,} let E_x and E_y be functions of dx and dy such that E_xdx+E_ydy describes this error. Then \begin{align*} \Delta z \amp = dz + E_xdx + E_ydy\\ \amp = f_x(x_0,y_0)dx + f_y(x_0,y_0)dy + E_xdx + E_ydy. \end{align*}
If the approximation of \Delta z by dz is good, then as dx and dy get small, so does E_xdx+E_ydy\text{.} The approximation of \Delta z by dz is even better if, as dx and dy go to 0, so do E_x and E_y\text{.} This leads us to our definition of differentiability.
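As a concrete numerical illustration (not from the text), take the hypothetical function f(x,y) = x^2y, for which f_x = 2xy and f_y = x^2, and compare \Delta z with dz as the steps shrink:

```python
import math

# Hypothetical example function (not from the text): f(x,y) = x^2 * y,
# with partial derivatives f_x = 2xy and f_y = x^2.
def f(x, y):
    return x**2 * y

x0, y0 = 2.0, 3.0
ratios = []
for d in (0.1, 0.01, 0.001):
    dx = dy = d
    delta_z = f(x0 + dx, y0 + dy) - f(x0, y0)     # true change in z
    dz = (2 * x0 * y0) * dx + (x0**2) * dy        # total differential
    error = delta_z - dz                          # this is E_x*dx + E_y*dy
    # Divide by ||<dx,dy>||: if this ratio shrinks, the error vanishes
    # faster than the step itself, as differentiability requires.
    ratios.append(abs(error) / math.hypot(dx, dy))
```

Each tenfold shrink in the step shrinks the ratio by roughly tenfold, consistent with E_x and E_y themselves going to 0.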
Definition 12.4.3. Differentiable.

Let z=f(x,y) be defined on an open set S containing (x_0,y_0) where f_x(x_0,y_0) and f_y(x_0,y_0) exist. Let dz be the total differential of z at (x_0,y_0)\text{,} let \Delta z = f(x_0+dx,y_0+dy) - f(x_0,y_0)\text{,} and let E_x and E_y be functions of dx and dy such that \begin{equation*} \Delta z = dz + E_xdx + E_ydy. \end{equation*}
f is differentiable at (x_0,y_0) if, given \varepsilon >0\text{,} there is a \delta >0 such that if \norm{\la dx,dy\ra} \lt \delta\text{,} then \norm{\la E_x,E_y\ra} \lt \varepsilon\text{.} That is, as dx and dy go to 0, so do E_x and E_y\text{.}
f is differentiable on S if f is differentiable at every point in S\text{.} If f is differentiable on \mathbb{R}^2\text{,} we say that f is differentiable everywhere.
Show f(x,y) = xy+3y^2 is differentiable using Definition 12.4.3.
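One way the verification can proceed (a sketch; the excerpt omits the solution): here f_x = y and f_y = x+6y\text{,} so dz = y\,dx + (x+6y)\,dy\text{,} and expanding \Delta z directly gives

```latex
\begin{align*}
\Delta z &= (x+dx)(y+dy) + 3(y+dy)^2 - \bigl(xy+3y^2\bigr)\\
         &= y\,dx + (x+6y)\,dy + (dy)\,dx + (3\,dy)\,dy.
\end{align*}
```

Taking E_x = dy and E_y = 3\,dy gives \Delta z = dz + E_xdx + E_ydy\text{,} and both E_x and E_y go to 0 as dx and dy do, so f is differentiable everywhere.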
Our intuitive understanding of differentiability of functions y=f(x) of one variable was that the graph of f was “smooth.” A similar intuitive understanding of functions z=f(x,y) of two variables is that the surface defined by f is also “smooth,” not containing cusps, edges, breaks, etc. The following theorem states that differentiable functions are continuous, followed by another theorem that provides a more tangible way of determining whether a great number of functions are differentiable or not.
Theorem 12.4.5.

Let z=f(x,y) be defined on an open set S containing (x_0,y_0)\text{.} If f is differentiable at (x_0,y_0)\text{,} then f is continuous at (x_0,y_0)\text{.}
Theorem 12.4.6.

Let z=f(x,y) be defined on an open set S containing (x_0,y_0)\text{.} If f_x and f_y are both continuous on S\text{,} then f is differentiable on S\text{.}
The theorems assure us that essentially all functions that we see in the course of our studies here are differentiable (and hence continuous) on their natural domains. There is a difference between Definition 12.4.3 and Theorem 12.4.6, though: it is possible for a function f to be differentiable at a point even though f_x and/or f_y is not continuous there. Such strange behavior of functions is a source of delight for many mathematicians.
When f_x and f_y exist at a point but are not continuous at that point, we need to use other methods to determine whether or not f is differentiable at that point.
For instance, consider the function \begin{equation*} f(x,y) = \left\{\begin{array}{cl} \frac{xy}{x^2+y^2} \amp (x,y)\neq (0,0) \\ 0 \amp (x,y) = (0,0) \end{array} \right. \end{equation*}
We can find f_x(0,0) and f_y(0,0) using Definition 12.3.2: \begin{align*} f_x(0,0) \amp = \lim_{h\to 0} \frac{f(0+h,0) - f(0,0)}{h}\\ \amp = \lim_{h\to 0} \frac{0}{h} = 0;\\ f_y(0,0) \amp = \lim_{h\to 0} \frac{f(0,0+h) - f(0,0)}{h}\\ \amp = \lim_{h\to 0} \frac{0}{h} = 0. \end{align*}
Both f_x and f_y exist at (0,0)\text{,} but they are not continuous at (0,0)\text{,} as \begin{equation*} f_x(x,y) = \frac{y(y^2-x^2)}{(x^2+y^2)^2} \qquad \text{ and } \qquad f_y(x,y) = \frac{x(x^2-y^2)}{(x^2+y^2)^2} \end{equation*} are not continuous at (0,0)\text{.} (Take the limit of f_x as (x,y)\to(0,0) along the x- and y-axes; they give different results.) So even though f_x and f_y exist at every point in the x-y plane, they are not continuous. Therefore it is possible, by Theorem 12.4.6, for f to not be differentiable.
Indeed, it is not. One can show that f is not continuous at (0,0) (see Example 12.2.10), and by Theorem 12.4.5, this means f is not differentiable at (0,0)\text{.}
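A quick numerical check (not in the text) of that discontinuity: f is identically 0 along the x-axis but identically 1/2 along the line y=x\text{,} so the two paths give different limits at the origin.

```python
# f(x,y) = xy/(x^2+y^2) away from the origin, with f(0,0) = 0.
def f(x, y):
    if x == 0.0 and y == 0.0:
        return 0.0
    return x * y / (x**2 + y**2)

# Approach (0,0) along the x-axis and along the line y = x:
along_axis = [f(t, 0.0) for t in (0.1, 0.01, 0.001)]
along_line = [f(t, t) for t in (0.1, 0.01, 0.001)]
# The two paths give values 0 and 1/2 no matter how small t is,
# so f has no limit at (0,0) and cannot be continuous there.
```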
By the definition, when f is differentiable, dz is a good approximation for \Delta z when dx and dy are small. We give some simple examples of how this is used here.
Let z = \sqrt{x}\sin(y)\text{.} Approximate f(4.1,0.8)\text{.}
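The approximation can be checked numerically; a sketch, assuming we work from the nearby point (4,\pi/4)\text{,} where f(4,\pi/4)=\sqrt{2} is easy to evaluate exactly:

```python
import math

# z = sqrt(x)*sin(y), with f_x = sin(y)/(2*sqrt(x)) and f_y = sqrt(x)*cos(y).
def f(x, y):
    return math.sqrt(x) * math.sin(y)

x0, y0 = 4.0, math.pi / 4          # f(4, pi/4) = sqrt(2), easy to evaluate
dx, dy = 4.1 - x0, 0.8 - y0        # small steps to the target point

fx = math.sin(y0) / (2 * math.sqrt(x0))
fy = math.sqrt(x0) * math.cos(y0)
dz = fx * dx + fy * dy             # total differential

approx = f(x0, y0) + dz            # new value ~ old value + approximate change
exact = f(4.1, 0.8)
# approx and exact agree to within about 10**-4
```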
The point of the previous example was not to develop an approximation method for known functions. After all, we can very easily compute f(4.1,0.8) using readily available technology. Rather, it serves to illustrate how well this method of approximation works, and to reinforce the following concept:
“New position = old position + amount of change,” so
“New position \approx old position + approximate amount of change.”
In the previous example, we could easily compute f(4,\pi/4) and could approximate the amount of z-change when computing f(4.1,0.8)\text{,} letting us approximate the new z-value.
It may be surprising to learn that it is not uncommon to know the values of f\text{,} f_x and f_y at a particular point without actually knowing the function f\text{.} The total differential gives a good method of approximating f at nearby points.
Given that f(2,-3) = 6\text{,} f_x(2,-3) = 1.3 and f_y(2,-3) = -0.6\text{,} approximate f(2.1,-3.03)\text{.}
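The arithmetic is just the total differential at (2,-3)\text{,} with dx = 0.1 and dy = -0.03\text{:}

```latex
\begin{equation*}
dz = f_x(2,-3)\,dx + f_y(2,-3)\,dy = 1.3(0.1) + (-0.6)(-0.03) = 0.148,
\end{equation*}
```

so f(2.1,-3.03) \approx f(2,-3) + dz = 6.148\text{.}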
The total differential gives an approximation of the change in z given small changes in x and y\text{.} We can use this to approximate error propagation; that is, if the input is a little off from what it should be, how far from correct will the output be? We demonstrate this in an example.
A cylindrical steel storage tank is to be built that is 10ft tall and 4ft in diameter. It is known that the steel will expand/contract with temperature changes; is the overall volume of the tank more sensitive to changes in the diameter or in the height of the tank?
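A sketch of the comparison in code, assuming V = \pi r^2h with the tank's dimensions r = 2 and h = 10:

```python
import math

# V(r,h) = pi*r^2*h, so the total differential is
#   dV = (2*pi*r*h) dr + (pi*r^2) dh.
r, h = 2.0, 10.0              # 4ft diameter -> r = 2; 10ft tall

dV_dr = 2 * math.pi * r * h   # coefficient of dr: 40*pi
dV_dh = math.pi * r**2        # coefficient of dh:  4*pi
# A small change in radius changes the volume about
# dV_dr/dV_dh = 10 times as much as the same change in height.
```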
The previous example showed that the volume of a particular tank was more sensitive to changes in radius than in height. Keep in mind that this analysis only applies to a tank of those dimensions. A tank with a height of 1ft and radius of 5ft would be more sensitive to changes in height than in radius.
One could make a chart of small changes in radius and height and find exact changes in volume given specific changes. While this provides exact numbers, it does not give as much insight as the error analysis using the total differential.
The definition of differentiability for functions of three variables is very similar to that of functions of two variables. We again start with the total differential.
Let w=f(x,y,z) be continuous on an open set S\text{.} Let dx\text{,} dy and dz represent changes in x\text{,} y and z\text{,} respectively. Where the partial derivatives f_x\text{,} f_y and f_z exist, the total differential of w is \begin{equation*} dw = f_x(x,y,z)dx + f_y(x,y,z)dy+f_z(x,y,z)dz. \end{equation*}
This differential can be a good approximation of the change in w when w = f(x,y,z) is differentiable.
Let w=f(x,y,z) be defined on an open ball B containing (x_0,y_0,z_0) where f_x(x_0,y_0,z_0)\text{,} f_y(x_0,y_0,z_0) and f_z(x_0,y_0,z_0) exist. Let dw be the total differential of w at (x_0,y_0,z_0)\text{,} let \Delta w = f(x_0+dx,y_0+dy,z_0+dz) - f(x_0,y_0,z_0)\text{,} and let E_x\text{,} E_y and E_z be functions of dx\text{,} dy and dz such that \begin{equation*} \Delta w = dw + E_xdx + E_ydy + E_zdz. \end{equation*}
f is differentiable at (x_0,y_0,z_0) if, given \varepsilon >0\text{,} there is a \delta >0 such that if \norm{\la dx,dy,dz\ra} \lt \delta\text{,} then \norm{\la E_x,E_y,E_z\ra} \lt \varepsilon\text{.}
f is differentiable on B if f is differentiable at every point in B\text{.} If f is differentiable on \mathbb{R}^3\text{,} we say that f is differentiable everywhere.
Just as before, this definition gives a rigorous statement about what it means to be differentiable that is not very intuitive. We follow it with a theorem similar to Theorem 12.4.6.
Let w=f(x,y,z) be defined on an open ball B containing (x_0,y_0,z_0)\text{.}
If f is differentiable at (x_0,y_0,z_0)\text{,} then f is continuous at (x_0,y_0,z_0)\text{.}
If f_x\text{,} f_y and f_z are continuous on B\text{,} then f is differentiable on B\text{.}
This definition and theorem extend to functions of any number of variables. The theorem again gives us a simple way of verifying that most functions we encounter are differentiable on their natural domains.
This section has given us a formal definition of what it means for a function to be “differentiable,” along with a theorem that gives a more accessible understanding. The following sections return to notions prompted by our study of partial derivatives that make use of the fact that most functions we encounter are differentiable.
Terms and Concepts
In the following exercises, find the total differential dz\text{.}