We studied *differentials* in Section 4.4, where Definition 4.4.2 states that if \(y=f(x)\) and \(f\) is differentiable, then \(dy=f'(x)dx\text{.}\) One important use of this differential is in Integration by Substitution. Another important application is approximation. Let \(\Delta x = dx\) represent a change in \(x\text{.}\) When \(dx\) is small, \(dy\approx \Delta y\text{,}\) the change in \(y\) resulting from the change in \(x\text{.}\) Fundamental to this understanding is this: as \(dx\) gets small, the difference between \(\Delta y\) and \(dy\) goes to 0. Another way of stating this: as \(dx\) goes to 0, the *error* in approximating \(\Delta y\) with \(dy\) goes to 0.

We extend this idea to functions of two variables. Let \(z=f(x,y)\text{,}\) and let \(\Delta x = dx\) and \(\Delta y=dy\) represent changes in \(x\) and \(y\text{,}\) respectively. Let \(\Delta z = f(x+dx,y+dy) - f(x,y)\) be the change in \(z\) over the change in \(x\) and \(y\text{.}\) Recalling that \(f_x\) and \(f_y\) give the instantaneous rates of \(z\)-change in the \(x\)- and \(y\)-directions, respectively, we can approximate \(\Delta z\) with \(dz = f_xdx+f_ydy\text{;}\) in words, the total change in \(z\) is approximately the change caused by changing \(x\) plus the change caused by changing \(y\text{.}\) In a moment we give an indication of whether or not this approximation is any good. First we give a name to \(dz\text{.}\)

##### Definition 12.4.1 Total Differential

Let \(z=f(x,y)\) be continuous on an open set \(S\text{.}\) Let \(dx\) and \(dy\) represent changes in \(x\) and \(y\text{,}\) respectively. Where the partial derivatives \(f_x\) and \(f_y\) exist, the *total differential of \(z\)* is
\begin{equation*}
dz = f_x(x,y)dx + f_y(x,y)dy.
\end{equation*}

##### Example 12.4.2 Finding the total differential

Let \(z = x^4e^{3y}\text{.}\) Find \(dz\text{.}\)

Solution: We compute the partial derivatives: \(f_x = 4x^3e^{3y}\) and \(f_y = 3x^4e^{3y}\text{.}\) Following Definition 12.4.1, we have
\begin{equation*}
dz = 4x^3e^{3y}dx+3x^4e^{3y}dy.
\end{equation*}
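If a computer algebra system is at hand, the partials and the resulting total differential can be checked symbolically. The sketch below assumes SymPy is installed:

```python
import sympy as sp

x, y, dx, dy = sp.symbols('x y dx dy')
f = x**4 * sp.exp(3*y)

fx = sp.diff(f, x)          # 4x^3 e^{3y}
fy = sp.diff(f, y)          # 3x^4 e^{3y}
dz = fx*dx + fy*dy          # the total differential of z

# Confirm the hand computation from the example
assert sp.simplify(fx - 4*x**3*sp.exp(3*y)) == 0
assert sp.simplify(fy - 3*x**4*sp.exp(3*y)) == 0
```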

We *can* approximate \(\Delta z\) with \(dz\text{,}\) but as with all approximations, there is error involved. A good approximation is one in which the error is small. At a given point \((x_0,y_0)\text{,}\) let \(E_x\) and \(E_y\) be functions of \(dx\) and \(dy\) such that \(E_xdx+E_ydy\) describes this error. Then
\begin{align*}
\Delta z & = dz + E_xdx+ E_ydy\\
& = f_x(x_0,y_0)dx+f_y(x_0,y_0)dy + E_xdx+E_ydy.
\end{align*}

If the approximation of \(\Delta z\) by \(dz\) is good, then as \(dx\) and \(dy\) get small, so does \(E_xdx+E_ydy\text{.}\) The approximation of \(\Delta z\) by \(dz\) is even better if, as \(dx\) and \(dy\) go to 0, so do \(E_x\) and \(E_y\text{.}\) This leads us to our definition of differentiability.

##### Definition 12.4.3 Multivariable Differentiability

Let \(z=f(x,y)\) be defined on an open set \(S\) containing \((x_0,y_0)\) where \(f_x(x_0,y_0)\) and \(f_y(x_0,y_0)\) exist. Let \(dz\) be the total differential of \(z\) at \((x_0,y_0)\text{,}\) let \(\Delta z = f(x_0+dx,y_0+dy) - f(x_0,y_0)\text{,}\) and let \(E_x\) and \(E_y\) be functions of \(dx\) and \(dy\) such that
\begin{equation*}
\Delta z = dz + E_xdx + E_ydy.
\end{equation*}

\(f\) is *differentiable at \((x_0,y_0)\)* if, given \(\varepsilon >0\text{,}\) there is a \(\delta >0\) such that if \(\left\lVert\langle dx,dy\rangle\right\rVert < \delta\text{,}\) then \(\left\lVert\langle E_x,E_y\rangle\right\rVert < \varepsilon\text{.}\) That is, as \(dx\) and \(dy\) go to 0, so do \(E_x\) and \(E_y\text{.}\)

\(f\) is *differentiable on \(S\)* if \(f\) is differentiable at every point in \(S\text{.}\) If \(f\) is differentiable on \(\mathbb{R}^2\text{,}\) we say that \(f\) is *differentiable everywhere*.

##### Example 12.4.4 Showing a function is differentiable

Show \(f(x,y) = xy+3y^2\) is differentiable using Definition 12.4.3.

Solution: We begin by finding \(f(x+dx,y+dy)\text{,}\) \(\Delta z\text{,}\) \(f_x\) and \(f_y\text{.}\)
\begin{align*}
f(x+dx,y+dy) & = (x+dx)(y+dy) + 3(y+dy)^2\\
& = xy + xdy+ydx+dxdy + 3y^2+6ydy+3dy^2.
\end{align*}

\(\Delta z = f(x+dx,y+dy) - f(x,y)\text{,}\) so
\begin{equation*}
\Delta z = xdy + ydx + dxdy + 6ydy+3dy^2.
\end{equation*}

It is straightforward to compute \(f_x = y\) and \(f_y = x+6y\text{.}\) Consider once more \(\Delta z\text{:}\)
\begin{align*}
\Delta z & = xdy + ydx + dxdy + 6ydy+3dy^2 \qquad \text{ (now reorder) }\\
& = ydx + xdy+6ydy+ dxdy + 3dy^2\\
& = \underbrace{(y)}_{f_x}dx + \underbrace{(x+6y)}_{f_y}dy + \underbrace{(dy)}_{E_x}dx+\underbrace{(3dy)}_{E_y}dy\\
& = f_xdx + f_ydy + E_xdx+E_ydy.
\end{align*}

With \(E_x = dy\) and \(E_y = 3dy\text{,}\) it is clear that as \(dx\) and \(dy\) go to 0, \(E_x\) and \(E_y\) also go to 0. Since this did not depend on a specific point \((x_0,y_0)\text{,}\) we can say that \(f(x,y)\) is differentiable for all pairs \((x,y)\) in \(\mathbb{R}^2\text{,}\) or, equivalently, that \(f\) is differentiable everywhere.
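The algebra above can be spot-checked numerically. The sketch below (plain Python, at an arbitrarily chosen point) confirms that \(\Delta z - dz = dxdy + 3dy^2\text{,}\) which shrinks quadratically as \(dx = dy = h\) shrinks:

```python
def f(x, y):
    return x*y + 3*y**2

x0, y0 = 1.0, 2.0
for k in range(1, 5):
    h = 10.0**(-k)                 # dx = dy = h
    delta_z = f(x0 + h, y0 + h) - f(x0, y0)
    dz = y0*h + (x0 + 6*y0)*h      # f_x dx + f_y dy with f_x = y, f_y = x + 6y
    # The leftover error E_x dx + E_y dy = dx*dy + 3*dy**2 = 4h^2 here
    print(h, delta_z - dz)
```

Each printed error is (up to floating-point noise) exactly \(4h^2\text{,}\) one order smaller than \(h\) at every step.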

Our intuitive understanding of differentiability of functions \(y=f(x)\) of one variable was that the graph of \(f\) was “smooth.” A similar intuitive understanding of functions \(z=f(x,y)\) of two variables is that the surface defined by \(f\) is also “smooth,” not containing cusps, edges, breaks, etc. The following theorem states that differentiable functions are continuous, followed by another theorem that provides a more tangible way of determining whether a great number of functions are differentiable or not.

##### Theorem 12.4.5 Continuity and Differentiability of Multivariable Functions

Let \(z=f(x,y)\) be defined on an open set \(S\) containing \((x_0,y_0)\text{.}\) If \(f\) is differentiable at \((x_0,y_0)\text{,}\) then \(f\) is continuous at \((x_0,y_0)\text{.}\)

##### Theorem 12.4.6 Differentiability of Multivariable Functions

Let \(z=f(x,y)\) be defined on an open set \(S\) containing \((x_0,y_0)\text{.}\) If \(f_x\) and \(f_y\) are both continuous on \(S\text{,}\) then \(f\) is differentiable on \(S\text{.}\)

The theorems assure us that essentially all functions that we see in the course of our studies here are differentiable (and hence continuous) on their natural domains. There is a difference between Definition 12.4.3 and Theorem 12.4.6, though: it is possible for a function \(f\) to be differentiable even when \(f_x\) and/or \(f_y\) are *not* continuous. Such strange behavior of functions is a source of delight for many mathematicians.

When \(f_x\) and \(f_y\) exist at a point but are not continuous at that point, we need to use other methods to determine whether or not \(f\) is differentiable at that point.

For instance, consider the function
\begin{equation*}
f(x,y) = \left\{\begin{array}{cl} \frac{xy}{x^2+y^2} \amp (x,y)\neq (0,0) \\
0 \amp (x,y) = (0,0)
\end{array} \right.
\end{equation*}

We can find \(f_x(0,0)\) and \(f_y(0,0)\) using Definition 12.3.2:
\begin{align*}
f_x(0,0) & = \lim_{h\to 0} \frac{f(0+h,0) - f(0,0)}{h}\\
& = \lim_{h\to 0} \frac{0-0}{h} = 0;\\
f_y(0,0) & = \lim_{h\to 0} \frac{f(0,0+h) - f(0,0)}{h}\\
& = \lim_{h\to 0} \frac{0-0}{h} = 0.
\end{align*}

Both \(f_x\) and \(f_y\) *exist* at \((0,0)\text{,}\) but they are not continuous at \((0,0)\text{,}\) as
\begin{equation*}
f_x(x,y) = \frac{y(y^2-x^2)}{(x^2+y^2)^2} \qquad \text{ and } \qquad f_y(x,y) = \frac{x(x^2-y^2)}{(x^2+y^2)^2}
\end{equation*}
are not continuous at \((0,0)\text{.}\) (Take the limit of \(f_x\) as \((x,y)\to(0,0)\) along the \(x\)- and \(y\)-axes; they give different results.) So even though \(f_x\) and \(f_y\) *exist* at every point in the \(xy\)-plane, they are not continuous at \((0,0)\text{.}\) Theorem 12.4.6 therefore does not apply, leaving open the possibility that \(f\) is not differentiable at \((0,0)\text{.}\)

Indeed, it is not: one can show that \(f\) is not continuous at \((0,0)\) (see Example 12.2.10), and by Theorem 12.4.5, this means \(f\) is not differentiable at \((0,0)\text{.}\)
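A quick numerical look (a sketch, not part of the argument above) makes the discontinuity of \(f_x\) concrete: along the \(x\)-axis \(f_x\) is identically 0, while along the \(y\)-axis \(f_x(0,y) = 1/y\text{,}\) which blows up near the origin:

```python
def fx(x, y):
    # The formula for f_x away from the origin
    return y*(y**2 - x**2) / (x**2 + y**2)**2

# Along the x-axis f_x stays 0; along the y-axis it grows like 1/y
for t in (0.1, 0.01, 0.001):
    print(fx(t, 0.0), fx(0.0, t))
```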

# Subsection 12.4.1 Approximating with the Total Differential

By the definition, when \(f\) is differentiable, \(dz\) is a good approximation for \(\Delta z\) when \(dx\) and \(dy\) are small. We give some simple examples of how this is used here.

##### Example 12.4.7 Approximating with the total differential

Let \(z = f(x,y) = \sqrt{x}\sin(y)\text{.}\) Approximate \(f(4.1,0.8)\text{.}\)

Solution: Recognizing that \(\pi/4 \approx 0.785\approx 0.8\text{,}\) we can approximate \(f(4.1,0.8)\) using \(f(4,\pi/4)\text{.}\) We can easily compute \(f(4,\pi/4) = \sqrt{4}\sin(\pi/4) = 2\left(\frac{\sqrt{2}}2\right) = \sqrt{2}\approx 1.414\text{.}\) Without calculus, this is the best approximation we could reasonably come up with. The total differential gives us a way of adjusting this initial approximation to hopefully get a more accurate answer.

We let \(\Delta z = f(4.1,0.8) - f(4,\pi/4)\text{.}\) The total differential \(dz\) is approximately equal to \(\Delta z\text{,}\) so
\begin{equation}
f(4.1,0.8) - f(4,\pi/4) \approx dz \Rightarrow f(4.1,0.8) \approx dz + f(4,\pi/4).
\label{eq_totaldiff2}\tag{12.4.1}
\end{equation}

To find \(dz\text{,}\) we need \(f_x\) and \(f_y\text{.}\)
\begin{align*}
f_x(x,y) & = \frac{\sin(y)}{2\sqrt{x}} \Rightarrow &
f_x(4,\pi/4) & = \frac{\sin(\pi/4)}{2\sqrt{4}}\\
& & & = \frac{\sqrt{2}/2}{4} = \sqrt{2}/8.\\
f_y(x,y) & = \sqrt{x}\cos(y) \Rightarrow &
f_y(4,\pi/4) & = \sqrt{4}\frac{\sqrt{2}}2\\
& & & = \sqrt{2}.
\end{align*}

Approximating \(4.1\) with 4 gives \(dx = 0.1\text{;}\) approximating \(0.8\) with \(\pi/4\) gives \(dy \approx 0.015\text{.}\) Thus
\begin{align*}
dz(4,\pi/4) & = f_x(4,\pi/4)(0.1) + f_y(4,\pi/4)(0.015)\\
& = \frac{\sqrt{2}}8(0.1) + \sqrt{2}(0.015)\\
& \approx 0.039.
\end{align*}

Returning to Equation (12.4.1), we have
\begin{equation*}
f(4.1,0.8) \approx 0.039 + 1.414 = 1.453.
\end{equation*}

We, of course, can compute the actual value of \(f(4.1,0.8)\) with a calculator; the actual value, accurate to 5 places after the decimal, is \(1.45254\text{.}\) Obviously our approximation is quite good.
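The whole computation can be reproduced numerically. The sketch below (plain Python, using the unrounded \(dy = 0.8 - \pi/4\) rather than \(0.015\)) recovers the approximation and compares it with the true value:

```python
import math

# Nearby point where f and its partials are easy to evaluate exactly
x0, y0 = 4.0, math.pi/4
dx, dy = 4.1 - x0, 0.8 - y0            # dx = 0.1, dy is about 0.0146

def f(x, y):
    return math.sqrt(x) * math.sin(y)

fx = math.sin(y0) / (2*math.sqrt(x0))  # f_x(4, pi/4) = sqrt(2)/8
fy = math.sqrt(x0) * math.cos(y0)      # f_y(4, pi/4) = sqrt(2)

dz = fx*dx + fy*dy                     # total differential
approx = f(x0, y0) + dz
print(round(approx, 5), round(f(4.1, 0.8), 5))
```

With the unrounded \(dy\text{,}\) the approximation agrees with the true value to about five decimal places.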

The point of the previous example was *not* to develop an approximation method for known functions. After all, we can very easily compute \(f(4.1,0.8)\) using readily available technology. Rather, it serves to illustrate how well this method of approximation works, and to reinforce the following concept:

“New position = old position \(+\) amount of change,” so

“New position \(\approx\) old position + approximate amount of change.”

In the previous example, we could easily compute \(f(4,\pi/4)\) and could approximate the amount of \(z\)-change when computing \(f(4.1,0.8)\text{,}\) letting us approximate the new \(z\)-value.

It may be surprising to learn that it is not uncommon to know the values of \(f\text{,}\) \(f_x\) and \(f_y\) at a particular point without actually knowing the function \(f\text{.}\) The total differential gives a good method of approximating \(f\) at nearby points.

##### Example 12.4.8 Approximating an unknown function

Given that \(f(2,-3) = 6\text{,}\) \(f_x(2,-3) = 1.3\) and \(f_y(2,-3) = -0.6\text{,}\) approximate \(f(2.1,-3.03)\text{.}\)

Solution: The total differential approximates how much \(f\) changes from the point \((2,-3)\) to the point \((2.1,-3.03)\text{.}\) With \(dx = 0.1\) and \(dy = -0.03\text{,}\) we have
\begin{align*}
dz & = f_x(2,-3)dx + f_y(2,-3)dy\\
& = 1.3(0.1) + (-0.6)(-0.03)\\
& = 0.148.
\end{align*}

The change in \(z\) is approximately \(0.148\text{,}\) so we approximate \(f(2.1,-3.03)\approx 6.148\text{.}\)
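The arithmetic here is just "value plus partials times displacements," so a tiny helper (hypothetical, not from the text) captures the pattern for any such data:

```python
def total_diff_approx(f0, fx0, fy0, dx, dy):
    """Approximate f(x0+dx, y0+dy) given f, f_x, f_y evaluated at (x0, y0)."""
    return f0 + fx0*dx + fy0*dy

# The data of Example 12.4.8: f(2,-3)=6, f_x(2,-3)=1.3, f_y(2,-3)=-0.6
approx = total_diff_approx(6.0, 1.3, -0.6, 0.1, -0.03)
print(round(approx, 3))  # 6.148
```

Nothing about the helper requires knowing a formula for \(f\text{;}\) measured values of the function and its partials at one point suffice.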

# Subsection 12.4.3 Differentiability of Functions of Three Variables

The definition of differentiability for functions of three variables is very similar to that of functions of two variables. We again start with the total differential.

##### Definition 12.4.10 Total Differential

Let \(w=f(x,y,z)\) be continuous on an open set \(S\text{.}\) Let \(dx\text{,}\) \(dy\) and \(dz\) represent changes in \(x\text{,}\) \(y\) and \(z\text{,}\) respectively. Where the partial derivatives \(f_x\text{,}\) \(f_y\) and \(f_z\) exist, the *total differential of \(w\)* is
\begin{equation*}
dw = f_x(x,y,z)dx + f_y(x,y,z)dy+f_z(x,y,z)dz.
\end{equation*}

This differential can be a good approximation of the change in \(w\) when \(w = f(x,y,z)\) is *differentiable*.

##### Definition 12.4.11 Multivariable Differentiability

Let \(w=f(x,y,z)\) be defined on an open ball \(B\) containing \((x_0,y_0,z_0)\) where \(f_x(x_0,y_0,z_0)\text{,}\) \(f_y(x_0,y_0,z_0)\) and \(f_z(x_0,y_0,z_0)\) exist. Let \(dw\) be the total differential of \(w\) at \((x_0,y_0,z_0)\text{,}\) let \(\Delta w = f(x_0+dx,y_0+dy,z_0+dz) - f(x_0,y_0,z_0)\text{,}\) and let \(E_x\text{,}\) \(E_y\) and \(E_z\) be functions of \(dx\text{,}\) \(dy\) and \(dz\) such that
\begin{equation*}
\Delta w = dw + E_xdx + E_ydy + E_zdz.
\end{equation*}

\(f\) is *differentiable at \((x_0,y_0,z_0)\)* if, given \(\varepsilon >0\text{,}\) there is a \(\delta >0\) such that if \(\left\lVert\langle dx,dy,dz\rangle\right\rVert < \delta\text{,}\) then \(\left\lVert\langle E_x,E_y,E_z\rangle\right\rVert < \varepsilon\text{.}\)

\(f\) is *differentiable on \(B\)* if \(f\) is differentiable at every point in \(B\text{.}\) If \(f\) is differentiable on \(\mathbb{R}^3\text{,}\) we say that \(f\) is *differentiable everywhere*.

Just as before, this definition gives a rigorous statement about what it means to be differentiable that is not very intuitive. We follow it with a theorem similar to Theorem 12.4.6.

##### Theorem 12.4.12 Continuity and Differentiability of Functions of Three Variables

Let \(w=f(x,y,z)\) be defined on an open ball \(B\) containing \((x_0,y_0,z_0)\text{.}\)

If \(f\) is differentiable at \((x_0,y_0,z_0)\text{,}\) then \(f\) is continuous at \((x_0,y_0,z_0)\text{.}\)

If \(f_x\text{,}\) \(f_y\) and \(f_z\) are continuous on \(B\text{,}\) then \(f\) is differentiable on \(B\text{.}\)

This set of definition and theorem extends to functions of any number of variables. The theorem again gives us a simple way of verifying that most functions that we encounter are differentiable on their natural domains.
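To illustrate that extension, the total differential of a function of any number of variables is \(\sum_i f_{x_i}\,dx_i\text{.}\) The sketch below (assuming SymPy is installed; the function name is our own) builds it for an arbitrary variable list:

```python
import sympy as sp

def total_differential(f, variables):
    """Sum of f_{x_i} * dx_i over the given variables."""
    return sum(sp.diff(f, v) * sp.Symbol('d' + v.name) for v in variables)

x, y, z = sp.symbols('x y z')
dx, dy, dz = sp.symbols('dx dy dz')

w = x*y + z**2
dw = total_differential(w, [x, y, z])
# dw should be y*dx + x*dy + 2*z*dz
assert sp.simplify(dw - (y*dx + x*dy + 2*z*dz)) == 0
```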

This section has given us a formal definition of what it means for a function to be “differentiable,” along with a theorem that gives a more accessible understanding. The following sections return to notions prompted by our study of partial derivatives that make use of the fact that most functions we encounter are differentiable.