What is an intuition behind the total differential of a function of two variables?
By definition, the total differential of a differentiable function of two variables equals:
$$
dz=\frac{\partial z}{\partial x}\,dx+\frac{\partial z}{\partial y}\,dy
$$
Since there are innumerable directions along which one can differentiate, this confuses me. I have two questions:
- Why does the total differential equal the sum of just two partial differentials?
- For a differentiable function, does the total differential equal the sum of the partial differentials along any two different directions?
differential-geometry derivatives
It is not limited to just two. As the definition shows, $df=\frac{\partial f}{\partial x_1}\,dx_1 + \frac{\partial f}{\partial x_2}\,dx_2 + \dots + \frac{\partial f}{\partial x_i}\,dx_i$
– onurcanbektas
Jul 31 '16 at 5:11
@Leth I mean a function of two variables. Your comment's function has $i$ variables. :)
– mayi
Jul 31 '16 at 5:13
Then I can't understand what exactly you're asking, because you said "any two" but there are just two already?
– onurcanbektas
Jul 31 '16 at 5:15
@mayi: Are you intuitively happy that each of the innumerable directions of travel in the plane can be uniquely specified by just two numbers (e.g., the Cartesian components of the velocity)?
– Andrew D. Hwang
Jul 31 '16 at 16:03
@amd's answer is particularly good, IMO. My guess is that you were mainly missing the notion and use of basis here. You don't mention it, and it is key, I think, to your confusion.
– Drew
Jul 31 '16 at 19:50
edited Jul 31 '16 at 5:30
mayi
asked Jul 31 '16 at 5:01
5 Answers
I prefer to start from a definition of the total derivative that isn’t tied to a specific coordinate system. If $f:\mathbb R^m\to\mathbb R^n$, it is differentiable at $\mathbf v\in\mathbb R^m$ if there is a linear map $L_{\mathbf v}:\mathbb R^m\to\mathbb R^n$ such that $f(\mathbf v+\mathbf h)=f(\mathbf v)+L_{\mathbf v}[\mathbf h]+o(\|\mathbf h\|)$. The linear map $L_{\mathbf v}$ is called the differential or total derivative of $f$ at $\mathbf v$, denoted by $\mathrm df_{\mathbf v}$ or simply $\mathrm df$. The idea here is that $\mathrm df_{\mathbf v}$ is the best linear approximation to the change in $f$ near $\mathbf v$, with the error of this approximation vanishing “faster” than the displacement $\mathbf h$.
Relative to some specific pair of bases for the domain and range of $f$, $\mathrm df$ can be represented by an $n\times m$ matrix. To see what this matrix is, you can treat $f$ as a vector of functions: $$f(\mathbf v)=\pmatrix{\phi_1(\mathbf v)\\\phi_2(\mathbf v)\\\vdots\\\phi_n(\mathbf v)}$$ or, written in terms of coordinates, $$\begin{align}y_1&=\phi_1(x_1,x_2,\dots,x_m)\\y_2&=\phi_2(x_1,x_2,\dots,x_m)\\&\ \vdots\\y_n&=\phi_n(x_1,x_2,\dots,x_m).\end{align}$$ The matrix of $\mathrm df$ then turns out to be the Jacobian matrix of partial derivatives $$\pmatrix{{\partial\phi_1\over\partial x_1}&{\partial\phi_1\over\partial x_2}&\cdots&{\partial\phi_1\over\partial x_m}\\{\partial\phi_2\over\partial x_1}&{\partial\phi_2\over\partial x_2}&\cdots&{\partial\phi_2\over\partial x_m}\\\vdots&\vdots&\ddots&\vdots\\{\partial\phi_n\over\partial x_1}&{\partial\phi_n\over\partial x_2}&\cdots&{\partial\phi_n\over\partial x_m}}.$$ The displacement vector $\mathbf h$ can be written as $\mathrm d\mathbf v=(\mathrm dx^1,\mathrm dx^2,\dots,\mathrm dx^m)^T$. (The $\mathrm dx^i$ here can themselves be thought of as differentials of affine coordinate functions, but that’s not an important detail for this discussion.)
For the special case of a scalar function $f:\mathbb R^m\to\mathbb R$, $\mathrm df[\mathbf h]$ becomes $${\partial f\over\partial x_1}\mathrm dx^1+{\partial f\over\partial x_2}\mathrm dx^2+\cdots+{\partial f\over\partial x_m}\mathrm dx^m.$$ Now, the partial derivative ${\partial f\over\partial x^i}$ is just the directional derivative of $f$ in the direction of the $x^i$-axis, so this formula expresses the total derivative of $f$ in terms of its directional derivatives in a particular set of directions. Notice that there was nothing special about the basis we chose for $\mathbb R^m$. If we choose a different basis, $\mathrm df$ will have the same form, but the derivatives will be taken in a different set of directions. In your case of $\mathbb R^2$, a basis consists of two vectors, so derivatives in only two directions are sufficient to completely specify the total derivative. If you understand it as a linear map from $\mathbb R^2$ to $\mathbb R$, this should come as no surprise.
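The defining property above can be checked numerically. Below is a minimal sketch with a made-up scalar field (the function and numbers are mine, not from the answer): the error $f(\mathbf v+\mathbf h)-f(\mathbf v)-\mathrm df_{\mathbf v}[\mathbf h]$ should shrink faster than $\|\mathbf h\|$ does.

```python
import math

# Made-up example field f: R^2 -> R, with partials computed by hand.
def f(x, y):
    return x * y + math.sin(x)

def df(x, y, hx, hy):
    # Total differential df[h] = (∂f/∂x) hx + (∂f/∂y) hy
    return (y + math.cos(x)) * hx + x * hy

x0, y0 = 1.0, 2.0
for t in (1e-1, 1e-2, 1e-3):
    hx, hy = 0.3 * t, -0.4 * t                       # shrinking displacement
    err = f(x0 + hx, y0 + hy) - f(x0, y0) - df(x0, y0, hx, hy)
    # The ratio |err| / |h| tends to 0 as h -> 0, which is exactly o(|h|).
    print(math.hypot(hx, hy), abs(err) / math.hypot(hx, hy))
```

The ratio falling to zero is what distinguishes the total differential from any other linear map: only the Jacobian makes the error sublinear in the displacement.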
Suppose I have a scalar field on the plane given by the formula
$$ s = x + y^2 + e^r + \sin(\theta) $$
Yes, this formula for $s$ mixes both Cartesian and polar coordinates on the plane!
Using this formula, we can compute the total differential to be
$$ \mathrm{d}s = \mathrm{d}x + 2 y \,\mathrm{d}y + e^r \,\mathrm{d}r + \cos(\theta) \,\mathrm{d}\theta $$
So don't think of it as doing some calculation with just the right number of partial derivatives; think of it as just the extension of the familiar methods of computing derivatives. Partial derivatives only enter the picture when you are specifically interested in computing the differential of a function that has more than one argument, e.g. to compute $\mathrm{d}f(s,t)$ for some function $f$ of two arguments.
Of course, we can rewrite it, e.g. using equations like $\mathrm{d}x = \mathrm{d}(r \cos(\theta)) = \cos(\theta)\, \mathrm{d}r - r \sin(\theta) \, \mathrm{d}\theta$ and $\mathrm{d}y = \sin(\theta) \,\mathrm{d}r + r \cos(\theta) \, \mathrm{d}\theta$, to get rid of the $\mathrm{d}x$ and $\mathrm{d}y$ terms and leave the result in terms of $\mathrm{d}r$ and $\mathrm{d}\theta$.
In the plane, there are only two independent differentials, so we can always rewrite as a linear combination of two of them.
In my opinion, the better way to think about things is that the total differential is the most natural form of the derivative, and the partial derivative is a linear functional on differential forms; e.g. in the standard $x$-$y$ coordinates, $\partial/\partial x$ is the mapping that sends $\mathrm{d}x \to 1$ and $\mathrm{d}y \to 0$.
So, using the notation $\partial z / \partial x$ for the action of $\partial / \partial x$ on $\mathrm{d}z$, we see that if we have an equation
$$ \mathrm{d}z = f \,\mathrm{d}x + g \,\mathrm{d}y $$
then
$$ \frac{\partial z}{\partial x} = f \cdot 1 + g \cdot 0 = f$$
$$ \frac{\partial z}{\partial y} = f \cdot 0 + g \cdot 1 = g$$
and so we'd have
$$ \mathrm{d}z = \frac{\partial z}{\partial x}\, \mathrm{d}x + \frac{\partial z}{\partial y} \,\mathrm{d} y$$
Aside: another advantage the total differential has over the partial derivative is that it's actually self-contained. In the plane, $\partial / \partial x$ has no meaning on its own; e.g. if we set $w=x+y$, then $\partial / \partial x$ means something different when expressing things as a function of $(x,y)$ than it does when expressing things as a function of $(x,w)$. (In the former, it sends $\mathrm{d}y \to 0$; in the latter it sends $\mathrm{d}w \to 0$, and thus $\mathrm{d}y \to -1$.)
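The aside can be made concrete with a small sketch (the field $z = y$ and all numbers are made up for illustration): compute "$\partial z/\partial x$" by a finite difference, once holding $y$ fixed and once holding $w = x + y$ fixed. The two results differ, showing that $\partial/\partial x$ is ambiguous until you say which other coordinate is held constant.

```python
# Field z = y on the plane (made-up example for the aside above).
def z_of_xy(x, y):
    return y

h = 1e-6
x0, y0 = 0.3, 0.5
w0 = x0 + y0

# Holding y fixed while x moves:
dzdx_y_fixed = (z_of_xy(x0 + h, y0) - z_of_xy(x0, y0)) / h
# Holding w fixed: y must move to y = w0 - x to keep w = x + y constant.
dzdx_w_fixed = (z_of_xy(x0 + h, w0 - (x0 + h)) - z_of_xy(x0, y0)) / h

print(dzdx_y_fixed)   # 0.0: in the (x, y) picture, dy -> 0
print(dzdx_w_fixed)   # about -1: in the (x, w) picture, dy -> -1
```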
Aside: the notation I favor when functions are involved is $\mathrm{d}f(s,t) = f_1(s,t)\, \mathrm{d}s + f_2(s,t)\, \mathrm{d}t$. This emphasizes that we are taking the derivative of the function with respect to one of its places, rather than anything related to the actual variable we plug in. One feature is that if we have $s=t=x$, then $f_1(x,x)$ is completely unambiguous, whereas $\partial f(x,x) / \partial x$ is not.
– Hurkyl
Jul 31 '16 at 14:26
Any infinitely small change in $(x,y)$ includes a change $dx$ in $x$ and a change $dy$ in $y$. The change in $z$ resulting from the change in $x$ is $\dfrac{\partial z}{\partial x} \, dx$, and to that we add the change in $z$ resulting from the change in $y$.
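This two-leg picture can be sketched numerically (function and numbers are made up): a diagonal step splits *exactly* into an $x$-leg followed by a $y$-leg, and the partial derivatives only enter when each leg is approximated by derivative times increment.

```python
# Made-up field z = x^2 y; split a small diagonal move into two axis moves.
def z(x, y):
    return x**2 * y

x0, y0, dx, dy = 1.0, 2.0, 1e-4, 3e-4

step_x = z(x0 + dx, y0) - z(x0, y0)              # change from moving in x
step_y = z(x0 + dx, y0 + dy) - z(x0 + dx, y0)    # then the change from y
total = z(x0 + dx, y0 + dy) - z(x0, y0)          # the diagonal change

print(total - (step_x + step_y))   # ~0: the split is exact (telescoping sum)
print(step_x, 2 * x0 * y0 * dx)    # each leg ≈ partial derivative × increment
print(step_y, x0**2 * dy)
```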
" includes a change dx in x and a change dy in y"..It include innumerable direction in two variable function. :)
– mayi
Jul 31 '16 at 6:05
@mayi : The change in location is entirely characterized by the changes in $x$ and $y$. The fact that one could choose other coordinate systems doesn't alter that.
– Michael Hardy
Jul 31 '16 at 12:56
It's not that $dz$ contains just two directional derivatives. Instead, you may think of
$$ dz = \frac{\partial z}{\partial x}\, dx + \frac{\partial z}{\partial y}\, dy$$
as representing $dz$ in the basis $\{dx, dy\}$. Let $v = (v_1, v_2)$ be a vector; then
$$dz(v) = \frac{\partial z}{\partial x}\, dx(v) + \frac{\partial z}{\partial y}\, dy(v) = \frac{\partial z}{\partial x} v_1 + \frac{\partial z}{\partial y} v_2 = D_v z,$$
where $D_v z$ is the directional derivative of $z$ along the direction $v$. So all directional derivatives of $z$ are already encoded in the formula. Can any two directions be used instead? Almost. Let $v, w$ be two independent vectors, and let $v^*, w^*$ be the dual vectors defined by
$$\tag{1} v^*(av + bw) = a, \quad w^*(av + bw) = b, \quad \forall a, b \in \mathbb R.$$
From $(1)$, we can represent $v^*, w^*$ using the basis $\{dx, dy\}$: indeed, if we write
$$\tag{2} \begin{pmatrix}v^* \\ w^* \end{pmatrix} = \begin{pmatrix}A & B\\ C & D \end{pmatrix} \begin{pmatrix}dx \\ dy \end{pmatrix},$$
one sees that
$$\tag{3} \begin{pmatrix}A & B\\ C & D \end{pmatrix} \begin{pmatrix}v_1 & w_1\\ v_2 & w_2 \end{pmatrix}= \begin{pmatrix}1 & 0\\ 0 & 1 \end{pmatrix} \ \Rightarrow\ \begin{pmatrix}A & B\\ C & D \end{pmatrix} =\begin{pmatrix}v_1 & w_1\\ v_2 & w_2 \end{pmatrix}^{-1} $$
(we need $\{v, w\}$ to be linearly independent so that the right-hand side is defined). Thus we have, using $(2)$ and $(3)$,
$$\begin{split} (D_v z)\, v^* + (D_w z)\, w^* &= \begin{pmatrix} D_v z & D_w z\end{pmatrix} \begin{pmatrix} v^* \\ w^*\end{pmatrix} \\
&= \begin{pmatrix} v_1\frac{\partial z}{\partial x} + v_2 \frac{\partial z}{\partial y} & w_1 \frac{\partial z}{\partial x} + w_2 \frac{\partial z}{\partial y}\end{pmatrix} \begin{pmatrix} v^* \\ w^*\end{pmatrix} \\
&= \begin{pmatrix} \frac{\partial z}{\partial x} & \frac{\partial z}{\partial y}\end{pmatrix} \begin{pmatrix} v_1 & w_1 \\ v_2 & w_2 \end{pmatrix}\begin{pmatrix} v^* \\ w^*\end{pmatrix}\\
&= \begin{pmatrix} \frac{\partial z}{\partial x} & \frac{\partial z}{\partial y}\end{pmatrix} \begin{pmatrix} dx \\ dy\end{pmatrix} \\
&=\frac{\partial z}{\partial x}\, dx + \frac{\partial z}{\partial y}\, dy = dz \end{split}$$
Thus it is true that you can use any two independent vectors to represent $dz$ as
$$dz = (D_v z)\, v^* + (D_w z)\, w^*.$$
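A numeric sanity check of this final identity, with a made-up field $z = x^2 y$ and arbitrary independent directions (all names and values here are mine): build $v^*, w^*$ from the inverse matrix of $(3)$ and confirm that $(D_v z)\, v^* + (D_w z)\, w^*$ has the same $\{dx, dy\}$ coefficients as $dz$.

```python
# Hand-computed gradient of the made-up field z = x^2 y.
def grad_z(x, y):
    return (2 * x * y, x**2)   # (∂z/∂x, ∂z/∂y)

x0, y0 = 1.0, 2.0
zx, zy = grad_z(x0, y0)
v, w = (1.0, 1.0), (1.0, -2.0)          # any two independent directions

D_v = zx * v[0] + zy * v[1]             # directional derivative along v
D_w = zx * w[0] + zy * w[1]             # directional derivative along w

# Rows of the inverse of the matrix with columns v, w give v*, w* on {dx, dy}.
det = v[0] * w[1] - w[0] * v[1]
v_star = (w[1] / det, -w[0] / det)      # v* = A dx + B dy
w_star = (-v[1] / det, v[0] / det)      # w* = C dx + D dy

# Reassemble (D_v z) v* + (D_w z) w* and compare with (∂z/∂x, ∂z/∂y):
dz = (D_v * v_star[0] + D_w * w_star[0],
      D_v * v_star[1] + D_w * w_star[1])
print(dz, (zx, zy))   # the two coefficient pairs agree
```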
It looks as if you forgot a plus sign.
– Michael Hardy
Jul 31 '16 at 6:11
I used to be confused about this also, until I learned that $\left(\frac{\partial z}{\partial x},\frac{\partial z}{\partial y}\right)$ is a vector-like object.
What this means is that you can rotate the coordinate axes so that they point in any direction and rewrite the total derivative in terms of the partial derivatives along those axes; under such a rotation, the partial derivatives and the displacements $dx$ and $dy$ transform in a way that leaves the total derivative unchanged.
It was Feynman's explanation in Volume II, Chapter 2, that cleared up my confusion.
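This rotation invariance is easy to check numerically. A minimal sketch, assuming a made-up field $z = x^2 + 3y$ and an arbitrary rotation angle (all values mine): rotate both the gradient components and the displacement components, and the total differential comes out the same.

```python
import math

x0, y0 = 1.0, 2.0
zx, zy = 2 * x0, 3.0          # partials of z = x^2 + 3y in the original axes
dx, dy = 1e-3, 2e-3           # a small displacement

t = 0.7                       # arbitrary rotation angle
c, s = math.cos(t), math.sin(t)
# Components of the gradient and of the displacement in the rotated frame:
zx_r, zy_r = c * zx + s * zy, -s * zx + c * zy
dx_r, dy_r = c * dx + s * dy, -s * dx + c * dy

dz = zx * dx + zy * dy        # total differential in the original axes
dz_r = zx_r * dx_r + zy_r * dy_r   # same thing computed in the rotated axes
print(dz, dz_r)               # equal: the total differential is frame-independent
```

Both quantities are the same dot product of gradient and displacement, and rotations preserve dot products, which is exactly the "vector-like" behavior the answer describes.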
Thanks for your link. It's very useful to me.
– mayi
Aug 1 '16 at 17:50
add a comment |
Your Answer
StackExchange.ifUsing("editor", function () {
return StackExchange.using("mathjaxEditing", function () {
StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
});
});
}, "mathjax-editing");
StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "69"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});
function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});
}
});
Sign up or log in
StackExchange.ready(function () {
StackExchange.helpers.onClickDraftSave('#login-link');
});
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f1876559%2fwhat-is-an-intuition-behind-total-differential-in-two-variables-function%23new-answer', 'question_page');
}
);
Post as a guest
Required, but never shown
5 Answers
5
active
oldest
votes
5 Answers
5
active
oldest
votes
active
oldest
votes
active
oldest
votes
$begingroup$
I prefer to start from a definition of the total derivative that isn’t tied to a specific coordinate system. If $f:mathbb R^mtomathbb R^n$, it is differentiable at $mathbf vinmathbb R^m$ if there is a linear map $L_{mathbf v}:mathbb R^mtomathbb R^n$ such that $f(mathbf v+mathbf h)=f(mathbf v)+L_{mathbf v}[mathbf h]+o(|mathbf h|)$. The linear map $L_{mathbf v}$ is called the differential or total derivative of $f$ at $mathbf v$, denoted by $mathrm df_{mathbf v}$ or simply $mathrm df$. The idea here is that $mathrm df_{mathbf v}$ is the best linear approximation to the change in $f$ near $mathbf v$, with the error of this approximation vanishing “faster” than the displacement $mathbf h$.
Relative to some specific pair of bases for the domain and range of $f$, $mathrm df$ can be represented by an $ntimes m$ matrix. To see what this matrix is, you can treat $f$ as a vector of functions:$$f(mathbf v)=pmatrix{phi_1(mathbf v)\phi_2(mathbf v)\vdots\phi_n(mathbf v)}$$ or, written in terms of coordinates, $$begin{align}y_1&=phi_1(x_1,x_2,dots,x_m)\y_2&=phi_2(x_1,x_2,dots,x_m)\vdots\y_n&=phi_n(x_1,x_2,dots,x_m).end{align}$$ The matrix of $mathrm df$ then turns out to be the Jacobian matrix of partial derivatives $$pmatrix{{partialphi_1overpartial x_1}&{partialphi_1overpartial x_2}&cdots&{partialphi_1overpartial x_m}\{partialphi_2overpartial x_1}&{partialphi_2overpartial x_2}&cdots&{partialphi_2overpartial x_m}\vdots&vdots&ddots&vdots\{partialphi_noverpartial x_1}&{partialphi_noverpartial x_2}&cdots&{partialphi_noverpartial x_m}}.$$ The displacement vector $mathbf h$ can be written as $mathrm dmathbf v=(mathrm dx^1,mathrm dx^2,dots,mathrm dx^m)^T$. (The $mathrm dx^i$ here can themselves be thought of as differentials of affine coordinate functions, but that’s not an important detail for this discussion.)
For the special case of a scalar function $f:mathbb R^mtomathbb R$, $mathrm df[mathbf h]$ becomes $${partial foverpartial x_1}mathrm dx^1+{partial foverpartial x_2}mathrm dx^2+cdots+{partial foverpartial x_m}mathrm dx^m.$$ Now, the partial derivative ${partial foverpartial x^i}$ is just the directional derivative of $f$ in the direction of the $x^i$-axis, so this formula expresses the total derivative of $f$ in terms of its directional derivatives in a particular set of directions. Notice that there was nothing special about the basis we chose for $mathbb R^m$. If we choose a different basis, $mathrm df$ will have the same form, but the derivatives will be taken in a different set of directions. In your case of $mathbb R^2$, a basis consists of two vectors, so derivatives in only two directions are sufficient to completely specify the total derivative. If you understand it as a linear map from $mathbb R^2$ to $mathbb R$, this should come as no surprise.
$endgroup$
add a comment |
$begingroup$
I prefer to start from a definition of the total derivative that isn’t tied to a specific coordinate system. If $f:mathbb R^mtomathbb R^n$, it is differentiable at $mathbf vinmathbb R^m$ if there is a linear map $L_{mathbf v}:mathbb R^mtomathbb R^n$ such that $f(mathbf v+mathbf h)=f(mathbf v)+L_{mathbf v}[mathbf h]+o(|mathbf h|)$. The linear map $L_{mathbf v}$ is called the differential or total derivative of $f$ at $mathbf v$, denoted by $mathrm df_{mathbf v}$ or simply $mathrm df$. The idea here is that $mathrm df_{mathbf v}$ is the best linear approximation to the change in $f$ near $mathbf v$, with the error of this approximation vanishing “faster” than the displacement $mathbf h$.
Relative to some specific pair of bases for the domain and range of $f$, $mathrm df$ can be represented by an $ntimes m$ matrix. To see what this matrix is, you can treat $f$ as a vector of functions:$$f(mathbf v)=pmatrix{phi_1(mathbf v)\phi_2(mathbf v)\vdots\phi_n(mathbf v)}$$ or, written in terms of coordinates, $$begin{align}y_1&=phi_1(x_1,x_2,dots,x_m)\y_2&=phi_2(x_1,x_2,dots,x_m)\vdots\y_n&=phi_n(x_1,x_2,dots,x_m).end{align}$$ The matrix of $mathrm df$ then turns out to be the Jacobian matrix of partial derivatives $$pmatrix{{partialphi_1overpartial x_1}&{partialphi_1overpartial x_2}&cdots&{partialphi_1overpartial x_m}\{partialphi_2overpartial x_1}&{partialphi_2overpartial x_2}&cdots&{partialphi_2overpartial x_m}\vdots&vdots&ddots&vdots\{partialphi_noverpartial x_1}&{partialphi_noverpartial x_2}&cdots&{partialphi_noverpartial x_m}}.$$ The displacement vector $mathbf h$ can be written as $mathrm dmathbf v=(mathrm dx^1,mathrm dx^2,dots,mathrm dx^m)^T$. (The $mathrm dx^i$ here can themselves be thought of as differentials of affine coordinate functions, but that’s not an important detail for this discussion.)
For the special case of a scalar function $f:mathbb R^mtomathbb R$, $mathrm df[mathbf h]$ becomes $${partial foverpartial x_1}mathrm dx^1+{partial foverpartial x_2}mathrm dx^2+cdots+{partial foverpartial x_m}mathrm dx^m.$$ Now, the partial derivative ${partial foverpartial x^i}$ is just the directional derivative of $f$ in the direction of the $x^i$-axis, so this formula expresses the total derivative of $f$ in terms of its directional derivatives in a particular set of directions. Notice that there was nothing special about the basis we chose for $mathbb R^m$. If we choose a different basis, $mathrm df$ will have the same form, but the derivatives will be taken in a different set of directions. In your case of $mathbb R^2$, a basis consists of two vectors, so derivatives in only two directions are sufficient to completely specify the total derivative. If you understand it as a linear map from $mathbb R^2$ to $mathbb R$, this should come as no surprise.
$endgroup$
add a comment |
$begingroup$
I prefer to start from a definition of the total derivative that isn’t tied to a specific coordinate system. If $f:mathbb R^mtomathbb R^n$, it is differentiable at $mathbf vinmathbb R^m$ if there is a linear map $L_{mathbf v}:mathbb R^mtomathbb R^n$ such that $f(mathbf v+mathbf h)=f(mathbf v)+L_{mathbf v}[mathbf h]+o(|mathbf h|)$. The linear map $L_{mathbf v}$ is called the differential or total derivative of $f$ at $mathbf v$, denoted by $mathrm df_{mathbf v}$ or simply $mathrm df$. The idea here is that $mathrm df_{mathbf v}$ is the best linear approximation to the change in $f$ near $mathbf v$, with the error of this approximation vanishing “faster” than the displacement $mathbf h$.
Relative to some specific pair of bases for the domain and range of $f$, $mathrm df$ can be represented by an $ntimes m$ matrix. To see what this matrix is, you can treat $f$ as a vector of functions:$$f(mathbf v)=pmatrix{phi_1(mathbf v)\phi_2(mathbf v)\vdots\phi_n(mathbf v)}$$ or, written in terms of coordinates, $$begin{align}y_1&=phi_1(x_1,x_2,dots,x_m)\y_2&=phi_2(x_1,x_2,dots,x_m)\vdots\y_n&=phi_n(x_1,x_2,dots,x_m).end{align}$$ The matrix of $mathrm df$ then turns out to be the Jacobian matrix of partial derivatives $$pmatrix{{partialphi_1overpartial x_1}&{partialphi_1overpartial x_2}&cdots&{partialphi_1overpartial x_m}\{partialphi_2overpartial x_1}&{partialphi_2overpartial x_2}&cdots&{partialphi_2overpartial x_m}\vdots&vdots&ddots&vdots\{partialphi_noverpartial x_1}&{partialphi_noverpartial x_2}&cdots&{partialphi_noverpartial x_m}}.$$ The displacement vector $mathbf h$ can be written as $mathrm dmathbf v=(mathrm dx^1,mathrm dx^2,dots,mathrm dx^m)^T$. (The $mathrm dx^i$ here can themselves be thought of as differentials of affine coordinate functions, but that’s not an important detail for this discussion.)
For the special case of a scalar function $f:mathbb R^mtomathbb R$, $mathrm df[mathbf h]$ becomes $${partial foverpartial x_1}mathrm dx^1+{partial foverpartial x_2}mathrm dx^2+cdots+{partial foverpartial x_m}mathrm dx^m.$$ Now, the partial derivative ${partial foverpartial x^i}$ is just the directional derivative of $f$ in the direction of the $x^i$-axis, so this formula expresses the total derivative of $f$ in terms of its directional derivatives in a particular set of directions. Notice that there was nothing special about the basis we chose for $mathbb R^m$. If we choose a different basis, $mathrm df$ will have the same form, but the derivatives will be taken in a different set of directions. In your case of $mathbb R^2$, a basis consists of two vectors, so derivatives in only two directions are sufficient to completely specify the total derivative. If you understand it as a linear map from $mathbb R^2$ to $mathbb R$, this should come as no surprise.
$endgroup$
I prefer to start from a definition of the total derivative that isn’t tied to a specific coordinate system. If $f:mathbb R^mtomathbb R^n$, it is differentiable at $mathbf vinmathbb R^m$ if there is a linear map $L_{mathbf v}:mathbb R^mtomathbb R^n$ such that $f(mathbf v+mathbf h)=f(mathbf v)+L_{mathbf v}[mathbf h]+o(|mathbf h|)$. The linear map $L_{mathbf v}$ is called the differential or total derivative of $f$ at $mathbf v$, denoted by $mathrm df_{mathbf v}$ or simply $mathrm df$. The idea here is that $mathrm df_{mathbf v}$ is the best linear approximation to the change in $f$ near $mathbf v$, with the error of this approximation vanishing “faster” than the displacement $mathbf h$.
Relative to some specific pair of bases for the domain and range of $f$, $mathrm df$ can be represented by an $ntimes m$ matrix. To see what this matrix is, you can treat $f$ as a vector of functions:$$f(mathbf v)=pmatrix{phi_1(mathbf v)\phi_2(mathbf v)\vdots\phi_n(mathbf v)}$$ or, written in terms of coordinates, $$begin{align}y_1&=phi_1(x_1,x_2,dots,x_m)\y_2&=phi_2(x_1,x_2,dots,x_m)\vdots\y_n&=phi_n(x_1,x_2,dots,x_m).end{align}$$ The matrix of $mathrm df$ then turns out to be the Jacobian matrix of partial derivatives $$pmatrix{{partialphi_1overpartial x_1}&{partialphi_1overpartial x_2}&cdots&{partialphi_1overpartial x_m}\{partialphi_2overpartial x_1}&{partialphi_2overpartial x_2}&cdots&{partialphi_2overpartial x_m}\vdots&vdots&ddots&vdots\{partialphi_noverpartial x_1}&{partialphi_noverpartial x_2}&cdots&{partialphi_noverpartial x_m}}.$$ The displacement vector $mathbf h$ can be written as $mathrm dmathbf v=(mathrm dx^1,mathrm dx^2,dots,mathrm dx^m)^T$. (The $mathrm dx^i$ here can themselves be thought of as differentials of affine coordinate functions, but that’s not an important detail for this discussion.)
For the special case of a scalar function $f:mathbb R^mtomathbb R$, $mathrm df[mathbf h]$ becomes $${partial foverpartial x_1}mathrm dx^1+{partial foverpartial x_2}mathrm dx^2+cdots+{partial foverpartial x_m}mathrm dx^m.$$ Now, the partial derivative ${partial foverpartial x^i}$ is just the directional derivative of $f$ in the direction of the $x^i$-axis, so this formula expresses the total derivative of $f$ in terms of its directional derivatives in a particular set of directions. Notice that there was nothing special about the basis we chose for $mathbb R^m$. If we choose a different basis, $mathrm df$ will have the same form, but the derivatives will be taken in a different set of directions. In your case of $mathbb R^2$, a basis consists of two vectors, so derivatives in only two directions are sufficient to completely specify the total derivative. If you understand it as a linear map from $mathbb R^2$ to $mathbb R$, this should come as no surprise.
edited Jul 31 '16 at 9:13
answered Jul 31 '16 at 6:32
amdamd
29.5k21050
29.5k21050
add a comment |
add a comment |
$begingroup$
Suppose I have a scalar field on the plane given by the formula
$$ s = x + y^2 + e^r + sin(theta) $$
Yes, this formula for $s$ mixes both cartesian and polar coordinates on the plane!
Using this formula, we can compute the total differential to be
$$ mathrm{d}s = mathrm{d}x + 2 y ,mathrm{d}y + e^r ,mathrm{d}r + cos(theta) ,mathrm{d}theta $$
So don't think of it as doing some calculation with just the right number of partial derivatives — think of it as just the extension of the familiar methods of computing derivatives. Partial derivatives only enter the picture when you are specifically interested in computing the differential of a function that has more than one argument; e.g. to compute $mathrm{d}f(s,t)$ for some function $f$ of two arguments.
Of course, we can rewrite; e.g. it using equations like $mathrm{d}x = mathrm{d}(r cos(theta)) = cos(theta), mathrm{d}r - r sin(theta) , mathrm{d}theta$ and $mathrm{d}y = sin(theta) ,mathrm{d}r + r cos(theta) , mathrm{d}theta$ to get rid of the $mathrm{d}x$ and $mathrm{d}y$ terms and leaving the result in terms of $mathrm{d}r$ and $mathrm{d}theta$.
In the plane, there are only two independent differentials, so we can always rewrite as a linear combination of two of them.
In my opinion, the better way to think about things is that the total differential is the most natural form of the derivative, and the partial derivative is a linear functional on differential forms; e.g. in the standard $x$-$y$ coordinates, $\partial/\partial x$ is the mapping that sends $\mathrm{d}x \to 1$ and $\mathrm{d}y \to 0$.
So, using the notation $\partial z / \partial x$ for the action of $\partial / \partial x$ on $\mathrm{d}z$, we see that if we have an equation
$$ \mathrm{d}z = f \,\mathrm{d}x + g \,\mathrm{d}y $$
then
$$ \frac{\partial z}{\partial x} = f \cdot 1 + g \cdot 0 = f$$
$$ \frac{\partial z}{\partial y} = f \cdot 0 + g \cdot 1 = g$$
and so we'd have
$$ \mathrm{d}z = \frac{\partial z}{\partial x}\, \mathrm{d}x + \frac{\partial z}{\partial y} \,\mathrm{d}y$$
Aside: another advantage the total differential has over the partial derivative is that it's actually self-contained. In the plane, $\partial / \partial x$ has no meaning on its own; e.g. if we set $w=x+y$, then $\partial / \partial x$ means something different when expressing things as a function of $(x,y)$ than it does when expressing things as a function of $(x,w)$. (In the former, it sends $\mathrm{d}y \to 0$; in the latter, it sends $\mathrm{d}w \to 0$, and thus $\mathrm{d}y \to -1$.)
$endgroup$
1
$begingroup$
Aside: the notation I favor when functions are involved is $\mathrm{d}f(s,t) = f_1(s,t)\, \mathrm{d}s + f_2(s,t)\, \mathrm{d}t$. This emphasizes that we are taking the derivative of the function with respect to one of its places, rather than anything related to the actual variable we plug in. One feature is that if we have $s=t=x$, then $f_1(x,x)$ is completely unambiguous, whereas $\partial f(x,x) / \partial x$ is not.
$endgroup$
– Hurkyl
Jul 31 '16 at 14:26
add a comment |
edited Jul 31 '16 at 19:25
Michael Hardy
answered Jul 31 '16 at 14:16
Hurkyl
111k9117261
$begingroup$
Any infinitely small change in $(x,y)$ consists of a change $dx$ in $x$ and a change $dy$ in $y$. The change in $z$ resulting from the change in $x$ is $\dfrac{\partial z}{\partial x} \, dx$, and to that we add the change in $z$ resulting from the change in $y$, namely $\dfrac{\partial z}{\partial y} \, dy$.
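Numerically, this additivity shows up as the first-order accuracy of $\dfrac{\partial z}{\partial x}\,dx+\dfrac{\partial z}{\partial y}\,dy$: the error shrinks like the square of the step. A small sketch (my own example function, not from the answer):

```python
import math

def z(x, y):
    # example surface of my own choosing
    return math.exp(x) * math.cos(y)

x0, y0 = 0.5, 1.0
zx = math.exp(x0) * math.cos(y0)   # exact dz/dx at (x0, y0)
zy = -math.exp(x0) * math.sin(y0)  # exact dz/dy at (x0, y0)

for h in (1e-1, 1e-2, 1e-3):
    dx, dy = h, 2 * h  # a small step that changes both coordinates at once
    actual = z(x0 + dx, y0 + dy) - z(x0, y0)
    linear = zx * dx + zy * dy     # sum of the two partial contributions
    # the error of the linear approximation is second order in h
    assert abs(actual - linear) < 10 * h**2
```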
$endgroup$
$begingroup$
" includes a change dx in x and a change dy in y" .. But it includes innumerable directions of change in a two-variable function. :)
$endgroup$
– mayi
Jul 31 '16 at 6:05
$begingroup$
@mayi : The change in location is entirely characterized by the changes in $x$ and $y$. The fact that one could choose other coordinate systems doesn't alter that.
$endgroup$
– Michael Hardy
Jul 31 '16 at 12:56
add a comment |
answered Jul 31 '16 at 5:33
Michael Hardy
$begingroup$
It's not that $dz$ contains just two directional derivatives. Instead, you may think of
$$ dz = \frac{\partial z}{\partial x}\, dx + \frac{\partial z}{\partial y}\, dy$$
as representing $dz$ under the basis $\{dx, dy\}$. Let $v = (v_1, v_2)$ be a vector; then
$$dz(v) = \frac{\partial z}{\partial x}\, dx(v) + \frac{\partial z}{\partial y}\, dy(v) = \frac{\partial z}{\partial x}\, v_1 + \frac{\partial z}{\partial y}\, v_2 = D_v z,$$
where $D_v z$ is the directional derivative of $z$ along the direction $v$. So all directional derivatives of $z$ are encoded in the formula already.
As for your second question: almost. Let $v, w$ be two independent vectors, and let $v^*, w^*$ be the dual vectors defined by
$$\tag{1} v^*(av + bw) = a, \quad w^*(av + bw) = b, \quad \forall a, b \in \mathbb R.$$
From $(1)$, we can represent $v^*, w^*$ using the basis $\{dx, dy\}$: indeed, if we write
$$\tag{2} \begin{pmatrix}v^* \\ w^* \end{pmatrix} = \begin{pmatrix}A & B\\ C & D \end{pmatrix} \begin{pmatrix}dx \\ dy \end{pmatrix},$$
one sees that
$$\tag{3} \begin{pmatrix}A & B\\ C & D \end{pmatrix} \begin{pmatrix}v_1 & w_1\\ v_2 & w_2 \end{pmatrix} = \begin{pmatrix}1 & 0\\ 0 & 1 \end{pmatrix} \Rightarrow \begin{pmatrix}A & B\\ C & D \end{pmatrix} = \begin{pmatrix}v_1 & w_1\\ v_2 & w_2 \end{pmatrix}^{-1}$$
(we need $\{v, w\}$ to be linearly independent so that the right-hand side is defined). Thus we have, using $(2)$ and $(3)$,
$$\begin{split} (D_v z)\, v^* + (D_w z)\, w^* &= \begin{pmatrix} D_v z & D_w z\end{pmatrix} \begin{pmatrix} v^* \\ w^*\end{pmatrix} \\
&= \begin{pmatrix} v_1\frac{\partial z}{\partial x} + v_2 \frac{\partial z}{\partial y} & w_1 \frac{\partial z}{\partial x} + w_2 \frac{\partial z}{\partial y}\end{pmatrix} \begin{pmatrix} v^* \\ w^*\end{pmatrix} \\
&= \begin{pmatrix} \frac{\partial z}{\partial x} & \frac{\partial z}{\partial y}\end{pmatrix} \begin{pmatrix} v_1 & w_1 \\ v_2 & w_2 \end{pmatrix}\begin{pmatrix} v^* \\ w^*\end{pmatrix}\\
&= \begin{pmatrix} \frac{\partial z}{\partial x} & \frac{\partial z}{\partial y}\end{pmatrix} \begin{pmatrix} dx \\ dy\end{pmatrix} \\
&= \frac{\partial z}{\partial x}\, dx + \frac{\partial z}{\partial y}\, dy = dz. \end{split}$$
Thus it is true that you can use any two independent vectors to represent $dz$ as
$$dz = (D_v z)\, v^* + (D_w z)\, w^*.$$
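This final formula is easy to check numerically (my own sketch; the function $z$ and the vectors $v, w$ are arbitrary choices): build $v^*, w^*$ from the $2\times 2$ matrix inverse as in $(3)$, then confirm that $(D_v z)\,v^* + (D_w z)\,w^*$ reproduces the $\{dx, dy\}$ coefficients of $dz$.

```python
def grad_z(x, y):
    # z = x^2 + x*y, so dz = (2x + y) dx + x dy  (example of my choosing)
    return (2 * x + y, x)

def dual_basis(v, w):
    # Rows of the inverse of [[v1, w1], [v2, w2]] give v*, w* in the {dx, dy}
    # basis, exactly as in equation (3) of the answer.
    det = v[0] * w[1] - v[1] * w[0]
    assert det != 0, "v, w must be linearly independent"
    v_star = ( w[1] / det, -w[0] / det)
    w_star = (-v[1] / det,  v[0] / det)
    return v_star, w_star

x0, y0 = 1.0, 2.0
gx, gy = grad_z(x0, y0)            # coefficients of dx, dy in dz
v, w = (1.0, 1.0), (1.0, -2.0)     # any two independent directions
Dv = gx * v[0] + gy * v[1]         # directional derivative along v
Dw = gx * w[0] + gy * w[1]         # directional derivative along w
v_star, w_star = dual_basis(v, w)
# (D_v z) v* + (D_w z) w* reproduces the {dx, dy} coefficients of dz
assert abs(Dv * v_star[0] + Dw * w_star[0] - gx) < 1e-12
assert abs(Dv * v_star[1] + Dw * w_star[1] - gy) < 1e-12
```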
$endgroup$
$begingroup$
It looks as if you forgot a plus sign.
$endgroup$
– Michael Hardy
Jul 31 '16 at 6:11
add a comment |
edited Jul 31 '16 at 8:10
answered Jul 31 '16 at 5:54
user99914
$begingroup$
I used to be confused about this also, until I learned that $\left(\frac{\partial z}{\partial x},\frac{\partial z}{\partial y}\right)$ is a vector-like object.
What this means is that you can rotate the coordinate axes so that they point in any direction and rewrite the total derivative in terms of the partial derivatives along those axes; under such a rotation, the partial derivatives and the displacements $dx$ and $dy$ transform in such a way that the total derivative is left unchanged.
It was Feynman's explanation in Volume II, Chapter 2 that cleared up my confusion.
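That invariance is easy to confirm numerically (a sketch of mine, not Feynman's; the field, point, angle, and displacement are all arbitrary choices): rotate both the gradient components and the displacement components by the same angle and check that their pairing is unchanged.

```python
import math

def f(x, y):
    # example scalar field of my own choosing
    return x**3 - 2 * x * y + y**2

def grad(f, x, y, h=1e-6):
    # central-difference gradient
    return ((f(x + h, y) - f(x - h, y)) / (2 * h),
            (f(x, y + h) - f(x, y - h)) / (2 * h))

x0, y0 = 0.8, -0.4
gx, gy = grad(f, x0, y0)
dx, dy = 0.3, -0.1                 # an arbitrary displacement

a = math.radians(37)               # rotate the axes by an arbitrary angle
c, s = math.cos(a), math.sin(a)
# components of the same gradient and displacement in the rotated axes
gX, gY = c * gx + s * gy, -s * gx + c * gy
dX, dY = c * dx + s * dy, -s * dx + c * dy

# the pairing grad . displacement (the total differential) is rotation-invariant
assert abs((gx * dx + gy * dy) - (gX * dX + gY * dY)) < 1e-9
```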
$endgroup$
$begingroup$
Thanks for your link. It's very useful to me.
$endgroup$
– mayi
Aug 1 '16 at 17:50
add a comment |
answered Aug 1 '16 at 17:25
Tomek Dobrzynski
392
1
$begingroup$
@amd's answer is particularly good, IMO. My guess is that you were mainly missing the notion and use of basis here. You don't mention it, and it is key, I think, to your confusion.
$endgroup$
– Drew
Jul 31 '16 at 19:50