This article is rated List-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
The other vector identities can be found in the appendices of many textbooks. These particular ones came from "Advanced Engineering Electromagnetics" by Constantine A. Balanis.
Hello. I wonder if anyone is opposed to rewriting the identities as curl grad foo = 0, etc, i.e., using the words curl, div, and grad. As Wikipedia is a reference work, it seems safe to assume that we are going to get many readers who are capable of understanding the identities, but are only occasional users of the notation. Yes, no, maybe? Keep up the good work, 64.48.158.121 03:59, 19 May 2006 (UTC)
Another thing to think about -- it would be a good idea to look at the other pages in Category:Vector calculus and see if there are some identities which can be copied here. 64.48.158.121 04:01, 19 May 2006 (UTC)
Hello User talk:Crowsnest: You reverted these entries; why?
Occasionally I come to this page for a quick reference of vector calculus identities. Feynman notation, while fine enough in itself, is not helpful. (We get it -- you can understand the notation.) The fact is that Feynman notation has not caught on as widely as, say, Einstein notation, and so including it on the vector calculus identities page in Wikipedia is just confusing. Keep it, fine, but put it as an "oh by the way, here's the same stuff in different notation" at the end of the page. It would be very helpful to just include the formulas that most of us work with at the beginning, for when we come for a quick reference -- otherwise your page is gonna get passed right over. —Preceding unsigned comment added by Jkedmond ( talk • contribs) 15:29, 29 January 2009 (UTC)
In simpler form, using Feynman subscript notation:
where the notation ∇A means the subscripted gradient operates on only the factor A.
where the Feynman subscript notation ∇B means the subscripted gradient operates on only the factor B.
Point 1: The notation is defined and is useful - why delete it? It's only notation - use it or don't; suit yourself. So why not show how it works and give the reader a choice?
Point 2: Feynman's notation is used; for example, see: "following Feynman, introduce a partial operator ∇A in the latter identity (14), where the subscript denotes the quantity to be differentiated." p. 4
Point 3: Quote from Feynman: "Here is our new convention: we show by a subscript, what a differential operator works on; the order has no meaning." etc. The guy introduced it in his undergrad lectures - he must think it's useful. Hey, he's a Nobel Prize winner; why not trust his judgment, eh?? Reference: R. P. Feynman, Leighton & Sands (1964). The Feynman Lectures on Physics. Addison-Wesley. Vol. II, p. 27-4. ISBN 0805390499.
Point 4: They are identities, for Pete's sake. They are here for reference only - no great wisdom attaches. You can prove them yourself, there is no question of validity. It's just to save some time, or provide a little smörgåsbord for browsing when solving a problem. What possible purpose or rationale is there for these reverts??? Brews ohare ( talk) 05:35, 23 April 2008 (UTC)
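For reference, the identity that the subscript notation splits into its A-only and B-only parts can be spot-checked symbolically. A sketch using sympy; the sample fields A and B below are arbitrary smooth choices of mine, not from the article:

```python
# Symbolic check of  curl(A x B) = A div(B) - B div(A) + (B . grad)A - (A . grad)B,
# the expansion that the Feynman subscripts abbreviate.
from sympy import simplify
from sympy.vector import CoordSys3D, curl, divergence

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

def adv(A, B):
    """(A . nabla) B, computed componentwise in Cartesian coordinates."""
    ax, ay, az = A.dot(N.i), A.dot(N.j), A.dot(N.k)
    out = 0 * N.i
    for e in (N.i, N.j, N.k):
        b = B.dot(e)
        out += (ax * b.diff(x) + ay * b.diff(y) + az * b.diff(z)) * e
    return out

A = x*y*N.i + z**2*N.j + y*N.k          # arbitrary smooth sample fields
B = y*z*N.i + x*N.j + x*y*z*N.k

lhs = curl(A ^ B)                        # ^ is the cross product in sympy.vector
rhs = A * divergence(B) - B * divergence(A) + adv(B, A) - adv(A, B)
diff = lhs - rhs
print([simplify(diff.dot(e)) for e in (N.i, N.j, N.k)])   # [0, 0, 0]
```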
I think A.nabla should be made clearer. In Cartesian coordinates it's easy to understand, but in other coordinate systems one might wonder if it refers to nabla treated as a vector operator as it appears in the gradient or as it appears in the divergence, which are of course different. The answer is as it appears in the divergence, but since A.nabla is sometimes used as a notation for the directional derivative in the direction of A, one may think the gradient form should be used. I think it is definitely worth making this clearer. —Preceding unsigned comment added by 130.104.48.8 ( talk) 10:55, 24 February 2009 (UTC)
“ | … Emanuel (Analytical fluid dynamics) states on page 6: "We shall utilize a notation, first introduced by George Stokes, to define the operator D⁄Dt = ∂⁄∂t + w•∇ which is called the substantial or material derivative. This definition is independent of any specific coordinate system."
And on page 7: "The dot product on the right hand side can be interpreted as w•(∇w), which involves the dyadic ∇w, or as (w•∇)w, which does not involve a dyadic. With tensor analysis one can show that both interpretations yield the same result; the second one is usually preferred because of its greater simplicity." |
” |
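Emanuel's claim is easy to confirm in index notation: both readings contract w with the same index of ∂w. A small sympy sketch (the sample field w is my own choice, not from the quoted text):

```python
# In components, (w . grad w)_i = w_j * d(w_i)/dx_j whichever way the dot is
# grouped: w . (grad w) contracts w with the first index of the dyadic, while
# (w . grad) w applies the Jacobian J (J[i, j] = d w_i / d x_j) to w.
# Both reduce to J * w.
from sympy import Matrix, simplify, symbols

x, y, z = symbols('x y z')
X = Matrix([x, y, z])
w = Matrix([x*y, y*z**2, x + z])        # an arbitrary smooth sample field

J = w.jacobian(X)                        # J[i, j] = d w_i / d x_j
grad_w = J.T                             # dyadic (grad w)[i, j] = d w_j / d x_i

lhs = (w.T * grad_w).T                   # w . (grad w), involving the dyadic
rhs = J * w                              # (w . grad) w, no dyadic needed
print(simplify(lhs - rhs))               # Matrix([[0], [0], [0]])
```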
I think it would be good to mention how Cartesian tensors in rectangular coordinates (a condition which I didn't state in a previous edit) can be used to derive the vector calculus identities stated in this article. This method is much easier and more elegant than laboriously expanding curls of curls etc. and then grouping terms. However, detailing how this is done might expand the article a substantial amount. Any thoughts? —Preceding unsigned comment added by Breeet ( talk • contribs) 07:25, 5 May 2010 (UTC)
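For what it's worth, the index-notation route leans on the ε-δ identity ε_ijk ε_klm = δ_il δ_jm − δ_im δ_jl, which a few lines of Python can verify by brute force (a sketch of the workhorse lemma, not a derivation of any particular identity in the article):

```python
# Brute-force check of the epsilon-delta identity
#   eps_ijk eps_klm = delta_il delta_jm - delta_im delta_jl
# over indices 0, 1, 2, summing the repeated index k.
from itertools import product

def eps(i, j, k):
    # Levi-Civita symbol for indices in {0, 1, 2}
    return (i - j) * (j - k) * (k - i) // 2

def delta(i, j):
    return 1 if i == j else 0

for i, j, l, m in product(range(3), repeat=4):
    lhs = sum(eps(i, j, k) * eps(k, l, m) for k in range(3))
    rhs = delta(i, l) * delta(j, m) - delta(i, m) * delta(j, l)
    assert lhs == rhs
print("epsilon-delta identity verified")
```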
Hello ... I have extended this article by adding a few more identities. Now, I think this article is a superset of the other article! So, I would like to suggest deleting (or something equivalent) the article " List of vector identities". If anybody has an objection, please let us discuss. zinka 09:30, 5 May 2010 (UTC)
The present article Vector calculus identities describes identities involving integration and derivative operations like the curl and gradient. The article Vector algebra relations involves only algebraic relations like the dot and cross products, and no calculus. Inasmuch as the subsection of Vector calculus identities titled "Addition and Multiplication" does not involve any calculus (and so is not properly part of the subject of Vector calculus identities), and inasmuch as the material of this section is contained (and expanded upon) in Vector algebra relations, I have removed this section and replaced it with a link to Vector algebra relations. Brews ohare ( talk) 15:18, 14 September 2010 (UTC)
Should duplicate material in Vector calculus identities be removed?
The article Vector calculus identities describes identities involving integration as well as derivative operations like the curl and gradient. The article Vector algebra relations contains only algebraic relations involving the dot and cross products, and no calculus. Inasmuch as the subsection of Vector calculus identities titled "Addition and Multiplication" does not involve any calculus (and so is not properly part of the subject of Vector calculus identities), and inasmuch as the material of this section is contained (and expanded upon) in Vector algebra relations, should this section of Vector calculus identities be removed? Brews ohare ( talk) 17:19, 14 September 2010 (UTC)
Please add your comments below beginning with an asterisk *
I deleted the useless template + section in this edit [1], is it really that necessary to expand on this? An integral is an integral, if there really are widespread wacky notations in use just add as and when, instead of leaving it empty...-- F = q( E + v × B) 00:05, 2 March 2012 (UTC)
The identities curl(grad(f)) = 0 and div(curl(F)) = 0 need conditions on the scalar field f and the vector field F, namely continuous second partials in a connected open set. This article says that the identities hold unconditionally. — Preceding unsigned comment added by 69.210.137.176 ( talk) 05:06, 11 March 2012 (UTC)
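The identities themselves are easy to spot-check symbolically for a smooth field (sympy assumes whatever differentiability it needs, so this sketch does not probe the regularity conditions raised above; the fields f and F are arbitrary smooth examples of mine):

```python
# Symbolic check that curl(grad f) = 0 and div(curl F) = 0 for smooth fields.
from sympy import exp, sin
from sympy.vector import CoordSys3D, curl, divergence, gradient

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

f = sin(x * y) + exp(z)                  # an arbitrary smooth scalar field
F = x*y*N.i + y*z*N.j + z*x*N.k          # an arbitrary smooth vector field

print(curl(gradient(f)))        # 0 (the zero vector)
print(divergence(curl(F)))      # 0
```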
I have never seen Feynman subscript notation before. In this article it is introduced as if it is just a convenient abbreviation for a two-term operation in vector calculus, but then it is used only once! Perhaps I am dense but it just seems unintuitive, and it sticks out like a sore thumb in an otherwise beautifully concise page of identities written using the standard nabla notation. The introduction of Hestenes overdot notation is even more strange as it is not used at all.
To be clear, I'm not saying alternative notations/operators don't belong on Wikipedia, just that they don't fit in here. It would be nice to see an article about these, as I think there are probably many, many more variants worth mentioning as long as the intuition and the gain is made obvious. -- Nanite ( talk) 10:48, 24 November 2015 (UTC)
I know there are a few vector calculus identities involving dyadics, i.e. , but I'm not sure if they should be included in this page or the dyadics wikipedia page. Thoughts? 76.116.117.40 ( talk) 00:48, 22 February 2016 (UTC)
Can anyone provide a draft proof (or a reference) to the extraction of this relation:
from the 'standard' theorems (e.g. Stokes/Green/Gauss)? I've found a reference on Mathworld (Divergence Theorem), but it mentions using a constant vector "c" which troubles me. Aleonymous ( talk) 16:54, 6 March 2018 (UTC)
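The relation itself is elided above, but the MathWorld-style derivation with the constant vector c is harmless. Assuming the relation in question is the gradient form ∫_V ∇f dV = ∮_S f n dS (the same trick with c × F yields the curl version), it goes:

```latex
% Apply the divergence theorem to the field f\mathbf{c}, where \mathbf{c} is an
% arbitrary constant vector:
\int_V \nabla\cdot(f\mathbf{c})\,dV = \oint_S f\,\mathbf{c}\cdot\mathbf{n}\,dS .
% Since \mathbf{c} is constant, \nabla\cdot(f\mathbf{c}) = \mathbf{c}\cdot\nabla f, so
\mathbf{c}\cdot\int_V \nabla f\,dV = \mathbf{c}\cdot\oint_S f\,\mathbf{n}\,dS .
% This holds for every constant \mathbf{c} (take \mathbf{c} equal to each basis
% vector in turn), hence the vector equation
\int_V \nabla f\,dV = \oint_S f\,\mathbf{n}\,dS .
```

The constant c is only a device for turning one vector equation into a family of scalar ones, so its appearance in the MathWorld proof is nothing to be troubled by.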
It seems there is an error in this section. To be concrete, we have to know how to express everything in terms of vector components, i.e. in index notation. I will follow the Einstein summation convention for repeated indices. The original left-hand side is clear: . I agree with the first right-hand side, e.g. . But the third line seems ambiguous: what in components is ? Is it a) or b) ? Following the article Dyadics#Product of dyadic and dyadic, , so that a) is the right answer. But this is just , so that the two cross-product terms in the first right-hand side must be zero! But they are not. So, I think we want b), which means we should write .
What am I missing? Is there an "authoritative" place to go for the meaning of this kind of notation on Wikipedia? Dstrozzi ( talk) 03:29, 24 April 2018 (UTC)
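The formulas above are elided, but if (as the mention of two cross-product terms suggests) the section in question is the one for ∇(A·B), the standard form can be checked in components with sympy. A sketch, with column vectors and sample fields of my own:

```python
# Componentwise check of
#   grad(A . B) = (A . grad)B + (B . grad)A + A x curl(B) + B x curl(A),
# using the convention J_F[i, j] = d F_i / d x_j, so that (A . grad)F = J_F * A.
from sympy import Matrix, simplify, symbols

x, y, z = symbols('x y z')
X = Matrix([x, y, z])
A = Matrix([x*y, z, y**2])               # arbitrary smooth sample fields
B = Matrix([z*x, x + y, y*z])

def curl3(F):
    """Curl of a 3-component field in Cartesian coordinates."""
    return Matrix([F[2].diff(y) - F[1].diff(z),
                   F[0].diff(z) - F[2].diff(x),
                   F[1].diff(x) - F[0].diff(y)])

lhs = Matrix([A.dot(B).diff(v) for v in (x, y, z)])
rhs = (B.jacobian(X) * A + A.jacobian(X) * B
       + A.cross(curl3(B)) + B.cross(curl3(A)))
print(simplify(lhs - rhs))               # Matrix([[0], [0], [0]])
```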
I would say using the dyadic definition of Kelley (and me) is fine, unambiguous, and intuitive, in that the "the thing on the left" () stays on the left. It's a very compact expression, and I don't think it's opaque. There is no need to define "the gradient of a vector", which I'd say is even more ambiguous and non-standard than dyadics. I assume your comment about "the thing on the left becoming the thing on the right" refers to the later, definition, not the earlier dyadic one.
Either way - is there a place on Wikipedia where the math notation that pages should use is defined? I must say that I find the "transpose" in general confusing, since I don't know if a "plain" vector on this page should be understood as a row or column vector. Thinking of things in index notation, transposing a vector "makes no sense", though of course transposing a matrix does.
Also, sorry, I just realized I've been using \vec instead of \mathbf for vectors. If Wikipedia wants people to use boldface and not over-arrows for vectors, maybe they should re-define \vec.
Dstrozzi ( talk) 12:35, 28 April 2018 (UTC)
The section on the Gradient refers to "a tensor field of order n" although the meaning of "order n" for a tensor field is not explained in this article and is not explained in the linked article Tensor field, either.
The section discusses the case of a space of dimension 3 but then suddenly starts talking about tensor fields of order n. Why? 2600:1700:E1C0:F340:ED8B:9E01:6222:F4CC ( talk) 19:48, 20 January 2019 (UTC)
This page was very confusing because it uses row vectors for A and B, etc., rather than column vectors like every math textbook I have ever read. It also seems to do matrix multiplication in reverse, so to speak.
For example, there is one place where A is multiplied by a rectangular (i.e., with more than one row and more than one column) Jacobian matrix J. I had assumed A was an (n x 1) column vector, and matrix multiplication was defined in the normal way. For this to work, J would have to be a (1 x m) matrix, that is, a row vector, not a rectangular matrix.
The only way any of this makes sense is if we think of everything in reverse of what the rest of the world seems to use. Or, let A be a column vector and put it on the right of J and use the normal matrix multiplication conventions.
I cannot fix it because I came here to learn the formulas, so I do not know them well enough to rewrite them. — Preceding unsigned comment added by 146.142.1.10 ( talk) 21:55, 29 January 2020 (UTC)
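For the record, with the usual column-vector convention the multiplication goes the way the commenter expects: (A·∇)B is the Jacobian J times the column vector A, where J[i][j] = ∂B_i/∂x_j. A quick numerical numpy sketch (the field B, the point p, and the vector A are arbitrary choices of mine):

```python
# With column vectors and normal matrix multiplication, (A . nabla)B evaluated
# at a point p is J @ A, where J[i, j] = d B_i / d x_j is the Jacobian of B at p.
import numpy as np

def B(p):                       # a sample vector field B(x, y, z)
    x, y, z = p
    return np.array([x*y, y*z, z*x])

def jacobian(f, p, h=1e-6):     # forward-difference Jacobian, J[i, j] = d f_i / d x_j
    base = f(p)
    cols = []
    for j in range(3):
        dp = p.copy()
        dp[j] += h
        cols.append((f(dp) - base) / h)
    return np.stack(cols, axis=1)

p = np.array([1.0, 2.0, 3.0])
A = np.array([1.0, 0.0, 2.0])
J = jacobian(B, p)
print(J @ A)                    # (A . nabla)B at p, approximately [2, 4, 5]
```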
Has anyone else noticed the strange and inconsistent LaTeX spacing used for some of the identities? For example, the form written (\nabla {\cdot} \psi) in the operator notation section's special notation subsection, compared to the form written (\nabla \cdot \psi) in the summary section's second derivative subsection. Is there a reason for this? CoronalMassAffection ( talk) 03:02, 10 June 2021 (UTC)
This Wikipedia entry is absurdly distant from Plain English. There are dozens of articles that explain this subject so that non-experts can understand it. DDilworth ( talk) 16:33, 27 May 2024 (UTC)