This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Archive 1 | Archive 2
In the image the dotted line is at right angles to the vector B isn't it? Shouldn't the image indicate this? stib ( talk) 01:55, 19 February 2008 (UTC)
I agree, the right angle sign should be appended to the image. I just learned about Dot Products, and that would have saved me a couple HOURS(!!) of reading. I didn't understand the image at all for a few days and that sign would have completely solved it for me. Now, granted, I'm rather ignorant of most maths, but surely this page isn't only for the highly literate mathematicians. If I should add the bracket myself, let me know, and let me know how. Thanks.
Limited_Atonement
I edited the image to include the parallel indicators and |A||B|, but I don't know how to replace the existing image. I don't think I can upload to Wikipedia because my account isn't confirmed? How do I change that?
Limited_Atonement
I successfully changed the image to indicate the right angle and took out the extra in |A||B|. Limited_Atonement Thursday, December 09, 2010 6:05:45 PM —Preceding unsigned comment added by 66.244.68.201 ( talk) 18:06, 9 December 2010 (UTC)
I have a small clarification to ask about the article on Dot product (mathematics). It is stated that the dot product is the matrix multiplication of the transpose of a multiplied by b, but in the example the transpose of b is taken instead.
Can you please clarify this? 195.229.236.247 ( talk) 09:04, 17 March 2008 (UTC) Aravind
I don't think we need to distinguish between row and column vectors here. That distinction only needs to be made when considering vectors as degenerate forms of matrices. If vectors were always considered as degenerate matrices, we'd simply do matrix multiplication and forget about defining it as a vector operation.
Instead, we have separate operators (the dot · or the cross ×) so that the vectors can remain vectors (as in vector space), not specifically row or column vectors. The result is equivalent to treating the first vector as a row vector and the second as a column vector and applying ordinary matrix multiplication, although matrix multiplication itself is not commutative.
I believe the generalization (neither vector considered as a row nor as a column) makes the dot product truly commutative. (Additional operations such as transposition should not be artificially required by forcing a row or column orientation on your model.)
_-T 德 —Preceding unsigned comment added by 129.115.13.107 ( talk) 16:29, 21 March 2008 (UTC)
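A small sketch of the point above (an illustration using NumPy, not part of the original discussion): the dot product of plain vectors is commutative, while the corresponding row/column matrix products depend on order and even produce differently shaped results.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# As plain vectors, the dot product is commutative.
print(np.dot(a, b) == np.dot(b, a))  # True

# As 3x1 column matrices, order matters: a^T b is a 1x1 matrix,
# while a b^T is the 3x3 outer product.
ac = a.reshape(3, 1)
bc = b.reshape(3, 1)
print((ac.T @ bc).shape)  # (1, 1)
print((ac @ bc.T).shape)  # (3, 3)
```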
Is the dot product valid for "non-orthonormal vector spaces"? For readers who don't know the exact definition of the Euclidean space (i.e. most readers, in my opinion), the introduction does not answer this question explicitly. It just states that the dot product "is the standard inner product of the Euclidean space". Readers appreciate a concise and generic introduction, but they are not supposed to know whether the Euclidean space is required to be orthonormal or not. The definition section creates or accentuates this doubt. Here is the first sentence of that section:
The dot product of two vectors (from an orthonormal vector space) a = [a1, a2, … , an] and b = [b1, b2, … , bn] is by definition: a · b = a1b1 + a2b2 + … + anbn
The phrase within parentheses puzzles me and may also confuse other readers. Why did the author use parentheses?
First hypothesis. Did the author mean that the dot product is only defined in orthonormal vector spaces, and that only its generalization, the inner product, can be used for non-orthonormal vector spaces? In this case, the parentheses might indicate that the writer regards the information as redundant. However, it is not redundant for all readers, because the information is not given explicitly in the previous paragraphs.
Opposite hypothesis. Did the author mean, on the contrary, that a more general and complex definition of the dot product exists (coinciding with the definition of the inner product given below), valid for non-orthonormal vector spaces? Did the author choose to give first a commonly used simplified version of that definition? In this case, the definition section should be completed with the warning that a more general definition exists in the literature, and it coincides with the definition of the inner product.
What is the correct hypothesis? Do you agree that the phrase is an important part of the definition and should be emphasized, rather than confined within parentheses? Paolo.dL ( talk) 12:57, 19 April 2008 (UTC)
I believe that it is important to warn the reader that the definition of the dot product is not valid for non-orthogonal Euclidean vector spaces. "Not valid" means that it does not compute what it is meant to compute (a number which can be used to define length and angle). I agree with Ezrakilty that this "validity" is related to the geometrical interpretation of the dot product, but IMO the geometrical interpretation is inseparable from the mathematical definition.
Moreover, if we accept Ezrakilty's decision to remove from the definition section the sentence about restriction to orthogonal vector spaces, we should remove it everywhere in the article, including the beginning of section "Conversion to matrix multiplication". Also, notice that at the end of the section "Generalization", the inner product formula is shown to simplify into the dot product formula when the basis set is orthogonal. This information is precious in this context.
Since I am not sure what the most common or most appropriate definition of the dot product is, I undid Ezrakilty's edit, just to restore consistency between sections. I hope that the discussion on this topic will continue. Paolo.dL ( talk) 14:44, 13 May 2008 (UTC)
Your point is reasonable. Indeed, in physics textbooks, where I learned the definition of the dot product, the operation is described in R3 and is not even extended to n-dimensional spaces.
I guess that you accept my "first hypothesis". I still have a doubt. Can we exclude that some authoritative author might have endorsed, in the literature, my "opposite hypothesis" (see previous section)? In other words, does any mathematician call dot product the inner product for non-orthogonal vector spaces? I hope not. And I hope that some expert mathematician will give a final answer to this question either by posting a comment here or by editing the article. Paolo.dL ( talk) 15:35, 15 May 2008 (UTC)
We still don't have an answer, but in the meantime I removed the ambiguous sentence. The last sentence of the definition section presents non-ambiguously a similar concept, hedged by the adverb "typically". Even if my "opposite hypothesis" were true (and I hope it's not), the ambiguous sentence would not be a good way to inform the readers. Paolo.dL ( talk) 09:00, 30 May 2008 (UTC)
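To make the distinction discussed in this thread concrete (a sketch added for illustration, not part of the original article text): in a basis whose vectors form the columns of a matrix B, the inner product of coordinate vectors x and y is x^T G y with Gram matrix G = B^T B; when the basis is orthonormal, G is the identity and the formula reduces to the plain component-wise dot product.

```python
import numpy as np

def inner(x, y, B):
    """Inner product of coordinate vectors x, y w.r.t. the basis in B's columns."""
    G = B.T @ B  # Gram matrix of the basis vectors
    return x @ G @ y

x = np.array([1.0, 2.0])
y = np.array([3.0, 4.0])

# Non-orthonormal basis: the component formula x1*y1 + x2*y2 is wrong here.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
print(inner(x, y, B))  # 29.0, which differs from x @ y = 11.0

# Orthonormal basis: G = I, so the inner product IS the dot product.
I = np.eye(2)
print(inner(x, y, I) == x @ y)  # True
```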
In the section "Properties," there's a spurious comment: "Decomposing vectors is often useful for conveniently adding them, e.g. in the calculation of net force in mechanics." Decomposition of vectors isn't defined anywhere in the article; but this is an interesting topic and perhaps it deserves treatment. Would anyone like to move this line to a new section and expand on the idea? Ezrakilty ( talk) 18:46, 12 May 2008 (UTC)
Quoting: "I think there could be two articles, the inner product space one giving the abstract algebraic formulation, and a more concrete geometric one focusing just on R^n or C^n, aimed more at people who might just use it for calculus or physics classes."
I STRONGLY agree with the proposed separation. PLEASE do it (and a lot of students will be grateful).
Armando Simoes, student —Preceding unsigned comment added by 83.132.150.145 ( talk) 16:37, 26 November 2008 (UTC)
I was wondering if the section dealing with the dot product as matrix multiplication should be extended to also cover the complex case in terms of the transpose conjugate:
where
is the transpose conjugate. The ugly thing about this matrix notation is that the order of the vectors is reversed. It is therefore something you have to be careful about, and I think it could be highlighted explicitly.
The present real number formula
could tempt casual readers looking for the complex relation (which they won't find) into generalizing this to the complex case, which is wrong.
Perhaps, if the real number example was extended to
the casual reader looking for the complex version would think twice before generalizing by seeing the symmetry in the two equivalent expressions.
Actually, I would personally have preferred the whole article to use the generalized complex version for defining the dot product and then treat the real case as a (very important) special case. I do see the problems with this, as it may be a turn-off for the lay reader, who may be intimidated by the notion of complex numbers. An alternative idea could be to split the article in two: this one dealing exclusively with the dot product of real vectors, and another article, e.g. dot product of complex vectors, dealing with the generalized case. There should then be a reference to the generalized case from the preamble of this article.
Alternatively, the more abstract inner product article could be extended with a more thorough treatment of the dot product of complex vectors, including the matrix notation.
-- Slaunger ( talk) 10:25, 11 February 2009 (UTC)
I think it is important to have a thread running through the article that uses just real numbers, so that readers unfamiliar with complex numbers can still appreciate the dot product. Also, I think it's good to keep the more general definitions within this article, so that casual readers can have their interest piqued and perhaps extend their grasp.
All that said, when we can make formulas for the real case look like their complex friends, we should do so, as with your example
In this case, I think, we are not adding any burden for readers oblivious of complex numbers, yet as you note the temptation to generalize wrongly would be reduced.
-- Ezrakilty ( talk) 12:52, 11 February 2009 (UTC)
In [1]: a = array([1, 1j])               # Vector notation
In [2]: b = array([1, -1j])              # Also as a vector
In [3]: a_dot_b = dot(a, b.conjugate())  # Explicit conjugation needed in SciPy
In [4]: print a_dot_b
0j
In [5]: a = mat('[1; 1j]')   # As column matrix
In [6]: b = mat('[1; -1j]')  # Also as a column matrix
In [7]: b_star_a = b.H * a
In [8]: print b_star_a
[[ 0.+0.j]]
In [9]: a_dot_b = b_star_a[0, 0]  # Pick the element from the matrix
In [10]: print a_dot_b
0j
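For what it's worth, here is a modern NumPy equivalent of the session above (a sketch added for illustration): `np.vdot` conjugates its first argument, so the argument order encodes which vector is conjugated, which is exactly the order-sensitivity discussed in this thread.

```python
import numpy as np

a = np.array([1, 1j])
b = np.array([1, -1j])

# vdot(a, b) = sum(conj(a) * b); here the two vectors are orthogonal.
print(np.vdot(a, b))  # 0j

# For complex vectors the order matters (the results are conjugates):
u = np.array([1j, 0])
v = np.array([1, 0])
print(np.vdot(u, v))  # conj(1j)*1 = -1j
print(np.vdot(v, u))  # conj(1)*1j = 1j
```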
Perhaps it is just me, but despite refreshing, the image described below does not appear. Instead, its alt text is shown. I am not familiar with LaTeX or whatever is used to generate these images, so I apologize that I cannot fix it myself.
\mathbf{a} \cdot \mathbf{b} = \mathbf{a}^T \mathbf{b} \,
76.236.188.70 ( talk) 02:42, 21 February 2009 (UTC)
In "Quick Introduction to Tensor Analysis" R.A. Sharipov calls a covector times a vector a scalar product and distinguishes it from a vector times a vector which he calls a dot product. This distinction isn't referenced in this article but perhaps it should be.
When describing the projection of a vector, the text was confusing due to the close proximity of two separate definitions.
If neither a nor b is a unit vector, then the magnitude of the projection of a in the direction of b, for example, would be a · (b / |b|), where b / |b| is the unit vector in the direction of b.
Changed this to hopefully remove ambiguity. —Preceding unsigned comment added by Fflanner ( talk • contribs) 00:37, 10 March 2009 (UTC)
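A small numeric sketch of the clarified wording (illustrative, assuming NumPy; not part of the original comment): the scalar projection of a in the direction of b is a · (b / |b|).

```python
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([10.0, 0.0])  # neither a nor b is a unit vector

b_hat = b / np.linalg.norm(b)  # unit vector in the direction of b
proj = np.dot(a, b_hat)        # magnitude of the projection of a onto b
print(proj)  # 3.0: the component of a along the x-axis
```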
The fourth property
does not make any sense to me. This is clearly false when a and c are not parallel. -- Marozols ( talk) 15:06, 5 July 2009 (UTC)
Is this article trying to maintain some consistency in the use of bold and capital letters? Especially given that vectors and matrices are under discussion, this seems important... yet in the section "Geometric Interpretation" the narrative uses bold lowercase letters for what appear to be the same vectors as the bold capitals in the figure. Is this intentional, a mistake, or regarded as not significant? Gwideman ( talk) 20:30, 18 March 2010 (UTC)
In the Geometric Interpretation section, after the equation "theta = arccos(...)", appears the sentence "The cosine of the angle is returned". Clearly this does not refer to what the preceding equation returns, as this returns an angle. Probably the writer intended to talk about the meaning of the argument to arccos. Gwideman ( talk) 20:29, 8 April 2010 (UTC)
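The relation under discussion, sketched numerically (an illustration added here, not the article's text): arccos returns the angle itself, and its argument a · b / (|a||b|) is the cosine of that angle.

```python
import numpy as np

a = np.array([1.0, 0.0])
b = np.array([1.0, 1.0])

# The argument of arccos is the cosine of the angle...
cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
# ...while the equation itself returns the angle.
theta = np.arccos(cos_theta)

print(round(np.degrees(theta), 1))  # 45.0
```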
is the
scalar projection of A onto B.
Since A • B = , then AB = (A • B) / .
This caption is badly formatted and contains a clearly wrong equation:
AB = (A • B) / ,
Parts of the equations are formatted with <math>, other parts are not. Moreover, in Wikipedia image captions are always rendered at a smaller font size than the text, so the <math> format is inappropriate there. The previous caption was:
This is why I am going to revert this edit. If you do not agree, please first explain here the reason. Paolo.dL ( talk) 21:22, 9 December 2010 (UTC)
See Table of mathematical symbols. In my opinion, two pipes are used when you are afraid that a single pipe can be confused with the letter I. Sincerely, I have rarely seen university teachers use the double pipe when they write on the board. The single pipe is the same symbol used for the absolute value of a scalar, but there's no ambiguity when it is applied to a vector, as nobody needs to take the absolute value of all the elements of a vector (it makes no sense).
And on the other hand, there's no ambiguity when you use it with a scalar, because the square root of the square of a scalar coincides with its absolute value. I mean, to compute the absolute value of a scalar you can take its Euclidean norm, if you like!
The problem is that in this article we use both conventions, and this is likely to confuse the reader. Another problem is that the image uses the single pipe. So, we can't use the double pipe in the caption. Feel free to change everything to single pipe. Paolo.dL
However, this is not the only difference between the figure and the text. Namely, the figure uses A and B, while the text uses a and b. It would be nice to have a picture using labels consistent with the text, not vice versa, as vectors are typically represented by lowercase characters. ( talk) 09:10, 10 December 2010 (UTC)