This article is rated C-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
As a mathematician, I think the first line is nonsense:
"Eigenfaces are eigenvectors in the high-dimensional vector space of possible faces of human beings."
You can't have eigenvectors without first defining an operator for which they are eigenvectors, see eigenvector. Without an operator, saying that a vector is an eigenvector has exactly zero meaning. - Andre Engels 22:01, 17 May 2004 (UTC)
I hope what I've done with that second paragraph is ok. I understand eigenvectors and I've heard of eigenfaces before, but I didn't understand that first paragraph at all. My explanation is very hand-wavy, so feel free to make it more accurate (but keep it readable for non-mathematicians). - G 16:10, 20 May 2004 (UTC)
Is the following explanation completely right?
Instead, shouldn't we give an explanation such as:
If we have a set of M images, each 100x100 pixels, we can imagine any such image as a point in an N = 100x100 = 10,000-dimensional space. Thanks to the eigenface decomposition, we do not need all these dimensions: any image already in the set can be represented exactly in terms of M (the number of images) eigenfaces. We can reduce the number of dimensions further, to some M' with a reasonable loss, where M' can be as low as 0.1M. That is, if we have 300 images of 10,000 pixels each, we can get reasonable results with only 300/10 = 30 eigenfaces.
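The dimension counting above can be checked with a short NumPy sketch (the random matrix here is just a stand-in for real mean-centred face images, and the choice M' = 30 follows the example in the comment):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 300, 10_000            # 300 images, 100x100 = 10,000 pixels each
T = rng.normal(size=(M, N))   # stand-in for flattened face images, one per row
T -= T.mean(axis=0)           # centre the data (subtract the mean face)

# SVD of the M x N data matrix: since rank(T) <= M, at most M singular
# values are nonzero, so at most M eigenfaces carry any information.
U, s, Vt = np.linalg.svd(T, full_matrices=False)
eigenfaces = Vt               # shape (300, 10000): each row is one eigenface

# Truncate to M' = 30 eigenfaces and represent an image by 30 coefficients.
M_prime = 30
weights = T[0] @ eigenfaces[:M_prime].T    # shape (30,)
approx = weights @ eigenfaces[:M_prime]    # reconstruction, shape (10000,)
```

So a 10,000-pixel image is never matched against 10,000 eigenvectors: the decomposition yields at most M of them, and in practice only the leading M' are kept.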
I am not an expert in the subject, so I did not change the article, but the explanation there looks unclear or incorrect to me.
-- spAs 12:41, 17 August 2007 (UTC)
I agree with User:Spas; putting it as "If we are working with a 100x100 image, then this system will have 10,000 dimensions, resulting in 10,000 eigenvectors" will make it clearer for the technically uninitiated. —Preceding unsigned comment added by 122.170.25.146 ( talk) 15:51, 18 August 2008 (UTC)
I think (I'm no expert) the article has the following problems:
146.50.8.103 ( talk) 16:16, 20 November 2008 (UTC)
In the section Practical Implementation, the matrix T is defined as having the flattened images in its rows, meaning each row represents an image. In the next section, Computing the eigenvectors, this convention suddenly changes: now the matrix T has the images in its columns. For a reader who is new to the subject (such as me), this can be rather confusing if one doesn't read very attentively. Would it be possible to unify the definitions of T? -- 131.152.224.31 ( talk) 12:40, 3 April 2012 (UTC)
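For what it's worth, the two conventions are related by a simple transpose, and the standard small-matrix trick gives the same eigenfaces either way. A toy sketch (sizes here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 20, 400                       # 20 images of 400 pixels (toy sizes)
T_rows = rng.normal(size=(M, N))     # "Practical implementation": images in ROWS
T_rows -= T_rows.mean(axis=0)
T_cols = T_rows.T                    # "Computing the eigenvectors": images in COLUMNS

# With images in columns, the full covariance T_cols @ T_cols.T is N x N
# (too big in practice), but its nonzero eigenvalues are shared with the
# small M x M matrix, which is identical under either convention:
small = T_cols.T @ T_cols            # M x M
small_rows = T_rows @ T_rows.T       # same M x M matrix, row convention

vals, vecs = np.linalg.eigh(small)
eigenfaces = T_cols @ vecs           # lift each M-vector to an N-pixel eigenface
```

So the article's two sections describe the same computation; only the orientation of T differs, which is why unifying the notation would help.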
The statistics in the Use in facial recognition are completely meaningless without some sort of context (i.e. how big was the data set). 81.98.54.232 ( talk) 19:43, 18 October 2010 (UTC)