This article is rated C-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
Looks interesting, but this is not a place for academics to promote concepts not widely discussed - somewhere other than their own pet web site. Please outline what an SOM actually is, with examples, so that this can be a proper article. —The preceding unsigned comment was added by 142.177.93.97 ( talk) 22:57, 24 January 2003
Easy guys, name-calling won't get you anywhere.
Paskari
21:38, 1 December 2006 (UTC)
I just wanted to add that SOMs are important to have even if the article is not perfect yet. Because the history of the self-organizing map goes back a while, I would suggest the introduction of some historical references, as well as some review articles. -- MNegrello ( talk) 01:02, 16 September 2008 (UTC)
Something went wrong with the 3 .jpg files I uploaded.... They don't seem to load...
Perhaps it should be stressed that the neighbourhood is defined by means of a reference space (the map) as opposed to the data space, which specifies the winning neuron by distance to its corresponding codebook vector. —The preceding unsigned comment was added by 80.55.196.98 ( talk) 21:45, 18 January 2006
I deleted the following sentence: "Unsupervised learning" is, technically, supervised in that we do have a desired output.
It doesn't make sense; what is the "desired output"?
Update: Now I think I understand that this probably refers to the fact that weights are moved towards the input vector. I still don't think this justifies calling SOM supervised, because there is no a priori desired output value for an input. AnAj 15:39, 18 June 2006 (UTC)
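To make the point above concrete: the SOM update simply pulls a weight vector a fraction of the way toward the current input, with no target output anywhere, which is why calling it supervised is hard to justify. A minimal sketch (the function name and values are illustrative, not the article's notation):

```python
# SOM-style weight update: no desired output, only the input itself.
# Each weight vector is moved a fraction alpha of the way toward the input.

def update_weight(w, x, alpha):
    """Return w moved fraction alpha of the way toward input vector x."""
    return [wi + alpha * (xi - wi) for wi, xi in zip(w, x)]

w = [0.0, 0.0]
x = [1.0, 1.0]
w = update_weight(w, x, 0.5)  # w is now halfway toward x
```

Nothing in this rule compares the output to a label; the "teaching signal" is the input vector itself.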
The first line reads "The self-organizing map (SOM) is a subtype of artificial neural networks"; that's like me saying "a Toyota is a subtype of automobile". Can we clear this up, please? Paskari 21:38, 1 December 2006 (UTC)
In the Some variables section of the An example of the algorithm section of the page, the variable lambda (λ) is listed as the limit on time iteration, but is not used anywhere else on this page. I'm guessing that it's either the result of other info being edited out, or something that was copied in without careful review. —The preceding unsigned comment was added by 66.195.133.75 ( talk) 19:54, 26 December 2006
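For what it's worth, λ commonly appears in SOM implementations as a time constant controlling the exponential decay of the learning rate and neighbourhood radius over the iteration limit; a sketch assuming that role for λ (the names here are illustrative):

```python
import math

def decayed(initial, t, lam):
    """Exponential decay schedule; lam acts as a time constant tied to
    the iteration limit, shrinking learning rate / neighbourhood radius."""
    return initial * math.exp(-t / lam)

# With lam = 100, a value falls to about 37% of its start by t = 100.
sigma_0 = 5.0
sigma_100 = decayed(sigma_0, 100, 100.0)
```

If the article's λ was meant this way, the decay formulas that used it have likely been edited out.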
I removed " gnod, a Kohonen network application." because I don't see any evidence saying that it is and on the gnod page, there are references to sites that say it's not. I added WEBSOM, because it is. -- JamesBrownJr 22:47, 2 March 2007 (UTC)
Added: ", as an artificial neural network," to paragraph two because Prof Kohonen, by no means, invented the idea of mapping higher dimensional spaces into lower ones, which is what the previous existing language of that paragraph implied....June 15, 2007 —The preceding unsigned comment was added by 168.68.129.127 ( talk) 21:39, 15 June 2007
IMHO, it's confusing that this section describes SOM as a feedforward network. In a typical feedforward network, there are multiple layers and weights affect the extent to which output values from neurons in one layer affect the inputs to neurons in the next layer. But if I understand SOM correctly (which may very well be the problem here), SOM has only one "layer" of neurons. Values are never "fed forward" (as there's no next layer to feed them to), and the "weights" in SOM have a completely different role than the weights in a typical feedforward network. I don't understand the statement "Each input is connected to all output neurons" because there is no input layer of neurons. Each vector (aka pattern) from the training dataset will affect the BMU and its neighbors, but this is not same as being "connected to all output neurons". (If my understanding is incorrect, please delete this comment.)
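The structure described above, and the map-space vs. data-space distinction raised earlier on this page, can be sketched in a few lines: one "layer" of grid nodes, the winner chosen by distance in data space, and the neighbourhood that also updates defined on the grid. This is a minimal illustration, not the article's pseudocode; the grid size, alpha, and sigma are arbitrary:

```python
import math
import random

# Single-"layer" SOM sketch: each grid node holds a codebook (weight) vector.
# The BMU is chosen by distance in DATA space; the neighbourhood that is
# updated alongside it is measured in MAP (grid) space.

random.seed(0)
GRID_W, GRID_H, DIM = 4, 4, 3
weights = {(i, j): [random.random() for _ in range(DIM)]
           for i in range(GRID_W) for j in range(GRID_H)}

def bmu(x):
    """Best-matching unit: grid node whose codebook vector is closest to x."""
    return min(weights, key=lambda n: sum((wi - xi) ** 2
                                          for wi, xi in zip(weights[n], x)))

def train_step(x, alpha=0.3, sigma=1.0):
    bi, bj = bmu(x)
    for (i, j), w in weights.items():
        # Neighbourhood strength measured on the GRID, not in data space.
        d2 = (i - bi) ** 2 + (j - bj) ** 2
        h = math.exp(-d2 / (2 * sigma ** 2))
        for k in range(DIM):
            w[k] += alpha * h * (x[k] - w[k])

train_step([1.0, 0.0, 0.0])
```

Note there is no second layer for values to be "fed forward" to; the only per-node state is the codebook vector, which supports the comment's reading.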
The following appears to contradict what is said in Generative Topographic Map about topology preservation."It is trained using unsupervised learning to produce low dimensional representation of the training samples while preserving the topological properties of the input space."
I would say that it cannot preserve the topological properties of the input space for all possible inputs. Perhaps you should change the sentence to: "It is trained using unsupervised learning to produce low dimensional representation of the training samples while trying to preserve the topological properties of the input space."
The opinion that this contradicts the GTM article is not accurate. Both algorithms can be expected to produce topological error; it is therefore a matter of degree. Typically one would run the SOM algorithm several times and measure the topological error. Depending on how important this issue is for the analyst, he or she would select the map with the lowest topological error. I am removing the contradiction warning but I will add more information on SOM vs GTM. —Preceding unsigned comment added by ElectricTypist ( talk • contribs) 15:48, 21 November 2007 (UTC)
I hereby state this article as 'awesome'; it's a good reference for anyone who wants specific information regarding the workings of this type of neural network, instead of a smear of general information with no useful engineering qualities at all. -- Chase-san 11:28, 4 August 2007 (UTC)
Without any kind of description and context, those pictures are completely meaningless. What do the colours mean? What is the form of the training data? What are the inputs? How do you generate the image from the final sets of weight vectors? How do you interpret the pictures? Deeply non-awesome. -- GWO —Preceding unsigned comment added by Gareth Owen ( talk • contribs) 12:44, 12 September 2007 (UTC)
ITEM 1.
The first paragraph of the article contains the statement that "The map seeks to preserve the topological properties of the input space." This is a reasonable aim, but one that is virtually impossible with the SOM on all but the most trivial of data. It also gives the impression that the SOM generates a topology preserving map.
To improve the article, the sentence "The map seeks to preserve the topological properties of the input space." should be replaced with "The SOM map aims to exploit the topographic relationships of the network nodes to provide more information about the underlying data space than is possible with the regular Vector Quantisation approach."
ITEM 2.
"Interpretation
There are two ways to interpret a SOM. Because in the training phase weights of the whole neighborhood are moved in the same direction, similar items tend to excite adjacent neurons. Therefore, SOM forms a semantic map where similar samples are mapped close together and dissimilar apart."
This last sentence is misleading and wrong. This is only true if there is no violation in the topological mapping of the SOM map. Graph dimensionality, graph size and graph folding can cause violations in the SOM mapping.
Let me expand on this:
Neural maps aim to exploit the topographic relationships of the network nodes to provide more information about the underlying data space than is possible with the regular Vector Quantisation approach. The neural map aims to 1. link the relationships between the input data with the relationships between the nodes of the graph, and 2. for the map to be truly useful, ensure that the relationships between nodes of the graph are reflected in the relationships within the input data. This is very important, and is where the SOM approach simply fails on all but the most trivial of data. In topology preserving mappings, neighbourhoods are preserved in the transformations from data to graph space and vice versa.
I suggest anybody interested in why the SOM can not be considered a topological mapping reads [H. U. Bauer, M. Herrmann, and T. Villmann. Neural maps and topographic vector quantization. Neural Networks, 12:659-676, 1999]. The SOM (at best) is a topographic mapping - it is not a topological mapping. The Growing Neural Gas algorithm [B. Fritzke. A Growing Neural Gas Network Learns Topologies. In G. Tesauro, D.S. Touretzky, and T.K. Leen, editors, Advances in Neural Processing Systems 7 (NIPS'94), pages 625-632, Cambridge, 1995. MIT Press.] (tends to) generate a topology preserving mapping. This is a well known fact, and this article is misleading with its incorrect use of terminology.
To improve this article, "Therefore, SOM forms a semantic map where similar samples are mapped close together and dissimilar apart." needs much more work. This statement is only true if the map contains no topographic errors, which is difficult to achieve on all but the most trivial of data.
FORTRANslinger ( talk) 20:05, 19 December 2007 (UTC)
Can somebody expand on the history of the subject?
It must go back to at least 1982. There should also be a chronological list of related methods developed before the SOM (k-means, hard c-means, the Linde-Buzo-Gray algorithm and other clustering work?). —Preceding unsigned comment added by Arkadi kagan ( talk • contribs) 21:33, 19 January 2010 (UTC)
Perhaps, some precise info signed by Teuvo Kohonen:
The Self-Organizing Map algorithm was introduced in 1981. The earliest applications were mainly in engineering tasks. Later the algorithm has become progressively more accepted as a standard data analysis method in a wide variety of fields ... [1]
Arkadi kagan ( talk) 08:43, 20 January 2010 (UTC)
One of the references is password-protected. This does not seem appropriate for WP:
Rmkeller ( talk) 06:17, 2 November 2010 (UTC)
I was interested in the reference supporting this sentence: "It has been shown that while self-organizing maps with a small number of nodes behave in a way that is similar to K-means, larger self-organizing maps rearrange data in a way that is fundamentally topological in character. [5]" However, it points to the website http://www.princeton.edu/~achaney/tmve/wiki100k/docs/Self-organizing_map.html, which is a mere copy of this Wikipedia article... how could that happen? — Preceding unsigned comment added by 141.14.31.91 ( talk) 10:48, 28 January 2015 (UTC)
Why is the self-organizing map considered to be a type of artificial neural network? What is it that is "neural" about them? — Kri ( talk) 12:55, 2 January 2017 (UTC)
The link for this just brings you back to the SOM page. Either the link should be removed or an article for TASOMs should be written (latter is ideal).