This page is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Someone changed "compute a true gradient" to "compute the true gradient". Why? Is there only one true gradient? I don't think so. The former text was correct.
What about radial basis networks? -- FleaPlus 16:45, 14 Apr 2004 (UTC)
Therefore, the following algorithms are not neural networks and do not belong in this article:
I would like to move these out of this article and put them into the "Approaches and algorithms" list under supervised learning. -- hike395 05:52, 15 Apr 2004 (UTC)
I have a question about one of the things on the page. It says that "certain functions that seem exclusive to the brain such as dreaming and learning, have been replicated on a simpler scale, with neural networks." My question is: how exactly has a neural network been able to dream? It seems to me to be quite a human quality to dream. What did it dream about? Computers? Numbers? Perhaps it had a nightmare in which the Riemann Hypothesis was disproved? PLEASE clarify that or provide a source; otherwise it should be deleted.
I would like a more expanded introduction, for people with little experience in related fields. A very simple example would be nice, too. All this talk of "nodes" and "functions", but I can't really follow what's actually going on. Label the diagram with "weights" and "nodes" and "functions". Is each circle a node that sums the weighted (lines) numeric values entering its inputs, puts them through a function, and outputs the result? It should be clearer. - Omegatron 19:17, Dec 6, 2004 (UTC)
On line 42 there was a change from -1 to 0. Neural networks can work in both cases, from [0 1] or [-1 1], or for that matter between any two real numbers. There is some empirical research going on that shows that -1 to 1 works with fewer epochs, but since this is an encyclopedia, not a research paper, it should clearly state that any value can be used. There is another error, though: the threshold and the lower bound cannot be the same, so if you want to use the bounds [0 1] then a threshold of 0.5 could potentially be used. For [-1 1], 0 could be your threshold. -- Tim 12 Dec 2004
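The point about bounds and thresholds can be sketched with a single threshold unit. Everything here (the weights, the inputs, the midpoint thresholds) is a hypothetical illustration, not the network from the article:

```python
# A single threshold unit evaluated under two common input encodings.
# The unit fires when the weighted sum exceeds the threshold; the
# threshold must lie strictly inside the chosen range, not at its
# lower bound.

def threshold_unit(inputs, weights, threshold):
    """Return 1 if the weighted sum of inputs exceeds the threshold, else 0."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s > threshold else 0

# [0 1] encoding with a midpoint threshold of 0.5
print(threshold_unit([0, 1], [1, 1], 0.5))   # sum is 1, exceeds 0.5: fires

# [-1 1] encoding with the natural midpoint threshold of 0
print(threshold_unit([-1, 1], [1, 1], 0))    # sum is 0, does not exceed 0: silent
```

The same unit works with either encoding; only the threshold has to move to the midpoint of whatever range is chosen.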
I split up this section to discuss the removal of the following (which was here already) -- Sp00n17 04:08, Dec 29, 2004 (UTC)
"As a machine learning technique, Artificial Neural Networks are both inelegant theoretically and unwieldy in practice, and therefore have very little merit. The interest in ANNs - which had its ups and downs over the decades - seems to be mainly motivated by the appeal to the analogy with the brain and by inertia."
A bit offtopic, but regarding the use of the term "neural network": that term can sound interesting when you explain to somebody what you work with, but it has little value in getting people to actually use it in real-world applications. My company develops neural-net-based software and services for a very wide range of applications, but we never ever use the term "neural nets". It simply sounds too futuristic for people to actually integrate into production systems. I'm personally OK with that, as the implicit biological reference is quite misleading. Instead we use, for the customer, the much more acceptable "adaptive systems". It's vaguer, yes, but it sounds much more like a proven conventional method than some sci-fi fantasy. -- Denoir 01:41, 7 Jan 2005 (UTC)
Is there any reason that Self-organizing map / Kohonen NN are not here? I was going to add them myself, but maybe there is a reason? :) -- Cyprus2k1 08:16, 13 Feb 2005 (UTC)
Surely for a XOR you'd want weightings of -1, +2, -1 rather than +1, -2, +1? The diagram as shown gives 0 on equal states and -1 on differing states, while if -1, +2, -1 were used instead the output would be 0 on equal states and 1 on differing states - as XOR.
I thought it was more like a NXOR, except that you'd need to absolutely add 1 to the states to get [0,1] from [-1,0], but I'm not so sure on that reasoning since that may just be a case of knowing how much current 0 is. -- Firien 13:52, 23 Feb 2005 (UTC)
The diagram also has more nodes than necessary. XOR can be done with 3 nodes: 2 input and 1 output. Node 1 of the input has weights of 1 and a threshold of 0. Node 2 of the input has weights of -1 and a threshold of 2. The output node has weights of 1 and a threshold of -1.
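For reference, a small XOR network of step-threshold units can be checked directly. This is one well-known minimal construction (a single hidden AND unit plus direct input-to-output connections), not necessarily the exact network described in the comment above; the weights are a standard textbook choice:

```python
# Minimal XOR from step-threshold units: a hidden AND unit plus
# direct connections from the inputs to the output.

def step(s, threshold):
    return 1 if s > threshold else 0

def xor_net(x1, x2):
    h = step(x1 + x2, 1.5)             # hidden unit: fires only on (1, 1), i.e. AND
    return step(x1 + x2 - 2 * h, 0.5)  # output: OR, suppressed in the AND case

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))   # 0 on equal states, 1 on differing states
```

Enumerating all four input pairs like this is an easy way to verify any proposed set of XOR weights.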
The diagram is good, but the description is horrible. I didn't understand what it meant until I went to page 2 of this site. I think the definition should be less cryptic, as the use of threshold and weight can throw people off. When we are making pages on this system we often get lost in the fog of academia. Paskari 19:30, 1 December 2006 (UTC)
Then you ask: what is there to write about neural networks? Quite a lot, actually. You should know that "artificial neural networks" are an application of neural network theory, i.e. theorizing about neural networks (yes, the ones in the brain, of course!) and their properties. There is, as you predicted, a lot of neuroscience, then cognitive science and philosophy, and information theory. The version you indicated has a much better introduction, but it doesn't change the matter. The article is still about artificial neural networks and not about neural networks. If it consoles you, a new article on neural networks could have a disambiguation sentence "If you are searching for ANNs...". I just googled these pages in 5 seconds to give you some ideas:
cheers, Ben (talk) 10:59, Apr 7, 2005 (UTC)
The German and the Bulgarian wikis, e.g., got it right: see de:Neuronales Netz and bg:Невронна мрежа. The French wikipedia offers a discussion of theoretical neurosciences in the article (they have good research in France), see fr:Réseau de neurones. Ben (talk) 03:28, Apr 8, 2005 (UTC)
1. Let's get some structure in the discussion. Spazzm provided some good arguments, I am impressed. The hammers and carpentry analogy is good also. Neural networks are in the brain, there is research about them, and there is software, artificial networks. This should not be confused. Therefore, move the article! Just look at this mess:
Indeed the theorizing and models of neuronal networks are hard to distinguish sometimes. However, let me try. There is
That was about hammers and carpentry. 2. An article about artificial neural networks should discuss considerations about how to implement the theory of neural networks, and about some implementations. This has some overlap, of course, as you would expect, since they are based on an understanding of neural cells. Currently the article is only about artificial networks. That's a shortcoming or, if you like it better, the scope of the article is wrong. It is not Neural Networks, but Artificial Neural Networks. And that's what the article should be called. 3. Finally, I predict that if the decision is to keep the article here, it will result over time in a radical rewrite (first some stuff merged with Parallel Distributed Processing, then moving artificial networks that try to implement artificial intelligence (in contrast to the models used for research on neural assemblies) to a subsection and afterwards, as this section grows big, to a different article). So why not do it now and avoid a lot of mess? There are some people who actually would like to write something about neural networks (IN the brain). There are even lectures about "Neural Networks" (not the artificial ones). You shouldn't block that by insisting that the topic of this article is artificial neural networks. Why block other topics? Ben (talk) 04:15, Apr 8, 2005 (UTC)
I am just trying to give the message that there is something called "neural networks", it is in the brain, and many people do research in this field. Obviously, I thought it was too evident. So, that's first, now second, (now I cite you)
You don't have to explain that to me. I never said anything like that. I was pointing out that there are ANNs that are developed in AI, and neural network models (or ANNs) that test theories in neuroscience/cognitive science. So we completely agree on this one; probably there was a misunderstanding. Third: you suggest having a disambiguation page here, linking to "BNNs, ANNs and Neural Network Theory". This would be a compromise but actually also means moving the article. Are you now supporting the page move? Ben (talk) 10:05, Apr 8, 2005 (UTC)
I might support a move if some actual effort was being put into writing articles about Biological Neural Networks and Neural Network Theory, but there isn't. I'm not going to write it since I'm only competent enough to write about a small subset of NN theory, which I feel has been covered adequately here. -- Spazzm 11:11, 2005 Apr 8 (UTC)
cheers, Ben (talk) 17:13, Apr 8, 2005 (UTC)
I think a reason that people haven't already written about neural networks is that there is already an article by that name and they get confused. Of course, there are many more computer enthusiasts at Wikipedia than there are neuroscientists, but they will come when they see they have a place here. Ben (talk) 17:17, Apr 8, 2005 (UTC)
I think the reason no-one has written an article about Biological Neural Networks is that the topic is covered extensively elsewhere:
et cetera, ad nauseam. There are far more articles on brains than there are on computational intelligence, so saying that the latter is overrepresented at the expense of the former is incorrect. The 'Wikipedia is written by computer geeks' idea is a myth; it may have been true once, but it is certainly not true in this case.
Also, it was mentioned above that Neural Networks have been used in the sense of Biological Neural Networks. This is, of course, correct, but there are far more numerous and reputable examples of using Neural Network in the sense of Artificial Neural network or Neural Network Theory:
and so on... These are worldwide examples, taken from only the most highly respected publications and institutions - more than a few lecture notes or random articles.
If that's not enough, there exists prior encyclopedic usage of Neural network in the sense of Artificial Neural Network or Neural Network Theory:
Moving the current page to Artificial Neural Network and turning Neural Network into an article on Biological Neural Networks would fly in the face of all reason.
There's no page on Biological Neural Networks or Neural Network Theory, so turning Neural Network into a disambiguation page now would be pointless - there is only one page to point to. Those who support a move would perhaps be better served by first writing these articles, then requesting the reorganization.
-- Spazzm 06:37, 2005 Apr 9 (UTC)
As for the anonymous users, they might not be counted if the decision were close. As it looks now, it won't be close, so don't worry. It would still be nice to have a consensus on the matter.
Ben (talk) 01:17, Apr 11, 2005 (UTC)
"A neural network is an interconnected group of neurons. It is usual to differentiate between two major groups of neural networks...Biological neural networks [...] and Artificial neural networks [...]"
Just follow the link to see what a neuron is! If this isn't an argument for a move, then what is?
Adding links to non-existent articles should be done with care. There is no need for you to search for all occurrences of the page title and link to articles that are unlikely ever to be written, or if they are, likely to be removed. For example, quite a few names will show up as song titles, but with few exceptions, we usually do not write articles about individual songs, so there is no point in linking to them. If you must add this type of information, be sure to link to at least one existing article (band, album, etc.).
Summarizing, the WP policy cautions about these cases:
Where was the problem again? Ben (talk) 04:26, Apr 11, 2005 (UTC)
"neural network is a computer program that operates in a manner analogous to the natural neural network in the brain"
The bold formatting is taken from the article you referred to.
Obviously, Britannica can't validate your claims, rather the opposite. Ben (talk) 08:21, Apr 11, 2005 (UTC)
"Looks to me like Britannica differentiates between neural networks (defined as computer programs) and natural neural networks." Look, it doesn't say "natural neural networks", it says "natural neural networks", offering two definitions. Let's be exact.
Oh, and yeah, dude. There are soo many articles on "the brain". It just kills me.
What about all the articles about windows, linux, software, etc.? Don't start telling me about neurosciences being overrepresented, it's ridiculous. Ben (talk) 08:32, Apr 11, 2005 (UTC)
On artificial neurons it says:
An artificial neuron (also called a "node") is the basic unit of an artificial neural network, simulating a biological neuron.
Just compare the naming conventions: you don't want the article "neural networks" moved to "artificial neural networks", yet you ignore the naming conventions in the articles neuron and artificial neuron.
I am sure my argument about the term neural network being bold in two meanings can't be misunderstood other than intentionally.
I am tired of having to face the same arguments, which we already discussed above, over and over again without anything new coming up. Maybe we should take a time-out here, as we are getting more and more sarcastic? I have other things to do as well. Ben (talk) 08:50, Apr 11, 2005 (UTC)
Ben (talk) 02:50, Apr 12, 2005 (UTC)
BTW, see my first attempt at creating an article on neural networks. Very premature, needs a lot of editing; maybe you can help?
Ben (talk) 04:29, Apr 12, 2005 (UTC)
You have some strange ways of counting, surely. How about using edit->search ("support", "oppose", "concur")? Isn't that how votes are counted? I see 3 times support (by registered users, not including myself), 2 times oppose (you and B.Bryant), 1 time "concur". I don't know how YOU counted the votes; please explain. How about finding a way to count the votes according to the policy on Wikipedia? Ben (talk) 05:52, Apr 12, 2005 (UTC)
Ben (talk) 07:21, Apr 12, 2005 (UTC)
I am citing you: I might support a move if some actual effort was being put into writing articles about Biological Neural Networks and Neural Network Theory, but there isn't. I'm not going to write it since I'm only competent enough to write about a small subset of NN theory, which I feel has been covered adequately here. -- Spazzm 11:11, 2005 Apr 8 (UTC)
Interpreting these results:
Is this a rough consensus for moving the page?
This vote is not about whether to have a disambiguation page or a new article in this place. It is about whether this article represents all the topics that are constituted by "neural networks", or whether it is general enough as an introduction to the topic as a whole (meaning both biological and artificial neural networks). I say no. Let's hear what you say, Spazzm. The time is up anyway, as far as I can see. I'll go home now and come back tomorrow, and then let's face the decision together. Please check the article I edited in the meantime for some changes.
Ben (talk) 08:12, Apr 12, 2005 (UTC)
A bit of a late entry, but do a search on Google for neural network and you'll see that basically all references are to the artificial kind. While "neural network" can in theory mean wetware, it's not used that way. My vote would be (if it is not too late) oppose. -- Denoir 01:59, 5 May 2005 (UTC)
This article has been renamed as the result of a move request. I supported the move and that, I think, makes it strong enough a majority (4 to 2). I do, however, think that the neural network article should not be a disambiguation page - I reckon it should be a more general overview of the topic and have tried to reflect that in the way I've done the move. I will leave it up to you lot to decide on the way forward from there, though. violet/riga (t) 20:19, 12 Apr 2005 (UTC)
'Neural Networks were proposed as an artificial study of the processes of biological adaptive systems using analogous components, and only since its conception have the relevant natural neural networks been referred to as neural networks, as a general collective term for various specific biological functions that had previously been referred to as x or individually.' This illustrates that the root of the term as it is used today is based in the artificial research, and shows that the fact it is based on natural phenomena is entirely irrelevant to the use of the term. This would make things far easier for people to understand, highlighting the importance of the use of the term, and how they can expect to encounter it in the real world, which is, after all, what WP is supposed to be about. As has been illustrated, very few places use the terms ANN or SNN. Khasurashai ( talk) 16:44, 3 January 2008 (UTC)
The first paragraph, "An artificial neural network (ANN), also called a simulated neural network (SNN) or just a neural network (NN), is an interconnected group of artificial neurons that uses a mathematical or computational model for information processing based on a connectionist approach to computation.", is, while technically correct, pretty much useless from the point of view of understanding what NNs are. Those familiar with the topic know that already, and those who are not won't understand what it means.
My suggestion is that we keep the text, but that we first add a short text explaining NNs in more general terms. A good introduction text can be found here. I think we need something similar to that:
"A neural network is a powerful data modeling tool that is able to capture and represent complex input/output relationships. "
At least we need something saying that neural nets are parametric models that adapt their parameters based on data presented to them, i.e. something that explains their functionality rather than their structure. --
Denoir 02:15, 5 May 2005 (UTC)
Continuum calculator claims to be an alternative to the artificial neural network. It has similar properties, but its structure differs significantly.
Currently there's a Vote for Delete on this article. Could someone take a look and give a professional opinion on the article's validity on the voting page? Thanks. Pavel Vozenilek 17:41, 21 May 2005 (UTC)
Why do we need to have a complex network with two hidden layers for the XOR function, when there are simpler networks which do the trick with just one hidden layer?
[Two ASCII network diagrams were drawn here: a smaller XOR network with connections spanning a layer, and a larger one with a single hidden layer and no spanning connections.]
...please excuse the line drawings — I was never much good at ascii art.
Anyway, both the networks above can compute XOR easily (I'll leave it to the reader to fill in the weights). While the first has fewer units, the second is arguably simpler because it doesn't require connections that span over a layer.
If no one objects, I can make a pretty version of the second diagram in a few days (...after I've finished my dissertation...). — Asbestos | Talk (RFC) 21:15, 18 August 2005 (UTC)
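The second topology (one hidden layer, no connections spanning a layer) can be verified with a quick sketch. The weights here are a standard OR/NAND/AND choice, not necessarily the ones intended for the diagram:

```python
# XOR with two inputs, one hidden layer of two step-threshold units,
# and one output unit. No connection skips over a layer.

def step(s, threshold):
    return 1 if s > threshold else 0

def xor_one_hidden_layer(x1, x2):
    h1 = step(x1 + x2, 0.5)        # hidden unit 1: OR of the inputs
    h2 = step(-x1 - x2, -1.5)      # hidden unit 2: NAND of the inputs
    return step(h1 + h2, 1.5)      # output: AND of (OR, NAND) = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_one_hidden_layer(a, b))
```

This is the classic decomposition XOR = OR AND NAND, which is why a single hidden layer suffices.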
Hello everyone. The neural networks article has now been updated and is in a much better state, though still lacking in some respects. But it is quite readable at least. Now, there are two things that I noticed:
Perhaps, as is done in neural networks, a coherent background discussing the relationship between optimisation, statistical estimation and neural networks should be introduced before the list of types of neural networks. Then it would be easy to discuss each type of neural network according to established concepts.
OK, now a few tidbits that I noticed:
So, if I were to re-write this article from scratch, I would do:
1) Introduction, with a link to Neural Networks (which seems to be lacking) for further discussion of the relation to biological systems, and the use of models of neural networks in neuroscience.
2) Models: talk about different non-exclusive categories, maybe without much mention of biological neurons. There is not much need to talk about specific functions here - it may only confuse the reader.
3) Learning: Cost functions and how to minimise them. Give a basic example of linear model using stochastic steepest gradient descent.
4) Types: Start with the Perceptron, and how it's related to the example in 3. Talk about linear separability. Talk about how projecting to another space can make a problem linearly separable. Introduce alpha-perceptron. Introduce the MLP, and the use of the chain rule for minimising with respect to the 'hidden' parameters. Introduce the RBF network as another example. After all this, the reader should be able to easily tackle other networks. Talking about the recurrent networks and their inherent learning stability problems should be easy after the chain rule discussion. Try to avoid talking much about not very commonly used networks, though this is POV.
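Point 3 of the plan above (a basic linear model trained with stochastic gradient descent on a cost function) can be sketched in a few lines. The data, learning rate, and epoch count here are made up for illustration:

```python
# A linear model y = w*x + b fitted by stochastic gradient descent
# on a squared-error cost, over a tiny noiseless dataset whose true
# parameters are w = 2, b = 1.

import random

random.seed(0)
data = [(x, 2.0 * x + 1.0) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]

w, b, lr = 0.0, 0.0, 0.1
for epoch in range(200):
    random.shuffle(data)               # "stochastic": visit samples in random order
    for x, y in data:
        err = (w * x + b) - y          # derivative of 0.5*err^2 w.r.t. the prediction
        w -= lr * err * x              # step each parameter down its gradient
        b -= lr * err

print(round(w, 2), round(b, 2))        # converges close to 2.0 and 1.0
```

Starting from this example, the perceptron and MLP sections could then reuse the same cost-plus-gradient vocabulary.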
So, I don't know, does anyone have another plan? Would you prefer to leave it as is? -- Olethros 21:37, 23 December 2005 (UTC)
>Introduce the MLP
Why does the section "Types of neural networks" not mention MLP?
Stevemiller (
talk) 03:07, 15 February 2008 (UTC)
Another comment which I feel might fit under this heading: "Learning paradigms" is an example of a section containing text duplicated in both "Neural networks" and "Artificial neural networks". This is bad from a maintenance point of view; the two have already begun to differ slightly, and no one will know which version is the most recent and correct. That text should be maintained as part of the ANN article (where it belongs) and should be reduced to a link and/or a shorter summary in the NN article. 195.60.183.2 ( talk) 17:25, 17 July 2008 (UTC)
I was wondering whether people thought that moving the section "Neural Networks and Artificial Intelligence" from Neural Network to Artificial Neural Network as a "Background" section would be preferable, or whether a new article called "Neural Network Learning: Theoretical Background" should be created instead. -- Olethros 16:08, 26 December 2005 (UTC)
Started bringing stuff over. I think this introduction is OK as it stands now. I added a few more things in the first part of the Background section, where it makes clear why artificial neural networks are called 'networks' and not 'functions' or 'thingies' or whatever. The relation with graphical models is made as clear as possible. My aim here was three-fold: a) correctness, b) generality, c) links to other fields. When I am talking about a specific model I am trying to talk about it in the form of an example. I think that the later section on types of artificial neural networks is more suitable as a place in which to put lots of information about ANN types. I think that this is satisfactory as an introduction, but I would particularly enjoy comments from people who are not experts. -- Olethros 22:47, 30 December 2005 (UTC)
I'm thinking about creating an article about neural network simulation software. There are quite a few different types of software, ranging from pure data mining tools to biological simulators, and I think it would be interesting to have an overview. The first step would be to categorize them into subtypes and provide a relatively abstract summary. The second step, a bit more time-demanding, would be to describe actual software. Something similar can be found for Word processors and other types of software. The overall aim would be to provide a more practical view of neural networks. Any suggestions, objections etc. are most welcome. -- Denoir 12:43, 12 January 2006 (UTC)
The actual text under the section is correct, stating that "ANNs are frequently used in reinforcement learning as part of the overall algorithm." The "Learning paradigms" intro, however, is not. As far as the neural network goes, its use in RL is plain supervised learning, with (for example, for on-policy TDL) the input being state and action, and the desired output being expected reward.
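The point that the network itself sees RL as supervised learning can be sketched like this. The feature vector, the "network" (a linear model), and the TD target are all hypothetical stand-ins:

```python
# From the network's perspective, on-policy TD learning is supervised:
# the input is a (state, action) feature vector and the training target
# is an estimate of expected reward, r + gamma * Q(s', a').

def predict(weights, features):
    return sum(w * f for w, f in zip(weights, features))

def td_supervised_step(weights, features, td_target, lr=0.1):
    # one supervised update pulling the prediction toward the TD target
    err = predict(weights, features) - td_target
    return [w - lr * err * f for w, f in zip(weights, features)]

phi = [1.0, 0.0, 1.0]          # hypothetical (state, action) features
target = 0.5 + 0.9 * 0.2       # hypothetical r + gamma * Q(s', a') = 0.68
weights = [0.0, 0.0, 0.0]
for _ in range(100):
    weights = td_supervised_step(weights, phi, target)
print(round(predict(weights, phi), 3))   # converges to the fixed target 0.68
```

The RL machinery decides *what* the target is; the network's training step is the same as in any supervised setting.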
The request to include RL came from the Nature peer review of Wikipedia.
Incidentally, the next issue, backpropagation, is also mentioned there. In the review, backpropagation is referred to as a "learning algorithm", and in the article we have "When one tries to minimise this cost using gradient descent for the class of neural networks called Multi-Layer Perceptrons, one obtains the well-known backpropagation algorithm for training neural networks."
Backpropagation is not a learning algorithm per se and it is certainly not tied to gradient descent. What is being propagated depends on both the cost function and the receiving element. And how that information is used to update the system is up to the local learning algorithm. Instead of gradient descent, the propagated error can for instance be used with local GA to optimize weights. Not to mention that there are many learning algorithms that use the local error gradient that are not gradient descent. -- Denoir 00:07, 1 February 2006 (UTC)
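The separation argued above (propagating an error gradient versus deciding how to use it) can be shown on a toy one-weight model. The "network" and its numbers are hypothetical; the point is only that the same propagated gradient feeds two different update rules:

```python
# The propagated signal: a local error gradient for a one-weight model
# with squared-error cost 0.5 * (w*x - y)^2.
def local_gradient(w, x, y):
    return (w * x - y) * x

# Update rule 1: plain gradient descent.
def gd_step(w, g, lr=0.1):
    return w - lr * g

# Update rule 2: gradient descent with momentum. Same gradient signal,
# different local learning algorithm.
def momentum_step(w, v, g, lr=0.1, beta=0.9):
    v = beta * v - lr * g
    return w + v, v

x, y = 1.0, 2.0          # one training pair; the optimum is w = 2
w1 = w2 = 0.0
v = 0.0
for _ in range(100):
    w1 = gd_step(w1, local_gradient(w1, x, y))
    w2, v = momentum_step(w2, v, local_gradient(w2, x, y))
print(w1, w2)            # both approach 2.0, by different routes
```

A GA or any other local optimizer could consume the same propagated error in place of either rule, which is why "backpropagation" names the propagation, not the learning algorithm.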
I seem to remember reading somewhere (Pinker?, The Emperor's New Mind?) that one thing neural networks are bad at is the type of symbol manipulation believed to be needed for natural language processing. Is this true or false? Is it covered by this or some other wikipedia article? where can I read about it? — Hippietrail 21:43, 7 April 2006 (UTC)
Maybe it's buried in the article somewhere but I can't find it by skimming: Is every node connected to every other node on the next layer? — Hippietrail 18:18, 9 April 2006 (UTC)
I have a basic understanding of scientific concepts and know computer programming, but not advanced math. The formulae and topic-specific jargon are difficult for me. I can't seem to find what actually happens at each node. I can see that the connections are strengthened or weakened, but what passes along them? Numbers? How do these numbers change at each node? Am I totally off the mark? — Hippietrail 18:57, 9 April 2006 (UTC)
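A direct answer to this question can be sketched in code: plain numbers travel along the connections, and each node combines them into one new number. All the values below are made up for illustration:

```python
# What one node does: multiply each incoming number by its connection's
# weight, sum the results, and push the sum through a squashing function
# (here a sigmoid) to get the single number it sends onward.

import math

def node_output(incoming_values, weights):
    s = sum(w * v for w, v in zip(weights, incoming_values))
    return 1.0 / (1.0 + math.exp(-s))   # sigmoid: output always in (0, 1)

# three numbers arrive on three weighted connections;
# one number leaves, bound for the next layer
print(node_output([0.5, -1.0, 2.0], [0.4, 0.3, 0.1]))
```

Learning changes the weights, not the rule: the same sum-and-squash computation runs at every node, before and after training.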
I think I'm slowly getting there. Please let me know if I'm on the right track:
How am I going so far? Or am I less clear than the jargon and formula filled version? (-:
Maybe these questions and the so-far very helpful answers can be of use in improving the article for lay readers who like me are capable of understanding but lack the scientific background the article currently seems to depend on. — Hippietrail 17:51, 10 April 2006 (UTC)
So if that's a feedforward backpropagation neural net, what would the name be for a net where all nodes are calculated at the same time (like the cells in the Game Of Life), but where there are no layers and any cells might be connected to any other cell, therefor allowing feedback and requiring that the net is run over time rather than a single iteration? — Hippietrail 17:51, 10 April 2006 (UTC)
In part Background->Learning there's a link to online learning. "online learning" gets redirected to E-learning, which most probably isn't what is meant in this (ANN learning) context. Perhaps some qualified person could fix this (creating a new article 'Online_learning_(ANN)' or link to the correct article) since I don't know what is meant by online learning here. —Preceding unsigned comment added by Fiveop ( talk • contribs)
I have cleaned up the external links a bit by removing the software links. There are two reasons for this. First of all the selection was pretty arbitrary, and there is lots of software out there. Second, we do have a neural network software article. I think however that instead of piling up links indiscriminately it would be good to stick to the simple principle of adding a link if there is an article describing the software. -- Denoir 05:55, 1 July 2006 (UTC)
Would someone be able to add an English description of each learning rule? 202.20.73.30 02:36, 8 August 2006 (UTC)
I found the following to be a little vague:
In the literature the term perceptron often refers to networks consisting of just one of these units.
What is 'these' referring to? The neurons or the networks?
Paskari 17:10, 29 November 2006 (UTC)
I have a sneaking suspicion that whoever wrote this section copy-pasted it from another site. I am creating an ADALINE page, in hopes of simplifying it. Hopefully my efforts won't be in vain. Paskari 17:46, 1 December 2006 (UTC)
I'm going to make two stubs called pattern association and autoassociation. I will try to directly incorporate them into this page (although I do not wish to create a whole new section in here). Also I will put them on the bottom under 'see also'. Paskari 13:16, 22 January 2007 (UTC)
I think this article (ANN) should relate to the general concept "Associative memory" somewhere and also cross-ref to it (or at least See also). Furthermore I think the disambig for "Associative memory" probably needs to be expanded and the articles pointed to need tending to. 195.60.183.2 ( talk) 17:11, 17 July 2008 (UTC)
Made a Spiking neural networks page, feel free to contribute.
I removed this article from the computer vision category. ANN is not unrelated to CV but
-- KYN 22:04, 27 July 2007 (UTC)
Isn't it effectively an artificial brain? Therefore, its potential is far beyond the human brain, if it has far more and better neurons? 63.224.202.235 02:50, 22 August 2007 (UTC)
Is there any reason why models of the ART and ARTMAP family (ART-1, ART-2, Fuzzy ART etc) are omitted? —Preceding unsigned comment added by Sswitcher ( talk • contribs) 22:46, 25 September 2007 (UTC)
I feel the article should touch upon the firing rules within an NN. My teacher is currently going through this in Electronics. I came here to find out more, and I must say I find the article quite complicated from a learner's perspective. I think the following URLs give a good introduction:
http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html http://www.neurosolutions.com/products/ns/whatisNN.html Dan Bower 01:13, 8 November 2007 (UTC)
Neural networks can actually be controversial tools within the field of machine learning. The term "neural network" is considered too broad and ambiguous, since it's applied to a wide class of unrelated algorithms. Furthermore, many of the more common "neural network" algorithms (multilayer perceptrons trained by backprop, restricted Boltzmann machines trained with contrastive divergence, etc.) require extensive tweaking to fit new learning tasks. Unlike more rigorous models (such as SVMs), using neural networks becomes as much art as science. —Preceding unsigned comment added by 96.250.30.85 ( talk) 05:16, 4 April 2008 (UTC)
Under the "other networks types" I added CPPN. They are becoming increasingly popular in research to evolve digital art and video game content. Review please and see if I made any errors. Fippy Darkpaw ( talk) 20:50, 17 May 2008 (UTC)
It sounds more correct to say that the network performs unsupervised learning than uses it. —Preceding unsigned comment added by Bwieliczko ( talk • contribs) 11:49, 29 August 2008 (UTC)
I split up this section to discuss the removal of the following (which was here already) -- Sp00n17 04:08, Dec 29, 2004 (UTC)
"As a machine learning technique, Artificial Neural Networks are both inelegant theoretically and unwieldy in practice, and therefore have very little merit. The interest in ANNs - which had its ups and downs over the decades - seems to be mainly motivated by the appeal to the analogy with the brain and by inertia."
A bit off-topic, but regarding the use of the term "neural network": that term can sound interesting when you explain to somebody what you work with, but it has little value in getting people to actually use it in real-world applications. My company develops neural-net-based software and services for a very wide range of applications, but we never use the term "neural nets". It simply sounds too futuristic for people to actually integrate into production systems. I'm personally OK with that, as the implicit biological reference is quite misleading. Instead we use the term "adaptive systems", which is much more acceptable to customers. It's vaguer, yes, but it sounds much more like a proven conventional method than some sci-fi fantasy. -- Denoir 01:41, 7 Jan 2005 (UTC)
Is there any reason that Self-organizing map / Kohonen NN are not here? I was going to add them myself, but maybe there is a reason? :) -- Cyprus2k1 08:16, 13 Feb 2005 (UTC)
Surely for an XOR you'd want weightings of -1, +2, -1 rather than +1, -2, +1? The diagram as shown gives 0 on equal states and -1 on differing states, while if -1, +2, -1 were used instead the output would be 0 on equal states and 1 on differing states - as XOR.
I thought it was more like a NXOR, except that you'd need to absolutely add 1 to the states to get [0,1] from [-1,0], but I'm not so sure on that reasoning since that may just be a case of knowing how much current 0 is. -- Firien 13:52, 23 Feb 2005 (UTC)
The diagram also has more nodes than necessary. Xor can be done with 3 nodes: 2 input and 1 output. Node 1 of the input has weights of 1 and a threshhold of 0. Node 2 of the input has weights of -1 and a threshhold of 2. The output node has weights of 1 and a threshhold of -1.
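A quick way to check the three-unit construction just described is to code it up. The sketch below assumes the convention that each unit fires when its weighted sum plus the stated threshold value is positive (the threshold acting as an additive bias); that is the reading under which the given weights produce XOR, and under a different threshold convention the numbers would not work as stated:

```python
def fire(weighted_sum, threshold):
    # Assumed convention: the unit fires when the weighted sum plus the
    # stated threshold value is positive (threshold acting as a bias).
    return 1 if weighted_sum + threshold > 0 else 0

def xor_net(x1, x2):
    h1 = fire(1 * x1 + 1 * x2, 0)     # "node 1": behaves as OR
    h2 = fire(-1 * x1 + -1 * x2, 2)   # "node 2": behaves as NAND
    return fire(1 * h1 + 1 * h2, -1)  # output node: behaves as AND
```

AND of OR and NAND is exactly XOR, which is why three computing units suffice.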
The diagram is good, but the description is horrible. I didn't understand what it meant until I went to page 2 of this site. I think the definition should be less cryptic, as the use of threshold and weight can throw people off. When we are making pages on this system we often get lost in the fog of academia. Paskari 19:30, 1 December 2006 (UTC)
Then you ask, what is there to write about neural networks? Quite a lot, actually. You should know that "artificial neural networks" are an application of neural network theory, i.e. theorizing about neural networks (yes, the ones in the brain, of course!) and their properties. There is, as you predicted, a lot of neuroscience, then cognitive science and philosophy, and information theory. The version you indicated has a much better introduction, but it doesn't change the matter. The article is still about artificial neural networks and not about neural networks. If it consoles you, a new article on neural networks could have a disambiguation sentence "If you search for ANNs..." I just googled these pages in 5 sec. to give you some ideas:
cheers, Ben (talk) 10:59, Apr 7, 2005 (UTC)
The German and the Bulgarian wikis, e.g., got it right: See de:Neuronales Netz and bg:Невронна мрежа. The French wikipedia offers a discussion of theoretical neurosciences in the article (they have good research in France), see fr:Réseau de neurones. Ben (talk) 03:28, Apr 8, 2005 (UTC)
1. Let's get some structure in the discussion. Spazzm provided some good arguments, I am impressed. The hammers and carpentry analogy is good also. Neural networks are in the brain, there is research about them, and there is software, artificial networks. This should not be confused. Therefore, move the article! Just look at this mess:
Indeed the theorizing and models of neuronal networks are hard to distinguish sometimes. However, let me try. There is
That was about hammers and carpentry. 2. An article about artificial neural networks should discuss considerations about how to implement the theory of neural networks, and about some implementations. This has some overlap, of course, as you would expect, as they are based on an understanding of neural cells. Currently the article is only about artificial networks. That's a shortcoming or, if you like it better, the scope of the article is wrong. It is not Neural Networks, but Artificial Neural Networks. And that's what the article should be called. 3. Finally, I predict, if the decision is to keep the article here, it will result over time in a radical rewrite (first some stuff merged with Parallel Distributed Processing, then moving artificial networks that try to implement artificial intelligence (in contrast to the models used for research on neural assemblies) to a subsection and afterwards, as this section will be big, to a different article). So, why not do it now and avoid a lot of mess? There are some people who actually would like to write something about neural networks (IN the brain). There are even lectures about "Neural Networks" (not the artificial ones). You shouldn't block it by opposing; the topic of this article here is artificial neural networks. Why block other topics? Ben (talk) 04:15, Apr 8, 2005 (UTC)
I am just trying to give the message that there is something called "neural networks", it is in the brain, and many people do research in this field. Obviously, I thought it was too evident. So, that's first, now second, (now I cite you)
You don't have to explain that to me. I never said anything like that. I was pointing out that there are ANNs developed in AI, and neural network models (or ANNs) that test theories in neuroscience/cognitive science. So, we completely agree on this one here; probably there was a misunderstanding. Third: you suggest having a disambiguation page here, linking to "BNNs, ANNs and Neural Network Theory". This would be a compromise but actually also means moving the article. Are you now supporting the page move? Ben (talk) 10:05, Apr 8, 2005 (UTC)
I might support a move if some actual effort was being put into writing articles about Biological Neural Networks and Neural Network Theory, but there isn't. I'm not going to write it since I'm only competent enough to write about a small subset of NN theory, which I feel has been covered adequately here. -- Spazzm 11:11, 2005 Apr 8 (UTC)
cheers, Ben (talk) 17:13, Apr 8, 2005 (UTC)
I think a reason that people haven't already written about neural networks is that there is already an article by that name and they get confused. Of course, there are many more computer enthusiasts on wikipedia than there are neuroscientists, but they will come when they see they have a place here. Ben (talk) 17:17, Apr 8, 2005 (UTC)
I think the reason no-one has written an article about Biological Neural Networks is that the topic is covered extensively elsewhere:
et cetera, ad nauseam. There are far more articles on brains than there are on computational intelligence, so saying that the latter is overrepresented at the expense of the former is incorrect. The 'Wikipedia is written by computer geeks' idea is a myth; it may have been true once, but it is certainly not true in this case.
Also, it was mentioned above that Neural Networks have been used in the sense of Biological Neural Networks. This is, of course, correct, but there are far more numerous and reputable examples of using Neural Network in the sense of Artificial Neural network or Neural Network Theory:
and so on... These are worldwide examples, taken from only the most highly respected publications and institutions - more than a few lecture notes or random articles.
If that's not enough, there exist prior encyclopedic usage of Neural network in the sense of Artificial Neural Network or Neural Network Theory:
Moving the current page to Artificial Neural Network and turning Neural Network into an article on Biological Neural Networks would fly in the face of all reason.
There's no page on Biological Neural Networks or Neural Network Theory, so turning Neural Network into a disambiguation page now would be pointless - there is only one page to point to. Those who support a move would perhaps be better served by first writing these articles, then requesting the reorganization.
-- Spazzm 06:37, 2005 Apr 9 (UTC)
As for the anonymous users, they might not be counted if there would be a close decision. As it looks now, it won't be close, so don't worry. It would still be nice to have a consensus on the matter.
Ben (talk) 01:17, Apr 11, 2005 (UTC)
"A neural network is an interconnected group of neurons. It is usual to differentiate between two major groups of neural networks...Biological neural networks [...] and Artificial neural networks [...]"
Just follow the link to see what a neuron is! If this isn't an argument for a move, then what is?
Adding links to non-existent articles should be done with care. There is no need for you to search for all occurrences of the page title and link to articles that are unlikely ever to be written, or if they are, likely to be removed. For example, quite a few names will show up as song titles, but with few exceptions, we usually do not write articles about individual songs, so there is no point in linking to them. If you must add this type of information, be sure to link to at least one existing article (band, album, etc.).
Summarizing, the WP policy cautions for these cases:
Where was the problem again? Ben (talk) 04:26, Apr 11, 2005 (UTC)
"neural network is a computer program that operates in a manner analogous to the natural neural network in the brain"
The bold formatting is taken from the article you referred to.
Obviously, Britannica can't validate your claims, rather the opposite. Ben (talk) 08:21, Apr 11, 2005 (UTC)
"Looks to me like Britannica differentiates between neural networks (defined as computer programs) and natural neural networks." Look, it doesn't say "natural neural networks", it says "natural neural networks", offering two definitions. Let's be exact.
Oh, and yeah, dude. There are soo many articles on "the brain". It just kills me.
What about all the articles about windows, linux, software, etc.? Don't start telling me about neurosciences being overrepresented, it's ridiculous. Ben (talk) 08:32, Apr 11, 2005 (UTC)
On artificial neurons it says:
Artificial neurons (also called "node") is the basic unit of an artificial neural network, simulating a biological neuron.
Just compare the naming conventions: You don't want the article "neural networks" moved to "artificial neural networks", and you ignore the naming conventions in the articles neurons and artificial neurons.
I am sure my argument about the term neural network being bold in two meanings cannot be misunderstood except intentionally.
I am tired of having to face the same arguments we already discussed above, over and over again, without anything new coming up. Maybe we should take a time-out here, as we are getting more and more sarcastic? I have other things to do as well. Ben (talk) 08:50, Apr 11, 2005 (UTC)
Ben (talk) 02:50, Apr 12, 2005 (UTC)
BTW, see my first attempt at creating an article on neural networks. Very premature, needs a lot of editing; maybe you can help?
Ben (talk) 04:29, Apr 12, 2005 (UTC)
You have some strange ways of counting, surely. How about using edit->search("support", "oppose", "concur")? Isn't that how votes are counted? I see 3 times support (by registered users, not including myself), 2 times oppose (you and B.Bryant), 1 time "concur". I don't know how YOU counted the votes, please explain. How about finding a way to count the votes according to the policy in wikipedia? Ben (talk) 05:52, Apr 12, 2005 (UTC)
Ben (talk) 07:21, Apr 12, 2005 (UTC)
I am citing you: I might support a move if some actual effort was being put into writing articles about Biological Neural Networks and Neural Network Theory, but there isn't. I'm not going to write it since I'm only competent enough to write about a small subset of NN theory, which I feel has been covered adequately here. -- Spazzm 11:11, 2005 Apr 8 (UTC)
Interpreting these results:
Is this a rough consensus for moving the page?
This vote is not about whether to have a disambiguation page or a new article in this place. It is about whether this article represents all the topics that are constituted by "neural networks", or whether it is general enough as an introduction to the topic as a whole (meaning both biological and artificial neurons). I say no. Let's hear what you say, Spazzm. The time is up anyway, as far as I can see. I go home now and come back tomorrow, and then let's face the decision together. Please check the article I edited meanwhile for some changes.
Ben (talk) 08:12, Apr 12, 2005 (UTC)
A bit of a late entry, but make a search on google for neural network and you'll see that basically all references are to the artificial kind. While "neural network" can in theory mean wetware, it's not used that way. My vote would be (if it is not too late) oppose. -- Denoir 01:59, 5 May 2005 (UTC)
This article has been renamed as the result of a move request. I supported the move and that, I think, makes it strong enough a majority (4 to 2). I do, however, think that the neural network article should not be a disambiguation page - I reckon it should be a more general overview of the topic and have tried to reflect that in the way I've done the move. I will leave it up to you lot to decide on the way forward from there, though. violet/riga (t) 20:19, 12 Apr 2005 (UTC)
'Neural Networks were proposed as an artificial study of the processes of biological adaptive systems using analogous components, and only since its conception have the relevant natural neural networks been referred to as neural networks, as a general collective term for various specific biological functions that had previously been referred to as x or individually.' This illustrates that the root of the term as it is used today is based in the artificial research, and shows that the fact it is based on natural phenomena is entirely irrelevant to the use of the term. This would make things far easier for people to understand, highlighting the importance of the use of the term, and how they can expect to encounter it in the real world, which is, after all, what WP is supposed to be about. As has been illustrated, very few places use the terms ANN or SNN. Khasurashai ( talk) 16:44, 3 January 2008 (UTC)
The first paragraph, "An artificial neural network (ANN), also called a simulated neural network (SNN) or just a neural network (NN), is an interconnected group of artificial neurons that uses a mathematical or computational model for information processing based on a connectionist approach to computation." is, while technically correct, pretty much useless from the point of view of understanding what NNs are. Those familiar with the topic already know it, and those who are not won't understand what it means.
My suggestion is that we keep the text, but first add a short text explaining NNs in more general terms. A good introduction can be found here. I think we need something similar to that:
"A neural network is a powerful data modeling tool that is able to capture and represent complex input/output relationships. "
At least we need something saying that neural nets are parametric models that adapt their parameters based on data presented to them, i.e. something that explains their functionality rather than their structure. -- Denoir 02:15, 5 May 2005 (UTC)
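The functional description Denoir asks for, "parametric models that adapt their parameters based on data presented to them", can be made concrete with a toy sketch. The function names, learning rate and task below are purely illustrative (a single perceptron learning logical AND):

```python
# A minimal sketch of a parametric model adapting its parameters to data:
# a single threshold unit trained with the perceptron learning rule.
def train_perceptron(samples, lr=0.1, epochs=20):
    w, b = [0.0, 0.0], 0.0                       # the adaptable parameters
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out                   # compare output to the data
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err                        # nudge parameters toward the data
    return w, b

and_samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(and_samples)
```

After a few epochs the two weights and the bias settle on a line separating the AND cases; this adapt-from-data loop is the functionality that the structural definition in the article never conveys.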
Continuum calculator claims to be an alternative to the artificial neural network. It features similar properties, but the structure differs significantly.
Currently there's a Vote for Delete on this article. Could someone take a look and give a professional opinion on the article's validity on the voting page? Thanks. Pavel Vozenilek 17:41, 21 May 2005 (UTC)
Why do we need to have a complex network with two hidden layers for the XOR function, when there are simpler networks which do the trick with just one hidden layer?
    0          0
   /|\        / \
  / 0 \      0   0
 / /\ \      |\ /|
/ /  \ \     |/ \|
0      0     0   0
...please excuse the line drawings — I was never much good at ascii art.
Anyway, both the networks above can compute XOR easily (I'll leave it to the reader to fill in the weights). While the first has fewer units, the second is arguably simpler because it doesn't require connections that span over a layer.
If no one objects, I can make a pretty version of the second diagram in a few days (...after I've finished my dissertation...). — Asbestos | Talk (RFC) 21:15, 18 August 2005 (UTC)
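For the first (skip-connection) diagram, one set of weights the reader could fill in uses a hidden unit that detects AND and inhibits the output. This is only one workable choice, and it assumes the firing convention "output 1 when the weighted sum is greater than zero":

```python
def step(z):
    return 1 if z > 0 else 0

def xor_skip(x1, x2):
    # Hidden unit fires only when both inputs are on (AND).
    h = step(x1 + x2 - 1.5)
    # Output sees both inputs directly (the connections that span a
    # layer) plus a strong inhibitory connection from the hidden unit:
    # OR of the inputs, vetoed when both are on.
    return step(x1 + x2 - 2 * h - 0.5)
```

So a single hidden unit really does suffice when layer-spanning connections are allowed.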
Hello everyone. The neural networks article has now been updated and is in a much better state, though still lacking in some respects. But it is quite readable at least. Now, there are two things that I noticed:
Perhaps, as is done in neural networks, a coherent background discussing the relationship between optimisation, statistical estimation and neural networks should be introduced before the list of types of neural networks. Then it would be easy to discuss each type of neural network according to established concepts.
OK, now a few tidbits that I noticed:
So, if I were to re-write this article from scratch, I would do:
1) Introduction, with a link to Neural Networks (which seems to be lacking) for further discussion of the relation to biological systems, and the use of models of neural networks in neuroscience.
2) Models: talk about different non-exclusive categories, maybe without much mention of biological neurons. There is not much need to talk about specific functions here - it may only confuse the reader.
3) Learning: Cost functions and how to minimise them. Give a basic example of linear model using stochastic steepest gradient descent.
4) Types: Start with the Perceptron, and how it's related to the example in 3. Talk about linear separability. Talk about how projecting to another space can make a problem linearly separable. Introduce alpha-perceptron. Introduce the MLP, and the use of the chain rule for minimising with respect to the 'hidden' parameters. Introduce the RBF network as another example. After all this, the reader should be able to easily tackle other networks. Talking about the recurrent networks and their inherent learning stability problems should be easy after the chain rule discussion. Try to avoid talking much about not very commonly used networks, though this is POV.
So, I don't know, does anyone have another plan? Would you prefer to leave it as is? -- Olethros 21:37, 23 December 2005 (UTC)
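For step 3 of the plan above, the "basic example of a linear model using stochastic steepest gradient descent" might look like the following sketch (function name, learning rate and data are illustrative):

```python
import random

# Fit y = w*x + b by stochastic gradient descent on squared error,
# updating after every single sample (the "stochastic" part).
def sgd_linear_fit(data, lr=0.05, epochs=200, seed=0):
    random.seed(seed)
    w, b = 0.0, 0.0
    for _ in range(epochs):
        random.shuffle(data)                # present samples in random order
        for x, y in data:
            err = (w * x + b) - y           # prediction error on this sample
            w -= lr * err * x               # gradient of 0.5*err**2 w.r.t. w
            b -= lr * err                   # gradient of 0.5*err**2 w.r.t. b
    return w, b

# Noiseless data generated from y = 2x + 1; SGD should recover w=2, b=1.
data = [(x, 2.0 * x + 1.0) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]
w, b = sgd_linear_fit(data)
```

The perceptron, MLP and RBF sections could then each be presented as this same loop with a different model and cost in place of the linear one.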
>Introduce the MLP
Why does the section "Types of neural networks" not mention MLP?
Stevemiller ( talk) 03:07, 15 February 2008 (UTC)
Another comment which I feel might fit under this heading: "Learning paradigms" is an example of a section containing text duplicated in both "Neural networks" and "Artificial neural networks". This is bad from a maintenance point of view; the two have already begun to differ slightly, and no one will know which version is the most recent and correct. That text should be maintained as part of the ANN article (where it belongs) and should be reduced to a link and/or a shorter summary in the NN article. 195.60.183.2 ( talk) 17:25, 17 July 2008 (UTC)
I was wondering whether people thought that moving the section "Neural Networks and Artificial Intelligence" from Neural Network to Artificial Neural Network as a "Background" section would be preferable, or whether a new article called "Neural Network Learning: Theoretical Background" should be created instead. -- Olethros 16:08, 26 December 2005 (UTC)
Started bringing stuff over. I think this introduction is OK as it stands now. I added a few more things in the first part of the Background section, making clear why artificial neural networks are called 'networks' and not 'functions' or 'thingies' or whatever. The relation with graphical models is made as clear as possible. My aim here was three-fold: a) correctness, b) generality, c) links to other fields. When I am talking about a specific model I try to talk about it in the form of an example. I think that the later section on types of artificial neural networks is a more suitable place in which to put lots of information about ANN types. I think that this is satisfactory as an introduction, but I would particularly enjoy comments from people who are not experts. -- Olethros 22:47, 30 December 2005 (UTC)
I'm thinking about creating an article about neural network simulation software. There are quite a few different types of software, ranging from pure data mining tools to biological simulators, and I think an overview would be interesting. The first step would be to categorize them into subtypes and provide a relatively abstract summary. The second step, a bit more time-demanding, would be to describe actual software. Something similar can be found for Word processors and other types of software. The overall aim of it would be to provide a more practical view on neural networks. Any suggestions, objections etc. are most welcome. -- Denoir 12:43, 12 January 2006 (UTC)
The actual text under the section is correct, stating that "ANNs are frequently used in reinforcement learning as part of the overall algorithm.". The "Learning paradigms" intro however is not. As far as the neural network goes, its use in RL is plain supervised learning with (for example for On-Policy TDL) the input being state, action and the desired output being expected reward.
The request to include RL came from the Nature peer review of Wikipedia.
Incidentally, the next issue, backpropagation, is also mentioned there. In the review, backpropagation is referred to as a "learning algorithm", and in the article we have "When one tries to minimise this cost using gradient descent for the class of neural networks called Multi-Layer Perceptrons, one obtains the well-known backpropagation algorithm for training neural networks."
Backpropagation is not a learning algorithm per se and it is certainly not tied to gradient descent. What is being propagated depends on both the cost function and the receiving element. And how that information is used to update the system is up to the local learning algorithm. Instead of gradient descent, the propagated error can for instance be used with local GA to optimize weights. Not to mention that there are many learning algorithms that use the local error gradient that are not gradient descent. -- Denoir 00:07, 1 February 2006 (UTC)
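Denoir's point, that backpropagation only propagates error information while the parameter update is a separate, pluggable choice, can be sketched by keeping the two stages in separate functions. The tiny 2-2-1 sigmoid network and all names below are illustrative:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Stage 1: backpropagation proper. Runs the forward pass, then propagates
# the error of a squared-error cost backwards and returns the local
# gradients. It does NOT update anything.
def backprop_gradients(w_hid, w_out, x, target):
    h = [sigmoid(sum(wi * xi for wi, xi in zip(row, x))) for row in w_hid]
    y = sigmoid(sum(wi * hi for wi, hi in zip(w_out, h)))
    d_out = (y - target) * y * (1 - y)                     # output delta
    d_hid = [d_out * w_out[j] * h[j] * (1 - h[j]) for j in range(len(h))]
    g_out = [d_out * hj for hj in h]                       # dC/dw_out
    g_hid = [[d_hid[j] * xi for xi in x] for j in range(len(h))]
    return g_hid, g_out

# Stage 2: one possible consumer of those gradients -- plain gradient
# descent -- but the propagated errors could just as well feed a GA or
# some other local update rule.
def gradient_descent_step(w_hid, w_out, g_hid, g_out, lr=0.5):
    for j, row in enumerate(w_hid):
        w_hid[j] = [w - lr * g for w, g in zip(row, g_hid[j])]
    w_out[:] = [w - lr * g for w, g in zip(w_out, g_out)]

g_hid, g_out = backprop_gradients([[0.3, -0.2], [0.1, 0.4]],
                                  [0.5, -0.3], [1.0, 0.0], 1.0)
```

The separation makes the conceptual point explicit: the first function is the error propagation, the second is one interchangeable learning rule among many.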
I seem to remember reading somewhere (Pinker?, The Emperor's New Mind?) that one thing neural networks are bad at is the type of symbol manipulation believed to be needed for natural language processing. Is this true or false? Is it covered by this or some other wikipedia article? where can I read about it? — Hippietrail 21:43, 7 April 2006 (UTC)
Maybe it's buried in the article somewhere but I can't find it by skimming: Is every node connected to every other node on the next layer? — Hippietrail 18:18, 9 April 2006 (UTC)
I have basic understanding of scientific concepts and know computer programming, but not advanced math. The formulae and topic-specific jargon are difficult for me. I can't seem to find what actually happens at each node? I can see that the connections are strengthened or weakened, but what passes along them? Numbers? How do these numbers change at each node? Am I totally off the mark? — Hippietrail 18:57, 9 April 2006 (UTC)
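To answer the question directly: what passes along the connections is plain numbers, and each node weights, sums and squashes them. A minimal sketch (the sigmoid squashing function and the example values are illustrative):

```python
import math

# Each node multiplies its incoming numbers by the connection weights,
# sums them, and passes the total through an activation function.
def node_output(inputs, weights, bias=0.0):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # sigmoid activation

y = node_output([0.5, 0.9], [0.8, -0.2])    # two incoming numbers
```

The numbers themselves don't "change at" a node; the node emits a new number, and learning changes the weights, not the signals.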
I think I'm slowly getting there. Please let me know if I'm on the right track:
How am I going so far? Or am I less clear than the jargon and formula filled version? (-:
Maybe these questions and the so-far very helpful answers can be of use in improving the article for lay readers who like me are capable of understanding but lack the scientific background the article currently seems to depend on. — Hippietrail 17:51, 10 April 2006 (UTC)
So if that's a feedforward backpropagation neural net, what would the name be for a net where all nodes are calculated at the same time (like the cells in the Game Of Life), but where there are no layers and any cell might be connected to any other cell, therefore allowing feedback and requiring that the net be run over time rather than in a single iteration? — Hippietrail 17:51, 10 April 2006 (UTC)
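The layerless, run-over-time arrangement described in the question (every node possibly connected to every other, all states updated each step, as in the Game of Life) is usually filed under recurrent networks. A minimal synchronous-update sketch, with an arbitrary illustrative weight matrix:

```python
# state[i] is node i's current output; weights[i][j] is the connection
# from node j into node i (0.0 means "not connected"). All nodes are
# recomputed simultaneously each time step, so feedback loops are fine.
def step_all_nodes(state, weights):
    return [1 if sum(w * s for w, s in zip(row, state)) > 0 else 0
            for row in weights]

weights = [[0, 1, -1],
           [-1, 0, 1],
           [1, -1, 0]]
state = [1, 0, 0]
for _ in range(3):              # run the net over time
    state = step_all_nodes(state, weights)
```

With this particular weight choice the single active node cycles around the three units, illustrating why such a net must be run over time rather than evaluated once.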
In part Background->Learning there's a link to online learning. "online learning" gets redirected to E-learning, which most probably isn't what is meant in this (ANN learning) context. Perhaps some qualified person could fix this (creating a new article 'Online_learning_(ANN)' or link to the correct article) since I don't know what is meant by online learning here. —Preceding unsigned comment added by Fiveop ( talk • contribs)
I have cleaned up the external links a bit by removing the software links. There are two reasons for this. First of all the selection was pretty arbitrary, and there is lots of software out there. Second, we do have a neural network software article. I think however that instead of piling up links indiscriminately it would be good to stick to the simple principle of adding a link if there is an article describing the software. -- Denoir 05:55, 1 July 2006 (UTC)
Would someone be able to add an English description of each learning rule? 202.20.73.30 02:36, 8 August 2006 (UTC)
I found the following to be a little vague
In the literature the term perceptron often refers to networks consisting of just one of these units.
What is 'these' referring to: the neurons or the networks?
Paskari 17:10, 29 November 2006 (UTC)
I have a sneaking suspicion that whoever wrote this section copy-pasted it from another site. I am creating an ADALINE page in hopes of simplifying it. Hopefully my efforts won't be in vain. Paskari 17:46, 1 December 2006 (UTC)
I'm going to make two stubs called pattern association and autoassociation; I will try to incorporate them directly into this page (although I do not wish to create a whole new section in here). Also I will put them at the bottom under 'see also'. Paskari 13:16, 22 January 2007 (UTC)
I think this article (ANN) should relate to the general concept "Associative memory" somewhere and also cross-ref to it (or at least See also). Furthermore I think the disambig for "Associative memory" probably needs to be expanded and the articles pointed to need tending to. 195.60.183.2 ( talk) 17:11, 17 July 2008 (UTC)
Made a Spiking neural networks page, feel free to contribute.
I removed this article from the computer vision category. ANN is not unrelated to CV but
-- KYN 22:04, 27 July 2007 (UTC)
Isn't it effectively an artificial brain? Therefore, its potential is far beyond the human brain, if it has far more and better neurons? 63.224.202.235 02:50, 22 August 2007 (UTC)
Is there any reason why models of the ART and ARTMAP family (ART-1, ART-2, Fuzzy ART etc) are omitted? —Preceding unsigned comment added by Sswitcher ( talk • contribs) 22:46, 25 September 2007 (UTC)
I feel the article should touch upon the firing rules within an NN. My teacher is currently going through this in Electronics. I came here to find out more, and I must say I find the article quite complicated from a learner's perspective. I think the following URLs give a good introduction:
http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html http://www.neurosolutions.com/products/ns/whatisNN.html Dan Bower 01:13, 8 November 2007 (UTC)
Neural networks can actually be controversial tools within the field of machine learning. The term "neural network" is considered too broad and ambiguous, since it is applied to a wide class of unrelated algorithms. Furthermore, many of the more common "neural network" algorithms (multilayer perceptrons trained by backpropagation, restricted Boltzmann machines trained with contrastive divergence, etc.) require extensive tweaking to fit new learning tasks. Unlike more rigorous models (such as SVMs), using neural networks becomes as much art as science. —Preceding unsigned comment added by 96.250.30.85 ( talk) 05:16, 4 April 2008 (UTC)
Under the "other networks types" I added CPPN. They are becoming increasingly popular in research to evolve digital art and video game content. Review please and see if I made any errors. Fippy Darkpaw ( talk) 20:50, 17 May 2008 (UTC)
It sounds more correct to say that the network performs unsupervised learning than uses it. —Preceding unsigned comment added by Bwieliczko ( talk • contribs) 11:49, 29 August 2008 (UTC)