This article is rated C-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
Will someone elaborate on the Harvard architecture: how it works, and why this architecture is contrasted with the von Neumann architecture?
I would assume that the Z3 used a Harvard architecture, since its instructions were read off of tape. I don't know if this is worth adding to the main text, and I won't add it myself as I'm not 100% sure.
Isn't this more a variant of the von Neumann architecture than a completely new architecture? The only difference is that the Harvard model can read and write memory at the same time, while the von Neumann model remains the basis of this model. I see the most important part of the von Neumann model in the way this architecture maintains the distinction between memory and the 'working brain', the CPU. Compare this to the neural network model, where no clear distinction between memory and CPU can be made.
I'm not that well versed in different forms of computer architecture, but I'm reading some information about neural networks; could someone with more architecture knowledge elaborate? --Soyweiser 30 Jan 2005
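The distinction discussed above can be sketched with a toy model (entirely hypothetical and greatly simplified; it does not model any real machine): a Harvard-style CPU can fetch its next instruction and touch data memory in the same cycle because the two memories are separate, while a von Neumann-style CPU shares one memory and bus and must serialize the two accesses.

```python
# Toy illustration (hypothetical, not any real machine): counting memory
# cycles for the same workload on split vs. unified memory.

def run_harvard(program, data, steps):
    """Each step: one instruction fetch AND one data access -> 1 cycle,
    because instruction memory and data memory are separate."""
    cycles = 0
    for pc in range(min(steps, len(program))):
        _instruction = program[pc]         # fetch from instruction memory
        _operand = data[pc % len(data)]    # simultaneous data-memory access
        cycles += 1
    return cycles

def run_von_neumann(program, data, steps):
    """Each step: fetch and data access share a single bus -> 2 cycles."""
    cycles = 0
    memory = program + data                # one unified memory
    for pc in range(min(steps, len(program))):
        _instruction = memory[pc]                         # bus busy: fetch
        cycles += 1
        _operand = memory[len(program) + pc % len(data)]  # bus busy: data
        cycles += 1
    return cycles

program = ["LOAD", "ADD", "STORE", "JMP"]
data = [7, 9]
print(run_harvard(program, data, 4))      # 4 cycles
print(run_von_neumann(program, data, 4))  # 8 cycles
```

This only models the bus-contention aspect; as the comment above notes, both models keep the memory/CPU separation, which is why the Harvard design is often described as a variant rather than a wholly different architecture.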
The article doesn't explain the origins or timeframe of the architecture, hence the confusion about before/after Von Neumann architecture. I can't add this myself since this is the information I'm looking for! 82.18.196.197 11:12, 14 January 2007 (UTC)
It seems to me like the Microchip Technology mentions are a type of free advertising. Should they really be a part of this article? Mr. Shoeless ( talk) 22:51, 11 March 2020 (UTC)
Could someone please put at least one sentence of history here, what year was this (was it prior to von Neumann?) and the source (a citation like the von Neumann article has would be ideal.) — Preceding unsigned comment added by 138.38.98.21 ( talk • contribs) 14:25, 12 August 2018 (UTC)
I think this is a good, well-written article. I do think, however, that it could be a little better laid out. Some of the comments here are pretty valid, and they should perhaps be incorporated into the article. -- Gantlord 13:25, 15 September 2005 (UTC)
Thank you for your suggestion! When you feel an article needs improvement, please feel free to make whatever changes you feel are needed. Wikipedia is a wiki, so anyone can edit almost any article by simply following the Edit this page link at the top. You don't even need to log in! (Although there are some reasons why you might like to…) The Wikipedia community encourages you to be bold. Don't worry too much about making honest mistakes—they're likely to be found and corrected quickly. If you're not sure how editing works, check out how to edit a page, or use the sandbox to try out your editing skills. New contributors are always welcome.
Is there a relation between the instruction set (RISC or CISC) and the architecture of the processor/microcontroller? I find Harvard-architecture processors with RISC instruction sets and von Neumann-architecture processors with CISC instruction sets (I'm not sure if this is always true). Is there a reason relating the architecture to the instruction set?
This whole article is plagiarised from its primary source. —Preceding unsigned comment added by 202.40.139.164 ( talk) 09:36, 30 April 2009 (UTC)
If there are any currently used examples of the Harvard computer architecture, it might be nice to add such a section. DouglasHeld ( talk) 08:11, 18 February 2019 (UTC)
The introductory paragraph has a poorly informed sentence "These early machines had data storage entirely contained within the central processing unit, and provided no access to the instruction storage as data. Programs needed to be loaded by an operator; the processor could not initialize itself." I can supply the information about why this is mistaken but I'm having no luck finding the right words.
There was no (programmable) read-only memory available for early machines. *All* early computers, of every architecture, relied on an operator to use a manual method to load a handful of instructions into the ubiquitous magnetic core memory, instructing the machine to read more instructions from an easily controlled I/O device, sometimes followed by more from a more difficult-to-control I/O device. This process, analogous to pulling itself up by its bootstraps, is the origin of "booting" a machine. Then, as now, the processor's program counter was reset to the same address at power-on, even if manually. Magnetic core memory was volatile on a scale of days rather than microseconds. The use of program memory was typically organized to avoid overwriting the manually entered instructions, so that the machine could be restarted more easily, at least Tuesday through Friday, and frequently after a 2-day weekend, but this wasn't usually reliable after a 3-day weekend or any longer period.
The manually entered bootstrap code for a Harvard-architecture machine has more instructions than for a von Neumann engine, because data fetched from a device in I/O space must be loaded into a register before being written to data memory, and then, when enough bytes are available, subsequently written to program memory. Since these instructions were typically entered via a bank of toggle switches, with a bank of neon lights providing visible feedback, each instruction was time-consuming to enter. PolychromePlatypus 22:29, 15 August 2019 (UTC) — Preceding unsigned comment added by PolychromePlatypus ( talk • contribs)
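The extra bootstrap steps described in that comment can be sketched as a toy model (hypothetical and simplified; the byte values, block size, and function names are invented for illustration, not taken from any real front panel or instruction set): on a unified-memory machine each byte from the I/O device lands directly in memory, while on a split-memory machine it must pass through a register and data memory before a block can be copied into program memory.

```python
# Toy model of bootstrap loading, for illustration only. Hypothetical:
# real front-panel bootstraps differed widely between machines.

def boot_von_neumann(io_stream):
    """Unified memory: each byte from the I/O device goes straight in."""
    memory = []
    for byte in io_stream:
        memory.append(byte)              # one step per byte
    return memory

def boot_harvard(io_stream, block=4):
    """Split memories: I/O -> register -> data memory, then copy a full
    block from data memory into program memory (the extra steps that made
    the manually toggled-in bootstrap longer)."""
    data_memory, program_memory = [], []
    for byte in io_stream:
        register = byte                  # step 1: I/O into a CPU register
        data_memory.append(register)     # step 2: register into data memory
        if len(data_memory) == block:
            program_memory.extend(data_memory)   # step 3: block copy
            data_memory.clear()
    return program_memory

loader = [0x3E, 0x01, 0xD3, 0x00, 0xC3, 0x00, 0x00, 0x76]  # made-up bytes
assert boot_von_neumann(loader) == loader
assert boot_harvard(loader) == loader    # same result, more steps per byte
```

Both loaders end up with the same program; the point is only that the split-memory path requires more operations per byte, and hence more switch-toggling when entered by hand.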
I removed a note that some "8051-compatible microcontrollers from STC have dual ported Flash memory" in revision 912725050. In revision 912725050, Guy Macon restored it. In the name of brevity, I might have gotten a little sloppy and not given sufficient rationale in my description of the change. I don't want to start an edit war by reverting the most recent change without a discussion, so here goes again:
A more generic discussion of the instruction and data busses on typical flash-based microcontrollers would be useful and could certainly cite examples, but I don't think this parenthetical belongs in the article in its current form.
-- 50.239.58.122 ( talk) 18:04, 4 September 2019 (UTC)
The claims made in regard to the meaning, origins, and benefits of the so-called 'Harvard architecture' have bothered me for years, from both technical and historical perspectives. Starting in 2019, I spent two years researching this topic, and then a year getting the resulting paper, 'The Myth of the Harvard Architecture', through the peer-review process of the IEEE Annals of the History of Computing (considered the 'journal of record' for computing history); it has just been published. If you don't have access to that journal, you can download a pre-publication version from my own website here: http://metalup.org/harvardarchitecture/The%20Myth%20of%20the%20Harvard%20Architecture.pdf . Please read it.
I believe this is the only peer-reviewed research paper on the subject of the 'Harvard architecture' and on that basis alone should be considered a significant resource.
Frankly, based on the findings from my research (which lists 40+ references), I would like to re-write this whole Wikipedia article, which perpetuates several of the myths I exposed. I would prefer it if others would read my paper and make changes, but if I don't see anyone else taking up the baton I will start to make changes. Rpawson ( talk) 15:46, 29 September 2022 (UTC)