This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Archive 1
UHHH!!!! This page is really gross and out of date :(
Very, very slightly better now. The first section still needs further breakup and generalisation, and there should be more in the pre-TOC summary part. Also, the software part needs expansion, and something should be done with the 'general topics' part.
These two are under programming languages, but they aren't languages; they are libraries for the C/C++ and Fortran high-level programming languages.
If there's anybody who actively maintains this article, I think it ought to be rephrased a bit to at least acknowledge that TLP and multiprocessing aren't the only forms of parallel computing; ILP is just as significant. -- uberpenguin 07:41, 11 December 2005 (UTC)
I'm proposing that Parallel programming be merged into Parallel computing, since the bulk of the content (what little there is) in Parallel programming is already contained in the Parallel computing article (which also provides more context). Is there significant additional content that can be added to Parallel programming to justify keeping it a separate article? -- Allan McInnes 20:37, 13 January 2006 (UTC)
I have put up a proposal for a concurrency wikiproject at User:Allan McInnes/Concurrency project. Input from all interested parties (especially those who will actually contribute) is most welcome. -- Allan McInnes 21:31, 20 January 2006 (UTC)
What is the status of the merge? —Preceding unsigned comment added by Adamstevenson ( talk • contribs)
OpenMP is great, but if you go to the official website, you realize the last posting of "In the Media" is from 1998. You then realize this is for Fortran and C++.
My point is the world has moved on to multicore chips in $900 laptops, Java and C#. Parallel computing has moved _much_ farther than OpenMP has. We need more content here for the masses, not the PhDs on supercomputers.
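For readers unfamiliar with OpenMP, a minimal sketch in C of what it looks like - my illustration, not taken from the article or the site above (OpenMP targets C alongside C++ and Fortran); it compiles with, e.g., gcc -fopenmp:

#include <stdio.h>

int main(void) {
    long sum = 0;
    /* The pragma asks OpenMP to split the loop iterations across cores;
       reduction(+:sum) safely combines each thread's partial sum. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < 1000000; i++) {
        sum += i;
    }
    printf("sum = %ld\n", sum);
    return 0;
}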
I tried to post _recent_ information about new parallel frameworks like DataRush, but was 'smacked down' by Eagle_101 (or some close facsimile) and all my content was obliterated.
QUESTION: Why is it okay for Informatica, Tibco and other large corporations to post massive marketing datasheets on Wikipedia, but not allow subject matter experts to post useful information in appropriate topic areas?
I concede DataRush is free but not open source, nor a global standard. But it is indeed recent, topical and factual. The programming framework exists, is useful to mankind, etc., etc. So why does Wikipedia not want to inform the world of its existence? —Preceding unsigned comment added by EmilioB ( talk • contribs)
There are two key frameworks in the Java community -- DataRush and Javolution.
Both provide "hyper-parallel" execution of code when you use the frameworks.
http://www.pervasivedatarush.com
Emilio 16:39, 3 January 2007 (UTC)EmilioB
"One woman can have a baby in nine months, but nine women can't have a baby in one month." - Does anyone know the origin of this quote? Raul654 17:57, 3 April 2007 (UTC)
AFAIK, a modern processor's power is linear with clock speed, not "super linear". This section could also probably do with some references to back up any claims.
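For reference, a sketch of the relation at issue, from the standard CMOS dynamic-power formula (C is switched capacitance, V is supply voltage, f is clock frequency):

$P \approx C \times V^2 \times f$

At a fixed voltage, power is indeed linear in frequency; but because the sustainable frequency rises with supply voltage, raising f in practice also raises V, so power grows superlinearly with clock speed.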
I frankly can't believe nobody mentioned the first computer to use parallel processing, & the year. What was it? Trekphiler 17:36, 25 August 2007 (UTC)
One statement of conditions necessary for valid parallel computing is:
* $I_j \cap O_i = \varnothing$
* $I_i \cap O_j = \varnothing$
* $O_i \cap O_j = \varnothing$
Violation of the first condition introduces a flow dependency, corresponding to the first statement producing a result used by the second statement. The second condition represents an anti-dependency, when the first statement overwrites a variable needed by the second expression. The third and final condition, q, is an output dependency.
The variable q is referenced but not defined. The third condition is a copy of the second. MichaelWattam ( talk) 17:24, 18 March 2009 (UTC)
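For concreteness, here is a minimal sketch in C of the three dependency types those conditions rule out - an illustration only, not code from the article:

#include <stdio.h>

int main(void) {
    int a, b, c, e, x = 0, d = 0;

    /* Flow dependency - violates I_j ∩ O_i = ∅:
       the second statement reads what the first wrote. */
    a = x + 1;   /* S_i: writes a */
    b = a * 2;   /* S_j: reads a  */

    /* Anti-dependency - violates I_i ∩ O_j = ∅:
       the second statement overwrites what the first read. */
    c = d + 1;   /* S_i: reads d  */
    d = 5;       /* S_j: writes d */

    /* Output dependency - violates O_i ∩ O_j = ∅:
       both statements write the same variable. */
    e = 1;       /* S_i: writes e */
    e = 2;       /* S_j: writes e */

    printf("%d %d %d %d %d\n", a, b, c, d, e);
    return 0;
}

In each pair, reordering or overlapping the two statements changes the result, which is exactly why all three conditions must hold before the statements can safely run in parallel.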
Is this statement in the article referring to interlocks?
Only one instruction may execute at a time—after that instruction is finished, the next is executed
-- Ramu50 ( talk) 03:13, 15 June 2008 (UTC)
Read the Wikipedia article "MAJC", paragraph 5, and you will understand it. It is sort of a method of rendering instruction codes in a CPU. -- Ramu50 ( talk) 22:55, 15 June 2008 (UTC)
This article is really bad - even the first sentence has a non-trivial error ("Parallel computing is the simultaneous execution of some combination of multiple instances of programmed instructions and data on multiple processors in order to obtain results faster" -- this totally ignores ILP). I'm going to rewrite it. I don't have my copies of Patterson and Hennessy handy - they're in my office. I'll pick them up from the office tomorrow. In the meantime, I've started the rewrite at User:Raul654/PC Raul654 04:11, 28 October 2007 (UTC)
Raul - this is a nice, well illustrated article. While I can understand your redirecting Concurrent computing to here, I think you are being a tad overzealous in redirecting Distributed computing to this page. You've consigned the entire subject under the rather obscure-sounding subheading of "Distributed memory multiprocessing". Suddenly, Wikipedia seems to have lost the rather important subject of Distributed computing! Please reconsider your action and undo the redirect on Distributed computing. That page may still need more work, but it's way too big a sub-topic of parallel computing to cram into this page. - JCLately 06:18, 7 November 2007 (UTC)
Hi, below is a suggestion for a new introductory paragraph. Comments? henrik• talk 06:51, 9 November 2007 (UTC)
Parallel computing is a form of computing in which many operations are carried out simultaneously. Parallel computing operates on the principle that large problems can almost always be divided into smaller ones, which may be carried out concurrently ("in parallel"). Parallel computing has been used for many years, mainly in high-performance computing, but interest has been renewed in recent years due to physical constraints preventing further frequency scaling. Parallel computing has recently become the dominant paradigm in computer architecture, mainly in the form of multicore processors.
I would like the article to talk about dependencies, pipelining and vectorization in a slightly more general way (i.e. not tied to parallelism as in thread parallelism or a specific implementation in a processor), as well as using slightly different terminology. I've started to type up some notes at User:Henrik/sandbox/Parallel computing notes. But I'm a little hesitant to make such major modifications to a current WP:FAC article, so I thought I'd bring it here for discussion first. Your thoughts would be appreciated. henrik• talk 19:40, 9 November 2007 (UTC)
Things left from the old FAC nom that need to be addressed before renominating:
After those are done, renominate on FAC. Raul654 ( talk) 20:36, 10 December 2007 (UTC)
This section is very odd. How is fine-grained and coarse-grained parallelism related to communication between subtasks at all? The granularity is strictly related to computation length, not communication frequency. —Preceding unsigned comment added by Joshphillips ( talk • contribs) 22:38, 19 December 2007 (UTC)
I have just finished reviewing this article for good article (GA) status. As it stands, I cannot pass it as a GA due to some issues outlined below. As such, I have put the nomination on hold, which allows up to seven days in which editors can address these problems before the nomination is failed without further notice. If and when the necessary changes are made, I will come back to re-review it.
I have expanded the discussion of loop carried dependencies. Raul654 ( talk) 19:00, 6 January 2008 (UTC)
1: prev1 := 0
2: prev2 := 0
3: cur := 1
4: do:
5:    prev1 := prev2
6:    prev2 := cur
7:    cur := prev1 + prev2
8: while (cur < 10)
I don't think this redirect necessarily is appropriate: a distributed system is not just a computer concept (an organisation of employees is a distributed system). I think there should be a separate article, hyperlinking to this one, on the wider concept. Any thoughts? ElectricRay ( talk) 10:51, 21 January 2008 (UTC)
I'm tempted to delete a few occurrences of this word and just write "computers", as in:
One more thing:
This article uses the terms speed-up and speedup. I think they're both ugly, but whichever is to be preferred, then the article ought to be consistent. -- Malleus Fatuorum ( talk) 03:36, 15 June 2008 (UTC)
Seeing as this is a featured article, it should be as perfect as possible:
Article says: "Slotnick had proposed building a massively parallel computer for the Lawrence Livermore National Laboratory" - this is quite misleading, as ILLIAC IV has been build at the University of Illinois. kuszi ( talk) 09:07, 13 August 2008 (UTC).
Article says: "With the end of frequency scaling, these additional transistors (which are no longer used for frequency scaling) can be used to add extra hardware for parallel computing." - it suggests that additional transistors where (before 2004) needed for frequency scaling - rather nonsense. kuszi ( talk) 09:24, 13 August 2008 (UTC).
Article says: "In 1967, Amdahl and Slotnick published a debate about the feasibility of parallel processing at American Federation of Information Processing Societies Conference" - probably you mean Amdahl's article: "Gene Amdahl,Validity of the single processor approach to achieving large-scale computing capabilities. In: Proceedings of the American Federation Information Processing Society, vol. 30, 1967, 483–485."? kuszi ( talk) 09:24, 13 August 2008 (UTC).
Michael R. D'Amour was never CEO of DRC. He was not around when we worked with AMD on the socket-stealer proposal. I came up with the idea of going into the socket in 1996 (US Patent 6178494, issued Jan 2001). 16 days after the Opteron was announced I had figured out how to replace the Opteron with an FPGA. I personally did the work to architect the board, fix the non-working Xilinx HyperTransport Verilog code, and get the (then) Linux BIOS to boot the FPGA in the Opteron socket. I was also responsible for figuring out if current FPGAs could work in the high-performance last generation of the FSB. This led to the QuickPath program. We opted out of the FSB program as it is a dead bus. Beercandyman ( talk) 20:57, 16 February 2009 (UTC) Steve Casselman, CTO and Founder, DRC Computer.
As power consumption by computers has become a concern in recent years,[3] parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multicore processors.[4]
The sentence is missing a clause (and source) to relate "power consumption is a concern" to "parallel computing is dominant". It goes on to say that the dominance applies to "computer" architecture, "mainly" in processors. Are we sure that it isn't "only" in processors? It seems that nearly every other component has been moving towards serial architectures, including USB, Firewire, Serial ATA, and PCI Express, at the expense of the previous parallel standard. The serial communications and parallel communications articles have more information, and a very in-depth comparison can be found in this HyperTransport white paper: [1] (HT combines serial and parallel). Ham Pastrami ( talk) 06:49, 18 March 2009 (UTC)
What does the "embarassingly" mean in: "grid computing typically deals only with embarrassingly parallel problems". Does it just mean "highly"? There must be a better word for it. Embarassingly sounds POV, as it makes me wonder who is embarassed - presumably the people trying to cure cancer don't think it is an embarassing problem? Maybe Seti@home users should be embarassed, but i doubt they are.
I didn't change it as i havn't read the source, and i'm not certian of what it should mean. Yob Mod 08:43, 18 March 2009 (UTC)
That's quite irresponsible to be on the front page; power consumption is not the main reason for the success of multi-core processors, but simply the inability to produce faster processors at the pace (comparable to the '80s and '90s) needed to sell the public "shiny new super-fast computers compared to your last one that you must buy now or you're behind the times". Shrinking the transistor size is slower, and parallel computing offers another route to "shiny new faster personal computers". The combination of smaller transistors and multicore processors is the new deal. -- AaThinker ( talk) 10:00, 18 March 2009 (UTC)
Is the Cray-2 image the right one for this article? It is a vector processor, while a MPP like the Intel Paragon, CM-2, ASCI Red or Blue Gene/P might be better illustrations of parallel computing. -- Autopilot ( talk) 12:43, 18 March 2009 (UTC)
This is conflating vector math with vector processing, isn't it? Can someone knowledgeable remove the above line if it's not talking about Cray-1 style vector computers? Tempshill ( talk) 16:20, 18 March 2009 (UTC)
Surprised there was no mention of Thinking Machines' Connection Machine, not even in the history section. -- IanOsgood ( talk) 22:19, 18 March 2009 (UTC)
Yes, the first thing that comes to mind when you mention parallel processing is the Hillis Connection Machine. It failed in business only because it was ahead of its time. Should get a mention. blucat - David Ritter —Preceding unsigned comment added by 198.142.44.68 ( talk) 15:24, 21 March 2009 (UTC)
Category:Parallel computing is itself a category within Category:Concurrent computing — Robert Greer ( talk) 23:15, 18 March 2009 (UTC)
The most common type of cluster is the Beowulf cluster
I think there should be a reference cited for this, otherwise this is just an opinion of an editor. -- 66.69.248.6 ( talk) 17:54, 21 March 2009 (UTC)
Please see Talk:Distributed computing#What is parallel computing? JonH ( talk) 22:53, 10 August 2009 (UTC)
Does anyone object to me setting up automatic archiving for this page using MiszaBot? Unless otherwise agreed, I would set it to archive threads that have been inactive for 60 days.-- Oneiros ( talk) 13:13, 21 December 2009 (UTC)
Please stay calm and civil while commenting or presenting evidence, and do not make personal attacks. Be patient when approaching solutions to any issues. If consensus is not reached, other solutions exist to draw attention and ensure that more editors mediate or comment on the dispute.
I am green as a freshly minted Franklin, never posted before (so be nice)
Graduate student at the University of Illinois at Urbana-Champaign
Semester project; regardless, always wanted to do something like this...
All initial work should be (majority) my effort
A word to the wise is sufficient; please advise, rather than take first-hand action.
The article should (and will) be of substantial size, but is currently no more than scaffolding
The "bullet-points" are intended to outline the potential discussion, and will NOT be in the finished product
The snippet of text under each reference is from the reference itself, to display applicability
Direct copying of reference information WILL NOT be part of any section of this article
Again, this information is here to give an idea of the paper, without having to go and read it...
Article sections that are drafted so far are quite "wordy".... (yawn...)
Most of the prose at this point is inflated about 1.5x-2.0x over the anticipated final product
This is my writing style, which has a natural evolution, through iteration
Complex -> confused -> constrained -> coherent -> concise (now, if it only took 5 iterations???)
Again, thank you in advance for your patience and understanding
I look forward to working with you guys...
Project Location:
Distributed operating system
Project Discussion:
Talk: Distributed operating system
JLSjr ( talk) 01:31, 20 March 2010 (UTC)
I've uploaded an audio recording of this article for the Spoken Wikipedia project. Please let me know if I've made any mistakes. Thanks. -- Mangst ( talk) 20:20, 1 November 2010 (UTC)
Should the paragraph about Application Checkpointing be in this article about parallel computing?
I think it's not a core part of parallel computing but a part of the way applications work and store their state. Jan Hoeve ( talk) 19:33, 8 March 2010 (UTC)
I came to the page to read the article and was also confused as to why checkpointing was there. It seems very out of place, and while fault tolerance may be important to parallelism, this isn't an article about fault tolerance mechanisms. It would be more logical to mention that parallelism has a strong need for fault tolerance and then link to other pages on the topic. 66.134.120.148 ( talk) 01:27, 23 April 2011 (UTC)
What a pleasant surprise. A Wikipedia article on advanced computing that is actually in good shape. The article structure is (surprise) logical, and I see no major errors in it. But the sub-articles it points to are often low quality, e.g. Automatic parallelization, Application checkpointing, etc.
The hardware aspects are handled better here than the software issues, however. The Algorithmic methods section can do with a serious rework.
Yet a few logical errors still remain even in the hardware aspects, e.g. computer clusters are viewed as not massively parallel, a case invalidated by the K computer, of course.
The template used here, called programming paradigms, is, however, in hopeless shape, and I will remove it, given that it is a sad spot on an otherwise nice article. History2007 ( talk) 22:40, 8 February 2012 (UTC)
Can we agree that parallel computing isn't the same as concurrent computing? See /info/en/?search=Concurrent_computing#Introduction — Preceding unsigned comment added by Mister Mormon ( talk • contribs) 16:47, 3 February 2016 (UTC)
"The origins of true (MIMD) parallelism go back to Federico Luigi, Conte Menabrea and his "Sketch of the Analytic Engine Invented by Charles Babbage".[45][46][47]"
Not that I can see. This single mention refers to a system that does not appear in any other work, did not appear in Babbage's designs, and appears to be nothing more than "it would be nice if..." Well of course it would be. Unless someone has a much better reference, one that suggests how this was to work, I remain highly skeptical that the passage is correct in any way. Babbage's design did have parallelism in the ALU (which is all it was) but that is not parallel computing in the modern sense of the term. Maury Markowitz ( talk) 14:25, 25 February 2015 (UTC)
Dear Maury Markowitz,
Forgive me for reverting a recent edit you made to the parallel computing article.
You are right that Babbage's machine had a parallel ALU, but it did not have parallel instructions or operands, and so it does not meet the modern definition of the term "parallel computing".
However, at least one source says "The earliest reference to parallelism in computer design is thought to be in General L. F. Menabrea's publication ... It does not appear that this ability to perform parallel operation was included in the final design of Babbage's calculating engine" -- Hockney and Jesshope, p. 8. (Are they referring to the phrase "give several results at the same time" in (Augusta's translation of) Menabrea's article?)
So my understanding is that source says that the modern idea of parallel computing does go back at least to Menabrea's article, even though the idea of parallel computing was only a brief tangent in Menabrea's article whose main topic was a machine that does not meet the modern definition of parallel computing.
Perhaps that source is wrong. Can we find any sources that disagree? The first paragraph of the WP:VERIFY policy seems to encourage presenting what the various sources say, even when it is obvious that some of them are wrong. (Like many other aspects of Wikipedia, that aspect of "WP:VERIFY" strikes me as crazy at first, but then months later I start to think it's a good idea).
The main problem I have with that sentence is that it implies that only MIMD qualifies as "true parallelism". So if systolic arrays (MISD) and the machines from MasPar and Thinking Machines Corporation (SIMD) don't qualify as true parallel computing, but they are not sequential computing (SISD) either, then what are they? Is the "MIMD" part of the sentence supported by any sources? -- DavidCary ( talk) 07:04, 26 February 2015 (UTC)
I'm concerned that asynchronous programming redirects to this page. Asynchronous programming/computing is not the same as parallel computing. For example, JavaScript engines are asynchronous but single-threaded (web workers aside), meaning that tasks do not actually run in parallel, though they are still asynchronous. To my knowledge, that is how it works, and that is how NodeJS, browsers, and Nginx work. They are all single-threaded, yet asynchronous, and so not parallel. — Preceding unsigned comment added by 2605:A601:64C:9B01:7083:DDD:19AF:B6B7 ( talk) 03:40, 14 February 2016 (UTC)
Hello fellow Wikipedians,
I have just modified 6 external links on Parallel computing. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).
Cheers.— InternetArchiveBot ( Report bug) 14:59, 20 May 2017 (UTC)
Hello fellow Wikipedians,
I have just modified 4 external links on Parallel computing. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).
Cheers.— InternetArchiveBot ( Report bug) 17:20, 24 September 2017 (UTC)
Hello fellow Wikipedians,
I have just modified one external link on Parallel computing. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).
Cheers.— InternetArchiveBot ( Report bug) 09:34, 7 October 2017 (UTC)