Vector processor (final version) received a peer review by Wikipedia editors, which was archived on 14 December 2021. It may contain ideas you can use to improve this article.
This article is rated Start-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
In most of the computer architecture books that I have read, SIMD is categorized as a type of multiprocessing, not as a type of vectorization. My understanding of vectorization is an architecture that streams data into an execution unit; that is, it achieves high performance through high temporal utilization of a single functional unit. SIMD achieves high performance along a different axis: replication of functional units. For that reason, I believe this article is confusing SIMD with a type of vectorization. Dyl 23:34, 27 December 2005 (UTC)
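The distinction drawn above (temporal utilization of one pipelined unit versus replication of units) can be sketched with a toy cycle-count model. The pipeline depth, lane count, and latency figures below are illustrative assumptions, not numbers from any real machine:

```python
# Toy model: cycles to apply one operation to n elements.

def pipelined_vector_cycles(n, pipeline_depth=4):
    """One deeply pipelined functional unit: after the pipeline
    fills, it retires one result per cycle (temporal parallelism)."""
    return pipeline_depth + n - 1

def simd_cycles(n, lanes=4, op_latency=4):
    """Replicated functional units: each issue handles `lanes`
    elements at once (spatial parallelism)."""
    issues = -(-n // lanes)   # ceiling division
    return issues * op_latency

# For long vectors the pipelined unit approaches one result per cycle,
# while SIMD throughput is bounded by the lane count.
print(pipelined_vector_cycles(64))  # 67
print(simd_cycles(64))              # 64
```

The point of the sketch is only that the two styles reach high throughput by different means: the pipelined unit amortizes its fill latency over a long vector, the SIMD unit multiplies its width.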
If it's allowable to use multiple cycles in data processing, then does the x86 family, with things like the string operations, fit into this category? -- ToobMug 15:47, 26 May 2007 (UTC)
This is one of the best articles on microcomputer architecture I've ever read. Its descriptions are simple enough for a layman like me to understand, and yet it leads the casual reader into a wealth of information. I'm sure other technical articles on Wikipedia could do with emulating this style. Fantastic work!
Isn't a shader in a typical ATI or Nvidia GPU a vector processor? They process pixels and color data as vectors. 76.205.122.29 ( talk) 18:48, 26 May 2010 (UTC)
Array processors and vector processors are different, aren't they? I think the redirect from Array processor should be disabled and a separate section for Array processor should be created —Preceding unsigned comment added by 129.217.129.131 ( talk) 20:47, 5 January 2011 (UTC)
Tanenbaum (A.S. Tanenbaum, Structured Computer Organization, Prentice Hall, 1999) distinguishes between array machines and vector machines (I don't have the book here right now; I might remember incorrectly). I just looked into the new edition via Amazon, and there Tanenbaum distinguishes between "SIMD processors" and "vector processors". The former have multiple PEs (processing elements) with local memory, controlled by a single instruction stream (example: ILLIAC IV). Vector processors, on the other hand, have vector registers and a single functional unit that operates on all entries in such a register. Tanenbaum cites the Cray-1 as an example. Other examples are SSE, AVX, AltiVec, NEON. It seems hard to find a consistent differentiation in naming the different SIMD hardware. I do find it important, though, to be clear about the differences there are. Mkretz ( talk) 11:17, 29 June 2013 (UTC)
It is a distortion of historical events to characterize on-chip SIMD operations as vector instructions. The SIMD concept originated with the early work on parallel computers, which was both separate from and earlier than the big-iron vector machines. Jfgrcar ( talk) 03:23, 29 January 2011 (UTC)
This page needs a real clean-up. A simple diagram would do a lot, and there are zero refs now. Unless there are objections, I will remove the x86 architecture code that has no place in an encyclopedia. I will have to find a nice image to explain the concept. Does anyone have a nice diagram for this? History2007 ( talk) 21:17, 8 July 2011 (UTC)
The article notes that such things as AltiVec and SSE are examples of vector processing, and so it's common on current chips.
But if that is the case, then vector processing goes back long before the STAR-100.
Intel's MMX split up a 64-bit word into multiple 32-bit or 16-bit integers.
With a 36-bit word, the Lincoln Laboratories TX-2 was doing the same thing, as were the AN/FSQ-31 and -32 with a 48-bit word. And those two were derived from IBM's SAGE system, which operated on vectors of two 16-bit numbers at once.
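The "splitting a word into subword lanes" idea discussed above can be illustrated in a few lines. This is a generic sketch of packed subword arithmetic, not the actual MMX or TX-2 semantics:

```python
# Sketch of packed subword addition: two 16-bit lanes packed in one
# 32-bit word, with per-lane masking so carries never cross a lane
# boundary (the essence of subword/packed SIMD).

def pack16(hi, lo):
    """Pack two 16-bit values into one 32-bit word."""
    return ((hi & 0xFFFF) << 16) | (lo & 0xFFFF)

def packed_add16(a, b):
    """Add the two 16-bit lanes of a and b independently (wrapping)."""
    lo = ((a & 0xFFFF) + (b & 0xFFFF)) & 0xFFFF
    hi = (((a >> 16) & 0xFFFF) + ((b >> 16) & 0xFFFF)) & 0xFFFF
    return (hi << 16) | lo

a = pack16(1, 0xFFFF)   # lanes: 1, 65535
b = pack16(2, 1)        # lanes: 2, 1
r = packed_add16(a, b)  # lanes: 3, 0 -- the low lane's carry-out is
                        # discarded rather than rippling into the high lane
```

Hardware implements the lane isolation with carry-break logic rather than masking, but the observable behaviour is the same.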
The kind of vector processing that a Cray-1 did, on the other hand, isn't nearly as common; the only current system of that general kind is the SX-ACE from NEC. — Preceding unsigned comment added by Quadibloc ( talk • contribs) 22:33, 7 August 2016 (UTC)
Aren't SIMD and vector processors largely synonymous? Isn't SIMD usually vector? Isn't vector processing usually SIMD? WorldQuestioneer ( talk) 20:10, 13 July 2020 (UTC)
Vector-processing architectures are now considered separate from SIMD computers, based on the fact that vector computers processed the vectors one word at a time through pipelined processors (though still based on a single instruction), whereas modern SIMD computers process all elements of the vector simultaneously. [some 1998 ref here]
The page currently does not have importance set. I recommend it be changed to "top" after a review.
Basically, the page has near-zero recognition of the strategic importance of how vector processing has influenced our lives in computing. This is actually a cause for some concern from a sociological and historic perspective. However, I am not comfortable setting it myself and would prefer a review. Lkcl ( talk) 16:53, 10 June 2021 (UTC)
Lkcl has been quite insistent that vector processors are distinguishable from SIMD and offers a two-point test at the end of the lead for identifying a vector processor. I don't see support for this test in any cited sources. The citations provided are for WP:PRIMARY technical details of individual architectures which is a recipe for WP:SYNTHESIS. We need to reference these assertions to a WP:SECONDARY source like a textbook on processor architecture. ~ Kvng ( talk) 14:26, 13 June 2021 (UTC)
/info/en/?search=Talk:SIMD#Page_quality_is_awful_(in_the_summary)
There are fundamental problems with the three pages Vector processing, SIMD, and SIMT. From the link above it can be seen that there is MASSIVE confusion even in academic coursework and academic literature on this topic.
It also does not help that neither Flynn's nor Duncan's taxonomy covers SIMT! Even I was not aware in 2004, when working for Aspex, that it was a *SIMT* processor rather than a *SIMD* one, because NVIDIA had not yet coined the phrase, only introducing it in what... 2012? 2016? Something like that.
It also does not help that a pure SIMD-only processor with zero scalar capability and no scalar registers is ANOTHER class of processor that at the hardware level is virtually indistinguishable from SIMT.
Some diagrams are urgently needed here which illustrate these things properly.
Given that SIMD is literally the top world hit on Google search engines, this is a pretty damn high-priority task. How can this be properly given attention and resources? Lkcl ( talk) 14:21, 15 June 2021 (UTC)
After creating a second example based on real-world vector processor ISAs, I realised my initial deduction of what constitutes a vector processor was slightly inaccurate. The two distinguishing features, one of which SIMD is incapable of by definition, are:
The examples, which are not original research (one of which was there before I started editing the page), show this distinction very clearly.
Predicated SIMD sort of deserves the moniker "vector-capable", but unfortunately for SIMD, the lack of horizontal sum and other reduction operations rules out defining predicated SIMD as "a vector processor", to use a colloquial term.
Now, there does exist the possibility that some random processor out there may have a "SIMD horizontal sum"; however, given the convoluted Stack Overflow discussions that turn up when searching for this, I would not hold my breath waiting for Intel to add it. There might be some obscure historic SIMD architectures out there, but if they have a "horizontal sum" I haven't encountered them yet. Lkcl ( talk) 16:17, 21 June 2021 (UTC)
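For readers unfamiliar with the term: a horizontal (reduction) sum collapses the elements of one register into a single scalar. A vector ISA typically offers this as one instruction, whereas packed SIMD has to emulate it with log2(lanes) shuffle-and-add steps. A generic sketch of that emulation, tied to no particular ISA (and assuming a power-of-two lane count):

```python
# Emulating a horizontal sum the way packed SIMD must: repeatedly
# fold the upper half of the register onto the lower half, one
# "shuffle + add" per step, for log2(n) steps total.

def horizontal_sum(lanes):
    """Reduce a power-of-two-length list of lane values to a scalar."""
    v = list(lanes)
    while len(v) > 1:
        half = len(v) // 2
        # one shuffle-and-add step: lane i += lane i+half
        v = [v[i] + v[i + half] for i in range(half)]
    return v[0]

print(horizontal_sum([1, 2, 3, 4, 5, 6, 7, 8]))  # 36
```

An 8-lane register thus needs three dependent shuffle-and-add steps where a single reduction instruction would do.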
Kvng, I think I might have a way to express this which stops people from making false and misleading statements but also does not risk "SYNTHESIS".
It involves simply stating that a comparison between typical SIMD ISAs and typical vector ISAs shows that SIMD ISAs lack two features:
If understated enough, the fact that these are the discernible differences, combined with the examples, should stop people from making the mistake of thinking "oh, ARM and IBM and Intel marketing material said they do vectors, therefore ARM and IBM and Intel must all be vector processors". What are your thoughts on that approach? Lkcl ( talk) 00:53, 23 June 2021 (UTC)
Arg, after Guy kindly found the 1972 Flynn paper: strictly speaking, the redirect Array Processors should instead point to the (new) subsection in Flynn's taxonomy, because Array Processor is a subclass of SIMD. Whoops. Lkcl ( talk) 05:43, 18 June 2021 (UTC)
/info/en/?search=Array_data_structure#Compact_layouts
Aspex's DMA engine was able to do up to 3 dimensions of reordering. Need to find other processors; Mitch Alsup mentioned on comp.arch that some AMD GPUs could do it, must look them up. Lkcl ( talk) 22:49, 19 June 2021 (UTC)
This article has numerous severe problems. It looks like there is a significant amount of editorializing, improper synthesis, and original research present, given the tone and content of this article, and the fact that the majority of the sources cited are primary sources. The entire article feels like it was written by taking what is found in ARM SVE and the RISC-V Vector Extension and applying it forcibly to vector processors in general, instead of treating these as just two examples of vector architectures, as is evidenced by all the mentions of SVE and RVV in the article.
Because of this, the article actually misses all of the fundamental theory pertaining to vector processor architecture and the organizations it enables.
For instance, it more or less ignores chaining, even though this technique is quintessential to vector processors. Inexplicably, chaining is hidden away in another article, Chaining (vector processing), even though the technique has no relevance outside of vector processors! A closely related concept, tailgating, is something this article is completely ignorant of.
There is no explicit explanation of how a vector processor can vary the number of operations it performs per cycle in the spatial dimension by varying the number of lanes it has. In fact, the term lane is related to the vector length in a manner that is entirely incorrect in the section Vector processor#Pure (true) Vector ISA! The number of lanes is not the maximum vector length; it is the number of pipelines that operate concurrently to execute one vector instruction. The maximum vector length is the maximum number of elements that fit in one vector register (for register-to-register vector processors).
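The corrected relationship between lanes and maximum vector length can be stated numerically; the MVL of 64 matches the Cray-1's vector registers, while the lane counts are purely illustrative:

```python
# Lanes and maximum vector length (MVL) are independent parameters.

def cycles_for_vector_op(vl, lanes):
    """A vector instruction over vl elements on a machine with `lanes`
    parallel pipelines takes ceil(vl / lanes) element-group cycles
    (start-up overhead ignored)."""
    return -(-vl // lanes)   # ceiling division

MVL = 64   # elements per vector register, as on the Cray-1
for lanes in (1, 2, 4, 8):
    print(lanes, cycles_for_vector_op(MVL, lanes))
# Doubling the lanes halves the execution time of one instruction;
# MVL, a property of the register file, does not change at all.
```

This is exactly the spatial knob the paragraph above describes: the architect picks the lane count for performance, while MVL is fixed by the ISA's register size.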
The above discussion leads to another egregious omission: that vector processors can be of two kinds, memory-to-memory and register-to-register. This article has a couple of incidental mentions, neglecting the fact that this is a central issue in how vector processors work: whether a machine is memory-to-memory or register-to-register goes a long way toward explaining the relationship between vector length and the speed-up attained over a scalar processor (and, for fun, vector caches can be added to this discussion).
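One concrete consequence of the register-to-register style is strip mining: a loop longer than the maximum vector length must be processed in MVL-sized chunks, whereas a memory-to-memory machine can name the full length in a single instruction. A schematic sketch, tied to no particular ISA:

```python
# Strip mining on a register-to-register vector machine: an
# arbitrary-length operation is split into chunks of at most MVL
# elements, one vector load/add/store sequence per chunk.

MVL = 64   # illustrative maximum vector length

def strip_mined_add(a, b):
    """Elementwise a + b, processed in MVL-sized strips."""
    out = []
    for start in range(0, len(a), MVL):
        # each loop iteration models one vector instruction sequence
        # operating on up to MVL elements held in vector registers
        out.extend(x + y for x, y in zip(a[start:start + MVL],
                                         b[start:start + MVL]))
    return out
```

The chunking loop is the visible cost of the register-file approach; what it buys is register reuse (and hence chaining), which the memory-to-memory style largely forgoes.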
The treatment of vector masks is also fatally flawed. It is viewed entirely through the lens of ARM SVE, where they are called predicate elements (IIRC; I don't have my ARM SVE manual at hand), despite masking being a feature of one of the first standalone vector processors, the CDC STAR-100 from the 1970s, and of most vector processors since. Why this article treats SVE's conception of vector masking as the progenitor is beyond me.
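For context, vector masking as found on machines since the STAR-100 era works roughly as follows: a one-bit-per-element mask governs which lanes an operation updates. A generic sketch, deliberately not SVE-specific:

```python
# Masked (predicated) vector add: only elements whose mask bit is set
# are updated; masked-off elements keep their old destination values.

def masked_add(dest, a, b, mask):
    """Return dest with dest[i] replaced by a[i] + b[i] where mask[i]."""
    return [x + y if m else d
            for d, x, y, m in zip(dest, a, b, mask)]

old = [0, 0, 0, 0]
res = masked_add(old, [1, 2, 3, 4], [10, 20, 30, 40], [1, 0, 1, 0])
# res == [11, 0, 33, 0]
```

Whether masked-off lanes retain their old values or are zeroed is an ISA design choice; the sketch shows the retain ("merging") variant.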
All of the above shortcomings are central issues in chapter 4 and appendix F of Hennessy and Patterson's Computer Architecture: A Quantitative Approach (5E).
The other problems are too numerous to discuss here. This entire article needs to have the synthesis, running commentary, and original research removed. Then, it needs to be rewritten from scratch based on reliable, secondary sources such as textbooks, monographs, surveys, not the lecture slides, architecture manuals, and research papers that this article uses (and misuses—the research papers are likely being used for their related work sections, which describe the state of the art—this is misuse because the purpose of related work sections in papers is to contextualize the research presented, not to explain, analyze, or survey vector processing).
As a final note, array processors are presented here as synonymous with vector processors. While it is true that some authors use the term to refer to what other authors call vector processors, in many cases where it is used, it is in reference to something else: either SIMD processor arrays (such as the ILLIAC IV) or processors that are designed for processing arrays of data but have no conception of an array in their architecture (and thus cannot be SIMD processor arrays or vector processors), such as the Floating Point Systems AP-120B. See R.W. Hockney and C.R. Jesshope's Parallel Computers (2E), sections 1.1.3, 1.1.4, 1.1.6, and figure 1.2. This is a good source because it is from a time when the literature had to be specific about what kind of machine was being discussed, as vector processors, SIMD processor arrays, and "array processors" were all in existence. This is in contrast to books from the 2000s and later (which are easy to find on Google), which only mention these topics in passing, without careful analysis.
I feel that if this were explained, then this article would be much more focused on vector processors, and would not have so many digressions as to what GPUs do (which Hennessy and Patterson's Computer Architecture states are related to but also distinct from vector processors). HTW217 ( talk) 11:42, 20 November 2021 (UTC)
Hooray! Finally, someone else who knows what the hell they're talking about. You should have seen the mess of fundamentally flawed assertions made when I began editing about a year ago: it stated something akin to "A GPU is a Vector Processor, therefore SIMD equals Vectors". I did my best, however I have not looked at the page for some considerable time. If anyone has since edited it and asserted "Predicate Masks equals SVE", that was definitely NOT me, because I know it to be blatantly false. The Array Processors redirect was a legacy redirect long before I contributed to the page. Yes, terminology in this complex area is massively confused, and multi-billion-dollar corporations trying to peddle their packed SIMD processors as "vector" (even IBM calling packed SIMD "VSX") is not helping. Lastly: if Patterson had not been so shockingly, arrogantly rude to me at a conference I attended, I would be much more inclined to listen to what he has to teach. Lkcl ( talk) 22:44, 6 May 2022 (UTC)
Followup: basically the page was a bit of a mess to start with, and so full of misunderstandings and mistakes, but it had such a high pagerank that I had to do... *something*. But it is an incremental process, starting from a "legacy" position, if you know what I mean. I'd be more than happy to collaborate.
One thing: there is a cyclic-dependency problem with this page. It had been so wrong for such a long time, and has such a high pagerank, that its misinformation was starting to leak into online textbook material. This will need to be taken into consideration when editing the page, because citing sources which themselves used the misinformation reinforces the misinformation.
The other thing is that, with the entire industry except NEC having pretty much passed vector processing by for almost three decades, finding modern online primary sources is almost impossible. I had to go to a specialist archive site to find the Cray-1 technical manual, which was a PDF of scanned images from a carbon-copy old-school typewriter!
Lkcl ( talk) 09:40, 7 May 2022 (UTC)
The paragraph describing the Aspex Microelectronics ASP has been removed. The reasons for this removal are as follows:
Associative processors are categorized under the SIMD category in Flynn's taxonomy, but they are architecturally distinct from the vector processors this article is about, which are also categorized as SIMD. Flynn's 1972 paper describing an updated version of the taxonomy makes it quite clear they are distinct (note that the paper refers to vector processors as a pipelined version of the processor array [e.g. ILLIAC IV]).
The paragraph justifies the inclusion of the ASP by stating "[it] categorised itself as "Massive wide SIMD" but had bit-level ALUs and bit-level predication, and so could definitively be considered an array (vector) processor." This is a textbook example of weasel words. The statement also is advancing a case that this processor is a vector processor, which is clearly original research. No reliable secondary or tertiary sources have been presented to support the claim.
The cited paper's abstract describes it as an associative processor with a fine-grained SIMD architecture. The paper is behind a paywall, so I can't (and won't) access it, but there's nothing in the abstract to suggest it could be a vector processor. The company's marketing materials offer no further clarity on this matter. These are all primary sources that contradict the claim that it is a vector processor.
A related issue is the overt exaggeration and promotion, leading to falsities. Its bit-level ALUs, bit-level predication, 4,096 PEs, and CAMs being described as an "extreme and rare example of an Array Processor" is patent nonsense. 1-bit PEs are found in the Thinking Machines CM-1. Predication (masking) isn't unique; every array processor since Slotnick's SOLOMON I from the early 1960s has had it, and when the PEs (ALUs) are 1 bit wide, it is natural that masking is done bit by bit. That it had 4,096 PEs is unremarkable; the CM-1 could have up to 65,536. That each PE had its own CAM is an intrinsic feature of all associative processors.
The paragraph's inclusion in a section describing vector supercomputers is even more perplexing. Even if it were a vector processor, it is not a vector supercomputer. The cited paper claims the processor will be used in a massive parallel neural network, but this does not appear to have materialized, AFAICT, from Google searches. In fact, Google searches turn up very little relevant material about this company and its processor. It would seem the paragraph overstates their impact. HTW217 ( talk) 11:29, 6 January 2022 (UTC)
Vector processor ( final version) received a peer review by Wikipedia editors, which on 14 December 2021 was archived. It may contain ideas you can use to improve this article. |
This article is rated Start-class on Wikipedia's
content assessment scale. It is of interest to the following WikiProjects: | ||||||||||||||
|
In most of the computer architecture books that I have read, SIMD is a categorized as type of multiprocessing, not as a type of vectorization. My understanding of the meaning of vectorization is an architecture which streams data into an execution unit. That is, it achieves high performance through high temporal utilization of a single functional unit. SIMD achieves high performance through a different axis, that of replication of functional units. For that reason, I believe this article is confusing SIMD as a type of vectorization. Dyl 23:34, 27 December 2005 (UTC)
If it's allowable to use multiple cycles in data processing then do the x86 family, with things like the string operations, fit into this category? -- ToobMug 15:47, 26 May 2007 (UTC)
This is one of the best articles on microcomputer architecture I've ever read. It's descriptions are simple enough for a layman like me to understand, and yet leads the casual reader into a wealth of information.I'm sure other technical articles on Wikipedia could do with emulating this style. Fantastic work !
Isn't a shader in a typical ATI or Nvidia GPU a vector processor? They process pixels and color data as vectors. 76.205.122.29 ( talk) 18:48, 26 May 2010 (UTC)
Array processors and vector processors are different, Aren't they? I think redirect from Array processor shd be disabled and a separate section for Array processor has to be made —Preceding unsigned comment added by 129.217.129.131 ( talk) 20:47, 5 January 2011 (UTC)
Tanenbaum, A.S. 1999. Structured Computer Organization. Prentice Hall. makes a difference between array machines and vector machines (I don't have the book here right now, I might remember incorrectly). I just looked into the new edition via Amazon and there Tanenbaum makes a difference between "SIMD processor" and "vector processor". The former have multiple PEs (processing elements) which have local memory, and are controlled by a single instruction stream (example ILLIAC IV). Vector processors on the other hand have vector registers and a single functional unit to operate on all entries in such a register. Tanenbaum cites the Cray-1 as an example. Other examples are SSE, AVX, AltiVec, NEON. It seems hard to find a consistent differentiation in naming the different SIMD hardware. I do find it important, though, to be clear about the differences there are. Mkretz ( talk) 11:17, 29 June 2013 (UTC)
It is a distortion of historical events to characterize on-chip simd operations as vector instructions. The simd concept originated with the early work on parallel computers which was both separate from and earlier than the big-iron vector machines. Jfgrcar ( talk) 03:23, 29 January 2011 (UTC)
This page needs real clean up. A simple diagram would do a lot, and there are zero refs now. Unless there are objections I will remove the x86 architecture code that has no place in an encyclopedia. I will have to find a nice image to explain the concept. Does anyone have a nice diagram for this? History2007 ( talk) 21:17, 8 July 2011 (UTC)
The article notes that such things as AltiVec and SSE are examples of vector processing, and so it's common on current chips.
But if that is the case, then vector processing goes back long before the STAR-100.
Intel's MMX split up a 64-bit word into multiple 32-bit or 16-bit integers.
With a 36-bit word, the Lincoln Laboratories TX-2 was doing the same thing, as was the AN/FSQ-31 and 32 with a 48-bit word. And those two were derived from IBM's SAGE system, which operated on vectors of two 16-bit numbers at once.
The kind of vector processing that a Cray-I did, on the other hand, isn't nearly as common; right now, the only current system of that general kind is the SX-ACE from NEC. — Preceding unsigned comment added by Quadibloc ( talk • contribs) 22:33, 7 August 2016 (UTC)
Aren't SIMD and vector processors largely synonymous? Isn't SIMD usually vector? Isn't vector processing usually SIMD? WorldQuestioneer ( talk) 20:10, 13 July 2020 (UTC)
Vector-processing architectures are now considered separate from SIMD computers, based on the fact that vector computers processed the vectors one word at a time through pipelined processors (though still based on a single instruction), whereas modern SIMD computers process all elements of the vector simultaneously. [some 1998 ref here]
the page currently does not have importance set. i recommend it be changed to "top" after a review.
basically the page has near zero recognition of the strategic importance of how Vector processing has influenced our lives, in computing. this is actually a cause for some concern, from a sociological and historic perspective. however i am not comfortable setting it myself, would prefer a review. Lkcl ( talk) 16:53, 10 June 2021 (UTC)
Lkcl has been quite insistent that vector processors are distinguishable from SIMD and offers a two-point test at the end of the lead for identifying a vector processor. I don't see support for this test in any cited sources. The citations provided are for WP:PRIMARY technical details of individual architectures which is a recipe for WP:SYNTHESIS. We need to reference these assertions to a WP:SECONDARY source like a textbook on processor architecture. ~ Kvng ( talk) 14:26, 13 June 2021 (UTC)
/info/en/?search=Talk:SIMD#Page_quality_is_awful_(in_the_summary)
there are fundamental problems with the three pages, Vector Processing, SIMD, and SIMT. from the link above it can be seen that there is MASSIVE confusion even from academic coursework and academic literature on this topic.
it also does not help that neither Flynn nor Duncan taxonomy cover SIMT! Even i was not aware in 2004 when working for Aspex that it was a *SIMT* processor not a *SIMD* one because NVIDIA had not coined the phrase, only introducing it in what... 2012? 2016? sonething like that.
it also does not help that a pure SIMD only processor with zero scalar capability and no scalar registers is ANOTHER class of processor that at the hardware level is virtually indistinguishable from SIMT.
some diagrams are urgently needed here which illustrate these things properly.
given that SIMD is literally the top world hit on google search engines, this is a pretty damn high priority task. how can this be properly given attention and resources? Lkcl ( talk) 14:21, 15 June 2021 (UTC)
after creating a second example based on real-world Vector Processor ISAs i realised my initial deduction of what constitutes a Vector Processor was slightly inaccurate. the two discerning features, and SIMD is incapable of one of them by definition, are:
the examples which are not original research one of which was there before i started editing the page show this distinction very clearly.
Predicated SIMD sort-of deserves the moniker "Vector capable" but unfortunately for SIMD the lack of Horizontal Sum and other reduction operations kicks it in the nuts as far as defining Predicated SIMD as "A Vector Processor", to use a colloquial term.
now, there does exist the possibility that some random processor out there may have "SIMD Horizontal Sum" however given the absolutely ridiculous stackoverflow discussions when searching for this i would not hold my breath waiting for Intel to add it. there might be some obscure historic SIMD architectures out there but if they have "Horizontal Sum" i haven't encountered them yet. Lkcl ( talk) 16:17, 21 June 2021 (UTC)
Kvng i think i might have a way to express this which stops people from making false and misleading statements but also does not risk "SYNTHESIS".
it involves simply stating that a comparison between typical SIMD ISAs and typical Vector ISAs shows that SIMD ISAs miss two features:
if understated enough, the fact that these are the discernable differences, combined with the examples, should stop people from making the mistake of thinking "oh, ARM and IBM and Intel marketing material said they do Vectors therefore ARM and IBM and Intel must all be Vector Processors". what's your thoughts on that approach? Lkcl ( talk) 00:53, 23 June 2021 (UTC)
arg after guy kindly found the 1972 flynn paper, strictly speaking the redirect Array Processors should instead be to the (new) subsection in Flynn's taxonomy because Array Processor is a subclass of SIMD. whoops. Lkcl ( talk) 05:43, 18 June 2021 (UTC)
/info/en/?search=Array_data_structure#Compact_layouts
ASPEX's DMA Engine was able to do up to 3 dimensions of reordering. need to find other processors, Mitch Alsup mentioned on comp.arch that some AMD GPUs could do it, must look them up. Lkcl ( talk) 22:49, 19 June 2021 (UTC)
This article has numerous severe problems. It looks like there is a significant amount of editorializing, improper synthesis, and original research present, given the tone and content of this article, and the fact that majority of the sources cited in are primary sources. The entire article feels like it was written by taking what is found in ARM SVE and the RISC "V" Vector Extension and applying it forcibly to vector processors, instead of treating these as just two examples of vector architectures, as is evidenced by all the mentions of SVE and RVV in the article.
Because of this, the article actually misses all of the fundamental theory pertaining vector processor architecture and the organizations they enable.
For instance, it more or less ignores chaining, even though this technique is quintessential to vector processor. Inexplicably, chaining is hidden away in another article, Chaining (vector processing), even though this technique has no relevance outside of vector processors! A closely related concept, tailgating, is something this article is completely ignorant of.
There is no explicit explanation of how a vector processor could vary the number of operations it performs per cycle on the spatial dimension by varying the number of lanes it has. In fact, the term lane is related to the vector length in a manner that is entirely incorrect in the section, Vector processor#Pure (true) Vector ISA! The number of lanes is not the maximum vector length; it is the number of pipelines that operate concurrently to execute one vector instruction. The maximum vector length is the maximum number of elements that fits in one vector register (for register-to-register vector processors).
The above discussion leads to another egregious omission: that vector processors can be of two kinds: memory-to-memory and register-to-register. This article has a couple of incidental mentions, neglecting the fact that this is in fact a central issue as to how vector processors work: whether it is memory-to-memory and register-to-register goes a long way to explain the relationship between vector length and the speed-up attained over a scalar processor (and for fun, vector caches can be added to this discussion).
The treatment of vector masks is also fatally flawed. It is viewed entirely through the lens of ARM SVE, where they are called predicate elements (IIRC, I don't have my ARM SVE manual at-hand), despite it being a feature of one of the first standalone vector processors, the CDC STAR-100 from the 1970s, and a feature of most vector processors since. Why this article treats SVE's conception of vector masking as the progenitor is beyond me.
All of the above shortcomings are central issues in chapter 4 and appendix F of Hennessey and Patterson's Computer Architecture: A Quantitative Approach (5E).
The other problems are too numerous to discuss here. This entire article needs to have the synthesis, running commentary, and original research removed. Then, it needs to be rewritten from scratch based on reliable, secondary sources such as textbooks, monographs, surveys, not the lecture slides, architecture manuals, and research papers that this article uses (and misuses—the research papers are likely being used for their related work sections, which describe the state of the art—this is misuse because the purpose of related work sections in papers is to contextualize the research presented, not to explain, analyze, or survey vector processing).
As a final note, array processors are presented here as synonymous to vector processors. While it is true that some authors use the term to refer to what other authors call vector processors, in many cases where it is used, it is in reference to something else: either SIMD processor arrays (such as the ILLIAC IV) or processors that are designed for processing arrays of data, but have no conception of an array in their architecture (and thus cannot be SIMD processor arrays or vector processors) (such as the Floating Point Systems AP-120B). See R.W. Hockney and C.R. Jesshope's Parallel Computers (2E), sections 1.1.3, 1.1.4, 1.1.6, and figure 1.2. This is a good source because it is from a time when the literature had to be specific about what kind of machine was being discussed, as vector processors, SIMD processor arrays, and "array processors" were in existence. This is in contrast to books from the 2000s and later (which are easy to find on Google), which only mention these topics in passing, without careful analysis.
I feel that if this were explained, the article would be much more focused on vector processors and would not have so many digressions about what GPUs do (which Hennessy and Patterson's Computer Architecture states are related to, but distinct from, vector processors). HTW217 ( talk) 11:42, 20 November 2021 (UTC)
hooray! finally! someone else who knows what the hell they're talking about. you should have seen the mess of fundamentally flawed assertions made when i began editing about a year ago: it stated something akin to "A GPU is a Vector Processor, therefore SIMD equals Vectors". i did my best, however i have not looked at the page for some considerable time. if anyone has since edited it and asserted "Predicate Masks equals SVE", that was definitely NOT me, because i know it to be blatantly false. the Array Processors redirect was a legacy redirect long before i contributed to the page. yes, terminology in this complex area is massively confused, and multi-billion-dollar corporations trying to peddle their Packed SIMD processors as "Vector" (even IBM calling Packed SIMD "VSX") is not helping. lastly: if Patterson had not been so shockingly, arrogantly rude to me at a conference i attended, i would be much more inclined to listen to what he has to teach. Lkcl ( talk) 22:44, 6 May 2022 (UTC)
followup: basically the page was a bit of a mess to start with, full of misunderstandings and mistakes, but it had such a high pagerank that i had to do... *something*. it is an incremental process, though, starting from a "legacy" position, if you know what i mean. i'd be more than happy to collaborate.
one thing: there is a cyclic dependency problem with this page. it has been so wrong for so long, and has such a high pagerank, that its misinformation was starting to leak into online textbook material. this will need to be taken into consideration when editing the page, because citing sources which themselves absorbed the misinformation only reinforces it.
the other thing is that, with the entire industry except NEC having pretty much passed Vector Processing by for almost three decades, finding modern online primary sources is almost impossible. i had to go to a specialist archive site to find the Cray-1 tech manual, which was a PDF of scanned images from a carbon-copy old-school typewriter!
Lkcl ( talk) 09:40, 7 May 2022 (UTC)
The paragraph describing the Aspex Microelectronics ASP has been removed. The reasons for this removal are as follows:
Associative processors fall under the SIMD category in Flynn's taxonomy, but they are architecturally distinct from the vector processors this article is about, which are also categorized as SIMD. Flynn's 1972 paper describing an updated version of the taxonomy makes it quite clear that they are distinct (note that the paper refers to vector processors as a pipelined version of the processor array, e.g. the ILLIAC IV).
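Flynn's distinction is easy to illustrate. In this hypothetical sketch (toy models, not real hardware), both designs compute the same result, but a processor array spends one conceptual time step across N replicated processing elements, while a vector processor streams N elements through a single pipelined functional unit over N steps:

```python
def processor_array_add(a, b):
    """Processor-array style (e.g. ILLIAC IV): all N PEs execute the
    same add instruction in the same conceptual cycle."""
    return [x + y for x, y in zip(a, b)]  # 1 time step, N functional units

def vector_pipeline_add(a, b):
    """Vector-processor style (e.g. Cray-1): one pipelined functional
    unit; operand pairs enter one per cycle, results stream out."""
    out = []
    for x, y in zip(a, b):  # N time steps, 1 functional unit
        out.append(x + y)
    return out
```

The results are identical; the architectural difference is spatial replication versus temporal pipelining, which is why "pipelined version of the processor array" is an apt description.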
The paragraph justifies the inclusion of the ASP by stating "[it] categorised itself as "Massive wide SIMD" but had bit-level ALUs and bit-level predication, and so could definitively be considered an array (vector) processor." This is a textbook example of weasel words. The statement is also advancing a case that this processor is a vector processor, which is clearly original research. No reliable secondary or tertiary sources have been presented to support the claim.
The cited paper's abstract describes it as an associative processor with a fine-grained SIMD architecture. The paper is behind a paywall, so I can't (and won't) access it, but there's nothing in the abstract to suggest it could be a vector processor. The company's marketing materials offer no further clarity on this matter. These are all primary sources that contradict the claim that it is a vector processor.
A related issue is the overt exaggeration and promotion, leading to falsities. Describing its bit-level ALUs, bit-level predication, 4,096 PEs, and CAMs as an "extreme and rare example of an Array Processor" is patent nonsense. 1-bit PEs are found in the Thinking Machines CM-1. Predication (masking) is not unique; every array processor since Slotnick's SOLOMON I from the early 1960s has had it, and when the PEs (ALUs) are 1 bit wide, it is natural that masking is done bit by bit. That it had 4,096 PEs is unremarkable; the CM-1 could have up to 65,536. That each PE had its own CAM is an intrinsic feature of all associative processors.
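The point that bit-level predication follows naturally from 1-bit-wide PEs can be sketched as follows. This is a hypothetical toy model of one step of a 1-bit SIMD array (function and operand names are illustrative, not from any cited source); because each PE handles a single bit per cycle, its mask is necessarily a single bit too:

```python
def step(pe_bits, operand_bits, mask_bits):
    """One cycle of a toy 1-bit SIMD array: each 1-bit PE XORs its
    operand into its state, but only where its own mask bit is set."""
    return [p ^ o if m else p for p, o, m in zip(pe_bits, operand_bits, mask_bits)]

state = step([0, 1, 1, 0], [1, 1, 0, 0], [1, 1, 0, 1])
# PEs 0 and 1 update (0^1=1, 1^1=0); PE 2 is masked off; PE 3 XORs in a 0
# state == [1, 0, 1, 0]
```

Nothing here is specific to the ASP; any bit-serial array (the CM-1 included) operates this way, which is why bit-granular masking is unremarkable rather than "extreme and rare".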
The paragraph's inclusion in a section describing vector supercomputers is even more perplexing: even if the ASP were a vector processor, it is not a vector supercomputer. The cited paper claims the processor would be used in a massively parallel neural network, but this does not appear to have materialized, AFAICT from Google searches. In fact, Google searches turn up very little relevant material about this company and its processor. It would seem the paragraph overstates its impact. HTW217 ( talk) 11:29, 6 January 2022 (UTC)