This article is rated C-class on Wikipedia's content assessment scale.
Large parts of this article are now obsolete, and it may now be appropriate to replace "GPU" in the article title with something else. Modeless ( talk) —Preceding undated comment added 00:28, 5 December 2009 (UTC).
How do you get (32 cores) × (16-wide vector) × (2 GHz) = 2 TFLOPS? That works out to 1 TFLOPS, which is still pretty good. —Preceding unsigned comment added by 70.69.136.94 ( talk) 03:19, 12 June 2009 (UTC)
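The missing factor of two is plausibly the fused multiply-add: if each of the 16 vector lanes retires one multiply and one add per cycle (an assumption on my part; the flop-counting convention behind Intel's figure isn't stated on this page), the arithmetic becomes 32 cores × 16 lanes × 2 FLOPs per lane per cycle × 2 GHz = 2 TFLOPS. Counting only one operation per lane per cycle gives the 1 TFLOPS above.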
This article needs a screenshot of the demo from GDC - plus any other info released at the conference.
Larrabee demo video: http://www.youtube.com/watch?v=gxr_4sZ7w7k
Do these two sentences really warrant an entire section? Of course the criticism is valid, but it looks like somebody rushed to "hate". It makes the rest of the article feel unfinished and hasty, when actually the article is rather complete and of good quality. Perhaps it should be moved to another section? 84.9.165.14 ( talk) 21:38, 5 October 2008 (UTC)
Ramu50, your concern about marketing speak in calling x86 "popular" is valid; I've changed it to say "common" instead. I have also reintroduced the information about the Pentagon's involvement in Larrabee's design.
The SIGGRAPH paper clearly states that Larrabee has a coherent cache across all cores: (Italics added for emphasis) Page 1: "The cores each access their own subset of a coherent L2 cache". Page 3: "These cache control instructions also allow the L2 cache to be used similarly to a scratchpad memory, while remaining fully coherent." Page 11: "In contrast, all memory on Larrabee is shared by all processor cores. For Larrabee programmers, local data structure sharing is transparently supported by the coherent cached memory hierarchy regardless of the thread’s processor."
The text claiming that Larrabee has a crossbar architecture is not supported by sources. Larrabee uses a ring bus, as described in the SIGGRAPH paper.
Modeless ( talk) 23:22, 10 August 2008 (UTC)
Does anyone else have an issue with the title of this page, "Larrabee (GPU)"? I don't know if it is known for certain if Larrabee is meant to stand alone as the only processor in a PC, like what everyone had before 1997, or function like a modern video card with a dedicated CPU. But either way I think calling this a GPU is a stretch. —Preceding unsigned comment added by 74.219.122.138 ( talk) 13:30, 8 August 2008 (UTC)
Any speculation about the estimated release date? --Xav
Likewise, any references to open source drivers? -- Daniel11 06:43, 10 September 2007 (UTC)
I think it will be easier to write open source drivers. Just use the Gallium/Mesa software OpenGL stack, compile for x86, and use the wide SIMD and the texture units in some loops. The only real problem with the bare hardware will be modesetting, but knowing Intel it will be working from day one. —Preceding unsigned comment added by 83.175.177.180 ( talk) 09:53, 6 April 2009 (UTC)
Larrabee was the name of the slow-witted assistant to the Chief of CONTROL on the TV series Get Smart. ;) He's the guy who, when told to take away a bugged aerosol can "and step on it" (meaning do it quickly), took the order literally. After an offscreen explosion, Larrabee came back in with his clothes shredded and said "Chief, I don't think you're supposed to step on those." —Preceding unsigned comment added by Bizzybody ( talk • contribs) 10:35, 8 December 2007 (UTC)
http://softwarecommunity.intel.com/UserFiles/en-us/File/larrabee_manycore.pdf
I'll leave it to someone else with more time on their hands and better understanding of Wikipedia rules and regulations to beautify this article. -- Tatejl ( talk) 21:06, 5 August 2008 (UTC)
Requested permission from authors for image use. dila ( talk) 20:37, 8 August 2008 (UTC)
Removed entire opening paragraph and replaced with a minimal buzzword description of Larrabee. dila ( talk) 01:40, 9 August 2008 (UTC)
The article compares the chip to GPUs from Nvidia, but it is more of a multicore, high-performance, general-purpose computer with ring-based connectivity. Wouldn't at least a reference to the Cell (Cell Broadband Engine) be suitable? —Preceding unsigned comment added by 62.209.166.130 ( talk) 11:16, 12 August 2008 (UTC)
This article looks like an advertisement to me, especially with the inclusion of that programmability graph with all arrows pointing towards it. Also, there is no mention of any competing products. NPOV anyone? -- DustWolf ( talk) 00:56, 13 August 2008 (UTC)
Here's another interesting question: how can a chip that is not technically a GPU, according to its maker, compete in the GPGPU market? -- DustWolf ( talk) 13:47, 31 August 2008 (UTC)
The article also says CPU quite a bit, and finishes with: "Larrabee is an appropriate platform for the convergence of GPU and CPU applications." Intel will be the ones to decide what acronym best suits it when they start marketing the thing to consumers; until then, any classification based on deduction, or on comparison to older architectures, is nothing but speculation.
I also think the statement that Intel will release a "video" card in late 2009/10 needs to be sourced properly; the current source states that Paul Otellini said "a product" will be out then. He never said anything about a video card. A PCIe card has been called out as a possible package, depending on the application. This in no way confirms or denies that a video card is how Larrabee will first be sold. Making assumptions about Larrabee is a mistake; it's a completely new product and so does not need to match established conventions. Video cards are not the only way to use graphics processing, and graphics processing is not the only way to use Larrabee. 192.198.151.129 ( talk) 11:31, 2 January 2009 (UTC)
Also, http://www.intel.com/technology/visual/microarch.htm?iid=tech_micro+larrabee is an official Intel release on this, and it states that Larrabee is a multi-core architecture. Calling Larrabee a GPU is clearly a misinterpretation, and I move to change the article name to "Larrabee (Microarchitecture)" and remove any references to Larrabee as a GPU. I cannot find one non-speculative source that calls Larrabee a GPU. The SIGGRAPH paper examines the hardware differences between Larrabee and both current GPUs and CPUs. The applications of Larrabee look to be targeted at the market segment that GPUs dominate, but those applications can also be run on CPUs and GPGPUs. And to the best of my knowledge, Intel never called it a GPU. 192.198.151.129 ( talk) 15:08, 2 January 2009 (UTC)
"Contrary to what you have said, the fact that a Larrabee based product will probably dramatically accelerate graphics performance alongside a CPU does not necessarily make it a GPU, because of the additional capabilities a Larrabee based product may have."
This is wrong. Having additional capabilities doesn't prevent something from being a GPU. Current GPUs are being used for molecular dynamics simulation, database searching, financial modeling, etc. What defines a GPU is the capability of accelerating graphics APIs, not a lack of ability in other areas.
"As secondary sources SIGGRAPH and Intel must be considered more reliable sources than speculative internet magazines if the two conflict in terminology and ideas."
There is no conflict. Intel has not talked about using Larrabee for anything other than GPU or GPGPU tasks. Intel has never denied that Larrabee is a GPU. All they have claimed is that Larrabee is more flexible than current GPUs, but I stress again: that flexibility doesn't make Larrabee *not* a GPU. As I said: if Intel announces a Larrabee product which is *not* a GPU, i.e. one that does not have texture sampling units or display interfaces and is not intended for graphics acceleration, *that's* when the article should be changed, and not before. Modeless ( talk) 08:51, 5 January 2009 (UTC)
"Products using the Larrabee architecture could conceivably be used in standalone processors"
I agree, and yet: this is still just speculation. There is no evidence Intel intends to do this. The only evidence of a Larrabee product that exists is of a Larrabee "discrete graphics" product.
"There is not enough information released on the first implementation of this architecture to warrant a substantial article. The architecture itself warrants an article."
Precisely the opposite. The *only* information we have is about the first implementation of this architecture as a GPU. We know nothing about where Intel plans to take Larrabee in the future. Larrabee 2 could be a very different beast. If Larrabee flops, it could even be stillborn. The information in this article belongs here, and it's the "architecture" article that would necessarily be a stub until more information about future Larrabee products is released.
"[Intel] never once (that's right not once, not anywhere) refer to it as a GPU"
We've already gone through this; it's irrelevant. Intel makes products which even you cannot deny are GPUs, and yet Intel never refers to those as GPUs either. Would you lead a crusade to rid the Intel GMA article of the acronym "GPU" as well? Unless you find me an Intel quote saying Larrabee is *not* a GPU, this argument is dead.
"The particular interpretation you have made is that Larrabee is a GPU. [...] is there any way to get a third person, or some way of getting consensus on this?"
Me, and CNET, PC Perspective, Personal Computer World, Tech Radar, Daily Tech, XBitLabs, Gizmodo, ZDNet, Futuremark, Ars Technica, and countless other tech news sites too numerous to link here all call Larrabee a GPU. In particular I'd like to call your attention to the Ars Technica article: "Intel confirmed that the first Larrabee products will consist of add-in boards aimed at accelerating 3D games—in other words, Larrabee will make its debut as a "GPU."" Ars Technica is a respected and reliable secondary source, published by Condé Nast (owner of Wired, The New Yorker, and Vanity Fair, among others). I'm not drawing conclusions from thin air; I've got sources to spare. Modeless ( talk) 06:59, 9 January 2009 (UTC)
Good God, I imagine this will be the only way of working out if it's a GPU. Though I expect the MK II will be called a CPU by everyone. 86.13.78.220 ( talk) 21:49, 19 February 2009 (UTC)
I just deleted the sections on SIMT and threading. The papers on Larrabee clearly describe a threading model almost identical to that used in current GPUs (especially Nvidia's); the only real difference is in terminology. In Larrabee each core has 4 'threads', but each thread manages 2-10 'fibers' (equivalent to a 'warp' in Nvidia's GPUs), and each fiber contains between 16 and 64 'strands' (equal to what Nvidia calls a 'thread'); each strand executes in a separate SIMD execution lane. This is exactly like Nvidia's architecture, where each warp consists of 32 "threads", each of which executes in a SIMD execution lane. And no, there is nothing 'effectively scalar' about either architecture. (Damn, forgot to sign in.) —Preceding unsigned comment added by 124.186.186.7 ( talk) 06:40, 3 January 2009 (UTC)
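To make the terminology mapping concrete, here is a minimal sketch of the occupancy arithmetic it implies. The specific fiber and strand counts are illustrative picks from the ranges quoted above, not figures from any Intel or Nvidia specification:
<syntaxhighlight lang="python">
# Larrabee term -> rough Nvidia equivalent, per the comment above.
terminology = {
    "thread": "hardware context (no single CUDA name)",
    "fiber":  "warp",
    "strand": "thread",
}

# Illustrative counts chosen from the quoted ranges (assumptions, not specs).
cores             = 32  # example chip-wide core count
threads_per_core  = 4   # hardware contexts per core
fibers_per_thread = 5   # software-scheduled; 2-10 per the SIGGRAPH paper
strands_per_fiber = 16  # one per SIMD lane; 16-64 per the paper

strands = cores * threads_per_core * fibers_per_thread * strands_per_fiber
print(f"strands in flight across the chip: {strands}")  # 10240
</syntaxhighlight>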
There's talk in this article about the scaling of performance with the number of cores, and it is claimed to be linear, or "90% of linear" in the case of 48 cores. This "90% of linear" talk is incoherent. A linear function is one that forms a line, hence the name. More precisely, f(x) is a linear function just in case there is a number n such that f(x) = n*x. 90% is just one possible value for n, and if f(x) is linear then 0.9*f(x) is also linear. What the hell is this article saying at this point? I didn't edit because I presume that something that makes sense was intended and someone who knows about Larrabee will understand this "90% of linear" stuff. I just surfed here looking for some info on Larrabee.
Also, "scaling is linear with number of cores" doesn't mean very much. If each increase in the number of cores by one gives a 10% increase in performance regardless of whether your increasing for one to two of 40 to 41, the relationship between number of cores and performance is linear. philosofool ( talk) 01:46, 9 February 2009 (UTC)
I have altered that bit to read "At 48 cores the performance drops to 90% of what would be expected if the linear relationship continued." Not perfect, but I feel that it is a little more elegant than the original. The last bit that Philosofool talks about is incorrect though. The situation described (adding a core increases performance by 10%) would be a case of exponential scaling. Oh how I wish it were possible! Linear scaling will always produce a situation where doubling the number of cores doubles performance. Mostly scaling will be less than linear, with additional cores giving less and less benefit. 80.6.80.19 ( talk) 23:31, 18 February 2009 (UTC)
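For readers following along, here is a small numeric sketch of the readings under discussion; the per-core throughput of 1.0 is an arbitrary normalization, not a measured figure:
<syntaxhighlight lang="python">
# Linear scaling: n cores give n times the single-core performance.
def linear(n, per_core=1.0):
    return per_core * n

# The reworded claim: performance tracks the linear extrapolation, but at
# 48 cores reaches only 90% of it.
observed_at_48 = 0.9 * linear(48)  # 43.2 "units" instead of 48.0

# The "each added core gives +10%" reading, taken as compounding on the
# previous total, is geometric (exponential) growth, not linear.
def compounding(n):
    return 1.1 ** (n - 1)

print(observed_at_48)              # 43.2
print(round(compounding(48), 1))   # 88.2, far above the linear 48.0
</syntaxhighlight>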
According to [6] and [7], Intel canceled the hardware version of Larrabee and the name would only exist as a software platform. —Preceding unsigned comment added by 174.112.102.18 ( talk) 17:01, 5 December 2009 (UTC)
Should this really be called a GPU? 192.198.151.37 ( talk) 09:38, 8 December 2009 (UTC)
The result of the move request was: move to Larrabee (microarchitecture) Labattblueboy ( talk) 13:45, 10 February 2010 (UTC)
Larrabee (GPU) → Larrabee (Computing Architecture) — Intel refers to Larrabee as an architecture, and has revealed that the first incarnation of Larrabee will not be a discrete GPU. — 86.42.213.196 ( talk) 18:49, 1 February 2010 (UTC)
State your position with *'''Support''' or *'''Oppose''', then sign your comment with ~~~~. Since polling is not a substitute for discussion, please explain your reasons, taking into account Wikipedia's naming conventions.

This article reminds me of all of that Merced hype and how it was supposed to cure cancer, end world hunger, and be so superior to every other CPU architecture that everybody would switch to it before it even came out. This article has pro-Intel marketing speak, like the claim that using x86 cores for simple parallel arithmetic is superior to using dedicated GPU cores (only superior for vendor lock-in), and, just like the Merced hype, parts are written as if Larrabee were already out instead of in terms of speculations and possibilities.
So x86 architecture, cache coherency, and "very little specialized graphics hardware" are supposed to allow "more differentiation in appearance between games or other 3D applications"? That doesn't even make sense! It's like when Intel said that "the Internet was developed on x86," a massive lie! When the Internet was developed, x86 wasn't even around. It's like saying "the Interstate was developed for SMART cars!" —Preceding unsigned comment added by 69.54.60.34 ( talk) 14:09, 31 March 2011 (UTC)
This is directed to FarbrorJoakim with regard to his recent edits to this article. Intel has Knights Ferry, an implementation of the Larrabee architecture, on its roadmaps, scheduled for release at 22 nm this year or next. There has been no announcement whatsoever that the Larrabee project has been cancelled. Please stop making large-scale edits that are based on assumptions and that supply no references. 83.226.206.82 ( talk) 01:32, 29 May 2011 (UTC)