This is the talk page for discussing improvements to the Direct memory access article. This is not a forum for general discussion of the article's subject.
Article policies
Find sources: Google (books · news · scholar · free images · WP refs) · FENS · JSTOR · TWL
This article is rated C-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
The PCIe section doesn't make much sense. It contains a single sentence, "PCI Express uses DMA. The DMA engine appears as another function on the upstream post with a TYPE 0 configuration header." First of all, obviously "port" is meant, not "post". But PCI Express doesn't use DMA and devices don't need to implement DMA to comply with PCI Express; PCI Express is a protocol which can be used for DMA. DMA engines aren't PCIe functions. Depending on architecture there may be a one-to-one, many-to-one, or one-to-many correspondence between DMA engines and PCIe channels of a device. For example, in an SR-IOV device, there could be many virtual Functions sharing one (or a few) engine(s).
I'm not sure whether the section should be replaced with something more informative, or deleted altogether. 198.70.193.2 ( talk) 17:44, 8 April 2010 (UTC)
"in CPU utilization with receiving workloads, and no improvement when transmitting data.[4]" The source cited seems to indicate the improvements are more complex than simple CPU utilization measurements indicate. In particular, this seems relevant: "This data shows that I/OAT really benefits from larger application buffer sizes. There is a CPU spike at 2K, although also increased throughput," which seems to indicate that I/OAT is enabling greater throughput and CPU utilization with buffers <2K. —Preceding unsigned comment added by 68.50.112.195 ( talk) 18:14, 30 March 2009 (UTC)
This section was either lifted directly from http://www.avsmedia.com/OnlineHelp/DVDCopy/Appendix/dma.aspx, or vice versa. —Preceding unsigned comment added by 74.229.8.169 ( talk) 11:31, 11 January 2008 (UTC)
I've removed this sentence because it makes no sense to me.
How can an application outperform cache? Did you mean an application (implementation) of DMA? If so, perhaps the term "implementation" should be used, because "application" certainly reminds of the concept of a software application. Even then, can DMA outperform cache? Aren't we comparing apples to oranges, or at least aren't we unless the context is made clearer?
LjL 21:59, 28 Apr 2005 (UTC)
Question: Is UDMA related to DMA? — Preceding unsigned comment added by 88.105.167.50 ( talk) 00:07, 23 May 2006 (UTC)
Yes. UDMA is an advanced DMA for hard disks and CD/DVD drives. — Preceding unsigned comment added by 82.155.155.251 ( talk) 02:33, 1 July 2006 (UTC)
Can someone explain this to me? How is it that the process of accessing I/O devices can be slower than normal system RAM? Did the author mean slower than accessing RAM? Exactly what is doing the copying, and from where to where? And which process is slower? Dudboi 12:21, 5 November 2006 (UTC)
Note that I'm not very knowledgeable about low-level hardware interaction, but I found two out of the three bullets in the "counterexamples" section dubious:
Not entirely true – at least AGP/PCIe graphics cards these days come with an IOMMU (GART). Not sure about other kinds of devices.
Is it really "so expensive on a PC" or does it just have more overhead considering today's processing power? I don't know about 2D hardware, but mapping textures to surfaces is very trivial in 3D hardware, these days.
This bullet says "later in history" – but later than what? -- intgr 16:09, 5 November 2006 (UTC)
I don't think the strcpy example was a particularly good one about DMA engines, and I don't think mention of DMA engines deserves a place in the lead section, either. strcpy is a particularly problematic function because:
I can see, however, that DMA engines can be beneficial when copying large buffers, or when building, for example, network packets within the kernel. And indeed, such copies are not currently offloaded since today's computers lack such a device. Thus I've created a new section, 'DMA engines', for this. It's still a stub, though; Intel's I/OAT certainly deserves a mention. -- intgr 17:12, 12 November 2006 (UTC)
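As an aside, a toy sketch (the helper name below is hypothetical) of why a strcpy-style copy resists offloading while a memcpy-style bulk copy does not: the byte count is only discoverable by scanning for the terminating NUL, so the work cannot be described to an engine up front:

```python
def strcpy_copy_length(src: bytes) -> int:
    """Number of bytes a C strcpy would move (including the NUL).

    The length is unknown until the string is scanned for the
    terminator, so it can't be handed to a DMA/offload engine
    ahead of time -- unlike memcpy, whose byte count is a
    ready-made transfer descriptor.
    """
    return src.index(b"\x00") + 1

n = strcpy_copy_length(b"hello\x00trailing junk")  # "hello" plus the NUL
```

This is a sketch of the reasoning only; a real offload path would of course work on physical buffers, not Python bytes.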
This seems like a bad idea to me. How would the CPU know when the memory is being written - what if the device is in the middle of updating a couple of KB of data in memory and the CPU reads off the whole range and gets half the new values and half the old values? Why not just have a dedicated component on the CPU for data throughput that shares the CPU clock and makes the appropriate information available to memory protection systems? -- froth T C 17:20, 28 November 2006 (UTC)
"A modern x86 CPU may use more than 4 GiB of memory, utilizing PAE, a 64-bit addressing mode. In such case, a device using DMA with 32-bit address bus is unable to address the memory above 4 GiB line."
I don't understand what the phrase "32-bit address bus" means in the context above. Is this referring to the device's own addressing ability, or some bus external to the device, or something else? -- AzzAz ( talk) 20:27, 28 February 2008 (UTC)
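The limit in question is usually the device's own addressing width: the widest physical address its DMA engine can emit. Operating systems model this as a per-device "DMA mask" (Linux exposes it via dma_set_mask(), and falls back to bounce buffers or an IOMMU when memory is out of reach). A toy sketch of the check, with hypothetical names:

```python
DMA_MASK_32BIT = 0xFFFF_FFFF  # a device that can drive only 32 address lines

def needs_bounce_buffer(phys_addr: int, length: int,
                        dma_mask: int = DMA_MASK_32BIT) -> bool:
    """True if any byte of [phys_addr, phys_addr + length) lies above
    what the device can address, so the OS must copy through a
    buffer placed below the limit (or remap via an IOMMU)."""
    return phys_addr + length - 1 > dma_mask

low  = needs_bounce_buffer(0x8000_0000, 4096)    # page entirely below 4 GiB
high = needs_bounce_buffer(0x1_0000_0000, 4096)  # page starts above 4 GiB
```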
So DMA channels are ISA-specific, right? In other words they would not be applicable to PCI, since any PCI device can bus-master? Also, what is the "Direct Memory Access Controller" shown with Channel 4 in msinfo32.exe on Windows? -- AzzAz ( talk) 20:27, 28 February 2008 (UTC)
Can someone please create a history section for this? I am curious as to when DMA came into existence. sweecoo ( talk) 21:41, 12 November 2008 (UTC)
The article could explain what memory types can be read or written with DMA. For example does it work to/from flash memory?-- 85.78.29.213 ( talk) 08:38, 13 January 2011 (UTC)
DMA controllers don't do checksum calculations; if it was that smart, it would be an IO processor, not a simple DMA controller. Need to talk about bus mastering and cycle stealing; in the PCish world, you can either use the DMA controller on board or become bus master and write to memory directly. ( I can kind of see how this would have worked in the old days, but have no idea how modern designs do this.) -- Wtshymanski ( talk) 02:38, 23 August 2011 (UTC)
We have a diagram to explain cache coherency but not one that explains how DMA works. This could be two panels; first panel shows CPU doing reads/writes to IO device, and data passing through a CPU register to/from memory. Second panel shows CPU doing something else and DMA controller doing the transfers. Rainy day project for me if I can't find one on Commons. -- Wtshymanski ( talk) 13:58, 30 March 2012 (UTC)
A diagram is needed for DMA. Lingeshwaram ( talk) 15:19, 9 November 2022 (UTC)
I added a wikilink from scatter/gather to vectored I/O. Are these two terms indeed referring to the same thing? — Preceding unsigned comment added by Jimw338 ( talk • contribs) 16:03, 8 August 2012 (UTC)
Vectored I/O refers to the readv() and writev() calls in UN*Xes and the ReadFileScatter() and WriteFileGather() calls in Windows. Those calls might be implemented using scatter/gather at the hardware I/O layer, but there may be other places where scatter/gather at the hardware I/O layer is used. Guy Harris ( talk) 23:13, 17 August 2022 (UTC)
The article talks about modern architectures but still refers to the northbridge and the southbridge. In modern architectures only a single "hub" is left. — Preceding unsigned comment added by 94.175.83.222 ( talk) 22:20, 22 October 2014 (UTC)
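For a concrete sense of the vectored-I/O calls named above, here is a minimal POSIX-only sketch using Python's os.writev()/os.readv() wrappers: one gather write of two source buffers, then one scatter read back into two destination buffers. Whether the kernel maps this onto hardware scatter/gather underneath is up to the driver:

```python
import os

r, w = os.pipe()

# Gather: two separate source buffers, a single writev() call.
written = os.writev(w, [b"hello ", b"world"])

# Scatter: a single readv() call fills two destination buffers in order.
a, b = bytearray(6), bytearray(5)
nread = os.readv(r, [a, b])

os.close(r)
os.close(w)
```

The 11 bytes written land split across `a` and `b` without any intermediate contiguous copy at the application level.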
We don't need a lot of undue emphasis on "DMA Attacks" in the main text, do we? We might as well write that the keyboard is a security vulnerability because it allows the user to delete files, alter system parameters, or even shut down the computer. There's no protecting a computer from random hardware plugged into it. -- Wtshymanski ( talk) 05:52, 4 November 2018 (UTC)
This article was the subject of an educational assignment at the Department of Electronics and Telecommunication, College of Engineering, Pune, India, supported by Wikipedia Ambassadors through the India Education Program during the 2011 Q3 term. Further details are available on the course page.
The above message was substituted from {{IEP assignment}} by PrimeBOT ( talk) on 19:58, 1 February 2023 (UTC)