This is the talk page for discussing improvements to the Network scheduler article. This is not a forum for general discussion of the article's subject.
Article policies
Find sources: Google (books · news · scholar · free images · WP refs) · FENS · JSTOR · TWL
This article is rated C-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
The contents of the Queuing discipline page were merged into Network scheduler in September 2014. For the contribution history and old versions of the redirected page, please see its history; for the discussion at that location, see its talk page.
This appears to be a Linux-specific component and term. Recent edits have created links from general topics to this page. Those edits need to be reverted, or this article needs to be made and referenced more generally. I'm not sure the latter is possible, as I am not aware of "network scheduler" having widespread use in describing queue and congestion management topics. Wikipedia is not the place to create new terminology. ~ KvnG 13:15, 14 September 2013 (UTC)
My problem is that there is absolutely no effective difference between the leaky and token bucket algorithms: they are exact mirror images of one another. In one, a conforming packet/frame/cell adds to the bucket content if it will not overflow, and the content leaks away over time; in the other, a conforming packet/frame/cell removes from the bucket content if it will not underflow, and the content is dripped in over time. Hence, at each point, an exactly equal and opposite action is taken. The only reasonably credible place I can find where a distinction is made is in Tanenbaum's Computer Networks, and I'm afraid that is simply wrong in this instance.
If there's any confusion over this issue, I suggest looking at the leaky bucket page, specifically at the section on the leaky bucket as a queue, which is all that Tanenbaum describes, despite referencing Turner's original description of the leaky bucket algorithm, which is clearly of a leaky bucket as a meter. The claim that a traffic policing or shaping function using the leaky bucket algorithm is incapable of passing traffic with a burstiness greater than zero is simply untrue. This is clearly so for the GCRA, which is an implementation of the LBA and used in ATM UPC/NPC, in some cases as a dual bucket (where the burstiness of one bucket specifically determines the number of packet/frame/cells in the higher layer message/packet length), and burstiness is specifically mentioned in Turner's original description: "A counter associated with each user transmitting on a connection is incremented whenever the user sends a packet and is decremented periodically. If the counter exceeds a threshold upon being incremented, the network discards the packet. The user specifies the rate at which the counter is decremented (this determines the average bandwidth) and the value of the threshold (a measure of burstiness)".
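The mirror-image relationship described above can be sketched in a few lines. This is a minimal, hypothetical illustration (function and parameter names are my own, not from any cited source): with a leaky-bucket fill level defined as capacity minus the token-bucket token count, both meters accept and reject exactly the same packets.

```python
def leaky_bucket_conforms(level, capacity, leak_rate, now, last, cost):
    """Leaky bucket as a meter: content leaks out over time; a
    conforming packet adds 'cost' if that would not overflow."""
    level = max(0.0, level - leak_rate * (now - last))  # drain since last check
    if level + cost > capacity:
        return False, level, now  # non-conforming: bucket would overflow
    return True, level + cost, now

def token_bucket_conforms(tokens, capacity, fill_rate, now, last, cost):
    """Token bucket: tokens drip in over time; a conforming packet
    removes 'cost' if that would not underflow."""
    tokens = min(capacity, tokens + fill_rate * (now - last))  # refill since last check
    if tokens < cost:
        return False, tokens, now  # non-conforming: bucket would underflow
    return True, tokens - cost, now
```

Running both meters over the same packet arrivals with the same capacity and rate yields identical conformance decisions, and the invariant `level == capacity - tokens` holds throughout, which is the "equal and opposite action" point made above.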
There is, obviously, a difference between traffic shaping and traffic policing, and some do seem to apply the token bucket to policing and the leaky bucket to shaping. But I don't know if that is what is meant in the article.
Graham.Fountain | Talk 11:15, 12 September 2014 (UTC)
There have been no arguments that a genuine difference is correctly identified, and since the page is actively being edited, any such arguments should have come out by now if they existed. So I have now taken out the statement about the difference between the token and leaky buckets. Graham.Fountain | Talk 10:05, 22 September 2014 (UTC)
user:Dsimic reverted user:Graham.Fountain's merge of Queuing discipline into Network scheduler, claiming the proper procedure was not followed. Proper merge procedure allows for WP:BOLD action. Is the objection to this action purely procedural, or is there opposition to the merge itself? My understanding is that Queuing discipline is simply the Linux lingo for different Network scheduler behaviors, so a merge is reasonable. ~ KvnG 13:50, 16 September 2014 (UTC)
Sorry for jumping the gun, Dsimic; I had assumed that if you had any objection on the merits, you would have mentioned it when you undid the merger. I agree with the above arguments in favor of the merger; all of these algorithms and any performance metrics are of interest to users of all operating systems. The fact that one OS implements some of them is interesting to have mentioned, but having two separate lists of algorithms is more confusing than helpful for readers. One of the main yardsticks I use is how much the articles overlap. "Queueing discipline" as it stands is nearly 100% contained in "Network scheduler", and certainly would be if the latter were filled in properly. I agree this article is too short to need splitting along any lines, whether by OS or by subtopic. -- Beland ( talk) 15:00, 17 September 2014 (UTC)
Merged. -- Beland ( talk) 20:46, 29 September 2014 (UTC)
So, which algorithm does the Linux kernel implement by default? After reading the "Linux kernel" section, I'm still unclear on that. -- Beland ( talk) 15:02, 17 September 2014 (UTC)
I removed this explanation from the article because it is unreferenced and sounds dubious:
The number and purpose of queues depends on the algorithm being implemented. For example, fair queuing has multiple packet flows, but simple FIFO doesn't.
It also seems that the difference between actual queues (removing a packet from the network interface's circular receive buffer and adding it to a queue data structure on the heap) and virtual queues (some heap structure tracks the ticket numbers or other attributes of packets in the circular receive buffer without moving them until their turn to be transmitted) is an implementation detail we can't really generalize about. -- Beland ( talk) 23:40, 18 September 2014 (UTC)
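For what it's worth, the structural distinction the removed text was gesturing at can be shown in a toy sketch (entirely hypothetical, not from the article): simple FIFO keeps one queue for all traffic, while fair queuing keeps one queue per flow and serves the active flows round-robin.

```python
from collections import deque

class FIFO:
    """Single queue; flow identity is ignored."""
    def __init__(self):
        self.q = deque()
    def enqueue(self, flow, pkt):
        self.q.append(pkt)
    def dequeue(self):
        return self.q.popleft() if self.q else None

class FairQueuing:
    """One queue per flow, served round-robin."""
    def __init__(self):
        self.flows = {}       # flow id -> per-flow queue
        self.order = deque()  # round-robin order of active flows
    def enqueue(self, flow, pkt):
        if flow not in self.flows:
            self.flows[flow] = deque()
            self.order.append(flow)
        self.flows[flow].append(pkt)
    def dequeue(self):
        if not self.order:
            return None
        flow = self.order.popleft()
        pkt = self.flows[flow].popleft()
        if self.flows[flow]:
            self.order.append(flow)  # flow still active; rejoin the rotation
        else:
            del self.flows[flow]     # flow drained; drop its queue
        return pkt
```

With three packets from flow A queued ahead of one from flow B, FIFO drains A's backlog first, while the fair queuing sketch interleaves B's packet, which is exactly the multi-queue behavior the removed sentence alluded to (whether those per-flow queues are "actual" or "virtual" is, as noted, an implementation detail).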
After some discussions on the Talk:Fair queuing#Family of Algorithms page, the need arises to re-organise a set of related pages. First, there is a set of related protocols: fair queuing, weighted fair queuing, deficit round robin, Generalized_processor_sharing, and perhaps others. Roughly speaking, they all implement the same idea, with different trade-offs. Second, there is a set of global principles: traffic shaping, quality of service, Fairness_measure.
My suggestion, following John Nagle's, would be to use the current Network scheduler as the top page, mentioning the problem of different flows sharing the same queue: how to protect some flows from others? How to handle different requirements (latency vs throughput)? Then, two approaches can be distinguished:
Any comment? MarcBoyerONERA ( talk) 04:59, 22 April 2015 (UTC)
Being totally unfamiliar with Wikipedia and feeling a need to strengthen some terms, I have made a resolution to learn how. However, I am a primary source, one of the primary experts, and having a more experienced Wikipedia editor along for the ride (topics include fq_codel, flow queuing, "smart queue management", Apple's new "RPM" tests, and a dozen others) would help a lot. ~ Dave Taht Dtaht ( talk) 01:28, 4 October 2021 (UTC)