From mboxrd@z Thu Jan 1 00:00:00 1970
From: Alan Goodman
Date: Sun, 06 Jul 2014 16:49:45 +0000
Subject: Re: HFSC not working as expected
Message-Id: <53B97E29.9020404@yescomputersolutions.com>
List-Id:
References: <53AC30A8.2080403@yescomputersolutions.com>
In-Reply-To: <53AC30A8.2080403@yescomputersolutions.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
To: lartc@vger.kernel.org

Thanks Andy,

I have been playing around a bit and may have been slightly quick to
comment in regard to download...

With hfsc engaged and the total limit set to 17100kbit, the actual
throughput I see is closer to 14mbit for some reason.

No traffic shaping:
http://www.thinkbroadband.com/speedtest/results.html?id0466829057682641064

hfsc->sfq perturb 10:
http://www.thinkbroadband.com/speedtest/results.html?id0466829057682641064

On 06/07/14 17:42, Andy Furniss wrote:
> If you have the choice of pppoa vs pppoe why not use pppoa so you can
> use overhead 10 and be more efficient for upload.
>
> The 88.2 thing is not atm rate, they do limit slightly below sync,
> but that is a marketing (inexact) approximate ip rate.
>
> If you were really matching their rate after allowing for overheads
> your incoming shaping would do nothing at all.

My understanding is that they limit the BRAS profile to 88.2% of your
downstream sync rate to prevent traffic backing up in the exchange links.

>> This works great in almost every test case except 'excessive p2p'. As
>> a test I configured a 9mbit RATE and an upper limit m2 of 10mbit on
>> my bulk class. I then started downloading a CentOS torrent with a
>> very high maximum connection limit set. I see 10mbit coming in on my
>> ppp0 interface; however, roundtrip latency in my priority queue (sc
>> umax 1412b dmax 20ms rate 460kbit) is hitting 100+ms. Below is a
>> clip from a ping session which shows what happens when I pause the
>> torrent download.
> Shaping from the wrong end of the bottleneck is not ideal, if you
> really care about latency you need to set a lower limit for bulk and
> a short queue length.
>
> As you have found, hitting it hard with many connections is the worst
> case.

Are you saying that in addition to setting the 10mbit upper limit I
should also set the sfq limit to, say, 25 packets?

Alan
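For what it's worth, a minimal sketch of how that combination might look
with tc. The device name, handles and classids below are purely
illustrative (nothing in this thread states the actual hierarchy), and
the rates are just the figures discussed above:

```shell
#!/bin/sh
# Hypothetical setup -- adjust DEV, classids and rates to your own config.
DEV=ppp0

# Bulk class: pull the upper limit (ul m2) down below the tested 10mbit,
# e.g. to 9mbit, so the queue builds on this box rather than upstream.
tc class change dev $DEV parent 1:1 classid 1:20 hfsc \
    ls m2 9000kbit ul m2 9000kbit

# Short sfq on the bulk class: limit 25 caps the backlog at 25 packets,
# bounding worst-case queueing delay when many p2p flows arrive at once.
tc qdisc change dev $DEV parent 1:20 handle 20: sfq perturb 10 limit 25
```

Both knobs matter together: the lower ul keeps the bottleneck local,
and the short sfq limit keeps the local queue from absorbing the saved
latency.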