From: Grant Taylor <gtaylor@tnetconsulting.net>
To: lartc@vger.kernel.org
Subject: Re: tc question about ingress bandwidth splitting
Date: Sun, 22 Mar 2020 22:59:37 +0000
Message-ID: <7d249453-9469-68be-5cae-a78e1df3e053@spamtrap.tnetconsulting.net>
In-Reply-To: <74CFEE65-9CE8-4CF7-9706-2E2E67B24E08@redfish-solutions.com>

On 3/22/20 3:56 PM, Philip Prindeville wrote:
> Hi all,

Hi Philip,

> The uplink is G.PON 50/10 mbps.

Aside:  /Gigabit/ PON serving 50 / 10 Mbps.  ~chuckle~

> I’d like to cap the usage on “guest” to 10/2 mbps.  Any unused 
> bandwidth from “guest” goes to “production”.

Does any of production's unused bandwidth go to guest?  Or is guest hard 
capped at 10 & 2?

> I thought about marking the traffic coming in off “wan" (the public 
> interface).

One of the most important lessons that I remember about QoS is that you 
can only /effectively/ limit what you send.

Read:  You can't limit what is sent down your line to your router.

Further read:  You will still receive more down your line than the 10 
that you limit guest to, but you can limit what you forward to guest to 
10 & 2.

> Then using HTB to have a 50 mbps cap at the root, and allocating 10mb/s 
> to the child “guest”.  The other sibling would be “production”, 
> and he gets the remaining traffic.
> 
> Upstream would be the reverse, marking ingress traffic from “guest” 
> with a separate tag.  Allocating upstream root on “wan” with 10 
> mbps, and the child “guest” getting 2 mbps.  The remainder goes 
> to the sibling “production”.

It's been 15+ years since I've done much with designing QoS trees.  I'm 
sure that things have changed since the last time I looked at them.

> Should be straightforward enough, right? (Well, forwarding is more 
> straightforward than traffic terminating on the router itself, 
> I guess… bonus points for getting that right, too.)

As they say, the devil is in the details.

Conceptually, it's simple enough.  But the particulars of the execution 
are going to take effort.

> I’m hoping that the limiting will work adequately so that the 
> end-to-end path has adequate congestion avoidance happening, and that 
> upstream doesn’t overrun the receiver and cause a lot of packets to 
> be dropped on the last hop (worst case of wasted bandwidth).

(See further read above.)

> Not sure if I need special accommodations for bursting or if that 
> would just delay the “settling” of congestion avoidance into 
> steady-state.

Well, if the connection is a hard 50 & 10, there's nothing that can 
burst over that.

The last time I dealt with bursting, I found that it was a lot of 
effort for minimal return.  Further, I was able to get quite similar 
results by allowing production and guest to use the bandwidth that the 
other didn't use, which was considerably simpler to set up.

The bursting I used in the past was bucket based (I don't remember the 
exact QoS term) where the bucket filled at the defined rate, and could 
empty its contents as fast as they could be taken out.  So if the 
bucket was 5 gallons, then a burst at line rate of up to 5 gallons was 
possible.  Then it became a matter of how big the bucket needed to be: 
5 gallons, 55 gallons, 1000 gallons, etc.
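
If memory serves, that bucket behavior is what tc's tbf qdisc (a token 
bucket filter) implements, and HTB classes have a similar burst / cburst 
knob.  A rough, untested sketch, with the interface name and numbers 
purely illustrative:

  # Tokens accumulate at "rate"; an idle queue can then dump up to
  # "burst" bytes at line rate before being throttled back to "rate".
  tc qdisc add dev eth0 root tbf rate 10mbit burst 64kb latency 50ms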

I found that guaranteeing each class a specific amount of bandwidth, 
and allowing the unused bandwidth to be used by other classes, was 
simpler and just as effective.

Read:  The speed of a burst, without the complexity, and better (more 
consistent) use of the bandwidth.  Remember, if the bandwidth isn't 
used, it's gone, wasted, so why not let someone use it?
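
In tc terms, that's HTB's "rate" (the guarantee) and "ceil" (how far a 
class may borrow when the other class is idle).  A rough, untested 
sketch, with eth0 standing in for whichever egress interface the tree 
hangs off of:

  # Each class is guaranteed its "rate" and may borrow up to "ceil"
  # (the full 50 Mbps) whenever the other class isn't using it.
  tc qdisc add dev eth0 root handle 1: htb default 20
  tc class add dev eth0 parent 1:  classid 1:1  htb rate 50mbit ceil 50mbit
  tc class add dev eth0 parent 1:1 classid 1:10 htb rate 10mbit ceil 50mbit   # guest
  tc class add dev eth0 parent 1:1 classid 1:20 htb rate 40mbit ceil 50mbit   # production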

> Also not sure if ECN is worth marking at this point.  Congestion 
> control is supposed to work better than congestion avoidance, right?

If I could relatively easily mark things with ECN, I would.  But I don't 
know how valuable ECN really is.  I've not looked in 10+ years, and the 
last time I did, I didn't find much that was actually utilizing it.

> Anyone know what the steps would look like to accomplish the above?

It is going to be highly dependent on what you want to do and what your 
device is capable of.

I have an idea of what I would do if I were to implement this on a 
standard Linux machine functioning as the router.

1st:  Address the fact that you can only effectively rate limit what 
you send.  So, change the problem so that you rate limit what is sent 
to your router.  I would do this by terminating the incoming connection 
in a Network Namespace and adding a new virtual connection from that 
namespace to the main part of the router.  The Network Namespace can 
then easily rate limit what it sends to the main part of the router, on 
a single interface.

              +------------------------+
(Internet)---+-eth5  router  eth{0,1}-+---(LAN)
              +------------------------+

              +--------------------+-------------------------+
(Internet)---+-eth5  NetNS  veth0=|=veth5  router  eth{0,1}-+---(LAN)
              +--------------------+-------------------------+

This has the advantage that the QoS tree in the NetNS only needs to deal 
with sending on one interface, veth0.

This has the added advantage that the QoS tree won't be applied to 
traffic between production and guest.  (Or you don't need to make the 
QoS tree /more/ complex to account for this.)
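
Roughly, and completely untested (names are made up; addressing, 
routing, and NAT inside the namespace are left as an exercise):

  ip netns add wan-ns
  ip link add veth0 type veth peer name veth5   # veth0 ends up in wan-ns, veth5 stays in the router
  ip link set veth0 netns wan-ns
  ip link set eth5 netns wan-ns                 # move the WAN NIC into the namespace
  ip netns exec wan-ns ip link set eth5 up
  ip netns exec wan-ns ip link set veth0 up
  ip link set veth5 up
  # The download-direction QoS tree then attaches to veth0 inside wan-ns;
  # the 10 Mbps upload tree can attach to eth5 inside wan-ns (or to veth5
  # in the main router, which shapes what gets sent toward the namespace).
  ip netns exec wan-ns tc qdisc add dev veth0 root handle 1: htb default 20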

2nd:  Don't worry about bucketing.  Define a minimum that each traffic 
class is guaranteed to get if it uses it.  Then allow the other traffic 
class to use whatever bandwidth the first traffic class did not use.

Why limit guest to 10 Mbps if production is only using 5 Mbps?  That's 
35 Mbps of available download that's wasted.
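
The last piece is steering traffic into the right class.  Assuming the 
rate / ceil split from above hangs off veth0 inside the namespace, and 
with a made-up guest subnet (a fwmark set by iptables plus the "fw" 
classifier would work just as well, which sounds like what you had in 
mind):

  # Guest-bound traffic goes to class 1:10; everything else falls
  # through to the default class 1:20 (production).
  ip netns exec wan-ns tc filter add dev veth0 parent 1: protocol ip \
      prio 1 u32 match ip dst 192.168.2.0/24 flowid 1:10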

3rd:  The nature of things, TCP in particular, is to keep bumping into 
the ceiling.  So if you artificially lower the ceiling, traffic coming 
in /will/ go over the limit.  Conversely, the circuit is limited at 50 
Mbps inbound.  That limit is enforced by the ISP.  There is no way that 
the traffic can go over it.

> A bunch of people responded, “yeah, I’ve been wanting to do that 
> too…” when I brought up my question, so if I get a good solution 
> I’ll submit a FAQ entry.

Cool.

> Thanks,

You're welcome.

Good luck.



-- 
Grant. . . .
unix || die



