* [PATCH net-next] bonding: correct rr balancing during link failure
@ 2020-12-02 20:55 Lars Everbrand
From: Lars Everbrand @ 2020-12-02 20:55 UTC (permalink / raw)
To: linux-kernel
Cc: Jay Vosburgh, Veaceslav Falico, Andy Gospodarek, David S. Miller,
Jakub Kicinski, netdev
This patch updates the sending algorithm for round-robin mode to avoid
over-subscribing an interface when one or more interfaces in the bond
are unable to send packets. This happened when the order was not random
and more than 2 interfaces were used.

Previously, when an interface failed to send, the algorithm would scan
forward for the next available interface, which is most often
current_interface + 1. The problem is that when the next packet is to
be sent, the "normal" algorithm continues with interface++, which hits
that same interface again.

This patch updates the resending algorithm to also advance the global
counter that selects the next interface.
Example (prior to patch):
Consider 6 x 100 Mbit/s interfaces in a rr bond. The normal order of links
being used to send would look like:
1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 ...
If, for instance, interface 2 were unable to send, the order would have been:
1 3 3 4 5 6 1 3 3 4 5 6 1 3 3 4 5 6 ...
The resulting speed (for TCP) would then become:
50 + 0 + 100 + 50 + 50 + 50 = 300 Mbit/s
instead of the expected 500 Mbit/s.
If interface 3 also failed, the resulting speed would be roughly half of
the expected 400 Mbit/s (33 + 0 + 0 + 100 + 33 + 33 ≈ 200).
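The skewed schedule above can be reproduced with a small userspace model of the counter logic (a sketch of the scheme only, not the driver code; all names here are made up):

```c
#include <assert.h>
#include <string.h>

#define NUM_SLAVES 6

/* Stand-in for bond_slave_can_tx(): slave index `dead` cannot send. */
static int can_tx(int s, int dead)
{
	return s != dead;
}

/* Old behavior: on tx failure, scan forward for the next usable slave,
 * but leave the global counter untouched -- so the substitute slave is
 * selected again by the very next packet (counter + 1 lands on it). */
void schedule_old(int dead, int *out, int n)
{
	unsigned int counter = 0;

	for (int i = 0; i < n; i++) {
		int s = counter++ % NUM_SLAVES;

		while (!can_tx(s, dead))
			s = (s + 1) % NUM_SLAVES;
		out[i] = s + 1;	/* 1-based, as in the example above */
	}
}

/* Patched idea: bump the counter for every slave skipped, so the dead
 * slave's slot is consumed and the substitute is not hit twice. */
void schedule_patched(int dead, int *out, int n)
{
	unsigned int counter = 0;

	for (int i = 0; i < n; i++) {
		int s = counter++ % NUM_SLAVES;

		while (!can_tx(s, dead)) {
			s = (s + 1) % NUM_SLAVES;
			counter++;
		}
		out[i] = s + 1;
	}
}
```

With slave 2 dead, the old scheme produces 1 3 3 4 5 6 repeating, while the patched scheme produces 1 3 4 5 6 and then wraps, never double-hitting slave 3.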
Signed-off-by: Lars Everbrand <lars.everbrand@protonmail.com>
---
drivers/net/bonding/bond_main.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index e0880a3840d7..e02d9c6d40ee 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -4107,6 +4107,7 @@ static struct slave *bond_get_slave_by_id(struct bonding *bond,
if (--i < 0) {
if (bond_slave_can_tx(slave))
return slave;
+ bond->rr_tx_counter++;
}
}
@@ -4117,6 +4118,7 @@ static struct slave *bond_get_slave_by_id(struct bonding *bond,
break;
if (bond_slave_can_tx(slave))
return slave;
+ bond->rr_tx_counter++;
}
/* no slave that can tx has been found */
return NULL;
--
2.29.2
* Re: [PATCH net-next] bonding: correct rr balancing during link failure
From: Jakub Kicinski @ 2020-12-05 19:45 UTC (permalink / raw)
To: Lars Everbrand
Cc: linux-kernel, Jay Vosburgh, Veaceslav Falico, Andy Gospodarek,
David S. Miller, netdev
On Wed, 02 Dec 2020 20:55:57 +0000 Lars Everbrand wrote:
> This patch updates the sending algorithm for round-robin mode to avoid
> over-subscribing an interface when one or more interfaces in the bond
> are unable to send packets. This happened when the order was not random
> and more than 2 interfaces were used.
>
> Previously, when an interface failed to send, the algorithm would scan
> forward for the next available interface, which is most often
> current_interface + 1. The problem is that when the next packet is to
> be sent, the "normal" algorithm continues with interface++, which hits
> that same interface again.
>
> This patch updates the resending algorithm to also advance the global
> counter that selects the next interface.
>
> Example (prior to patch):
>
> Consider 6 x 100 Mbit/s interfaces in a rr bond. The normal order of links
> being used to send would look like:
> 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 ...
>
> If, for instance, interface 2 were unable to send, the order would have been:
> 1 3 3 4 5 6 1 3 3 4 5 6 1 3 3 4 5 6 ...
>
> The resulting speed (for TCP) would then become:
> 50 + 0 + 100 + 50 + 50 + 50 = 300 Mbit/s
> instead of the expected 500 Mbit/s.
>
> If interface 3 also failed, the resulting speed would be roughly half of
> the expected 400 Mbit/s (33 + 0 + 0 + 100 + 33 + 33 ≈ 200).
>
> Signed-off-by: Lars Everbrand <lars.everbrand@protonmail.com>
Thanks for the patch!
Looking at the code in question it feels a little like we're breaking
abstractions if we bump the counter directly in get_slave_by_id.
For one thing when the function is called for IGMP packets the counter
should not be incremented at all. But also if packets_per_slave is not
1 we'd still be hitting the same leg multiple times (packets_per_slave
/ 2). So it seems like we should round the counter up somehow?
For IGMP maybe we don't have to call bond_get_slave_by_id() at all,
IMHO, just find first leg that can TX. Then we can restructure
bond_get_slave_by_id() appropriately for the non-IGMP case.
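A minimal sketch of the "round up" idea (a hypothetical helper, not something in the driver): after a dead leg is skipped, the counter would be rounded up to the next multiple of packets_per_slave, so the substitute leg does not also absorb the dead leg's remaining per-slave packet budget.

```c
#include <assert.h>

/* Hypothetical helper: round the rr counter up to the next multiple of
 * packets_per_slave. With per_slave == 1 the counter is already aligned
 * and is returned unchanged. */
unsigned int round_up_counter(unsigned int counter, unsigned int per_slave)
{
	if (per_slave <= 1)
		return counter;
	return ((counter + per_slave - 1) / per_slave) * per_slave;
}
```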
> diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
> index e0880a3840d7..e02d9c6d40ee 100644
> --- a/drivers/net/bonding/bond_main.c
> +++ b/drivers/net/bonding/bond_main.c
> @@ -4107,6 +4107,7 @@ static struct slave *bond_get_slave_by_id(struct bonding *bond,
> if (--i < 0) {
> if (bond_slave_can_tx(slave))
> return slave;
> + bond->rr_tx_counter++;
> }
> }
>
> @@ -4117,6 +4118,7 @@ static struct slave *bond_get_slave_by_id(struct bonding *bond,
> break;
> if (bond_slave_can_tx(slave))
> return slave;
> + bond->rr_tx_counter++;
> }
> /* no slave that can tx has been found */
> return NULL;
* Re: [PATCH net-next] bonding: correct rr balancing during link failure
From: Jay Vosburgh @ 2020-12-08 21:46 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Lars Everbrand, linux-kernel, Veaceslav Falico, Andy Gospodarek,
David S. Miller, netdev
Jakub Kicinski <kuba@kernel.org> wrote:
>On Wed, 02 Dec 2020 20:55:57 +0000 Lars Everbrand wrote:
>> This patch updates the sending algorithm for round-robin mode to avoid
>> over-subscribing an interface when one or more interfaces in the bond
>> are unable to send packets. This happened when the order was not random
>> and more than 2 interfaces were used.
>>
>> Previously, when an interface failed to send, the algorithm would scan
>> forward for the next available interface, which is most often
>> current_interface + 1. The problem is that when the next packet is to
>> be sent, the "normal" algorithm continues with interface++, which hits
>> that same interface again.
>>
>> This patch updates the resending algorithm to also advance the global
>> counter that selects the next interface.
>>
>> Example (prior to patch):
>>
>> Consider 6 x 100 Mbit/s interfaces in a rr bond. The normal order of links
>> being used to send would look like:
>> 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6 ...
>>
>> If, for instance, interface 2 were unable to send, the order would have been:
>> 1 3 3 4 5 6 1 3 3 4 5 6 1 3 3 4 5 6 ...
>>
>> The resulting speed (for TCP) would then become:
>> 50 + 0 + 100 + 50 + 50 + 50 = 300 Mbit/s
>> instead of the expected 500 Mbit/s.
>>
>> If interface 3 also failed, the resulting speed would be roughly half of
>> the expected 400 Mbit/s (33 + 0 + 0 + 100 + 33 + 33 ≈ 200).
Are these bandwidth numbers from observation of the actual
behavior? I'm not sure the real system would behave this way; my
suspicion is that it would increase the likelihood of drops on the
overused slave, not that the overall capacity would be limited.
>> Signed-off-by: Lars Everbrand <lars.everbrand@protonmail.com>
>
>Thanks for the patch!
>
>Looking at the code in question it feels a little like we're breaking
>abstractions if we bump the counter directly in get_slave_by_id.
Agreed; I think a better way to fix this is to enable the slave
array for balance-rr mode, and then use the array to find the right
slave. This way, we then avoid the problematic "skip unable to tx"
logic for free.
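Very roughly, selection over a pre-built array containing only tx-capable slaves reduces to a plain modulo, with no skip/retry logic on the transmit path at all (a userspace sketch with a stand-in struct, not the kernel types):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for the kernel's slave structure. */
struct slave {
	int id;
};

/* If the array is rebuilt on link events to hold only slaves that can
 * tx, round-robin is just counter modulo array length. */
struct slave *rr_pick(struct slave **usable, unsigned int count,
		      unsigned int *counter)
{
	if (count == 0)
		return NULL;	/* no slave that can tx */
	return usable[(*counter)++ % count];
}
```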
>For one thing when the function is called for IGMP packets the counter
>should not be incremented at all. But also if packets_per_slave is not
>1 we'd still be hitting the same leg multiple times (packets_per_slave
>/ 2). So it seems like we should round the counter up somehow?
>
>For IGMP maybe we don't have to call bond_get_slave_by_id() at all,
>IMHO, just find first leg that can TX. Then we can restructure
>bond_get_slave_by_id() appropriately for the non-IGMP case.
For IGMP, the theory is to confine that traffic to a single
device. Normally, this will be curr_active_slave, which is updated even
in balance-rr mode as interfaces are added to or removed from the bond.
The call to bond_get_slave_by_id() should be a fallback for the case
where curr_active_slave is empty; that should be the exception, and may
not be possible at all.
But either way, the IGMP path shouldn't mess with rr_tx_counter,
it should be out of band of the normal TX packet counting, so to speak.
-J
>> diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
>> index e0880a3840d7..e02d9c6d40ee 100644
>> --- a/drivers/net/bonding/bond_main.c
>> +++ b/drivers/net/bonding/bond_main.c
>> @@ -4107,6 +4107,7 @@ static struct slave *bond_get_slave_by_id(struct bonding *bond,
>> if (--i < 0) {
>> if (bond_slave_can_tx(slave))
>> return slave;
>> + bond->rr_tx_counter++;
>> }
>> }
>>
>> @@ -4117,6 +4118,7 @@ static struct slave *bond_get_slave_by_id(struct bonding *bond,
>> break;
>> if (bond_slave_can_tx(slave))
>> return slave;
>> + bond->rr_tx_counter++;
>> }
>> /* no slave that can tx has been found */
>> return NULL;
>
---
-Jay Vosburgh, jay.vosburgh@canonical.com
* Re: [PATCH net-next] bonding: correct rr balancing during link failure
From: Lars Everbrand @ 2020-12-15 21:32 UTC (permalink / raw)
To: Jakub Kicinski
Cc: linux-kernel, Jay Vosburgh, Veaceslav Falico, Andy Gospodarek,
David S. Miller, netdev
On Sat, Dec 05, 2020 at 11:45:13AM -0800, Jakub Kicinski wrote:
> Thanks for the patch!
Thank you for the kind words on my first attempt at this. Sorry for
replying a bit late; proton-bridge has not been my best friend lately.
>
> Looking at the code in question it feels a little like we're breaking
> abstractions if we bump the counter directly in get_slave_by_id.
My intention was to avoid a big change, and this was the easiest way. I
trust your opinion here.
>
> For one thing when the function is called for IGMP packets the counter
> should not be incremented at all. But also if packets_per_slave is not
> 1 we'd still be hitting the same leg multiple times (packets_per_slave
> / 2). So it seems like we should round the counter up somehow?
I did not consider this case; I only tested packets_per_slave=1 and
random. Yes, it breaks if the counter is updated per packet in any >1
case.
>
> For IGMP maybe we don't have to call bond_get_slave_by_id() at all,
> IMHO, just find first leg that can TX. Then we can restructure
> bond_get_slave_by_id() appropriately for the non-IGMP case.
I can have another look, but I am not confident that I am skilled
enough in this area to produce a larger overhaul...
* Re: [PATCH net-next] bonding: correct rr balancing during link failure
From: Lars Everbrand @ 2020-12-15 21:54 UTC (permalink / raw)
To: Jay Vosburgh
Cc: Jakub Kicinski, linux-kernel, Veaceslav Falico, Andy Gospodarek,
David S. Miller, netdev
On Tue, Dec 08, 2020 at 01:46:09PM -0800, Jay Vosburgh wrote:
>
> Jakub Kicinski <kuba@kernel.org> wrote:
>
> >On Wed, 02 Dec 2020 20:55:57 +0000 Lars Everbrand wrote:
> Are these bandwidth numbers from observation of the actual
> behavior? I'm not sure the real system would behave this way; my
> suspicion is that it would increase the likelihood of drops on the
> overused slave, not that the overall capacity would be limited.
I tested this with 2 VMs and 5 bridges, with bandwidth limitation via
'virsh domiftune' to bring the speed down to something similar to
100 Mbit/s.
iperf results:

with patch:
---
working      iperf speed
interfaces   [Mbit/s]
5            442
4            363
3            278
2            199
1            107

without patch:
---
working      iperf speed
interfaces   [Mbit/s]
5            444
4            226
3            155
2            129
1            107
The speed at 5 x 100 did not go as high as I expected, but the
sub-optimal balancing is still clearly visible.

Note that the degradation was tested by downing interfaces
sequentially, which is the worst case for this problem.
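The commit-message arithmetic behind these numbers can also be modeled directly: if one assumes TCP clamps the whole stream to the rate of the most oversubscribed link, the aggregate becomes link_speed * total_slots / max_slots_on_one_slave (my own simplification for illustration, not something the driver computes):

```c
#include <assert.h>

/* slots[i] = how many of the round-robin slots land on slave i.
 * Assuming TCP throttles the stream to the rate of the busiest slave,
 * aggregate throughput = link * total_slots / max_slots. */
unsigned int total_mbits(const unsigned int *slots, unsigned int n,
			 unsigned int link)
{
	unsigned int sum = 0, max = 0;

	for (unsigned int i = 0; i < n; i++) {
		sum += slots[i];
		if (slots[i] > max)
			max = slots[i];
	}
	return max ? link * sum / max : 0;
}
```

For the schedule 1 3 3 4 5 6 (slave 2 dead) this gives 300 Mbit/s, and for 1 4 4 4 5 6 (slaves 2 and 3 dead) 200 Mbit/s, matching the estimates in the commit message.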
> >Looking at the code in question it feels a little like we're breaking
> >abstractions if we bump the counter directly in get_slave_by_id.
>
> Agreed; I think a better way to fix this is to enable the slave
> array for balance-rr mode, and then use the array to find the right
> slave. This way, we then avoid the problematic "skip unable to tx"
> logic for free.
>
> >For one thing when the function is called for IGMP packets the counter
> >should not be incremented at all. But also if packets_per_slave is not
> >1 we'd still be hitting the same leg multiple times (packets_per_slave
> >/ 2). So it seems like we should round the counter up somehow?
> >
> >For IGMP maybe we don't have to call bond_get_slave_by_id() at all,
> >IMHO, just find first leg that can TX. Then we can restructure
> >bond_get_slave_by_id() appropriately for the non-IGMP case.
>
> For IGMP, the theory is to confine that traffic to a single
> device. Normally, this will be curr_active_slave, which is updated even
> in balance-rr mode as interfaces are added to or removed from the bond.
> The call to bond_get_slave_by_id should be a fallback in case
> curr_active_slave is empty, and should be the exception, and may not be
> possible at all.
>
> But either way, the IGMP path shouldn't mess with rr_tx_counter,
> it should be out of band of the normal TX packet counting, so to speak.
>
> -J
>
> >> diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
> >> index e0880a3840d7..e02d9c6d40ee 100644
> >> --- a/drivers/net/bonding/bond_main.c
> >> +++ b/drivers/net/bonding/bond_main.c
> >> @@ -4107,6 +4107,7 @@ static struct slave *bond_get_slave_by_id(struct bonding *bond,
> >> if (--i < 0) {
> >> if (bond_slave_can_tx(slave))
> >> return slave;
> >> + bond->rr_tx_counter++;
> >> }
> >> }
> >>
> >> @@ -4117,6 +4118,7 @@ static struct slave *bond_get_slave_by_id(struct bonding *bond,
> >> break;
> >> if (bond_slave_can_tx(slave))
> >> return slave;
> >> + bond->rr_tx_counter++;
> >> }
> >> /* no slave that can tx has been found */
> >> return NULL;
> >
>
> ---
> -Jay Vosburgh, jay.vosburgh@canonical.com