* Hardware capabilities and bonding offload
@ 2015-11-16  9:29 Premkumar Jonnala
  2015-11-16 15:30 ` Jiri Pirko
  0 siblings, 1 reply; 8+ messages in thread
From: Premkumar Jonnala @ 2015-11-16  9:29 UTC (permalink / raw)
  To: netdev

Hello,

I am looking to offload bond interfaces to hardware for forwarding.  Linux allows for configuring
a variety of parameters on bonds or slave interfaces.  Not all configurations can be offloaded to
hardware.  For example, certain hardware cannot support bonds in adaptive load balancing mode.

When such a configuration is provided by the user, we have two options at hand (for platforms supporting
hardware offloads):

1. Reject the configuration.

2. Handle the bond interface in software.  In a scenario where this bond interface is part
of a bridge interface, for simplicity, all other interfaces in the bridge need to be
handled in software - which results in very low packet-processing performance.

I'm writing to gather feedback on the approach to take, including any other suggestions.
We have discussed this briefly in our weekly netdev meeting, and I received votes for
option #1.

Thanks
Prem

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Hardware capabilities and bonding offload
  2015-11-16  9:29 Hardware capabilities and bonding offload Premkumar Jonnala
@ 2015-11-16 15:30 ` Jiri Pirko
  2015-11-16 16:10   ` John Fastabend
  0 siblings, 1 reply; 8+ messages in thread
From: Jiri Pirko @ 2015-11-16 15:30 UTC (permalink / raw)
  To: Premkumar Jonnala; +Cc: netdev

Mon, Nov 16, 2015 at 10:29:12AM CET, pjonnala@broadcom.com wrote:
>Hello,
>
>I am looking to offload bond interfaces to hardware for forwarding.  Linux allows for configuring
>a variety of parameters on bonds or slave interfaces.  Not all configurations can be offloaded to
>hardware.  For example, certain hardware cannot support bonds in adaptive load balancing mode.
>
>When such a configuration is provided by the user, we have two options at hand (for platforms supporting
>hardware offloads):
>
>1. Reject the configuration.
>
>2. Handle the bond interface in software.  In a scenario where this bond interface is part
>of a bridge interface, for simplicity, all other interfaces in the bridge need to be
>handled in software - which results in very low packet-processing performance.

Although it might sound intriguing to fall back to software here, it makes no
sense, and the user certainly does not want that.  For example, in the case of our
HW, we have 100Gbit forwarding which would be degraded to ~1Gbit (for one
port pair).  Another thing is that for some HW this might not even be
possible.  In our case it would be very complicated.

I believe that the correct approach is to let the driver decide whether the
configuration is acceptable and reject it if it is not.
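
As a minimal sketch of what that driver-side veto could look like (the foo_
names are invented, and it assumes the NETDEV_PRECHANGEUPPER notifier and the
netdev_lag_upper_info/tx_type fields are available to the driver):

#include <linux/errno.h>
#include <linux/netdevice.h>
#include <linux/notifier.h>

/* Illustrative only: refuse to enslave one of our ports under a bond whose
 * TX policy the ASIC cannot offload.  Called from a netdevice notifier
 * registered with register_netdevice_notifier().
 */
static int foo_netdevice_event(struct notifier_block *nb,
                               unsigned long event, void *ptr)
{
        struct net_device *dev = netdev_notifier_info_to_dev(ptr);
        struct netdev_notifier_changeupper_info *info = ptr;
        struct netdev_lag_upper_info *lag_info;

        /* A real driver would first check that dev is one of its ports. */
        if (event != NETDEV_PRECHANGEUPPER || !info->linking)
                return NOTIFY_DONE;
        if (!netif_is_lag_master(info->upper_dev))
                return NOTIFY_DONE;

        lag_info = info->upper_info;
        /* Say the hardware can only hash traffic across LAG members. */
        if (!lag_info || lag_info->tx_type != NETDEV_LAG_TX_TYPE_HASH) {
                netdev_warn(dev, "bond mode not offloadable, refusing enslavement\n");
                return notifier_from_errno(-EOPNOTSUPP);
        }

        return NOTIFY_DONE;
}

With something like that, an attempt to enslave the port under an unsupported
bond fails immediately with EOPNOTSUPP instead of silently dropping the whole
bridge to the slow path.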


>
>I'm writing to gather feedback on the approach to take, including any other suggestions.
>We have discussed this briefly in our weekly netdev meeting, and I received votes for
>option #1.
>
>Thanks
>Prem

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Hardware capabilities and bonding offload
  2015-11-16 15:30 ` Jiri Pirko
@ 2015-11-16 16:10   ` John Fastabend
  2015-11-17 22:03     ` Simon Horman
  0 siblings, 1 reply; 8+ messages in thread
From: John Fastabend @ 2015-11-16 16:10 UTC (permalink / raw)
  To: Jiri Pirko, Premkumar Jonnala; +Cc: netdev

On 15-11-16 07:30 AM, Jiri Pirko wrote:
> Mon, Nov 16, 2015 at 10:29:12AM CET, pjonnala@broadcom.com wrote:
>> Hello,
>>
>> I am looking to offload bond interfaces to hardware for forwarding.  Linux allows for configuring
>> a variety of parameters on bonds or slave interfaces.  Not all configurations can be offloaded to
>> hardware.  For example, certain hardware cannot support bonds in adaptive load balancing mode.
>>
>> When such a configuration is provided by the user, we have two options at hand (for platforms supporting
>> hardware offloads):
>>
>> 1. Reject the configuration.
>>
>> 2. Handle the bond interface in software.  In a scenario where this bond interface is part
>> of a bridge interface, for simplicity, all other interfaces in the bridge need to be
>> handled in software - which results in very low packet-processing performance.
> 
> Although it might sound intriguing to fall back to software here, it makes no
> sense, and the user certainly does not want that.  For example, in the case of our
> HW, we have 100Gbit forwarding which would be degraded to ~1Gbit (for one
> port pair).  Another thing is that for some HW this might not even be
> possible.  In our case it would be very complicated.
> 
> I believe that the correct approach is to let the driver decide whether the
> configuration is acceptable and reject it if it is not.
> 

+1, I agree the best approach is to throw a hard error and reject
it if there is no mapping onto the hardware.  This lets your
management software propagate that error up so you can handle it
correctly at higher levels in the stack.

You could, if needed, add a bit to enable setup in software only.
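
Purely as an illustration of what such a bit could look like (nothing like
this exists; the knob is hypothetical, and a per-device netlink attribute
would probably be nicer than a module parameter), the veto sketched earlier
in the thread could be made conditional:

#include <linux/module.h>

/* Hypothetical opt-in: let the admin accept the software datapath for
 * bond modes the hardware cannot offload.
 */
static bool foo_allow_sw_bond;
module_param_named(allow_sw_bond, foo_allow_sw_bond, bool, 0644);
MODULE_PARM_DESC(allow_sw_bond,
                 "Accept bond modes that cannot be offloaded (handled in software)");

        /* ...then, inside the PRECHANGEUPPER handler: */
        if (!lag_info || lag_info->tx_type != NETDEV_LAG_TX_TYPE_HASH) {
                if (foo_allow_sw_bond)
                        return NOTIFY_DONE;     /* accept; kernel bonding handles it */
                return notifier_from_errno(-EOPNOTSUPP);
        }

Whether that bit lives in the driver or in the bonding/netlink configuration
is a separate question.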

Thanks,
John

> 
>>
>> I'm writing to gather feedback on the approach to take, including any other suggestions.
>> We have discussed this briefly in our weekly netdev meeting, and I received votes for
>> option #1.
>>
>> Thanks
>> Prem

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Hardware capabilities and bonding offload
  2015-11-16 16:10   ` John Fastabend
@ 2015-11-17 22:03     ` Simon Horman
  2015-11-18  0:57       ` John Fastabend
  0 siblings, 1 reply; 8+ messages in thread
From: Simon Horman @ 2015-11-17 22:03 UTC (permalink / raw)
  To: John Fastabend; +Cc: Jiri Pirko, Premkumar Jonnala, netdev

On Mon, Nov 16, 2015 at 08:10:27AM -0800, John Fastabend wrote:
> On 15-11-16 07:30 AM, Jiri Pirko wrote:
> > Mon, Nov 16, 2015 at 10:29:12AM CET, pjonnala@broadcom.com wrote:
> >> Hello,
> >>
> >> I am looking to offload bond interfaces to hardware for forwarding.  Linux allows for configuring
> >> a variety of parameters on bonds or slave interfaces.  Not all configurations can be offloaded to
> >> hardware.  For example, certain hardware cannot support bonds in adaptive load balancing mode.
> >>
> >> When such a configuration is provided by the user, we have two options at hand (for platforms supporting
> >> hardware offloads):
> >>
> >> 1. Reject the configuration.
> >>
> >> 2. Handle the bond interface in software.  In a scenario where this bond interface is part
> >> of a bridge interface, for simplicity, all other interfaces in the bridge need to be
> >> handled in software - which results in very low packet-processing performance.
> > 
> > Although it might sound intriguing to fall back to software here, it makes no
> > sense, and the user certainly does not want that.  For example, in the case of our
> > HW, we have 100Gbit forwarding which would be degraded to ~1Gbit (for one
> > port pair).  Another thing is that for some HW this might not even be
> > possible.  In our case it would be very complicated.
> > 
> > I believe that the correct approach is to let the driver decide whether the
> > configuration is acceptable and reject it if it is not.
> > 
> 
> +1, I agree the best approach is to throw a hard error and reject
> it if there is no mapping onto the hardware.  This lets your
> management software propagate that error up so you can handle it
> correctly at higher levels in the stack.
> 
> You could, if needed, add a bit to enable setup in software only.

FWIW, I agree that such a scheme ought to cover the bases.
But perhaps it would be best to hold off on adding the software-only
bit until a use case arises.  John, perhaps you already have one?

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Hardware capabilities and bonding offload
  2015-11-17 22:03     ` Simon Horman
@ 2015-11-18  0:57       ` John Fastabend
  2015-11-18 14:05         ` Andrew Lunn
  0 siblings, 1 reply; 8+ messages in thread
From: John Fastabend @ 2015-11-18  0:57 UTC (permalink / raw)
  To: Simon Horman; +Cc: Jiri Pirko, Premkumar Jonnala, netdev

On 15-11-17 02:03 PM, Simon Horman wrote:
> On Mon, Nov 16, 2015 at 08:10:27AM -0800, John Fastabend wrote:
>> On 15-11-16 07:30 AM, Jiri Pirko wrote:
>>> Mon, Nov 16, 2015 at 10:29:12AM CET, pjonnala@broadcom.com wrote:
>>>> Hello,
>>>>
>>>> I am looking to offload bond interfaces to hardware for forwarding.  Linux allows for configuring
>>>> a variety of parameters on bonds or slave interfaces.  Not all configurations can be offloaded to
>>>> hardware.  For example, certain hardware cannot support bonds in adaptive load balancing mode.
>>>>
>>>> When such a configuration is provided by the user, we have two options at hand (for platforms supporting
>>>> hardware offloads):
>>>>
>>>> 1. Reject the configuration.
>>>>
>>>> 2. Handle the bond interface in software.  In a scenario where this bond interface is part
>>>> of a bridge interface, for simplicity, all other interfaces in the bridge need to be
>>>> handled in software - which results in very low packet-processing performance.
>>>
>>> Although it might sound intriguing to fall back to software here, it makes no
>>> sense, and the user certainly does not want that.  For example, in the case of our
>>> HW, we have 100Gbit forwarding which would be degraded to ~1Gbit (for one
>>> port pair).  Another thing is that for some HW this might not even be
>>> possible.  In our case it would be very complicated.
>>>
>>> I believe that the correct approach is to let the driver decide whether the
>>> configuration is acceptable and reject it if it is not.
>>>
>>
>> +1, I agree the best approach is to throw a hard error and reject
>> it if there is no mapping onto the hardware.  This lets your
>> management software propagate that error up so you can handle it
>> correctly at higher levels in the stack.
>>
>> You could, if needed, add a bit to enable setup in software only.
> 
> FWIW, I agree that such a scheme ought to cover the bases.
> But perhaps it would be best to hold off on adding the software-only
> bit until a use case arises.  John, perhaps you already have one?
> 

I suspect the use case here would be: the hardware can't support
the offloaded bond _but_ I really want to bond my traffic and I'm OK
with the performance penalty that is going to be incurred, because
either (a) I know the traffic rate is low enough that my CPU(s) can handle
it, or (b) it won't completely trash my network to have a slow link in
the middle, for whatever reason.

To be honest, though, this is more of an argument in theory than one based on
some existing management agent I know of today.  If you need to do
bonding type X in your network and the particular switch doesn't support
it, I'm not even sure what the mgmt layer is going to do.  Maybe just
take the switch offline for that network segment.

If you leave the sw bit out in the first iteration, I'm OK with that;
we can easily add it when we have software that needs it.

Thanks,
John

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Hardware capabilities and bonding offload
  2015-11-18  0:57       ` John Fastabend
@ 2015-11-18 14:05         ` Andrew Lunn
  2015-11-18 14:29           ` Jiri Pirko
  0 siblings, 1 reply; 8+ messages in thread
From: Andrew Lunn @ 2015-11-18 14:05 UTC (permalink / raw)
  To: John Fastabend; +Cc: Simon Horman, Jiri Pirko, Premkumar Jonnala, netdev

> To be honest, though, this is more of an argument in theory than one based on
> some existing management agent I know of today.  If you need to do
> bonding type X in your network and the particular switch doesn't support
> it, I'm not even sure what the mgmt layer is going to do.  Maybe just
> take the switch offline for that network segment.
> 
> If you leave the sw bit out in the first iteration, I'm OK with that;
> we can easily add it when we have software that needs it.

Taking a step back...

Have we defined a consistent way of signalling:

1) Failed to offload to the hardware, because the hardware cannot do
   what you requested.
2) Do this in software, rather than trying and failing to offload to
   hardware.

At least in DSA, we return EOPNOTSUPP for 1).
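
Spelled out at a driver boundary (a deliberately simplified sketch, not any
existing API), the two cases look like this; note that a bare return value
cannot distinguish "offloaded in hardware" from "accepted but handled in
software", which is part of why a consistent convention is worth defining:

static int example_try_offload(bool hw_can_do_it, bool sw_fallback_ok)
{
        if (hw_can_do_it)
                return 0;               /* offloaded in hardware */
        if (sw_fallback_ok)
                return 0;               /* 2) not offloaded, software datapath */
        return -EOPNOTSUPP;             /* 1) hard failure, propagated to the user */
}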

   Andrew

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Hardware capabilities and bonding offload
  2015-11-18 14:05         ` Andrew Lunn
@ 2015-11-18 14:29           ` Jiri Pirko
  2015-11-18 14:50             ` Andrew Lunn
  0 siblings, 1 reply; 8+ messages in thread
From: Jiri Pirko @ 2015-11-18 14:29 UTC (permalink / raw)
  To: Andrew Lunn; +Cc: John Fastabend, Simon Horman, Premkumar Jonnala, netdev

Wed, Nov 18, 2015 at 03:05:12PM CET, andrew@lunn.ch wrote:
>> To be honest, though, this is more of an argument in theory than one based on
>> some existing management agent I know of today.  If you need to do
>> bonding type X in your network and the particular switch doesn't support
>> it, I'm not even sure what the mgmt layer is going to do.  Maybe just
>> take the switch offline for that network segment.
>> 
>> If you leave the sw bit out in the first iteration, I'm OK with that;
>> we can easily add it when we have software that needs it.
>
>Taking a step back...
>
>Have we defined a consistent way of signalling:
>
>1) Failed to offload to the hardware, because the hardware cannot do
>   what you requested.
>2) Do this in software, rather than trying and failing to offload to
>   hardware.
>
>At least in DSA, we return EOPNOTSUPP for 1).

Well, for example in the case of bonding, it is quite impossible to do
things in software when the hardware datapath simply cannot pass
packets to the kernel.  The driver should know this and should forbid such a
non-functional setup.


>
>   Andrew
>

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Hardware capabilities and bonding offload
  2015-11-18 14:29           ` Jiri Pirko
@ 2015-11-18 14:50             ` Andrew Lunn
  0 siblings, 0 replies; 8+ messages in thread
From: Andrew Lunn @ 2015-11-18 14:50 UTC (permalink / raw)
  To: Jiri Pirko; +Cc: John Fastabend, Simon Horman, Premkumar Jonnala, netdev

On Wed, Nov 18, 2015 at 03:29:23PM +0100, Jiri Pirko wrote:
> Wed, Nov 18, 2015 at 03:05:12PM CET, andrew@lunn.ch wrote:
> >> To be honest, though, this is more of an argument in theory than one based on
> >> some existing management agent I know of today.  If you need to do
> >> bonding type X in your network and the particular switch doesn't support
> >> it, I'm not even sure what the mgmt layer is going to do.  Maybe just
> >> take the switch offline for that network segment.
> >> 
> >> If you leave the sw bit out in the first iteration, I'm OK with that;
> >> we can easily add it when we have software that needs it.
> >
> >Taking a step back...
> >
> >Have we defined a consistent way of signalling:
> >
> >1) Failed to offload to the hardware, because the hardware cannot do
> >   what you requested.
> >2) Do this in software, rather than trying and failing to offload to
> >   hardware.
> >
> >At least in DSA, we return EOPNOTSUPP for 1).
> 
> Well, for example in the case of bonding, it is quite impossible to do
> things in software when the hardware datapath simply cannot pass
> packets to the kernel.  The driver should know this and should forbid such a
> non-functional setup.

I said, "taking a step back..." meaning, in the general case, do we
have a well defined way to do this. What we don't want is X different
ways for Y difference API calls to say, if offload of this to hardware
fails, do it in software, if that is possible.

       Andrew

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2015-11-18 14:50 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-11-16  9:29 Hardware capabilities and bonding offload Premkumar Jonnala
2015-11-16 15:30 ` Jiri Pirko
2015-11-16 16:10   ` John Fastabend
2015-11-17 22:03     ` Simon Horman
2015-11-18  0:57       ` John Fastabend
2015-11-18 14:05         ` Andrew Lunn
2015-11-18 14:29           ` Jiri Pirko
2015-11-18 14:50             ` Andrew Lunn
