* ixgbe tuning reset when XDP is setup
@ 2017-12-15 10:24 Eric Leblond
  2017-12-15 15:53 ` David Miller
  0 siblings, 1 reply; 8+ messages in thread
From: Eric Leblond @ 2017-12-15 10:24 UTC (permalink / raw)
  To: netdev, xdp-newbies; +Cc: Peter Manev

Hello,

When using an ixgbe card with Suricata, we run the following
commands to get a symmetric hash for RSS load balancing:

./set_irq_affinity 0-15 eth3
ethtool -X eth3 hkey 6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A equal 16
ethtool -x eth3
ethtool -n eth3

Then we start Suricata.
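
For reference, the hkey above is just the 16-bit pattern 0x6D5A
repeated. Because the Toeplitz key then looks identical at every
16-bit offset, swapping the source and destination fields of a packet
selects the same key windows, so both directions of a flow hash to
the same RSS queue. A minimal userspace sketch of the idea
(illustrative only, none of this is driver code):

#include <stdint.h>
#include <stdio.h>

/* Standard Toeplitz hash: for every set bit of the input, XOR in the
 * 32-bit window of the key that starts at that bit position. */
static uint32_t toeplitz(const uint8_t *key, const uint8_t *data, int len)
{
	uint32_t hash = 0;
	uint32_t window = (uint32_t)key[0] << 24 | (uint32_t)key[1] << 16 |
			  (uint32_t)key[2] << 8 | key[3];

	for (int i = 0; i < len; i++) {
		for (int b = 7; b >= 0; b--) {
			if (data[i] & (1u << b))
				hash ^= window;
			/* slide the window one key bit to the left */
			window = (window << 1) | ((key[i + 4] >> b) & 1u);
		}
	}
	return hash;
}

int main(void)
{
	uint8_t key[40];

	/* 0x6D5A repeated, as in the ethtool command above */
	for (int i = 0; i < 40; i += 2) {
		key[i] = 0x6d;
		key[i + 1] = 0x5a;
	}

	/* IPv4 tuple: saddr, daddr, sport, dport. Addresses sit 32 bits
	 * apart and ports 16 bits apart; both are multiples of the key
	 * period, so the swapped tuple XORs in the same windows. */
	uint8_t fwd[12] = { 10, 0, 0, 1, 10, 0, 0, 2, 0x00, 0x50, 0xc0, 0x21 };
	uint8_t rev[12] = { 10, 0, 0, 2, 10, 0, 0, 1, 0xc0, 0x21, 0x00, 0x50 };

	/* prints the same 32-bit value twice */
	printf("%08x %08x\n", toeplitz(key, fwd, 12), toeplitz(key, rev, 12));
	return 0;
}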

In my current experiment with XDP, I have Suricata inject the eBPF
program when it starts. The consequence, when using an ixgbe card,
is that the load balancing gets reset and all interrupts end up on
the first core.

My analysis is that the ixgbe_xdp_setup() function calls
ixgbe_setup_tc(), which resets the hash tuning parameters.
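
Condensed, the path looks like this (paraphrased from the ixgbe
driver of this era, drivers/net/ethernet/intel/ixgbe/ixgbe_main.c;
error handling and unrelated checks dropped, so not the literal
code):

static int ixgbe_xdp_setup(struct net_device *dev, struct bpf_prog *prog)
{
	struct ixgbe_adapter *adapter = netdev_priv(dev);
	struct bpf_prog *old_prog = xchg(&adapter->xdp_prog, prog);

	/* attaching or detaching a program changes the queue layout,
	 * so the driver goes through a full reconfiguration */
	if (!!prog != !!old_prog)
		return ixgbe_setup_tc(dev, netdev_get_num_tc(dev));

	/* otherwise the program pointer is just swapped on live rings */
	return 0;
}

int ixgbe_setup_tc(struct net_device *dev, u8 tc)
{
	struct ixgbe_adapter *adapter = netdev_priv(dev);

	if (netif_running(dev))
		ixgbe_close(dev);

	/* frees every MSI-X vector and q_vector... */
	ixgbe_clear_interrupt_scheme(adapter);
	/* ...and reallocates them: the rebuilt IRQs come back with
	 * default SMP affinity, and the reconfiguration also
	 * reprograms the RSS registers, losing the ethtool tuning */
	ixgbe_init_interrupt_scheme(adapter);

	if (netif_running(dev))
		return ixgbe_open(dev);
	return 0;
}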

If we run the affinity script and the ethtool commands after XDP is
loaded, things go back to normal. But this is not optimal behavior.

Is this really what is happening? Is there a known workaround for this
issue?

BR,
-- 
Eric Leblond <eric@regit.org>


* Re: ixgbe tuning reset when XDP is setup
  2017-12-15 10:24 ixgbe tuning reset when XDP is setup Eric Leblond
@ 2017-12-15 15:53 ` David Miller
  2017-12-15 16:03   ` John Fastabend
  0 siblings, 1 reply; 8+ messages in thread
From: David Miller @ 2017-12-15 15:53 UTC (permalink / raw)
  To: eric; +Cc: netdev, xdp-newbies, pmanev

From: Eric Leblond <eric@regit.org>
Date: Fri, 15 Dec 2017 11:24:46 +0100

> Hello,
> 
> When using an ixgbe card with Suricata, we run the following
> commands to get a symmetric hash for RSS load balancing:
> 
> ./set_irq_affinity 0-15 eth3
> ethtool -X eth3 hkey 6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A equal 16
> ethtool -x eth3
> ethtool -n eth3
> 
> Then we start Suricata.
> 
> In my current experiment with XDP, I have Suricata inject the eBPF
> program when it starts. The consequence, when using an ixgbe card,
> is that the load balancing gets reset and all interrupts end up on
> the first core.

This definitely should _not_ be a side effect of enabling XDP on a device.


* Re: ixgbe tuning reset when XDP is setup
  2017-12-15 15:53 ` David Miller
@ 2017-12-15 16:03   ` John Fastabend
  2017-12-15 16:51     ` Alexander Duyck
  0 siblings, 1 reply; 8+ messages in thread
From: John Fastabend @ 2017-12-15 16:03 UTC (permalink / raw)
  To: David Miller, eric
  Cc: netdev, xdp-newbies, pmanev, Alexander Duyck, Emil Tantilov

On 12/15/2017 07:53 AM, David Miller wrote:
> From: Eric Leblond <eric@regit.org>
> Date: Fri, 15 Dec 2017 11:24:46 +0100
> 
>> Hello,
>>
>> When using an ixgbe card with Suricata, we run the following
>> commands to get a symmetric hash for RSS load balancing:
>>
>> ./set_irq_affinity 0-15 eth3
>> ethtool -X eth3 hkey 6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A equal 16
>> ethtool -x eth3
>> ethtool -n eth3
>>
>> Then we start Suricata.
>>
>> In my current experiment with XDP, I have Suricata inject the eBPF
>> program when it starts. The consequence, when using an ixgbe card,
>> is that the load balancing gets reset and all interrupts end up on
>> the first core.
> 
> This definitely should _not_ be a side effect of enabling XDP on a device.
> 

Agreed. CC'ing Emil and Alex: we should restore these settings
after the reconfiguration done to support a queue per core.

.John


* Re: ixgbe tuning reset when XDP is setup
  2017-12-15 16:03   ` John Fastabend
@ 2017-12-15 16:51     ` Alexander Duyck
  2017-12-15 16:56       ` Peter Manev
  0 siblings, 1 reply; 8+ messages in thread
From: Alexander Duyck @ 2017-12-15 16:51 UTC (permalink / raw)
  To: John Fastabend
  Cc: David Miller, eric, Netdev, xdp-newbies, pmanev, Emil Tantilov

On Fri, Dec 15, 2017 at 8:03 AM, John Fastabend
<john.fastabend@gmail.com> wrote:
> On 12/15/2017 07:53 AM, David Miller wrote:
>> From: Eric Leblond <eric@regit.org>
>> Date: Fri, 15 Dec 2017 11:24:46 +0100
>>
>>> Hello,
>>>
>>> When using an ixgbe card with Suricata, we run the following
>>> commands to get a symmetric hash for RSS load balancing:
>>>
>>> ./set_irq_affinity 0-15 eth3
>>> ethtool -X eth3 hkey 6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A equal 16
>>> ethtool -x eth3
>>> ethtool -n eth3
>>>
>>> Then we start Suricata.
>>>
>>> In my current experiment with XDP, I have Suricata inject the eBPF
>>> program when it starts. The consequence, when using an ixgbe card,
>>> is that the load balancing gets reset and all interrupts end up on
>>> the first core.
>>
>> This definitely should _not_ be a side effect of enabling XDP on a device.
>>
>
> Agreed. CC'ing Emil and Alex: we should restore these settings
> after the reconfiguration done to support a queue per core.
>
> .John

So the interrupt configuration has to get reset, since we have to
assign 2 Tx queues for every Rx queue instead of the 1-to-1 mapping
that was there previously. That is a natural consequence of
rearranging the queues as currently happens. The issue is that the
q_vectors themselves have to be reallocated. The only way to avoid
that would be to always pre-allocate the Tx queues for XDP.
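
Roughly, the accounting in question (paraphrased from the driver's
ixgbe_lib.c, not the literal code):

/* one XDP Tx ring per CPU once a program is attached, so XDP_TX
 * can transmit without taking a lock */
static int ixgbe_xdp_queues(struct ixgbe_adapter *adapter)
{
	return adapter->xdp_prog ? nr_cpu_ids : 0;
}

/* Before XDP: num_tx_queues == num_rx_queues, one Tx ring per Rx
 * ring. After XDP: num_tx_queues + num_xdp_queues Tx rings, i.e.
 * two per Rx ring when the RSS queue count matches the CPU count.
 * The existing q_vector layout cannot describe the extra rings, so
 * the vectors are freed and reallocated, and the fresh interrupts
 * come up without the affinity that set_irq_affinity configured. */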

Also, just to be clear, we are talking about the interrupts being
reset, not the RSS key, right? I just want to make sure that is what
we are talking about.

Thanks.

- Alex


* Re: ixgbe tuning reset when XDP is setup
  2017-12-15 16:51     ` Alexander Duyck
@ 2017-12-15 16:56       ` Peter Manev
  2018-01-25 13:09         ` Peter Manev
  0 siblings, 1 reply; 8+ messages in thread
From: Peter Manev @ 2017-12-15 16:56 UTC (permalink / raw)
  To: Alexander Duyck
  Cc: John Fastabend, David Miller, eric, Netdev, xdp-newbies, Emil Tantilov


> On 15 Dec 2017, at 17:51, Alexander Duyck <alexander.duyck@gmail.com> wrote:
> 
> On Fri, Dec 15, 2017 at 8:03 AM, John Fastabend
> <john.fastabend@gmail.com> wrote:
>> On 12/15/2017 07:53 AM, David Miller wrote:
>>> From: Eric Leblond <eric@regit.org>
>>> Date: Fri, 15 Dec 2017 11:24:46 +0100
>>> 
>>>> Hello,
>>>> 
>>>> When using an ixgbe card with Suricata, we run the following
>>>> commands to get a symmetric hash for RSS load balancing:
>>>> 
>>>> ./set_irq_affinity 0-15 eth3
>>>> ethtool -X eth3 hkey 6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A equal 16
>>>> ethtool -x eth3
>>>> ethtool -n eth3
>>>> 
>>>> Then we start Suricata.
>>>> 
>>>> In my current experiment with XDP, I have Suricata inject the eBPF
>>>> program when it starts. The consequence, when using an ixgbe card,
>>>> is that the load balancing gets reset and all interrupts end up on
>>>> the first core.
>>> 
>>> This definitely should _not_ be a side effect of enabling XDP on a device.
>>> 
>> 
>> Agreed. CC'ing Emil and Alex: we should restore these settings
>> after the reconfiguration done to support a queue per core.
>> 
>> .John
> 
> So the interrupt configuration has to get reset, since we have to
> assign 2 Tx queues for every Rx queue instead of the 1-to-1 mapping
> that was there previously. That is a natural consequence of
> rearranging the queues as currently happens. The issue is that the
> q_vectors themselves have to be reallocated. The only way to avoid
> that would be to always pre-allocate the Tx queues for XDP.
> 
> Also, just to be clear, we are talking about the interrupts being
> reset, not the RSS key, right? I just want to make sure that is what
> we are talking about.
> 

Yes.
From the tests we did, I only observed all the IRQs being reset to the first CPU after Suricata started.



> Thanks.
> 
> - Alex


* Re: ixgbe tuning reset when XDP is setup
  2017-12-15 16:56       ` Peter Manev
@ 2018-01-25 13:09         ` Peter Manev
  2018-01-25 15:07           ` Alexander Duyck
  0 siblings, 1 reply; 8+ messages in thread
From: Peter Manev @ 2018-01-25 13:09 UTC (permalink / raw)
  To: Alexander Duyck
  Cc: John Fastabend, David Miller, Eric Leblond, Netdev, xdp-newbies,
	Emil Tantilov

On Fri, Dec 15, 2017 at 5:56 PM, Peter Manev <petermanev@gmail.com> wrote:
>
>> On 15 Dec 2017, at 17:51, Alexander Duyck <alexander.duyck@gmail.com> wrote:
>>
>> On Fri, Dec 15, 2017 at 8:03 AM, John Fastabend
>> <john.fastabend@gmail.com> wrote:
>>> On 12/15/2017 07:53 AM, David Miller wrote:
>>>> From: Eric Leblond <eric@regit.org>
>>>> Date: Fri, 15 Dec 2017 11:24:46 +0100
>>>>
>>>>> Hello,
>>>>>
>>>>> When using an ixgbe card with Suricata, we run the following
>>>>> commands to get a symmetric hash for RSS load balancing:
>>>>>
>>>>> ./set_irq_affinity 0-15 eth3
>>>>> ethtool -X eth3 hkey 6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A equal 16
>>>>> ethtool -x eth3
>>>>> ethtool -n eth3
>>>>>
>>>>> Then we start Suricata.
>>>>>
>>>>> In my current experiment with XDP, I have Suricata inject the eBPF
>>>>> program when it starts. The consequence, when using an ixgbe card,
>>>>> is that the load balancing gets reset and all interrupts end up on
>>>>> the first core.
>>>>
>>>> This definitely should _not_ be a side effect of enabling XDP on a device.
>>>>
>>>
>>> Agreed. CC'ing Emil and Alex: we should restore these settings
>>> after the reconfiguration done to support a queue per core.
>>>
>>> .John
>>
>> So the interrupt configuration has to get reset, since we have to
>> assign 2 Tx queues for every Rx queue instead of the 1-to-1 mapping
>> that was there previously. That is a natural consequence of
>> rearranging the queues as currently happens. The issue is that the
>> q_vectors themselves have to be reallocated. The only way to avoid
>> that would be to always pre-allocate the Tx queues for XDP.
>>
>> Also, just to be clear, we are talking about the interrupts being
>> reset, not the RSS key, right? I just want to make sure that is what
>> we are talking about.
>>
>
> Yes.
> From the tests we did, I only observed all the IRQs being reset to the first CPU after Suricata started.
>
>
>
>> Thanks.
>>
>> - Alex

Hi,

We were wondering if there is any follow-up or potential solution
for this. If there is anything we could help test in that regard,
please let us know.

Thank you

-- 
Regards,
Peter Manev


* Re: ixgbe tuning reset when XDP is setup
  2018-01-25 13:09         ` Peter Manev
@ 2018-01-25 15:07           ` Alexander Duyck
  2018-02-21 20:01             ` Peter Manev
  0 siblings, 1 reply; 8+ messages in thread
From: Alexander Duyck @ 2018-01-25 15:07 UTC (permalink / raw)
  To: Peter Manev
  Cc: John Fastabend, David Miller, Eric Leblond, Netdev, xdp-newbies,
	Emil Tantilov

On Thu, Jan 25, 2018 at 5:09 AM, Peter Manev <petermanev@gmail.com> wrote:
> On Fri, Dec 15, 2017 at 5:56 PM, Peter Manev <petermanev@gmail.com> wrote:
>>
>>> On 15 Dec 2017, at 17:51, Alexander Duyck <alexander.duyck@gmail.com> wrote:
>>>
>>> On Fri, Dec 15, 2017 at 8:03 AM, John Fastabend
>>> <john.fastabend@gmail.com> wrote:
>>>> On 12/15/2017 07:53 AM, David Miller wrote:
>>>>> From: Eric Leblond <eric@regit.org>
>>>>> Date: Fri, 15 Dec 2017 11:24:46 +0100
>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> When using an ixgbe card with Suricata, we run the following
>>>>>> commands to get a symmetric hash for RSS load balancing:
>>>>>>
>>>>>> ./set_irq_affinity 0-15 eth3
>>>>>> ethtool -X eth3 hkey 6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A equal 16
>>>>>> ethtool -x eth3
>>>>>> ethtool -n eth3
>>>>>>
>>>>>> Then we start Suricata.
>>>>>>
>>>>>> In my current experiment with XDP, I have Suricata inject the eBPF
>>>>>> program when it starts. The consequence, when using an ixgbe card,
>>>>>> is that the load balancing gets reset and all interrupts end up on
>>>>>> the first core.
>>>>>
>>>>> This definitely should _not_ be a side effect of enabling XDP on a device.
>>>>>
>>>>
>>>> Agreed. CC'ing Emil and Alex: we should restore these settings
>>>> after the reconfiguration done to support a queue per core.
>>>>
>>>> .John
>>>
>>> So the interrupt configuration has to get reset, since we have to
>>> assign 2 Tx queues for every Rx queue instead of the 1-to-1 mapping
>>> that was there previously. That is a natural consequence of
>>> rearranging the queues as currently happens. The issue is that the
>>> q_vectors themselves have to be reallocated. The only way to avoid
>>> that would be to always pre-allocate the Tx queues for XDP.
>>>
>>> Also, just to be clear, we are talking about the interrupts being
>>> reset, not the RSS key, right? I just want to make sure that is what
>>> we are talking about.
>>>
>>
>> Yes.
>> From the tests we did, I only observed all the IRQs being reset to the first CPU after Suricata started.
>>
>>
>>
>>> Thanks.
>>>
>>> - Alex
>
> Hi,
>
> We were wondering if there is any follow-up or potential solution
> for this. If there is anything we could help test in that regard,
> please let us know.
>
> Thank you
>
> --
> Regards,
> Peter Manev

We don't have a solution available for this yet. Basically, what it
comes down to is that we have to change the driver code so that it
assumes it will always need to allocate Tx rings for XDP, and if it
can't, we have to disable the XDP feature. The current logic is to
advertise the XDP feature and then allocate the rings when XDP is
actually used, and if that fails, we fail to load the XDP program.
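
Sketched out, that would look something like the following (purely
hypothetical; nothing like this exists in the driver, and the helper
name and capacity check are made up for illustration):

/* reserve the XDP Tx rings at probe time, before any program is
 * attached, so a later attach only swaps a program pointer and never
 * reshuffles queues, q_vectors, or interrupts */
static bool ixgbe_reserve_xdp_rings(struct ixgbe_adapter *adapter)
{
	unsigned int wanted = adapter->num_tx_queues + nr_cpu_ids;

	if (wanted > MAX_TX_QUEUES + MAX_XDP_QUEUES)
		return false;	/* caller would then not advertise
				 * XDP support at all */

	adapter->num_xdp_queues = nr_cpu_ids;
	return true;
}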

Unfortunately, I don't have an ETA for when we can get to that. It
may be a while; however, patches are always welcome.

Thanks.

- Alex


* Re: ixgbe tuning reset when XDP is setup
  2018-01-25 15:07           ` Alexander Duyck
@ 2018-02-21 20:01             ` Peter Manev
  0 siblings, 0 replies; 8+ messages in thread
From: Peter Manev @ 2018-02-21 20:01 UTC (permalink / raw)
  To: Alexander Duyck
  Cc: John Fastabend, David Miller, Eric Leblond, Netdev, xdp-newbies,
	Emil Tantilov

>>
>> Hi,
>>
>> We were wondering if there is any follow-up or potential solution
>> for this. If there is anything we could help test in that regard,
>> please let us know.
>>
>> Thank you
>>
>> --
>> Regards,
>> Peter Manev
>
> We don't have a solution available for this yet. Basically, what it
> comes down to is that we have to change the driver code so that it
> assumes it will always need to allocate Tx rings for XDP, and if it
> can't, we have to disable the XDP feature. The current logic is to
> advertise the XDP feature and then allocate the rings when XDP is
> actually used, and if that fails, we fail to load the XDP program.
>
> Unfortunately, I don't have an ETA for when we can get to that. It
> may be a while; however, patches are always welcome.
>
> Thanks.
>
> - Alex

It seems the issue is not present in 4.15.2 (while it is in 4.13.10,
for example) using the in-tree ixgbe
(/lib/modules/4.15.2-amd64/kernel/drivers/net/ethernet/intel/ixgbe).

Not sure what triggered the fix; we thought this would be a good place
to ask for some pointers.

Thank you

-- 
Regards,
Peter Manev

