xdp-newbies.vger.kernel.org archive mirror
* XDP Redirect and TX Metadata
@ 2024-02-12  8:27 Florian Kauer
  2024-02-12 13:41 ` [xdp-hints] " Toke Høiland-Jørgensen
  0 siblings, 1 reply; 4+ messages in thread
From: Florian Kauer @ 2024-02-12  8:27 UTC (permalink / raw)
  To: xdp-hints, xdp-newbies

Hi,
I am currently implementing an eBPF program for redirecting from one physical interface to another. So basically, I am loading the following at enp8s0:

SEC("prog")
int xdp_redirect(struct xdp_md *ctx) {
	/* ... */
	return bpf_redirect(3 /* enp5s0 */, 0);
}
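
For reference, the same redirect can also go through a devmap; a rough,
libbpf-style sketch (the map name tx_port is arbitrary, and userspace
would populate key 0 with the egress ifindex, e.g. 3 for enp5s0):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_DEVMAP);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, __u32);
} tx_port SEC(".maps");

SEC("xdp")
int xdp_redirect_devmap(struct xdp_md *ctx) {
	/* ... */
	return bpf_redirect_map(&tx_port, 0, 0);
}

char _license[] SEC("license") = "GPL";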

I made three observations that I would like to discuss with you:

1. The redirection only works when I ALSO load some eBPF at the egress interface (enp5s0). It can be just

SEC("prog")
int xdp_pass(struct xdp_md *ctx) {
	return XDP_PASS;
}

but there has to be at least something. Otherwise, only xdp_redirect is called, but xdp_devmap_xmit is not.
It seems somewhat reasonable that the interface where the traffic is redirected to also needs to have the
XDP functionality initialized somehow, but it was unexpected at first. I tested it with an i225-IT (igc driver)
and an 82576 (igb driver). So, is this a bug or a feature?
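
For reference, attaching such a dummy program from userspace with libbpf
looks roughly like this (sketch; assumes libbpf >= 1.0, an object file
named xdp_pass.o, and a section name libbpf recognizes, e.g. SEC("xdp")):

#include <bpf/libbpf.h>
#include <linux/if_link.h>
#include <net/if.h>

int attach_dummy_pass(const char *ifname) {
	struct bpf_object *obj = bpf_object__open_file("xdp_pass.o", NULL);
	struct bpf_program *prog;
	int ifindex = if_nametoindex(ifname);

	if (!obj || !ifindex || bpf_object__load(obj))
		return -1;

	prog = bpf_object__next_program(obj, NULL);
	if (!prog)
		return -1;

	/* Native/driver mode, which is what igc/igb use for XDP */
	return bpf_xdp_attach(ifindex, bpf_program__fd(prog),
			      XDP_FLAGS_DRV_MODE, NULL);
}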

2. For the RX side, the metadata is documented as "XDP RX Metadata" (https://docs.kernel.org/networking/xdp-rx-metadata.html),
while for TX it is "AF_XDP TX Metadata" (https://www.kernel.org/doc/html/next/networking/xsk-tx-metadata.html).
That seems to imply that TX metadata only works for AF_XDP, but not for direct redirection. Is there a reason for that?
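
For context, my understanding of the AF_XDP variant is that the producer
places a struct xsk_tx_metadata directly in front of the packet data and
flags it in the descriptor, roughly like this (sketch based on the
documentation above; assumes a >= 6.8 kernel, a driver with TX metadata
support, and a umem registered with tx_metadata_len =
sizeof(struct xsk_tx_metadata)):

#include <linux/if_xdp.h>
#include <string.h>

/* umem_area and desc come from the surrounding AF_XDP TX path. */
static void request_tx_timestamp(void *umem_area, struct xdp_desc *desc) {
	/* The metadata block sits immediately before the packet data. */
	struct xsk_tx_metadata *meta =
		(struct xsk_tx_metadata *)((char *)umem_area + desc->addr) - 1;

	memset(meta, 0, sizeof(*meta));
	meta->flags = XDP_TXMD_FLAGS_TIMESTAMP; /* request a HW TX timestamp */

	/* Mark the descriptor so the kernel looks for the metadata. */
	desc->options |= XDP_TX_METADATA;

	/* After completion, the timestamp is in meta->completion.tx_timestamp. */
}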

3. At least for the igc, the egress queue is currently selected by using the smp_processor_id.
(https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/net/ethernet/intel/igc/igc_main.c?h=v6.8-rc4#n2453)
For our application, I would like to define the queue on a per-packet basis via the eBPF.
This would allow to steer the traffic to the correct queue when using TAPRIO full hardware offload.
Do you see any problem with introducing a new metadata field to define the egress queue?
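
For reference, the selection I am referring to is roughly the following
(paraphrased from igc_xdp_get_tx_ring() at the link above, not verbatim):

/* The TX ring is derived from the CPU that processes the packet,
 * not from anything the eBPF program could express. */
static struct igc_ring *igc_xdp_get_tx_ring(struct igc_adapter *adapter, int cpu) {
	int index = cpu;

	if (unlikely(index >= adapter->num_tx_queues))
		index = index % adapter->num_tx_queues;

	return adapter->tx_ring[index];
}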

Thanks,
Florian


* Re: [xdp-hints] XDP Redirect and TX Metadata
  2024-02-12  8:27 XDP Redirect and TX Metadata Florian Kauer
@ 2024-02-12 13:41 ` Toke Høiland-Jørgensen
  2024-02-12 14:35   ` Florian Kauer
  0 siblings, 1 reply; 4+ messages in thread
From: Toke Høiland-Jørgensen @ 2024-02-12 13:41 UTC (permalink / raw)
  To: Florian Kauer, xdp-hints, xdp-newbies

Florian Kauer <florian.kauer@linutronix.de> writes:

> Hi,
> I am currently implementing an eBPF program for redirecting from one physical interface to another. So basically, I am loading the following at enp8s0:
>
> SEC("prog")
> int xdp_redirect(struct xdp_md *ctx) {
> 	/* ... */
> 	return bpf_redirect(3 /* enp5s0 */, 0);
> }
>
> I made three observations that I would like to discuss with you:
>
> 1. The redirection only works when I ALSO load some eBPF at the egress interface (enp5s0). It can be just
>
> SEC("prog")
> int xdp_pass(struct xdp_md *ctx) {
> 	return XDP_PASS;
> }
>
> but there has to be at least something. Otherwise, only xdp_redirect is called, but xdp_devmap_xmit is not.
> It seems somewhat reasonable that the interface where the traffic is redirected to also needs to have the
> XDP functionality initialized somehow, but it was unexpected at first. I tested it with an i225-IT (igc driver)
> and an 82576 (igb driver). So, is this a bug or a feature?

I personally consider it a bug, but all the Intel drivers work this way,
unfortunately. There was some discussion around making the XDP feature
bits read-write, making it possible to enable XDP via ethtool instead of
having to load a dummy XDP program. But no patches have materialised yet.

> 2. For the RX side, the metadata is documented as "XDP RX Metadata"
> (https://docs.kernel.org/networking/xdp-rx-metadata.html), while for
> TX it is "AF_XDP TX Metadata"
> (https://www.kernel.org/doc/html/next/networking/xsk-tx-metadata.html).
> That seems to imply that TX metadata only works for AF_XDP, but not
> for direct redirection. Is there a reason for that?

Well, IIRC, AF_XDP was the most pressing use case, and no one has gotten
around to extending this to the regular XDP forwarding path yet.

> 3. At least for the igc, the egress queue is currently selected by
> using the smp_processor_id.
> (https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/net/ethernet/intel/igc/igc_main.c?h=v6.8-rc4#n2453)
> For our application, I would like to define the queue on a per-packet
> basis via the eBPF. This would allow to steer the traffic to the
> correct queue when using TAPRIO full hardware offload. Do you see any
> problem with introducing a new metadata field to define the egress
> queue?

Well, a couple :)

1. We'd have to find agreement across drivers for a numbering scheme to
refer to queues.

2. Selecting queues based on CPU index the way it's done now means we
guarantee that the same queue will only be served from one CPU. Which
means we don't have to do any locking, which helps tremendously with
performance. Drivers handle the case where there are more CPUs than
queues a bit differently, but the ones that do generally have a lock
(with associated performance overhead).

As a workaround, you can use a cpumap to steer packets to specific CPUs
and perform the egress redirect inside the cpumap instead of directly on
RX. Requires a bit of knowledge of the hardware configuration, but it
may be enough for what you're trying to do.
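
A rough sketch of the shape of this (section names follow libbpf
conventions; the CPU number and egress ifindex are placeholders, and
userspace has to populate the cpumap entries with a queue size and, for
the second program, its fd in bpf_cpumap_val.bpf_prog.fd):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_CPUMAP);
	__uint(max_entries, 8);
	__type(key, __u32);
	__type(value, struct bpf_cpumap_val);
} cpu_map SEC(".maps");

/* Attached on the RX interface: pick the CPU that serves the wanted TX queue. */
SEC("xdp")
int xdp_rx_steer(struct xdp_md *ctx) {
	__u32 target_cpu = 2; /* placeholder: CPU mapped to the desired queue */

	return bpf_redirect_map(&cpu_map, target_cpu, 0);
}

/* Runs on the target CPU after the cpumap hop. */
SEC("xdp/cpumap")
int xdp_cpumap_redirect(struct xdp_md *ctx) {
	return bpf_redirect(3 /* placeholder egress ifindex */, 0);
}

char _license[] SEC("license") = "GPL";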

-Toke


* Re: [xdp-hints] XDP Redirect and TX Metadata
  2024-02-12 13:41 ` [xdp-hints] " Toke Høiland-Jørgensen
@ 2024-02-12 14:35   ` Florian Kauer
  2024-02-13 13:00     ` Toke Høiland-Jørgensen
  0 siblings, 1 reply; 4+ messages in thread
From: Florian Kauer @ 2024-02-12 14:35 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen, xdp-hints, xdp-newbies

On 12.02.24 14:41, Toke Høiland-Jørgensen wrote:
> Florian Kauer <florian.kauer@linutronix.de> writes:
> 
>> Hi,
>> I am currently implementing an eBPF program for redirecting from one physical interface to another. So basically, I am loading the following at enp8s0:
>>
>> SEC("prog")
>> int xdp_redirect(struct xdp_md *ctx) {
>> 	/* ... */
>> 	return bpf_redirect(3 /* enp5s0 */, 0);
>> }
>>
>> I made three observations that I would like to discuss with you:
>>
>> 1. The redirection only works when I ALSO load some eBPF at the egress interface (enp5s0). It can be just
>>
>> SEC("prog")
>> int xdp_pass(struct xdp_md *ctx) {
>> 	return XDP_PASS;
>> }
>>
>> but there has to be at least something. Otherwise, only xdp_redirect is called, but xdp_devmap_xmit is not.
>> It seems somewhat reasonable that the interface where the traffic is redirected to also needs to have the
>> XDP functionality initialized somehow, but it was unexpected at first. I tested it with an i225-IT (igc driver)
>> and an 82576 (igb driver). So, is this a bug or a feature?
> 
> I personally consider it a bug, but all the Intel drivers work this way,
> unfortunately. There was some discussion around making the XDP feature
> bits read-write, making it possible to enable XDP via ethtool instead of
> having to load a dummy XDP program. But no patches have materialised yet.

I see, thanks! So at least it is expected behavior for now.
How do other non-Intel drivers handle this?


>> 2. For the RX side, the metadata is documented as "XDP RX Metadata"
>> (https://docs.kernel.org/networking/xdp-rx-metadata.html), while for
>> TX it is "AF_XDP TX Metadata"
>> (https://www.kernel.org/doc/html/next/networking/xsk-tx-metadata.html).
>> That seems to imply that TX metadata only works for AF_XDP, but not
>> for direct redirection. Is there a reason for that?
> 
> Well, IIRC, AF_XDP was the most pressing use case, and no one has gotten
> around to extending this to the regular XDP forwarding path yet.

Ok, that is fine. I was afraid that there is some fundamental problem
that prevents implementing this.


>> 3. At least for the igc, the egress queue is currently selected by
>> using the smp_processor_id.
>> (https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/net/ethernet/intel/igc/igc_main.c?h=v6.8-rc4#n2453)
>> For our application, I would like to define the queue on a per-packet
>> basis via the eBPF. This would allow to steer the traffic to the
>> correct queue when using TAPRIO full hardware offload. Do you see any
>> problem with introducing a new metadata field to define the egress
>> queue?
> 
> Well, a couple :)
> 
> 1. We'd have to find agreement across drivers for a numbering scheme to
> refer to queues.

Good point! At least we already refer to queues in the MQPRIO qdisc
( queues count1@offset1 count2@offset2 ... ).
There might be different alternatives (like using the traffic class)
for this IF we want to implement this ...

> 2. Selecting queues based on CPU index the way it's done now means we
> guarantee that the same queue will only be served from one CPU. Which
> means we don't have to do any locking, which helps tremendously with
> performance. Drivers handle the case where there are more CPUs than
> queues a bit differently, but the ones that do generally have a lock
> (with associated performance overhead).

... but this will likely completely prevent implementing this in the
straightforward way. You are right, we do not want the CPUs to constantly
fight for access to the same queues for every packet.

> As a workaround, you can use a cpumap to steer packets to specific CPUs
> and perform the egress redirect inside the cpumap instead of directly on
> RX. Requires a bit of knowledge of the hardware configuration, but it
> may be enough for what you're trying to do.

So I really like this approach at first glance since it avoids the issue
you describe above.

However, as you write, it is very hardware dependent and also depends on
how exactly the driver handles the CPU -> Queue mapping internally.
I have the feeling that the mapping CPU % Queue Number -> Queue as it is
implemented at the moment might be stable neither over time nor across
different drivers, even if it is the most likely one.

What do you think about maybe exporting an interface (e.g. via ethtool)
to define the mapping of CPU -> Queue?

Thanks,
Florian


* Re: [xdp-hints] XDP Redirect and TX Metadata
  2024-02-12 14:35   ` Florian Kauer
@ 2024-02-13 13:00     ` Toke Høiland-Jørgensen
  0 siblings, 0 replies; 4+ messages in thread
From: Toke Høiland-Jørgensen @ 2024-02-13 13:00 UTC (permalink / raw)
  To: Florian Kauer, xdp-hints, xdp-newbies

Florian Kauer <florian.kauer@linutronix.de> writes:

> On 12.02.24 14:41, Toke Høiland-Jørgensen wrote:
>> Florian Kauer <florian.kauer@linutronix.de> writes:
>> 
>>> Hi,
>>> I am currently implementing an eBPF program for redirecting from one physical interface to another. So basically, I am loading the following at enp8s0:
>>>
>>> SEC("prog")
>>> int xdp_redirect(struct xdp_md *ctx) {
>>> 	/* ... */
>>> 	return bpf_redirect(3 /* enp5s0 */, 0);
>>> }
>>>
>>> I made three observations that I would like to discuss with you:
>>>
>>> 1. The redirection only works when I ALSO load some eBPF at the egress interface (enp5s0). It can be just
>>>
>>> SEC("prog")
>>> int xdp_pass(struct xdp_md *ctx) {
>>> 	return XDP_PASS;
>>> }
>>>
>>> but there has to be at least something. Otherwise, only xdp_redirect is called, but xdp_devmap_xmit is not.
>>> It seems somewhat reasonable that the interface where the traffic is redirected to also needs to have the
>>> XDP functionality initialized somehow, but it was unexpected at first. I tested it with an i225-IT (igc driver)
>>> and an 82576 (igb driver). So, is this a bug or a feature?
>> 
>> I personally consider it a bug, but all the Intel drivers work this way,
>> unfortunately. There was some discussion around making the XDP feature
>> bits read-write, making it possible to enable XDP via ethtool instead of
>> having to load a dummy XDP program. But no patches have materialised yet.
>
> I see, thanks! So at least it is expected behavior for now.
> How do other non-Intel drivers handle this?

I believe Mellanox drivers have some kind of global switch that can
completely disable XDP, but if it's enabled (which it is by default)
everything works including redirect. Other drivers just have XDP
features always enabled.

>>> 2. For the RX side, the metadata is documented as "XDP RX Metadata"
>>> (https://docs.kernel.org/networking/xdp-rx-metadata.html), while for
>>> TX it is "AF_XDP TX Metadata"
>>> (https://www.kernel.org/doc/html/next/networking/xsk-tx-metadata.html).
>>> That seems to imply that TX metadata only works for AF_XDP, but not
>>> for direct redirection. Is there a reason for that?
>> 
>> Well, IIRC, AF_XDP was the most pressing use case, and no one has gotten
>> around to extending this to the regular XDP forwarding path yet.
>
> Ok, that is fine. I was afraid that there is some fundamental problem
> that prevents implementing this.
>
>
>>> 3. At least for the igc, the egress queue is currently selected by
>>> using the smp_processor_id.
>>> (https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/net/ethernet/intel/igc/igc_main.c?h=v6.8-rc4#n2453)
>>> For our application, I would like to define the queue on a per-packet
>>> basis via the eBPF. This would allow to steer the traffic to the
>>> correct queue when using TAPRIO full hardware offload. Do you see any
>>> problem with introducing a new metadata field to define the egress
>>> queue?
>> 
>> Well, a couple :)
>> 
>> 1. We'd have to find agreement across drivers for a numbering scheme to
>> refer to queues.
>
> Good point! At least we already refer to queues in the MQPRIO qdisc
> ( queues count1@offset1 count2@offset2 ... ).
> There might be different alternatives (like using the traffic class)
> for this IF we want to implement this ...

Oh, plenty of options; the tricky bit is agreeing on one, and figuring
out what the right kernel abstraction is. For instance, in the regular
networking stack, the concept of a queue is exposed from the driver into
the core stack, but for XDP it isn't.

>> 2. Selecting queues based on CPU index the way it's done now means we
>> guarantee that the same queue will only be served from one CPU. Which
>> means we don't have to do any locking, which helps tremendously with
>> performance. Drivers handle the case where there are more CPUs than
>> queues a bit differently, but the ones that do generally have a lock
>> (with associated performance overhead).
>
> ... but this will likely completely prevent implementing this in the
> straightforward way. You are right, we do not want the CPUs to constantly
> fight for access to the same queues for every packet.
>
>> As a workaround, you can use a cpumap to steer packets to specific CPUs
>> and perform the egress redirect inside the cpumap instead of directly on
>> RX. Requires a bit of knowledge of the hardware configuration, but it
>> may be enough for what you're trying to do.
>
> So I really like this approach at first glance since it avoids the issue
> you describe above.
>
> However, as you write, it is very hardware dependent and also depends on
> how exactly the driver handles the CPU -> Queue mapping internally.
> I have the feeling that the mapping CPU % Queue Number -> Queue as it is
> implemented at the moment might be stable neither over time nor across
> different drivers, even if it is the most likely one.

No, the application would have to figure that out. FWIW I looked at this
for other reasons at some point and didn't find any drivers that did
something different than using the CPU number (with or without the
modulus operation). So in practice I think using the CPU ID as a proxy
for queue number will work just fine on most hardware...

> What do you think about maybe exporting an interface (e.g. via
> ethtool) to define the mapping of CPU -> Queue?

Well, this would require ethtool to know about those queues, which means
defining a driver<->stack concept of queues for XDP. There was some
attempt at doing this some years ago, but it never went anywhere,
unfortunately. I personally think doing something like this would be
worthwhile, but it's a decidedly non-trivial undertaking :)

-Toke

