[v4,bpf-next,5/6] xsk: respect device's headroom and tailroom on generic xmit path

Message ID 20210216113740.62041-6-alobakin@pm.me
State New, archived
Series
  • xsk: build skb by page (aka generic zerocopy xmit)

Commit Message

Alexander Lobakin Feb. 16, 2021, 11:39 a.m. UTC
xsk_generic_xmit() allocates a new skb and then queues it for
xmitting. The size of the new skb's headroom is desc->len, so it
comes to the driver/device with no reserved headroom and/or
tailroom. Lots of drivers need some headroom (and sometimes
tailroom) to prepend (and/or append) headers or data, e.g. CPU
tags, device-specific headers/descriptors (LSO, TLS etc.), and in
case of no available space skb_cow_head() will reallocate the skb.
Reallocations are unwanted on the fast path, especially when it
comes to XDP, so generic XSK xmit should reserve the space declared
in dev->needed_headroom and dev->needed_tailroom to avoid them.

Note on max(NET_SKB_PAD, L1_CACHE_ALIGN(dev->needed_headroom)):

Usually, output functions reserve LL_RESERVED_SPACE(dev), which
consists of dev->hard_header_len + dev->needed_headroom, aligned
by 16.
However, on XSK xmit the hard header is already in the chunk, so
hard_header_len is not needed. But it'd still be better to align
the data up to a cacheline, while reserving no less than the driver
requests for headroom. NET_SKB_PAD here is to doubly ensure there
will be no reallocations even when the driver advertises no
needed_headroom but in fact needs it (not such a rare case).

Fixes: 35fcde7f8deb ("xsk: support for Tx")
Signed-off-by: Alexander Lobakin <alobakin@pm.me>
---
 net/xdp/xsk.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

Comments

Magnus Karlsson Feb. 16, 2021, 2:08 p.m. UTC | #1
On Tue, Feb 16, 2021 at 12:44 PM Alexander Lobakin <alobakin@pm.me> wrote:
>
> xsk_generic_xmit() allocates a new skb and then queues it for
> xmitting. The size of the new skb's headroom is desc->len, so it
> comes to the driver/device with no reserved headroom and/or
> tailroom. Lots of drivers need some headroom (and sometimes
> tailroom) to prepend (and/or append) headers or data, e.g. CPU
> tags, device-specific headers/descriptors (LSO, TLS etc.), and in
> case of no available space skb_cow_head() will reallocate the skb.
> Reallocations are unwanted on the fast path, especially when it
> comes to XDP, so generic XSK xmit should reserve the space declared
> in dev->needed_headroom and dev->needed_tailroom to avoid them.
>
> Note on max(NET_SKB_PAD, L1_CACHE_ALIGN(dev->needed_headroom)):
>
> Usually, output functions reserve LL_RESERVED_SPACE(dev), which
> consists of dev->hard_header_len + dev->needed_headroom, aligned
> by 16.
> However, on XSK xmit the hard header is already in the chunk, so
> hard_header_len is not needed. But it'd still be better to align
> the data up to a cacheline, while reserving no less than the driver
> requests for headroom. NET_SKB_PAD here is to doubly ensure there
> will be no reallocations even when the driver advertises no
> needed_headroom but in fact needs it (not such a rare case).
>
> Fixes: 35fcde7f8deb ("xsk: support for Tx")
> Signed-off-by: Alexander Lobakin <alobakin@pm.me>

Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>

> ---
>  net/xdp/xsk.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> index 4faabd1ecfd1..143979ea4165 100644
> --- a/net/xdp/xsk.c
> +++ b/net/xdp/xsk.c
> @@ -454,12 +454,16 @@ static int xsk_generic_xmit(struct sock *sk)
>         struct sk_buff *skb;
>         unsigned long flags;
>         int err = 0;
> +       u32 hr, tr;
>
>         mutex_lock(&xs->mutex);
>
>         if (xs->queue_id >= xs->dev->real_num_tx_queues)
>                 goto out;
>
> +       hr = max(NET_SKB_PAD, L1_CACHE_ALIGN(xs->dev->needed_headroom));
> +       tr = xs->dev->needed_tailroom;
> +
>         while (xskq_cons_peek_desc(xs->tx, &desc, xs->pool)) {
>                 char *buffer;
>                 u64 addr;
> @@ -471,11 +475,13 @@ static int xsk_generic_xmit(struct sock *sk)
>                 }
>
>                 len = desc.len;
> -               skb = sock_alloc_send_skb(sk, len, 1, &err);
> +               skb = sock_alloc_send_skb(sk, hr + len + tr, 1, &err);
>                 if (unlikely(!skb))
>                         goto out;
>
> +               skb_reserve(skb, hr);
>                 skb_put(skb, len);
> +
>                 addr = desc.addr;
>                 buffer = xsk_buff_raw_get_data(xs->pool, addr);
>                 err = skb_store_bits(skb, 0, buffer, len);
> --
> 2.30.1
>
>

Patch

diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index 4faabd1ecfd1..143979ea4165 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -454,12 +454,16 @@  static int xsk_generic_xmit(struct sock *sk)
 	struct sk_buff *skb;
 	unsigned long flags;
 	int err = 0;
+	u32 hr, tr;
 
 	mutex_lock(&xs->mutex);
 
 	if (xs->queue_id >= xs->dev->real_num_tx_queues)
 		goto out;
 
+	hr = max(NET_SKB_PAD, L1_CACHE_ALIGN(xs->dev->needed_headroom));
+	tr = xs->dev->needed_tailroom;
+
 	while (xskq_cons_peek_desc(xs->tx, &desc, xs->pool)) {
 		char *buffer;
 		u64 addr;
@@ -471,11 +475,13 @@  static int xsk_generic_xmit(struct sock *sk)
 		}
 
 		len = desc.len;
-		skb = sock_alloc_send_skb(sk, len, 1, &err);
+		skb = sock_alloc_send_skb(sk, hr + len + tr, 1, &err);
 		if (unlikely(!skb))
 			goto out;
 
+		skb_reserve(skb, hr);
 		skb_put(skb, len);
+
 		addr = desc.addr;
 		buffer = xsk_buff_raw_get_data(xs->pool, addr);
 		err = skb_store_bits(skb, 0, buffer, len);