From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: Alexei Starovoitov <ast@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
Andrii Nakryiko <andrii@kernel.org>,
Martin KaFai Lau <martin.lau@linux.dev>
Cc: "Alexander Lobakin" <aleksander.lobakin@intel.com>,
"Maciej Fijalkowski" <maciej.fijalkowski@intel.com>,
"Larysa Zaremba" <larysa.zaremba@intel.com>,
"Toke Høiland-Jørgensen" <toke@redhat.com>,
"Song Liu" <song@kernel.org>,
"Jesper Dangaard Brouer" <hawk@kernel.org>,
"John Fastabend" <john.fastabend@gmail.com>,
"Menglong Dong" <imagedong@tencent.com>,
"Mykola Lysenko" <mykolal@fb.com>,
"David S. Miller" <davem@davemloft.net>,
"Jakub Kicinski" <kuba@kernel.org>,
"Eric Dumazet" <edumazet@google.com>,
"Paolo Abeni" <pabeni@redhat.com>,
bpf@vger.kernel.org, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org
Subject: [PATCH bpf-next v3 0/4] xdp: recycle Page Pool backed skbs built from XDP frames
Date: Mon, 13 Mar 2023 22:42:56 +0100
Message-ID: <20230313214300.1043280-1-aleksander.lobakin@intel.com>
Yeah, I still remember that "Who needs cpumap nowadays" (c), but anyway.
__xdp_build_skb_from_frame() missed the moment when the networking stack
became able to recycle skb pages backed by a page_pool. This made
e.g. cpumap redirect even less effective than plain %XDP_PASS. veth was
also affected in some scenarios.
A lot of drivers already use skb_mark_for_recycle(); it's been almost
two years and there seem to be no issues with using it in the generic
code as well. {__,}xdp_release_frame() can then be removed, as it loses
its last user.
page_pool then becomes zero-alloc (or almost) in the abovementioned
cases as well. Other memory type models (who needs them at this point)
are not affected.
Some numbers on 1 Xeon Platinum core bombed with 27 Mpps of 64-byte
IPv6 UDP, iavf w/XDP[0] (CONFIG_PAGE_POOL_STATS is enabled):

Plain %XDP_PASS on baseline, page_pool driver:

  src cpu Rx     drops        dst cpu Rx
  2.1 Mpps       N/A          2.1 Mpps

cpumap redirect (cross-core, w/o leaving its NUMA node) on baseline:

  6.8 Mpps       5.0 Mpps     1.8 Mpps

cpumap redirect with skb PP recycling:

  7.9 Mpps       5.7 Mpps     2.2 Mpps
                              +22% (vs cpumap redirect on baseline)
[0] https://github.com/alobakin/linux/commits/iavf-xdp
Alexander Lobakin (4):
selftests/bpf: robustify test_xdp_do_redirect with more payload magics
net: page_pool, skbuff: make skb_mark_for_recycle() always available
xdp: recycle Page Pool backed skbs built from XDP frames
xdp: remove unused {__,}xdp_release_frame()
include/linux/skbuff.h | 4 +--
include/net/xdp.h | 29 ---------------
net/core/xdp.c | 19 ++--------
.../bpf/progs/test_xdp_do_redirect.c | 36 +++++++++++++------
4 files changed, 30 insertions(+), 58 deletions(-)
---
From v2[1]:
* fix the test_xdp_do_redirect selftest failing after the series: it was
  relying on the fact that %XDP_PASS frames can't be recycled on veth
  (BPF CI, Alexei);
* explain "w/o leaving its node" in the cover letter (Jesper).
From v1[2]:
* make skb_mark_for_recycle() always available, otherwise there are
  build failures on non-PP systems (kbuild bot);
* 'Page Pool' -> 'page_pool' when it's about a page_pool instance, not
  the API (Jesper);
* expanded test system info a bit in the cover letter (Jesper).
[1] https://lore.kernel.org/bpf/20230303133232.2546004-1-aleksander.lobakin@intel.com
[2] https://lore.kernel.org/bpf/20230301160315.1022488-1-aleksander.lobakin@intel.com
--
2.39.2
Thread overview:
2023-03-13 21:42 Alexander Lobakin [this message]
2023-03-13 21:42 ` [PATCH bpf-next v3 1/4] selftests/bpf: robustify test_xdp_do_redirect with more payload magics Alexander Lobakin
2023-03-13 21:42 ` [PATCH bpf-next v3 2/4] net: page_pool, skbuff: make skb_mark_for_recycle() always available Alexander Lobakin
2023-03-13 21:42 ` [PATCH bpf-next v3 3/4] xdp: recycle Page Pool backed skbs built from XDP frames Alexander Lobakin
2023-03-13 21:43 ` [PATCH bpf-next v3 4/4] xdp: remove unused {__,}xdp_release_frame() Alexander Lobakin
2023-03-16 11:57 ` [PATCH bpf-next v3 0/4] xdp: recycle Page Pool backed skbs built from XDP frames Alexander Lobakin