From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: "John Fastabend" <john.fastabend@gmail.com>,
"Toke Høiland-Jørgensen" <toke@redhat.com>,
"David S. Miller" <davem@davemloft.net>,
"Eric Dumazet" <edumazet@google.com>,
"Jakub Kicinski" <kuba@kernel.org>,
"Paolo Abeni" <pabeni@redhat.com>,
"Alexei Starovoitov" <ast@kernel.org>,
"Daniel Borkmann" <daniel@iogearbox.net>,
"Jesper Dangaard Brouer" <hawk@kernel.org>
Cc: <netdev@vger.kernel.org>, <bpf@vger.kernel.org>
Subject: Re: [PATCH net-next v2 1/4] net: Register system page pool as an XDP memory model
Date: Thu, 4 Apr 2024 11:08:14 +0200 [thread overview]
Message-ID: <2ac91d55-377a-4d07-8d13-b7fa9ee46302@intel.com> (raw)
In-Reply-To: <660dba106f0ed_1cf6b208ad@john.notmuch>
From: John Fastabend <john.fastabend@gmail.com>
Date: Wed, 03 Apr 2024 13:20:32 -0700
> Toke Høiland-Jørgensen wrote:
>> To make the system page pool usable as a source for allocating XDP
>> frames, we need to register it with xdp_reg_mem_model(), so that page
>> return works correctly. This is done in preparation for using the system
>> page pool for the XDP live frame mode in BPF_TEST_RUN; for the same
>> reason, make the per-cpu variable non-static so we can access it from
>> the test_run code as well.
>>
>> Reviewed-by: Alexander Lobakin <aleksander.lobakin@intel.com>
>> Tested-by: Alexander Lobakin <aleksander.lobakin@intel.com>
>> Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
>> ---
>> include/linux/netdevice.h | 1 +
>> net/core/dev.c | 13 ++++++++++++-
>> 2 files changed, 13 insertions(+), 1 deletion(-)
>>
>> diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
>> index c541550b0e6e..e1dfdf0c4075 100644
>> --- a/include/linux/netdevice.h
>> +++ b/include/linux/netdevice.h
>> @@ -3345,6 +3345,7 @@ static inline void input_queue_tail_incr_save(struct softnet_data *sd,
>> }
>>
>> DECLARE_PER_CPU_ALIGNED(struct softnet_data, softnet_data);
>> +DECLARE_PER_CPU_ALIGNED(struct page_pool *, system_page_pool);
>>
>> static inline int dev_recursion_level(void)
>> {
>> diff --git a/net/core/dev.c b/net/core/dev.c
>> index d8dd293a7a27..cdb916a647e7 100644
>> --- a/net/core/dev.c
>> +++ b/net/core/dev.c
>> @@ -428,7 +428,7 @@ EXPORT_PER_CPU_SYMBOL(softnet_data);
>> * PP consumers must pay attention to run APIs in the appropriate context
>> * (e.g. NAPI context).
>> */
>> -static DEFINE_PER_CPU_ALIGNED(struct page_pool *, system_page_pool);
>> +DEFINE_PER_CPU_ALIGNED(struct page_pool *, system_page_pool);
>>
>> #ifdef CONFIG_LOCKDEP
>> /*
>> @@ -11739,12 +11739,20 @@ static int net_page_pool_create(int cpuid)
>> .pool_size = SYSTEM_PERCPU_PAGE_POOL_SIZE,
>> .nid = NUMA_NO_NODE,
>> };
>> + struct xdp_mem_info info;
>> struct page_pool *pp_ptr;
>> + int err;
>>
>> pp_ptr = page_pool_create_percpu(&page_pool_params, cpuid);
>> if (IS_ERR(pp_ptr))
>> return -ENOMEM;
>>
>> + err = xdp_reg_mem_model(&info, MEM_TYPE_PAGE_POOL, pp_ptr);
>> + if (err) {
>> + page_pool_destroy(pp_ptr);
>> + return err;
>> + }
>> +
>> per_cpu(system_page_pool, cpuid) = pp_ptr;
>> #endif
>> return 0;
>> @@ -11834,12 +11842,15 @@ static int __init net_dev_init(void)
>> out:
>> if (rc < 0) {
>> for_each_possible_cpu(i) {
>> + struct xdp_mem_info mem = { .type = MEM_TYPE_PAGE_POOL };
>> struct page_pool *pp_ptr;
>>
>> pp_ptr = per_cpu(system_page_pool, i);
>> if (!pp_ptr)
>> continue;
>>
>> + mem.id = pp_ptr->xdp_mem_id;
>> + xdp_unreg_mem_model(&mem);
>
> Take it or leave it, a net_page_pool_destroy(int cpuid) would be
> symmetric here.
>
>> page_pool_destroy(pp_ptr);
I believe it's better to remove this page_pool_destroy() and let
xdp_unreg_mem_model() destroy the pool: for MEM_TYPE_PAGE_POOL, it
already calls page_pool_destroy() internally, so keeping both means
destroying the pool twice.
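For illustration only, an untested sketch of how that could look when
combined with the symmetric net_page_pool_destroy() helper John
suggested above (name and placement hypothetical):

```c
/* Untested sketch: symmetric counterpart to net_page_pool_create().
 * xdp_unreg_mem_model() already calls page_pool_destroy() for
 * MEM_TYPE_PAGE_POOL, so no explicit destroy is needed here.
 */
static void net_page_pool_destroy(int cpuid)
{
	struct xdp_mem_info mem = { .type = MEM_TYPE_PAGE_POOL };
	struct page_pool *pp_ptr;

	pp_ptr = per_cpu(system_page_pool, cpuid);
	if (!pp_ptr)
		return;

	mem.id = pp_ptr->xdp_mem_id;
	xdp_unreg_mem_model(&mem);
	per_cpu(system_page_pool, cpuid) = NULL;
}
```

The error path in net_dev_init() would then just loop
for_each_possible_cpu(i) and call net_page_pool_destroy(i).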
>> per_cpu(system_page_pool, i) = NULL;
>> }
>
> Acked-by: John Fastabend <john.fastabend@gmail.com>
Thanks,
Olek