From: Mina Almasry <almasrymina@google.com>
To: David Wei <dw@davidwei.uk>
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-alpha@vger.kernel.org,
	linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org,
	sparclinux@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
	linux-arch@vger.kernel.org, bpf@vger.kernel.org,
	linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org,
	dri-devel@lists.freedesktop.org,
	"David S. Miller" <davem@davemloft.net>,
	"Eric Dumazet" <edumazet@google.com>,
	"Jakub Kicinski" <kuba@kernel.org>,
	"Paolo Abeni" <pabeni@redhat.com>,
	"Jonathan Corbet" <corbet@lwn.net>,
	"Richard Henderson" <richard.henderson@linaro.org>,
	"Ivan Kokshaysky" <ink@jurassic.park.msu.ru>,
	"Matt Turner" <mattst88@gmail.com>,
	"Thomas Bogendoerfer" <tsbogend@alpha.franken.de>,
	"James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>,
	"Helge Deller" <deller@gmx.de>,
	"Andreas Larsson" <andreas@gaisler.com>,
	"Jesper Dangaard Brouer" <hawk@kernel.org>,
	"Ilias Apalodimas" <ilias.apalodimas@linaro.org>,
	"Steven Rostedt" <rostedt@goodmis.org>,
	"Masami Hiramatsu" <mhiramat@kernel.org>,
	"Mathieu Desnoyers" <mathieu.desnoyers@efficios.com>,
	"Arnd Bergmann" <arnd@arndb.de>,
	"Alexei Starovoitov" <ast@kernel.org>,
	"Daniel Borkmann" <daniel@iogearbox.net>,
	"Andrii Nakryiko" <andrii@kernel.org>,
	"Martin KaFai Lau" <martin.lau@linux.dev>,
	"Eduard Zingerman" <eddyz87@gmail.com>,
	"Song Liu" <song@kernel.org>,
	"Yonghong Song" <yonghong.song@linux.dev>,
	"John Fastabend" <john.fastabend@gmail.com>,
	"KP Singh" <kpsingh@kernel.org>,
	"Stanislav Fomichev" <sdf@google.com>,
	"Hao Luo" <haoluo@google.com>, "Jiri Olsa" <jolsa@kernel.org>,
	"David Ahern" <dsahern@kernel.org>,
	"Willem de Bruijn" <willemdebruijn.kernel@gmail.com>,
	"Shuah Khan" <shuah@kernel.org>,
	"Sumit Semwal" <sumit.semwal@linaro.org>,
	"Christian König" <christian.koenig@amd.com>,
	"Pavel Begunkov" <asml.silence@gmail.com>,
	"Jason Gunthorpe" <jgg@ziepe.ca>,
	"Yunsheng Lin" <linyunsheng@huawei.com>,
	"Shailend Chand" <shailend@google.com>,
	"Harshitha Ramamurthy" <hramamurthy@google.com>,
	"Jeroen de Borst" <jeroendb@google.com>,
	"Praveen Kaligineedi" <pkaligineedi@google.com>,
	shakeel.butt@linux.dev
Subject: Re: [RFC PATCH net-next v6 02/15] net: page_pool: create hooks for custom page providers
Date: Tue, 5 Mar 2024 14:36:59 -0800	[thread overview]
Message-ID: <CAHS8izM5O39mnTQ8mhcQE75amDT4G-3vcgozzjcYsAdd_-he1g@mail.gmail.com> (raw)
In-Reply-To: <1b57dac2-4b04-4bec-b2d7-d0edb4fcabbc@davidwei.uk>

On Tue, Mar 5, 2024 at 1:55 PM David Wei <dw@davidwei.uk> wrote:
>
> On 2024-03-04 18:01, Mina Almasry wrote:
> > +struct memory_provider_ops {
> > +     int (*init)(struct page_pool *pool);
> > +     void (*destroy)(struct page_pool *pool);
> > +     struct page *(*alloc_pages)(struct page_pool *pool, gfp_t gfp);
> > +     bool (*release_page)(struct page_pool *pool, struct page *page);
>
> For ZC Rx we added a scrub() function to memory_provider_ops that is
> called from page_pool_scrub(). Does TCP devmem not need custom
> behaviour, waiting for all netmem_refs to return before destroying the
> page pool? What happens if, e.g., the application crashes?
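
If I'm following, the hook you describe would look roughly like this
(a hedged sketch on my part; I haven't seen the io_uring ZC Rx patch,
so the exact signature is a guess):

struct memory_provider_ops {
	int (*init)(struct page_pool *pool);
	void (*destroy)(struct page_pool *pool);
	struct page *(*alloc_pages)(struct page_pool *pool, gfp_t gfp);
	bool (*release_page)(struct page_pool *pool, struct page *page);
	/* Guessed signature: called from page_pool_scrub() so the
	 * provider can reclaim any netmem still outstanding when the
	 * page pool is destroyed.
	 */
	void (*scrub)(struct page_pool *pool);
};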

(sorry for the long reply, but the refcounting is pretty complicated to
explain, and I feel like we need to agree on how things currently work)

Yeah, the addition of the page_pool_scrub() function is a bit of a
head-scratcher for me. Here is how the (complicated) refcounting works
for devmem TCP (assuming the driver is not doing its own recycling
logic, which complicates things further):

1. When a netmem_ref is allocated by the page_pool (from a dmabuf or a
page), its netmem_get_pp_ref_count_ref()==1, and the netmem belongs to
the page pool for as long as it is waiting in the pool for driver
allocation.

2. When a netmem is allocated by the driver, no refcount is changed,
but the ownership of the netmem_get_pp_ref_count_ref() is implicitly
transferred from the page pool to the driver, i.e. the ref now belongs
to the driver until an skb is formed.

3. When the driver forms an skb using skb_rx_add_frag_netmem(), again
no refcount is changed, but the ownership of the
netmem_get_pp_ref_count_ref() is transferred from the driver to the
TCP stack.

4. When the TCP stack hands the skb to the application, the TCP stack
obtains an additional refcount, so netmem_get_pp_ref_count_ref()==2,
and then frees the skb using skb_frag_unref(), which drops the count
back to netmem_get_pp_ref_count_ref()==1.

5. When the user is done with the skb, the user calls the
DEVMEM_DONTNEED setsockopt, which calls napi_pp_put_netmem() to
recycle the netmem back to the page pool. This doesn't modify any
refcount, but the refcount ownership transfers from userspace back to
the page pool, and we're back at step 1.
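
(From the application's side, step 5 is just a setsockopt. The sketch
below is hedged: SO_DEVMEM_DONTNEED and the token struct follow my
reading of patch 13 of this series, and the exact layout may change:)

/* Hedged userspace sketch of step 5. Assumes the uapi headers from
 * patch 13 of this series, which define SO_DEVMEM_DONTNEED; the
 * struct layout here is my reading of the series, not final.
 */
#include <sys/socket.h>
#include <linux/types.h>

struct dmabuf_token {
	__u32 token_start;	/* first frag token to release */
	__u32 token_count;	/* number of contiguous tokens */
};

static int devmem_dontneed(int fd, __u32 start, __u32 count)
{
	struct dmabuf_token tok = {
		.token_start = start,
		.token_count = count,
	};

	/* Releases the frags' pp refs held on behalf of userspace. */
	return setsockopt(fd, SOL_SOCKET, SO_DEVMEM_DONTNEED,
			  &tok, sizeof(tok));
}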

So, all in all, a netmem can belong to (a) the page pool, (b) the
driver, (c) the TCP stack, or (d) the application, depending on where
exactly it is in the RX path.
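
Compressed into one kernel-side sketch, with the owner of the single
pp ref noted at each hand-off (hedged: page_pool_alloc_netmem() is a
stand-in I made up for the driver allocation path; the other names
follow this series):

/* Sketch only, not taken from the series verbatim. */
static void devmem_rx_ref_lifecycle(struct page_pool *pool,
				    struct sk_buff *skb,
				    unsigned int len)
{
	/* 1+2. Driver allocates: ref == 1, ownership pool -> driver. */
	netmem_ref netmem = page_pool_alloc_netmem(pool, GFP_ATOMIC);

	/* 3. Frag added to the skb: ref == 1, driver -> TCP stack. */
	skb_rx_add_frag_netmem(skb, 0, netmem, 0, len, len);

	/*
	 * 4. recvmsg(): TCP takes an extra ref (== 2) for the app,
	 *    then skb_frag_unref() drops its own (== 1, app-owned).
	 *
	 * 5. SO_DEVMEM_DONTNEED -> napi_pp_put_netmem(): ownership of
	 *    the last ref returns to the page pool; back to step 1.
	 */
}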

When an application running devmem TCP crashes, the netmems that
belong to the page pool or the driver are not touched, because in our
case the page pool is not really tied to the application. However, the
TCP stack notices the application's devmem socket closing, and when it
does, the TCP stack will:

1. Free all the skbs in the socket's receive queue. This is not custom
behavior for devmem TCP; it's standard for TCP to free all skbs still
waiting to be received by the application.
2. Free the references that belong to the application. Since the
application crashed, it will not call the DEVMEM_DONTNEED setsockopt,
so we need to free those references on its behalf. This is done in
this diff:

@@ -2498,6 +2498,15 @@ static void tcp_md5sig_info_free_rcu(struct rcu_head *head)
 void tcp_v4_destroy_sock(struct sock *sk)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
+	__maybe_unused unsigned long index;
+	__maybe_unused void *netmem;
+
+#ifdef CONFIG_PAGE_POOL
+	xa_for_each(&sk->sk_user_frags, index, netmem)
+		WARN_ON_ONCE(!napi_pp_put_page((__force netmem_ref)netmem, false));
+#endif
+
+	xa_destroy(&sk->sk_user_frags);
 
 	trace_tcp_destroy_sock(sk);

To be honest, I think it makes sense for the TCP stack to be
responsible for putting the references that belong to it and to the
application. To me, it does not make much sense for the page pool to
be responsible for putting the references that belong to the TCP stack
or the driver via a page_pool_scrub() function, as those references do
not really belong to the page pool. I'm not sure why there is a
difference between our use cases here, because I'm not an io_uring
expert. Why do you need to scrub all the references on page pool
destruction? Don't these belong to non-page-pool components like the
io_uring stack or the TCP stack otherwise?

-- 
Thanks,
Mina
