From: Alexander Duyck <alexander.duyck@gmail.com>
To: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Cc: Yunsheng Lin <linyunsheng@huawei.com>,
	David Miller <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	linuxarm@openeuler.org, yisen.zhuang@huawei.com,
	Salil Mehta <salil.mehta@huawei.com>,
	thomas.petazzoni@bootlin.com, Marcin Wojtas <mw@semihalf.com>,
	Russell King - ARM Linux <linux@armlinux.org.uk>,
	hawk@kernel.org, Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	John Fastabend <john.fastabend@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Will Deacon <will@kernel.org>,
	Matthew Wilcox <willy@infradead.org>,
	Vlastimil Babka <vbabka@suse.cz>,
	fenghua.yu@intel.com, guro@fb.com, peterx@redhat.com,
	Feng Tang <feng.tang@intel.com>, Jason Gunthorpe <jgg@ziepe.ca>,
	mcroce@microsoft.com, Hugh Dickins <hughd@google.com>,
	Jonathan Lemon <jonathan.lemon@gmail.com>,
	Alexander Lobakin <alobakin@pm.me>,
	Willem de Bruijn <willemb@google.com>,
	wenxu@ucloud.cn, cong.wang@bytedance.com,
	Kevin Hao <haokexin@gmail.com>,
	nogikh@google.com, Marco Elver <elver@google.com>,
	Netdev <netdev@vger.kernel.org>,
	LKML <linux-kernel@vger.kernel.org>, bpf <bpf@vger.kernel.org>
Subject: Re: [PATCH net-next RFC 1/2] page_pool: add page recycling support based on elevated refcnt
Date: Thu, 8 Jul 2021 08:29:56 -0700	[thread overview]
Message-ID: <CAKgT0UcoLE=MhG+QxS=up5BH_cK5FBSwyMHDvfUg2g8083UM+w@mail.gmail.com> (raw)
In-Reply-To: <YOcXDISpR7Cf+eZG@enceladus>

On Thu, Jul 8, 2021 at 8:17 AM Ilias Apalodimas
<ilias.apalodimas@linaro.org> wrote:
>
> > > >
> > > > [...]
> > > >
> > > > > > > > The above expectation is based on the assumption that the last user
> > > > > > > > will always call page_pool_put_full_page() in order to do the
> > > > > > > > recycling or the resource cleanup (DMA unmapping, etc.).
> > > > > > > >
> > > > > > > > As skb_free_head() and skb_release_data() both check skb->pp_recycle
> > > > > > > > and call page_pool_put_full_page() if needed, I think we are safe
> > > > > > > > for most cases. The one case I am not so sure about is RX zero-copy,
> > > > > > > > which seems to also bump up the refcnt before mapping the page to
> > > > > > > > user space; we might need to ensure RX zero-copy is not the last
> > > > > > > > user of the page, or if it is the last user, make sure it calls
> > > > > > > > page_pool_put_full_page() too.
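
For reference, the check being described sits on the skb head free path; a
condensed sketch (the helper name below is mine, the real logic lives in
skb_free_head()/skb_release_data() in net/core/skbuff.c and may differ in
detail) looks roughly like:

  #include <linux/skbuff.h>
  #include <linux/slab.h>
  #include <net/page_pool.h>

  /* Condensed sketch, not the exact mainline code: the recycle attempt
   * only happens when the skb is marked pp_recycle and the head came
   * from a page frag; otherwise we fall back to the normal free.
   */
  static void free_head_sketch(struct sk_buff *skb)
  {
          unsigned char *head = skb->head;

          if (skb->head_frag) {
                  if (skb->pp_recycle &&
                      page_pool_return_skb_page(virt_to_head_page(head)))
                          return;
                  skb_free_frag(head);
          } else {
                  kfree(head);
          }
  }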
> > > > > > >
> > > > > > > Yes, but the skb->pp_recycle value is per skb, not per page. So my
> > > > > > > concern is that carrying around that value can be problematic, as
> > > > > > > there are a number of possible cases where the pages might be
> > > > > > > unintentionally recycled. All it would take is for a packet to get
> > > > > > > cloned a few times and for somebody to then use pskb_expand_head,
> > > > > > > and you would have multiple cases, possibly simultaneously, of
> > > > > > > entities trying to free the page. I just worry it opens us up to a
> > > > > > > number of possible races.
> > > > > >
> > > > > > Maybe I missed something, but I thought the cloned SKBs would never trigger
> > > > > > the recycling path, since they are protected by the atomic dataref check in
> > > > > > skb_release_data(). What am I missing?
> > > > >
> > > > > Are you talking about the head frag? Normally a clone wouldn't cause
> > > > > an issue because the head isn't changed. In the case of the head_frag
> > > > > we should be safe, since pskb_expand_head will just kmalloc the new
> > > > > head and clear head_frag, so it won't trigger
> > > > > page_pool_return_skb_page on the head_frag; the dataref just goes
> > > > > from 2 to 1.
> > > > >
> > > > > The problem is that pskb_expand_head copies the page frags over and
> > > > > takes a reference on the pages. At that point you would have two skbs
> > > > > both pointing to the same set of pages, each ready to call
> > > > > page_pool_return_skb_page on the pages at any time and possibly
> > > > > racing with the other.
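
To make that step concrete, a condensed illustration (the helper name is
mine and this is not the exact pskb_expand_head() code):

  #include <linux/skbuff.h>

  /* When the skb being expanded is cloned, the old shared_info is still
   * referenced by the clone, so the expanded skb copies the frag array
   * and takes an extra reference on every frag page.  From this point
   * on, two skbs reference the same (possibly page_pool-owned) pages.
   */
  static void expanded_skb_take_frag_refs(struct sk_buff *skb)
  {
          int i;

          for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
                  skb_frag_ref(skb, i);
  }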
> > > >
> > > > Ok let me make sure I get the idea properly.
> > > > When pskb_expand_head is called, the new dataref will be 1, but the
> > > > head_frag will be set to 0, in which case the recycling code won't be
> > > > called for that skb.
> > > > So you are mostly worried about a race within the context of
> > > > pskb_expand_head() between copying the frags, releasing the previous
> > > > head and preparing the new one (on a cloned skb)?
> > >
> > > The race is between freeing the two skbs. So the original and the
> > > clone w/ the expanded head will have separate instances of the page. I
> > > am pretty certain there is a race if the two of them start trying to
> > > free the page frags at the same time.
> > >
> >
> > Right, I completely forgot about calling __skb_frag_unref() before releasing
> > the head...
> > You are right, this will be a race.  Let me go back to the original mail
> > thread and see what we can do.
> >
>
> What do you think about resetting pp_recycle bit on pskb_expand_head()?

I assume you mean specifically in the cloned case?

> If my memory serves me right, Eric wanted that from the beginning. Then the
> cloned/expanded SKB won't trigger the recycling.  If that skb hits the free
> path first, we'll end up recycling the fragments eventually.  If the
> original one goes first, we'll just unmap the page(s), and freeing the cloned
> one will free all the remaining buffers.

I *think* that should be fine. Effectively what we are doing is making
it so that if the original skb is freed first the pages are released,
and if it is released after the clone/expanded skb then they can be
recycled.
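
Roughly what that means at frag-release time (a simplified sketch of a
recycle-aware unref; the 'recycle' argument would come from skb->pp_recycle
of the skb being freed, and the in-tree helper may differ in detail):

  #include <linux/mm.h>
  #include <linux/skbuff.h>
  #include <net/page_pool.h>

  /* Simplified sketch: only a frag freed through an skb that still
   * carries pp_recycle can go back to the pool, everything else is a
   * plain reference drop.
   */
  static void frag_unref_sketch(skb_frag_t *frag, bool recycle)
  {
          struct page *page = skb_frag_page(frag);

          if (recycle && page_pool_return_skb_page(page))
                  return;         /* page went back to its pool */
          put_page(page);         /* plain refcount drop */
  }

So whichever of the two skbs is freed first, only the one that still owns
the bit can attempt the recycle; the other just drops references.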

The issue is that we have to maintain the invariant that there is exactly
one caller of the recycling function for the pages. So at any spot where
we are updating skb->head we will have to check whether there is a clone,
and if so clear the pp_recycle flag on our skb so that it doesn't try to
recycle the page frags as well.
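
Something along these lines is what I have in mind (a hypothetical sketch,
not a tested patch; the helper name is made up and the callers would be
pskb_expand_head() and any other path that swaps out skb->head):

  #include <linux/skbuff.h>

  /* Hypothetical sketch: a path that is about to replace skb->head on a
   * cloned skb gives up the recycle bit, so exactly one skb is left that
   * may return the shared frag pages to the page_pool.
   */
  static void skb_drop_pp_recycle_if_cloned(struct sk_buff *skb)
  {
          if (skb_cloned(skb))
                  skb->pp_recycle = 0;
  }

The original skb keeps the bit, so the pool still gets its pages back once
the last reference on them is dropped.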


Thread overview: 34+ messages
2021-06-30  9:17 [PATCH net-next RFC 0/2] add elevated refcnt support for page pool Yunsheng Lin
2021-06-30  9:17 ` [PATCH net-next RFC 1/2] page_pool: add page recycling support based on elevated refcnt Yunsheng Lin
2021-07-02  9:42   ` Jesper Dangaard Brouer
2021-07-02 10:15     ` Yunsheng Lin
2021-07-06  4:54       ` Ilias Apalodimas
2021-07-06  6:46         ` Yunsheng Lin
2021-07-06  8:18           ` Ilias Apalodimas
2021-07-06 20:45   ` Alexander Duyck
2021-07-07  3:05     ` Yunsheng Lin
2021-07-07 15:01       ` Alexander Duyck
2021-07-07 19:03         ` Ilias Apalodimas
2021-07-07 21:49           ` Alexander Duyck
2021-07-08 14:21             ` Ilias Apalodimas
2021-07-08 14:24               ` Alexander Duyck
2021-07-08 14:50                 ` Ilias Apalodimas
2021-07-08 15:17                   ` Ilias Apalodimas
2021-07-08 15:29                     ` Alexander Duyck [this message]
2021-07-08 15:36                       ` Ilias Apalodimas
2021-07-08 15:41                         ` Alexander Duyck
2021-07-08 15:47                           ` Ilias Apalodimas
2021-07-08  2:27         ` Yunsheng Lin
2021-07-08 15:36           ` Alexander Duyck
2021-07-09  6:26             ` Yunsheng Lin
2021-07-09 14:15               ` Alexander Duyck
2021-07-10  9:16                 ` [Linuxarm] " Yunsheng Lin
2021-06-30  9:17 ` [PATCH net-next RFC 2/2] net: hns3: support skb's frag page recycling based on page pool Yunsheng Lin
2021-07-02  8:36 ` [PATCH net-next RFC 0/2] add elevated refcnt support for " Ilias Apalodimas
2021-07-02 13:39 ` Matteo Croce
2021-07-06 15:51   ` Russell King (Oracle)
2021-07-06 23:19     ` Matteo Croce
2021-07-07 16:50       ` Marcin Wojtas
2021-07-09  4:15         ` Matteo Croce
2021-07-09  6:40           ` Yunsheng Lin
2021-07-09  6:42             ` Ilias Apalodimas
