From: Felix Fietkau <nbd@nbd.name>
To: Alexander Duyck <alexander.duyck@gmail.com>
Cc: Ilias Apalodimas <ilias.apalodimas@linaro.org>,
	netdev@vger.kernel.org, Jesper Dangaard Brouer <hawk@kernel.org>,
	"David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	Lorenzo Bianconi <lorenzo@kernel.org>,
	linux-kernel@vger.kernel.org,
	Yunsheng Lin <linyunsheng@huawei.com>
Subject: Re: [PATCH] net: page_pool: fix refcounting issues with fragmented allocation
Date: Thu, 26 Jan 2023 18:44:17 +0100
Message-ID: <bc0fa31a-c935-c6f0-f968-9e2a54bafd45@nbd.name>
In-Reply-To: <CAKgT0Ud8npNtncH-KbMtj_R=UZ=aFA9T8U=TZoLG_94eVUxKPA@mail.gmail.com>

On 26.01.23 17:08, Alexander Duyck wrote:
> On Thu, Jan 26, 2023 at 1:15 AM Felix Fietkau <nbd@nbd.name> wrote:
>>
>> On 26.01.23 07:12, Felix Fietkau wrote:
>> > On 25.01.23 23:14, Alexander H Duyck wrote:
>> >> On Wed, 2023-01-25 at 20:40 +0100, Felix Fietkau wrote:
>> >>> On 25.01.23 20:10, Felix Fietkau wrote:
>> >>> > On 25.01.23 20:02, Alexander H Duyck wrote:
>> >>> > > On Wed, 2023-01-25 at 19:42 +0100, Felix Fietkau wrote:
>> >>> > > > On 25.01.23 19:26, Alexander H Duyck wrote:
>> >>> > > > > On Wed, 2023-01-25 at 18:32 +0100, Felix Fietkau wrote:
>> >>> > > > > > On 25.01.23 18:11, Alexander H Duyck wrote:
>> >>> > > > > > > On Tue, 2023-01-24 at 22:30 +0100, Felix Fietkau wrote:
>> >>> > > > > > > > On 24.01.23 22:10, Alexander H Duyck wrote:
>> >>> > > > > > > > > On Tue, 2023-01-24 at 18:22 +0100, Felix Fietkau wrote:
>> >>> > > > > > > > > > On 24.01.23 15:11, Ilias Apalodimas wrote:
>> >>> > > > > > > > > > > Hi Felix,
>> >>> > > > > > > > > > >
>> >>> > > > > > > > > > > ++cc Alexander and Yunsheng.
>> >>> > > > > > > > > > >
>> >>> > > > > > > > > > > Thanks for the report
>> >>> > > > > > > > > > >
>> >>> > > > > > > > > > > On Tue, 24 Jan 2023 at 14:43, Felix Fietkau <nbd@nbd.name> wrote:
>> >>> > > > > > > > > > > >
>> >>> > > > > > > > > > > > While testing fragmented page_pool allocation in the mt76 driver, I was able
>> >>> > > > > > > > > > > > to reliably trigger page refcount underflow issues, which did not occur with
>> >>> > > > > > > > > > > > full-page page_pool allocation.
>> >>> > > > > > > > > > > > It appears to me that handling refcounting in two separate counters
>> >>> > > > > > > > > > > > (page->pp_frag_count and page refcount) is racy when page refcount gets
>> >>> > > > > > > > > > > > incremented by code dealing with skb fragments directly, and
>> >>> > > > > > > > > > > > page_pool_return_skb_page is called multiple times for the same fragment.
>> >>> > > > > > > > > > > >
>> >>> > > > > > > > > > > > Dropping page->pp_frag_count and relying entirely on the page refcount makes
>> >>> > > > > > > > > > > > these underflow issues and crashes go away.
>> >>> > > > > > > > > > > >
>> >>> > > > > > > > > > >
>> >>> > > > > > > > > > > This has been discussed here [1].  TL;DR changing this to page
>> >>> > > > > > > > > > > refcount might blow up in other colorful ways.  Can we look closer and
>> >>> > > > > > > > > > > figure out why the underflow happens?
>> >>> > > > > > > > > > I don't see how the approach taken in my patch would blow up. From what I
>> >>> > > > > > > > > > can tell, it should be fairly close to how refcount is handled in
>> >>> > > > > > > > > > page_frag_alloc. The main improvement it adds is to prevent it from
>> >>> > > > > > > > > > blowing up if pool-allocated fragments get shared across multiple skbs
>> >>> > > > > > > > > > with corresponding get_page and page_pool_return_skb_page calls.
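>> >>> > > > > > > > > >
>> >>> > > > > > > > > > For reference, the page_frag_alloc scheme I'm comparing against
>> >>> > > > > > > > > > works roughly like this (simplified from mm/page_alloc.c):
>> >>> > > > > > > > > >
>> >>> > > > > > > > > >    /* refill: take one big batch of page references up front */
>> >>> > > > > > > > > >    page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE - 1);
>> >>> > > > > > > > > >    nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE;
>> >>> > > > > > > > > >
>> >>> > > > > > > > > >    /* each fragment handed out consumes one unit of bias */
>> >>> > > > > > > > > >    nc->pagecnt_bias--;
>> >>> > > > > > > > > >
>> >>> > > > > > > > > >    /* each fragment freed goes page_frag_free() -> put_page(); the
>> >>> > > > > > > > > >     * page returns to the allocator only once all refs are gone */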
>> >>> > > > > > > > > >
>> >>> > > > > > > > > > - Felix
>> >>> > > > > > > > > >
>> >>> > > > > > > > >
>> >>> > > > > > > > > Do you have the patch available to review as an RFC? From what I am
>> >>> > > > > > > > > seeing it looks like you are underrunning on the pp_frag_count itself.
>> >>> > > > > > > > > I would suspect the issue to be something like starting with a bad
>> >>> > > > > > > > > count in terms of the total number of references, or deducting the wrong
>> >>> > > > > > > > > amount when you finally free the page assuming you are tracking your
>> >>> > > > > > > > > frag count using a non-atomic value in the driver.
>> >>> > > > > > > > The driver patches for page pool are here:
>> >>> > > > > > > > https://patchwork.kernel.org/project/linux-wireless/patch/64abb23f4867c075c19d704beaae5a0a2f8e8821.1673963374.git.lorenzo@kernel.org/
>> >>> > > > > > > > https://patchwork.kernel.org/project/linux-wireless/patch/68081e02cbe2afa2d35c8aa93194f0adddbd0f05.1673963374.git.lorenzo@kernel.org/
>> >>> > > > > > > >
>> >>> > > > > > > > They are also applied in my mt76 tree at:
>> >>> > > > > > > > https://github.com/nbd168/wireless
>> >>> > > > > > > >
>> >>> > > > > > > > - Felix
>> >>> > > > > > >
>> >>> > > > > > > So one thing I am thinking is that we may be seeing an issue where we
>> >>> > > > > > > are somehow getting a mix of frag and non-frag based page pool pages.
>> >>> > > > > > > That is the only case I can think of where we might be underflowing
>> >>> > > > > > > into negative values. If you could add some additional debug info on the underflow
>> >>> > > > > > > WARN_ON case in page_pool_defrag_page that might be useful.
>> >>> > > > > > > Specifically I would be curious what the actual return value is. I'm
>> >>> > > > > > > assuming we are only hitting negative 1, but I would want to verify we
>> >>> > > > > > > aren't seeing something else.
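>> >>> > > > > > >
>> >>> > > > > > > Something along these lines (quick untested sketch against
>> >>> > > > > > > page_pool_defrag_page in include/net/page_pool.h; exact context may
>> >>> > > > > > > differ on your tree) would show the actual value:
>> >>> > > > > > >
>> >>> > > > > > >    ret = atomic_long_sub_return(nr, &page->pp_frag_count);
>> >>> > > > > > >    /* debug: report the underflow value and the page refcount */
>> >>> > > > > > >    if (WARN_ON(ret < 0))
>> >>> > > > > > >        pr_err("pp_frag_count underflow: ret=%ld nr=%ld refcnt=%d\n",
>> >>> > > > > > >               ret, nr, page_ref_count(page));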
>> >>> > > > > > I'll try to run some more tests soon. However, I think I found the piece
>> >>> > > > > > of code that is incompatible with using pp_frag_count.
>> >>> > > > > > When receiving an A-MSDU packet (multiple MSDUs within a single 802.11
>> >>> > > > > > packet), and it is not split by the hardware, a cfg80211 function
>> >>> > > > > > extracts the individual MSDUs into separate skbs. In that case, a
>> >>> > > > > > fragment can be shared across multiple skbs, and get_page is used to
>> >>> > > > > > increase the refcount.
>> >>> > > > > > You can find this in net/wireless/util.c: ieee80211_amsdu_to_8023s (and
>> >>> > > > > > its helper functions).
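>> >>> > > > > >
>> >>> > > > > > The pattern in there is roughly this (simplified sketch, not the
>> >>> > > > > > exact helper code):
>> >>> > > > > >
>> >>> > > > > >    /* share one fragment of the original rx skb with a new skb */
>> >>> > > > > >    struct page *page = skb_frag_page(frag);
>> >>> > > > > >
>> >>> > > > > >    get_page(page);  /* page refcount, not pp_frag_count */
>> >>> > > > > >    skb_add_rx_frag(new_skb, skb_shinfo(new_skb)->nr_frags, page,
>> >>> > > > > >                    skb_frag_off(frag), skb_frag_size(frag),
>> >>> > > > > >                    skb_frag_size(frag));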
>> >>> > > > >
>> >>> > > > > I'm not sure if it is problematic or not. Basically it trades one
>> >>> > > > > reference type for the other: copying over the frags, calling get_page
>> >>> > > > > on each frag, and then using dev_kfree_skb to disassemble and release
>> >>> > > > > the pp_frag references.
>> >>> > > > > There should be other paths in the kernel that are doing something
>> >>> > > > > similar.
>> >>> > > > >
>> >>> > > > > > This code also has a bug where it doesn't set pp_recycle on the newly
>> >>> > > > > > allocated skb if the previous one has it, but that's a separate matter
>> >>> > > > > > and fixing it doesn't make the crash go away.
>> >>> > > > >
>> >>> > > > > Adding the recycle flag there would cause this bug. So one thing we
>> >>> > > > > might be seeing is something like that triggering this error.
>> >>> > > > > Specifically, if the page reference is taken via get_page when
>> >>> > > > > assembling the new skb, then we cannot set the recycle flag in the new
>> >>> > > > > skb; otherwise it will result in the reference undercount we are
>> >>> > > > > seeing. What we are doing in this case is shifting the references away
>> >>> > > > > from pp_frag_count to the page reference count. If we set the
>> >>> > > > > pp_recycle flag, it would cause us to decrement pp_frag_count instead
>> >>> > > > > of the page reference count, resulting in the underrun.
>> >>> > > > Couldn't leaving out the pp_recycle flag potentially lead to a case
>> >>> > > > where the last user of the page drops it via page_frag_free instead of
>> >>> > > > page_pool_return_skb_page? Is that valid?
>> >>> > >
>> >>> > > No. What will happen is that when the pp_frag_count is exhausted the
>> >>> > > page will be unmapped and evicted from the page pool. When the page is
>> >>> > > then finally freed it will end up going back to the page allocator
>> >>> > > instead of page pool.
>> >>> > >
>> >>> > > Basically the idea is that until pp_frag_count reaches 0 there will be
>> >>> > > at least 1 page reference held.
>> >>> > >
>> >>> > > > > > Is there any way I can make that part of the code work with the current
>> >>> > > > > > page pool frag implementation?
>> >>> > > > >
>> >>> > > > > The current code should work. Basically as long as the references are
>> >>> > > > > taken w/ get_page and skb->pp_recycle is not set then we shouldn't run
>> >>> > > > > into this issue because the pp_frag_count will be dropped when the
>> >>> > > > > original skb is freed and the page reference count will be decremented
>> >>> > > > > when the new one is freed.
>> >>> > > > >
>> >>> > > > > For page pool page fragments the main thing to keep in mind is that if
>> >>> > > > > pp_recycle is set it will update the pp_frag_count and if it is not
>> >>> > > > > then it will just decrement the page reference count.
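>> >>> > > > >
>> >>> > > > > In pseudo-code, the release of a frag page boils down to something
>> >>> > > > > like this (simplified from skb_pp_recycle() and
>> >>> > > > > page_pool_return_skb_page()):
>> >>> > > > >
>> >>> > > > >    if (skb->pp_recycle && page_pool_return_skb_page(page))
>> >>> > > > >        return;        /* dropped a pp_frag_count reference */
>> >>> > > > >    put_page(page);    /* dropped a plain page reference */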
>> >>> > > > What takes care of DMA unmap and other cleanup if the last reference to
>> >>> > > > the page is dropped via page_frag_free?
>> >>> > > >
>> >>> > > > - Felix
>> >>> > >
>> >>> > > When the page is freed on the skb w/ pp_recycle set it will unmap the
>> >>> > > page and evict it from the page pool. Basically in these cases the page
>> >>> > > goes from the page pool back to the page allocator.
>> >>> > >
>> >>> > > The general idea with this is that if we are using fragments, there
>> >>> > > will be enough of them floating around that if one or two frags take a
>> >>> > > temporary detour through a non-recycling path, then hopefully by the
>> >>> > > time the last fragment is freed the other instances holding the
>> >>> > > additional page reference will have let them go. If not then the page
>> >>> > > will go back to the page allocator and it will have to be replaced in
>> >>> > > the page pool.
>> >>> > Thanks for the explanation, it makes sense to me now. Unfortunately it
>> >>> > also means that I have no idea what could cause this issue. I will
>> >>> > finish my mt76 patch rework which gets rid of the pp vs non-pp
>> >>> > allocation mix and re-run my tests to provide updated traces.
>> >>> Here's the updated mt76 page pool support commit:
>> >>> https://github.com/nbd168/wireless/commit/923cdab6d4c92a0acb3536b3b0cc4af9fee7c808
>> >>
>> >> Yeah, so I don't see anything wrong with the patch in terms of page
>> >> pool.
>> >>
>> >>> And here is the trace that I'm getting with 6.1:
>> >>> https://nbd.name/p/a16957f2
>> >>>
>> >>> If you have any debug patch you'd like me to test, please let me know.
>> >>>
>> >>> - Felix
>> >>
>> >> So looking at the traces I am assuming what we are seeing is the
>> >> deferred freeing from the TCP Rx path since I don't see a driver
>> >> anywhere between net_rx_action and napi_consume_skb. So it seems like
>> >> the packets are likely making it all the way up the network stack.
>> >>
>> >> Is this the first wireless driver to add support for page pool? I'm
>> >> thinking we must be seeing something in the wireless path that is
>> >> causing an issue such as the function you called out earlier but I
>> >> can't see anything obvious.
>> > Yes, it's the first driver with page pool support.
>> >
>> >> One thing we need to be on the lookout for is cloned skbs. When an skb
>> >> is cloned the pp_recycle gets copied over. In that case the reference
>> >> is moved over to the skb dataref count. What comes to mind is something
>> >> like commit 1effe8ca4e34c ("skbuff: fix coalescing for page_pool
>> >> fragment recycling").
>> > I suspect that the crash might be related to a bad interaction between
>> > the page reuse in A-MSDU rx + skb coalescing on TCP rx.
>> > If I change the A-MSDU code to copy data instead of reusing fragments,
>> > it doesn't crash anymore.
> 
> Which piece did you change? My main interest would be trying to narrow
> down the section of this function that is causing this. Did you modify
> __ieee80211_amsdu_copy or some other function within the setup?
I replaced this line:
   bool reuse_frag = skb->head_frag && !skb_has_frag_list(skb);
with:
   bool reuse_frag = false;
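
For context, that flag steers the A-MSDU decap helpers roughly like
this (paraphrased from net/wireless/util.c, not the exact code):

   if (reuse_frag)
      /* attach pages of the original skb, taking page references */
      __ieee80211_amsdu_copy_frag(skb, frame, offset, len);
   else
      /* copy the payload into the new skb's own buffer */
      skb_copy_bits(skb, offset, skb_put(frame, len), len);

so forcing it to false makes every subframe carry its own copy of the
data instead of sharing pages with the original skb.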

>> > I believe the issue must be specific to that codepath, since most
>> > received and processed packets are either not A-MSDU or A-MSDU decap has
>> > already been performed by the hardware.
>> > If I change my test to use 3 client mode interfaces instead of 4, the
>> > hardware is able to offload all A-MSDU rx processing and I don't see any
>> > crashes anymore.
>> >
>> > Could you please take another look at ieee80211_amsdu_to_8023s to see if
>> > there's anything in there that could cause these issues?
> 
> The thing is I am not sure it is the only cause for this. I am
> suspecting we are seeing this triggering an issue when combined with
> something else.
> 
> If we could add some tracing to dump the skb and list buffers that
> might be helpful. We would want to verify the pp_recycle value, clone
> flag, and for the frags we would want the frag count and page reference
> counts. The expectation would be that the original skb should have the
> pp_recycle flag set and the frag counts consistent through the
> process, and the list skbs should all have taken page references w/ no
> pp_recycle on the skbs in the list.
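>
> Something like this (quick untested sketch) run on the original skb
> and on each skb in the list would cover it:
>
>    static void dump_skb_pp_state(const char *tag, struct sk_buff *skb)
>    {
>        int i;
>
>        pr_info("%s: skb=%p pp_recycle=%d cloned=%d\n",
>                tag, skb, skb->pp_recycle, skb_cloned(skb));
>        for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
>            struct page *page = skb_frag_page(&skb_shinfo(skb)->frags[i]);
>
>            pr_info("  frag %d: refcnt=%d pp_frag_count=%ld\n", i,
>                    page_ref_count(page),
>                    atomic_long_read(&page->pp_frag_count));
>        }
>    }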
> 
>> Here are clues from a few more tests that I ran:
>> - preventing the reuse of the last skb in ieee80211_amsdu_to_8023s does
>> not prevent the crashes, so the issue is indeed related to taking page
>> references and putting the pages in skb fragments.
> 
> You said in the first email it stops it and in the second "does not".
> I am assuming that is some sort of typo since you seem to be implying
> it does resolve it for you. Is that correct?
For everything except the last subframe, the function pulls fragments
out of the original skb and puts them into a new skb allocated with
dev_alloc_skb; the original skb itself is then reused for the final
subframe. What I meant was: in order to figure out if this skb reuse is
problematic, I prevented it from happening, ensuring that every subframe
gets a newly allocated skb which takes the fragments from the original.
Since that led to the same crash, the skb reuse is not the problem.

Regarding the question from your other mail: without GRO there is no 
crash and it looks stable.
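
For reference, GRO can be toggled per interface with the usual ethtool
knob (wlan0 standing in for the test interface):

   ethtool -K wlan0 gro off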

- Felix



Thread overview: 43+ messages
2023-01-24 12:43 [PATCH] net: page_pool: fix refcounting issues with fragmented allocation Felix Fietkau
2023-01-24 14:11 ` Ilias Apalodimas
2023-01-24 15:57   ` Alexander H Duyck
2023-01-24 16:59     ` Felix Fietkau
2023-01-26 10:31     ` Ilias Apalodimas
2023-01-26 15:41       ` Alexander Duyck
2023-01-26 16:05         ` Ilias Apalodimas
2023-01-24 17:22   ` Felix Fietkau
2023-01-24 21:10     ` Alexander H Duyck
2023-01-24 21:30       ` Felix Fietkau
2023-01-25 17:11         ` Alexander H Duyck
2023-01-25 17:32           ` Felix Fietkau
2023-01-25 18:26             ` Alexander H Duyck
2023-01-25 18:42               ` Felix Fietkau
2023-01-25 19:02                 ` Alexander H Duyck
2023-01-25 19:10                   ` Felix Fietkau
2023-01-25 19:40                     ` Felix Fietkau
2023-01-25 20:02                       ` Felix Fietkau
2023-01-25 22:14                       ` Alexander H Duyck
2023-01-26  6:12                         ` Felix Fietkau
2023-01-26  9:14                           ` Felix Fietkau
2023-01-26 16:08                             ` Alexander Duyck
2023-01-26 16:40                               ` Alexander Duyck
2023-01-26 17:44                               ` Felix Fietkau [this message]
2023-01-26 18:38                                 ` Alexander H Duyck
2023-01-26 18:43                                   ` Felix Fietkau
2023-01-26 19:06                                     ` [net PATCH] skb: Do mix page pool and page referenced frags in GRO Alexander Duyck
2023-01-26 19:14                                       ` Toke Høiland-Jørgensen
2023-01-26 19:48                                         ` Alexander Duyck
2023-01-26 21:35                                           ` Toke Høiland-Jørgensen
2023-01-26 23:13                                       ` Jakub Kicinski
2023-01-27  7:15                                         ` Ilias Apalodimas
2023-01-27  7:21                                         ` Felix Fietkau
2023-01-30 16:49                                         ` Jesper Dangaard Brouer
2023-01-28  2:37                                       ` Yunsheng Lin
2023-01-28  5:26                                         ` Jakub Kicinski
2023-01-28  7:08                                           ` Eric Dumazet
2023-01-30  8:50                                             ` Paolo Abeni
2023-01-30 16:17                                               ` Alexander Duyck
2023-01-28  7:15                                           ` Eric Dumazet
2023-01-28 17:08                                             ` Alexander Duyck
2023-01-28  7:50                                       ` patchwork-bot+netdevbpf
2023-01-26 10:32     ` [PATCH] net: page_pool: fix refcounting issues with fragmented allocation Ilias Apalodimas
