From: Jesper Dangaard Brouer <jbrouer@redhat.com>
To: Alexander Duyck <alexander.duyck@gmail.com>,
Ilias Apalodimas <ilias.apalodimas@linaro.org>
Cc: brouer@redhat.com, Netdev <netdev@vger.kernel.org>,
Alexander Duyck <alexanderduyck@fb.com>,
"David S. Miller" <davem@davemloft.net>,
Jakub Kicinski <kuba@kernel.org>,
Alexander Lobakin <alobakin@pm.me>,
Jonathan Lemon <jonathan.lemon@gmail.com>,
Willem de Bruijn <willemb@google.com>,
Miaohe Lin <linmiaohe@huawei.com>,
Guillaume Nault <gnault@redhat.com>,
Cong Wang <cong.wang@bytedance.com>,
Matteo Croce <mcroce@microsoft.com>,
LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 1/1 v2] skbuff: Fix a potential race while recycling page_pool packets
Date: Mon, 12 Jul 2021 13:52:46 +0200
Message-ID: <1e0596d2-d6dc-b4f8-b908-0fedc9060124@redhat.com>
In-Reply-To: <CAKgT0UdQmrzZufHpvRBtWgbFdTCVmKH4Vx6GzwtmC9FuM8K+hg@mail.gmail.com>
On 09/07/2021 16.34, Alexander Duyck wrote:
> On Thu, Jul 8, 2021 at 11:30 PM Ilias Apalodimas
> <ilias.apalodimas@linaro.org> wrote:
>>
>> As Alexander points out, when we are trying to recycle a cloned/expanded
>> SKB we might trigger a race. The recycling code relies on the
>> pp_recycle bit to trigger, which we carry over to cloned SKBs.
>> If that cloned SKB gets expanded, or if we get references to the frags,
>> call skb_release_data() and overwrite skb->head, we are creating separate
>> instances accessing the same page frags. Since skb_release_data() will
>> first try to recycle the frags, there's a potential race between the
>> original and cloned SKB, as both will have the pp_recycle bit set.
>>
>> Fix this by explicitly marking those SKBs not recyclable.
>> The atomic_sub_return effectively limits us to a single release case,
>> and when we are calling skb_release_data we are also releasing the
>> option to perform the recycling, or releasing the pages from the page pool.
>>
>> Fixes: 6a5bcd84e886 ("page_pool: Allow drivers to hint on SKB recycling")
>> Reported-by: Alexander Duyck <alexanderduyck@fb.com>
>> Suggested-by: Alexander Duyck <alexanderduyck@fb.com>
>> Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
>> ---
>> Changes since v1:
>> - Set the recycle bit to 0 during skb_release_data instead of the
>>   individual functions triggering the issue, in order to catch all
>>   cases
>> net/core/skbuff.c | 4 +++-
>> 1 file changed, 3 insertions(+), 1 deletion(-)
>>
>> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
>> index 12aabcda6db2..f91f09a824be 100644
>> --- a/net/core/skbuff.c
>> +++ b/net/core/skbuff.c
>> @@ -663,7 +663,7 @@ static void skb_release_data(struct sk_buff *skb)
>> if (skb->cloned &&
>> atomic_sub_return(skb->nohdr ? (1 << SKB_DATAREF_SHIFT) + 1 : 1,
>> &shinfo->dataref))
>> - return;
>> + goto exit;
>>
>> skb_zcopy_clear(skb, true);
>>
>> @@ -674,6 +674,8 @@ static void skb_release_data(struct sk_buff *skb)
>> kfree_skb_list(shinfo->frag_list);
>>
>> skb_free_head(skb);
>> +exit:
>> + skb->pp_recycle = 0;
>> }
>>
>> /*
>> --
>> 2.32.0.rc0
>>
>
> This is probably the cleanest approach with the least amount of
> change, but one thing I am concerned with in this approach is that we
> end up having to dirty a cacheline that I am not sure is otherwise
> touched during skb cleanup. I am not sure if that will be an issue or
> not. If it is then an alternative or follow-on patch could move the
> pp_recycle flag into the skb_shared_info flags itself and then make
> certain that we clear it around the same time we are setting
> shinfo->dataref to 1.
>
The skb->cloned and skb->pp_recycle (bitfields) are on the same
cache-line (along with nohdr, destructor and active_extensions). Thus, we
know this cache-line must already be in the CPU's cache, regardless of
this change. I do acknowledge that it might be in cache-coherency
"Shared" state, and that writing skb->pp_recycle=0 *might* force the CPU
to change the cache-coherency state, but I don't expect this to be a
performance problem.
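
For reference, below is a paraphrased excerpt of the relevant bitfields
in struct sk_buff around commit 6a5bcd84e886; treat it as illustrative
only, since the exact neighbouring fields can differ between trees:

/* include/linux/skbuff.h -- paraphrased excerpt, neighbouring fields may
 * differ between kernel versions; shown only to illustrate that cloned
 * and pp_recycle share a byte (and thus a cache-line).
 */
	__u8			cloned:1,
				nohdr:1,
				fclone:2,
				peeked:1,
				head_frag:1,
				pfmemalloc:1,
				pp_recycle:1; /* page_pool recycle indicator */

skb->cloned is read right before the dataref check in skb_release_data(),
so the byte holding pp_recycle is already hot when we clear it.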
> Otherwise this looks good to me.
>
> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
I've gone over the code-path with Ilias on IRC, and I've convinced
myself that this fix is correct, thus the ACK.
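
To make the interaction concrete, here is a minimal user-space model of
the dataref/pp_recycle logic (the fake_* names are made up for this
sketch and are not kernel APIs; the nohdr accounting is deliberately
simplified):

/* Minimal user-space model of the skb_release_data() change, assuming a
 * simplified two-owner (original + clone) case. Not kernel code.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_shinfo {
	atomic_int dataref;		/* shared by the original skb and its clone */
};

struct fake_skb {
	struct fake_shinfo *shinfo;
	bool cloned;
	bool pp_recycle;		/* copied to the clone by skb_clone() */
};

/* Models skb_release_data() after the fix: only the last reference
 * frees/recycles the frags, and the recycle hint is dropped on every
 * exit path, mirroring the new "goto exit; skb->pp_recycle = 0;" flow.
 */
static void fake_release_data(struct fake_skb *skb)
{
	if (skb->cloned &&
	    atomic_fetch_sub(&skb->shinfo->dataref, 1) - 1 > 0)
		goto exit;

	if (skb->pp_recycle)
		printf("recycling frags into the page_pool\n");
	else
		printf("freeing frags normally\n");
exit:
	skb->pp_recycle = false;	/* the v2 fix: always drop the hint */
}

int main(void)
{
	struct fake_shinfo shinfo = { .dataref = 2 };
	struct fake_skb orig  = { &shinfo, true, true };
	struct fake_skb clone = { &shinfo, true, true };

	fake_release_data(&clone);	/* not the last ref: only clears the hint */
	fake_release_data(&orig);	/* last ref: recycles exactly once */
	return 0;
}

Before the fix, the early-return path left pp_recycle set even though the
data had been handed over, so both the original and the cloned SKB could
reach the recycling path for the same page frags; clearing the bit on
every exit from skb_release_data() leaves that decision to whoever drops
the last dataref.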