From mboxrd@z Thu Jan 1 00:00:00 1970
MIME-Version: 1.0
References: <20210922094131.15625-1-linyunsheng@huawei.com> <20210922094131.15625-4-linyunsheng@huawei.com>
In-Reply-To: 
From: Ilias Apalodimas
Date: Thu, 23 Sep 2021 14:47:00 +0300
Message-ID: 
Subject: Re: [PATCH net-next 3/7] page_pool: avoid calling compound_head() for skb frag page
To: Yunsheng Lin
Cc: "David S. 
Miller", Jakub Kicinski, Networking, open list, linuxarm@openeuler.org, Jesper Dangaard Brouer, Jonathan Lemon, Alexander Lobakin, Willem de Bruijn, Cong Wang, Paolo Abeni, Kevin Hao, Aleksandr Nogikh, Marco Elver, memxor@gmail.com, Eric Dumazet, Alexander Duyck, David Ahern
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, 23 Sep 2021 at 14:24, Yunsheng Lin wrote:
>
> On 2021/9/23 16:33, Ilias Apalodimas wrote:
> > On Wed, Sep 22, 2021 at 05:41:27PM +0800, Yunsheng Lin wrote:
> >> Since the pp page for a skb frag is always a head page, make
> >> sure skb_pp_recycle() passes a head page to avoid calling
> >> compound_head() for the skb frag page case.
> >
> > Doesn't that rely mostly on the driver (i.e. what's passed in
> > skb_frag_set_page())? None of the current netstack code assumes
> > bv_page is the head page of a compound page. Since our page_pool
> > allocator will allocate compound pages for order > 0, why should we
> > rely on it?
>
> As the page pool alloc function returns a 'struct page *' to the
> caller, which is the head page of a compound page for order > 0, I
> assume the caller will pass that to skb_frag_set_page().

Yeah, that's exactly the assumption I was afraid of. Sure, not passing
the head page might seem weird atm and the assumption stands, but the
point is we shouldn't blow up the entire network stack if someone does
that eventually.

> For a non-pp page, I assume it is ok whether the page is a head page
> or a tail page, as pp_magic for both of them is not set to
> PP_SIGNATURE.

Yeah, that's true. Although we removed the check when coalescing
recyclable and non-recyclable SKBs, the next patch first checks the
signature before trying to do anything with the skb.

> Or should we play it safe here, and do the trick skb_free_head() does
> in patch 6?

I don't think the &1 will even be measurable, so I'd suggest just
dropping this and playing it safe.
Cheers
/Ilias

> >
> > Thanks
> > /Ilias
> >>
> >> Signed-off-by: Yunsheng Lin
> >> ---
> >>  include/linux/skbuff.h | 2 +-
> >>  net/core/page_pool.c   | 2 --
> >>  2 files changed, 1 insertion(+), 3 deletions(-)
> >>
> >> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
> >> index 6bdb0db3e825..35eebc2310a5 100644
> >> --- a/include/linux/skbuff.h
> >> +++ b/include/linux/skbuff.h
> >> @@ -4722,7 +4722,7 @@ static inline bool skb_pp_recycle(struct sk_buff *skb, void *data)
> >>  {
> >>  	if (!IS_ENABLED(CONFIG_PAGE_POOL) || !skb->pp_recycle)
> >>  		return false;
> >> -	return page_pool_return_skb_page(virt_to_page(data));
> >> +	return page_pool_return_skb_page(virt_to_head_page(data));
> >>  }
> >>
> >>  #endif /* __KERNEL__ */
> >> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> >> index f7e71dcb6a2e..357fb53343a0 100644
> >> --- a/net/core/page_pool.c
> >> +++ b/net/core/page_pool.c
> >> @@ -742,8 +742,6 @@ bool page_pool_return_skb_page(struct page *page)
> >>  {
> >>  	struct page_pool *pp;
> >>
> >> -	page = compound_head(page);
> >> -
> >>  	/* page->pp_magic is OR'ed with PP_SIGNATURE after the allocation
> >>  	 * in order to preserve any existing bits, such as bit 0 for the
> >>  	 * head page of compound page and bit 1 for pfmemalloc page, so
> >> --
> >> 2.33.0
> >>
> >
> > .
> >