From mboxrd@z Thu Jan  1 00:00:00 1970
References: <1626752145-27266-1-git-send-email-linyunsheng@huawei.com>
 <1626752145-27266-3-git-send-email-linyunsheng@huawei.com>
 <92e68f4e-49a4-568c-a281-2865b54a146e@huawei.com>
In-Reply-To: <92e68f4e-49a4-568c-a281-2865b54a146e@huawei.com>
From: Alexander Duyck
Date: Wed, 21 Jul 2021 07:06:26 -0700
Subject: Re: [PATCH rfc v6 2/4] page_pool: add interface to manipulate frag
 count in page pool
To: Yunsheng Lin
Cc: David Miller, Jakub Kicinski, Russell King - ARM Linux, Marcin Wojtas,
 linuxarm@openeuler.org, yisen.zhuang@huawei.com, Salil Mehta,
 thomas.petazzoni@bootlin.com, hawk@kernel.org, Ilias Apalodimas,
 Alexei Starovoitov, Daniel Borkmann, John Fastabend, Andrew Morton,
 Peter Zijlstra, Will Deacon, Matthew Wilcox, Vlastimil Babka,
 fenghua.yu@intel.com, guro@fb.com, Peter Xu, Feng Tang, Jason Gunthorpe,
 Matteo Croce, Hugh Dickins, Jonathan Lemon, Alexander Lobakin,
 Willem de Bruijn, wenxu@ucloud.cn, Cong Wang, Kevin Hao, nogikh@google.com,
 Marco Elver, Yonghong Song, kpsingh@kernel.org, andrii@kernel.org,
 Martin KaFai Lau, songliubraving@fb.com, Netdev, LKML, bpf

On Wed, Jul 21, 2021 at 1:15 AM Yunsheng Lin wrote:
>
> On 2021/7/20 23:43, Alexander Duyck wrote:
> > On Mon, Jul 19, 2021 at 8:36 PM Yunsheng Lin wrote:
> >>
> >> For 32 bit systems with 64 bit dma, dma_addr[1] is used to
> >> store the upper 32 bit dma addr, those systems should be rare
> >> these days.
> >>
> >> For normal systems, the dma_addr[1] in 'struct page' is not
> >> used, so we can reuse dma_addr[1] for storing frag count,
> >> which means how many frags this page might be split into.
> >>
> >> In order to simplify the page frag support in the page pool,
> >> the PAGE_POOL_DMA_USE_PP_FRAG_COUNT macro is added to indicate
> >> the 32 bit systems with 64 bit dma, and the page frag support
> >> in page pool is disabled for such systems.
> >>
> >> The newly added page_pool_set_frag_count() is called to reserve
> >> the maximum frag count before any page frag is passed to the
> >> user. The page_pool_atomic_sub_frag_count_return() is called
> >> when the user is done with the page frag.
> >>
> >> Signed-off-by: Yunsheng Lin
> >> ---
> >>  include/linux/mm_types.h | 18 +++++++++++++-----
> >>  include/net/page_pool.h  | 41 ++++++++++++++++++++++++++++++++++-------
> >>  net/core/page_pool.c     |  4 ++++
> >>  3 files changed, 51 insertions(+), 12 deletions(-)
> >>
> >
> >
> >> +static inline long page_pool_atomic_sub_frag_count_return(struct page *page,
> >> +                                                          long nr)
> >> +{
> >> +       long frag_count = atomic_long_read(&page->pp_frag_count);
> >> +       long ret;
> >> +
> >> +       if (frag_count == nr)
> >> +               return 0;
> >> +
> >> +       ret = atomic_long_sub_return(nr, &page->pp_frag_count);
> >> +       WARN_ON(ret < 0);
> >> +       return ret;
> >> }
> >>
> >
> > So this should just be an atomic_long_sub_return call. You should get
> > rid of the atomic_long_read portion of this as it can cover up
> > reference count errors.
>
> atomic_long_sub_return() is used to avoid one possible cache bouncing and
> barrier caused by the last user.

I assume you mean "atomic_long_read()" here.

> You are right that that may cover up the reference count errors. How about
> something like below:
>
> static inline long page_pool_atomic_sub_frag_count_return(struct page *page,
>                                                           long nr)
> {
> #ifdef CONFIG_DEBUG_PAGE_REF
>         long ret = atomic_long_sub_return(nr, &page->pp_frag_count);
>
>         WARN_ON(ret < 0);
>
>         return ret;
> #else
>         if (atomic_long_read(&page->pp_frag_count) == nr)
>                 return 0;
>
>         return atomic_long_sub_return(nr, &page->pp_frag_count);
> #endif
> }
>
> Or any better suggestion?

So the one thing I might change would be to make it so that you only
do the atomic_long_read if nr is a constant via __builtin_constant_p.
That way you would be performing the comparison in __page_pool_put_page,
and in the cases of freeing or draining the page_frags you would be using
atomic_long_sub_return, which should be paths where you would not expect
it to match or that are slowpath anyway. Also I would keep the WARN_ON in
both paths just to be on the safe side.
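
To make the suggestion concrete, one way the helper could look with the
__builtin_constant_p check folded in is sketched below. This assumes the
pp_frag_count field introduced by the patch; it is an illustration of the
idea being discussed here, not necessarily the code that was ultimately
applied.

static inline long page_pool_atomic_sub_frag_count_return(struct page *page,
                                                           long nr)
{
        long ret;

        /* Skip the atomic update only when nr is a compile-time constant
         * that matches the remaining frag count (the common case when the
         * caller drops a single reference, e.g. from __page_pool_put_page).
         * The freeing/draining paths pass a variable nr and therefore
         * always go through atomic_long_sub_return(), so reference count
         * errors there are still caught by the WARN_ON() below.
         */
        if (__builtin_constant_p(nr) &&
            atomic_long_read(&page->pp_frag_count) == nr)
                return 0;

        ret = atomic_long_sub_return(nr, &page->pp_frag_count);
        WARN_ON(ret < 0);
        return ret;
}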
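
For the guard described in the quoted commit message, one plausible way to
express PAGE_POOL_DMA_USE_PP_FRAG_COUNT is shown below. The definition is
an assumption based on the description (pp_frag_count overlays dma_addr[1],
which is only free when a dma_addr_t fits in a single unsigned long), not a
quote from the applied series.

/* 32 bit systems with 64 bit DMA need dma_addr[1] for the upper address
 * bits, so pp_frag_count cannot overlay it there and page frag support in
 * page pool is disabled for such systems.
 */
#define PAGE_POOL_DMA_USE_PP_FRAG_COUNT \
                (sizeof(dma_addr_t) > sizeof(unsigned long))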