Subject: Re: [PATCH rfc v6 2/4] page_pool: add interface to manipulate frag count in page pool
From: Yunsheng Lin
To: Alexander Duyck
Cc: David Miller, Jakub Kicinski, Russell King - ARM Linux, Marcin Wojtas,
    Salil Mehta, Ilias Apalodimas, Alexei Starovoitov, Daniel Borkmann,
    John Fastabend, Andrew Morton, Peter Zijlstra, Will Deacon,
    Matthew Wilcox, Vlastimil Babka, Peter Xu, Feng Tang, Jason Gunthorpe,
    Matteo Croce, Hugh Dickins, Jonathan Lemon, Alexander Lobakin,
    Willem de Bruijn, Cong Wang, Kevin Hao, Marco Elver, Yonghong Song,
    Martin KaFai Lau, Netdev, LKML, bpf
Date: Wed, 21 Jul 2021 16:15:23 +0800
Message-ID: <92e68f4e-49a4-568c-a281-2865b54a146e@huawei.com>
References: <1626752145-27266-1-git-send-email-linyunsheng@huawei.com>
    <1626752145-27266-3-git-send-email-linyunsheng@huawei.com>

On 2021/7/20 23:43, Alexander Duyck wrote:
> On Mon, Jul 19, 2021 at 8:36 PM Yunsheng Lin wrote:
>>
>> For 32 bit systems with 64 bit dma, dma_addr[1] is used to
>> store the upper 32 bits of the dma addr; such systems should
>> be rare these days.
>>
>> For normal systems, dma_addr[1] in 'struct page' is not
>> used, so we can reuse dma_addr[1] for storing the frag count,
>> which means how many frags this page might be split into.
>>
>> In order to simplify the page frag support in the page pool,
>> the PAGE_POOL_DMA_USE_PP_FRAG_COUNT macro is added to indicate
>> 32 bit systems with 64 bit dma, and the page frag support in
>> page pool is disabled for such systems.
>>
>> The newly added page_pool_set_frag_count() is called to reserve
>> the maximum frag count before any page frag is passed to the
>> user. The page_pool_atomic_sub_frag_count_return() is called
>> when the user is done with the page frag.
>>
>> Signed-off-by: Yunsheng Lin
>> ---
>>  include/linux/mm_types.h | 18 +++++++++++++-----
>>  include/net/page_pool.h  | 41 ++++++++++++++++++++++++++++++++++-------
>>  net/core/page_pool.c     |  4 ++++
>>  3 files changed, 51 insertions(+), 12 deletions(-)
>>
>
>
>> +static inline long page_pool_atomic_sub_frag_count_return(struct page *page,
>> +							   long nr)
>> +{
>> +	long frag_count = atomic_long_read(&page->pp_frag_count);
>> +	long ret;
>> +
>> +	if (frag_count == nr)
>> +		return 0;
>> +
>> +	ret = atomic_long_sub_return(nr, &page->pp_frag_count);
>> +	WARN_ON(ret < 0);
>> +	return ret;
>> +}
>>
>
> So this should just be an atomic_long_sub_return call. You should get
> rid of the atomic_long_read portion of this as it can cover up
> reference count errors.

The atomic_long_read() is there so that the last user can skip the
atomic_long_sub_return(), and with it a possible cache bounce and
barrier. But you are right that it may cover up reference count errors.

How about something like below:

static inline long page_pool_atomic_sub_frag_count_return(struct page *page,
							   long nr)
{
#ifdef CONFIG_DEBUG_PAGE_REF
	long ret = atomic_long_sub_return(nr, &page->pp_frag_count);

	WARN_ON(ret < 0);

	return ret;
#else
	if (atomic_long_read(&page->pp_frag_count) == nr)
		return 0;

	return atomic_long_sub_return(nr, &page->pp_frag_count);
#endif
}

Or any better suggestion?
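
To make the trade-off concrete, below is a small stand-alone user-space
sketch of the two variants being discussed. It is illustrative only:
fake_page, set_frag_count() and the *_fast/*_debug helpers merely model
struct page and the page pool helpers, assert() stands in for WARN_ON(),
and C11 stdatomic is used instead of atomic_long_t (so the ordering here
is stronger than a relaxed atomic_long_read()). The only behavioural
difference between the two is whether the last put still goes through
the atomic RMW, which is exactly where the cache line bounce would be
avoided:

/*
 * User-space sketch of the two variants; not kernel code.
 * Build with: cc -std=c11 -o frag_count_model frag_count_model.c
 */
#include <assert.h>
#include <stdatomic.h>
#include <stdio.h>

struct fake_page {
	atomic_long frag_count;		/* models page->pp_frag_count */
};

/* models page_pool_set_frag_count(): reserve the maximum frag count */
static void set_frag_count(struct fake_page *page, long nr)
{
	atomic_store(&page->frag_count, nr);
}

/* fast path: the last user sees frag_count == nr and skips the atomic sub */
static long sub_frag_count_return_fast(struct fake_page *page, long nr)
{
	if (atomic_load(&page->frag_count) == nr)
		return 0;

	return atomic_fetch_sub(&page->frag_count, nr) - nr;
}

/* debug variant: always do the sub so an underflow is actually reported */
static long sub_frag_count_return_debug(struct fake_page *page, long nr)
{
	long ret = atomic_fetch_sub(&page->frag_count, nr) - nr;

	assert(ret >= 0);		/* stands in for WARN_ON(ret < 0) */
	return ret;
}

int main(void)
{
	struct fake_page page;

	set_frag_count(&page, 4);
	printf("fast:  %ld\n", sub_frag_count_return_fast(&page, 1));	/* 3 */
	printf("fast:  %ld\n", sub_frag_count_return_fast(&page, 3));	/* 0, no atomic sub */

	set_frag_count(&page, 4);
	printf("debug: %ld\n", sub_frag_count_return_debug(&page, 1));	/* 3 */
	printf("debug: %ld\n", sub_frag_count_return_debug(&page, 3));	/* 0, via atomic sub */

	return 0;
}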