From: Jesper Dangaard Brouer
To: Alexander Duyck, Yunsheng Lin
Cc: brouer@redhat.com, David Miller, Jakub Kicinski, Russell King - ARM Linux, Marcin Wojtas, linuxarm@openeuler.org, yisen.zhuang@huawei.com, Salil Mehta, thomas.petazzoni@bootlin.com, hawk@kernel.org, Ilias Apalodimas, Alexei Starovoitov, Daniel Borkmann, John Fastabend, Andrew Morton, Peter Zijlstra, Will Deacon, Matthew Wilcox, Vlastimil Babka, fenghua.yu@intel.com, guro@fb.com, Peter Xu, Feng Tang, Jason Gunthorpe, Matteo Croce, Hugh Dickins, Jonathan Lemon, Alexander Lobakin, Willem de Bruijn, wenxu@ucloud.cn, Cong Wang, Kevin Hao, nogikh@google.com, Marco Elver, Yonghong Song, kpsingh@kernel.org, andrii@kernel.org, Martin KaFai Lau, songliubraving@fb.com, Netdev, LKML, bpf
Subject: Re: [PATCH rfc v3 2/4] page_pool: add interface for getting and setting pagecnt_bias
Date: Mon, 12 Jul 2021 19:20:46 +0200
Message-ID: <2d9a3d29-8e6b-8462-c410-6b7fd4518c9d@redhat.com>
References: <1626092196-44697-1-git-send-email-linyunsheng@huawei.com> <1626092196-44697-3-git-send-email-linyunsheng@huawei.com>

On 12/07/2021 18.02, Alexander Duyck wrote:
> On Mon, Jul 12, 2021 at 5:17 AM Yunsheng Lin wrote:
>>
>> As suggested by Alexander, "A DMA mapping should be page
>> aligned anyway so the lower 12 bits would be reserved 0",
>> so it might make more sense to repurpose the lower 12 bits
>> of the dma address to store the pagecnt_bias for frag page
>> support in
page pool.
>>
>> As newly added page_pool_get_pagecnt_bias() may be called
>> outside of the softirq context, annotate the access to
>> page->dma_addr[0] with READ_ONCE() and WRITE_ONCE().
>>
>> And page_pool_get_pagecnt_bias_ptr() is added to implement
>> the pagecnt_bias atomic updating when a page is passed to
>> the user.
>>
>> The other three interfaces using page->dma_addr[0] are only called
>> in the softirq context during normal rx processing; hopefully
>> the barrier in the rx processing will ensure the correct order
>> between getting and setting pagecnt_bias.
>>
>> Signed-off-by: Yunsheng Lin
>> ---
>>  include/net/page_pool.h | 29 +++++++++++++++++++++++++++--
>>  net/core/page_pool.c    |  8 +++++++-
>>  2 files changed, 34 insertions(+), 3 deletions(-)
>>
>> diff --git a/include/net/page_pool.h b/include/net/page_pool.h
>> index 8d7744d..84cd972 100644
>> --- a/include/net/page_pool.h
>> +++ b/include/net/page_pool.h
>> @@ -200,17 +200,42 @@ static inline void page_pool_recycle_direct(struct page_pool *pool,
>>
>>  static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
>>  {
>> -        dma_addr_t ret = page->dma_addr[0];
>> +        dma_addr_t ret = READ_ONCE(page->dma_addr[0]) & PAGE_MASK;
>>          if (sizeof(dma_addr_t) > sizeof(unsigned long))
>>                  ret |= (dma_addr_t)page->dma_addr[1] << 16 << 16;
>>          return ret;
>>  }
>>
>> -static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
>> +static inline bool page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
>>  {
>> +        if (WARN_ON(addr & ~PAGE_MASK))
>> +                return false;
>> +
>>          page->dma_addr[0] = addr;
>>          if (sizeof(dma_addr_t) > sizeof(unsigned long))
>>                  page->dma_addr[1] = upper_32_bits(addr);
>> +
>> +        return true;
>> +}
>> +
>
> Rather than making this a part of the check here it might make more
> sense to pull this out and perform the WARN_ON after the check for
> dma_mapping_error.
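[As an aside for readers of the archive: the packing scheme under review can be sketched in plain user-space C. Helper names here are hypothetical, assuming 4 KiB pages; this is an illustration of the idea, not the kernel implementation.]

```c
#include <assert.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)      /* assume 4 KiB pages */
#define PAGE_MASK  (~(PAGE_SIZE - 1))

/* Pack a small bias (< 4096) into the low 12 bits of a page-aligned
 * address; both values can later be recovered independently. */
static unsigned long pack_bias(unsigned long addr, unsigned int bias)
{
        assert((addr & ~PAGE_MASK) == 0);   /* DMA addr is page aligned */
        assert(bias < PAGE_SIZE);           /* bias fits in 12 bits */
        return addr | bias;
}

static unsigned long unpack_addr(unsigned long packed)
{
        return packed & PAGE_MASK;          /* drop the bias bits */
}

static unsigned int unpack_bias(unsigned long packed)
{
        return packed & ~PAGE_MASK;         /* keep only the low 12 bits */
}
```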
I need to point out that I don't like WARN_ON and BUG_ON in fast-path
code, because the compiler adds 'ud2' assembler instructions that
influence the instruction-cache fetching in the CPU. Yes, I have seen
a measurable impact from this before.

> Also it occurs to me that we only really have to do this in the case
> where dma_addr_t is larger than the size of a long. Otherwise we could
> just have the code split things so that dma_addr[0] is the dma_addr
> and dma_addr[1] is our pagecnt_bias value in which case we could
> probably just skip the check.

The dance to get a 64-bit DMA addr on 32-bit systems is rather ugly and
confusing, sadly. We could take advantage of this; I just hope it will
not make things uglier.

>> +static inline int page_pool_get_pagecnt_bias(struct page *page)
>> +{
>> +        return READ_ONCE(page->dma_addr[0]) & ~PAGE_MASK;
>> +}
>> +
>> +static inline unsigned long *page_pool_pagecnt_bias_ptr(struct page *page)
>> +{
>> +        return page->dma_addr;
>> +}
>> +
>> +static inline void page_pool_set_pagecnt_bias(struct page *page, int bias)
>> +{
>> +        unsigned long dma_addr_0 = READ_ONCE(page->dma_addr[0]);
>> +
>> +        dma_addr_0 &= PAGE_MASK;
>> +        dma_addr_0 |= bias;
>> +
>> +        WRITE_ONCE(page->dma_addr[0], dma_addr_0);
>> }
>>
>>  static inline bool is_page_pool_compiled_in(void)
>> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
>> index 78838c6..1abefc6 100644
>> --- a/net/core/page_pool.c
>> +++ b/net/core/page_pool.c
>> @@ -198,7 +198,13 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
>>          if (dma_mapping_error(pool->p.dev, dma))
>>                  return false;
>>
>
> So instead of adding to the function below you could just add your
> WARN_ON check here with the unmapping call.
>
>> -        page_pool_set_dma_addr(page, dma);
>> +        if (unlikely(!page_pool_set_dma_addr(page, dma))) {
>> +                dma_unmap_page_attrs(pool->p.dev, dma,
>> +                                     PAGE_SIZE << pool->p.order,
>> +                                     pool->p.dma_dir,
>> +                                     DMA_ATTR_SKIP_CPU_SYNC);
>> +                return false;
>> +        }
>>
>>          if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
>>                  page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
>> --
>> 2.7.4
>>
>