Subject: Re: [PATCH net-next v5 3/5] page_pool: Allow drivers to hint on SKB recycling
From: Yunsheng Lin
To: Ilias Apalodimas
CC: Matteo Croce, Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari, David S. Miller, Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas, Russell King, Mirko Lindner, Stephen Hemminger, Tariq Toukan, Jesper Dangaard Brouer, Alexei Starovoitov, Daniel Borkmann, John Fastabend, Boris Pismenny, Arnd Bergmann, Andrew Morton, Peter Zijlstra (Intel), Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu, Roman Gushchin, Hugh Dickins, Peter Xu, Jason Gunthorpe, Jonathan Lemon, Alexander Lobakin, Cong Wang, wenxu, Kevin Hao, Jakub Sitnicki, Marco Elver, Willem de Bruijn, Miaohe Lin, Guillaume Nault, Matthew Wilcox, Eric Dumazet, David Ahern, Lorenzo Bianconi, Saeed Mahameed, Andrew Lunn, Paolo Abeni, Sven Auhagen
Date: Mon, 17 May 2021 19:10:09 +0800
Message-ID: <074b0d1d-9531-57f3-8e0e-a447387478d1@huawei.com>
References: <20210513165846.23722-1-mcroce@linux.microsoft.com> <20210513165846.23722-4-mcroce@linux.microsoft.com> <798d6dad-7950-91b2-46a5-3535f44df4e2@huawei.com> <212498cf-376b-2dac-e1cd-12c7cc7910c6@huawei.com>
X-Mailing-List: netdev@vger.kernel.org

On 2021/5/17 17:36, Ilias Apalodimas wrote:
>>
>> Even when skb->pp_recycle is 1, pages allocated directly from the page
>> allocator and pages allocated from a page pool are both supported, so it
>> seems page->signature needs to be reliable to indicate that a page is
>> indeed owned by a page pool. That would mean skb->pp_recycle is used
>> mainly to short-cut the code path for the skb->pp_recycle == 0 case, so
>> that page->signature does not need checking?
> 
> Yes, the idea of the recycling bit is that you don't have to fetch the page
> into cache to do more processing (since freeing is asynchronous and we
> can't have any guarantees on what the cache will contain at that point). So we
> are trying to affect the existing release path as little as possible. However it's
> that new skb bit that triggers the whole path.
> 
> What you propose could still be doable though. As you said, we can add the
> page pointer to struct page when we allocate a page_pool page and never
> reset it when we recycle the buffer. But I don't think there will be any
> performance impact whatsoever. So I prefer the 'visible' approach, at least for
> the first iteration.

Setting and unsetting the page_pool ptr every time the page is recycled may
cause a cache-bouncing problem when rx cleaning and skb releasing do not
happen on the same cpu.

> 
> Thanks
> /Ilias