Subject: Re: [Linuxarm] Re: [PATCH net-next v2 3/3] skbuff: keep track of pp page when __skb_frag_ref() is called
From: Yunsheng Lin <linyunsheng@huawei.com>
To: Ilias Apalodimas
Cc: Jesper Dangaard Brouer, Alexander Duyck, linux-kernel@vger.kernel.org
Date: Thu, 16 Sep 2021 17:33:39 +0800
References: <20210914121114.28559-1-linyunsheng@huawei.com>
 <20210914121114.28559-4-linyunsheng@huawei.com>
 <9467ec14-af34-bba4-1ece-6f5ea199ec97@huawei.com>
 <0337e2f6-5428-2c75-71a5-6db31c60650a@redhat.com>

On 2021/9/16 16:44, Ilias Apalodimas wrote:
>>>> appear if we try to pull in your patches on using page pool and recycling
>
> [...]
>
>>>> for Tx where TSO and skb_split are used?
>>
>> In my understanding, the problem might exist even without Tx recycling,
>> because an skb from the wire could theoretically be passed down to the TCP
>> stack and retransmitted back to the wire. As I am not able to set up a
>> configuration to verify and test it, and the handling seems tricky, I am
>> targeting the net-next branch instead of the net branch.
>>
>>>>
>>>> I'll be honest, when I came up with the recycling idea for page pool, I
>>>> never intended to support Tx. I agree with Alexander here: if people want
>>>> to use it on Tx and think there's value, we might need to go back to the
>>>> drawing board and see what I've missed.
>>>> It's still early and there's a handful of drivers using it, so it will be
>>>> less painful now.
>>
>> Yes, we also need to prototype it to see if there is something missing on
>> the drawing board and how much improvement we get from it :)
>>
>>>
>>> I agree, page_pool is NOT designed or intended for TX support.
>>> E.g. it doesn't make sense to allocate a page_pool instance per socket, as
>>> the backing memory structures for page_pool are too much.
>>> As the number of RX-queues is more limited, it was deemed okay to use a
>>> page_pool per RX-queue, which sacrifices some memory to gain speed.
>>
>> As mentioned before, Tx recycling is based on a page_pool instance per
>> socket; it shares the page_pool instance with Rx.
>>
>> Anyway, based on feedback from edumazet and dsahern, I am still trying to
>> see if page pool is meaningful for Tx.
>>
>>>
>>>
>>>> The pp_recycle_bit was introduced to make the checking faster, instead of
>>>> getting stuff into cache and checking the page signature. If that ends up
>>>> being counterproductive, we could just replace the entire logic with the
>>>> frag count and the page signature, couldn't we? In that case we should be
>>>> very cautious and measure potential regression on the standard path.
>>>
>>> +1
>>
>> I am not sure "pp_recycle_bit was introduced to make the checking faster"
>> is valid. The size of "struct page" is only about 9 words (36/72 bytes),
>> which mostly fits in one cache line, and both the standard path and the
>> recycle path already touch "struct page", so the overhead of checking the
>> signature seems minimal.
>>
>> I agree that we need to be cautious and measure potential regression on the
>> standard path.
>
> Well, pp_recycle is on the same cache line boundary as the head_frag we
> need to decide on recycling. After that we start checking page signatures
> etc., which means the default release path remains mostly unaffected.
>
> I guess what you are saying here is that 'struct page' is going to be
> accessed eventually by the default network path, so there won't be any
> noticeable performance hit? What about the other usecases we have

Yes.

> for pp_recycle right now? __skb_frag_unref() in skb_shift() or
> skb_try_coalesce() (the latter can probably be removed tbh).

If we decide to go with an accurate indicator of a pp page, we just need to
make sure the network stack uses __skb_frag_unref() and __skb_frag_ref() to
put and get a page frag; the indicator check then only needs to be done in
__skb_frag_unref() and __skb_frag_ref(), so skb_shift() and
skb_try_coalesce() should be fine too.

>
>>
>> Another way is to use bit 0 of the frag->bv_page ptr to indicate whether a
>> frag page is from the page pool.
>
> Instead of the 'struct page' signature? And the pp_recycle bit will
> continue to exist?

The pp_recycle bit might then only exist for, or only be used for, the head
page of the skb. Bit 0 of the frag->bv_page ptr can be used to indicate a
frag page uniquely. Doing a memcpy of the shinfo or "*fragto = *fragfrom"
automatically passes the indicator to the new shinfo before __skb_frag_ref()
is called, and __skb_frag_ref() will then increment _refcount or
pp_frag_count according to bit 0 of frag->bv_page.

By the way, I have also prototyped the above idea, and it seems to work well
too.
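
Something like the untested sketch below is roughly what I have in mind; the
mask helpers are named arbitrarily here for illustration and are not taken
from the actual prototype:

/* Bit 0 of frag->bv_page marks a page_pool page (illustrative only). */
#define SKB_FRAG_PP_BIT		1UL

static inline struct page *__skb_frag_page(const skb_frag_t *frag)
{
	/* Mask out the indicator bit before using the page pointer. */
	return (struct page *)((unsigned long)frag->bv_page &
			       ~SKB_FRAG_PP_BIT);
}

static inline bool __skb_frag_is_pp_page(const skb_frag_t *frag)
{
	return (unsigned long)frag->bv_page & SKB_FRAG_PP_BIT;
}

static inline void __skb_frag_ref(skb_frag_t *frag)
{
	struct page *page = __skb_frag_page(frag);

	if (__skb_frag_is_pp_page(frag))
		atomic_long_inc(&page->pp_frag_count);	/* pp-aware ref */
	else
		get_page(page);				/* normal page ref */
}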
>
> Right now the 'naive' explanation of the recycling decision is something like:
>
> if (pp_recycle)                 <--- recycling bit is set
>     (check page signature)      <--- signature matches page pool
>         (check fragment refcnt) <--- if frags are enabled and we are the last consumer
>             recycle
>
> If we can prove the performance is unaffected when we eliminate the first if,
> then obviously we should remove it. I'll try running that test here and see,
> but keep in mind I am only testing on a 1GB interface. Any chance we can get
> measurements on beefier hardware using hns3?

Sure, I will try it.
As this kind of performance overhead is small, do you have any particular
performance test case in mind? (I have put a short sketch of how I read the
current decision path at the end of this mail.)

>
>>
>>>
>>>> But in general, I'd be happier if we only had a simple logic in our
>>>> testing for the pages we have to recycle. Debugging and understanding this
>>>> otherwise will end up being a mess.
>>>
>>>
>
> [...]
>
> Regards
> /Ilias
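
P.S. For reference, my reading of the current decision path above is roughly
the sketch below (simplified and written from memory, so please treat it as
an approximation rather than a verbatim copy of the code in the tree):

/* net/core/skbuff.c, roughly: only look further if the skb is marked. */
static bool skb_pp_recycle(struct sk_buff *skb, void *data)
{
	if (!IS_ENABLED(CONFIG_PAGE_POOL) || !skb->pp_recycle)
		return false;				/* 1. recycling bit */

	return page_pool_return_skb_page(virt_to_page(data));
}

/* net/core/page_pool.c, roughly: signature check, then return the page. */
bool page_pool_return_skb_page(struct page *page)
{
	struct page_pool *pp;

	page = compound_head(page);
	if (unlikely(page->pp_magic != PP_SIGNATURE))
		return false;				/* 2. page signature */

	pp = page->pp;
	/* 3. For frag pages, the put only frees/recycles when
	 * pp_frag_count indicates we are the last user.
	 */
	page_pool_put_full_page(pp, page, false);

	return true;
}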