From: Yunsheng Lin <linyunsheng@huawei.com>
To: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Cc: <davem@davemloft.net>, <kuba@kernel.org>, <pabeni@redhat.com>,
	<netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
	Lorenzo Bianconi <lorenzo@kernel.org>,
	Alexander Duyck <alexander.duyck@gmail.com>,
	Liang Chen <liangchen.linux@gmail.com>,
	Alexander Lobakin <aleksander.lobakin@intel.com>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	Eric Dumazet <edumazet@google.com>
Subject: Re: [PATCH net-next v12 1/5] page_pool: unify frag_count handling in page_pool_is_last_frag()
Date: Mon, 23 Oct 2023 20:26:34 +0800	[thread overview]
Message-ID: <4da09821-d964-924f-470b-e5c1de18eecf@huawei.com> (raw)
In-Reply-To: <ZTZcTrTy9ulPast5@hades>

On 2023/10/23 19:43, Ilias Apalodimas wrote:
> Hi Yunsheng, 
> 
> [...]
> 
>> +	 * 1. 'n == 1': no need to actually overwrite it.
>> +	 * 2. 'n != 1': overwrite it with one, which is the rare case
>> +	 *              for pp_frag_count draining.
>>  	 *
>> -	 * The main advantage to doing this is that an atomic_read is
>> -	 * generally a much cheaper operation than an atomic update,
>> -	 * especially when dealing with a page that may be partitioned
>> -	 * into only 2 or 3 pieces.
>> +	 * The main advantage of doing this is that not only do we avoid an
>> +	 * atomic update, as an atomic_read is generally a much cheaper
>> +	 * operation than an atomic update, especially when dealing with a page
>> +	 * that may be partitioned into only 2 or 3 pieces; we also unify the
>> +	 * pp_frag_count handling by ensuring all pages are initially
>> +	 * partitioned into only 1 piece, and only overwriting it when the page
>> +	 * is partitioned into more than one piece.
>>  	 */
>> -	if (atomic_long_read(&page->pp_frag_count) == nr)
>> +	if (atomic_long_read(&page->pp_frag_count) == nr) {
>> +		/* As we have ensured nr is always one for the constant case
>> +		 * using the BUILD_BUG_ON(), we only need to handle the
>> +		 * non-constant case here for pp_frag_count draining, which is
>> +		 * a rare case.
>> +		 */
>> +		BUILD_BUG_ON(__builtin_constant_p(nr) && nr != 1);
>> +		if (!__builtin_constant_p(nr))
>> +			atomic_long_set(&page->pp_frag_count, 1);
> 
> Aren't we changing the behaviour of the current code here? IIRC, if
> atomic_long_read(&page->pp_frag_count) == nr we never updated the atomic
> pp_frag_count, and the reasoning was that the next caller can set it
> properly.

If the next caller calls page_pool_alloc_frag(), then yes:
page_pool_fragment_page() will be used to reset page->pp_frag_count,
so it does not really matter what the value of page->pp_frag_count is
when we are recycling a page.
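For reference, the reset on the frag-allocation path is just an
unconditional overwrite. Here is a minimal userspace sketch in C11
stdatomic, not the kernel code; `struct model_page` and
`fragment_page()` are made-up stand-ins for struct page and the real
page_pool_fragment_page() helper:

```c
#include <stdatomic.h>

/* Toy stand-in for struct page; only pp_frag_count is modelled. */
struct model_page {
	atomic_long pp_frag_count;
};

/* Mirrors what page_pool_fragment_page() does: unconditionally set the
 * frag count to nr when handing out a freshly fragmented page, so any
 * stale value left over from recycling is irrelevant on this path. */
static void fragment_page(struct model_page *page, long nr)
{
	atomic_store(&page->pp_frag_count, nr);
}
```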

If the next caller calls page_pool_alloc_pages() directly without
fragmenting a page, the above code ensures that pp_frag_count is always
one when page_pool_alloc_pages() fetches a page from pool->alloc or
pool->ring, because page_pool_fragment_page() is not used to reset
page->pp_frag_count for page_pool_alloc_pages() and we have removed the
per-page_pool PP_FLAG_PAGE_FRAG check in page_pool_is_last_frag().

As we don't know whether the caller will be page_pool_alloc_frag() or
page_pool_alloc_pages(), the above code ensures that pages in pool->alloc
or pool->ring always have pp_frag_count set to one.
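To make that invariant concrete, here is a hedged userspace model of the
patched defrag logic (plain C11 stdatomic, not the kernel code;
`struct model_page` and `defrag_page()` are made-up names standing in
for struct page and page_pool_defrag_page()). It shows that every path,
fast or slow, leaves a fully drained page with pp_frag_count == 1:

```c
#include <stdatomic.h>

/* Toy stand-in for struct page; only pp_frag_count is modelled. */
struct model_page {
	atomic_long pp_frag_count;
};

/* Models the patched page_pool_defrag_page(): returns the remaining
 * frag count, and resets the count to 1 whenever this caller turns out
 * to be the last user, so a recycled page always re-enters pool->alloc
 * or pool->ring with pp_frag_count == 1. */
static long defrag_page(struct model_page *page, long nr)
{
	long ret;

	/* Fast path: we hold all remaining references, so no atomic RMW
	 * is needed. This model always overwrites with 1; the kernel
	 * patch additionally elides the store when nr is a compile-time
	 * constant 1, since the value is already correct then. */
	if (atomic_load(&page->pp_frag_count) == nr) {
		atomic_store(&page->pp_frag_count, 1);
		return 0;
	}

	/* Slow path: another user still holds fragments; drop ours. */
	ret = atomic_fetch_sub(&page->pp_frag_count, nr) - nr;

	/* We raced to be the last user anyway: restore the invariant. */
	if (ret == 0)
		atomic_store(&page->pp_frag_count, 1);
	return ret;
}
```

For example, a page split into 3 fragments and released as 2 then 1
takes the slow path first (returning 1) and the fast path second
(returning 0), after which the count reads 1 again, ready for either
allocation path.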


> 
>> +
>>  		return 0;
>> +	}
>>  
>>  	ret = atomic_long_sub_return(nr, &page->pp_frag_count);
>>  	WARN_ON(ret < 0);
>> +
>> +	/* We are the last user here too; reset pp_frag_count back to 1 to
>> +	 * ensure all pages are initially partitioned into 1 piece. This
>> +	 * should be the rare case where the last two fragment users call
>> +	 * page_pool_defrag_page() currently.
>> +	 */
>> +	if (unlikely(!ret))
>> +		atomic_long_set(&page->pp_frag_count, 1);
>> +
>>  	return ret;
>>  }
>>  
>  
>  [....]
> 
>  Thanks
>  /Ilias
> 
> .
> 

