Netdev Archive on lore.kernel.org
From: Yunsheng Lin <linyunsheng@huawei.com>
To: "Li,Rongqing" <lirongqing@baidu.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"saeedm@mellanox.com" <saeedm@mellanox.com>
Subject: Re: [PATCH] page_pool: mark unbound node page as reusable pages
Date: Thu, 5 Dec 2019 10:06:15 +0800
Message-ID: <3e3b1e0c-e7e0-eea2-b1b5-20bf2b8fc34b@huawei.com> (raw)
In-Reply-To: <68135c0148894aa3b26db19120fb7bac@baidu.com>

On 2019/12/5 9:55, Li,Rongqing wrote:
> 
> 
>> -----Original Message-----
>> From: Yunsheng Lin [mailto:linyunsheng@huawei.com]
>> Sent: December 5, 2019 9:44
>> To: Li,Rongqing <lirongqing@baidu.com>; netdev@vger.kernel.org;
>> saeedm@mellanox.com
>> Subject: Re: [PATCH] page_pool: mark unbound node page as reusable
>> pages
>>
>> On 2019/12/5 9:08, Li,Rongqing wrote:
>>>
>>>
>>>> -----Original Message-----
>>>> From: Yunsheng Lin [mailto:linyunsheng@huawei.com]
>>>> Sent: December 5, 2019 8:55
>>>> To: Li,Rongqing <lirongqing@baidu.com>; netdev@vger.kernel.org;
>>>> saeedm@mellanox.com
>>>> Subject: Re: [PATCH] page_pool: mark unbound node page as reusable pages
>>>>
>>>> On 2019/12/4 18:14, Li RongQing wrote:
>>>>> some drivers uses page pool, but not require to allocate page from
>>>>> bound node, so pool.p.nid is NUMA_NO_NODE, and this fixed patch will
>>>>> block this kind of driver to recycle
>>>>>
>>>>> Fixes: d5394610b1ba ("page_pool: Don't recycle non-reusable pages")
>>>>> Signed-off-by: Li RongQing <lirongqing@baidu.com>
>>>>> Cc: Saeed Mahameed <saeedm@mellanox.com>
>>>>> ---
>>>>>  net/core/page_pool.c | 4 +++-
>>>>>  1 file changed, 3 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/net/core/page_pool.c b/net/core/page_pool.c index
>>>>> a6aefe989043..4054db683178 100644
>>>>> --- a/net/core/page_pool.c
>>>>> +++ b/net/core/page_pool.c
>>>>> @@ -317,7 +317,9 @@ static bool __page_pool_recycle_direct(struct page *page,
>>>>>   */
>>>>>  static bool pool_page_reusable(struct page_pool *pool, struct page *page)
>>>>>  {
>>>>> -	return !page_is_pfmemalloc(page) && page_to_nid(page) == pool->p.nid;
>>>>> +	return !page_is_pfmemalloc(page) &&
>>>>> +		(page_to_nid(page) == pool->p.nid ||
>>>>> +		 pool->p.nid == NUMA_NO_NODE);
>>>>
>>>> If I understand it correctly, you are allowing recycling when
>>>> pool->p.nid is NUMA_NO_NODE, which does not seem to match the commit
>>>> log: "this fixed patch will block this kind of driver to recycle".
>>>>
>>>> Maybe you mean commit d5394610b1ba by "this fixed patch"?
>>>
>>> yes
>>>
>>>>
>>>> Also, maybe it is better to allow recycling if the below condition is matched:
>>>>
>>>> 	pool->p.nid == NUMA_NO_NODE && page_to_nid(page) == numa_mem_id()
>>>
>>> If the driver uses NUMA_NO_NODE, it does not care about the NUMA node,
>>> and maybe its platform only has one node, so there is no need for a
>>> comparison like "page_to_nid(page) == numa_mem_id()".
>>
>> Normally, a driver does not care whether the node of a device is NUMA_NO_NODE
>> or not; it just uses the node returned from dev_to_node().
>>
>> Even on a multi-node system, the node of a device may be NUMA_NO_NODE
>> when the BIOS/FW has not specified it through ACPI/DT, see [1].
>>
>>
>> [1] https://lore.kernel.org/patchwork/patch/1141952/
>>
> 
> In this condition, pages can be allocated from any node since driver boot,
> so why do we need to check "page_to_nid(page) == numa_mem_id()" at recycle time?

For performance: recycling works better when the rx page is on the same
node as the rx process that consumes it.

We want the node of the rx page to be close to the node of the device/CPU
to achieve better performance. Since the node of the device is unknown,
maybe we should choose the node of memory closest to the CPU that is
running the rx cleaning, which is what numa_mem_id() returns.

> 
> -Li 
> 
>>>
>>>
>>> -RongQing
>>>
>>>
>>>>
>>>>>  }
>>>>>
>>>>>  void __page_pool_put_page(struct page_pool *pool, struct page
>>>>> *page,
>>>>>
>>>
> 


Thread overview: 13+ messages
2019-12-04 10:14 Li RongQing
2019-12-05  0:55 ` Yunsheng Lin
2019-12-05  1:08   ` Re: " Li,Rongqing
2019-12-05  1:43     ` Yunsheng Lin
2019-12-05  1:55       ` Re: " Li,Rongqing
2019-12-05  2:06         ` Yunsheng Lin [this message]
2019-12-05  2:17           ` Re: " Li,Rongqing
2019-12-05  2:30             ` Yunsheng Lin
2019-12-05  2:47               ` Re: " Li,Rongqing
2019-12-05  3:03                 ` Yunsheng Lin
2019-12-05  3:18                   ` Re: " Li,Rongqing
2019-12-05  3:33                     ` Yunsheng Lin
2019-12-05  1:22   ` Li,Rongqing
