From: Yunsheng Lin <linyunsheng@huawei.com>
To: "Li,Rongqing" <lirongqing@baidu.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"saeedm@mellanox.com" <saeedm@mellanox.com>
Subject: Re: Re: Re: Re: Re: Re: [PATCH] page_pool: mark unbound node page as reusable pages
Date: Thu, 5 Dec 2019 11:33:00 +0800
Message-ID: <bb1daad7-d5a9-0c36-e218-710b3f15b5a1@huawei.com>
In-Reply-To: <3a0d273cb57146d3b2f5c849569fb244@baidu.com>

On 2019/12/5 11:18, Li,Rongqing wrote:
> 
>>>> [1] https://lore.kernel.org/patchwork/patch/1125789/
>>>
>>>
>>> What is the status of this patch? I think you should fix your
>>> firmware or BIOS.
>>
>> Have not reached a conclusion yet.
> 
> I think it will never be accepted
> 
>>
>>>
>>> Consider the following condition:
>>>
>>> there are two NUMA nodes, and the NIC sits on node 2, but NUMA_NO_NODE
>>> is used; recycling fails the page_to_nid(page) == numa_mem_id() check,
>>> and the reallocated pages may always come from node 1, so recycling
>>> never succeeds.
>>
>> For the page pool:
>>
>> 1. if pool->p.nid != NUMA_NO_NODE, recycling is always decided by
>>    checking page_to_nid(page) == pool->p.nid.
>>
>> 2. only when pool->p.nid == NUMA_NO_NODE is numa_mem_id() checked
>>    to decide whether to recycle.
>>
>> Yes, if pool->p.nid == NUMA_NO_NODE and the cpu doing the recycling
>> changes each time, recycling may never succeed, but that is not common,
> 
> Why should we ignore this condition, yet accept your hardware with its
> abnormal node information?
> 
>> and changing the recycling cpu that often has its own performance
>> penalty anyway.
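
To make the above concrete, here is a minimal sketch of the node check I
am describing; the helper name is illustrative and this is not the exact
mainline page_pool code (it assumes <net/page_pool.h>, <linux/mm.h> and
<linux/topology.h>):

	/* Sketch: decide whether a page's node allows recycling it. */
	static bool page_pool_node_reusable(const struct page_pool *pool,
					    const struct page *page)
	{
		/* pool bound to a node: only pages from that node recycle */
		if (pool->p.nid != NUMA_NO_NODE)
			return page_to_nid(page) == pool->p.nid;

		/* NUMA_NO_NODE: fall back to the memory node of the cpu
		 * doing the recycling
		 */
		return page_to_nid(page) == numa_mem_id();
	}
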
> 
> As I have said, if the hardware cares about the NUMA node, the node
> should be assigned when the page pool is created, not depend on recycling.
> 
> If you insist on your idea, you can submit your patch after this one.
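
(For reference, binding a pool to the NIC's node at creation time, as you
describe, could look roughly like the sketch below; the helper is
illustrative, not taken from any driver, and the pool size is a made-up
value. It assumes <net/page_pool.h> and <linux/device.h>.)

	/* Illustrative only: create a page pool bound to the device's node.
	 * The caller checks the return value with IS_ERR().
	 */
	static struct page_pool *create_node_bound_pool(struct device *dev)
	{
		struct page_pool_params pp_params = {
			.order		= 0,
			.pool_size	= 256,			/* made-up size */
			.nid		= dev_to_node(dev),	/* NIC's home node */
			.dev		= dev,
			.dma_dir	= DMA_FROM_DEVICE,
		};

		return page_pool_create(&pp_params);
	}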

I am just arguing that the rx recycling the page pool does should be
consistent with the drivers that do their own recycling.

If a driver that currently does its own recycling starts to use the page
pool, there may be a behavior change when the two are not consistent.

> 
> -RongQing



Thread overview: 14+ messages
2019-12-04 10:14 [PATCH] page_pool: mark unbound node page as reusable pages Li RongQing
2019-12-05  0:55 ` Yunsheng Lin
2019-12-05  1:08   ` Re: " Li,Rongqing
2019-12-05  1:43     ` Yunsheng Lin
2019-12-05  1:55       ` Re: " Li,Rongqing
2019-12-05  2:06         ` Yunsheng Lin
2019-12-05  2:17           ` Re: " Li,Rongqing
2019-12-05  2:30             ` Yunsheng Lin
2019-12-05  2:47               ` Re: " Li,Rongqing
2019-12-05  3:03                 ` Yunsheng Lin
2019-12-05  3:18                   ` Re: " Li,Rongqing
2019-12-05  3:33                     ` Yunsheng Lin [this message]
2019-12-06  8:05                       ` Re: " Li,Rongqing
2019-12-05  1:22   ` Li,Rongqing
