From: Jesper Dangaard Brouer <brouer@redhat.com>
To: Yunsheng Lin <linyunsheng@huawei.com>
Cc: <netdev@vger.kernel.org>, <lirongqing@baidu.com>,
	Ilias Apalodimas <ilias.apalodimas@linaro.org>,
	Saeed Mahameed <saeedm@mellanox.com>, <mhocko@kernel.org>,
	<peterz@infradead.org>, <linux-kernel@vger.kernel.org>,
	brouer@redhat.com
Subject: Re: [net-next v6 PATCH 1/2] page_pool: handle page recycle for NUMA_NO_NODE condition
Date: Thu, 2 Jan 2020 12:46:14 +0100
Message-ID: <20200102124614.1335fa32@carbon>
In-Reply-To: <b56069bd-ca8f-68bc-7e0b-bd0da423f891@huawei.com>

On Mon, 30 Dec 2019 09:59:23 +0800
Yunsheng Lin <linyunsheng@huawei.com> wrote:

> On 2019/12/28 1:13, Jesper Dangaard Brouer wrote:
> > The check in pool_page_reusable (page_to_nid(page) == pool->p.nid) is
> > not valid if page_pool was configured with pool->p.nid = NUMA_NO_NODE.
> > 
> > The goal of the NUMA changes in commit d5394610b1ba ("page_pool: Don't
> > recycle non-reusable pages") was to have RX-pages belong to the same
> > NUMA node as the CPU processing the RX-packet during softirq/NAPI, as
> > illustrated by the performance measurements.
> > 
> > This patch moves the NAPI checks out of fast-path, and at the same time
> > solves the NUMA_NO_NODE issue.  
> 
> There seems to be a minor NUMA_NO_NODE case that has not been handled by
> this patch yet:
> 
> 1. When the page is always recycled to pool->alloc.cache.
> 2. And page_pool_alloc_pages always returns pages from pool->alloc.cache.
> 
> Then a non-local page will be reused when the RX NAPI affinity changes.
> 
> Of course we can handle the above by calling page_pool_update_nid(), but
> that may require every user to call page_pool_update_nid() explicitly,
> even when the user has specified pool->p.nid as NUMA_NO_NODE.
> 
> Any consideration for above case?

Yes, I tried to explain below that "we accept that transitioning the
alloc cache doesn't happen immediately".  Thus, I've not forgotten
about this case, and I still consider this patch correct.

You have to consider/understand how this page_pool cache is used.  The
alloc cache can only get "recycled" pages in rare situations.  That case
basically only happens for XDP_DROP (which has extreme performance
needs).  Sure, in a 100% XDP_DROP workload the case you describe can
happen, but such a workload is unlikely/useless in real-life production.
Some packets must be let through, and those will have to travel through
the ptr_ring, where the NUMA check is done.  Thus, pages will eventually
transition to the new NAPI node of the new IRQ-CPU.
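
To make the flow concrete, here is a minimal sketch of the two recycle
destinations.  This is condensed/paraphrased (not the exact page_pool
code), and put_page() stands in for the full release path, which also
DMA-unmaps the page:

/* Sketch: simplified page_pool recycle path */
void page_pool_put_page_sketch(struct page_pool *pool, struct page *page,
			       bool allow_direct)
{
	/* XDP_DROP fast-path: page goes straight back into
	 * pool->alloc.cache, with no NUMA check.  Only valid when
	 * recycling from the softirq/NAPI context that owns the cache.
	 */
	if (allow_direct && in_serving_softirq() &&
	    pool->alloc.count < PP_ALLOC_CACHE_SIZE) {
		pool->alloc.cache[pool->alloc.count++] = page;
		return;
	}

	/* Normal case: page goes into the ptr_ring.  The NUMA check
	 * happens later, when the alloc cache is refilled by dequeuing
	 * from this ring.
	 */
	if (ptr_ring_produce(&pool->ring, page))
		put_page(page);	/* ring full: release to page allocator */
}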


> > 
> > First realize that alloc_pages_node() with pool->p.nid = NUMA_NO_NODE
> > will look up the current CPU's nid (NUMA id) via numa_mem_id(), which
> > is used as the preferred nid.  It is only in rare situations, e.g. when
> > the NUMA zone runs dry, that a page doesn't get allocated from the
> > preferred nid.  The page_pool API allows drivers to control the nid
> > themselves via pool->p.nid.
> > 
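
(For reference, this fallback lives in the page allocator itself; roughly,
from include/linux/gfp.h:

static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
					    unsigned int order)
{
	if (nid == NUMA_NO_NODE)
		nid = numa_mem_id();	/* prefer the local CPU's node */

	return __alloc_pages_node(nid, gfp_mask, order);
}

so with pool->p.nid = NUMA_NO_NODE the preferred node automatically
follows the CPU doing the allocation.)
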
> > This patch moves the NAPI check to when the alloc cache is refilled,
> > via dequeuing/consuming pages from the ptr_ring.  Thus, we can allow
> > placing pages from a remote NUMA node into the ptr_ring, as the
> > dequeue/consume step will check the NUMA node.  All current drivers
> > using page_pool will alloc/refill the RX-ring from the same CPU running
> > the softirq/NAPI process.
> > 
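
The refill step looks roughly like this (condensed from the patch, not
the exact diff; put_page() again stands in for the full DMA-unmap +
release):

/* Sketch: refill pool->alloc.cache from the ptr_ring, checking NUMA */
static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
{
	struct ptr_ring *r = &pool->ring;
	struct page *page;
	int pref_nid;	/* preferred NUMA node */

	/* NUMA_NO_NODE means: follow the CPU currently running NAPI */
	pref_nid = (pool->p.nid == NUMA_NO_NODE) ? numa_mem_id() : pool->p.nid;

	/* Refill alloc cache, while checking the NUMA node of each page */
	do {
		page = __ptr_ring_consume(r);
		if (!page)
			break;

		if (likely(page_to_nid(page) == pref_nid)) {
			pool->alloc.cache[pool->alloc.count++] = page;
		} else {
			/* NUMA mismatch: release this page and stop the
			 * refill cycle here, on purpose (see below).
			 */
			put_page(page);
			page = NULL;
			break;
		}
	} while (pool->alloc.count < PP_ALLOC_CACHE_REFILL);

	/* Hand one page to the caller, if the refill got any */
	if (likely(pool->alloc.count > 0))
		page = pool->alloc.cache[--pool->alloc.count];

	return page;
}
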
> > Drivers that control the nid explicitly also use page_pool_update_nid()
> > when changing the nid at runtime.  To speed up the transition to the
> > new nid, the alloc cache is now flushed on nid changes.  This forces
> > pages to come from the ptr_ring, which does the appropriate nid check.
> > 
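
The flush added by this patch is roughly:

/* Sketch: page_pool_update_nid() now also empties the alloc cache, so
 * later allocations must refill via the ptr_ring, where the nid check
 * above gets applied.  put_page() again stands in for the full release.
 */
void page_pool_update_nid(struct page_pool *pool, int new_nid)
{
	struct page *page;

	WRITE_ONCE(pool->p.nid, new_nid);

	/* Flush pool->alloc.cache to speed up the nid transition */
	while (pool->alloc.count) {
		page = pool->alloc.cache[--pool->alloc.count];
		put_page(page);
	}
}
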
> > For the NUMA_NO_NODE case, when a NIC IRQ is moved to another NUMA
> > node, we accept that transitioning the alloc cache doesn't happen
> > immediately.  The preferred nid changes at runtime by consulting
> > numa_mem_id(), based on the CPU processing RX-packets.

This is the section where I say: "we accept that transitioning the
alloc cache doesn't happen immediately".


> > Notice, to avoid stressing the page buddy allocator and avoid doing too
> > much work under softirq with preempt disabled, the NUMA check at
> > ptr_ring dequeue will break the refill cycle when detecting a NUMA
> > mismatch.  This will cause a slower transition, but it's done on purpose.
> > 
> > Fixes: d5394610b1ba ("page_pool: Don't recycle non-reusable pages")
> > Reported-by: Li RongQing <lirongqing@baidu.com>
> > Reported-by: Yunsheng Lin <linyunsheng@huawei.com>
> > Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>



-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer

