From: Vlastimil Babka <vbabka@suse.cz>
To: Mel Gorman <mgorman@techsingularity.net>,
	Andrew Morton <akpm@linux-foundation.org>
Cc: Nicolas Saenz Julienne <nsaenzju@redhat.com>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	Michal Hocko <mhocko@kernel.org>, Hugh Dickins <hughd@google.com>,
	Yu Zhao <yuzhao@google.com>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	LKML <linux-kernel@vger.kernel.org>,
	Linux-MM <linux-mm@kvack.org>
Subject: Re: [PATCH 6/7] mm/page_alloc: Remotely drain per-cpu lists
Date: Mon, 4 Jul 2022 16:28:50 +0200
Message-ID: <2f9a95b8-d883-d5a3-3714-801bae36eec2@suse.cz>
In-Reply-To: <20220624125423.6126-7-mgorman@techsingularity.net>

On 6/24/22 14:54, Mel Gorman wrote:
> From: Nicolas Saenz Julienne <nsaenzju@redhat.com>
> 
> Some setups, notably NOHZ_FULL CPUs, are too busy to handle the per-cpu
> drain work queued by __drain_all_pages().  So introduce a new mechanism to
> remotely drain the per-cpu lists.  This is made possible by remotely
> acquiring the new per-cpu spinlocks in 'struct per_cpu_pages'.  A benefit
> of this new scheme is that drain operations are now migration safe.
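
As a minimal sketch of the idea (the field and helper names below are
illustrative, not necessarily those used in the patch): once 'struct
per_cpu_pages' carries its own spinlock, the CPU requesting the drain can
take a remote CPU's lock and free that CPU's PCP pages itself, rather than
asking the remote CPU to do it:

	/*
	 * Illustrative sketch only, not the exact patch.  The spinlock
	 * makes it safe for any CPU to walk and free another CPU's lists.
	 */
	struct per_cpu_pages {
		spinlock_t lock;	/* new: protects count and lists */
		int count;		/* pages on the lists below */
		/* ... existing fields elided ... */
		struct list_head lists[NR_PCP_LISTS];
	};

	/* Drain @cpu's PCP pages for @zone from any CPU; no IPI, no work item. */
	static void drain_remote_pcp(struct zone *zone, int cpu)
	{
		struct per_cpu_pages *pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);

		spin_lock(&pcp->lock);
		if (pcp->count)
			free_pcppages_bulk(zone, pcp->count, pcp); /* signature approximate */
		spin_unlock(&pcp->lock);
	}

Since correctness now comes from the lock rather than from "only this CPU
touches this list", the draining task may also migrate between CPUs
mid-operation, which is presumably what makes the drain migration safe.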
> 
> There was no observed performance degradation vs. the previous scheme.
> Both netperf and hackbench were run in parallel with triggering the
> __drain_all_pages(NULL, true) code path around 100 times per second.  The
> new scheme performs a bit better (~5%), although the important point here
> is that there are no performance regressions vs. the previous mechanism.
> Per-cpu list draining happens only in slow paths.
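
For context, the previous mechanism queued drain work on each target CPU
and then waited for all of it to finish, so a CPU too busy to run its
workqueue items stalled the whole drain.  A simplified sketch of that
pre-patch flow (details such as the cpus_with_pcps mask are elided), which
is exactly the stall Minchan Kim's report below describes:

	/* Roughly the pre-patch __drain_all_pages(): every CPU must run its
	 * own drain work, and the caller blocks in flush_work() until each
	 * target CPU gets around to it.
	 */
	static void __drain_all_pages_old(struct zone *zone)
	{
		int cpu;

		for_each_online_cpu(cpu) {
			struct pcpu_drain *drain = per_cpu_ptr(&pcpu_drain, cpu);

			drain->zone = zone;
			INIT_WORK(&drain->work, drain_local_pages_wq);
			queue_work_on(cpu, mm_percpu_wq, &drain->work);
		}
		for_each_online_cpu(cpu)
			flush_work(&per_cpu_ptr(&pcpu_drain, cpu)->work);
	}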
> 
> Minchan Kim tested an earlier version and reported;
> 
> 	My workload does not use NOHZ CPUs, but it runs apps under heavy
> 	memory pressure, so they go into direct reclaim and get stuck on
> 	drain_all_pages until the work on the workqueue runs.
> 
> 	unit: nanosecond
> 	max(dur)        avg(dur)                count(dur)
> 	166713013       487511.77786438033      1283
> 
> 	From traces, the system encountered drain_all_pages 1283 times;
> 	the worst case was 166ms and the average was 487us.
> 
> 	The other problem was alloc_contig_range in CMA.  The PCP draining
> 	sometimes takes several hundred milliseconds even though there is no
> 	memory pressure and only a few pages need to be migrated out,
> 	because the CPUs were fully booked.
> 
> 	Your patch perfectly removed that wasted time.
> 
> Signed-off-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

Thread overview: 21+ messages
2022-06-24 12:54 [PATCH v5 00/7] Drain remote per-cpu directly Mel Gorman
2022-06-24 12:54 ` [PATCH 1/7] mm/page_alloc: Add page->buddy_list and page->pcp_list Mel Gorman
2022-06-24 12:54 ` [PATCH 2/7] mm/page_alloc: Use only one PCP list for THP-sized allocations Mel Gorman
2022-06-24 12:54 ` [PATCH 3/7] mm/page_alloc: Split out buddy removal code from rmqueue into separate helper Mel Gorman
2022-06-24 12:54 ` [PATCH 4/7] mm/page_alloc: Remove mistaken page == NULL check in rmqueue Mel Gorman
2022-06-24 12:54 ` [PATCH 5/7] mm/page_alloc: Protect PCP lists with a spinlock Mel Gorman
2022-07-04 12:31   ` Vlastimil Babka
2022-07-05  7:20     ` Mel Gorman
2022-07-04 16:32   ` Nicolas Saenz Julienne
2022-06-24 12:54 ` [PATCH 6/7] mm/page_alloc: Remotely drain per-cpu lists Mel Gorman
2022-07-04 14:28   ` Vlastimil Babka [this message]
2022-06-24 12:54 ` [PATCH 7/7] mm/page_alloc: Replace local_lock with normal spinlock Mel Gorman
2022-06-24 18:59   ` Yu Zhao
2022-06-27  8:46     ` [PATCH] mm/page_alloc: Replace local_lock with normal spinlock -fix Mel Gorman
2022-07-04 14:39   ` [PATCH 7/7] mm/page_alloc: Replace local_lock with normal spinlock Vlastimil Babka
2022-07-04 16:33   ` Nicolas Saenz Julienne
2022-07-03 23:28 ` [PATCH v5 00/7] Drain remote per-cpu directly Andrew Morton
2022-07-03 23:31   ` Yu Zhao
2022-07-03 23:35     ` Andrew Morton
  -- strict thread matches above, loose matches on Subject: below --
2022-06-13 12:56 [PATCH v4 " Mel Gorman
2022-06-13 12:56 ` [PATCH 6/7] mm/page_alloc: Remotely drain per-cpu lists Mel Gorman
2022-06-16 16:41   ` Vlastimil Babka
