From: Mel Gorman <mgorman@techsingularity.net>
To: Nicolas Saenz Julienne <nsaenzju@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	Michal Hocko <mhocko@kernel.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Linux-MM <linux-mm@kvack.org>
Subject: Re: [PATCH 6/6] mm/page_alloc: Remotely drain per-cpu lists
Date: Fri, 13 May 2022 19:23:01 +0100	[thread overview]
Message-ID: <20220513182301.GK3441@techsingularity.net> (raw)
In-Reply-To: <167d30f439d171912b1ef584f20219e67a009de8.camel@redhat.com>

On Fri, May 13, 2022 at 05:19:18PM +0200, Nicolas Saenz Julienne wrote:
> On Fri, 2022-05-13 at 16:04 +0100, Mel Gorman wrote:
> > On Thu, May 12, 2022 at 12:37:43PM -0700, Andrew Morton wrote:
> > > On Thu, 12 May 2022 09:50:43 +0100 Mel Gorman <mgorman@techsingularity.net> wrote:
> > > 
> > > > From: Nicolas Saenz Julienne <nsaenzju@redhat.com>
> > > > 
> > > > Some setups, notably NOHZ_FULL CPUs, are too busy to handle the per-cpu
> > > > drain work queued by __drain_all_pages(), so introduce a new mechanism to
> > > > remotely drain the per-cpu lists. It is made possible by remotely locking
> > > > the new per-cpu spinlocks in 'struct per_cpu_pages'. A benefit of this new
> > > > scheme is that drain operations are now migration safe.
> > > > 
> > > > There was no observed performance degradation vs. the previous scheme.
> > > > Both netperf and hackbench were run in parallel with triggering the
> > > > __drain_all_pages(NULL, true) code path around 100 times per second.
> > > > The new scheme performs a bit better (~5%), although the important point
> > > > here is that there are no performance regressions vs. the previous
> > > > mechanism. Per-cpu list draining happens only in slow paths.
> > > > 
> > > > Minchan Kim tested this independently and reported:
> > > > 
> > > > 	My workload is not NOHZ CPUs but runs apps under heavy memory
> > > > 	pressure, so they go to direct reclaim and get stuck on
> > > > 	drain_all_pages until the work on the workqueue runs.
> > > > 
> > > > 	unit: nanosecond
> > > > 	max(dur)        avg(dur)                count(dur)
> > > > 	166713013       487511.77786438033      1283
> > > > 
> > > > 	From the traces, the system encountered drain_all_pages 1283 times;
> > > > 	the worst case was 166ms and the average was 487us.
> > > > 
> > > > 	The other problem was alloc_contig_range in CMA. The PCP draining
> > > > 	sometimes takes several hundred milliseconds even though there is
> > > > 	no memory pressure and only a few pages need to be migrated out,
> > > > 	but the CPUs were fully booked.
> > > > 
> > > > 	Your patch completely removed that wasted time.
> > > 
> > > I'm not getting a sense here of the overall effect upon userspace
> > > performance.  As Thomas said last year in
> > > https://lkml.kernel.org/r/87v92sgt3n.ffs@tglx
> > > 
> > > : The changelogs and the cover letter have a distinct void vs. that which
> > > : means this is just another example of 'scratch my itch' changes w/o
> > > : proper justification.
> > > 
> > > Is there more to all of this than itchiness and if so, well, you know
> > > the rest ;)
> > > 
> > 
> > I think Minchan's example is clear-cut. The draining operation can take
> > an arbitrary amount of time waiting for the workqueue to run on each CPU
> > and can cause severe delays under reclaim or CMA, and the patch fixes
> > it. Maybe most users won't even notice, but I bet phone users do if a
> > camera app takes too long to open.
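
For anyone skimming, the mechanism being replaced queues drain work on
each CPU with pages on its PCP lists and then waits for all of it to
complete. Paraphrasing the current mm/page_alloc.c from memory (a
sketch, not an exact quote), the core of __drain_all_pages() is roughly:

	for_each_cpu(cpu, &cpus_with_pcps) {
		struct pcpu_drain *drain = per_cpu_ptr(&pcpu_drain, cpu);

		drain->zone = zone;
		INIT_WORK(&drain->work, drain_local_pages_wq);
		/* Ask each CPU to drain its own pcp lists ... */
		queue_work_on(cpu, mm_percpu_wq, &drain->work);
	}
	for_each_cpu(cpu, &cpus_with_pcps)
		/* ... and block until every CPU has actually run it. */
		flush_work(&per_cpu_ptr(&pcpu_drain, cpu)->work);

The flush_work() loop is where the caller can stall for an arbitrary
amount of time if a target CPU is busy or isolated.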
> > 
> > The first paragraph was written by Nicolas and I did not want to modify
> > it heavily while still putting his Signed-off-by on it. Maybe it could
> > have been clearer though, because "too busy" is vague when the actual
> > intent is to avoid interfering with RT tasks. Does this sound better to you?
> > 
> > 	Some setups, notably NOHZ_FULL CPUs, may be running realtime or
> > 	latency-sensitive applications that cannot tolerate interference
> > 	due to per-cpu drain work queued by __drain_all_pages(). Introduce
> > 	a new mechanism to remotely drain the per-cpu lists, made possible
> > 	by remotely locking the new per-cpu spinlocks in 'struct
> > 	per_cpu_pages'. This has two advantages: the time to drain is more
> > 	predictable and unrelated tasks are not interrupted.
> > 
> > You raise a very valid point with Thomas' mail and it is a concern that
> > the local_lock is no longer strictly local. We still need preemption to
> > be disabled between the percpu lookup and the lock acquisition but that
> > can be done with get_cpu_var() to make the scope clear.
> 
> This isn't going to work in RT :(
> 
> get_cpu_var() disables preemption, hampering RT spinlock use. There is more to
> it in Documentation/locking/locktypes.rst.
> 

Bah, you're right. A helper that called preempt_disable() on !RT
and migrate_disable() on RT would work, although it would be similar to
local_lock with a different name. I'll look on Monday at how the code
could be restructured to always have the percpu lookup immediately
before the lock acquisition. Once that is done, I'll look at what sort
of helper could "disable preempt/migration, look up the pcp structure,
acquire the lock, re-enable preempt/migration". It's effectively the
magic trick that local_lock uses to always lock the right pcpu lock,
but we want spinlock semantics for the remote drain.
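
Untested and with placeholder names, but something along these lines;
on !RT a plain preempt_disable() is enough to keep the this_cpu_ptr()
lookup stable, while on RT the spinlock can sleep so only migration is
disabled:

	static inline struct per_cpu_pages *pcp_spin_lock(struct zone *zone)
	{
		struct per_cpu_pages *pcp;

	#ifdef CONFIG_PREEMPT_RT
		migrate_disable();
	#else
		preempt_disable();
	#endif
		/* The lookup must not race with migration to another CPU */
		pcp = this_cpu_ptr(zone->per_cpu_pageset);
		spin_lock(&pcp->lock);
		return pcp;
	}

	static inline void pcp_spin_unlock(struct per_cpu_pages *pcp)
	{
		spin_unlock(&pcp->lock);
	#ifdef CONFIG_PREEMPT_RT
		migrate_enable();
	#else
		preempt_enable();
	#endif
	}

A remote drain can then take pcp->lock on another CPU's structure
directly, which is the point of using a spinlock instead of local_lock.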

-- 
Mel Gorman
SUSE Labs

Thread overview: 46+ messages
2022-05-12  8:50 [PATCH 0/6] Drain remote per-cpu directly v3 Mel Gorman
2022-05-12  8:50 ` [PATCH 1/6] mm/page_alloc: Add page->buddy_list and page->pcp_list Mel Gorman
2022-05-13 11:59   ` Nicolas Saenz Julienne
2022-05-19  9:36   ` Vlastimil Babka
2022-05-12  8:50 ` [PATCH 2/6] mm/page_alloc: Use only one PCP list for THP-sized allocations Mel Gorman
2022-05-19  9:45   ` Vlastimil Babka
2022-05-12  8:50 ` [PATCH 3/6] mm/page_alloc: Split out buddy removal code from rmqueue into separate helper Mel Gorman
2022-05-13 12:01   ` Nicolas Saenz Julienne
2022-05-19  9:52   ` Vlastimil Babka
2022-05-23 16:09   ` Qais Yousef
2022-05-24 11:55     ` Mel Gorman
2022-05-25 11:23       ` Qais Yousef
2022-05-12  8:50 ` [PATCH 4/6] mm/page_alloc: Remove unnecessary page == NULL check in rmqueue Mel Gorman
2022-05-13 12:03   ` Nicolas Saenz Julienne
2022-05-19 10:57   ` Vlastimil Babka
2022-05-19 12:13     ` Mel Gorman
2022-05-19 12:26       ` Vlastimil Babka
2022-05-12  8:50 ` [PATCH 5/6] mm/page_alloc: Protect PCP lists with a spinlock Mel Gorman
2022-05-13 12:22   ` Nicolas Saenz Julienne
2022-05-12  8:50 ` [PATCH 6/6] mm/page_alloc: Remotely drain per-cpu lists Mel Gorman
2022-05-12 19:37   ` Andrew Morton
2022-05-13 15:04     ` Mel Gorman
2022-05-13 15:19       ` Nicolas Saenz Julienne
2022-05-13 18:23         ` Mel Gorman [this message]
2022-05-17 12:57           ` Mel Gorman
2022-05-12 19:43 ` [PATCH 0/6] Drain remote per-cpu directly v3 Andrew Morton
2022-05-13 14:23   ` Mel Gorman
2022-05-13 19:38     ` Andrew Morton
2022-05-16 10:53       ` Mel Gorman
2022-05-13 12:24 ` Nicolas Saenz Julienne
2022-05-17 23:35 ` Qian Cai
2022-05-18 12:51   ` Mel Gorman
2022-05-18 16:27     ` Qian Cai
2022-05-18 17:15       ` Paul E. McKenney
2022-05-19 13:29         ` Qian Cai
2022-05-19 19:15           ` Paul E. McKenney
2022-05-19 21:05             ` Qian Cai
2022-05-19 21:29               ` Paul E. McKenney
2022-05-18 17:26   ` Marcelo Tosatti
2022-05-18 17:44     ` Marcelo Tosatti
2022-05-18 18:01 ` Nicolas Saenz Julienne
2022-05-26 17:19 ` Qian Cai
2022-05-27  8:39   ` Mel Gorman
2022-05-27 12:58     ` Qian Cai
  -- strict thread matches above, loose matches on Subject: below --
2022-05-09 13:07 [RFC PATCH 0/6] Drain remote per-cpu directly v2 Mel Gorman
2022-05-09 13:08 ` [PATCH 6/6] mm/page_alloc: Remotely drain per-cpu lists Mel Gorman
2022-04-20  9:59 [RFC PATCH 0/6] Drain remote per-cpu directly Mel Gorman
2022-04-20  9:59 ` [PATCH 6/6] mm/page_alloc: Remotely drain per-cpu lists Mel Gorman
