From: Mel Gorman <email@example.com>
To: Sebastian Andrzej Siewior <firstname.lastname@example.org>
Cc: email@example.com, Andrew Morton <firstname.lastname@example.org>,
Vlastimil Babka <email@example.com>,
Peter Zijlstra <firstname.lastname@example.org>,
Thomas Gleixner <email@example.com>
Subject: Re: [RFC] mm: Disable NUMA_BALANCING_DEFAULT_ENABLED and TRANSPARENT_HUGEPAGE on PREEMPT_RT
Date: Thu, 28 Oct 2021 13:52:24 +0100 [thread overview]
Message-ID: <20211028125224.GR3959@techsingularity.net>
On Thu, Oct 28, 2021 at 02:04:29PM +0200, Sebastian Andrzej Siewior wrote:
> On 2021-10-27 10:12:12 [+0100], Mel Gorman wrote:
> > On Tue, Oct 26, 2021 at 06:51:00PM +0200, Sebastian Andrzej Siewior wrote:
> > > In https://lore.kernel.org/all/20200304091159.GN3818@techsingularity.net/
> > > Mel wrote:
> > >
> > > | While I ack'd this, an RT application using THP is playing with fire,
> > > | I know the RT extension for SLE explicitly disables it from being enabled
> > > | at kernel config time. At minimum the critical regions should be mlocked
> > > | followed by prctl to disable future THP faults that are non-deterministic,
> > > | both from an allocation point of view, and a TLB access point of view. It's
> > > | still reasonable to expect a smaller TLB reach for huge pages than
> > > | base pages.
> > >
> > > With TRANSPARENT_HUGEPAGE enabled I haven't seen spikes > 100us
> > > in cyclictest. I did have mlockall() enabled but nothing else.
> > > PR_SET_THP_DISABLE remained unchanged (enabled). Is there anything to
> > > stress this with to be sure, or is mlockall() enough to keep using THP
> > > while leaving the mlock()ed applications alone?
> > >
> > > Then Mel continued with:
> > >
> > > | It's a similar hazard with NUMA balancing, an RT application should either
> > > | disable balancing globally or set a memory policy that forces it to be
> > > | ignored. They should be doing this anyway to avoid non-deterministic
> > > | memory access costs due to NUMA artifacts but it wouldn't surprise me
> > > | if some applications got it wrong.
> > >
> > > Usually (often) RT applications are pinned. I would assume that on a
> > > bigger box the RT tasks are at least pinned to a node. How bad can this
> > > get in the worst case? cyclictest pins every thread to a CPU. I could
> > > remove this for testing. What would be a good test to push this to its limit?
> > >
> > > Cc: Mel Gorman <firstname.lastname@example.org>
> > > Signed-off-by: Sebastian Andrzej Siewior <email@example.com>
> > Somewhat tentative but
> > Acked-by: Mel Gorman <firstname.lastname@example.org>
> > It's tentative because NUMA balancing is disabled by default on PREEMPT_RT
> > but can still be enabled, whereas THP is disabled entirely and can never
> > be enabled. This is a little inconsistent and it would be preferable for
> > them to match, either by disabling NUMA_BALANCING entirely or by forbidding
> > TRANSPARENT_HUGEPAGE_ALWAYS && PREEMPT_RT. I'm ok with either.
> Oh. I can go either way depending on the input ;)
> > There is the possibility that an RT application could use THP safely by
> > using madvise() and mlock(). That way, THP is available only to an
> > application that has explicit knowledge of THP and is smart enough to use
> > it only during the initialisation phase.
> Yes, that was my question. So if you have "always", do mlockall() in the
> application, and then have other threads of the same application doing
> malloc/free of memory that the RT thread is not touching, then bad
> things can still happen, right?
> My understanding is that all threads can be blocked in a page fault if
> there is some THP operation going on.
Hmm, it could happen if the memory used by the RT thread was not
hugepage-aligned, in which case khugepaged could potentially interfere.
khugepaged can be disabled if tuned properly, but the alignment
requirement would be tricky to meet. It is probably safer to just disable
THP as it has been historically. For consistency, force NUMA_BALANCING to
be disabled too, because it introduces non-deterministic latencies even
if memory regions are locked.
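
For reference, a deployment-level sketch of the runtime knobs involved
(these are the standard sysfs/procfs paths, assuming THP and NUMA
balancing are built into the kernel; not part of the proposed patch):

```shell
# Restrict THP to madvise() users only ("never" disables it outright).
echo madvise > /sys/kernel/mm/transparent_hugepage/enabled

# Stop khugepaged from defragmenting/collapsing pages behind the
# application's back.
echo 0 > /sys/kernel/mm/transparent_hugepage/khugepaged/defrag

# Disable automatic NUMA balancing globally.
echo 0 > /proc/sys/kernel/numa_balancing
```

This is system-wide tuning, so it falls on whoever deploys the RT
application rather than on the application itself.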
> > There is the slight caveat that even then THP can have inconsistent
> > latencies if it has a split THP with separate entries for base and huge
> > pages. The responsibility would be on the person deploying the application
> > to ensure a platform was suitable for both RT and using huge pages.
> split THP?
Sorry, "split TLB" where part of the TLB only handles base pages and
another part handles huge pages.
Thread overview:
2021-10-26 16:51 [RFC] mm: Disable NUMA_BALANCING_DEFAULT_ENABLED and TRANSPARENT_HUGEPAGE on PREEMPT_RT Sebastian Andrzej Siewior
2021-10-27 9:12 ` Mel Gorman
2021-10-28 12:04 ` Sebastian Andrzej Siewior
2021-10-28 12:52 ` Mel Gorman [this message]
2021-10-28 13:56 ` Sebastian Andrzej Siewior
2021-10-28 14:14 ` Mel Gorman
2021-10-28 14:34 ` Sebastian Andrzej Siewior