linux-kernel.vger.kernel.org archive mirror
From: Mike Kravetz <mike.kravetz@oracle.com>
To: Peter Xu <peterx@redhat.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: THP backed thread stacks
Date: Mon, 6 Mar 2023 16:40:49 -0800	[thread overview]
Message-ID: <20230307004049.GC4956@monkey> (raw)
In-Reply-To: <ZAaCISgq4A/GnkCk@x1n>

On 03/06/23 19:15, Peter Xu wrote:
> On Mon, Mar 06, 2023 at 03:57:30PM -0800, Mike Kravetz wrote:
> > One of our product teams recently experienced 'memory bloat' in their
> > environment.  The application in this environment is the JVM which
> > creates hundreds of threads.  Threads are ultimately created via
> > pthread_create which also creates the thread stacks.  pthread attributes
> > are modified so that stacks are 2MB in size.  It just so happens that
> > due to allocation patterns, all their stacks are at 2MB boundaries.  The
> > system has THP always set, so a huge page is allocated at the first
> > (write) fault when libpthread initializes the stack.
> > 
> > It would seem that this is expected behavior.  If you set THP always,
> > you may get huge pages anywhere.
> > 
> > However, I can't help but think that backing stacks with huge pages by
> > default may not be the right thing to do.  Stacks by their very nature
> > grow in somewhat unpredictable ways over time.  Using a large virtual
> > space so that memory is allocated as needed is the desired behavior.
> > 
> > The only way to address their 'memory bloat' via thread stacks today is
> > by switching THP to madvise.
> > 
> > Just wondering if there is anything better or more selective that can be
> > done?  Does it make sense to have THP backed stacks by default?  If not,
> > where is the best place to disable them?  A couple of thoughts:
> > - The kernel could disable huge pages on stacks.  libpthread/glibc pass
> >   the unused flag MAP_STACK.  We could key off this and disable huge pages.
> >   However, I'm sure there is somebody somewhere today that is getting better
> >   performance because they have huge pages backing their stacks.
> > - We could push this to glibc/libpthreads and have them use
> >   MADV_NOHUGEPAGE on thread stacks.  However, this also has the potential
> >   of regressing performance if somebody somewhere is getting better
> >   performance due to huge pages.
> 
> Yes, it always seems unsafe to me to change a default behavior.
> 
> For stacks I really can't tell why they must be different here.  I assume
> the problem is the wasted space, which is easily amplified with N threads.
> But IIUC the same applies to THP on normal memory: e.g., there can be a
> per-thread mmap() of 2MB even if only 4KB of each is used; if such a
> mmap() is populated by THP for each thread, there will also be a huge
> waste.
> 
> > - Other thoughts?
> > 
> > Perhaps this is just expected behavior of THP always which is unfortunate
> > in this situation.
> 
> I would think it proper for the app to explicitly choose what it wants if
> possible, and we do have the interfaces.
> 
> Then, would pthread_attr_getstack() plus MADV_NOHUGEPAGE work, applied
> at the JVM framework level?

Yes, I believe the only way for this to work would be for the JVM (or
any application) to explicitly allocate space for the stacks themselves.
Then they could do a MADV_NOHUGEPAGE on the stack before calling
pthread_create.
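
For what it's worth, a minimal sketch of that sequence might look like the
following (function names are mine; the MADV_NOHUGEPAGE has to land before
the first fault touches the region, and error handling is trimmed):

```c
#define _GNU_SOURCE
#include <errno.h>
#include <pthread.h>
#include <stddef.h>
#include <sys/mman.h>

#define STACK_SIZE (2UL * 1024 * 1024)   /* the 2MB stacks discussed above */

/*
 * Allocate an anonymous region for a thread stack and opt it out of
 * THP before any fault can instantiate a huge page in it.
 */
static void *alloc_stack_nohuge(size_t size)
{
	void *stack = mmap(NULL, size, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);
	if (stack == MAP_FAILED)
		return NULL;
	/* EINVAL here can simply mean a kernel built without THP. */
	if (madvise(stack, size, MADV_NOHUGEPAGE) && errno != EINVAL) {
		munmap(stack, size);
		return NULL;
	}
	return stack;
}

static void *thread_fn(void *arg)
{
	return arg;
}

/* Hand the pre-advised region to pthread_create() via the attr. */
static int spawn_on_nohuge_stack(pthread_t *tid)
{
	void *stack = alloc_stack_nohuge(STACK_SIZE);
	pthread_attr_t attr;

	if (stack == NULL)
		return -1;
	pthread_attr_init(&attr);
	pthread_attr_setstack(&attr, stack, STACK_SIZE);
	return pthread_create(tid, &attr, thread_fn, NULL);
}
```

The ordering is the whole point: with THP 'always', the first write fault
during libpthread's stack setup is what instantiates the huge page, so the
advice must be in place before the thread starts.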

The JVM (or application) would also need to create the guard page within
the stack that libpthread/glibc would normally create.

I'm still checking, but I think the JVM will also need to add some
additional code so that it knows when threads exit and can unmap their
stacks.  That is also something that comes 'for free' when libpthread/glibc
handles stack creation.
-- 
Mike Kravetz


Thread overview: 22+ messages
2023-03-06 23:57 THP backed thread stacks Mike Kravetz
2023-03-07  0:15 ` Peter Xu
2023-03-07  0:40   ` Mike Kravetz [this message]
2023-03-08 19:02     ` Mike Kravetz
2023-03-09 22:38       ` Zach O'Keefe
2023-03-09 23:33         ` Mike Kravetz
2023-03-10  0:05           ` Zach O'Keefe
2023-03-10  1:40             ` William Kucharski
2023-03-10 11:25               ` David Hildenbrand
2023-03-11 12:24                 ` William Kucharski
     [not found]                 ` <20230312005549.2609-1-hdanton@sina.com>
2023-03-12  4:39                   ` William Kucharski
2023-03-10 22:02             ` Yang Shi
2023-03-07 10:10 ` David Hildenbrand
2023-03-07 19:02   ` Mike Kravetz
2023-03-07 13:36 ` Mike Rapoport
2023-03-17 17:52 ` Matthew Wilcox
2023-03-17 18:46   ` Mike Kravetz
2023-03-20 11:12     ` David Hildenbrand
2023-03-20 17:46       ` William Kucharski
2023-03-20 17:52         ` David Hildenbrand
2023-03-20 18:06         ` Mike Kravetz
2023-03-18 12:58   ` David Laight
