linux-mm.kvack.org archive mirror
* THP backed thread stacks
@ 2023-03-06 23:57 Mike Kravetz
  2023-03-07  0:15 ` Peter Xu
                   ` (3 more replies)
  0 siblings, 4 replies; 23+ messages in thread
From: Mike Kravetz @ 2023-03-06 23:57 UTC (permalink / raw)
  To: linux-mm, linux-kernel

One of our product teams recently experienced 'memory bloat' in their
environment.  The application in this environment is the JVM, which
creates hundreds of threads.  Threads are ultimately created via
pthread_create, which also allocates the thread stacks.  pthread
attributes are modified so that stacks are 2MB in size.  It just so
happens that, due to allocation patterns, all of their stacks land on
2MB boundaries.  The system has THP set to 'always', so a huge page is
allocated at the first (write) fault when libpthread initializes the
stack.

It would seem that this is expected behavior.  If you set THP always,
you may get huge pages anywhere.

However, I can't help but think that backing stacks with huge pages by
default may not be the right thing to do.  Stacks by their very nature
grow in somewhat unpredictable ways over time.  Reserving a large
virtual range and populating it with memory only as needed is the
desired behavior.

The only way to address this 'memory bloat' from thread stacks today is
to switch the system-wide THP setting to madvise.
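For reference, that system-wide switch is made through the standard
sysfs THP interface (requires root):

```shell
# Show the current THP policy; the bracketed value is the active one.
cat /sys/kernel/mm/transparent_hugepage/enabled

# Switch to madvise: only VMAs explicitly marked with MADV_HUGEPAGE
# will be backed by huge pages.  This is the workaround described above.
echo madvise > /sys/kernel/mm/transparent_hugepage/enabled
```

Note this affects every process on the system, not just the JVM, which
is why something more selective would be preferable.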

Just wondering if there is anything better or more selective that can be
done.  Does it make sense to have THP backed stacks by default?  If not,
where would be the best place to disable them?  A couple of thoughts:
- The kernel could disable huge pages on stacks.  libpthread/glibc pass
  the (currently unused) MAP_STACK flag to mmap.  We could key off this
  flag and disable huge pages.  However, I'm sure there is somebody
  somewhere today who is getting better performance because they have
  huge pages backing their stacks.
- We could push this to glibc/libpthreads and have them use
  MADV_NOHUGEPAGE on thread stacks.  However, this also risks regressing
  performance for anybody who currently benefits from huge page backed
  stacks.
- Other thoughts?

Perhaps this is just the expected behavior of 'THP always', which is
unfortunate in this situation.
-- 
Mike Kravetz



Thread overview: 23+ messages
2023-03-06 23:57 THP backed thread stacks Mike Kravetz
2023-03-07  0:15 ` Peter Xu
2023-03-07  0:40   ` Mike Kravetz
2023-03-08 19:02     ` Mike Kravetz
2023-03-09 22:38       ` Zach O'Keefe
2023-03-09 23:33         ` Mike Kravetz
2023-03-10  0:05           ` Zach O'Keefe
2023-03-10  1:40             ` William Kucharski
2023-03-10 11:25               ` David Hildenbrand
2023-03-11 12:24                 ` William Kucharski
2023-03-12  0:55                   ` Hillf Danton
2023-03-12  4:39                     ` William Kucharski
2023-03-10 22:02             ` Yang Shi
2023-03-07 10:10 ` David Hildenbrand
2023-03-07 19:02   ` Mike Kravetz
2023-03-07 13:36 ` Mike Rapoport
2023-03-17 17:52 ` Matthew Wilcox
2023-03-17 18:46   ` Mike Kravetz
2023-03-20 11:12     ` David Hildenbrand
2023-03-20 17:46       ` William Kucharski
2023-03-20 17:52         ` David Hildenbrand
2023-03-20 18:06         ` Mike Kravetz
2023-03-18 12:58   ` David Laight
