From: David Rientjes <rientjes@google.com>
To: Gang Li <ligang.bdlg@bytedance.com>
Cc: David Hildenbrand <david@redhat.com>, Gang Li <gang.li@linux.dev>,
	Mike Kravetz <mike.kravetz@oracle.com>,
	Muchun Song <muchun.song@linux.dev>,
	Andrew Morton <akpm@linux-foundation.org>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH v1 0/4] hugetlb: parallelize hugetlb page allocation on boot
Date: Wed, 29 Nov 2023 11:41:52 -0800 (PST)	[thread overview]
Message-ID: <965d3ea8-4dd9-8f66-7ac0-0d6aa7fcadc7@google.com> (raw)
In-Reply-To: <db1b593a-41d9-444a-b3f4-f6bffe98634b@bytedance.com>

On Tue, 28 Nov 2023, Gang Li wrote:

> > 
> > > And there, the 65.2s won't be noise because that 12TB system is up by a
> > > snap
> > > of a finger? :)
> > > 
> > 
> > In this single boot test, total boot time was 373.78s, so 1GB hugetlb
> > allocation is 17.4% of that.
> 
> Thank you for sharing these data. Currently, I don't have access to a machine
> of such large capacity, so the benefits in my tests are not as pronounced.
> 
> I believe testing on a system of this scale would yield significant benefits.
> 
> > 
> > Would love to see what the numbers would look like if 1GB pages were
> > supported.
> > 
> 
> Support for 1GB hugetlb is not yet perfect, so it wasn't included in v1. But
> I'm happy to refine and introduce 1GB hugetlb support in future versions.
> 

That would be very appreciated, thank you!  I'm happy to test and collect 
data for any proposed patch series on 12TB systems booted with a lot of 
1GB hugetlb pages on the kernel command line.
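For context, 1GB hugetlb pages are typically reserved at boot with the `hugepagesz=` and `hugepages=` kernel command-line parameters (documented in Documentation/admin-guide/kernel-parameters.txt); the page count below is purely illustrative, not a value from this thread:

```
default_hugepagesz=1G hugepagesz=1G hugepages=11700
```

Because 1GB pages cannot reliably be assembled from scattered free memory after boot, they are allocated this way during early init, which is why that allocation time shows up directly in total boot time on large machines.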


Thread overview: 14+ messages
2023-11-23 13:30 [RFC PATCH v1 0/4] hugetlb: parallelize hugetlb page allocation on boot Gang Li
2023-11-23 13:30 ` [RFC PATCH v1 1/4] hugetlb: code clean for hugetlb_hstate_alloc_pages Gang Li
2023-11-23 13:30 ` [RFC PATCH v1 2/4] hugetlb: split hugetlb_hstate_alloc_pages Gang Li
2023-11-23 13:30 ` [RFC PATCH v1 3/4] hugetlb: add timing to hugetlb allocations on boot Gang Li
2023-11-23 13:30 ` [RFC PATCH v1 4/4] hugetlb: parallelize hugetlb page allocation Gang Li
2023-11-23 13:58 ` [RFC PATCH v1 0/4] hugetlb: parallelize hugetlb page allocation on boot Gang Li
2023-11-23 14:10 ` David Hildenbrand
2023-11-24 19:44   ` David Rientjes
2023-11-24 19:47     ` David Hildenbrand
2023-11-24 20:00       ` David Rientjes
2023-11-28  3:18         ` Gang Li
2023-11-28  6:52           ` Gang Li
2023-11-28  8:09             ` David Hildenbrand
2023-11-29 19:41           ` David Rientjes [this message]
