From: Oscar Salvador <osalvador@suse.de>
To: Mike Kravetz <mike.kravetz@oracle.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	David Hildenbrand <david@redhat.com>,
	Michal Hocko <mhocko@suse.com>, Zi Yan <ziy@nvidia.com>,
	David Rientjes <rientjes@google.com>,
	Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [RFC PATCH 0/3] hugetlb: add demote/split page functionality
Date: Wed, 10 Mar 2021 16:58:58 +0100
Message-ID: <20210310155843.GA14328@linux>
In-Reply-To: <20210309001855.142453-1-mike.kravetz@oracle.com>

On Mon, Mar 08, 2021 at 04:18:52PM -0800, Mike Kravetz wrote:
> The concurrent use of multiple hugetlb page sizes on a single system
> is becoming more common.  One of the reasons is better TLB support for
> gigantic page sizes on x86 hardware.  In addition, hugetlb pages are
> being used to back VMs in hosting environments.
> 
> When using hugetlb pages to back VMs in such environments, it is
> sometimes desirable to preallocate hugetlb pools.  This avoids the delay
> and uncertainty of allocating hugetlb pages at VM startup.  In addition,
> preallocating huge pages minimizes the issue of memory fragmentation that
> increases the longer the system is up and running.
> 
> In such environments, a combination of larger and smaller hugetlb pages
> are preallocated in anticipation of backing VMs of various sizes.  Over
> time, the preallocated pool of smaller hugetlb pages may become
> depleted while larger hugetlb pages still remain.  In such situations,
> it may be desirable to convert larger hugetlb pages to smaller hugetlb
> pages.

Hi Mike,

The use case sounds neat.

> 
> Converting larger to smaller hugetlb pages can be accomplished today by
> first freeing the larger page to the buddy allocator and then allocating
> the smaller pages.  However, there are two issues with this approach:
> 1) This process can take quite some time, especially if allocation of
>    the smaller pages is not immediate and requires migration/compaction.
> 2) There is no guarantee that the total size of smaller pages allocated
>    will match the size of the larger page which was freed.  This is
>    because the area freed by the larger page could quickly be
>    fragmented.
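
Just to make the status quo concrete: today that means juggling the
per-size nr_hugepages files, something like the sketch below (x86 paths,
run as root; only an illustration):

  h1g=/sys/kernel/mm/hugepages/hugepages-1048576kB
  h2m=/sys/kernel/mm/hugepages/hugepages-2048kB
  # shrink the 1G pool by one page, releasing 1G back to buddy
  echo $(( $(cat $h1g/nr_hugepages) - 1 )) > $h1g/nr_hugepages
  # then try to grow the 2M pool by 512 pages, hopefully out of that range
  echo $(( $(cat $h2m/nr_hugepages) + 512 )) > $h2m/nr_hugepages

If anything fragments the freed range between the two writes, the second
one can stall in compaction or come back short, which is exactly the
problem you describe.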
> 
> To address these issues, introduce the concept of hugetlb page demotion.
> Demotion provides a means of splitting a hugetlb page 'in place' into
> pages of a smaller size.  For example, on x86 one 1G page can be
> demoted to 512 2M pages.  Page demotion is controlled via sysfs files.
> - demote_size	Read only target page size for demotion

What about those architectures where we have more than two hugetlb sizes?
IIRC, on powerpc you can have that, right?
If so, would it make sense for demote_size to be writable, so the admin can
pick the target size?
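
Something like this is what I have in mind (hypothetical interface sketch;
demote_size is read-only in this RFC):

  h1g=/sys/kernel/mm/hugepages/hugepages-1048576kB
  echo 2048kB > $h1g/demote_size   # pick 2M rather than the next size down
  echo 1 > $h1g/demote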


> - demote	Writable number of hugetlb pages to be demoted

Below you mention that, due to reservations, the number of demoted pages can
be less than what the admin specified.
Would it make sense to have a place where someone can check how many pages
actually got demoted?
Or will this follow nr_hugepages' scheme and always reflect the current
number of demoted pages?
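
I guess one can always diff the counters around the write, e.g. as below
(free_hugepages already exists; the demote file is the one from this RFC):

  h1g=/sys/kernel/mm/hugepages/hugepages-1048576kB
  before=$(cat $h1g/free_hugepages)
  echo 4 > $h1g/demote             # demote *up to* 4 free 1G pages
  after=$(cat $h1g/free_hugepages)
  echo "actually demoted: $(( before - after )) 1G pages"

but having the kernel report it directly would be less racy.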

> Only hugetlb pages which are free at the time of the request can be demoted.
> Demotion does not add to the complexity of surplus page handling.  Demotion also honors
> reserved huge pages.  Therefore, when a value is written to the sysfs demote
> file that value is only the maximum number of pages which will be demoted.
> It is possible fewer will actually be demoted.
> 
> If demote_size is PAGE_SIZE, demote will simply free pages to the buddy
> allocator.

Wrt. the vmemmap discussion with David:
I also think we could compute how many vmemmap pages we are going to need to
re-shape the vmemmap layout, and allocate those upfront.
And I think this approach would just be simpler.
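
Back-of-the-envelope, with my assumptions spelled out (x86, 4K base pages,
64-byte struct page, and the proposed vmemmap freeing keeping one vmemmap
page per hugetlb page):

  base=4096; sp=64
  per_2m=$(( (2097152 / base) * sp / base ))     # 8 vmemmap pages per 2M page
  per_1g=$(( (1073741824 / base) * sp / base ))  # 4096 vmemmap pages per 1G page
  # without vmemmap freeing, the 1G page's vmemmap is already populated:
  echo $(( 512 * per_2m - per_1g ))              # -> 0 extra pages needed
  # with vmemmap freeing, each hugetlb page keeps only 1 vmemmap page, so
  # demoting one optimized 1G page into 512 optimized 2M pages needs:
  echo $(( 512 * 1 - 1 ))                        # -> 511 pages, known upfront

So the worst case is small and computable before we touch the page.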

I plan to have a look at the patches later today or tomorrow.

Thanks

-- 
Oscar Salvador
SUSE L3
