From: Mike Kravetz <mike.kravetz@oracle.com>
To: Michal Hocko <mhocko@suse.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	David Hildenbrand <david@redhat.com>,
	Oscar Salvador <osalvador@suse.de>, Zi Yan <ziy@nvidia.com>,
	David Rientjes <rientjes@google.com>,
	Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [RFC PATCH 0/3] hugetlb: add demote/split page functionality
Date: Wed, 10 Mar 2021 11:45:22 -0800
Message-ID: <4ae28031-6cee-59b4-94d1-832290d7813c@oracle.com>
In-Reply-To: <YEjyS+xyeNlMcW/l@dhcp22.suse.cz>

On 3/10/21 8:23 AM, Michal Hocko wrote:
> On Mon 08-03-21 16:18:52, Mike Kravetz wrote:
> [...]
>> Converting larger to smaller hugetlb pages can be accomplished today by
>> first freeing the larger page to the buddy allocator and then allocating
>> the smaller pages.  However, there are two issues with this approach:
>> 1) This process can take quite some time, especially if allocation of
>>    the smaller pages is not immediate and requires migration/compaction.
>> 2) There is no guarantee that the total size of smaller pages allocated
>>    will match the size of the larger page which was freed.  This is
>>    because the area freed by the larger page could quickly be
>>    fragmented.
> 
> It will likely not surprise you that I have some level of reservation. While
> your concerns about reconfiguration via the existing interfaces are quite
> real, is this really a problem in practice? How often do you need such a
> reconfiguration?

In reply to one of David's comments, I mentioned that we have a product
group with this use case today.  They use hugetlb pages to back VMs,
and preallocate a 'best guess' number of pages of each order.  They
can only guess how many pages of each order are needed because they are
responding to dynamic requests for new VMs.

When they find themselves in this situation today, they free 1G pages back
to the buddy allocator and try to allocate the corresponding number of 2M
pages.  The concerns above were both raised and experienced by this group.
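
For reference, the sequence used today looks roughly like the following.
This is just a sketch against the existing nr_hugepages sysfs interface;
the pool counts are purely illustrative:

  # starting point (illustrative): 8 x 1G pages, 0 x 2M pages
  # return one 1G page to the buddy allocator
  echo 7 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
  # then try to grow the 2M pool by the equivalent 512 pages; this can
  # take a while (migration/compaction) and there is no guarantee all
  # 512 pages are obtained once the freed 1G area gets fragmented
  echo 512 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages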

Part of the reason for the RFC was to see if others might have similar
use cases.  With newer x86 processors, I hear about more people using
1G hugetlb pages.  I also hear about people using hugetlb pages to back
VMs.  So, I was thinking others might have similar use cases.

> Is this all really worth the additional code to something as tricky as
> hugetlb code base?
> 

The 'good news' is that this does not involve much tricky code.  It only
demotes free hugetlb pages.  Of course, it is only worth it if the new code
is actually going to be used.  I know of at least one use case.
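
To make the intended usage concrete, demotion with the proposed interface
would look something like the following.  This is only a sketch assuming
sysfs files along the lines of patch 1/3; the file names and semantics
here are illustrative and may still change:

  # select the target size for demotion (illustrative; the next smaller
  # supported size is the natural default)
  echo 2048kB > /sys/kernel/mm/hugepages/hugepages-1048576kB/demote_size
  # demote one free 1G page in place into 512 free 2M pages, without a
  # round trip through the buddy allocator
  echo 1 > /sys/kernel/mm/hugepages/hugepages-1048576kB/demote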
-- 
Mike Kravetz
