* [PATCH v3 0/5] hugetlb: add demote/split page functionality
@ 2021-10-01 17:52 Mike Kravetz
  2021-10-01 17:52 ` [PATCH v3 1/5] hugetlb: add demote hugetlb page sysfs interfaces Mike Kravetz
                   ` (4 more replies)
  0 siblings, 5 replies; 22+ messages in thread
From: Mike Kravetz @ 2021-10-01 17:52 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: David Hildenbrand, Michal Hocko, Oscar Salvador, Zi Yan,
	Muchun Song, Naoya Horiguchi, David Rientjes, Aneesh Kumar K . V,
	Andrew Morton, Mike Kravetz

The concurrent use of multiple hugetlb page sizes on a single system
is becoming more common.  One of the reasons is better TLB support for
gigantic page sizes on x86 hardware.  In addition, hugetlb pages are
being used to back VMs in hosting environments.

When using hugetlb pages to back VMs, it is often desirable to
preallocate hugetlb pools.  This avoids the delay and uncertainty of
allocating hugetlb pages at VM startup.  In addition, preallocating
huge pages minimizes the impact of memory fragmentation, which
increases the longer the system is up and running.
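
For example, pools of both sizes can be preallocated at boot time with
kernel command line options such as the following, where the counts are
only illustrative:
hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048
Pools can also be grown at runtime by writing to the per size
nr_hugepages sysfs files used in the examples below.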

In such environments, a combination of larger and smaller hugetlb pages
is preallocated in anticipation of backing VMs of various sizes.  Over
time, the preallocated pool of smaller hugetlb pages may become
depleted while larger hugetlb pages still remain.  In such situations,
it is desirable to convert larger hugetlb pages to smaller hugetlb pages.

Converting larger to smaller hugetlb pages can be accomplished today by
first freeing the larger page to the buddy allocator and then allocating
the smaller pages.  For example, to convert fifty 1GB pages into 2MB
pages on x86 (50 * 512 = 25600 2MB pages):
gb_pages=`cat .../hugepages-1048576kB/nr_hugepages`
m2_pages=`cat .../hugepages-2048kB/nr_hugepages`
echo $(($gb_pages - 50)) > .../hugepages-1048576kB/nr_hugepages
echo $(($m2_pages + 25600)) > .../hugepages-2048kB/nr_hugepages

On an idle system this operation is fairly reliable and results are as
expected.  The number of 2MB pages is increased as expected and the time
of the operation is a second or two.

However, when there is activity on the system the following issues
arise:
1) This process can take quite some time, especially if allocation of
   the smaller pages is not immediate and requires migration/compaction.
2) There is no guarantee that the total size of smaller pages allocated
   will match the size of the larger page which was freed.  This is
   because the area freed by the larger page could quickly be
   fragmented.
In a test environment with a load that continually fills the page cache
with clean pages, results such as the following can be observed:

Unexpected number of 2MB pages allocated: Expected 25600, have 19944
real    0m42.092s
user    0m0.008s
sys     0m41.467s

To address these issues, introduce the concept of hugetlb page demotion.
Demotion provides a means of 'in place' splitting of a hugetlb page to
pages of a smaller size.  This avoids freeing pages to buddy and then
trying to allocate from buddy.

Page demotion is controlled via sysfs files that reside in the per-hugetlb
page size and per node directories.
- demote_size   Target page size for demotion, a smaller huge page size.
		The file can be written to choose a smaller huge page size
		if multiple are available.
- demote        Write-only; the number of hugetlb pages to demote.

To demote fifty 1GB huge pages, one would:
cat .../hugepages-1048576kB/free_hugepages   /* optional, verify free pages */
cat .../hugepages-1048576kB/demote_size      /* optional, verify target size */
echo 50 > .../hugepages-1048576kB/demote
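
On systems with more than one smaller huge page size, the demote target
can optionally be changed before writing to demote.  A sketch (the
kernel parses the value with memparse, so a size such as "2M" works):
echo 2M > .../hugepages-1048576kB/demote_size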

Only hugetlb pages which are free at the time of the request can be demoted.
Demotion does not add to the complexity of surplus pages and honors reserved
huge pages.  Therefore, when a value is written to the sysfs demote file,
that value is only the maximum number of pages which will be demoted.  It
is possible fewer will actually be demoted.  The recently introduced
per-hstate mutex is used to synchronize demote operations with other
operations that modify hugetlb pools.
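
Since fewer pages than requested may be demoted, a simple way to check
the actual result is to compare nr_hugepages for the demoted size before
and after the write.  A sketch, using the same elided sysfs paths as the
examples above:
gb_before=`cat .../hugepages-1048576kB/nr_hugepages`
echo 50 > .../hugepages-1048576kB/demote
gb_after=`cat .../hugepages-1048576kB/nr_hugepages`
echo "$(($gb_before - $gb_after)) 1GB pages were demoted"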

Real world use cases
--------------------
The above scenario describes a real world use case where hugetlb pages are
used to back VMs on x86.  Both issues have been experienced: long allocation
times, and not getting the expected number of smaller huge pages after a
free and allocate cycle.  Whether these issues occur depends on other
activity within the host and cannot be predicted.

v2 -> v3
  - Require gigantic_page_runtime_supported for demote
  - Simplify code in demote_store and update comment
  - Remove hugetlb specific cma flag, add cma_pages_valid interface
  - Retain error return code in demote_free_huge_page

RESEND -> v2
  - Removed optimizations for vmemmap optimized pages
  - Make demote_size writable
  - Removed demote interfaces for smallest huge page size
  - Updated documentation and commit messages
  - Fixed build break for !CONFIG_ARCH_HAS_GIGANTIC_PAGE

v1 -> RESEND
  - Rebase on next-20210816
  - Fix a few typos in commit messages
RFC -> v1
  - Provides basic support for vmemmap optimized pages
  - Takes speculative page references into account
  - Updated Documentation file
  - Added optimizations for vmemmap optimized pages

Mike Kravetz (5):
  hugetlb: add demote hugetlb page sysfs interfaces
  mm/cma: add cma_pages_valid to determine if pages are in CMA
  hugetlb: be sure to free demoted CMA pages to CMA
  hugetlb: add demote bool to gigantic page routines
  hugetlb: add hugetlb demote page support

 Documentation/admin-guide/mm/hugetlbpage.rst |  30 +-
 include/linux/cma.h                          |   1 +
 include/linux/hugetlb.h                      |   1 +
 mm/cma.c                                     |  21 +-
 mm/hugetlb.c                                 | 307 ++++++++++++++++++-
 5 files changed, 339 insertions(+), 21 deletions(-)

-- 
2.31.1



* [PATCH v3 1/5] hugetlb: add demote hugetlb page sysfs interfaces
  2021-10-01 17:52 [PATCH v3 0/5] hugetlb: add demote/split page functionality Mike Kravetz
@ 2021-10-01 17:52 ` Mike Kravetz
  2021-10-04 13:00   ` Oscar Salvador
                     ` (2 more replies)
  2021-10-01 17:52 ` [PATCH v3 2/5] mm/cma: add cma_pages_valid to determine if pages are in CMA Mike Kravetz
                   ` (3 subsequent siblings)
  4 siblings, 3 replies; 22+ messages in thread
From: Mike Kravetz @ 2021-10-01 17:52 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: David Hildenbrand, Michal Hocko, Oscar Salvador, Zi Yan,
	Muchun Song, Naoya Horiguchi, David Rientjes, Aneesh Kumar K . V,
	Andrew Morton, Mike Kravetz

Two new sysfs files are added to demote hugetlb pages.  These files are
both per-hugetlb page size and per node.  The files are:
  demote_size - The size in kB that pages are demoted to. (read-write)
  demote - The number of huge pages to demote. (write-only)

By default, demote_size is the next smaller huge page size.  Any valid
huge page size less than the current huge page size may be written to
this file.  When huge pages are demoted, they are demoted to this size.

Writing a value to demote will result in an attempt to demote that
number of hugetlb pages to an appropriate number of demote_size pages.

NOTE: Demote interfaces are only provided for huge page sizes if there
is a smaller target demote huge page size.  For example, on x86 1GB huge
pages will have demote interfaces, while 2MB huge pages will not.
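
For illustration, the expected sysfs layout on x86 (a sketch of the
directory contents, not captured output):
  ls /sys/kernel/mm/hugepages/hugepages-1048576kB
  demote  demote_size  free_hugepages  nr_hugepages
  nr_hugepages_mempolicy  nr_overcommit_hugepages  resv_hugepages
  surplus_hugepages
The hugepages-2048kB directory contains the same files minus demote and
demote_size.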

This patch does not provide full demote functionality.  It only provides
the sysfs interfaces.

It also provides documentation for the new interfaces.

Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 Documentation/admin-guide/mm/hugetlbpage.rst |  30 +++-
 include/linux/hugetlb.h                      |   1 +
 mm/hugetlb.c                                 | 155 ++++++++++++++++++-
 3 files changed, 183 insertions(+), 3 deletions(-)

diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
index 8abaeb144e44..0e123a347e1e 100644
--- a/Documentation/admin-guide/mm/hugetlbpage.rst
+++ b/Documentation/admin-guide/mm/hugetlbpage.rst
@@ -234,8 +234,12 @@ will exist, of the form::
 
 	hugepages-${size}kB
 
-Inside each of these directories, the same set of files will exist::
+Inside each of these directories, the set of files contained in ``/proc``
+will exist.  In addition, two additional interfaces for demoting huge
+pages may exist::
 
+        demote
+        demote_size
 	nr_hugepages
 	nr_hugepages_mempolicy
 	nr_overcommit_hugepages
@@ -243,7 +247,29 @@ Inside each of these directories, the same set of files will exist::
 	resv_hugepages
 	surplus_hugepages
 
-which function as described above for the default huge page-sized case.
+The demote interfaces provide the ability to split a huge page into
+smaller huge pages.  For example, the x86 architecture supports both
+1GB and 2MB huge pages sizes.  A 1GB huge page can be split into 512
+2MB huge pages.  Demote interfaces are not available for the smallest
+huge page size.  The demote interfaces are:
+
+demote_size
+        is the size of demoted pages.  When a page is demoted a corresponding
+        number of huge pages of demote_size will be created.  By default,
+        demote_size is set to the next smaller huge page size.  If there are
+        multiple smaller huge page sizes, demote_size can be set to any of
+        these smaller sizes.  Only huge page sizes less then the current huge
+        pages size are allowed.
+
+demote
+        is used to demote a number of huge pages.  A user with root privileges
+        can write to this file.  It may not be possible to demote the
+        requested number of huge pages.  To determine how many pages were
+        actually demoted, compare the value of nr_hugepages before and after
+        writing to the demote interface.  demote is a write only interface.
+
+The interfaces which are the same as in ``/proc`` (all except demote and
+demote_size) function as described above for the default huge page-sized case.
 
 .. _mem_policy_and_hp_alloc:
 
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 1faebe1cd0ed..f2c3979efd69 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -596,6 +596,7 @@ struct hstate {
 	int next_nid_to_alloc;
 	int next_nid_to_free;
 	unsigned int order;
+	unsigned int demote_order;
 	unsigned long mask;
 	unsigned long max_huge_pages;
 	unsigned long nr_huge_pages;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 95dc7b83381f..f4dad1ab12d8 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2986,7 +2986,7 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
 
 static void __init hugetlb_init_hstates(void)
 {
-	struct hstate *h;
+	struct hstate *h, *h2;
 
 	for_each_hstate(h) {
 		if (minimum_order > huge_page_order(h))
@@ -2995,6 +2995,23 @@ static void __init hugetlb_init_hstates(void)
 		/* oversize hugepages were init'ed in early boot */
 		if (!hstate_is_gigantic(h))
 			hugetlb_hstate_alloc_pages(h);
+
+		/*
+		 * Set demote order for each hstate.  Note that
+		 * h->demote_order is initially 0.
+		 * - We can not demote gigantic pages if runtime freeing
+		 *   is not supported.
+		 */
+		if (!hstate_is_gigantic(h) ||
+		    gigantic_page_runtime_supported()) {
+			for_each_hstate(h2) {
+				if (h2 == h)
+					continue;
+				if (h2->order < h->order &&
+				    h2->order > h->demote_order)
+					h->demote_order = h2->order;
+			}
+		}
 	}
 	VM_BUG_ON(minimum_order == UINT_MAX);
 }
@@ -3235,9 +3252,29 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
 	return 0;
 }
 
+static int demote_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed)
+	__must_hold(&hugetlb_lock)
+{
+	int rc = 0;
+
+	lockdep_assert_held(&hugetlb_lock);
+
+	/* We should never get here if no demote order */
+	if (!h->demote_order)
+		return rc;
+
+	/*
+	 * TODO - demote fucntionality will be added in subsequent patch
+	 */
+	return rc;
+}
+
 #define HSTATE_ATTR_RO(_name) \
 	static struct kobj_attribute _name##_attr = __ATTR_RO(_name)
 
+#define HSTATE_ATTR_WO(_name) \
+	static struct kobj_attribute _name##_attr = __ATTR_WO(_name)
+
 #define HSTATE_ATTR(_name) \
 	static struct kobj_attribute _name##_attr = \
 		__ATTR(_name, 0644, _name##_show, _name##_store)
@@ -3433,6 +3470,106 @@ static ssize_t surplus_hugepages_show(struct kobject *kobj,
 }
 HSTATE_ATTR_RO(surplus_hugepages);
 
+static ssize_t demote_store(struct kobject *kobj,
+	       struct kobj_attribute *attr, const char *buf, size_t len)
+{
+	unsigned long nr_demote;
+	unsigned long nr_available;
+	nodemask_t nodes_allowed, *n_mask;
+	struct hstate *h;
+	int err;
+	int nid;
+
+	err = kstrtoul(buf, 10, &nr_demote);
+	if (err)
+		return err;
+	h = kobj_to_hstate(kobj, &nid);
+
+	/* Synchronize with other sysfs operations modifying huge pages */
+	mutex_lock(&h->resize_lock);
+
+	spin_lock_irq(&hugetlb_lock);
+	if (nid != NUMA_NO_NODE) {
+		init_nodemask_of_node(&nodes_allowed, nid);
+		n_mask = &nodes_allowed;
+	} else {
+		n_mask = &node_states[N_MEMORY];
+	}
+
+	while (nr_demote) {
+		/*
+		 * Check for available pages to demote each time thorough the
+		 * loop as demote_pool_huge_page will drop hugetlb_lock.
+		 *
+		 * NOTE: demote_pool_huge_page does not yet drop hugetlb_lock
+		 * but will when full demote functionality is added in a later
+		 * patch.
+		 */
+		if (nid != NUMA_NO_NODE)
+			nr_available = h->free_huge_pages_node[nid];
+		else
+			nr_available = h->free_huge_pages;
+		nr_available -= h->resv_huge_pages;
+		if (!nr_available)
+			break;
+
+		if (!demote_pool_huge_page(h, n_mask))
+			break;
+
+		nr_demote--;
+	}
+
+	spin_unlock_irq(&hugetlb_lock);
+	mutex_unlock(&h->resize_lock);
+
+	return len;
+}
+HSTATE_ATTR_WO(demote);
+
+static ssize_t demote_size_show(struct kobject *kobj,
+					struct kobj_attribute *attr, char *buf)
+{
+	struct hstate *h;
+	unsigned long demote_size;
+	int nid;
+
+	h = kobj_to_hstate(kobj, &nid);
+	demote_size = h->demote_order;
+
+	return sysfs_emit(buf, "%lukB\n",
+			(unsigned long)(PAGE_SIZE << h->demote_order) / SZ_1K);
+}
+
+static ssize_t demote_size_store(struct kobject *kobj,
+					struct kobj_attribute *attr,
+					const char *buf, size_t count)
+{
+	struct hstate *h, *t_hstate;
+	unsigned long demote_size;
+	unsigned int demote_order;
+	int nid;
+
+	demote_size = (unsigned long)memparse(buf, NULL);
+
+	t_hstate = size_to_hstate(demote_size);
+	if (!t_hstate)
+		return -EINVAL;
+	demote_order = t_hstate->order;
+
+	/* demote order must be smaller hstate order */
+	h = kobj_to_hstate(kobj, &nid);
+	if (demote_order >= h->order)
+		return -EINVAL;
+
+	/* resize_lock synchronizes access to demote size and writes */
+	mutex_lock(&h->resize_lock);
+	h->demote_order = demote_order;
+	mutex_unlock(&h->resize_lock);
+
+	return count;
+}
+HSTATE_ATTR(demote_size);
+
 static struct attribute *hstate_attrs[] = {
 	&nr_hugepages_attr.attr,
 	&nr_overcommit_hugepages_attr.attr,
@@ -3449,6 +3586,16 @@ static const struct attribute_group hstate_attr_group = {
 	.attrs = hstate_attrs,
 };
 
+static struct attribute *hstate_demote_attrs[] = {
+	&demote_size_attr.attr,
+	&demote_attr.attr,
+	NULL,
+};
+
+static const struct attribute_group hstate_demote_attr_group = {
+	.attrs = hstate_demote_attrs,
+};
+
 static int hugetlb_sysfs_add_hstate(struct hstate *h, struct kobject *parent,
 				    struct kobject **hstate_kobjs,
 				    const struct attribute_group *hstate_attr_group)
@@ -3466,6 +3613,12 @@ static int hugetlb_sysfs_add_hstate(struct hstate *h, struct kobject *parent,
 		hstate_kobjs[hi] = NULL;
 	}
 
+	if (h->demote_order) {
+		if (sysfs_create_group(hstate_kobjs[hi],
+					&hstate_demote_attr_group))
+			pr_warn("HugeTLB unable to create demote interfaces for %s\n", h->name);
+	}
+
 	return retval;
 }
 
-- 
2.31.1



* [PATCH v3 2/5] mm/cma: add cma_pages_valid to determine if pages are in CMA
  2021-10-01 17:52 [PATCH v3 0/5] hugetlb: add demote/split page functionality Mike Kravetz
  2021-10-01 17:52 ` [PATCH v3 1/5] hugetlb: add demote hugetlb page sysfs interfaces Mike Kravetz
@ 2021-10-01 17:52 ` Mike Kravetz
  2021-10-05  8:35   ` Oscar Salvador
                     ` (2 more replies)
  2021-10-01 17:52 ` [PATCH v3 3/5] hugetlb: be sure to free demoted CMA pages to CMA Mike Kravetz
                   ` (2 subsequent siblings)
  4 siblings, 3 replies; 22+ messages in thread
From: Mike Kravetz @ 2021-10-01 17:52 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: David Hildenbrand, Michal Hocko, Oscar Salvador, Zi Yan,
	Muchun Song, Naoya Horiguchi, David Rientjes, Aneesh Kumar K . V,
	Andrew Morton, Mike Kravetz

Add new interface cma_pages_valid() which indicates if the specified
pages are part of a CMA region.  This interface will be used in a
subsequent patch by hugetlb code.

Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 include/linux/cma.h |  1 +
 mm/cma.c            | 21 +++++++++++++++++----
 2 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/include/linux/cma.h b/include/linux/cma.h
index 53fd8c3cdbd0..bd801023504b 100644
--- a/include/linux/cma.h
+++ b/include/linux/cma.h
@@ -46,6 +46,7 @@ extern int cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
 					struct cma **res_cma);
 extern struct page *cma_alloc(struct cma *cma, unsigned long count, unsigned int align,
 			      bool no_warn);
+extern bool cma_pages_valid(struct cma *cma, const struct page *pages, unsigned long count);
 extern bool cma_release(struct cma *cma, const struct page *pages, unsigned long count);
 
 extern int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data);
diff --git a/mm/cma.c b/mm/cma.c
index 995e15480937..960994b88c7f 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -524,6 +524,22 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
 	return page;
 }
 
+bool cma_pages_valid(struct cma *cma, const struct page *pages,
+		     unsigned long count)
+{
+	unsigned long pfn;
+
+	if (!cma || !pages)
+		return false;
+
+	pfn = page_to_pfn(pages);
+
+	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
+		return false;
+
+	return true;
+}
+
 /**
  * cma_release() - release allocated pages
  * @cma:   Contiguous memory region for which the allocation is performed.
@@ -539,16 +555,13 @@ bool cma_release(struct cma *cma, const struct page *pages,
 {
 	unsigned long pfn;
 
-	if (!cma || !pages)
+	if (!cma_pages_valid(cma, pages, count))
 		return false;
 
 	pr_debug("%s(page %p, count %lu)\n", __func__, (void *)pages, count);
 
 	pfn = page_to_pfn(pages);
 
-	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
-		return false;
-
 	VM_BUG_ON(pfn + count > cma->base_pfn + cma->count);
 
 	free_contig_range(pfn, count);
-- 
2.31.1



* [PATCH v3 3/5] hugetlb: be sure to free demoted CMA pages to CMA
  2021-10-01 17:52 [PATCH v3 0/5] hugetlb: add demote/split page functionality Mike Kravetz
  2021-10-01 17:52 ` [PATCH v3 1/5] hugetlb: add demote hugetlb page sysfs interfaces Mike Kravetz
  2021-10-01 17:52 ` [PATCH v3 2/5] mm/cma: add cma_pages_valid to determine if pages are in CMA Mike Kravetz
@ 2021-10-01 17:52 ` Mike Kravetz
  2021-10-05  9:33   ` Oscar Salvador
  2021-10-01 17:52 ` [PATCH v3 4/5] hugetlb: add demote bool to gigantic page routines Mike Kravetz
  2021-10-01 17:52 ` [PATCH v3 5/5] hugetlb: add hugetlb demote page support Mike Kravetz
  4 siblings, 1 reply; 22+ messages in thread
From: Mike Kravetz @ 2021-10-01 17:52 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: David Hildenbrand, Michal Hocko, Oscar Salvador, Zi Yan,
	Muchun Song, Naoya Horiguchi, David Rientjes, Aneesh Kumar K . V,
	Andrew Morton, Mike Kravetz

When huge page demotion is fully implemented, gigantic pages can be
demoted to a smaller huge page size.  For example, on x86 a 1G page
can be demoted to 512 2M pages.  However, gigantic pages can potentially
be allocated from CMA.  If a gigantic page which was allocated from CMA
is demoted, the corresponding demoted pages need to be returned to CMA.

Use the new interface cma_pages_valid() to determine if a non-gigantic
hugetlb page should be freed to CMA.  Also, clear mapping field of these
pages as expected by cma_release.

This also requires a change to CMA reservations for gigantic pages.
Currently, the 'order_per_bit' is set to the gigantic page size.
However, if gigantic pages can be demoted this needs to be set to the
order of the smallest huge page.  At CMA reservation time we do not know
the smallest huge page size, so use HUGETLB_PAGE_ORDER.
Also, prohibit demotion to huge page sizes smaller than
HUGETLB_PAGE_ORDER.

Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 mm/hugetlb.c | 32 +++++++++++++++++++++++++++++---
 1 file changed, 29 insertions(+), 3 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f4dad1ab12d8..a15f6763e8f4 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -50,6 +50,16 @@ struct hstate hstates[HUGE_MAX_HSTATE];
 
 #ifdef CONFIG_CMA
 static struct cma *hugetlb_cma[MAX_NUMNODES];
+static bool hugetlb_cma_page(struct page *page, unsigned int order)
+{
+	return cma_pages_valid(hugetlb_cma[page_to_nid(page)], page,
+				1 << order);
+}
+#else
+static bool hugetlb_cma_page(struct page *page, unsigned int order)
+{
+	return false;
+}
 #endif
 static unsigned long hugetlb_cma_size __initdata;
 
@@ -1272,6 +1282,7 @@ static void destroy_compound_gigantic_page(struct page *page,
 	atomic_set(compound_pincount_ptr(page), 0);
 
 	for (i = 1; i < nr_pages; i++, p = mem_map_next(p, page, i)) {
+		p->mapping = NULL;
 		clear_compound_head(p);
 		set_page_refcounted(p);
 	}
@@ -1476,7 +1487,13 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
 				1 << PG_active | 1 << PG_private |
 				1 << PG_writeback);
 	}
-	if (hstate_is_gigantic(h)) {
+
+	/*
+	 * Non-gigantic pages demoted from CMA allocated gigantic pages
+	 * need to be given back to CMA in free_gigantic_page.
+	 */
+	if (hstate_is_gigantic(h) ||
+	    hugetlb_cma_page(page, huge_page_order(h))) {
 		destroy_compound_gigantic_page(page, huge_page_order(h));
 		free_gigantic_page(page, huge_page_order(h));
 	} else {
@@ -3003,7 +3020,8 @@ static void __init hugetlb_init_hstates(void)
 		 *   is not supported.
 		 */
 		if (!hstate_is_gigantic(h) ||
-		    gigantic_page_runtime_supported()) {
+		    gigantic_page_runtime_supported() ||
+		    !hugetlb_cma_size || !(h->order <= HUGETLB_PAGE_ORDER)) {
 			for_each_hstate(h2) {
 				if (h2 == h)
 					continue;
@@ -3555,6 +3573,8 @@ static ssize_t demote_size_store(struct kobject *kobj,
 	if (!t_hstate)
 		return -EINVAL;
 	demote_order = t_hstate->order;
+	if (demote_order < HUGETLB_PAGE_ORDER)
+		return -EINVAL;
 
 	/* demote order must be smaller hstate order */
 	h = kobj_to_hstate(kobj, &nid);
@@ -6563,7 +6583,13 @@ void __init hugetlb_cma_reserve(int order)
 		size = round_up(size, PAGE_SIZE << order);
 
 		snprintf(name, sizeof(name), "hugetlb%d", nid);
-		res = cma_declare_contiguous_nid(0, size, 0, PAGE_SIZE << order,
+		/*
+		 * Note that 'order per bit' is based on smallest size that
+		 * may be returned to CMA allocator in the case of
+		 * huge page demotion.
+		 */
+		res = cma_declare_contiguous_nid(0, size, 0,
+						PAGE_SIZE << HUGETLB_PAGE_ORDER,
 						 0, false, name,
 						 &hugetlb_cma[nid], nid);
 		if (res) {
-- 
2.31.1



* [PATCH v3 4/5] hugetlb: add demote bool to gigantic page routines
  2021-10-01 17:52 [PATCH v3 0/5] hugetlb: add demote/split page functionality Mike Kravetz
                   ` (2 preceding siblings ...)
  2021-10-01 17:52 ` [PATCH v3 3/5] hugetlb: be sure to free demoted CMA pages to CMA Mike Kravetz
@ 2021-10-01 17:52 ` Mike Kravetz
  2021-10-01 17:52 ` [PATCH v3 5/5] hugetlb: add hugetlb demote page support Mike Kravetz
  4 siblings, 0 replies; 22+ messages in thread
From: Mike Kravetz @ 2021-10-01 17:52 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: David Hildenbrand, Michal Hocko, Oscar Salvador, Zi Yan,
	Muchun Song, Naoya Horiguchi, David Rientjes, Aneesh Kumar K . V,
	Andrew Morton, Mike Kravetz

The routines remove_hugetlb_page and destroy_compound_gigantic_page
will remove a gigantic page and make the set of base pages ready to be
returned to a lower level allocator.  In the process of doing this, they
make all base pages reference counted.

The routine prep_compound_gigantic_page creates a gigantic page from a
set of base pages.  It assumes that all these base pages are reference
counted.

During demotion, a gigantic page will be split into huge pages of a
smaller size.  This logically involves use of the routines
remove_hugetlb_page and destroy_compound_gigantic_page, followed by
prep_compound*_page for each smaller huge page.

When pages are reference counted (ref count >= 0), additional
speculative ref counts could be taken.  This could result in errors
while demoting a huge page.  Quite a bit of code would need to be
created to handle all possible issues.

Instead of dealing with the possibility of speculative ref counts, avoid
the possibility by keeping ref counts at zero during the demote process.
Add a boolean 'demote' to the routines remove_hugetlb_page,
destroy_compound_gigantic_page and prep_compound_gigantic_page.  If the
boolean is set, the remove and destroy routines will not reference count
pages and the prep routine will not expect reference counted pages.

'*_for_demote' wrappers of the routines will be added in a subsequent
patch where this functionality is used.

Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 mm/hugetlb.c | 54 +++++++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 43 insertions(+), 11 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a15f6763e8f4..ccbe323c992b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1271,8 +1271,8 @@ static int hstate_next_node_to_free(struct hstate *h, nodemask_t *nodes_allowed)
 		nr_nodes--)
 
 #ifdef CONFIG_ARCH_HAS_GIGANTIC_PAGE
-static void destroy_compound_gigantic_page(struct page *page,
-					unsigned int order)
+static void __destroy_compound_gigantic_page(struct page *page,
+					unsigned int order, bool demote)
 {
 	int i;
 	int nr_pages = 1 << order;
@@ -1284,7 +1284,8 @@ static void destroy_compound_gigantic_page(struct page *page,
 	for (i = 1; i < nr_pages; i++, p = mem_map_next(p, page, i)) {
 		p->mapping = NULL;
 		clear_compound_head(p);
-		set_page_refcounted(p);
+		if (!demote)
+			set_page_refcounted(p);
 	}
 
 	set_compound_order(page, 0);
@@ -1292,6 +1293,12 @@ static void destroy_compound_gigantic_page(struct page *page,
 	__ClearPageHead(page);
 }
 
+static void destroy_compound_gigantic_page(struct page *page,
+					unsigned int order)
+{
+	__destroy_compound_gigantic_page(page, order, false);
+}
+
 static void free_gigantic_page(struct page *page, unsigned int order)
 {
 	/*
@@ -1364,12 +1371,15 @@ static inline void destroy_compound_gigantic_page(struct page *page,
 
 /*
  * Remove hugetlb page from lists, and update dtor so that page appears
- * as just a compound page.  A reference is held on the page.
+ * as just a compound page.
+ *
+ * A reference is held on the page, except in the case of demote.
  *
  * Must be called with hugetlb lock held.
  */
-static void remove_hugetlb_page(struct hstate *h, struct page *page,
-							bool adjust_surplus)
+static void __remove_hugetlb_page(struct hstate *h, struct page *page,
+							bool adjust_surplus,
+							bool demote)
 {
 	int nid = page_to_nid(page);
 
@@ -1407,8 +1417,12 @@ static void remove_hugetlb_page(struct hstate *h, struct page *page,
 	 *
 	 * This handles the case where more than one ref is held when and
 	 * after update_and_free_page is called.
+	 *
+	 * In the case of demote we do not ref count the page as it will soon
+	 * be turned into a page of smaller size.
 	 */
-	set_page_refcounted(page);
+	if (!demote)
+		set_page_refcounted(page);
 	if (hstate_is_gigantic(h))
 		set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
 	else
@@ -1418,6 +1432,12 @@ static void remove_hugetlb_page(struct hstate *h, struct page *page,
 	h->nr_huge_pages_node[nid]--;
 }
 
+static void remove_hugetlb_page(struct hstate *h, struct page *page,
+							bool adjust_surplus)
+{
+	__remove_hugetlb_page(h, page, adjust_surplus, false);
+}
+
 static void add_hugetlb_page(struct hstate *h, struct page *page,
 			     bool adjust_surplus)
 {
@@ -1681,7 +1701,8 @@ static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
 	spin_unlock_irq(&hugetlb_lock);
 }
 
-static bool prep_compound_gigantic_page(struct page *page, unsigned int order)
+static bool __prep_compound_gigantic_page(struct page *page, unsigned int order,
+								bool demote)
 {
 	int i, j;
 	int nr_pages = 1 << order;
@@ -1719,10 +1740,16 @@ static bool prep_compound_gigantic_page(struct page *page, unsigned int order)
 		 * the set of pages can not be converted to a gigantic page.
 		 * The caller who allocated the pages should then discard the
 		 * pages using the appropriate free interface.
+		 *
+		 * In the case of demote, the ref count will be zero.
 		 */
-		if (!page_ref_freeze(p, 1)) {
-			pr_warn("HugeTLB page can not be used due to unexpected inflated ref count\n");
-			goto out_error;
+		if (!demote) {
+			if (!page_ref_freeze(p, 1)) {
+				pr_warn("HugeTLB page can not be used due to unexpected inflated ref count\n");
+				goto out_error;
+			}
+		} else {
+			VM_BUG_ON_PAGE(page_count(p), p);
 		}
 		set_page_count(p, 0);
 		set_compound_head(p, page);
@@ -1747,6 +1774,11 @@ static bool prep_compound_gigantic_page(struct page *page, unsigned int order)
 	return false;
 }
 
+static bool prep_compound_gigantic_page(struct page *page, unsigned int order)
+{
+	return __prep_compound_gigantic_page(page, order, false);
+}
+
 /*
  * PageHuge() only returns true for hugetlbfs pages, but not for normal or
  * transparent huge pages.  See the PageTransHuge() documentation for more
-- 
2.31.1



* [PATCH v3 5/5] hugetlb: add hugetlb demote page support
  2021-10-01 17:52 [PATCH v3 0/5] hugetlb: add demote/split page functionality Mike Kravetz
                   ` (3 preceding siblings ...)
  2021-10-01 17:52 ` [PATCH v3 4/5] hugetlb: add demote bool to gigantic page routines Mike Kravetz
@ 2021-10-01 17:52 ` Mike Kravetz
  2021-10-06  8:41   ` Oscar Salvador
  4 siblings, 1 reply; 22+ messages in thread
From: Mike Kravetz @ 2021-10-01 17:52 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: David Hildenbrand, Michal Hocko, Oscar Salvador, Zi Yan,
	Muchun Song, Naoya Horiguchi, David Rientjes, Aneesh Kumar K . V,
	Andrew Morton, Mike Kravetz

Demote page functionality will split a huge page into a number of huge
pages of a smaller size.  For example, on x86 a 1GB huge page can be
demoted into 512 2M huge pages.  Demotion is done 'in place' by simply
splitting the huge page.

Added '*_for_demote' wrappers for remove_hugetlb_page,
destroy_compound_gigantic_page and prep_compound_gigantic_page for use
by demote code.

Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 mm/hugetlb.c | 82 +++++++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 74 insertions(+), 8 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ccbe323c992b..aec2e737d0cd 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1270,7 +1270,7 @@ static int hstate_next_node_to_free(struct hstate *h, nodemask_t *nodes_allowed)
 		((node = hstate_next_node_to_free(hs, mask)) || 1);	\
 		nr_nodes--)
 
-#ifdef CONFIG_ARCH_HAS_GIGANTIC_PAGE
+/* used to demote non-gigantic_huge pages as well */
 static void __destroy_compound_gigantic_page(struct page *page,
 					unsigned int order, bool demote)
 {
@@ -1293,6 +1293,13 @@ static void __destroy_compound_gigantic_page(struct page *page,
 	__ClearPageHead(page);
 }
 
+static void destroy_compound_gigantic_page_for_demote(struct page *page,
+					unsigned int order)
+{
+	__destroy_compound_gigantic_page(page, order, true);
+}
+
+#ifdef CONFIG_ARCH_HAS_GIGANTIC_PAGE
 static void destroy_compound_gigantic_page(struct page *page,
 					unsigned int order)
 {
@@ -1438,6 +1445,12 @@ static void remove_hugetlb_page(struct hstate *h, struct page *page,
 	__remove_hugetlb_page(h, page, adjust_surplus, false);
 }
 
+static void remove_hugetlb_page_for_demote(struct hstate *h, struct page *page,
+							bool adjust_surplus)
+{
+	__remove_hugetlb_page(h, page, adjust_surplus, true);
+}
+
 static void add_hugetlb_page(struct hstate *h, struct page *page,
 			     bool adjust_surplus)
 {
@@ -1779,6 +1792,12 @@ static bool prep_compound_gigantic_page(struct page *page, unsigned int order)
 	return __prep_compound_gigantic_page(page, order, false);
 }
 
+static bool prep_compound_gigantic_page_for_demote(struct page *page,
+							unsigned int order)
+{
+	return __prep_compound_gigantic_page(page, order, true);
+}
+
 /*
  * PageHuge() only returns true for hugetlbfs pages, but not for normal or
  * transparent huge pages.  See the PageTransHuge() documentation for more
@@ -3302,9 +3321,54 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
 	return 0;
 }
 
+static int demote_free_huge_page(struct hstate *h, struct page *page)
+{
+	int i, nid = page_to_nid(page);
+	struct hstate *target_hstate;
+	int rc = 0;
+
+	target_hstate = size_to_hstate(PAGE_SIZE << h->demote_order);
+
+	remove_hugetlb_page_for_demote(h, page, false);
+	spin_unlock_irq(&hugetlb_lock);
+
+	rc = alloc_huge_page_vmemmap(h, page);
+	if (rc) {
+		/* Allocation of vmemmmap failed, we can not demote page */
+		spin_lock_irq(&hugetlb_lock);
+		set_page_refcounted(page);
+		add_hugetlb_page(h, page, false);
+		return rc;
+	}
+
+	/*
+	 * Use destroy_compound_gigantic_page_for_demote for all huge page
+	 * sizes as it will not ref count pages.
+	 */
+	destroy_compound_gigantic_page_for_demote(page, huge_page_order(h));
+
+	for (i = 0; i < pages_per_huge_page(h);
+				i += pages_per_huge_page(target_hstate)) {
+		if (hstate_is_gigantic(target_hstate))
+			prep_compound_gigantic_page_for_demote(page + i,
+							target_hstate->order);
+		else
+			prep_compound_page(page + i, target_hstate->order);
+		set_page_private(page + i, 0);
+		set_page_refcounted(page + i);
+		prep_new_huge_page(target_hstate, page + i, nid);
+		put_page(page + i);
+	}
+
+	spin_lock_irq(&hugetlb_lock);
+	return rc;
+}
+
 static int demote_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed)
 	__must_hold(&hugetlb_lock)
 {
+	int nr_nodes, node;
+	struct page *page;
 	int rc = 0;
 
 	lockdep_assert_held(&hugetlb_lock);
@@ -3313,9 +3377,15 @@ static int demote_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed)
 	if (!h->demote_order)
 		return rc;
 
-	/*
-	 * TODO - demote fucntionality will be added in subsequent patch
-	 */
+	for_each_node_mask_to_free(h, nr_nodes, node, nodes_allowed) {
+		if (!list_empty(&h->hugepage_freelists[node])) {
+			page = list_entry(h->hugepage_freelists[node].next,
+					struct page, lru);
+			rc = !demote_free_huge_page(h, page);
+			break;
+		}
+	}
+
 	return rc;
 }
 
@@ -3550,10 +3620,6 @@ static ssize_t demote_store(struct kobject *kobj,
 		/*
 		 * Check for available pages to demote each time thorough the
 		 * loop as demote_pool_huge_page will drop hugetlb_lock.
-		 *
-		 * NOTE: demote_pool_huge_page does not yet drop hugetlb_lock
-		 * but will when full demote functionality is added in a later
-		 * patch.
 		 */
 		if (nid != NUMA_NO_NODE)
 			nr_available = h->free_huge_pages_node[nid];
-- 
2.31.1



* Re: [PATCH v3 1/5] hugetlb: add demote hugetlb page sysfs interfaces
  2021-10-01 17:52 ` [PATCH v3 1/5] hugetlb: add demote hugetlb page sysfs interfaces Mike Kravetz
@ 2021-10-04 13:00   ` Oscar Salvador
  2021-10-04 18:27     ` Mike Kravetz
  2021-10-05  8:23   ` Oscar Salvador
  2021-11-12 20:10   ` kernel test robot
  2 siblings, 1 reply; 22+ messages in thread
From: Oscar Salvador @ 2021-10-04 13:00 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: linux-mm, linux-kernel, David Hildenbrand, Michal Hocko, Zi Yan,
	Muchun Song, Naoya Horiguchi, David Rientjes, Aneesh Kumar K . V,
	Andrew Morton

On Fri, Oct 01, 2021 at 10:52:06AM -0700, Mike Kravetz wrote:
> -which function as described above for the default huge page-sized case.
> +The demote interfaces provide the ability to split a huge page into
> +smaller huge pages.  For example, the x86 architecture supports both
> +1GB and 2MB huge pages sizes.  A 1GB huge page can be split into 512
> +2MB huge pages.  Demote interfaces are not available for the smallest
> +huge page size.  The demote interfaces are:
> +
> +demote_size
> +        is the size of demoted pages.  When a page is demoted a corresponding
> +        number of huge pages of demote_size will be created.  By default,
> +        demote_size is set to the next smaller huge page size.  If there are
> +        multiple smaller huge page sizes, demote_size can be set to any of
> +        these smaller sizes.  Only huge page sizes less then the current huge

s/then/than ?

>  static void __init hugetlb_init_hstates(void)
>  {
> -	struct hstate *h;
> +	struct hstate *h, *h2;
>  
>  	for_each_hstate(h) {
>  		if (minimum_order > huge_page_order(h))
> @@ -2995,6 +2995,23 @@ static void __init hugetlb_init_hstates(void)
>  		/* oversize hugepages were init'ed in early boot */
>  		if (!hstate_is_gigantic(h))
>  			hugetlb_hstate_alloc_pages(h);
> +
> +		/*
> +		 * Set demote order for each hstate.  Note that
> +		 * h->demote_order is initially 0.
> +		 * - We can not demote gigantic pages if runtime freeing
> +		 *   is not supported.
> +		 */
> +		if (!hstate_is_gigantic(h) ||
> +		    gigantic_page_runtime_supported()) {

Based on the comment, I think that making the condition explicit looks
better from my point of view.

if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
    continue;

> +			for_each_hstate(h2) {
> +				if (h2 == h)
> +					continue;
> +				if (h2->order < h->order &&
> +				    h2->order > h->demote_order)
> +					h->demote_order = h2->order;
> +			}
> +		}
>  	}
>  	VM_BUG_ON(minimum_order == UINT_MAX);
>  }
> @@ -3235,9 +3252,29 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
>  	return 0;
>  }
>  

-- 
Oscar Salvador
SUSE L3


* Re: [PATCH v3 1/5] hugetlb: add demote hugetlb page sysfs interfaces
  2021-10-04 13:00   ` Oscar Salvador
@ 2021-10-04 18:27     ` Mike Kravetz
  0 siblings, 0 replies; 22+ messages in thread
From: Mike Kravetz @ 2021-10-04 18:27 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: linux-mm, linux-kernel, David Hildenbrand, Michal Hocko, Zi Yan,
	Muchun Song, Naoya Horiguchi, David Rientjes, Aneesh Kumar K . V,
	Andrew Morton

On 10/4/21 6:00 AM, Oscar Salvador wrote:
> On Fri, Oct 01, 2021 at 10:52:06AM -0700, Mike Kravetz wrote:
>> -which function as described above for the default huge page-sized case.
>> +The demote interfaces provide the ability to split a huge page into
>> +smaller huge pages.  For example, the x86 architecture supports both
>> +1GB and 2MB huge pages sizes.  A 1GB huge page can be split into 512
>> +2MB huge pages.  Demote interfaces are not available for the smallest
>> +huge page size.  The demote interfaces are:
>> +
>> +demote_size
>> +        is the size of demoted pages.  When a page is demoted a corresponding
>> +        number of huge pages of demote_size will be created.  By default,
>> +        demote_size is set to the next smaller huge page size.  If there are
>> +        multiple smaller huge page sizes, demote_size can be set to any of
>> +        these smaller sizes.  Only huge page sizes less then the current huge
> 
> s/then/than ?
> 

Thanks

>>  static void __init hugetlb_init_hstates(void)
>>  {
>> -	struct hstate *h;
>> +	struct hstate *h, *h2;
>>  
>>  	for_each_hstate(h) {
>>  		if (minimum_order > huge_page_order(h))
>> @@ -2995,6 +2995,23 @@ static void __init hugetlb_init_hstates(void)
>>  		/* oversize hugepages were init'ed in early boot */
>>  		if (!hstate_is_gigantic(h))
>>  			hugetlb_hstate_alloc_pages(h);
>> +
>> +		/*
>> +		 * Set demote order for each hstate.  Note that
>> +		 * h->demote_order is initially 0.
>> +		 * - We can not demote gigantic pages if runtime freeing
>> +		 *   is not supported.
>> +		 */
>> +		if (!hstate_is_gigantic(h) ||
>> +		    gigantic_page_runtime_supported()) {
> 
> Based on the comment, I think that making the condition explicit looks
> better from my point of view.
> 
> if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
>     continue;
> 

Agreed.  In addition, this code will look better when more conditions
are added in subsequent patches.

-- 
Mike Kravetz

>> +			for_each_hstate(h2) {
>> +				if (h2 == h)
>> +					continue;
>> +				if (h2->order < h->order &&
>> +				    h2->order > h->demote_order)
>> +					h->demote_order = h2->order;
>> +			}
>> +		}
>>  	}
>>  	VM_BUG_ON(minimum_order == UINT_MAX);
>>  }
>> @@ -3235,9 +3252,29 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
>>  	return 0;
>>  }
>>  


* Re: [PATCH v3 1/5] hugetlb: add demote hugetlb page sysfs interfaces
  2021-10-01 17:52 ` [PATCH v3 1/5] hugetlb: add demote hugetlb page sysfs interfaces Mike Kravetz
  2021-10-04 13:00   ` Oscar Salvador
@ 2021-10-05  8:23   ` Oscar Salvador
  2021-10-05 16:58     ` Mike Kravetz
  2021-11-12 20:10   ` kernel test robot
  2 siblings, 1 reply; 22+ messages in thread
From: Oscar Salvador @ 2021-10-05  8:23 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: linux-mm, linux-kernel, David Hildenbrand, Michal Hocko, Zi Yan,
	Muchun Song, Naoya Horiguchi, David Rientjes, Aneesh Kumar K . V,
	Andrew Morton

On 2021-10-01 19:52, Mike Kravetz wrote:
> +static int demote_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed)
> +	__must_hold(&hugetlb_lock)
> +{
> +	int rc = 0;
> +
> +	lockdep_assert_held(&hugetlb_lock);
> +
> +	/* We should never get here if no demote order */
> +	if (!h->demote_order)
> +		return rc;

Probably add a warning here? If we should never reach this but we do,
then something got screwed along the way and we might want to know.

> +static ssize_t demote_store(struct kobject *kobj,
> +	       struct kobj_attribute *attr, const char *buf, size_t len)
> +{
> +	unsigned long nr_demote;
> +	unsigned long nr_available;
> +	nodemask_t nodes_allowed, *n_mask;
> +	struct hstate *h;
> +	int err;
> +	int nid;
> +
> +	err = kstrtoul(buf, 10, &nr_demote);
> +	if (err)
> +		return err;
> +	h = kobj_to_hstate(kobj, &nid);
> +
> +	/* Synchronize with other sysfs operations modifying huge pages */
> +	mutex_lock(&h->resize_lock);
> +
> +	spin_lock_irq(&hugetlb_lock);
> +	if (nid != NUMA_NO_NODE) {
> +		init_nodemask_of_node(&nodes_allowed, nid);
> +		n_mask = &nodes_allowed;
> +	} else {
> +		n_mask = &node_states[N_MEMORY];
> +	}

Cannot the n_mask dance be outside the locks? That does not need to be 
protected, right?

> +
> +	while (nr_demote) {
> +		/*
> +		 * Check for available pages to demote each time thorough the
> +		 * loop as demote_pool_huge_page will drop hugetlb_lock.
> +		 *
> +		 * NOTE: demote_pool_huge_page does not yet drop hugetlb_lock
> +		 * but will when full demote functionality is added in a later
> +		 * patch.
> +		 */
> +		if (nid != NUMA_NO_NODE)
> +			nr_available = h->free_huge_pages_node[nid];
> +		else
> +			nr_available = h->free_huge_pages;
> +		nr_available -= h->resv_huge_pages;
> +		if (!nr_available)
> +			break;
> +
> +		if (!demote_pool_huge_page(h, n_mask))
> +			break;

Is it possible that when demote_pool_huge_page() drops the lock,
h->resv_huge_pages change? Or that can only happen under h->resize_lock?

> +
> +		nr_demote--;
> +	}
> +
> +	spin_unlock_irq(&hugetlb_lock);
> +	mutex_unlock(&h->resize_lock);
> +
> +	return len;
> +}
> +HSTATE_ATTR_WO(demote);
> +
> +static ssize_t demote_size_show(struct kobject *kobj,
> +					struct kobj_attribute *attr, char *buf)
> +{
> +	struct hstate *h;
> +	unsigned long demote_size;
> +	int nid;
> +
> +	h = kobj_to_hstate(kobj, &nid);
> +	demote_size = h->demote_order;

This has already been pointed out.

> +
> +	return sysfs_emit(buf, "%lukB\n",
> +			(unsigned long)(PAGE_SIZE << h->demote_order) / SZ_1K);
> +}
> +
> +static ssize_t demote_size_store(struct kobject *kobj,
> +					struct kobj_attribute *attr,
> +					const char *buf, size_t count)
> +{
> +	struct hstate *h, *t_hstate;
> +	unsigned long demote_size;
> +	unsigned int demote_order;
> +	int nid;

This is just a nit-pick, but what is t_hstate? demote_hstate might be 
more descriptive.

> +
> +	demote_size = (unsigned long)memparse(buf, NULL);
> +
> +	t_hstate = size_to_hstate(demote_size);
> +	if (!t_hstate)
> +		return -EINVAL;
> +	demote_order = t_hstate->order;
> +
> +	/* demote order must be smaller hstate order */

"must be smaller than"

-- 
Oscar Salvador
SUSE Labs


* Re: [PATCH v3 2/5] mm/cma: add cma_pages_valid to determine if pages are in CMA
  2021-10-01 17:52 ` [PATCH v3 2/5] mm/cma: add cma_pages_valid to determine if pages are in CMA Mike Kravetz
@ 2021-10-05  8:35   ` Oscar Salvador
  2021-10-05  8:45   ` Oscar Salvador
  2021-10-05  8:48   ` David Hildenbrand
  2 siblings, 0 replies; 22+ messages in thread
From: Oscar Salvador @ 2021-10-05  8:35 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: linux-mm, linux-kernel, David Hildenbrand, Michal Hocko, Zi Yan,
	Muchun Song, Naoya Horiguchi, David Rientjes, Aneesh Kumar K . V,
	Andrew Morton

On Fri, Oct 01, 2021 at 10:52:07AM -0700, Mike Kravetz wrote:
> +bool cma_pages_valid(struct cma *cma, const struct page *pages,
> +		     unsigned long count)
> +{
> +	unsigned long pfn;
> +
> +	if (!cma || !pages)
> +		return false;
> +
> +	pfn = page_to_pfn(pages);
> +
> +	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
> +		return false;
> +
> +	return true;
> +}
> +
>  /**
>   * cma_release() - release allocated pages
>   * @cma:   Contiguous memory region for which the allocation is performed.
> @@ -539,16 +555,13 @@ bool cma_release(struct cma *cma, const struct page *pages,
>  {
>  	unsigned long pfn;
>  
> -	if (!cma || !pages)
> +	if (!cma_pages_valid(cma, pages, count))
>  		return false;
>  
>  	pr_debug("%s(page %p, count %lu)\n", __func__, (void *)pages, count);
>  
>  	pfn = page_to_pfn(pages);
>  
> -	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
> -		return false;
> -

Might be worth noting that after this change, the debug statement will not be
printed as before. Now, we back off before firing it.

You might want to point that out in the changelog in case someone wonders
why.

-- 
Oscar Salvador
SUSE Labs


* Re: [PATCH v3 2/5] mm/cma: add cma_pages_valid to determine if pages are in CMA
  2021-10-01 17:52 ` [PATCH v3 2/5] mm/cma: add cma_pages_valid to determine if pages are in CMA Mike Kravetz
  2021-10-05  8:35   ` Oscar Salvador
@ 2021-10-05  8:45   ` Oscar Salvador
  2021-10-05 17:06     ` Mike Kravetz
  2021-10-05  8:48   ` David Hildenbrand
  2 siblings, 1 reply; 22+ messages in thread
From: Oscar Salvador @ 2021-10-05  8:45 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: linux-mm, linux-kernel, David Hildenbrand, Michal Hocko, Zi Yan,
	Muchun Song, Naoya Horiguchi, David Rientjes, Aneesh Kumar K . V,
	Andrew Morton

On Fri, Oct 01, 2021 at 10:52:07AM -0700, Mike Kravetz wrote:
> +bool cma_pages_valid(struct cma *cma, const struct page *pages,
> +		     unsigned long count)
> +{
> +	unsigned long pfn;
> +
> +	if (!cma || !pages)
> +		return false;
> +
> +	pfn = page_to_pfn(pages);
> +
> +	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
> +		return false;
> +
> +	return true;
> +}
> +
>  /**
>   * cma_release() - release allocated pages
>   * @cma:   Contiguous memory region for which the allocation is performed.
> @@ -539,16 +555,13 @@ bool cma_release(struct cma *cma, const struct page *pages,
>  {
>  	unsigned long pfn;
>  
> -	if (!cma || !pages)
> +	if (!cma_pages_valid(cma, pages, count))
>  		return false;
>  
>  	pr_debug("%s(page %p, count %lu)\n", __func__, (void *)pages, count);
>  
>  	pfn = page_to_pfn(pages);
>  
> -	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
> -		return false;
> -

After this patch, the timing of printing the debug statement changes as we back
off earlier.
You might want to point that out in the changelog in case someone wonders why.


-- 
Oscar Salvador
SUSE Labs


* Re: [PATCH v3 2/5] mm/cma: add cma_pages_valid to determine if pages are in CMA
  2021-10-01 17:52 ` [PATCH v3 2/5] mm/cma: add cma_pages_valid to determine if pages are in CMA Mike Kravetz
  2021-10-05  8:35   ` Oscar Salvador
  2021-10-05  8:45   ` Oscar Salvador
@ 2021-10-05  8:48   ` David Hildenbrand
  2 siblings, 0 replies; 22+ messages in thread
From: David Hildenbrand @ 2021-10-05  8:48 UTC (permalink / raw)
  To: Mike Kravetz, linux-mm, linux-kernel
  Cc: Michal Hocko, Oscar Salvador, Zi Yan, Muchun Song,
	Naoya Horiguchi, David Rientjes, Aneesh Kumar K . V,
	Andrew Morton

On 01.10.21 19:52, Mike Kravetz wrote:
> Add new interface cma_pages_valid() which indicates if the specified
> pages are part of a CMA region.  This interface will be used in a
> subsequent patch by hugetlb code.
> 
> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
> ---
>   include/linux/cma.h |  1 +
>   mm/cma.c            | 21 +++++++++++++++++----
>   2 files changed, 18 insertions(+), 4 deletions(-)
> 
> diff --git a/include/linux/cma.h b/include/linux/cma.h
> index 53fd8c3cdbd0..bd801023504b 100644
> --- a/include/linux/cma.h
> +++ b/include/linux/cma.h
> @@ -46,6 +46,7 @@ extern int cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
>   					struct cma **res_cma);
>   extern struct page *cma_alloc(struct cma *cma, unsigned long count, unsigned int align,
>   			      bool no_warn);
> +extern bool cma_pages_valid(struct cma *cma, const struct page *pages, unsigned long count);
>   extern bool cma_release(struct cma *cma, const struct page *pages, unsigned long count);
>   
>   extern int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data);
> diff --git a/mm/cma.c b/mm/cma.c
> index 995e15480937..960994b88c7f 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -524,6 +524,22 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
>   	return page;
>   }
>   
> +bool cma_pages_valid(struct cma *cma, const struct page *pages,
> +		     unsigned long count)
> +{
> +	unsigned long pfn;
> +
> +	if (!cma || !pages)
> +		return false;
> +
> +	pfn = page_to_pfn(pages);
> +
> +	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
> +		return false;
> +
> +	return true;
> +}
> +
>   /**
>    * cma_release() - release allocated pages
>    * @cma:   Contiguous memory region for which the allocation is performed.
> @@ -539,16 +555,13 @@ bool cma_release(struct cma *cma, const struct page *pages,
>   {
>   	unsigned long pfn;
>   
> -	if (!cma || !pages)
> +	if (!cma_pages_valid(cma, pages, count))
>   		return false;
>   
>   	pr_debug("%s(page %p, count %lu)\n", __func__, (void *)pages, count);
>   
>   	pfn = page_to_pfn(pages);
>   
> -	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
> -		return false;
> -
>   	VM_BUG_ON(pfn + count > cma->base_pfn + cma->count);
>   
>   	free_contig_range(pfn, count);
> 

Agreed that we might want to perform the pr_debug() now earlier, or do 
another pr_warn before returning false.

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Thanks,

David / dhildenb



* Re: [PATCH v3 3/5] hugetlb: be sure to free demoted CMA pages to CMA
  2021-10-01 17:52 ` [PATCH v3 3/5] hugetlb: be sure to free demoted CMA pages to CMA Mike Kravetz
@ 2021-10-05  9:33   ` Oscar Salvador
  2021-10-05 18:57     ` Mike Kravetz
  0 siblings, 1 reply; 22+ messages in thread
From: Oscar Salvador @ 2021-10-05  9:33 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: linux-mm, linux-kernel, David Hildenbrand, Michal Hocko, Zi Yan,
	Muchun Song, Naoya Horiguchi, David Rientjes, Aneesh Kumar K . V,
	Andrew Morton

On Fri, Oct 01, 2021 at 10:52:08AM -0700, Mike Kravetz wrote:
> When huge page demotion is fully implemented, gigantic pages can be
> demoted to a smaller huge page size.  For example, on x86 a 1G page
> can be demoted to 512 2M pages.  However, gigantic pages can potentially
> be allocated from CMA.  If a gigantic page which was allocated from CMA
> is demoted, the corresponding demoted pages need to be returned to CMA.
> 
> Use the new interface cma_pages_valid() to determine if a non-gigantic
> hugetlb page should be freed to CMA.  Also, clear mapping field of these
> pages as expected by cma_release.
> 
> This also requires a change to CMA reservations for gigantic pages.
> Currently, the 'order_per_bit' is set to the gigantic page size.
> However, if gigantic pages can be demoted this needs to be set to the
> order of the smallest huge page.  At CMA reservation time we do not know

to the smallest, or to the next smaller? Would you mind elaborating why?

> @@ -3003,7 +3020,8 @@ static void __init hugetlb_init_hstates(void)
>  		 *   is not supported.
>  		 */
>  		if (!hstate_is_gigantic(h) ||
> -		    gigantic_page_runtime_supported()) {
> +		    gigantic_page_runtime_supported() ||
> +		    !hugetlb_cma_size || !(h->order <= HUGETLB_PAGE_ORDER)) {

I am a bit lost in the CMA area, so bear with me.
We do not allow demotion if we specify we want hugetlb pages from the CMA?
Also, can h->order be smaller than HUGETLB_PAGE_ORDER? I thought
HUGETLB_PAGE_ORDER was the smallest one.

The check for HUGETLB_PAGE_ORDER can probably be squashed into patch#1.


>  			for_each_hstate(h2) {
>  				if (h2 == h)
>  					continue;
> @@ -3555,6 +3573,8 @@ static ssize_t demote_size_store(struct kobject *kobj,
>  	if (!t_hstate)
>  		return -EINVAL;
>  	demote_order = t_hstate->order;
> +	if (demote_order < HUGETLB_PAGE_ORDER)
> +		return -EINVAL;

This could probably go in the first patch.


-- 
Oscar Salvador
SUSE Labs


* Re: [PATCH v3 1/5] hugetlb: add demote hugetlb page sysfs interfaces
  2021-10-05  8:23   ` Oscar Salvador
@ 2021-10-05 16:58     ` Mike Kravetz
  0 siblings, 0 replies; 22+ messages in thread
From: Mike Kravetz @ 2021-10-05 16:58 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: linux-mm, linux-kernel, David Hildenbrand, Michal Hocko, Zi Yan,
	Muchun Song, Naoya Horiguchi, David Rientjes, Aneesh Kumar K . V,
	Andrew Morton

On 10/5/21 1:23 AM, Oscar Salvador wrote:
> On 2021-10-01 19:52, Mike Kravetz wrote:
>> +static int demote_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed)
>> +    __must_hold(&hugetlb_lock)
>> +{
>> +    int rc = 0;
>> +
>> +    lockdep_assert_held(&hugetlb_lock);
>> +
>> +    /* We should never get here if no demote order */
>> +    if (!h->demote_order)
>> +        return rc;
> 
> Probably add a warning here? If we should never reach this but we do, then
> something got screwed along the way and we might want to know.
> 

Sure.  I thought about just dropping this check.  However, getting here
with !h->demote_order implies there was an issue in the setup of sysfs
files.  That is not directly in the 'call path', so a check is helpful.

I will add a warning.

>> +static ssize_t demote_store(struct kobject *kobj,
>> +           struct kobj_attribute *attr, const char *buf, size_t len)
>> +{
>> +    unsigned long nr_demote;
>> +    unsigned long nr_available;
>> +    nodemask_t nodes_allowed, *n_mask;
>> +    struct hstate *h;
>> +    int err;
>> +    int nid;
>> +
>> +    err = kstrtoul(buf, 10, &nr_demote);
>> +    if (err)
>> +        return err;
>> +    h = kobj_to_hstate(kobj, &nid);
>> +
>> +    /* Synchronize with other sysfs operations modifying huge pages */
>> +    mutex_lock(&h->resize_lock);
>> +
>> +    spin_lock_irq(&hugetlb_lock);
>> +    if (nid != NUMA_NO_NODE) {
>> +        init_nodemask_of_node(&nodes_allowed, nid);
>> +        n_mask = &nodes_allowed;
>> +    } else {
>> +        n_mask = &node_states[N_MEMORY];
>> +    }
> 
> Cannot the n_mask dance be outside the locks? That does not need to be protected, right?
> 

Yes, it can.  I will move it outside the locks.
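
Roughly like this, with the nodemask setup ahead of the locks (untested
sketch of the rearrangement, not the final patch):

	h = kobj_to_hstate(kobj, &nid);
	if (nid != NUMA_NO_NODE) {
		init_nodemask_of_node(&nodes_allowed, nid);
		n_mask = &nodes_allowed;
	} else {
		n_mask = &node_states[N_MEMORY];
	}

	/* Synchronize with other sysfs operations modifying huge pages */
	mutex_lock(&h->resize_lock);
	spin_lock_irq(&hugetlb_lock);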

>> +
>> +    while (nr_demote) {
>> +        /*
>> +         * Check for available pages to demote each time through the
>> +         * loop as demote_pool_huge_page will drop hugetlb_lock.
>> +         *
>> +         * NOTE: demote_pool_huge_page does not yet drop hugetlb_lock
>> +         * but will when full demote functionality is added in a later
>> +         * patch.
>> +         */
>> +        if (nid != NUMA_NO_NODE)
>> +            nr_available = h->free_huge_pages_node[nid];
>> +        else
>> +            nr_available = h->free_huge_pages;
>> +        nr_available -= h->resv_huge_pages;
>> +        if (!nr_available)
>> +            break;
>> +
>> +        if (!demote_pool_huge_page(h, n_mask))
>> +            break;
> 
> Is it possible that when demote_pool_huge_page() drops the lock,
> h->resv_huge_pages changes? Or can that only happen under h->resize_lock?
> 

Yes, it can change.  That is why the calculations are done each time
through the loop with the lock held.

Should we be worried about compiler optimizations that might not re-read the
value each time through the loop?

>> +
>> +        nr_demote--;
>> +    }
>> +
>> +    spin_unlock_irq(&hugetlb_lock);
>> +    mutex_unlock(&h->resize_lock);
>> +
>> +    return len;
>> +}
>> +HSTATE_ATTR_WO(demote);
>> +
>> +static ssize_t demote_size_show(struct kobject *kobj,
>> +                    struct kobj_attribute *attr, char *buf)
>> +{
>> +    struct hstate *h;
>> +    unsigned long demote_size;
>> +    int nid;
>> +
>> +    h = kobj_to_hstate(kobj, &nid);
>> +    demote_size = h->demote_order;
> 
> This has already been pointed out.
> 

Yes.
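
The local is set but never used; the simplest fix is to either drop it or
actually use it, e.g. (untested sketch):

	demote_size = (unsigned long)(PAGE_SIZE << h->demote_order) / SZ_1K;

	return sysfs_emit(buf, "%lukB\n", demote_size);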

>> +
>> +    return sysfs_emit(buf, "%lukB\n",
>> +            (unsigned long)(PAGE_SIZE << h->demote_order) / SZ_1K);
>> +}
>> +
>> +static ssize_t demote_size_store(struct kobject *kobj,
>> +                    struct kobj_attribute *attr,
>> +                    const char *buf, size_t count)
>> +{
>> +    struct hstate *h, *t_hstate;
>> +    unsigned long demote_size;
>> +    unsigned int demote_order;
>> +    int nid;
> 
> This is just a nit-pick, but what is t_hstate? demote_hstate might be more descriptive.
> 

It is short for 'temporary hstate'.  I agree that demote_hstate would be more
descriptive.

>> +
>> +    demote_size = (unsigned long)memparse(buf, NULL);
>> +
>> +    t_hstate = size_to_hstate(demote_size);
>> +    if (!t_hstate)
>> +        return -EINVAL;
>> +    demote_order = t_hstate->order;
>> +
>> +    /* demote order must be smaller hstate order */
> 
> "must be smaller than"
> 

Will change.

Thanks for the suggestions!
-- 
Mike Kravetz

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 2/5] mm/cma: add cma_pages_valid to determine if pages are in CMA
  2021-10-05  8:45   ` Oscar Salvador
@ 2021-10-05 17:06     ` Mike Kravetz
  0 siblings, 0 replies; 22+ messages in thread
From: Mike Kravetz @ 2021-10-05 17:06 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: linux-mm, linux-kernel, David Hildenbrand, Michal Hocko, Zi Yan,
	Muchun Song, Naoya Horiguchi, David Rientjes, Aneesh Kumar K . V,
	Andrew Morton

On 10/5/21 1:45 AM, Oscar Salvador wrote:
> On Fri, Oct 01, 2021 at 10:52:07AM -0700, Mike Kravetz wrote:
>> +bool cma_pages_valid(struct cma *cma, const struct page *pages,
>> +		     unsigned long count)
>> +{
>> +	unsigned long pfn;
>> +
>> +	if (!cma || !pages)
>> +		return false;
>> +
>> +	pfn = page_to_pfn(pages);
>> +
>> +	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
>> +		return false;
>> +
>> +	return true;
>> +}
>> +
>>  /**
>>   * cma_release() - release allocated pages
>>   * @cma:   Contiguous memory region for which the allocation is performed.
>> @@ -539,16 +555,13 @@ bool cma_release(struct cma *cma, const struct page *pages,
>>  {
>>  	unsigned long pfn;
>>  
>> -	if (!cma || !pages)
>> +	if (!cma_pages_valid(cma, pages, count))
>>  		return false;
>>  
>>  	pr_debug("%s(page %p, count %lu)\n", __func__, (void *)pages, count);
>>  
>>  	pfn = page_to_pfn(pages);
>>  
>> -	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
>> -		return false;
>> -
> 
> After this patch, the timing of printing the debug statement changes as we back
> off earlier.
> You might want to point that out in the changelog in case someone wonders why.
> 

When making this change, I did not want to duplicate that debug
statement.  However, duplicated code only becomes an issue with
CONFIG_DEBUG.  So, the duplication should be acceptable.

I will make the debug statements function as before.
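
One way to keep the old behaviour is to also print the debug message for the
failing range check in cma_pages_valid() (untested sketch, not the final
patch):

	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count) {
		pr_debug("%s(page %p, count %lu)\n", __func__,
			 (void *)pages, count);
		return false;
	}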
-- 
Mike Kravetz

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 3/5] hugetlb: be sure to free demoted CMA pages to CMA
  2021-10-05  9:33   ` Oscar Salvador
@ 2021-10-05 18:57     ` Mike Kravetz
  2021-10-06  7:54       ` Oscar Salvador
  0 siblings, 1 reply; 22+ messages in thread
From: Mike Kravetz @ 2021-10-05 18:57 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: linux-mm, linux-kernel, David Hildenbrand, Michal Hocko, Zi Yan,
	Muchun Song, Naoya Horiguchi, David Rientjes, Aneesh Kumar K . V,
	Andrew Morton

On 10/5/21 2:33 AM, Oscar Salvador wrote:
> On Fri, Oct 01, 2021 at 10:52:08AM -0700, Mike Kravetz wrote:
>> When huge page demotion is fully implemented, gigantic pages can be
>> demoted to a smaller huge page size.  For example, on x86 a 1G page
>> can be demoted to 512 2M pages.  However, gigantic pages can potentially
>> be allocated from CMA.  If a gigantic page which was allocated from CMA
>> is demoted, the corresponding demoted pages need to be returned to CMA.
>>
>> Use the new interface cma_pages_valid() to determine if a non-gigantic
>> hugetlb page should be freed to CMA.  Also, clear mapping field of these
>> pages as expected by cma_release.
>>
>> This also requires a change to CMA reservations for gigantic pages.
>> Currently, the 'order_per_bit' is set to the gigantic page size.
>> However, if gigantic pages can be demoted this needs to be set to the
>> order of the smallest huge page.  At CMA reservation time we do not know
> 
> To the smallest, or to the next smaller size? Would you mind elaborating on why?
> 

It is the smallest.

CMA uses a per-region bit map to track allocations.  When setting up the
region, you specify how many pages each bit represents.  Currently,
only gigantic pages are allocated/freed from CMA so the region is set up
such that one bit represents a gigantic page size allocation.

With demote, a gigantic page (allocation) could be split into smaller
size pages.  And, these smaller size pages will be freed to CMA.  So,
since the per-region bit map needs to be set up to represent the smallest
allocation/free size, it now needs to be set to the smallest huge page
size which can be freed to CMA.

Unfortunately, we set up the CMA region for huge pages before we set up
huge page sizes (hstates).  So, technically we do not know the smallest
huge page size as this can change via command line options and
architecture specific code.  Therefore, at region setup time we need some
constant value for smallest possible huge page size.  That is why
HUGETLB_PAGE_ORDER is used.

I should probably add all that to the changelog for clarity?
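
For reference, the hugetlb_cma_reserve() side boils down to a call like the
following; this is only an illustration of where 'order_per_bit' comes in,
not the exact hunk in the patch:

	/*
	 * order_per_bit sets the granularity of the CMA bitmap.  With
	 * demote it must not be larger than the smallest huge page order
	 * that can be released back to CMA.  hstates are not set up yet,
	 * so HUGETLB_PAGE_ORDER is used as a constant lower bound.
	 */
	res = cma_declare_contiguous_nid(0, size, 0,
					 PAGE_SIZE << HUGETLB_PAGE_ORDER,
					 HUGETLB_PAGE_ORDER /* order_per_bit */,
					 false, name,
					 &hugetlb_cma[nid], nid);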

>> @@ -3003,7 +3020,8 @@ static void __init hugetlb_init_hstates(void)
>>  		 *   is not supported.
>>  		 */
>>  		if (!hstate_is_gigantic(h) ||
>> -		    gigantic_page_runtime_supported()) {
>> +		    gigantic_page_runtime_supported() ||
>> +		    !hugetlb_cma_size || !(h->order <= HUGETLB_PAGE_ORDER)) {
> 
> I am a bit lost in the CMA area, so bear with me.
> So we do not allow demotion if we specify that we want hugetlb pages from CMA?

We limit which orders can be demoted when hugetlb pages can be
allocated from CMA.

> Also, can h->order be smaller than HUGETLB_PAGE_ORDER? I thought
> HUGETLB_PAGE_ORDER was the smallest one.

Nope, it can.
arm64 with 64K PAGE_SIZE is one example.  Its huge page sizes/orders are:
CONT_PMD_SHIFT	34	16.0 GiB
PMD_SHIFT	29	512 MiB
CONT_PTE_SHIFT	21	2.00 MiB

#define HPAGE_SHIFT	PMD_SHIFT
#define	HUGETLB_PAGE_ORDER (HPAGE_SHIFT - PAGE_SHIFT)

So, HUGETLB_PAGE_ORDER is associated with the 512 MiB huge page size,
but there is also a (smaller) 2.00 MiB huge page size.

After your comment yesterday about rewriting this code for clarity,  this
now becomes:

		/*
		 * Set demote order for each hstate.  Note that
		 * h->demote_order is initially 0.
		 * - We can not demote gigantic pages if runtime freeing
		 *   is not supported, so skip this.
		 * - If CMA allocation is possible, we can not demote
		 *   HUGETLB_PAGE_ORDER or smaller size pages.
		 */
		if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
			continue;
		if (hugetlb_cma_size && h->order <= HUGETLB_PAGE_ORDER)
			continue;
		for_each_hstate(h2) {
			if (h2 == h)
				continue;
			if (h2->order < h->order &&
			    h2->order > h->demote_order)
				h->demote_order = h2->order;
		}

Hopefully, that is more clear.

> 
> The check for HUGETLB_PAGE_ORDER can probably be squashed into patch#1.
> 
> 
>>  			for_each_hstate(h2) {
>>  				if (h2 == h)
>>  					continue;
>> @@ -3555,6 +3573,8 @@ static ssize_t demote_size_store(struct kobject *kobj,
>>  	if (!t_hstate)
>>  		return -EINVAL;
>>  	demote_order = t_hstate->order;
>> +	if (demote_order < HUGETLB_PAGE_ORDER)
>> +		return -EINVAL;
> 
> This could probably go in the first patch.
> 
> 

Both of the above HUGETLB_PAGE_ORDER checks 'could' go into the first
patch.  However, the code which actually makes them necessary is in this
patch.  I would prefer to leave them together here.
-- 
Mike Kravetz

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 3/5] hugetlb: be sure to free demoted CMA pages to CMA
  2021-10-05 18:57     ` Mike Kravetz
@ 2021-10-06  7:54       ` Oscar Salvador
  2021-10-06 18:27         ` Mike Kravetz
  0 siblings, 1 reply; 22+ messages in thread
From: Oscar Salvador @ 2021-10-06  7:54 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: linux-mm, linux-kernel, David Hildenbrand, Michal Hocko, Zi Yan,
	Muchun Song, Naoya Horiguchi, David Rientjes, Aneesh Kumar K . V,
	Andrew Morton

On Tue, Oct 05, 2021 at 11:57:54AM -0700, Mike Kravetz wrote:
> It is the smallest.
> 
> CMA uses a per-region bit map to track allocations.  When setting up the
> region, you specify how many pages each bit represents.  Currently,
> only gigantic pages are allocated/freed from CMA so the region is set up
> such that one bit represents a gigantic page size allocation.
> 
> With demote, a gigantic page (allocation) could be split into smaller
> size pages.  And, these smaller size pages will be freed to CMA.  So,
> since the per-region bit map needs to be set up to represent the smallest
> allocation/free size, it now needs to be set to the smallest huge page
> size which can be freed to CMA.
> 
> Unfortunately, we set up the CMA region for huge pages before we set up
> huge page sizes (hstates).  So, technically we do not know the smallest
> huge page size as this can change via command line options and
> architecture specific code.  Therefore, at region setup time we need some
> constant value for smallest possible huge page size.  That is why
> HUGETLB_PAGE_ORDER is used.

Do you know if that is done for a reason? Setting up CMA for hugetlb before
initializing hugetlb itself? Would it not make more sense to do it the other
way around?

The way I see it, it is a bit unfortunate that we can only demote gigantic
pages per se, i.e. 1GB on x86_64 and 16G on arm64 with 64k page size.

I guess nothing to be worried about now as this is an early stage, but maybe
something to think about in the future in case we want to allow for more
flexibility.

> I should probably add all that to the changelog for clarity?

Yes, I think it would be great to have that as context.

> After your comment yesterday about rewriting this code for clarity,  this
> now becomes:
> 
> 		/*
> 		 * Set demote order for each hstate.  Note that
> 		 * h->demote_order is initially 0.
> 		 * - We can not demote gigantic pages if runtime freeing
> 		 *   is not supported, so skip this.
> 		 * - If CMA allocation is possible, we can not demote
> 		 *   HUGETLB_PAGE_ORDER or smaller size pages.
> 		 */
> 		if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
> 			continue;
> 		if (hugetlb_cma_size && h->order <= HUGETLB_PAGE_ORDER)
> 			continue;
> 		for_each_hstate(h2) {
> 			if (h2 == h)
> 				continue;
> 			if (h2->order < h->order &&
> 			    h2->order > h->demote_order)
> 				h->demote_order = h2->order;
> 		}
> 
> Hopefully, that is more clear.

Definitely, this looks better to me.

-- 
Oscar Salvador
SUSE Labs

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 5/5] hugetlb: add hugetlb demote page support
  2021-10-01 17:52 ` [PATCH v3 5/5] hugetlb: add hugetlb demote page support Mike Kravetz
@ 2021-10-06  8:41   ` Oscar Salvador
  2021-10-06 18:52     ` Mike Kravetz
  0 siblings, 1 reply; 22+ messages in thread
From: Oscar Salvador @ 2021-10-06  8:41 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: linux-mm, linux-kernel, David Hildenbrand, Michal Hocko, Zi Yan,
	Muchun Song, Naoya Horiguchi, David Rientjes, Aneesh Kumar K . V,
	Andrew Morton

On Fri, Oct 01, 2021 at 10:52:10AM -0700, Mike Kravetz wrote:
> Demote page functionality will split a huge page into a number of huge
> pages of a smaller size.  For example, on x86 a 1GB huge page can be
> demoted into 512 2M huge pages.  Demotion is done 'in place' by simply
> splitting the huge page.
> 
> Added '*_for_demote' wrappers for remove_hugetlb_page,
> destroy_compound_gigantic_page and prep_compound_gigantic_page for use
> by demote code.
> 
> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
> ---
>  mm/hugetlb.c | 82 +++++++++++++++++++++++++++++++++++++++++++++++-----
>  1 file changed, 74 insertions(+), 8 deletions(-)
> 
...  
> +static int demote_free_huge_page(struct hstate *h, struct page *page)
> +{
> +	int i, nid = page_to_nid(page);
> +	struct hstate *target_hstate;
> +	int rc = 0;
> +
> +	target_hstate = size_to_hstate(PAGE_SIZE << h->demote_order);
> +
> +	remove_hugetlb_page_for_demote(h, page, false);
> +	spin_unlock_irq(&hugetlb_lock);
> +
> +	rc = alloc_huge_page_vmemmap(h, page);
> +	if (rc) {
> +		/* Allocation of vmemmmap failed, we can not demote page */
> +		spin_lock_irq(&hugetlb_lock);
> +		set_page_refcounted(page);
> +		add_hugetlb_page(h, page, false);
> +		return rc;
> +	}

Question: You keep the original error code returned from alloc_huge_page_vmemmap()
here, but then you lose it in demote_pool_huge_page() when doing
!demote_free_huge_page(). Would it not make more sense to keep it all the way
down to demote_store() in case you want to return the actual error code?

> +
> +	/*
> +	 * Use destroy_compound_gigantic_page_for_demote for all huge page
> +	 * sizes as it will not ref count pages.
> +	 */
> +	destroy_compound_gigantic_page_for_demote(page, huge_page_order(h));

It seems that for now we only allow gigantic pages to be demoted, but
destroy_compound_gigantic_page_for_demote feels kind of wrong, even
if it is only a wrapper that ends up calling _*gigantic_ functions.

We want a routine that destroys a hugetlb page so it can be demoted into
smaller hugetlb pages, so in my opinion 'gigantic' makes little sense in the name.

>  static int demote_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed)
>  	__must_hold(&hugetlb_lock)
>  {
> +	int nr_nodes, node;
> +	struct page *page;
>  	int rc = 0;
>  
>  	lockdep_assert_held(&hugetlb_lock);
> @@ -3313,9 +3377,15 @@ static int demote_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed)
>  	if (!h->demote_order)
>  		return rc;
>  
> -	/*
> -	 * TODO - demote fucntionality will be added in subsequent patch
> -	 */
> +	for_each_node_mask_to_free(h, nr_nodes, node, nodes_allowed) {
> +		if (!list_empty(&h->hugepage_freelists[node])) {
> +			page = list_entry(h->hugepage_freelists[node].next,
> +					struct page, lru);
> +			rc = !demote_free_huge_page(h, page);

I kinda dislike this as I pointed out.


-- 
Oscar Salvador
SUSE Labs

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 3/5] hugetlb: be sure to free demoted CMA pages to CMA
  2021-10-06  7:54       ` Oscar Salvador
@ 2021-10-06 18:27         ` Mike Kravetz
  0 siblings, 0 replies; 22+ messages in thread
From: Mike Kravetz @ 2021-10-06 18:27 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: linux-mm, linux-kernel, David Hildenbrand, Michal Hocko, Zi Yan,
	Muchun Song, Naoya Horiguchi, David Rientjes, Aneesh Kumar K . V,
	Andrew Morton

On 10/6/21 12:54 AM, Oscar Salvador wrote:
> On Tue, Oct 05, 2021 at 11:57:54AM -0700, Mike Kravetz wrote:
>> It is the smallest.
>>
>> CMA uses a per-region bit map to track allocations.  When setting up the
>> region, you specify how many pages each bit represents.  Currently,
>> only gigantic pages are allocated/freed from CMA so the region is set up
>> such that one bit represents a gigantic page size allocation.
>>
>> With demote, a gigantic page (allocation) could be split into smaller
>> size pages.  And, these smaller size pages will be freed to CMA.  So,
>> since the per-region bit map needs to be set up to represent the smallest
>> allocation/free size, it now needs to be set to the smallest huge page
>> size which can be freed to CMA.
>>
>> Unfortunately, we set up the CMA region for huge pages before we set up
>> huge page sizes (hstates).  So, technically we do not know the smallest
>> huge page size as this can change via command line options and
>> architecture specific code.  Therefore, at region setup time we need some
>> constant value for smallest possible huge page size.  That is why
>> HUGETLB_PAGE_ORDER is used.
> 
> Do you know if that is done for a reason? Setting up CMA for hugetlb before
> initializing hugetlb itself? Would it not make more sense to do it the other
> way around?
> 

One reason is that the initialization sequence is a bit messy.  In most
cases, arch specific code sets up huge pages.  So, we would need to make
sure this is done before the cma initialization.  This might be
possible, but I am not confident in my abilities to understand/modify
and test early init code for all architectures supporting hugetlb cma
allocations.

In addition, not all architectures initialize their huge page sizes.  It
is possible for architectures to only set up huge pages that have been
requested on the command line.  In such cases, it would require some
fancy command line parsing to look for and process a hugetlb_cma argument
before any other hugetlb argument.  Not even sure if this is possible.

The most reasonable way to address this would be to add an arch specific
callback asking for the smallest supported huge page size.  I did not do
this here, as I am not sure this is really going to be an issue.  In
the use case (and architecture) I know of, this is not an issue.  As you
mention, this or something else could be added if the need arises.
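
If it ever becomes necessary, a hypothetical sketch could look like the
following; arch_hugetlb_smallest_order() is an invented name, nothing like
it exists in this series:

	/*
	 * Weak default: assume HUGETLB_PAGE_ORDER is the smallest huge
	 * page order.  An arch such as arm64 with 64K pages, which has
	 * a 2M CONT_PTE huge page below HUGETLB_PAGE_ORDER, could
	 * override this.
	 */
	unsigned int __weak arch_hugetlb_smallest_order(void)
	{
		return HUGETLB_PAGE_ORDER;
	}

hugetlb_cma_reserve() could then use the returned order for 'order_per_bit'
instead of the HUGETLB_PAGE_ORDER constant.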
-- 
Mike Kravetz

> The way I see it, it is a bit unfortunate that we can only demote gigantic
> pages per se, i.e. 1GB on x86_64 and 16G on arm64 with 64k page size.
> 
> I guess nothing to be worried about now as this is an early stage, but maybe
> something to think about in the future in case we want to allow for more
> flexibility.
> 
>> I should probably add all that to the changelog for clarity?
> 
> Yes, I think it would be great to have that as context.
> 
>> After your comment yesterday about rewriting this code for clarity,  this
>> now becomes:
>>
>> 		/*
>> 		 * Set demote order for each hstate.  Note that
>> 		 * h->demote_order is initially 0.
>> 		 * - We can not demote gigantic pages if runtime freeing
>> 		 *   is not supported, so skip this.
>> 		 * - If CMA allocation is possible, we can not demote
>> 		 *   HUGETLB_PAGE_ORDER or smaller size pages.
>> 		 */
>> 		if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
>> 			continue;
>> 		if (hugetlb_cma_size && h->order <= HUGETLB_PAGE_ORDER)
>> 			continue;
>> 		for_each_hstate(h2) {
>> 			if (h2 == h)
>> 				continue;
>> 			if (h2->order < h->order &&
>> 			    h2->order > h->demote_order)
>> 				h->demote_order = h2->order;
>> 		}
>>
>> Hopefully, that is more clear.
> 
> Definitely, this looks better to me.
> 

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 5/5] hugetlb: add hugetlb demote page support
  2021-10-06  8:41   ` Oscar Salvador
@ 2021-10-06 18:52     ` Mike Kravetz
  2021-10-07  7:52       ` Oscar Salvador
  0 siblings, 1 reply; 22+ messages in thread
From: Mike Kravetz @ 2021-10-06 18:52 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: linux-mm, linux-kernel, David Hildenbrand, Michal Hocko, Zi Yan,
	Muchun Song, Naoya Horiguchi, David Rientjes, Aneesh Kumar K . V,
	Andrew Morton

On 10/6/21 1:41 AM, Oscar Salvador wrote:
> On Fri, Oct 01, 2021 at 10:52:10AM -0700, Mike Kravetz wrote:
>> Demote page functionality will split a huge page into a number of huge
>> pages of a smaller size.  For example, on x86 a 1GB huge page can be
>> demoted into 512 2M huge pages.  Demotion is done 'in place' by simply
>> splitting the huge page.
>>
>> Added '*_for_demote' wrappers for remove_hugetlb_page,
>> destroy_compound_gigantic_page and prep_compound_gigantic_page for use
>> by demote code.
>>
>> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
>> ---
>>  mm/hugetlb.c | 82 +++++++++++++++++++++++++++++++++++++++++++++++-----
>>  1 file changed, 74 insertions(+), 8 deletions(-)
>>
> ...  
>> +static int demote_free_huge_page(struct hstate *h, struct page *page)
>> +{
>> +	int i, nid = page_to_nid(page);
>> +	struct hstate *target_hstate;
>> +	int rc = 0;
>> +
>> +	target_hstate = size_to_hstate(PAGE_SIZE << h->demote_order);
>> +
>> +	remove_hugetlb_page_for_demote(h, page, false);
>> +	spin_unlock_irq(&hugetlb_lock);
>> +
>> +	rc = alloc_huge_page_vmemmap(h, page);
>> +	if (rc) {
>> +	/* Allocation of vmemmap failed, we can not demote page */
>> +		spin_lock_irq(&hugetlb_lock);
>> +		set_page_refcounted(page);
>> +		add_hugetlb_page(h, page, false);
>> +		return rc;
>> +	}
> 
> Question: You keep the original error code returned from alloc_huge_page_vmemmap()
> here, but then you lose it in demote_pool_huge_page() when doing
> !demote_free_huge_page(). Would it not make more sense to keep it all the way
> down to demote_store() in case you want to return the actual error code?
> 

Yes, I will return it all the way to demote_store (and the user).
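
Roughly (untested sketch, not the final patch):

	/* demote_pool_huge_page(): return the errno instead of a bool */
	rc = demote_free_huge_page(h, page);
	return rc;

	/* demote_store(): stop on error and report it to the user */
	rc = demote_pool_huge_page(h, n_mask);
	if (rc < 0) {
		spin_unlock_irq(&hugetlb_lock);
		mutex_unlock(&h->resize_lock);
		return rc;
	}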

>> +
>> +	/*
>> +	 * Use destroy_compound_gigantic_page_for_demote for all huge page
>> +	 * sizes as it will not ref count pages.
>> +	 */
>> +	destroy_compound_gigantic_page_for_demote(page, huge_page_order(h));
> 
> It seems that for now we only allow gigantic pages to be demoted, but
> destroy_compound_gigantic_page_for_demote feels kind of wrong, even
> if it is only a wrapper that ends up calling _*gigantic_ functions.
> 
> We want a routine that destroys a hugetlb page so it can be demoted into
> smaller hugetlb pages, so in my opinion 'gigantic' makes little sense in the name.
> 

Agree, I do not love the name.  Since it is only a wrapper, how about
destroy_hugetlb_page_for_demote?  And, change those other *_for_demote
wrappers to similarly not have gigantic in their names.

>>  static int demote_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed)
>>  	__must_hold(&hugetlb_lock)
>>  {
>> +	int nr_nodes, node;
>> +	struct page *page;
>>  	int rc = 0;
>>  
>>  	lockdep_assert_held(&hugetlb_lock);
>> @@ -3313,9 +3377,15 @@ static int demote_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed)
>>  	if (!h->demote_order)
>>  		return rc;
>>  
>> -	/*
>> -	 * TODO - demote fucntionality will be added in subsequent patch
>> -	 */
>> +	for_each_node_mask_to_free(h, nr_nodes, node, nodes_allowed) {
>> +		if (!list_empty(&h->hugepage_freelists[node])) {
>> +			page = list_entry(h->hugepage_freelists[node].next,
>> +					struct page, lru);
>> +			rc = !demote_free_huge_page(h, page);
> 
> I kinda dislike this as I pointed out.
> 

Will change.

Thanks for all your comments!
-- 
Mike Kravetz

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 5/5] hugetlb: add hugetlb demote page support
  2021-10-06 18:52     ` Mike Kravetz
@ 2021-10-07  7:52       ` Oscar Salvador
  0 siblings, 0 replies; 22+ messages in thread
From: Oscar Salvador @ 2021-10-07  7:52 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: linux-mm, linux-kernel, David Hildenbrand, Michal Hocko, Zi Yan,
	Muchun Song, Naoya Horiguchi, David Rientjes, Aneesh Kumar K . V,
	Andrew Morton

On Wed, Oct 06, 2021 at 11:52:33AM -0700, Mike Kravetz wrote:
> Agree, I do not love the name.  Since it is only a wrapper, how about
> destroy_hugetlb_page_for_demote?  And, change those other *_for_demote
> wrappers to similarly not have gigantic in their names.

Yes, makes sense to me. We want the names to be generic and not tied to
gigantic stuff.

Thanks Mike

-- 
Oscar Salvador
SUSE L3

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 1/5] hugetlb: add demote hugetlb page sysfs interfaces
  2021-10-01 17:52 ` [PATCH v3 1/5] hugetlb: add demote hugetlb page sysfs interfaces Mike Kravetz
  2021-10-04 13:00   ` Oscar Salvador
  2021-10-05  8:23   ` Oscar Salvador
@ 2021-11-12 20:10   ` kernel test robot
  2 siblings, 0 replies; 22+ messages in thread
From: kernel test robot @ 2021-11-12 20:10 UTC (permalink / raw)
  To: kbuild-all

[-- Attachment #1: Type: text/plain, Size: 2226 bytes --]

Hi Mike,

I love your patch! Yet something to improve:

[auto build test ERROR on linux/master]
[also build test ERROR on v5.15]
[cannot apply to hnaz-mm/master linus/master next-20211112]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Mike-Kravetz/hugetlb-add-demote-split-page-functionality/20211002-015415
base:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 5816b3e6577eaa676ceb00a848f0fd65fe2adc29
config: x86_64-allyesconfig (attached as .config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
reproduce (this is a W=1 build):
        # https://github.com/0day-ci/linux/commit/5829c115b143a240099ed1da793e684f6dd8c2d8
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Mike-Kravetz/hugetlb-add-demote-split-page-functionality/20211002-015415
        git checkout 5829c115b143a240099ed1da793e684f6dd8c2d8
        # save the attached .config to linux build tree
        make W=1 ARCH=x86_64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   mm/hugetlb.c: In function 'demote_size_show':
>> mm/hugetlb.c:3533:16: error: variable 'demote_size' set but not used [-Werror=unused-but-set-variable]
    3533 |  unsigned long demote_size;
         |                ^~~~~~~~~~~
   cc1: all warnings being treated as errors


vim +/demote_size +3533 mm/hugetlb.c

  3528	
  3529	static ssize_t demote_size_show(struct kobject *kobj,
  3530						struct kobj_attribute *attr, char *buf)
  3531	{
  3532		struct hstate *h;
> 3533		unsigned long demote_size;
  3534		int nid;
  3535	
  3536		h = kobj_to_hstate(kobj, &nid);
  3537		demote_size = h->demote_order;
  3538	
  3539		return sysfs_emit(buf, "%lukB\n",
  3540				(unsigned long)(PAGE_SIZE << h->demote_order) / SZ_1K);
  3541	}
  3542	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all(a)lists.01.org

[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 66499 bytes --]

^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2021-11-12 20:10 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-10-01 17:52 [PATCH v3 0/5] hugetlb: add demote/split page functionality Mike Kravetz
2021-10-01 17:52 ` [PATCH v3 1/5] hugetlb: add demote hugetlb page sysfs interfaces Mike Kravetz
2021-10-04 13:00   ` Oscar Salvador
2021-10-04 18:27     ` Mike Kravetz
2021-10-05  8:23   ` Oscar Salvador
2021-10-05 16:58     ` Mike Kravetz
2021-11-12 20:10   ` kernel test robot
2021-10-01 17:52 ` [PATCH v3 2/5] mm/cma: add cma_pages_valid to determine if pages are in CMA Mike Kravetz
2021-10-05  8:35   ` Oscar Salvador
2021-10-05  8:45   ` Oscar Salvador
2021-10-05 17:06     ` Mike Kravetz
2021-10-05  8:48   ` David Hildenbrand
2021-10-01 17:52 ` [PATCH v3 3/5] hugetlb: be sure to free demoted CMA pages to CMA Mike Kravetz
2021-10-05  9:33   ` Oscar Salvador
2021-10-05 18:57     ` Mike Kravetz
2021-10-06  7:54       ` Oscar Salvador
2021-10-06 18:27         ` Mike Kravetz
2021-10-01 17:52 ` [PATCH v3 4/5] hugetlb: add demote bool to gigantic page routines Mike Kravetz
2021-10-01 17:52 ` [PATCH v3 5/5] hugetlb: add hugetlb demote page support Mike Kravetz
2021-10-06  8:41   ` Oscar Salvador
2021-10-06 18:52     ` Mike Kravetz
2021-10-07  7:52       ` Oscar Salvador
