From: Mike Kravetz <mike.kravetz@oracle.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Muchun Song <songmuchun@bytedance.com>,
	Joao Martins <joao.m.martins@oracle.com>,
	Oscar Salvador <osalvador@suse.de>,
	David Hildenbrand <david@redhat.com>,
	Miaohe Lin <linmiaohe@huawei.com>,
	David Rientjes <rientjes@google.com>,
	Anshuman Khandual <anshuman.khandual@arm.com>,
	Naoya Horiguchi <naoya.horiguchi@linux.dev>,
	Barry Song <song.bao.hua@hisilicon.com>,
	Michal Hocko <mhocko@suse.com>,
	Matthew Wilcox <willy@infradead.org>,
	Xiongchun Duan <duanxiongchun@bytedance.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Mike Kravetz <mike.kravetz@oracle.com>
Subject: [PATCH v2 06/11] hugetlb: perform vmemmap optimization on a list of pages
Date: Tue,  5 Sep 2023 14:44:05 -0700
Message-ID: <20230905214412.89152-7-mike.kravetz@oracle.com>
In-Reply-To: <20230905214412.89152-1-mike.kravetz@oracle.com>

When adding hugetlb pages to the pool, we first create a list of the
allocated pages and only then add them to the pool.  Pass this list to
the new routine hugetlb_vmemmap_optimize_folios() so that vmemmap
optimization can be performed on the entire batch in one call.

We also modify the routine vmemmap_should_optimize() to skip pages that
are already optimized.  Some code paths may request vmemmap optimization
twice for the same page, and this check ensures the operation is not
attempted a second time.
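The batching pattern above can be sketched in plain user-space C.  This
is illustrative only: the list plumbing and the optimized flag are
stand-ins for the kernel's intrusive list and HPageVmemmapOptimized
state, and the function names merely mirror the real routines.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Minimal stand-in for a folio on an intrusive list (hypothetical;
 * not the real struct folio or <linux/list.h>). */
struct folio {
	bool vmemmap_optimized;
	struct folio *next;
};

/* Mirrors vmemmap_should_optimize(): refuse folios that are already
 * optimized so a second request becomes a no-op. */
static bool should_optimize(const struct folio *f)
{
	return !f->vmemmap_optimized;
}

/* Mirrors hugetlb_vmemmap_optimize_folios(): walk the whole list once,
 * optimizing each folio that still needs it.  Returns the number of
 * folios actually optimized. */
static int optimize_folios(struct folio *head)
{
	int optimized = 0;

	for (struct folio *f = head; f; f = f->next) {
		if (!should_optimize(f))
			continue;	/* already done; skip */
		f->vmemmap_optimized = true;
		optimized++;
	}
	return optimized;
}
```

Calling optimize_folios() a second time on the same list optimizes
nothing, which is exactly the property the vmemmap_should_optimize()
check provides in the kernel.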

Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 mm/hugetlb.c         |  5 +++++
 mm/hugetlb_vmemmap.c | 11 +++++++++++
 mm/hugetlb_vmemmap.h |  5 +++++
 3 files changed, 21 insertions(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d1950b726346..554be94b07bd 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2230,6 +2230,11 @@ static void prep_and_add_allocated_folios(struct hstate *h,
 {
 	struct folio *folio, *tmp_f;
 
+	/*
+	 * Send list for bulk vmemmap optimization processing
+	 */
+	hugetlb_vmemmap_optimize_folios(h, folio_list);
+
 	/*
 	 * Add all new pool pages to free lists in one lock cycle
 	 */
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index aeb7dd889eee..ac5577d372fe 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -484,6 +484,9 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
 /* Return true iff a HugeTLB whose vmemmap should and can be optimized. */
 static bool vmemmap_should_optimize(const struct hstate *h, const struct page *head)
 {
+	if (HPageVmemmapOptimized((struct page *)head))
+		return false;
+
 	if (!READ_ONCE(vmemmap_optimize_enabled))
 		return false;
 
@@ -573,6 +576,14 @@ void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head)
 		SetHPageVmemmapOptimized(head);
 }
 
+void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list)
+{
+	struct folio *folio;
+
+	list_for_each_entry(folio, folio_list, lru)
+		hugetlb_vmemmap_optimize(h, &folio->page);
+}
+
 static struct ctl_table hugetlb_vmemmap_sysctls[] = {
 	{
 		.procname	= "hugetlb_optimize_vmemmap",
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 25bd0e002431..036494e040ca 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -13,6 +13,7 @@
 #ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
 int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head);
 void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head);
+void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list);
 
 /*
  * Reserve one vmemmap page, all vmemmap addresses are mapped to it. See
@@ -47,6 +48,10 @@ static inline void hugetlb_vmemmap_optimize(const struct hstate *h, struct page
 {
 }
 
+static inline void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list)
+{
+}
+
 static inline unsigned int hugetlb_vmemmap_optimizable_size(const struct hstate *h)
 {
 	return 0;
-- 
2.41.0


