From: "Huang, Ying" <ying.huang@intel.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: tim.c.chen@intel.com, dave.hansen@intel.com,
	andi.kleen@intel.com, aaron.lu@intel.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Huang Ying <ying.huang@intel.com>,
	Andrea Arcangeli <aarcange@redhat.com>,
	"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>,
	Ebru Akagunduz <ebru.akagunduz@gmail.com>
Subject: [PATCH -v4 RESEND 7/9] mm, THP: Add can_split_huge_page()
Date: Fri, 28 Oct 2016 13:56:06 +0800	[thread overview]
Message-ID: <20161028055608.1736-8-ying.huang@intel.com> (raw)
In-Reply-To: <20161028055608.1736-1-ying.huang@intel.com>

From: Huang Ying <ying.huang@intel.com>

Separate the check of whether a huge page can be split out of
split_huge_page_to_list() into its own function, can_split_huge_page().
This makes it possible to check splittability before actually splitting
the THP (Transparent Huge Page).

This will be used to delay splitting a THP during swap-out: for a THP we
will allocate a swap cluster and add the THP to the swap cache, and only
then split it.  Checking splittability first avoids these operations for
a THP that cannot be split anyway.

There is no functionality change in this patch.

Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Ebru Akagunduz <ebru.akagunduz@gmail.com>
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
---
 include/linux/huge_mm.h |  7 +++++++
 mm/huge_memory.c        | 13 ++++++++++++-
 2 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 9b9f65d..14ffa3f 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -94,6 +94,7 @@ extern unsigned long thp_get_unmapped_area(struct file *filp,
 extern void prep_transhuge_page(struct page *page);
 extern void free_transhuge_page(struct page *page);
 
+bool can_split_huge_page(struct page *page);
 int split_huge_page_to_list(struct page *page, struct list_head *list);
 static inline int split_huge_page(struct page *page)
 {
@@ -176,6 +177,12 @@ static inline void prep_transhuge_page(struct page *page) {}
 
 #define thp_get_unmapped_area	NULL
 
+static inline bool
+can_split_huge_page(struct page *page)
+{
+	BUILD_BUG();
+	return false;
+}
 static inline int
 split_huge_page_to_list(struct page *page, struct list_head *list)
 {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index cdcd25c..326b145 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2013,6 +2013,17 @@ int page_trans_huge_mapcount(struct page *page, int *total_mapcount)
 	return ret;
 }
 
+/* Racy check whether the huge page can be split */
+bool can_split_huge_page(struct page *page)
+{
+	int extra_pins = 0;
+
+	/* Additional pins from radix tree */
+	if (!PageAnon(page))
+		extra_pins = HPAGE_PMD_NR;
+	return total_mapcount(page) == page_count(page) - extra_pins - 1;
+}
+
 /*
  * This function splits huge page into normal pages. @page can point to any
  * subpage of huge page to split. Split doesn't change the position of @page.
@@ -2083,7 +2094,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	 * Racy check if we can split the page, before freeze_page() will
 	 * split PMDs
 	 */
-	if (total_mapcount(head) != page_count(head) - extra_pins - 1) {
+	if (!can_split_huge_page(head)) {
 		ret = -EBUSY;
 		goto out_unlock;
 	}
-- 
2.9.3
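
For illustration only, here is a minimal sketch of the calling pattern the
changelog describes: check splittability cheaply up front, do the swap-cache
setup, then split.  The helpers get_huge_swap_slot() and
add_huge_page_to_swap_cache() are hypothetical placeholders standing in for
machinery added elsewhere in this series; can_split_huge_page() comes from
this patch and split_huge_page() is an existing kernel interface.

/*
 * Illustrative sketch only (not part of this patch): the intended
 * calling pattern on the swap-out path.  get_huge_swap_slot() and
 * add_huge_page_to_swap_cache() are hypothetical names standing in
 * for helpers introduced by other patches in this series.
 */
static int try_to_swap_out_thp(struct page *page)
{
	/* Cheap, racy pre-check: skip a THP that cannot be split anyway. */
	if (!can_split_huge_page(page))
		return -EBUSY;

	/* Setup that would otherwise be wasted on an unsplittable THP. */
	if (get_huge_swap_slot(page))			/* hypothetical */
		return -ENOMEM;
	if (add_huge_page_to_swap_cache(page))		/* hypothetical */
		return -ENOMEM;

	/*
	 * The authoritative check is still performed under the proper
	 * locks inside split_huge_page_to_list().
	 */
	return split_huge_page(page);
}

Roughly, the check compares reference counts: every mapping of the THP
contributes one reference, the caller holds one more, and a page-cache THP
additionally carries HPAGE_PMD_NR references from the radix tree.  Any
reference beyond those is an extra pin that will make the split fail, so the
page is reported as unsplittable.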

Thread overview: 28+ messages
2016-10-28  5:55 [PATCH -v4 RESEND 0/9] THP swap: Delay splitting THP during swapping out Huang, Ying
2016-10-28  5:56 ` [PATCH -v4 RESEND 1/9] mm, swap: Make swap cluster size same of THP size on x86_64 Huang, Ying
2016-10-28  5:56 ` [PATCH -v4 RESEND 2/9] mm, memcg: Support to charge/uncharge multiple swap entries Huang, Ying
2016-10-28  5:56 ` [PATCH -v4 RESEND 3/9] mm, THP, swap: Add swap cluster allocate/free functions Huang, Ying
2016-10-28  5:56 ` [PATCH -v4 RESEND 4/9] mm, THP, swap: Add get_huge_swap_page() Huang, Ying
2016-10-28  5:56 ` [PATCH -v4 RESEND 5/9] mm, THP, swap: Support to clear SWAP_HAS_CACHE for huge page Huang, Ying
2016-10-28  5:56 ` [PATCH -v4 RESEND 6/9] mm, THP, swap: Support to add/delete THP to/from swap cache Huang, Ying
2016-10-28  8:08   ` Hillf Danton
2016-10-28  8:30     ` Huang, Ying
2016-10-28  5:56 ` [PATCH -v4 RESEND 7/9] mm, THP: Add can_split_huge_page() Huang, Ying [this message]
2016-10-28  5:56 ` [PATCH -v4 RESEND 8/9] mm, THP, swap: Support to split THP in swap cache Huang, Ying
2016-10-28 10:18   ` Hillf Danton
2016-10-31  1:26     ` Huang, Ying
2016-10-28  5:56 ` [PATCH -v4 RESEND 9/9] mm, THP, swap: Delay splitting THP during swap out Huang, Ying
