linux-kernel.vger.kernel.org archive mirror
* [PATCH v6 0/4] HWPOISON: soft offlining for non-lru movable page
@ 2017-02-03  7:59 Yisheng Xie
  2017-02-03  7:59 ` [PATCH v6 1/4] mm/migration: make isolate_movable_page() return int type Yisheng Xie
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Yisheng Xie @ 2017-02-03  7:59 UTC (permalink / raw)
  To: linux-mm, linux-kernel, akpm
  Cc: mhocko, minchan, ak, guohanjun, hannes, iamjoonsoo.kim, mgorman,
	n-horiguchi, arbab, izumi.taku, vkuznets, vbabka, qiuxishi

Hi Michal, Minchan and all,
Could you please help review it?

Any suggestions are more than welcome. Thanks to all of you.

After Minchan's commit bda807d44454 ("mm: migrate: support non-lru movable
page migration"), some types of non-lru pages, such as zsmalloc and
virtio-balloon pages, also support migration.

Therefore, we can:

1) soft offline non-lru movable pages, which means that when correctable
   memory errors occur on a non-lru movable page, we can stop using it by
   migrating its data onto another page and disabling the original (maybe
   half-broken) one.

2) enable memory hotplug for non-lru movable pages, i.e. we may offline
   memory blocks that include such pages by using non-lru page migration.

This patchset is heavily dependent on non-lru movable page migration.
--------
v6:
 * just return -EBUSY from isolate_movable_page() when it fails to
   isolate a non-lru movable page, as suggested by Minchan.

v5:
 * change the return type of isolate_movable_page() from bool to int, as
   Michal suggested.
 * add "enable memory hotplug for non-lru movable pages" to this patchset,
   also with some changes suggested by Michal.

v4:
 * make isolate_movable_page() always defined to avoid a compile error
   with CONFIG_MIGRATION=n.
 * return -EBUSY when isolate_movable_page() returns false, which means it
   failed to isolate a movable page.

v3:
  * delete some unneeded limitations and use !__PageMovable instead of
    PageLRU after isolating the page, to avoid an isolated-count mismatch,
    as Minchan Kim suggested.

v2:
 * delete the function soft_offline_movable_page() and handle non-lru
   movable pages in __soft_offline_page(), as Michal Hocko suggested.

Yisheng Xie (4):
  mm/migration: make isolate_movable_page() return int type
  mm/migration: make isolate_movable_page always defined
  HWPOISON: soft offlining for non-lru movable page
  mm/hotplug: enable memory hotplug for non-lru movable pages

 include/linux/migrate.h |  4 +++-
 mm/compaction.c         |  2 +-
 mm/memory-failure.c     | 26 ++++++++++++++++----------
 mm/memory_hotplug.c     | 28 +++++++++++++++++-----------
 mm/migrate.c            |  6 +++---
 mm/page_alloc.c         |  8 ++++++--
 6 files changed, 46 insertions(+), 28 deletions(-)

-- 
1.7.12.4

^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH v6 1/4] mm/migration: make isolate_movable_page() return int type
  2017-02-03  7:59 [PATCH v6 0/4] HWPOISON: soft offlining for non-lru movable page Yisheng Xie
@ 2017-02-03  7:59 ` Yisheng Xie
  2017-02-07  0:33   ` Minchan Kim
  2017-02-03  7:59 ` [PATCH v6 2/4] mm/migration: make isolate_movable_page always defined Yisheng Xie
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 8+ messages in thread
From: Yisheng Xie @ 2017-02-03  7:59 UTC (permalink / raw)
  To: linux-mm, linux-kernel, akpm
  Cc: mhocko, minchan, ak, guohanjun, hannes, iamjoonsoo.kim, mgorman,
	n-horiguchi, arbab, izumi.taku, vkuznets, vbabka, qiuxishi

Change the return type of isolate_movable_page() from bool to int.  It
returns 0 when it isolates a movable page successfully, and -EBUSY when
isolation fails.

There is no functional change in this patch; it only prepares for a later
patch.

Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com>
Suggested-by: Michal Hocko <mhocko@kernel.org>
Suggested-by: Minchan Kim <minchan@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Hanjun Guo <guohanjun@huawei.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Reza Arbab <arbab@linux.vnet.ibm.com>
Cc: Taku Izumi <izumi.taku@jp.fujitsu.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Xishi Qiu <qiuxishi@huawei.com>
---
 include/linux/migrate.h | 2 +-
 mm/compaction.c         | 2 +-
 mm/migrate.c            | 6 +++---
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index ae8d475..43d5deb 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -37,7 +37,7 @@ extern int migrate_page(struct address_space *,
 			struct page *, struct page *, enum migrate_mode);
 extern int migrate_pages(struct list_head *l, new_page_t new, free_page_t free,
 		unsigned long private, enum migrate_mode mode, int reason);
-extern bool isolate_movable_page(struct page *page, isolate_mode_t mode);
+extern int isolate_movable_page(struct page *page, isolate_mode_t mode);
 extern void putback_movable_page(struct page *page);
 
 extern int migrate_prep(void);
diff --git a/mm/compaction.c b/mm/compaction.c
index 949198d..1d89147 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -802,7 +802,7 @@ static bool too_many_isolated(struct zone *zone)
 					locked = false;
 				}
 
-				if (isolate_movable_page(page, isolate_mode))
+				if (!isolate_movable_page(page, isolate_mode))
 					goto isolate_success;
 			}
 
diff --git a/mm/migrate.c b/mm/migrate.c
index 87f4d0f..6807174 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -74,7 +74,7 @@ int migrate_prep_local(void)
 	return 0;
 }
 
-bool isolate_movable_page(struct page *page, isolate_mode_t mode)
+int isolate_movable_page(struct page *page, isolate_mode_t mode)
 {
 	struct address_space *mapping;
 
@@ -125,14 +125,14 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
 	__SetPageIsolated(page);
 	unlock_page(page);
 
-	return true;
+	return 0;
 
 out_no_isolated:
 	unlock_page(page);
 out_putpage:
 	put_page(page);
 out:
-	return false;
+	return -EBUSY;
 }
 
 /* It should be called on page which is PG_movable */
-- 
1.7.12.4


* [PATCH v6 2/4] mm/migration: make isolate_movable_page always defined
  2017-02-03  7:59 [PATCH v6 0/4] HWPOISON: soft offlining for non-lru movable page Yisheng Xie
  2017-02-03  7:59 ` [PATCH v6 1/4] mm/migration: make isolate_movable_page() return int type Yisheng Xie
@ 2017-02-03  7:59 ` Yisheng Xie
  2017-02-03  7:59 ` [PATCH v6 3/4] HWPOISON: soft offlining for non-lru movable page Yisheng Xie
  2017-02-03  7:59 ` [PATCH v6 4/4] mm/hotplug: enable memory hotplug for non-lru movable pages Yisheng Xie
  3 siblings, 0 replies; 8+ messages in thread
From: Yisheng Xie @ 2017-02-03  7:59 UTC (permalink / raw)
  To: linux-mm, linux-kernel, akpm
  Cc: mhocko, minchan, ak, guohanjun, hannes, iamjoonsoo.kim, mgorman,
	n-horiguchi, arbab, izumi.taku, vkuznets, vbabka, qiuxishi

Define isolate_movable_page() as a static inline function when
CONFIG_MIGRATION is not enabled.  It returns -EBUSY in that case, which
means it failed to isolate a movable page.

This patch has no functional change; it only prepares for a later patch.

Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Suggested-by: Michal Hocko <mhocko@kernel.org>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Hanjun Guo <guohanjun@huawei.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Reza Arbab <arbab@linux.vnet.ibm.com>
Cc: Taku Izumi <izumi.taku@jp.fujitsu.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Xishi Qiu <qiuxishi@huawei.com>
---
 include/linux/migrate.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 43d5deb..fa76b51 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -56,6 +56,8 @@ static inline int migrate_pages(struct list_head *l, new_page_t new,
 		free_page_t free, unsigned long private, enum migrate_mode mode,
 		int reason)
 	{ return -ENOSYS; }
+static inline int isolate_movable_page(struct page *page, isolate_mode_t mode)
+	{ return -EBUSY; }
 
 static inline int migrate_prep(void) { return -ENOSYS; }
 static inline int migrate_prep_local(void) { return -ENOSYS; }
-- 
1.7.12.4


* [PATCH v6 3/4] HWPOISON: soft offlining for non-lru movable page
  2017-02-03  7:59 [PATCH v6 0/4] HWPOISON: soft offlining for non-lru movable page Yisheng Xie
  2017-02-03  7:59 ` [PATCH v6 1/4] mm/migration: make isolate_movable_page() return int type Yisheng Xie
  2017-02-03  7:59 ` [PATCH v6 2/4] mm/migration: make isolate_movable_page always defined Yisheng Xie
@ 2017-02-03  7:59 ` Yisheng Xie
  2017-02-03  7:59 ` [PATCH v6 4/4] mm/hotplug: enable memory hotplug for non-lru movable pages Yisheng Xie
  3 siblings, 0 replies; 8+ messages in thread
From: Yisheng Xie @ 2017-02-03  7:59 UTC (permalink / raw)
  To: linux-mm, linux-kernel, akpm
  Cc: mhocko, minchan, ak, guohanjun, hannes, iamjoonsoo.kim, mgorman,
	n-horiguchi, arbab, izumi.taku, vkuznets, vbabka, qiuxishi

Extend the soft offlining framework to support non-lru movable pages,
which already support migration after commit bda807d44454 ("mm: migrate:
support non-lru movable page migration").

When correctable memory errors occur on a non-lru movable page, we can
choose to stop using it by migrating its data onto another page and
disabling the original (maybe half-broken) one.

Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com>
Suggested-by: Michal Hocko <mhocko@kernel.org>
Suggested-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Minchan Kim <minchan@kernel.org>
Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Hanjun Guo <guohanjun@huawei.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Reza Arbab <arbab@linux.vnet.ibm.com>
Cc: Taku Izumi <izumi.taku@jp.fujitsu.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Xishi Qiu <qiuxishi@huawei.com>
---
 mm/memory-failure.c | 26 ++++++++++++++++----------
 1 file changed, 16 insertions(+), 10 deletions(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index f283c7e..3d0f2fd 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1527,7 +1527,8 @@ static int get_any_page(struct page *page, unsigned long pfn, int flags)
 {
 	int ret = __get_any_page(page, pfn, flags);
 
-	if (ret == 1 && !PageHuge(page) && !PageLRU(page)) {
+	if (ret == 1 && !PageHuge(page) &&
+	    !PageLRU(page) && !__PageMovable(page)) {
 		/*
 		 * Try to free it.
 		 */
@@ -1649,7 +1650,10 @@ static int __soft_offline_page(struct page *page, int flags)
 	 * Try to migrate to a new page instead. migrate.c
 	 * handles a large number of cases for us.
 	 */
-	ret = isolate_lru_page(page);
+	if (PageLRU(page))
+		ret = isolate_lru_page(page);
+	else
+		ret = isolate_movable_page(page, ISOLATE_UNEVICTABLE);
 	/*
 	 * Drop page reference which is came from get_any_page()
 	 * successful isolate_lru_page() already took another one.
@@ -1657,18 +1661,20 @@ static int __soft_offline_page(struct page *page, int flags)
 	put_hwpoison_page(page);
 	if (!ret) {
 		LIST_HEAD(pagelist);
-		inc_node_page_state(page, NR_ISOLATED_ANON +
-					page_is_file_cache(page));
+		/*
+		 * After an lru page is isolated, PageLRU is cleared, so use
+		 * !__PageMovable instead, because an LRU page's mapping
+		 * cannot have PAGE_MAPPING_MOVABLE set.
+		 */
+		if (!__PageMovable(page))
+			inc_node_page_state(page, NR_ISOLATED_ANON +
+						page_is_file_cache(page));
 		list_add(&page->lru, &pagelist);
 		ret = migrate_pages(&pagelist, new_page, NULL, MPOL_MF_MOVE_ALL,
 					MIGRATE_SYNC, MR_MEMORY_FAILURE);
 		if (ret) {
-			if (!list_empty(&pagelist)) {
-				list_del(&page->lru);
-				dec_node_page_state(page, NR_ISOLATED_ANON +
-						page_is_file_cache(page));
-				putback_lru_page(page);
-			}
+			if (!list_empty(&pagelist))
+				putback_movable_pages(&pagelist);
 
 			pr_info("soft offline: %#lx: migration failed %d, type %lx\n",
 				pfn, ret, page->flags);
-- 
1.7.12.4


* [PATCH v6 4/4] mm/hotplug: enable memory hotplug for non-lru movable pages
  2017-02-03  7:59 [PATCH v6 0/4] HWPOISON: soft offlining for non-lru movable page Yisheng Xie
                   ` (2 preceding siblings ...)
  2017-02-03  7:59 ` [PATCH v6 3/4] HWPOISON: soft offlining for non-lru movable page Yisheng Xie
@ 2017-02-03  7:59 ` Yisheng Xie
  2017-02-06  3:29   ` Naoya Horiguchi
  3 siblings, 1 reply; 8+ messages in thread
From: Yisheng Xie @ 2017-02-03  7:59 UTC (permalink / raw)
  To: linux-mm, linux-kernel, akpm
  Cc: mhocko, minchan, ak, guohanjun, hannes, iamjoonsoo.kim, mgorman,
	n-horiguchi, arbab, izumi.taku, vkuznets, vbabka, qiuxishi

We had considered all non-lru pages unmovable before commit bda807d44454
("mm: migrate: support non-lru movable page migration").  But now some
non-lru pages, such as zsmalloc and virtio-balloon pages, have become
movable.  So we can offline memory blocks that contain such pages by
using non-lru page migration.

This patch straightforwardly adds the non-lru migration code, which means
adding non-lru related code to the functions that scan over pfns, collect
the pages to be migrated, and isolate them before migration.

Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Hanjun Guo <guohanjun@huawei.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Reza Arbab <arbab@linux.vnet.ibm.com>
Cc: Taku Izumi <izumi.taku@jp.fujitsu.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Xishi Qiu <qiuxishi@huawei.com>
---
 mm/memory_hotplug.c | 28 +++++++++++++++++-----------
 mm/page_alloc.c     |  8 ++++++--
 2 files changed, 23 insertions(+), 13 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index ca2723d..ea1be08 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1516,10 +1516,10 @@ int test_pages_in_a_zone(unsigned long start_pfn, unsigned long end_pfn)
 }
 
 /*
- * Scan pfn range [start,end) to find movable/migratable pages (LRU pages
- * and hugepages). We scan pfn because it's much easier than scanning over
- * linked list. This function returns the pfn of the first found movable
- * page if it's found, otherwise 0.
+ * Scan pfn range [start,end) to find movable/migratable pages (LRU pages,
+ * non-lru movable pages and hugepages). We scan pfn because it's much
+ * easier than scanning over linked list. This function returns the pfn
+ * of the first found movable page if it's found, otherwise 0.
  */
 static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
 {
@@ -1530,6 +1530,8 @@ static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
 			page = pfn_to_page(pfn);
 			if (PageLRU(page))
 				return pfn;
+			if (__PageMovable(page))
+				return pfn;
 			if (PageHuge(page)) {
 				if (page_huge_active(page))
 					return pfn;
@@ -1606,21 +1608,25 @@ static struct page *new_node_page(struct page *page, unsigned long private,
 		if (!get_page_unless_zero(page))
 			continue;
 		/*
-		 * We can skip free pages. And we can only deal with pages on
-		 * LRU.
+		 * We can skip free pages. And we can deal with pages on
+		 * LRU and non-lru movable pages.
 		 */
-		ret = isolate_lru_page(page);
+		if (PageLRU(page))
+			ret = isolate_lru_page(page);
+		else
+			ret = isolate_movable_page(page, ISOLATE_UNEVICTABLE);
 		if (!ret) { /* Success */
 			put_page(page);
 			list_add_tail(&page->lru, &source);
 			move_pages--;
-			inc_node_page_state(page, NR_ISOLATED_ANON +
-					    page_is_file_cache(page));
+			if (!__PageMovable(page))
+				inc_node_page_state(page, NR_ISOLATED_ANON +
+						    page_is_file_cache(page));
 
 		} else {
 #ifdef CONFIG_DEBUG_VM
-			pr_alert("removing pfn %lx from LRU failed\n", pfn);
-			dump_page(page, "failed to remove from LRU");
+			pr_alert("failed to isolate pfn %lx\n", pfn);
+			dump_page(page, "isolation failed");
 #endif
 			put_page(page);
 			/* Because we don't have big zone->lock. we should
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f3e0c69..9c4e229 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7081,8 +7081,9 @@ void *__init alloc_large_system_hash(const char *tablename,
  * If @count is not zero, it is okay to include less @count unmovable pages
  *
  * PageLRU check without isolation or lru_lock could race so that
- * MIGRATE_MOVABLE block might include unmovable pages. It means you can't
- * expect this function should be exact.
+ * MIGRATE_MOVABLE block might include unmovable pages. Similarly, a
+ * __PageMovable check without lock_page may miss some movable non-lru
+ * pages in a race. So you can't expect this function to be exact.
  */
 bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
 			 bool skip_hwpoisoned_pages)
@@ -7138,6 +7139,9 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
 		if (skip_hwpoisoned_pages && PageHWPoison(page))
 			continue;
 
+		if (__PageMovable(page))
+			continue;
+
 		if (!PageLRU(page))
 			found++;
 		/*
-- 
1.7.12.4


* Re: [PATCH v6 4/4] mm/hotplug: enable memory hotplug for non-lru movable pages
  2017-02-03  7:59 ` [PATCH v6 4/4] mm/hotplug: enable memory hotplug for non-lru movable pages Yisheng Xie
@ 2017-02-06  3:29   ` Naoya Horiguchi
  2017-02-06  4:25     ` Yisheng Xie
  0 siblings, 1 reply; 8+ messages in thread
From: Naoya Horiguchi @ 2017-02-06  3:29 UTC (permalink / raw)
  To: Yisheng Xie
  Cc: linux-mm, linux-kernel, akpm, mhocko, minchan, ak, guohanjun,
	hannes, iamjoonsoo.kim, mgorman, arbab, izumi.taku, vkuznets,
	vbabka, qiuxishi

On Fri, Feb 03, 2017 at 03:59:30PM +0800, Yisheng Xie wrote:
> We had considered all non-lru pages unmovable before commit bda807d44454
> ("mm: migrate: support non-lru movable page migration").  But now some
> non-lru pages, such as zsmalloc and virtio-balloon pages, have become
> movable.  So we can offline memory blocks that contain such pages by
> using non-lru page migration.
> 
> This patch straightforwardly adds the non-lru migration code, which means
> adding non-lru related code to the functions that scan over pfns, collect
> the pages to be migrated, and isolate them before migration.
> 
> Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Minchan Kim <minchan@kernel.org>
> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Andi Kleen <ak@linux.intel.com>
> Cc: Hanjun Guo <guohanjun@huawei.com>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Reza Arbab <arbab@linux.vnet.ibm.com>
> Cc: Taku Izumi <izumi.taku@jp.fujitsu.com>
> Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
> Cc: Xishi Qiu <qiuxishi@huawei.com>
> ---
>  mm/memory_hotplug.c | 28 +++++++++++++++++-----------
>  mm/page_alloc.c     |  8 ++++++--
>  2 files changed, 23 insertions(+), 13 deletions(-)
> 
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index ca2723d..ea1be08 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1516,10 +1516,10 @@ int test_pages_in_a_zone(unsigned long start_pfn, unsigned long end_pfn)
>  }
>  
>  /*
> - * Scan pfn range [start,end) to find movable/migratable pages (LRU pages
> - * and hugepages). We scan pfn because it's much easier than scanning over
> - * linked list. This function returns the pfn of the first found movable
> - * page if it's found, otherwise 0.
> + * Scan pfn range [start,end) to find movable/migratable pages (LRU pages,
> + * non-lru movable pages and hugepages). We scan pfn because it's much
> + * easier than scanning over linked list. This function returns the pfn
> + * of the first found movable page if it's found, otherwise 0.
>   */
>  static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
>  {
> @@ -1530,6 +1530,8 @@ static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
>  			page = pfn_to_page(pfn);
>  			if (PageLRU(page))
>  				return pfn;
> +			if (__PageMovable(page))
> +				return pfn;
>  			if (PageHuge(page)) {
>  				if (page_huge_active(page))
>  					return pfn;
> @@ -1606,21 +1608,25 @@ static struct page *new_node_page(struct page *page, unsigned long private,
>  		if (!get_page_unless_zero(page))
>  			continue;
>  		/*
> -		 * We can skip free pages. And we can only deal with pages on
> -		 * LRU.
> +		 * We can skip free pages. And we can deal with pages on
> +		 * LRU and non-lru movable pages.
>  		 */
> -		ret = isolate_lru_page(page);
> +		if (PageLRU(page))
> +			ret = isolate_lru_page(page);
> +		else
> +			ret = isolate_movable_page(page, ISOLATE_UNEVICTABLE);
>  		if (!ret) { /* Success */
>  			put_page(page);
>  			list_add_tail(&page->lru, &source);
>  			move_pages--;
> -			inc_node_page_state(page, NR_ISOLATED_ANON +
> -					    page_is_file_cache(page));
> +			if (!__PageMovable(page))

If this check is identical to "if (PageLRU(page))" in this context,
PageLRU(page) looks better because you already added the same "if" above.

Otherwise, looks good to me.

Thanks,
Naoya Horiguchi


* Re: [PATCH v6 4/4] mm/hotplug: enable memory hotplug for non-lru movable pages
  2017-02-06  3:29   ` Naoya Horiguchi
@ 2017-02-06  4:25     ` Yisheng Xie
  0 siblings, 0 replies; 8+ messages in thread
From: Yisheng Xie @ 2017-02-06  4:25 UTC (permalink / raw)
  To: Naoya Horiguchi
  Cc: linux-mm, linux-kernel, akpm, mhocko, minchan, ak, guohanjun,
	hannes, iamjoonsoo.kim, mgorman, arbab, izumi.taku, vkuznets,
	vbabka, qiuxishi

Hi Naoya Horiguchi,
Thanks for reviewing.

On 2017/2/6 11:29, Naoya Horiguchi wrote:
> On Fri, Feb 03, 2017 at 03:59:30PM +0800, Yisheng Xie wrote:
>> We had considered all non-lru pages unmovable before commit bda807d44454
>> ("mm: migrate: support non-lru movable page migration").  But now some
>> non-lru pages, such as zsmalloc and virtio-balloon pages, have become
>> movable.  So we can offline memory blocks that contain such pages by
>> using non-lru page migration.
>>
>> This patch straightforwardly adds the non-lru migration code, which means
>> adding non-lru related code to the functions that scan over pfns, collect
>> the pages to be migrated, and isolate them before migration.
>>
>> Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com>
>> Cc: Michal Hocko <mhocko@kernel.org>
>> Cc: Minchan Kim <minchan@kernel.org>
>> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
>> Cc: Vlastimil Babka <vbabka@suse.cz>
>> Cc: Andi Kleen <ak@linux.intel.com>
>> Cc: Hanjun Guo <guohanjun@huawei.com>
>> Cc: Johannes Weiner <hannes@cmpxchg.org>
>> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
>> Cc: Mel Gorman <mgorman@techsingularity.net>
>> Cc: Reza Arbab <arbab@linux.vnet.ibm.com>
>> Cc: Taku Izumi <izumi.taku@jp.fujitsu.com>
>> Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
>> Cc: Xishi Qiu <qiuxishi@huawei.com>
>> ---
>>  mm/memory_hotplug.c | 28 +++++++++++++++++-----------
>>  mm/page_alloc.c     |  8 ++++++--
>>  2 files changed, 23 insertions(+), 13 deletions(-)
>>
>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>> index ca2723d..ea1be08 100644
>> --- a/mm/memory_hotplug.c
>> +++ b/mm/memory_hotplug.c
>> @@ -1516,10 +1516,10 @@ int test_pages_in_a_zone(unsigned long start_pfn, unsigned long end_pfn)
>>  }
>>  
>>  /*
>> - * Scan pfn range [start,end) to find movable/migratable pages (LRU pages
>> - * and hugepages). We scan pfn because it's much easier than scanning over
>> - * linked list. This function returns the pfn of the first found movable
>> - * page if it's found, otherwise 0.
>> + * Scan pfn range [start,end) to find movable/migratable pages (LRU pages,
>> + * non-lru movable pages and hugepages). We scan pfn because it's much
>> + * easier than scanning over linked list. This function returns the pfn
>> + * of the first found movable page if it's found, otherwise 0.
>>   */
>>  static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
>>  {
>> @@ -1530,6 +1530,8 @@ static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
>>  			page = pfn_to_page(pfn);
>>  			if (PageLRU(page))
>>  				return pfn;
>> +			if (__PageMovable(page))
>> +				return pfn;
>>  			if (PageHuge(page)) {
>>  				if (page_huge_active(page))
>>  					return pfn;
>> @@ -1606,21 +1608,25 @@ static struct page *new_node_page(struct page *page, unsigned long private,
>>  		if (!get_page_unless_zero(page))
>>  			continue;
>>  		/*
>> -		 * We can skip free pages. And we can only deal with pages on
>> -		 * LRU.
>> +		 * We can skip free pages. And we can deal with pages on
>> +		 * LRU and non-lru movable pages.
>>  		 */
>> -		ret = isolate_lru_page(page);
>> +		if (PageLRU(page))
>> +			ret = isolate_lru_page(page);
>> +		else
>> +			ret = isolate_movable_page(page, ISOLATE_UNEVICTABLE);
>>  		if (!ret) { /* Success */
>>  			put_page(page);
>>  			list_add_tail(&page->lru, &source);
>>  			move_pages--;
>> -			inc_node_page_state(page, NR_ISOLATED_ANON +
>> -					    page_is_file_cache(page));
>> +			if (!__PageMovable(page))
> 
> If this check is identical with "if (PageLRU(page))" in this context,
> PageLRU(page) looks better because you already add same "if" above.
> 
After an lru page is isolated, its PageLRU flag is cleared, so I use
!__PageMovable here instead, because an LRU page's mapping cannot have
PAGE_MAPPING_MOVABLE set.

I have added this comment in PATCH [3/4] "HWPOISON: soft offlining for
non-lru movable page", so I do not repeat it here.

Thanks
Yisheng Xie

> Otherwise, looks good to me.
> 
> Thanks,
> Naoya Horiguchi
> .
> 


* Re: [PATCH v6 1/4] mm/migration: make isolate_movable_page() return int type
  2017-02-03  7:59 ` [PATCH v6 1/4] mm/migration: make isolate_movable_page() return int type Yisheng Xie
@ 2017-02-07  0:33   ` Minchan Kim
  0 siblings, 0 replies; 8+ messages in thread
From: Minchan Kim @ 2017-02-07  0:33 UTC (permalink / raw)
  To: Yisheng Xie
  Cc: linux-mm, linux-kernel, akpm, mhocko, ak, guohanjun, hannes,
	iamjoonsoo.kim, mgorman, n-horiguchi, arbab, izumi.taku,
	vkuznets, vbabka, qiuxishi

On Fri, Feb 03, 2017 at 03:59:27PM +0800, Yisheng Xie wrote:
> Change the return type of isolate_movable_page() from bool to int.  It
> returns 0 when it isolates a movable page successfully, and -EBUSY when
> isolation fails.
> 
> There is no functional change in this patch; it only prepares for a
> later patch.
> 
> Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com>
> Suggested-by: Michal Hocko <mhocko@kernel.org>
> Suggested-by: Minchan Kim <minchan@kernel.org>
Acked-by: Minchan Kim <minchan@kernel.org>

Thanks.


