linux-mm.kvack.org archive mirror
* [PATCH v2 00/10] make try_to_unmap simple
@ 2017-03-15  5:24 Minchan Kim
  2017-03-15  5:24 ` [PATCH v2 01/10] mm: remove unnecessary ret in page_referenced Minchan Kim
                   ` (9 more replies)
  0 siblings, 10 replies; 20+ messages in thread
From: Minchan Kim @ 2017-03-15  5:24 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, kernel-team, Johannes Weiner,
	Michal Hocko, Kirill A. Shutemov, Anshuman Khandual, Minchan Kim

Currently, try_to_unmap returns various return values (SWAP_SUCCESS,
SWAP_FAIL, SWAP_AGAIN, SWAP_DIRTY and SWAP_MLOCK). When I looked into
it, it was unnecessarily complicated, so this patchset aims to clean
it up: change ttu into a boolean function so we can remove SWAP_AGAIN,
SWAP_DIRTY and SWAP_MLOCK.
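
To illustrate where the series ends up, a condensed before/after
sketch of the caller side in shrink_page_list() (simplified from the
hunks in the individual patches; not the exact code):

    /* before: callers dispatch on several SWAP_* values */
    switch (try_to_unmap(page, ttu_flags)) {
    case SWAP_DIRTY:
            SetPageSwapBacked(page);
            /* fall through */
    case SWAP_FAIL:
            nr_unmap_fail++;
            goto activate_locked;
    case SWAP_AGAIN:
            goto keep_locked;
    case SWAP_MLOCK:
            goto cull_mlocked;
    case SWAP_SUCCESS:
            ; /* try to free the page below */
    }

    /* after: a single yes/no answer is enough */
    if (!try_to_unmap(page, ttu_flags)) {
            nr_unmap_fail++;
            goto activate_locked;
    }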

* from v1
  * add some acked-by
  * add description about rmap_one's return - Andrew

* from RFC
- http://lkml.kernel.org/r/1488436765-32350-1-git-send-email-minchan@kernel.org
  * Remove RFC tag
  * add acked-by to some patches
  * some minor fixes
  * based on mmotm-2017-03-09-16-19.


Minchan Kim (10):
  mm: remove unnecessary ret in page_referenced
  mm: remove SWAP_DIRTY in ttu
  mm: remove SWAP_MLOCK check for SWAP_SUCCESS in ttu
  mm: make the try_to_munlock void function
  mm: remove SWAP_MLOCK in ttu
  mm: remove SWAP_AGAIN in ttu
  mm: make ttu's return boolean
  mm: make rmap_walk void function
  mm: make rmap_one boolean function
  mm: remove SWAP_[SUCCESS|AGAIN|FAIL]

 include/linux/ksm.h  |  5 ++-
 include/linux/rmap.h | 25 ++++++--------
 mm/huge_memory.c     |  6 ++--
 mm/ksm.c             | 16 ++++-----
 mm/memory-failure.c  | 26 +++++++-------
 mm/migrate.c         |  4 +--
 mm/mlock.c           |  6 ++--
 mm/page_idle.c       |  4 +--
 mm/rmap.c            | 98 ++++++++++++++++++++--------------------------------
 mm/vmscan.c          | 32 +++++------------
 10 files changed, 85 insertions(+), 137 deletions(-)

-- 
2.7.4


* [PATCH v2 01/10] mm: remove unnecessary ret in page_referenced
  2017-03-15  5:24 [PATCH v2 00/10] make try_to_unmap simple Minchan Kim
@ 2017-03-15  5:24 ` Minchan Kim
  2017-03-15  5:24 ` [PATCH v2 02/10] mm: remove SWAP_DIRTY in ttu Minchan Kim
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Minchan Kim @ 2017-03-15  5:24 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, kernel-team, Johannes Weiner,
	Michal Hocko, Kirill A. Shutemov, Anshuman Khandual, Minchan Kim,
	Hillf Danton

Nobody uses the ret variable. Remove it.

Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/rmap.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 7d24bb9..9dbfa6f 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -807,7 +807,6 @@ int page_referenced(struct page *page,
 		    struct mem_cgroup *memcg,
 		    unsigned long *vm_flags)
 {
-	int ret;
 	int we_locked = 0;
 	struct page_referenced_arg pra = {
 		.mapcount = total_mapcount(page),
@@ -841,7 +840,7 @@ int page_referenced(struct page *page,
 		rwc.invalid_vma = invalid_page_referenced_vma;
 	}
 
-	ret = rmap_walk(page, &rwc);
+	rmap_walk(page, &rwc);
 	*vm_flags = pra.vm_flags;
 
 	if (we_locked)
-- 
2.7.4


* [PATCH v2 02/10] mm: remove SWAP_DIRTY in ttu
  2017-03-15  5:24 [PATCH v2 00/10] make try_to_unmap simple Minchan Kim
  2017-03-15  5:24 ` [PATCH v2 01/10] mm: remove unnecessary ret in page_referenced Minchan Kim
@ 2017-03-15  5:24 ` Minchan Kim
  2017-03-15  5:24 ` [PATCH v2 03/10] mm: remove SWAP_MLOCK check for SWAP_SUCCESS " Minchan Kim
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Minchan Kim @ 2017-03-15  5:24 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, kernel-team, Johannes Weiner,
	Michal Hocko, Kirill A. Shutemov, Anshuman Khandual, Minchan Kim,
	Hillf Danton

If we find a dirty lazyfree page, try_to_unmap_one can just
SetPageSwapBacked the page right there, as is done for PG_mlocked
pages, and return SWAP_FAIL, which is natural because the page is
not swappable right now, so that vmscan can activate it. There is
no point in introducing the new return value SWAP_DIRTY to
try_to_unmap at the moment.
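
A condensed sketch of the resulting try_to_unmap_one() path for a
redirtied MADV_FREE page (see the mm/rmap.c hunk below):

    /*
     * The page was redirtied, so it cannot be discarded.
     * Remap it into the page table ...
     */
    set_pte_at(mm, address, pvmw.pte, pteval);
    /*
     * ... and make it swap-backed again, so vmscan simply sees
     * an unswappable page and activates it.
     */
    SetPageSwapBacked(page);
    ret = SWAP_FAIL;
    page_vma_mapped_walk_done(&pvmw);
    break;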

Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 include/linux/rmap.h | 1 -
 mm/rmap.c            | 4 ++--
 mm/vmscan.c          | 3 ---
 3 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index fee10d7..b556eef 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -298,6 +298,5 @@ static inline int page_mkclean(struct page *page)
 #define SWAP_AGAIN	1
 #define SWAP_FAIL	2
 #define SWAP_MLOCK	3
-#define SWAP_DIRTY	4
 
 #endif	/* _LINUX_RMAP_H */
diff --git a/mm/rmap.c b/mm/rmap.c
index 9dbfa6f..e692cb5 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1431,7 +1431,8 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 				 * discarded. Remap the page to page table.
 				 */
 				set_pte_at(mm, address, pvmw.pte, pteval);
-				ret = SWAP_DIRTY;
+				SetPageSwapBacked(page);
+				ret = SWAP_FAIL;
 				page_vma_mapped_walk_done(&pvmw);
 				break;
 			}
@@ -1501,7 +1502,6 @@ static int page_mapcount_is_zero(struct page *page)
  * SWAP_AGAIN	- we missed a mapping, try again later
  * SWAP_FAIL	- the page is unswappable
  * SWAP_MLOCK	- page is mlocked.
- * SWAP_DIRTY	- page is dirty MADV_FREE page
  */
 int try_to_unmap(struct page *page, enum ttu_flags flags)
 {
diff --git a/mm/vmscan.c b/mm/vmscan.c
index a3656f9..b8fd656 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1142,9 +1142,6 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		if (page_mapped(page)) {
 			switch (ret = try_to_unmap(page,
 				ttu_flags | TTU_BATCH_FLUSH)) {
-			case SWAP_DIRTY:
-				SetPageSwapBacked(page);
-				/* fall through */
 			case SWAP_FAIL:
 				nr_unmap_fail++;
 				goto activate_locked;
-- 
2.7.4


* [PATCH v2 03/10] mm: remove SWAP_MLOCK check for SWAP_SUCCESS in ttu
  2017-03-15  5:24 [PATCH v2 00/10] make try_to_unmap simple Minchan Kim
  2017-03-15  5:24 ` [PATCH v2 01/10] mm: remove unnecessary ret in page_referenced Minchan Kim
  2017-03-15  5:24 ` [PATCH v2 02/10] mm: remove SWAP_DIRTY in ttu Minchan Kim
@ 2017-03-15  5:24 ` Minchan Kim
  2017-03-15  5:24 ` [PATCH v2 04/10] mm: make the try_to_munlock void function Minchan Kim
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Minchan Kim @ 2017-03-15  5:24 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, kernel-team, Johannes Weiner,
	Michal Hocko, Kirill A. Shutemov, Anshuman Khandual, Minchan Kim

If the page is mapped and rescued in try_to_unmap_one,
page_mapcount(page) == 0 cannot be true, so the page_mapcount check
in try_to_unmap is enough to return SWAP_SUCCESS.
IOW, the SWAP_MLOCK check is redundant, so remove it.
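
As a sketch, the tail of try_to_unmap() reduces from

    if (ret != SWAP_MLOCK && !page_mapcount(page))
            ret = SWAP_SUCCESS;

to

    /*
     * A rescued mlocked page keeps its pte, so ret == SWAP_MLOCK
     * implies page_mapcount(page) > 0; the extra test is dead.
     */
    if (!page_mapcount(page))
            ret = SWAP_SUCCESS;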

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/rmap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index e692cb5..bdc7310 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1530,7 +1530,7 @@ int try_to_unmap(struct page *page, enum ttu_flags flags)
 	else
 		ret = rmap_walk(page, &rwc);
 
-	if (ret != SWAP_MLOCK && !page_mapcount(page))
+	if (!page_mapcount(page))
 		ret = SWAP_SUCCESS;
 	return ret;
 }
-- 
2.7.4


* [PATCH v2 04/10] mm: make the try_to_munlock void function
  2017-03-15  5:24 [PATCH v2 00/10] make try_to_unmap simple Minchan Kim
                   ` (2 preceding siblings ...)
  2017-03-15  5:24 ` [PATCH v2 03/10] mm: remove SWAP_MLOCK check for SWAP_SUCCESS " Minchan Kim
@ 2017-03-15  5:24 ` Minchan Kim
  2017-03-15  7:31   ` Vlastimil Babka
  2017-04-08  3:18   ` alexander.levin
  2017-03-15  5:24 ` [PATCH v2 05/10] mm: remove SWAP_MLOCK in ttu Minchan Kim
                   ` (5 subsequent siblings)
  9 siblings, 2 replies; 20+ messages in thread
From: Minchan Kim @ 2017-03-15  5:24 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, kernel-team, Johannes Weiner,
	Michal Hocko, Kirill A. Shutemov, Anshuman Khandual, Minchan Kim,
	Vlastimil Babka

try_to_munlock returns SWAP_MLOCK if one of the VMAs mapping the
page has the VM_LOCKED flag. At that point, the VM sets PG_mlocked
on the page unless the page is a pte-mapped THP, which cannot be
mlocked.

With that, __munlock_isolated_page can use PageMlocked to check
whether try_to_munlock succeeded or not, without relying on
try_to_munlock's retval. It helps to make try_to_unmap/try_to_unmap_one
simpler with the upcoming patches.
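
The caller-side pattern then becomes (a condensed sketch of the
mm/mlock.c hunk below):

    if (page_mapcount(page) > 1)
            try_to_munlock(page);   /* returns void now */

    /* success is observable on the page flags */
    if (!PageMlocked(page))
            count_vm_event(UNEVICTABLE_PGMUNLOCKED);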

Cc: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 include/linux/rmap.h |  2 +-
 mm/mlock.c           |  6 ++----
 mm/rmap.c            | 17 +++++------------
 3 files changed, 8 insertions(+), 17 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index b556eef..1b0cd4c 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -235,7 +235,7 @@ int page_mkclean(struct page *);
  * called in munlock()/munmap() path to check for other vmas holding
  * the page mlocked.
  */
-int try_to_munlock(struct page *);
+void try_to_munlock(struct page *);
 
 void remove_migration_ptes(struct page *old, struct page *new, bool locked);
 
diff --git a/mm/mlock.c b/mm/mlock.c
index 02f1382..9660ee5 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -123,17 +123,15 @@ static bool __munlock_isolate_lru_page(struct page *page, bool getpage)
  */
 static void __munlock_isolated_page(struct page *page)
 {
-	int ret = SWAP_AGAIN;
-
 	/*
 	 * Optimization: if the page was mapped just once, that's our mapping
 	 * and we don't need to check all the other vmas.
 	 */
 	if (page_mapcount(page) > 1)
-		ret = try_to_munlock(page);
+		try_to_munlock(page);
 
 	/* Did try_to_unlock() succeed or punt? */
-	if (ret != SWAP_MLOCK)
+	if (!PageMlocked(page))
 		count_vm_event(UNEVICTABLE_PGMUNLOCKED);
 
 	putback_lru_page(page);
diff --git a/mm/rmap.c b/mm/rmap.c
index bdc7310..2f1fbd9 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1547,18 +1547,10 @@ static int page_not_mapped(struct page *page)
  * Called from munlock code.  Checks all of the VMAs mapping the page
  * to make sure nobody else has this page mlocked. The page will be
  * returned with PG_mlocked cleared if no other vmas have it mlocked.
- *
- * Return values are:
- *
- * SWAP_AGAIN	- no vma is holding page mlocked, or,
- * SWAP_AGAIN	- page mapped in mlocked vma -- couldn't acquire mmap sem
- * SWAP_FAIL	- page cannot be located at present
- * SWAP_MLOCK	- page is now mlocked.
  */
-int try_to_munlock(struct page *page)
-{
-	int ret;
 
+void try_to_munlock(struct page *page)
+{
 	struct rmap_walk_control rwc = {
 		.rmap_one = try_to_unmap_one,
 		.arg = (void *)TTU_MUNLOCK,
@@ -1568,9 +1560,10 @@ int try_to_munlock(struct page *page)
 	};
 
 	VM_BUG_ON_PAGE(!PageLocked(page) || PageLRU(page), page);
+	VM_BUG_ON_PAGE(PageMlocked(page), page);
+	VM_BUG_ON_PAGE(PageCompound(page) && PageDoubleMap(page), page);
 
-	ret = rmap_walk(page, &rwc);
-	return ret;
+	rmap_walk(page, &rwc);
 }
 
 void __put_anon_vma(struct anon_vma *anon_vma)
-- 
2.7.4


* [PATCH v2 05/10] mm: remove SWAP_MLOCK in ttu
  2017-03-15  5:24 [PATCH v2 00/10] make try_to_unmap simple Minchan Kim
                   ` (3 preceding siblings ...)
  2017-03-15  5:24 ` [PATCH v2 04/10] mm: make the try_to_munlock void function Minchan Kim
@ 2017-03-15  5:24 ` Minchan Kim
  2017-03-15  5:24 ` [PATCH v2 06/10] mm: remove SWAP_AGAIN " Minchan Kim
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Minchan Kim @ 2017-03-15  5:24 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, kernel-team, Johannes Weiner,
	Michal Hocko, Kirill A. Shutemov, Anshuman Khandual, Minchan Kim

ttu doesn't need to return SWAP_MLOCK. Instead, just return
SWAP_FAIL, because it means the page is not swappable, so it should
move to another LRU list (active or unevictable). The putback friends
will move it to the right list depending on the page's LRU flags.
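
For reference, the putback side roughly does the following
(simplified sketch of putback_lru_page(); not part of this patch):

    if (page_evictable(page))
            lru_cache_add(page);    /* back to active/inactive LRU */
    else
            /* e.g. PG_mlocked pages */
            add_page_to_unevictable_list(page);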

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 include/linux/rmap.h |  1 -
 mm/rmap.c            |  3 +--
 mm/vmscan.c          | 20 +++++++-------------
 3 files changed, 8 insertions(+), 16 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 1b0cd4c..3630d4d 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -297,6 +297,5 @@ static inline int page_mkclean(struct page *page)
 #define SWAP_SUCCESS	0
 #define SWAP_AGAIN	1
 #define SWAP_FAIL	2
-#define SWAP_MLOCK	3
 
 #endif	/* _LINUX_RMAP_H */
diff --git a/mm/rmap.c b/mm/rmap.c
index 2f1fbd9..a5af1e1 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1324,7 +1324,7 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 					 */
 					mlock_vma_page(page);
 				}
-				ret = SWAP_MLOCK;
+				ret = SWAP_FAIL;
 				page_vma_mapped_walk_done(&pvmw);
 				break;
 			}
@@ -1501,7 +1501,6 @@ static int page_mapcount_is_zero(struct page *page)
  * SWAP_SUCCESS	- we succeeded in removing all mappings
  * SWAP_AGAIN	- we missed a mapping, try again later
  * SWAP_FAIL	- the page is unswappable
- * SWAP_MLOCK	- page is mlocked.
  */
 int try_to_unmap(struct page *page, enum ttu_flags flags)
 {
diff --git a/mm/vmscan.c b/mm/vmscan.c
index b8fd656..2a208f0 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -982,7 +982,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		sc->nr_scanned++;
 
 		if (unlikely(!page_evictable(page)))
-			goto cull_mlocked;
+			goto activate_locked;
 
 		if (!sc->may_unmap && page_mapped(page))
 			goto keep_locked;
@@ -1147,8 +1147,6 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 				goto activate_locked;
 			case SWAP_AGAIN:
 				goto keep_locked;
-			case SWAP_MLOCK:
-				goto cull_mlocked;
 			case SWAP_SUCCESS:
 				; /* try to free the page below */
 			}
@@ -1290,20 +1288,16 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		list_add(&page->lru, &free_pages);
 		continue;
 
-cull_mlocked:
-		if (PageSwapCache(page))
-			try_to_free_swap(page);
-		unlock_page(page);
-		list_add(&page->lru, &ret_pages);
-		continue;
-
 activate_locked:
 		/* Not a candidate for swapping, so reclaim swap space. */
-		if (PageSwapCache(page) && mem_cgroup_swap_full(page))
+		if (PageSwapCache(page) && (mem_cgroup_swap_full(page) ||
+						PageMlocked(page)))
 			try_to_free_swap(page);
 		VM_BUG_ON_PAGE(PageActive(page), page);
-		SetPageActive(page);
-		pgactivate++;
+		if (!PageMlocked(page)) {
+			SetPageActive(page);
+			pgactivate++;
+		}
 keep_locked:
 		unlock_page(page);
 keep:
-- 
2.7.4


* [PATCH v2 06/10] mm: remove SWAP_AGAIN in ttu
  2017-03-15  5:24 [PATCH v2 00/10] make try_to_unmap simple Minchan Kim
                   ` (4 preceding siblings ...)
  2017-03-15  5:24 ` [PATCH v2 05/10] mm: remove SWAP_MLOCK in ttu Minchan Kim
@ 2017-03-15  5:24 ` Minchan Kim
  2017-03-15  5:24 ` [PATCH v2 07/10] mm: make ttu's return boolean Minchan Kim
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Minchan Kim @ 2017-03-15  5:24 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, kernel-team, Johannes Weiner,
	Michal Hocko, Kirill A. Shutemov, Anshuman Khandual, Minchan Kim

In 2002, [1] introduced SWAP_AGAIN.
At that time, try_to_unmap_one used spin_trylock(&mm->page_table_lock),
so it was really easy to contend on the lock and fail to take it;
returning SWAP_AGAIN to preserve the page's LRU status made sense.

However, we have since changed to a mutex-based lock and can block
instead of skipping the pte, so there is only a tiny window left to
return SWAP_AGAIN. Remove SWAP_AGAIN and just return SWAP_FAIL.

[1] c48c43e, minimal rmap
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/rmap.c   | 11 +++--------
 mm/vmscan.c |  2 --
 2 files changed, 3 insertions(+), 10 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index a5af1e1..612682c 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1499,13 +1499,10 @@ static int page_mapcount_is_zero(struct page *page)
  * Return values are:
  *
  * SWAP_SUCCESS	- we succeeded in removing all mappings
- * SWAP_AGAIN	- we missed a mapping, try again later
  * SWAP_FAIL	- the page is unswappable
  */
 int try_to_unmap(struct page *page, enum ttu_flags flags)
 {
-	int ret;
-
 	struct rmap_walk_control rwc = {
 		.rmap_one = try_to_unmap_one,
 		.arg = (void *)flags,
@@ -1525,13 +1522,11 @@ int try_to_unmap(struct page *page, enum ttu_flags flags)
 		rwc.invalid_vma = invalid_migration_vma;
 
 	if (flags & TTU_RMAP_LOCKED)
-		ret = rmap_walk_locked(page, &rwc);
+		rmap_walk_locked(page, &rwc);
 	else
-		ret = rmap_walk(page, &rwc);
+		rmap_walk(page, &rwc);
 
-	if (!page_mapcount(page))
-		ret = SWAP_SUCCESS;
-	return ret;
+	return !page_mapcount(page) ? SWAP_SUCCESS : SWAP_FAIL;
 }
 
 static int page_not_mapped(struct page *page)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2a208f0..7727fbe 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1145,8 +1145,6 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 			case SWAP_FAIL:
 				nr_unmap_fail++;
 				goto activate_locked;
-			case SWAP_AGAIN:
-				goto keep_locked;
 			case SWAP_SUCCESS:
 				; /* try to free the page below */
 			}
-- 
2.7.4


* [PATCH v2 07/10] mm: make ttu's return boolean
  2017-03-15  5:24 [PATCH v2 00/10] make try_to_unmap simple Minchan Kim
                   ` (5 preceding siblings ...)
  2017-03-15  5:24 ` [PATCH v2 06/10] mm: remove SWAP_AGAIN " Minchan Kim
@ 2017-03-15  5:24 ` Minchan Kim
  2017-03-15  5:24 ` [PATCH v2 08/10] mm: make rmap_walk void function Minchan Kim
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Minchan Kim @ 2017-03-15  5:24 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, kernel-team, Johannes Weiner,
	Michal Hocko, Kirill A. Shutemov, Anshuman Khandual, Minchan Kim,
	Naoya Horiguchi

try_to_unmap now returns only SWAP_SUCCESS or SWAP_FAIL, so it is a
natural fit for a boolean return value. This patch changes it.

Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 include/linux/rmap.h |  4 ++--
 mm/huge_memory.c     |  6 +++---
 mm/memory-failure.c  | 26 ++++++++++++--------------
 mm/rmap.c            |  8 +++-----
 mm/vmscan.c          |  7 +------
 5 files changed, 21 insertions(+), 30 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 3630d4d..6028c38 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -191,7 +191,7 @@ static inline void page_dup_rmap(struct page *page, bool compound)
 int page_referenced(struct page *, int is_locked,
 			struct mem_cgroup *memcg, unsigned long *vm_flags);
 
-int try_to_unmap(struct page *, enum ttu_flags flags);
+bool try_to_unmap(struct page *, enum ttu_flags flags);
 
 /* Avoid racy checks */
 #define PVMW_SYNC		(1 << 0)
@@ -281,7 +281,7 @@ static inline int page_referenced(struct page *page, int is_locked,
 	return 0;
 }
 
-#define try_to_unmap(page, refs) SWAP_FAIL
+#define try_to_unmap(page, refs) false
 
 static inline int page_mkclean(struct page *page)
 {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2b4120f..033ccee 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2139,15 +2139,15 @@ static void freeze_page(struct page *page)
 {
 	enum ttu_flags ttu_flags = TTU_IGNORE_MLOCK | TTU_IGNORE_ACCESS |
 		TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD;
-	int ret;
+	bool unmap_success;
 
 	VM_BUG_ON_PAGE(!PageHead(page), page);
 
 	if (PageAnon(page))
 		ttu_flags |= TTU_MIGRATION;
 
-	ret = try_to_unmap(page, ttu_flags);
-	VM_BUG_ON_PAGE(ret, page);
+	unmap_success = try_to_unmap(page, ttu_flags);
+	VM_BUG_ON_PAGE(!unmap_success, page);
 }
 
 static void unfreeze_page(struct page *page)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index f85adfe5..3d3cf6a 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -322,7 +322,7 @@ static void add_to_kill(struct task_struct *tsk, struct page *p,
  * wrong earlier.
  */
 static void kill_procs(struct list_head *to_kill, int forcekill, int trapno,
-			  int fail, struct page *page, unsigned long pfn,
+			  bool fail, struct page *page, unsigned long pfn,
 			  int flags)
 {
 	struct to_kill *tk, *next;
@@ -904,13 +904,13 @@ EXPORT_SYMBOL_GPL(get_hwpoison_page);
  * Do all that is necessary to remove user space mappings. Unmap
  * the pages and send SIGBUS to the processes if the data was dirty.
  */
-static int hwpoison_user_mappings(struct page *p, unsigned long pfn,
+static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
 				  int trapno, int flags, struct page **hpagep)
 {
 	enum ttu_flags ttu = TTU_IGNORE_MLOCK | TTU_IGNORE_ACCESS;
 	struct address_space *mapping;
 	LIST_HEAD(tokill);
-	int ret;
+	bool unmap_success;
 	int kill = 1, forcekill;
 	struct page *hpage = *hpagep;
 
@@ -919,20 +919,20 @@ static int hwpoison_user_mappings(struct page *p, unsigned long pfn,
 	 * other types of pages.
 	 */
 	if (PageReserved(p) || PageSlab(p))
-		return SWAP_SUCCESS;
+		return true;
 	if (!(PageLRU(hpage) || PageHuge(p)))
-		return SWAP_SUCCESS;
+		return true;
 
 	/*
 	 * This check implies we don't kill processes if their pages
 	 * are in the swap cache early. Those are always late kills.
 	 */
 	if (!page_mapped(hpage))
-		return SWAP_SUCCESS;
+		return true;
 
 	if (PageKsm(p)) {
 		pr_err("Memory failure: %#lx: can't handle KSM pages.\n", pfn);
-		return SWAP_FAIL;
+		return false;
 	}
 
 	if (PageSwapCache(p)) {
@@ -971,8 +971,8 @@ static int hwpoison_user_mappings(struct page *p, unsigned long pfn,
 	if (kill)
 		collect_procs(hpage, &tokill, flags & MF_ACTION_REQUIRED);
 
-	ret = try_to_unmap(hpage, ttu);
-	if (ret != SWAP_SUCCESS)
+	unmap_success = try_to_unmap(hpage, ttu);
+	if (!unmap_success)
 		pr_err("Memory failure: %#lx: failed to unmap page (mapcount=%d)\n",
 		       pfn, page_mapcount(hpage));
 
@@ -987,10 +987,9 @@ static int hwpoison_user_mappings(struct page *p, unsigned long pfn,
 	 * any accesses to the poisoned memory.
 	 */
 	forcekill = PageDirty(hpage) || (flags & MF_MUST_KILL);
-	kill_procs(&tokill, forcekill, trapno,
-		      ret != SWAP_SUCCESS, p, pfn, flags);
+	kill_procs(&tokill, forcekill, trapno, !unmap_success, p, pfn, flags);
 
-	return ret;
+	return unmap_success;
 }
 
 static void set_page_hwpoison_huge_page(struct page *hpage)
@@ -1230,8 +1229,7 @@ int memory_failure(unsigned long pfn, int trapno, int flags)
 	 * When the raw error page is thp tail page, hpage points to the raw
 	 * page after thp split.
 	 */
-	if (hwpoison_user_mappings(p, pfn, trapno, flags, &hpage)
-	    != SWAP_SUCCESS) {
+	if (!hwpoison_user_mappings(p, pfn, trapno, flags, &hpage)) {
 		action_result(pfn, MF_MSG_UNMAP_FAILED, MF_IGNORED);
 		res = -EBUSY;
 		goto out;
diff --git a/mm/rmap.c b/mm/rmap.c
index 612682c..04d5355 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1496,12 +1496,10 @@ static int page_mapcount_is_zero(struct page *page)
  *
  * Tries to remove all the page table entries which are mapping this
  * page, used in the pageout path.  Caller must hold the page lock.
- * Return values are:
  *
- * SWAP_SUCCESS	- we succeeded in removing all mappings
- * SWAP_FAIL	- the page is unswappable
+ * If unmap is successful, return true. Otherwise, false.
  */
-int try_to_unmap(struct page *page, enum ttu_flags flags)
+bool try_to_unmap(struct page *page, enum ttu_flags flags)
 {
 	struct rmap_walk_control rwc = {
 		.rmap_one = try_to_unmap_one,
@@ -1526,7 +1524,7 @@ int try_to_unmap(struct page *page, enum ttu_flags flags)
 	else
 		rmap_walk(page, &rwc);
 
-	return !page_mapcount(page) ? SWAP_SUCCESS : SWAP_FAIL;
+	return !page_mapcount(page) ? true : false;
 }
 
 static int page_not_mapped(struct page *page)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7727fbe..beffe79 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -967,7 +967,6 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		int may_enter_fs;
 		enum page_references references = PAGEREF_RECLAIM_CLEAN;
 		bool dirty, writeback;
-		int ret = SWAP_SUCCESS;
 
 		cond_resched();
 
@@ -1140,13 +1139,9 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		 * processes. Try to unmap it here.
 		 */
 		if (page_mapped(page)) {
-			switch (ret = try_to_unmap(page,
-				ttu_flags | TTU_BATCH_FLUSH)) {
-			case SWAP_FAIL:
+			if (!try_to_unmap(page, ttu_flags | TTU_BATCH_FLUSH)) {
 				nr_unmap_fail++;
 				goto activate_locked;
-			case SWAP_SUCCESS:
-				; /* try to free the page below */
 			}
 		}
 
-- 
2.7.4


* [PATCH v2 08/10] mm: make rmap_walk void function
  2017-03-15  5:24 [PATCH v2 00/10] make try_to_unmap simple Minchan Kim
                   ` (6 preceding siblings ...)
  2017-03-15  5:24 ` [PATCH v2 07/10] mm: make ttu's return boolean Minchan Kim
@ 2017-03-15  5:24 ` Minchan Kim
  2017-03-15  5:24 ` [PATCH v2 09/10] mm: make rmap_one boolean function Minchan Kim
  2017-03-15  5:24 ` [PATCH v2 10/10] mm: remove SWAP_[SUCCESS|AGAIN|FAIL] Minchan Kim
  9 siblings, 0 replies; 20+ messages in thread
From: Minchan Kim @ 2017-03-15  5:24 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, kernel-team, Johannes Weiner,
	Michal Hocko, Kirill A. Shutemov, Anshuman Khandual, Minchan Kim

There is no user of the return value of rmap_walk and its friends,
so this patch makes them void functions.

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 include/linux/ksm.h  |  5 ++---
 include/linux/rmap.h |  4 ++--
 mm/ksm.c             | 16 ++++++----------
 mm/rmap.c            | 32 +++++++++++++-------------------
 4 files changed, 23 insertions(+), 34 deletions(-)

diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index e1cfda4..78b44a0 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -61,7 +61,7 @@ static inline void set_page_stable_node(struct page *page,
 struct page *ksm_might_need_to_copy(struct page *page,
 			struct vm_area_struct *vma, unsigned long address);
 
-int rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc);
+void rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc);
 void ksm_migrate_page(struct page *newpage, struct page *oldpage);
 
 #else  /* !CONFIG_KSM */
@@ -94,10 +94,9 @@ static inline int page_referenced_ksm(struct page *page,
 	return 0;
 }
 
-static inline int rmap_walk_ksm(struct page *page,
+static inline void rmap_walk_ksm(struct page *page,
 			struct rmap_walk_control *rwc)
 {
-	return 0;
 }
 
 static inline void ksm_migrate_page(struct page *newpage, struct page *oldpage)
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 6028c38..1d7d457c 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -264,8 +264,8 @@ struct rmap_walk_control {
 	bool (*invalid_vma)(struct vm_area_struct *vma, void *arg);
 };
 
-int rmap_walk(struct page *page, struct rmap_walk_control *rwc);
-int rmap_walk_locked(struct page *page, struct rmap_walk_control *rwc);
+void rmap_walk(struct page *page, struct rmap_walk_control *rwc);
+void rmap_walk_locked(struct page *page, struct rmap_walk_control *rwc);
 
 #else	/* !CONFIG_MMU */
 
diff --git a/mm/ksm.c b/mm/ksm.c
index 19b4f2d..6edffb9 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1933,11 +1933,10 @@ struct page *ksm_might_need_to_copy(struct page *page,
 	return new_page;
 }
 
-int rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc)
+void rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc)
 {
 	struct stable_node *stable_node;
 	struct rmap_item *rmap_item;
-	int ret = SWAP_AGAIN;
 	int search_new_forks = 0;
 
 	VM_BUG_ON_PAGE(!PageKsm(page), page);
@@ -1950,7 +1949,7 @@ int rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc)
 
 	stable_node = page_stable_node(page);
 	if (!stable_node)
-		return ret;
+		return;
 again:
 	hlist_for_each_entry(rmap_item, &stable_node->hlist, hlist) {
 		struct anon_vma *anon_vma = rmap_item->anon_vma;
@@ -1978,23 +1977,20 @@ int rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc)
 			if (rwc->invalid_vma && rwc->invalid_vma(vma, rwc->arg))
 				continue;
 
-			ret = rwc->rmap_one(page, vma,
-					rmap_item->address, rwc->arg);
-			if (ret != SWAP_AGAIN) {
+			if (SWAP_AGAIN != rwc->rmap_one(page, vma,
+					rmap_item->address, rwc->arg)) {
 				anon_vma_unlock_read(anon_vma);
-				goto out;
+				return;
 			}
 			if (rwc->done && rwc->done(page)) {
 				anon_vma_unlock_read(anon_vma);
-				goto out;
+				return;
 			}
 		}
 		anon_vma_unlock_read(anon_vma);
 	}
 	if (!search_new_forks++)
 		goto again;
-out:
-	return ret;
 }
 
 #ifdef CONFIG_MIGRATION
diff --git a/mm/rmap.c b/mm/rmap.c
index 04d5355..987b0d2 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1603,13 +1603,12 @@ static struct anon_vma *rmap_walk_anon_lock(struct page *page,
  * vm_flags for that VMA.  That should be OK, because that vma shouldn't be
  * LOCKED.
  */
-static int rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc,
+static void rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc,
 		bool locked)
 {
 	struct anon_vma *anon_vma;
 	pgoff_t pgoff_start, pgoff_end;
 	struct anon_vma_chain *avc;
-	int ret = SWAP_AGAIN;
 
 	if (locked) {
 		anon_vma = page_anon_vma(page);
@@ -1619,7 +1618,7 @@ static int rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc,
 		anon_vma = rmap_walk_anon_lock(page, rwc);
 	}
 	if (!anon_vma)
-		return ret;
+		return;
 
 	pgoff_start = page_to_pgoff(page);
 	pgoff_end = pgoff_start + hpage_nr_pages(page) - 1;
@@ -1633,8 +1632,7 @@ static int rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc,
 		if (rwc->invalid_vma && rwc->invalid_vma(vma, rwc->arg))
 			continue;
 
-		ret = rwc->rmap_one(page, vma, address, rwc->arg);
-		if (ret != SWAP_AGAIN)
+		if (SWAP_AGAIN != rwc->rmap_one(page, vma, address, rwc->arg))
 			break;
 		if (rwc->done && rwc->done(page))
 			break;
@@ -1642,7 +1640,6 @@ static int rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc,
 
 	if (!locked)
 		anon_vma_unlock_read(anon_vma);
-	return ret;
 }
 
 /*
@@ -1658,13 +1655,12 @@ static int rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc,
  * vm_flags for that VMA.  That should be OK, because that vma shouldn't be
  * LOCKED.
  */
-static int rmap_walk_file(struct page *page, struct rmap_walk_control *rwc,
+static void rmap_walk_file(struct page *page, struct rmap_walk_control *rwc,
 		bool locked)
 {
 	struct address_space *mapping = page_mapping(page);
 	pgoff_t pgoff_start, pgoff_end;
 	struct vm_area_struct *vma;
-	int ret = SWAP_AGAIN;
 
 	/*
 	 * The page lock not only makes sure that page->mapping cannot
@@ -1675,7 +1671,7 @@ static int rmap_walk_file(struct page *page, struct rmap_walk_control *rwc,
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 
 	if (!mapping)
-		return ret;
+		return;
 
 	pgoff_start = page_to_pgoff(page);
 	pgoff_end = pgoff_start + hpage_nr_pages(page) - 1;
@@ -1690,8 +1686,7 @@ static int rmap_walk_file(struct page *page, struct rmap_walk_control *rwc,
 		if (rwc->invalid_vma && rwc->invalid_vma(vma, rwc->arg))
 			continue;
 
-		ret = rwc->rmap_one(page, vma, address, rwc->arg);
-		if (ret != SWAP_AGAIN)
+		if (SWAP_AGAIN != rwc->rmap_one(page, vma, address, rwc->arg))
 			goto done;
 		if (rwc->done && rwc->done(page))
 			goto done;
@@ -1700,28 +1695,27 @@ static int rmap_walk_file(struct page *page, struct rmap_walk_control *rwc,
 done:
 	if (!locked)
 		i_mmap_unlock_read(mapping);
-	return ret;
 }
 
-int rmap_walk(struct page *page, struct rmap_walk_control *rwc)
+void rmap_walk(struct page *page, struct rmap_walk_control *rwc)
 {
 	if (unlikely(PageKsm(page)))
-		return rmap_walk_ksm(page, rwc);
+		rmap_walk_ksm(page, rwc);
 	else if (PageAnon(page))
-		return rmap_walk_anon(page, rwc, false);
+		rmap_walk_anon(page, rwc, false);
 	else
-		return rmap_walk_file(page, rwc, false);
+		rmap_walk_file(page, rwc, false);
 }
 
 /* Like rmap_walk, but caller holds relevant rmap lock */
-int rmap_walk_locked(struct page *page, struct rmap_walk_control *rwc)
+void rmap_walk_locked(struct page *page, struct rmap_walk_control *rwc)
 {
 	/* no ksm support for now */
 	VM_BUG_ON_PAGE(PageKsm(page), page);
 	if (PageAnon(page))
-		return rmap_walk_anon(page, rwc, true);
+		rmap_walk_anon(page, rwc, true);
 	else
-		return rmap_walk_file(page, rwc, true);
+		rmap_walk_file(page, rwc, true);
 }
 
 #ifdef CONFIG_HUGETLB_PAGE
-- 
2.7.4


* [PATCH v2 09/10] mm: make rmap_one boolean function
  2017-03-15  5:24 [PATCH v2 00/10] make try_to_unmap simple Minchan Kim
                   ` (7 preceding siblings ...)
  2017-03-15  5:24 ` [PATCH v2 08/10] mm: make rmap_walk void function Minchan Kim
@ 2017-03-15  5:24 ` Minchan Kim
  2017-03-15  5:24 ` [PATCH v2 10/10] mm: remove SWAP_[SUCCESS|AGAIN|FAIL] Minchan Kim
  9 siblings, 0 replies; 20+ messages in thread
From: Minchan Kim @ 2017-03-15  5:24 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, kernel-team, Johannes Weiner,
	Michal Hocko, Kirill A. Shutemov, Anshuman Khandual, Minchan Kim

rmap_one's return value controls whether rmap_walk should continue
scanning other ptes or not, so it is a target for conversion to
boolean: return true if the scan should continue; otherwise, return
false to stop the scanning.

This patch makes rmap_one's return value boolean.
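
Under the new contract a callback looks like this (illustrative
sketch; do_one_pte() is a made-up helper standing in for the real
per-pte work):

    static bool example_rmap_one(struct page *page,
                    struct vm_area_struct *vma,
                    unsigned long address, void *arg)
    {
            if (!do_one_pte(page, vma, address, arg))
                    return false;   /* stop the rmap walk */

            return true;            /* keep scanning other vmas */
    }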

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 include/linux/rmap.h |  6 +++++-
 mm/ksm.c             |  2 +-
 mm/migrate.c         |  4 ++--
 mm/page_idle.c       |  4 ++--
 mm/rmap.c            | 30 +++++++++++++++---------------
 5 files changed, 25 insertions(+), 21 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 1d7d457c..13ed232 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -257,7 +257,11 @@ int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma);
  */
 struct rmap_walk_control {
 	void *arg;
-	int (*rmap_one)(struct page *page, struct vm_area_struct *vma,
+	/*
+	 * Return false if page table scanning in rmap_walk should be stopped.
+	 * Otherwise, return true.
+	 */
+	bool (*rmap_one)(struct page *page, struct vm_area_struct *vma,
 					unsigned long addr, void *arg);
 	int (*done)(struct page *page);
 	struct anon_vma *(*anon_lock)(struct page *page);
diff --git a/mm/ksm.c b/mm/ksm.c
index 6edffb9..d9fc0e4 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1977,7 +1977,7 @@ void rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc)
 			if (rwc->invalid_vma && rwc->invalid_vma(vma, rwc->arg))
 				continue;
 
-			if (SWAP_AGAIN != rwc->rmap_one(page, vma,
+			if (!rwc->rmap_one(page, vma,
 					rmap_item->address, rwc->arg)) {
 				anon_vma_unlock_read(anon_vma);
 				return;
diff --git a/mm/migrate.c b/mm/migrate.c
index e0cb4b7..8e9f1e8 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -194,7 +194,7 @@ void putback_movable_pages(struct list_head *l)
 /*
  * Restore a potential migration pte to a working pte entry
  */
-static int remove_migration_pte(struct page *page, struct vm_area_struct *vma,
+static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
 				 unsigned long addr, void *old)
 {
 	struct page_vma_mapped_walk pvmw = {
@@ -250,7 +250,7 @@ static int remove_migration_pte(struct page *page, struct vm_area_struct *vma,
 		update_mmu_cache(vma, pvmw.address, pvmw.pte);
 	}
 
-	return SWAP_AGAIN;
+	return true;
 }
 
 /*
diff --git a/mm/page_idle.c b/mm/page_idle.c
index b0ee56c..1b0f48c 100644
--- a/mm/page_idle.c
+++ b/mm/page_idle.c
@@ -50,7 +50,7 @@ static struct page *page_idle_get_page(unsigned long pfn)
 	return page;
 }
 
-static int page_idle_clear_pte_refs_one(struct page *page,
+static bool page_idle_clear_pte_refs_one(struct page *page,
 					struct vm_area_struct *vma,
 					unsigned long addr, void *arg)
 {
@@ -84,7 +84,7 @@ static int page_idle_clear_pte_refs_one(struct page *page,
 		 */
 		set_page_young(page);
 	}
-	return SWAP_AGAIN;
+	return true;
 }
 
 static void page_idle_clear_pte_refs(struct page *page)
diff --git a/mm/rmap.c b/mm/rmap.c
index 987b0d2..aa25fde 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -719,7 +719,7 @@ struct page_referenced_arg {
 /*
  * arg: page_referenced_arg will be passed
  */
-static int page_referenced_one(struct page *page, struct vm_area_struct *vma,
+static bool page_referenced_one(struct page *page, struct vm_area_struct *vma,
 			unsigned long address, void *arg)
 {
 	struct page_referenced_arg *pra = arg;
@@ -736,7 +736,7 @@ static int page_referenced_one(struct page *page, struct vm_area_struct *vma,
 		if (vma->vm_flags & VM_LOCKED) {
 			page_vma_mapped_walk_done(&pvmw);
 			pra->vm_flags |= VM_LOCKED;
-			return SWAP_FAIL; /* To break the loop */
+			return false; /* To break the loop */
 		}
 
 		if (pvmw.pte) {
@@ -776,9 +776,9 @@ static int page_referenced_one(struct page *page, struct vm_area_struct *vma,
 	}
 
 	if (!pra->mapcount)
-		return SWAP_SUCCESS; /* To break the loop */
+		return false; /* To break the loop */
 
-	return SWAP_AGAIN;
+	return true;
 }
 
 static bool invalid_page_referenced_vma(struct vm_area_struct *vma, void *arg)
@@ -849,7 +849,7 @@ int page_referenced(struct page *page,
 	return pra.referenced;
 }
 
-static int page_mkclean_one(struct page *page, struct vm_area_struct *vma,
+static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 			    unsigned long address, void *arg)
 {
 	struct page_vma_mapped_walk pvmw = {
@@ -902,7 +902,7 @@ static int page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 		}
 	}
 
-	return SWAP_AGAIN;
+	return true;
 }
 
 static bool invalid_mkclean_vma(struct vm_area_struct *vma, void *arg)
@@ -1285,7 +1285,7 @@ void page_remove_rmap(struct page *page, bool compound)
 /*
  * @arg: enum ttu_flags will be passed to this argument
  */
-static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
+static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 		     unsigned long address, void *arg)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -1296,12 +1296,12 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 	};
 	pte_t pteval;
 	struct page *subpage;
-	int ret = SWAP_AGAIN;
+	bool ret = true;
 	enum ttu_flags flags = (enum ttu_flags)arg;
 
 	/* munlock has nothing to gain from examining un-locked vmas */
 	if ((flags & TTU_MUNLOCK) && !(vma->vm_flags & VM_LOCKED))
-		return SWAP_AGAIN;
+		return true;
 
 	if (flags & TTU_SPLIT_HUGE_PMD) {
 		split_huge_pmd_address(vma, address,
@@ -1324,7 +1324,7 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 					 */
 					mlock_vma_page(page);
 				}
-				ret = SWAP_FAIL;
+				ret = false;
 				page_vma_mapped_walk_done(&pvmw);
 				break;
 			}
@@ -1342,7 +1342,7 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 		if (!(flags & TTU_IGNORE_ACCESS)) {
 			if (ptep_clear_flush_young_notify(vma, address,
 						pvmw.pte)) {
-				ret = SWAP_FAIL;
+				ret = false;
 				page_vma_mapped_walk_done(&pvmw);
 				break;
 			}
@@ -1432,14 +1432,14 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 				 */
 				set_pte_at(mm, address, pvmw.pte, pteval);
 				SetPageSwapBacked(page);
-				ret = SWAP_FAIL;
+				ret = false;
 				page_vma_mapped_walk_done(&pvmw);
 				break;
 			}
 
 			if (swap_duplicate(entry) < 0) {
 				set_pte_at(mm, address, pvmw.pte, pteval);
-				ret = SWAP_FAIL;
+				ret = false;
 				page_vma_mapped_walk_done(&pvmw);
 				break;
 			}
@@ -1632,7 +1632,7 @@ static void rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc,
 		if (rwc->invalid_vma && rwc->invalid_vma(vma, rwc->arg))
 			continue;
 
-		if (SWAP_AGAIN != rwc->rmap_one(page, vma, address, rwc->arg))
+		if (!rwc->rmap_one(page, vma, address, rwc->arg))
 			break;
 		if (rwc->done && rwc->done(page))
 			break;
@@ -1686,7 +1686,7 @@ static void rmap_walk_file(struct page *page, struct rmap_walk_control *rwc,
 		if (rwc->invalid_vma && rwc->invalid_vma(vma, rwc->arg))
 			continue;
 
-		if (SWAP_AGAIN != rwc->rmap_one(page, vma, address, rwc->arg))
+		if (!rwc->rmap_one(page, vma, address, rwc->arg))
 			goto done;
 		if (rwc->done && rwc->done(page))
 			goto done;
-- 
2.7.4


* [PATCH v2 10/10] mm: remove SWAP_[SUCCESS|AGAIN|FAIL]
  2017-03-15  5:24 [PATCH v2 00/10] make try_to_unmap simple Minchan Kim
                   ` (8 preceding siblings ...)
  2017-03-15  5:24 ` [PATCH v2 09/10] mm: make rmap_one boolean function Minchan Kim
@ 2017-03-15  5:24 ` Minchan Kim
  2017-03-16  4:40   ` Sergey Senozhatsky
  2017-03-16 18:54   ` kbuild test robot
  9 siblings, 2 replies; 20+ messages in thread
From: Minchan Kim @ 2017-03-15  5:24 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, kernel-team, Johannes Weiner,
	Michal Hocko, Kirill A. Shutemov, Anshuman Khandual, Minchan Kim

There is no user for it. Remove it.

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 include/linux/rmap.h | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 13ed232..43ef2c3 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -295,11 +295,4 @@ static inline int page_mkclean(struct page *page)
 
 #endif	/* CONFIG_MMU */
 
-/*
- * Return values of try_to_unmap
- */
-#define SWAP_SUCCESS	0
-#define SWAP_AGAIN	1
-#define SWAP_FAIL	2
-
 #endif	/* _LINUX_RMAP_H */
-- 
2.7.4


* Re: [PATCH v2 04/10] mm: make the try_to_munlock void function
  2017-03-15  5:24 ` [PATCH v2 04/10] mm: make the try_to_munlock void function Minchan Kim
@ 2017-03-15  7:31   ` Vlastimil Babka
  2017-04-08  3:18   ` alexander.levin
  1 sibling, 0 replies; 20+ messages in thread
From: Vlastimil Babka @ 2017-03-15  7:31 UTC (permalink / raw)
  To: Minchan Kim, Andrew Morton
  Cc: linux-kernel, linux-mm, kernel-team, Johannes Weiner,
	Michal Hocko, Kirill A. Shutemov, Anshuman Khandual

On 03/15/2017 06:24 AM, Minchan Kim wrote:
> try_to_munlock returns SWAP_MLOCK if one of the VMAs mapping the
> page has the VM_LOCKED flag. At that point, the VM sets PG_mlocked
> on the page unless the page is a pte-mapped THP, which cannot be
> mlocked.
> 
> With that, __munlock_isolated_page can use PageMlocked to check
> whether try_to_munlock succeeded or not, without relying on
> try_to_munlock's retval. It helps to make try_to_unmap/try_to_unmap_one
> simpler with the upcoming patches.
> 
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Signed-off-by: Minchan Kim <minchan@kernel.org>

Acked-by: Vlastimil Babka <vbabka@suse.cz>


* Re: [PATCH v2 10/10] mm: remove SWAP_[SUCCESS|AGAIN|FAIL]
  2017-03-15  5:24 ` [PATCH v2 10/10] mm: remove SWAP_[SUCCESS|AGAIN|FAIL] Minchan Kim
@ 2017-03-16  4:40   ` Sergey Senozhatsky
  2017-03-16  5:33     ` Minchan Kim
  2017-03-16 18:54   ` kbuild test robot
  1 sibling, 1 reply; 20+ messages in thread
From: Sergey Senozhatsky @ 2017-03-16  4:40 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, linux-kernel, linux-mm, kernel-team,
	Johannes Weiner, Michal Hocko, Kirill A. Shutemov,
	Anshuman Khandual

Hello,


On (03/15/17 14:24), Minchan Kim wrote:
> There is no user for it. Remove it.
> 

there is one.

mm/rmap.c

try_to_unmap_one()
...
	if (unlikely(PageSwapBacked(page) != PageSwapCache(page))) {
		WARN_ON_ONCE(1);
		ret = SWAP_FAIL;
		page_vma_mapped_walk_done(&pvmw);
		break;
	}

	-ss


* Re: [PATCH v2 10/10] mm: remove SWAP_[SUCCESS|AGAIN|FAIL]
  2017-03-16  4:40   ` Sergey Senozhatsky
@ 2017-03-16  5:33     ` Minchan Kim
  2017-03-16  5:44       ` Sergey Senozhatsky
  0 siblings, 1 reply; 20+ messages in thread
From: Minchan Kim @ 2017-03-16  5:33 UTC (permalink / raw)
  To: Sergey Senozhatsky, Andrew Morton
  Cc: linux-kernel, linux-mm, kernel-team, Johannes Weiner,
	Michal Hocko, Kirill A. Shutemov, Anshuman Khandual

Hey, Sergey,

On Thu, Mar 16, 2017 at 01:40:23PM +0900, Sergey Senozhatsky wrote:
> Hello,
> 
> 
> On (03/15/17 14:24), Minchan Kim wrote:
> > There is no user for it. Remove it.
> > 
> 
> there is one.
> 
> mm/rmap.c
> 
> try_to_unmap_one()
> ...
> 	if (unlikely(PageSwapBacked(page) != PageSwapCache(page))) {
> 		WARN_ON_ONCE(1);
> 		ret = SWAP_FAIL;
> 		page_vma_mapped_walk_done(&pvmw);
> 		break;
> 	}

"There is no user for it"

I was a liar, so I need to be an honest guy.
Thanks, Sergey!

Andrew, please make me honest. Sorry about that.


* Re: [PATCH v2 10/10] mm: remove SWAP_[SUCCESS|AGAIN|FAIL]
  2017-03-16  5:33     ` Minchan Kim
@ 2017-03-16  5:44       ` Sergey Senozhatsky
  2017-03-16  5:51         ` Minchan Kim
  0 siblings, 1 reply; 20+ messages in thread
From: Sergey Senozhatsky @ 2017-03-16  5:44 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Sergey Senozhatsky, Andrew Morton, linux-kernel, linux-mm,
	kernel-team, Johannes Weiner, Michal Hocko, Kirill A. Shutemov,
	Anshuman Khandual

On (03/16/17 14:33), Minchan Kim wrote:
[..]
> "There is no user for it"
> 
> I was a liar, so I need to be an honest guy.

ha-ha-ha. I didn't say that :)

[..]
> @@ -1414,7 +1414,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
>  			 */
>  			if (unlikely(PageSwapBacked(page) != PageSwapCache(page))) {
>  				WARN_ON_ONCE(1);
> -				ret = SWAP_FAIL;
> +				ret = false;
>  				page_vma_mapped_walk_done(&pvmw);
>  				break;
>  			}


one thing to notice here is that 'ret = false' and 'ret = SWAP_FAIL'
are not the same and must produce different results. `ret' is bool
and SWAP_FAIL was 2. it's return 1 vs return 0, isn't it? so was
there a bug before?

	-ss


* Re: [PATCH v2 10/10] mm: remove SWAP_[SUCCESS|AGAIN|FAIL]
  2017-03-16  5:44       ` Sergey Senozhatsky
@ 2017-03-16  5:51         ` Minchan Kim
  2017-03-16  5:57           ` Sergey Senozhatsky
  0 siblings, 1 reply; 20+ messages in thread
From: Minchan Kim @ 2017-03-16  5:51 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Andrew Morton, linux-kernel, linux-mm, kernel-team,
	Johannes Weiner, Michal Hocko, Kirill A. Shutemov,
	Anshuman Khandual

On Thu, Mar 16, 2017 at 02:44:30PM +0900, Sergey Senozhatsky wrote:
> On (03/16/17 14:33), Minchan Kim wrote:
> [..]
> > "There is no user for it"
> > 
> > I was a liar, so I need to be an honest guy.
> 
> ha-ha-ha. I didn't say that :)
> 
> [..]
> > @@ -1414,7 +1414,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
> >  			 */
> >  			if (unlikely(PageSwapBacked(page) != PageSwapCache(page))) {
> >  				WARN_ON_ONCE(1);
> > -				ret = SWAP_FAIL;
> > +				ret = false;
> >  				page_vma_mapped_walk_done(&pvmw);
> >  				break;
> >  			}
> 
> 
> one thing to notice here is that 'ret = false' and 'ret = SWAP_FAIL'
> are not the same and must produce different results. `ret' is bool
> and SWAP_FAIL was 2. it's return 1 vs return 0, isn't it? so was
> there a bug before?

No, it was not a bug. It's just that my patchset changed the meaning
of the return value. Look at this:
https://marc.info/?l=linux-mm&m=148955552314806&w=2

So, false means SWAP_FAIL (i.e., stop rmap scanning and bail out) now.
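
To be clear about the C mechanics: with a bool ret, keeping the old
"ret = SWAP_FAIL" would have stored true (i.e., continue scanning),
the opposite of what we want, because _Bool collapses any nonzero
value to 1. A tiny userspace demo (hypothetical, not kernel code):

    #include <stdbool.h>
    #include <stdio.h>

    #define SWAP_FAIL 2

    int main(void)
    {
            bool ret = SWAP_FAIL;   /* stores 1, not 2 */

            printf("%d\n", ret);    /* prints 1 */
            return 0;
    }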

Thanks.


* Re: [PATCH v2 10/10] mm: remove SWAP_[SUCCESS|AGAIN|FAIL]
  2017-03-16  5:51         ` Minchan Kim
@ 2017-03-16  5:57           ` Sergey Senozhatsky
  0 siblings, 0 replies; 20+ messages in thread
From: Sergey Senozhatsky @ 2017-03-16  5:57 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Sergey Senozhatsky, Andrew Morton, linux-kernel, linux-mm,
	kernel-team, Johannes Weiner, Michal Hocko, Kirill A. Shutemov,
	Anshuman Khandual

On (03/16/17 14:51), Minchan Kim wrote:
[..]
> > > @@ -1414,7 +1414,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
> > >  			 */
> > >  			if (unlikely(PageSwapBacked(page) != PageSwapCache(page))) {
> > >  				WARN_ON_ONCE(1);
> > > -				ret = SWAP_FAIL;
> > > +				ret = false;
> > >  				page_vma_mapped_walk_done(&pvmw);
> > >  				break;
> > >  			}
> > 
> > 
> > one thing to notice here is that 'ret = false' and 'ret = SWAP_FAIL'
> > are not the same and must produce different results. `ret' is bool
> > and SWAP_FAIL was 2. it's return 1 vs return 0, isn't it? so was
> > there a bug before?
> 
> No, it was not a bug. It's just that my patchset changed the meaning
> of the return value. Look at this:
> https://marc.info/?l=linux-mm&m=148955552314806&w=2
> 
> So, false means SWAP_FAIL (i.e., stop rmap scanning and bail out) now.

ah, indeed. sorry, didn't notice that.

thanks.

	-ss


* Re: [PATCH v2 10/10] mm: remove SWAP_[SUCCESS|AGAIN|FAIL]
  2017-03-15  5:24 ` [PATCH v2 10/10] mm: remove SWAP_[SUCCESS|AGAIN|FAIL] Minchan Kim
  2017-03-16  4:40   ` Sergey Senozhatsky
@ 2017-03-16 18:54   ` kbuild test robot
  1 sibling, 0 replies; 20+ messages in thread
From: kbuild test robot @ 2017-03-16 18:54 UTC (permalink / raw)
  To: Minchan Kim
  Cc: kbuild-all, Andrew Morton, linux-kernel, linux-mm, kernel-team,
	Johannes Weiner, Michal Hocko, Kirill A. Shutemov,
	Anshuman Khandual


Hi Minchan,

[auto build test ERROR on mmotm/master]
[also build test ERROR on next-20170310]
[cannot apply to v4.11-rc2]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Minchan-Kim/make-try_to_unmap-simple/20170317-020635
base:   git://git.cmpxchg.org/linux-mmotm.git master
config: i386-tinyconfig (attached as .config)
compiler: gcc-6 (Debian 6.2.0-3) 6.2.0 20160901
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

All errors (new ones prefixed by >>):

   mm/rmap.c: In function 'try_to_unmap_one':
>> mm/rmap.c:1417:11: error: 'SWAP_FAIL' undeclared (first use in this function)
        ret = SWAP_FAIL;
              ^~~~~~~~~
   mm/rmap.c:1417:11: note: each undeclared identifier is reported only once for each function it appears in

vim +/SWAP_FAIL +1417 mm/rmap.c

^1da177e Linus Torvalds 2005-04-16  1411  			/*
^1da177e Linus Torvalds 2005-04-16  1412  			 * Store the swap location in the pte.
^1da177e Linus Torvalds 2005-04-16  1413  			 * See handle_pte_fault() ...
^1da177e Linus Torvalds 2005-04-16  1414  			 */
efeba3bd Minchan Kim    2017-03-10  1415  			if (unlikely(PageSwapBacked(page) != PageSwapCache(page))) {
efeba3bd Minchan Kim    2017-03-10  1416  				WARN_ON_ONCE(1);
3154f021 Minchan Kim    2017-03-10 @1417  				ret = SWAP_FAIL;
3154f021 Minchan Kim    2017-03-10  1418  				page_vma_mapped_walk_done(&pvmw);
3154f021 Minchan Kim    2017-03-10  1419  				break;
3154f021 Minchan Kim    2017-03-10  1420  			}

:::::: The code at line 1417 was first introduced by commit
:::::: 3154f021001fba264cc2cba4c4ff4bfb5a3e2f92 mm: fix lazyfree BUG_ON check in try_to_unmap_one()

:::::: TO: Minchan Kim <minchan@kernel.org>
:::::: CC: Johannes Weiner <hannes@cmpxchg.org>

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation


* Re: [PATCH v2 04/10] mm: make the try_to_munlock void function
  2017-03-15  5:24 ` [PATCH v2 04/10] mm: make the try_to_munlock void function Minchan Kim
  2017-03-15  7:31   ` Vlastimil Babka
@ 2017-04-08  3:18   ` alexander.levin
  2017-04-11  2:56     ` Minchan Kim
  1 sibling, 1 reply; 20+ messages in thread
From: alexander.levin @ 2017-04-08  3:18 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, linux-kernel, linux-mm, kernel-team,
	Johannes Weiner, Michal Hocko, Kirill A. Shutemov,
	Anshuman Khandual, Vlastimil Babka

On Wed, Mar 15, 2017 at 02:24:47PM +0900, Minchan Kim wrote:
> try_to_munlock returns SWAP_MLOCK if one of the VMAs mapping the
> page has the VM_LOCKED flag. At that point, the VM sets PG_mlocked
> on the page unless the page is a pte-mapped THP, which cannot be
> mlocked.
> 
> With that, __munlock_isolated_page can use PageMlocked to check
> whether try_to_munlock succeeded or not, without relying on
> try_to_munlock's retval. It helps to make try_to_unmap/try_to_unmap_one
> simpler with the upcoming patches.
> 
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Signed-off-by: Minchan Kim <minchan@kernel.org>
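
A minimal sketch of the caller-side logic the quoted patch enables in
__munlock_isolated_page, assuming the 4.11-era mm/mlock.c surroundings
(illustrative, not the literal hunk from the patch):

	if (page_mapcount(page) > 1)
		try_to_munlock(page);

	/* Did try_to_munlock() succeed or punt? The page flag now tells us. */
	if (!PageMlocked(page))
		count_vm_event(UNEVICTABLE_PGMUNLOCKED);

	putback_lru_page(page);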

Hey Minchan,

I seem to be hitting one of those newly added BUG_ONs with trinity:

[   21.017404] page:ffffea000307a300 count:10 mapcount:7 mapping:ffff88010083f3a8 index:0x131
[   21.019974] flags: 0x1fffc00001c001d(locked|referenced|uptodate|dirty|swapbacked|unevictable|mlocked)
[   21.022806] raw: 01fffc00001c001d ffff88010083f3a8 0000000000000131 0000000a00000006
[   21.023974] raw: dead000000000100 dead000000000200 0000000000000000 ffff880109838008
[   21.026098] page dumped because: VM_BUG_ON_PAGE(PageMlocked(page))
[   21.026903] page->mem_cgroup:ffff880109838008
[   21.027505] page allocated via order 0, migratetype Movable, gfp_mask 0x14200ca(GFP_HIGHUSER_MOVABLE)
[   21.028783] save_stack_trace (arch/x86/kernel/stacktrace.c:60)
[   21.029362] save_stack (./arch/x86/include/asm/current.h:14 mm/kasan/kasan.c:50)
[   21.029859] __set_page_owner (mm/page_owner.c:178)
[   21.030414] get_page_from_freelist (./include/linux/page_owner.h:30 mm/page_alloc.c:1742 mm/page_alloc.c:1750 mm/page_alloc.c:3097)
[   21.031071] __alloc_pages_nodemask (mm/page_alloc.c:4011)
[   21.031716] alloc_pages_vma (./include/linux/mempolicy.h:77 ./include/linux/mempolicy.h:82 mm/mempolicy.c:2024)
[   21.032307] shmem_alloc_page (mm/shmem.c:1389 mm/shmem.c:1444)
[   21.032881] shmem_getpage_gfp (mm/shmem.c:1474 mm/shmem.c:1753)
[   21.033488] shmem_fault (mm/shmem.c:1987)
[   21.034055] __do_fault (mm/memory.c:3012)
[   21.034568] __handle_mm_fault (mm/memory.c:3449 mm/memory.c:3497 mm/memory.c:3723 mm/memory.c:3841)
[   21.035192] handle_mm_fault (mm/memory.c:3878)
[   21.035772] __do_page_fault (arch/x86/mm/fault.c:1446)
[   21.037148] do_page_fault (arch/x86/mm/fault.c:1508 ./include/linux/context_tracking_state.h:30 ./include/linux/context_tracking.h:63 arch/x86/mm/fault.c:1509)
[   21.037657] do_async_page_fault (./arch/x86/include/asm/traps.h:82 arch/x86/kernel/kvm.c:264) 
[   21.038266] async_page_fault (arch/x86/entry/entry_64.S:1011) 
[   21.038901] ------------[ cut here ]------------
[   21.039546] kernel BUG at mm/rmap.c:1560!
[   21.040126] invalid opcode: 0000 [#1] SMP DEBUG_PAGEALLOC KASAN
[   21.040910] Modules linked in:
[   21.041345] CPU: 6 PID: 1317 Comm: trinity-c62 Tainted: G        W       4.11.0-rc5-next-20170407 #7
[   21.042761] task: ffff8801067d3e40 task.stack: ffff8800c06d0000
[   21.043572] RIP: 0010:try_to_munlock (??:?) 
[   21.044639] RSP: 0018:ffff8800c06d71a0 EFLAGS: 00010296
[   21.045330] RAX: 0000000000000000 RBX: 1ffff100180dae36 RCX: 0000000000000000
[   21.046289] RDX: 0000000000000000 RSI: 0000000000000086 RDI: ffffed00180dae28
[   21.047225] RBP: ffff8800c06d7358 R08: 0000000000001639 R09: 6c7561665f656761
[   21.048982] R10: ffffea000307a31c R11: 303378302f383278 R12: ffff8800c06d7330
[   21.049823] R13: ffffea000307a300 R14: ffff8800c06d72d0 R15: ffffea000307a300
[   21.050647] FS:  00007f4ab05a7700(0000) GS:ffff880109d80000(0000) knlGS:0000000000000000
[   21.051574] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   21.052246] CR2: 00007f4aafdebfc0 CR3: 00000000c069f000 CR4: 00000000000406a0
[   21.053072] Call Trace:
[   21.061057] __munlock_isolated_page (mm/mlock.c:131) 
[   21.065328] __munlock_pagevec (mm/mlock.c:339) 
[   21.079191] munlock_vma_pages_range (mm/mlock.c:494) 
[   21.085665] mlock_fixup (mm/mlock.c:569) 
[   21.086205] apply_vma_lock_flags (mm/mlock.c:608) 
[   21.089035] SyS_munlock (./arch/x86/include/asm/current.h:14 mm/mlock.c:739 mm/mlock.c:729) 
[   21.089502] do_syscall_64 (arch/x86/entry/common.c:284)

-- 

Thanks,
Sasha
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 04/10] mm: make the try_to_munlock void function
  2017-04-08  3:18   ` alexander.levin
@ 2017-04-11  2:56     ` Minchan Kim
  0 siblings, 0 replies; 20+ messages in thread
From: Minchan Kim @ 2017-04-11  2:56 UTC (permalink / raw)
  To: alexander.levin
  Cc: Andrew Morton, linux-kernel, linux-mm, kernel-team,
	Johannes Weiner, Michal Hocko, Kirill A. Shutemov,
	Anshuman Khandual, Vlastimil Babka

Hi Sasha,

On Sat, Apr 08, 2017 at 03:18:35AM +0000, alexander.levin@verizon.com wrote:
> On Wed, Mar 15, 2017 at 02:24:47PM +0900, Minchan Kim wrote:
> > try_to_munlock returns SWAP_MLOCK if one of the VMAs mapping
> > the page has the VM_LOCKED flag. In that case, the VM sets PG_mlocked
> > on the page, unless the page is a pte-mapped THP, which cannot be
> > mlocked.
> > 
> > With that, __munlock_isolated_page can use PageMlocked to check
> > whether try_to_munlock succeeded, without relying on
> > try_to_munlock's return value. This helps make try_to_unmap/try_to_unmap_one
> > simpler in upcoming patches.
> > 
> > Cc: Vlastimil Babka <vbabka@suse.cz>
> > Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> > Signed-off-by: Minchan Kim <minchan@kernel.org>
> 
> Hey Minchan,
> 
> I seem to be hitting one of those newly added BUG_ONs with trinity:
> 
> [   21.017404] page:ffffea000307a300 count:10 mapcount:7 mapping:ffff88010083f3a8 index:0x131
> [   21.019974] flags: 0x1fffc00001c001d(locked|referenced|uptodate|dirty|swapbacked|unevictable|mlocked)
> [   21.022806] raw: 01fffc00001c001d ffff88010083f3a8 0000000000000131 0000000a00000006
> [   21.023974] raw: dead000000000100 dead000000000200 0000000000000000 ffff880109838008
> [   21.026098] page dumped because: VM_BUG_ON_PAGE(PageMlocked(page))
> [   21.026903] page->mem_cgroup:ffff880109838008
> [   21.027505] page allocated via order 0, migratetype Movable, gfp_mask 0x14200ca(GFP_HIGHUSER_MOVABLE)
> [   21.028783] save_stack_trace (arch/x86/kernel/stacktrace.c:60)
> [   21.029362] save_stack (./arch/x86/include/asm/current.h:14 mm/kasan/kasan.c:50)
> [   21.029859] __set_page_owner (mm/page_owner.c:178)
> [   21.030414] get_page_from_freelist (./include/linux/page_owner.h:30 mm/page_alloc.c:1742 mm/page_alloc.c:1750 mm/page_alloc.c:3097)
> [   21.031071] __alloc_pages_nodemask (mm/page_alloc.c:4011)
> [   21.031716] alloc_pages_vma (./include/linux/mempolicy.h:77 ./include/linux/mempolicy.h:82 mm/mempolicy.c:2024)
> [   21.032307] shmem_alloc_page (mm/shmem.c:1389 mm/shmem.c:1444)
> [   21.032881] shmem_getpage_gfp (mm/shmem.c:1474 mm/shmem.c:1753)
> [   21.033488] shmem_fault (mm/shmem.c:1987)
> [   21.034055] __do_fault (mm/memory.c:3012)
> [   21.034568] __handle_mm_fault (mm/memory.c:3449 mm/memory.c:3497 mm/memory.c:3723 mm/memory.c:3841)
> [   21.035192] handle_mm_fault (mm/memory.c:3878)
> [   21.035772] __do_page_fault (arch/x86/mm/fault.c:1446)
> [   21.037148] do_page_fault (arch/x86/mm/fault.c:1508 ./include/linux/context_tracking_state.h:30 ./include/linux/context_tracking.h:63 arch/x86/mm/fault.c:1509)
> [   21.037657] do_async_page_fault (./arch/x86/include/asm/traps.h:82 arch/x86/kernel/kvm.c:264) 
> [   21.038266] async_page_fault (arch/x86/entry/entry_64.S:1011) 
> [   21.038901] ------------[ cut here ]------------
> [   21.039546] kernel BUG at mm/rmap.c:1560!
> [   21.040126] invalid opcode: 0000 [#1] SMP DEBUG_PAGEALLOC KASAN
> [   21.040910] Modules linked in:
> [   21.041345] CPU: 6 PID: 1317 Comm: trinity-c62 Tainted: G        W       4.11.0-rc5-next-20170407 #7
> [   21.042761] task: ffff8801067d3e40 task.stack: ffff8800c06d0000
> [   21.043572] RIP: 0010:try_to_munlock (??:?) 
> [   21.044639] RSP: 0018:ffff8800c06d71a0 EFLAGS: 00010296
> [   21.045330] RAX: 0000000000000000 RBX: 1ffff100180dae36 RCX: 0000000000000000
> [   21.046289] RDX: 0000000000000000 RSI: 0000000000000086 RDI: ffffed00180dae28
> [   21.047225] RBP: ffff8800c06d7358 R08: 0000000000001639 R09: 6c7561665f656761
> [   21.048982] R10: ffffea000307a31c R11: 303378302f383278 R12: ffff8800c06d7330
> [   21.049823] R13: ffffea000307a300 R14: ffff8800c06d72d0 R15: ffffea000307a300
> [   21.050647] FS:  00007f4ab05a7700(0000) GS:ffff880109d80000(0000) knlGS:0000000000000000
> [   21.051574] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [   21.052246] CR2: 00007f4aafdebfc0 CR3: 00000000c069f000 CR4: 00000000000406a0
> [   21.053072] Call Trace:
> [   21.061057] __munlock_isolated_page (mm/mlock.c:131) 
> [   21.065328] __munlock_pagevec (mm/mlock.c:339) 
> [   21.079191] munlock_vma_pages_range (mm/mlock.c:494) 
> [   21.085665] mlock_fixup (mm/mlock.c:569) 
> [   21.086205] apply_vma_lock_flags (mm/mlock.c:608) 
> [   21.089035] SyS_munlock (./arch/x86/include/asm/current.h:14 mm/mlock.c:739 mm/mlock.c:729) 
> [   21.089502] do_syscall_64 (arch/x86/entry/common.c:284)
> 

Thanks for the report.

Looking at the code, that VM_BUG_ON check should be removed: __munlock_pagevec
doesn't hold the page lock, so a page can become PG_mlocked again before it
is passed into try_to_munlock.
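
Roughly, the interleaving is (a sketch; function names from mm/mlock.c of that era):

	/*
	 * CPU0: __munlock_pagevec()              CPU1: mlock path
	 *
	 *   TestClearPageMlocked(page)
	 *                                         mlock_vma_page(page)
	 *                                           TestSetPageMlocked(page)
	 *   ...
	 *   try_to_munlock(page)
	 *     VM_BUG_ON_PAGE(PageMlocked(page))   <- fires
	 */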

From 4369227f190264291961bb4024e14d34e6656b54 Mon Sep 17 00:00:00 2001
From: Minchan Kim <minchan@kernel.org>
Date: Tue, 11 Apr 2017 11:41:54 +0900
Subject: [PATCH] mm: remove PG_Mlocked VM_BUG_ON check

Callers of try_to_munlock don't guarantee that they pass the page
with PG_mlocked cleared.
Look at __munlock_pagevec, which doesn't hold the page lock,
so anybody can set PG_mlocked under us.
Remove the bogus PageMlocked check in try_to_munlock.

Reported-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
Andrew,

This patch can be folded into mm-make-the-try_to_munlock-void-function.patch.
Thanks.

 mm/rmap.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index a69a2a70d057..0773118214cc 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1557,7 +1557,6 @@ void try_to_munlock(struct page *page)
 	};
 
 	VM_BUG_ON_PAGE(!PageLocked(page) || PageLRU(page), page);
-	VM_BUG_ON_PAGE(PageMlocked(page), page);
 	VM_BUG_ON_PAGE(PageCompound(page) && PageDoubleMap(page), page);
 
 	rmap_walk(page, &rwc);
-- 
2.7.4



--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org

^ permalink raw reply related	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2017-04-11  2:56 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-03-15  5:24 [PATCH v2 00/10] make try_to_unmap simple Minchan Kim
2017-03-15  5:24 ` [PATCH v2 01/10] mm: remove unncessary ret in page_referenced Minchan Kim
2017-03-15  5:24 ` [PATCH v2 02/10] mm: remove SWAP_DIRTY in ttu Minchan Kim
2017-03-15  5:24 ` [PATCH v2 03/10] mm: remove SWAP_MLOCK check for SWAP_SUCCESS " Minchan Kim
2017-03-15  5:24 ` [PATCH v2 04/10] mm: make the try_to_munlock void function Minchan Kim
2017-03-15  7:31   ` Vlastimil Babka
2017-04-08  3:18   ` alexander.levin
2017-04-11  2:56     ` Minchan Kim
2017-03-15  5:24 ` [PATCH v2 05/10] mm: remove SWAP_MLOCK in ttu Minchan Kim
2017-03-15  5:24 ` [PATCH v2 06/10] mm: remove SWAP_AGAIN " Minchan Kim
2017-03-15  5:24 ` [PATCH v2 07/10] mm: make ttu's return boolean Minchan Kim
2017-03-15  5:24 ` [PATCH v2 08/10] mm: make rmap_walk void function Minchan Kim
2017-03-15  5:24 ` [PATCH v2 09/10] mm: make rmap_one boolean function Minchan Kim
2017-03-15  5:24 ` [PATCH v2 10/10] mm: remove SWAP_[SUCCESS|AGAIN|FAIL] Minchan Kim
2017-03-16  4:40   ` Sergey Senozhatsky
2017-03-16  5:33     ` Minchan Kim
2017-03-16  5:44       ` Sergey Senozhatsky
2017-03-16  5:51         ` Minchan Kim
2017-03-16  5:57           ` Sergey Senozhatsky
2017-03-16 18:54   ` kbuild test robot

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).