From mboxrd@z Thu Jan 1 00:00:00 1970
From: Minchan Kim
Subject: [PATCH v2 00/10] make try_to_unmap simple
Date: Wed, 15 Mar 2017 14:24:43 +0900
Message-ID: <1489555493-14659-1-git-send-email-minchan@kernel.org>
To: Andrew Morton

Currently, try_to_unmap returns various return values (SWAP_SUCCESS,
SWAP_FAIL, SWAP_AGAIN, SWAP_DIRTY and SWAP_MLOCK). Looking into it,
the scheme is unnecessarily complicated, so this patchset aims to
clean it up: change ttu into a boolean function so we can remove
SWAP_AGAIN, SWAP_DIRTY and SWAP_MLOCK.

* from v1
  * add some acked-by
  * add description about rmap_one's return - Andrew

* from RFC - http://lkml.kernel.org/r/1488436765-32350-1-git-send-email-minchan@kernel.org
  * remove RFC tag
  * add acked-by to some patches
  * some minor fixes
  * based on mmotm-2017-03-09-16-19

Minchan Kim (10):
  mm: remove unnecessary ret in page_referenced
  mm: remove SWAP_DIRTY in ttu
  mm: remove SWAP_MLOCK check for SWAP_SUCCESS in ttu
  mm: make the try_to_munlock void function
  mm: remove SWAP_MLOCK in ttu
  mm: remove SWAP_AGAIN in ttu
  mm: make ttu's return boolean
  mm: make rmap_walk void function
  mm: make rmap_one boolean function
  mm: remove SWAP_[SUCCESS|AGAIN|FAIL]

 include/linux/ksm.h  |  5 ++-
 include/linux/rmap.h | 25 ++++++--------
 mm/huge_memory.c     |  6 ++--
 mm/ksm.c             | 16 ++++-----
 mm/memory-failure.c  | 26 +++++++-------
 mm/migrate.c         |  4 +--
 mm/mlock.c           |  6 ++--
 mm/page_idle.c       |  4 +--
 mm/rmap.c            | 98 ++++++++++++++++++++--------------------------------
 mm/vmscan.c          | 32 +++++------------
 10 files changed, 85 insertions(+), 137 deletions(-)

-- 
2.7.4
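[For illustration: a minimal user-space toy model (not kernel code; the
names below are invented for the example) of the caller-side effect this
cover letter describes, where the multi-valued SWAP_* return collapses
into a single bool:]

	#include <stdbool.h>
	#include <stdio.h>

	/* Before: callers had to dispatch on several constants. */
	enum old_ret { SWAP_SUCCESS, SWAP_AGAIN, SWAP_FAIL,
		       SWAP_MLOCK, SWAP_DIRTY };

	/* After: the walk either fully unmapped the page or it did not. */
	static bool try_to_unmap_model(int mapcount_after_walk)
	{
		return mapcount_after_walk == 0;
	}

	int main(void)
	{
		/* Old style: switch (try_to_unmap(...)) { case SWAP_FAIL: ... }
		 * New style: a single branch. */
		if (!try_to_unmap_model(1))
			printf("activate_locked: unmap failed\n");
		else
			printf("page fully unmapped\n");
		return 0;
	}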
From mboxrd@z Thu Jan 1 00:00:00 1970
From: Minchan Kim
Subject: [PATCH v2 01/10] mm: remove unnecessary ret in page_referenced
Date: Wed, 15 Mar 2017 14:24:44 +0900
Message-ID: <1489555493-14659-2-git-send-email-minchan@kernel.org>
In-Reply-To: <1489555493-14659-1-git-send-email-minchan@kernel.org>
To: Andrew Morton

Nobody uses the ret variable. Remove it.

Acked-by: Hillf Danton
Acked-by: Kirill A. Shutemov
Signed-off-by: Minchan Kim
---
 mm/rmap.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 7d24bb9..9dbfa6f 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -807,7 +807,6 @@ int page_referenced(struct page *page,
 			struct mem_cgroup *memcg,
 			unsigned long *vm_flags)
 {
-	int ret;
 	int we_locked = 0;
 	struct page_referenced_arg pra = {
 		.mapcount = total_mapcount(page),
@@ -841,7 +840,7 @@ int page_referenced(struct page *page,
 		rwc.invalid_vma = invalid_page_referenced_vma;
 	}
 
-	ret = rmap_walk(page, &rwc);
+	rmap_walk(page, &rwc);
 	*vm_flags = pra.vm_flags;
 
 	if (we_locked)
-- 
2.7.4

From mboxrd@z Thu Jan 1 00:00:00 1970
From: Minchan Kim
Subject: [PATCH v2 03/10] mm: remove SWAP_MLOCK check for SWAP_SUCCESS in ttu
Date: Wed, 15 Mar 2017 14:24:46 +0900
Message-ID: <1489555493-14659-4-git-send-email-minchan@kernel.org>
In-Reply-To: <1489555493-14659-1-git-send-email-minchan@kernel.org>
To: Andrew Morton

If the page was mapped but got rescued in try_to_unmap_one,
page_mapcount(page) == 0 cannot be true, so the page_mapcount check in
try_to_unmap is enough to decide on SWAP_SUCCESS. IOW, the SWAP_MLOCK
check is redundant, so remove it.

Signed-off-by: Minchan Kim
---
 mm/rmap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index e692cb5..bdc7310 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1530,7 +1530,7 @@ int try_to_unmap(struct page *page, enum ttu_flags flags)
 	else
 		ret = rmap_walk(page, &rwc);
 
-	if (ret != SWAP_MLOCK && !page_mapcount(page))
+	if (!page_mapcount(page))
 		ret = SWAP_SUCCESS;
 	return ret;
 }
-- 
2.7.4

From mboxrd@z Thu Jan 1 00:00:00 1970
From: Minchan Kim
Subject: [PATCH v2 02/10] mm: remove SWAP_DIRTY in ttu
Date: Wed, 15 Mar 2017 14:24:45 +0900
Message-ID: <1489555493-14659-3-git-send-email-minchan@kernel.org>
In-Reply-To: <1489555493-14659-1-git-send-email-minchan@kernel.org>
To: Andrew Morton

If we find a lazyfree page is dirty, try_to_unmap_one can just
SetPageSwapBacked in there, as for a PG_mlocked page, and return with
SWAP_FAIL, which is natural because the page is not swappable right
now, so vmscan can activate it. There is no point in introducing the
new return value SWAP_DIRTY in try_to_unmap at the moment.

Acked-by: Hillf Danton
Acked-by: Kirill A. Shutemov
Signed-off-by: Minchan Kim
---
 include/linux/rmap.h | 1 -
 mm/rmap.c            | 4 ++--
 mm/vmscan.c          | 3 ---
 3 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index fee10d7..b556eef 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -298,6 +298,5 @@ static inline int page_mkclean(struct page *page)
 #define SWAP_AGAIN	1
 #define SWAP_FAIL	2
 #define SWAP_MLOCK	3
-#define SWAP_DIRTY	4
 
 #endif	/* _LINUX_RMAP_H */

diff --git a/mm/rmap.c b/mm/rmap.c
index 9dbfa6f..e692cb5 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1431,7 +1431,8 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 				 * discarded. Remap the page to page table.
 				 */
 				set_pte_at(mm, address, pvmw.pte, pteval);
-				ret = SWAP_DIRTY;
+				SetPageSwapBacked(page);
+				ret = SWAP_FAIL;
 				page_vma_mapped_walk_done(&pvmw);
 				break;
 			}
@@ -1501,7 +1502,6 @@ static int page_mapcount_is_zero(struct page *page)
  * SWAP_AGAIN	- we missed a mapping, try again later
  * SWAP_FAIL	- the page is unswappable
  * SWAP_MLOCK	- page is mlocked.
- * SWAP_DIRTY	- page is dirty MADV_FREE page
  */
 int try_to_unmap(struct page *page, enum ttu_flags flags)
 {

diff --git a/mm/vmscan.c b/mm/vmscan.c
index a3656f9..b8fd656 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1142,9 +1142,6 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		if (page_mapped(page)) {
 			switch (ret = try_to_unmap(page,
 					ttu_flags | TTU_BATCH_FLUSH)) {
-			case SWAP_DIRTY:
-				SetPageSwapBacked(page);
-				/* fall through */
 			case SWAP_FAIL:
 				nr_unmap_fail++;
 				goto activate_locked;
-- 
2.7.4
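[To make the lazyfree case in patch 02 concrete: a small user-space
model (illustrative only; the struct and flag names are invented, they
merely mirror PageSwapBacked semantics) of the decision moved into
try_to_unmap_one - a clean MADV_FREE page can be discarded, while a
dirty one is made swap-backed again and the unmap fails:]

	#include <stdbool.h>
	#include <stdio.h>

	struct toy_page {
		bool dirty;		/* written after MADV_FREE */
		bool swap_backed;	/* models PageSwapBacked */
	};

	/* Models the try_to_unmap_one() lazyfree branch after patch 02. */
	static bool unmap_lazyfree(struct toy_page *page)
	{
		if (page->dirty) {
			/* Redirtied: move back to the swap-backed LRU and
			 * fail the unmap so vmscan activates the page.
			 * No dedicated SWAP_DIRTY return value needed. */
			page->swap_backed = true;
			return false;
		}
		return true;	/* clean lazyfree page: safe to discard */
	}

	int main(void)
	{
		struct toy_page p = { .dirty = true, .swap_backed = false };
		printf("unmap %s, swap_backed=%d\n",
		       unmap_lazyfree(&p) ? "succeeded" : "failed",
		       p.swap_backed);
		return 0;
	}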
From mboxrd@z Thu Jan 1 00:00:00 1970
From: Minchan Kim
Subject: [PATCH v2 04/10] mm: make the try_to_munlock void function
Date: Wed, 15 Mar 2017 14:24:47 +0900
Message-ID: <1489555493-14659-5-git-send-email-minchan@kernel.org>
In-Reply-To: <1489555493-14659-1-git-send-email-minchan@kernel.org>
To: Andrew Morton

try_to_munlock returns SWAP_MLOCK if one of the VMAs mapping the page
has the VM_LOCKED flag. In that case, the VM sets PG_mlocked on the
page unless the page is a pte-mapped THP, which cannot be mlocked.

With that, __munlock_isolated_page can use PageMlocked to check whether
try_to_munlock succeeded or not, without relying on try_to_munlock's
return value. It helps to make try_to_unmap/try_to_unmap_one simple
with upcoming patches.

Cc: Vlastimil Babka
Acked-by: Kirill A. Shutemov
Signed-off-by: Minchan Kim
---
 include/linux/rmap.h |  2 +-
 mm/mlock.c           |  6 ++----
 mm/rmap.c            | 17 +++++------------
 3 files changed, 8 insertions(+), 17 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index b556eef..1b0cd4c 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -235,7 +235,7 @@ int page_mkclean(struct page *);
  * called in munlock()/munmap() path to check for other vmas holding
  * the page mlocked.
  */
-int try_to_munlock(struct page *);
+void try_to_munlock(struct page *);
 
 void remove_migration_ptes(struct page *old, struct page *new, bool locked);
 
diff --git a/mm/mlock.c b/mm/mlock.c
index 02f1382..9660ee5 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -123,17 +123,15 @@ static bool __munlock_isolate_lru_page(struct page *page, bool getpage)
  */
 static void __munlock_isolated_page(struct page *page)
 {
-	int ret = SWAP_AGAIN;
-
 	/*
 	 * Optimization: if the page was mapped just once, that's our mapping
 	 * and we don't need to check all the other vmas.
 	 */
 	if (page_mapcount(page) > 1)
-		ret = try_to_munlock(page);
+		try_to_munlock(page);
 
 	/* Did try_to_unlock() succeed or punt? */
-	if (ret != SWAP_MLOCK)
+	if (!PageMlocked(page))
 		count_vm_event(UNEVICTABLE_PGMUNLOCKED);
 
 	putback_lru_page(page);

diff --git a/mm/rmap.c b/mm/rmap.c
index bdc7310..2f1fbd9 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1547,18 +1547,10 @@ static int page_not_mapped(struct page *page)
  * Called from munlock code. Checks all of the VMAs mapping the page
  * to make sure nobody else has this page mlocked. The page will be
  * returned with PG_mlocked cleared if no other vmas have it mlocked.
- *
- * Return values are:
- *
- * SWAP_AGAIN	- no vma is holding page mlocked, or,
- * SWAP_AGAIN	- page mapped in mlocked vma -- couldn't acquire mmap sem
- * SWAP_FAIL	- page cannot be located at present
- * SWAP_MLOCK	- page is now mlocked.
  */
-int try_to_munlock(struct page *page)
-{
-	int ret;
+void try_to_munlock(struct page *page)
+{
 	struct rmap_walk_control rwc = {
 		.rmap_one = try_to_unmap_one,
 		.arg = (void *)TTU_MUNLOCK,
@@ -1568,9 +1560,10 @@ int try_to_munlock(struct page *page)
 	};
 
 	VM_BUG_ON_PAGE(!PageLocked(page) || PageLRU(page), page);
+	VM_BUG_ON_PAGE(PageMlocked(page), page);
+	VM_BUG_ON_PAGE(PageCompound(page) && PageDoubleMap(page), page);
 
-	ret = rmap_walk(page, &rwc);
-	return ret;
+	rmap_walk(page, &rwc);
 }
 
 void __put_anon_vma(struct anon_vma *anon_vma)
-- 
2.7.4

From mboxrd@z Thu Jan 1 00:00:00 1970
From: Minchan Kim
Subject: [PATCH v2 05/10] mm: remove SWAP_MLOCK in ttu
Date: Wed, 15 Mar 2017 14:24:48 +0900
Message-ID: <1489555493-14659-6-git-send-email-minchan@kernel.org>
In-Reply-To: <1489555493-14659-1-git-send-email-minchan@kernel.org>
To: Andrew Morton

ttu doesn't need to return SWAP_MLOCK. Instead, just return SWAP_FAIL
because it means the page is not swappable, so it should move to
another LRU list (active or unevictable). The putback friends will move
it to the right list depending on the page's LRU flag.

Signed-off-by: Minchan Kim
---
 include/linux/rmap.h |  1 -
 mm/rmap.c            |  3 +--
 mm/vmscan.c          | 20 +++++++-------------
 3 files changed, 8 insertions(+), 16 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 1b0cd4c..3630d4d 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -297,6 +297,5 @@ static inline int page_mkclean(struct page *page)
 #define SWAP_SUCCESS	0
 #define SWAP_AGAIN	1
 #define SWAP_FAIL	2
-#define SWAP_MLOCK	3
 
 #endif	/* _LINUX_RMAP_H */

diff --git a/mm/rmap.c b/mm/rmap.c
index 2f1fbd9..a5af1e1 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1324,7 +1324,7 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 				 */
 				mlock_vma_page(page);
 			}
-			ret = SWAP_MLOCK;
+			ret = SWAP_FAIL;
 			page_vma_mapped_walk_done(&pvmw);
 			break;
 		}
@@ -1501,7 +1501,6 @@ static int page_mapcount_is_zero(struct page *page)
  * SWAP_SUCCESS	- we succeeded in removing all mappings
  * SWAP_AGAIN	- we missed a mapping, try again later
  * SWAP_FAIL	- the page is unswappable
- * SWAP_MLOCK	- page is mlocked.
  */
 int try_to_unmap(struct page *page, enum ttu_flags flags)
 {

diff --git a/mm/vmscan.c b/mm/vmscan.c
index b8fd656..2a208f0 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -982,7 +982,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		sc->nr_scanned++;
 
 		if (unlikely(!page_evictable(page)))
-			goto cull_mlocked;
+			goto activate_locked;
 
 		if (!sc->may_unmap && page_mapped(page))
 			goto keep_locked;
@@ -1147,8 +1147,6 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 				goto activate_locked;
 			case SWAP_AGAIN:
 				goto keep_locked;
-			case SWAP_MLOCK:
-				goto cull_mlocked;
 			case SWAP_SUCCESS:
 				; /* try to free the page below */
 			}
@@ -1290,20 +1288,16 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		list_add(&page->lru, &free_pages);
 		continue;
 
-cull_mlocked:
-		if (PageSwapCache(page))
-			try_to_free_swap(page);
-		unlock_page(page);
-		list_add(&page->lru, &ret_pages);
-		continue;
-
 activate_locked:
 		/* Not a candidate for swapping, so reclaim swap space. */
-		if (PageSwapCache(page) && mem_cgroup_swap_full(page))
+		if (PageSwapCache(page) && (mem_cgroup_swap_full(page) ||
+						PageMlocked(page)))
 			try_to_free_swap(page);
 		VM_BUG_ON_PAGE(PageActive(page), page);
-		SetPageActive(page);
-		pgactivate++;
+		if (!PageMlocked(page)) {
+			SetPageActive(page);
+			pgactivate++;
+		}
 
 keep_locked:
 		unlock_page(page);
 keep:
-- 
2.7.4
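[A small user-space sketch (illustrative, not kernel code; the names are
invented) of the pattern patches 04 and 05 rely on: once try_to_munlock
records its outcome in page state (PG_mlocked), callers can read the
flag instead of a SWAP_MLOCK return code:]

	#include <stdbool.h>
	#include <stdio.h>

	struct toy_page {
		bool mlocked;	/* models PG_mlocked */
	};

	/* Models try_to_munlock() after patch 04: outcome lands in
	 * page state instead of the return value. */
	static void try_to_munlock_model(struct toy_page *page,
					 bool some_vma_locked)
	{
		if (some_vma_locked)
			page->mlocked = true;	/* models mlock_vma_page() */
	}

	/* Models __munlock_isolated_page(): decide from the flag. */
	static void munlock_isolated_page(struct toy_page *page,
					  bool some_vma_locked)
	{
		try_to_munlock_model(page, some_vma_locked);
		if (!page->mlocked)
			printf("count UNEVICTABLE_PGMUNLOCKED\n");
		else
			printf("page stays mlocked\n");
	}

	int main(void)
	{
		struct toy_page p = { .mlocked = false };
		munlock_isolated_page(&p, /* a VMA is still VM_LOCKED */ true);
		return 0;
	}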
From mboxrd@z Thu Jan 1 00:00:00 1970
From: Minchan Kim
Subject: [PATCH v2 06/10] mm: remove SWAP_AGAIN in ttu
Date: Wed, 15 Mar 2017 14:24:49 +0900
Message-ID: <1489555493-14659-7-git-send-email-minchan@kernel.org>
In-Reply-To: <1489555493-14659-1-git-send-email-minchan@kernel.org>
To: Andrew Morton

In 2002, [1] introduced SWAP_AGAIN. At that time, try_to_unmap_one used
spin_trylock(&mm->page_table_lock), so it was easy to contend on the
lock and fail to take it, and returning SWAP_AGAIN to keep the page's
LRU status made sense. However, we have since switched to a sleeping
lock and no longer skip ptes, so the window in which SWAP_AGAIN can be
returned is tiny. Remove SWAP_AGAIN and just return SWAP_FAIL.

[1] c48c43e, minimal rmap

Signed-off-by: Minchan Kim
---
 mm/rmap.c   | 11 +++--------
 mm/vmscan.c |  2 --
 2 files changed, 3 insertions(+), 10 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index a5af1e1..612682c 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1499,13 +1499,10 @@ static int page_mapcount_is_zero(struct page *page)
  * Return values are:
  *
  * SWAP_SUCCESS	- we succeeded in removing all mappings
- * SWAP_AGAIN	- we missed a mapping, try again later
  * SWAP_FAIL	- the page is unswappable
  */
 int try_to_unmap(struct page *page, enum ttu_flags flags)
 {
-	int ret;
-
 	struct rmap_walk_control rwc = {
 		.rmap_one = try_to_unmap_one,
 		.arg = (void *)flags,
@@ -1525,13 +1522,11 @@ int try_to_unmap(struct page *page, enum ttu_flags flags)
 		rwc.invalid_vma = invalid_migration_vma;
 
 	if (flags & TTU_RMAP_LOCKED)
-		ret = rmap_walk_locked(page, &rwc);
+		rmap_walk_locked(page, &rwc);
 	else
-		ret = rmap_walk(page, &rwc);
+		rmap_walk(page, &rwc);
 
-	if (!page_mapcount(page))
-		ret = SWAP_SUCCESS;
-	return ret;
+	return !page_mapcount(page) ? SWAP_SUCCESS : SWAP_FAIL;
 }
 
 static int page_not_mapped(struct page *page)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2a208f0..7727fbe 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1145,8 +1145,6 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 			case SWAP_FAIL:
 				nr_unmap_fail++;
 				goto activate_locked;
-			case SWAP_AGAIN:
-				goto keep_locked;
 			case SWAP_SUCCESS:
 				; /* try to free the page below */
 			}
-- 
2.7.4

From mboxrd@z Thu Jan 1 00:00:00 1970
From: Minchan Kim
Subject: [PATCH v2 10/10] mm: remove SWAP_[SUCCESS|AGAIN|FAIL]
Date: Wed, 15 Mar 2017 14:24:53 +0900
Message-ID: <1489555493-14659-11-git-send-email-minchan@kernel.org>
In-Reply-To: <1489555493-14659-1-git-send-email-minchan@kernel.org>
To: Andrew Morton

There is no user for it. Remove it.

Signed-off-by: Minchan Kim
---
 include/linux/rmap.h | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 13ed232..43ef2c3 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -295,11 +295,4 @@ static inline int page_mkclean(struct page *page)
 
 #endif	/* CONFIG_MMU */
 
-/*
- * Return values of try_to_unmap
- */
-#define SWAP_SUCCESS	0
-#define SWAP_AGAIN	1
-#define SWAP_FAIL	2
-
 #endif	/* _LINUX_RMAP_H */
-- 
2.7.4
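[To illustrate the rationale in patch 06 above: with a trylock, a walker
frequently bails out and must report "try again later"; with a blocking
lock it simply waits, so the AGAIN case all but disappears. A minimal
user-space analogy using pthreads (not the kernel's page_table_lock;
compile with -lpthread):]

	#include <pthread.h>
	#include <stdbool.h>
	#include <stdio.h>

	static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

	/* Old-style walker: trylock, so contention forces an AGAIN result. */
	static bool walk_trylock(void)
	{
		if (pthread_mutex_trylock(&lock) != 0)
			return false;	/* caller must retry (SWAP_AGAIN) */
		/* ... unmap work ... */
		pthread_mutex_unlock(&lock);
		return true;
	}

	/* New-style walker: block until the lock is free; no retry needed. */
	static bool walk_blocking(void)
	{
		pthread_mutex_lock(&lock);	/* sleeps instead of failing */
		/* ... unmap work ... */
		pthread_mutex_unlock(&lock);
		return true;
	}

	int main(void)
	{
		printf("trylock walker: %d, blocking walker: %d\n",
		       walk_trylock(), walk_blocking());
		return 0;
	}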
From mboxrd@z Thu Jan 1 00:00:00 1970
From: Minchan Kim
Subject: [PATCH v2 09/10] mm: make rmap_one boolean function
Date: Wed, 15 Mar 2017 14:24:52 +0900
Message-ID: <1489555493-14659-10-git-send-email-minchan@kernel.org>
In-Reply-To: <1489555493-14659-1-git-send-email-minchan@kernel.org>
To: Andrew Morton

rmap_one's return value controls whether rmap_walk should continue
scanning other ptes or not, so it's a target for conversion to boolean.
Return true if the scan should continue; otherwise, return false to
stop the scanning.

This patch makes rmap_one's return value boolean.

Signed-off-by: Minchan Kim
---
 include/linux/rmap.h |  6 +++++-
 mm/ksm.c             |  2 +-
 mm/migrate.c         |  4 ++--
 mm/page_idle.c       |  4 ++--
 mm/rmap.c            | 30 +++++++++++++++---------------
 5 files changed, 25 insertions(+), 21 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 1d7d457c..13ed232 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -257,7 +257,11 @@ int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma);
  */
 struct rmap_walk_control {
 	void *arg;
-	int (*rmap_one)(struct page *page, struct vm_area_struct *vma,
+	/*
+	 * Return false if page table scanning in rmap_walk should be stopped.
+	 * Otherwise, return true.
+	 */
+	bool (*rmap_one)(struct page *page, struct vm_area_struct *vma,
 					unsigned long addr, void *arg);
 	int (*done)(struct page *page);
 	struct anon_vma *(*anon_lock)(struct page *page);

diff --git a/mm/ksm.c b/mm/ksm.c
index 6edffb9..d9fc0e4 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1977,7 +1977,7 @@ void rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc)
 			if (rwc->invalid_vma && rwc->invalid_vma(vma, rwc->arg))
 				continue;
 
-			if (SWAP_AGAIN != rwc->rmap_one(page, vma,
+			if (!rwc->rmap_one(page, vma,
 					rmap_item->address, rwc->arg)) {
 				anon_vma_unlock_read(anon_vma);
 				return;

diff --git a/mm/migrate.c b/mm/migrate.c
index e0cb4b7..8e9f1e8 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -194,7 +194,7 @@ void putback_movable_pages(struct list_head *l)
 /*
  * Restore a potential migration pte to a working pte entry
  */
-static int remove_migration_pte(struct page *page, struct vm_area_struct *vma,
+static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
 				 unsigned long addr, void *old)
 {
 	struct page_vma_mapped_walk pvmw = {
@@ -250,7 +250,7 @@ static int remove_migration_pte(struct page *page, struct vm_area_struct *vma,
 		update_mmu_cache(vma, pvmw.address, pvmw.pte);
 	}
 
-	return SWAP_AGAIN;
+	return true;
 }

diff --git a/mm/page_idle.c b/mm/page_idle.c
index b0ee56c..1b0f48c 100644
--- a/mm/page_idle.c
+++ b/mm/page_idle.c
@@ -50,7 +50,7 @@ static struct page *page_idle_get_page(unsigned long pfn)
 	return page;
 }
 
-static int page_idle_clear_pte_refs_one(struct page *page,
+static bool page_idle_clear_pte_refs_one(struct page *page,
 					struct vm_area_struct *vma,
 					unsigned long addr, void *arg)
 {
@@ -84,7 +84,7 @@ static int page_idle_clear_pte_refs_one(struct page *page,
 		 */
 		set_page_young(page);
 	}
-	return SWAP_AGAIN;
+	return true;
 }
 
 static void page_idle_clear_pte_refs(struct page *page)

diff --git a/mm/rmap.c b/mm/rmap.c
index 987b0d2..aa25fde 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -719,7 +719,7 @@ struct page_referenced_arg {
 /*
  * arg: page_referenced_arg will be passed
  */
-static int page_referenced_one(struct page *page, struct vm_area_struct *vma,
+static bool page_referenced_one(struct page *page, struct vm_area_struct *vma,
 			unsigned long address, void *arg)
 {
 	struct page_referenced_arg *pra = arg;
@@ -736,7 +736,7 @@ static int page_referenced_one(struct page *page, struct vm_area_struct *vma,
 		if (vma->vm_flags & VM_LOCKED) {
 			page_vma_mapped_walk_done(&pvmw);
 			pra->vm_flags |= VM_LOCKED;
-			return SWAP_FAIL; /* To break the loop */
+			return false; /* To break the loop */
 		}
 
 		if (pvmw.pte) {
@@ -776,9 +776,9 @@ static int page_referenced_one(struct page *page, struct vm_area_struct *vma,
 	}
 
 	if (!pra->mapcount)
-		return SWAP_SUCCESS; /* To break the loop */
+		return false; /* To break the loop */
 
-	return SWAP_AGAIN;
+	return true;
 }
 
 static bool invalid_page_referenced_vma(struct vm_area_struct *vma, void *arg)
@@ -849,7 +849,7 @@ int page_referenced(struct page *page,
 	return pra.referenced;
 }
 
-static int page_mkclean_one(struct page *page, struct vm_area_struct *vma,
+static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 			    unsigned long address, void *arg)
 {
 	struct page_vma_mapped_walk pvmw = {
@@ -902,7 +902,7 @@ static int page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 		}
 	}
 
-	return SWAP_AGAIN;
+	return true;
 }
 
 static bool invalid_mkclean_vma(struct vm_area_struct *vma, void *arg)
@@ -1285,7 +1285,7 @@ void page_remove_rmap(struct page *page, bool compound)
 /*
  * @arg: enum ttu_flags will be passed to this argument
  */
-static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
+static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 		     unsigned long address, void *arg)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -1296,12 +1296,12 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 	};
 	pte_t pteval;
 	struct page *subpage;
-	int ret = SWAP_AGAIN;
+	bool ret = true;
 	enum ttu_flags flags = (enum ttu_flags)arg;
 
 	/* munlock has nothing to gain from examining un-locked vmas */
 	if ((flags & TTU_MUNLOCK) && !(vma->vm_flags & VM_LOCKED))
-		return SWAP_AGAIN;
+		return true;
 
 	if (flags & TTU_SPLIT_HUGE_PMD) {
 		split_huge_pmd_address(vma, address,
@@ -1324,7 +1324,7 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 				 */
 				mlock_vma_page(page);
 			}
-			ret = SWAP_FAIL;
+			ret = false;
 			page_vma_mapped_walk_done(&pvmw);
 			break;
 		}
@@ -1342,7 +1342,7 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 		if (!(flags & TTU_IGNORE_ACCESS)) {
 			if (ptep_clear_flush_young_notify(vma, address,
 						pvmw.pte)) {
-				ret = SWAP_FAIL;
+				ret = false;
 				page_vma_mapped_walk_done(&pvmw);
 				break;
 			}
@@ -1432,14 +1432,14 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 				 */
 				set_pte_at(mm, address, pvmw.pte, pteval);
 				SetPageSwapBacked(page);
-				ret = SWAP_FAIL;
+				ret = false;
 				page_vma_mapped_walk_done(&pvmw);
 				break;
 			}
 
 			if (swap_duplicate(entry) < 0) {
 				set_pte_at(mm, address, pvmw.pte, pteval);
-				ret = SWAP_FAIL;
+				ret = false;
 				page_vma_mapped_walk_done(&pvmw);
 				break;
 			}
@@ -1632,7 +1632,7 @@ static void rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc,
 		if (rwc->invalid_vma && rwc->invalid_vma(vma, rwc->arg))
 			continue;
 
-		if (SWAP_AGAIN != rwc->rmap_one(page, vma, address, rwc->arg))
+		if (!rwc->rmap_one(page, vma, address, rwc->arg))
 			break;
 		if (rwc->done && rwc->done(page))
 			break;
@@ -1686,7 +1686,7 @@ static void rmap_walk_file(struct page *page, struct rmap_walk_control *rwc,
 		if (rwc->invalid_vma && rwc->invalid_vma(vma, rwc->arg))
 			continue;
 
-		if (SWAP_AGAIN != rwc->rmap_one(page, vma, address, rwc->arg))
+		if (!rwc->rmap_one(page, vma, address, rwc->arg))
 			goto done;
 		if (rwc->done && rwc->done(page))
 			goto done;
-- 
2.7.4

From mboxrd@z Thu Jan 1 00:00:00 1970
From: Minchan Kim
Subject: [PATCH v2 07/10] mm: make ttu's return boolean
Date: Wed, 15 Mar 2017 14:24:50 +0900
Message-ID: <1489555493-14659-8-git-send-email-minchan@kernel.org>
In-Reply-To: <1489555493-14659-1-git-send-email-minchan@kernel.org>
To: Andrew Morton

try_to_unmap returns SWAP_SUCCESS or SWAP_FAIL, so it's suitable for a
boolean return. This patch changes it.

Cc: "Kirill A. Shutemov"
Cc: Naoya Horiguchi
Signed-off-by: Minchan Kim
---
 include/linux/rmap.h |  4 ++--
 mm/huge_memory.c     |  6 +++---
 mm/memory-failure.c  | 26 ++++++++++++--------------
 mm/rmap.c            |  8 +++-----
 mm/vmscan.c          |  7 +------
 5 files changed, 21 insertions(+), 30 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 3630d4d..6028c38 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -191,7 +191,7 @@ static inline void page_dup_rmap(struct page *page, bool compound)
 int page_referenced(struct page *, int is_locked,
 			struct mem_cgroup *memcg, unsigned long *vm_flags);
 
-int try_to_unmap(struct page *, enum ttu_flags flags);
+bool try_to_unmap(struct page *, enum ttu_flags flags);
 
 /* Avoid racy checks */
 #define PVMW_SYNC		(1 << 0)
@@ -281,7 +281,7 @@ static inline int page_referenced(struct page *page, int is_locked,
 	return 0;
 }
 
-#define try_to_unmap(page, refs) SWAP_FAIL
+#define try_to_unmap(page, refs) false
 
 static inline int page_mkclean(struct page *page)
 {

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2b4120f..033ccee 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2139,15 +2139,15 @@ static void freeze_page(struct page *page)
 {
 	enum ttu_flags ttu_flags = TTU_IGNORE_MLOCK | TTU_IGNORE_ACCESS |
 		TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD;
-	int ret;
+	bool unmap_success;
 
 	VM_BUG_ON_PAGE(!PageHead(page), page);
 
 	if (PageAnon(page))
 		ttu_flags |= TTU_MIGRATION;
 
-	ret = try_to_unmap(page, ttu_flags);
-	VM_BUG_ON_PAGE(ret, page);
+	unmap_success = try_to_unmap(page, ttu_flags);
+	VM_BUG_ON_PAGE(!unmap_success, page);
 }
 
 static void unfreeze_page(struct page *page)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index f85adfe5..3d3cf6a 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -322,7 +322,7 @@ static void add_to_kill(struct task_struct *tsk, struct page *p,
  * wrong earlier.
  */
 static void kill_procs(struct list_head *to_kill, int forcekill, int trapno,
-			  int fail, struct page *page, unsigned long pfn,
+			  bool fail, struct page *page, unsigned long pfn,
 			  int flags)
 {
 	struct to_kill *tk, *next;
@@ -904,13 +904,13 @@ EXPORT_SYMBOL_GPL(get_hwpoison_page);
  * Do all that is necessary to remove user space mappings. Unmap
  * the pages and send SIGBUS to the processes if the data was dirty.
  */
-static int hwpoison_user_mappings(struct page *p, unsigned long pfn,
+static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
 				  int trapno, int flags, struct page **hpagep)
 {
 	enum ttu_flags ttu = TTU_IGNORE_MLOCK | TTU_IGNORE_ACCESS;
 	struct address_space *mapping;
 	LIST_HEAD(tokill);
-	int ret;
+	bool unmap_success;
 	int kill = 1, forcekill;
 	struct page *hpage = *hpagep;
@@ -919,20 +919,20 @@ static int hwpoison_user_mappings(struct page *p, unsigned long pfn,
 	 * other types of pages.
 	 */
 	if (PageReserved(p) || PageSlab(p))
-		return SWAP_SUCCESS;
+		return true;
 	if (!(PageLRU(hpage) || PageHuge(p)))
-		return SWAP_SUCCESS;
+		return true;
 
 	/*
 	 * This check implies we don't kill processes if their pages
 	 * are in the swap cache early. Those are always late kills.
 	 */
 	if (!page_mapped(hpage))
-		return SWAP_SUCCESS;
+		return true;
 
 	if (PageKsm(p)) {
 		pr_err("Memory failure: %#lx: can't handle KSM pages.\n", pfn);
-		return SWAP_FAIL;
+		return false;
 	}
 
 	if (PageSwapCache(p)) {
@@ -971,8 +971,8 @@ static int hwpoison_user_mappings(struct page *p, unsigned long pfn,
 	if (kill)
 		collect_procs(hpage, &tokill, flags & MF_ACTION_REQUIRED);
 
-	ret = try_to_unmap(hpage, ttu);
-	if (ret != SWAP_SUCCESS)
+	unmap_success = try_to_unmap(hpage, ttu);
+	if (!unmap_success)
 		pr_err("Memory failure: %#lx: failed to unmap page (mapcount=%d)\n",
 		       pfn, page_mapcount(hpage));
 
@@ -987,10 +987,9 @@ static int hwpoison_user_mappings(struct page *p, unsigned long pfn,
 	 * any accesses to the poisoned memory.
 	 */
 	forcekill = PageDirty(hpage) || (flags & MF_MUST_KILL);
-	kill_procs(&tokill, forcekill, trapno,
-		      ret != SWAP_SUCCESS, p, pfn, flags);
+	kill_procs(&tokill, forcekill, trapno, !unmap_success, p, pfn, flags);
 
-	return ret;
+	return unmap_success;
 }
 
 static void set_page_hwpoison_huge_page(struct page *hpage)
@@ -1230,8 +1229,7 @@ int memory_failure(unsigned long pfn, int trapno, int flags)
 	 * When the raw error page is thp tail page, hpage points to the raw
 	 * page after thp split.
 	 */
-	if (hwpoison_user_mappings(p, pfn, trapno, flags, &hpage)
-	    != SWAP_SUCCESS) {
+	if (!hwpoison_user_mappings(p, pfn, trapno, flags, &hpage)) {
 		action_result(pfn, MF_MSG_UNMAP_FAILED, MF_IGNORED);
 		res = -EBUSY;
 		goto out;

diff --git a/mm/rmap.c b/mm/rmap.c
index 612682c..04d5355 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1496,12 +1496,10 @@ static int page_mapcount_is_zero(struct page *page)
  *
  * Tries to remove all the page table entries which are mapping this
  * page, used in the pageout path. Caller must hold the page lock.
- * Return values are:
  *
- * SWAP_SUCCESS	- we succeeded in removing all mappings
- * SWAP_FAIL	- the page is unswappable
+ * If unmap is successful, return true. Otherwise, false.
  */
-int try_to_unmap(struct page *page, enum ttu_flags flags)
+bool try_to_unmap(struct page *page, enum ttu_flags flags)
 {
 	struct rmap_walk_control rwc = {
 		.rmap_one = try_to_unmap_one,
@@ -1526,7 +1524,7 @@ int try_to_unmap(struct page *page, enum ttu_flags flags)
 	else
 		rmap_walk(page, &rwc);
 
-	return !page_mapcount(page) ? SWAP_SUCCESS : SWAP_FAIL;
+	return !page_mapcount(page) ? true : false;
 }
 
 static int page_not_mapped(struct page *page)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7727fbe..beffe79 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -967,7 +967,6 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		int may_enter_fs;
 		enum page_references references = PAGEREF_RECLAIM_CLEAN;
 		bool dirty, writeback;
-		int ret = SWAP_SUCCESS;
 
 		cond_resched();
 
@@ -1140,13 +1139,9 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		 * processes. Try to unmap it here.
 		 */
 		if (page_mapped(page)) {
-			switch (ret = try_to_unmap(page,
-					ttu_flags | TTU_BATCH_FLUSH)) {
-			case SWAP_FAIL:
+			if (!try_to_unmap(page, ttu_flags | TTU_BATCH_FLUSH)) {
 				nr_unmap_fail++;
 				goto activate_locked;
-			case SWAP_SUCCESS:
-				; /* try to free the page below */
 			}
 		}
-- 
2.7.4
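[A compact user-space sketch (illustrative only; the toy_* names are
invented) of the callback contract patches 07-09 converge on: the walker
keeps iterating while rmap_one returns true, and the final verdict comes
from page state rather than being threaded through return codes:]

	#include <stdbool.h>
	#include <stdio.h>

	struct toy_page { int mapcount; };

	struct toy_walk_control {
		/* Return false to stop the walk, true to continue
		 * (the contract patch 09 documents). */
		bool (*rmap_one)(struct toy_page *page, void *arg);
		void *arg;
	};

	static bool unmap_one(struct toy_page *page, void *arg)
	{
		(void)arg;
		page->mapcount--;	/* drop one mapping */
		return true;		/* keep scanning remaining VMAs */
	}

	/* The walker is void (patch 08); it just drives the callback. */
	static void rmap_walk_model(struct toy_page *page,
				    struct toy_walk_control *rwc)
	{
		while (page->mapcount > 0)
			if (!rwc->rmap_one(page, rwc->arg))
				break;
	}

	/* try_to_unmap is bool (patch 07): verdict from page state. */
	static bool try_to_unmap_model(struct toy_page *page)
	{
		struct toy_walk_control rwc = { .rmap_one = unmap_one };

		rmap_walk_model(page, &rwc);
		return page->mapcount == 0;
	}

	int main(void)
	{
		struct toy_page p = { .mapcount = 3 };

		printf("unmap %s\n",
		       try_to_unmap_model(&p) ? "succeeded" : "failed");
		return 0;
	}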
From mboxrd@z Thu Jan 1 00:00:00 1970
From: Minchan Kim
Subject: [PATCH v2 08/10] mm: make rmap_walk void function
Date: Wed, 15 Mar 2017 14:24:51 +0900
Message-ID: <1489555493-14659-9-git-send-email-minchan@kernel.org>
In-Reply-To: <1489555493-14659-1-git-send-email-minchan@kernel.org>
To: Andrew Morton

There is no user of the return value from the rmap_walk friends, so
this patch makes them void functions.

Signed-off-by: Minchan Kim
---
 include/linux/ksm.h  |  5 ++---
 include/linux/rmap.h |  4 ++--
 mm/ksm.c             | 16 ++++++----------
 mm/rmap.c            | 32 +++++++++++++-------------------
 4 files changed, 23 insertions(+), 34 deletions(-)

diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index e1cfda4..78b44a0 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -61,7 +61,7 @@ static inline void set_page_stable_node(struct page *page,
 struct page *ksm_might_need_to_copy(struct page *page,
 			struct vm_area_struct *vma, unsigned long address);
 
-int rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc);
+void rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc);
 void ksm_migrate_page(struct page *newpage, struct page *oldpage);
 
 #else  /* !CONFIG_KSM */
@@ -94,10 +94,9 @@ static inline int page_referenced_ksm(struct page *page,
 	return 0;
 }
 
-static inline int rmap_walk_ksm(struct page *page,
+static inline void rmap_walk_ksm(struct page *page,
 			struct rmap_walk_control *rwc)
 {
-	return 0;
 }
 
 static inline void ksm_migrate_page(struct page *newpage, struct page *oldpage)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 6028c38..1d7d457c 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -264,8 +264,8 @@ struct rmap_walk_control {
 	bool (*invalid_vma)(struct vm_area_struct *vma, void *arg);
 };
 
-int rmap_walk(struct page *page, struct rmap_walk_control *rwc);
-int rmap_walk_locked(struct page *page, struct rmap_walk_control *rwc);
+void rmap_walk(struct page *page, struct rmap_walk_control *rwc);
+void rmap_walk_locked(struct page *page, struct rmap_walk_control *rwc);
 
 #else	/* !CONFIG_MMU */

diff --git a/mm/ksm.c b/mm/ksm.c
index 19b4f2d..6edffb9 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1933,11 +1933,10 @@ struct page *ksm_might_need_to_copy(struct page *page,
 	return new_page;
 }
 
-int rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc)
+void rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc)
 {
 	struct stable_node *stable_node;
 	struct rmap_item *rmap_item;
-	int ret = SWAP_AGAIN;
 	int search_new_forks = 0;
 
 	VM_BUG_ON_PAGE(!PageKsm(page), page);
@@ -1950,7 +1949,7 @@ int rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc)
 
 	stable_node = page_stable_node(page);
 	if (!stable_node)
-		return ret;
+		return;
 again:
 	hlist_for_each_entry(rmap_item, &stable_node->hlist, hlist) {
 		struct anon_vma *anon_vma = rmap_item->anon_vma;
@@ -1978,23 +1977,20 @@ int rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc)
 			if (rwc->invalid_vma && rwc->invalid_vma(vma, rwc->arg))
 				continue;
 
-			ret = rwc->rmap_one(page, vma,
-					rmap_item->address, rwc->arg);
-			if (ret != SWAP_AGAIN) {
+			if (SWAP_AGAIN != rwc->rmap_one(page, vma,
+					rmap_item->address, rwc->arg)) {
 				anon_vma_unlock_read(anon_vma);
-				goto out;
+				return;
 			}
 			if (rwc->done && rwc->done(page)) {
 				anon_vma_unlock_read(anon_vma);
-				goto out;
+				return;
 			}
 		}
 		anon_vma_unlock_read(anon_vma);
 	}
 	if (!search_new_forks++)
 		goto again;
-out:
-	return ret;
 }
 
 #ifdef CONFIG_MIGRATION

diff --git a/mm/rmap.c b/mm/rmap.c
index 04d5355..987b0d2 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1603,13 +1603,12 @@ static struct anon_vma *rmap_walk_anon_lock(struct page *page,
 * vm_flags for that VMA. That should be OK, because that vma shouldn't be
 * LOCKED.
 */
-static int rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc,
+static void rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc,
 		bool locked)
 {
 	struct anon_vma *anon_vma;
 	pgoff_t pgoff_start, pgoff_end;
 	struct anon_vma_chain *avc;
-	int ret = SWAP_AGAIN;
 
 	if (locked) {
 		anon_vma = page_anon_vma(page);
@@ -1619,7 +1618,7 @@ static int rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc,
 		anon_vma = rmap_walk_anon_lock(page, rwc);
 	}
 	if (!anon_vma)
-		return ret;
+		return;
 
 	pgoff_start = page_to_pgoff(page);
 	pgoff_end = pgoff_start + hpage_nr_pages(page) - 1;
@@ -1633,8 +1632,7 @@ static int rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc,
 		if (rwc->invalid_vma && rwc->invalid_vma(vma, rwc->arg))
 			continue;
 
-		ret = rwc->rmap_one(page, vma, address, rwc->arg);
-		if (ret != SWAP_AGAIN)
+		if (SWAP_AGAIN != rwc->rmap_one(page, vma, address, rwc->arg))
 			break;
 		if (rwc->done && rwc->done(page))
 			break;
@@ -1642,7 +1640,6 @@ static int rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc,
 
 	if (!locked)
 		anon_vma_unlock_read(anon_vma);
-	return ret;
 }
 
 /*
@@ -1658,13 +1655,12 @@ static int rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc,
 * vm_flags for that VMA. That should be OK, because that vma shouldn't be
 * LOCKED.
 */
-static int rmap_walk_file(struct page *page, struct rmap_walk_control *rwc,
+static void rmap_walk_file(struct page *page, struct rmap_walk_control *rwc,
 		bool locked)
 {
 	struct address_space *mapping = page_mapping(page);
 	pgoff_t pgoff_start, pgoff_end;
 	struct vm_area_struct *vma;
-	int ret = SWAP_AGAIN;
 
 	/*
 	 * The page lock not only makes sure that page->mapping cannot
@@ -1675,7 +1671,7 @@ static int rmap_walk_file(struct page *page, struct rmap_walk_control *rwc,
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 
 	if (!mapping)
-		return ret;
+		return;
 
 	pgoff_start = page_to_pgoff(page);
 	pgoff_end = pgoff_start + hpage_nr_pages(page) - 1;
@@ -1690,8 +1686,7 @@ static int rmap_walk_file(struct page *page, struct rmap_walk_control *rwc,
 		if (rwc->invalid_vma && rwc->invalid_vma(vma, rwc->arg))
 			continue;
 
-		ret = rwc->rmap_one(page, vma, address, rwc->arg);
-		if (ret != SWAP_AGAIN)
+		if (SWAP_AGAIN != rwc->rmap_one(page, vma, address, rwc->arg))
 			goto done;
 		if (rwc->done && rwc->done(page))
 			goto done;
@@ -1700,28 +1695,27 @@ static int rmap_walk_file(struct page *page, struct rmap_walk_control *rwc,
 done:
 	if (!locked)
 		i_mmap_unlock_read(mapping);
-	return ret;
 }
 
-int rmap_walk(struct page *page, struct rmap_walk_control *rwc)
+void rmap_walk(struct page *page, struct rmap_walk_control *rwc)
 {
 	if (unlikely(PageKsm(page)))
-		return rmap_walk_ksm(page, rwc);
+		rmap_walk_ksm(page, rwc);
 	else if (PageAnon(page))
-		return rmap_walk_anon(page, rwc, false);
+		rmap_walk_anon(page, rwc, false);
 	else
-		return rmap_walk_file(page, rwc, false);
+		rmap_walk_file(page, rwc, false);
 }
 
 /* Like rmap_walk, but caller holds relevant rmap lock */
-int rmap_walk_locked(struct page *page, struct rmap_walk_control *rwc)
+void rmap_walk_locked(struct page *page, struct rmap_walk_control *rwc)
 {
 	/* no ksm support for now */
 	VM_BUG_ON_PAGE(PageKsm(page), page);
 	if (PageAnon(page))
-		return rmap_walk_anon(page, rwc, true);
+		rmap_walk_anon(page, rwc, true);
 	else
-		return rmap_walk_file(page, rwc, true);
+		rmap_walk_file(page, rwc, true);
 }
 
 #ifdef CONFIG_HUGETLB_PAGE
-- 
2.7.4

From mboxrd@z Thu Jan 1 00:00:00 1970
From: Vlastimil Babka
Subject: Re: [PATCH v2 04/10] mm: make the try_to_munlock void function
Date: Wed, 15 Mar 2017 08:31:45 +0100
In-Reply-To: <1489555493-14659-5-git-send-email-minchan@kernel.org>
To: Minchan Kim, Andrew Morton

On 03/15/2017 06:24 AM, Minchan Kim wrote:
> try_to_munlock returns SWAP_MLOCK if one of the VMAs mapping the page
> has the VM_LOCKED flag. In that case, the VM sets PG_mlocked on the
> page unless the page is a pte-mapped THP, which cannot be mlocked.
>
> With that, __munlock_isolated_page can use PageMlocked to check whether
> try_to_munlock succeeded or not, without relying on try_to_munlock's
> return value. It helps to make try_to_unmap/try_to_unmap_one simple
> with upcoming patches.
>
> Cc: Vlastimil Babka
> Acked-by: Kirill A. Shutemov
> Signed-off-by: Minchan Kim

Acked-by: Vlastimil Babka

From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sergey Senozhatsky
Subject: Re: [PATCH v2 10/10] mm: remove SWAP_[SUCCESS|AGAIN|FAIL]
Date: Thu, 16 Mar 2017 13:40:23 +0900
Message-ID: <20170316044023.GA2597@jagdpanzerIV.localdomain>
In-Reply-To: <1489555493-14659-11-git-send-email-minchan@kernel.org>
To: Minchan Kim

Hello,

On (03/15/17 14:24), Minchan Kim wrote:
> There is no user for it. Remove it.

there is one.

mm/rmap.c

try_to_unmap_one()
...
	if (unlikely(PageSwapBacked(page) != PageSwapCache(page))) {
		WARN_ON_ONCE(1);
		ret = SWAP_FAIL;
		page_vma_mapped_walk_done(&pvmw);
		break;
	}

	-ss
From mboxrd@z Thu Jan 1 00:00:00 1970
From: Minchan Kim
Subject: Re: [PATCH v2 10/10] mm: remove SWAP_[SUCCESS|AGAIN|FAIL]
Date: Thu, 16 Mar 2017 14:33:13 +0900
Message-ID: <20170316053313.GA19241@bbox>
In-Reply-To: <20170316044023.GA2597@jagdpanzerIV.localdomain>
To: Sergey Senozhatsky, Andrew Morton

Hey, Sergey,

On Thu, Mar 16, 2017 at 01:40:23PM +0900, Sergey Senozhatsky wrote:
> Hello,
>
> On (03/15/17 14:24), Minchan Kim wrote:
> > There is no user for it. Remove it.
>
> there is one.
>
> mm/rmap.c
>
> try_to_unmap_one()
> ...
>	if (unlikely(PageSwapBacked(page) != PageSwapCache(page))) {
>		WARN_ON_ONCE(1);
>		ret = SWAP_FAIL;
>		page_vma_mapped_walk_done(&pvmw);
>		break;
>	}

"There is no user for it"

I was a liar, so I need to be an honest guy. Thanks, Sergey!

Andrew, please make me honest. Sorry about that.

From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sergey Senozhatsky
Subject: Re: [PATCH v2 10/10] mm: remove SWAP_[SUCCESS|AGAIN|FAIL]
Date: Thu, 16 Mar 2017 14:44:30 +0900
Message-ID: <20170316054430.GA464@jagdpanzerIV.localdomain>
In-Reply-To: <20170316053313.GA19241@bbox>
To: Minchan Kim

On (03/16/17 14:33), Minchan Kim wrote:
[..]
> "There is no user for it"
>
> I was a liar, so I need to be an honest guy.

ha-ha-ha. I didn't say that :)

[..]
> @@ -1414,7 +1414,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
>	 */
>	if (unlikely(PageSwapBacked(page) != PageSwapCache(page))) {
>		WARN_ON_ONCE(1);
> -		ret = SWAP_FAIL;
> +		ret = false;
>		page_vma_mapped_walk_done(&pvmw);
>		break;
>	}

one thing to notice here is that 'ret = false' and 'ret = SWAP_FAIL'
are not the same and must produce different results. `ret' is bool and
SWAP_FAIL was 2. it's return 1 vs return 0, isn't it? so was there a
bug before?

	-ss

From mboxrd@z Thu Jan 1 00:00:00 1970
From: Minchan Kim
Subject: Re: [PATCH v2 10/10] mm: remove SWAP_[SUCCESS|AGAIN|FAIL]
Date: Thu, 16 Mar 2017 14:51:54 +0900
Message-ID: <20170316055154.GA26126@bbox>
In-Reply-To: <20170316054430.GA464@jagdpanzerIV.localdomain>
To: Sergey Senozhatsky

On Thu, Mar 16, 2017 at 02:44:30PM +0900, Sergey Senozhatsky wrote:
> On (03/16/17 14:33), Minchan Kim wrote:
> [..]
> > @@ -1414,7 +1414,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
> >	 */
> >	if (unlikely(PageSwapBacked(page) != PageSwapCache(page))) {
> >		WARN_ON_ONCE(1);
> > -		ret = SWAP_FAIL;
> > +		ret = false;
> >		page_vma_mapped_walk_done(&pvmw);
> >		break;
> >	}
>
> one thing to notice here is that 'ret = false' and 'ret = SWAP_FAIL'
> are not the same and must produce different results. `ret' is bool and
> SWAP_FAIL was 2. it's return 1 vs return 0, isn't it? so was there a
> bug before?

No, it was not a bug. My patchset just changed the meaning of the
return value. Look at this:
https://marc.info/?l=linux-mm&m=148955552314806&w=2

So, false now means SWAP_FAIL (i.e., stop the rmap scanning and bail
out).

Thanks.

From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sergey Senozhatsky
Subject: Re: [PATCH v2 10/10] mm: remove SWAP_[SUCCESS|AGAIN|FAIL]
Date: Thu, 16 Mar 2017 14:57:54 +0900
Message-ID: <20170316055754.GB464@jagdpanzerIV.localdomain>
In-Reply-To: <20170316055154.GA26126@bbox>
To: Minchan Kim

On (03/16/17 14:51), Minchan Kim wrote:
[..]
> No, it was not a bug. My patchset just changed the meaning of the
> return value.
>
> So, false now means SWAP_FAIL (i.e., stop the rmap scanning and bail
> out).

ah, indeed. sorry, didn't notice that. thanks.

	-ss
Shutemov" , Anshuman Khandual --sdtB3X0nJg68CQEu Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Hi Minchan, [auto build test ERROR on mmotm/master] [also build test ERROR on next-20170310] [cannot apply to v4.11-rc2] [if your patch is applied to the wrong git tree, please drop us a note to help improve the system] url: https://github.com/0day-ci/linux/commits/Minchan-Kim/make-try_to_unmap-simple/20170317-020635 base: git://git.cmpxchg.org/linux-mmotm.git master config: i386-tinyconfig (attached as .config) compiler: gcc-6 (Debian 6.2.0-3) 6.2.0 20160901 reproduce: # save the attached .config to linux build tree make ARCH=i386 All errors (new ones prefixed by >>): mm/rmap.c: In function 'try_to_unmap_one': >> mm/rmap.c:1417:11: error: 'SWAP_FAIL' undeclared (first use in this function) ret = SWAP_FAIL; ^~~~~~~~~ mm/rmap.c:1417:11: note: each undeclared identifier is reported only once for each function it appears in vim +/SWAP_FAIL +1417 mm/rmap.c ^1da177e Linus Torvalds 2005-04-16 1411 /* ^1da177e Linus Torvalds 2005-04-16 1412 * Store the swap location in the pte. ^1da177e Linus Torvalds 2005-04-16 1413 * See handle_pte_fault() ... ^1da177e Linus Torvalds 2005-04-16 1414 */ efeba3bd Minchan Kim 2017-03-10 1415 if (unlikely(PageSwapBacked(page) != PageSwapCache(page))) { efeba3bd Minchan Kim 2017-03-10 1416 WARN_ON_ONCE(1); 3154f021 Minchan Kim 2017-03-10 @1417 ret = SWAP_FAIL; 3154f021 Minchan Kim 2017-03-10 1418 page_vma_mapped_walk_done(&pvmw); 3154f021 Minchan Kim 2017-03-10 1419 break; 3154f021 Minchan Kim 2017-03-10 1420 } :::::: The code at line 1417 was first introduced by commit :::::: 3154f021001fba264cc2cba4c4ff4bfb5a3e2f92 mm: fix lazyfree BUG_ON check in try_to_unmap_one() :::::: TO: Minchan Kim :::::: CC: Johannes Weiner --- 0-DAY kernel test infrastructure Open Source Technology Center https://lists.01.org/pipermail/kbuild-all Intel Corporation --sdtB3X0nJg68CQEu Content-Type: application/gzip Content-Disposition: attachment; filename=".config.gz" Content-Transfer-Encoding: base64 H4sICNbZylgAAy5jb25maWcAjFxbc9u4kn4/v4I1sw8zVZvEt3g8teUHCAQljAmCIUhJ9gtL kZVEFVvy6jKT/PvtBkjx1tDsqTrnxOjGvS9fN5r69T+/Bux42L4uDuvl4uXlZ/B1tVntFofV c/Bl/bL6nyDUQaLzQIQyfw/M8Xpz/PFhfX13G9y8v7x8f/Fut7x89/p6GTysdpvVS8C3my/r r0cYYr3d/OdX6MJ1EslxeXszknmw3geb7SHYrw7/qdrnd7fl9dX9z9bfzR8yMXlW8FzqpAwF 16HIGqIu8rTIy0hniuX3v6xevlxfvcOl/VJzsIxPoF/k/rz/ZbFbfvvw4+72w9Kucm83Uj6v vri/T/1izR9CkZamSFOd5c2UJmf8Ic8YF0OaUkXzh51ZKZaWWRKWsHNTKpnc352js/n95S3N wLVKWf6v43TYOsMlQoSlGZehYmUsknE+adY6FonIJC+lYUgfEiYzIceTvL879lhO2FSUKS+j kDfUbGaEKud8MmZhWLJ4rDOZT9RwXM5iOcpYLuCOYvbYG3/CTMnTosyANqdojE9EGcsE7kI+ iYbDLsqIvEjLVGR2DJaJ1r7sYdQkoUbwVyQzk5d8UiQPHr6UjQXN5lYkRyJLmJXUVBsjR7Ho sZjCpAJuyUOesSQvJwXMkiq4qwmsmeKwh8diy5nHo8EcVipNqdNcKjiWEHQIzkgmYx9nKEbF 2G6PxSD4HU0EzSxj9vRYjo2ve5FmeiRa5EjOS8Gy+BH+LpVo3Xs6zhnsGwRwKmJzf1W3nzQU btOAJn94WX/+8Lp9Pr6s9h/+q0iYEigFghnx4X1PVWX2qZzprHUdo0LGIWxelGLu5jMdPc0n IAx4LJGG/ylzZrCzNVVja/xe0Dwd36ClHjHTDyIpYTtGpW3jJPNSJFM4EFy5kvn99WlPPINb tgop4aZ/+aUxhFVbmQtD2UO4AhZPRWZAkjr92oSSFbkmOlvRfwBBFHE5fpJpTykqyggoVzQp fmobgDZl/uTroX2EGyCclt9aVXvhfbpd2zkGXCGx8/Yqh130+RFviAFBKFkRg0Zqk6ME3v/y 22a7Wf3euhHzaKYy5eTY7v5B/HX2WLIc/MaE5IsmLAljQdIKI8BA+q7ZqiErwDHDOkA04lqK QSWC/fHz/uf+sHptpPhk5kFjrM4SHgBIZqJnLRmHFnCwHOyI05uOITEpy4xApqaNo/M0uoA+ YLByPgl13/S0WUKWM7rzFLxDiM4hZmhzH3lMrNjq+bQ5gL6HwfHA2iS5OUtEp1qy8K/C5ASf 0mjmcC31Eefr19VuT53y5Ak9htSh5G1JTDRSpO+mLZmkTMDzgvEzdqeZafM4dJUWH/LF/ntw gCUFi81zsD8sDvtgsVxuj5vDevO1WVsu+YNzh5zrIsndXZ6mwru259mQB9NlvAjMcNfA+1gC 
ToLUCbK2/O3DSlvCQrwF5wbc3VGNIqq5I/ISQcik2lYdJyMR2lesKE8pGouD4sWsrOz5FBF5 /DlVP8tqSqU2hb0w+4Y71rz+ur3cPr7c/rzDPnxdXaotsj2DNaq7TA0WneKHM8o+YHKqjYA2 2szquqVmxBU7pZcS8QQSf2Z0KUgMwCcYdKz/pKyalNIDPy8AveOJmPjccLerNL/fIqwHiF0l dM/nngDhK4BOuqSnJD1RxfPWSQHU62q6SnyGbB2iHiqP2X/NRzXnZ1TbzkBTqR7YSdrjqK1Z ge4ndM0xg492OtKajSII07admNtAAyo0kAwwOhU+vKr4Yw6poYrad54fKIEpIy6dlT1WFRTa MBMWd66JNj8A/wF2iE34h10AAA== --sdtB3X0nJg68CQEu-- -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-it0-f69.google.com (mail-it0-f69.google.com [209.85.214.69]) by kanga.kvack.org (Postfix) with ESMTP id 16A816B0038 for ; Fri, 7 Apr 2017 23:19:07 -0400 (EDT) Received: by mail-it0-f69.google.com with SMTP id x8so3078368itb.11 for ; Fri, 07 Apr 2017 20:19:07 -0700 (PDT) Received: from omzsmtpe02.verizonbusiness.com (omzsmtpe02.verizonbusiness.com. [199.249.25.209]) by mx.google.com with ESMTPS id e42si4700327ioj.166.2017.04.07.20.19.05 for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Fri, 07 Apr 2017 20:19:06 -0700 (PDT) From: alexander.levin@verizon.com Subject: Re: [PATCH v2 04/10] mm: make the try_to_munlock void function Date: Sat, 8 Apr 2017 03:18:35 +0000 Message-ID: <20170408031833.iwhbyliu2lp3wazi@sasha-lappy> References: <1489555493-14659-1-git-send-email-minchan@kernel.org> <1489555493-14659-5-git-send-email-minchan@kernel.org> In-Reply-To: <1489555493-14659-5-git-send-email-minchan@kernel.org> Content-Language: en-US Content-Type: text/plain; charset="us-ascii" Content-ID: <80920216F207954A9D8DB3E2943E2492@vzwcorp.com> Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Sender: owner-linux-mm@kvack.org List-ID: To: Minchan Kim Cc: Andrew Morton , "linux-kernel@vger.kernel.org" , "linux-mm@kvack.org" , "kernel-team@lge.com" , Johannes Weiner , Michal Hocko , "Kirill A. Shutemov" , Anshuman Khandual , Vlastimil Babka On Wed, Mar 15, 2017 at 02:24:47PM +0900, Minchan Kim wrote: > try_to_munlock returns SWAP_MLOCK if the one of VMAs mapped > the page has VM_LOCKED flag. In that time, VM set PG_mlocked to > the page if the page is not pte-mapped THP which cannot be > mlocked, either. >=20 > With that, __munlock_isolated_page can use PageMlocked to check > whether try_to_munlock is successful or not without relying on > try_to_munlock's retval. It helps to make try_to_unmap/try_to_unmap_one > simple with upcoming patches. >=20 > Cc: Vlastimil Babka > Acked-by: Kirill A. 
Shutemov > Signed-off-by: Minchan Kim Hey Minchan, I seem to be hitting one of those newly added BUG_ONs with trinity: [ 21.017404] page:ffffea000307a300 count:10 mapcount:7 mapping:ffff88010083f3a8 index:0x131 [ 21.019974] flags: 0x1fffc00001c001d(locked|referenced|uptodate|dirty|swapbacked|unevictable|mlocked) [ 21.022806] raw: 01fffc00001c001d ffff88010083f3a8 0000000000000131 0000000a00000006 [ 21.023974] raw: dead000000000100 dead000000000200 0000000000000000 ffff880109838008 [ 21.026098] page dumped because: VM_BUG_ON_PAGE(PageMlocked(page)) [ 21.026903] page->mem_cgroup:ffff880109838008 [ 21.027505] page allocated via order 0, migratetype Movable, gfp_mask 0x14200ca(GFP_HIGHUSER_MOVABLE) [ 21.028783] save_stack_trace (arch/x86/kernel/stacktrace.c:60) [ 21.029362] save_stack (./arch/x86/include/asm/current.h:14 mm/kasan/kasan.c:50) [ 21.029859] __set_page_owner (mm/page_owner.c:178) [ 21.030414] get_page_from_freelist (./include/linux/page_owner.h:30 mm/page_alloc.c:1742 mm/page_alloc.c:1750 mm/page_alloc.c:3097) [ 21.031071] __alloc_pages_nodemask (mm/page_alloc.c:4011) [ 21.031716] alloc_pages_vma (./include/linux/mempolicy.h:77 ./include/linux/mempolicy.h:82 mm/mempolicy.c:2024) [ 21.032307] shmem_alloc_page (mm/shmem.c:1389 mm/shmem.c:1444) [ 21.032881] shmem_getpage_gfp (mm/shmem.c:1474 mm/shmem.c:1753) [ 21.033488] shmem_fault (mm/shmem.c:1987) [ 21.034055] __do_fault (mm/memory.c:3012) [ 21.034568] __handle_mm_fault (mm/memory.c:3449 mm/memory.c:3497 mm/memory.c:3723 mm/memory.c:3841) [ 21.035192] handle_mm_fault (mm/memory.c:3878) [ 21.035772] __do_page_fault (arch/x86/mm/fault.c:1446) [ 21.037148] do_page_fault (arch/x86/mm/fault.c:1508 ./include/linux/context_tracking_state.h:30 ./include/linux/context_tracking.h:63 arch/x86/mm/fault.c:1509) [ 21.037657] do_async_page_fault (./arch/x86/include/asm/traps.h:82 arch/x86/kernel/kvm.c:264) [ 21.038266] async_page_fault (arch/x86/entry/entry_64.S:1011) [ 21.038901] ------------[ cut here ]------------ [ 21.039546] kernel BUG at mm/rmap.c:1560!
[ 21.040126] invalid opcode: 0000 [#1] SMP DEBUG_PAGEALLOC KASAN [ 21.040910] Modules linked in: [ 21.041345] CPU: 6 PID: 1317 Comm: trinity-c62 Tainted: G W 4.11.0-rc5-next-20170407 #7 [ 21.042761] task: ffff8801067d3e40 task.stack: ffff8800c06d0000 [ 21.043572] RIP: 0010:try_to_munlock (??:?) [ 21.044639] RSP: 0018:ffff8800c06d71a0 EFLAGS: 00010296 [ 21.045330] RAX: 0000000000000000 RBX: 1ffff100180dae36 RCX: 0000000000000000 [ 21.046289] RDX: 0000000000000000 RSI: 0000000000000086 RDI: ffffed00180dae28 [ 21.047225] RBP: ffff8800c06d7358 R08: 0000000000001639 R09: 6c7561665f656761 [ 21.048982] R10: ffffea000307a31c R11: 303378302f383278 R12: ffff8800c06d7330 [ 21.049823] R13: ffffea000307a300 R14: ffff8800c06d72d0 R15: ffffea000307a300 [ 21.050647] FS: 00007f4ab05a7700(0000) GS:ffff880109d80000(0000) knlGS:0000000000000000 [ 21.051574] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 21.052246] CR2: 00007f4aafdebfc0 CR3: 00000000c069f000 CR4: 00000000000406a0 [ 21.053072] Call Trace: [ 21.061057] __munlock_isolated_page (mm/mlock.c:131) [ 21.065328] __munlock_pagevec (mm/mlock.c:339) [ 21.079191] munlock_vma_pages_range (mm/mlock.c:494) [ 21.085665] mlock_fixup (mm/mlock.c:569) [ 21.086205] apply_vma_lock_flags (mm/mlock.c:608) [ 21.089035] SyS_munlock (./arch/x86/include/asm/current.h:14 mm/mlock.c:739 mm/mlock.c:729) [ 21.089502] do_syscall_64 (arch/x86/entry/common.c:284) -- Thanks, Sasha -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pg0-f72.google.com (mail-pg0-f72.google.com [74.125.83.72]) by kanga.kvack.org (Postfix) with ESMTP id 17C6E6B0390 for ; Mon, 10 Apr 2017 22:56:19 -0400 (EDT) Received: by mail-pg0-f72.google.com with SMTP id x125so134402464pgb.5 for ; Mon, 10 Apr 2017 19:56:19 -0700 (PDT) Received: from lgeamrelo13.lge.com (LGEAMRELO13.lge.com. [156.147.23.53]) by mx.google.com with ESMTP id 1si14979551plw.137.2017.04.10.19.56.17 for ; Mon, 10 Apr 2017 19:56:18 -0700 (PDT) Date: Tue, 11 Apr 2017 11:56:15 +0900 From: Minchan Kim Subject: Re: [PATCH v2 04/10] mm: make the try_to_munlock void function Message-ID: <20170411025615.GA6545@bbox> References: <1489555493-14659-1-git-send-email-minchan@kernel.org> <1489555493-14659-5-git-send-email-minchan@kernel.org> <20170408031833.iwhbyliu2lp3wazi@sasha-lappy> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable In-Reply-To: <20170408031833.iwhbyliu2lp3wazi@sasha-lappy> Sender: owner-linux-mm@kvack.org List-ID: To: alexander.levin@verizon.com Cc: Andrew Morton , "linux-kernel@vger.kernel.org" , "linux-mm@kvack.org" , "kernel-team@lge.com" , Johannes Weiner , Michal Hocko , "Kirill A. Shutemov" , Anshuman Khandual , Vlastimil Babka Hi Sasha, On Sat, Apr 08, 2017 at 03:18:35AM +0000, alexander.levin@verizon.com wrote: > On Wed, Mar 15, 2017 at 02:24:47PM +0900, Minchan Kim wrote: > > try_to_munlock returns SWAP_MLOCK if one of the VMAs mapping > > the page has the VM_LOCKED flag. In that case, the VM sets PG_mlocked on > > the page if the page is not a pte-mapped THP, which cannot be > > mlocked either. > > > > With that, __munlock_isolated_page can use PageMlocked to check > > whether try_to_munlock is successful or not without relying on > > try_to_munlock's retval.
It helps to make try_to_unmap/try_to_unmap_one > > simple with upcoming patches. > > > > Cc: Vlastimil Babka > > Acked-by: Kirill A. Shutemov > > Signed-off-by: Minchan Kim > > Hey Minchan, > > I seem to be hitting one of those newly added BUG_ONs with trinity: > > [ 21.017404] page:ffffea000307a300 count:10 mapcount:7 mapping:ffff88010083f3a8 index:0x131 > [ 21.019974] flags: 0x1fffc00001c001d(locked|referenced|uptodate|dirty|swapbacked|unevictable|mlocked) > [ 21.022806] raw: 01fffc00001c001d ffff88010083f3a8 0000000000000131 0000000a00000006 > [ 21.023974] raw: dead000000000100 dead000000000200 0000000000000000 ffff880109838008 > [ 21.026098] page dumped because: VM_BUG_ON_PAGE(PageMlocked(page)) > [ 21.026903] page->mem_cgroup:ffff880109838008 > [ 21.027505] page allocated via order 0, migratetype Movable, gfp_mask 0x14200ca(GFP_HIGHUSER_MOVABLE) > [ 21.028783] save_stack_trace (arch/x86/kernel/stacktrace.c:60) > [ 21.029362] save_stack (./arch/x86/include/asm/current.h:14 mm/kasan/kasan.c:50) > [ 21.029859] __set_page_owner (mm/page_owner.c:178) > [ 21.030414] get_page_from_freelist (./include/linux/page_owner.h:30 mm/page_alloc.c:1742 mm/page_alloc.c:1750 mm/page_alloc.c:3097) > [ 21.031071] __alloc_pages_nodemask (mm/page_alloc.c:4011) > [ 21.031716] alloc_pages_vma (./include/linux/mempolicy.h:77 ./include/linux/mempolicy.h:82 mm/mempolicy.c:2024) > [ 21.032307] shmem_alloc_page (mm/shmem.c:1389 mm/shmem.c:1444) > [ 21.032881] shmem_getpage_gfp (mm/shmem.c:1474 mm/shmem.c:1753) > [ 21.033488] shmem_fault (mm/shmem.c:1987) > [ 21.034055] __do_fault (mm/memory.c:3012) > [ 21.034568] __handle_mm_fault (mm/memory.c:3449 mm/memory.c:3497 mm/memory.c:3723 mm/memory.c:3841) > [ 21.035192] handle_mm_fault (mm/memory.c:3878) > [ 21.035772] __do_page_fault (arch/x86/mm/fault.c:1446) > [ 21.037148] do_page_fault (arch/x86/mm/fault.c:1508 ./include/linux/context_tracking_state.h:30 ./include/linux/context_tracking.h:63 arch/x86/mm/fault.c:1509) > [ 21.037657] do_async_page_fault (./arch/x86/include/asm/traps.h:82 arch/x86/kernel/kvm.c:264) > [ 21.038266] async_page_fault (arch/x86/entry/entry_64.S:1011) > [ 21.038901] ------------[ cut here ]------------ > [ 21.039546] kernel BUG at mm/rmap.c:1560!
> [ 21.040126] invalid opcode: 0000 [#1] SMP DEBUG_PAGEALLOC KASAN > [ 21.040910] Modules linked in: > [ 21.041345] CPU: 6 PID: 1317 Comm: trinity-c62 Tainted: G W 4.11.0-rc5-next-20170407 #7 > [ 21.042761] task: ffff8801067d3e40 task.stack: ffff8800c06d0000 > [ 21.043572] RIP: 0010:try_to_munlock (??:?) > [ 21.044639] RSP: 0018:ffff8800c06d71a0 EFLAGS: 00010296 > [ 21.045330] RAX: 0000000000000000 RBX: 1ffff100180dae36 RCX: 0000000000000000 > [ 21.046289] RDX: 0000000000000000 RSI: 0000000000000086 RDI: ffffed00180dae28 > [ 21.047225] RBP: ffff8800c06d7358 R08: 0000000000001639 R09: 6c7561665f656761 > [ 21.048982] R10: ffffea000307a31c R11: 303378302f383278 R12: ffff8800c06d7330 > [ 21.049823] R13: ffffea000307a300 R14: ffff8800c06d72d0 R15: ffffea000307a300 > [ 21.050647] FS: 00007f4ab05a7700(0000) GS:ffff880109d80000(0000) knlGS:0000000000000000 > [ 21.051574] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 > [ 21.052246] CR2: 00007f4aafdebfc0 CR3: 00000000c069f000 CR4: 00000000000406a0 > [ 21.053072] Call Trace: > [ 21.061057] __munlock_isolated_page (mm/mlock.c:131) > [ 21.065328] __munlock_pagevec (mm/mlock.c:339) > [ 21.079191] munlock_vma_pages_range (mm/mlock.c:494) > [ 21.085665] mlock_fixup (mm/mlock.c:569) > [ 21.086205] apply_vma_lock_flags (mm/mlock.c:608) > [ 21.089035] SyS_munlock (./arch/x86/include/asm/current.h:14 mm/mlock.c:739 mm/mlock.c:729) > [ 21.089502] do_syscall_64 (arch/x86/entry/common.c:284) Thanks for the report. Looking at the code, that VM_BUG_ON check should be removed: __munlock_pagevec doesn't hold the page lock, so the page can have PG_mlocked set again before it is passed into try_to_munlock. From 4369227f190264291961bb4024e14d34e6656b54 Mon Sep 17 00:00:00 2001 From: Minchan Kim Date: Tue, 11 Apr 2017 11:41:54 +0900 Subject: [PATCH] mm: remove PG_Mlocked VM_BUG_ON check The caller of try_to_munlock doesn't guarantee that it passes the page with PG_mlocked cleared. Look at __munlock_pagevec, which doesn't hold the page lock, so anybody can set PG_mlocked under us. Remove the bogus PageMlocked check in try_to_munlock. Reported-by: Sasha Levin Signed-off-by: Minchan Kim --- Andrew, This patch can be folded into mm-make-the-try_to_munlock-void-function.patch. Thanks. mm/rmap.c | 1 - 1 file changed, 1 deletion(-) diff --git a/mm/rmap.c b/mm/rmap.c index a69a2a70d057..0773118214cc 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1557,7 +1557,6 @@ void try_to_munlock(struct page *page) }; VM_BUG_ON_PAGE(!PageLocked(page) || PageLRU(page), page); - VM_BUG_ON_PAGE(PageMlocked(page), page); VM_BUG_ON_PAGE(PageCompound(page) && PageDoubleMap(page), page); rmap_walk(page, &rwc); -- 2.7.4 -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org
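For the record, the interleaving described in that reply can be spelled out. The sketch below is a schematic written as a C comment, not kernel source; it assumes, as the mail states, that __munlock_pagevec runs without holding the page lock:

/*
 * CPU A (munlock path)                     CPU B (mlock path)
 * --------------------                     ------------------
 * __munlock_pagevec()
 *   TestClearPageMlocked(page)
 *                                          mlock_vma_page(page)
 *                                            SetPageMlocked(page)
 *   __munlock_isolated_page(page)
 *     try_to_munlock(page)
 *       VM_BUG_ON_PAGE(PageMlocked(page))  <-- fires, although the page
 *                                              state is perfectly legal
 *
 * Nothing is wrong with the page itself, so the fix drops the assertion
 * rather than making the caller take the page lock.
 */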