From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pg0-f70.google.com (mail-pg0-f70.google.com [74.125.83.70]) by kanga.kvack.org (Postfix) with ESMTP id 091296B0038 for ; Thu, 2 Mar 2017 01:39:31 -0500 (EST) Received: by mail-pg0-f70.google.com with SMTP id b2so81621836pgc.6 for ; Wed, 01 Mar 2017 22:39:30 -0800 (PST) Received: from lgeamrelo11.lge.com (LGEAMRELO11.lge.com. [156.147.23.51]) by mx.google.com with ESMTP id k15si311963pfj.185.2017.03.01.22.39.29 for ; Wed, 01 Mar 2017 22:39:30 -0800 (PST) From: Minchan Kim Subject: [RFC 00/11] make try_to_unmap simple Date: Thu, 2 Mar 2017 15:39:14 +0900 Message-Id: <1488436765-32350-1-git-send-email-minchan@kernel.org> Sender: owner-linux-mm@kvack.org List-ID: To: Andrew Morton Cc: kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko , Minchan Kim Currently, try_to_unmap returns various return values (SWAP_SUCCESS, SWAP_FAIL, SWAP_AGAIN, SWAP_DIRTY and SWAP_MLOCK). When I looked into it, it was unnecessarily complicated, so this patchset aims to clean it up. Change ttu to a boolean function so we can remove SWAP_AGAIN, SWAP_DIRTY and SWAP_MLOCK. This patchset is based on v4.10-mmots-2017-02-28-17-33.
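The direction of the whole series can be sketched in a few lines of plain userspace C. This is not kernel code: the struct, names, and trivial "unmap" are invented for illustration only, to show how a multi-valued SWAP_* status collapses into a boolean keyed on the remaining map count, which is the shape try_to_unmap has by the end of the series.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy status codes mirroring the ones the series removes. */
#define SWAP_SUCCESS 0
#define SWAP_AGAIN   1
#define SWAP_FAIL    2

struct page { int mapcount; };   /* hypothetical stand-in, not the kernel's */

/* Before: callers must switch over several status codes. */
static int try_to_unmap_old(struct page *page, bool contended)
{
	if (contended)
		return SWAP_AGAIN;   /* retry later; page left mapped */
	page->mapcount = 0;          /* modeled trivially */
	return SWAP_SUCCESS;
}

/* After: success is simply "no mappings remain". */
static bool try_to_unmap_new(struct page *page)
{
	page->mapcount = 0;          /* unmap every pte (modeled trivially) */
	return page->mapcount == 0;
}
```

Callers of the new form need no switch statement at all; a single `if (!try_to_unmap_new(...))` replaces the multi-case dispatch, which is what the later vmscan hunks do.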
Minchan Kim (11): mm: use SWAP_SUCCESS instead of 0 mm: remove unnecessary ret in page_referenced mm: remove SWAP_DIRTY in ttu mm: remove SWAP_MLOCK check for SWAP_SUCCESS in ttu mm: make the try_to_munlock void function mm: remove SWAP_MLOCK in ttu mm: remove SWAP_AGAIN in ttu mm: make ttu's return boolean mm: make rmap_walk void function mm: make rmap_one boolean function mm: remove SWAP_[SUCCESS|AGAIN|FAIL] include/linux/ksm.h | 5 ++- include/linux/rmap.h | 21 ++++-------- mm/huge_memory.c | 4 +-- mm/ksm.c | 16 ++++----- mm/memory-failure.c | 22 ++++++------ mm/migrate.c | 4 +-- mm/mlock.c | 6 ++-- mm/page_idle.c | 4 +-- mm/rmap.c | 97 ++++++++++++++++++++-------------------------------- mm/vmscan.c | 26 +++----------- 10 files changed, 73 insertions(+), 132 deletions(-) -- 2.7.4 -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pg0-f69.google.com (mail-pg0-f69.google.com [74.125.83.69]) by kanga.kvack.org (Postfix) with ESMTP id 12B226B0387 for ; Thu, 2 Mar 2017 01:39:31 -0500 (EST) Received: by mail-pg0-f69.google.com with SMTP id f21so82048794pgi.4 for ; Wed, 01 Mar 2017 22:39:31 -0800 (PST) Received: from lgeamrelo12.lge.com (LGEAMRELO12.lge.com.
[156.147.23.52]) by mx.google.com with ESMTP id 1si5817215pgt.210.2017.03.01.22.39.29 for ; Wed, 01 Mar 2017 22:39:30 -0800 (PST) From: Minchan Kim Subject: [RFC 01/11] mm: use SWAP_SUCCESS instead of 0 Date: Thu, 2 Mar 2017 15:39:15 +0900 Message-Id: <1488436765-32350-2-git-send-email-minchan@kernel.org> In-Reply-To: <1488436765-32350-1-git-send-email-minchan@kernel.org> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> Sender: owner-linux-mm@kvack.org List-ID: To: Andrew Morton Cc: kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko , Minchan Kim , "Kirill A . Shutemov" SWAP_SUCCESS is defined as 0 today, but that value could change at any time, so don't rely on it. Instead, use the explicit macro. Cc: Kirill A. Shutemov Signed-off-by: Minchan Kim --- mm/huge_memory.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 092cc5c..fe2ccd4 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2114,7 +2114,7 @@ static void freeze_page(struct page *page) ttu_flags |= TTU_MIGRATION; ret = try_to_unmap(page, ttu_flags); - VM_BUG_ON_PAGE(ret, page); + VM_BUG_ON_PAGE(ret != SWAP_SUCCESS, page); } static void unfreeze_page(struct page *page) -- 2.7.4 -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pg0-f69.google.com (mail-pg0-f69.google.com [74.125.83.69]) by kanga.kvack.org (Postfix) with ESMTP id A38B06B0387 for ; Thu, 2 Mar 2017 01:39:31 -0500 (EST) Received: by mail-pg0-f69.google.com with SMTP id d18so82157269pgh.2 for ; Wed, 01 Mar 2017 22:39:31 -0800 (PST) Received: from lgeamrelo12.lge.com (LGEAMRELO12.lge.com.
[156.147.23.52]) by mx.google.com with ESMTP id r69si6626083pfk.242.2017.03.01.22.39.29 for ; Wed, 01 Mar 2017 22:39:30 -0800 (PST) From: Minchan Kim Subject: [RFC 02/11] mm: remove unnecessary ret in page_referenced Date: Thu, 2 Mar 2017 15:39:16 +0900 Message-Id: <1488436765-32350-3-git-send-email-minchan@kernel.org> In-Reply-To: <1488436765-32350-1-git-send-email-minchan@kernel.org> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> Sender: owner-linux-mm@kvack.org List-ID: To: Andrew Morton Cc: kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko , Minchan Kim Nobody uses the ret variable. Remove it. Signed-off-by: Minchan Kim --- mm/rmap.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/mm/rmap.c b/mm/rmap.c index bb45712..8076347 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -805,7 +805,6 @@ int page_referenced(struct page *page, struct mem_cgroup *memcg, unsigned long *vm_flags) { - int ret; int we_locked = 0; struct page_referenced_arg pra = { .mapcount = total_mapcount(page), @@ -839,7 +838,7 @@ int page_referenced(struct page *page, rwc.invalid_vma = invalid_page_referenced_vma; } - ret = rmap_walk(page, &rwc); + rmap_walk(page, &rwc); *vm_flags = pra.vm_flags; if (we_locked) -- 2.7.4 -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pg0-f71.google.com (mail-pg0-f71.google.com [74.125.83.71]) by kanga.kvack.org (Postfix) with ESMTP id AF6EF6B0388 for ; Thu, 2 Mar 2017 01:39:31 -0500 (EST) Received: by mail-pg0-f71.google.com with SMTP id 1so81805348pgz.5 for ; Wed, 01 Mar 2017 22:39:31 -0800 (PST) Received: from lgeamrelo13.lge.com (LGEAMRELO13.lge.com.
[156.147.23.53]) by mx.google.com with ESMTP id h11si6624942pln.322.2017.03.01.22.39.30 for ; Wed, 01 Mar 2017 22:39:30 -0800 (PST) From: Minchan Kim Subject: [RFC 04/11] mm: remove SWAP_MLOCK check for SWAP_SUCCESS in ttu Date: Thu, 2 Mar 2017 15:39:18 +0900 Message-Id: <1488436765-32350-5-git-send-email-minchan@kernel.org> In-Reply-To: <1488436765-32350-1-git-send-email-minchan@kernel.org> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> Sender: owner-linux-mm@kvack.org List-ID: To: Andrew Morton Cc: kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko , Minchan Kim If the page was mlocked and rescued in ttuo, it is still mapped, so page_mapcount(page) == 0 cannot be true; the page_mapcount check in ttu is enough to return SWAP_SUCCESS. IOW, the SWAP_MLOCK check is redundant, so remove it. Signed-off-by: Minchan Kim --- mm/rmap.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/mm/rmap.c b/mm/rmap.c index 3a14013..0a48958 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1523,7 +1523,7 @@ int try_to_unmap(struct page *page, enum ttu_flags flags) else ret = rmap_walk(page, &rwc); - if (ret != SWAP_MLOCK && !page_mapcount(page)) + if (!page_mapcount(page)) ret = SWAP_SUCCESS; return ret; } -- 2.7.4 -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pg0-f69.google.com (mail-pg0-f69.google.com [74.125.83.69]) by kanga.kvack.org (Postfix) with ESMTP id 317E86B038A for ; Thu, 2 Mar 2017 01:39:32 -0500 (EST) Received: by mail-pg0-f69.google.com with SMTP id x17so81881347pgi.3 for ; Wed, 01 Mar 2017 22:39:32 -0800 (PST) Received: from lgeamrelo12.lge.com (LGEAMRELO12.lge.com.
[156.147.23.52]) by mx.google.com with ESMTP id h5si6624287pgg.45.2017.03.01.22.39.30 for ; Wed, 01 Mar 2017 22:39:31 -0800 (PST) From: Minchan Kim Subject: [RFC 05/11] mm: make the try_to_munlock void function Date: Thu, 2 Mar 2017 15:39:19 +0900 Message-Id: <1488436765-32350-6-git-send-email-minchan@kernel.org> In-Reply-To: <1488436765-32350-1-git-send-email-minchan@kernel.org> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> Sender: owner-linux-mm@kvack.org List-ID: To: Andrew Morton Cc: kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko , Minchan Kim , Vlastimil Babka , "Kirill A . Shutemov" try_to_munlock returns SWAP_MLOCK if one of the VMAs mapping the page has the VM_LOCKED flag. At that point, the VM sets PG_mlocked on the page, unless the page is a pte-mapped THP, which cannot be mlocked. With that, __munlock_isolated_page can use PageMlocked to check whether try_to_munlock succeeded, without relying on try_to_munlock's return value. It helps to make ttu/ttuo simple with upcoming patches. Cc: Vlastimil Babka Cc: Kirill A. Shutemov Signed-off-by: Minchan Kim --- include/linux/rmap.h | 2 +- mm/mlock.c | 6 ++---- mm/rmap.c | 16 ++++------------ 3 files changed, 7 insertions(+), 17 deletions(-) diff --git a/include/linux/rmap.h b/include/linux/rmap.h index b556eef..1b0cd4c 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -235,7 +235,7 @@ int page_mkclean(struct page *); * called in munlock()/munmap() path to check for other vmas holding * the page mlocked.
*/ -int try_to_munlock(struct page *); +void try_to_munlock(struct page *); void remove_migration_ptes(struct page *old, struct page *new, bool locked); diff --git a/mm/mlock.c b/mm/mlock.c index cdbed8a..d34a540 100644 --- a/mm/mlock.c +++ b/mm/mlock.c @@ -122,17 +122,15 @@ static bool __munlock_isolate_lru_page(struct page *page, bool getpage) */ static void __munlock_isolated_page(struct page *page) { - int ret = SWAP_AGAIN; - /* * Optimization: if the page was mapped just once, that's our mapping * and we don't need to check all the other vmas. */ if (page_mapcount(page) > 1) - ret = try_to_munlock(page); + try_to_munlock(page); /* Did try_to_unlock() succeed or punt? */ - if (ret != SWAP_MLOCK) + if (!PageMlocked(page)) count_vm_event(UNEVICTABLE_PGMUNLOCKED); putback_lru_page(page); diff --git a/mm/rmap.c b/mm/rmap.c index 0a48958..61ae694 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1540,18 +1540,10 @@ static int page_not_mapped(struct page *page) * Called from munlock code. Checks all of the VMAs mapping the page * to make sure nobody else has this page mlocked. The page will be * returned with PG_mlocked cleared if no other vmas have it mlocked. - * - * Return values are: - * - * SWAP_AGAIN - no vma is holding page mlocked, or, - * SWAP_AGAIN - page mapped in mlocked vma -- couldn't acquire mmap sem - * SWAP_FAIL - page cannot be located at present - * SWAP_MLOCK - page is now mlocked. */ -int try_to_munlock(struct page *page) -{ - int ret; +void try_to_munlock(struct page *page) +{ struct rmap_walk_control rwc = { .rmap_one = try_to_unmap_one, .arg = (void *)TTU_MUNLOCK, @@ -1561,9 +1553,9 @@ int try_to_munlock(struct page *page) }; VM_BUG_ON_PAGE(!PageLocked(page) || PageLRU(page), page); + VM_BUG_ON_PAGE(PageMlocked(page), page); - ret = rmap_walk(page, &rwc); - return ret; + rmap_walk(page, &rwc); } void __put_anon_vma(struct anon_vma *anon_vma) -- 2.7.4 -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. 
For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pg0-f72.google.com (mail-pg0-f72.google.com [74.125.83.72]) by kanga.kvack.org (Postfix) with ESMTP id 5875D6B038C for ; Thu, 2 Mar 2017 01:39:32 -0500 (EST) Received: by mail-pg0-f72.google.com with SMTP id 65so81772463pgi.7 for ; Wed, 01 Mar 2017 22:39:32 -0800 (PST) Received: from lgeamrelo12.lge.com (LGEAMRELO12.lge.com. [156.147.23.52]) by mx.google.com with ESMTP id f1si6629091plb.246.2017.03.01.22.39.30 for ; Wed, 01 Mar 2017 22:39:31 -0800 (PST) From: Minchan Kim Subject: [RFC 06/11] mm: remove SWAP_MLOCK in ttu Date: Thu, 2 Mar 2017 15:39:20 +0900 Message-Id: <1488436765-32350-7-git-send-email-minchan@kernel.org> In-Reply-To: <1488436765-32350-1-git-send-email-minchan@kernel.org> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> Sender: owner-linux-mm@kvack.org List-ID: To: Andrew Morton Cc: kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko , Minchan Kim ttu doesn't need to return SWAP_MLOCK. Instead, just return SWAP_FAIL, because it means the page is not swappable, so it should move to another LRU list (active or unevictable). The putback friends will move it to the right list depending on the page's LRU flags. A side effect is that shrink_page_list accounts unevictable-list movement as PGACTIVATE, but I don't think it causes anything severe.
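The reason the SWAP_MLOCK status can go away is captured by the sentence about "putback friends": the destination LRU list is derived from the page's own flags, not from ttu's return value. A minimal userspace sketch of that idea, with invented types that merely stand in for the kernel's (not real kernel code):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model: putback routes the page from its own state, so the
 * unmap path never needs a dedicated "mlocked" status code. */
enum lru_list { LRU_ACTIVE, LRU_UNEVICTABLE };

struct page { bool mlocked; };   /* hypothetical stand-in for PG_mlocked */

/* Pick the target list from the page flag alone. */
static enum lru_list putback_lru(const struct page *page)
{
	return page->mlocked ? LRU_UNEVICTABLE : LRU_ACTIVE;
}
```

With routing done here, an mlocked page and a merely unswappable page can both report the same plain failure to the caller, which is exactly the collapse this patch performs.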
Signed-off-by: Minchan Kim --- include/linux/rmap.h | 1 - mm/rmap.c | 3 +-- mm/vmscan.c | 14 +++----------- 3 files changed, 4 insertions(+), 14 deletions(-) diff --git a/include/linux/rmap.h b/include/linux/rmap.h index 1b0cd4c..3630d4d 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -297,6 +297,5 @@ static inline int page_mkclean(struct page *page) #define SWAP_SUCCESS 0 #define SWAP_AGAIN 1 #define SWAP_FAIL 2 -#define SWAP_MLOCK 3 #endif /* _LINUX_RMAP_H */ diff --git a/mm/rmap.c b/mm/rmap.c index 61ae694..47898a1 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1328,7 +1328,7 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma, */ mlock_vma_page(page); } - ret = SWAP_MLOCK; + ret = SWAP_FAIL; page_vma_mapped_walk_done(&pvmw); break; } @@ -1494,7 +1494,6 @@ static int page_mapcount_is_zero(struct page *page) * SWAP_SUCCESS - we succeeded in removing all mappings * SWAP_AGAIN - we missed a mapping, try again later * SWAP_FAIL - the page is unswappable - * SWAP_MLOCK - page is mlocked. 
*/ int try_to_unmap(struct page *page, enum ttu_flags flags) { diff --git a/mm/vmscan.c b/mm/vmscan.c index 91cef05..3cdd270b 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -981,7 +981,7 @@ static unsigned long shrink_page_list(struct list_head *page_list, sc->nr_scanned++; if (unlikely(!page_evictable(page))) - goto cull_mlocked; + goto activate_locked; if (!sc->may_unmap && page_mapped(page)) goto keep_locked; @@ -1146,8 +1146,6 @@ static unsigned long shrink_page_list(struct list_head *page_list, goto activate_locked; case SWAP_AGAIN: goto keep_locked; - case SWAP_MLOCK: - goto cull_mlocked; case SWAP_SUCCESS: ; /* try to free the page below */ } @@ -1289,16 +1287,10 @@ static unsigned long shrink_page_list(struct list_head *page_list, list_add(&page->lru, &free_pages); continue; -cull_mlocked: - if (PageSwapCache(page)) - try_to_free_swap(page); - unlock_page(page); - list_add(&page->lru, &ret_pages); - continue; - activate_locked: /* Not a candidate for swapping, so reclaim swap space. */ - if (PageSwapCache(page) && mem_cgroup_swap_full(page)) + if (PageSwapCache(page) && (mem_cgroup_swap_full(page) || + PageMlocked(page))) try_to_free_swap(page); VM_BUG_ON_PAGE(PageActive(page), page); SetPageActive(page); -- 2.7.4 -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pg0-f71.google.com (mail-pg0-f71.google.com [74.125.83.71]) by kanga.kvack.org (Postfix) with ESMTP id 942716B038E for ; Thu, 2 Mar 2017 01:39:32 -0500 (EST) Received: by mail-pg0-f71.google.com with SMTP id t184so82025903pgt.1 for ; Wed, 01 Mar 2017 22:39:32 -0800 (PST) Received: from lgeamrelo12.lge.com (LGEAMRELO12.lge.com. 
[156.147.23.52]) by mx.google.com with ESMTP id p24si6624202pli.285.2017.03.01.22.39.31 for ; Wed, 01 Mar 2017 22:39:31 -0800 (PST) From: Minchan Kim Subject: [RFC 07/11] mm: remove SWAP_AGAIN in ttu Date: Thu, 2 Mar 2017 15:39:21 +0900 Message-Id: <1488436765-32350-8-git-send-email-minchan@kernel.org> In-Reply-To: <1488436765-32350-1-git-send-email-minchan@kernel.org> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> Sender: owner-linux-mm@kvack.org List-ID: To: Andrew Morton Cc: kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko , Minchan Kim In 2002, [1] introduced SWAP_AGAIN. At that time, ttuo used spin_trylock(&mm->page_table_lock), so it was really easy to contend on the lock and fail to take it, and returning SWAP_AGAIN to keep the LRU status made sense. However, we have since changed to a mutex-based lock and can block instead of skipping ptes, so there is only a small window in which SWAP_AGAIN could be returned; remove SWAP_AGAIN and just return SWAP_FAIL.
[1] c48c43e, minimal rmap Signed-off-by: Minchan Kim --- mm/rmap.c | 11 +++-------- mm/vmscan.c | 2 -- 2 files changed, 3 insertions(+), 10 deletions(-) diff --git a/mm/rmap.c b/mm/rmap.c index 47898a1..da18f21 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1492,13 +1492,10 @@ static int page_mapcount_is_zero(struct page *page) * Return values are: * * SWAP_SUCCESS - we succeeded in removing all mappings - * SWAP_AGAIN - we missed a mapping, try again later * SWAP_FAIL - the page is unswappable */ int try_to_unmap(struct page *page, enum ttu_flags flags) { - int ret; - struct rmap_walk_control rwc = { .rmap_one = try_to_unmap_one, .arg = (void *)flags, @@ -1518,13 +1515,11 @@ int try_to_unmap(struct page *page, enum ttu_flags flags) rwc.invalid_vma = invalid_migration_vma; if (flags & TTU_RMAP_LOCKED) - ret = rmap_walk_locked(page, &rwc); + rmap_walk_locked(page, &rwc); else - ret = rmap_walk(page, &rwc); + rmap_walk(page, &rwc); - if (!page_mapcount(page)) - ret = SWAP_SUCCESS; - return ret; + return !page_mapcount(page) ? SWAP_SUCCESS: SWAP_FAIL; } static int page_not_mapped(struct page *page) diff --git a/mm/vmscan.c b/mm/vmscan.c index 3cdd270b..170c61f 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -1144,8 +1144,6 @@ static unsigned long shrink_page_list(struct list_head *page_list, case SWAP_FAIL: nr_unmap_fail++; goto activate_locked; - case SWAP_AGAIN: - goto keep_locked; case SWAP_SUCCESS: ; /* try to free the page below */ } -- 2.7.4 -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . 
Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pg0-f70.google.com (mail-pg0-f70.google.com [74.125.83.70]) by kanga.kvack.org (Postfix) with ESMTP id 5A53C6B038E for ; Thu, 2 Mar 2017 01:39:33 -0500 (EST) Received: by mail-pg0-f70.google.com with SMTP id 1so81805912pgz.5 for ; Wed, 01 Mar 2017 22:39:33 -0800 (PST) Received: from lgeamrelo11.lge.com (LGEAMRELO11.lge.com. [156.147.23.51]) by mx.google.com with ESMTP id y17si6610383pgi.179.2017.03.01.22.39.31 for ; Wed, 01 Mar 2017 22:39:32 -0800 (PST) From: Minchan Kim Subject: [RFC 10/11] mm: make rmap_one boolean function Date: Thu, 2 Mar 2017 15:39:24 +0900 Message-Id: <1488436765-32350-11-git-send-email-minchan@kernel.org> In-Reply-To: <1488436765-32350-1-git-send-email-minchan@kernel.org> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> Sender: owner-linux-mm@kvack.org List-ID: To: Andrew Morton Cc: kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko , Minchan Kim rmap_one's return value controls whether rmap_walk should continue to scan other ptes or not, so it's a natural target for conversion to boolean. Return true if the scan should be continued. Otherwise, return false to stop the scanning. This patch makes rmap_one's return value boolean.
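The boolean contract described above (true = keep walking, false = stop) can be sketched outside the kernel in a few lines. The types and the stop condition below are invented for illustration; only the callback protocol mirrors the patch:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy model of the rmap_one contract: the walk visits vmas in order
 * and stops as soon as the callback returns false. */
struct vma { int id; };          /* hypothetical stand-in */

static int visited;              /* counts callback invocations */

static bool rmap_one(struct vma *vma, void *arg)
{
	int stop_at = *(int *)arg;
	visited++;
	return vma->id != stop_at;   /* false => break out of the walk */
}

static void rmap_walk(struct vma *vmas, size_t n, void *arg)
{
	for (size_t i = 0; i < n; i++)
		if (!rmap_one(&vmas[i], arg))
			break;
}
```

Compared with the old `SWAP_AGAIN != rwc->rmap_one(...)` comparisons in rmap_walk_anon/file/ksm, the walker's loop reduces to a plain `if (!rmap_one(...)) break;`.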
Signed-off-by: Minchan Kim --- include/linux/rmap.h | 2 +- mm/ksm.c | 2 +- mm/migrate.c | 4 ++-- mm/page_idle.c | 4 ++-- mm/rmap.c | 30 +++++++++++++++--------------- 5 files changed, 21 insertions(+), 21 deletions(-) diff --git a/include/linux/rmap.h b/include/linux/rmap.h index 1d7d457c..59d7dd7 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -257,7 +257,7 @@ int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma); */ struct rmap_walk_control { void *arg; - int (*rmap_one)(struct page *page, struct vm_area_struct *vma, + bool (*rmap_one)(struct page *page, struct vm_area_struct *vma, unsigned long addr, void *arg); int (*done)(struct page *page); struct anon_vma *(*anon_lock)(struct page *page); diff --git a/mm/ksm.c b/mm/ksm.c index 68f8820..f764afb 100644 --- a/mm/ksm.c +++ b/mm/ksm.c @@ -1975,7 +1975,7 @@ void rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc) if (rwc->invalid_vma && rwc->invalid_vma(vma, rwc->arg)) continue; - if (SWAP_AGAIN != rwc->rmap_one(page, vma, + if (!rwc->rmap_one(page, vma, rmap_item->address, rwc->arg)) { anon_vma_unlock_read(anon_vma); return; diff --git a/mm/migrate.c b/mm/migrate.c index cda4c27..22daaea 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -193,7 +193,7 @@ void putback_movable_pages(struct list_head *l) /* * Restore a potential migration pte to a working pte entry */ -static int remove_migration_pte(struct page *page, struct vm_area_struct *vma, +static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma, unsigned long addr, void *old) { struct page_vma_mapped_walk pvmw = { @@ -249,7 +249,7 @@ static int remove_migration_pte(struct page *page, struct vm_area_struct *vma, update_mmu_cache(vma, pvmw.address, pvmw.pte); } - return SWAP_AGAIN; + return true; } /* diff --git a/mm/page_idle.c b/mm/page_idle.c index b0ee56c..1b0f48c 100644 --- a/mm/page_idle.c +++ b/mm/page_idle.c @@ -50,7 +50,7 @@ static struct page *page_idle_get_page(unsigned long pfn) return 
page; } -static int page_idle_clear_pte_refs_one(struct page *page, +static bool page_idle_clear_pte_refs_one(struct page *page, struct vm_area_struct *vma, unsigned long addr, void *arg) { @@ -84,7 +84,7 @@ static int page_idle_clear_pte_refs_one(struct page *page, */ set_page_young(page); } - return SWAP_AGAIN; + return true; } static void page_idle_clear_pte_refs(struct page *page) diff --git a/mm/rmap.c b/mm/rmap.c index 08e4f81..60adedb 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -717,7 +717,7 @@ struct page_referenced_arg { /* * arg: page_referenced_arg will be passed */ -static int page_referenced_one(struct page *page, struct vm_area_struct *vma, +static bool page_referenced_one(struct page *page, struct vm_area_struct *vma, unsigned long address, void *arg) { struct page_referenced_arg *pra = arg; @@ -734,7 +734,7 @@ static int page_referenced_one(struct page *page, struct vm_area_struct *vma, if (vma->vm_flags & VM_LOCKED) { page_vma_mapped_walk_done(&pvmw); pra->vm_flags |= VM_LOCKED; - return SWAP_FAIL; /* To break the loop */ + return false; /* To break the loop */ } if (pvmw.pte) { @@ -774,9 +774,9 @@ static int page_referenced_one(struct page *page, struct vm_area_struct *vma, } if (!pra->mapcount) - return SWAP_SUCCESS; /* To break the loop */ + return false; /* To break the loop */ - return SWAP_AGAIN; + return true; } static bool invalid_page_referenced_vma(struct vm_area_struct *vma, void *arg) @@ -847,7 +847,7 @@ int page_referenced(struct page *page, return pra.referenced; } -static int page_mkclean_one(struct page *page, struct vm_area_struct *vma, +static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma, unsigned long address, void *arg) { struct page_vma_mapped_walk pvmw = { @@ -900,7 +900,7 @@ static int page_mkclean_one(struct page *page, struct vm_area_struct *vma, } } - return SWAP_AGAIN; + return true; } static bool invalid_mkclean_vma(struct vm_area_struct *vma, void *arg) @@ -1283,7 +1283,7 @@ void 
page_remove_rmap(struct page *page, bool compound) /* * @arg: enum ttu_flags will be passed to this argument */ -static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma, +static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma, unsigned long address, void *arg) { struct mm_struct *mm = vma->vm_mm; @@ -1294,12 +1294,12 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma, }; pte_t pteval; struct page *subpage; - int ret = SWAP_AGAIN; + bool ret = true; enum ttu_flags flags = (enum ttu_flags)arg; /* munlock has nothing to gain from examining un-locked vmas */ if ((flags & TTU_MUNLOCK) && !(vma->vm_flags & VM_LOCKED)) - return SWAP_AGAIN; + return true; if (flags & TTU_SPLIT_HUGE_PMD) { split_huge_pmd_address(vma, address, @@ -1328,7 +1328,7 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma, */ mlock_vma_page(page); } - ret = SWAP_FAIL; + ret = false; page_vma_mapped_walk_done(&pvmw); break; } @@ -1339,7 +1339,7 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma, if (!(flags & TTU_IGNORE_ACCESS)) { if (ptep_clear_flush_young_notify(vma, address, pvmw.pte)) { - ret = SWAP_FAIL; + ret = false; page_vma_mapped_walk_done(&pvmw); break; } @@ -1425,14 +1425,14 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma, /* dirty MADV_FREE page */ set_pte_at(mm, address, pvmw.pte, pteval); SetPageSwapBacked(page); - ret = SWAP_FAIL; + ret = false; page_vma_mapped_walk_done(&pvmw); break; } if (swap_duplicate(entry) < 0) { set_pte_at(mm, address, pvmw.pte, pteval); - ret = SWAP_FAIL; + ret = false; page_vma_mapped_walk_done(&pvmw); break; } @@ -1624,7 +1624,7 @@ static void rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc, if (rwc->invalid_vma && rwc->invalid_vma(vma, rwc->arg)) continue; - if (SWAP_AGAIN != rwc->rmap_one(page, vma, address, rwc->arg)) + if (!rwc->rmap_one(page, vma, address, rwc->arg)) break; if (rwc->done && 
rwc->done(page)) break; @@ -1678,7 +1678,7 @@ static void rmap_walk_file(struct page *page, struct rmap_walk_control *rwc, if (rwc->invalid_vma && rwc->invalid_vma(vma, rwc->arg)) continue; - if (SWAP_AGAIN != rwc->rmap_one(page, vma, address, rwc->arg)) + if (!rwc->rmap_one(page, vma, address, rwc->arg)) goto done; if (rwc->done && rwc->done(page)) goto done; -- 2.7.4 -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pg0-f72.google.com (mail-pg0-f72.google.com [74.125.83.72]) by kanga.kvack.org (Postfix) with ESMTP id 651996B038F for ; Thu, 2 Mar 2017 01:39:33 -0500 (EST) Received: by mail-pg0-f72.google.com with SMTP id q126so82010446pga.0 for ; Wed, 01 Mar 2017 22:39:33 -0800 (PST) Received: from lgeamrelo12.lge.com (LGEAMRELO12.lge.com. [156.147.23.52]) by mx.google.com with ESMTP id r19si6613516pgj.165.2017.03.01.22.39.31 for ; Wed, 01 Mar 2017 22:39:32 -0800 (PST) From: Minchan Kim Subject: [RFC 08/11] mm: make ttu's return boolean Date: Thu, 2 Mar 2017 15:39:22 +0900 Message-Id: <1488436765-32350-9-git-send-email-minchan@kernel.org> In-Reply-To: <1488436765-32350-1-git-send-email-minchan@kernel.org> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> Sender: owner-linux-mm@kvack.org List-ID: To: Andrew Morton Cc: kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko , Minchan Kim , "Kirill A. Shutemov" , Naoya Horiguchi try_to_unmap returns SWAP_SUCCESS or SWAP_FAIL so it's suitable for boolean return. This patch changes it. Cc: "Kirill A. 
Shutemov" Cc: Naoya Horiguchi Signed-off-by: Minchan Kim --- include/linux/rmap.h | 4 ++-- mm/huge_memory.c | 4 ++-- mm/memory-failure.c | 22 ++++++++++------------ mm/rmap.c | 8 +++----- mm/vmscan.c | 7 +------ 5 files changed, 18 insertions(+), 27 deletions(-) diff --git a/include/linux/rmap.h b/include/linux/rmap.h index 3630d4d..6028c38 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -191,7 +191,7 @@ static inline void page_dup_rmap(struct page *page, bool compound) int page_referenced(struct page *, int is_locked, struct mem_cgroup *memcg, unsigned long *vm_flags); -int try_to_unmap(struct page *, enum ttu_flags flags); +bool try_to_unmap(struct page *, enum ttu_flags flags); /* Avoid racy checks */ #define PVMW_SYNC (1 << 0) @@ -281,7 +281,7 @@ static inline int page_referenced(struct page *page, int is_locked, return 0; } -#define try_to_unmap(page, refs) SWAP_FAIL +#define try_to_unmap(page, refs) false static inline int page_mkclean(struct page *page) { diff --git a/mm/huge_memory.c b/mm/huge_memory.c index fe2ccd4..79ea769 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2106,7 +2106,7 @@ static void freeze_page(struct page *page) { enum ttu_flags ttu_flags = TTU_IGNORE_MLOCK | TTU_IGNORE_ACCESS | TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD; - int ret; + bool ret; VM_BUG_ON_PAGE(!PageHead(page), page); @@ -2114,7 +2114,7 @@ static void freeze_page(struct page *page) ttu_flags |= TTU_MIGRATION; ret = try_to_unmap(page, ttu_flags); - VM_BUG_ON_PAGE(ret != SWAP_SUCCESS, page); + VM_BUG_ON_PAGE(!ret, page); } static void unfreeze_page(struct page *page) diff --git a/mm/memory-failure.c b/mm/memory-failure.c index b78d080..75fcbd8 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -321,7 +321,7 @@ static void add_to_kill(struct task_struct *tsk, struct page *p, * wrong earlier. 
*/ static void kill_procs(struct list_head *to_kill, int forcekill, int trapno, - int fail, struct page *page, unsigned long pfn, + bool fail, struct page *page, unsigned long pfn, int flags) { struct to_kill *tk, *next; @@ -903,13 +903,13 @@ EXPORT_SYMBOL_GPL(get_hwpoison_page); * Do all that is necessary to remove user space mappings. Unmap * the pages and send SIGBUS to the processes if the data was dirty. */ -static int hwpoison_user_mappings(struct page *p, unsigned long pfn, +static bool hwpoison_user_mappings(struct page *p, unsigned long pfn, int trapno, int flags, struct page **hpagep) { enum ttu_flags ttu = TTU_IGNORE_MLOCK | TTU_IGNORE_ACCESS; struct address_space *mapping; LIST_HEAD(tokill); - int ret; + bool ret; int kill = 1, forcekill; struct page *hpage = *hpagep; @@ -918,20 +918,20 @@ static int hwpoison_user_mappings(struct page *p, unsigned long pfn, * other types of pages. */ if (PageReserved(p) || PageSlab(p)) - return SWAP_SUCCESS; + return true; if (!(PageLRU(hpage) || PageHuge(p))) - return SWAP_SUCCESS; + return true; /* * This check implies we don't kill processes if their pages * are in the swap cache early. Those are always late kills. */ if (!page_mapped(hpage)) - return SWAP_SUCCESS; + return true; if (PageKsm(p)) { pr_err("Memory failure: %#lx: can't handle KSM pages.\n", pfn); - return SWAP_FAIL; + return false; } if (PageSwapCache(p)) { @@ -971,7 +971,7 @@ static int hwpoison_user_mappings(struct page *p, unsigned long pfn, collect_procs(hpage, &tokill, flags & MF_ACTION_REQUIRED); ret = try_to_unmap(hpage, ttu); - if (ret != SWAP_SUCCESS) + if (!ret) pr_err("Memory failure: %#lx: failed to unmap page (mapcount=%d)\n", pfn, page_mapcount(hpage)); @@ -986,8 +986,7 @@ static int hwpoison_user_mappings(struct page *p, unsigned long pfn, * any accesses to the poisoned memory. 
*/ forcekill = PageDirty(hpage) || (flags & MF_MUST_KILL); - kill_procs(&tokill, forcekill, trapno, - ret != SWAP_SUCCESS, p, pfn, flags); + kill_procs(&tokill, forcekill, trapno, !ret , p, pfn, flags); return ret; } @@ -1229,8 +1228,7 @@ int memory_failure(unsigned long pfn, int trapno, int flags) * When the raw error page is thp tail page, hpage points to the raw * page after thp split. */ - if (hwpoison_user_mappings(p, pfn, trapno, flags, &hpage) - != SWAP_SUCCESS) { + if (!hwpoison_user_mappings(p, pfn, trapno, flags, &hpage)) { action_result(pfn, MF_MSG_UNMAP_FAILED, MF_IGNORED); res = -EBUSY; goto out; diff --git a/mm/rmap.c b/mm/rmap.c index da18f21..01f7832 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1489,12 +1489,10 @@ static int page_mapcount_is_zero(struct page *page) * * Tries to remove all the page table entries which are mapping this * page, used in the pageout path. Caller must hold the page lock. - * Return values are: * - * SWAP_SUCCESS - we succeeded in removing all mappings - * SWAP_FAIL - the page is unswappable + * If unmap is successful, return true. Otherwise, false. */ -int try_to_unmap(struct page *page, enum ttu_flags flags) +bool try_to_unmap(struct page *page, enum ttu_flags flags) { struct rmap_walk_control rwc = { .rmap_one = try_to_unmap_one, @@ -1519,7 +1517,7 @@ int try_to_unmap(struct page *page, enum ttu_flags flags) else rmap_walk(page, &rwc); - return !page_mapcount(page) ? SWAP_SUCCESS: SWAP_FAIL; + return !page_mapcount(page) ? true: false; } static int page_not_mapped(struct page *page) diff --git a/mm/vmscan.c b/mm/vmscan.c index 170c61f..e4b74f1 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -966,7 +966,6 @@ static unsigned long shrink_page_list(struct list_head *page_list, int may_enter_fs; enum page_references references = PAGEREF_RECLAIM_CLEAN; bool dirty, writeback; - int ret = SWAP_SUCCESS; cond_resched(); @@ -1139,13 +1138,9 @@ static unsigned long shrink_page_list(struct list_head *page_list, * processes. 
Try to unmap it here. */ if (page_mapped(page)) { - switch (ret = try_to_unmap(page, - ttu_flags | TTU_BATCH_FLUSH)) { - case SWAP_FAIL: + if (!try_to_unmap(page, ttu_flags | TTU_BATCH_FLUSH)) { nr_unmap_fail++; goto activate_locked; - case SWAP_SUCCESS: - ; /* try to free the page below */ } } -- 2.7.4 -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pg0-f70.google.com (mail-pg0-f70.google.com [74.125.83.70]) by kanga.kvack.org (Postfix) with ESMTP id A145B6B038E for ; Thu, 2 Mar 2017 01:39:33 -0500 (EST) Received: by mail-pg0-f70.google.com with SMTP id t184so82026261pgt.1 for ; Wed, 01 Mar 2017 22:39:33 -0800 (PST) Received: from lgeamrelo13.lge.com (LGEAMRELO13.lge.com. [156.147.23.53]) by mx.google.com with ESMTP id r1si6625114plb.293.2017.03.01.22.39.31 for ; Wed, 01 Mar 2017 22:39:32 -0800 (PST) From: Minchan Kim Subject: [RFC 03/11] mm: remove SWAP_DIRTY in ttu Date: Thu, 2 Mar 2017 15:39:17 +0900 Message-Id: <1488436765-32350-4-git-send-email-minchan@kernel.org> In-Reply-To: <1488436765-32350-1-git-send-email-minchan@kernel.org> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> Sender: owner-linux-mm@kvack.org List-ID: To: Andrew Morton Cc: kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko , Minchan Kim , Shaohua Li If we find a dirty lazyfree page, try_to_unmap_one can just SetPageSwapBacked it there, like a PG_mlocked page, and return SWAP_FAIL, which is natural because the page is not swappable right now, so vmscan can activate it. There is no point in introducing a new return value SWAP_DIRTY in ttu at the moment.
Cc: Shaohua Li Signed-off-by: Minchan Kim --- include/linux/rmap.h | 1 - mm/rmap.c | 4 ++-- mm/vmscan.c | 3 --- 3 files changed, 2 insertions(+), 6 deletions(-) diff --git a/include/linux/rmap.h b/include/linux/rmap.h index fee10d7..b556eef 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -298,6 +298,5 @@ static inline int page_mkclean(struct page *page) #define SWAP_AGAIN 1 #define SWAP_FAIL 2 #define SWAP_MLOCK 3 -#define SWAP_DIRTY 4 #endif /* _LINUX_RMAP_H */ diff --git a/mm/rmap.c b/mm/rmap.c index 8076347..3a14013 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1424,7 +1424,8 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma, } else if (!PageSwapBacked(page)) { /* dirty MADV_FREE page */ set_pte_at(mm, address, pvmw.pte, pteval); - ret = SWAP_DIRTY; + SetPageSwapBacked(page); + ret = SWAP_FAIL; page_vma_mapped_walk_done(&pvmw); break; } @@ -1494,7 +1495,6 @@ static int page_mapcount_is_zero(struct page *page) * SWAP_AGAIN - we missed a mapping, try again later * SWAP_FAIL - the page is unswappable * SWAP_MLOCK - page is mlocked. - * SWAP_DIRTY - page is dirty MADV_FREE page */ int try_to_unmap(struct page *page, enum ttu_flags flags) { diff --git a/mm/vmscan.c b/mm/vmscan.c index 7aa89e3..91cef05 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -1141,9 +1141,6 @@ static unsigned long shrink_page_list(struct list_head *page_list, if (page_mapped(page)) { switch (ret = try_to_unmap(page, ttu_flags | TTU_BATCH_FLUSH)) { - case SWAP_DIRTY: - SetPageSwapBacked(page); - /* fall through */ case SWAP_FAIL: nr_unmap_fail++; goto activate_locked; -- 2.7.4 -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . 
Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pg0-f70.google.com (mail-pg0-f70.google.com [74.125.83.70]) by kanga.kvack.org (Postfix) with ESMTP id D57106B0394 for ; Thu, 2 Mar 2017 01:39:33 -0500 (EST) Received: by mail-pg0-f70.google.com with SMTP id q126so82010617pga.0 for ; Wed, 01 Mar 2017 22:39:33 -0800 (PST) Received: from lgeamrelo11.lge.com (LGEAMRELO11.lge.com. [156.147.23.51]) by mx.google.com with ESMTP id r1si450276pgo.42.2017.03.01.22.39.32 for ; Wed, 01 Mar 2017 22:39:33 -0800 (PST) From: Minchan Kim Subject: [RFC 11/11] mm: remove SWAP_[SUCCESS|AGAIN|FAIL] Date: Thu, 2 Mar 2017 15:39:25 +0900 Message-Id: <1488436765-32350-12-git-send-email-minchan@kernel.org> In-Reply-To: <1488436765-32350-1-git-send-email-minchan@kernel.org> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> Sender: owner-linux-mm@kvack.org List-ID: To: Andrew Morton Cc: kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko , Minchan Kim There is no user for it. Remove it. Signed-off-by: Minchan Kim --- include/linux/rmap.h | 7 ------- 1 file changed, 7 deletions(-) diff --git a/include/linux/rmap.h b/include/linux/rmap.h index 59d7dd7..5d6788f 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -291,11 +291,4 @@ static inline int page_mkclean(struct page *page) #endif /* CONFIG_MMU */ -/* - * Return values of try_to_unmap - */ -#define SWAP_SUCCESS 0 -#define SWAP_AGAIN 1 -#define SWAP_FAIL 2 - #endif /* _LINUX_RMAP_H */ -- 2.7.4 -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . 
Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pg0-f71.google.com (mail-pg0-f71.google.com [74.125.83.71]) by kanga.kvack.org (Postfix) with ESMTP id 091E36B038E for ; Thu, 2 Mar 2017 01:39:38 -0500 (EST) Received: by mail-pg0-f71.google.com with SMTP id t184so82027751pgt.1 for ; Wed, 01 Mar 2017 22:39:38 -0800 (PST) Received: from lgeamrelo13.lge.com (LGEAMRELO13.lge.com. [156.147.23.53]) by mx.google.com with ESMTP id i191si66836pge.48.2017.03.01.22.39.33 for ; Wed, 01 Mar 2017 22:39:37 -0800 (PST) From: Minchan Kim Subject: [RFC 09/11] mm: make rmap_walk void function Date: Thu, 2 Mar 2017 15:39:23 +0900 Message-Id: <1488436765-32350-10-git-send-email-minchan@kernel.org> In-Reply-To: <1488436765-32350-1-git-send-email-minchan@kernel.org> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> Sender: owner-linux-mm@kvack.org List-ID: To: Andrew Morton Cc: kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko , Minchan Kim There is no user of the return value of rmap_walk and friends, so this patch makes them void functions.
Signed-off-by: Minchan Kim --- include/linux/ksm.h | 5 ++--- include/linux/rmap.h | 4 ++-- mm/ksm.c | 16 ++++++---------- mm/rmap.c | 32 +++++++++++++------------------- 4 files changed, 23 insertions(+), 34 deletions(-) diff --git a/include/linux/ksm.h b/include/linux/ksm.h index 481c8c4..317ad0b 100644 --- a/include/linux/ksm.h +++ b/include/linux/ksm.h @@ -60,7 +60,7 @@ static inline void set_page_stable_node(struct page *page, struct page *ksm_might_need_to_copy(struct page *page, struct vm_area_struct *vma, unsigned long address); -int rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc); +void rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc); void ksm_migrate_page(struct page *newpage, struct page *oldpage); #else /* !CONFIG_KSM */ @@ -93,10 +93,9 @@ static inline int page_referenced_ksm(struct page *page, return 0; } -static inline int rmap_walk_ksm(struct page *page, +static inline void rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc) { - return 0; } static inline void ksm_migrate_page(struct page *newpage, struct page *oldpage) diff --git a/include/linux/rmap.h b/include/linux/rmap.h index 6028c38..1d7d457c 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -264,8 +264,8 @@ struct rmap_walk_control { bool (*invalid_vma)(struct vm_area_struct *vma, void *arg); }; -int rmap_walk(struct page *page, struct rmap_walk_control *rwc); -int rmap_walk_locked(struct page *page, struct rmap_walk_control *rwc); +void rmap_walk(struct page *page, struct rmap_walk_control *rwc); +void rmap_walk_locked(struct page *page, struct rmap_walk_control *rwc); #else /* !CONFIG_MMU */ diff --git a/mm/ksm.c b/mm/ksm.c index 520e4c3..68f8820 100644 --- a/mm/ksm.c +++ b/mm/ksm.c @@ -1931,11 +1931,10 @@ struct page *ksm_might_need_to_copy(struct page *page, return new_page; } -int rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc) +void rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc) { struct 
stable_node *stable_node; struct rmap_item *rmap_item; - int ret = SWAP_AGAIN; int search_new_forks = 0; VM_BUG_ON_PAGE(!PageKsm(page), page); @@ -1948,7 +1947,7 @@ int rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc) stable_node = page_stable_node(page); if (!stable_node) - return ret; + return; again: hlist_for_each_entry(rmap_item, &stable_node->hlist, hlist) { struct anon_vma *anon_vma = rmap_item->anon_vma; @@ -1976,23 +1975,20 @@ int rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc) if (rwc->invalid_vma && rwc->invalid_vma(vma, rwc->arg)) continue; - ret = rwc->rmap_one(page, vma, - rmap_item->address, rwc->arg); - if (ret != SWAP_AGAIN) { + if (SWAP_AGAIN != rwc->rmap_one(page, vma, + rmap_item->address, rwc->arg)) { anon_vma_unlock_read(anon_vma); - goto out; + return; } if (rwc->done && rwc->done(page)) { anon_vma_unlock_read(anon_vma); - goto out; + return; } } anon_vma_unlock_read(anon_vma); } if (!search_new_forks++) goto again; -out: - return ret; } #ifdef CONFIG_MIGRATION diff --git a/mm/rmap.c b/mm/rmap.c index 01f7832..08e4f81 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1595,13 +1595,12 @@ static struct anon_vma *rmap_walk_anon_lock(struct page *page, * vm_flags for that VMA. That should be OK, because that vma shouldn't be * LOCKED. 
*/ -static int rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc, +static void rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc, bool locked) { struct anon_vma *anon_vma; pgoff_t pgoff_start, pgoff_end; struct anon_vma_chain *avc; - int ret = SWAP_AGAIN; if (locked) { anon_vma = page_anon_vma(page); @@ -1611,7 +1610,7 @@ static int rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc, anon_vma = rmap_walk_anon_lock(page, rwc); } if (!anon_vma) - return ret; + return; pgoff_start = page_to_pgoff(page); pgoff_end = pgoff_start + hpage_nr_pages(page) - 1; @@ -1625,8 +1624,7 @@ static int rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc, if (rwc->invalid_vma && rwc->invalid_vma(vma, rwc->arg)) continue; - ret = rwc->rmap_one(page, vma, address, rwc->arg); - if (ret != SWAP_AGAIN) + if (SWAP_AGAIN != rwc->rmap_one(page, vma, address, rwc->arg)) break; if (rwc->done && rwc->done(page)) break; @@ -1634,7 +1632,6 @@ static int rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc, if (!locked) anon_vma_unlock_read(anon_vma); - return ret; } /* @@ -1650,13 +1647,12 @@ static int rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc, * vm_flags for that VMA. That should be OK, because that vma shouldn't be * LOCKED. 
*/ -static int rmap_walk_file(struct page *page, struct rmap_walk_control *rwc, +static void rmap_walk_file(struct page *page, struct rmap_walk_control *rwc, bool locked) { struct address_space *mapping = page_mapping(page); pgoff_t pgoff_start, pgoff_end; struct vm_area_struct *vma; - int ret = SWAP_AGAIN; /* * The page lock not only makes sure that page->mapping cannot @@ -1667,7 +1663,7 @@ static int rmap_walk_file(struct page *page, struct rmap_walk_control *rwc, VM_BUG_ON_PAGE(!PageLocked(page), page); if (!mapping) - return ret; + return; pgoff_start = page_to_pgoff(page); pgoff_end = pgoff_start + hpage_nr_pages(page) - 1; @@ -1682,8 +1678,7 @@ static int rmap_walk_file(struct page *page, struct rmap_walk_control *rwc, if (rwc->invalid_vma && rwc->invalid_vma(vma, rwc->arg)) continue; - ret = rwc->rmap_one(page, vma, address, rwc->arg); - if (ret != SWAP_AGAIN) + if (SWAP_AGAIN != rwc->rmap_one(page, vma, address, rwc->arg)) goto done; if (rwc->done && rwc->done(page)) goto done; @@ -1692,28 +1687,27 @@ static int rmap_walk_file(struct page *page, struct rmap_walk_control *rwc, done: if (!locked) i_mmap_unlock_read(mapping); - return ret; } -int rmap_walk(struct page *page, struct rmap_walk_control *rwc) +void rmap_walk(struct page *page, struct rmap_walk_control *rwc) { if (unlikely(PageKsm(page))) - return rmap_walk_ksm(page, rwc); + rmap_walk_ksm(page, rwc); else if (PageAnon(page)) - return rmap_walk_anon(page, rwc, false); + rmap_walk_anon(page, rwc, false); else - return rmap_walk_file(page, rwc, false); + rmap_walk_file(page, rwc, false); } /* Like rmap_walk, but caller holds relevant rmap lock */ -int rmap_walk_locked(struct page *page, struct rmap_walk_control *rwc) +void rmap_walk_locked(struct page *page, struct rmap_walk_control *rwc) { /* no ksm support for now */ VM_BUG_ON_PAGE(PageKsm(page), page); if (PageAnon(page)) - return rmap_walk_anon(page, rwc, true); + rmap_walk_anon(page, rwc, true); else - return rmap_walk_file(page, rwc, true); + 
rmap_walk_file(page, rwc, true); } #ifdef CONFIG_HUGETLB_PAGE -- 2.7.4 -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pg0-f71.google.com (mail-pg0-f71.google.com [74.125.83.71]) by kanga.kvack.org (Postfix) with ESMTP id 98A416B038A for ; Thu, 2 Mar 2017 02:34:52 -0500 (EST) Received: by mail-pg0-f71.google.com with SMTP id 1so82942839pgz.5 for ; Wed, 01 Mar 2017 23:34:52 -0800 (PST) Received: from out0-143.mail.aliyun.com (out0-143.mail.aliyun.com. [140.205.0.143]) by mx.google.com with ESMTP id q1si6745427plb.117.2017.03.01.23.34.51 for ; Wed, 01 Mar 2017 23:34:51 -0800 (PST) Reply-To: "Hillf Danton" From: "Hillf Danton" References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-4-git-send-email-minchan@kernel.org> In-Reply-To: <1488436765-32350-4-git-send-email-minchan@kernel.org> Subject: Re: [RFC 03/11] mm: remove SWAP_DIRTY in ttu Date: Thu, 02 Mar 2017 15:34:45 +0800 Message-ID: <079901d29327$77698f60$663cae20$@alibaba-inc.com> MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Content-Language: zh-cn Sender: owner-linux-mm@kvack.org List-ID: To: 'Minchan Kim' , 'Andrew Morton' Cc: kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, 'Johannes Weiner' , 'Michal Hocko' , 'Shaohua Li' On March 02, 2017 2:39 PM Minchan Kim wrote: > @@ -1424,7 +1424,8 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma, > } else if (!PageSwapBacked(page)) { > /* dirty MADV_FREE page */ Nit: enrich the comment please. 
> set_pte_at(mm, address, pvmw.pte, pteval); > - ret = SWAP_DIRTY; > + SetPageSwapBacked(page); > + ret = SWAP_FAIL; > page_vma_mapped_walk_done(&pvmw); > break; > } -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pg0-f72.google.com (mail-pg0-f72.google.com [74.125.83.72]) by kanga.kvack.org (Postfix) with ESMTP id 4504E6B0388 for ; Thu, 2 Mar 2017 09:23:41 -0500 (EST) Received: by mail-pg0-f72.google.com with SMTP id 10so9562986pgb.3 for ; Thu, 02 Mar 2017 06:23:41 -0800 (PST) Received: from mx0a-001b2d01.pphosted.com (mx0b-001b2d01.pphosted.com. [148.163.158.5]) by mx.google.com with ESMTPS id h5si7555073pgg.45.2017.03.02.06.23.40 for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Thu, 02 Mar 2017 06:23:40 -0800 (PST) Received: from pps.filterd (m0098419.ppops.net [127.0.0.1]) by mx0b-001b2d01.pphosted.com (8.16.0.20/8.16.0.20) with SMTP id v22EJWBm101180 for ; Thu, 2 Mar 2017 09:23:39 -0500 Received: from e23smtp08.au.ibm.com (e23smtp08.au.ibm.com [202.81.31.141]) by mx0b-001b2d01.pphosted.com with ESMTP id 28xdv1mvv7-1 (version=TLSv1.2 cipher=AES256-SHA bits=256 verify=NOT) for ; Thu, 02 Mar 2017 09:23:39 -0500 Received: from localhost by e23smtp08.au.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! 
Violators will be prosecuted for from ; Fri, 3 Mar 2017 00:23:33 +1000 Received: from d23relay09.au.ibm.com (d23relay09.au.ibm.com [9.185.63.181]) by d23dlp02.au.ibm.com (Postfix) with ESMTP id 4B1442BB0057 for ; Fri, 3 Mar 2017 01:23:32 +1100 (EST) Received: from d23av04.au.ibm.com (d23av04.au.ibm.com [9.190.235.139]) by d23relay09.au.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id v22ENOOZ40042564 for ; Fri, 3 Mar 2017 01:23:32 +1100 Received: from d23av04.au.ibm.com (localhost [127.0.0.1]) by d23av04.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVout) with ESMTP id v22EMx8L003651 for ; Fri, 3 Mar 2017 01:23:00 +1100 Subject: Re: [RFC 00/11] make try_to_unmap simple References: <1488436765-32350-1-git-send-email-minchan@kernel.org> From: Anshuman Khandual Date: Thu, 2 Mar 2017 19:52:27 +0530 MIME-Version: 1.0 In-Reply-To: <1488436765-32350-1-git-send-email-minchan@kernel.org> Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: 7bit Message-Id: <86c860e4-c53d-200a-f36a-2ed8a7415d5d@linux.vnet.ibm.com> Sender: owner-linux-mm@kvack.org List-ID: To: Minchan Kim , Andrew Morton Cc: kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko On 03/02/2017 12:09 PM, Minchan Kim wrote: > Currently, try_to_unmap returns various return value(SWAP_SUCCESS, > SWAP_FAIL, SWAP_AGAIN, SWAP_DIRTY and SWAP_MLOCK). When I look into > that, it's unncessary complicated so this patch aims for cleaning > it up. Change ttu to boolean function so we can remove SWAP_AGAIN, > SWAP_DIRTY, SWAP_MLOCK. It may be a trivial question, but apart from being a cleanup, does it help its callers in some way? Any other benefits? -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ .
Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pf0-f197.google.com (mail-pf0-f197.google.com [209.85.192.197]) by kanga.kvack.org (Postfix) with ESMTP id 11AD16B0387 for ; Thu, 2 Mar 2017 09:27:19 -0500 (EST) Received: by mail-pf0-f197.google.com with SMTP id j5so83985748pfb.3 for ; Thu, 02 Mar 2017 06:27:19 -0800 (PST) Received: from mx0a-001b2d01.pphosted.com (mx0a-001b2d01.pphosted.com. [148.163.156.1]) by mx.google.com with ESMTPS id f6si7565489plj.297.2017.03.02.06.27.18 for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Thu, 02 Mar 2017 06:27:18 -0800 (PST) Received: from pps.filterd (m0098393.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.16.0.20/8.16.0.20) with SMTP id v22EJjGK012097 for ; Thu, 2 Mar 2017 09:27:17 -0500 Received: from e28smtp06.in.ibm.com (e28smtp06.in.ibm.com [125.16.236.6]) by mx0a-001b2d01.pphosted.com with ESMTP id 28xjam8tvd-1 (version=TLSv1.2 cipher=AES256-SHA bits=256 verify=NOT) for ; Thu, 02 Mar 2017 09:27:17 -0500 Received: from localhost by e28smtp06.in.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! 
Violators will be prosecuted for from ; Thu, 2 Mar 2017 19:57:14 +0530 Received: from d28relay10.in.ibm.com (d28relay10.in.ibm.com [9.184.220.161]) by d28dlp01.in.ibm.com (Postfix) with ESMTP id B369BE005F for ; Thu, 2 Mar 2017 19:58:57 +0530 (IST) Received: from d28av02.in.ibm.com (d28av02.in.ibm.com [9.184.220.64]) by d28relay10.in.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id v22EQ5aL16384084 for ; Thu, 2 Mar 2017 19:56:05 +0530 Received: from d28av02.in.ibm.com (localhost [127.0.0.1]) by d28av02.in.ibm.com (8.14.4/8.14.4/NCO v10.0 AVout) with ESMTP id v22ERBaJ024502 for ; Thu, 2 Mar 2017 19:57:11 +0530 Subject: Re: [RFC 01/11] mm: use SWAP_SUCCESS instead of 0 References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-2-git-send-email-minchan@kernel.org> From: Anshuman Khandual Date: Thu, 2 Mar 2017 19:57:10 +0530 MIME-Version: 1.0 In-Reply-To: <1488436765-32350-2-git-send-email-minchan@kernel.org> Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: 7bit Message-Id: Sender: owner-linux-mm@kvack.org List-ID: To: Minchan Kim , Andrew Morton Cc: kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko , "Kirill A . Shutemov" On 03/02/2017 12:09 PM, Minchan Kim wrote: > SWAP_SUCCESS defined value 0 can be changed always so don't rely on > it. Instead, use explict macro. Right. But shouldn't we move the changes to the callers to the end of the patch series, after first doing the intended cleanup of the try_to_unmap() function? > > Cc: Kirill A.
Shutemov > Signed-off-by: Minchan Kim > --- > mm/huge_memory.c | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff --git a/mm/huge_memory.c b/mm/huge_memory.c > index 092cc5c..fe2ccd4 100644 > --- a/mm/huge_memory.c > +++ b/mm/huge_memory.c > @@ -2114,7 +2114,7 @@ static void freeze_page(struct page *page) > ttu_flags |= TTU_MIGRATION; > > ret = try_to_unmap(page, ttu_flags); > - VM_BUG_ON_PAGE(ret, page); > + VM_BUG_ON_PAGE(ret != SWAP_SUCCESS, page); > } > > static void unfreeze_page(struct page *page) > -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pg0-f70.google.com (mail-pg0-f70.google.com [74.125.83.70]) by kanga.kvack.org (Postfix) with ESMTP id ABF146B0388 for ; Thu, 2 Mar 2017 09:33:23 -0500 (EST) Received: by mail-pg0-f70.google.com with SMTP id d18so93385926pgh.2 for ; Thu, 02 Mar 2017 06:33:23 -0800 (PST) Received: from mx0a-001b2d01.pphosted.com (mx0b-001b2d01.pphosted.com. [148.163.158.5]) by mx.google.com with ESMTPS id i9si7607107plk.73.2017.03.02.06.33.22 for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Thu, 02 Mar 2017 06:33:22 -0800 (PST) Received: from pps.filterd (m0098417.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.16.0.20/8.16.0.20) with SMTP id v22ETpe1140322 for ; Thu, 2 Mar 2017 09:33:22 -0500 Received: from e28smtp03.in.ibm.com (e28smtp03.in.ibm.com [125.16.236.3]) by mx0a-001b2d01.pphosted.com with ESMTP id 28xen2kc7c-1 (version=TLSv1.2 cipher=AES256-SHA bits=256 verify=NOT) for ; Thu, 02 Mar 2017 09:33:21 -0500 Received: from localhost by e28smtp03.in.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! 
Violators will be prosecuted for from ; Thu, 2 Mar 2017 20:03:18 +0530 Received: from d28relay08.in.ibm.com (d28relay08.in.ibm.com [9.184.220.159]) by d28dlp01.in.ibm.com (Postfix) with ESMTP id 17CA4E0024 for ; Thu, 2 Mar 2017 20:05:03 +0530 (IST) Received: from d28av01.in.ibm.com (d28av01.in.ibm.com [9.184.220.63]) by d28relay08.in.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id v22EWAWs13631582 for ; Thu, 2 Mar 2017 20:02:10 +0530 Received: from d28av01.in.ibm.com (localhost [127.0.0.1]) by d28av01.in.ibm.com (8.14.4/8.14.4/NCO v10.0 AVout) with ESMTP id v22EXGSX014417 for ; Thu, 2 Mar 2017 20:03:16 +0530 Subject: Re: [RFC 02/11] mm: remove unncessary ret in page_referenced References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-3-git-send-email-minchan@kernel.org> From: Anshuman Khandual Date: Thu, 2 Mar 2017 20:03:16 +0530 MIME-Version: 1.0 In-Reply-To: <1488436765-32350-3-git-send-email-minchan@kernel.org> Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: 7bit Message-Id: <2baf1168-0f84-b80d-5fb9-9d13c618c9f1@linux.vnet.ibm.com> Sender: owner-linux-mm@kvack.org List-ID: To: Minchan Kim , Andrew Morton Cc: kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko On 03/02/2017 12:09 PM, Minchan Kim wrote: > Anyone doesn't use ret variable. Remove it. > This change is correct, but I am not sure how it is related to the try_to_unmap() cleanup.
> Signed-off-by: Minchan Kim > --- > mm/rmap.c | 3 +-- > 1 file changed, 1 insertion(+), 2 deletions(-) > > diff --git a/mm/rmap.c b/mm/rmap.c > index bb45712..8076347 100644 > --- a/mm/rmap.c > +++ b/mm/rmap.c > @@ -805,7 +805,6 @@ int page_referenced(struct page *page, > struct mem_cgroup *memcg, > unsigned long *vm_flags) > { > - int ret; > int we_locked = 0; > struct page_referenced_arg pra = { > .mapcount = total_mapcount(page), > @@ -839,7 +838,7 @@ int page_referenced(struct page *page, > rwc.invalid_vma = invalid_page_referenced_vma; > } > > - ret = rmap_walk(page, &rwc); > + rmap_walk(page, &rwc); > *vm_flags = pra.vm_flags; > > if (we_locked) > -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pg0-f71.google.com (mail-pg0-f71.google.com [74.125.83.71]) by kanga.kvack.org (Postfix) with ESMTP id 6F1266B0389 for ; Thu, 2 Mar 2017 09:43:13 -0500 (EST) Received: by mail-pg0-f71.google.com with SMTP id t184so93508124pgt.1 for ; Thu, 02 Mar 2017 06:43:13 -0800 (PST) Received: from mx0a-001b2d01.pphosted.com (mx0b-001b2d01.pphosted.com. [148.163.158.5]) by mx.google.com with ESMTPS id t7si7603072pfi.147.2017.03.02.06.43.12 for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Thu, 02 Mar 2017 06:43:12 -0800 (PST) Received: from pps.filterd (m0098417.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.16.0.20/8.16.0.20) with SMTP id v22EclF0022780 for ; Thu, 2 Mar 2017 09:43:11 -0500 Received: from e28smtp07.in.ibm.com (e28smtp07.in.ibm.com [125.16.236.7]) by mx0a-001b2d01.pphosted.com with ESMTP id 28xen2ktha-1 (version=TLSv1.2 cipher=AES256-SHA bits=256 verify=NOT) for ; Thu, 02 Mar 2017 09:43:11 -0500 Received: from localhost by e28smtp07.in.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! 
Violators will be prosecuted for from ; Thu, 2 Mar 2017 20:13:08 +0530 Received: from d28relay03.in.ibm.com (d28relay03.in.ibm.com [9.184.220.60]) by d28dlp02.in.ibm.com (Postfix) with ESMTP id B43FC394004E for ; Thu, 2 Mar 2017 20:13:06 +0530 (IST) Received: from d28av01.in.ibm.com (d28av01.in.ibm.com [9.184.220.63]) by d28relay03.in.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id v22Eh6mO18088170 for ; Thu, 2 Mar 2017 20:13:06 +0530 Received: from d28av01.in.ibm.com (localhost [127.0.0.1]) by d28av01.in.ibm.com (8.14.4/8.14.4/NCO v10.0 AVout) with ESMTP id v22Eh5LM024985 for ; Thu, 2 Mar 2017 20:13:05 +0530 Subject: Re: [RFC 03/11] mm: remove SWAP_DIRTY in ttu References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-4-git-send-email-minchan@kernel.org> From: Anshuman Khandual Date: Thu, 2 Mar 2017 20:12:59 +0530 MIME-Version: 1.0 In-Reply-To: <1488436765-32350-4-git-send-email-minchan@kernel.org> Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: 7bit Message-Id: <339d97b2-aeda-29ff-514c-d883e87c7a14@linux.vnet.ibm.com> Sender: owner-linux-mm@kvack.org List-ID: To: Minchan Kim , Andrew Morton Cc: kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko , Shaohua Li On 03/02/2017 12:09 PM, Minchan Kim wrote: > If we found lazyfree page is dirty, ttuo can just SetPageSwapBakced > in there like PG_mlocked page and just return with SWAP_FAIL which > is very natural because the page is not swappable right now so that > vmscan can activate it. There is no point to introduce new return > value SWAP_DIRTY in ttu at the moment. Yeah makes sense. In the process, SetPageSwapBacked marking of the page is moved from the shrink_page_list() to try_to_unmap_one(). -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . 
Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pg0-f72.google.com (mail-pg0-f72.google.com [74.125.83.72]) by kanga.kvack.org (Postfix) with ESMTP id 391EE6B038A for ; Thu, 2 Mar 2017 09:52:01 -0500 (EST) Received: by mail-pg0-f72.google.com with SMTP id 10so10340478pgb.3 for ; Thu, 02 Mar 2017 06:52:01 -0800 (PST) Received: from mx0a-001b2d01.pphosted.com (mx0a-001b2d01.pphosted.com. [148.163.156.1]) by mx.google.com with ESMTPS id a21si7621407pgi.248.2017.03.02.06.52.00 for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Thu, 02 Mar 2017 06:52:00 -0800 (PST) Received: from pps.filterd (m0098393.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.16.0.20/8.16.0.20) with SMTP id v22Eo8Gl098776 for ; Thu, 2 Mar 2017 09:52:00 -0500 Received: from e28smtp09.in.ibm.com (e28smtp09.in.ibm.com [125.16.236.9]) by mx0a-001b2d01.pphosted.com with ESMTP id 28xjama05c-1 (version=TLSv1.2 cipher=AES256-SHA bits=256 verify=NOT) for ; Thu, 02 Mar 2017 09:51:59 -0500 Received: from localhost by e28smtp09.in.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! 
Violators will be prosecuted for from ; Thu, 2 Mar 2017 20:21:56 +0530 Received: from d28relay10.in.ibm.com (d28relay10.in.ibm.com [9.184.220.161]) by d28dlp03.in.ibm.com (Postfix) with ESMTP id 156D9125804F for ; Thu, 2 Mar 2017 20:22:07 +0530 (IST) Received: from d28av03.in.ibm.com (d28av03.in.ibm.com [9.184.220.65]) by d28relay10.in.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id v22EomZL17956888 for ; Thu, 2 Mar 2017 20:20:48 +0530 Received: from d28av03.in.ibm.com (localhost [127.0.0.1]) by d28av03.in.ibm.com (8.14.4/8.14.4/NCO v10.0 AVout) with ESMTP id v22Eprl4000922 for ; Thu, 2 Mar 2017 20:21:54 +0530 Subject: Re: [RFC 04/11] mm: remove SWAP_MLOCK check for SWAP_SUCCESS in ttu References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-5-git-send-email-minchan@kernel.org> From: Anshuman Khandual Date: Thu, 2 Mar 2017 20:21:46 +0530 MIME-Version: 1.0 In-Reply-To: <1488436765-32350-5-git-send-email-minchan@kernel.org> Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: 7bit Message-Id: <65fd1dd1-7ca4-6610-285c-09436879d8ed@linux.vnet.ibm.com> Sender: owner-linux-mm@kvack.org List-ID: To: Minchan Kim , Andrew Morton Cc: kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko On 03/02/2017 12:09 PM, Minchan Kim wrote: > If the page is mapped and rescue in ttuo, page_mapcount(page) == 0 cannot Nit: "ttuo" is very cryptic. Please expand it. > be true so page_mapcount check in ttu is enough to return SWAP_SUCCESS. > IOW, SWAP_MLOCK check is redundant so remove it. Right, page_mapcount(page) should be enough to tell whether swapping out happened successfully or the page is still mapped in some page table. 
> > Signed-off-by: Minchan Kim > --- > mm/rmap.c | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff --git a/mm/rmap.c b/mm/rmap.c > index 3a14013..0a48958 100644 > --- a/mm/rmap.c > +++ b/mm/rmap.c > @@ -1523,7 +1523,7 @@ int try_to_unmap(struct page *page, enum ttu_flags flags) > else > ret = rmap_walk(page, &rwc); > > - if (ret != SWAP_MLOCK && !page_mapcount(page)) > + if (!page_mapcount(page)) > ret = SWAP_SUCCESS; > return ret; > } > -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pf0-f200.google.com (mail-pf0-f200.google.com [209.85.192.200]) by kanga.kvack.org (Postfix) with ESMTP id 443176B0038 for ; Thu, 2 Mar 2017 21:11:22 -0500 (EST) Received: by mail-pf0-f200.google.com with SMTP id x63so34919214pfx.7 for ; Thu, 02 Mar 2017 18:11:22 -0800 (PST) Received: from lgeamrelo12.lge.com (LGEAMRELO12.lge.com. 
[156.147.23.52]) by mx.google.com with ESMTP id f4si9024669pgc.224.2017.03.02.18.11.20 for ; Thu, 02 Mar 2017 18:11:21 -0800 (PST) Date: Fri, 3 Mar 2017 11:11:18 +0900 From: Minchan Kim Subject: Re: [RFC 00/11] make try_to_unmap simple Message-ID: <20170303021118.GA3503@bbox> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <86c860e4-c53d-200a-f36a-2ed8a7415d5d@linux.vnet.ibm.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <86c860e4-c53d-200a-f36a-2ed8a7415d5d@linux.vnet.ibm.com> Sender: owner-linux-mm@kvack.org List-ID: To: Anshuman Khandual Cc: Andrew Morton , kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko Hi Anshuman, On Thu, Mar 02, 2017 at 07:52:27PM +0530, Anshuman Khandual wrote: > On 03/02/2017 12:09 PM, Minchan Kim wrote: > > Currently, try_to_unmap returns various return value(SWAP_SUCCESS, > > SWAP_FAIL, SWAP_AGAIN, SWAP_DIRTY and SWAP_MLOCK). When I look into > > that, it's unncessary complicated so this patch aims for cleaning > > it up. Change ttu to boolean function so we can remove SWAP_AGAIN, > > SWAP_DIRTY, SWAP_MLOCK. > > It may be a trivial question but apart from being a cleanup does it > help in improving it's callers some way ? Any other benefits ? If you mean performance, I don't think so. It just aims for cleanup, so callers don't need to think much about the return value of try_to_unmap. All they should consider is "success/fail". The rest will be handled in the isolate/putback friends, which makes the API simple and easy to use. Thanks. > > -- > To unsubscribe, send a message with 'unsubscribe linux-mm' in > the body to majordomo@kvack.org. For more info on Linux MM, > see: http://www.linux-mm.org/ . > Don't email: email@kvack.org -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ .
Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pg0-f71.google.com (mail-pg0-f71.google.com [74.125.83.71]) by kanga.kvack.org (Postfix) with ESMTP id 252196B0038 for ; Thu, 2 Mar 2017 21:57:35 -0500 (EST) Received: by mail-pg0-f71.google.com with SMTP id q126so113523194pga.0 for ; Thu, 02 Mar 2017 18:57:35 -0800 (PST) Received: from lgeamrelo12.lge.com (LGEAMRELO12.lge.com. [156.147.23.52]) by mx.google.com with ESMTP id v15si9161990plk.133.2017.03.02.18.57.33 for ; Thu, 02 Mar 2017 18:57:34 -0800 (PST) Date: Fri, 3 Mar 2017 11:57:31 +0900 From: Minchan Kim Subject: Re: [RFC 03/11] mm: remove SWAP_DIRTY in ttu Message-ID: <20170303025731.GC3503@bbox> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-4-git-send-email-minchan@kernel.org> <079901d29327$77698f60$663cae20$@alibaba-inc.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <079901d29327$77698f60$663cae20$@alibaba-inc.com> Sender: owner-linux-mm@kvack.org List-ID: To: Hillf Danton Cc: 'Andrew Morton' , kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, 'Johannes Weiner' , 'Michal Hocko' , 'Shaohua Li' Hi Hillf, On Thu, Mar 02, 2017 at 03:34:45PM +0800, Hillf Danton wrote: > > On March 02, 2017 2:39 PM Minchan Kim wrote: > > @@ -1424,7 +1424,8 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma, > > } else if (!PageSwapBacked(page)) { > > /* dirty MADV_FREE page */ > > Nit: enrich the comment please. I guess what you want is not something this patch does but something already merged, so I just sent a small cleanup patch against the patch merged into mmotm to make this logic clear. You are already Cced there, so you can see it. I hope it helps. If you want something else, please tell me and I will do more to make it clear. Thanks for the review.
> > set_pte_at(mm, address, pvmw.pte, pteval); > > - ret = SWAP_DIRTY; > > + SetPageSwapBacked(page); > > + ret = SWAP_FAIL; > > page_vma_mapped_walk_done(&pvmw); > > break; > > } > > -- > To unsubscribe, send a message with 'unsubscribe linux-mm' in > the body to majordomo@kvack.org. For more info on Linux MM, > see: http://www.linux-mm.org/ . > Don't email: email@kvack.org -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pg0-f70.google.com (mail-pg0-f70.google.com [74.125.83.70]) by kanga.kvack.org (Postfix) with ESMTP id 97A906B0038 for ; Thu, 2 Mar 2017 22:02:03 -0500 (EST) Received: by mail-pg0-f70.google.com with SMTP id t184so113610627pgt.1 for ; Thu, 02 Mar 2017 19:02:03 -0800 (PST) Received: from lgeamrelo11.lge.com (LGEAMRELO11.lge.com. [156.147.23.51]) by mx.google.com with ESMTP id o4si9179675plb.97.2017.03.02.19.02.02 for ; Thu, 02 Mar 2017 19:02:02 -0800 (PST) Date: Fri, 3 Mar 2017 12:01:58 +0900 From: Minchan Kim Subject: Re: [RFC 01/11] mm: use SWAP_SUCCESS instead of 0 Message-ID: <20170303030158.GD3503@bbox> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-2-git-send-email-minchan@kernel.org> MIME-Version: 1.0 In-Reply-To: Content-Type: text/plain; charset="us-ascii" Content-Disposition: inline Sender: owner-linux-mm@kvack.org List-ID: To: Anshuman Khandual Cc: Andrew Morton , kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko , "Kirill A . Shutemov" On Thu, Mar 02, 2017 at 07:57:10PM +0530, Anshuman Khandual wrote: > On 03/02/2017 12:09 PM, Minchan Kim wrote: > > SWAP_SUCCESS defined value 0 can be changed always so don't rely on > > it. Instead, use explict macro. > > Right. 
But should not we move the changes to the callers last in the > patch series after doing the cleanup to the try_to_unmap() function > as intended first. I don't understand what you are pointing out. Could you elaborate it a bit? Thanks. > > > > Cc: Kirill A. Shutemov > > Signed-off-by: Minchan Kim > > --- > > mm/huge_memory.c | 2 +- > > 1 file changed, 1 insertion(+), 1 deletion(-) > > > > diff --git a/mm/huge_memory.c b/mm/huge_memory.c > > index 092cc5c..fe2ccd4 100644 > > --- a/mm/huge_memory.c > > +++ b/mm/huge_memory.c > > @@ -2114,7 +2114,7 @@ static void freeze_page(struct page *page) > > ttu_flags |= TTU_MIGRATION; > > > > ret = try_to_unmap(page, ttu_flags); > > - VM_BUG_ON_PAGE(ret, page); > > + VM_BUG_ON_PAGE(ret != SWAP_SUCCESS, page); > > } > > > > static void unfreeze_page(struct page *page) > > > > -- > To unsubscribe, send a message with 'unsubscribe linux-mm' in > the body to majordomo@kvack.org. For more info on Linux MM, > see: http://www.linux-mm.org/ . > Don't email: email@kvack.org -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pf0-f199.google.com (mail-pf0-f199.google.com [209.85.192.199]) by kanga.kvack.org (Postfix) with ESMTP id 2501A6B0389 for ; Thu, 2 Mar 2017 22:03:03 -0500 (EST) Received: by mail-pf0-f199.google.com with SMTP id j5so103540521pfb.3 for ; Thu, 02 Mar 2017 19:03:03 -0800 (PST) Received: from lgeamrelo13.lge.com (LGEAMRELO13.lge.com. 
[156.147.23.53]) by mx.google.com with ESMTP id 33si9157453plb.317.2017.03.02.19.03.01 for ; Thu, 02 Mar 2017 19:03:02 -0800 (PST) Date: Fri, 3 Mar 2017 12:03:00 +0900 From: Minchan Kim Subject: Re: [RFC 02/11] mm: remove unncessary ret in page_referenced Message-ID: <20170303030300.GE3503@bbox> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-3-git-send-email-minchan@kernel.org> <2baf1168-0f84-b80d-5fb9-9d13c618c9f1@linux.vnet.ibm.com> MIME-Version: 1.0 In-Reply-To: <2baf1168-0f84-b80d-5fb9-9d13c618c9f1@linux.vnet.ibm.com> Content-Type: text/plain; charset="us-ascii" Content-Disposition: inline Sender: owner-linux-mm@kvack.org List-ID: To: Anshuman Khandual Cc: Andrew Morton , kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko On Thu, Mar 02, 2017 at 08:03:16PM +0530, Anshuman Khandual wrote: > On 03/02/2017 12:09 PM, Minchan Kim wrote: > > Anyone doesn't use ret variable. Remove it. > > > > This change is correct. But not sure how this is related to > try_to_unmap() clean up though. In this patchset, an upcoming patch makes rmap_walk a void function, so this is a preparation step for it.
> > > > Signed-off-by: Minchan Kim > > --- > > mm/rmap.c | 3 +-- > > 1 file changed, 1 insertion(+), 2 deletions(-) > > > > diff --git a/mm/rmap.c b/mm/rmap.c > > index bb45712..8076347 100644 > > --- a/mm/rmap.c > > +++ b/mm/rmap.c > > @@ -805,7 +805,6 @@ int page_referenced(struct page *page, > > struct mem_cgroup *memcg, > > unsigned long *vm_flags) > > { > > - int ret; > > int we_locked = 0; > > struct page_referenced_arg pra = { > > .mapcount = total_mapcount(page), > > @@ -839,7 +838,7 @@ int page_referenced(struct page *page, > > rwc.invalid_vma = invalid_page_referenced_vma; > > } > > > > - ret = rmap_walk(page, &rwc); > > + rmap_walk(page, &rwc); > > *vm_flags = pra.vm_flags; > > > > if (we_locked) > > > > -- > To unsubscribe, send a message with 'unsubscribe linux-mm' in > the body to majordomo@kvack.org. For more info on Linux MM, > see: http://www.linux-mm.org/ . > Don't email: email@kvack.org -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pf0-f197.google.com (mail-pf0-f197.google.com [209.85.192.197]) by kanga.kvack.org (Postfix) with ESMTP id 31B006B0038 for ; Thu, 2 Mar 2017 22:04:19 -0500 (EST) Received: by mail-pf0-f197.google.com with SMTP id j5so103567862pfb.3 for ; Thu, 02 Mar 2017 19:04:19 -0800 (PST) Received: from lgeamrelo12.lge.com (LGEAMRELO12.lge.com. 
[156.147.23.52]) by mx.google.com with ESMTP id w31si9183140pla.116.2017.03.02.19.04.17 for ; Thu, 02 Mar 2017 19:04:18 -0800 (PST) Date: Fri, 3 Mar 2017 12:04:16 +0900 From: Minchan Kim Subject: Re: [RFC 04/11] mm: remove SWAP_MLOCK check for SWAP_SUCCESS in ttu Message-ID: <20170303030416.GF3503@bbox> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-5-git-send-email-minchan@kernel.org> <65fd1dd1-7ca4-6610-285c-09436879d8ed@linux.vnet.ibm.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <65fd1dd1-7ca4-6610-285c-09436879d8ed@linux.vnet.ibm.com> Sender: owner-linux-mm@kvack.org List-ID: To: Anshuman Khandual Cc: Andrew Morton , kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko On Thu, Mar 02, 2017 at 08:21:46PM +0530, Anshuman Khandual wrote: > On 03/02/2017 12:09 PM, Minchan Kim wrote: > > If the page is mapped and rescue in ttuo, page_mapcount(page) == 0 cannot > > Nit: "ttuo" is very cryptic. Please expand it. No problem. > > > be true so page_mapcount check in ttu is enough to return SWAP_SUCCESS. > > IOW, SWAP_MLOCK check is redundant so remove it. > > Right, page_mapcount(page) should be enough to tell whether swapping > out happened successfully or the page is still mapped in some page > table. > Thanks for the review, Anshuman! -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . 
Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pg0-f71.google.com (mail-pg0-f71.google.com [74.125.83.71]) by kanga.kvack.org (Postfix) with ESMTP id 134746B0389 for ; Fri, 3 Mar 2017 00:42:16 -0500 (EST) Received: by mail-pg0-f71.google.com with SMTP id f21so116509905pgi.4 for ; Thu, 02 Mar 2017 21:42:16 -0800 (PST) Received: from mx0a-001b2d01.pphosted.com (mx0a-001b2d01.pphosted.com. [148.163.156.1]) by mx.google.com with ESMTPS id z17si9494038pgi.387.2017.03.02.21.42.15 for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Thu, 02 Mar 2017 21:42:15 -0800 (PST) Received: from pps.filterd (m0098396.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.16.0.20/8.16.0.20) with SMTP id v235cWX3042728 for ; Fri, 3 Mar 2017 00:42:14 -0500 Received: from e23smtp03.au.ibm.com (e23smtp03.au.ibm.com [202.81.31.145]) by mx0a-001b2d01.pphosted.com with ESMTP id 28xs8e5wtm-1 (version=TLSv1.2 cipher=AES256-SHA bits=256 verify=NOT) for ; Fri, 03 Mar 2017 00:42:14 -0500 Received: from localhost by e23smtp03.au.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! 
Violators will be prosecuted for from ; Fri, 3 Mar 2017 15:42:11 +1000 Received: from d23relay06.au.ibm.com (d23relay06.au.ibm.com [9.185.63.219]) by d23dlp03.au.ibm.com (Postfix) with ESMTP id D942D3578065 for ; Fri, 3 Mar 2017 16:42:08 +1100 (EST) Received: from d23av05.au.ibm.com (d23av05.au.ibm.com [9.190.234.119]) by d23relay06.au.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id v235f2cn50266270 for ; Fri, 3 Mar 2017 16:42:07 +1100 Received: from d23av05.au.ibm.com (localhost [127.0.0.1]) by d23av05.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVout) with ESMTP id v235ebt7028530 for ; Fri, 3 Mar 2017 16:40:37 +1100 Subject: Re: [RFC 00/11] make try_to_unmap simple References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <86c860e4-c53d-200a-f36a-2ed8a7415d5d@linux.vnet.ibm.com> <20170303021118.GA3503@bbox> From: Anshuman Khandual Date: Fri, 3 Mar 2017 11:09:43 +0530 MIME-Version: 1.0 In-Reply-To: <20170303021118.GA3503@bbox> Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: 7bit Message-Id: <96e43a6e-182f-2bda-0214-ab3a50946f29@linux.vnet.ibm.com> Sender: owner-linux-mm@kvack.org List-ID: To: Minchan Kim , Anshuman Khandual Cc: Andrew Morton , kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko On 03/03/2017 07:41 AM, Minchan Kim wrote: > Hi Anshuman, > > On Thu, Mar 02, 2017 at 07:52:27PM +0530, Anshuman Khandual wrote: >> On 03/02/2017 12:09 PM, Minchan Kim wrote: >>> Currently, try_to_unmap returns various return value(SWAP_SUCCESS, >>> SWAP_FAIL, SWAP_AGAIN, SWAP_DIRTY and SWAP_MLOCK). When I look into >>> that, it's unncessary complicated so this patch aims for cleaning >>> it up. Change ttu to boolean function so we can remove SWAP_AGAIN, >>> SWAP_DIRTY, SWAP_MLOCK. >> >> It may be a trivial question but apart from being a cleanup does it >> help in improving it's callers some way ? Any other benefits ? > > If you mean some performace, I don't think so. 
It just aims for cleanup > so caller don't need to think much about return value of try_to_unmap. > What he should consider is just "success/fail". Others will be done in > isolate/putback friends which makes API simple/easy to use. Right, got it. Thanks ! -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pg0-f70.google.com (mail-pg0-f70.google.com [74.125.83.70]) by kanga.kvack.org (Postfix) with ESMTP id 6C3176B0388 for ; Fri, 3 Mar 2017 06:45:37 -0500 (EST) Received: by mail-pg0-f70.google.com with SMTP id 65so123800980pgi.7 for ; Fri, 03 Mar 2017 03:45:37 -0800 (PST) Received: from mx0a-001b2d01.pphosted.com (mx0a-001b2d01.pphosted.com. [148.163.156.1]) by mx.google.com with ESMTPS id w187si9193832pgb.130.2017.03.03.03.45.36 for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Fri, 03 Mar 2017 03:45:36 -0800 (PST) Received: from pps.filterd (m0098396.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.16.0.20/8.16.0.20) with SMTP id v23BjR31001268 for ; Fri, 3 Mar 2017 06:45:36 -0500 Received: from e23smtp05.au.ibm.com (e23smtp05.au.ibm.com [202.81.31.147]) by mx0a-001b2d01.pphosted.com with ESMTP id 28xs8ekmeg-1 (version=TLSv1.2 cipher=AES256-SHA bits=256 verify=NOT) for ; Fri, 03 Mar 2017 06:45:31 -0500 Received: from localhost by e23smtp05.au.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! 
Violators will be prosecuted for from ; Fri, 3 Mar 2017 21:45:00 +1000 Received: from d23relay10.au.ibm.com (d23relay10.au.ibm.com [9.190.26.77]) by d23dlp03.au.ibm.com (Postfix) with ESMTP id 5135A3578053 for ; Fri, 3 Mar 2017 22:44:59 +1100 (EST) Received: from d23av04.au.ibm.com (d23av04.au.ibm.com [9.190.235.139]) by d23relay10.au.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id v23Biphm40501380 for ; Fri, 3 Mar 2017 22:44:59 +1100 Received: from d23av04.au.ibm.com (localhost [127.0.0.1]) by d23av04.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVout) with ESMTP id v23BiQdQ030006 for ; Fri, 3 Mar 2017 22:44:26 +1100 Subject: Re: [RFC 05/11] mm: make the try_to_munlock void function References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-6-git-send-email-minchan@kernel.org> From: Anshuman Khandual Date: Fri, 3 Mar 2017 17:13:54 +0530 MIME-Version: 1.0 In-Reply-To: <1488436765-32350-6-git-send-email-minchan@kernel.org> Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: 7bit Message-Id: <98488e1a-0202-b88b-ca9c-1dc0d6c27ae5@linux.vnet.ibm.com> Sender: owner-linux-mm@kvack.org List-ID: To: Minchan Kim , Andrew Morton Cc: kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko , Vlastimil Babka , "Kirill A . Shutemov" On 03/02/2017 12:09 PM, Minchan Kim wrote: > try_to_munlock returns SWAP_MLOCK if the one of VMAs mapped > the page has VM_LOCKED flag. In that time, VM set PG_mlocked to > the page if the page is not pte-mapped THP which cannot be > mlocked, either. Right. > > With that, __munlock_isolated_page can use PageMlocked to check > whether try_to_munlock is successful or not without relying on > try_to_munlock's retval. It helps to make ttu/ttuo simple with > upcoming patches. Right. > > Cc: Vlastimil Babka > Cc: Kirill A. 
Shutemov > Signed-off-by: Minchan Kim > --- > include/linux/rmap.h | 2 +- > mm/mlock.c | 6 ++---- > mm/rmap.c | 16 ++++------------ > 3 files changed, 7 insertions(+), 17 deletions(-) > > diff --git a/include/linux/rmap.h b/include/linux/rmap.h > index b556eef..1b0cd4c 100644 > --- a/include/linux/rmap.h > +++ b/include/linux/rmap.h > @@ -235,7 +235,7 @@ int page_mkclean(struct page *); > * called in munlock()/munmap() path to check for other vmas holding > * the page mlocked. > */ > -int try_to_munlock(struct page *); > +void try_to_munlock(struct page *); > > void remove_migration_ptes(struct page *old, struct page *new, bool locked); > > diff --git a/mm/mlock.c b/mm/mlock.c > index cdbed8a..d34a540 100644 > --- a/mm/mlock.c > +++ b/mm/mlock.c > @@ -122,17 +122,15 @@ static bool __munlock_isolate_lru_page(struct page *page, bool getpage) > */ > static void __munlock_isolated_page(struct page *page) > { > - int ret = SWAP_AGAIN; > - > /* > * Optimization: if the page was mapped just once, that's our mapping > * and we don't need to check all the other vmas. > */ > if (page_mapcount(page) > 1) > - ret = try_to_munlock(page); > + try_to_munlock(page); > > /* Did try_to_unlock() succeed or punt? */ > - if (ret != SWAP_MLOCK) > + if (!PageMlocked(page)) Checks if the page is still mlocked or not. > count_vm_event(UNEVICTABLE_PGMUNLOCKED); > > putback_lru_page(page); > diff --git a/mm/rmap.c b/mm/rmap.c > index 0a48958..61ae694 100644 > --- a/mm/rmap.c > +++ b/mm/rmap.c > @@ -1540,18 +1540,10 @@ static int page_not_mapped(struct page *page) > * Called from munlock code. Checks all of the VMAs mapping the page > * to make sure nobody else has this page mlocked. The page will be > * returned with PG_mlocked cleared if no other vmas have it mlocked. 
> - * > - * Return values are: > - * > - * SWAP_AGAIN - no vma is holding page mlocked, or, > - * SWAP_AGAIN - page mapped in mlocked vma -- couldn't acquire mmap sem > - * SWAP_FAIL - page cannot be located at present > - * SWAP_MLOCK - page is now mlocked. > */ > -int try_to_munlock(struct page *page) > -{ > - int ret; > > +void try_to_munlock(struct page *page) > +{ > struct rmap_walk_control rwc = { > .rmap_one = try_to_unmap_one, > .arg = (void *)TTU_MUNLOCK, > @@ -1561,9 +1553,9 @@ int try_to_munlock(struct page *page) > }; > > VM_BUG_ON_PAGE(!PageLocked(page) || PageLRU(page), page); > + VM_BUG_ON_PAGE(PageMlocked(page), page); We are calling this on the page to see whether it is mlocked by any of its mapping VMAs, so it is possible that the page is mlocked, the above condition is true, and we hit the VM BUG report there. The point is, if that is a valid possibility, why have we added the above check? -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pf0-f197.google.com (mail-pf0-f197.google.com [209.85.192.197]) by kanga.kvack.org (Postfix) with ESMTP id 1C70C6B038B for ; Fri, 3 Mar 2017 07:37:50 -0500 (EST) Received: by mail-pf0-f197.google.com with SMTP id j5so114820441pfb.3 for ; Fri, 03 Mar 2017 04:37:50 -0800 (PST) Received: from mx0a-001b2d01.pphosted.com (mx0b-001b2d01.pphosted.com.
[148.163.158.5]) by mx.google.com with ESMTPS id k5si10481871pgh.227.2017.03.03.04.37.48 for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Fri, 03 Mar 2017 04:37:49 -0800 (PST) Received: from pps.filterd (m0098416.ppops.net [127.0.0.1]) by mx0b-001b2d01.pphosted.com (8.16.0.20/8.16.0.20) with SMTP id v23CXgWc068998 for ; Fri, 3 Mar 2017 07:37:48 -0500 Received: from e23smtp06.au.ibm.com (e23smtp06.au.ibm.com [202.81.31.148]) by mx0b-001b2d01.pphosted.com with ESMTP id 28xs8ep1tw-1 (version=TLSv1.2 cipher=AES256-SHA bits=256 verify=NOT) for ; Fri, 03 Mar 2017 07:37:47 -0500 Received: from localhost by e23smtp06.au.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted for from ; Fri, 3 Mar 2017 22:37:45 +1000 Received: from d23relay06.au.ibm.com (d23relay06.au.ibm.com [9.185.63.219]) by d23dlp01.au.ibm.com (Postfix) with ESMTP id B98E52CE8057 for ; Fri, 3 Mar 2017 23:37:42 +1100 (EST) Received: from d23av06.au.ibm.com (d23av06.au.ibm.com [9.190.235.151]) by d23relay06.au.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id v23CbYZI55050448 for ; Fri, 3 Mar 2017 23:37:42 +1100 Received: from d23av06.au.ibm.com (localhost [127.0.0.1]) by d23av06.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVout) with ESMTP id v23CbAJ8010394 for ; Fri, 3 Mar 2017 23:37:10 +1100 Subject: Re: [RFC 06/11] mm: remove SWAP_MLOCK in ttu References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-7-git-send-email-minchan@kernel.org> From: Anshuman Khandual Date: Fri, 3 Mar 2017 18:06:38 +0530 MIME-Version: 1.0 In-Reply-To: <1488436765-32350-7-git-send-email-minchan@kernel.org> Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: 7bit Message-Id: <54799ea5-005d-939c-de32-bc21af881ab4@linux.vnet.ibm.com> Sender: owner-linux-mm@kvack.org List-ID: To: Minchan Kim , Andrew Morton Cc: kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko On 03/02/2017 12:09 PM, 
Minchan Kim wrote: > ttu don't need to return SWAP_MLOCK. Instead, just return SWAP_FAIL > because it means the page is not-swappable so it should move to > another LRU list(active or unevictable). putback friends will > move it to right list depending on the page's LRU flag. Right, if it cannot be swapped out there is not much difference with SWAP_FAIL once we change the callers who expected to see a SWAP_MLOCK return instead. > > A side effect is shrink_page_list accounts unevictable list movement > by PGACTIVATE but I don't think it corrupts something severe. Not sure I got that, could you please elaborate on this. We will still activate the page and put it in an appropriate LRU list if it is marked mlocked ? -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pg0-f72.google.com (mail-pg0-f72.google.com [74.125.83.72]) by kanga.kvack.org (Postfix) with ESMTP id 6739C6B0390 for ; Fri, 3 Mar 2017 07:54:20 -0500 (EST) Received: by mail-pg0-f72.google.com with SMTP id 10so41893666pgb.3 for ; Fri, 03 Mar 2017 04:54:20 -0800 (PST) Received: from mx0a-001b2d01.pphosted.com (mx0a-001b2d01.pphosted.com. 
[148.163.156.1]) by mx.google.com with ESMTPS id f17si3743080plj.72.2017.03.03.04.54.19 for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Fri, 03 Mar 2017 04:54:19 -0800 (PST) Received: from pps.filterd (m0098394.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.16.0.20/8.16.0.20) with SMTP id v23CsBwr015722 for ; Fri, 3 Mar 2017 07:54:19 -0500 Received: from e28smtp03.in.ibm.com (e28smtp03.in.ibm.com [125.16.236.3]) by mx0a-001b2d01.pphosted.com with ESMTP id 28y34pfe37-1 (version=TLSv1.2 cipher=AES256-SHA bits=256 verify=NOT) for ; Fri, 03 Mar 2017 07:54:18 -0500 Received: from localhost by e28smtp03.in.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted for from ; Fri, 3 Mar 2017 18:24:15 +0530 Received: from d28relay05.in.ibm.com (d28relay05.in.ibm.com [9.184.220.62]) by d28dlp02.in.ibm.com (Postfix) with ESMTP id 76AB23940060 for ; Fri, 3 Mar 2017 18:24:13 +0530 (IST) Received: from d28av03.in.ibm.com (d28av03.in.ibm.com [9.184.220.65]) by d28relay05.in.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id v23Cs9Qs14024914 for ; Fri, 3 Mar 2017 18:24:09 +0530 Received: from d28av03.in.ibm.com (localhost [127.0.0.1]) by d28av03.in.ibm.com (8.14.4/8.14.4/NCO v10.0 AVout) with ESMTP id v23CsCJx019313 for ; Fri, 3 Mar 2017 18:24:12 +0530 Subject: Re: [RFC 07/11] mm: remove SWAP_AGAIN in ttu References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-8-git-send-email-minchan@kernel.org> From: Anshuman Khandual Date: Fri, 3 Mar 2017 18:24:06 +0530 MIME-Version: 1.0 In-Reply-To: <1488436765-32350-8-git-send-email-minchan@kernel.org> Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: 7bit Message-Id: Sender: owner-linux-mm@kvack.org List-ID: To: Minchan Kim , Andrew Morton Cc: kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko On 03/02/2017 12:09 PM, Minchan Kim wrote: > In 2002, [1] introduced SWAP_AGAIN. 
> At that time, ttuo used spin_trylock(&mm->page_table_lock) so it's Small nit: Please expand "ttuo" here. TTU in the first place is also not very clear but we have that in many places. > really easy to contend and fail to hold a lock so SWAP_AGAIN to keep > LRU status makes sense. Okay. > > However, now we changed it to mutex-based lock and be able to block > without skip pte so there is a few of small window to return > SWAP_AGAIN so remove SWAP_AGAIN and just return SWAP_FAIL. Makes sense. > > [1] c48c43e, minimal rmap > Signed-off-by: Minchan Kim > --- > mm/rmap.c | 11 +++-------- > mm/vmscan.c | 2 -- > 2 files changed, 3 insertions(+), 10 deletions(-) > > diff --git a/mm/rmap.c b/mm/rmap.c > index 47898a1..da18f21 100644 > --- a/mm/rmap.c > +++ b/mm/rmap.c > @@ -1492,13 +1492,10 @@ static int page_mapcount_is_zero(struct page *page) > * Return values are: > * > * SWAP_SUCCESS - we succeeded in removing all mappings > - * SWAP_AGAIN - we missed a mapping, try again later > * SWAP_FAIL - the page is unswappable > */ > int try_to_unmap(struct page *page, enum ttu_flags flags) > { > - int ret; > - > struct rmap_walk_control rwc = { > .rmap_one = try_to_unmap_one, > .arg = (void *)flags, > @@ -1518,13 +1515,11 @@ int try_to_unmap(struct page *page, enum ttu_flags flags) > rwc.invalid_vma = invalid_migration_vma; > > if (flags & TTU_RMAP_LOCKED) > - ret = rmap_walk_locked(page, &rwc); > + rmap_walk_locked(page, &rwc); > else > - ret = rmap_walk(page, &rwc); > + rmap_walk(page, &rwc); > > - if (!page_mapcount(page)) > - ret = SWAP_SUCCESS; > - return ret; > + return !page_mapcount(page) ? SWAP_SUCCESS: SWAP_FAIL; It's very simple now: after the rmap_walk(), if the page is not mapped any more, return SWAP_SUCCESS; otherwise, SWAP_FAIL. -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ .
Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pf0-f199.google.com (mail-pf0-f199.google.com [209.85.192.199]) by kanga.kvack.org (Postfix) with ESMTP id D57626B038F for ; Fri, 3 Mar 2017 08:06:07 -0500 (EST) Received: by mail-pf0-f199.google.com with SMTP id x63so48087837pfx.7 for ; Fri, 03 Mar 2017 05:06:07 -0800 (PST) Received: from mx0a-001b2d01.pphosted.com (mx0a-001b2d01.pphosted.com. [148.163.156.1]) by mx.google.com with ESMTPS id p1si10551012pga.393.2017.03.03.05.06.06 for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Fri, 03 Mar 2017 05:06:06 -0800 (PST) Received: from pps.filterd (m0098393.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.16.0.20/8.16.0.20) with SMTP id v23D5a9w061166 for ; Fri, 3 Mar 2017 08:06:06 -0500 Received: from e28smtp06.in.ibm.com (e28smtp06.in.ibm.com [125.16.236.6]) by mx0a-001b2d01.pphosted.com with ESMTP id 28xs8e7hd6-1 (version=TLSv1.2 cipher=AES256-SHA bits=256 verify=NOT) for ; Fri, 03 Mar 2017 08:05:57 -0500 Received: from localhost by e28smtp06.in.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! 
Violators will be prosecuted for from ; Fri, 3 Mar 2017 18:34:22 +0530 Received: from d28relay07.in.ibm.com (d28relay07.in.ibm.com [9.184.220.158]) by d28dlp03.in.ibm.com (Postfix) with ESMTP id B1CC2125805C for ; Fri, 3 Mar 2017 18:34:34 +0530 (IST) Received: from d28av04.in.ibm.com (d28av04.in.ibm.com [9.184.220.66]) by d28relay07.in.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id v23D3EHR7667920 for ; Fri, 3 Mar 2017 18:33:14 +0530 Received: from d28av04.in.ibm.com (localhost [127.0.0.1]) by d28av04.in.ibm.com (8.14.4/8.14.4/NCO v10.0 AVout) with ESMTP id v23D4Kr2015840 for ; Fri, 3 Mar 2017 18:34:21 +0530 Subject: Re: [RFC 11/11] mm: remove SWAP_[SUCCESS|AGAIN|FAIL] References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-12-git-send-email-minchan@kernel.org> From: Anshuman Khandual Date: Fri, 3 Mar 2017 18:34:15 +0530 MIME-Version: 1.0 In-Reply-To: <1488436765-32350-12-git-send-email-minchan@kernel.org> Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: 7bit Message-Id: <907b98e3-127a-4cad-deea-093785274b64@linux.vnet.ibm.com> Sender: owner-linux-mm@kvack.org List-ID: To: Minchan Kim , Andrew Morton Cc: kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko On 03/02/2017 12:09 PM, Minchan Kim wrote: > There is no user for it. Remove it. The last patches in the series prepared the ground for this removal. The entire series looks pretty straightforward. As it does not change any functionality, I am wondering what kind of tests it should go through to catch any potential problems. -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ .
Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pf0-f198.google.com (mail-pf0-f198.google.com [209.85.192.198]) by kanga.kvack.org (Postfix) with ESMTP id 41FB26B0038 for ; Sun, 5 Mar 2017 21:09:08 -0500 (EST) Received: by mail-pf0-f198.google.com with SMTP id v190so56477612pfb.5 for ; Sun, 05 Mar 2017 18:09:08 -0800 (PST) Received: from lgeamrelo11.lge.com (LGEAMRELO11.lge.com. [156.147.23.51]) by mx.google.com with ESMTP id t1si17543413pge.83.2017.03.05.18.09.06 for ; Sun, 05 Mar 2017 18:09:07 -0800 (PST) Date: Mon, 6 Mar 2017 11:09:01 +0900 From: Minchan Kim Subject: Re: [RFC 05/11] mm: make the try_to_munlock void function Message-ID: <20170306020901.GC8779@bbox> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-6-git-send-email-minchan@kernel.org> <98488e1a-0202-b88b-ca9c-1dc0d6c27ae5@linux.vnet.ibm.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <98488e1a-0202-b88b-ca9c-1dc0d6c27ae5@linux.vnet.ibm.com> Sender: owner-linux-mm@kvack.org List-ID: To: Anshuman Khandual Cc: Andrew Morton , kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko , Vlastimil Babka , "Kirill A . Shutemov" On Fri, Mar 03, 2017 at 05:13:54PM +0530, Anshuman Khandual wrote: > On 03/02/2017 12:09 PM, Minchan Kim wrote: > > try_to_munlock returns SWAP_MLOCK if the one of VMAs mapped > > the page has VM_LOCKED flag. In that time, VM set PG_mlocked to > > the page if the page is not pte-mapped THP which cannot be > > mlocked, either. > > Right. > > > > > With that, __munlock_isolated_page can use PageMlocked to check > > whether try_to_munlock is successful or not without relying on > > try_to_munlock's retval. It helps to make ttu/ttuo simple with > > upcoming patches. > > Right. > > > > > Cc: Vlastimil Babka > > Cc: Kirill A. 
Shutemov > > Signed-off-by: Minchan Kim > > --- > > include/linux/rmap.h | 2 +- > > mm/mlock.c | 6 ++---- > > mm/rmap.c | 16 ++++------------ > > 3 files changed, 7 insertions(+), 17 deletions(-) > > > > diff --git a/include/linux/rmap.h b/include/linux/rmap.h > > index b556eef..1b0cd4c 100644 > > --- a/include/linux/rmap.h > > +++ b/include/linux/rmap.h > > @@ -235,7 +235,7 @@ int page_mkclean(struct page *); > > * called in munlock()/munmap() path to check for other vmas holding > > * the page mlocked. > > */ > > -int try_to_munlock(struct page *); > > +void try_to_munlock(struct page *); > > > > void remove_migration_ptes(struct page *old, struct page *new, bool locked); > > > > diff --git a/mm/mlock.c b/mm/mlock.c > > index cdbed8a..d34a540 100644 > > --- a/mm/mlock.c > > +++ b/mm/mlock.c > > @@ -122,17 +122,15 @@ static bool __munlock_isolate_lru_page(struct page *page, bool getpage) > > */ > > static void __munlock_isolated_page(struct page *page) > > { > > - int ret = SWAP_AGAIN; > > - > > /* > > * Optimization: if the page was mapped just once, that's our mapping > > * and we don't need to check all the other vmas. > > */ > > if (page_mapcount(page) > 1) > > - ret = try_to_munlock(page); > > + try_to_munlock(page); > > > > /* Did try_to_unlock() succeed or punt? */ > > - if (ret != SWAP_MLOCK) > > + if (!PageMlocked(page)) > > Checks if the page is still mlocked or not. > > > count_vm_event(UNEVICTABLE_PGMUNLOCKED); > > > > putback_lru_page(page); > > diff --git a/mm/rmap.c b/mm/rmap.c > > index 0a48958..61ae694 100644 > > --- a/mm/rmap.c > > +++ b/mm/rmap.c > > @@ -1540,18 +1540,10 @@ static int page_not_mapped(struct page *page) > > * Called from munlock code. Checks all of the VMAs mapping the page > > * to make sure nobody else has this page mlocked. The page will be > > * returned with PG_mlocked cleared if no other vmas have it mlocked. 
> > - * > > - * Return values are: > > - * > > - * SWAP_AGAIN - no vma is holding page mlocked, or, > > - * SWAP_AGAIN - page mapped in mlocked vma -- couldn't acquire mmap sem > > - * SWAP_FAIL - page cannot be located at present > > - * SWAP_MLOCK - page is now mlocked. > > */ > > -int try_to_munlock(struct page *page) > > -{ > > - int ret; > > > > +void try_to_munlock(struct page *page) > > +{ > > struct rmap_walk_control rwc = { > > .rmap_one = try_to_unmap_one, > > .arg = (void *)TTU_MUNLOCK, > > @@ -1561,9 +1553,9 @@ int try_to_munlock(struct page *page) > > }; > > > > VM_BUG_ON_PAGE(!PageLocked(page) || PageLRU(page), page); > > + VM_BUG_ON_PAGE(PageMlocked(page), page); > > We are calling on the page to see if its mlocked from any of it's > mapping VMAs. Then it is a possibility that the page is mlocked > and the above condition is true and we print VM BUG report there. > The point is if its a valid possibility why we have added the > above check ? If I read the code properly, the paths that reach try_to_munlock via __munlock_isolated_page (e.g., munlock_vma_page and __munlock_pagevec) always pass a page whose PG_mlocked was just cleared with TestClearPageMlocked. So try_to_munlock should only be called on a non-PG_mlocked page, and try_to_unmap_one sets PG_mlocked again once it finds a VM_LOCKED VMA for the page. IOW, a non-PG_mlocked page is a precondition for try_to_munlock. -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pf0-f197.google.com (mail-pf0-f197.google.com [209.85.192.197]) by kanga.kvack.org (Postfix) with ESMTP id 9AC2E6B0038 for ; Sun, 5 Mar 2017 21:15:17 -0500 (EST) Received: by mail-pf0-f197.google.com with SMTP id o126so21472945pfb.2 for ; Sun, 05 Mar 2017 18:15:17 -0800 (PST) Received: from lgeamrelo13.lge.com (LGEAMRELO13.lge.com.
[156.147.23.53]) by mx.google.com with ESMTP id s3si17594650plj.94.2017.03.05.18.15.16 for ; Sun, 05 Mar 2017 18:15:16 -0800 (PST) Date: Mon, 6 Mar 2017 11:15:08 +0900 From: Minchan Kim Subject: Re: [RFC 06/11] mm: remove SWAP_MLOCK in ttu Message-ID: <20170306021508.GD8779@bbox> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-7-git-send-email-minchan@kernel.org> <54799ea5-005d-939c-de32-bc21af881ab4@linux.vnet.ibm.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <54799ea5-005d-939c-de32-bc21af881ab4@linux.vnet.ibm.com> Sender: owner-linux-mm@kvack.org List-ID: To: Anshuman Khandual Cc: Andrew Morton , kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko Hi Anshuman, On Fri, Mar 03, 2017 at 06:06:38PM +0530, Anshuman Khandual wrote: > On 03/02/2017 12:09 PM, Minchan Kim wrote: > > ttu don't need to return SWAP_MLOCK. Instead, just return SWAP_FAIL > > because it means the page is not-swappable so it should move to > > another LRU list(active or unevictable). putback friends will > > move it to right list depending on the page's LRU flag. > > Right, if it cannot be swapped out there is not much difference with > SWAP_FAIL once we change the callers who expected to see a SWAP_MLOCK > return instead. > > > > > A side effect is shrink_page_list accounts unevictable list movement > > by PGACTIVATE but I don't think it corrupts something severe. > > Not sure I got that, could you please elaborate on this. We will still > activate the page and put it in an appropriate LRU list if it is marked > mlocked ? Right. putback_inactive_pages/putback_lru_page have logic to filter out unevictable pages and move them to the unevictable LRU list, so LRU placement does not change. The concern is that until now we have accounted PGACTIVATE only for evictable LRU pages, but with this change it is accounted for unevictable LRU pages as well.
However, I don't think it's a big problem in real practice, and we can fix it simply by checking PG_mlocked if someone reports an issue. Thanks. > > -- > To unsubscribe, send a message with 'unsubscribe linux-mm' in > the body to majordomo@kvack.org. For more info on Linux MM, > see: http://www.linux-mm.org/ . > Don't email: email@kvack.org -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pf0-f198.google.com (mail-pf0-f198.google.com [209.85.192.198]) by kanga.kvack.org (Postfix) with ESMTP id 9D5ED6B0038 for ; Sun, 5 Mar 2017 21:18:33 -0500 (EST) Received: by mail-pf0-f198.google.com with SMTP id v190so56625562pfb.5 for ; Sun, 05 Mar 2017 18:18:33 -0800 (PST) Received: from lgeamrelo13.lge.com (LGEAMRELO13.lge.com. [156.147.23.53]) by mx.google.com with ESMTP id p10si17527858pge.292.2017.03.05.18.18.32 for ; Sun, 05 Mar 2017 18:18:32 -0800 (PST) Date: Mon, 6 Mar 2017 11:18:23 +0900 From: Minchan Kim Subject: Re: [RFC 11/11] mm: remove SWAP_[SUCCESS|AGAIN|FAIL] Message-ID: <20170306021823.GF8779@bbox> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-12-git-send-email-minchan@kernel.org> <907b98e3-127a-4cad-deea-093785274b64@linux.vnet.ibm.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <907b98e3-127a-4cad-deea-093785274b64@linux.vnet.ibm.com> Sender: owner-linux-mm@kvack.org List-ID: To: Anshuman Khandual Cc: Andrew Morton , kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko On Fri, Mar 03, 2017 at 06:34:15PM +0530, Anshuman Khandual wrote: > On 03/02/2017 12:09 PM, Minchan Kim wrote: > > There is no user for it. Remove it. > > Last patches in the series prepared ground for this removal.
The > entire series looks pretty straight forward. As it does not change Thanks. > any functionality, wondering what kind of tests this should go > through to look for any potential problems. I don't think it can break anything severe either, but one thing I want to check is the handling of mlocked pages, so I will do some testing of that. Thanks for the review, Anshuman! -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pg0-f72.google.com (mail-pg0-f72.google.com [74.125.83.72]) by kanga.kvack.org (Postfix) with ESMTP id 8AC436B0387 for ; Sun, 5 Mar 2017 21:19:20 -0500 (EST) Received: by mail-pg0-f72.google.com with SMTP id b2so195160142pgc.6 for ; Sun, 05 Mar 2017 18:19:20 -0800 (PST) Received: from lgeamrelo11.lge.com (LGEAMRELO11.lge.com. [156.147.23.51]) by mx.google.com with ESMTP id 7si17581018pll.242.2017.03.05.18.19.19 for ; Sun, 05 Mar 2017 18:19:19 -0800 (PST) Date: Mon, 6 Mar 2017 11:16:10 +0900 From: Minchan Kim Subject: Re: [RFC 07/11] mm: remove SWAP_AGAIN in ttu Message-ID: <20170306021610.GE8779@bbox> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-8-git-send-email-minchan@kernel.org> MIME-Version: 1.0 In-Reply-To: Content-Type: text/plain; charset="us-ascii" Content-Disposition: inline Sender: owner-linux-mm@kvack.org List-ID: To: Anshuman Khandual Cc: Andrew Morton , kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko On Fri, Mar 03, 2017 at 06:24:06PM +0530, Anshuman Khandual wrote: > On 03/02/2017 12:09 PM, Minchan Kim wrote: > > In 2002, [1] introduced SWAP_AGAIN. > > At that time, ttuo used spin_trylock(&mm->page_table_lock) so it's > Small nit: Please expand "ttuo" here. TTU in the first place is also > not very clear but we have that in many places.
No problem. -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pf0-f197.google.com (mail-pf0-f197.google.com [209.85.192.197]) by kanga.kvack.org (Postfix) with ESMTP id 205A16B0038 for ; Mon, 6 Mar 2017 04:08:51 -0500 (EST) Received: by mail-pf0-f197.google.com with SMTP id l66so85922308pfl.6 for ; Mon, 06 Mar 2017 01:08:51 -0800 (PST) Received: from mx0a-001b2d01.pphosted.com (mx0b-001b2d01.pphosted.com. [148.163.158.5]) by mx.google.com with ESMTPS id 1si18499728plz.212.2017.03.06.01.08.50 for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Mon, 06 Mar 2017 01:08:50 -0800 (PST) Received: from pps.filterd (m0098419.ppops.net [127.0.0.1]) by mx0b-001b2d01.pphosted.com (8.16.0.20/8.16.0.20) with SMTP id v2698loN140150 for ; Mon, 6 Mar 2017 04:08:49 -0500 Received: from e23smtp08.au.ibm.com (e23smtp08.au.ibm.com [202.81.31.141]) by mx0b-001b2d01.pphosted.com with ESMTP id 28yu0pkutm-1 (version=TLSv1.2 cipher=AES256-SHA bits=256 verify=NOT) for ; Mon, 06 Mar 2017 04:08:48 -0500 Received: from localhost by e23smtp08.au.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! 
Violators will be prosecuted for from ; Mon, 6 Mar 2017 19:08:40 +1000 Received: from d23relay09.au.ibm.com (d23relay09.au.ibm.com [9.185.63.181]) by d23dlp03.au.ibm.com (Postfix) with ESMTP id 61BA13578056 for ; Mon, 6 Mar 2017 20:08:37 +1100 (EST) Received: from d23av04.au.ibm.com (d23av04.au.ibm.com [9.190.235.139]) by d23relay09.au.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id v2698TE047251674 for ; Mon, 6 Mar 2017 20:08:37 +1100 Received: from d23av04.au.ibm.com (localhost [127.0.0.1]) by d23av04.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVout) with ESMTP id v26984bj027475 for ; Mon, 6 Mar 2017 20:08:04 +1100 Subject: Re: [RFC 01/11] mm: use SWAP_SUCCESS instead of 0 References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-2-git-send-email-minchan@kernel.org> <20170303030158.GD3503@bbox> From: Anshuman Khandual Date: Mon, 6 Mar 2017 14:37:40 +0530 MIME-Version: 1.0 In-Reply-To: <20170303030158.GD3503@bbox> Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: 7bit Message-Id: <33a8d76e-dbeb-bcf1-5024-5e780b81bef6@linux.vnet.ibm.com> Sender: owner-linux-mm@kvack.org List-ID: To: Minchan Kim , Anshuman Khandual Cc: Andrew Morton , kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko , "Kirill A . Shutemov" On 03/03/2017 08:31 AM, Minchan Kim wrote: > On Thu, Mar 02, 2017 at 07:57:10PM +0530, Anshuman Khandual wrote: >> On 03/02/2017 12:09 PM, Minchan Kim wrote: >>> SWAP_SUCCESS defined value 0 can be changed always so don't rely on >>> it. Instead, use explict macro. >> >> Right. But should not we move the changes to the callers last in the >> patch series after doing the cleanup to the try_to_unmap() function >> as intended first. > > I don't understand what you are pointing out. Could you elaborate it > a bit? I was just referring to the order of this patch in the series and thinking if it would have been better if this patch would be at a later stage in the series. 
But I guess it's okay, as we are anyway dropping SWAP_FAIL, SWAP_SUCCESS, etc. in the end. -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pg0-f71.google.com (mail-pg0-f71.google.com [74.125.83.71]) by kanga.kvack.org (Postfix) with ESMTP id 8295F6B0038 for ; Mon, 6 Mar 2017 04:40:27 -0500 (EST) Received: by mail-pg0-f71.google.com with SMTP id 77so63138539pgc.5 for ; Mon, 06 Mar 2017 01:40:27 -0800 (PST) Received: from mx0a-001b2d01.pphosted.com (mx0b-001b2d01.pphosted.com. [148.163.158.5]) by mx.google.com with ESMTPS id u1si18505173pgb.47.2017.03.06.01.40.26 for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Mon, 06 Mar 2017 01:40:26 -0800 (PST) Received: from pps.filterd (m0098421.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.16.0.20/8.16.0.20) with SMTP id v269cW4w112458 for ; Mon, 6 Mar 2017 04:40:25 -0500 Received: from e28smtp04.in.ibm.com (e28smtp04.in.ibm.com [125.16.236.4]) by mx0a-001b2d01.pphosted.com with ESMTP id 290bhmyu61-1 (version=TLSv1.2 cipher=AES256-SHA bits=256 verify=NOT) for ; Mon, 06 Mar 2017 04:40:25 -0500 Received: from localhost by e28smtp04.in.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only!
Violators will be prosecuted for from ; Mon, 6 Mar 2017 15:10:22 +0530 Received: from d28relay09.in.ibm.com (d28relay09.in.ibm.com [9.184.220.160]) by d28dlp03.in.ibm.com (Postfix) with ESMTP id ED3BD125805B for ; Mon, 6 Mar 2017 15:10:35 +0530 (IST) Received: from d28av02.in.ibm.com (d28av02.in.ibm.com [9.184.220.64]) by d28relay09.in.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id v269eKu113500450 for ; Mon, 6 Mar 2017 15:10:20 +0530 Received: from d28av02.in.ibm.com (localhost [127.0.0.1]) by d28av02.in.ibm.com (8.14.4/8.14.4/NCO v10.0 AVout) with ESMTP id v269eI7D004891 for ; Mon, 6 Mar 2017 15:10:20 +0530 Subject: Re: [RFC 05/11] mm: make the try_to_munlock void function References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-6-git-send-email-minchan@kernel.org> <98488e1a-0202-b88b-ca9c-1dc0d6c27ae5@linux.vnet.ibm.com> <20170306020901.GC8779@bbox> From: Anshuman Khandual Date: Mon, 6 Mar 2017 15:10:17 +0530 MIME-Version: 1.0 In-Reply-To: <20170306020901.GC8779@bbox> Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: 7bit Message-Id: <12618e8d-9eaf-322a-4334-e1416dc95b8b@linux.vnet.ibm.com> Sender: owner-linux-mm@kvack.org List-ID: To: Minchan Kim , Anshuman Khandual Cc: Andrew Morton , kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko , Vlastimil Babka , "Kirill A . Shutemov" On 03/06/2017 07:39 AM, Minchan Kim wrote: > On Fri, Mar 03, 2017 at 05:13:54PM +0530, Anshuman Khandual wrote: >> On 03/02/2017 12:09 PM, Minchan Kim wrote: >>> try_to_munlock returns SWAP_MLOCK if the one of VMAs mapped >>> the page has VM_LOCKED flag. In that time, VM set PG_mlocked to >>> the page if the page is not pte-mapped THP which cannot be >>> mlocked, either. >> >> Right. >> >>> >>> With that, __munlock_isolated_page can use PageMlocked to check >>> whether try_to_munlock is successful or not without relying on >>> try_to_munlock's retval. 
It helps to make ttu/ttuo simple with >>> upcoming patches. >> >> Right. >> >>> >>> Cc: Vlastimil Babka >>> Cc: Kirill A. Shutemov >>> Signed-off-by: Minchan Kim >>> --- >>> include/linux/rmap.h | 2 +- >>> mm/mlock.c | 6 ++---- >>> mm/rmap.c | 16 ++++------------ >>> 3 files changed, 7 insertions(+), 17 deletions(-) >>> >>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h >>> index b556eef..1b0cd4c 100644 >>> --- a/include/linux/rmap.h >>> +++ b/include/linux/rmap.h >>> @@ -235,7 +235,7 @@ int page_mkclean(struct page *); >>> * called in munlock()/munmap() path to check for other vmas holding >>> * the page mlocked. >>> */ >>> -int try_to_munlock(struct page *); >>> +void try_to_munlock(struct page *); >>> >>> void remove_migration_ptes(struct page *old, struct page *new, bool locked); >>> >>> diff --git a/mm/mlock.c b/mm/mlock.c >>> index cdbed8a..d34a540 100644 >>> --- a/mm/mlock.c >>> +++ b/mm/mlock.c >>> @@ -122,17 +122,15 @@ static bool __munlock_isolate_lru_page(struct page *page, bool getpage) >>> */ >>> static void __munlock_isolated_page(struct page *page) >>> { >>> - int ret = SWAP_AGAIN; >>> - >>> /* >>> * Optimization: if the page was mapped just once, that's our mapping >>> * and we don't need to check all the other vmas. >>> */ >>> if (page_mapcount(page) > 1) >>> - ret = try_to_munlock(page); >>> + try_to_munlock(page); >>> >>> /* Did try_to_unlock() succeed or punt? */ >>> - if (ret != SWAP_MLOCK) >>> + if (!PageMlocked(page)) >> >> Checks if the page is still mlocked or not. >> >>> count_vm_event(UNEVICTABLE_PGMUNLOCKED); >>> >>> putback_lru_page(page); >>> diff --git a/mm/rmap.c b/mm/rmap.c >>> index 0a48958..61ae694 100644 >>> --- a/mm/rmap.c >>> +++ b/mm/rmap.c >>> @@ -1540,18 +1540,10 @@ static int page_not_mapped(struct page *page) >>> * Called from munlock code. Checks all of the VMAs mapping the page >>> * to make sure nobody else has this page mlocked. 
The page will be >>> * returned with PG_mlocked cleared if no other vmas have it mlocked. >>> - * >>> - * Return values are: >>> - * >>> - * SWAP_AGAIN - no vma is holding page mlocked, or, >>> - * SWAP_AGAIN - page mapped in mlocked vma -- couldn't acquire mmap sem >>> - * SWAP_FAIL - page cannot be located at present >>> - * SWAP_MLOCK - page is now mlocked. >>> */ >>> -int try_to_munlock(struct page *page) >>> -{ >>> - int ret; >>> >>> +void try_to_munlock(struct page *page) >>> +{ >>> struct rmap_walk_control rwc = { >>> .rmap_one = try_to_unmap_one, >>> .arg = (void *)TTU_MUNLOCK, >>> @@ -1561,9 +1553,9 @@ int try_to_munlock(struct page *page) >>> }; >>> >>> VM_BUG_ON_PAGE(!PageLocked(page) || PageLRU(page), page); >>> + VM_BUG_ON_PAGE(PageMlocked(page), page); >> >> We are calling on the page to see if its mlocked from any of it's >> mapping VMAs. Then it is a possibility that the page is mlocked >> and the above condition is true and we print VM BUG report there. >> The point is if its a valid possibility why we have added the >> above check ? > > If I read code properly, __munlock_isolated_page calls try_to_munlock > always pass the TestClearPageMlocked page to try_to_munlock. Right. > (e.g., munlock_vma_page and __munlock_pagevec) so I thought > try_to_munlock should be called non-PG_mlocked page and try_to_unmap_one > returns PG_mlocked page once it found a VM_LOCKED VMA for a page. > IOW, non-PG_mlocked page is precondition for try_to_munlock. Okay, I have missed that part. Nonetheless this is a separate issue, should be part of a different patch ? Not inside these cleanups. -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . 
Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pg0-f71.google.com (mail-pg0-f71.google.com [74.125.83.71]) by kanga.kvack.org (Postfix) with ESMTP id 68A276B0387 for ; Tue, 7 Mar 2017 01:50:51 -0500 (EST) Received: by mail-pg0-f71.google.com with SMTP id b2so233548086pgc.6 for ; Mon, 06 Mar 2017 22:50:51 -0800 (PST) Received: from lgeamrelo11.lge.com (LGEAMRELO11.lge.com. [156.147.23.51]) by mx.google.com with ESMTP id d198si21542163pga.192.2017.03.06.22.50.49 for ; Mon, 06 Mar 2017 22:50:50 -0800 (PST) Date: Tue, 7 Mar 2017 15:50:48 +0900 From: Minchan Kim Subject: Re: [RFC 05/11] mm: make the try_to_munlock void function Message-ID: <20170307065048.GD29458@bbox> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-6-git-send-email-minchan@kernel.org> <98488e1a-0202-b88b-ca9c-1dc0d6c27ae5@linux.vnet.ibm.com> <20170306020901.GC8779@bbox> <12618e8d-9eaf-322a-4334-e1416dc95b8b@linux.vnet.ibm.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <12618e8d-9eaf-322a-4334-e1416dc95b8b@linux.vnet.ibm.com> Sender: owner-linux-mm@kvack.org List-ID: To: Anshuman Khandual Cc: Andrew Morton , kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko , Vlastimil Babka , "Kirill A . Shutemov" Hi Anshuman, On Mon, Mar 06, 2017 at 03:10:17PM +0530, Anshuman Khandual wrote: > On 03/06/2017 07:39 AM, Minchan Kim wrote: > > On Fri, Mar 03, 2017 at 05:13:54PM +0530, Anshuman Khandual wrote: > >> On 03/02/2017 12:09 PM, Minchan Kim wrote: > >>> try_to_munlock returns SWAP_MLOCK if the one of VMAs mapped > >>> the page has VM_LOCKED flag. In that time, VM set PG_mlocked to > >>> the page if the page is not pte-mapped THP which cannot be > >>> mlocked, either. > >> > >> Right. 
> >> > >>> > >>> With that, __munlock_isolated_page can use PageMlocked to check > >>> whether try_to_munlock is successful or not without relying on > >>> try_to_munlock's retval. It helps to make ttu/ttuo simple with > >>> upcoming patches. > >> > >> Right. > >> > >>> > >>> Cc: Vlastimil Babka > >>> Cc: Kirill A. Shutemov > >>> Signed-off-by: Minchan Kim > >>> --- > >>> include/linux/rmap.h | 2 +- > >>> mm/mlock.c | 6 ++---- > >>> mm/rmap.c | 16 ++++------------ > >>> 3 files changed, 7 insertions(+), 17 deletions(-) > >>> > >>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h > >>> index b556eef..1b0cd4c 100644 > >>> --- a/include/linux/rmap.h > >>> +++ b/include/linux/rmap.h > >>> @@ -235,7 +235,7 @@ int page_mkclean(struct page *); > >>> * called in munlock()/munmap() path to check for other vmas holding > >>> * the page mlocked. > >>> */ > >>> -int try_to_munlock(struct page *); > >>> +void try_to_munlock(struct page *); > >>> > >>> void remove_migration_ptes(struct page *old, struct page *new, bool locked); > >>> > >>> diff --git a/mm/mlock.c b/mm/mlock.c > >>> index cdbed8a..d34a540 100644 > >>> --- a/mm/mlock.c > >>> +++ b/mm/mlock.c > >>> @@ -122,17 +122,15 @@ static bool __munlock_isolate_lru_page(struct page *page, bool getpage) > >>> */ > >>> static void __munlock_isolated_page(struct page *page) > >>> { > >>> - int ret = SWAP_AGAIN; > >>> - > >>> /* > >>> * Optimization: if the page was mapped just once, that's our mapping > >>> * and we don't need to check all the other vmas. > >>> */ > >>> if (page_mapcount(page) > 1) > >>> - ret = try_to_munlock(page); > >>> + try_to_munlock(page); > >>> > >>> /* Did try_to_unlock() succeed or punt? */ > >>> - if (ret != SWAP_MLOCK) > >>> + if (!PageMlocked(page)) > >> > >> Checks if the page is still mlocked or not. 
> >> > >>> count_vm_event(UNEVICTABLE_PGMUNLOCKED); > >>> > >>> putback_lru_page(page); > >>> diff --git a/mm/rmap.c b/mm/rmap.c > >>> index 0a48958..61ae694 100644 > >>> --- a/mm/rmap.c > >>> +++ b/mm/rmap.c > >>> @@ -1540,18 +1540,10 @@ static int page_not_mapped(struct page *page) > >>> * Called from munlock code. Checks all of the VMAs mapping the page > >>> * to make sure nobody else has this page mlocked. The page will be > >>> * returned with PG_mlocked cleared if no other vmas have it mlocked. > >>> - * > >>> - * Return values are: > >>> - * > >>> - * SWAP_AGAIN - no vma is holding page mlocked, or, > >>> - * SWAP_AGAIN - page mapped in mlocked vma -- couldn't acquire mmap sem > >>> - * SWAP_FAIL - page cannot be located at present > >>> - * SWAP_MLOCK - page is now mlocked. > >>> */ > >>> -int try_to_munlock(struct page *page) > >>> -{ > >>> - int ret; > >>> > >>> +void try_to_munlock(struct page *page) > >>> +{ > >>> struct rmap_walk_control rwc = { > >>> .rmap_one = try_to_unmap_one, > >>> .arg = (void *)TTU_MUNLOCK, > >>> @@ -1561,9 +1553,9 @@ int try_to_munlock(struct page *page) > >>> }; > >>> > >>> VM_BUG_ON_PAGE(!PageLocked(page) || PageLRU(page), page); > >>> + VM_BUG_ON_PAGE(PageMlocked(page), page); > >> > >> We are calling on the page to see if its mlocked from any of it's > >> mapping VMAs. Then it is a possibility that the page is mlocked > >> and the above condition is true and we print VM BUG report there. > >> The point is if its a valid possibility why we have added the > >> above check ? > > > > If I read code properly, __munlock_isolated_page calls try_to_munlock > > always pass the TestClearPageMlocked page to try_to_munlock. > > Right. > > > (e.g., munlock_vma_page and __munlock_pagevec) so I thought > > try_to_munlock should be called non-PG_mlocked page and try_to_unmap_one > > returns PG_mlocked page once it found a VM_LOCKED VMA for a page. > > IOW, non-PG_mlocked page is precondition for try_to_munlock. 
> > Okay, I have missed that part. Nonetheless this is a separate issue, > should be part of a different patch ? Not inside these cleanups. If that precondition does not hold, this patch changes the behavior slightly: the UNEVICTABLE_PGMUNLOCKED count would not match the old behavior. I wanted to catch that case. If you still think it's a separate issue, I will split it out; please tell me. However, I still think it's no problem to merge it into this cleanup patch. Thanks. -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pg0-f71.google.com (mail-pg0-f71.google.com [74.125.83.71]) by kanga.kvack.org (Postfix) with ESMTP id 1FE646B0387 for ; Tue, 7 Mar 2017 03:56:53 -0500 (EST) Received: by mail-pg0-f71.google.com with SMTP id g2so18984106pge.7 for ; Tue, 07 Mar 2017 00:56:53 -0800 (PST) Received: from mx0a-001b2d01.pphosted.com (mx0a-001b2d01.pphosted.com. [148.163.156.1]) by mx.google.com with ESMTPS id x26si15152094pge.30.2017.03.07.00.56.52 for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Tue, 07 Mar 2017 00:56:52 -0800 (PST) Received: from pps.filterd (m0098409.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.16.0.20/8.16.0.20) with SMTP id v278nfN0146925 for ; Tue, 7 Mar 2017 03:56:51 -0500 Received: from e23smtp08.au.ibm.com (e23smtp08.au.ibm.com [202.81.31.141]) by mx0a-001b2d01.pphosted.com with ESMTP id 2917pnbypv-1 (version=TLSv1.2 cipher=AES256-SHA bits=256 verify=NOT) for ; Tue, 07 Mar 2017 03:56:51 -0500 Received: from localhost by e23smtp08.au.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only!
Violators will be prosecuted for from ; Tue, 7 Mar 2017 18:56:48 +1000 Received: from d23relay06.au.ibm.com (d23relay06.au.ibm.com [9.185.63.219]) by d23dlp02.au.ibm.com (Postfix) with ESMTP id CE14F2BB0054 for ; Tue, 7 Mar 2017 19:56:45 +1100 (EST) Received: from d23av04.au.ibm.com (d23av04.au.ibm.com [9.190.235.139]) by d23relay06.au.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id v278ubag48824456 for ; Tue, 7 Mar 2017 19:56:45 +1100 Received: from d23av04.au.ibm.com (localhost [127.0.0.1]) by d23av04.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVout) with ESMTP id v278uCap013830 for ; Tue, 7 Mar 2017 19:56:13 +1100 Subject: Re: [RFC 05/11] mm: make the try_to_munlock void function References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-6-git-send-email-minchan@kernel.org> <98488e1a-0202-b88b-ca9c-1dc0d6c27ae5@linux.vnet.ibm.com> <20170306020901.GC8779@bbox> <12618e8d-9eaf-322a-4334-e1416dc95b8b@linux.vnet.ibm.com> <20170307065048.GD29458@bbox> From: Anshuman Khandual Date: Tue, 7 Mar 2017 14:25:48 +0530 MIME-Version: 1.0 In-Reply-To: <20170307065048.GD29458@bbox> Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: 7bit Message-Id: <6e412428-86c7-9032-a3b6-41980ad667f9@linux.vnet.ibm.com> Sender: owner-linux-mm@kvack.org List-ID: To: Minchan Kim , Anshuman Khandual Cc: Andrew Morton , kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko , Vlastimil Babka , "Kirill A . Shutemov" On 03/07/2017 12:20 PM, Minchan Kim wrote: > Hi Anshuman, > > On Mon, Mar 06, 2017 at 03:10:17PM +0530, Anshuman Khandual wrote: >> On 03/06/2017 07:39 AM, Minchan Kim wrote: >>> On Fri, Mar 03, 2017 at 05:13:54PM +0530, Anshuman Khandual wrote: >>>> On 03/02/2017 12:09 PM, Minchan Kim wrote: >>>>> try_to_munlock returns SWAP_MLOCK if the one of VMAs mapped >>>>> the page has VM_LOCKED flag. 
In that time, VM set PG_mlocked to >>>>> the page if the page is not pte-mapped THP which cannot be >>>>> mlocked, either. >>>> Right. >>>> >>>>> With that, __munlock_isolated_page can use PageMlocked to check >>>>> whether try_to_munlock is successful or not without relying on >>>>> try_to_munlock's retval. It helps to make ttu/ttuo simple with >>>>> upcoming patches. >>>> Right. >>>> >>>>> Cc: Vlastimil Babka >>>>> Cc: Kirill A. Shutemov >>>>> Signed-off-by: Minchan Kim >>>>> --- >>>>> include/linux/rmap.h | 2 +- >>>>> mm/mlock.c | 6 ++---- >>>>> mm/rmap.c | 16 ++++------------ >>>>> 3 files changed, 7 insertions(+), 17 deletions(-) >>>>> >>>>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h >>>>> index b556eef..1b0cd4c 100644 >>>>> --- a/include/linux/rmap.h >>>>> +++ b/include/linux/rmap.h >>>>> @@ -235,7 +235,7 @@ int page_mkclean(struct page *); >>>>> * called in munlock()/munmap() path to check for other vmas holding >>>>> * the page mlocked. >>>>> */ >>>>> -int try_to_munlock(struct page *); >>>>> +void try_to_munlock(struct page *); >>>>> >>>>> void remove_migration_ptes(struct page *old, struct page *new, bool locked); >>>>> >>>>> diff --git a/mm/mlock.c b/mm/mlock.c >>>>> index cdbed8a..d34a540 100644 >>>>> --- a/mm/mlock.c >>>>> +++ b/mm/mlock.c >>>>> @@ -122,17 +122,15 @@ static bool __munlock_isolate_lru_page(struct page *page, bool getpage) >>>>> */ >>>>> static void __munlock_isolated_page(struct page *page) >>>>> { >>>>> - int ret = SWAP_AGAIN; >>>>> - >>>>> /* >>>>> * Optimization: if the page was mapped just once, that's our mapping >>>>> * and we don't need to check all the other vmas. >>>>> */ >>>>> if (page_mapcount(page) > 1) >>>>> - ret = try_to_munlock(page); >>>>> + try_to_munlock(page); >>>>> >>>>> /* Did try_to_unlock() succeed or punt? */ >>>>> - if (ret != SWAP_MLOCK) >>>>> + if (!PageMlocked(page)) >>>> Checks if the page is still mlocked or not. 
>>>> >>>>> count_vm_event(UNEVICTABLE_PGMUNLOCKED); >>>>> >>>>> putback_lru_page(page); >>>>> diff --git a/mm/rmap.c b/mm/rmap.c >>>>> index 0a48958..61ae694 100644 >>>>> --- a/mm/rmap.c >>>>> +++ b/mm/rmap.c >>>>> @@ -1540,18 +1540,10 @@ static int page_not_mapped(struct page *page) >>>>> * Called from munlock code. Checks all of the VMAs mapping the page >>>>> * to make sure nobody else has this page mlocked. The page will be >>>>> * returned with PG_mlocked cleared if no other vmas have it mlocked. >>>>> - * >>>>> - * Return values are: >>>>> - * >>>>> - * SWAP_AGAIN - no vma is holding page mlocked, or, >>>>> - * SWAP_AGAIN - page mapped in mlocked vma -- couldn't acquire mmap sem >>>>> - * SWAP_FAIL - page cannot be located at present >>>>> - * SWAP_MLOCK - page is now mlocked. >>>>> */ >>>>> -int try_to_munlock(struct page *page) >>>>> -{ >>>>> - int ret; >>>>> >>>>> +void try_to_munlock(struct page *page) >>>>> +{ >>>>> struct rmap_walk_control rwc = { >>>>> .rmap_one = try_to_unmap_one, >>>>> .arg = (void *)TTU_MUNLOCK, >>>>> @@ -1561,9 +1553,9 @@ int try_to_munlock(struct page *page) >>>>> }; >>>>> >>>>> VM_BUG_ON_PAGE(!PageLocked(page) || PageLRU(page), page); >>>>> + VM_BUG_ON_PAGE(PageMlocked(page), page); >>>> We are calling on the page to see if its mlocked from any of it's >>>> mapping VMAs. Then it is a possibility that the page is mlocked >>>> and the above condition is true and we print VM BUG report there. >>>> The point is if its a valid possibility why we have added the >>>> above check ? >>> If I read code properly, __munlock_isolated_page calls try_to_munlock >>> always pass the TestClearPageMlocked page to try_to_munlock. >> Right. >> >>> (e.g., munlock_vma_page and __munlock_pagevec) so I thought >>> try_to_munlock should be called non-PG_mlocked page and try_to_unmap_one >>> returns PG_mlocked page once it found a VM_LOCKED VMA for a page. >>> IOW, non-PG_mlocked page is precondition for try_to_munlock. >> Okay, I have missed that part. 
Nonetheless this is a separate issue, >> should be part of a different patch ? Not inside these cleanups. > If that precondition is not true, this patch changes the behavior > slightly. > > UNEVICTABLE_PGMUNLOCKED count mistmatch compared to old. > > I wanted to catch it up. If you still think it's separate issue, > I will do. Please tell me. However, I still think it's no problem > to merge it in this clean up patch. Got it, it's okay. Let this change be part of this patch itself. -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-wr0-f197.google.com (mail-wr0-f197.google.com [209.85.128.197]) by kanga.kvack.org (Postfix) with ESMTP id E67936B038B for ; Tue, 7 Mar 2017 09:19:37 -0500 (EST) Received: by mail-wr0-f197.google.com with SMTP id v66so1046985wrc.4 for ; Tue, 07 Mar 2017 06:19:37 -0800 (PST) Received: from mail-wr0-x244.google.com (mail-wr0-x244.google.com. [2a00:1450:400c:c0c::244]) by mx.google.com with ESMTPS id y132si19085222wme.36.2017.03.07.06.19.36 for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Tue, 07 Mar 2017 06:19:36 -0800 (PST) Received: by mail-wr0-x244.google.com with SMTP id u48so432738wrc.1 for ; Tue, 07 Mar 2017 06:19:36 -0800 (PST) Date: Tue, 7 Mar 2017 17:19:33 +0300 From: "Kirill A.
Shutemov" Subject: Re: [RFC 01/11] mm: use SWAP_SUCCESS instead of 0 Message-ID: <20170307141933.GA2779@node.shutemov.name> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-2-git-send-email-minchan@kernel.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1488436765-32350-2-git-send-email-minchan@kernel.org> Sender: owner-linux-mm@kvack.org List-ID: To: Minchan Kim Cc: Andrew Morton , kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko On Thu, Mar 02, 2017 at 03:39:15PM +0900, Minchan Kim wrote: > SWAP_SUCCESS's defined value of 0 could change at any time, so don't rely on > it. Instead, use the explicit macro. I'm okay with this as long as it's preparation for something meaningful. 0 as success is widely used; I don't think replacing it with a macro here has value on its own. -- Kirill A. Shutemov From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-wr0-f199.google.com (mail-wr0-f199.google.com [209.85.128.199]) by kanga.kvack.org (Postfix) with ESMTP id DEBD56B038C for ; Tue, 7 Mar 2017 09:20:14 -0500 (EST) Received: by mail-wr0-f199.google.com with SMTP id u108so1004515wrb.3 for ; Tue, 07 Mar 2017 06:20:14 -0800 (PST) Received: from mail-wm0-x243.google.com (mail-wm0-x243.google.com. [2a00:1450:400c:c09::243]) by mx.google.com with ESMTPS id x7si741348wmf.1.2017.03.07.06.20.13 for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Tue, 07 Mar 2017 06:20:13 -0800 (PST) Received: by mail-wm0-x243.google.com with SMTP id u132so1241082wmg.1 for ; Tue, 07 Mar 2017 06:20:13 -0800 (PST) Date: Tue, 7 Mar 2017 17:20:10 +0300 From: "Kirill A.
Shutemov" Subject: Re: [RFC 02/11] mm: remove unncessary ret in page_referenced Message-ID: <20170307142010.GB2779@node.shutemov.name> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-3-git-send-email-minchan@kernel.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1488436765-32350-3-git-send-email-minchan@kernel.org> Sender: owner-linux-mm@kvack.org List-ID: To: Minchan Kim Cc: Andrew Morton , kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko On Thu, Mar 02, 2017 at 03:39:16PM +0900, Minchan Kim wrote: > Nobody uses the ret variable. Remove it. > > Signed-off-by: Minchan Kim Acked-by: Kirill A. Shutemov -- Kirill A. Shutemov From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-wr0-f199.google.com (mail-wr0-f199.google.com [209.85.128.199]) by kanga.kvack.org (Postfix) with ESMTP id D11226B038D for ; Tue, 7 Mar 2017 09:20:55 -0500 (EST) Received: by mail-wr0-f199.google.com with SMTP id y51so1033383wry.6 for ; Tue, 07 Mar 2017 06:20:55 -0800 (PST) Received: from mail-wm0-x243.google.com (mail-wm0-x243.google.com. [2a00:1450:400c:c09::243]) by mx.google.com with ESMTPS id 63si205707wrp.137.2017.03.07.06.20.54 for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Tue, 07 Mar 2017 06:20:54 -0800 (PST) Received: by mail-wm0-x243.google.com with SMTP id v190so1231951wme.3 for ; Tue, 07 Mar 2017 06:20:54 -0800 (PST) Date: Tue, 7 Mar 2017 17:20:51 +0300 From: "Kirill A.
Shutemov" Subject: Re: [RFC 03/11] mm: remove SWAP_DIRTY in ttu Message-ID: <20170307142051.GC2779@node.shutemov.name> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-4-git-send-email-minchan@kernel.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1488436765-32350-4-git-send-email-minchan@kernel.org> Sender: owner-linux-mm@kvack.org List-ID: To: Minchan Kim Cc: Andrew Morton , kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko , Shaohua Li On Thu, Mar 02, 2017 at 03:39:17PM +0900, Minchan Kim wrote: > If we find a lazyfree page is dirty, ttuo can just SetPageSwapBacked > there, like for a PG_mlocked page, and return SWAP_FAIL, which > is natural because the page is not swappable right now, so > vmscan can activate it. There is no point introducing the new return > value SWAP_DIRTY in ttu at the moment. > > Cc: Shaohua Li > Signed-off-by: Minchan Kim Acked-by: Kirill A. Shutemov -- Kirill A. Shutemov From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-wm0-f69.google.com (mail-wm0-f69.google.com [74.125.82.69]) by kanga.kvack.org (Postfix) with ESMTP id 3311A6B038B for ; Tue, 7 Mar 2017 09:26:48 -0500 (EST) Received: by mail-wm0-f69.google.com with SMTP id g8so1803246wmg.7 for ; Tue, 07 Mar 2017 06:26:48 -0800 (PST) Received: from mail-wr0-x243.google.com (mail-wr0-x243.google.com.
[2a00:1450:400c:c0c::243]) by mx.google.com with ESMTPS id y63si219037wrb.176.2017.03.07.06.26.46 for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Tue, 07 Mar 2017 06:26:47 -0800 (PST) Received: by mail-wr0-x243.google.com with SMTP id l37so441830wrc.3 for ; Tue, 07 Mar 2017 06:26:46 -0800 (PST) Date: Tue, 7 Mar 2017 17:26:43 +0300 From: "Kirill A. Shutemov" Subject: Re: [RFC 04/11] mm: remove SWAP_MLOCK check for SWAP_SUCCESS in ttu Message-ID: <20170307142643.GD2779@node.shutemov.name> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-5-git-send-email-minchan@kernel.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1488436765-32350-5-git-send-email-minchan@kernel.org> Sender: owner-linux-mm@kvack.org List-ID: To: Minchan Kim Cc: Andrew Morton , kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko On Thu, Mar 02, 2017 at 03:39:18PM +0900, Minchan Kim wrote: > If the page is mapped and rescued in ttuo, page_mapcount(page) == 0 cannot > be true, so the page_mapcount check in ttu is enough to return SWAP_SUCCESS. > IOW, the SWAP_MLOCK check is redundant, so remove it. > > Signed-off-by: Minchan Kim > --- > mm/rmap.c | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff --git a/mm/rmap.c b/mm/rmap.c > index 3a14013..0a48958 100644 > --- a/mm/rmap.c > +++ b/mm/rmap.c > @@ -1523,7 +1523,7 @@ int try_to_unmap(struct page *page, enum ttu_flags flags) > else > ret = rmap_walk(page, &rwc); > > - if (ret != SWAP_MLOCK && !page_mapcount(page)) > + if (!page_mapcount(page)) Hm. I think there's a bug in the current code. It should be !total_mapcount(page), otherwise it can give a false positive if there's a THP mapped with PTEs. And in this case ret != SWAP_MLOCK is helpful to cut down some cost. Although it should be fine to remove it, I guess. -- Kirill A.
Shutemov From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-wr0-f197.google.com (mail-wr0-f197.google.com [209.85.128.197]) by kanga.kvack.org (Postfix) with ESMTP id 8F3E76B0389 for ; Tue, 7 Mar 2017 10:17:52 -0500 (EST) Received: by mail-wr0-f197.google.com with SMTP id l37so1478408wrc.7 for ; Tue, 07 Mar 2017 07:17:52 -0800 (PST) Received: from mail-wm0-x244.google.com (mail-wm0-x244.google.com. [2a00:1450:400c:c09::244]) by mx.google.com with ESMTPS id n110si398229wrb.159.2017.03.07.07.17.51 for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Tue, 07 Mar 2017 07:17:51 -0800 (PST) Received: by mail-wm0-x244.google.com with SMTP id n11so1528459wma.0 for ; Tue, 07 Mar 2017 07:17:51 -0800 (PST) Date: Tue, 7 Mar 2017 18:17:47 +0300 From: "Kirill A. Shutemov" Subject: Re: [RFC 05/11] mm: make the try_to_munlock void function Message-ID: <20170307151747.GA2940@node.shutemov.name> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-6-git-send-email-minchan@kernel.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1488436765-32350-6-git-send-email-minchan@kernel.org> Sender: owner-linux-mm@kvack.org List-ID: To: Minchan Kim Cc: Andrew Morton , kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko , Vlastimil Babka , "Kirill A . Shutemov" On Thu, Mar 02, 2017 at 03:39:19PM +0900, Minchan Kim wrote: > try_to_munlock returns SWAP_MLOCK if one of the VMAs mapping > the page has the VM_LOCKED flag. In that case, the VM sets PG_mlocked on > the page, provided the page is not a pte-mapped THP, which cannot be > mlocked, either.
> > With that, __munlock_isolated_page can use PageMlocked to check > whether try_to_munlock was successful or not without relying on > try_to_munlock's retval. It helps to make ttu/ttuo simple with > upcoming patches. I *think* you're correct, but it took time to wrap my head around. We basically rely on try_to_munlock() never being called for a PTE-mapped THP. And we don't call it that way at the moment. It's worth adding something like VM_BUG_ON_PAGE(PageCompound(page) && PageDoubleMap(page), page); into try_to_munlock(). Otherwise looks good to me. I'll gladly add my Acked-by once this nit is addressed. -- Kirill A. Shutemov From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-wm0-f70.google.com (mail-wm0-f70.google.com [74.125.82.70]) by kanga.kvack.org (Postfix) with ESMTP id 6BAAC6B038B for ; Tue, 7 Mar 2017 10:24:42 -0500 (EST) Received: by mail-wm0-f70.google.com with SMTP id u9so2329674wme.6 for ; Tue, 07 Mar 2017 07:24:42 -0800 (PST) Received: from mail-wm0-x244.google.com (mail-wm0-x244.google.com. [2a00:1450:400c:c09::244]) by mx.google.com with ESMTPS id o67si19297059wmo.87.2017.03.07.07.24.40 for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Tue, 07 Mar 2017 07:24:41 -0800 (PST) Received: by mail-wm0-x244.google.com with SMTP id u132so1556092wmg.1 for ; Tue, 07 Mar 2017 07:24:40 -0800 (PST) Date: Tue, 7 Mar 2017 18:24:37 +0300 From: "Kirill A.
Shutemov" Subject: Re: [RFC 06/11] mm: remove SWAP_MLOCK in ttu Message-ID: <20170307152437.GB2940@node.shutemov.name> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-7-git-send-email-minchan@kernel.org> <54799ea5-005d-939c-de32-bc21af881ab4@linux.vnet.ibm.com> <20170306021508.GD8779@bbox> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20170306021508.GD8779@bbox> Sender: owner-linux-mm@kvack.org List-ID: To: Minchan Kim , g@node.shutemov.name Cc: Anshuman Khandual , Andrew Morton , kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko On Mon, Mar 06, 2017 at 11:15:08AM +0900, Minchan Kim wrote: > Hi Anshuman, > > On Fri, Mar 03, 2017 at 06:06:38PM +0530, Anshuman Khandual wrote: > > On 03/02/2017 12:09 PM, Minchan Kim wrote: > > > ttu doesn't need to return SWAP_MLOCK. Instead, just return SWAP_FAIL > > > because it means the page is not swappable, so it should move to > > > another LRU list (active or unevictable). The putback friends will > > > move it to the right list depending on the page's LRU flags. > > > > Right, if it cannot be swapped out there is not much difference with > > SWAP_FAIL once we change the callers who expected to see a SWAP_MLOCK > > return instead. > > > > > > > > A side effect is that shrink_page_list accounts unevictable list movement > > > as PGACTIVATE, but I don't think it corrupts anything severe. > > > > Not sure I got that, could you please elaborate on this? We will still > > activate the page and put it in an appropriate LRU list if it is marked > > mlocked? > > Right. putback_inactive_pages/putback_lru_page has logic to filter > out unevictable pages and move them to the unevictable LRU list, so it > doesn't break the LRU behavior, but the concern is that until now > we have accounted PGACTIVATE only for evictable LRU pages, while > with this change it accounts for the unevictable LRU list as well.
> However, although I don't think it's a big problem in real practice, > we can fix it simply by checking PG_mlocked if someone reports it. I think it's better to do this proactively. Let's hide both pgactivate++ and SetPageActive() under "if (!PageMlocked())". SetPageActive() is not free. -- Kirill A. Shutemov From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pg0-f71.google.com (mail-pg0-f71.google.com [74.125.83.71]) by kanga.kvack.org (Postfix) with ESMTP id 23ED76B0396 for ; Wed, 8 Mar 2017 01:25:22 -0500 (EST) Received: by mail-pg0-f71.google.com with SMTP id b2so42111560pgc.6 for ; Tue, 07 Mar 2017 22:25:22 -0800 (PST) Received: from lgeamrelo13.lge.com (LGEAMRELO13.lge.com. [156.147.23.53]) by mx.google.com with ESMTP id s2si2274147plk.272.2017.03.07.22.25.20 for ; Tue, 07 Mar 2017 22:25:21 -0800 (PST) Date: Wed, 8 Mar 2017 15:25:19 +0900 From: Minchan Kim Subject: Re: [RFC 01/11] mm: use SWAP_SUCCESS instead of 0 Message-ID: <20170308062519.GE11206@bbox> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-2-git-send-email-minchan@kernel.org> <20170307141933.GA2779@node.shutemov.name> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20170307141933.GA2779@node.shutemov.name> Sender: owner-linux-mm@kvack.org List-ID: To: "Kirill A. Shutemov" Cc: Andrew Morton , kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko Hi Kirill, On Tue, Mar 07, 2017 at 05:19:33PM +0300, Kirill A. Shutemov wrote: > On Thu, Mar 02, 2017 at 03:39:15PM +0900, Minchan Kim wrote: > > SWAP_SUCCESS's defined value of 0 could change at any time, so don't rely on > > it. Instead, use the explicit macro.
> > I'm okay with this as long as it's preparation for something meaningful. > 0 as success is widely used. I don't think replacing it with a macro here > has value on its own. It's preparation for making try_to_unmap return a bool type. Strictly speaking it's not necessary, but I wanted to replace it with SWAP_SUCCESS in this chance because the function has several *defined* return values, so it would be clearer, IMO, to use one of those defined values. However, my rule of thumb is to keep the author/maintainer's choice in trivial cases, and it seems you don't like this one, so I will drop it in the next spin. Thanks. > > -- > Kirill A. Shutemov From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pf0-f200.google.com (mail-pf0-f200.google.com [209.85.192.200]) by kanga.kvack.org (Postfix) with ESMTP id 0A1AC6B0398 for ; Wed, 8 Mar 2017 01:40:42 -0500 (EST) Received: by mail-pf0-f200.google.com with SMTP id 67so42781447pfg.0 for ; Tue, 07 Mar 2017 22:40:42 -0800 (PST) Received: from lgeamrelo11.lge.com (LGEAMRELO11.lge.com.
[156.147.23.51]) by mx.google.com with ESMTP id m14si2314376pln.225.2017.03.07.22.40.40 for ; Tue, 07 Mar 2017 22:40:41 -0800 (PST) Date: Wed, 8 Mar 2017 15:40:38 +0900 From: Minchan Kim Subject: Re: [RFC 04/11] mm: remove SWAP_MLOCK check for SWAP_SUCCESS in ttu Message-ID: <20170308064038.GF11206@bbox> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-5-git-send-email-minchan@kernel.org> <20170307142643.GD2779@node.shutemov.name> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20170307142643.GD2779@node.shutemov.name> Sender: owner-linux-mm@kvack.org List-ID: To: "Kirill A. Shutemov" Cc: Andrew Morton , kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko On Tue, Mar 07, 2017 at 05:26:43PM +0300, Kirill A. Shutemov wrote: > On Thu, Mar 02, 2017 at 03:39:18PM +0900, Minchan Kim wrote: > > If the page is mapped and rescued in ttuo, page_mapcount(page) == 0 cannot > > be true, so the page_mapcount check in ttu is enough to return SWAP_SUCCESS. > > IOW, the SWAP_MLOCK check is redundant, so remove it. > > > > Signed-off-by: Minchan Kim > > --- > > mm/rmap.c | 2 +- > > 1 file changed, 1 insertion(+), 1 deletion(-) > > > > diff --git a/mm/rmap.c b/mm/rmap.c > > index 3a14013..0a48958 100644 > > --- a/mm/rmap.c > > +++ b/mm/rmap.c > > @@ -1523,7 +1523,7 @@ int try_to_unmap(struct page *page, enum ttu_flags flags) > > else > > ret = rmap_walk(page, &rwc); > > > > - if (ret != SWAP_MLOCK && !page_mapcount(page)) > > + if (!page_mapcount(page)) > > Hm. I think there's a bug in the current code. > It should be !total_mapcount(page), otherwise it can give a false positive if > there's a THP mapped with PTEs. Hmm, I've totally lost track of THP these days, so I can easily miss something. When I look at it, it seems every page passed to try_to_unmap has already been split by split_huge_page_to_list, which calls freeze_page, which splits the pmd. So I guess it's no problem. Right?
Anyway, it's out of scope for this patch, so if it's really a problem, I'd like to handle it separately. One question: when should we use total_mapcount instead of page_mapcount? If total_mapcount had a more thorough description, it would be very helpful for those who are not familiar with it. > > And in this case ret != SWAP_MLOCK is helpful to cut down some cost. > Although it should be fine to remove it, I guess. Sure, but it would be hard to measure, I think. As well, a later patch removes SWAP_MLOCK. -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pf0-f200.google.com (mail-pf0-f200.google.com [209.85.192.200]) by kanga.kvack.org (Postfix) with ESMTP id 628D76B039A for ; Wed, 8 Mar 2017 01:41:51 -0500 (EST) Received: by mail-pf0-f200.google.com with SMTP id v190so42666321pfb.5 for ; Tue, 07 Mar 2017 22:41:51 -0800 (PST) Received: from lgeamrelo13.lge.com (LGEAMRELO13.lge.com. [156.147.23.53]) by mx.google.com with ESMTP id k2si2309914pga.293.2017.03.07.22.41.49 for ; Tue, 07 Mar 2017 22:41:50 -0800 (PST) Date: Wed, 8 Mar 2017 15:41:47 +0900 From: Minchan Kim Subject: Re: [RFC 05/11] mm: make the try_to_munlock void function Message-ID: <20170308064147.GG11206@bbox> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-6-git-send-email-minchan@kernel.org> <20170307151747.GA2940@node.shutemov.name> MIME-Version: 1.0 In-Reply-To: <20170307151747.GA2940@node.shutemov.name> Content-Type: text/plain; charset="us-ascii" Content-Disposition: inline Sender: owner-linux-mm@kvack.org List-ID: To: "Kirill A. Shutemov" Cc: Andrew Morton , kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko , Vlastimil Babka , "Kirill A . Shutemov" On Tue, Mar 07, 2017 at 06:17:47PM +0300, Kirill A.
Shutemov wrote: > On Thu, Mar 02, 2017 at 03:39:19PM +0900, Minchan Kim wrote: > > try_to_munlock returns SWAP_MLOCK if one of the VMAs mapping > > the page has the VM_LOCKED flag. In that case, the VM sets PG_mlocked on > > the page, provided the page is not a pte-mapped THP, which cannot be > > mlocked, either. > > > > With that, __munlock_isolated_page can use PageMlocked to check > > whether try_to_munlock was successful or not without relying on > > try_to_munlock's retval. It helps to make ttu/ttuo simple with > > upcoming patches. > > I *think* you're correct, but it took time to wrap my head around. > We basically rely on try_to_munlock() never being called for a PTE-mapped THP. > And we don't call it that way at the moment. > > It's worth adding something like > > VM_BUG_ON_PAGE(PageCompound(page) && PageDoubleMap(page), page); > > into try_to_munlock(). Agree. > > Otherwise looks good to me. > > I'll gladly add my Acked-by once this nit is addressed. Thanks for reviewing this part, Kirill! > > -- > Kirill A. Shutemov From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pg0-f72.google.com (mail-pg0-f72.google.com [74.125.83.72]) by kanga.kvack.org (Postfix) with ESMTP id 5184E6B039C for ; Wed, 8 Mar 2017 01:42:46 -0500 (EST) Received: by mail-pg0-f72.google.com with SMTP id b2so42609981pgc.6 for ; Tue, 07 Mar 2017 22:42:46 -0800 (PST) Received: from lgeamrelo13.lge.com (LGEAMRELO13.lge.com.
[156.147.23.53]) by mx.google.com with ESMTP id d81si2334696pfd.82.2017.03.07.22.42.44 for ; Tue, 07 Mar 2017 22:42:45 -0800 (PST) Date: Wed, 8 Mar 2017 15:42:42 +0900 From: Minchan Kim Subject: Re: [RFC 06/11] mm: remove SWAP_MLOCK in ttu Message-ID: <20170308064242.GH11206@bbox> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-7-git-send-email-minchan@kernel.org> <54799ea5-005d-939c-de32-bc21af881ab4@linux.vnet.ibm.com> <20170306021508.GD8779@bbox> <20170307152437.GB2940@node.shutemov.name> MIME-Version: 1.0 In-Reply-To: <20170307152437.GB2940@node.shutemov.name> Content-Type: text/plain; charset="us-ascii" Content-Disposition: inline Sender: owner-linux-mm@kvack.org List-ID: To: "Kirill A. Shutemov" Cc: g@node.shutemov.name, Anshuman Khandual , Andrew Morton , kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko On Tue, Mar 07, 2017 at 06:24:37PM +0300, Kirill A. Shutemov wrote: > On Mon, Mar 06, 2017 at 11:15:08AM +0900, Minchan Kim wrote: > > Hi Anshuman, > > > > On Fri, Mar 03, 2017 at 06:06:38PM +0530, Anshuman Khandual wrote: > > > On 03/02/2017 12:09 PM, Minchan Kim wrote: > > > > ttu don't need to return SWAP_MLOCK. Instead, just return SWAP_FAIL > > > > because it means the page is not-swappable so it should move to > > > > another LRU list(active or unevictable). putback friends will > > > > move it to right list depending on the page's LRU flag. > > > > > > Right, if it cannot be swapped out there is not much difference with > > > SWAP_FAIL once we change the callers who expected to see a SWAP_MLOCK > > > return instead. > > > > > > > > > > > A side effect is shrink_page_list accounts unevictable list movement > > > > by PGACTIVATE but I don't think it corrupts something severe. > > > > > > Not sure I got that, could you please elaborate on this. We will still > > > activate the page and put it in an appropriate LRU list if it is marked > > > mlocked ? > > > > Right. 
putback_inactive_pages/putback_lru_page has logic to filter > > out unevictable pages and move them to the unevictable LRU list, so it > > doesn't break the LRU behavior, but the concern is that until now > > we have accounted PGACTIVATE only for evictable LRU pages, while > > with this change it accounts for the unevictable LRU list as well. > > However, although I don't think it's a big problem in real practice, > > we can fix it simply by checking PG_mlocked if someone reports it. > > I think it's better to do this proactively. Let's hide both pgactivate++ > and SetPageActive() under "if (!PageMlocked())". > SetPageActive() is not free. I will consider it in the next spin. Thanks! From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pg0-f69.google.com (mail-pg0-f69.google.com [74.125.83.69]) by kanga.kvack.org (Postfix) with ESMTP id A4C746B03A6 for ; Wed, 8 Mar 2017 02:20:08 -0500 (EST) Received: by mail-pg0-f69.google.com with SMTP id e5so44387518pgk.1 for ; Tue, 07 Mar 2017 23:20:08 -0800 (PST) Received: from hqemgate16.nvidia.com (hqemgate16.nvidia.com.
[216.228.121.65]) by mx.google.com with ESMTPS id q77si2421330pfi.41.2017.03.07.23.20.06 for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Tue, 07 Mar 2017 23:20:07 -0800 (PST) Subject: Re: [RFC 08/11] mm: make ttu's return boolean References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-9-git-send-email-minchan@kernel.org> From: John Hubbard Message-ID: <70f60783-e098-c1a9-11b4-544530bcd809@nvidia.com> Date: Tue, 7 Mar 2017 23:13:26 -0800 MIME-Version: 1.0 In-Reply-To: <1488436765-32350-9-git-send-email-minchan@kernel.org> Content-Type: text/plain; charset="windows-1252"; format=flowed Content-Transfer-Encoding: 7bit Sender: owner-linux-mm@kvack.org List-ID: To: Minchan Kim , Andrew Morton Cc: kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko , "Kirill A. Shutemov" , Naoya Horiguchi On 03/01/2017 10:39 PM, Minchan Kim wrote: > try_to_unmap returns SWAP_SUCCESS or SWAP_FAIL so it's suitable for > boolean return. This patch changes it. Hi Minchan, So, up until this patch, I definitely like the cleanup, because as you observed, the return values didn't need so many different values. However, at this point, I think you should stop, and keep the SWAP_SUCCESS and SWAP_FAIL (or maybe even rename them to UNMAP_* or TTU_RESULT_*, to match their functions' names better), because removing them makes the code considerably less readable. And since this is billed as a cleanup, we care here, even though this is a minor point. :) Bool return values are sometimes perfect, such as when asking a question: bool mode_changed = needs_modeset(crtc_state); The above is very nice. However, for returning success or failure, bools are not as nice, because *usually* success == true, except when you use the errno-based system, in which success == 0 (which would translate to false, if you mistakenly treated it as a bool). 
That leads to the reader having to remember which system is in use, usually with no visual cues to help. > [...] > if (PageSwapCache(p)) { > @@ -971,7 +971,7 @@ static int hwpoison_user_mappings(struct page *p, unsigned long pfn, > collect_procs(hpage, &tokill, flags & MF_ACTION_REQUIRED); > > ret = try_to_unmap(hpage, ttu); > - if (ret != SWAP_SUCCESS) > + if (!ret) > pr_err("Memory failure: %#lx: failed to unmap page (mapcount=%d)\n", > pfn, page_mapcount(hpage)); > > @@ -986,8 +986,7 @@ static int hwpoison_user_mappings(struct page *p, unsigned long pfn, > * any accesses to the poisoned memory. > */ > forcekill = PageDirty(hpage) || (flags & MF_MUST_KILL); > - kill_procs(&tokill, forcekill, trapno, > - ret != SWAP_SUCCESS, p, pfn, flags); > + kill_procs(&tokill, forcekill, trapno, !ret , p, pfn, flags); The kill_procs() invocation was a little more readable before. > [...] > diff --git a/mm/vmscan.c b/mm/vmscan.c > index 170c61f..e4b74f1 100644 > --- a/mm/vmscan.c > +++ b/mm/vmscan.c > @@ -966,7 +966,6 @@ static unsigned long shrink_page_list(struct list_head *page_list, > int may_enter_fs; > enum page_references references = PAGEREF_RECLAIM_CLEAN; > bool dirty, writeback; > - int ret = SWAP_SUCCESS; > > cond_resched(); > > @@ -1139,13 +1138,9 @@ static unsigned long shrink_page_list(struct list_head *page_list, > * processes. Try to unmap it here. > */ > if (page_mapped(page)) { > - switch (ret = try_to_unmap(page, > - ttu_flags | TTU_BATCH_FLUSH)) { > - case SWAP_FAIL: Again: the SWAP_FAIL makes it crystal clear which case we're in. I also wonder if UNMAP_FAIL or TTU_RESULT_FAIL is a better name? thanks, John Hubbard NVIDIA -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . 
Don't email: email@kvack.org From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mail-pg0-f69.google.com (mail-pg0-f69.google.com [74.125.83.69]) by kanga.kvack.org (Postfix) with ESMTP id C513E6B0435 for ; Thu, 9 Mar 2017 01:37:27 -0500 (EST) Received: by mail-pg0-f69.google.com with SMTP id g2so97516858pge.7 for ; Wed, 08 Mar 2017 22:37:27 -0800 (PST) Received: from lgeamrelo12.lge.com (LGEAMRELO12.lge.com. [156.147.23.52]) by mx.google.com with ESMTP id a5si5504945pgi.377.2017.03.08.22.37.26 for ; Wed, 08 Mar 2017 22:37:26 -0800 (PST) Date: Thu, 9 Mar 2017 15:37:21 +0900 From: Minchan Kim Subject: Re: [RFC 08/11] mm: make ttu's return boolean Message-ID: <20170309063721.GC854@bbox> References: <1488436765-32350-1-git-send-email-minchan@kernel.org> <1488436765-32350-9-git-send-email-minchan@kernel.org> <70f60783-e098-c1a9-11b4-544530bcd809@nvidia.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <70f60783-e098-c1a9-11b4-544530bcd809@nvidia.com> Sender: owner-linux-mm@kvack.org List-ID: To: John Hubbard Cc: Andrew Morton , kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner , Michal Hocko , "Kirill A. Shutemov" , Naoya Horiguchi Hi John, On Tue, Mar 07, 2017 at 11:13:26PM -0800, John Hubbard wrote: > On 03/01/2017 10:39 PM, Minchan Kim wrote: > >try_to_unmap returns SWAP_SUCCESS or SWAP_FAIL so it's suitable for > >boolean return. This patch changes it. > > Hi Minchan, > > So, up until this patch, I definitely like the cleanup, because as you > observed, the return values didn't need so many different values. However, > at this point, I think you should stop, and keep the SWAP_SUCCESS and > SWAP_FAIL (or maybe even rename them to UNMAP_* or TTU_RESULT_*, to match > their functions' names better), because removing them makes the code > considerably less readable. > > And since this is billed as a cleanup, we care here, even though this is a > minor point. 
:) > > Bool return values are sometimes perfect, such as when asking a question: > > bool mode_changed = needs_modeset(crtc_state); > > The above is very nice. However, for returning success or failure, bools are > not as nice, because *usually* success == true, except when you use the > errno-based system, in which success == 0 (which would translate to false, > if you mistakenly treated it as a bool). That leads to the reader having to > remember which system is in use, usually with no visual cues to help. I think it's the matter of taste. if (try_to_unmap(xxx)) something else something It's perfectly understandable to me. IOW, if try_to_unmap returns true, it means it did unmap successfully. Otherwise, failed. IMHO, SWAP_SUCCESS or TTU_RESULT_* seems to be an over-engineering. If the user want it, user can do it by introducing right variable name in his context. See below. > > > > [...] > > if (PageSwapCache(p)) { > >@@ -971,7 +971,7 @@ static int hwpoison_user_mappings(struct page *p, unsigned long pfn, > > collect_procs(hpage, &tokill, flags & MF_ACTION_REQUIRED); > > > > ret = try_to_unmap(hpage, ttu); > >- if (ret != SWAP_SUCCESS) > >+ if (!ret) > > pr_err("Memory failure: %#lx: failed to unmap page (mapcount=%d)\n", > > pfn, page_mapcount(hpage)); > > > >@@ -986,8 +986,7 @@ static int hwpoison_user_mappings(struct page *p, unsigned long pfn, > > * any accesses to the poisoned memory. > > */ > > forcekill = PageDirty(hpage) || (flags & MF_MUST_KILL); > >- kill_procs(&tokill, forcekill, trapno, > >- ret != SWAP_SUCCESS, p, pfn, flags); > >+ kill_procs(&tokill, forcekill, trapno, !ret , p, pfn, flags); > > The kill_procs() invocation was a little more readable before. Indeed but I think it's not a problem of try_to_unmap but ret variable name isn't good any more. How about this? bool unmap_success; unmap_success = try_to_unmap(hpage, ttu); .. kill_procs(&tokill, forcekill, trapno, !unmap_success , p, pfn, flags); .. 
	return unmap_success;

My point is that the caller can introduce whatever variable name fits
his context. There's no need to complicate the return value, IMHO.

> > [...]
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 170c61f..e4b74f1 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -966,7 +966,6 @@ static unsigned long shrink_page_list(struct list_head *page_list,
> >  	int may_enter_fs;
> >  	enum page_references references = PAGEREF_RECLAIM_CLEAN;
> >  	bool dirty, writeback;
> > -	int ret = SWAP_SUCCESS;
> >
> >  	cond_resched();
> >
> > @@ -1139,13 +1138,9 @@ static unsigned long shrink_page_list(struct list_head *page_list,
> >  	 * processes. Try to unmap it here.
> >  	 */
> >  	if (page_mapped(page)) {
> > -		switch (ret = try_to_unmap(page,
> > -				ttu_flags | TTU_BATCH_FLUSH)) {
> > -		case SWAP_FAIL:
>
> Again: the SWAP_FAIL makes it crystal clear which case we're in.

To me, it doesn't feel that way. The following is perfectly
understandable:

	if (try_to_unmap())
		do something

That's why I think it's a matter of taste. Okay, I admit I might be
biased, too, so I will consider what you suggested if others vote for
it.

Thanks.

> I also wonder if UNMAP_FAIL or TTU_RESULT_FAIL is a better name?
>
> thanks,
> John Hubbard
> NVIDIA
From: John Hubbard
Subject: Re: [RFC 08/11] mm: make ttu's return boolean
Date: Wed, 8 Mar 2017 22:46:40 -0800
Message-ID: <49da6c96-387f-5931-eddb-cb6414631877@nvidia.com>
In-Reply-To: <20170309063721.GC854@bbox>
To: Minchan Kim
Cc: Andrew Morton, kernel-team@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner, Michal Hocko, "Kirill A. Shutemov", Naoya Horiguchi

On 03/08/2017 10:37 PM, Minchan Kim wrote:
> [...]
>
> I think it's a matter of taste.
>
> 	if (try_to_unmap(xxx))
> 		something
> 	else
> 		something
>
> It's perfectly understandable to me. IOW, if try_to_unmap returns
> true, it unmapped successfully; otherwise, it failed.
>
> IMHO, SWAP_SUCCESS or TTU_RESULT_* seems to be over-engineering. If
> the caller wants more descriptive naming, he can introduce the right
> variable name in his context. See below.

I'm OK with that approach. Just something to avoid the "what does !ret
mean in this function call?" question is what I was looking for...

> > [...]
> > >  	forcekill = PageDirty(hpage) || (flags & MF_MUST_KILL);
> > > -	kill_procs(&tokill, forcekill, trapno,
> > > -		   ret != SWAP_SUCCESS, p, pfn, flags);
> > > +	kill_procs(&tokill, forcekill, trapno, !ret, p, pfn, flags);
> >
> > The kill_procs() invocation was a little more readable before.
>
> Indeed, but I think the problem is not try_to_unmap itself; it's that
> the ret variable name no longer fits. How about this?
>
> 	bool unmap_success;
>
> 	unmap_success = try_to_unmap(hpage, ttu);
> 	...
> 	kill_procs(&tokill, forcekill, trapno, !unmap_success, p, pfn, flags);
> 	...
> 	return unmap_success;
>
> My point is that the caller can introduce whatever variable name fits
> his context. There's no need to complicate the return value, IMHO.

Yes, the local variable basically achieves what I was hoping for, so
sure, works for me.

> > [...]
> > > -		case SWAP_FAIL:
> >
> > Again: the SWAP_FAIL makes it crystal clear which case we're in.
>
> To me, it doesn't feel that way. The following is perfectly
> understandable:
>
> 	if (try_to_unmap())
> 		do something
>
> That's why I think it's a matter of taste. Okay, I admit I might be
> biased, too, so I will consider what you suggested if others vote for
> it.

Yes, if it's really just a matter of taste, then it's not worth
debating. Your change above is fine, I think.

thanks,
john h

> Thanks.