From: Shaohua Li
Subject: [PATCH V2 1/2] mm: avoid marking swap cached page as lazyfree
Date: Fri, 22 Sep 2017 11:46:30 -0700
To: linux-mm@kvack.org
Cc: Artem Savkov, Kernel-team@fb.com, Shaohua Li, stable@vger.kernel.org, Johannes Weiner, Michal Hocko, Hillf Danton, Minchan Kim, Hugh Dickins, Mel Gorman, Andrew Morton

MADV_FREE clears the pte dirty bit and then marks the page lazyfree (clears SwapBacked). There is no lock to prevent the page from being added to the swap cache by page reclaim between these two steps. If the page is added to the swap cache, marking it lazyfree will confuse the page fault path when the page is reclaimed and refaulted.
Reported-and-tested-by: Artem Savkov
Fixes: 802a3a92ad7a ("mm: reclaim MADV_FREE pages")
Signed-off-by: Shaohua Li
Cc: stable@vger.kernel.org
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Hillf Danton
Cc: Minchan Kim
Cc: Hugh Dickins
Cc: Mel Gorman
Cc: Andrew Morton
Reviewed-by: Rik van Riel
---
 mm/swap.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index 9295ae9..a77d68f 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -575,7 +575,7 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
 			    void *arg)
 {
 	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
-	    !PageUnevictable(page)) {
+	    !PageSwapCache(page) && !PageUnevictable(page)) {
 		bool active = PageActive(page);
 
 		del_page_from_lru_list(page, lruvec,
@@ -665,7 +665,7 @@ void deactivate_file_page(struct page *page)
 void mark_page_lazyfree(struct page *page)
 {
 	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
-	    !PageUnevictable(page)) {
+	    !PageSwapCache(page) && !PageUnevictable(page)) {
 		struct pagevec *pvec = &get_cpu_var(lru_lazyfree_pvecs);
 
 		get_page(page);
-- 
2.9.5