Date: Wed, 03 Jun 2020 16:03:37 -0700
From: Andrew Morton <akpm@linux-foundation.org>
To: aarcange@redhat.com, akpm@linux-foundation.org, daniel.m.jordan@oracle.com, hughd@google.com, kirill.shutemov@linux.intel.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, yang.shi@linux.alibaba.com
Subject: [patch 124/131] mm: thp: don't need to drain lru cache when splitting and mlocking THP
Message-ID: <20200603230337.8oZ9CuARg%akpm@linux-foundation.org>
In-Reply-To: <20200603155549.e041363450869eaae4c7f05b@linux-foundation.org>
User-Agent: s-nail v14.8.16

From: Yang Shi <yang.shi@linux.alibaba.com>
Subject: mm: thp: don't need to drain lru cache when splitting and mlocking THP

Since commit 8f182270dfec ("mm/swap.c: flush lru pvecs on compound page
arrival") a THP no longer stays in the pagevec.  The optimization made by
commit d965432234db ("thp: increase split_huge_page() success rate"), which
tried to unpin munlocked THPs from the pagevec by draining it, therefore no
longer makes sense.

Draining the lru cache before isolating a THP in the mlock path is likewise
unnecessary.  Commit b676b293fb48 ("mm, thp: fix mapped pages avoiding
unevictable list on mlock") added it, and commit 9a73f61bdb8a ("thp, mlock:
do not mlock PTE-mapped file huge pages") accidentally carried it over after
the above optimization went in.

Link: http://lkml.kernel.org/r/1585946493-7531-1-git-send-email-yang.shi@linux.alibaba.com
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/huge_memory.c |    7 -------
 1 file changed, 7 deletions(-)

--- a/mm/huge_memory.c~mm-thp-dont-need-drain-lru-cache-when-splitting-and-mlocking-thp
+++ a/mm/huge_memory.c
@@ -1378,7 +1378,6 @@ struct page *follow_trans_huge_pmd(struc
 		goto skip_mlock;
 	if (!trylock_page(page))
 		goto skip_mlock;
-	lru_add_drain();
 	if (page->mapping && !PageDoubleMap(page))
 		mlock_vma_page(page);
 	unlock_page(page);
@@ -2582,7 +2581,6 @@ int split_huge_page_to_list(struct page
 	struct anon_vma *anon_vma = NULL;
 	struct address_space *mapping = NULL;
 	int count, mapcount, extra_pins, ret;
-	bool mlocked;
 	unsigned long flags;
 	pgoff_t end;
 
@@ -2641,14 +2639,9 @@ int split_huge_page_to_list(struct page
 		goto out_unlock;
 	}
 
-	mlocked = PageMlocked(head);
 	unmap_page(head);
 	VM_BUG_ON_PAGE(compound_mapcount(head), head);
 
-	/* Make sure the page is not on per-CPU pagevec as it takes pin */
-	if (mlocked)
-		lru_add_drain();
-
 	/* prevent PageLRU to go away from under us, and freeze lru stats */
 	spin_lock_irqsave(&pgdata->lru_lock, flags);
_
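
The drain-on-arrival behaviour the changelog relies on is the PageCompound()
check which commit 8f182270dfec added to __lru_cache_add() in mm/swap.c.  A
minimal sketch of that logic as it looked after that commit (reconstructed
from memory, so treat the details as approximate):

	static void __lru_cache_add(struct page *page)
	{
		struct pagevec *pvec = &get_cpu_var(lru_add_pvec);

		get_page(page);
		/*
		 * A compound page (i.e. a THP) flushes the per-CPU pagevec
		 * immediately, so it never lingers there holding an extra pin.
		 */
		if (!pagevec_add(pvec, page) || PageCompound(page))
			__pagevec_lru_add(pvec);
		put_cpu_var(lru_add_pvec);
	}

With THPs never sitting in the per-CPU pagevec, the lru_add_drain() calls
removed above have nothing left to unpin, which is why dropping them is safe.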