From: Andrew Morton
Date: Tue, 15 Dec 2020 12:34:02 -0800
To: aarcange@redhat.com, akpm@linux-foundation.org, alex.shi@linux.alibaba.com,
 alexander.duyck@gmail.com, aryabinin@virtuozzo.com, daniel.m.jordan@oracle.com,
 hannes@cmpxchg.org, hughd@google.com, iamjoonsoo.kim@lge.com, jannh@google.com,
 khlebnikov@yandex-team.ru, kirill.shutemov@linux.intel.com, kirill@shutemov.name,
 linux-mm@kvack.org, mgorman@techsingularity.net, mhocko@kernel.org,
 mhocko@suse.com, mika.penttila@nextfour.com, minchan@kernel.org,
 mm-commits@vger.kernel.org, richard.weiyang@gmail.com, rong.a.chen@intel.com,
 shakeelb@google.com, tglx@linutronix.de, tj@kernel.org,
 torvalds@linux-foundation.org, vbabka@suse.cz, vdavydov.dev@gmail.com,
 willy@infradead.org, yang.shi@linux.alibaba.com, ying.huang@intel.com
Subject: [patch 11/19] mm/vmscan: remove lruvec reget in move_pages_to_lru
Message-ID: <20201215203402.8d5-PP6AL%akpm@linux-foundation.org>
In-Reply-To: <20201215123253.954eca9a5ef4c0d52fd381fa@linux-foundation.org>

From: Alex Shi
Subject: mm/vmscan: remove lruvec reget in move_pages_to_lru

An isolated page cannot be recharged to a different memcg, because memcg
migration is not possible while the page is isolated.  All pages on the
list were isolated from the same lruvec (and isolation inhibits memcg
migration), so regetting the lruvec for each page is unnecessary.  Remove
the reget and assert the invariant with VM_BUG_ON_PAGE() instead.

Thanks to Alexander Duyck for pointing this out.

Link: https://lkml.kernel.org/r/1604566549-62481-12-git-send-email-alex.shi@linux.alibaba.com
Signed-off-by: Alex Shi
Acked-by: Hugh Dickins
Acked-by: Johannes Weiner
Cc: Konstantin Khlebnikov
Cc: Michal Hocko
Cc: Alexander Duyck
Cc: Andrea Arcangeli
Cc: Andrey Ryabinin
Cc: "Chen, Rong A"
Cc: Daniel Jordan
Cc: "Huang, Ying"
Cc: Jann Horn
Cc: Joonsoo Kim
Cc: Kirill A. Shutemov
Cc: Kirill A. Shutemov
Cc: Matthew Wilcox (Oracle)
Cc: Mel Gorman
Cc: Michal Hocko
Cc: Mika Penttilä
Cc: Minchan Kim
Cc: Shakeel Butt
Cc: Tejun Heo
Cc: Thomas Gleixner
Cc: Vladimir Davydov
Cc: Vlastimil Babka
Cc: Wei Yang
Cc: Yang Shi
Signed-off-by: Andrew Morton
---

 mm/vmscan.c |    7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

--- a/mm/vmscan.c~mm-vmscan-remove-lruvec-reget-in-move_pages_to_lru
+++ a/mm/vmscan.c
@@ -1886,7 +1886,12 @@ static unsigned noinline_for_stack move_
 			continue;
 		}
 
-		lruvec = mem_cgroup_page_lruvec(page, pgdat);
+		/*
+		 * All pages were isolated from the same lruvec (and isolation
+		 * inhibits memcg migration).
+		 */
+		VM_BUG_ON_PAGE(mem_cgroup_page_lruvec(page, page_pgdat(page))
+					!= lruvec, page);
 		lru = page_lru(page);
 		nr_pages = thp_nr_pages(page);
 
_
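
For context, here is an abridged sketch of what the move_pages_to_lru() loop
looks like after this change.  It is a simplification for illustration, not
the exact upstream code: the unevictable-page and dropped-refcount handling
inside the loop is elided, and helper details vary between kernel versions.

/*
 * Abridged sketch of move_pages_to_lru() after this patch.  The caller
 * holds the lruvec lock and every page on @list was isolated from
 * @lruvec, which is why the per-page lruvec lookup can become a
 * debug-only assertion.
 */
static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
						     struct list_head *list)
{
	int nr_pages, nr_moved = 0;
	struct page *page;
	enum lru_list lru;

	while (!list_empty(list)) {
		page = lru_to_page(list);
		list_del(&page->lru);

		/* ... unevictable and dropped-refcount handling elided ... */

		/*
		 * Previously: lruvec = mem_cgroup_page_lruvec(page, pgdat);
		 * The reget is unnecessary because memcg migration cannot
		 * happen while the page is isolated, so only assert that the
		 * page still belongs to the lruvec we were handed.
		 */
		VM_BUG_ON_PAGE(mem_cgroup_page_lruvec(page, page_pgdat(page))
					!= lruvec, page);

		lru = page_lru(page);
		nr_pages = thp_nr_pages(page);
		update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
		list_add(&page->lru, &lruvec->lists[lru]);
		nr_moved += nr_pages;
	}

	return nr_moved;
}

Since VM_BUG_ON_PAGE() compiles away without CONFIG_DEBUG_VM, the invariant
stays documented in the source while the per-page mem_cgroup_page_lruvec()
lookup disappears from production builds.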