From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 15 Dec 2020 14:20:50 -0800
From: Andrew Morton
To: aarcange@redhat.com, akpm@linux-foundation.org, alex.shi@linux.alibaba.com,
 alexander.duyck@gmail.com, aryabinin@virtuozzo.com, daniel.m.jordan@oracle.com,
 hannes@cmpxchg.org, hughd@google.com, iamjoonsoo.kim@lge.com, jannh@google.com,
 khlebnikov@yandex-team.ru, kirill.shutemov@linux.intel.com, kirill@shutemov.name,
 linux-mm@kvack.org, mgorman@techsingularity.net, mhocko@kernel.org, mhocko@suse.com,
 mika.penttila@nextfour.com, minchan@kernel.org, mm-commits@vger.kernel.org,
 richard.weiyang@gmail.com, rong.a.chen@intel.com, shakeelb@google.com,
 tglx@linutronix.de, tj@kernel.org, torvalds@linux-foundation.org, vbabka@suse.cz,
 vdavydov.dev@gmail.com, willy@infradead.org, yang.shi@linux.alibaba.com,
 ying.huang@intel.com
Subject: [patch 10/19] mm/lru: move lock into lru_note_cost
Message-ID: <20201215222050.WZsb-8EB6%akpm@linux-foundation.org>
In-Reply-To: <20201215123253.954eca9a5ef4c0d52fd381fa@linux-foundation.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Sender: owner-linux-mm@kvack.org

From: Alex Shi
Subject: mm/lru: move lock into lru_note_cost

We have to move lru_lock into lru_note_cost, since it
cycles up the memcg tree; this prepares for replacing the per-node
lru_lock with a per-lruvec one.  It's a bit ugly and may cost a bit more
locking, but the benefit of per-memcg locking should outweigh the loss.

Link: https://lkml.kernel.org/r/1604566549-62481-11-git-send-email-alex.shi@linux.alibaba.com
Signed-off-by: Alex Shi
Acked-by: Hugh Dickins
Acked-by: Johannes Weiner
Cc: Johannes Weiner
Cc: Alexander Duyck
Cc: Andrea Arcangeli
Cc: Andrey Ryabinin
Cc: "Chen, Rong A"
Cc: Daniel Jordan
Cc: "Huang, Ying"
Cc: Jann Horn
Cc: Joonsoo Kim
Cc: Kirill A. Shutemov
Cc: Kirill A. Shutemov
Cc: Konstantin Khlebnikov
Cc: Matthew Wilcox (Oracle)
Cc: Mel Gorman
Cc: Michal Hocko
Cc: Michal Hocko
Cc: Mika Penttilä
Cc: Minchan Kim
Cc: Shakeel Butt
Cc: Tejun Heo
Cc: Thomas Gleixner
Cc: Vladimir Davydov
Cc: Vlastimil Babka
Cc: Wei Yang
Cc: Yang Shi
Signed-off-by: Andrew Morton
---

 mm/swap.c       |    3 +++
 mm/vmscan.c     |    4 +---
 mm/workingset.c |    2 --
 3 files changed, 4 insertions(+), 5 deletions(-)

--- a/mm/swap.c~mm-lru-move-lock-into-lru_note_cost
+++ a/mm/swap.c
@@ -268,7 +268,9 @@ void lru_note_cost(struct lruvec *lruvec
 {
 	do {
 		unsigned long lrusize;
+		struct pglist_data *pgdat = lruvec_pgdat(lruvec);
 
+		spin_lock_irq(&pgdat->lru_lock);
 		/* Record cost event */
 		if (file)
 			lruvec->file_cost += nr_pages;
@@ -292,6 +294,7 @@ void lru_note_cost(struct lruvec *lruvec
 			lruvec->file_cost /= 2;
 			lruvec->anon_cost /= 2;
 		}
+		spin_unlock_irq(&pgdat->lru_lock);
 	} while ((lruvec = parent_lruvec(lruvec)));
 }
 
--- a/mm/vmscan.c~mm-lru-move-lock-into-lru_note_cost
+++ a/mm/vmscan.c
@@ -1971,19 +1971,17 @@ shrink_inactive_list(unsigned long nr_to
 	nr_reclaimed = shrink_page_list(&page_list, pgdat, sc, &stat, false);
 
 	spin_lock_irq(&pgdat->lru_lock);
-
 	move_pages_to_lru(lruvec, &page_list);
 
 	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
-	lru_note_cost(lruvec, file, stat.nr_pageout);
 	item = current_is_kswapd() ? PGSTEAL_KSWAPD : PGSTEAL_DIRECT;
 	if (!cgroup_reclaim(sc))
 		__count_vm_events(item, nr_reclaimed);
 	__count_memcg_events(lruvec_memcg(lruvec), item, nr_reclaimed);
 	__count_vm_events(PGSTEAL_ANON + file, nr_reclaimed);
-	spin_unlock_irq(&pgdat->lru_lock);
 
+	lru_note_cost(lruvec, file, stat.nr_pageout);
 	mem_cgroup_uncharge_list(&page_list);
 	free_unref_page_list(&page_list);
 
--- a/mm/workingset.c~mm-lru-move-lock-into-lru_note_cost
+++ a/mm/workingset.c
@@ -381,9 +381,7 @@ void workingset_refault(struct page *pag
 	if (workingset) {
 		SetPageWorkingset(page);
 		/* XXX: Move to lru_cache_add() when it supports new vs putback */
-		spin_lock_irq(&page_pgdat(page)->lru_lock);
 		lru_note_cost_page(page);
-		spin_unlock_irq(&page_pgdat(page)->lru_lock);
 		inc_lruvec_state(lruvec, WORKINGSET_RESTORE_BASE + file);
 	}
 out:
_