Subject: Re: [PATCH] mm/memcg: remove useless check on page->mem_cgroup
From: Alex Shi <alex.shi@linux.alibaba.com>
To: Johannes Weiner
Cc: Michal Hocko, Vladimir Davydov, Andrew Morton, cgroups@vger.kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Sat, 1 Aug 2020 11:58:41 +0800
Message-ID: <9338716f-ca0e-057f-8d94-03e2b3f70281@linux.alibaba.com>
In-Reply-To: <20200731151655.GB491801@cmpxchg.org>
References: <1596166480-22814-1-git-send-email-alex.shi@linux.alibaba.com>
 <20200731151655.GB491801@cmpxchg.org>
On 2020/7/31 11:16 PM, Johannes Weiner wrote:
>> 	if (!entry.val) {
>> 		memcg_memory_event(memcg, MEMCG_SWAP_FAIL);
> Uncharged readahead pages are gone, but I'm not 100% sure uncharged
> pages in general are gone. ISTR that the !page->mem_cgroup check in
> mem_cgroup_uncharge() prevented a crash - although that is of course a
> much broader interface, whereas the ones you change should only apply
> to LRU pages (which are hopefully all charged).
> 
> Nevertheless, to avoid unnecessary crashes if we discover that we've
> been wrong, how about leaving the branches for now, but adding a (new)
> VM_WARN_ON_ONCE_PAGE() to them?

Right, let's see if anything else unexpected happens, and then take
action. Here is the patch:

From 28893cf8e55b98665cce58c0ba6d54aeafb63a62 Mon Sep 17 00:00:00 2001
From: Alex Shi <alex.shi@linux.alibaba.com>
Date: Sat, 1 Aug 2020 10:43:55 +0800
Subject: [PATCH] mm/memcg: warning on !memcg after readahead page charged

Since readahead pages are charged to the memcg too, in theory we no
longer need to check for this exception. Before removing the checks
entirely, add a warning for the unexpected !memcg case.

Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Vladimir Davydov
Cc: Andrew Morton
Cc: cgroups@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 include/linux/mmdebug.h |  8 ++++++++
 mm/memcontrol.c         | 15 ++++++++-------
 2 files changed, 16 insertions(+), 7 deletions(-)

diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
index 2ad72d2c8cc5..639e98a3384e 100644
--- a/include/linux/mmdebug.h
+++ b/include/linux/mmdebug.h
@@ -37,6 +37,13 @@
 			BUG();						\
 		}							\
 	} while (0)
+#define VM_WARN_ON_ONCE_PAGE(cond, page)				\
+	do {								\
+		if (unlikely(cond)) {					\
+			dump_page(page, "VM_WARN_ON_ONCE_PAGE(" __stringify(cond)")");\
+			WARN_ON_ONCE(cond);				\
+		}							\
+	} while (0)
 #define VM_WARN_ON(cond) (void)WARN_ON(cond)
 #define VM_WARN_ON_ONCE(cond) (void)WARN_ON_ONCE(cond)
 #define VM_WARN_ONCE(cond, format...) (void)WARN_ONCE(cond, format)
@@ -48,6 +55,7 @@
 #define VM_BUG_ON_MM(cond, mm) VM_BUG_ON(cond)
 #define VM_WARN_ON(cond) BUILD_BUG_ON_INVALID(cond)
 #define VM_WARN_ON_ONCE(cond) BUILD_BUG_ON_INVALID(cond)
+#define VM_WARN_ON_ONCE_PAGE(cond, page) BUILD_BUG_ON_INVALID(cond)
 #define VM_WARN_ONCE(cond, format...) BUILD_BUG_ON_INVALID(cond)
 #define VM_WARN(cond, format...) BUILD_BUG_ON_INVALID(cond)
 #endif
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index e84c2b5596f2..0174c31f6491 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1322,10 +1322,8 @@ struct lruvec *mem_cgroup_page_lruvec(struct page *page, struct pglist_data *pgd
 	}
 
 	memcg = page->mem_cgroup;
-	/*
-	 * Swapcache readahead pages are added to the LRU - and
-	 * possibly migrated - before they are charged.
-	 */
+	/* Readahead pages are charged too; catch any page left uncharged */
+	VM_WARN_ON_ONCE_PAGE(!memcg, page);
 	if (!memcg)
 		memcg = root_mem_cgroup;
 
@@ -6897,8 +6895,9 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
 	if (newpage->mem_cgroup)
 		return;
 
-	/* Swapcache readahead pages can get replaced before being charged */
 	memcg = oldpage->mem_cgroup;
+	/* Readahead pages are charged too; catch any page left uncharged */
+	VM_WARN_ON_ONCE_PAGE(!memcg, oldpage);
 	if (!memcg)
 		return;
 
@@ -7095,7 +7094,8 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
 
 	memcg = page->mem_cgroup;
 
-	/* Readahead page, never charged */
+	/* Readahead pages are charged too; catch any page left uncharged */
+	VM_WARN_ON_ONCE_PAGE(!memcg, page);
 	if (!memcg)
 		return;
 
@@ -7159,7 +7159,8 @@ int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
 
 	memcg = page->mem_cgroup;
 
-	/* Readahead page, never charged */
+	/* Readahead pages are charged too; catch any page left uncharged */
+	VM_WARN_ON_ONCE_PAGE(!memcg, page);
 	if (!memcg)
 		return 0;
 
-- 
1.8.3.1
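[Editor's note: for readers unfamiliar with the warn-once pattern the new
macro relies on, below is a minimal userspace sketch of its behavior. It
is an approximation only, not part of the patch: the kernel's dump_page()
and WARN_ON_ONCE() are stood in by fprintf() and a static per-call-site
flag, and the macro name WARN_ON_ONCE_OBJ is hypothetical. As in the
patch, diagnostics are dumped every time the condition holds, while the
noisier warning fires at most once per call site.]

#include <stdio.h>
#include <stdbool.h>

/*
 * Userspace approximation of the patch's VM_WARN_ON_ONCE_PAGE(cond, page):
 * dump diagnostic state on every hit, but emit the warning only once per
 * call site -- the effect WARN_ON_ONCE() achieves in the kernel.
 */
#define WARN_ON_ONCE_OBJ(cond, desc)					\
	do {								\
		if (cond) {						\
			static bool warned; /* one flag per call site */\
			fprintf(stderr, "dump: %s (%s)\n", desc, #cond);\
			if (!warned) {					\
				warned = true;				\
				fprintf(stderr, "WARNING at %s:%d\n",	\
					__FILE__, __LINE__);		\
			}						\
		}							\
	} while (0)

int main(void)
{
	for (int i = 0; i < 3; i++)
		WARN_ON_ONCE_OBJ(1, "page-like object");
	/* "dump:" prints three times; "WARNING" prints only once. */
	return 0;
}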