Subject: Re: [PATCH] mm/memcg: remove useless check on page->mem_cgroup
From: Alex Shi
To: Michal Hocko
Cc: Johannes Weiner, Vladimir Davydov, Andrew Morton, cgroups@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Tue, 4 Aug 2020 15:35:08 +0800
In-Reply-To: <20200803081815.GD5174@dhcp22.suse.cz>

On 2020/8/3 at 4:18 PM, Michal Hocko wrote:
> On Sat 01-08-20 11:58:41, Alex Shi wrote:
>>
>> On 2020/7/31 at 11:16 PM, Johannes Weiner wrote:
>>>>		if (!entry.val) {
>>>>			memcg_memory_event(memcg, MEMCG_SWAP_FAIL);
>>> Uncharged readahead pages are gone, but I'm not
>>> 100% sure uncharged
>>> pages in general are gone. ISTR that the !page->mem_cgroup check in
>>> mem_cgroup_uncharge() prevented a crash - although that is of course a
>>> much broader interface, whereas the ones you change should only apply
>>> to LRU pages (which are hopefully all charged).
>>>
>>> Nevertheless, to avoid unnecessary crashes if we discover that we've
>>> been wrong, how about leaving the branches for now, but adding a (new)
>>> VM_WARN_ON_ONCE_PAGE() to them?
>
> Agreed!
>
>> Right, let's see whether anything unexpected still happens, and act on
>> that then. So here is the patch:
>>
>> From 28893cf8e55b98665cce58c0ba6d54aeafb63a62 Mon Sep 17 00:00:00 2001
>> From: Alex Shi
>> Date: Sat, 1 Aug 2020 10:43:55 +0800
>> Subject: [PATCH] mm/memcg: warn on !memcg after readahead pages are charged
>>
>> Since readahead pages are charged to a memcg too, in theory we no longer
>> have to check for this exception. Before removing the checks entirely,
>> add a warning for the unexpected !memcg case.
>
> I would find it useful to mention since when this assumption holds.
>
>> Signed-off-by: Alex Shi
>> Cc: Johannes Weiner
>> Cc: Michal Hocko
>> Cc: Vladimir Davydov
>> Cc: Andrew Morton
>> Cc: cgroups@vger.kernel.org
>> Cc: linux-mm@kvack.org
>> Cc: linux-kernel@vger.kernel.org
>> ---
>>  include/linux/mmdebug.h |  8 ++++++++
>>  mm/memcontrol.c         | 15 ++++++++-------
>>  2 files changed, 16 insertions(+), 7 deletions(-)
>>
>> diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
>> index 2ad72d2c8cc5..639e98a3384e 100644
>> --- a/include/linux/mmdebug.h
>> +++ b/include/linux/mmdebug.h
>> @@ -37,6 +37,13 @@
>>  			BUG();						\
>>  		}							\
>>  	} while (0)
>> +#define VM_WARN_ON_ONCE_PAGE(cond, page)				\
>> +	do {								\
>> +		if (unlikely(cond)) {					\
>> +			dump_page(page, "VM_WARN_ON_ONCE_PAGE(" __stringify(cond)")");\
>> +			WARN_ON_ONCE(cond);				\
>> +		}							\
>
> This is a bit strange behavior. You dump the page on each occurrence but
> warn only once. I would expect either "once" semantics for all of the
> output, or a dump on each occurrence - because if the whole point is to
> reduce the amount of output, then the above doesn't serve that purpose.
>

Yes, any extra dump_page output would likely just be ignored by users, and
for the purpose of reducing dmesg noise, warning (and dumping) only once is
better. v2 below makes the whole output "once". Thanks for the comment!

Alex

--
From 3cee031d50625733a64b58240d0e6f8151e5299c Mon Sep 17 00:00:00 2001
From: Alex Shi
Date: Sat, 1 Aug 2020 10:43:55 +0800
Subject: [PATCH v2] mm/memcg: warn on !memcg after readahead pages are charged

Since readahead pages are charged to a memcg too, in theory we no longer
have to check for this exception. Before removing the checks entirely,
add a warning for the unexpected !memcg case.

Signed-off-by: Alex Shi
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Vladimir Davydov
Cc: Andrew Morton
Cc: cgroups@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 include/linux/mmdebug.h | 13 +++++++++++++
 mm/memcontrol.c         | 15 ++++++++-------
 2 files changed, 21 insertions(+), 7 deletions(-)

diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
index 2ad72d2c8cc5..698eaf56f89f 100644
--- a/include/linux/mmdebug.h
+++ b/include/linux/mmdebug.h
@@ -37,6 +37,18 @@
 			BUG();						\
 		}							\
 	} while (0)
+#define VM_WARN_ON_ONCE_PAGE(condition, page)	({			\
+	static bool __section(.data.once) __warned;			\
+	int __ret_warn_once = !!(condition);				\
+									\
+	if (unlikely(__ret_warn_once && !__warned)) {			\
+		dump_page(page, "VM_WARN_ON_ONCE_PAGE(" __stringify(condition)")");\
+		__warned = true;					\
+		WARN_ON(1);						\
+	}								\
+	unlikely(__ret_warn_once);					\
+})
+
 #define VM_WARN_ON(cond) (void)WARN_ON(cond)
 #define VM_WARN_ON_ONCE(cond) (void)WARN_ON_ONCE(cond)
 #define VM_WARN_ONCE(cond, format...) (void)WARN_ONCE(cond, format)
@@ -48,6 +60,7 @@
 #define VM_BUG_ON_MM(cond, mm) VM_BUG_ON(cond)
 #define VM_WARN_ON(cond) BUILD_BUG_ON_INVALID(cond)
 #define VM_WARN_ON_ONCE(cond) BUILD_BUG_ON_INVALID(cond)
+#define VM_WARN_ON_ONCE_PAGE(cond, page) BUILD_BUG_ON_INVALID(cond)
 #define VM_WARN_ONCE(cond, format...) BUILD_BUG_ON_INVALID(cond)
 #define VM_WARN(cond, format...) BUILD_BUG_ON_INVALID(cond)
 #endif
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 130093bdf74b..299382fc55a9 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1322,10 +1322,8 @@ struct lruvec *mem_cgroup_page_lruvec(struct page *page, struct pglist_data *pgd
 	}

 	memcg = page->mem_cgroup;
-	/*
-	 * Swapcache readahead pages are added to the LRU - and
-	 * possibly migrated - before they are charged.
-	 */
+	/* Readahead pages are charged too; warn if any page is uncharged */
+	VM_WARN_ON_ONCE_PAGE(!memcg, page);
 	if (!memcg)
 		memcg = root_mem_cgroup;

@@ -6906,8 +6904,9 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
 	if (newpage->mem_cgroup)
 		return;

-	/* Swapcache readahead pages can get replaced before being charged */
 	memcg = oldpage->mem_cgroup;
+	/* Readahead pages are charged too; warn if any page is uncharged */
+	VM_WARN_ON_ONCE_PAGE(!memcg, oldpage);
 	if (!memcg)
 		return;

@@ -7104,7 +7103,8 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)

 	memcg = page->mem_cgroup;

-	/* Readahead page, never charged */
+	/* Readahead pages are charged too; warn if any page is uncharged */
+	VM_WARN_ON_ONCE_PAGE(!memcg, page);
 	if (!memcg)
 		return;

@@ -7168,7 +7168,8 @@ int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)

 	memcg = page->mem_cgroup;

-	/* Readahead page, never charged */
+	/* Readahead pages are charged too; warn if any page is uncharged */
+	VM_WARN_ON_ONCE_PAGE(!memcg, page);
 	if (!memcg)
 		return 0;

--
1.8.3.1