From mboxrd@z Thu Jan  1 00:00:00 1970
Subject: Re: [PATCH 11/11] mm: enlarge the "int nr_pages" parameter of update_lru_size()
To: Yu Zhao, Andrew Morton, Hugh Dickins
Cc: Michal Hocko, Johannes Weiner, Vladimir Davydov, Roman Gushchin,
 Vlastimil Babka, Matthew Wilcox, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
References: <20201207220949.830352-1-yuzhao@google.com> <20201207220949.830352-12-yuzhao@google.com>
From: Alex Shi <alex.shi@linux.alibaba.com>
Message-ID: <9b558a41-489f-c92f-4246-08472c45c678@linux.alibaba.com>
Date: Tue, 8 Dec 2020 17:15:07 +0800
In-Reply-To: <20201207220949.830352-12-yuzhao@google.com>

Reviewed-by: Alex Shi

On 2020/12/8 6:09 AM, Yu Zhao wrote:
> update_lru_sizes() defines an unsigned long argument and passes it as
> nr_pages to update_lru_size().
> Though this isn't causing any overflows I'm aware of, it's a bad idea
> to go through the demotion given that we have recently stumbled on a
> related type promotion problem fixed by commit 2da9f6305f30
> ("mm/vmscan: fix NR_ISOLATED_FILE corruption on 64-bit").
>
> Note that the underlying counters are already in long. This is another
> reason we shouldn't have the demotion.
>
> This patch enlarges all relevant parameters on the path to the final
> underlying counters:
>   update_lru_size(int -> long)
>     if memcg:
>       __mod_lruvec_state(int -> long)
>         if smp:
>           __mod_node_page_state(long)
>         else:
>           __mod_node_page_state(int -> long)
>       __mod_memcg_lruvec_state(int -> long)
>         __mod_memcg_state(int -> long)
>     else:
>       __mod_lruvec_state(int -> long)
>         if smp:
>           __mod_node_page_state(long)
>         else:
>           __mod_node_page_state(int -> long)
>
>   __mod_zone_page_state(long)
>
>   if memcg:
>     mem_cgroup_update_lru_size(int -> long)
>
> Note that __mod_node_page_state() for the smp case and
> __mod_zone_page_state() already use long. So this change also fixes
> the inconsistency.
>
> Signed-off-by: Yu Zhao
> ---
>  include/linux/memcontrol.h | 10 +++++-----
>  include/linux/mm_inline.h  |  2 +-
>  include/linux/vmstat.h     |  6 +++---
>  mm/memcontrol.c            | 10 +++++-----
>  4 files changed, 14 insertions(+), 14 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 3febf64d1b80..1454201abb8d 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -810,7 +810,7 @@ static inline bool mem_cgroup_online(struct mem_cgroup *memcg)
>  int mem_cgroup_select_victim_node(struct mem_cgroup *memcg);
>  
>  void mem_cgroup_update_lru_size(struct lruvec *lruvec, enum lru_list lru,
> -				int zid, int nr_pages);
> +				int zid, long nr_pages);
>  
>  static inline
>  unsigned long mem_cgroup_get_zone_lru_size(struct lruvec *lruvec,
> @@ -896,7 +896,7 @@ static inline unsigned long memcg_page_state_local(struct mem_cgroup *memcg,
>  	return x;
>  }
>  
> -void __mod_memcg_state(struct mem_cgroup *memcg, int idx, int val);
> +void __mod_memcg_state(struct mem_cgroup *memcg, int idx, long val);
>  
>  /* idx can be of type enum memcg_stat_item or node_stat_item */
>  static inline void mod_memcg_state(struct mem_cgroup *memcg,
> @@ -948,7 +948,7 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,
>  }
>  
>  void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
> -			      int val);
> +			      long val);
>  void __mod_lruvec_kmem_state(void *p, enum node_stat_item idx, int val);
>  
>  static inline void mod_lruvec_kmem_state(void *p, enum node_stat_item idx,
> @@ -1346,7 +1346,7 @@ static inline unsigned long memcg_page_state_local(struct mem_cgroup *memcg,
>  
>  static inline void __mod_memcg_state(struct mem_cgroup *memcg,
>  				     int idx,
> -				     int nr)
> +				     long nr)
>  {
>  }
>  
> @@ -1369,7 +1369,7 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,
>  }
>  
>  static inline void __mod_memcg_lruvec_state(struct lruvec *lruvec,
> -					    enum node_stat_item idx, int val)
> +					    enum node_stat_item idx, long val)
>  {
>  }
>  
> diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
> index 355ea1ee32bd..18e85071b44a 100644
> --- a/include/linux/mm_inline.h
> +++ b/include/linux/mm_inline.h
> @@ -26,7 +26,7 @@ static inline int page_is_file_lru(struct page *page)
>  
>  static __always_inline void update_lru_size(struct lruvec *lruvec,
>  				enum lru_list lru, enum zone_type zid,
> -				int nr_pages)
> +				long nr_pages)
>  {
>  	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
>  
> diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
> index 773135fc6e19..230922179ba0 100644
> --- a/include/linux/vmstat.h
> +++ b/include/linux/vmstat.h
> @@ -310,7 +310,7 @@ static inline void __mod_zone_page_state(struct zone *zone,
>  }
>  
>  static inline void __mod_node_page_state(struct pglist_data *pgdat,
> -			enum node_stat_item item, int delta)
> +			enum node_stat_item item, long delta)
>  {
>  	if (vmstat_item_in_bytes(item)) {
>  		VM_WARN_ON_ONCE(delta & (PAGE_SIZE - 1));
> @@ -453,7 +453,7 @@ static inline const char *vm_event_name(enum vm_event_item item)
>  #ifdef CONFIG_MEMCG
>  
>  void __mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
> -			int val);
> +			long val);
>  
>  static inline void mod_lruvec_state(struct lruvec *lruvec,
>  				    enum node_stat_item idx, int val)
> @@ -481,7 +481,7 @@ static inline void mod_lruvec_page_state(struct page *page,
>  #else
>  
>  static inline void __mod_lruvec_state(struct lruvec *lruvec,
> -				      enum node_stat_item idx, int val)
> +				      enum node_stat_item idx, long val)
>  {
>  	__mod_node_page_state(lruvec_pgdat(lruvec), idx, val);
>  }
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index de17f02d27ad..c3fe5880c42d 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -758,7 +758,7 @@ mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_node *mctz)
>   * @idx: the stat item - can be enum memcg_stat_item or enum node_stat_item
>   * @val: delta to add to the counter, can be negative
>   */
> -void __mod_memcg_state(struct mem_cgroup *memcg, int idx, int val)
> +void __mod_memcg_state(struct mem_cgroup *memcg, int idx, long val)
>  {
>  	long x, threshold = MEMCG_CHARGE_BATCH;
>  
> @@ -796,7 +796,7 @@ parent_nodeinfo(struct mem_cgroup_per_node *pn, int nid)
>  }
>  
>  void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
> -			      int val)
> +			      long val)
>  {
>  	struct mem_cgroup_per_node *pn;
>  	struct mem_cgroup *memcg;
> @@ -837,7 +837,7 @@ void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
>   * change of state at this level: per-node, per-cgroup, per-lruvec.
>   */
>  void __mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
> -			int val)
> +			long val)
>  {
>  	/* Update node */
>  	__mod_node_page_state(lruvec_pgdat(lruvec), idx, val);
> @@ -1407,7 +1407,7 @@ struct lruvec *lock_page_lruvec_irqsave(struct page *page, unsigned long *flags)
>   * so as to allow it to check that lru_size 0 is consistent with list_empty).
>   */
>  void mem_cgroup_update_lru_size(struct lruvec *lruvec, enum lru_list lru,
> -				int zid, int nr_pages)
> +				int zid, long nr_pages)
>  {
>  	struct mem_cgroup_per_node *mz;
>  	unsigned long *lru_size;
> @@ -1424,7 +1424,7 @@ void mem_cgroup_update_lru_size(struct lruvec *lruvec, enum lru_list lru,
>  
>  	size = *lru_size;
>  	if (WARN_ONCE(size < 0,
> -		"%s(%p, %d, %d): lru_size %ld\n",
> +		"%s(%p, %d, %ld): lru_size %ld\n",
>  		__func__, lruvec, lru, nr_pages, size)) {
>  		VM_BUG_ON(1);
>  		*lru_size = 0;
>