From: Jakub Kicinski <kuba@kernel.org>
To: akpm@linux-foundation.org
Cc: linux-mm@kvack.org, kernel-team@fb.com, tj@kernel.org, hannes@cmpxchg.org, chris@chrisdown.name, cgroups@vger.kernel.org, shakeelb@google.com, mhocko@kernel.org, Jakub Kicinski <kuba@kernel.org>
Subject: [PATCH mm v5 RESEND 3/4] mm: move cgroup high memory limit setting into struct page_counter
Date: Wed, 20 May 2020 17:24:10 -0700
Message-Id: <20200521002411.3963032-4-kuba@kernel.org>
X-Mailer: git-send-email 2.25.4
In-Reply-To: <20200521002411.3963032-1-kuba@kernel.org>
References: <20200521002411.3963032-1-kuba@kernel.org>
High memory limit is currently recorded directly in struct mem_cgroup.
We are about to add a high limit for swap, so move the field to struct
page_counter and add some helpers.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
--
v5: make page_counter_set_high() a static inline in the header
v4: new patch
---
 include/linux/memcontrol.h   |  3 ---
 include/linux/page_counter.h | 13 +++++++++++++
 mm/memcontrol.c              | 17 +++++++++--------
 3 files changed, 22 insertions(+), 11 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index e0bcef180672..d726867d8af9 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -206,9 +206,6 @@ struct mem_cgroup {
 	struct page_counter kmem;
 	struct page_counter tcpmem;
 
-	/* Upper bound of normal memory consumption range */
-	unsigned long high;
-
 	/* Range enforcement for interrupt charges */
 	struct work_struct high_work;
 
diff --git a/include/linux/page_counter.h b/include/linux/page_counter.h
index bab7e57f659b..6a89ff948412 100644
--- a/include/linux/page_counter.h
+++ b/include/linux/page_counter.h
@@ -10,6 +10,7 @@ struct page_counter {
 	atomic_long_t usage;
 	unsigned long min;
 	unsigned long low;
+	unsigned long high;
 	unsigned long max;
 	struct page_counter *parent;
 
@@ -55,6 +56,13 @@ bool page_counter_try_charge(struct page_counter *counter,
 void page_counter_uncharge(struct page_counter *counter, unsigned long nr_pages);
 void page_counter_set_min(struct page_counter *counter, unsigned long nr_pages);
 void page_counter_set_low(struct page_counter *counter, unsigned long nr_pages);
+
+static inline void page_counter_set_high(struct page_counter *counter,
+					 unsigned long nr_pages)
+{
+	WRITE_ONCE(counter->high, nr_pages);
+}
+
 int page_counter_set_max(struct page_counter *counter, unsigned long nr_pages);
 int page_counter_memparse(const char *buf, const char *max,
 			  unsigned long *nr_pages);
@@ -64,4 +72,9 @@ static inline void page_counter_reset_watermark(struct page_counter *counter)
 	counter->watermark = page_counter_read(counter);
 }
 
+static inline bool page_counter_is_above_high(struct page_counter *counter)
+{
+	return page_counter_read(counter) > READ_ONCE(counter->high);
+}
+
 #endif /* _LINUX_PAGE_COUNTER_H */
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index dd8605a9137a..d4b7bc80aa38 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2233,7 +2233,7 @@ static void reclaim_high(struct mem_cgroup *memcg,
 			 gfp_t gfp_mask)
 {
 	do {
-		if (page_counter_read(&memcg->memory) <= READ_ONCE(memcg->high))
+		if (!page_counter_is_above_high(&memcg->memory))
 			continue;
 		memcg_memory_event(memcg, MEMCG_HIGH);
 		try_to_free_mem_cgroup_pages(memcg, nr_pages, gfp_mask, true);
@@ -2326,7 +2326,7 @@ static u64 mem_find_max_overage(struct mem_cgroup *memcg)
 
 	do {
 		overage = calculate_overage(page_counter_read(&memcg->memory),
-					    READ_ONCE(memcg->high));
+					    READ_ONCE(memcg->memory.high));
 		max_overage = max(overage, max_overage);
 	} while ((memcg = parent_mem_cgroup(memcg)) &&
 		 !mem_cgroup_is_root(memcg));
@@ -2585,7 +2585,7 @@ static int try_charge(struct mem_cgroup *memcg, gfp_t gfp_mask,
 	 * reclaim, the cost of mismatch is negligible.
 	 */
 	do {
-		if (page_counter_read(&memcg->memory) > READ_ONCE(memcg->high)) {
+		if (page_counter_is_above_high(&memcg->memory)) {
 			/* Don't bother a random interrupted task */
 			if (in_interrupt()) {
 				schedule_work(&memcg->high_work);
@@ -4286,7 +4286,7 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages,
 
 	while ((parent = parent_mem_cgroup(memcg))) {
 		unsigned long ceiling = min(READ_ONCE(memcg->memory.max),
-					    READ_ONCE(memcg->high));
+					    READ_ONCE(memcg->memory.high));
 		unsigned long used = page_counter_read(&memcg->memory);
 
 		*pheadroom = min(*pheadroom, ceiling - min(ceiling, used));
@@ -5011,7 +5011,7 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
 	if (IS_ERR(memcg))
 		return ERR_CAST(memcg);
 
-	WRITE_ONCE(memcg->high, PAGE_COUNTER_MAX);
+	page_counter_set_high(&memcg->memory, PAGE_COUNTER_MAX);
 	memcg->soft_limit = PAGE_COUNTER_MAX;
 	if (parent) {
 		memcg->swappiness = mem_cgroup_swappiness(parent);
@@ -5164,7 +5164,7 @@ static void mem_cgroup_css_reset(struct cgroup_subsys_state *css)
 	page_counter_set_max(&memcg->tcpmem, PAGE_COUNTER_MAX);
 	page_counter_set_min(&memcg->memory, 0);
 	page_counter_set_low(&memcg->memory, 0);
-	WRITE_ONCE(memcg->high, PAGE_COUNTER_MAX);
+	page_counter_set_high(&memcg->memory, PAGE_COUNTER_MAX);
 	memcg->soft_limit = PAGE_COUNTER_MAX;
 	memcg_wb_domain_size_changed(memcg);
 }
@@ -5984,7 +5984,8 @@ static ssize_t memory_low_write(struct kernfs_open_file *of,
 
 static int memory_high_show(struct seq_file *m, void *v)
 {
-	return seq_puts_memcg_tunable(m, READ_ONCE(mem_cgroup_from_seq(m)->high));
+	return seq_puts_memcg_tunable(m,
+		READ_ONCE(mem_cgroup_from_seq(m)->memory.high));
 }
 
 static ssize_t memory_high_write(struct kernfs_open_file *of,
@@ -6001,7 +6002,7 @@ static ssize_t memory_high_write(struct kernfs_open_file *of,
 	if (err)
 		return err;
 
-	WRITE_ONCE(memcg->high, high);
+	page_counter_set_high(&memcg->memory, high);
 
 	for (;;) {
 		unsigned long nr_pages = page_counter_read(&memcg->memory);
-- 
2.25.4
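
The conversion above is mechanical, but the pattern is worth spelling out:
the high mark now lives next to usage in struct page_counter, every writer
funnels through page_counter_set_high(), and the recurring "usage > high"
comparison becomes page_counter_is_above_high(). What follows is a minimal,
self-contained userspace sketch of that pattern, an illustration rather
than the kernel code: the kernel's lockless WRITE_ONCE()/READ_ONCE()
accessors are approximated with C11 relaxed atomics, the struct is trimmed
to the two fields this patch touches, and main() is purely illustrative.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Reduced stand-in for struct page_counter: just usage and high. */
struct page_counter {
	atomic_long usage;	/* current consumption, in pages */
	atomic_long high;	/* soft limit; exceeding it triggers reclaim */
};

static long page_counter_read(struct page_counter *counter)
{
	return atomic_load_explicit(&counter->usage, memory_order_relaxed);
}

/* Mirrors the new helper: WRITE_ONCE(counter->high, nr_pages). */
static void page_counter_set_high(struct page_counter *counter,
				  long nr_pages)
{
	atomic_store_explicit(&counter->high, nr_pages,
			      memory_order_relaxed);
}

/* Mirrors: page_counter_read(counter) > READ_ONCE(counter->high). */
static bool page_counter_is_above_high(struct page_counter *counter)
{
	return page_counter_read(counter) >
	       atomic_load_explicit(&counter->high, memory_order_relaxed);
}

int main(void)
{
	struct page_counter memory = { .usage = 600, .high = 512 };

	if (page_counter_is_above_high(&memory))
		printf("above high mark: would schedule reclaim work\n");

	/* Analogous to a write to the cgroup's memory.high file. */
	page_counter_set_high(&memory, 1024);
	printf("still above high? %d\n", page_counter_is_above_high(&memory));
	return 0;
}

The relaxed accessors reflect why the kernel side uses READ_ONCE() and
WRITE_ONCE(): the high mark is consulted in the charge path without any
lock held, so loads and stores must tolerate concurrent updates.
Centralizing them in helpers keeps that convention in one place for the
swap high limit this series adds next.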