From: Waiman Long
To: Johannes Weiner, Michal Hocko, Vladimir Davydov, Andrew Morton,
	Tejun Heo, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Vlastimil Babka, Roman Gushchin
Cc: linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-mm@kvack.org,
	Shakeel Butt, Muchun Song, Alex Shi, Chris Down, Yafang Shao,
	Alexander Duyck, Wei Yang, Masayoshi Mizuma, Waiman Long
Subject: [PATCH 2/5] mm/memcg: Introduce obj_cgroup_uncharge_mod_state()
Date: Fri, 9 Apr 2021 19:18:39 -0400
Message-Id: <20210409231842.8840-3-longman@redhat.com>
In-Reply-To: <20210409231842.8840-1-longman@redhat.com>
References: <20210409231842.8840-1-longman@redhat.com>

In memcg_slab_free_hook()/pcpu_memcg_free_hook(), obj_cgroup_uncharge()
is followed by mod_objcg_state()/mod_memcg_state(). Each of these
function calls goes through a separate irq_save/irq_restore cycle. That
is inefficient. Introduce a new function obj_cgroup_uncharge_mod_state()
that combines them with a single irq_save/irq_restore cycle.
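For illustration only, a minimal userspace sketch of the idea follows; the
fake_irq_save()/fake_irq_restore() helpers and the old_*/new_* functions are
made-up stand-ins (not kernel APIs) that merely count how many interrupt
save/restore cycles each call pattern performs:

#include <stdio.h>

static int irq_cycles;

/* Stand-ins for local_irq_save()/local_irq_restore(): only count cycles. */
static void fake_irq_save(unsigned long *flags)   { (void)flags; irq_cycles++; }
static void fake_irq_restore(unsigned long flags) { (void)flags; }

/* Old pattern: uncharge and stat update each take their own cycle. */
static void old_uncharge(void)
{
	unsigned long flags;

	fake_irq_save(&flags);
	/* ... refill the per-cpu object stock ... */
	fake_irq_restore(flags);
}

static void old_mod_state(void)
{
	unsigned long flags;

	fake_irq_save(&flags);
	/* ... update the memcg/lruvec vmstat counter ... */
	fake_irq_restore(flags);
}

/* New pattern: both steps done under a single save/restore cycle. */
static void new_uncharge_mod_state(void)
{
	unsigned long flags;

	fake_irq_save(&flags);
	/* ... refill the per-cpu object stock ... */
	/* ... update the memcg/lruvec vmstat counter ... */
	fake_irq_restore(flags);
}

int main(void)
{
	irq_cycles = 0;
	old_uncharge();
	old_mod_state();
	printf("old pattern: %d irq save/restore cycles\n", irq_cycles);

	irq_cycles = 0;
	new_uncharge_mod_state();
	printf("new pattern: %d irq save/restore cycle\n", irq_cycles);
	return 0;
}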
Signed-off-by: Waiman Long
---
 include/linux/memcontrol.h |  2 ++
 mm/memcontrol.c            | 31 +++++++++++++++++++++++++++----
 mm/percpu.c                |  9 ++-------
 mm/slab.h                  |  6 +++---
 4 files changed, 34 insertions(+), 14 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 95f12996e66c..6890f999c1a3 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1592,6 +1592,8 @@ struct obj_cgroup *get_obj_cgroup_from_current(void);
 
 int obj_cgroup_charge(struct obj_cgroup *objcg, gfp_t gfp, size_t size);
 void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size);
+void obj_cgroup_uncharge_mod_state(struct obj_cgroup *objcg, size_t size,
+				   struct pglist_data *pgdat, int idx);
 
 extern struct static_key_false memcg_kmem_enabled_key;
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index d66e1e38f8ac..b19100c68aa0 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3225,12 +3225,9 @@ static bool obj_stock_flush_required(struct memcg_stock_pcp *stock,
 	return false;
 }
 
-static void refill_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes)
+static void __refill_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes)
 {
 	struct memcg_stock_pcp *stock;
-	unsigned long flags;
-
-	local_irq_save(flags);
 
 	stock = this_cpu_ptr(&memcg_stock);
 	if (stock->cached_objcg != objcg) { /* reset if necessary */
@@ -3243,7 +3240,14 @@ static void refill_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes)
 
 	if (stock->nr_bytes > PAGE_SIZE)
 		drain_obj_stock(stock);
+}
+
+static void refill_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes)
+{
+	unsigned long flags;
 
+	local_irq_save(flags);
+	__refill_obj_stock(objcg, nr_bytes);
 	local_irq_restore(flags);
 }
 
@@ -3292,6 +3296,25 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
 	refill_obj_stock(objcg, size);
 }
 
+void obj_cgroup_uncharge_mod_state(struct obj_cgroup *objcg, size_t size,
+				   struct pglist_data *pgdat, int idx)
+{
+	unsigned long flags;
+	struct mem_cgroup *memcg;
+	struct lruvec *lruvec = NULL;
+
+	local_irq_save(flags);
+	__refill_obj_stock(objcg, size);
+
+	rcu_read_lock();
+	memcg = obj_cgroup_memcg(objcg);
+	if (pgdat)
+		lruvec = mem_cgroup_lruvec(memcg, pgdat);
+	__mod_memcg_lruvec_state(memcg, lruvec, idx, -(int)size);
+	rcu_read_unlock();
+	local_irq_restore(flags);
+}
+
 #endif /* CONFIG_MEMCG_KMEM */
 
 /*
diff --git a/mm/percpu.c b/mm/percpu.c
index 6596a0a4286e..24ecd51c688c 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -1631,13 +1631,8 @@ static void pcpu_memcg_free_hook(struct pcpu_chunk *chunk, int off, size_t size)
 	objcg = chunk->obj_cgroups[off >> PCPU_MIN_ALLOC_SHIFT];
 	chunk->obj_cgroups[off >> PCPU_MIN_ALLOC_SHIFT] = NULL;
 
-	obj_cgroup_uncharge(objcg, size * num_possible_cpus());
-
-	rcu_read_lock();
-	mod_memcg_state(obj_cgroup_memcg(objcg), MEMCG_PERCPU_B,
-			-(size * num_possible_cpus()));
-	rcu_read_unlock();
-
+	obj_cgroup_uncharge_mod_state(objcg, size * num_possible_cpus(),
+				      NULL, MEMCG_PERCPU_B);
 	obj_cgroup_put(objcg);
 }
 
diff --git a/mm/slab.h b/mm/slab.h
index bc6c7545e487..677cdc52e641 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -366,9 +366,9 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s_orig,
 			continue;
 
 		objcgs[off] = NULL;
-		obj_cgroup_uncharge(objcg, obj_full_size(s));
-		mod_objcg_state(objcg, page_pgdat(page), cache_vmstat_idx(s),
-				-obj_full_size(s));
+		obj_cgroup_uncharge_mod_state(objcg, obj_full_size(s),
+					      page_pgdat(page),
+					      cache_vmstat_idx(s));
 		obj_cgroup_put(objcg);
 	}
 }
-- 
2.18.1