From mboxrd@z Thu Jan 1 00:00:00 1970
From: Waiman Long
Subject: Re: [PATCH-next v5 2/4] mm/memcg: Cache vmstat data in percpu memcg_stock_pcp
To: Roman Gushchin
Cc: Johannes Weiner, Michal Hocko, Vladimir Davydov, Andrew Morton,
 Tejun Heo, Christoph Lameter, Pekka Enberg, David Rientjes,
 Joonsoo Kim, Vlastimil Babka, linux-kernel@vger.kernel.org,
 cgroups@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt,
 Muchun Song, Alex Shi, Chris Down, Yafang Shao, Wei Yang,
 Masayoshi Mizuma, Xing Zhengjun, Matthew Wilcox
References: <20210420192907.30880-1-longman@redhat.com>
 <20210420192907.30880-3-longman@redhat.com>
Date: Thu, 22 Apr 2021 12:58:52 -0400

On 4/21/21 7:28 PM, Roman Gushchin wrote:
> On Tue, Apr 20, 2021 at 03:29:05PM -0400, Waiman Long wrote:
>> Before the new slab memory controller with per-object byte charging,
>> charging and vmstat data updates happened only when new slab pages
>> were allocated or freed. Now they are done with every
>> kmem_cache_alloc() and kmem_cache_free(). This causes additional
>> overhead for workloads that generate a lot of alloc and free calls.
>>
>> The memcg_stock_pcp is used to cache the byte charge for a specific
>> obj_cgroup to reduce that overhead. To reduce it further, this patch
>> caches the vmstat data in the memcg_stock_pcp structure as well,
>> until a page size worth of updates accumulates or other cached data
>> change. Caching the vmstat data in the per-cpu stock replaces two
>> writes to non-hot cachelines (one for the memcg-specific and one for
>> the memcg-lruvec-specific vmstat data) with a single write to a hot
>> local stock cacheline.
>>
>> On a 2-socket Cascade Lake server with instrumentation enabled and
>> this patch applied, only about 20% (634400 out of 3243830) of the
>> calls to mod_objcg_state() after initial boot led to an actual call
>> to __mod_objcg_state(). During a parallel kernel build, the figure
>> was about 17% (24329265 out of 142512465). So caching the vmstat
>> data reduces the number of calls to __mod_objcg_state() by more
>> than 80%.
>>
>> Signed-off-by: Waiman Long
>> Reviewed-by: Shakeel Butt
>> ---
>>  mm/memcontrol.c | 86 +++++++++++++++++++++++++++++++++++++++++++++++--
>>  1 file changed, 83 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>> index 7cd7187a017c..292b4783b1a7 100644
>> --- a/mm/memcontrol.c
>> +++ b/mm/memcontrol.c
>> @@ -782,8 +782,9 @@ void __mod_lruvec_kmem_state(void *p, enum node_stat_item idx, int val)
>>  	rcu_read_unlock();
>>  }
>>  
>> -void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat,
>> -		     enum node_stat_item idx, int nr)
>> +static inline void mod_objcg_mlstate(struct obj_cgroup *objcg,
>> +				     struct pglist_data *pgdat,
>> +				     enum node_stat_item idx, int nr)
>>  {
>>  	struct mem_cgroup *memcg;
>>  	struct lruvec *lruvec;
>>  
>> @@ -791,7 +792,7 @@ void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat,
>>  	rcu_read_lock();
>>  	memcg = obj_cgroup_memcg(objcg);
>>  	lruvec = mem_cgroup_lruvec(memcg, pgdat);
>> -	mod_memcg_lruvec_state(lruvec, idx, nr);
>> +	__mod_memcg_lruvec_state(lruvec, idx, nr);
>>  	rcu_read_unlock();
>>  }
>>  
>> @@ -2059,7 +2060,10 @@ struct memcg_stock_pcp {
>>  
>>  #ifdef CONFIG_MEMCG_KMEM
>>  	struct obj_cgroup *cached_objcg;
>> +	struct pglist_data *cached_pgdat;
> I wonder if we want to have per-node counters instead?
> That would complicate the initialization of pcp stocks a bit,
> but might shave off some additional cpu time.
> But we can do it later too.

A per-node counter will certainly complicate the code and reduce the
performance benefit too. I got a pretty good hit rate of 80%+ with the
current code on a 2-socket system. The hit rate will probably drop when
there are more nodes. I will do some more investigation, but it will
not be for this patchset. For concreteness, I read your suggestion as
something like the sketch below.
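This is only an illustration of the idea, not part of this patch; the
array bound and field layout are my guess at what you mean:

#ifdef CONFIG_MEMCG_KMEM
	struct obj_cgroup *cached_objcg;
	unsigned int nr_bytes;
	/*
	 * Hypothetical per-node slab byte counters, indexed by
	 * pgdat->node_id.  cached_pgdat would go away, but every
	 * stock drain would have to walk all nodes, and the stock
	 * size grows with MAX_NUMNODES.
	 */
	int nr_slab_reclaimable_b[MAX_NUMNODES];
	int nr_slab_unreclaimable_b[MAX_NUMNODES];
#endif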
>>  	unsigned int nr_bytes;
>> +	int nr_slab_reclaimable_b;
>> +	int nr_slab_unreclaimable_b;
>>  #endif
>>  
>>  	struct work_struct work;
>> @@ -3008,6 +3012,63 @@ void __memcg_kmem_uncharge_page(struct page *page, int order)
>>  	obj_cgroup_put(objcg);
>>  }
>>  
>> +void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat,
>> +		     enum node_stat_item idx, int nr)
>> +{
>> +	struct memcg_stock_pcp *stock;
>> +	unsigned long flags;
>> +	int *bytes;
>> +
>> +	local_irq_save(flags);
>> +	stock = this_cpu_ptr(&memcg_stock);
>> +
>> +	/*
>> +	 * Save vmstat data in stock and skip vmstat array update unless
>> +	 * accumulating over a page of vmstat data or when pgdat or idx
>> +	 * changes.
>> +	 */
>> +	if (stock->cached_objcg != objcg) {
>> +		drain_obj_stock(stock);
>> +		obj_cgroup_get(objcg);
>> +		stock->nr_bytes = atomic_read(&objcg->nr_charged_bytes)
>> +				? atomic_xchg(&objcg->nr_charged_bytes, 0) : 0;
>> +		stock->cached_objcg = objcg;
>> +		stock->cached_pgdat = pgdat;
>> +	} else if (stock->cached_pgdat != pgdat) {
>> +		/* Flush the existing cached vmstat data */
>> +		if (stock->nr_slab_reclaimable_b) {
>> +			mod_objcg_mlstate(objcg, pgdat, NR_SLAB_RECLAIMABLE_B,
>> +					  stock->nr_slab_reclaimable_b);
>> +			stock->nr_slab_reclaimable_b = 0;
>> +		}
>> +		if (stock->nr_slab_unreclaimable_b) {
>> +			mod_objcg_mlstate(objcg, pgdat, NR_SLAB_UNRECLAIMABLE_B,
>> +					  stock->nr_slab_unreclaimable_b);
>> +			stock->nr_slab_unreclaimable_b = 0;
>> +		}
>> +		stock->cached_pgdat = pgdat;
>> +	}
>> +
>> +	bytes = (idx == NR_SLAB_RECLAIMABLE_B) ? &stock->nr_slab_reclaimable_b
>> +					       : &stock->nr_slab_unreclaimable_b;
>> +	if (!*bytes) {
>> +		*bytes = nr;
>> +		nr = 0;
>> +	} else {
>> +		*bytes += nr;
>> +		if (abs(*bytes) > PAGE_SIZE) {
>> +			nr = *bytes;
>> +			*bytes = 0;
>> +		} else {
>> +			nr = 0;
>> +		}
>> +	}
> This part is a little bit hard to follow, how about something like this
> (completely untested):
>
> {
> 	stocked = (idx == NR_SLAB_RECLAIMABLE_B) ? &stock->nr_slab_reclaimable_b
> 						 : &stock->nr_slab_unreclaimable_b;
> 	if (abs(*stocked + nr) > PAGE_SIZE) {
> 		nr += *stocked;
> 		*stocked = 0;
> 	} else {
> 		*stocked += nr;
> 		nr = 0;
> 	}
> }

That was done purposely to make sure that a large object (>= 4k) also
gets cached once before being flushed out. I should have made that
clearer by adding a comment about it. The vmstat data isn't as critical
as the memory charge, so I am letting the stock cache more than 4k in
this case.
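The comment I have in mind would look something like this (a sketch of
the same hunk as above, logic unchanged):

	bytes = (idx == NR_SLAB_RECLAIMABLE_B) ? &stock->nr_slab_reclaimable_b
					       : &stock->nr_slab_unreclaimable_b;
	if (!*bytes) {
		/*
		 * Nothing stocked yet: always cache the update, even
		 * when |nr| itself exceeds PAGE_SIZE, so a large
		 * (>= 4k) object still gets one round of batching.
		 */
		*bytes = nr;
		nr = 0;
	} else {
		*bytes += nr;
		if (abs(*bytes) > PAGE_SIZE) {
			/* A page's worth accumulated: flush it out */
			nr = *bytes;
			*bytes = 0;
		} else {
			nr = 0;
		}
	}

Cheers,
Longman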