From mboxrd@z Thu Jan  1 00:00:00 1970
From: Hillf Danton <hdanton@sina.com>
To: Feng Tang
Cc: Shakeel Butt, LKML, Xing Zhengjun, Linux MM
Subject: Re: [memcg] 45208c9105: aim7.jobs-per-min -14.0% regression
Date: Sun, 12 Sep 2021 19:17:56 +0800
Message-Id: <20210912111756.4158-1-hdanton@sina.com>
In-Reply-To: <20210910010842.GA94434@shbuild999.sh.intel.com>
References: <20210902215504.dSSfDKJZu%akpm@linux-foundation.org>
 <20210905124439.GA15026@xsang-OptiPlex-9020>
 <20210907033000.GA88160@shbuild999.sh.intel.com>
MIME-Version: 1.0

On Fri, 10 Sep 2021 09:08:42 +0800 Feng Tang wrote:
> On Thu, Sep 09, 2021 at 05:43:40PM -0700, Shakeel Butt wrote:
> >
> > Feng, is it possible for you to run these benchmarks with the change
> > (basically changing MEMCG_CHARGE_BATCH to 128 in the if condition
> > before queue_work() inside __mod_memcg_lruvec_state())?
>
> When I checked this, I tried different changes, including this batch
> number change :), but it didn't recover the regression (the regression
> was only slightly reduced, to about 12%).
>
> Please check if my patch is what you want to test:
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 4d8c9af..a50a69a 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -682,7 +682,8 @@ void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
>
> 	/* Update lruvec */
> 	__this_cpu_add(pn->lruvec_stats_percpu->state[idx], val);
> -	if (!(__this_cpu_inc_return(stats_flush_threshold) % MEMCG_CHARGE_BATCH))
> +//	if (!(__this_cpu_inc_return(stats_flush_threshold) % MEMCG_CHARGE_BATCH))
> +	if (!(__this_cpu_inc_return(stats_flush_threshold) % 128))
> 		queue_work(system_unbound_wq, &stats_flush_work);
>  }

Hi Feng,

Would you please check whether it helps fix the regression to avoid
queuing an already-queued work item, by adding and checking an atomic
counter.

Hillf

--- x/mm/memcontrol.c
+++ y/mm/memcontrol.c
@@ -108,6 +108,7 @@ static void flush_memcg_stats_dwork(stru
 static DECLARE_DEFERRABLE_WORK(stats_flush_dwork, flush_memcg_stats_dwork);
 static void flush_memcg_stats_work(struct work_struct *w);
 static DECLARE_WORK(stats_flush_work, flush_memcg_stats_work);
+static atomic_t sfwork_queued;
 static DEFINE_PER_CPU(unsigned int, stats_flush_threshold);
 static DEFINE_SPINLOCK(stats_flush_lock);
 
@@ -660,8 +661,13 @@ void __mod_memcg_lruvec_state(struct lru
 
 	/* Update lruvec */
 	__this_cpu_add(pn->lruvec_stats_percpu->state[idx], val);
-	if (!(__this_cpu_inc_return(stats_flush_threshold) % MEMCG_CHARGE_BATCH))
-		queue_work(system_unbound_wq, &stats_flush_work);
+	if (!(__this_cpu_inc_return(stats_flush_threshold) %
+	      MEMCG_CHARGE_BATCH)) {
+		int queued = atomic_read(&sfwork_queued);
+
+		if (!queued && atomic_try_cmpxchg(&sfwork_queued, &queued, 1))
+			queue_work(system_unbound_wq, &stats_flush_work);
+	}
 }
 
 /**
@@ -5376,6 +5382,7 @@ static void flush_memcg_stats_dwork(stru
 static void flush_memcg_stats_work(struct work_struct *w)
 {
 	mem_cgroup_flush_stats();
+	atomic_dec(&sfwork_queued);
 }
 
 static void mem_cgroup_css_rstat_flush(struct cgroup_subsys_state *css, int cpu)