From: Yosry Ahmed
Date: Tue, 28 Mar 2023 11:52:50 -0700
Subject: Re: [PATCH v1 5/9] memcg: replace stats_flush_lock with an atomic
To: Shakeel Butt
Cc: Tejun Heo, Josef Bacik, Jens Axboe, Zefan Li, Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song, Andrew Morton, Michal Koutný, Vasily Averin, cgroups@vger.kernel.org, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, bpf@vger.kernel.org
In-Reply-To: <20230328141523.txyhl7wt7wtvssea@google.com>
References: <20230328061638.203420-1-yosryahmed@google.com> <20230328061638.203420-6-yosryahmed@google.com> <20230328141523.txyhl7wt7wtvssea@google.com>
On Tue, Mar 28, 2023 at 7:15 AM Shakeel Butt wrote:
>
> On Tue, Mar 28, 2023 at 06:16:34AM +0000, Yosry Ahmed wrote:
> [...]
> > @@ -585,8 +585,8 @@ mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_node *mctz)
> >   */
> >  static void flush_memcg_stats_dwork(struct work_struct *w);
> >  static DECLARE_DEFERRABLE_WORK(stats_flush_dwork, flush_memcg_stats_dwork);
> > -static DEFINE_SPINLOCK(stats_flush_lock);
> >  static DEFINE_PER_CPU(unsigned int, stats_updates);
> > +static atomic_t stats_flush_ongoing = ATOMIC_INIT(0);
> >  static atomic_t stats_flush_threshold = ATOMIC_INIT(0);
> >  static u64 flush_next_time;
> >
> > @@ -636,15 +636,18 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
> >
> >  static void __mem_cgroup_flush_stats(void)
> >  {
> > -	unsigned long flag;
> > -
> > -	if (!spin_trylock_irqsave(&stats_flush_lock, flag))
> > +	/*
> > +	 * We always flush the entire tree, so concurrent flushers can just
> > +	 * skip. This avoids a thundering herd problem on the rstat global lock
> > +	 * from memcg flushers (e.g. reclaim, refault, etc).
> > +	 */
> > +	if (atomic_xchg(&stats_flush_ongoing, 1))
>
> Have you profiled this? I wonder if we should replace the above with
>
> 	if (atomic_read(&stats_flush_ongoing) || atomic_xchg(&stats_flush_ongoing, 1))

I profiled the entire series with perf and haven't noticed a notable
difference before and after the patch series -- but maybe some specific
access patterns cause a regression, not sure.

Would atomic_cmpxchg() serve the same purpose? It's easier to read and
more concise, I guess. Something like:

	if (atomic_cmpxchg(&stats_flush_ongoing, 0, 1))

WDYT?

> to not always dirty the cacheline. This would not be an issue if there
> is no cacheline sharing but I suspect percpu stats_updates is sharing
> the cacheline with it and may cause false sharing with the parallel stat
> updaters (updaters only need to read the base percpu pointer).
>
> Other than that the patch looks good.
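
[Editor's note: a rough user-space sketch of the three variants discussed
above, written with C11 atomics as a stand-in for the kernel's atomic_t
API. Function names, the main() driver, and the standalone setup are all
hypothetical, added only to illustrate the cacheline argument: the
read-first variant lets losing flushers bail out with a plain load, while
both xchg and cmpxchg are read-modify-write operations that (on x86 at
least) need exclusive ownership of the cacheline even when they fail.]

	/* Illustrative user-space analogy using C11 atomics, not the kernel API. */
	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	static atomic_int stats_flush_ongoing;

	/* Variant 1: what the patch does -- every caller performs a RMW,
	 * so every caller pulls the cacheline in exclusive state. */
	static bool try_start_flush_xchg(void)
	{
		return atomic_exchange(&stats_flush_ongoing, 1) == 0;
	}

	/* Variant 2: the read-then-xchg suggestion -- a plain load first,
	 * so flushers that lose the race skip without dirtying the line. */
	static bool try_start_flush_read_then_xchg(void)
	{
		if (atomic_load(&stats_flush_ongoing))
			return false;
		return atomic_exchange(&stats_flush_ongoing, 1) == 0;
	}

	/* Variant 3: the cmpxchg form asked about -- more concise, but a
	 * compare-and-swap is still a RMW, so on x86 it still takes the
	 * cacheline exclusively even when the compare fails. */
	static bool try_start_flush_cmpxchg(void)
	{
		int expected = 0;
		return atomic_compare_exchange_strong(&stats_flush_ongoing,
						      &expected, 1);
	}

	static void finish_flush(void)
	{
		atomic_store(&stats_flush_ongoing, 0);
	}

	int main(void)
	{
		if (try_start_flush_read_then_xchg()) {
			/* ... the actual stats flush would happen here ... */
			finish_flush();
			puts("flushed");
		} else {
			puts("another flusher was already running, skipped");
		}
		return 0;
	}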