From: Yosry Ahmed <yosryahmed@google.com>
To: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
Roman Gushchin <roman.gushchin@linux.dev>,
Shakeel Butt <shakeelb@google.com>,
Andrew Morton <akpm@linux-foundation.org>,
Muchun Song <muchun.song@linux.dev>,
cgroups@vger.kernel.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm: memcg: provide accurate stats for userspace reads
Date: Wed, 9 Aug 2023 06:17:18 -0700
Message-ID: <CAJD7tkaPPcMsq-pbu26H332xBJP-m=v1aBbU_NJQQn+7motX9g@mail.gmail.com>
In-Reply-To: <ZNONgeoytpkchHga@dhcp22.suse.cz>
<snip>
> > > [...]
> > > > @@ -639,17 +639,24 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
> > > > }
> > > > }
> > > >
> > > > -static void do_flush_stats(void)
> > > > +static void do_flush_stats(bool full)
> > > > {
> > > > + if (!atomic_read(&stats_flush_ongoing) &&
> > > > + !atomic_xchg(&stats_flush_ongoing, 1))
> > > > + goto flush;
> > > > +
> > > > /*
> > > > - * We always flush the entire tree, so concurrent flushers can just
> > > > - * skip. This avoids a thundering herd problem on the rstat global lock
> > > > - * from memcg flushers (e.g. reclaim, refault, etc).
> > > > + * We always flush the entire tree, so concurrent flushers can choose to
> > > > + * skip if accuracy is not critical. Otherwise, wait for the ongoing
> > > > + * flush to complete. This avoids a thundering herd problem on the rstat
> > > > + * global lock from memcg flushers (e.g. reclaim, refault, etc).
> > > > */
> > > > - if (atomic_read(&stats_flush_ongoing) ||
> > > > - atomic_xchg(&stats_flush_ongoing, 1))
> > > > - return;
> > > > -
> > > > + while (full && atomic_read(&stats_flush_ongoing) == 1) {
> > > > + if (!cond_resched())
> > > > + cpu_relax();
> > >
> > > You are reinventing a mutex with a spinning waiter. Why don't you simply
> > > make stats_flush_ongoing a real mutex and use try_lock for the !full
> > > flush and a normal lock otherwise?
> >
> > So that was actually a spinlock at one point, when we used to skip if
> > try_lock failed.
>
> AFAICS cgroup_rstat_flush is allowed to sleep so spinlocks are not
> really possible.
Sorry, I hit the send button too early and didn't get to this part.
We were able to use a spinlock back then because we disabled sleeping
while flushing the stats, which opened another can of worms :)
>
> > We opted for an atomic because the lock was only used
> > in a try_lock fashion. The problem here is that the atomic is used to
> > ensure that only one thread actually attempts to flush at a time (and
> > others skip/wait), to avoid a thundering herd problem on
> > cgroup_rstat_lock.
> >
> > Here, what I am trying to do is essentially equivalent to "wait until
> > the lock is available but don't grab it". If we make
> > stats_flush_ongoing a mutex, I am afraid the thundering herd problem
> > will be reintroduced for stats_flush_ongoing this time.
>
> You will have potentially many spinners for something that might take
> quite a lot of time (sleep) if there is nothing else to schedule. I do
> not think this is a proper behavior. Really, you shouldn't be busy
> waiting for a sleeper.
>
> > I am not sure if there's a cleaner way of doing this, but I am
> > certainly open for suggestions. I also don't like how the spinning
> > loop looks as of now.
>
> mutex_trylock for the non-critical flushers and mutex_lock for the
> syncing ones. We can talk about a custom locking scheme if that proves
> insufficient or problematic.
I have no problem with this. I can send a v2 following this scheme,
once we agree on the importance of this patch :)
> --
> Michal Hocko
> SUSE Labs