linux-mm.kvack.org archive mirror
From: Johannes Weiner <hannes@cmpxchg.org>
To: Yang Shi <shy828301@gmail.com>
Cc: Shakeel Butt <shakeelb@google.com>,
	Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>,
	Naresh Kamboju <naresh.kamboju@linaro.org>,
	linux-mm <linux-mm@kvack.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Mel Gorman <mgorman@techsingularity.net>,
	Michal Hocko <mhocko@kernel.org>,
	Dan Schatzberg <schatzberg.dan@gmail.com>
Subject: Re: fs/buffer.c: WARNING: alloc_page_buffers while mke2fs
Date: Tue, 3 Mar 2020 15:33:11 -0500
Message-ID: <20200303203311.GB68565@cmpxchg.org>
In-Reply-To: <CAHbLzkqzsjFmKcb6YT7Vd_-kt7czmyAnbiB5oOHBrVKOXLidXw@mail.gmail.com>

On Tue, Mar 03, 2020 at 11:42:31AM -0800, Yang Shi wrote:
> On Tue, Mar 3, 2020 at 10:15 AM Shakeel Butt <shakeelb@google.com> wrote:
> >
> > On Tue, Mar 3, 2020 at 9:47 AM Yang Shi <shy828301@gmail.com> wrote:
> > >
> > > On Tue, Mar 3, 2020 at 2:53 AM Tetsuo Handa
> > > <penguin-kernel@i-love.sakura.ne.jp> wrote:
> > > >
> > > > Hello, Naresh.
> > > >
> > > > > [   98.003346] WARNING: CPU: 2 PID: 340 at
> > > > > include/linux/sched/mm.h:323 alloc_page_buffers+0x210/0x288
> > > >
> > > > This is
> > > >
> > > > /**
> > > >  * memalloc_use_memcg - Starts the remote memcg charging scope.
> > > >  * @memcg: memcg to charge.
> > > >  *
> > > >  * This function marks the beginning of the remote memcg charging scope. All the
> > > >  * __GFP_ACCOUNT allocations till the end of the scope will be charged to the
> > > >  * given memcg.
> > > >  *
> > > >  * NOTE: This function is not nesting safe.
> > > >  */
> > > > static inline void memalloc_use_memcg(struct mem_cgroup *memcg)
> > > > {
> > > >         WARN_ON_ONCE(current->active_memcg);
> > > >         current->active_memcg = memcg;
> > > > }
> > > >
> > > > which is about memcg. Redirecting to linux-mm.
> > >
> > > Isn't this triggered by ("loop: use worker per cgroup instead of
> > > kworker") in linux-next, which converted the loop driver to use a
> > > worker per cgroup, so multiple workers may be running at the same time?
> > >
> > > They may share the same "current", which could result in a nested
> > > call to memalloc_use_memcg().
> > >
> > > Could you please try the below debug patch? This is not the proper
> > > fix, but it may help us narrow down the problem.
> > >
> > > diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
> > > index c49257a..1cc1cdc 100644
> > > --- a/include/linux/sched/mm.h
> > > +++ b/include/linux/sched/mm.h
> > > @@ -320,6 +320,10 @@ static inline void memalloc_nocma_restore(unsigned int flags)
> > >   */
> > >  static inline void memalloc_use_memcg(struct mem_cgroup *memcg)
> > >  {
> > > +       if ((current->flags & PF_KTHREAD) &&
> > > +            current->active_memcg)
> > > +               return;
> > > +
> > >         WARN_ON_ONCE(current->active_memcg);
> > >         current->active_memcg = memcg;
> > >  }
> > >
> >
> > Maybe it's time to make memalloc_use_memcg() nesting safe.
> 
> Need to handle the case below:
> 
>             CPU A                                          CPU B
> memalloc_use_memcg
>                                                      memalloc_use_memcg
> memalloc_unuse_memcg
>                                                      memalloc_unuse_memcg
> 
> 
> They may manipulate the same task->active_memcg, so CPU B may still
> see the wrong memcg, and the last call to memalloc_unuse_memcg() on
> CPU B may not restore active_memcg to NULL. And some code depends on
> active_memcg being correct.

It's safe because it's only `current` updating a private pointer -
nobody is changing active_memcg of a different task. And a task cannot
run on more than one CPU simultaneously.
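
For reference, a nesting-safe variant of this interface could save and
restore the previous active_memcg instead of warning when one is already
set. A minimal sketch of that idea (not the patch being discussed in this
thread; it reuses the existing helper names but changes their signatures
to show the save/restore approach):

/*
 * Illustrative sketch: the "use" side returns the previous
 * active_memcg and the "unuse" side restores it, so an inner scope
 * unwinds to whatever the outer scope had installed. Only `current`
 * ever touches its own active_memcg, so no locking is needed.
 */
static inline struct mem_cgroup *memalloc_use_memcg(struct mem_cgroup *memcg)
{
	struct mem_cgroup *old = current->active_memcg;

	current->active_memcg = memcg;
	return old;
}

static inline void memalloc_unuse_memcg(struct mem_cgroup *old)
{
	current->active_memcg = old;
}

A caller would then pair the two as:

	old = memalloc_use_memcg(memcg);
	/* __GFP_ACCOUNT allocations here are charged to memcg */
	memalloc_unuse_memcg(old);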



Thread overview: 19+ messages
     [not found] <CA+G9fYs==eMEmY_OpdhyCHO_1Z5f_M8CAQQTh-AOf5xAvBHKAQ@mail.gmail.com>
2020-03-03 10:52 ` fs/buffer.c: WARNING: alloc_page_buffers while mke2fs Tetsuo Handa
2020-03-03 17:47   ` Yang Shi
2020-03-03 18:14     ` Shakeel Butt
2020-03-03 18:34       ` Yang Shi
2020-03-03 19:42       ` Yang Shi
2020-03-03 20:26         ` Shakeel Butt
2020-03-03 20:33         ` Johannes Weiner [this message]
2020-03-03 20:59           ` Yang Shi
2020-03-03 20:26       ` Johannes Weiner
2020-03-03 20:40         ` Shakeel Butt
2020-03-03 21:06           ` Johannes Weiner
2020-03-03 23:22             ` Shakeel Butt
2020-03-04  0:29               ` Andrew Morton
2020-04-20 16:41                 ` Shakeel Butt
2020-04-20 22:45                   ` Dan Schatzberg
2020-04-21  5:02                     ` Naresh Kamboju
2020-03-03 20:57         ` Yang Shi
2020-03-03 18:40     ` Naresh Kamboju
2020-03-03 19:04       ` Yang Shi
