From: Michal Hocko <mhocko@suse.com>
To: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: akpm@linux-foundation.org, linux-mm@kvack.org,
	xiyou.wangcong@gmail.com, dave.hansen@intel.com,
	hannes@cmpxchg.org, mgorman@suse.de, vbabka@suse.cz,
	sergey.senozhatsky.work@gmail.com, pmladek@suse.com
Subject: Re: [PATCH] mm,page_alloc: Serialize warn_alloc() if schedulable.
Date: Tue, 11 Jul 2017 15:49:00 +0200	[thread overview]
Message-ID: <20170711134900.GD11936@dhcp22.suse.cz> (raw)
In-Reply-To: <201707112210.AEG17105.tFVOOLQFFMOHJS@I-love.SAKURA.ne.jp>

On Tue 11-07-17 22:10:36, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > On Mon 10-07-17 22:54:37, Tetsuo Handa wrote:
> > > Michal Hocko wrote:
> > > > On Sat 08-07-17 13:59:54, Tetsuo Handa wrote:
> > > > [...]
> > > > > Quoting from http://lkml.kernel.org/r/20170705081956.GA14538@dhcp22.suse.cz :
> > > > > Michal Hocko wrote:
> > > > > > On Sat 01-07-17 20:43:56, Tetsuo Handa wrote:
> > > > > > > You are rejecting serialization under OOM without giving a chance to test
> > > > > > > the side effects of serialization under OOM in linux-next.git. I call such an
> > > > > > > attitude "speculation", which you yourself never accept.
> > > > > > 
> > > > > > No, I am rejecting the abuse of the lock for a purpose it is not meant for.
> > > > > 
> > > > > Then why is adding a new lock (not oom_lock but warn_alloc_lock) not acceptable?
> > > > > Since warn_alloc_lock is meant to keep messages from warn_alloc() from getting
> > > > > jumbled, there should be no reason for you to reject this lock.
> > > > > 
> > > > > If you don't like locks, can you instead accept the one below?
> > > > 
> > > > No, seriously! Just think about what you are proposing. You are already stalling
> > > > and now you will stall _random_ tasks even more, some of them for an
> > > > unbound amount of time because of the inherent unfairness of cmpxchg.
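For illustration only (this is not the patch that was actually posted in the thread): a cmpxchg based serialization of warn_alloc() would look roughly like the sketch below. Whoever wins the cmpxchg gets to print; the losers sleep and retry, and because nothing queues the waiters, a task can keep losing the race indefinitely, which is the unfairness referred to above. The name warn_alloc_busy is made up.

  #include <linux/atomic.h>
  #include <linux/sched.h>

  static atomic_t warn_alloc_busy = ATOMIC_INIT(0);  /* hypothetical */

  static void warn_alloc_serialized(void)
  {
      /* Whoever wins the cmpxchg gets to print; everybody else retries. */
      while (atomic_cmpxchg(&warn_alloc_busy, 0, 1) != 0)
          schedule_timeout_uninterruptible(1);

      /* ... emit the allocation failure report via printk() here ... */

      atomic_set(&warn_alloc_busy, 0);
      /*
       * There is no queueing: a task can lose the cmpxchg race every time
       * it wakes up, hence the "inherent unfairness" mentioned above.
       */
  }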
> > > 
> > > The cause of the stall when oom_lock is already held is that the threads which
> > > failed to take oom_lock keep almost busy-looping; schedule_timeout_uninterruptible(1)
> > > is not a sufficient back-off when multiple threads are doing the same thing, because
> > > direct reclaim/compaction consumes a lot of CPU time on every iteration.
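For reference, the retry pattern described above comes from the allocation slowpath of the kernels of that time; paraphrased (not a verbatim copy of __alloc_pages_may_oom() in mm/page_alloc.c), the contended case looks roughly like this:

  if (!mutex_trylock(&oom_lock)) {
      /*
       * Somebody else is already handling the OOM situation: pretend
       * progress was made, sleep for one jiffy and let the caller retry
       * the allocation.  With many allocating threads, each retry goes
       * through direct reclaim and compaction again before coming back
       * here, which is what burns the CPU time mentioned above.
       */
      *did_some_progress = 1;
      schedule_timeout_uninterruptible(1);
      return NULL;
  }
  /* ... call out_of_memory() and then mutex_unlock(&oom_lock) ... */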
> > > 
> > > What makes this situation worse is that, since warn_alloc() periodically appends
> > > to the printk() buffer, the thread inside the OOM killer with oom_lock held can
> > > stall forever in cond_resched() called from console_unlock() called from printk().
> > 
> > warn_alloc is just yet-another-user of printk. We might have many
> > others...
> 
> warn_alloc() is different from other users of printk() in that it keeps calling
> printk() for as long as oom_lock is held by somebody else who is processing
> console_unlock().

So what exactly prevents any other caller of printk from interfering while
the OOM handling is ongoing?

> 
> >  
> > > The change below significantly reduces the possibility of falling into the printk()
> > > vs. oom_lock lockup problem, because the thread inside the OOM killer with oom_lock
> > > held no longer blocks inside printk(). There still remains the possibility of
> > > sleeping unexpectedly long at schedule_timeout_killable(1) with oom_lock held, though.
> > 
> > This just papers over the real problem.
> > 
> > > --- a/mm/oom_kill.c
> > > +++ b/mm/oom_kill.c
> > > @@ -1051,8 +1051,10 @@ bool out_of_memory(struct oom_control *oc)
> > >  		panic("Out of memory and no killable processes...\n");
> > >  	}
> > >  	if (oc->chosen && oc->chosen != (void *)-1UL) {
> > > +		preempt_disable();
> > >  		oom_kill_process(oc, !is_memcg_oom(oc) ? "Out of memory" :
> > >  				 "Memory cgroup out of memory");
> > > +		preempt_enable_no_resched();
> > >  		/*
> > >  		 * Give the killed process a good chance to exit before trying
> > >  		 * to allocate memory again.
> > > 
> > > I wish we could agree on applying this patch until printk-kthread can
> > > work reliably...
> > 
> > And now you have introduced soft lockups, most probably, because
> > oom_kill_process can take some time... Or maybe even sleeping-while-atomic
> > warnings if some code path needs to sleep for whatever reason.
> > The real fix is to make sure that printk doesn't take an arbitrary amount
> > of time.
> 
> The OOM killer is not permitted to wait for __GFP_DIRECT_RECLAIM allocations,
> directly or indirectly (because that would cause a recursion deadlock). Thus, even
> if some code path needs to sleep for some reason, that code path is not permitted
> to wait for __GFP_DIRECT_RECLAIM allocations directly or indirectly. Anyway, I can
> propose scattering preempt_disable()/preempt_enable_no_resched() around the printk()
> calls rather than around the whole of oom_kill_process(). You will just reject it,
> as you have in the past.

Because you are trying to address the problem at the wrong layer. If there
is absolutely no way around it and printk is unfixable, then we really
need a printk variant which makes sure that no excessive waiting is
involved. Then we can replace all printk calls in the OOM path with
this special printk.
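A minimal sketch of what such a variant could build on, assuming the existing printk_deferred() (which only appends to the log buffer and leaves console flushing to irq_work) were an acceptable basis; the name pr_oom() is made up for illustration:

  #include <linux/printk.h>

  /*
   * Hypothetical OOM-path wrapper: put the message into the log buffer but
   * never flush the console synchronously, so the caller cannot get stuck
   * inside console_unlock().
   */
  #define pr_oom(fmt, ...) printk_deferred(KERN_WARNING fmt, ##__VA_ARGS__)

The OOM path (dump_header() and friends) would then call pr_oom() instead of pr_warn()/printk(); whether deferring the console output of OOM reports is acceptable is, of course, part of what is being debated here.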
 
[...]

> > You are trying to hammer this particular path, but you should realize
> > that as long as printk can take an unbound amount of time, there are
> > many other land mines which need fixing. It is simply not feasible to go
> > after each and every one of them and try to tweak around them. So please
> > stop proposing these random hacks and instead try to work with the printk
> > folks to find a solution for this long-standing printk limitation. The OOM
> > killer is a good use case to give this some priority.
> 
> Whatever approach we use to keep printk() from taking an unbound amount of time
> (e.g. just enqueueing to log_buf using a per-thread flag), we might still take an
> unbound amount of time if we allow cond_resched() (or whatever sleep some
> code path might need to use) with oom_lock held. After all, the OOM killer
> ignores the scheduling priority problem regardless of the printk() lockup problem.
> 
> I have no objection to making sure that printk() doesn't take an arbitrary
> amount of time. But the real fix is to make sure that out_of_memory() doesn't take
> an arbitrary amount of time (i.e. don't allow cond_resched() etc. at all) unless
> there is cooperation from the other allocating threads which failed to take oom_lock.
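One possible form of such cooperation, sketched purely for illustration (this is not what was eventually merged): threads that fail the trylock could wait for the lock holder to finish instead of looping through direct reclaim again, for example:

  if (!mutex_trylock(&oom_lock)) {
      /*
       * Wait until the current OOM handling finishes instead of
       * busy-looping through direct reclaim/compaction; we only want to
       * wait for the lock, so drop it again immediately.
       */
      if (!mutex_lock_killable(&oom_lock))
          mutex_unlock(&oom_lock);
      *did_some_progress = 1;
      return NULL;
  }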

As I've said, out_of_memory is an expensive operation and as such it has
to be preemptible. Addressing this would require quite some work.
-- 
Michal Hocko
SUSE Labs


Thread overview: 43+ messages
2017-06-01 11:43 [PATCH] mm,page_alloc: Serialize warn_alloc() if schedulable Tetsuo Handa
2017-06-01 11:59 ` Michal Hocko
2017-06-01 13:11   ` Tetsuo Handa
2017-06-01 13:28     ` Michal Hocko
2017-06-01 22:10       ` Andrew Morton
2017-06-02  7:18         ` Michal Hocko
2017-06-02 11:13           ` Tetsuo Handa
2017-06-02 12:15             ` Michal Hocko
2017-06-02 17:13               ` Tetsuo Handa
2017-06-02 21:57             ` Cong Wang
2017-06-04  8:58               ` Tetsuo Handa
2017-06-04 15:05                 ` Michal Hocko
2017-06-04 21:43                   ` Tetsuo Handa
2017-06-05  5:37                     ` Michal Hocko
2017-06-05 18:15                       ` Cong Wang
2017-06-06  9:17                         ` Michal Hocko
2017-06-05 18:25                 ` Cong Wang
2017-06-22 10:35                   ` Tetsuo Handa
2017-06-22 22:53                     ` Cong Wang
2017-06-02 16:59           ` Cong Wang
2017-06-02 19:59           ` Andrew Morton
2017-06-03  2:57             ` Tetsuo Handa
2017-06-03  7:32             ` Michal Hocko
2017-06-03  8:36               ` Tetsuo Handa
2017-06-05  7:10                 ` Sergey Senozhatsky
2017-06-05  9:36                   ` Sergey Senozhatsky
2017-06-05 15:02                     ` Tetsuo Handa
2017-06-03 13:21               ` Tetsuo Handa
2017-07-08  4:59           ` Tetsuo Handa
2017-07-10 13:21             ` Michal Hocko
2017-07-10 13:54               ` Tetsuo Handa
2017-07-10 14:14                 ` Michal Hocko
2017-07-11 13:10                   ` Tetsuo Handa
2017-07-11 13:49                     ` Michal Hocko [this message]
2017-07-11 14:58                       ` Petr Mladek
2017-07-11 22:06                       ` Tetsuo Handa
2017-07-12  8:54                         ` Michal Hocko
2017-07-12 12:23                           ` Tetsuo Handa
2017-07-12 12:41                             ` Michal Hocko
2017-07-14 12:30                               ` Tetsuo Handa
2017-07-14 12:48                                 ` Michal Hocko
2017-08-09  6:14                                   ` Tetsuo Handa
2017-08-09 13:01                                     ` Tetsuo Handa
