From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
To: mhocko@kernel.org
Cc: linux-mm@kvack.org, akpm@linux-foundation.org, oleg@redhat.com,
	rientjes@google.com, vdavydov@parallels.com
Subject: Re: [PATCH 08/10] exit, oom: postpone exit_oom_victim to later
Date: Mon, 1 Aug 2016 19:46:48 +0900	[thread overview]
Message-ID: <201608011946.JAI56255.HJLOtSMFOFOVQF@I-love.SAKURA.ne.jp> (raw)
In-Reply-To: <20160731093530.GB22397@dhcp22.suse.cz>

Michal Hocko wrote:
> On Sat 30-07-16 17:20:30, Tetsuo Handa wrote:
> > Michal Hocko wrote:
> > > From: Michal Hocko <mhocko@suse.com>
> > > 
> > > exit_oom_victim was called after mmput because it is expected that
> > > address space of the victim would get released by that time and there is
> > > no reason to hold off the oom killer from selecting another task should
> > > that be insufficient to handle the oom situation. In order to catch
> > > post exit_mm() allocations we used to check for PF_EXITING but this
> > > got removed by 6a618957ad17 ("mm: oom_kill: don't ignore oom score on
> > > exiting tasks") because this check was lockup prone.
> > > 
> > > It seems that we have all needed pieces ready now and can finally
> > > fix this properly (at least for CONFIG_MMU cases where we have the
> > > oom_reaper).  Since "oom: keep mm of the killed task available" we have
> > > a reliable way to ignore oom victims which are no longer interesting
> > > because they either were reaped and do not sit on a lot of memory or
> > > they are not reapable for some reason and it is safer to ignore them
> > > and move on to another victim. That means that we can safely postpone
> > > exit_oom_victim to closer to the final schedule.
> > 
> > I don't like this patch. The advantage of this patch will be that we can
> > avoid selecting next OOM victim when only OOM victims need to allocate
> > memory after they left exit_mm().
> 
> Not really as we do not rely on TIF_MEMDIE nor signal->oom_victims to
> block new oom victim selection anymore.

I meant "whether out_of_memory() is called or not (when OOM victims need
to allocate memory after they have left exit_mm())".

I did not mean "whether out_of_memory() selects the next OOM victim or not",
because "whether MMF_OOM_SKIP is set on signal->oom_mm or not" depends
on "whether out_of_memory() is called before MMF_OOM_SKIP is set on
signal->oom_mm", which depends on timing.

> 
> > But the disadvantage of this patch will
> > be that we increase the possibility of depleting 100% of memory reserves
> > by allowing them to allocate using ALLOC_NO_WATERMARKS after they left
> > exit_mm().
> 
> I think this is a separate problem. As the current code stands we can
> already deplete memory reserves. The large number of threads might be
> sitting in an allocation loop before they bail out to handle the
> SIGKILL. Exit path shouldn't add too much on top of that. If we want to
> be reliable in not consuming all the reserves we would have to employ
> some form of throttling and that is out of scope of this patch.

I'm suggesting such throttling by allowing fatal_signal_pending() or
PF_EXITING threads access to only some portion of memory reserves.

> 
> > It is possible that a user creates a process with 10000 threads
> > and let that process be OOM-killed. Then, this patch allows 10000 threads
> > to start consuming memory reserves after they left exit_mm(). OOM victims
> > are not the only threads who need to allocate memory for termination. Non
> > OOM victims might need to allocate memory at exit_task_work() in order to
> > allow OOM victims to make forward progress.
> 
> this might be possible but unlike the regular exiting tasks we do
> reclaim oom victim's memory in the background. So while they can consume
> memory reserves we should also give some (and arguably much more) memory
> back. The reserves are there to expedite the exit.

Background reclaim does not occur on CONFIG_MMU=n kernels, but this patch
also affects CONFIG_MMU=n kernels. If a process with two threads is
OOM-killed and one thread consumes too much memory after leaving exit_mm(),
before the other thread sets MMF_OOM_SKIP on their mm upon returning from
exit_aio() etc. in __mmput() (called from mmput() from exit_mm()), this
patch introduces a new possibility of OOM livelock. I think it is too bold
to assume that "CONFIG_MMU=n kernels can OOM livelock even without this
patch; thus, let's apply this patch even though it might break the balance
of OOM handling on CONFIG_MMU=n kernels."

Also, where is the guarantee that memory reclaimed by the OOM reaper is
used for terminating exiting threads? Since we do not prefer
fatal_signal_pending() or PF_EXITING threads over threads which are neither
fatal_signal_pending() nor PF_EXITING, it is possible that all memory
reclaimed by the OOM reaper is depleted by the latter.
Yes, the OOM reaper will allow the OOM killer to select the next OOM victim.
But the intention of this patch is to avoid calling out_of_memory() by
allowing OOM victims access to memory reserves, isn't it?
In the end we need to call out_of_memory() regardless of whether we prefer
TIF_MEMDIE (or signal->oom_mm != NULL) threads over !TIF_MEMDIE (or
signal->oom_mm == NULL) threads.

So, it is not clear to me that this patch is an improvement.

> 
> > I think that allocations from
> > do_exit() are important for terminating cleanly (from the point of view of
> > filesystem integrity and kernel object management) and such allocations
> > should not be given up simply because ALLOC_NO_WATERMARKS allocations
> > failed.
> 
> We are talking about a fatal condition when OOM killer forcefully kills
> a task. Chances are that the userspace leaves so much state behind that
> a manual cleanup would be necessary anyway. Depleting the memory
> reserves is not nice but I really believe that this particular patch
> doesn't make the situation really much worse than before.

I'm not talking about inconsistency in userspace programs. I'm talking
about inconsistency of objects managed by kernel (e.g. failing to drop
references) caused by allocation failures.

>  
> > > There is possible advantages of this because we are reducing chances
> > > of further interference of the oom victim with the rest of the system
> > > after oom_killer_disable(). Strictly speaking this is possible right
> > > now because there are indeed allocations possible past exit_mm() and
> > > who knows whether some of them can trigger IO. I haven't seen this in
> > > practice though.
> > 
> > I don't know which I/O oom_killer_disable() must act as a hard barrier.
> 
> Any allocation that could trigger the IO can corrupt the hibernation
> image or access the half suspended device. The whole point of
> oom_killer_disable is to prevent anything like that to happen.

That's a puzzling answer. What I/O? If fs writeback done via a GFP_FS
allocation issued by userspace processes between returning from exit_mm()
and reaching the final schedule() in do_exit() is problematic, why is fs
writeback via a GFP_FS allocation done by kernel threads after
oom_killer_disable() returns not problematic? I think any I/O which
userspace processes can do is also doable by kernel threads.
Where is the guarantee that kernel threads doing the kind of I/O for which
oom_killer_disable() is supposed to act as a hard barrier do not corrupt
the hibernation image or access the half-suspended device?

> 
> > But safer way is to get rid of TIF_MEMDIE's triple meanings. The first
> > one which prevents the OOM killer from selecting next OOM victim was
> > removed by replacing TIF_MEMDIE test in oom_scan_process_thread() with
> > tsk_is_oom_victim(). The second one which allows the OOM victims to
> > deplete 100% of memory reserves wants some changes in order not to
> > block memory allocations by non OOM victims (e.g. GFP_ATOMIC allocations
> > by interrupt handlers, GFP_NOIO / GFP_NOFS allocations by subsystems
> > which are needed for making forward progress of threads in do_exit())
> > by consuming too much of memory reserves. The third one which blocks
> > oom_killer_disable() can be removed by replacing TIF_MEMDIE test in
> > exit_oom_victim() with PFA_OOM_WAITING test like below patch.
> 
> I plan to remove TIF_MEMDIE dependency for this as well but I would like
> to finish this pile first. We actually do not need any flag for that. We
> just need to detect last exiting thread and tsk_is_oom_victim. I have
> some preliminary code for that.

Please show me the preliminary code. How do you expedite termination of
exiting threads? If you simply remove the test_thread_flag(TIF_MEMDIE) test
in gfp_to_alloc_flags(), there is a risk of failing to escape from the
current allocation request loop (if you also remove

	/* Avoid allocations with no watermarks from looping endlessly */
	if (test_thread_flag(TIF_MEMDIE) && !(gfp_mask & __GFP_NOFAIL))
		goto nopage;

), especially for CONFIG_MMU=n kernels, or of hitting problems caused by
allocation failure (if you keep the above check) because access to memory
reserves is no longer allowed.

On the other hand, if you simply replace the test_thread_flag(TIF_MEMDIE)
test in gfp_to_alloc_flags() with a signal->oom_mm != NULL test, it might
increase the possibility of depleting the memory reserves.

> 
> > (If
> > oom_killer_disable() were specific to CONFIG_MMU=y kernels, I think
> > that not thawing OOM victims will be simpler because the OOM reaper
> > can reclaim memory without thawing OOM victims.)
> 
> Well I do not think keeping an oom victim inside the fridge is a good
> idea. The task might be not sitting on any reclaimable memory but it
> still might consume resources which are bound to its life time (open
> files and their buffers etc.).

Then, I do not think keeping !TIF_MEMDIE OOM victims (those sharing the
TIF_MEMDIE OOM victim's memory) inside the fridge is a good idea either.
There might be resources (e.g. open files) which will not be released
until all threads sharing the TIF_MEMDIE OOM victim's memory terminate.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: dont@kvack.org
