From: Michal Hocko <mhocko@kernel.org>
To: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Oleg Nesterov <oleg@redhat.com>,
	David Rientjes <rientjes@google.com>,
	linux-mm@kvack.org
Subject: Re: [PATCH] mm,oom: Clarify reason to kill other threads sharing the victim's memory.
Date: Thu, 14 Apr 2016 13:31:08 +0200
Message-ID: <20160414113108.GE2850@dhcp22.suse.cz>
In-Reply-To: <1460631391-8628-2-git-send-email-penguin-kernel@I-love.SAKURA.ne.jp>

On Thu 14-04-16 19:56:31, Tetsuo Handa wrote:
> The current comment for "Kill all user processes sharing victim->mm in
> other thread groups" does not make clear that doing so is a best-effort
> avoidance.
> 
> I have tried to update that logic along with TIF_MEMDIE several times,
> but those patches were not accepted. Therefore, this patch changes only
> the comment so that it can be applied now.
> 
> Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> ---
>  mm/oom_kill.c | 29 ++++++++++++++++++++++-------
>  1 file changed, 22 insertions(+), 7 deletions(-)
> 
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index e78818d..43d0002 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -814,13 +814,28 @@ void oom_kill_process(struct oom_control *oc, struct task_struct *p,
>  	task_unlock(victim);
>  
>  	/*
> -	 * Kill all user processes sharing victim->mm in other thread groups, if
> -	 * any.  They don't get access to memory reserves, though, to avoid
> -	 * depletion of all memory.  This prevents mm->mmap_sem livelock when an
           ^^^^^^^^^
this was useful information that you have dropped. Why?

> -	 * oom killed thread cannot exit because it requires the semaphore and
> -	 * its contended by another thread trying to allocate memory itself.
> -	 * That thread will now get access to memory reserves since it has a
> -	 * pending fatal signal.
> +	 * Kill all user processes sharing victim->mm in other thread groups,
> +	 * if any. This reduces possibility of hitting mm->mmap_sem livelock
> +	 * when an OOM victim thread cannot exit because it requires the
> +	 * mm->mmap_sem for read at exit_mm() while another thread is trying
> +	 * to allocate memory with that mm->mmap_sem held for write.
> +	 *
> +	 * Any thread except the victim thread itself which is killed by
> +	 * this heuristic does not get access to memory reserves as of now,
> +	 * but it will get access to memory reserves by calling out_of_memory()
> +	 * or mem_cgroup_out_of_memory() since it has a pending fatal signal.
> +	 *
> +	 * Note that this heuristic is not perfect because it is possible that
> +	 * a thread which shares victim->mm and is doing memory allocation with
> +	 * victim->mm->mmap_sem held for write is marked as OOM_SCORE_ADJ_MIN.

Is this really helpful? I would rather be explicit that we _do not
care_ about these configurations. It is just a PITA to maintain and it
doesn't make any sense. So rather than trying to document all the weird
things that might happen, I would welcome a warning: "mm shared with
OOM_SCORE_ADJ_MIN task. Something is broken in your configuration!"

> +	 * Also, it is possible that a thread which shares victim->mm and is
> +	 * doing memory allocation with victim->mm->mmap_sem held for write
> +	 * (possibly the victim thread itself which got TIF_MEMDIE) is blocked
> +	 * at unkillable locks from direct reclaim paths because nothing
> +	 * prevents TIF_MEMDIE threads which already started direct reclaim
> +	 * paths from being blocked at unkillable locks. In such cases, the
> +	 * OOM reaper will be unable to reap victim->mm and we will need to
> +	 * select a different OOM victim.

This is a more general problem, not related to this particular code.
Whenever we select a victim and call mark_oom_victim, we hope it will
eventually get out of its kernel code path (unless it was running in
userspace), so I am not sure this note is placed properly.

>  	 */
>  	rcu_read_lock();
>  	for_each_process(p) {
> -- 
> 1.8.3.1

-- 
Michal Hocko
SUSE Labs


Thread overview: 13+ messages
2016-04-14 10:56 [PATCH] mm,oom_reaper: Use try_oom_reaper() for reapability test Tetsuo Handa
2016-04-14 10:56 ` [PATCH] mm,oom: Clarify reason to kill other threads sharing the victim's memory Tetsuo Handa
2016-04-14 11:31   ` Michal Hocko [this message]
2016-04-14 15:03     ` [PATCH] mm,oom: Clarify reason to kill other threads sharing the victim's memory Tetsuo Handa
2016-04-14 15:18       ` Michal Hocko
2016-04-14 21:59         ` [PATCH] mm,oom: Clarify reason to kill other threads sharing the victim's memory Tetsuo Handa
2016-04-14 11:21 ` [PATCH] mm,oom_reaper: Use try_oom_reaper() for reapability test Michal Hocko
2016-04-14 11:34   ` Tetsuo Handa
2016-04-14 12:01     ` Michal Hocko
2016-04-14 12:34       ` Michal Hocko
2016-04-14 14:01         ` Tetsuo Handa
2016-04-14 14:30           ` Michal Hocko
2016-04-15 12:11   ` Tetsuo Handa
