From: Michal Hocko <mhocko@kernel.org>
To: Andrew Morton <akpm@linux-foundation.org>,
	Oleg Nesterov <oleg@redhat.com>
Cc: David Rientjes <rientjes@google.com>,
	linux-mm@kvack.org, LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] oom: consider multi-threaded tasks in task_will_free_mem
Date: Tue, 26 Apr 2016 15:57:52 +0200	[thread overview]
Message-ID: <20160426135752.GC20813@dhcp22.suse.cz> (raw)
In-Reply-To: <1460452756-15491-1-git-send-email-mhocko@kernel.org>

On Tue 12-04-16 11:19:16, Michal Hocko wrote:
> From: Michal Hocko <mhocko@suse.com>
> 
> task_will_free_mem is a misnomer: it is really a more complex
> PF_EXITING test used to break out of the oom killer early, on the
> assumption that such a task will release its memory shortly and so we
> do not have to select an oom victim and perform a disruptive action.
> 
> Currently we make sure that the given task is not participating in
> core dumping, because a coredumping task might stay blocked for a long
> time - see d003f371b270 ("oom: don't assume that a coredumping thread
> will exit soon").
> 
> The check can still do better though. We shouldn't consider the task
> unless the whole thread group is going down. This situation is rather
> unlikely but not impossible. A single exiting thread would leave the
> whole address space behind, because the rest of the thread group keeps
> using it, and if we are really unlucky it might also get stuck on the
> exit path, keep its TIF_MEMDIE and so block the oom killer.
> 
> Signed-off-by: Michal Hocko <mhocko@suse.com>
> ---
> 
> Hi,
> I hope I got it right, but I would really appreciate it if Oleg found
> some time to double check after me. The fix is more cosmetic than
> anything else, but I guess it is worth it.

ping...

> 
> Thanks!
> 
>  include/linux/oom.h | 15 +++++++++++++--
>  1 file changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/oom.h b/include/linux/oom.h
> index 628a43242a34..b09c7dc523ff 100644
> --- a/include/linux/oom.h
> +++ b/include/linux/oom.h
> @@ -102,13 +102,24 @@ extern struct task_struct *find_lock_task_mm(struct task_struct *p);
>  
>  static inline bool task_will_free_mem(struct task_struct *task)
>  {
> +	struct signal_struct *sig = task->signal;
> +
>  	/*
>  	 * A coredumping process may sleep for an extended period in exit_mm(),
>  	 * so the oom killer cannot assume that the process will promptly exit
>  	 * and release memory.
>  	 */
> -	return (task->flags & PF_EXITING) &&
> -		!(task->signal->flags & SIGNAL_GROUP_COREDUMP);
> +	if (sig->flags & SIGNAL_GROUP_COREDUMP)
> +		return false;
> +
> +	if (!(task->flags & PF_EXITING))
> +		return false;
> +
> +	/* Make sure that the whole thread group is going down */
> +	if (!thread_group_empty(task) && !(sig->flags & SIGNAL_GROUP_EXIT))
> +		return false;
> +
> +	return true;
>  }
>  
>  /* sysctls */
> -- 
> 2.8.0.rc3
> 

-- 
Michal Hocko
SUSE Labs
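
For context, the "early break out from the oom killer" mentioned in the
changelog happens on the allocation path itself. The fragment below is a
simplified, hypothetical sketch of that use: the helper name is made up,
and while mark_oom_victim() and TIF_MEMDIE are the real mechanisms of
that era, this is not a verbatim copy of mm/oom_kill.c.

/*
 * Simplified sketch of the early break out that task_will_free_mem()
 * enables: if the allocating task is already on its way out, give it
 * access to memory reserves instead of selecting and killing another
 * victim.  Illustrative only, not the actual mm/oom_kill.c code.
 */
static bool oom_current_task_is_exiting(void)
{
	if (task_will_free_mem(current)) {
		/* sets TIF_MEMDIE so the task can dip into memory reserves */
		mark_oom_victim(current);
		return true;	/* no disruptive victim selection needed */
	}
	return false;
}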

Thread overview: 24+ messages
2016-04-12  9:19 [PATCH] oom: consider multi-threaded tasks in task_will_free_mem Michal Hocko
2016-04-13 11:04 ` Tetsuo Handa
2016-04-13 13:08   ` Michal Hocko
2016-04-13 13:27     ` Tetsuo Handa
2016-04-13 13:45       ` Michal Hocko
2016-05-17 18:06     ` Oleg Nesterov
2016-04-26 13:57 ` Michal Hocko [this message]
2016-05-17 20:28   ` Michal Hocko
2016-05-17 22:21     ` Andrew Morton
2016-05-18  7:16       ` Michal Hocko
2016-05-17 18:42 ` Oleg Nesterov
2016-05-17 20:25   ` Michal Hocko
