LKML Archive on lore.kernel.org
* [patch] mm, oom: dump stack of victim when reaping failed
@ 2020-01-14 23:20 David Rientjes
  2020-01-15  8:43 ` Michal Hocko
  0 siblings, 1 reply; 6+ messages in thread
From: David Rientjes @ 2020-01-14 23:20 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Michal Hocko, linux-kernel, linux-mm

When a process cannot be oom reaped, for whatever reason, the list of
locks it holds is currently dumped to the kernel log.

Much more interesting is the stack trace of the victim that cannot be
reaped.  If the stack trace is dumped, we have the ability to find
related occurrences in the same kernel code and hopefully solve the
issue that is making it wedged.

Dump the stack trace when a process fails to be oom reaped.

Signed-off-by: David Rientjes <rientjes@google.com>
---
 mm/oom_kill.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mm/oom_kill.c b/mm/oom_kill.c
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -26,6 +26,7 @@
 #include <linux/sched/mm.h>
 #include <linux/sched/coredump.h>
 #include <linux/sched/task.h>
+#include <linux/sched/debug.h>
 #include <linux/swap.h>
 #include <linux/timex.h>
 #include <linux/jiffies.h>
@@ -620,6 +621,7 @@ static void oom_reap_task(struct task_struct *tsk)
 
 	pr_info("oom_reaper: unable to reap pid:%d (%s)\n",
 		task_pid_nr(tsk), tsk->comm);
+	sched_show_task(tsk);
 	debug_show_all_locks();
 
 done:


* Re: [patch] mm, oom: dump stack of victim when reaping failed
  2020-01-14 23:20 [patch] mm, oom: dump stack of victim when reaping failed David Rientjes
@ 2020-01-15  8:43 ` Michal Hocko
  2020-01-15  9:18   ` Tetsuo Handa
  0 siblings, 1 reply; 6+ messages in thread
From: Michal Hocko @ 2020-01-15  8:43 UTC (permalink / raw)
  To: David Rientjes; +Cc: Andrew Morton, linux-kernel, linux-mm

On Tue 14-01-20 15:20:04, David Rientjes wrote:
> When a process cannot be oom reaped, for whatever reason, the list of
> locks it holds is currently dumped to the kernel log.
> 
> Much more interesting is the stack trace of the victim that cannot be
> reaped.  If the stack trace is dumped, we have the ability to find
> related occurrences in the same kernel code and hopefully solve the
> issue that is making it wedged.
> 
> Dump the stack trace when a process fails to be oom reaped.

Yes, this is really helpful.

> Signed-off-by: David Rientjes <rientjes@google.com>

Acked-by: Michal Hocko <mhocko@suse.com>

Thanks!

> ---
>  mm/oom_kill.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -26,6 +26,7 @@
>  #include <linux/sched/mm.h>
>  #include <linux/sched/coredump.h>
>  #include <linux/sched/task.h>
> +#include <linux/sched/debug.h>
>  #include <linux/swap.h>
>  #include <linux/timex.h>
>  #include <linux/jiffies.h>
> @@ -620,6 +621,7 @@ static void oom_reap_task(struct task_struct *tsk)
>  
>  	pr_info("oom_reaper: unable to reap pid:%d (%s)\n",
>  		task_pid_nr(tsk), tsk->comm);
> +	sched_show_task(tsk);
>  	debug_show_all_locks();
>  
>  done:

-- 
Michal Hocko
SUSE Labs


* Re: [patch] mm, oom: dump stack of victim when reaping failed
  2020-01-15  8:43 ` Michal Hocko
@ 2020-01-15  9:18   ` Tetsuo Handa
  2020-01-15 20:27     ` David Rientjes
  0 siblings, 1 reply; 6+ messages in thread
From: Tetsuo Handa @ 2020-01-15  9:18 UTC (permalink / raw)
  To: Michal Hocko, David Rientjes; +Cc: Andrew Morton, linux-kernel, linux-mm

On 2020/01/15 17:43, Michal Hocko wrote:
> On Tue 14-01-20 15:20:04, David Rientjes wrote:
>> When a process cannot be oom reaped, for whatever reason, the list of
>> locks it holds is currently dumped to the kernel log.
>>
>> Much more interesting is the stack trace of the victim that cannot be
>> reaped.  If the stack trace is dumped, we have the ability to find
>> related occurrences in the same kernel code and hopefully solve the
>> issue that is making it wedged.
>>
>> Dump the stack trace when a process fails to be oom reaped.
> 
> Yes, this is really helpful.

tsk would be a thread group leader, but the thread which got stuck is not
always a thread group leader. Maybe dump all threads in that thread group
without PF_EXITING (or something) ?
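The suggestion above could be sketched roughly as follows. This is a
hypothetical variant of the patch, not tested code: it walks every thread
in the victim's thread group and skips threads that have already entered
exit, instead of dumping only the group leader.

```c
	struct task_struct *t;

	rcu_read_lock();
	for_each_thread(tsk, t) {
		/* Skip threads already exiting; their stacks are not the ones stuck */
		if (t->flags & PF_EXITING)
			continue;
		sched_show_task(t);
	}
	rcu_read_unlock();
```

The cost, as discussed below, is log verbosity for processes with many
threads.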

> 
>> Signed-off-by: David Rientjes <rientjes@google.com>
> 
> Acked-by: Michal Hocko <mhocko@suse.com>
> 
> Thanks!
> 
>> ---
>>  mm/oom_kill.c | 2 ++
>>  1 file changed, 2 insertions(+)
>>
>> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
>> --- a/mm/oom_kill.c
>> +++ b/mm/oom_kill.c
>> @@ -26,6 +26,7 @@
>>  #include <linux/sched/mm.h>
>>  #include <linux/sched/coredump.h>
>>  #include <linux/sched/task.h>
>> +#include <linux/sched/debug.h>
>>  #include <linux/swap.h>
>>  #include <linux/timex.h>
>>  #include <linux/jiffies.h>
>> @@ -620,6 +621,7 @@ static void oom_reap_task(struct task_struct *tsk)
>>  
>>  	pr_info("oom_reaper: unable to reap pid:%d (%s)\n",
>>  		task_pid_nr(tsk), tsk->comm);
>> +	sched_show_task(tsk);
>>  	debug_show_all_locks();
>>  
>>  done:
> 



* Re: [patch] mm, oom: dump stack of victim when reaping failed
  2020-01-15  9:18   ` Tetsuo Handa
@ 2020-01-15 20:27     ` David Rientjes
  2020-01-15 21:09       ` Tetsuo Handa
  0 siblings, 1 reply; 6+ messages in thread
From: David Rientjes @ 2020-01-15 20:27 UTC (permalink / raw)
  To: Tetsuo Handa; +Cc: Michal Hocko, Andrew Morton, linux-kernel, linux-mm

On Wed, 15 Jan 2020, Tetsuo Handa wrote:

> >> When a process cannot be oom reaped, for whatever reason, the list of
> >> locks it holds is currently dumped to the kernel log.
> >>
> >> Much more interesting is the stack trace of the victim that cannot be
> >> reaped.  If the stack trace is dumped, we have the ability to find
> >> related occurrences in the same kernel code and hopefully solve the
> >> issue that is making it wedged.
> >>
> >> Dump the stack trace when a process fails to be oom reaped.
> > 
> > Yes, this is really helpful.
> 
> tsk would be a thread group leader, but the thread which got stuck is not
> always a thread group leader. Maybe dump all threads in that thread group
> without PF_EXITING (or something) ?
> 

That's possible, yes.  I think it comes down to the classic problem of how 
much info in the kernel log on oom kill is too much.  Stacks for all 
threads that match the mm being reaped may be *very* verbose.  I'm 
currently tracking a stall in oom reaping where the victim doesn't always 
have a lock held so we don't know where it's at in the kernel; I'm hoping 
that a stack for the thread group leader will at least shed some light on 
it.


* Re: [patch] mm, oom: dump stack of victim when reaping failed
  2020-01-15 20:27     ` David Rientjes
@ 2020-01-15 21:09       ` Tetsuo Handa
  2020-01-16 21:05         ` David Rientjes
  0 siblings, 1 reply; 6+ messages in thread
From: Tetsuo Handa @ 2020-01-15 21:09 UTC (permalink / raw)
  To: David Rientjes; +Cc: Michal Hocko, Andrew Morton, linux-kernel, linux-mm

On 2020/01/16 5:27, David Rientjes wrote:
> I'm 
> currently tracking a stall in oom reaping where the victim doesn't always 
> have a lock held so we don't know where it's at in the kernel; I'm hoping 
> that a stack for the thread group leader will at least shed some light on 
> it.
> 

This change was already proposed at
https://lore.kernel.org/linux-mm/20180320122818.GL23100@dhcp22.suse.cz/ .

And according to that proposal, it is likely i_mmap_lock_write() in dup_mmap()
in copy_process(). We tried to make that lock killable but we gave it up
because nobody knows whether it is safe to make it killable.
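For context, the abandoned "killable" approach would have looked roughly
like the sketch below (a non-authoritative reconstruction, not the actual
abandoned patch; the `fail_nomem` unwind label is hypothetical). The hard
part was proving the error unwinding in dup_mmap() is safe at that point,
which is why it was dropped.

```c
	/* Instead of the unconditional i_mmap_lock_write(mapping) ... */
	if (down_write_killable(&mapping->i_mmap_rwsem)) {
		/* Fatal signal (e.g. OOM kill) pending: abort the fork */
		retval = -EINTR;
		goto fail_nomem;	/* hypothetical unwind label */
	}
```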


* Re: [patch] mm, oom: dump stack of victim when reaping failed
  2020-01-15 21:09       ` Tetsuo Handa
@ 2020-01-16 21:05         ` David Rientjes
  0 siblings, 0 replies; 6+ messages in thread
From: David Rientjes @ 2020-01-16 21:05 UTC (permalink / raw)
  To: Tetsuo Handa; +Cc: Michal Hocko, Andrew Morton, linux-kernel, linux-mm

On Thu, 16 Jan 2020, Tetsuo Handa wrote:

> > I'm 
> > currently tracking a stall in oom reaping where the victim doesn't always 
> > have a lock held so we don't know where it's at in the kernel; I'm hoping 
> > that a stack for the thread group leader will at least shed some light on 
> > it.
> > 
> 
> This change was already proposed at
> https://lore.kernel.org/linux-mm/20180320122818.GL23100@dhcp22.suse.cz/ .
> 

Hmm, seems the patch didn't get followed up on but I obviously agree with 
it :)

> And according to that proposal, it is likely i_mmap_lock_write() in dup_mmap()
> in copy_process(). We tried to make that lock killable but we gave it up
> because nobody knows whether it is safe to make it killable.
> 

I haven't encountered that particular problem yet; one problem that I've 
found is a victim holding cgroup_threadgroup_rwsem in the exit path, 
another problem is the victim not holding any locks at all which is more 
concerning (why isn't it making forward progress?).  This patch intends to 
provide a clue for the latter.

Aside: we may also want to consider the possibility of doing immediate 
additional oom killing if the initial victim is too small.  We rely on the 
oom reaper to solve livelocks like this by freeing memory so that 
allocators can drop locks that the victim depends on.  If the victim is 
too small (we have victims <1MB because of oom_score_adj +1000!) we may 
want to consider additional immediate oom killing because it simply won't 
free enough memory.
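The idea in the aside could be sketched as a victim-size check after
selection; this is an illustration of the proposal only, not a patch, and
the helper name and 1MB threshold are made up for the example.

```c
/*
 * Sketch: estimate how much memory killing this victim could free
 * (resident pages plus swap entries, as oom_badness() counts them)
 * and report whether it clears a minimum threshold.  A caller could
 * use a false result to justify selecting an additional victim
 * immediately rather than waiting on the oom reaper.
 */
static bool victim_frees_enough(struct task_struct *victim)
{
	unsigned long pages = get_mm_rss(victim->mm) +
			      get_mm_counter(victim->mm, MM_SWAPENTS);

	/* Hypothetical threshold: at least 1MB worth of pages */
	return pages >= (1UL << (20 - PAGE_SHIFT));
}
```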


