* [PATCH] locking/hung_task: Defer showing held locks
@ 2016-12-13 14:45 Tetsuo Handa
  2016-12-20  8:14 ` Vegard Nossum
  0 siblings, 1 reply; 6+ messages in thread
From: Tetsuo Handa @ 2016-12-13 14:45 UTC
  To: linux-kernel
  Cc: Tetsuo Handa, Vegard Nossum, Andrew Morton, Linus Torvalds,
	Mandeep Singh Baines, Paul E. McKenney, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar

When I was running my test case, which can block hundreds of threads
on fs locks, I got a lockup due to the output from debug_show_all_locks(),
which was added by commit b2d4c2edb2e4f89a ("locking/hung_task: Show all
locks").

We don't need to call debug_show_all_locks() for each blocked thread.
Let's defer calling debug_show_all_locks() until just before panic() or
until we leave the for_each_process_thread() loop.

Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Vegard Nossum <vegard.nossum@oracle.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mandeep Singh Baines <msb@chromium.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
---
 kernel/hung_task.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/kernel/hung_task.c b/kernel/hung_task.c
index 2b59c82..a9ada5d 100644
--- a/kernel/hung_task.c
+++ b/kernel/hung_task.c
@@ -40,6 +40,7 @@
 int __read_mostly sysctl_hung_task_warnings = 10;
 
 static int __read_mostly did_panic;
+static bool hung_task_show_lock;
 
 static struct task_struct *watchdog_task;
 
@@ -116,12 +117,14 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout)
 		pr_err("\"echo 0 > /proc/sys/kernel/hung_task_timeout_secs\""
 			" disables this message.\n");
 		sched_show_task(t);
-		debug_show_all_locks();
+		hung_task_show_lock = true;
 	}
 
 	touch_nmi_watchdog();
 
 	if (sysctl_hung_task_panic) {
+		if (hung_task_show_lock)
+			debug_show_all_locks();
 		trigger_all_cpu_backtrace();
 		panic("hung_task: blocked tasks");
 	}
@@ -168,6 +171,7 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout)
 	if (test_taint(TAINT_DIE) || did_panic)
 		return;
 
+	hung_task_show_lock = false;
 	rcu_read_lock();
 	for_each_process_thread(g, t) {
 		if (!max_count--)
@@ -183,6 +187,8 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout)
 	}
  unlock:
 	rcu_read_unlock();
+	if (hung_task_show_lock)
+		debug_show_all_locks();
 }
 
 static long hung_timeout_jiffies(unsigned long last_checked,
-- 
1.8.3.1


* Re: [PATCH] locking/hung_task: Defer showing held locks
  2016-12-13 14:45 [PATCH] locking/hung_task: Defer showing held locks Tetsuo Handa
@ 2016-12-20  8:14 ` Vegard Nossum
  2016-12-20 13:34   ` Tetsuo Handa
  0 siblings, 1 reply; 6+ messages in thread
From: Vegard Nossum @ 2016-12-20  8:14 UTC
  To: Tetsuo Handa
  Cc: LKML, Vegard Nossum, Andrew Morton, Linus Torvalds,
	Mandeep Singh Baines, Paul E. McKenney, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar

On 13 December 2016 at 15:45, Tetsuo Handa
<penguin-kernel@i-love.sakura.ne.jp> wrote:
> When I was running my test case, which can block hundreds of threads
> on fs locks, I got a lockup due to the output from debug_show_all_locks(),
> which was added by commit b2d4c2edb2e4f89a ("locking/hung_task: Show all
> locks").
>
> We don't need to call debug_show_all_locks() for each blocked thread.
> Let's defer calling debug_show_all_locks() until just before panic() or
> until we leave the for_each_process_thread() loop.

First of all, sorry for not answering earlier.

I'm not sure I fully understand the problem. You say the "output from
debug_show_all_locks()" caused a lockup, but was the problem simply
that the amount of output caused it to stall for a long time?

Could we instead

1) move the debug_show_all_locks() into the if
(sysctl_hung_task_panic) bit unconditionally

2) call something (touch_nmi_watchdog()?) inside debug_show_all_locks()
(sketched below)

3) make debug_show_all_locks() more robust in some other way so it
doesn't "lock up"

?
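
For illustration, option (2) might look roughly like this; a minimal
sketch that assumes debug_show_all_locks() walks the task list as shown
(simplified, not the actual kernel/locking/lockdep.c code, whose extra
locking/retry details are omitted here):

/*
 * Option (2) sketch: feed the watchdogs from inside the loop of
 * debug_show_all_locks(), so a long dump to a slow console does not
 * trigger soft/hard-lockup reports of its own.
 */
void debug_show_all_locks(void)
{
	struct task_struct *g, *p;

	pr_warn("\nShowing all locks held in the system:\n");

	rcu_read_lock();
	for_each_process_thread(g, p) {
		if (!p->lockdep_depth)
			continue;
		lockdep_print_held_locks(p);
		/* Keep the watchdogs quiet while we print
		 * potentially thousands of lines. */
		touch_nmi_watchdog();
		touch_all_softlockup_watchdogs();
	}
	rcu_read_unlock();

	pr_warn("=============================================\n\n");
}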


Vegard


* Re: [PATCH] locking/hung_task: Defer showing held locks
  2016-12-20  8:14 ` Vegard Nossum
@ 2016-12-20 13:34   ` Tetsuo Handa
  2017-02-28  2:05     ` Tetsuo Handa
  0 siblings, 1 reply; 6+ messages in thread
From: Tetsuo Handa @ 2016-12-20 13:34 UTC
  To: vegard.nossum
  Cc: linux-kernel, vegard.nossum, akpm, torvalds, msb, paulmck,
	peterz, tglx, mingo

Vegard Nossum wrote:
> On 13 December 2016 at 15:45, Tetsuo Handa
> <penguin-kernel@i-love.sakura.ne.jp> wrote:
> > When I was running my testcase which may block hundreds of threads
> > on fs locks, I got lockup due to output from debug_show_all_locks()
> > added by commit b2d4c2edb2e4f89a ("locking/hung_task: Show all locks").
> >
> > I think we don't need to call debug_show_all_locks() on each blocked
> > thread. Let's defer calling debug_show_all_locks() till before panic()
> > or leaving for_each_process_thread() loop.
>
> First of all, sorry for not answering earlier.

No problem.

>
> I'm not sure I fully understand the problem. You say the "output from
> debug_show_all_locks()" caused a lockup, but was the problem simply
> that the amount of output caused it to stall for a long time?

In Linux 4.9, warn_alloc(), which calls printk() periodically while a
memory allocation has been stalling for too long, was added in order to
tell the administrator that something might be wrong with memory
allocation. However, since printk() waits until all pending data has
been sent to the console, calling cond_resched() while it waits,
printk() keeps waiting as long as somebody else calls printk() whenever
cond_resched() is called. This is problematic in an OOM situation.

Since the OOM killer calls printk() with oom_lock held, it happened
that a printk() called from the OOM killer was forever unable to
return, because warn_alloc() kept calling printk() for as long as the
OOM killer was holding oom_lock.

And it turned out that khungtaskd is another source that calls printk()
periodically when threads are blocked on fs locks waiting for memory
allocation. debug_show_all_locks() generates far more output than
warn_alloc() if it is called for every thread blocked on fs locks
waiting for memory allocation. Therefore, we should avoid calling
debug_show_all_locks() for each blocked thread.
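
The feedback loop looks roughly like this (an illustrative sketch; the
function and lock names match the 4.9-era kernel, but the bodies are
condensed, and stalling_too_long() is a hypothetical helper, not the
real mm/ code):

/* Thread A: the OOM killer path. */
mutex_lock(&oom_lock);
pr_err("Out of memory: Kill process ...\n");
	/*
	 * printk() keeps flushing the console for as long as new
	 * messages keep arriving, so it does not return...
	 */
mutex_unlock(&oom_lock);	/* ...and oom_lock is never released. */

/* Threads B, C, ...: the allocation retry loop, which cannot make
 * progress because the OOM killer path above holds oom_lock. */
for (;;) {
	if (stalling_too_long())		/* hypothetical helper */
		warn_alloc(gfp_mask, "page allocation stalls");
	/*
	 * Each warn_alloc() -> printk() appends more pending console
	 * output for thread A's printk() to flush, closing the loop.
	 */
}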

The full story starts at http://lkml.kernel.org/r/1481020439-5867-1-git-send-email-penguin-kernel@I-love.SAKURA.ne.jp but
I would appreciate it if you could join the discussion at http://lkml.kernel.org/r/1478416501-10104-1-git-send-email-penguin-kernel@I-love.SAKURA.ne.jp .

>
> Could we instead
>
> 1) move the debug_show_all_locks() into the if
> (sysctl_hung_task_panic) bit unconditionally
>
> 2) call something (touch_nmi_watchdog()?) inside debug_show_all_locks()
>
> 3) in another way make debug_show_all_locks() more robust so it doesn't "lockup"
>
> ?

Yes, that might be an improvement, but it is not needed for this patch.


* Re: [PATCH] locking/hung_task: Defer showing held locks
  2016-12-20 13:34   ` Tetsuo Handa
@ 2017-02-28  2:05     ` Tetsuo Handa
  0 siblings, 0 replies; 6+ messages in thread
From: Tetsuo Handa @ 2017-02-28  2:05 UTC
  To: vegard.nossum
  Cc: linux-kernel, vegard.nossum, akpm, torvalds, msb, paulmck,
	peterz, tglx, mingo

Vegard, can I send this patch?

Tetsuo Handa wrote:
> Vegard Nossum wrote:
> > On 13 December 2016 at 15:45, Tetsuo Handa
> > <penguin-kernel@i-love.sakura.ne.jp> wrote:
> > > When I was running my test case, which can block hundreds of threads
> > > on fs locks, I got a lockup due to the output from debug_show_all_locks(),
> > > which was added by commit b2d4c2edb2e4f89a ("locking/hung_task: Show all
> > > locks").
> > >
> > > We don't need to call debug_show_all_locks() for each blocked thread.
> > > Let's defer calling debug_show_all_locks() until just before panic() or
> > > until we leave the for_each_process_thread() loop.
> >
> > First of all, sorry for not answering earlier.
> 
> No problem.
> 
> >
> > I'm not sure I fully understand the problem. You say the "output from
> > debug_show_all_locks()" caused a lockup, but was the problem simply
> > that the amount of output caused it to stall for a long time?
> 
> In Linux 4.9, warn_alloc(), which calls printk() periodically while a
> memory allocation has been stalling for too long, was added in order to
> tell the administrator that something might be wrong with memory
> allocation. However, since printk() waits until all pending data has
> been sent to the console, calling cond_resched() while it waits,
> printk() keeps waiting as long as somebody else calls printk() whenever
> cond_resched() is called. This is problematic in an OOM situation.
> 
> Since the OOM killer calls printk() with oom_lock held, it happened
> that a printk() called from the OOM killer was forever unable to
> return, because warn_alloc() kept calling printk() for as long as the
> OOM killer was holding oom_lock.
> 
> And it turned out that khungtaskd is another source that calls printk()
> periodically when threads are blocked on fs locks waiting for memory
> allocation. debug_show_all_locks() generates far more output than
> warn_alloc() if it is called for every thread blocked on fs locks
> waiting for memory allocation. Therefore, we should avoid calling
> debug_show_all_locks() for each blocked thread.
> 
> The full story starts at http://lkml.kernel.org/r/1481020439-5867-1-git-send-email-penguin-kernel@I-love.SAKURA.ne.jp but
> I would appreciate it if you could join the discussion at http://lkml.kernel.org/r/1478416501-10104-1-git-send-email-penguin-kernel@I-love.SAKURA.ne.jp .
> 
> >
> > Could we instead
> >
> > 1) move the debug_show_all_locks() into the if
> > (sysctl_hung_task_panic) bit unconditionally
> >
> > 2) call something (touch_nmi_watchdog()?) inside debug_show_all_locks()
> >
> > 3) make debug_show_all_locks() more robust in some other way so it
> > doesn't "lock up"
> >
> > ?
> 
> Yes, that might be an improvement, but it is not needed for this patch.
> 


* Re: [PATCH] locking/hung_task: Defer showing held locks
  2017-03-12  5:33 Tetsuo Handa
@ 2017-03-12  9:00 ` Vegard Nossum
  0 siblings, 0 replies; 6+ messages in thread
From: Vegard Nossum @ 2017-03-12  9:00 UTC
  To: Tetsuo Handa, akpm; +Cc: linux-kernel

On 12/03/2017 06:33, Tetsuo Handa wrote:
> When I was running my test case, which can block hundreds of threads
> on fs locks, I got a lockup due to the output from debug_show_all_locks(),
> which was added by commit b2d4c2edb2e4f89a ("locking/hung_task: Show all
> locks").
>
> For example, if 1000 threads are blocked in TASK_UNINTERRUPTIBLE state
> and 500 of them hold some lock, calling debug_show_all_locks() from the
> for_each_process_thread() loop will report the locks held by those 500
> threads 1000 times (roughly 500,000 held-lock reports per scan). This
> is far too much noise.
>
> In order to make sure rcu_lock_break() is called frequently, we should
> avoid calling debug_show_all_locks() from inside the
> for_each_process_thread() loop, because debug_show_all_locks()
> effectively runs a for_each_process_thread() loop of its own. Let's
> defer calling debug_show_all_locks() until just before panic() or until
> we leave the for_each_process_thread() loop.
>
> Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> Cc: Vegard Nossum <vegard.nossum@oracle.com>
> ---
>  kernel/hung_task.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/hung_task.c b/kernel/hung_task.c
> index f0f8e2a..751593e 100644
> --- a/kernel/hung_task.c
> +++ b/kernel/hung_task.c
> @@ -43,6 +43,7 @@
>  int __read_mostly sysctl_hung_task_warnings = 10;
>
>  static int __read_mostly did_panic;
> +static bool hung_task_show_lock;
>
>  static struct task_struct *watchdog_task;
>
> @@ -120,12 +121,14 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout)
>  		pr_err("\"echo 0 > /proc/sys/kernel/hung_task_timeout_secs\""
>  			" disables this message.\n");
>  		sched_show_task(t);
> -		debug_show_all_locks();
> +		hung_task_show_lock = true;
>  	}
>
>  	touch_nmi_watchdog();
>
>  	if (sysctl_hung_task_panic) {
> +		if (hung_task_show_lock)
> +			debug_show_all_locks();
>  		trigger_all_cpu_backtrace();
>  		panic("hung_task: blocked tasks");
>  	}
> @@ -172,6 +175,7 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout)
>  	if (test_taint(TAINT_DIE) || did_panic)
>  		return;
>
> +	hung_task_show_lock = false;
>  	rcu_read_lock();
>  	for_each_process_thread(g, t) {
>  		if (!max_count--)
> @@ -187,6 +191,8 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout)
>  	}
>   unlock:
>  	rcu_read_unlock();
> +	if (hung_task_show_lock)
> +		debug_show_all_locks();
>  }
>
>  static long hung_timeout_jiffies(unsigned long last_checked,
>

Reviewed/Acked-by: Vegard Nossum <vegard.nossum@oracle.com>

Thank you for fixing this.


Vegard


* [PATCH] locking/hung_task: Defer showing held locks
@ 2017-03-12  5:33 Tetsuo Handa
  2017-03-12  9:00 ` Vegard Nossum
  0 siblings, 1 reply; 6+ messages in thread
From: Tetsuo Handa @ 2017-03-12  5:33 UTC
  To: akpm; +Cc: linux-kernel, Tetsuo Handa, Vegard Nossum

When I was running my test case, which can block hundreds of threads
on fs locks, I got a lockup due to the output from debug_show_all_locks(),
which was added by commit b2d4c2edb2e4f89a ("locking/hung_task: Show all
locks").

For example, if 1000 threads are blocked in TASK_UNINTERRUPTIBLE state
and 500 of them hold some lock, calling debug_show_all_locks() from the
for_each_process_thread() loop will report the locks held by those 500
threads 1000 times (roughly 500,000 held-lock reports per scan). This
is far too much noise.

In order to make sure rcu_lock_break() is called frequently, we should
avoid calling debug_show_all_locks() from inside the
for_each_process_thread() loop, because debug_show_all_locks()
effectively runs a for_each_process_thread() loop of its own. Let's
defer calling debug_show_all_locks() until just before panic() or until
we leave the for_each_process_thread() loop.
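
For reference, the batching in question looks like the following,
slightly simplified from kernel/hung_task.c: the scan drops the RCU
read lock every HUNG_TASK_BATCHING (1024) tasks via rcu_lock_break(),
which only helps if each loop iteration stays cheap:

	rcu_read_lock();
	for_each_process_thread(g, t) {
		if (!max_count--)
			goto unlock;
		if (!--batch_count) {
			batch_count = HUNG_TASK_BATCHING;
			/*
			 * Drops rcu_read_lock() and calls cond_resched().
			 * A debug_show_all_locks() call inside
			 * check_hung_task() walks the whole task list
			 * again, delaying the next break.
			 */
			if (!rcu_lock_break(g, t))
				goto unlock;
		}
		if (t->state == TASK_UNINTERRUPTIBLE)
			check_hung_task(t, timeout);
	}
 unlock:
	rcu_read_unlock();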

Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Vegard Nossum <vegard.nossum@oracle.com>
---
 kernel/hung_task.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/kernel/hung_task.c b/kernel/hung_task.c
index f0f8e2a..751593e 100644
--- a/kernel/hung_task.c
+++ b/kernel/hung_task.c
@@ -43,6 +43,7 @@
 int __read_mostly sysctl_hung_task_warnings = 10;
 
 static int __read_mostly did_panic;
+static bool hung_task_show_lock;
 
 static struct task_struct *watchdog_task;
 
@@ -120,12 +121,14 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout)
 		pr_err("\"echo 0 > /proc/sys/kernel/hung_task_timeout_secs\""
 			" disables this message.\n");
 		sched_show_task(t);
-		debug_show_all_locks();
+		hung_task_show_lock = true;
 	}
 
 	touch_nmi_watchdog();
 
 	if (sysctl_hung_task_panic) {
+		if (hung_task_show_lock)
+			debug_show_all_locks();
 		trigger_all_cpu_backtrace();
 		panic("hung_task: blocked tasks");
 	}
@@ -172,6 +175,7 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout)
 	if (test_taint(TAINT_DIE) || did_panic)
 		return;
 
+	hung_task_show_lock = false;
 	rcu_read_lock();
 	for_each_process_thread(g, t) {
 		if (!max_count--)
@@ -187,6 +191,8 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout)
 	}
  unlock:
 	rcu_read_unlock();
+	if (hung_task_show_lock)
+		debug_show_all_locks();
 }
 
 static long hung_timeout_jiffies(unsigned long last_checked,
-- 
1.8.3.1

