* [PATCH v4] sched/debug: Use sched_debug_lock to serialize use of cgroup_path[] only
@ 2021-04-05 23:42 Waiman Long
  2021-04-06  0:18 ` Steven Rostedt
  2021-04-06  9:15 ` Peter Zijlstra
  0 siblings, 2 replies; 5+ messages in thread
From: Waiman Long @ 2021-04-05 23:42 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira
  Cc: Bharata B Rao, Phil Auld, Daniel Thompson, linux-kernel, Waiman Long

The handling of a sysrq key can be activated by echoing the key to
/proc/sysrq-trigger or via the magic key sequence typed into a terminal
that is connected to the system in some way (serial, USB or other
means). In the former case, the handling is done in a user context. In
the latter case, it is likely to be in an interrupt context.

There should be no more than one instance of sysrq key processing via
a terminal, but multiple concurrent writes to /proc/sysrq-trigger are
possible.

Currently in print_cpu() of kernel/sched/debug.c, sched_debug_lock is
taken with interrupts disabled for the whole duration of the calls to
print_*_stats() and print_rq(), which could last quite some time if
the information dump happens on the serial console.

If the system has many CPUs and sched_debug_lock is somehow busy
(e.g. a parallel sysrq-t), the system may hit a hard lockup panic
depending on the actual serial console implementation of the
system. For instance,

[ 7809.796262] Kernel panic - not syncing: Hard LOCKUP
[ 7809.796264] CPU: 13 PID: 79867 Comm: reproducer.sh Kdump: loaded Tainted: G          I      --------- -  - 4.18.0-301.el8.x86_64 #1
[ 7809.796264] Hardware name: Dell Inc. PowerEdge R640/0W23H8, BIOS 1.4.9 06/29/2018
[ 7809.796265] Call Trace:
[ 7809.796265]  <NMI>
[ 7809.796266]  dump_stack+0x5c/0x80
[ 7809.796266]  panic+0xe7/0x2a9
[ 7809.796267]  nmi_panic.cold.9+0xc/0xc
[ 7809.796267]  watchdog_overflow_callback.cold.7+0x5c/0x70
[ 7809.796268]  __perf_event_overflow+0x52/0xf0
[ 7809.796268]  handle_pmi_common+0x204/0x2a0
[ 7809.796269]  ? __set_pte_vaddr+0x32/0x50
[ 7809.796269]  ? __native_set_fixmap+0x24/0x30
[ 7809.796270]  ? ghes_copy_tofrom_phys+0xd3/0x1c0
[ 7809.796271]  intel_pmu_handle_irq+0xbf/0x160
[ 7809.796271]  perf_event_nmi_handler+0x2d/0x50
[ 7809.796272]  nmi_handle+0x63/0x110
[ 7809.796272]  default_do_nmi+0x49/0x100
[ 7809.796273]  do_nmi+0x17e/0x1e0
[ 7809.796273]  end_repeat_nmi+0x16/0x6f
[ 7809.796274] RIP: 0010:native_queued_spin_lock_slowpath+0x5b/0x1d0
[ 7809.796275] Code: 6d f0 0f ba 2f 08 0f 92 c0 0f b6 c0 c1 e0 08 89 c2 8b 07 30 e4 09 d0 a9 00 01 ff ff 75 47 85 c0 74 0e 8b 07 84 c0 74 08 f3 90 <8b> 07 84 c0 75 f8 b8 01 00 00 00 66 89 07 c3 8b 37 81 fe 00 01 00
[ 7809.796276] RSP: 0018:ffffaa54cd887df8 EFLAGS: 00000002
[ 7809.796277] RAX: 0000000000000101 RBX: 0000000000000246 RCX: 0000000000000000
[ 7809.796278] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffff936b66d0
[ 7809.796278] RBP: ffffffff9301fb40 R08: 0000000000000004 R09: 000000000000004f
[ 7809.796279] R10: 0000000000000000 R11: ffffaa54cd887cc0 R12: ffff907fd0a29ec0
[ 7809.796280] R13: 0000000000000000 R14: ffffffff926ab7c0 R15: 0000000000000000
[ 7809.796280]  ? native_queued_spin_lock_slowpath+0x5b/0x1d0
[ 7809.796281]  ? native_queued_spin_lock_slowpath+0x5b/0x1d0
[ 7809.796281]  </NMI>
[ 7809.796282]  _raw_spin_lock_irqsave+0x32/0x40
[ 7809.796283]  print_cpu+0x261/0x7c0
[ 7809.796283]  sysrq_sched_debug_show+0x34/0x50
[ 7809.796284]  sysrq_handle_showstate+0xc/0x20
[ 7809.796284]  __handle_sysrq.cold.11+0x48/0xfb
[ 7809.796285]  write_sysrq_trigger+0x2b/0x30
[ 7809.796285]  proc_reg_write+0x39/0x60
[ 7809.796286]  vfs_write+0xa5/0x1a0
[ 7809.796286]  ksys_write+0x4f/0xb0
[ 7809.796287]  do_syscall_64+0x5b/0x1a0
[ 7809.796287]  entry_SYSCALL_64_after_hwframe+0x65/0xca
[ 7809.796288] RIP: 0033:0x7fabe4ceb648

The purpose of sched_debug_lock is to serialize the use of the global
cgroup_path[] buffer in print_cpu(). The rest of the printk calls don't
need serialization from sched_debug_lock.

Calling printk() with interrupts disabled can still be problematic if
multiple instances are running. Allocating a stack buffer of PATH_MAX
bytes is not feasible because of the limited size of the kernel stack.
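(For scale: PATH_MAX is 4096 bytes on Linux, while a whole kernel
thread stack is typically only 16 kB on x86-64, so a single on-stack
path buffer would eat a quarter of it.)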

The print_cpu() function has two callers - sched_debug_show() and
sysrq_sched_debug_show(). The solution implemented in this patch is to
allow all sched_debug_show() callers to contend for sched_debug_lock
and use the full-size group_path[], as their SEQ_printf() calls will be
much faster. However, only one sysrq_sched_debug_show() caller that
outputs to the slow console will be allowed to use group_path[].
Another parallel console writer will have to use a shorter stack buffer
instead. Since the console output will be garbled anyway, truncation of
some cgroup paths shouldn't be a big issue.

Fixes: efe25c2c7b3a ("sched: Reinstate group names in /proc/sched_debug")
Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/sched/debug.c | 54 +++++++++++++++++++++++++++++++++-----------
 1 file changed, 41 insertions(+), 13 deletions(-)

diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 486f403a778b..5d021b247998 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -8,8 +8,6 @@
  */
 #include "sched.h"
 
-static DEFINE_SPINLOCK(sched_debug_lock);
-
 /*
  * This allows printing both to /proc/sched_debug and
  * to the console
@@ -470,16 +468,49 @@ static void print_cfs_group_stats(struct seq_file *m, int cpu, struct task_group
 #endif
 
 #ifdef CONFIG_CGROUP_SCHED
+static DEFINE_SPINLOCK(sched_debug_lock);
 static char group_path[PATH_MAX];
+static enum {
+	TOKEN_NONE,
+	TOKEN_ACQUIRED,
+	TOKEN_NA	/* Not applicable */
+} console_token = TOKEN_ACQUIRED;
 
-static char *task_group_path(struct task_group *tg)
+static void task_group_path(struct task_group *tg, char *path, int plen)
 {
-	if (autogroup_path(tg, group_path, PATH_MAX))
-		return group_path;
+	if (autogroup_path(tg, path, plen))
+		return;
 
-	cgroup_path(tg->css.cgroup, group_path, PATH_MAX);
+	cgroup_path(tg->css.cgroup, path, plen);
+}
 
-	return group_path;
+/*
+ * All the print_cpu() callers from sched_debug_show() will be allowed
+ * to contend for sched_debug_lock and use group_path[] as their SEQ_printf()
+ * calls will be much faster. However only one print_cpu() caller from
+ * sysrq_sched_debug_show() which outputs to the console will be allowed
+ * to use group_path[]. Another parallel console writer will have to use
+ * a shorter stack buffer instead. Since the console output will be garbled
+ * anyway, truncation of some cgroup paths shouldn't be a big issue.
+ */
+#define SEQ_printf_task_group_path(m, tg, fmt...)			\
+{									\
+	unsigned long flags;						\
+	int token = m ? TOKEN_NA					\
+		      : xchg_acquire(&console_token, TOKEN_NONE);	\
+									\
+	if (token == TOKEN_NONE) {					\
+		char buf[128];						\
+		task_group_path(tg, buf, sizeof(buf));			\
+		SEQ_printf(m, fmt, buf);				\
+	} else {							\
+		spin_lock_irqsave(&sched_debug_lock, flags);		\
+		task_group_path(tg, group_path, sizeof(group_path));	\
+		SEQ_printf(m, fmt, group_path);				\
+		spin_unlock_irqrestore(&sched_debug_lock, flags);	\
+		if (token == TOKEN_ACQUIRED)				\
+			smp_store_release(&console_token, token);	\
+	}								\
 }
 #endif
 
@@ -506,7 +537,7 @@ print_task(struct seq_file *m, struct rq *rq, struct task_struct *p)
 	SEQ_printf(m, " %d %d", task_node(p), task_numa_group_id(p));
 #endif
 #ifdef CONFIG_CGROUP_SCHED
-	SEQ_printf(m, " %s", task_group_path(task_group(p)));
+	SEQ_printf_task_group_path(m, task_group(p), " %s")
 #endif
 
 	SEQ_printf(m, "\n");
@@ -543,7 +574,7 @@ void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq)
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	SEQ_printf(m, "\n");
-	SEQ_printf(m, "cfs_rq[%d]:%s\n", cpu, task_group_path(cfs_rq->tg));
+	SEQ_printf_task_group_path(m, cfs_rq->tg, "cfs_rq[%d]:%s\n", cpu);
 #else
 	SEQ_printf(m, "\n");
 	SEQ_printf(m, "cfs_rq[%d]:\n", cpu);
@@ -614,7 +645,7 @@ void print_rt_rq(struct seq_file *m, int cpu, struct rt_rq *rt_rq)
 {
 #ifdef CONFIG_RT_GROUP_SCHED
 	SEQ_printf(m, "\n");
-	SEQ_printf(m, "rt_rq[%d]:%s\n", cpu, task_group_path(rt_rq->tg));
+	SEQ_printf_task_group_path(m, rt_rq->tg, "rt_rq[%d]:%s\n", cpu);
 #else
 	SEQ_printf(m, "\n");
 	SEQ_printf(m, "rt_rq[%d]:\n", cpu);
@@ -666,7 +697,6 @@ void print_dl_rq(struct seq_file *m, int cpu, struct dl_rq *dl_rq)
 static void print_cpu(struct seq_file *m, int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
-	unsigned long flags;
 
 #ifdef CONFIG_X86
 	{
@@ -717,13 +747,11 @@ do {									\
 	}
 #undef P
 
-	spin_lock_irqsave(&sched_debug_lock, flags);
 	print_cfs_stats(m, cpu);
 	print_rt_stats(m, cpu);
 	print_dl_stats(m, cpu);
 
 	print_rq(m, rq, cpu);
-	spin_unlock_irqrestore(&sched_debug_lock, flags);
 	SEQ_printf(m, "\n");
 }
 
-- 
2.18.1



* Re: [PATCH v4] sched/debug: Use sched_debug_lock to serialize use of cgroup_path[] only
  2021-04-05 23:42 [PATCH v4] sched/debug: Use sched_debug_lock to serialize use of cgroup_path[] only Waiman Long
@ 2021-04-06  0:18 ` Steven Rostedt
  2021-04-06  1:57   ` Waiman Long
  2021-04-06  9:15 ` Peter Zijlstra
  1 sibling, 1 reply; 5+ messages in thread
From: Steven Rostedt @ 2021-04-06  0:18 UTC (permalink / raw)
  To: Waiman Long
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Bharata B Rao, Phil Auld,
	Daniel Thompson, linux-kernel

On Mon,  5 Apr 2021 19:42:03 -0400
Waiman Long <longman@redhat.com> wrote:

> +/*
> + * All the print_cpu() callers from sched_debug_show() will be allowed
> + * to contend for sched_debug_lock and use group_path[] as their SEQ_printf()
> + * calls will be much faster. However only one print_cpu() caller from
> + * sysrq_sched_debug_show() which outputs to the console will be allowed
> + * to use group_path[]. Another parallel console writer will have to use
> + * a shorter stack buffer instead. Since the console output will be garbled
> + * anyway, truncation of some cgroup paths shouldn't be a big issue.
> + */
> +#define SEQ_printf_task_group_path(m, tg, fmt...)			\
> +{									\
> +	unsigned long flags;						\
> +	int token = m ? TOKEN_NA					\
> +		      : xchg_acquire(&console_token, TOKEN_NONE);	\
> +									\
> +	if (token == TOKEN_NONE) {					\
> +		char buf[128];						\
> +		task_group_path(tg, buf, sizeof(buf));			\
> +		SEQ_printf(m, fmt, buf);				\
> +	} else {							\
> +		spin_lock_irqsave(&sched_debug_lock, flags);		\
> +		task_group_path(tg, group_path, sizeof(group_path));	\
> +		SEQ_printf(m, fmt, group_path);				\
> +		spin_unlock_irqrestore(&sched_debug_lock, flags);	\
> +		if (token == TOKEN_ACQUIRED)				\
> +			smp_store_release(&console_token, token);	\
> +	}								\
>  }
>  #endif

And you said my suggestion was complex!

I'll let others review this.

-- Steve


* Re: [PATCH v4] sched/debug: Use sched_debug_lock to serialize use of cgroup_path[] only
  2021-04-06  0:18 ` Steven Rostedt
@ 2021-04-06  1:57   ` Waiman Long
  0 siblings, 0 replies; 5+ messages in thread
From: Waiman Long @ 2021-04-06  1:57 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Bharata B Rao, Phil Auld,
	Daniel Thompson, linux-kernel

On 4/5/21 8:18 PM, Steven Rostedt wrote:
> On Mon,  5 Apr 2021 19:42:03 -0400
> Waiman Long <longman@redhat.com> wrote:
>
>> +/*
>> + * All the print_cpu() callers from sched_debug_show() will be allowed
>> + * to contend for sched_debug_lock and use group_path[] as their SEQ_printf()
>> + * calls will be much faster. However only one print_cpu() caller from
>> + * sysrq_sched_debug_show() which outputs to the console will be allowed
>> + * to use group_path[]. Another parallel console writer will have to use
>> + * a shorter stack buffer instead. Since the console output will be garbled
>> + * anyway, truncation of some cgroup paths shouldn't be a big issue.
>> + */
>> +#define SEQ_printf_task_group_path(m, tg, fmt...)			\
>> +{									\
>> +	unsigned long flags;						\
>> +	int token = m ? TOKEN_NA					\
>> +		      : xchg_acquire(&console_token, TOKEN_NONE);	\
>> +									\
>> +	if (token == TOKEN_NONE) {					\
>> +		char buf[128];						\
>> +		task_group_path(tg, buf, sizeof(buf));			\
>> +		SEQ_printf(m, fmt, buf);				\
>> +	} else {							\
>> +		spin_lock_irqsave(&sched_debug_lock, flags);		\
>> +		task_group_path(tg, group_path, sizeof(group_path));	\
>> +		SEQ_printf(m, fmt, group_path);				\
>> +		spin_unlock_irqrestore(&sched_debug_lock, flags);	\
>> +		if (token == TOKEN_ACQUIRED)				\
>> +			smp_store_release(&console_token, token);	\
>> +	}								\
>>   }
>>   #endif
> And you said my suggestion was complex!
>
> I'll let others review this.
>
This patch is actually inspired by your suggestion, though it is 
structured differently from your approach. I really want to thank you 
for your valuable feedback.

I realized that printing to a sequence file wasn't really a problem;
only printing to the console can be problematic. That is why I decided
to allow unlimited use of group_path[] for those users while allowing
only one console writer to use it. As for calling touch_nmi_watchdog(),
I am still thinking about the right place to call it, but it can be
done in a separate patch, if needed.
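
One possible placement, sketched against the current shape of
sysrq_sched_debug_show() in kernel/sched/debug.c (illustrative only,
not part of this patch):

void sysrq_sched_debug_show(void)
{
	int cpu;

	sched_debug_header(NULL);
	for_each_online_cpu(cpu) {
		/*
		 * Pet the hard-lockup detector between CPUs: dumping
		 * every CPU over a slow serial console can easily
		 * exceed the NMI watchdog threshold.
		 */
		touch_nmi_watchdog();
		print_cpu(NULL, cpu);
	}
}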

Cheers,
Longman



* Re: [PATCH v4] sched/debug: Use sched_debug_lock to serialize use of cgroup_path[] only
  2021-04-05 23:42 [PATCH v4] sched/debug: Use sched_debug_lock to serialize use of cgroup_path[] only Waiman Long
  2021-04-06  0:18 ` Steven Rostedt
@ 2021-04-06  9:15 ` Peter Zijlstra
  2021-04-06 15:17   ` Waiman Long
  1 sibling, 1 reply; 5+ messages in thread
From: Peter Zijlstra @ 2021-04-06  9:15 UTC (permalink / raw)
  To: Waiman Long
  Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Bharata B Rao, Phil Auld,
	Daniel Thompson, linux-kernel

On Mon, Apr 05, 2021 at 07:42:03PM -0400, Waiman Long wrote:
> The handling of a sysrq key can be activated by echoing the key to
> /proc/sysrq-trigger or via the magic key sequence typed into a terminal
> that is connected to the system in some way (serial, USB or other
> means). In the former case, the handling is done in a user context. In
> the latter case, it is likely to be in an interrupt context.

> [ 7809.796281]  </NMI>
> [ 7809.796282]  _raw_spin_lock_irqsave+0x32/0x40
> [ 7809.796283]  print_cpu+0x261/0x7c0
> [ 7809.796283]  sysrq_sched_debug_show+0x34/0x50
> [ 7809.796284]  sysrq_handle_showstate+0xc/0x20
> [ 7809.796284]  __handle_sysrq.cold.11+0x48/0xfb
> [ 7809.796285]  write_sysrq_trigger+0x2b/0x30
> [ 7809.796285]  proc_reg_write+0x39/0x60
> [ 7809.796286]  vfs_write+0xa5/0x1a0
> [ 7809.796286]  ksys_write+0x4f/0xb0
> [ 7809.796287]  do_syscall_64+0x5b/0x1a0
> [ 7809.796287]  entry_SYSCALL_64_after_hwframe+0x65/0xca
> [ 7809.796288] RIP: 0033:0x7fabe4ceb648
> 
> The purpose of sched_debug_lock is to serialize the use of the global
> cgroup_path[] buffer in print_cpu(). The rest of the printk calls don't
> need serialization from sched_debug_lock.

> The print_cpu() function has two callers - sched_debug_show() and
> sysrq_sched_debug_show(). 

So what idiot is doing sysrq and that proc file at the same time? Why is
it a problem now?

> @@ -470,16 +468,49 @@ static void print_cfs_group_stats(struct seq_file *m, int cpu, struct task_group
>  #endif
>  
>  #ifdef CONFIG_CGROUP_SCHED
> +static DEFINE_SPINLOCK(sched_debug_lock);
>  static char group_path[PATH_MAX];
> +static enum {
> +	TOKEN_NONE,
> +	TOKEN_ACQUIRED,
> +	TOKEN_NA	/* Not applicable */
> +} console_token = TOKEN_ACQUIRED;

> +/*
> + * All the print_cpu() callers from sched_debug_show() will be allowed
> + * to contend for sched_debug_lock and use group_path[] as their SEQ_printf()
> + * calls will be much faster. However only one print_cpu() caller from
> + * sysrq_sched_debug_show() which outputs to the console will be allowed
> + * to use group_path[]. Another parallel console writer will have to use
> + * a shorter stack buffer instead. Since the console output will be garbled
> + * anyway, truncation of some cgroup paths shouldn't be a big issue.
> + */
> +#define SEQ_printf_task_group_path(m, tg, fmt...)			\
> +{									\
> +	unsigned long flags;						\
> +	int token = m ? TOKEN_NA					\
> +		      : xchg_acquire(&console_token, TOKEN_NONE);	\
> +									\
> +	if (token == TOKEN_NONE) {					\
> +		char buf[128];						\
> +		task_group_path(tg, buf, sizeof(buf));			\
> +		SEQ_printf(m, fmt, buf);				\
> +	} else {							\
> +		spin_lock_irqsave(&sched_debug_lock, flags);		\
> +		task_group_path(tg, group_path, sizeof(group_path));	\
> +		SEQ_printf(m, fmt, group_path);				\
> +		spin_unlock_irqrestore(&sched_debug_lock, flags);	\
> +		if (token == TOKEN_ACQUIRED)				\
> +			smp_store_release(&console_token, token);	\
> +	}								\
>  }

This is disgusting... you have an open-coded test-and-set lock like
thing *AND* a spinlock, what gives?


What's wrong with something simple like this?

---
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 4b49cc2af5c4..2ac2977f3b96 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -8,8 +8,6 @@
  */
 #include "sched.h"
 
-static DEFINE_SPINLOCK(sched_debug_lock);
-
 /*
  * This allows printing both to /proc/sched_debug and
  * to the console
@@ -470,6 +468,7 @@ static void print_cfs_group_stats(struct seq_file *m, int cpu, struct task_group
 #endif
 
 #ifdef CONFIG_CGROUP_SCHED
+static DEFINE_SPINLOCK(group_path_lock);
 static char group_path[PATH_MAX];
 
 static char *task_group_path(struct task_group *tg)
@@ -481,6 +480,22 @@ static char *task_group_path(struct task_group *tg)
 
 	return group_path;
 }
+
+#define SEQ_printf_task_group_path(m, tg)				\
+do {									\
+	if (spin_trylock(&group_path_lock)) {				\
+		task_group_path(tg);					\
+		SEQ_printf(m, "%s", group_path);			\
+		spin_unlock(&group_path_lock);				\
+	} else {							\
+		SEQ_printf(m, "looser!");				\
+	}								\
+} while (0)
+
+#else
+
+#define SEQ_printf_task_group_path(m, tg) do { } while (0)
+
 #endif
 
 static void
@@ -505,9 +520,8 @@ print_task(struct seq_file *m, struct rq *rq, struct task_struct *p)
 #ifdef CONFIG_NUMA_BALANCING
 	SEQ_printf(m, " %d %d", task_node(p), task_numa_group_id(p));
 #endif
-#ifdef CONFIG_CGROUP_SCHED
-	SEQ_printf(m, " %s", task_group_path(task_group(p)));
-#endif
+	SEQ_printf(m, " ");
+	SEQ_printf_task_group_path(m, task_group(p));
 
 	SEQ_printf(m, "\n");
 }
@@ -541,13 +555,10 @@ void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq)
 	struct sched_entity *last;
 	unsigned long flags;
 
-#ifdef CONFIG_FAIR_GROUP_SCHED
 	SEQ_printf(m, "\n");
-	SEQ_printf(m, "cfs_rq[%d]:%s\n", cpu, task_group_path(cfs_rq->tg));
-#else
+	SEQ_printf(m, "cfs_rq[%d]:", cpu);
+	SEQ_printf_task_group_path(m, cfs_rq->tg);
 	SEQ_printf(m, "\n");
-	SEQ_printf(m, "cfs_rq[%d]:\n", cpu);
-#endif
 	SEQ_printf(m, "  .%-30s: %Ld.%06ld\n", "exec_clock",
 			SPLIT_NS(cfs_rq->exec_clock));
 
@@ -612,13 +623,10 @@ void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq)
 
 void print_rt_rq(struct seq_file *m, int cpu, struct rt_rq *rt_rq)
 {
-#ifdef CONFIG_RT_GROUP_SCHED
 	SEQ_printf(m, "\n");
-	SEQ_printf(m, "rt_rq[%d]:%s\n", cpu, task_group_path(rt_rq->tg));
-#else
+	SEQ_printf(m, "rt_rq[%d]:", cpu);
+	SEQ_printf_task_group_path(m, rt_rq->tg);
 	SEQ_printf(m, "\n");
-	SEQ_printf(m, "rt_rq[%d]:\n", cpu);
-#endif
 
 #define P(x) \
 	SEQ_printf(m, "  .%-30s: %Ld\n", #x, (long long)(rt_rq->x))
@@ -666,7 +674,6 @@ void print_dl_rq(struct seq_file *m, int cpu, struct dl_rq *dl_rq)
 static void print_cpu(struct seq_file *m, int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
-	unsigned long flags;
 
 #ifdef CONFIG_X86
 	{
@@ -717,13 +724,11 @@ do {									\
 	}
 #undef P
 
-	spin_lock_irqsave(&sched_debug_lock, flags);
 	print_cfs_stats(m, cpu);
 	print_rt_stats(m, cpu);
 	print_dl_stats(m, cpu);
 
 	print_rq(m, rq, cpu);
-	spin_unlock_irqrestore(&sched_debug_lock, flags);
 	SEQ_printf(m, "\n");
 }
 


* Re: [PATCH v4] sched/debug: Use sched_debug_lock to serialize use of cgroup_path[] only
  2021-04-06  9:15 ` Peter Zijlstra
@ 2021-04-06 15:17   ` Waiman Long
  0 siblings, 0 replies; 5+ messages in thread
From: Waiman Long @ 2021-04-06 15:17 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Bharata B Rao, Phil Auld,
	Daniel Thompson, linux-kernel

On 4/6/21 5:15 AM, Peter Zijlstra wrote:
> On Mon, Apr 05, 2021 at 07:42:03PM -0400, Waiman Long wrote:
>> The handling of a sysrq key can be activated by echoing the key to
>> /proc/sysrq-trigger or via the magic key sequence typed into a terminal
>> that is connected to the system in some way (serial, USB or other
>> means). In the former case, the handling is done in a user context. In
>> the latter case, it is likely to be in an interrupt context.
>> [ 7809.796281]  </NMI>
>> [ 7809.796282]  _raw_spin_lock_irqsave+0x32/0x40
>> [ 7809.796283]  print_cpu+0x261/0x7c0
>> [ 7809.796283]  sysrq_sched_debug_show+0x34/0x50
>> [ 7809.796284]  sysrq_handle_showstate+0xc/0x20
>> [ 7809.796284]  __handle_sysrq.cold.11+0x48/0xfb
>> [ 7809.796285]  write_sysrq_trigger+0x2b/0x30
>> [ 7809.796285]  proc_reg_write+0x39/0x60
>> [ 7809.796286]  vfs_write+0xa5/0x1a0
>> [ 7809.796286]  ksys_write+0x4f/0xb0
>> [ 7809.796287]  do_syscall_64+0x5b/0x1a0
>> [ 7809.796287]  entry_SYSCALL_64_after_hwframe+0x65/0xca
>> [ 7809.796288] RIP: 0033:0x7fabe4ceb648
>>
>> The purpose of sched_debug_lock is to serialize the use of the global
>> cgroup_path[] buffer in print_cpu(). The rest of the printk calls don't
>> need serialization from sched_debug_lock.
>> The print_cpu() function has two callers - sched_debug_show() and
>> sysrq_sched_debug_show().
> So what idiot is doing sysrq and that proc file at the same time? Why is
> it a problem now?
We got a customer bug report about a watchdog panic because a process
somehow held sched_debug_lock for too long. I don't know what exactly
the customer was doing, but we can reproduce the panic just by running
2 parallel "echo t > /proc/sysrq-trigger" commands. This happens on
certain selected systems; I was using some Dell systems for my testing.
Of course, it is not sensible to have more than one sysrq happening at
the same time. However, a parallel thread accessing /proc/sched_debug
is certainly possible, and that invokes a similar code path.
>
>> @@ -470,16 +468,49 @@ static void print_cfs_group_stats(struct seq_file *m, int cpu, struct task_group
>>   #endif
>>   
>>   #ifdef CONFIG_CGROUP_SCHED
>> +static DEFINE_SPINLOCK(sched_debug_lock);
>>   static char group_path[PATH_MAX];
>> +static enum {
>> +	TOKEN_NONE,
>> +	TOKEN_ACQUIRED,
>> +	TOKEN_NA	/* Not applicable */
>> +} console_token = TOKEN_ACQUIRED;
>> +/*
>> + * All the print_cpu() callers from sched_debug_show() will be allowed
>> + * to contend for sched_debug_lock and use group_path[] as their SEQ_printf()
>> + * calls will be much faster. However only one print_cpu() caller from
>> + * sysrq_sched_debug_show() which outputs to the console will be allowed
>> + * to use group_path[]. Another parallel console writer will have to use
>> + * a shorter stack buffer instead. Since the console output will be garbled
>> + * anyway, truncation of some cgroup paths shouldn't be a big issue.
>> + */
>> +#define SEQ_printf_task_group_path(m, tg, fmt...)			\
>> +{									\
>> +	unsigned long flags;						\
>> +	int token = m ? TOKEN_NA					\
>> +		      : xchg_acquire(&console_token, TOKEN_NONE);	\
>> +									\
>> +	if (token == TOKEN_NONE) {					\
>> +		char buf[128];						\
>> +		task_group_path(tg, buf, sizeof(buf));			\
>> +		SEQ_printf(m, fmt, buf);				\
>> +	} else {							\
>> +		spin_lock_irqsave(&sched_debug_lock, flags);		\
>> +		task_group_path(tg, group_path, sizeof(group_path));	\
>> +		SEQ_printf(m, fmt, group_path);				\
>> +		spin_unlock_irqrestore(&sched_debug_lock, flags);	\
>> +		if (token == TOKEN_ACQUIRED)				\
>> +			smp_store_release(&console_token, token);	\
>> +	}								\
>>   }
> This is disgusting... you have an open-coded test-and-set lock like
> thing *AND* a spinlock, what gives?
>
>
> What's wrong with something simple like this?
>
> ---
> diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
> index 4b49cc2af5c4..2ac2977f3b96 100644
> --- a/kernel/sched/debug.c
> +++ b/kernel/sched/debug.c
> @@ -8,8 +8,6 @@
>    */
>   #include "sched.h"
>   
> -static DEFINE_SPINLOCK(sched_debug_lock);
> -
>   /*
>    * This allows printing both to /proc/sched_debug and
>    * to the console
> @@ -470,6 +468,7 @@ static void print_cfs_group_stats(struct seq_file *m, int cpu, struct task_group
>   #endif
>   
>   #ifdef CONFIG_CGROUP_SCHED
> +static DEFINE_SPINLOCK(group_path_lock);
>   static char group_path[PATH_MAX];
>   
>   static char *task_group_path(struct task_group *tg)
> @@ -481,6 +480,22 @@ static char *task_group_path(struct task_group *tg)
>   
>   	return group_path;
>   }
> +
> +#define SEQ_printf_task_group_path(m, tg)				\
> +do {									\
> +	if (spin_trylock(&group_path_lock)) {				\
> +		task_group_path(tg);					\
> +		SEQ_printf(m, "%s", group_path);			\
> +		spin_unlock(&group_path_lock);				\
> +	} else {							\
> +		SEQ_printf(m, "looser!");				\
> +	}								\
> +} while (0)

I have no problem with using this as a possible solution as long as
others agree. Of course, I won't use the term "looser". I will be more
polite :-)

The current patch allows all /proc/sched_debug users and one sysrq user
to use the full group_path[] buffer. I can certainly change it to allow
one user to use the full-size path, with the rest running the risk of
path truncation.
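
Something like this, perhaps (a sketch only, reusing your
group_path_lock but assuming the three-argument task_group_path()
from my patch; the fallback buffer size is arbitrary):

#define SEQ_printf_task_group_path(m, tg)				\
do {									\
	if (spin_trylock(&group_path_lock)) {				\
		task_group_path(tg, group_path, sizeof(group_path));	\
		SEQ_printf(m, "%s", group_path);			\
		spin_unlock(&group_path_lock);				\
	} else {							\
		char buf[128];	/* deep cgroup paths may truncate */	\
		task_group_path(tg, buf, sizeof(buf));			\
		SEQ_printf(m, "%s", buf);				\
	}								\
} while (0)

That way a parallel writer still gets a path, just possibly a
truncated one, which is in line with the "console output will be
garbled anyway" argument above.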

Cheers,
Longman


