linux-kernel.vger.kernel.org archive mirror
* [RFC PATCH 0/5] printk: new log_cont implementation
@ 2020-08-19 23:26 John Ogness
  2020-08-19 23:26 ` [RFC PATCH 1/5] printk: implement pr_cont_t John Ogness
                   ` (4 more replies)
  0 siblings, 5 replies; 24+ messages in thread
From: John Ogness @ 2020-08-19 23:26 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Petr Mladek, Sergey Senozhatsky, Steven Rostedt,
	Greg Kroah-Hartman, Thomas Gleixner, Sergey Senozhatsky,
	linux-kernel

Hello,

My recent series [0], which moved log_cont handling to the readers,
was not well received, mainly because of the blank lines it produced
in the output of existing dmesg and journalctl tools. That led to a
discussion about alternative methods of handling log_cont. This
RFC series is the result of that discussion.

Rather than storing individual log_cont parts in the ringbuffer,
this series buffers the parts until they are flushed or the message
is complete (just as mainline does now). The difference from
mainline is that a new API allows the printk subsystem to reliably
track the caller while using separate per-caller buffers.

The API is designed such that:

1. The API calls map clearly to the mainline log_cont calls. This
   is important because all current log_cont users will need to
   be updated.

2. The underlying implementation can be changed/improved at some
   later time without requiring the code of log_cont users to
   change (again).

The mainline log_cont syntax:

    printk(KERN_INFO "text");
    printk(KERN_CONT " continued");
    printk(KERN_CONT "\n");

The new RFC log_cont syntax:

    pr_cont_t c;

    pr_cont_begin(&c, KERN_INFO "text");
    pr_cont_append(&c, " continued");
    pr_cont_end(&c);

Note that both methods can co-exist until all users of the old
mechanism have been converted over.

For this RFC series, 10KiB of memory is set aside to allow up to 10
simultaneous log_cont users. Obviously that might be adjusted based
on the number of CPUs or whatever.

The first patch implements the new log_cont mechanism. The following
four patches convert existing log_cont users, with each patch
taking on a more complex example.

This RFC series not only improves log_cont output generally, but
also addresses the lockless printk issue related to log_cont.

John Ogness

[0] https://lkml.kernel.org/r/20200717234818.8622-1-john.ogness@linutronix.de

John Ogness (5):
  printk: implement pr_cont_t
  sysrq: use pr_cont_t for cont messages
  workqueue: use pr_cont_t for cont messages
  locking/selftest: use pr_cont_t for cont messages
  lockdep: use pr_cont_t for cont messages

 based on next-20200819

 drivers/tty/sysrq.c      |   8 +-
 include/linux/printk.h   |  19 ++
 kernel/locking/lockdep.c | 463 +++++++++++++++++++++++----------------
 kernel/printk/printk.c   | 137 ++++++++++++
 kernel/workqueue.c       |  71 +++---
 lib/locking-selftest.c   |  85 +++----
 6 files changed, 519 insertions(+), 264 deletions(-)

-- 
2.20.1



* [RFC PATCH 1/5] printk: implement pr_cont_t
  2020-08-19 23:26 [RFC PATCH 0/5] printk: new log_cont implementation John Ogness
@ 2020-08-19 23:26 ` John Ogness
  2020-08-20  0:33   ` Joe Perches
                     ` (2 more replies)
  2020-08-19 23:26 ` [RFC PATCH 2/5] sysrq: use pr_cont_t for cont messages John Ogness
                   ` (3 subsequent siblings)
  4 siblings, 3 replies; 24+ messages in thread
From: John Ogness @ 2020-08-19 23:26 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Petr Mladek, Sergey Senozhatsky, Steven Rostedt,
	Greg Kroah-Hartman, Thomas Gleixner, Sergey Senozhatsky,
	linux-kernel

Implement a new buffering mechanism for pr_cont messages.

Old mechanism syntax:

    printk(KERN_INFO "text");
    printk(KERN_CONT " continued");
    printk(KERN_CONT "\n");

New mechanism syntax:

    pr_cont_t c;

    pr_cont_begin(&c, KERN_INFO "text");
    pr_cont_append(&c, " continued");
    pr_cont_end(&c);

Signed-off-by: John Ogness <john.ogness@linutronix.de>
---
 include/linux/printk.h |  19 ++++++
 kernel/printk/printk.c | 137 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 156 insertions(+)

diff --git a/include/linux/printk.h b/include/linux/printk.h
index 34c1a7be3e01..4d9ce18e4afa 100644
--- a/include/linux/printk.h
+++ b/include/linux/printk.h
@@ -380,6 +380,25 @@ extern int kptr_restrict;
 #define pr_cont(fmt, ...) \
 	printk(KERN_CONT fmt, ##__VA_ARGS__)
 
+/* opaque handle for continuation (pr_cont) printk messages */
+typedef struct {
+	u8	index;
+	u8	loglevel;
+	u16	text_len;
+} pr_cont_t;
+
+/* initialize handle, provide loglevel and initial message text */
+int pr_cont_begin(pr_cont_t *c, const char *fmt, ...);
+
+/* append message text */
+int pr_cont_append(pr_cont_t *c, const char *fmt, ...);
+
+/* flush message to kernel buffer */
+void pr_cont_flush(pr_cont_t *c);
+
+/* flush message to kernel buffer, cleanup handle */
+void pr_cont_end(pr_cont_t *c);
+
 /**
  * pr_devel - Print a debug-level message conditionally
  * @fmt: format string
diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index ad8d1dfe5fbe..10113e7ea350 100644
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -3495,3 +3495,140 @@ void kmsg_dump_rewind(struct kmsg_dumper *dumper)
 EXPORT_SYMBOL_GPL(kmsg_dump_rewind);
 
 #endif
+
+#define CONT_LINE_MAX LOG_LINE_MAX
+#define CONT_BUF_COUNT 10
+static char cont_buf[CONT_BUF_COUNT][CONT_LINE_MAX];
+static DECLARE_BITMAP(cont_buf_map, CONT_BUF_COUNT);
+
+static int get_cont_buf(void)
+{
+	int bit;
+
+	do {
+		bit = find_first_zero_bit(cont_buf_map, CONT_BUF_COUNT);
+		if (bit == CONT_BUF_COUNT)
+			break;
+	} while (test_and_set_bit(bit, cont_buf_map));
+
+	return bit;
+}
+
+static void put_cont_buf(int index)
+{
+	if (WARN_ON(index >= CONT_BUF_COUNT))
+		return;
+	if (WARN_ON(!test_bit(index, cont_buf_map)))
+		return;
+	clear_bit(index, cont_buf_map);
+}
+
+/* alloc buffer, get loglevel, setup initial text */
+int pr_cont_begin(pr_cont_t *c, const char *fmt, ...)
+{
+	int kern_level;
+	va_list args;
+	char *text;
+
+	c->index = get_cont_buf();
+	if (c->index == CONT_BUF_COUNT) {
+		/* Fallback to printk parts individually. */
+		int text_len;
+
+		va_start(args, fmt);
+		text_len = vprintk(fmt, args);
+		va_end(args);
+		return text_len;
+	}
+
+	text = &cont_buf[c->index][0];
+
+	va_start(args, fmt);
+	c->text_len = vscnprintf(text, CONT_LINE_MAX, fmt, args);
+	va_end(args);
+
+	c->loglevel = default_message_loglevel;
+	while (c->text_len > 1 &&
+	       (kern_level = printk_get_level(text)) != 0) {
+		switch (kern_level) {
+		case '0' ... '7':
+			c->loglevel = kern_level - '0';
+			break;
+		}
+
+		c->text_len -= 2;
+		memmove(text, text + 2, c->text_len);
+	}
+
+	return c->text_len;
+}
+
+/* printk existing buffer, reset buffer */
+void pr_cont_flush(pr_cont_t *c)
+{
+	char *text;
+
+	if (c->index == CONT_BUF_COUNT || !c->text_len)
+		return;
+
+	text = &cont_buf[c->index][0];
+
+	printk("%c%c%s\n", KERN_SOH_ASCII, c->loglevel + '0', text);
+
+	c->text_len = 0;
+}
+
+/* printk existing buffer, free buffer */
+void pr_cont_end(pr_cont_t *c)
+{
+	if (c->index == CONT_BUF_COUNT)
+		return;
+
+	pr_cont_flush(c);
+	put_cont_buf(c->index);
+	c->index = CONT_BUF_COUNT;
+}
+
+/* append to buffer */
+int pr_cont_append(pr_cont_t *c, const char *fmt, ...)
+{
+	va_list args_copy;
+	va_list args;
+	int text_len;
+	char *text;
+
+	if (c->index == CONT_BUF_COUNT) {
+		/* Fallback to printk parts individually. */
+		va_start(args, fmt);
+		text_len = vprintk(fmt, args);
+		va_end(args);
+		return text_len;
+	}
+
+	text = &cont_buf[c->index][0];
+
+	/* Try to append directly. */
+	va_start(args, fmt);
+	va_copy(args_copy, args);
+	text_len = vsnprintf(text + c->text_len, CONT_LINE_MAX - c->text_len, fmt, args);
+	va_end(args);
+
+	if (text_len >= CONT_LINE_MAX - c->text_len) {
+		/*
+		 * Not enough space remains. Restore the terminator that
+		 * vsnprintf() overwrote, flush the existing parts, and
+		 * start over with a clean buffer. The loglevel is preserved.
+		 */
+
+		text[c->text_len] = 0;
+		pr_cont_flush(c);
+
+		c->text_len = vscnprintf(text, CONT_LINE_MAX, fmt, args_copy);
+		va_end(args_copy);
+		return c->text_len;
+	}
+
+	va_end(args_copy);
+	c->text_len += text_len;
+	return text_len;
+}
-- 
2.20.1



* [RFC PATCH 2/5] sysrq: use pr_cont_t for cont messages
  2020-08-19 23:26 [RFC PATCH 0/5] printk: new log_cont implementation John Ogness
  2020-08-19 23:26 ` [RFC PATCH 1/5] printk: implement pr_cont_t John Ogness
@ 2020-08-19 23:26 ` John Ogness
  2020-08-20  1:03   ` Linus Torvalds
  2020-08-19 23:26 ` [RFC PATCH 3/5] workqueue: " John Ogness
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 24+ messages in thread
From: John Ogness @ 2020-08-19 23:26 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Petr Mladek, Sergey Senozhatsky, Steven Rostedt,
	Greg Kroah-Hartman, Thomas Gleixner, Sergey Senozhatsky,
	linux-kernel

Use the new pr_cont_t mechanism.

Signed-off-by: John Ogness <john.ogness@linutronix.de>
---
 drivers/tty/sysrq.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/tty/sysrq.c b/drivers/tty/sysrq.c
index a8e39b2cdd55..be2aa816c444 100644
--- a/drivers/tty/sysrq.c
+++ b/drivers/tty/sysrq.c
@@ -572,7 +572,9 @@ void __handle_sysrq(int key, bool check_mask)
 			console_loglevel = orig_log_level;
 		}
 	} else {
-		pr_info("HELP : ");
+		pr_cont_t c;
+
+		pr_cont_begin(&c, KERN_INFO "HELP : ");
 		/* Only print the help msg once per handler */
 		for (i = 0; i < ARRAY_SIZE(sysrq_key_table); i++) {
 			if (sysrq_key_table[i]) {
@@ -583,10 +585,10 @@ void __handle_sysrq(int key, bool check_mask)
 					;
 				if (j != i)
 					continue;
-				pr_cont("%s ", sysrq_key_table[i]->help_msg);
+				pr_cont_append(&c, "%s ", sysrq_key_table[i]->help_msg);
 			}
 		}
-		pr_cont("\n");
+		pr_cont_end(&c);
 		console_loglevel = orig_log_level;
 	}
 	rcu_read_unlock();
-- 
2.20.1



* [RFC PATCH 3/5] workqueue: use pr_cont_t for cont messages
  2020-08-19 23:26 [RFC PATCH 0/5] printk: new log_cont implementation John Ogness
  2020-08-19 23:26 ` [RFC PATCH 1/5] printk: implement pr_cont_t John Ogness
  2020-08-19 23:26 ` [RFC PATCH 2/5] sysrq: use pr_cont_t for cont messages John Ogness
@ 2020-08-19 23:26 ` John Ogness
  2020-08-19 23:26 ` [RFC PATCH 4/5] locking/selftest: " John Ogness
  2020-08-19 23:26 ` [RFC PATCH 5/5] lockdep: " John Ogness
  4 siblings, 0 replies; 24+ messages in thread
From: John Ogness @ 2020-08-19 23:26 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Petr Mladek, Sergey Senozhatsky, Steven Rostedt,
	Greg Kroah-Hartman, Thomas Gleixner, Sergey Senozhatsky,
	linux-kernel

Use the new pr_cont_t mechanism.

Signed-off-by: John Ogness <john.ogness@linutronix.de>
---
 kernel/workqueue.c | 71 +++++++++++++++++++++++++---------------------
 1 file changed, 39 insertions(+), 32 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index c41c3c17b86a..472370f02001 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -4645,32 +4645,34 @@ void print_worker_info(const char *log_lvl, struct task_struct *task)
 	copy_from_kernel_nofault(desc, worker->desc, sizeof(desc) - 1);
 
 	if (fn || name[0] || desc[0]) {
-		printk("%sWorkqueue: %s %ps", log_lvl, name, fn);
+		pr_cont_t c;
+
+		pr_cont_begin(&c, "%sWorkqueue: %s %ps", log_lvl, name, fn);
 		if (strcmp(name, desc))
-			pr_cont(" (%s)", desc);
-		pr_cont("\n");
+			pr_cont_append(&c, " (%s)", desc);
+		pr_cont_end(&c);
 	}
 }
 
-static void pr_cont_pool_info(struct worker_pool *pool)
+static void pr_cont_pool_info(pr_cont_t *c, struct worker_pool *pool)
 {
-	pr_cont(" cpus=%*pbl", nr_cpumask_bits, pool->attrs->cpumask);
+	pr_cont_append(c, " cpus=%*pbl", nr_cpumask_bits, pool->attrs->cpumask);
 	if (pool->node != NUMA_NO_NODE)
-		pr_cont(" node=%d", pool->node);
-	pr_cont(" flags=0x%x nice=%d", pool->flags, pool->attrs->nice);
+		pr_cont_append(c, " node=%d", pool->node);
+	pr_cont_append(c, " flags=0x%x nice=%d", pool->flags, pool->attrs->nice);
 }
 
-static void pr_cont_work(bool comma, struct work_struct *work)
+static void pr_cont_work(pr_cont_t *c, bool comma, struct work_struct *work)
 {
 	if (work->func == wq_barrier_func) {
 		struct wq_barrier *barr;
 
 		barr = container_of(work, struct wq_barrier, work);
 
-		pr_cont("%s BAR(%d)", comma ? "," : "",
+		pr_cont_append(c, "%s BAR(%d)", comma ? "," : "",
 			task_pid_nr(barr->task));
 	} else {
-		pr_cont("%s %ps", comma ? "," : "", work->func);
+		pr_cont_append(c, "%s %ps", comma ? "," : "", work->func);
 	}
 }
 
@@ -4680,14 +4682,16 @@ static void show_pwq(struct pool_workqueue *pwq)
 	struct work_struct *work;
 	struct worker *worker;
 	bool has_in_flight = false, has_pending = false;
+	pr_cont_t c;
 	int bkt;
 
-	pr_info("  pwq %d:", pool->id);
-	pr_cont_pool_info(pool);
+	pr_cont_begin(&c, KERN_INFO "  pwq %d:", pool->id);
+	pr_cont_pool_info(&c, pool);
 
-	pr_cont(" active=%d/%d refcnt=%d%s\n",
+	pr_cont_append(&c, " active=%d/%d refcnt=%d%s",
 		pwq->nr_active, pwq->max_active, pwq->refcnt,
 		!list_empty(&pwq->mayday_node) ? " MAYDAY" : "");
+	pr_cont_end(&c);
 
 	hash_for_each(pool->busy_hash, bkt, worker, hentry) {
 		if (worker->current_pwq == pwq) {
@@ -4698,20 +4702,20 @@ static void show_pwq(struct pool_workqueue *pwq)
 	if (has_in_flight) {
 		bool comma = false;
 
-		pr_info("    in-flight:");
+		pr_cont_begin(&c, KERN_INFO "    in-flight:");
 		hash_for_each(pool->busy_hash, bkt, worker, hentry) {
 			if (worker->current_pwq != pwq)
 				continue;
 
-			pr_cont("%s %d%s:%ps", comma ? "," : "",
+			pr_cont_append(&c, "%s %d%s:%ps", comma ? "," : "",
 				task_pid_nr(worker->task),
 				worker->rescue_wq ? "(RESCUER)" : "",
 				worker->current_func);
 			list_for_each_entry(work, &worker->scheduled, entry)
-				pr_cont_work(false, work);
+				pr_cont_work(&c, false, work);
 			comma = true;
 		}
-		pr_cont("\n");
+		pr_cont_end(&c);
 	}
 
 	list_for_each_entry(work, &pool->worklist, entry) {
@@ -4723,26 +4727,26 @@ static void show_pwq(struct pool_workqueue *pwq)
 	if (has_pending) {
 		bool comma = false;
 
-		pr_info("    pending:");
+		pr_cont_begin(&c, KERN_INFO "    pending:");
 		list_for_each_entry(work, &pool->worklist, entry) {
 			if (get_work_pwq(work) != pwq)
 				continue;
 
-			pr_cont_work(comma, work);
+			pr_cont_work(&c, comma, work);
 			comma = !(*work_data_bits(work) & WORK_STRUCT_LINKED);
 		}
-		pr_cont("\n");
+		pr_cont_end(&c);
 	}
 
 	if (!list_empty(&pwq->delayed_works)) {
 		bool comma = false;
 
-		pr_info("    delayed:");
+		pr_cont_begin(&c, KERN_INFO "    delayed:");
 		list_for_each_entry(work, &pwq->delayed_works, entry) {
-			pr_cont_work(comma, work);
+			pr_cont_work(&c, comma, work);
 			comma = !(*work_data_bits(work) & WORK_STRUCT_LINKED);
 		}
-		pr_cont("\n");
+		pr_cont_end(&c);
 	}
 }
 
@@ -4757,6 +4761,7 @@ void show_workqueue_state(void)
 	struct workqueue_struct *wq;
 	struct worker_pool *pool;
 	unsigned long flags;
+	pr_cont_t c;
 	int pi;
 
 	rcu_read_lock();
@@ -4800,20 +4805,20 @@ void show_workqueue_state(void)
 		if (pool->nr_workers == pool->nr_idle)
 			goto next_pool;
 
-		pr_info("pool %d:", pool->id);
-		pr_cont_pool_info(pool);
-		pr_cont(" hung=%us workers=%d",
+		pr_cont_begin(&c, KERN_INFO "pool %d:", pool->id);
+		pr_cont_pool_info(&c, pool);
+		pr_cont_append(&c, " hung=%us workers=%d",
 			jiffies_to_msecs(jiffies - pool->watchdog_ts) / 1000,
 			pool->nr_workers);
 		if (pool->manager)
-			pr_cont(" manager: %d",
+			pr_cont_append(&c, " manager: %d",
 				task_pid_nr(pool->manager->task));
 		list_for_each_entry(worker, &pool->idle_list, entry) {
-			pr_cont(" %s%d", first ? "idle: " : "",
+			pr_cont_append(&c, " %s%d", first ? "idle: " : "",
 				task_pid_nr(worker->task));
 			first = false;
 		}
-		pr_cont("\n");
+		pr_cont_end(&c);
 	next_pool:
 		raw_spin_unlock_irqrestore(&pool->lock, flags);
 		/*
@@ -5752,6 +5757,7 @@ static void wq_watchdog_timer_fn(struct timer_list *unused)
 	unsigned long thresh = READ_ONCE(wq_watchdog_thresh) * HZ;
 	bool lockup_detected = false;
 	struct worker_pool *pool;
+	pr_cont_t c;
 	int pi;
 
 	if (!thresh)
@@ -5785,10 +5791,11 @@ static void wq_watchdog_timer_fn(struct timer_list *unused)
 		/* did we stall? */
 		if (time_after(jiffies, ts + thresh)) {
 			lockup_detected = true;
-			pr_emerg("BUG: workqueue lockup - pool");
-			pr_cont_pool_info(pool);
-			pr_cont(" stuck for %us!\n",
+			pr_cont_begin(&c, KERN_EMERG "BUG: workqueue lockup - pool");
+			pr_cont_pool_info(&c, pool);
+			pr_cont_append(&c, " stuck for %us!",
 				jiffies_to_msecs(jiffies - pool_ts) / 1000);
+			pr_cont_end(&c);
 		}
 	}
 
-- 
2.20.1



* [RFC PATCH 4/5] locking/selftest: use pr_cont_t for cont messages
  2020-08-19 23:26 [RFC PATCH 0/5] printk: new log_cont implementation John Ogness
                   ` (2 preceding siblings ...)
  2020-08-19 23:26 ` [RFC PATCH 3/5] workqueue: " John Ogness
@ 2020-08-19 23:26 ` John Ogness
  2020-08-19 23:26 ` [RFC PATCH 5/5] lockdep: " John Ogness
  4 siblings, 0 replies; 24+ messages in thread
From: John Ogness @ 2020-08-19 23:26 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Petr Mladek, Sergey Senozhatsky, Steven Rostedt,
	Greg Kroah-Hartman, Thomas Gleixner, Sergey Senozhatsky,
	linux-kernel

Use the new pr_cont_t mechanism.

Signed-off-by: John Ogness <john.ogness@linutronix.de>
---
 lib/locking-selftest.c | 85 ++++++++++++++++++++++--------------------
 1 file changed, 44 insertions(+), 41 deletions(-)

diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
index 14f44f59e733..3131c517ae62 100644
--- a/lib/locking-selftest.c
+++ b/lib/locking-selftest.c
@@ -52,6 +52,7 @@ __setup("debug_locks_verbose=", setup_debug_locks_verbose);
 
 static struct ww_acquire_ctx t, t2;
 static struct ww_mutex o, o2, o3;
+static pr_cont_t c;
 
 /*
  * Normal standalone locks, for the circular and irq-context
@@ -1147,22 +1148,24 @@ static void dotest(void (*testcase_fn)(void), int expected, int lockclass_mask)
 #ifndef CONFIG_PROVE_LOCKING
 	if (expected == FAILURE && debug_locks) {
 		expected_testcase_failures++;
-		pr_cont("failed|");
+		pr_cont_append(&c, "failed|");
 	}
 	else
 #endif
 	if (debug_locks != expected) {
 		unexpected_testcase_failures++;
-		pr_cont("FAILED|");
+		pr_cont_append(&c, "FAILED|");
 	} else {
 		testcase_successes++;
-		pr_cont("  ok  |");
+		pr_cont_append(&c, "  ok  |");
 	}
 	testcase_total++;
 
-	if (debug_locks_verbose)
-		pr_cont(" lockclass mask: %x, debug_locks: %d, expected: %d\n",
+	if (debug_locks_verbose) {
+		pr_cont_append(&c, " lockclass mask: %x, debug_locks: %d, expected: %d",
 			lockclass_mask, debug_locks, expected);
+		pr_cont_flush(&c);
+	}
 	/*
 	 * Some tests (e.g. double-unlock) might corrupt the preemption
 	 * count, so restore it:
@@ -1186,32 +1189,32 @@ static void dotest(void (*testcase_fn)(void), int expected, int lockclass_mask)
 
 static inline void print_testname(const char *testname)
 {
-	printk("%33s:", testname);
+	pr_cont_begin(&c, "%33s:", testname);
 }
 
 #define DO_TESTCASE_1(desc, name, nr)				\
 	print_testname(desc"/"#nr);				\
 	dotest(name##_##nr, SUCCESS, LOCKTYPE_RWLOCK);		\
-	pr_cont("\n");
+	pr_cont_end(&c);
 
 #define DO_TESTCASE_1B(desc, name, nr)				\
 	print_testname(desc"/"#nr);				\
 	dotest(name##_##nr, FAILURE, LOCKTYPE_RWLOCK);		\
-	pr_cont("\n");
+	pr_cont_end(&c);
 
 #define DO_TESTCASE_3(desc, name, nr)				\
 	print_testname(desc"/"#nr);				\
 	dotest(name##_spin_##nr, FAILURE, LOCKTYPE_SPIN);	\
 	dotest(name##_wlock_##nr, FAILURE, LOCKTYPE_RWLOCK);	\
 	dotest(name##_rlock_##nr, SUCCESS, LOCKTYPE_RWLOCK);	\
-	pr_cont("\n");
+	pr_cont_end(&c);
 
 #define DO_TESTCASE_3RW(desc, name, nr)				\
 	print_testname(desc"/"#nr);				\
 	dotest(name##_spin_##nr, FAILURE, LOCKTYPE_SPIN|LOCKTYPE_RWLOCK);\
 	dotest(name##_wlock_##nr, FAILURE, LOCKTYPE_RWLOCK);	\
 	dotest(name##_rlock_##nr, SUCCESS, LOCKTYPE_RWLOCK);	\
-	pr_cont("\n");
+	pr_cont_end(&c);
 
 #define DO_TESTCASE_6(desc, name)				\
 	print_testname(desc);					\
@@ -1222,7 +1225,7 @@ static inline void print_testname(const char *testname)
 	dotest(name##_wsem, FAILURE, LOCKTYPE_RWSEM);		\
 	dotest(name##_rsem, FAILURE, LOCKTYPE_RWSEM);		\
 	dotest_rt(name##_rtmutex, FAILURE, LOCKTYPE_RTMUTEX);	\
-	pr_cont("\n");
+	pr_cont_end(&c);
 
 #define DO_TESTCASE_6_SUCCESS(desc, name)			\
 	print_testname(desc);					\
@@ -1233,7 +1236,7 @@ static inline void print_testname(const char *testname)
 	dotest(name##_wsem, SUCCESS, LOCKTYPE_RWSEM);		\
 	dotest(name##_rsem, SUCCESS, LOCKTYPE_RWSEM);		\
 	dotest_rt(name##_rtmutex, SUCCESS, LOCKTYPE_RTMUTEX);	\
-	pr_cont("\n");
+	pr_cont_end(&c);
 
 /*
  * 'read' variant: rlocks must not trigger.
@@ -1247,7 +1250,7 @@ static inline void print_testname(const char *testname)
 	dotest(name##_wsem, FAILURE, LOCKTYPE_RWSEM);		\
 	dotest(name##_rsem, FAILURE, LOCKTYPE_RWSEM);		\
 	dotest_rt(name##_rtmutex, FAILURE, LOCKTYPE_RTMUTEX);	\
-	pr_cont("\n");
+	pr_cont_end(&c);
 
 #define DO_TESTCASE_2I(desc, name, nr)				\
 	DO_TESTCASE_1("hard-"desc, name##_hard, nr);		\
@@ -1900,25 +1903,25 @@ static void ww_tests(void)
 	dotest(ww_test_fail_acquire, SUCCESS, LOCKTYPE_WW);
 	dotest(ww_test_normal, SUCCESS, LOCKTYPE_WW);
 	dotest(ww_test_unneeded_slow, FAILURE, LOCKTYPE_WW);
-	pr_cont("\n");
+	pr_cont_end(&c);
 
 	print_testname("ww contexts mixing");
 	dotest(ww_test_two_contexts, FAILURE, LOCKTYPE_WW);
 	dotest(ww_test_diff_class, FAILURE, LOCKTYPE_WW);
-	pr_cont("\n");
+	pr_cont_end(&c);
 
 	print_testname("finishing ww context");
 	dotest(ww_test_context_done_twice, FAILURE, LOCKTYPE_WW);
 	dotest(ww_test_context_unlock_twice, FAILURE, LOCKTYPE_WW);
 	dotest(ww_test_context_fini_early, FAILURE, LOCKTYPE_WW);
 	dotest(ww_test_context_lock_after_done, FAILURE, LOCKTYPE_WW);
-	pr_cont("\n");
+	pr_cont_end(&c);
 
 	print_testname("locking mismatches");
 	dotest(ww_test_object_unlock_twice, FAILURE, LOCKTYPE_WW);
 	dotest(ww_test_object_lock_unbalanced, FAILURE, LOCKTYPE_WW);
 	dotest(ww_test_object_lock_stale_context, FAILURE, LOCKTYPE_WW);
-	pr_cont("\n");
+	pr_cont_end(&c);
 
 	print_testname("EDEADLK handling");
 	dotest(ww_test_edeadlk_normal, SUCCESS, LOCKTYPE_WW);
@@ -1931,11 +1934,11 @@ static void ww_tests(void)
 	dotest(ww_test_edeadlk_acquire_more_edeadlk_slow, FAILURE, LOCKTYPE_WW);
 	dotest(ww_test_edeadlk_acquire_wrong, FAILURE, LOCKTYPE_WW);
 	dotest(ww_test_edeadlk_acquire_wrong_slow, FAILURE, LOCKTYPE_WW);
-	pr_cont("\n");
+	pr_cont_end(&c);
 
 	print_testname("spinlock nest unlocked");
 	dotest(ww_test_spin_nest_unlocked, FAILURE, LOCKTYPE_WW);
-	pr_cont("\n");
+	pr_cont_end(&c);
 
 	printk("  -----------------------------------------------------\n");
 	printk("                                 |block | try  |context|\n");
@@ -1945,25 +1948,25 @@ static void ww_tests(void)
 	dotest(ww_test_context_block, FAILURE, LOCKTYPE_WW);
 	dotest(ww_test_context_try, SUCCESS, LOCKTYPE_WW);
 	dotest(ww_test_context_context, SUCCESS, LOCKTYPE_WW);
-	pr_cont("\n");
+	pr_cont_end(&c);
 
 	print_testname("try");
 	dotest(ww_test_try_block, FAILURE, LOCKTYPE_WW);
 	dotest(ww_test_try_try, SUCCESS, LOCKTYPE_WW);
 	dotest(ww_test_try_context, FAILURE, LOCKTYPE_WW);
-	pr_cont("\n");
+	pr_cont_end(&c);
 
 	print_testname("block");
 	dotest(ww_test_block_block, FAILURE, LOCKTYPE_WW);
 	dotest(ww_test_block_try, SUCCESS, LOCKTYPE_WW);
 	dotest(ww_test_block_context, FAILURE, LOCKTYPE_WW);
-	pr_cont("\n");
+	pr_cont_end(&c);
 
 	print_testname("spinlock");
 	dotest(ww_test_spin_block, FAILURE, LOCKTYPE_WW);
 	dotest(ww_test_spin_try, SUCCESS, LOCKTYPE_WW);
 	dotest(ww_test_spin_context, FAILURE, LOCKTYPE_WW);
-	pr_cont("\n");
+	pr_cont_end(&c);
 }
 
 void locking_selftest(void)
@@ -2003,35 +2006,35 @@ void locking_selftest(void)
 
 	printk("  --------------------------------------------------------------------------\n");
 	print_testname("recursive read-lock");
-	pr_cont("             |");
+	pr_cont_append(&c, "             |");
 	dotest(rlock_AA1, SUCCESS, LOCKTYPE_RWLOCK);
-	pr_cont("             |");
+	pr_cont_append(&c, "             |");
 	dotest(rsem_AA1, FAILURE, LOCKTYPE_RWSEM);
-	pr_cont("\n");
+	pr_cont_end(&c);
 
 	print_testname("recursive read-lock #2");
-	pr_cont("             |");
+	pr_cont_append(&c, "             |");
 	dotest(rlock_AA1B, SUCCESS, LOCKTYPE_RWLOCK);
-	pr_cont("             |");
+	pr_cont_append(&c, "             |");
 	dotest(rsem_AA1B, FAILURE, LOCKTYPE_RWSEM);
-	pr_cont("\n");
+	pr_cont_end(&c);
 
 	print_testname("mixed read-write-lock");
-	pr_cont("             |");
+	pr_cont_append(&c, "             |");
 	dotest(rlock_AA2, FAILURE, LOCKTYPE_RWLOCK);
-	pr_cont("             |");
+	pr_cont_append(&c, "             |");
 	dotest(rsem_AA2, FAILURE, LOCKTYPE_RWSEM);
-	pr_cont("\n");
+	pr_cont_end(&c);
 
 	print_testname("mixed write-read-lock");
-	pr_cont("             |");
+	pr_cont_append(&c, "             |");
 	dotest(rlock_AA3, FAILURE, LOCKTYPE_RWLOCK);
-	pr_cont("             |");
+	pr_cont_append(&c, "             |");
 	dotest(rsem_AA3, FAILURE, LOCKTYPE_RWSEM);
-	pr_cont("\n");
+	pr_cont_end(&c);
 
 	print_testname("mixed read-lock/lock-write ABBA");
-	pr_cont("             |");
+	pr_cont_append(&c, "             |");
 	dotest(rlock_ABBA1, FAILURE, LOCKTYPE_RWLOCK);
 #ifdef CONFIG_PROVE_LOCKING
 	/*
@@ -2041,19 +2044,19 @@ void locking_selftest(void)
 	unexpected_testcase_failures--;
 #endif
 
-	pr_cont("             |");
+	pr_cont_append(&c, "             |");
 	dotest(rwsem_ABBA1, FAILURE, LOCKTYPE_RWSEM);
 
 	print_testname("mixed read-lock/lock-read ABBA");
-	pr_cont("             |");
+	pr_cont_append(&c, "             |");
 	dotest(rlock_ABBA2, SUCCESS, LOCKTYPE_RWLOCK);
-	pr_cont("             |");
+	pr_cont_append(&c, "             |");
 	dotest(rwsem_ABBA2, FAILURE, LOCKTYPE_RWSEM);
 
 	print_testname("mixed write-lock/lock-write ABBA");
-	pr_cont("             |");
+	pr_cont_append(&c, "             |");
 	dotest(rlock_ABBA3, FAILURE, LOCKTYPE_RWLOCK);
-	pr_cont("             |");
+	pr_cont_append(&c, "             |");
 	dotest(rwsem_ABBA3, FAILURE, LOCKTYPE_RWSEM);
 
 	printk("  --------------------------------------------------------------------------\n");
-- 
2.20.1



* [RFC PATCH 5/5] lockdep: use pr_cont_t for cont messages
  2020-08-19 23:26 [RFC PATCH 0/5] printk: new log_cont implementation John Ogness
                   ` (3 preceding siblings ...)
  2020-08-19 23:26 ` [RFC PATCH 4/5] locking/selftest: " John Ogness
@ 2020-08-19 23:26 ` John Ogness
  4 siblings, 0 replies; 24+ messages in thread
From: John Ogness @ 2020-08-19 23:26 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Petr Mladek, Sergey Senozhatsky, Steven Rostedt,
	Greg Kroah-Hartman, Thomas Gleixner, Sergey Senozhatsky,
	linux-kernel

Use the new pr_cont_t mechanism.

Signed-off-by: John Ogness <john.ogness@linutronix.de>
---
 kernel/locking/lockdep.c | 463 +++++++++++++++++++++++----------------
 1 file changed, 275 insertions(+), 188 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 2fad21d345b0..3210f244cb94 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -638,7 +638,7 @@ void get_usage_chars(struct lock_class *class, char usage[LOCK_USAGE_CHARS])
 	usage[i] = '\0';
 }
 
-static void __print_lock_name(struct lock_class *class)
+static void __print_lock_name(pr_cont_t *c, struct lock_class *class)
 {
 	char str[KSYM_NAME_LEN];
 	const char *name;
@@ -646,30 +646,30 @@ static void __print_lock_name(struct lock_class *class)
 	name = class->name;
 	if (!name) {
 		name = __get_key_name(class->key, str);
-		printk(KERN_CONT "%s", name);
+		pr_cont_append(c, "%s", name);
 	} else {
-		printk(KERN_CONT "%s", name);
+		pr_cont_append(c, "%s", name);
 		if (class->name_version > 1)
-			printk(KERN_CONT "#%d", class->name_version);
+			pr_cont_append(c, "#%d", class->name_version);
 		if (class->subclass)
-			printk(KERN_CONT "/%d", class->subclass);
+			pr_cont_append(c, "/%d", class->subclass);
 	}
 }
 
-static void print_lock_name(struct lock_class *class)
+static void _print_lock_name(pr_cont_t *c, struct lock_class *class)
 {
 	char usage[LOCK_USAGE_CHARS];
 
 	get_usage_chars(class, usage);
 
-	printk(KERN_CONT " (");
-	__print_lock_name(class);
-	printk(KERN_CONT "){%s}-{%hd:%hd}", usage,
+	pr_cont_append(c, " (");
+	__print_lock_name(c, class);
+	pr_cont_append(c, "){%s}-{%hd:%hd}", usage,
 			class->wait_type_outer ?: class->wait_type_inner,
 			class->wait_type_inner);
 }
 
-static void print_lockdep_cache(struct lockdep_map *lock)
+static void _print_lockdep_cache(pr_cont_t *c, struct lockdep_map *lock)
 {
 	const char *name;
 	char str[KSYM_NAME_LEN];
@@ -678,10 +678,10 @@ static void print_lockdep_cache(struct lockdep_map *lock)
 	if (!name)
 		name = __get_key_name(lock->key->subkeys, str);
 
-	printk(KERN_CONT "%s", name);
+	pr_cont_append(c, "%s", name);
 }
 
-static void print_lock(struct held_lock *hlock)
+static void _print_lock(pr_cont_t *c, struct held_lock *hlock)
 {
 	/*
 	 * We can be called locklessly through debug_show_all_locks() so be
@@ -696,16 +696,16 @@ static void print_lock(struct held_lock *hlock)
 	struct lock_class *lock = hlock_class(hlock);
 
 	if (!lock) {
-		printk(KERN_CONT "<RELEASED>\n");
+		pr_cont_append(c, "<RELEASED>");
 		return;
 	}
 
-	printk(KERN_CONT "%px", hlock->instance);
-	print_lock_name(lock);
-	printk(KERN_CONT ", at: %pS\n", (void *)hlock->acquire_ip);
+	pr_cont_append(c, "%px", hlock->instance);
+	_print_lock_name(c, lock);
+	pr_cont_append(c, ", at: %pS", (void *)hlock->acquire_ip);
 }
 
-static void lockdep_print_held_locks(struct task_struct *p)
+static void lockdep_print_held_locks(pr_cont_t *c, struct task_struct *p)
 {
 	int i, depth = READ_ONCE(p->lockdep_depth);
 
@@ -721,8 +721,9 @@ static void lockdep_print_held_locks(struct task_struct *p)
 	if (p->state == TASK_RUNNING && p != current)
 		return;
 	for (i = 0; i < depth; i++) {
-		printk(" #%d: ", i);
-		print_lock(p->held_locks + i);
+		pr_cont_begin(c, " #%d: ", i);
+		_print_lock(c, p->held_locks + i);
+		pr_cont_end(c);
 	}
 }
 
@@ -937,7 +938,7 @@ static bool class_lock_list_valid(struct lock_class *c, struct list_head *h)
 
 	list_for_each_entry(e, h, entry) {
 		if (e->links_to != c) {
-			printk(KERN_INFO "class %s: mismatch for lock entry %ld; class %s <> %s",
+			printk(KERN_INFO "class %s: mismatch for lock entry %ld; class %s <> %s\n",
 			       c->name ? : "(?)",
 			       (unsigned long)(e - list_entries),
 			       e->links_to && e->links_to->name ?
@@ -1195,6 +1196,7 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 	struct lockdep_subclass_key *key;
 	struct hlist_head *hash_head;
 	struct lock_class *class;
+	pr_cont_t c;
 
 	DEBUG_LOCKS_WARN_ON(!irqs_disabled());
 
@@ -1263,10 +1265,10 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 	if (verbose(class)) {
 		graph_unlock();
 
-		printk("\nnew class %px: %s", class->key, class->name);
+		pr_cont_begin(&c, "\nnew class %px: %s", class->key, class->name);
 		if (class->name_version > 1)
-			printk(KERN_CONT "#%d", class->name_version);
-		printk(KERN_CONT "\n");
+			pr_cont_append(&c, "#%d", class->name_version);
+		pr_cont_end(&c);
 		dump_stack();
 
 		if (!graph_lock()) {
@@ -1567,18 +1569,20 @@ static void print_lock_trace(const struct lock_trace *trace,
  * has been detected):
  */
 static noinline void
-print_circular_bug_entry(struct lock_list *target, int depth)
+print_circular_bug_entry(pr_cont_t *c, struct lock_list *target, int depth)
 {
 	if (debug_locks_silent)
 		return;
-	printk("\n-> #%u", depth);
-	print_lock_name(target->class);
-	printk(KERN_CONT ":\n");
+	pr_cont_begin(c, "\n-> #%u", depth);
+	_print_lock_name(c, target->class);
+	pr_cont_append(c, ":");
+	pr_cont_end(c);
 	print_lock_trace(target->trace, 6);
 }
 
 static void
-print_circular_lock_scenario(struct held_lock *src,
+print_circular_lock_scenario(pr_cont_t *c,
+			     struct held_lock *src,
 			     struct held_lock *tgt,
 			     struct lock_list *prt)
 {
@@ -1600,30 +1604,35 @@ print_circular_lock_scenario(struct held_lock *src,
 	 * from the safe_class lock to the unsafe_class lock.
 	 */
 	if (parent != source) {
-		printk("Chain exists of:\n  ");
-		__print_lock_name(source);
-		printk(KERN_CONT " --> ");
-		__print_lock_name(parent);
-		printk(KERN_CONT " --> ");
-		__print_lock_name(target);
-		printk(KERN_CONT "\n\n");
+		pr_cont_begin(c, "Chain exists of:\n  ");
+		__print_lock_name(c, source);
+		pr_cont_append(c, " --> ");
+		__print_lock_name(c, parent);
+		pr_cont_append(c, " --> ");
+		__print_lock_name(c, target);
+		pr_cont_append(c, "\n");
+		pr_cont_end(c);
 	}
 
 	printk(" Possible unsafe locking scenario:\n\n");
 	printk("       CPU0                    CPU1\n");
 	printk("       ----                    ----\n");
-	printk("  lock(");
-	__print_lock_name(target);
-	printk(KERN_CONT ");\n");
-	printk("                               lock(");
-	__print_lock_name(parent);
-	printk(KERN_CONT ");\n");
-	printk("                               lock(");
-	__print_lock_name(target);
-	printk(KERN_CONT ");\n");
-	printk("  lock(");
-	__print_lock_name(source);
-	printk(KERN_CONT ");\n");
+	pr_cont_begin(c, "  lock(");
+	__print_lock_name(c, target);
+	pr_cont_append(c, ");");
+	pr_cont_end(c);
+	pr_cont_begin(c, "                               lock(");
+	__print_lock_name(c, parent);
+	pr_cont_append(c, ");");
+	pr_cont_end(c);
+	pr_cont_begin(c, "                               lock(");
+	__print_lock_name(c, target);
+	pr_cont_append(c, ");");
+	pr_cont_end(c);
+	pr_cont_begin(c, "  lock(");
+	__print_lock_name(c, source);
+	pr_cont_append(c, ");");
+	pr_cont_end(c);
 	printk("\n *** DEADLOCK ***\n\n");
 }
 
@@ -1632,7 +1641,7 @@ print_circular_lock_scenario(struct held_lock *src,
  * header first:
  */
 static noinline void
-print_circular_bug_header(struct lock_list *entry, unsigned int depth,
+print_circular_bug_header(pr_cont_t *c, struct lock_list *entry, unsigned int depth,
 			struct held_lock *check_src,
 			struct held_lock *check_tgt)
 {
@@ -1648,15 +1657,19 @@ print_circular_bug_header(struct lock_list *entry, unsigned int depth,
 	pr_warn("------------------------------------------------------\n");
 	pr_warn("%s/%d is trying to acquire lock:\n",
 		curr->comm, task_pid_nr(curr));
-	print_lock(check_src);
+	pr_cont_begin(c, KERN_WARNING);
+	_print_lock(c, check_src);
+	pr_cont_end(c);
 
 	pr_warn("\nbut task is already holding lock:\n");
 
-	print_lock(check_tgt);
+	pr_cont_begin(c, KERN_WARNING);
+	_print_lock(c, check_tgt);
+	pr_cont_end(c);
 	pr_warn("\nwhich lock already depends on the new lock.\n\n");
 	pr_warn("\nthe existing dependency chain (in reverse order) is:\n");
 
-	print_circular_bug_entry(entry, depth);
+	print_circular_bug_entry(c, entry, depth);
 }
 
 static inline int class_equal(struct lock_list *entry, void *data)
@@ -1664,7 +1677,7 @@ static inline int class_equal(struct lock_list *entry, void *data)
 	return entry->class == data;
 }
 
-static noinline void print_circular_bug(struct lock_list *this,
+static noinline void print_circular_bug(pr_cont_t *c, struct lock_list *this,
 					struct lock_list *target,
 					struct held_lock *check_src,
 					struct held_lock *check_tgt)
@@ -1683,21 +1696,21 @@ static noinline void print_circular_bug(struct lock_list *this,
 
 	depth = get_lock_depth(target);
 
-	print_circular_bug_header(target, depth, check_src, check_tgt);
+	print_circular_bug_header(c, target, depth, check_src, check_tgt);
 
 	parent = get_lock_parent(target);
 	first_parent = parent;
 
 	while (parent) {
-		print_circular_bug_entry(parent, --depth);
+		print_circular_bug_entry(c, parent, --depth);
 		parent = get_lock_parent(parent);
 	}
 
 	printk("\nother info that might help us debug this:\n\n");
-	print_circular_lock_scenario(check_src, check_tgt,
+	print_circular_lock_scenario(c, check_src, check_tgt,
 				     first_parent);
 
-	lockdep_print_held_locks(curr);
+	lockdep_print_held_locks(c, curr);
 
 	printk("\nstack backtrace:\n");
 	dump_stack();
@@ -1803,6 +1816,7 @@ static noinline int
 check_noncircular(struct held_lock *src, struct held_lock *target,
 		  struct lock_trace **const trace)
 {
+	pr_cont_t c;
 	int ret;
 	struct lock_list *target_entry;
 	struct lock_list src_entry = {
@@ -1824,7 +1838,7 @@ check_noncircular(struct held_lock *src, struct held_lock *target,
 			*trace = save_trace();
 		}
 
-		print_circular_bug(&src_entry, target_entry, src, target);
+		print_circular_bug(&c, &src_entry, target_entry, src, target);
 	}
 
 	return ret;
@@ -1928,23 +1942,25 @@ find_usage_backwards(struct lock_list *root, unsigned long usage_mask,
 	return result;
 }
 
-static void print_lock_class_header(struct lock_class *class, int depth)
+static void print_lock_class_header(pr_cont_t *c, struct lock_class *class, int depth)
 {
 	int bit;
 
-	printk("%*s->", depth, "");
-	print_lock_name(class);
+	pr_cont_begin(c, "%*s->", depth, "");
+	_print_lock_name(c, class);
 #ifdef CONFIG_DEBUG_LOCKDEP
-	printk(KERN_CONT " ops: %lu", debug_class_ops_read(class));
+	pr_cont_append(c, " ops: %lu", debug_class_ops_read(class));
 #endif
-	printk(KERN_CONT " {\n");
+	pr_cont_append(c, " {");
+	pr_cont_end(c);
 
 	for (bit = 0; bit < LOCK_USAGE_STATES; bit++) {
 		if (class->usage_mask & (1 << bit)) {
 			int len = depth;
 
-			len += printk("%*s   %s", depth, "", usage_str[bit]);
-			len += printk(KERN_CONT " at:\n");
+			len += pr_cont_begin(c, "%*s   %s", depth, "", usage_str[bit]);
+			len += pr_cont_append(c, " at:");
+			pr_cont_end(c);
 			print_lock_trace(class->usage_traces[bit], len);
 		}
 	}
@@ -1958,7 +1974,7 @@ static void print_lock_class_header(struct lock_class *class, int depth)
  * printk the shortest lock dependencies from @start to @end in reverse order:
  */
 static void __used
-print_shortest_lock_dependencies(struct lock_list *leaf,
+print_shortest_lock_dependencies(pr_cont_t *c, struct lock_list *leaf,
 				 struct lock_list *root)
 {
 	struct lock_list *entry = leaf;
@@ -1968,7 +1984,7 @@ print_shortest_lock_dependencies(struct lock_list *leaf,
 	depth = get_lock_depth(leaf);
 
 	do {
-		print_lock_class_header(entry->class, depth);
+		print_lock_class_header(c, entry->class, depth);
 		printk("%*s ... acquired at:\n", depth, "");
 		print_lock_trace(entry->trace, 2);
 		printk("\n");
@@ -1984,7 +2000,8 @@ print_shortest_lock_dependencies(struct lock_list *leaf,
 }
 
 static void
-print_irq_lock_scenario(struct lock_list *safe_entry,
+print_irq_lock_scenario(pr_cont_t *c,
+			struct lock_list *safe_entry,
 			struct lock_list *unsafe_entry,
 			struct lock_class *prev_class,
 			struct lock_class *next_class)
@@ -2010,37 +2027,43 @@ print_irq_lock_scenario(struct lock_list *safe_entry,
 	 * from the safe_class lock to the unsafe_class lock.
 	 */
 	if (middle_class != unsafe_class) {
-		printk("Chain exists of:\n  ");
-		__print_lock_name(safe_class);
-		printk(KERN_CONT " --> ");
-		__print_lock_name(middle_class);
-		printk(KERN_CONT " --> ");
-		__print_lock_name(unsafe_class);
-		printk(KERN_CONT "\n\n");
+		pr_cont_begin(c, "Chain exists of:\n  ");
+		__print_lock_name(c, safe_class);
+		pr_cont_append(c, " --> ");
+		__print_lock_name(c, middle_class);
+		pr_cont_append(c, " --> ");
+		__print_lock_name(c, unsafe_class);
+		pr_cont_append(c, "\n");
+		pr_cont_end(c);
 	}
 
 	printk(" Possible interrupt unsafe locking scenario:\n\n");
 	printk("       CPU0                    CPU1\n");
 	printk("       ----                    ----\n");
-	printk("  lock(");
-	__print_lock_name(unsafe_class);
-	printk(KERN_CONT ");\n");
+	pr_cont_begin(c, "  lock(");
+	__print_lock_name(c, unsafe_class);
+	pr_cont_append(c, ");");
+	pr_cont_end(c);
 	printk("                               local_irq_disable();\n");
-	printk("                               lock(");
-	__print_lock_name(safe_class);
-	printk(KERN_CONT ");\n");
-	printk("                               lock(");
-	__print_lock_name(middle_class);
-	printk(KERN_CONT ");\n");
+	pr_cont_begin(c, "                               lock(");
+	__print_lock_name(c, safe_class);
+	pr_cont_append(c, ");");
+	pr_cont_end(c);
+	pr_cont_begin(c, "                               lock(");
+	__print_lock_name(c, middle_class);
+	pr_cont_append(c, ");");
+	pr_cont_end(c);
 	printk("  <Interrupt>\n");
-	printk("    lock(");
-	__print_lock_name(safe_class);
-	printk(KERN_CONT ");\n");
+	pr_cont_begin(c, "    lock(");
+	__print_lock_name(c, safe_class);
+	pr_cont_append(c, ");");
+	pr_cont_end(c);
 	printk("\n *** DEADLOCK ***\n\n");
 }
 
 static void
-print_bad_irq_dependency(struct task_struct *curr,
+print_bad_irq_dependency(pr_cont_t *c,
+			 struct task_struct *curr,
 			 struct lock_list *prev_root,
 			 struct lock_list *next_root,
 			 struct lock_list *backwards_entry,
@@ -2066,48 +2089,57 @@ print_bad_irq_dependency(struct task_struct *curr,
 		curr->softirq_context, softirq_count() >> SOFTIRQ_SHIFT,
 		lockdep_hardirqs_enabled(),
 		curr->softirqs_enabled);
-	print_lock(next);
+	pr_cont_begin(c, KERN_WARNING);
+	_print_lock(c, next);
+	pr_cont_end(c);
 
 	pr_warn("\nand this task is already holding:\n");
-	print_lock(prev);
+	pr_cont_begin(c, KERN_WARNING);
+	_print_lock(c, prev);
+	pr_cont_end(c);
 	pr_warn("which would create a new lock dependency:\n");
-	print_lock_name(hlock_class(prev));
-	pr_cont(" ->");
-	print_lock_name(hlock_class(next));
-	pr_cont("\n");
+	pr_cont_begin(c, KERN_WARNING);
+	_print_lock_name(c, hlock_class(prev));
+	pr_cont_append(c, " ->");
+	_print_lock_name(c, hlock_class(next));
+	pr_cont_end(c);
 
 	pr_warn("\nbut this new dependency connects a %s-irq-safe lock:\n",
 		irqclass);
-	print_lock_name(backwards_entry->class);
+	pr_cont_begin(c, KERN_WARNING);
+	_print_lock_name(c, backwards_entry->class);
+	pr_cont_end(c);
 	pr_warn("\n... which became %s-irq-safe at:\n", irqclass);
 
 	print_lock_trace(backwards_entry->class->usage_traces[bit1], 1);
 
 	pr_warn("\nto a %s-irq-unsafe lock:\n", irqclass);
-	print_lock_name(forwards_entry->class);
+	pr_cont_begin(c, KERN_WARNING);
+	_print_lock_name(c, forwards_entry->class);
+	pr_cont_end(c);
 	pr_warn("\n... which became %s-irq-unsafe at:\n", irqclass);
-	pr_warn("...");
+	pr_warn("...\n");
 
 	print_lock_trace(forwards_entry->class->usage_traces[bit2], 1);
 
 	pr_warn("\nother info that might help us debug this:\n\n");
-	print_irq_lock_scenario(backwards_entry, forwards_entry,
+	print_irq_lock_scenario(c, backwards_entry, forwards_entry,
 				hlock_class(prev), hlock_class(next));
 
-	lockdep_print_held_locks(curr);
+	lockdep_print_held_locks(c, curr);
 
 	pr_warn("\nthe dependencies between %s-irq-safe lock and the holding lock:\n", irqclass);
 	prev_root->trace = save_trace();
 	if (!prev_root->trace)
 		return;
-	print_shortest_lock_dependencies(backwards_entry, prev_root);
+	print_shortest_lock_dependencies(c, backwards_entry, prev_root);
 
 	pr_warn("\nthe dependencies between the lock to be acquired");
 	pr_warn(" and %s-irq-unsafe lock:\n", irqclass);
 	next_root->trace = save_trace();
 	if (!next_root->trace)
 		return;
-	print_shortest_lock_dependencies(forwards_entry, next_root);
+	print_shortest_lock_dependencies(c, forwards_entry, next_root);
 
 	pr_warn("\nstack backtrace:\n");
 	dump_stack();
@@ -2247,6 +2279,7 @@ static int check_irq_usage(struct task_struct *curr, struct held_lock *prev,
 	struct lock_list *target_entry1;
 	struct lock_list *target_entry;
 	struct lock_list this, that;
+	pr_cont_t c;
 	int ret;
 
 	/*
@@ -2308,7 +2341,7 @@ static int check_irq_usage(struct task_struct *curr, struct held_lock *prev,
 	if (DEBUG_LOCKS_WARN_ON(ret == -1))
 		return 1;
 
-	print_bad_irq_dependency(curr, &this, &that,
+	print_bad_irq_dependency(&c, curr, &this, &that,
 				 target_entry, target_entry1,
 				 prev, next,
 				 backward_bit, forward_bit,
@@ -2347,7 +2380,7 @@ static void dec_chains(int irq_context)
 }
 
 static void
-print_deadlock_scenario(struct held_lock *nxt, struct held_lock *prv)
+print_deadlock_scenario(pr_cont_t *c, struct held_lock *nxt, struct held_lock *prv)
 {
 	struct lock_class *next = hlock_class(nxt);
 	struct lock_class *prev = hlock_class(prv);
@@ -2355,18 +2388,20 @@ print_deadlock_scenario(struct held_lock *nxt, struct held_lock *prv)
 	printk(" Possible unsafe locking scenario:\n\n");
 	printk("       CPU0\n");
 	printk("       ----\n");
-	printk("  lock(");
-	__print_lock_name(prev);
-	printk(KERN_CONT ");\n");
-	printk("  lock(");
-	__print_lock_name(next);
-	printk(KERN_CONT ");\n");
+	pr_cont_begin(c, "  lock(");
+	__print_lock_name(c, prev);
+	pr_cont_append(c, ");");
+	pr_cont_end(c);
+	pr_cont_begin(c, "  lock(");
+	__print_lock_name(c, next);
+	pr_cont_append(c, ");");
+	pr_cont_end(c);
 	printk("\n *** DEADLOCK ***\n\n");
 	printk(" May be due to missing lock nesting notation\n\n");
 }
 
 static void
-print_deadlock_bug(struct task_struct *curr, struct held_lock *prev,
+print_deadlock_bug(pr_cont_t *c, struct task_struct *curr, struct held_lock *prev,
 		   struct held_lock *next)
 {
 	if (!debug_locks_off_graph_unlock() || debug_locks_silent)
@@ -2379,13 +2414,17 @@ print_deadlock_bug(struct task_struct *curr, struct held_lock *prev,
 	pr_warn("--------------------------------------------\n");
 	pr_warn("%s/%d is trying to acquire lock:\n",
 		curr->comm, task_pid_nr(curr));
-	print_lock(next);
+	pr_cont_begin(c, KERN_WARNING);
+	_print_lock(c, next);
+	pr_cont_end(c);
 	pr_warn("\nbut task is already holding lock:\n");
-	print_lock(prev);
+	pr_cont_begin(c, KERN_WARNING);
+	_print_lock(c, prev);
+	pr_cont_end(c);
 
 	pr_warn("\nother info that might help us debug this:\n");
-	print_deadlock_scenario(next, prev);
-	lockdep_print_held_locks(curr);
+	print_deadlock_scenario(c, next, prev);
+	lockdep_print_held_locks(c, curr);
 
 	pr_warn("\nstack backtrace:\n");
 	dump_stack();
@@ -2404,6 +2443,7 @@ check_deadlock(struct task_struct *curr, struct held_lock *next)
 {
 	struct held_lock *prev;
 	struct held_lock *nest = NULL;
+	pr_cont_t c;
 	int i;
 
 	for (i = 0; i < curr->lockdep_depth; i++) {
@@ -2429,7 +2469,7 @@ check_deadlock(struct task_struct *curr, struct held_lock *next)
 		if (nest)
 			return 2;
 
-		print_deadlock_bug(curr, prev, next);
+		print_deadlock_bug(&c, curr, prev, next);
 		return 0;
 	}
 	return 1;
@@ -2901,18 +2941,18 @@ static inline int get_first_held_lock(struct task_struct *curr,
 /*
  * Returns the next chain_key iteration
  */
-static u64 print_chain_key_iteration(int class_idx, u64 chain_key)
+static u64 _print_chain_key_iteration(pr_cont_t *c, int class_idx, u64 chain_key)
 {
 	u64 new_chain_key = iterate_chain_key(chain_key, class_idx);
 
-	printk(" class_idx:%d -> chain_key:%016Lx",
+	pr_cont_append(c, " class_idx:%d -> chain_key:%016Lx",
 		class_idx,
 		(unsigned long long)new_chain_key);
 	return new_chain_key;
 }
 
 static void
-print_chain_keys_held_locks(struct task_struct *curr, struct held_lock *hlock_next)
+print_chain_keys_held_locks(pr_cont_t *c, struct task_struct *curr, struct held_lock *hlock_next)
 {
 	struct held_lock *hlock;
 	u64 chain_key = INITIAL_CHAIN_KEY;
@@ -2923,16 +2963,20 @@ print_chain_keys_held_locks(struct task_struct *curr, struct held_lock *hlock_ne
 		hlock_next->irq_context);
 	for (; i < depth; i++) {
 		hlock = curr->held_locks + i;
-		chain_key = print_chain_key_iteration(hlock->class_idx, chain_key);
 
-		print_lock(hlock);
+		pr_cont_begin(c, "");
+		chain_key = _print_chain_key_iteration(c, hlock->class_idx, chain_key);
+		_print_lock(c, hlock);
+		pr_cont_end(c);
 	}
 
-	print_chain_key_iteration(hlock_next->class_idx, chain_key);
-	print_lock(hlock_next);
+	pr_cont_begin(c, "");
+	_print_chain_key_iteration(c, hlock_next->class_idx, chain_key);
+	_print_lock(c, hlock_next);
+	pr_cont_end(c);
 }
 
-static void print_chain_keys_chain(struct lock_chain *chain)
+static void print_chain_keys_chain(pr_cont_t *c, struct lock_chain *chain)
 {
 	int i;
 	u64 chain_key = INITIAL_CHAIN_KEY;
@@ -2941,14 +2985,15 @@ static void print_chain_keys_chain(struct lock_chain *chain)
 	printk("depth: %u\n", chain->depth);
 	for (i = 0; i < chain->depth; i++) {
 		class_id = chain_hlocks[chain->base + i];
-		chain_key = print_chain_key_iteration(class_id, chain_key);
 
-		print_lock_name(lock_classes + class_id);
-		printk("\n");
+		pr_cont_begin(c, "");
+		chain_key = _print_chain_key_iteration(c, class_id, chain_key);
+		_print_lock_name(c, lock_classes + class_id);
+		pr_cont_end(c);
 	}
 }
 
-static void print_collision(struct task_struct *curr,
+static void print_collision(pr_cont_t *c, struct task_struct *curr,
 			struct held_lock *hlock_next,
 			struct lock_chain *chain)
 {
@@ -2961,10 +3006,10 @@ static void print_collision(struct task_struct *curr,
 	pr_warn("Hash chain already cached but the contents don't match!\n");
 
 	pr_warn("Held locks:");
-	print_chain_keys_held_locks(curr, hlock_next);
+	print_chain_keys_held_locks(c, curr, hlock_next);
 
 	pr_warn("Locks in cached chain:");
-	print_chain_keys_chain(chain);
+	print_chain_keys_chain(c, chain);
 
 	pr_warn("\nstack backtrace:\n");
 	dump_stack();
@@ -2983,11 +3028,12 @@ static int check_no_collision(struct task_struct *curr,
 {
 #ifdef CONFIG_DEBUG_LOCKDEP
 	int i, j, id;
+	pr_cont_t c;
 
 	i = get_first_held_lock(curr, hlock);
 
 	if (DEBUG_LOCKS_WARN_ON(chain->depth != curr->lockdep_depth - (i - 1))) {
-		print_collision(curr, hlock, chain);
+		print_collision(&c, curr, hlock, chain);
 		return 0;
 	}
 
@@ -2995,7 +3041,7 @@ static int check_no_collision(struct task_struct *curr,
 		id = curr->held_locks[i].class_idx;
 
 		if (DEBUG_LOCKS_WARN_ON(chain_hlocks[chain->base + j] != id)) {
-			print_collision(curr, hlock, chain);
+			print_collision(&c, curr, hlock, chain);
 			return 0;
 		}
 	}
@@ -3296,25 +3342,27 @@ static void check_chain_key(struct task_struct *curr)
 static int mark_lock(struct task_struct *curr, struct held_lock *this,
 		     enum lock_usage_bit new_bit);
 
-static void print_usage_bug_scenario(struct held_lock *lock)
+static void print_usage_bug_scenario(pr_cont_t *c, struct held_lock *lock)
 {
 	struct lock_class *class = hlock_class(lock);
 
 	printk(" Possible unsafe locking scenario:\n\n");
 	printk("       CPU0\n");
 	printk("       ----\n");
-	printk("  lock(");
-	__print_lock_name(class);
-	printk(KERN_CONT ");\n");
+	pr_cont_begin(c, "  lock(");
+	__print_lock_name(c, class);
+	pr_cont_append(c, ");");
+	pr_cont_end(c);
 	printk("  <Interrupt>\n");
-	printk("    lock(");
-	__print_lock_name(class);
-	printk(KERN_CONT ");\n");
+	pr_cont_begin(c, "    lock(");
+	__print_lock_name(c, class);
+	pr_cont_append(c, ");");
+	pr_cont_end(c);
 	printk("\n *** DEADLOCK ***\n\n");
 }
 
 static void
-print_usage_bug(struct task_struct *curr, struct held_lock *this,
+print_usage_bug(pr_cont_t *c, struct task_struct *curr, struct held_lock *this,
 		enum lock_usage_bit prev_bit, enum lock_usage_bit new_bit)
 {
 	if (!debug_locks_off_graph_unlock() || debug_locks_silent)
@@ -3335,16 +3383,18 @@ print_usage_bug(struct task_struct *curr, struct held_lock *this,
 		lockdep_softirq_context(curr), softirq_count() >> SOFTIRQ_SHIFT,
 		lockdep_hardirqs_enabled(),
 		lockdep_softirqs_enabled(curr));
-	print_lock(this);
+	pr_cont_begin(c, KERN_WARNING);
+	_print_lock(c, this);
+	pr_cont_end(c);
 
 	pr_warn("{%s} state was registered at:\n", usage_str[prev_bit]);
 	print_lock_trace(hlock_class(this)->usage_traces[prev_bit], 1);
 
 	print_irqtrace_events(curr);
 	pr_warn("\nother info that might help us debug this:\n");
-	print_usage_bug_scenario(this);
+	print_usage_bug_scenario(c, this);
 
-	lockdep_print_held_locks(curr);
+	lockdep_print_held_locks(c, curr);
 
 	pr_warn("\nstack backtrace:\n");
 	dump_stack();
@@ -3357,8 +3407,10 @@ static inline int
 valid_state(struct task_struct *curr, struct held_lock *this,
 	    enum lock_usage_bit new_bit, enum lock_usage_bit bad_bit)
 {
+	pr_cont_t c;
+
 	if (unlikely(hlock_class(this)->usage_mask & (1 << bad_bit))) {
-		print_usage_bug(curr, this, bad_bit, new_bit);
+		print_usage_bug(&c, curr, this, bad_bit, new_bit);
 		return 0;
 	}
 	return 1;
@@ -3369,7 +3421,7 @@ valid_state(struct task_struct *curr, struct held_lock *this,
  * print irq inversion bug:
  */
 static void
-print_irq_inversion_bug(struct task_struct *curr,
+print_irq_inversion_bug(pr_cont_t *c, struct task_struct *curr,
 			struct lock_list *root, struct lock_list *other,
 			struct held_lock *this, int forwards,
 			const char *irqclass)
@@ -3388,12 +3440,16 @@ print_irq_inversion_bug(struct task_struct *curr,
 	pr_warn("--------------------------------------------------------\n");
 	pr_warn("%s/%d just changed the state of lock:\n",
 		curr->comm, task_pid_nr(curr));
-	print_lock(this);
+	pr_cont_begin(c, KERN_WARNING);
+	_print_lock(c, this);
+	pr_cont_end(c);
 	if (forwards)
 		pr_warn("but this lock took another, %s-unsafe lock in the past:\n", irqclass);
 	else
 		pr_warn("but this lock was taken by another, %s-safe lock in the past:\n", irqclass);
-	print_lock_name(other->class);
+	pr_cont_begin(c, KERN_WARNING);
+	_print_lock_name(c, other->class);
+	pr_cont_end(c);
 	pr_warn("\n\nand interrupts could create inverse lock ordering between them.\n\n");
 
 	pr_warn("\nother info that might help us debug this:\n");
@@ -3410,19 +3466,19 @@ print_irq_inversion_bug(struct task_struct *curr,
 		depth--;
 	} while (entry && entry != root && (depth >= 0));
 	if (forwards)
-		print_irq_lock_scenario(root, other,
+		print_irq_lock_scenario(c, root, other,
 			middle ? middle->class : root->class, other->class);
 	else
-		print_irq_lock_scenario(other, root,
+		print_irq_lock_scenario(c, other, root,
 			middle ? middle->class : other->class, root->class);
 
-	lockdep_print_held_locks(curr);
+	lockdep_print_held_locks(c, curr);
 
 	pr_warn("\nthe shortest dependencies between 2nd lock and 1st lock:\n");
 	root->trace = save_trace();
 	if (!root->trace)
 		return;
-	print_shortest_lock_dependencies(other, root);
+	print_shortest_lock_dependencies(c, other, root);
 
 	pr_warn("\nstack backtrace:\n");
 	dump_stack();
@@ -3439,6 +3495,7 @@ check_usage_forwards(struct task_struct *curr, struct held_lock *this,
 	int ret;
 	struct lock_list root;
 	struct lock_list *target_entry;
+	pr_cont_t c;
 
 	root.parent = NULL;
 	root.class = hlock_class(this);
@@ -3450,7 +3507,7 @@ check_usage_forwards(struct task_struct *curr, struct held_lock *this,
 	if (ret == 1)
 		return ret;
 
-	print_irq_inversion_bug(curr, &root, target_entry,
+	print_irq_inversion_bug(&c, curr, &root, target_entry,
 				this, 1, irqclass);
 	return 0;
 }
@@ -3466,6 +3523,7 @@ check_usage_backwards(struct task_struct *curr, struct held_lock *this,
 	int ret;
 	struct lock_list root;
 	struct lock_list *target_entry;
+	pr_cont_t c;
 
 	root.parent = NULL;
 	root.class = hlock_class(this);
@@ -3477,7 +3535,7 @@ check_usage_backwards(struct task_struct *curr, struct held_lock *this,
 	if (ret == 1)
 		return ret;
 
-	print_irq_inversion_bug(curr, &root, target_entry,
+	print_irq_inversion_bug(&c, curr, &root, target_entry,
 				this, 0, irqclass);
 	return 0;
 }
@@ -3970,6 +4028,7 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
 			     enum lock_usage_bit new_bit)
 {
 	unsigned int new_mask = 1 << new_bit, ret = 1;
+	pr_cont_t c;
 
 	if (new_bit >= LOCK_USAGE_STATES) {
 		DEBUG_LOCKS_WARN_ON(1);
@@ -4015,7 +4074,9 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
 	 */
 	if (ret == 2) {
 		printk("\nmarked lock as {%s}:\n", usage_str[new_bit]);
-		print_lock(this);
+		pr_cont_begin(&c, "");
+		_print_lock(&c, this);
+		pr_cont_end(&c);
 		print_irqtrace_events(curr);
 		dump_stack();
 	}
@@ -4048,7 +4109,7 @@ static inline short task_wait_context(struct task_struct *curr)
 }
 
 static int
-print_lock_invalid_wait_context(struct task_struct *curr,
+print_lock_invalid_wait_context(pr_cont_t *c, struct task_struct *curr,
 				struct held_lock *hlock)
 {
 	short curr_inner;
@@ -4065,14 +4126,16 @@ print_lock_invalid_wait_context(struct task_struct *curr,
 	pr_warn("-----------------------------\n");
 
 	pr_warn("%s/%d is trying to lock:\n", curr->comm, task_pid_nr(curr));
-	print_lock(hlock);
+	pr_cont_begin(c, KERN_WARNING);
+	_print_lock(c, hlock);
+	pr_cont_end(c);
 
 	pr_warn("other info that might help us debug this:\n");
 
 	curr_inner = task_wait_context(curr);
 	pr_warn("context-{%d:%d}\n", curr_inner, curr_inner);
 
-	lockdep_print_held_locks(curr);
+	lockdep_print_held_locks(c, curr);
 
 	pr_warn("stack backtrace:\n");
 	dump_stack();
@@ -4100,6 +4163,7 @@ static int check_wait_context(struct task_struct *curr, struct held_lock *next)
 	short next_inner = hlock_class(next)->wait_type_inner;
 	short next_outer = hlock_class(next)->wait_type_outer;
 	short curr_inner;
+	pr_cont_t c;
 	int depth;
 
 	if (!curr->lockdep_depth || !next_inner || next->trylock)
@@ -4136,7 +4200,7 @@ static int check_wait_context(struct task_struct *curr, struct held_lock *next)
 	}
 
 	if (next_outer > curr_inner)
-		return print_lock_invalid_wait_context(curr, next);
+		return print_lock_invalid_wait_context(&c, curr, next);
 
 	return 0;
 }
@@ -4236,7 +4300,7 @@ struct lock_class_key __lockdep_no_validate__;
 EXPORT_SYMBOL_GPL(__lockdep_no_validate__);
 
 static void
-print_lock_nested_lock_not_held(struct task_struct *curr,
+print_lock_nested_lock_not_held(pr_cont_t *c, struct task_struct *curr,
 				struct held_lock *hlock,
 				unsigned long ip)
 {
@@ -4252,7 +4316,9 @@ print_lock_nested_lock_not_held(struct task_struct *curr,
 	pr_warn("----------------------------------\n");
 
 	pr_warn("%s/%d is trying to lock:\n", curr->comm, task_pid_nr(curr));
-	print_lock(hlock);
+	pr_cont_begin(c, KERN_WARNING);
+	_print_lock(c, hlock);
+	pr_cont_end(c);
 
 	pr_warn("\nbut this task is not holding:\n");
 	pr_warn("%s\n", hlock->nest_lock->name);
@@ -4261,7 +4327,7 @@ print_lock_nested_lock_not_held(struct task_struct *curr,
 	dump_stack();
 
 	pr_warn("\nother info that might help us debug this:\n");
-	lockdep_print_held_locks(curr);
+	lockdep_print_held_locks(c, curr);
 
 	pr_warn("\nstack backtrace:\n");
 	dump_stack();
@@ -4289,6 +4355,7 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 	int chain_head = 0;
 	int class_idx;
 	u64 chain_key;
+	pr_cont_t c;
 
 	if (unlikely(!debug_locks))
 		return 0;
@@ -4310,10 +4377,10 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 	debug_class_ops_inc(class);
 
 	if (very_verbose(class)) {
-		printk("\nacquire class [%px] %s", class->key, class->name);
+		pr_cont_begin(&c, "\nacquire class [%px] %s", class->key, class->name);
 		if (class->name_version > 1)
-			printk(KERN_CONT "#%d", class->name_version);
-		printk(KERN_CONT "\n");
+			pr_cont_append(&c, "#%d", class->name_version);
+		pr_cont_end(&c);
 		dump_stack();
 	}
 
@@ -4414,7 +4481,7 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 	chain_key = iterate_chain_key(chain_key, class_idx);
 
 	if (nest_lock && !__lock_is_held(nest_lock, -1)) {
-		print_lock_nested_lock_not_held(curr, hlock, ip);
+		print_lock_nested_lock_not_held(&c, curr, hlock, ip);
 		return 0;
 	}
 
@@ -4439,7 +4506,7 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 		printk(KERN_DEBUG "depth: %i  max: %lu!\n",
 		       curr->lockdep_depth, MAX_LOCK_DEPTH);
 
-		lockdep_print_held_locks(current);
+		lockdep_print_held_locks(&c, current);
 		debug_show_all_locks();
 		dump_stack();
 
@@ -4452,7 +4519,7 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 	return 1;
 }
 
-static void print_unlock_imbalance_bug(struct task_struct *curr,
+static void print_unlock_imbalance_bug(pr_cont_t *c, struct task_struct *curr,
 				       struct lockdep_map *lock,
 				       unsigned long ip)
 {
@@ -4468,12 +4535,14 @@ static void print_unlock_imbalance_bug(struct task_struct *curr,
 	pr_warn("-------------------------------------\n");
-	pr_warn("%s/%d is trying to release lock (",
-		curr->comm, task_pid_nr(curr));
-	print_lockdep_cache(lock);
-	pr_cont(") at:\n");
+	pr_cont_begin(c, KERN_WARNING "%s/%d is trying to release lock (",
+		      curr->comm, task_pid_nr(curr));
+	_print_lockdep_cache(c, lock);
+	pr_cont_append(c, ") at:");
+	pr_cont_end(c);
 	print_ip_sym(KERN_WARNING, ip);
 	pr_warn("but there are no more locks to release!\n");
 	pr_warn("\nother info that might help us debug this:\n");
-	lockdep_print_held_locks(curr);
+	lockdep_print_held_locks(c, curr);
 
 	pr_warn("\nstack backtrace:\n");
 	dump_stack();
@@ -4592,6 +4661,7 @@ __lock_set_class(struct lockdep_map *lock, const char *name,
 	unsigned int depth, merged = 0;
 	struct held_lock *hlock;
 	struct lock_class *class;
+	pr_cont_t c;
 	int i;
 
 	if (unlikely(!debug_locks))
@@ -4607,7 +4677,7 @@ __lock_set_class(struct lockdep_map *lock, const char *name,
 
 	hlock = find_held_lock(curr, lock, depth, &i);
 	if (!hlock) {
-		print_unlock_imbalance_bug(curr, lock, ip);
+		print_unlock_imbalance_bug(&c, curr, lock, ip);
 		return 0;
 	}
 
@@ -4637,6 +4707,7 @@ static int __lock_downgrade(struct lockdep_map *lock, unsigned long ip)
 	struct task_struct *curr = current;
 	unsigned int depth, merged = 0;
 	struct held_lock *hlock;
+	pr_cont_t c;
 	int i;
 
 	if (unlikely(!debug_locks))
@@ -4652,7 +4723,7 @@ static int __lock_downgrade(struct lockdep_map *lock, unsigned long ip)
 
 	hlock = find_held_lock(curr, lock, depth, &i);
 	if (!hlock) {
-		print_unlock_imbalance_bug(curr, lock, ip);
+		print_unlock_imbalance_bug(&c, curr, lock, ip);
 		return 0;
 	}
 
@@ -4691,6 +4762,7 @@ __lock_release(struct lockdep_map *lock, unsigned long ip)
 	struct task_struct *curr = current;
 	unsigned int depth, merged = 1;
 	struct held_lock *hlock;
+	pr_cont_t c;
 	int i;
 
 	if (unlikely(!debug_locks))
@@ -4702,7 +4774,7 @@ __lock_release(struct lockdep_map *lock, unsigned long ip)
 	 * own any locks, you've been drinking again?
 	 */
 	if (depth <= 0) {
-		print_unlock_imbalance_bug(curr, lock, ip);
+		print_unlock_imbalance_bug(&c, curr, lock, ip);
 		return 0;
 	}
 
@@ -4712,7 +4784,7 @@ __lock_release(struct lockdep_map *lock, unsigned long ip)
 	 */
 	hlock = find_held_lock(curr, lock, depth, &i);
 	if (!hlock) {
-		print_unlock_imbalance_bug(curr, lock, ip);
+		print_unlock_imbalance_bug(&c, curr, lock, ip);
 		return 0;
 	}
 
@@ -4942,6 +5014,7 @@ static void verify_lock_unused(struct lockdep_map *lock, struct held_lock *hlock
 {
 #ifdef CONFIG_PROVE_LOCKING
 	struct lock_class *class = look_up_lock_class(lock, subclass);
+	pr_cont_t c;
 
 	/* if it doesn't have a class (yet), it certainly hasn't been used yet */
 	if (!class)
@@ -4952,7 +5025,7 @@ static void verify_lock_unused(struct lockdep_map *lock, struct held_lock *hlock
 
 	hlock->class_idx = class - lock_classes;
 
-	print_usage_bug(current, hlock, LOCK_USED, LOCK_USAGE_STATES);
+	print_usage_bug(&c, current, hlock, LOCK_USED, LOCK_USAGE_STATES);
 #endif
 }
 
@@ -5103,7 +5176,7 @@ void lock_unpin_lock(struct lockdep_map *lock, struct pin_cookie cookie)
 EXPORT_SYMBOL_GPL(lock_unpin_lock);
 
 #ifdef CONFIG_LOCK_STAT
-static void print_lock_contention_bug(struct task_struct *curr,
+static void print_lock_contention_bug(pr_cont_t *c, struct task_struct *curr,
 				      struct lockdep_map *lock,
 				      unsigned long ip)
 {
@@ -5119,12 +5192,14 @@ static void print_lock_contention_bug(struct task_struct *curr,
 	pr_warn("---------------------------------\n");
-	pr_warn("%s/%d is trying to contend lock (",
-		curr->comm, task_pid_nr(curr));
-	print_lockdep_cache(lock);
-	pr_cont(") at:\n");
+	pr_cont_begin(c, KERN_WARNING "%s/%d is trying to contend lock (",
+		      curr->comm, task_pid_nr(curr));
+	_print_lockdep_cache(c, lock);
+	pr_cont_append(c, ") at:");
+	pr_cont_end(c);
 	print_ip_sym(KERN_WARNING, ip);
 	pr_warn("but there are no locks held!\n");
 	pr_warn("\nother info that might help us debug this:\n");
-	lockdep_print_held_locks(curr);
+	lockdep_print_held_locks(c, curr);
 
 	pr_warn("\nstack backtrace:\n");
 	dump_stack();
@@ -5138,6 +5213,7 @@ __lock_contended(struct lockdep_map *lock, unsigned long ip)
 	struct lock_class_stats *stats;
 	unsigned int depth;
 	int i, contention_point, contending_point;
+	pr_cont_t c;
 
 	depth = curr->lockdep_depth;
 	/*
@@ -5149,7 +5225,7 @@ __lock_contended(struct lockdep_map *lock, unsigned long ip)
 
 	hlock = find_held_lock(curr, lock, depth, &i);
 	if (!hlock) {
-		print_lock_contention_bug(curr, lock, ip);
+		print_lock_contention_bug(&c, curr, lock, ip);
 		return;
 	}
 
@@ -5179,6 +5255,7 @@ __lock_acquired(struct lockdep_map *lock, unsigned long ip)
 	struct lock_class_stats *stats;
 	unsigned int depth;
 	u64 now, waittime = 0;
+	pr_cont_t c;
 	int i, cpu;
 
 	depth = curr->lockdep_depth;
@@ -5191,7 +5268,7 @@ __lock_acquired(struct lockdep_map *lock, unsigned long ip)
 
 	hlock = find_held_lock(curr, lock, depth, &i);
 	if (!hlock) {
-		print_lock_contention_bug(curr, lock, _RET_IP_);
+		print_lock_contention_bug(&c, curr, lock, _RET_IP_);
 		return;
 	}
 
@@ -5733,7 +5810,7 @@ void __init lockdep_init(void)
 }
 
 static void
-print_freed_lock_bug(struct task_struct *curr, const void *mem_from,
+print_freed_lock_bug(pr_cont_t *c, struct task_struct *curr, const void *mem_from,
 		     const void *mem_to, struct held_lock *hlock)
 {
 	if (!debug_locks_off())
@@ -5748,8 +5825,10 @@ print_freed_lock_bug(struct task_struct *curr, const void *mem_from,
 	pr_warn("-------------------------\n");
 	pr_warn("%s/%d is freeing memory %px-%px, with a lock still held there!\n",
 		curr->comm, task_pid_nr(curr), mem_from, mem_to-1);
-	print_lock(hlock);
-	lockdep_print_held_locks(curr);
+	pr_cont_begin(c, KERN_WARNING);
+	_print_lock(c, hlock);
+	pr_cont_end(c);
+	lockdep_print_held_locks(c, curr);
 
 	pr_warn("\nstack backtrace:\n");
 	dump_stack();
@@ -5772,6 +5851,7 @@ void debug_check_no_locks_freed(const void *mem_from, unsigned long mem_len)
 	struct task_struct *curr = current;
 	struct held_lock *hlock;
 	unsigned long flags;
+	pr_cont_t c;
 	int i;
 
 	if (unlikely(!debug_locks))
@@ -5785,7 +5865,7 @@ void debug_check_no_locks_freed(const void *mem_from, unsigned long mem_len)
 					sizeof(*hlock->instance)))
 			continue;
 
-		print_freed_lock_bug(curr, mem_from, mem_from + mem_len, hlock);
+		print_freed_lock_bug(&c, curr, mem_from, mem_from + mem_len, hlock);
 		break;
 	}
 	raw_local_irq_restore(flags);
@@ -5794,6 +5874,8 @@ EXPORT_SYMBOL_GPL(debug_check_no_locks_freed);
 
 static void print_held_locks_bug(void)
 {
+	pr_cont_t c;
+
 	if (!debug_locks_off())
 		return;
 	if (debug_locks_silent)
@@ -5805,7 +5887,7 @@ static void print_held_locks_bug(void)
 	       current->comm, task_pid_nr(current));
 	print_kernel_ident();
 	pr_warn("------------------------------------\n");
-	lockdep_print_held_locks(current);
+	lockdep_print_held_locks(&c, current);
 	pr_warn("\nstack backtrace:\n");
 	dump_stack();
 }
@@ -5821,6 +5903,7 @@ EXPORT_SYMBOL_GPL(debug_check_no_locks_held);
 void debug_show_all_locks(void)
 {
 	struct task_struct *g, *p;
+	pr_cont_t c;
 
 	if (unlikely(!debug_locks)) {
 		pr_warn("INFO: lockdep is turned off.\n");
@@ -5832,7 +5915,7 @@ void debug_show_all_locks(void)
 	for_each_process_thread(g, p) {
 		if (!p->lockdep_depth)
 			continue;
-		lockdep_print_held_locks(p);
+		lockdep_print_held_locks(&c, p);
 		touch_nmi_watchdog();
 		touch_all_softlockup_watchdogs();
 	}
@@ -5850,17 +5933,20 @@ EXPORT_SYMBOL_GPL(debug_show_all_locks);
  */
 void debug_show_held_locks(struct task_struct *task)
 {
+	pr_cont_t c;
+
 	if (unlikely(!debug_locks)) {
 		printk("INFO: lockdep is turned off.\n");
 		return;
 	}
-	lockdep_print_held_locks(task);
+	lockdep_print_held_locks(&c, task);
 }
 EXPORT_SYMBOL_GPL(debug_show_held_locks);
 
 asmlinkage __visible void lockdep_sys_exit(void)
 {
 	struct task_struct *curr = current;
+	pr_cont_t c;
 
 	if (unlikely(curr->lockdep_depth)) {
 		if (!debug_locks_off())
@@ -5872,7 +5958,7 @@ asmlinkage __visible void lockdep_sys_exit(void)
 		pr_warn("------------------------------------------------\n");
 		pr_warn("%s/%d is leaving the kernel with locks still held!\n",
 				curr->comm, curr->pid);
-		lockdep_print_held_locks(curr);
+		lockdep_print_held_locks(&c, curr);
 	}
 
 	/*
@@ -5885,6 +5971,7 @@ asmlinkage __visible void lockdep_sys_exit(void)
 void lockdep_rcu_suspicious(const char *file, const int line, const char *s)
 {
 	struct task_struct *curr = current;
+	pr_cont_t c;
 
 	/* Note: the following can be executed concurrently, so be careful. */
 	pr_warn("\n");
@@ -5921,7 +6008,7 @@ void lockdep_rcu_suspicious(const char *file, const int line, const char *s)
 	if (!rcu_is_watching())
 		pr_warn("RCU used illegally from extended quiescent state!\n");
 
-	lockdep_print_held_locks(curr);
+	lockdep_print_held_locks(&c, curr);
 	pr_warn("\nstack backtrace:\n");
 	dump_stack();
 }
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [RFC PATCH 1/5] printk: implement pr_cont_t
  2020-08-19 23:26 ` [RFC PATCH 1/5] printk: implement pr_cont_t John Ogness
@ 2020-08-20  0:33   ` Joe Perches
  2020-08-20  7:44     ` David Laight
  2020-08-20 10:16   ` Petr Mladek
  2020-08-24  5:37   ` Sergey Senozhatsky
  2 siblings, 1 reply; 24+ messages in thread
From: Joe Perches @ 2020-08-20  0:33 UTC (permalink / raw)
  To: John Ogness, Linus Torvalds
  Cc: Petr Mladek, Sergey Senozhatsky, Steven Rostedt,
	Greg Kroah-Hartman, Thomas Gleixner, Sergey Senozhatsky,
	linux-kernel

On Thu, 2020-08-20 at 01:32 +0206, John Ogness wrote:
> Implement a new buffering mechanism for pr_cont messages.
> 
> Old mechanism syntax:
> 
>     printk(KERN_INFO "text");
>     printk(KERN_CONT " continued");
>     printk(KERN_CONT "\n");
> 
> New mechanism syntax:
> 
>     pr_cont_t c;
> 
>     pr_cont_begin(&c, KERN_INFO "text");

bikeshed:

I suggest:

	printk_begin(&printk_context, fmt, ...)
	printk_continue(&printk_context, fmt, ...) (maybe printk_next())
	printk_end(&printk_context, fmt, ...)

and macros using pr_<level>_begin

ie:

#define pr_info_begin(context, fmt, ...)		\
	printk_begin(context, KERN_INFO pr_fmt(fmt), ##__VA_ARGS__)

and each continued bit would use printk_continue or printk_end
as appropriate.

KERN_<LEVEL> could be a separate argument, but it's simple enough
to use printk_get_level on the format.
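[For reference, printk_get_level() just picks the level character out of
the KERN_SOH-prefixed format string. A small userspace model of that
helper pair, mirroring include/linux/kern_levels.h — an illustrative
sketch, not the kernel code itself:]

```c
/* Userspace sketch of the kernel's printk_get_level()/printk_skip_level()
 * helpers. KERN_<LEVEL> strings start with an ASCII SOH byte (0x01)
 * followed by the level character; KERN_CONT uses 'c'.
 * Uses the GNU C case-range extension, as the kernel does. */
#include <assert.h>
#include <string.h>

#define KERN_SOH_ASCII '\001'

/* Return the level character ('0'..'7', or 'c' for KERN_CONT),
 * or 0 if the format string carries no level prefix. */
static char printk_get_level(const char *buffer)
{
	if (buffer[0] == KERN_SOH_ASCII && buffer[1]) {
		switch (buffer[1]) {
		case '0' ... '7':
		case 'c':
			return buffer[1];
		}
	}
	return 0;
}

/* Skip past the prefix so only the message text remains. */
static const char *printk_skip_level(const char *buffer)
{
	if (printk_get_level(buffer))
		return buffer + 2;
	return buffer;
}
```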



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC PATCH 2/5] sysrq: use pr_cont_t for cont messages
  2020-08-19 23:26 ` [RFC PATCH 2/5] sysrq: use pr_cont_t for cont messages John Ogness
@ 2020-08-20  1:03   ` Linus Torvalds
  2020-08-20 17:48     ` Joe Perches
  0 siblings, 1 reply; 24+ messages in thread
From: Linus Torvalds @ 2020-08-20  1:03 UTC (permalink / raw)
  To: John Ogness
  Cc: Petr Mladek, Sergey Senozhatsky, Steven Rostedt,
	Greg Kroah-Hartman, Thomas Gleixner, Sergey Senozhatsky,
	Linux Kernel Mailing List

On Wed, Aug 19, 2020 at 4:26 PM John Ogness <john.ogness@linutronix.de> wrote:
>
> Use the new pr_cont_t mechanism.

This looks actively much worse than the old code.

Don't do this.

Just make pr_cont() do what it used to do.

                 Linus

^ permalink raw reply	[flat|nested] 24+ messages in thread

* RE: [RFC PATCH 1/5] printk: implement pr_cont_t
  2020-08-20  0:33   ` Joe Perches
@ 2020-08-20  7:44     ` David Laight
  2020-08-20  7:59       ` Joe Perches
  2020-08-20 16:03       ` Joe Perches
  0 siblings, 2 replies; 24+ messages in thread
From: David Laight @ 2020-08-20  7:44 UTC (permalink / raw)
  To: 'Joe Perches', John Ogness, Linus Torvalds
  Cc: Petr Mladek, Sergey Senozhatsky, Steven Rostedt,
	Greg Kroah-Hartman, Thomas Gleixner, Sergey Senozhatsky,
	linux-kernel

From: Joe Perches
> Sent: 20 August 2020 01:34
> 
> On Thu, 2020-08-20 at 01:32 +0206, John Ogness wrote:
> > Implement a new buffering mechanism for pr_cont messages.
> >
> > Old mechanism syntax:
> >
> >     printk(KERN_INFO "text");
> >     printk(KERN_CONT " continued");
> >     printk(KERN_CONT "\n");
> >
> > New mechanism syntax:
> >
> >     pr_cont_t c;
> >
> >     pr_cont_begin(&c, KERN_INFO "text");
> 
> bikeshed:
> 
> I suggest:
> 
> 	printk_begin(&printk_context, fmt, ...)
> 	printk_continue(&printk_context, fmt, ...) (maybe printk_next())
> 	printk_end(&printk_context, fmt, ...)

I see some very long source lines looming ...

Blue bikeshed:

You'd probably want printk_end(&ctx, NULL) to work.
Although the example doesn't show where the '\n' comes from.
I guess it is now inferred and actually deleted
from a 'normal' printk() call.

I've no idea how you'd 'size' the number of buffers.
You could use kmalloc(), perhaps falling back on a local buffer.
Which might lead to:
	pr_init(&ctx, level, GFP_KERNEL);
	pr_cont(&ctx, fmt, ...);
	pr_flush(&ctx);

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC PATCH 1/5] printk: implement pr_cont_t
  2020-08-20  7:44     ` David Laight
@ 2020-08-20  7:59       ` Joe Perches
  2020-08-20  8:49         ` David Laight
  2020-08-20 16:03       ` Joe Perches
  1 sibling, 1 reply; 24+ messages in thread
From: Joe Perches @ 2020-08-20  7:59 UTC (permalink / raw)
  To: David Laight, John Ogness, Linus Torvalds
  Cc: Petr Mladek, Sergey Senozhatsky, Steven Rostedt,
	Greg Kroah-Hartman, Thomas Gleixner, Sergey Senozhatsky,
	linux-kernel

On Thu, 2020-08-20 at 07:44 +0000, David Laight wrote:
> I've no idea how you'd 'size' the number of buffers.

I believe they are static and assume no more than 10
simultaneous uses of printk_begin.

+#define CONT_LINE_MAX LOG_LINE_MAX
+#define CONT_BUF_COUNT 10
+static char cont_buf[CONT_BUF_COUNT][CONT_LINE_MAX];
+static DECLARE_BITMAP(cont_buf_map, CONT_BUF_COUNT);

Though I don't see much value in a separate define
for CONT_LINE_MAX.
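[A minimal userspace model of that slot pool, with C11 atomics standing
in for the kernel's test_and_set_bit()/clear_bit(); the sizes and names
mirror the RFC, but this is only a sketch:]

```c
/* Fixed pool of per-caller cont buffers plus an in-use bitmap, as in
 * the RFC. get_cont_buf() returns CONT_BUF_COUNT when the pool is
 * exhausted, in which case the RFC falls back to printing the parts
 * individually. */
#include <stdatomic.h>

#define CONT_LINE_MAX	1024	/* stand-in for LOG_LINE_MAX */
#define CONT_BUF_COUNT	10

static char cont_buf[CONT_BUF_COUNT][CONT_LINE_MAX];
static atomic_bool cont_buf_used[CONT_BUF_COUNT];

static int get_cont_buf(void)
{
	for (int i = 0; i < CONT_BUF_COUNT; i++) {
		_Bool expected = 0;

		/* atomic equivalent of !test_and_set_bit(i, map) */
		if (atomic_compare_exchange_strong(&cont_buf_used[i],
						   &expected, 1)) {
			cont_buf[i][0] = '\0';	/* fresh, empty buffer */
			return i;
		}
	}
	return CONT_BUF_COUNT;	/* pool exhausted */
}

static void put_cont_buf(int index)
{
	atomic_store(&cont_buf_used[index], 0);
}
```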



^ permalink raw reply	[flat|nested] 24+ messages in thread

* RE: [RFC PATCH 1/5] printk: implement pr_cont_t
  2020-08-20  7:59       ` Joe Perches
@ 2020-08-20  8:49         ` David Laight
  2020-08-20  9:18           ` John Ogness
  0 siblings, 1 reply; 24+ messages in thread
From: David Laight @ 2020-08-20  8:49 UTC (permalink / raw)
  To: 'Joe Perches', John Ogness, Linus Torvalds
  Cc: Petr Mladek, Sergey Senozhatsky, Steven Rostedt,
	Greg Kroah-Hartman, Thomas Gleixner, Sergey Senozhatsky,
	linux-kernel

From: Joe Perches
> Sent: 20 August 2020 09:00
> 
> On Thu, 2020-08-20 at 07:44 +0000, David Laight wrote:
> > I've no idea how you'd 'size' the number of buffers.
> 
> I believe they are static and assume no more than 10
> simultaneous uses of printk_begin

What I meant was how you'd work out whether 10 was
in any way appropriate.
ISTM it is either 'too many' or 'nowhere near enough'
depending on exactly what the system is doing.

And if code 'leaks' them you are in deep doo-doos.

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)


^ permalink raw reply	[flat|nested] 24+ messages in thread

* RE: [RFC PATCH 1/5] printk: implement pr_cont_t
  2020-08-20  8:49         ` David Laight
@ 2020-08-20  9:18           ` John Ogness
  0 siblings, 0 replies; 24+ messages in thread
From: John Ogness @ 2020-08-20  9:18 UTC (permalink / raw)
  To: David Laight, 'Joe Perches', Linus Torvalds
  Cc: Petr Mladek, Sergey Senozhatsky, Steven Rostedt,
	Greg Kroah-Hartman, Thomas Gleixner, Sergey Senozhatsky,
	linux-kernel

On 2020-08-20, David Laight <David.Laight@ACULAB.COM> wrote:
>> On Thu, 2020-08-20 at 07:44 +0000, David Laight wrote:
>>> I've no idea how you'd 'size' the number of buffers.
>> 
>> I believe they are static and assume no more than 10
>> simultaneous uses of printk_begin
>
> What I meant was how you'd work out whether 10 was in any way
> appropriate.  ISTM it is either 'too many' or 'nowhere near enough'
> depending on exactly what the system is doing.

Right now mainline has 1, which breaks pr_cont just by booting your system.

I expect we will be increasing the number of buffers, regardless of
whether we adopt a new API or continue with what we have now.

> And if code 'leaks' them you are in deep doo-doos.

Not really. It falls back to printing individual parts. Also, the printk
subsystem has access to the open buffers and could even track the users
lockdep-style.

But this discussion has little to do with the API. These are
implementation details that may end up under the hood of the current
mainline API.

John Ogness

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC PATCH 1/5] printk: implement pr_cont_t
  2020-08-19 23:26 ` [RFC PATCH 1/5] printk: implement pr_cont_t John Ogness
  2020-08-20  0:33   ` Joe Perches
@ 2020-08-20 10:16   ` Petr Mladek
  2020-08-20 12:33     ` David Laight
  2020-08-24  2:08     ` Sergey Senozhatsky
  2020-08-24  5:37   ` Sergey Senozhatsky
  2 siblings, 2 replies; 24+ messages in thread
From: Petr Mladek @ 2020-08-20 10:16 UTC (permalink / raw)
  To: John Ogness
  Cc: Linus Torvalds, Sergey Senozhatsky, Steven Rostedt,
	Greg Kroah-Hartman, Thomas Gleixner, Sergey Senozhatsky,
	linux-kernel

On Thu 2020-08-20 01:32:28, John Ogness wrote:
> Implement a new buffering mechanism for pr_cont messages.
> 
> Old mechanism syntax:
> 
>     printk(KERN_INFO "text");
>     printk(KERN_CONT " continued");
>     printk(KERN_CONT "\n");
> 
> New mechanism syntax:
> 
>     pr_cont_t c;
> 
>     pr_cont_begin(&c, KERN_INFO "text");
>     pr_cont_append(&c, " continued");
>     pr_cont_end(&c);
> 
> Signed-off-by: John Ogness <john.ogness@linutronix.de>
> ---
>  include/linux/printk.h |  19 ++++++
>  kernel/printk/printk.c | 137 +++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 156 insertions(+)
> 
> diff --git a/include/linux/printk.h b/include/linux/printk.h
> index 34c1a7be3e01..4d9ce18e4afa 100644
> --- a/include/linux/printk.h
> +++ b/include/linux/printk.h
> @@ -380,6 +380,25 @@ extern int kptr_restrict;
>  #define pr_cont(fmt, ...) \
>  	printk(KERN_CONT fmt, ##__VA_ARGS__)
>  
> +/* opaque handle for continuous printk messages */
> +typedef struct {
> +	u8	index;
> +	u8	loglevel;
> +	u16	text_len;
> +} pr_cont_t;
> +
> +/* initialize handle, provide loglevel and initial message text */
> +int pr_cont_begin(pr_cont_t *c, const char *fmt, ...);
> +
> +/* append message text */
> +int pr_cont_append(pr_cont_t *c, const char *fmt, ...);
> +
> +/* flush message to kernel buffer */
> +void pr_cont_flush(pr_cont_t *c);
> +
> +/* flush message to kernel buffer, cleanup handle */
> +void pr_cont_end(pr_cont_t *c);
> +
>  /**
>   * pr_devel - Print a debug-level message conditionally
>   * @fmt: format string
> diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
> index ad8d1dfe5fbe..10113e7ea350 100644
> --- a/kernel/printk/printk.c
> +++ b/kernel/printk/printk.c
> @@ -3495,3 +3495,140 @@ void kmsg_dump_rewind(struct kmsg_dumper *dumper)
>  EXPORT_SYMBOL_GPL(kmsg_dump_rewind);
>  
>  #endif
> +
> +#define CONT_LINE_MAX LOG_LINE_MAX
> +#define CONT_BUF_COUNT 10
> +static char cont_buf[CONT_BUF_COUNT][CONT_LINE_MAX];
> +static DECLARE_BITMAP(cont_buf_map, CONT_BUF_COUNT);
> +
> +static int get_cont_buf(void)
> +{
> +	int bit;
> +
> +	do {
> +		bit = find_first_zero_bit(cont_buf_map, CONT_BUF_COUNT);
> +		if (bit == CONT_BUF_COUNT)
> +			break;
> +	} while (test_and_set_bit(bit, cont_buf_map));
> +
> +	return bit;
> +}

My big problem with this solution is a forgotten pr_cont_end(). It
will cause:

   + the message will never get printed

   + several calls into such a broken code will exhaust the pool of
     cont buffers and it will stop working for the entire system

The above might be solved with some fallback mechanism (timer,
watchdog) but then it needs some synchronization.

Why is it so bad?

   + it will happen like other bugs

   + it is hard to notice; it is easier to notice a hard-lockup than
     a missing message


Now that I think about it, this is the biggest problem with any
temporary buffer for pr_cont() lines. I am more and more convinced
that we should just _keep the current behavior_. It is not ideal,
but mixed messages are still better than lost ones.


That said, some printk() API using local buffer would be still
valuable for pieces of code where people really want to avoid
mixed lines. But it should be optional and people should be
aware of the risks.

Best Regards,
Petr

^ permalink raw reply	[flat|nested] 24+ messages in thread

* RE: [RFC PATCH 1/5] printk: implement pr_cont_t
  2020-08-20 10:16   ` Petr Mladek
@ 2020-08-20 12:33     ` David Laight
  2020-08-25 13:10       ` Petr Mladek
  2020-08-24  2:08     ` Sergey Senozhatsky
  1 sibling, 1 reply; 24+ messages in thread
From: David Laight @ 2020-08-20 12:33 UTC (permalink / raw)
  To: 'Petr Mladek', John Ogness
  Cc: Linus Torvalds, Sergey Senozhatsky, Steven Rostedt,
	Greg Kroah-Hartman, Thomas Gleixner, Sergey Senozhatsky,
	linux-kernel

From: Petr Mladek
> Sent: 20 August 2020 11:16
...
> Now that I think about it. This is the biggest problem with any temporary buffer
> for pr_cont() lines. I am more and more convinced that we should just
> _keep the current behavior_. It is not ideal. But sometimes mixed
> messages are always better than lost ones.

Maybe a marker to say 'more expected' might be useful.
OTOH lack of a trailing '\n' probably signifies that a
pr_cont() is likely to be next.

Unexpected pr_cont() could be output with a leading "... "
to help indicate the message is a continuation.

> That said, some printk() API using local buffer would be still
> valuable for pieces of code where people really want to avoid
> mixed lines. But it should be optional and people should be
> aware of the risks.

That could be very useful if it supported multi-line output.
Things like the stack backtrace code could use it to avoid
the mess that happens when multiple processes generate
tracebacks at the same time.

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC PATCH 1/5] printk: implement pr_cont_t
  2020-08-20  7:44     ` David Laight
  2020-08-20  7:59       ` Joe Perches
@ 2020-08-20 16:03       ` Joe Perches
  1 sibling, 0 replies; 24+ messages in thread
From: Joe Perches @ 2020-08-20 16:03 UTC (permalink / raw)
  To: David Laight, John Ogness, Linus Torvalds
  Cc: Petr Mladek, Sergey Senozhatsky, Steven Rostedt,
	Greg Kroah-Hartman, Thomas Gleixner, Sergey Senozhatsky,
	linux-kernel

On Thu, 2020-08-20 at 07:44 +0000, David Laight wrote:
> From: Joe Perches
> > Sent: 20 August 2020 01:34
> > 
> > On Thu, 2020-08-20 at 01:32 +0206, John Ogness wrote:
> > > Implement a new buffering mechanism for pr_cont messages.
> > > 
> > > Old mechanism syntax:
> > > 
> > >     printk(KERN_INFO "text");
> > >     printk(KERN_CONT " continued");
> > >     printk(KERN_CONT "\n");
> > > 
> > > New mechanism syntax:
> > > 
> > >     pr_cont_t c;
> > > 
> > >     pr_cont_begin(&c, KERN_INFO "text");
> > 
> > bikeshed:
> > 
> > I suggest:
> > 
> > 	printk_begin(&printk_context, fmt, ...)
> > 	printk_continue(&printk_context, fmt, ...) (maybe printk_next())
> > 	printk_end(&printk_context, fmt, ...)
> 
> I see some very long source lines looming ...

Macros make that pretty short.

pr_<level>_start
pr_next
pr_end

Doesn't seem too bad.


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC PATCH 2/5] sysrq: use pr_cont_t for cont messages
  2020-08-20  1:03   ` Linus Torvalds
@ 2020-08-20 17:48     ` Joe Perches
  2020-08-20 17:53       ` Linus Torvalds
  2020-08-20 22:11       ` John Ogness
  0 siblings, 2 replies; 24+ messages in thread
From: Joe Perches @ 2020-08-20 17:48 UTC (permalink / raw)
  To: Linus Torvalds, John Ogness
  Cc: Petr Mladek, Sergey Senozhatsky, Steven Rostedt,
	Greg Kroah-Hartman, Thomas Gleixner, Sergey Senozhatsky,
	Linux Kernel Mailing List

On Wed, 2020-08-19 at 18:03 -0700, Linus Torvalds wrote:
> On Wed, Aug 19, 2020 at 4:26 PM John Ogness <john.ogness@linutronix.de> wrote:
> > Use the new pr_cont_t mechanism.
> 
> This looks actively much worse than the old code.

Isn't this just a generic mechanism to simplify the accumulation
of logging message chunks?

It does seem straightforward enough to me.

And here it seems like the 'for (j =...)' loop is superfluous.

Maybe something like this would be reasonable:
---
 drivers/tty/sysrq.c | 19 ++++++-------------
 1 file changed, 6 insertions(+), 13 deletions(-)

diff --git a/drivers/tty/sysrq.c b/drivers/tty/sysrq.c
index a8e39b2cdd55..a145e4fc1a2a 100644
--- a/drivers/tty/sysrq.c
+++ b/drivers/tty/sysrq.c
@@ -572,21 +572,14 @@ void __handle_sysrq(int key, bool check_mask)
 			console_loglevel = orig_log_level;
 		}
 	} else {
-		pr_info("HELP : ");
-		/* Only print the help msg once per handler */
+		pr_context c;
+		pr_info_start(&c, "HELP :");
 		for (i = 0; i < ARRAY_SIZE(sysrq_key_table); i++) {
-			if (sysrq_key_table[i]) {
-				int j;
-
-				for (j = 0; sysrq_key_table[i] !=
-						sysrq_key_table[j]; j++)
-					;
-				if (j != i)
-					continue;
-				pr_cont("%s ", sysrq_key_table[i]->help_msg);
-			}
+			if (!sysrq_key_table[i])
+				continue;
+			pr_next(&c, " %s", sysrq_key_table[i]->help_msg);
 		}
-		pr_cont("\n");
+		pr_end(&c, "\n");
 		console_loglevel = orig_log_level;
 	}
 	rcu_read_unlock();



^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [RFC PATCH 2/5] sysrq: use pr_cont_t for cont messages
  2020-08-20 17:48     ` Joe Perches
@ 2020-08-20 17:53       ` Linus Torvalds
  2020-08-20 22:11       ` John Ogness
  1 sibling, 0 replies; 24+ messages in thread
From: Linus Torvalds @ 2020-08-20 17:53 UTC (permalink / raw)
  To: Joe Perches
  Cc: John Ogness, Petr Mladek, Sergey Senozhatsky, Steven Rostedt,
	Greg Kroah-Hartman, Thomas Gleixner, Sergey Senozhatsky,
	Linux Kernel Mailing List

On Thu, Aug 20, 2020 at 10:48 AM Joe Perches <joe@perches.com> wrote:
>
> Maybe something like this would be reasonable:

Yes. At this point this improves the code, rather than making it less legible.

              Linus

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC PATCH 2/5] sysrq: use pr_cont_t for cont messages
  2020-08-20 17:48     ` Joe Perches
  2020-08-20 17:53       ` Linus Torvalds
@ 2020-08-20 22:11       ` John Ogness
  2020-08-20 22:36         ` Joe Perches
  1 sibling, 1 reply; 24+ messages in thread
From: John Ogness @ 2020-08-20 22:11 UTC (permalink / raw)
  To: Joe Perches, Linus Torvalds
  Cc: Petr Mladek, Sergey Senozhatsky, Steven Rostedt,
	Greg Kroah-Hartman, Thomas Gleixner, Sergey Senozhatsky,
	Linux Kernel Mailing List

On 2020-08-20, Joe Perches <joe@perches.com> wrote:
> And here it seems like the 'for (j =...)' loop is superfluous.

AFAICT it is skipping duplicate entries. In the case of a duplicate,
only the first one is printed.

> Maybe something like this would be reasonable:
> ---
>  drivers/tty/sysrq.c | 19 ++++++-------------
>  1 file changed, 6 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/tty/sysrq.c b/drivers/tty/sysrq.c
> index a8e39b2cdd55..a145e4fc1a2a 100644
> --- a/drivers/tty/sysrq.c
> +++ b/drivers/tty/sysrq.c
> @@ -572,21 +572,14 @@ void __handle_sysrq(int key, bool check_mask)
>  			console_loglevel = orig_log_level;
>  		}
>  	} else {
> -		pr_info("HELP : ");
> -		/* Only print the help msg once per handler */
> +		pr_context c;
> +		pr_info_start(&c, "HELP :");
>  		for (i = 0; i < ARRAY_SIZE(sysrq_key_table); i++) {
> -			if (sysrq_key_table[i]) {
> -				int j;
> -
> -				for (j = 0; sysrq_key_table[i] !=
> -						sysrq_key_table[j]; j++)
> -					;
> -				if (j != i)
> -					continue;
> -				pr_cont("%s ", sysrq_key_table[i]->help_msg);
> -			}
> +			if (!sysrq_key_table[i])
> +				continue;
> +			pr_next(&c, " %s", sysrq_key_table[i]->help_msg);
>  		}
> -		pr_cont("\n");
> +		pr_end(&c, "\n");
>  		console_loglevel = orig_log_level;
>  	}
>  	rcu_read_unlock();

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC PATCH 2/5] sysrq: use pr_cont_t for cont messages
  2020-08-20 22:11       ` John Ogness
@ 2020-08-20 22:36         ` Joe Perches
  0 siblings, 0 replies; 24+ messages in thread
From: Joe Perches @ 2020-08-20 22:36 UTC (permalink / raw)
  To: John Ogness, Linus Torvalds
  Cc: Petr Mladek, Sergey Senozhatsky, Steven Rostedt,
	Greg Kroah-Hartman, Thomas Gleixner, Sergey Senozhatsky,
	Linux Kernel Mailing List

On Fri, 2020-08-21 at 00:17 +0206, John Ogness wrote:
> On 2020-08-20, Joe Perches <joe@perches.com> wrote:
> > And here it seems like the 'for (j =...)' loop is superfluous.
> 
> AFAICT it is skipping duplicate entries. In the case of a duplicate,
> only the first one is printed.

Ah, right.  thanks.



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC PATCH 1/5] printk: implement pr_cont_t
  2020-08-20 10:16   ` Petr Mladek
  2020-08-20 12:33     ` David Laight
@ 2020-08-24  2:08     ` Sergey Senozhatsky
  1 sibling, 0 replies; 24+ messages in thread
From: Sergey Senozhatsky @ 2020-08-24  2:08 UTC (permalink / raw)
  To: Petr Mladek
  Cc: John Ogness, Linus Torvalds, Sergey Senozhatsky, Steven Rostedt,
	Greg Kroah-Hartman, Thomas Gleixner, Sergey Senozhatsky,
	linux-kernel

On (20/08/20 12:16), Petr Mladek wrote:
> Now that I think about it. This is the biggest problem with any temporary buffer
> for pr_cont() lines. I am more and more convinced that we should just
> _keep the current behavior_. It is not ideal. But sometimes mixed
> messages are always better than lost ones.

Agreed. John, let's move this into a backlog for the time being? We can
continue discussions, but first let's fix the empty lines for cont line
print outs in the current dmesg output.

	-ss

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC PATCH 1/5] printk: implement pr_cont_t
  2020-08-19 23:26 ` [RFC PATCH 1/5] printk: implement pr_cont_t John Ogness
  2020-08-20  0:33   ` Joe Perches
  2020-08-20 10:16   ` Petr Mladek
@ 2020-08-24  5:37   ` Sergey Senozhatsky
  2 siblings, 0 replies; 24+ messages in thread
From: Sergey Senozhatsky @ 2020-08-24  5:37 UTC (permalink / raw)
  To: John Ogness
  Cc: Linus Torvalds, Petr Mladek, Sergey Senozhatsky, Steven Rostedt,
	Greg Kroah-Hartman, Thomas Gleixner, Sergey Senozhatsky,
	linux-kernel

On (20/08/20 01:32), John Ogness wrote:
> +#define CONT_LINE_MAX LOG_LINE_MAX
> +#define CONT_BUF_COUNT 10
> +static char cont_buf[CONT_BUF_COUNT][CONT_LINE_MAX];
> +static DECLARE_BITMAP(cont_buf_map, CONT_BUF_COUNT);
> +
> +static int get_cont_buf(void)
> +{
> +	int bit;
> +
> +	do {
> +		bit = find_first_zero_bit(cont_buf_map, CONT_BUF_COUNT);
> +		if (bit == CONT_BUF_COUNT)
> +			break;
> +	} while (test_and_set_bit(bit, cont_buf_map));
> +
> +	return bit;
> +}
> +static void put_cont_buf(int index)
> +{
> +	if (WARN_ON(index >= CONT_BUF_COUNT))
> +		return;
> +	if (WARN_ON(!test_bit(index, cont_buf_map)))
> +		return;

Doesn't WARN_ON() do pr_cont() print outs as well? I'm a bit worried by
the path that re-enters pr_cont() from a "broken" pr_cont() path.

Just ideas, to keep the discussion alive:

Overall, I wonder if pr_cont buffers should hold more info, e.g. the
owner context. If the same context does a "normal" printk() while still
owning an unfinished cont buffer, then we should flush the cont buffer.
I may be in the minority here, but I don't see a very strong reason to
keep the global order of the messages - e.g. if pid 234 does printk on
CPU1 then we probably don't need to flush pid 8889's cont buffer (which
runs on CPU42, for instance). Keeping the order of the messages within
a context is much more important: if pid 8889 starts a pr_cont buffer
and then triggers a pr_warn(), pr_alert() or any other printk(), then I
think we need to flush the cont buffer. Just my 5 cents.
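[A toy single-slot userspace model of that flush-on-ordinary-printk
idea; all names here are invented for illustration, not kernel API, and
the RFC would use per-caller buffers rather than one shared slot:]

```c
/* One cont buffer that remembers its owning context. A "normal" printk
 * from the owner flushes the pending cont line first, preserving the
 * per-context message order; printk from other contexts leaves it alone. */
#include <string.h>

struct cont_state {
	unsigned long owner;	/* illustrative caller id, e.g. a pid */
	char buf[256];
	size_t len;
};

static struct cont_state cont;	/* a single slot, for illustration */
static char out[1024];		/* stand-in for the log ringbuffer */

static void emit(const char *text)
{
	strcat(out, text);
	strcat(out, "\n");
}

/* Flush the pending cont line, if any, into the log. */
static void cont_flush(void)
{
	if (cont.len) {
		emit(cont.buf);
		cont.len = 0;
		cont.owner = 0;
	}
}

/* Append a continuation part on behalf of @caller. */
static void pr_cont_ctx(unsigned long caller, const char *text)
{
	size_t n = strlen(text);

	if (cont.len && cont.owner != caller)
		cont_flush();	/* single slot: another context steals it */
	cont.owner = caller;
	memcpy(cont.buf + cont.len, text, n + 1);
	cont.len += n;
}

/* A full message from the owner flushes its unfinished cont line first. */
static void printk_ctx(unsigned long caller, const char *text)
{
	if (cont.len && cont.owner == caller)
		cont_flush();
	emit(text);
}
```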

	-ss

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC PATCH 1/5] printk: implement pr_cont_t
  2020-08-20 12:33     ` David Laight
@ 2020-08-25 13:10       ` Petr Mladek
  2020-08-25 13:38         ` David Laight
  0 siblings, 1 reply; 24+ messages in thread
From: Petr Mladek @ 2020-08-25 13:10 UTC (permalink / raw)
  To: David Laight
  Cc: John Ogness, Linus Torvalds, Sergey Senozhatsky, Steven Rostedt,
	Greg Kroah-Hartman, Thomas Gleixner, Sergey Senozhatsky,
	linux-kernel

On Thu 2020-08-20 12:33:23, David Laight wrote:
> From: Petr Mladek
> > Sent: 20 August 2020 11:16
> ...
> > Now that I think about it. This is the biggest problem with any temporary buffer
> > for pr_cont() lines. I am more and more convinced that we should just
> > _keep the current behavior_. It is not ideal. But sometimes mixed
> > messages are always better than lost ones.
> 
> Maybe a marker to say 'more expected' might be useful.
> OTOH lack of a trailing '\n' probably signifies that a
> pr_cont() is likely to be next.

The problem is the "probably". Lack of trailing '\n' might also mean
that the author did not care. Note that newline is not strictly
required at the moment. The next message is concatenated only when
pr_cont() is used from the same process.

I would personally hate printk when I debugged some hard-to-reproduce
bug, finally succeeded, and some message was missing just because
of a missing newline.


> Unexpected pr_cont() could be output with a leading "... "
> to help indicate the message is a continuation.

Interesting idea. It might help to catch broken code. Well, I am still
not sure that people would appreciate this printk() behavior.


> > That said, some printk() API using local buffer would be still
> > valuable for pieces of code where people really want to avoid
> > mixed lines. But it should be optional and people should be
> > aware of the risks.
> 
> That could be very useful if it supported multi-line output.
> Thing like the stack backtrace code could use it avoid
> the mess that happens when multiple processes generate
> tracebacks at the same time.

Honestly, I am going to push against it. I agree that would be useful
but it is an evil path.

There have to be some limits. Backtraces are often printed in
situations where a buffer could not be allocated dynamically. There
will always be a situation where a buffer is not big enough.

If we add this feature, people will want to use it also for other
purposes that might need even bigger buffers and the size might
be even harder to predict.

And the temporary buffer is only part of the problem. The message also
has to be stored into the lockless ringbuffer, shown on consoles,
passed to userspace, and stored during panic by pstore.

The printk() design already is very complicated. And multiple lines
support would stretch it in yet another dimension.

And it is going to matter less. We are going to always store the
information about the printk caller, so it will be much easier
to grep related lines.

Best Regards,
Petr

^ permalink raw reply	[flat|nested] 24+ messages in thread

* RE: [RFC PATCH 1/5] printk: implement pr_cont_t
  2020-08-25 13:10       ` Petr Mladek
@ 2020-08-25 13:38         ` David Laight
  2020-08-25 15:32           ` Petr Mladek
  0 siblings, 1 reply; 24+ messages in thread
From: David Laight @ 2020-08-25 13:38 UTC (permalink / raw)
  To: 'Petr Mladek'
  Cc: John Ogness, Linus Torvalds, Sergey Senozhatsky, Steven Rostedt,
	Greg Kroah-Hartman, Thomas Gleixner, Sergey Senozhatsky,
	linux-kernel

From: Petr Mladek
> Sent: 25 August 2020 14:11
> 
> On Thu 2020-08-20 12:33:23, David Laight wrote:
> > From: Petr Mladek
> > > Sent: 20 August 2020 11:16
> > ...
> > > Now that I think about it. This is the biggest problem with any temporary buffer
> > > for pr_cont() lines. I am more and more convinced that we should just
> > > _keep the current behavior_. It is not ideal. But sometimes mixed
> > > messages are always better than lost ones.
> >
> > Maybe a marker to say 'more expected' might be useful.
> > OTOH lack of a trailing '\n' probably signifies that a
> > pr_cont() is likely to be next.
> 
> The problem is the "probably". Lack of trailing '\n' might also mean
> that the author did not care. Note that newline is not strictly
> required at the moment. The next message is concatenated only when
> pr_cont() is used from the same process.

Thinks.... (smoke comes out of ears...):
If the 'trace entry' contained the pid and whether it was a pr_cont
then the trace reader could merge continuation lines even if
there were a small number of interleaved other traces.

So anything reading continuously might break a continuation
(as might happen if there is a trace from an ISR).
But the output from dmesg and /var/log/messages will
almost always be correct.

This moves all the complexity away from the trace writing code.
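
For illustration, the reader-side merge could look roughly like the
sketch below. This is a minimal userspace model, not the real ringbuffer
format: the record layout (`pid`, `cont` flag), the `feed()` helper and
the slot count are all assumptions made up for the example.

```c
/* Hypothetical sketch of reader-side continuation merging. The record
 * layout and buffer sizes are illustrative only, not the kernel ABI. */
#include <stdbool.h>
#include <string.h>

struct record {
	int pid;		/* caller id stored with each entry */
	bool cont;		/* true for a pr_cont() fragment */
	const char *text;
};

#define MAX_PENDING 4

struct pending {
	int pid;
	char line[256];
	bool used;
};

static struct pending slots[MAX_PENDING];

static struct pending *find_slot(int pid)
{
	int i;

	for (i = 0; i < MAX_PENDING; i++)
		if (slots[i].used && slots[i].pid == pid)
			return &slots[i];
	for (i = 0; i < MAX_PENDING; i++)
		if (!slots[i].used) {
			slots[i].used = true;
			slots[i].pid = pid;
			slots[i].line[0] = '\0';
			return &slots[i];
		}
	return NULL;
}

/*
 * Feed one record to the reader. A cont record is appended to the
 * pending line of the same pid, so a small number of interleaved
 * records from other callers does not split the continuation.
 * Returns a completed line (terminated by '\n') or NULL if the line
 * is still being assembled.
 */
static const char *feed(const struct record *r)
{
	struct pending *p = find_slot(r->pid);

	if (!p)
		return r->text;	/* out of slots: emit as-is, may mix */
	if (!r->cont)
		p->line[0] = '\0';	/* non-cont starts a fresh line */
	strncat(p->line, r->text, sizeof(p->line) - strlen(p->line) - 1);
	if (strchr(r->text, '\n')) {
		p->used = false;
		return p->line;
	}
	return NULL;
}
```

The point of the sketch is that all state lives in the reader: the
writing side only has to tag each record with its caller and a cont
flag.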

> I would personally hate printk when I debugged some hard-to-reproduce
> bug, finally succeeded, and some message was missing just because
> of a missing newline.

My annoyance is that I want printk() to come out on the console
(like it used to 30 years ago).
I don't want the console output to be from a syslogd process.
On SYSV killing syslogd (mostly) did the trick.
x86 is annoying because you need to have set up a serial console
(or have a high speed camera pointing at the vga monitor).

> > Unexpected pr_cont() could be output with a leading "... "
> > to help indicate the message is a continuation.
> 
> Interesting idea. It might help to catch broken code. Well, I am still
> not sure that people would appreciate this printk() behavior.

I was thinking of just catching the case where pr_cont()
data can't be appended to the earlier line.

For direct tty output this would happen if an intervening
printk() had to add an extra '\n' to terminate the partial line.
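
A minimal sketch of that console-side rule, with a made-up `emit()`
helper and simplified state (the real console code is far more
involved; the function and field names here are assumptions for
illustration only):

```c
/* Hypothetical console-side handling: an unrelated printk() closes a
 * partial line with '\n'; a later pr_cont() that can no longer be
 * appended is emitted with a leading "... " marker. */
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

static bool line_open;	/* a partial line is open on the console */
static int line_owner;	/* pid that opened it */

/* Build the text actually written to the console into out[n].
 * Assumes text is non-empty. */
static void emit(int pid, bool cont, const char *text, char *out, size_t n)
{
	out[0] = '\0';
	if (cont && line_open && pid == line_owner) {
		/* normal case: append to the open partial line */
	} else if (cont) {
		/* continuation got detached from its line: mark it */
		strncat(out, "... ", n - 1);
	} else if (line_open) {
		/* unrelated message: terminate the partial line first */
		strncat(out, "\n", n - 1);
	}
	strncat(out, text, n - strlen(out) - 1);
	line_open = text[strlen(text) - 1] != '\n';
	line_owner = pid;
}
```

So only the rare, already-broken interleavings get the "... " marker;
the common append path stays unchanged.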

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC PATCH 1/5] printk: implement pr_cont_t
  2020-08-25 13:38         ` David Laight
@ 2020-08-25 15:32           ` Petr Mladek
  0 siblings, 0 replies; 24+ messages in thread
From: Petr Mladek @ 2020-08-25 15:32 UTC (permalink / raw)
  To: David Laight
  Cc: John Ogness, Linus Torvalds, Sergey Senozhatsky, Steven Rostedt,
	Greg Kroah-Hartman, Thomas Gleixner, Sergey Senozhatsky,
	linux-kernel

On Tue 2020-08-25 13:38:57, David Laight wrote:
> From: Petr Mladek
> > Sent: 25 August 2020 14:11
> > 
> > On Thu 2020-08-20 12:33:23, David Laight wrote:
> > > From: Petr Mladek
> > > > Sent: 20 August 2020 11:16
> > > ...
> > > > Now that I think about it. This is the biggest problem with any temporary buffer
> > > > for pr_cont() lines. I am more and more convinced that we should just
> > > > _keep the current behavior_. It is not ideal. But sometimes mixed
> > > > messages are always better than lost ones.
> > >
> > > Maybe a marker to say 'more expected' might be useful.
> > > OTOH lack of a trailing '\n' probably signifies that a
> > > pr_cont() is likely to be next.
> > 
> > The problem is the "probably". Lack of trailing '\n' might also mean
> > that the author did not care. Note that newline is not strictly
> > required at the moment. The next message is concatenated only when
> > pr_cont() is used from the same process.
> 
> Thinks.... (smoke comes out of ears...):
> If the 'trace entry' contained the pid and whether it was a pr_cont
> then the trace reader could merge continuation lines even if
> there was a small number of interleaved other traces.
> 
> So anything reading continuously might break a continuation
> (as might happen if there is a trace from an ISR).
> But the output from dmesg and /var/log/messages will
> almost always be correct.
> 
> This moves all the complexity away from the trace writing code.

Yeah, this was the original plan. Unfortunately, it would require
changes on the reader side and would break existing userspace
readers, see
https://lore.kernel.org/lkml/20200811160551.GC12903@alley/

And it is not acceptable, see
https://lore.kernel.org/lkml/CAHk-=wivdy6-i=iqJ1ZG9YrRzaS0_LHHEPwb9KJg-S8i-Wm30w@mail.gmail.com/

Best Regards,
Petr

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2020-08-25 15:32 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-08-19 23:26 [RFC PATCH 0/5] printk: new log_cont implementation John Ogness
2020-08-19 23:26 ` [RFC PATCH 1/5] printk: implement pr_cont_t John Ogness
2020-08-20  0:33   ` Joe Perches
2020-08-20  7:44     ` David Laight
2020-08-20  7:59       ` Joe Perches
2020-08-20  8:49         ` David Laight
2020-08-20  9:18           ` John Ogness
2020-08-20 16:03       ` Joe Perches
2020-08-20 10:16   ` Petr Mladek
2020-08-20 12:33     ` David Laight
2020-08-25 13:10       ` Petr Mladek
2020-08-25 13:38         ` David Laight
2020-08-25 15:32           ` Petr Mladek
2020-08-24  2:08     ` Sergey Senozhatsky
2020-08-24  5:37   ` Sergey Senozhatsky
2020-08-19 23:26 ` [RFC PATCH 2/5] sysrq: use pr_cont_t for cont messages John Ogness
2020-08-20  1:03   ` Linus Torvalds
2020-08-20 17:48     ` Joe Perches
2020-08-20 17:53       ` Linus Torvalds
2020-08-20 22:11       ` John Ogness
2020-08-20 22:36         ` Joe Perches
2020-08-19 23:26 ` [RFC PATCH 3/5] workqueue: " John Ogness
2020-08-19 23:26 ` [RFC PATCH 4/5] locking/selftest: " John Ogness
2020-08-19 23:26 ` [RFC PATCH 5/5] lockdep: " John Ogness
