From: Tetsuo Handa <>
To: Sergey Senozhatsky <>
Cc: Petr Mladek <>,
	Sergey Senozhatsky <>,
	Steven Rostedt <>,
	John Ogness <>,
	Andrew Morton <>,
	Linus Torvalds <>,
Subject: Re: [RFC PATCH] printk: Introduce "store now but print later" prefix.
Date: Mon, 4 Mar 2019 20:40:37 +0900	[thread overview]
Message-ID: <> (raw)
In-Reply-To: <20190304032202.GD23578@jagdpanzerIV>

On 2019/03/04 12:22, Sergey Senozhatsky wrote:
> On (02/23/19 13:42), Tetsuo Handa wrote:
> [..]
>> This patch tries to address "don't lockup the system" with minimal risk of
>> failing to "print out printk() messages", by allowing printk() callers to
>> tell printk() "store $body_text_lines lines into logbuf but start actual
>> printing after $trailer_text_line line is stored into logbuf". This patch
>> is different from existing printk_deferred(), for printk_deferred() is
>> intended for scheduler/timekeeping use only. Moreover, what this patch
>> wants to do is "do not try to print out printk() messages as soon as
>> possible", for accumulated stalling period cannot be decreased if
>> printk_deferred() from e.g. dump_tasks() from out_of_memory() immediately
>> prints out the messages. The point of this patch is to defer the stalling
>> duration to after leaving the critical section.
> We can export printk deferred, I guess; but I'm not sure if it's going
> to be easy to switch OOM to printk_deferred - there are lots of direct
> printk callers: warn-s, dump_stacks, etc; it might even be simpler to
> start re-directing OOM printouts to printk_safe buffer.

I confirmed that printk_deferred() is not suitable for this purpose, because
it suddenly stalls for seconds at random locations while flushing the pending
output accumulated by earlier printk_deferred() calls. Stalling inside a
critical section (e.g. with the RCU read lock held) is exactly what I want to
avoid.

> This is a bit of a strange issue, to be honest. If OOM prints too
> many messages then we might want to do some work on the OOM side.
> But, to begin with, can you give an example of such a lockup? Just
> to understand how big/real the problem is.
> What is that "OOM critical section" which printk can stall?

dump_tasks() is the OOM critical section from the RCU perspective.
We can minimize the RCU critical section by only taking a refcount on the
possible candidates while under RCU, and then printing their information and
dropping that refcount after leaving the RCU critical section.

diff --git a/include/linux/sched.h b/include/linux/sched.h
index f9b43c9..4781439 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1182,6 +1182,7 @@ struct task_struct {
 #ifdef CONFIG_MMU
 	struct task_struct		*oom_reaper_list;
+	struct list_head		oom_candidate_list;
 #endif
 #ifdef CONFIG_VMAP_STACK
 	struct vm_struct		*stack_vm_area;
 #endif
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 26ea863..6750b18 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -399,6 +399,7 @@ static void dump_tasks(struct mem_cgroup *memcg, const nodemask_t *nodemask)
 {
 	struct task_struct *p;
 	struct task_struct *task;
+	LIST_HEAD(candidates);
 
 	pr_info("Tasks state (memory values in pages):\n");
 	pr_info("[  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name\n");
@@ -407,6 +408,11 @@ static void dump_tasks(struct mem_cgroup *memcg, const nodemask_t *nodemask)
 	rcu_read_lock();
 	for_each_process(p) {
 		if (oom_unkillable_task(p, memcg, nodemask))
 			continue;
+		get_task_struct(p);
+		list_add_tail(&p->oom_candidate_list, &candidates);
+	}
+	rcu_read_unlock();
+	list_for_each_entry(p, &candidates, oom_candidate_list) {
 		task = find_lock_task_mm(p);
 		if (!task) {
@@ -425,7 +431,8 @@ static void dump_tasks(struct mem_cgroup *memcg, const nodemask_t *nodemask)
 			task->signal->oom_score_adj, task->comm);
 		task_unlock(task);
 	}
-	rcu_read_unlock();
+	list_for_each_entry_safe(p, task, &candidates, oom_candidate_list)
+		put_task_struct(p);
 }
 
 static void dump_oom_summary(struct oom_control *oc, struct task_struct *victim)

But almost all of out_of_memory() (where the oom_lock mutex is held) is the OOM
critical section from the memory reclaiming perspective, for we cannot reclaim
memory (and other concurrently allocating threads needlessly waste CPU time)
until SIGKILL is sent, which happens only after all the printk() output has
completed. Therefore, even though out_of_memory() prints a lot of messages, it
is expected to complete quickly, as if it were an interrupt handler. We could
even disable preemption inside out_of_memory() if all printk() calls made with
the oom_lock mutex held could be deferred until the oom_lock mutex is released.

> [..]
>> The possibility of failing to store all printk() messages to logbuf might
>> be increased by using "async" printk(). But since we have a lot of RAM
>> nowadays, allocating large logbuf enough to hold the entire SysRq-t output
>> using log_buf_len= kernel command line parameter won't be difficult.
> Note, logbuf size is limited - 2G. Might be not as large as people
> would want it to be.

Are the machines which would want a 2GB logbuf hosting so many threads that
even 2GB is not enough to hold the SysRq-t output? If yes, then I guess that
the tasklist traversal under the RCU read lock would lock up even without
printk().
