From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
To: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: linux-mm@kvack.org,
	"Luis Claudio R. Goncalves" <lgoncalv@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Mel Gorman <mgorman@techsingularity.net>,
	Michal Hocko <mhocko@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Petr Mladek <pmladek@suse.com>
Subject: Re: [PATCH] mm/page_alloc: Use write_seqlock_irqsave() instead write_seqlock() + local_irq_save().
Date: Thu, 22 Jun 2023 08:24:30 +0900
Message-ID: <8b6d3f39-c573-ca2b-957b-8c48c2fa68ad@I-love.SAKURA.ne.jp>
In-Reply-To: <01031ffe-c81f-9cec-76fb-e70d548429cf@I-love.SAKURA.ne.jp>

On 2023/06/21 23:50, Tetsuo Handa wrote:
> On 2023/06/21 23:34, Sebastian Andrzej Siewior wrote:
>>> Also, if local_irq_save() is hidden due to RT, what guarantees that
>>>
>>>   write_seqlock_irqsave(&zonelist_update_seq, flags);
>>>   <<IRQ>>
>>>     some_timer_function() {
>>>       printk();
>>>     }
>>>   <</IRQ>>
>>>   printk_deferred_enter();
>>>
>>> does not happen because write_seqlock_irqsave() does not disable IRQ?
>>
>> I don't see how zonelist_update_seq and printk here are connected
>> without the port lock or memory allocation. But there are two things
>> that are different on RT which probably answer your question:
> 
> It is explained as the first deadlock scenario in commit 1007843a9190
> ("mm/page_alloc: fix potential deadlock on zonelist_update_seq seqlock").
> We have to disable IRQ before making zonelist_update_seq.seqcount odd.
> 
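
For reference, the first deadlock scenario in that commit is roughly the following
(a simplified sketch of the call chain; CONFIG_MEMORY_HOTREMOVE is assumed so that
zonelist_iter_begin() actually reads the seqlock):

  CPU0: __build_all_zonelists() {
          write_seqlock(&zonelist_update_seq); /* seqcount becomes odd */
          <<IRQ>>
            some_timer_function() {
              kmalloc(GFP_ATOMIC) {
                __alloc_pages_slowpath() {
                  zonelist_iter_begin() {
                    read_seqbegin(&zonelist_update_seq); /* spins because the
                                                            seqcount is odd, and
                                                            the writer cannot
                                                            resume on this CPU */
                  }
                }
              }
            }
          <</IRQ>>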

Since we must replace local_irq_save() + write_seqlock() with write_seqlock_irqsave() in the
CONFIG_PREEMPT_RT=y case, but must not do so in the CONFIG_PREEMPT_RT=n case, is the proper
fix something like below?

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 47421bedc12b..e3e9bd719dcc 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5798,28 +5798,30 @@ static void per_cpu_pages_init(struct per_cpu_pages *pcp, struct per_cpu_zonesta
 #define BOOT_PAGESET_HIGH	0
 #define BOOT_PAGESET_BATCH	1
 static DEFINE_PER_CPU(struct per_cpu_pages, boot_pageset);
 static DEFINE_PER_CPU(struct per_cpu_zonestat, boot_zonestats);
 
 static void __build_all_zonelists(void *data)
 {
 	int nid;
 	int __maybe_unused cpu;
 	pg_data_t *self = data;
+#ifndef CONFIG_PREEMPT_RT
 	unsigned long flags;
 
 	/*
 	 * Explicitly disable this CPU's interrupts before taking seqlock
 	 * to prevent any IRQ handler from calling into the page allocator
 	 * (e.g. GFP_ATOMIC) that could hit zonelist_iter_begin and livelock.
 	 */
 	local_irq_save(flags);
+#endif
 	/*
 	 * Explicitly disable this CPU's synchronous printk() before taking
 	 * seqlock to prevent any printk() from trying to hold port->lock, for
 	 * tty_insert_flip_string_and_push_buffer() on other CPU might be
 	 * calling kmalloc(GFP_ATOMIC | __GFP_NOWARN) with port->lock held.
 	 */
 	printk_deferred_enter();
 	write_seqlock(&zonelist_update_seq);
 
 #ifdef CONFIG_NUMA
@@ -5852,21 +5854,23 @@ static void __build_all_zonelists(void *data)
 		 * secondary cpus' numa_mem as they come on-line.  During
 		 * node/memory hotplug, we'll fixup all on-line cpus.
 		 */
 		for_each_online_cpu(cpu)
 			set_cpu_numa_mem(cpu, local_memory_node(cpu_to_node(cpu)));
 #endif
 	}
 
 	write_sequnlock(&zonelist_update_seq);
 	printk_deferred_exit();
+#ifndef CONFIG_PREEMPT_RT
 	local_irq_restore(flags);
+#endif
 }
 
 static noinline void __init
 build_all_zonelists_init(void)
 {
 	int cpu;
 
 	__build_all_zonelists(NULL);
 
 	/*
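
If the #ifndef blocks inside __build_all_zonelists() are considered too ugly, the
same ordering could be wrapped into small local helpers; a sketch (the helper
names below are made up, not an existing API):

  static unsigned long zonelist_update_begin(void)
  {
          unsigned long flags = 0;

  #ifndef CONFIG_PREEMPT_RT
          /* Keep IRQ handlers on this CPU away from zonelist_iter_begin(). */
          local_irq_save(flags);
  #endif
          /* Avoid synchronous printk() while the seqcount is odd. */
          printk_deferred_enter();
          write_seqlock(&zonelist_update_seq);
          return flags;
  }

  static void zonelist_update_end(unsigned long flags)
  {
          write_sequnlock(&zonelist_update_seq);
          printk_deferred_exit();
  #ifndef CONFIG_PREEMPT_RT
          local_irq_restore(flags);
  #endif
  }

__build_all_zonelists() would then call flags = zonelist_update_begin() and
zonelist_update_end(flags), keeping the CONFIG_PREEMPT_RT dependency in one place.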


By the way, given that the

  write_seqlock_irqsave(&zonelist_update_seq, flags);
  <<IRQ>>
    some_timer_function() {
      kmalloc(GFP_ATOMIC);
    }
  <</IRQ>>
  printk_deferred_enter();

scenario in the CONFIG_PREEMPT_RT=y case is handled by executing some_timer_function()
on a dedicated kernel thread for IRQs, what guarantees that the kernel thread for
IRQs gives up the CPU and that the user thread which called write_seqlock() gets to
run until write_sequnlock() is called? How can the kernel figure out that executing
the user thread needs a higher priority than executing the kernel thread for IRQs?
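
In other words, my concern for the CONFIG_PREEMPT_RT=y case is something like below
(a sketch of the worry, not of actual code; "irqthread" stands for the force-threaded
interrupt handler):

  task A    : write_seqlock(&zonelist_update_seq); /* seqcount becomes odd */
  task A    : preempted by irqthread which has a higher priority
  irqthread : kmalloc(GFP_ATOMIC) {
                __alloc_pages_slowpath() {
                  zonelist_iter_begin() {
                    read_seqbegin(&zonelist_update_seq); /* waits for the seqcount
                                                            to become even */
                  }
                }
              }

Unless something boosts task A (or makes irqthread give up the CPU), the seqcount
never becomes even again.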



Thread overview: 48+ messages
2023-06-21 10:40 [PATCH] mm/page_alloc: Use write_seqlock_irqsave() instead write_seqlock() + local_irq_save() Sebastian Andrzej Siewior
2023-06-21 10:59 ` Michal Hocko
2023-06-21 11:16   ` Sebastian Andrzej Siewior
2023-06-21 11:49     ` Michal Hocko
2023-06-21 13:11       ` Sebastian Andrzej Siewior
2023-06-21 13:22         ` Michal Hocko
2023-06-21 13:25           ` Sebastian Andrzej Siewior
2023-06-21 11:14 ` David Hildenbrand
2023-06-21 11:33 ` Tetsuo Handa
2023-06-21 12:40   ` Petr Mladek
2023-06-21 13:08     ` Sebastian Andrzej Siewior
2023-06-21 13:06   ` Sebastian Andrzej Siewior
2023-06-21 13:32     ` Tetsuo Handa
2023-06-21 14:34       ` Sebastian Andrzej Siewior
2023-06-21 14:50         ` Tetsuo Handa
2023-06-21 23:24           ` Tetsuo Handa [this message]
2023-06-22  7:18             ` Michal Hocko
2023-06-22 10:58               ` Tetsuo Handa
2023-06-22 12:09                 ` Michal Hocko
2023-06-22 13:36             ` Tetsuo Handa
2023-06-22 14:11               ` Petr Mladek
2023-06-22 14:28                 ` Tetsuo Handa
2023-06-23  9:35                   ` Sebastian Andrzej Siewior
2023-06-22 15:04                 ` Petr Mladek
2023-06-22 15:43                   ` Tetsuo Handa
2023-06-23  9:45                     ` Sebastian Andrzej Siewior
2023-06-23  9:51                       ` Tetsuo Handa
2023-06-23 10:11                         ` Sebastian Andrzej Siewior
2023-06-23 10:36                           ` Tetsuo Handa
2023-06-23 12:44                             ` Sebastian Andrzej Siewior
2023-06-23 12:57                               ` Michal Hocko
2023-06-23 10:53                           ` Petr Mladek
2023-06-23 11:16                             ` Tetsuo Handa
2023-06-23 13:31                             ` Sebastian Andrzej Siewior
2023-06-23 15:38                               ` Petr Mladek
2023-06-23 16:04                                 ` Sebastian Andrzej Siewior
2023-06-23  9:31               ` Sebastian Andrzej Siewior
2023-06-23  7:27           ` Sebastian Andrzej Siewior
2023-06-21 15:38         ` Petr Mladek
2023-06-23  8:12           ` Sebastian Andrzej Siewior
2023-06-23  9:21             ` Michal Hocko
2023-06-23  9:58               ` Sebastian Andrzej Siewior
2023-06-23 10:43                 ` Michal Hocko
2023-06-23 10:45                 ` Sebastian Andrzej Siewior
2023-06-23 10:50                   ` Sebastian Andrzej Siewior
2023-06-23 11:32                   ` Michal Hocko
2023-06-23 10:40             ` Petr Mladek
2023-06-23 13:24               ` Sebastian Andrzej Siewior
