Linux-rt-users archive on lore.kernel.org
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
To: Marcelo Tosatti <mtosatti@redhat.com>
Cc: linux-rt-users@vger.kernel.org,
	Juri Lelli <juri.lelli@redhat.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Frederic Weisbecker <frederic@kernel.org>
Subject: Re: [patch 2/2] mm: page_alloc: drain pages remotely
Date: Tue, 16 Jun 2020 18:32:48 +0200
Message-ID: <20200616163248.z5bdrx7gj2sf7d3m@linutronix.de> (raw)
In-Reply-To: <20200616161409.299575008@fuller.cnet>

On 2020-06-16 13:11:51 [-0300], Marcelo Tosatti wrote:
> Remote draining of pages was removed from 5.6-rt.
> 
> Unfortunately it's necessary for use-cases that have a busy-spinning
> SCHED_FIFO thread on an isolated CPU:
> 
> [ 7475.821066] INFO: task ld:274531 blocked for more than 600 seconds.
> [ 7475.822157]       Not tainted 4.18.0-208.rt5.20.el8.x86_64 #1
> echo 0 > /proc/sys/kernel/hung_task_timeout_secs disables this message.
> [ 7475.824392] ld              D    0 274531 274530 0x00084080
> [ 7475.825307] Call Trace:
> [ 7475.825761]  __schedule+0x342/0x850
> [ 7475.826377]  schedule+0x39/0xd0
> [ 7475.826923]  schedule_timeout+0x20e/0x410
> [ 7475.827610]  ? __schedule+0x34a/0x850
> [ 7475.828247]  ? ___preempt_schedule+0x16/0x18
> [ 7475.828953]  wait_for_completion+0x85/0xe0
> [ 7475.829653]  flush_work+0x11a/0x1c0
> [ 7475.830313]  ? flush_workqueue_prep_pwqs+0x130/0x130
> [ 7475.831148]  drain_all_pages+0x140/0x190
> [ 7475.831803]  __alloc_pages_slowpath+0x3f8/0xe20
> [ 7475.832571]  ? mem_cgroup_commit_charge+0xcb/0x510
> [ 7475.833371]  __alloc_pages_nodemask+0x1ca/0x2b0
> [ 7475.834134]  pagecache_get_page+0xb5/0x2d0
> [ 7475.834814]  ? account_page_dirtied+0x11a/0x220
> [ 7475.835579]  grab_cache_page_write_begin+0x1f/0x40
> [ 7475.836379]  iomap_write_begin.constprop.44+0x1c1/0x370
> [ 7475.837241]  ? iomap_write_end+0x91/0x290
> [ 7475.837911]  iomap_write_actor+0x92/0x170
> ...
> 
> So enable remote draining again.

Is upstream affected by this? And if not, why not?

> Index: linux-rt-devel/mm/page_alloc.c
> ===================================================================
> --- linux-rt-devel.orig/mm/page_alloc.c
> +++ linux-rt-devel/mm/page_alloc.c
> @@ -360,6 +360,16 @@ EXPORT_SYMBOL(nr_online_nodes);
>  
>  static DEFINE_LOCAL_IRQ_LOCK(pa_lock);
>  
> +#ifdef CONFIG_PREEMPT_RT
> +# define cpu_lock_irqsave(cpu, flags)          \
> +	local_lock_irqsave_on(pa_lock, flags, cpu)
> +# define cpu_unlock_irqrestore(cpu, flags)     \
> +	local_unlock_irqrestore_on(pa_lock, flags, cpu)
> +#else
> +# define cpu_lock_irqsave(cpu, flags)		local_irq_save(flags)
> +# define cpu_unlock_irqrestore(cpu, flags)	local_irq_restore(flags)
> +#endif

This is going to be tough. I removed the cross-CPU local-locks from RT
because they behave differently for !RT. Furthermore, we now have
local_locks upstream as of v5.8-rc1, see commit
   91710728d1725 ("locking: Introduce local_lock()")

so whatever happens here should have upstream blessing, or I will be
forced to drop the patch again while moving forward.

Before this, I looked for cases where a remote drain is useful or
needed and didn't find one. I talked to Frederic, and for the
NO_HZ_FULL people it is not a problem: they don't enter the kernel,
so they never get anything onto their per-CPU lists.

We had this earlier attempt:
  https://lore.kernel.org/linux-mm/20190424111208.24459-1-bigeasy@linutronix.de/

Sebastian

Thread overview: 11+ messages
2020-06-16 16:11 [patch 0/2] re-enable remote per-cpu-pages draining Marcelo Tosatti
2020-06-16 16:11 ` [patch 1/2] rt: add local_lock_on and local_lock_irqsave_on to locallock.h Marcelo Tosatti
2020-06-16 16:11 ` [patch 2/2] mm: page_alloc: drain pages remotely Marcelo Tosatti
2020-06-16 16:32   ` Sebastian Andrzej Siewior [this message]
2020-06-16 16:55     ` Marcelo Tosatti
2020-06-17  7:42       ` Sebastian Andrzej Siewior
2020-06-17 10:11         ` Marcelo Tosatti
2020-06-17 12:01           ` Sebastian Andrzej Siewior
2020-06-16 17:04     ` Marcelo Tosatti
2020-06-17  7:46       ` Sebastian Andrzej Siewior
2020-06-17 10:12         ` Marcelo Tosatti

