From: Andrew Morton <akpm@linux-foundation.org>
To: Marcelo Tosatti <mtosatti@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Minchan Kim <minchan@kernel.org>,
	Matthew Wilcox <willy@infradead.org>,
	Mel Gorman <mgorman@techsingularity.net>,
	Nicolas Saenz Julienne <nsaenzju@redhat.com>,
	Juri Lelli <juri.lelli@redhat.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	"Paul E. McKenney" <paulmck@kernel.org>
Subject: Re: [patch v4] mm: lru_cache_disable: replace work queue synchronization with synchronize_rcu
Date: Fri, 4 Mar 2022 16:35:54 -0800	[thread overview]
Message-ID: <20220304163554.8872fe5d5a9d634f7a2884f5@linux-foundation.org> (raw)
In-Reply-To: <YiI+a9gTr/UBCf0X@fuller.cnet>

On Fri, 4 Mar 2022 13:29:31 -0300 Marcelo Tosatti <mtosatti@redhat.com> wrote:

>  
> On systems that run FIFO:1 applications busy-looping on isolated CPUs,
> executing tasks on those CPUs at a lower priority is undesirable: it
> either hangs the system or lengthens the interruption of the FIFO task,
> since the lower-priority task runs with very small sched slices.
> 
> Commit d479960e44f27e0e52ba31b21740b703c538027c ("mm: disable LRU 
> pagevec during the migration temporarily") relies on 
> queueing work items on all online CPUs to ensure visibility
> of lru_disable_count.
> 
> However, it is possible to use synchronize_rcu(), which provides the same
> guarantees (see the comment this patch modifies in lru_cache_disable()).
> 
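For reference, the guarantee being relied on here can be sketched as a
minimal userspace analogue (hypothetical names, plain C11 atomics, not the
kernel code): an updater publishes a flag and then waits out readers that
may have sampled the old value, after which every new reader is guaranteed
to see the flag.

/* Hypothetical userspace analogue; names are made up for illustration. */
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int disable_count;   /* plays the role of lru_disable_count */
static atomic_int readers_inside;  /* readers currently in a critical section */

static void reader(void)
{
	atomic_fetch_add(&readers_inside, 1);  /* ~ rcu_read_lock()/preempt_disable() */
	if (atomic_load(&disable_count) == 0) {
		/* fast path: per-CPU batching would be permitted here */
	}
	atomic_fetch_sub(&readers_inside, 1);  /* ~ rcu_read_unlock() */
}

static void disable(void)
{
	atomic_fetch_add(&disable_count, 1);   /* ~ atomic_inc(&lru_disable_count) */
	/*
	 * Wait for readers that may have sampled the old value; this is the
	 * role synchronize_rcu() plays in the patch.
	 */
	while (atomic_load(&readers_inside) != 0)
		sched_yield();
	/* every reader entering from now on observes disable_count > 0 */
}

int main(void)
{
	reader();
	disable();
	printf("disable_count=%d\n", atomic_load(&disable_count));
	return 0;
}

Real RCU provides the same ordering without readers writing to shared
state, which is what keeps the read side cheap on this path.
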
> Fixes:
> 
> ...
>
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -831,8 +831,7 @@ inline void __lru_add_drain_all(bool force_all_cpus)
>  	for_each_online_cpu(cpu) {
>  		struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
>  
> -		if (force_all_cpus ||
> -		    pagevec_count(&per_cpu(lru_pvecs.lru_add, cpu)) ||
> +		if (pagevec_count(&per_cpu(lru_pvecs.lru_add, cpu)) ||

Please describe this alteration in the changelog.

>  		    data_race(pagevec_count(&per_cpu(lru_rotate.pvec, cpu))) ||
>  		    pagevec_count(&per_cpu(lru_pvecs.lru_deactivate_file, cpu)) ||
>  		    pagevec_count(&per_cpu(lru_pvecs.lru_deactivate, cpu)) ||
> @@ -876,15 +875,21 @@ atomic_t lru_disable_count = ATOMIC_INIT(0);
>  void lru_cache_disable(void)
>  {
>  	atomic_inc(&lru_disable_count);
> -#ifdef CONFIG_SMP
>  	/*
> -	 * lru_add_drain_all in the force mode will schedule draining on
> -	 * all online CPUs so any calls of lru_cache_disabled wrapped by
> -	 * local_lock or preemption disabled would be ordered by that.
> -	 * The atomic operation doesn't need to have stronger ordering
> -	 * requirements because that is enforced by the scheduling
> -	 * guarantees.
> +	 * Readers of lru_disable_count are protected by either disabling
> +	 * preemption or rcu_read_lock:
> +	 *
> +	 * preempt_disable, local_irq_disable  [bh_lru_lock()]
> +	 * rcu_read_lock		       [rt_spin_lock CONFIG_PREEMPT_RT]
> +	 * preempt_disable		       [local_lock !CONFIG_PREEMPT_RT]
> +	 *
> +	 * Since v5.1 kernel, synchronize_rcu() is guaranteed to wait on
> +	 * preempt_disable() regions of code. So any CPU which sees
> +	 * lru_disable_count = 0 will have exited the critical
> +	 * section when synchronize_rcu() returns.
>  	 */
> +	synchronize_rcu();
> +#ifdef CONFIG_SMP
>  	__lru_add_drain_all(true);
>  #else
>  	lru_add_and_bh_lrus_drain();
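
The pairing described in the comment above can also be exercised from
userspace with liburcu (the userspace RCU library), which exposes the same
rcu_read_lock()/synchronize_rcu() API. A rough sketch, assuming liburcu is
installed and the program is linked with -lurcu and -lpthread (the variable
name is illustrative, this is not mm/swap.c):

/* Assumed build line: cc demo.c -lurcu -lpthread */
#include <pthread.h>
#include <stdio.h>
#include <urcu.h>          /* userspace RCU, default (memb) flavour */

static int lru_disabled;   /* illustrative stand-in for lru_disable_count */

static void *reader(void *arg)
{
	(void)arg;
	rcu_register_thread();
	rcu_read_lock();                /* read side: ~ preempt_disable()/local_lock() */
	if (!lru_disabled) {
		/* batching (pagevec use) would be allowed here */
	}
	rcu_read_unlock();
	rcu_unregister_thread();
	return NULL;
}

int main(void)
{
	pthread_t t;

	rcu_register_thread();

	pthread_create(&t, NULL, reader, NULL);
	pthread_join(t, NULL);

	lru_disabled = 1;     /* ~ atomic_inc(&lru_disable_count) */
	synchronize_rcu();    /* every reader that could still see 0 has now exited */
	/* a drain (the role of __lru_add_drain_all) could run safely here */
	printf("grace period complete, lru_disabled=%d\n", lru_disabled);

	rcu_unregister_thread();
	return 0;
}

Once synchronize_rcu() returns, any reader that could have observed the
flag as zero has left its read-side critical section, which is why the
subsequent drain can run without racing against ongoing pagevec additions.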


Thread overview: 26+ messages
2022-02-22 16:01 [patch v2] mm: lru_cache_disable: replace work queue synchronization with synchronize_rcu Marcelo Tosatti
2022-02-22 16:07 ` [patch v3] " Marcelo Tosatti
2022-02-22 16:25   ` Nicolas Saenz Julienne
2022-03-04  1:03   ` Andrew Morton
2022-03-04  1:49     ` Paul E. McKenney
2022-03-04 15:08       ` Marcelo Tosatti
2022-03-04 16:02         ` Paul E. McKenney
2022-03-04 15:11     ` Marcelo Tosatti
2022-03-04 16:29   ` [patch v4] " Marcelo Tosatti
2022-03-05  0:35     ` Andrew Morton [this message]
2022-03-07 18:52       ` Marcelo Tosatti
2022-03-10 13:22       ` [patch v5] " Marcelo Tosatti
2022-03-11  2:23         ` Andrew Morton
2022-03-11  8:35           ` Sebastian Andrzej Siewior
2022-03-12  0:40             ` Andrew Morton
2022-03-12 20:39             ` Marcelo Tosatti
2022-03-13  9:23               ` Hillf Danton
2022-03-31 13:52         ` Borislav Petkov
2022-04-28 18:00           ` Marcelo Tosatti
2022-05-28 21:18             ` Andrew Morton
2022-05-28 22:54               ` Michael Larabel
2022-05-29  0:48                 ` Michael Larabel
2022-06-19 12:14                   ` Thorsten Leemhuis
2022-06-22  0:15                     ` Andrew Morton
2022-03-05  4:33     ` [patch v4] " Paul E. McKenney
2022-03-08 17:41     ` Minchan Kim
