From: Mikulas Patocka <mpatocka@redhat.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: axboe@kernel.dk, "Alasdair G. Kergon" <agk@redhat.com>,
	dm-devel@redhat.com, linux-kernel@vger.kernel.org,
	"Paul E. McKenney" <paulmck@us.ibm.com>
Subject: Re: [PATCH] backing-dev: use synchronize_rcu_expedited instead of synchronize_rcu
Date: Thu, 2 Feb 2012 15:43:01 -0500 (EST)
Message-ID: <Pine.LNX.4.64.1202021538570.9035@hs20-bc2-1.build.redhat.com>
In-Reply-To: <1328042063.2446.250.camel@twins>



On Tue, 31 Jan 2012, Peter Zijlstra wrote:

> On Wed, 2011-07-20 at 20:29 -0400, Mikulas Patocka wrote:
> > Hi Jens
> > 
> > Please would you consider taking this into the block tree? It seems to 
> > speed up device deletion enormously.
> > 
> > Mikulas
> > 
> > ---
> > 
> > backing-dev: use synchronize_rcu_expedited instead of synchronize_rcu
> > 
> > synchronize_rcu sleeps for several timer ticks; synchronize_rcu_expedited 
> > is much faster.
> > 
> > With a 100Hz timer frequency, removing 10000 block devices with the 
> > "dmsetup remove_all" command takes 27 minutes. With this patch, 
> > removing 10000 block devices takes only 15 seconds.
> > 
> > Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
> > 
> > ---
> >  mm/backing-dev.c |    2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > Index: linux-3.0-rc7-fast/mm/backing-dev.c
> > ===================================================================
> > --- linux-3.0-rc7-fast.orig/mm/backing-dev.c	2011-07-19 18:01:00.000000000 +0200
> > +++ linux-3.0-rc7-fast/mm/backing-dev.c	2011-07-19 18:01:07.000000000 +0200
> > @@ -505,7 +505,7 @@ static void bdi_remove_from_list(struct 
> >  	list_del_rcu(&bdi->bdi_list);
> >  	spin_unlock_bh(&bdi_lock);
> >  
> > -	synchronize_rcu();
> > +	synchronize_rcu_expedited();
> >  }
> >  
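For context: the grace period exists because readers walk bdi_list
locklessly under rcu_read_lock(), so the writer must wait for all
pre-existing readers before the unlinked bdi may be freed. A minimal
sketch of the reader-side idiom (the per-bdi work is hypothetical, not
code from this thread):

	struct backing_dev_info *bdi;

	rcu_read_lock();
	list_for_each_entry_rcu(bdi, &bdi_list, bdi_list) {
		/* bdi may be unlinked concurrently, but it stays
		 * valid until the grace period that follows
		 * list_del_rcu(). */
		inspect(bdi);		/* hypothetical per-bdi work */
	}
	rcu_read_unlock();
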
> 
> Urgh, I just noticed this crap in my tree.. You realize that you're
> effectively hammering a global sync primitive this way? Depending on
> what RCU flavour you have, any SMP variant will at least do a
> machine-wide IPI broadcast for every sync_rcu_exp(); some do
> significantly more.
> 
> The much better solution would've been to batch your block-dev removals
> and use a single sync_rcu as a barrier.
> 
> This is not cool.
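
For illustration, the batching Peter suggests would look roughly like
this: unlink every bdi in the batch under one bdi_lock round, then pay
for a single grace period. A hypothetical sketch (the helper and its
signature are not existing kernel API):

	static void bdi_remove_batch_from_list(struct backing_dev_info **bdis,
					       int n)
	{
		int i;

		spin_lock_bh(&bdi_lock);
		for (i = 0; i < n; i++)
			list_del_rcu(&bdis[i]->bdi_list);
		spin_unlock_bh(&bdi_lock);

		/* One synchronize_rcu() serves as the barrier for the
		 * whole batch, rather than one grace period (or one
		 * expedited IPI storm) per removed device. */
		synchronize_rcu();
	}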

Do you have some measurable use case where the user is removing block 
devices so heavily that this causes a problem?

Mikulas

Thread overview: 7+ messages
2011-07-21  0:29 [PATCH] backing-dev: use synchronize_rcu_expedited instead of synchronize_rcu Mikulas Patocka
2011-07-21  7:27 ` Jens Axboe
2012-01-31 20:34 ` Peter Zijlstra
2012-01-31 21:04   ` Paul E. McKenney
2012-02-02 20:43   ` Mikulas Patocka [this message]
2012-02-02 21:59     ` Peter Zijlstra
2012-02-03  0:29       ` Paul E. McKenney
