linux-kernel.vger.kernel.org archive mirror
From: Nick Piggin <npiggin@kernel.dk>
To: Dave Chinner <david@fromorbit.com>
Cc: Nick Piggin <npiggin@kernel.dk>, Nick Piggin <npiggin@gmail.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Eric Dumazet <eric.dumazet@gmail.com>,
	Al Viro <viro@zeniv.linux.org.uk>,
	linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: Re: [patch 1/6] fs: icache RCU free inodes
Date: Tue, 16 Nov 2010 14:49:06 +1100	[thread overview]
Message-ID: <20101116034906.GA4596@amd> (raw)
In-Reply-To: <20101116030242.GI22876@dastard>

On Tue, Nov 16, 2010 at 02:02:43PM +1100, Dave Chinner wrote:
> On Mon, Nov 15, 2010 at 03:21:00PM +1100, Nick Piggin wrote:
> > This is 30K inodes per second per CPU, versus nearly 800K per second
> > number that I measured the 12% slowdown with. About 25x slower.
> 
> Hi Nick, the ramfs (800k/12%) numbers are not the context I was
> responding to - you're comparing apples to oranges. I was responding to
> the "XFS [on a ramdisk] is about 4.9% slower" result.

Well, xfs on ramdisk was (85k/4.9%). At a lower rate, like 30k, I would
expect that to be around 1-2% perhaps. And in the context of a real
workload that is not 100% CPU bound on creating and destroying a single
inode, I expect it to be well under 1%.

Like I said, I never disputed a potential regression, but I have looked
for workloads that have a detectable regression and have not found any.
And I have extrapolated microbenchmark numbers to show that it's not
going to be a _big_ problem even in a worst case scenario.

Based on that, and the fact that the complexity of rcu-walk goes up
quite a bit with SLAB_DESTROY_BY_RCU, I explained why I want to go with
RCU first. I am much more worried about complexity, review coverage, and
maintainability than about a small worst-case regression in some rare
(non-existent) workloads.

I say non-existent because if anybody actually had a workload that tries
to create and destroy inodes this fast, they would have been yelling at
the top of their lungs already, because the vfs was totally unscalable
for them, let alone most filesystems. I.e., we have quite strong
empirical evidence that people are _not_ hitting this terribly hard.

The first real complaint came not long ago, from Google, and that was
due to a sockets workload, not files.

I think that is quite a reasonable position. I would definitely like to
see and hear of any numbers or workloads that you can suggest I try, but
really I still want to merge rcu-walk first and then do an incremental
SLAB RCU patch on top of that as my preferred approach to merging it.

Thanks,
Nick



Thread overview: 43+ messages
2010-11-09 12:46 [patch 1/6] fs: icache RCU free inodes Nick Piggin
2010-11-09 12:47 ` [patch 2/6] fs: icache avoid RCU freeing for pseudo fs Nick Piggin
2010-11-09 12:58 ` [patch 3/6] fs: dcache documentation cleanup Nick Piggin
2010-11-09 16:24   ` Christoph Hellwig
2010-11-09 22:06     ` Nick Piggin
2010-11-10 16:27       ` Christoph Hellwig
2010-11-09 13:01 ` [patch 4/6] fs: d_delete change Nick Piggin
2010-11-09 16:25   ` Christoph Hellwig
2010-11-09 22:08     ` Nick Piggin
2010-11-10 16:32       ` Christoph Hellwig
2010-11-11  0:27         ` Nick Piggin
2010-11-11 22:07           ` Linus Torvalds
2010-11-09 13:02 ` [patch 5/6] fs: d_compare change for rcu-walk Nick Piggin
2010-11-09 16:25   ` Christoph Hellwig
2010-11-10  1:48     ` Nick Piggin
2010-11-09 13:03 ` [patch 6/6] fs: d_hash " Nick Piggin
2010-11-09 14:19 ` [patch 1/6] fs: icache RCU free inodes Andi Kleen
2010-11-09 21:36   ` Nick Piggin
2010-11-10 14:47     ` Andi Kleen
2010-11-11  4:27       ` Nick Piggin
2010-11-09 16:02 ` Linus Torvalds
2010-11-09 16:21   ` Christoph Hellwig
2010-11-09 21:48     ` Nick Piggin
2010-11-09 16:21   ` Eric Dumazet
2010-11-09 17:08     ` Linus Torvalds
2010-11-09 17:15       ` Christoph Hellwig
2010-11-09 21:55         ` Nick Piggin
2010-11-09 22:05       ` Nick Piggin
2010-11-12  1:24         ` Nick Piggin
2010-11-12  4:48           ` Linus Torvalds
2010-11-12  6:02             ` Nick Piggin
2010-11-12  6:49               ` Nick Piggin
2010-11-12 17:33                 ` Linus Torvalds
2010-11-12 23:17                   ` Nick Piggin
2010-11-15  1:00           ` Dave Chinner
2010-11-15  4:21             ` Nick Piggin
2010-11-16  3:02               ` Dave Chinner
2010-11-16  3:49                 ` Nick Piggin [this message]
2010-11-17  1:12                   ` Dave Chinner
2010-11-17  4:18                     ` Nick Piggin
2010-11-17  5:56                       ` Nick Piggin
2010-11-17  6:04                         ` Nick Piggin
2010-11-09 21:44   ` Nick Piggin
