From: Dave Chinner <david@fromorbit.com>
To: Jim Meyering <jim@meyering.net>
Cc: "Theodore Ts'o" <tytso@mit.edu>,
	"Andreas Dilger" <adilger@dilger.ca>,
	"Niklas Hambüchen" <niklas@nh2.me>,
	"Linux FS-devel Mailing List" <linux-fsdevel@vger.kernel.org>,
	"Paul Eggert" <eggert@cs.ucla.edu>,
	"Pádraig Brady" <P@draigbrady.com>
Subject: Re: O(n^2) deletion performance
Date: Thu, 4 Jan 2018 22:42:41 +1100	[thread overview]
Message-ID: <20180104114241.GC32627@dastard> (raw)
In-Reply-To: <CA+8g5KFGDG6=R3vY4jvWuSZjap4sCxrSnK1sfTsPnjjJzbYEQw@mail.gmail.com>

On Wed, Jan 03, 2018 at 08:16:58PM -0800, Jim Meyering wrote:
> On Mon, Jan 1, 2018 at 10:22 PM, Theodore Ts'o <tytso@mit.edu> wrote:
> > versus the patience needed to recover from
> > accidentally dumping 16 million files into a directory --- I prefer
> > the latter.  I can wait a few minutes....
> 
> I've just run a test on the spinning-disk file system mentioned above,
> and it took 75 minutes to delete 12.8M entries. That's rather nasty.

An unrealistic performance expectation is not a code bug. It's a
knowledge problem and a learning opportunity. :)

So let's put some numbers on this operation. If this is a cache-cold
rm, then the number of read IOs required on a perfectly laid out
/XFS/ filesystem would be roughly:
	read IOs = num getdents IO + num inode lookup IO
		 = 12.8M / 32 + 12.8M / 500
		 = 400,000 + 25,600
		 = 425,600.

Now, let's assume a low seek time for each IO on a SATA drive - about
3ms per IO. That means the read IO delay is:

	read IO delay = 425,600 * 0.003s
		      = 1277s
		      = 21m17s

Hence with a perfect layout we've got at least 21m of blocking IO
time alone in this workload.  If the directory and inodes are
scattered, and we get an average seek time (7ms), then we're looking
at 50 minutes of blocking read IO time in the workload.
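
If anyone wants to replay that arithmetic with different assumptions,
here's a rough back-of-the-envelope sketch in Python. The batching
factors and seek times are just the assumed numbers above, not
measurements:

#!/usr/bin/env python3
# Back-of-the-envelope model of the cache-cold unlink workload above.
# The batching factors (32 dirents per getdents IO, ~500 inodes per
# inode lookup IO) and the 3ms/7ms per-IO seek times are the assumed
# numbers from this email, not measurements.

def read_io_count(nfiles, dirents_per_io=32, inodes_per_io=500):
    return nfiles / dirents_per_io + nfiles / inodes_per_io

def read_io_delay_s(nfiles, seek_time_s):
    return read_io_count(nfiles) * seek_time_s

nfiles = 12.8e6
print("read IOs:         %8.0f" % read_io_count(nfiles))                       # ~425,600
print("best case  (3ms): %5.0f min" % (read_io_delay_s(nfiles, 0.003) / 60))   # ~21 min
print("average    (7ms): %5.0f min" % (read_io_delay_s(nfiles, 0.007) / 60))   # ~50 min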

Then we've got to add the time for each operation to be executed on
the CPU. If I take the numbers from a 10M file unlink I did yesterday
in a VM with device image files hosted on XFS on NVMe drives:

File count: 10000000
           wall(s)  user(s)   sys(s)
Create:     918.46    47.14   866.24
Unlink:    1545.99    12.14  1410.97

There's 1410s of system CPU time to unlink 10 million inodes. Add another
25% for the larger inode count in your test, and we've got almost 30
minutes of CPU time in this workload. If we add that to our 50
minutes of IO blocking time, we've got ~80 minutes total runtime.
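
Again, purely as a sketch of that arithmetic (the sys CPU time comes
from the table above, the IO wait from the average-seek estimate):

#!/usr/bin/env python3
# Rough total-runtime estimate: the measured sys CPU time for the 10M
# file unlink above, scaled by ~25% for 12.8M entries, plus the ~50
# minutes of blocking read IO at an average 7ms seek. All inputs are
# the figures quoted in this email.

sys_cpu_10m_s = 1410.97            # sys CPU time for 10M unlinks (table above)
cpu_s = sys_cpu_10m_s * 1.25       # scale for the larger inode count
io_wait_s = 425600 * 0.007         # read IOs * average seek time

print("CPU time: %4.0f min" % (cpu_s / 60))                # ~29 min
print("IO wait:  %4.0f min" % (io_wait_s / 60))            # ~50 min
print("total:    %4.0f min" % ((cpu_s + io_wait_s) / 60))  # ~79 min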

IOWs, 75 minutes on a spinning SATA drive to delete 12.8M directory
entries is entirely reasonable for any filesystem, because the
workload is single-threaded, blocks on each read IO, requires
hundreds of thousands of read IOs to complete, and the IO time is
largely dependent on the physical layout of the directory and inode
structure.

That's the reason I asked for iowatcher to be run to analyse these
bug reports - it will tell us /exactly/ how much time is being spent
waiting for IO, what the seek patterns are, and that will tell us
how badly we all suck at laying out large directories.

> On the bright side, Kevin Vigor was kind enough to run tests showing
> that on some large, fast NVMe devices, everything looks linear:
> https://docs.google.com/spreadsheets/d/1bPi8MTvSP4xzzuARPOd5fxFujhBoU2Dxandr-Vh1T9c/edit#gid=0

Of course. That's because the read IO latency is deterministic, not
dependent on the physical layout (i.e. constant "seek" time), and
it's much, much faster than a spinning disk - probably around 100
microseconds per IO. In that case, the IO wait time drops to:

	read IO delay = 425,600 * 0.0001s
		      = 42.6s

That's not unreasonable given the numbers above from my test VM,
which show roughly 2 minutes of IO wait time in a ~25 minute runtime
for 10 million inodes being removed.
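
Or, plugging the assumed 100 microsecond figure into the same sketch
as before:

# Same read IO count as before, but with a constant ~100 microsecond
# per-IO latency (an assumed figure for a fast NVMe device).
read_ios = 12.8e6 / 32 + 12.8e6 / 500            # 425,600
print("IO wait: %.1f s" % (read_ios * 0.0001))   # ~42.6 s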

IOWs, the performance on fast SSDs and NVMe drives is determined
almost entirely by the software algorithms and the per-operation CPU
cost rather than by the physical layout of the structure on disk and
the time needed to retrieve it into memory.  That's why you see
variable and slow performance on slow spinning drives, but it gets
more and more linear as the devices get faster and less sensitive to
the physical layout of the filesystem metadata....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


Thread overview: 11+ messages
2018-01-02  0:21 O(n^2) deletion performance Niklas Hambüchen
2018-01-02  1:20 ` Niklas Hambüchen
2018-01-02  1:59 ` Theodore Ts'o
2018-01-02  2:49   ` Andreas Dilger
2018-01-02  4:27     ` Jim Meyering
2018-01-02  6:22       ` Theodore Ts'o
2018-01-04  4:16         ` Jim Meyering
2018-01-04  7:16           ` Theodore Ts'o
2018-01-04 11:42           ` Dave Chinner [this message]
2018-01-02  4:33     ` Theodore Ts'o
2018-01-02  4:54 ` Dave Chinner
