linux-btrfs.vger.kernel.org archive mirror
From: Daniel J Blueman <daniel.blueman@gmail.com>
To: "K. Richard Pixley" <rich@noir.com>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: Confused by performance
Date: Wed, 16 Jun 2010 22:44:32 +0100	[thread overview]
Message-ID: <AANLkTil5TqpPNlz17zah_oRX2P-tfIo3puR9SLXcLVG6@mail.gmail.com> (raw)
In-Reply-To: <4C191330.5060905@noir.com>

On Wed, Jun 16, 2010 at 7:08 PM, K. Richard Pixley <rich@noir.com> wrote:
> Once again I'm stumped by some performance numbers and hoping for some
> insight.
>
> Using an 8-core server, building in parallel, I'm building some code.  Using
> ext2 over a 5-way, (5 disk), lvm partition, I can build that code in 35
> minutes.  Tests with dd on the raw disk and lvm partitions show me that I'm
> getting near linear improvement from the raw stripe, even with dd runs
> exceeding 10G, so I think that convinces me that my disks and controller
> subsystem are capable of operating in parallel and in concert.  hdparm -t
> numbers seem to support what I'm seeing from dd.
>
> Running the same build, same parallelism, over a btrfs (defaults) partition
> on a single drive, I'm seeing very consistent build times around an hour,
> which is reasonable.  I get a little under an hour on ext4 single disk,
> again, very consistently.
>
> However, if I build a btrfs file system across the 5 disks, my build times
> decline to around 1.5 - 2hrs, although there's about a 30min variation
> between different runs.
>
> If I build a btrfs file system across the 5-way lvm stripe, I get even worse
> performance at around 2.5hrs per build, with about a 45min variation between
> runs.
>
> I can't explain these last two results.  Any theories?

Try mounting the btrfs filesystem with 'nobarrier', since barrier
flushes may be the obvious difference here. Also, for
metadata-write-intensive workloads, try creating the filesystem with
'mkfs.btrfs -m single'. Of course, none of this explains the variance.
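Roughly, those two experiments look like this (the device names are
just placeholders for your five drives, and 'nobarrier' sacrifices
crash safety, so use it only while benchmarking):

  # create a 5-device btrfs with non-duplicated metadata
  mkfs.btrfs -m single /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

  # mount without write barriers for the test builds
  mount -o nobarrier /dev/sdb /mnt/build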

I'd say it's worth employing 'blktrace' to see what's happening at a
lower level, and even trying different I/O schedulers, e.g. deadline
versus CFQ.
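Something along these lines should show the request pattern during a
build (again, the device name is a placeholder, and the scheduler
setting is per-device and not persistent across reboots):

  # capture and inspect block-layer events on one stripe member
  blktrace -d /dev/sdb -o build-trace
  blkparse -i build-trace | less

  # check and switch the I/O scheduler for that drive
  cat /sys/block/sdb/queue/scheduler
  echo deadline > /sys/block/sdb/queue/scheduler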

Daniel
--
Daniel J Blueman

Thread overview: 10+ messages
2010-05-24 21:08 Confused by performance K. Richard Pixley
2010-05-25  3:59 ` Mike Fedyk
2010-05-28  1:45 ` K. Richard Pixley
2010-06-16 18:08   ` K. Richard Pixley
2010-06-16 19:21     ` Roberto Ragusa
     [not found]       ` <AANLkTinM6ab_KEynfgvVT9v5TmcogoLZ0PLAz2oPnsiS@mail.gmail.com>
2010-06-16 19:35         ` Freddie Cash
2010-06-16 19:56           ` Roberto Ragusa
2010-06-17  6:57           ` David Brown
2010-06-16 21:44     ` Daniel J Blueman [this message]
2010-06-17  9:57     ` Chris Mason
