From: Martin Steigerwald <Martin@lichtvoll.de>
To: linux-xfs@oss.sgi.com
Cc: linux-raid@vger.kernel.org, Alan Piszcz <ap@solarrain.com>,
	Eric Sandeen <sandeen@sandeen.net>,
	xfs@oss.sgi.com
Subject: Re: 12x performance drop on md/linux+sw raid1 due to barriers [xfs]
Date: Sat, 13 Dec 2008 18:26:19 +0100	[thread overview]
Message-ID: <200812131826.25280.Martin@lichtvoll.de> (raw)
In-Reply-To: <alpine.DEB.1.10.0812130724340.18746@p34.internal.lan>



On Saturday, 13 December 2008, Justin Piszcz wrote:
> On Sat, 6 Dec 2008, Eric Sandeen wrote:
> > Justin Piszcz wrote:
> >> Someone should write a document about XFS and barrier support. If I
> >> recall correctly, barriers never worked right on RAID1 or RAID5
> >> devices in the past, but it appears they now work on RAID1, which
> >> slows performance down ~12 times!!
> >>
> >> There is some mention of it here:
> >> http://oss.sgi.com/projects/xfs/faq.html#wcache_persistent
> >>
> >> But basically I believe it should be noted in the kernel logs, the
> >> FAQ or somewhere, because simply by upgrading the kernel, without
> >> changing fstab or any other part of the system, performance can drop
> >> 12x merely because the newer kernels implement barriers.
> >
> > Perhaps:
> >
> > printk(KERN_ALERT "XFS is now looking after your metadata very
> > carefully; if you prefer the old, fast, dangerous way, mount with -o
> > nobarrier\n");
> >
> > :)
> >
> > Really, this just gets xfs on md raid1 in line with how it behaves on
> > most other devices.
> >
> > But I agree, some documentation/education is probably in order; if
> > you choose to disable write caches or you have faith in the battery
> > backup of your write cache, turning off barriers would be a good
> > idea.  Justin, it might be interesting to do some tests with:
> >
> > barrier,   write cache enabled
> > nobarrier, write cache enabled
> > nobarrier, write cache disabled
> >
> > a 12x hit does hurt though...  If you're really motivated, try the
> > same scenarios on ext3 and ext4 to see what the barrier hit is on
> > those as well.
> >
> > -Eric
>
> No, I have not forgotten about this; I have just been quite busy. I
> will test this now. Before, I did not use sync because I was in a hurry
> and did not have the ability to test. I am using a different machine/hw
> type, but the setup is the same, md/raid1 etc.
>
> Since I will only be measuring barriers, per esandeen@ I have changed
> the mount options from what I typically use to the defaults.

[...]

> The benchmark:
> # /usr/bin/time bash -c 'tar xf linux-2.6.27.8.tar; sync'
> # echo 1 > /proc/sys/vm/drop_caches # (between tests)
>
> == The tests ==
>
>   KEY:
>   barriers = "b"
>   write_cache = "w"
>
>   SUMMARY:
>    b=on,w=on: 1:19.53 elapsed @ 2% CPU [BENCH_1]
>   b=on,w=off: 1:23.59 elapsed @ 2% CPU [BENCH_2]
>   b=off,w=on: 0:21.35 elapsed @ 9% CPU [BENCH_3]
> b=off,w=off: 0:42.90 elapsed @ 4% CPU [BENCH_4]

This is quite similar to what I got on my laptop without any RAID
setup[1]. At least, it was faster without barriers in all of my
tar -xf linux-2.6.27.tar.bz2 and rm -rf linux-2.6.27 tests.
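
In case anyone wants to repeat the four runs, here is a minimal sketch of
a script that cycles through the combinations. It is only an illustration,
not what Justin actually ran: the device names (/dev/md0 as a RAID1 of
/dev/sda and /dev/sdb), the mount point /mnt/test and the tarball path are
assumptions, and it uses hdparm -W to toggle the drives' write caches.

#!/bin/bash
# Sketch: run the tar-extract benchmark under the four combinations of
# barriers (on/off) and on-disk write cache (on/off). Run as root; the
# XFS filesystem on /dev/md0 is assumed to already be mounted at /mnt/test.
set -e

MD=/dev/md0
MNT=/mnt/test
DISKS="/dev/sda /dev/sdb"
TARBALL=/root/linux-2.6.27.8.tar

run_case() {
    local barrier_opt=$1 wcache=$2

    # Toggle the on-disk write cache on every RAID1 member (1 = on, 0 = off).
    for d in $DISKS; do
        hdparm -W"$wcache" "$d"
    done

    # Remount the filesystem with the requested barrier setting
    # (barrier or nobarrier).
    umount "$MNT"
    mount -o "$barrier_opt" "$MD" "$MNT"

    # Clean up the previous extraction and start from a cold cache.
    rm -rf "$MNT"/linux-2.6.27.8
    sync
    echo 1 > /proc/sys/vm/drop_caches

    echo "=== barriers=$barrier_opt, write_cache=$wcache ==="
    ( cd "$MNT" && /usr/bin/time bash -c "tar xf $TARBALL; sync" )
}

run_case barrier   1    # b=on,  w=on
run_case barrier   0    # b=on,  w=off
run_case nobarrier 1    # b=off, w=on
run_case nobarrier 0    # b=off, w=off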

At the moment it appears to me that disabling the write cache may often
give better performance than using barriers, and that doesn't match my
expectation of write barriers as a feature that enhances performance.
Right now a "nowcache" option, used as the default, would appear to make
more sense than defaulting to barriers. But I think this needs more
testing than just these simple metadata-heavy tests. Anyway, I am happy
because I have a way to speed up XFS ;-).
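
Just to illustrate what running without barriers but with the write cache
turned off would look like in practice (the device names, mount point and
fstab line below are only examples):

# Turn off the on-disk write cache on both RAID1 members. This setting is
# not persistent across reboots, so it would have to be reapplied at boot,
# e.g. from a local init script.
hdparm -W0 /dev/sda
hdparm -W0 /dev/sdb

# Matching /etc/fstab entry mounting the XFS volume without barriers:
# /dev/md0   /mnt/test   xfs   nobarrier   0   2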

[1] http://oss.sgi.com/archives/xfs/2008-12/msg00244.html

Ciao,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7
