* Re: xfs mount/create options (was: XFS status update for August 2010)
@ 2010-09-06 22:55 Richard Scobie
  2010-09-06 23:31 ` Michael Monnerie
  0 siblings, 1 reply; 12+ messages in thread
From: Richard Scobie @ 2010-09-06 22:55 UTC (permalink / raw)
  To: xfs

Michael Monnerie wrote:

 > When I defined su/sw on mkfs, is it enough, or would I always have to
 > specify sunit/swidth with every mount too?

Yes, mkfs is enough. sunit/swidth only need to be added to your fstab
if you either got the calculation wrong when you initially created the
filesystem and wish to correct it, or if you later grow the filesystem
over more drives.

Note that with recent kernels, mkfs.xfs will choose the optimal
sunit/swidth for you if you are using md RAID or LVM (I believe the
LVM case works too).
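
For example, you can check what mkfs.xfs picked by running xfs_info on
the mounted filesystem (hypothetical mount point; sunit/swidth are shown
here in filesystem blocks, so with 4k blocks, sunit=16 blks is a 64k
stripe unit and swidth=112 blks is 7 stripe units):

  # xfs_info /mnt/data
  ...
  data     =              bsize=4096   blocks=..., imaxpct=25
           =              sunit=16     swidth=112 blks
  ...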

Regards,

Richard


* Re: xfs mount/create options (was: XFS status update for August 2010)
  2010-09-06 22:55 xfs mount/create options (was: XFS status update for August 2010) Richard Scobie
@ 2010-09-06 23:31 ` Michael Monnerie
  0 siblings, 0 replies; 12+ messages in thread
From: Michael Monnerie @ 2010-09-06 23:31 UTC (permalink / raw)
  To: xfs


On Tuesday, 7 September 2010 Richard Scobie wrote:
> > When I defined su/sw on mkfs, is it enough, or would I always have
> > to specify sunit/swidth with every mount too?
> 
> Yes. sunit/swidth only needs to be added to your fstab if you either
>  got  the calculation wrong when you initially created the fs and
>  wish to correct it, or if you grow the fs later over more drives.

Thank you.
 
> Note that with recent kernels, mkfs.xfs will choose the optimal 
> sunit/swidth for you if you are using md RAID or LVM (I believe the 
> latter is correct).

I don't use software RAID, but thanks for the clarification.

-- 
with kind regards,
Michael Monnerie, Ing. BSc

it-management Internet Services
http://proteger.at [pronounced: Prot-e-schee]
Tel: 0660 / 415 65 31

****** Current radio interview! ******
http://www.it-podcast.at/aktuelle-sendung.html

// We currently have two houses for sale:
// http://zmi.at/langegg/
// http://zmi.at/haus2009/


* Re: xfs mount/create options (was: XFS status update for August 2010)
  2010-09-09  7:27                 ` Dave Chinner
@ 2010-09-09  8:29                   ` Michael Monnerie
  0 siblings, 0 replies; 12+ messages in thread
From: Michael Monnerie @ 2010-09-09  8:29 UTC (permalink / raw)
  To: xfs


On Thursday, 9 September 2010 Dave Chinner wrote:
> That is, if you have SATA drives then running them for 3 or 4 days
> at 100% duty cycle while a reshape takes place is putting them far
> outside their design limits.
 
Your arguments are all valid. Our hardware supplier recommended one
drive type, we only buy that type, and they really do work very well.
As for the BER, I really hope the controller does read-after-write
during a rebuild. Given how long it takes to resize an array, I assume
(and hope) that it does. That said, I've never had a problem on a
resize, and I hope it stays that way :-)

There are other reasons to do a resize, which I don't want to discuss in 
public.

-- 
with kind regards,
Michael Monnerie, Ing. BSc

it-management Internet Services
http://proteger.at [pronounced: Prot-e-schee]
Tel: 0660 / 415 65 31

****** Current radio interview! ******
http://www.it-podcast.at/aktuelle-sendung.html

// We currently have two houses for sale:
// http://zmi.at/langegg/
// http://zmi.at/haus2009/


* Re: xfs mount/create options (was: XFS status update for August 2010)
  2010-09-08 23:30               ` Michael Monnerie
@ 2010-09-09  7:27                 ` Dave Chinner
  2010-09-09  8:29                   ` Michael Monnerie
  0 siblings, 1 reply; 12+ messages in thread
From: Dave Chinner @ 2010-09-09  7:27 UTC (permalink / raw)
  To: Michael Monnerie; +Cc: xfs

On Thu, Sep 09, 2010 at 01:30:15AM +0200, Michael Monnerie wrote:
> On Wednesday, 8 September 2010 Dave Chinner wrote:
> > Dynamically changing the RAID array geometry is a Bad Idea.  Yes,
> > you can do it, but if you've got a filesystem full of data and
> > metadata aligned to the old geometry then after the modification
> > it won't be aligned anymore.
> > 
> > If you want to do this, then either don't bother about geometry hints
> > in the first place, or dump, rebuild the array, mkfs and restore so
> > everything is properly aligned with the new world order. Hell,
> > dump/mkfs/restore might even be faster than reshaping a large
> > array...
>  
> You're right. But there are some customers who don't want to spend the 
> money for a 2nd array, and can't afford the downtime of backup, rebuild 
> raid (takes 8-48 hours), restore. So an online upgrade is needed. We're 
> not in an ideal world.

If you can't afford downtime, then I'd seriously question using
reshaping to expand storage, because it is one of the highest-risk
methods of increasing storage capacity you can use. That means you've
still got to do the backup before you reshape your RAID device, in
case the reshape fails and you need to rebuild and restore.

Reshaping is a dangerous operation - you can't go back once it has
started, and failures while reshaping can cause data loss. That is,
the risk of catastrophic failure goes up significantly while a
reshape is in progress. It's the same increase in risk you see during
a rebuild after losing a disk: the next disk failure is most likely
to occur while the rebuild is in progress, simply because of the
sustained increase in load on the drives.

That is, if you have SATA drives then running them for 3 or 4 days
at 100% duty cycle while a reshape takes place is putting them far
outside their design limits. SATA drives are generally designed for
a 20-30% duty cycle for sustained operation. Put disks that are a
couple of years old under this sort of load....

Of even more concern is that reshaping a multi-terabyte array
requires moving the same order of magnitude of bits around as the
BER of the drives. Hence there's every chance of introducing silent
bit errors into your data by reshaping unless you further slow the
reshape down by having it read back all the data to verify it was
reshaped correctly.
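
To put rough numbers on that (a hypothetical 8 x 2TB array, and the
commonly quoted consumer-drive spec of one unrecoverable bit error per
10^14 bits read):

  data rewritten by the reshape  ~ 16TB  ~ 1.3 x 10^14 bits
  quoted drive error rate                  1 error per 10^14 bits read
  expected errors over the pass          ~ 1.3

so, on paper, you should expect at least one error somewhere during
the reshape.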

IMO, reshaping is not a practice you should be designing your
capacity upgrade processes around, especially if you have uptime and
performance SLA guarantees. It's a very risky operation, and not
something I would suggest anyone uses in production unless they have
absolutely no other option.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: xfs mount/create options (was: XFS status update for August 2010)
  2010-09-08 15:24               ` Emmanuel Florac
@ 2010-09-08 23:34                 ` Michael Monnerie
  0 siblings, 0 replies; 12+ messages in thread
From: Michael Monnerie @ 2010-09-08 23:34 UTC (permalink / raw)
  To: xfs


On Wednesday, 8 September 2010 Emmanuel Florac wrote:
> True, this is incredibly long. Adding two disks to an 8 drives array
> easily needs 72 hours.
 
Agreed. But how long does the process
- backup
- build a new RAID with more disks
- restore
take on the same storage? It's not that much faster, and if you are
serious about your data you can't use the array while the new RAID is
being set up. Yes, you can build the RAID in the background and already
move data onto it, but your data isn't protected during that time. I
won't take that risk.

-- 
with kind regards,
Michael Monnerie, Ing. BSc

it-management Internet Services
http://proteger.at [pronounced: Prot-e-schee]
Tel: 0660 / 415 65 31

****** Current radio interview! ******
http://www.it-podcast.at/aktuelle-sendung.html

// We currently have two houses for sale:
// http://zmi.at/langegg/
// http://zmi.at/haus2009/


* Re: xfs mount/create options (was: XFS status update for August 2010)
  2010-09-08 14:51             ` Dave Chinner
  2010-09-08 15:24               ` Emmanuel Florac
@ 2010-09-08 23:30               ` Michael Monnerie
  2010-09-09  7:27                 ` Dave Chinner
  1 sibling, 1 reply; 12+ messages in thread
From: Michael Monnerie @ 2010-09-08 23:30 UTC (permalink / raw)
  To: xfs


On Wednesday, 8 September 2010 Dave Chinner wrote:
> Dynamically changing the RAID array geometry is a Bad Idea.  Yes,
> you can do it, but if you've got a filesystem full of data and
> metadata aligned to the old geometry then after the modification
> it won't be aligned anymore.
> 
> If you want to do this, then either don't bother about geometry hints
> in the first place, or dump, rebuild the array, mkfs and restore so
> everything is properly aligned with the new world order. Hell,
> dump/mkfs/restore might even be faster than reshaping a large
> array...
 
You're right. But some customers don't want to spend the money on a
second array, and can't afford the downtime of backup, RAID rebuild
(which takes 8-48 hours), restore. So an online upgrade is needed.
We're not in an ideal world.
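
For reference, the kind of online expansion being discussed looks
roughly like this on Linux md RAID (hardware controllers have their own
equivalents; device names, disk count and backup file are hypothetical):

  mdadm --add  /dev/md0 /dev/sdi
  mdadm --grow /dev/md0 --raid-devices=9 --backup-file=/root/md0-grow.bak
  # watch /proc/mdstat until the reshape completes, then grow the fs:
  xfs_growfs /mnt/data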

-- 
with kind regards,
Michael Monnerie, Ing. BSc

it-management Internet Services
http://proteger.at [pronounced: Prot-e-schee]
Tel: 0660 / 415 65 31

****** Current radio interview! ******
http://www.it-podcast.at/aktuelle-sendung.html

// We currently have two houses for sale:
// http://zmi.at/langegg/
// http://zmi.at/haus2009/


* Re: xfs mount/create options (was: XFS status update for August 2010)
  2010-09-08 14:51             ` Dave Chinner
@ 2010-09-08 15:24               ` Emmanuel Florac
  2010-09-08 23:34                 ` Michael Monnerie
  2010-09-08 23:30               ` Michael Monnerie
  1 sibling, 1 reply; 12+ messages in thread
From: Emmanuel Florac @ 2010-09-08 15:24 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Michael Monnerie, xfs

On Thu, 9 Sep 2010 00:51:48 +1000,
Dave Chinner <david@fromorbit.com> wrote:

>  Hell,
> dump/mkfs/restore might even be faster than reshaping a large
> array...

True, it is incredibly slow. Adding two disks to an 8-drive array
easily takes 72 hours.

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------


* Re: xfs mount/create options (was: XFS status update for August 2010)
  2010-09-08 13:38           ` Michael Monnerie
@ 2010-09-08 14:51             ` Dave Chinner
  2010-09-08 15:24               ` Emmanuel Florac
  2010-09-08 23:30               ` Michael Monnerie
  0 siblings, 2 replies; 12+ messages in thread
From: Dave Chinner @ 2010-09-08 14:51 UTC (permalink / raw)
  To: Michael Monnerie; +Cc: xfs

On Wed, Sep 08, 2010 at 03:38:53PM +0200, Michael Monnerie wrote:
> On Wednesday, 8 September 2010 Dave Chinner wrote:
> > >  On machines with 32MiB or more 32k is the default, but most
> > >  machines these days have multi-gigabytes of RAM, so at least for
> > >  RAM>1GiB that could be made default.
> > 
> > That is definitely not true. XFS is widely used in the embedded NAS
> > space, where memory is very limited and might be configured with
> > many filesystems.  32k is the default because those sorts of machines
> > can't afford to burn 2MB RAM per filesystem just in log buffers.
> >
> > Also, you can go and search the archives or git history as to why we
> > don't tune the logbsize based on physical memory size anymore, too.
> 
> OK, then the man page should be updated to reflect this "newer logic". 
> I've got the information directly from there.
>  
> > You're getting the wrong information there. largeio affects the
> > output of the optimal IO size reported by stat(2). 'stat -f" does
> > a statfs(2) call. Try 'stat /disk/db/<file> --format %o'....
> 
> Ah, that's better, thank you :-)
>  
> > >  And while I am at it: Why does "mount" not provide the su=/sw=
> > >   options that we can use to create a filesystem? Would make life
> > >   easier, as it's much easier to read su=64k,sw=7 than
> > >   sunit=128,swidth=896.
> > 
> > You should never, ever need to use the mount options.
> 
> ..except when a disk is added to the RAID, or its RAID level gets 
> changed. Then sw=7 becomes sw=8 or so - or better said: would become, as 
> then you must use the (I call it strange, error prone) semantics of 
> sunit/swidth.

Dynamically changing the RAID array geometry is a Bad Idea.  Yes,
you can do it, but if you've got a filesystem full of data and
metadata aligned to the old geometry then after the modification
it won't be aligned anymore.

If you want to do this, then either don't bother about geometry hints
in the first place, or dump, rebuild the array, mkfs and restore so
everything is properly aligned with the new world order. Hell,
dump/mkfs/restore might even be faster than reshaping a large
array...
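
A rough sketch of that path with xfsdump/xfsrestore (device, paths and
the new sw value are hypothetical):

  xfsdump -l 0 -f /backup/data.xfsdump /mnt/data
  # rebuild the array with the extra disk(s), then remake and restore:
  mkfs.xfs -f -d su=64k,sw=8 /dev/md0
  mount /dev/md0 /mnt/data
  xfsrestore -f /backup/data.xfsdump /mnt/data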

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: xfs mount/create options (was: XFS status update for August 2010)
  2010-09-08 10:58         ` Dave Chinner
@ 2010-09-08 13:38           ` Michael Monnerie
  2010-09-08 14:51             ` Dave Chinner
  0 siblings, 1 reply; 12+ messages in thread
From: Michael Monnerie @ 2010-09-08 13:38 UTC (permalink / raw)
  To: xfs


On Wednesday, 8 September 2010 Dave Chinner wrote:
> >  On machines with 32MiB or more 32k is the default, but most
> >  machines these days have multi-gigabytes of RAM, so at least for
> >  RAM>1GiB that could be made default.
> 
> That is definitely not true. XFS is widely used in the embedded NAS
> space, where memory is very limited and might be configured with
> many filesystems.  32k is the default because those sorts of machines
> can't afford to burn 2MB RAM per filesystem just in log buffers.
>
> Also, you can go and search the archives or git history as to why we
> don't tune the logbsize based on physical memory size anymore, too.

OK, then the man page should be updated to reflect this "newer logic";
I got the information directly from there.
 
> You're getting the wrong information there. largeio affects the
> output of the optimal IO size reported by stat(2). 'stat -f" does
> a statfs(2) call. Try 'stat /disk/db/<file> --format %o'....

Ah, that's better, thank you :-)
 
> >  And while I am at it: Why does "mount" not provide the su=/sw=
> >   options that we can use to create a filesystem? Would make life
> >   easier, as it's much easier to read su=64k,sw=7 than
> >   sunit=128,swidth=896.
> 
> You should never, ever need to use the mount options.

..except when a disk is added to the RAID, or its RAID level gets
changed. Then sw=7 becomes sw=8 or so - or rather, it would, except
that you must then use the (to my mind strange and error-prone)
sunit/swidth semantics.
 
> >  When I defined su/sw on mkfs, is it enough, or would I always have
> > to specify sunit/swidth with every mount too?
> 
> Yes, no. mkfs.xfs stores sunit/swidth on disk in the superblock.

So when I add a disk, I only have to mount once with the new
sunit/swidth, and that is stored? That's nice.
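
If a single mount with the new values really is enough to update the
superblock (the replies above suggest so, though it isn't spelled out),
then going from sw=7 to sw=8 with su=64k would be a one-off like this
(device and mount point hypothetical; sunit stays at 128 sectors,
swidth becomes 8 x 128 = 1024):

  mount -o sunit=128,swidth=1024 /dev/sdb1 /disks/db

Otherwise, keep the new values in fstab as suggested earlier in the
thread.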

-- 
with kind regards,
Michael Monnerie, Ing. BSc

it-management Internet Services
http://proteger.at [pronounced: Prot-e-schee]
Tel: 0660 / 415 65 31

****** Current radio interview! ******
http://www.it-podcast.at/aktuelle-sendung.html

// We currently have two houses for sale:
// http://zmi.at/langegg/
// http://zmi.at/haus2009/


* Re: xfs mount/create options (was: XFS status update for August 2010)
  2010-09-08  5:38       ` Michael Monnerie
@ 2010-09-08 10:58         ` Dave Chinner
  2010-09-08 13:38           ` Michael Monnerie
  0 siblings, 1 reply; 12+ messages in thread
From: Dave Chinner @ 2010-09-08 10:58 UTC (permalink / raw)
  To: Michael Monnerie; +Cc: xfs

On Wed, Sep 08, 2010 at 07:38:54AM +0200, Michael Monnerie wrote:
> I just found that my questions from Monday were not solved, but this is 
> interesting, so I want to warm it up again.
> 
> On Monday, 6 September 2010 Michael Monnerie wrote:
>  I looked into man mkfs now, which brings up these questions:
>  
>  On Sunday, 5 September 2010 Dave Chinner wrote:
>  >         - relatime,logbufs=8,attr=2,barrier are all defaults.
>  
>  Why isn't logbsize=256k default, when it's suggested most of the time
>  anyway?

It's suggested when people are asking about performance tuning. When
the performance is acceptable with the default value, then you don't
hear about it, do you?

>  On machines with 32MiB or more 32k is the default, but most
>  machines these days have multi-gigabytes of RAM, so at least for
>  RAM>1GiB that could be made default.

That is definitely not true. XFS is widely used in the embedded NAS
space, where memory is very limited and a machine might be configured
with many filesystems. 32k is the default because those sorts of
machines can't afford to burn 2MB of RAM per filesystem just in log
buffers.
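
For scale, with the default of 8 log buffers:

  8 x 256k = 2MB of log buffer memory per mounted filesystem
  8 x  32k = 256k per filesystem at the default logbsize

which adds up quickly on a small box exporting dozens of filesystems.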

Also, you can search the archives or the git history for why we no
longer tune logbsize based on physical memory size.


>  >         - largeio only affects stat(2) output if you have
>  >           sunit/swidth set - unlikely on a laptop drive, and has
>  >           no effect on unlink performance.
>  >         - swalloc only affects allocation if sunit/swidth are set
>  >           and has no effect on unlink performance.
>  
>  Hm, it seems I don't understand that. I tried now on different
>   servers, using
>  stat -f /disks/db --format '%s %S'
>  4096 4096

You're getting the wrong information there. largeio affects the
optimal I/O size reported by stat(2), while 'stat -f' does a
statfs(2) call. Try 'stat /disk/db/<file> --format %o'....
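
A minimal illustration of the difference, with hypothetical paths and
values (the %o figure depends on the geometry and on whether largeio
is set):

  $ stat -f /disks/db --format '%s %S'     # statfs(2): filesystem block sizes
  4096 4096
  $ stat /disks/db/datafile --format '%o'  # stat(2): preferred I/O size
  458752                                   # e.g. 7 x 64k swidth with largeio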

>  And while I am at it: Why does "mount" not provide the su=/sw=
>   options that we can use to create a filesystem? Would make life
>   easier, as it's much easier to read su=64k,sw=7 than
>   sunit=128,swidth=896.

You should never, ever need to use the mount options.

>  When I defined su/sw on mkfs, is it enough, or would I always have to
>  specify sunit/swidth with every mount too?

Yes and no: mkfs is enough, and you don't need the mount options -
mkfs.xfs stores sunit/swidth on disk in the superblock.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: xfs mount/create options (was: XFS status update for August 2010)
  2010-09-06  5:49     ` xfs mount/create options (was: XFS status update for August 2010) Michael Monnerie
@ 2010-09-08  5:38       ` Michael Monnerie
  2010-09-08 10:58         ` Dave Chinner
  0 siblings, 1 reply; 12+ messages in thread
From: Michael Monnerie @ 2010-09-08  5:38 UTC (permalink / raw)
  To: xfs


I just noticed that my questions from Monday were never answered, but
this is interesting, so I want to bring it up again.

On Monday, 6 September 2010 Michael Monnerie wrote:
 I looked into man mkfs now, which brings up these questions:
 
 On Sunday, 5 September 2010 Dave Chinner wrote:
 >         - relatime,logbufs=8,attr=2,barrier are all defaults.
 
 Why isn't logbsize=256k default, when it's suggested most of the time
 anyway? On machines with 32MiB or more 32k is the default, but most
 machines these days have multi-gigabytes of RAM, so at least for
 RAM>1GiB that could be made default.
 
 >         - largeio only affects stat(2) output if you have
 >           sunit/swidth set - unlikely on a laptop drive, and has
 >           no effect on unlink performance.
 >         - swalloc only affects allocation if sunit/swidth are set
 >           and has no effect on unlink performance.
 
 Hm, it seems I don't understand that. I tried now on different
  servers, using
 stat -f /disks/db --format '%s %S'
 4096 4096
 
 That filesystems were all created with su=64k,swidth=(values 4-8
 depending on RAID). So I retried specifying directly in the mount
 options: sunit=128,swidth=512
 and it still reports "4096" for %s - or is %s not the value I should
 look for? Some of the filesystems even have allocsize= specified,
  still always 4096 is given back. Where is my problem?
 
 And while I am at it: Why does "mount" not provide the su=/sw=
  options that we can use to create a filesystem? Would make life
  easier, as it's much easier to read su=64k,sw=7 than
  sunit=128,swidth=896.
 
 When I defined su/sw on mkfs, is it enough, or would I always have to
 specify sunit/swidth with every mount too?
 



-- 
with kind regards,
Michael Monnerie, Ing. BSc

it-management Internet Services
http://proteger.at [pronounced: Prot-e-schee]
Tel: 0660 / 415 65 31

****** Current radio interview! ******
http://www.it-podcast.at/aktuelle-sendung.html

// We currently have two houses for sale:
// http://zmi.at/langegg/
// http://zmi.at/haus2009/


* Re: xfs mount/create options (was: XFS status update for August 2010)
  2010-09-05 13:08   ` Dave Chinner
@ 2010-09-06  5:49     ` Michael Monnerie
  2010-09-08  5:38       ` Michael Monnerie
  0 siblings, 1 reply; 12+ messages in thread
From: Michael Monnerie @ 2010-09-06  5:49 UTC (permalink / raw)
  To: xfs


I looked into man mkfs now, which brings up these questions:

On Sunday, 5 September 2010 Dave Chinner wrote:
>         - relatime,logbufs=8,attr=2,barrier are all defaults.

Why isn't logbsize=256k the default, when it's suggested most of the
time anyway? On machines with 32MiB of RAM or more, 32k is the default,
but most machines these days have multiple gigabytes of RAM, so at
least for RAM > 1GiB, 256k could be made the default.

>         - largeio only affects stat(2) output if you have
>           sunit/swidth set - unlikely on a laptop drive, and has
>           no effect on unlink performance.
>         - swalloc only affects allocation if sunit/swidth are set
>           and has no effect on unlink performance.

Hm, it seems I don't understand that. I tried this now on different
servers, using
stat -f /disks/db --format '%s %S'
4096 4096

Those filesystems were all created with su=64k and sw=(values 4-8
depending on the RAID). So I retried specifying the values directly in
the mount options: sunit=128,swidth=512
and it still reports "4096" for %s - or is %s not the value I should
look at? Some of the filesystems even have allocsize= specified, and
still 4096 is always returned. Where is my problem?

And while I am at it: why does "mount" not provide the su=/sw= options
that we can use when creating a filesystem? It would make life easier,
as su=64k,sw=7 is much easier to read than sunit=128,swidth=896.

When I define su/sw at mkfs time, is that enough, or would I always
have to specify sunit/swidth on every mount too?

-- 
with kind regards,
Michael Monnerie, Ing. BSc

it-management Internet Services
http://proteger.at [pronounced: Prot-e-schee]
Tel: 0660 / 415 65 31

****** Current radio interview! ******
http://www.it-podcast.at/aktuelle-sendung.html

// We currently have two houses for sale:
// http://zmi.at/langegg/
// http://zmi.at/haus2009/



Thread overview: 12+ messages
2010-09-06 22:55 xfs mount/create options (was: XFS status update for August 2010) Richard Scobie
2010-09-06 23:31 ` Michael Monnerie
  -- strict thread matches above, loose matches on Subject: below --
2010-09-02 14:59 XFS status update for August 2010 Christoph Hellwig
2010-09-05 10:47 ` Willy Tarreau
2010-09-05 13:08   ` Dave Chinner
2010-09-06  5:49     ` xfs mount/create options (was: XFS status update for August 2010) Michael Monnerie
2010-09-08  5:38       ` Michael Monnerie
2010-09-08 10:58         ` Dave Chinner
2010-09-08 13:38           ` Michael Monnerie
2010-09-08 14:51             ` Dave Chinner
2010-09-08 15:24               ` Emmanuel Florac
2010-09-08 23:34                 ` Michael Monnerie
2010-09-08 23:30               ` Michael Monnerie
2010-09-09  7:27                 ` Dave Chinner
2010-09-09  8:29                   ` Michael Monnerie
