* xfs_growfs / planned resize / performance impact
@ 2012-07-31 13:56 Stefan Priebe - Profihost AG
  2012-07-31 17:27 ` Stan Hoeppner
                   ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: Stefan Priebe - Profihost AG @ 2012-07-31 13:56 UTC (permalink / raw)
  To: xfs

Hello list,

I'm planning to create a couple of VMs with just 30GB of space while 
using xfs as the main filesystem.

Now I already know that some of the VMs will grow up to 250GB while 
resizing the block device and using xfs_growfs.

Should I take care of that and format these disks with special parameters?

I've discovered that a 500GB volume has agcount=4 and 64000 blocks of 
internal log - while a 300GB volume resized to 500GB has agcount=7 and 
only 40960 blocks of internal log.
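
For reference, these values can be read from a mounted filesystem with 
xfs_info (the mount point below is only a placeholder):

  xfs_info /mnt/vm-disk
  # agcount=/agsize= appear in the data section; the internal log size
  # is on the "log =internal ... blocks=..." line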

Is it a problem if this growth happens in small steps (30GB => 50GB 
=> 75GB => 100GB => ... 300GB)?

Thanks!

Greets,
Stefan


* Re: xfs_growfs / planned resize / performance impact
  2012-07-31 13:56 xfs_growfs / planned resize / performance impact Stefan Priebe - Profihost AG
@ 2012-07-31 17:27 ` Stan Hoeppner
  2012-08-03  4:03 ` Eric Sandeen
  2012-08-04 22:43 ` Dave Chinner
  2 siblings, 0 replies; 16+ messages in thread
From: Stan Hoeppner @ 2012-07-31 17:27 UTC (permalink / raw)
  To: xfs

On 7/31/2012 8:56 AM, Stefan Priebe - Profihost AG wrote:
> Hello list,
> 
> I'm planning to create a couple of VMs with just 30GB of space while
> using xfs as the main filesystem.
> 
> Now I already know that some of the VMs will grow up to 250GB while
> resizing the block device and using xfs_growfs.

If you already know they *will* need 250GB, make them 250GB now.  This
is common sense.

> Should I take care of that and format these disks with special parameters?

Take care of what?  Preemptively avoid what?

> I've discovered that a 500GB volume has agcount=4 and 64000 blocks of
> internal log - while a 300GB volume resized to 500GB has agcount=7 and
> only 40960 blocks of internal log.

4 AGs is the default when an XFS is created, unless the device is over
4TB.  When you grow XFS, new AGs must be created in the new space.  This
is because once an AG is "laid down" it doesn't move and its size never
changes.  Any time you grow an XFS you get more AGs.  This is by design.
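
A rough way to see this in action on a scratch LV (all names here are
made up, not from your setup):

  lvcreate -L 30g -n vmtest vg0
  mkfs.xfs /dev/vg0/vmtest           # defaults typically give agcount=4
  mount /dev/vg0/vmtest /mnt/vmtest
  lvextend -L 300g /dev/vg0/vmtest
  xfs_growfs /mnt/vmtest             # new AGs of the original agsize are added
  xfs_info /mnt/vmtest               # agcount is now much larger; agsize unchanged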

> Is it a problem if this growth happens in small steps (30GB => 50GB
> => 75GB => 100GB => ... 300GB)?

It 'could' be a problem.  But there's no way for us to know that
without, drum roll please, you guessed it-- knowing your workload and
the characteristics of the underlying storage.

Wild ass guess?  These are virtual machines with relatively tiny
storage.  Performance isn't critical or you'd not attempt this in the
first place.  So, I'd guess no, it won't be a problem.

If on the other hand you need high performance to these filesystems,
then you need to provide details of the storage device and the workloads
and we'll discuss it further.

-- 
Stan


* Re: xfs_growfs / planned resize / performance impact
  2012-07-31 13:56 xfs_growfs / planned resize / performance impact Stefan Priebe - Profihost AG
  2012-07-31 17:27 ` Stan Hoeppner
@ 2012-08-03  4:03 ` Eric Sandeen
  2012-08-03  6:09   ` Stefan Priebe - Profihost AG
  2012-08-04 22:43 ` Dave Chinner
  2 siblings, 1 reply; 16+ messages in thread
From: Eric Sandeen @ 2012-08-03  4:03 UTC (permalink / raw)
  To: Stefan Priebe - Profihost AG; +Cc: xfs

On 7/31/12 8:56 AM, Stefan Priebe - Profihost AG wrote:
> Hello list,
> 
> I'm planning to create a couple of VMs with just 30GB of space while using xfs as the main filesystem.
> 
> Now I already know that some of the VMs will grow up to 250GB while resizing the block device and using xfs_growfs.
> 
> Should I take care of that and format these disks with special parameters?
> 
> I've discovered that a 500GB volume has agcount=4 and 64000 blocks of internal log - while a 300GB volume resized to 500GB has agcount=7 and only 40960 blocks of internal log.
> 
> Is it a problem if this growth happens in small steps (30GB => 50GB => 75GB => 100GB => ... 300GB)?

This incremental part doesn't matter a bit.  The first mkfs will choose the AG count & size according to defaults; further growth after this will add new (possibly partial) AGs of that pre-chosen size.
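
If you really wanted to influence that, mkfs time is when to do it, since
the AG size and internal log size chosen then are what every later grow
has to live with.  A hedged sketch (device name and values are purely
illustrative, not a recommendation):

  mkfs.xfs -d agcount=4 -l size=64m /dev/vg0/vmdisk
  # -d agcount= fixes the number (and therefore size) of AGs at mkfs time
  # -l size= fixes the internal log size, which later grows won't change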

-Eric

> Thanks!
> 
> Greets,
> Stefan
> 


* Re: xfs_growfs / planned resize / performance impact
  2012-08-03  4:03 ` Eric Sandeen
@ 2012-08-03  6:09   ` Stefan Priebe - Profihost AG
  2012-08-03 13:46     ` Eric Sandeen
  2012-08-05 11:03     ` Martin Steigerwald
  0 siblings, 2 replies; 16+ messages in thread
From: Stefan Priebe - Profihost AG @ 2012-08-03  6:09 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: xfs

On 03.08.2012 06:03, Eric Sandeen wrote:
> On 7/31/12 8:56 AM, Stefan Priebe - Profihost AG wrote:
>> Hello list,
>>
>> I'm planning to create a couple of VMs with just 30GB of space while using xfs as the main filesystem.
>>
>> Now I already know that some of the VMs will grow up to 250GB while resizing the block device and using xfs_growfs.
>>
>> Should I take care of that and format these disks with special parameters?
>>
>> I've discovered that a 500GB volume has agcount=4 and 64000 blocks of internal log - while a 300GB volume resized to 500GB has agcount=7 and only 40960 blocks of internal log.
>>
>> Is it a problem if this growth happens in small steps (30GB => 50GB => 75GB => 100GB => ... 300GB)?
>
> This incremental part doesn't matter a bit.  The first mkfs will choose
> the AG count & size according to defaults; further growth after this
> will add new (possibly partial) AGs of that pre-chosen size.

OK, thanks for your reply. But does this influence performance? Should I 
perhaps start by creating the 30GB filesystem with agcount=1, so that while 
growing the disk I don't end up with such a high agcount? Does it make 
sense to create a bigger internal log from the beginning?

Thanks

Stefan


* Re: xfs_growfs / planned resize / performance impact
  2012-08-03  6:09   ` Stefan Priebe - Profihost AG
@ 2012-08-03 13:46     ` Eric Sandeen
  2012-08-05 11:03     ` Martin Steigerwald
  1 sibling, 0 replies; 16+ messages in thread
From: Eric Sandeen @ 2012-08-03 13:46 UTC (permalink / raw)
  To: Stefan Priebe - Profihost AG; +Cc: xfs

On 8/3/12 1:09 AM, Stefan Priebe - Profihost AG wrote:
> On 03.08.2012 06:03, Eric Sandeen wrote:
>> On 7/31/12 8:56 AM, Stefan Priebe - Profihost AG wrote:
>>> Hello list,
>>> 
>>> I'm planning to create a couple of VMs with just 30GB of space
>>> while using xfs as the main filesystem.
>>> 
>>> Now I already know that some of the VMs will grow up to 250GB
>>> while resizing the block device and using xfs_growfs.
>>> 
>>> Should I take care of that and format these disks with special
>>> parameters?
>>> 
>>> I've discovered that a 500GB volume has agcount=4 and 64000
>>> blocks of internal log - while a 300GB volume resized to 500GB
>>> has agcount=7 and only 40960 blocks of internal log.
>>> 
>>> Is it a problem if this growth happens in small steps (30GB
>>> => 50GB => 75GB => 100GB => ... 300GB)?
>> 
>> This incremental part doesn't matter a bit.  The first mkfs will
>> choose the AG count & size according to defaults; further growth
>> after this will add new (possibly partial) AGs of that pre-chosen
>> size.
> 
> OK, thanks for your reply. But does this influence performance? Should
> I perhaps start by creating the 30GB filesystem with agcount=1, so that
> while growing the disk I don't end up with such a high agcount? Does
> it make sense to create a bigger internal log from the beginning?

You can't make a single-AG filesystem, for starters.

I'd really suggest that you just do some testing, and see if your
proposed mkfs/growth plan impacts your VM performance in any 
significant way.
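
A minimal version of such a test could look like this (device names and
fio parameters are invented; any workload resembling your real VMs would
be better):

  # filesystem A: formatted at the final size
  mkfs.xfs /dev/vg0/test-native && mount /dev/vg0/test-native /mnt/a
  # filesystem B: formatted small, then grown to the same size in steps
  mkfs.xfs /dev/vg0/test-grown && mount /dev/vg0/test-grown /mnt/b
  # ... a few rounds of lvextend + xfs_growfs /mnt/b ...
  fio --name=vmload --directory=/mnt/a --size=2g --rw=randwrite \
      --bs=4k --numjobs=4 --runtime=60 --time_based --group_reporting
  # repeat with --directory=/mnt/b and compare the results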

-Eric
 
> Thanks
> 
> Stefan
> 




* Re: xfs_growfs / planned resize / performance impact
  2012-07-31 13:56 xfs_growfs / planned resize / performance impact Stefan Priebe - Profihost AG
  2012-07-31 17:27 ` Stan Hoeppner
  2012-08-03  4:03 ` Eric Sandeen
@ 2012-08-04 22:43 ` Dave Chinner
  2012-08-05  5:46   ` Stefan Priebe
  2 siblings, 1 reply; 16+ messages in thread
From: Dave Chinner @ 2012-08-04 22:43 UTC (permalink / raw)
  To: Stefan Priebe - Profihost AG; +Cc: xfs

On Tue, Jul 31, 2012 at 03:56:54PM +0200, Stefan Priebe - Profihost AG wrote:
> Hello list,
> 
> I'm planning to create a couple of VMs with just 30GB of space while
> using xfs as the main filesystem.
> 
> Now I already know that some of the VMs will grow up to 250GB while
> resizing the block device and using xfs_growfs.

Just use thin provisioning and make it 250GB to begin with. Thin
provisioning makes filesystem grow/shrink pretty much redundant....
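
With LVM, that could look roughly like this (pool and volume names are
made up, and dm-thin needs a recent kernel plus lvm2 with thin support):

  lvcreate --type thin-pool -L 1t -n pool0 vg0
  lvcreate --thin -V 250g -n vm1 vg0/pool0   # 250G visible, allocated on demand
  mkfs.xfs /dev/vg0/vm1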

> Should I take care of that and format these disks with special parameters?
> 
> I've discovered that a 500GB volume has agcount=4 and 64000 blocks
> of internal log - while a 300GB volume resized to 500GB has agcount=7
> and only 40960 blocks of internal log.

I doubt you'll ever notice the difference.

> Is it a problem if this growth happens in small steps (30GB =>
> 50GB => 75GB => 100GB => ... 300GB)?

Growing a filesystem by an order of magnitude is the limit of what
I'd suggest is sane. Growing it by two orders of magnitude
(especially if you start with a 16 AG filesystem because of stripe
alignment) is going to cause problems with the number of AGs and
the subsequent freespace management scaling issues....
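
To put rough, purely illustrative numbers on that: a 30GB filesystem made
with 4 AGs gets an agsize of about 7.5GB, and since agsize is fixed,
growing it to 300GB leaves you with roughly 300 / 7.5 = 40 AGs; growing
it to 3TB would mean about 400 AGs.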

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: xfs_growfs / planned resize / performance impact
  2012-08-04 22:43 ` Dave Chinner
@ 2012-08-05  5:46   ` Stefan Priebe
  2012-08-05 11:06     ` Martin Steigerwald
                       ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: Stefan Priebe @ 2012-08-05  5:46 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs


On 05.08.2012 at 00:43, Dave Chinner <david@fromorbit.com> wrote:

> On Tue, Jul 31, 2012 at 03:56:54PM +0200, Stefan Priebe - Profihost AG wrote:
>> Hello list,
>> 
>> I'm planning to create a couple of VMs with just 30GB of space while
>> using xfs as the main filesystem.
>> 
>> Now I already know that some of the VMs will grow up to 250GB while
>> resizing the block device and using xfs_growfs.
> 
> Just use thin provisioning and make it 250GB to begin with. Thin
> provisioning makes filesystem grow/shrink pretty much redundant....

But dm-thin isn't stable yet, is it? Does xfs reuse freed parts of the block
device before allocating from new parts? Otherwise deleting and recreating files will use up the full space pretty fast.

>> Is it a problem if this growth happens in small steps (30GB =>
>> 50GB => 75GB => 100GB => ... 300GB)?
> 
> Growing a filesystem by an order of magnitude is the limit of what
> I'd suggest is sane. Growing it by two orders of magnitude
> (especially if you start with a 16 AG filesystem because of stripe
> alignment) is going to cause problems with the number of AGs and
> the subsequent freespace management scaling issues....
I would start with agcount=4 and end up with agcount=48 in my tests.

> Growing it by two orders of magnitude
what does that mean? (sorry, not a native speaker)

Thanks!

Stefan


* Re: xfs_growfs / planned resize / performance impact
  2012-08-03  6:09   ` Stefan Priebe - Profihost AG
  2012-08-03 13:46     ` Eric Sandeen
@ 2012-08-05 11:03     ` Martin Steigerwald
  2012-08-05 12:34       ` Stan Hoeppner
  2012-08-05 15:54       ` Stefan Priebe
  1 sibling, 2 replies; 16+ messages in thread
From: Martin Steigerwald @ 2012-08-05 11:03 UTC (permalink / raw)
  To: xfs; +Cc: Eric Sandeen, Stefan Priebe - Profihost AG

On Friday, 3 August 2012, Stefan Priebe - Profihost AG wrote:
> On 03.08.2012 06:03, Eric Sandeen wrote:
> > On 7/31/12 8:56 AM, Stefan Priebe - Profihost AG wrote:
> >> Hello list,
> >> 
> >> I'm planning to create a couple of VMs with just 30GB of space while
> >> using xfs as the main filesystem.
> >> 
> >> Now I already know that some of the VMs will grow up to 250GB while
> >> resizing the block device and using xfs_growfs.
> >> 
> >> Should I take care of that and format these disks with special
> >> parameters?
> >> 
> >> I've discovered that a 500GB volume has agcount=4 and 64000 blocks
> >> of internal log - while a 300GB volume resized to 500GB has agcount=7
> >> and only 40960 blocks of internal log.
> >> 
> >> Is it a problem if this growth happens in small steps (30GB =>
> >> 50GB => 75GB => 100GB => ... 300GB)?
> > 
> > This incremental part doesn't matter a bit.  The first mkfs will
> > choose the AG count & size according to defaults; further growth
> > after this will add new (possibly partial) AGs of that pre-chosen
> > size.
> 
> OK, thanks for your reply. But does this influence performance? Should I
> perhaps start by creating the 30GB filesystem with agcount=1, so that while
> growing the disk I don't end up with such a high agcount? Does it make
> sense to create a bigger internal log from the beginning?

Well, the default was 16 AGs for volumes < 2 TiB AFAIR. And it has been 
reduced to 4, as I remember, exactly for performance reasons. Too many AGs 
on a single device can cause too much parallelism. That at least is what 
I understood back then.

But then you didn't describe where those VM disks are located. If that 
location has many spindles it might not matter at all or even improve 
performance.

Anyway, I do not see much sense in making them 30 GiB when they can grow to 
500 GiB – at least provided that you use thin provisioning. Because 
xfs_growfs and resizing the image will likely be a manual step, while thin 
provisioning happens automatically and only needs to be monitored.

That manual step makes sense though if you want to guarantee that all the 
space that's visible in df output is really physically available, without 
provisioning x times 500 GiB initially.

As you can see, there is much guesswork in here, because the data you 
provided was not enough for any clear recommendation.

-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7


* Re: xfs_growfs / planned resize / performance impact
  2012-08-05  5:46   ` Stefan Priebe
@ 2012-08-05 11:06     ` Martin Steigerwald
  2012-08-05 11:35     ` Andy Bennett
  2012-08-05 20:57     ` Dave Chinner
  2 siblings, 0 replies; 16+ messages in thread
From: Martin Steigerwald @ 2012-08-05 11:06 UTC (permalink / raw)
  To: xfs; +Cc: Stefan Priebe

On Sunday, 5 August 2012, Stefan Priebe wrote:
> On 05.08.2012 at 00:43, Dave Chinner <david@fromorbit.com> wrote:
> > On Tue, Jul 31, 2012 at 03:56:54PM +0200, Stefan Priebe - Profihost AG wrote:
> >> Hello list,
> >> 
> >> I'm planning to create a couple of VMs with just 30GB of space while
> >> using xfs as the main filesystem.
> >> 
> >> Now I already know that some of the VMs will grow up to 250GB while
> >> resizing the block device and using xfs_growfs.
> > 
> > Just use thin provisioning and make it 250GB to begin with. Thin
> > provisioning makes filesystem grow/shrink pretty much redundant....
> 
> But dm-thin isn't stable yet, is it? Does xfs reuse freed parts of
> the block device before allocating from new parts? Otherwise deleting and
> recreating files will use up the full space pretty fast.

A periodic fstrim might help if TRIM/DISCARD is supported in all 
layers. And whether it is is a good question that depends on where 
your data is stored, what layers in the kernel are involved in storing 
it, and of course the kernel version.
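
A rough way to check for and run it (device and mount point names are
placeholders):

  cat /sys/block/sdX/queue/discard_max_bytes   # non-zero suggests discard support
  fstrim -v /mnt/vm-disk                       # e.g. from a weekly cron job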

> >> Is it a problem if this growth happens in small steps (30GB =>
> >> 50GB => 75GB => 100GB => ... 300GB)?
> > 
> > Growing a filesystem by an order of magnitude is the limit of what
> > I'd suggest is sane. Growing it by two orders of magnitude
> > (especially if you start with a 16 AG filesystem because of stripe
> > alignment) is going to cause problems with the number of AGs and
> > the subsequent freespace management scaling issues....
> 
> I would start with agcount=4 and end up with agcount=48 in my tests.

That's IMHO quite a lot for up to 500 GiB. But it still depends on what kind 
of storage this is located on.

-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7


* Re: xfs_growfs / planned resize / performance impact
  2012-08-05  5:46   ` Stefan Priebe
  2012-08-05 11:06     ` Martin Steigerwald
@ 2012-08-05 11:35     ` Andy Bennett
  2012-08-05 20:57     ` Dave Chinner
  2 siblings, 0 replies; 16+ messages in thread
From: Andy Bennett @ 2012-08-05 11:35 UTC (permalink / raw)
  To: Stefan Priebe; +Cc: xfs

Hi,

>>> Is it a problem if this growth happens in small steps (30GB =>
>>> 50GB => 75GB => 100GB => ... 300GB)?
>>
>> Growing a filesystem by an order of magnitude is the limit of what
>> I'd suggest is sane. Growing it by two orders of magnitude
>> (especially if you start with a 16 AG filesystem because of stripe
>> alignment) is going to cause problems with the number of AGs and
>> the subsequent freespace management scaling issues....
> I would start with agcount=4 and end up with agcount=48 in my tests.
> 
>> Growing it by two orders of magnitude
> what does that mean? (sorry, not a native speaker)

In base 10 an order of magnitude is 10x. Two orders of magnitude would
be 100x.

Numbers of the same order of magnitude have the decimal point (comma in
EU) in the same place.

300GB is an order of magnitude larger than 30GB.



Sometimes, I've seen computer scientists use binary orders of magnitude,
i.e. 2x, 4x, 8x, 16x. I'm not sure if this is generally accepted or not.

Regards,
@ndy

-- 
andyjpb@ashurst.eu.org
http://www.ashurst.eu.org/
0x7EBA75FF


* Re: xfs_growfs / planned resize / performance impact
  2012-08-05 11:03     ` Martin Steigerwald
@ 2012-08-05 12:34       ` Stan Hoeppner
  2012-08-05 13:49         ` Martin Steigerwald
  2012-08-05 15:54       ` Stefan Priebe
  1 sibling, 1 reply; 16+ messages in thread
From: Stan Hoeppner @ 2012-08-05 12:34 UTC (permalink / raw)
  To: Martin Steigerwald; +Cc: Stefan Priebe - Profihost AG, Eric Sandeen, xfs

On 8/5/2012 6:03 AM, Martin Steigerwald wrote:

> Well, the default was 16 AGs for volumes < 2 TiB AFAIR. And it has been 
> reduced to 4, as I remember, exactly for performance reasons. Too many AGs 
> on a single device can cause too much parallelism. That at least is what 
> I understood back then.

For striped md/RAID or LVM volumes mkfs.xfs will create 16 AGs by
default because it reads the configuration and finds a striped volume.
The theory here is that more AGs offers better performance in the
average case on a striped volume.

With hardware RAID or a single drive, or any storage configuration for
which mkfs.xfs is unable to query the parameters, mkfs.xfs creates 4 AGs
by default.  The 4 AG default has been with us for a very long time.  It
was never reduced.
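
One way to see what mkfs.xfs would choose for a given device, without
actually formatting it, is the -N flag (the device path is just an example):

  mkfs.xfs -N /dev/vg0/vmdisk
  # prints the geometry (agcount, agsize, log size, sunit/swidth) that a
  # real mkfs would use, without writing anything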

-- 
Stan


* Re: xfs_growfs / planned resize / performance impact
  2012-08-05 12:34       ` Stan Hoeppner
@ 2012-08-05 13:49         ` Martin Steigerwald
  2012-08-05 20:26           ` Stan Hoeppner
  0 siblings, 1 reply; 16+ messages in thread
From: Martin Steigerwald @ 2012-08-05 13:49 UTC (permalink / raw)
  To: stan; +Cc: Stefan Priebe - Profihost AG, Eric Sandeen, xfs

On Sunday, 5 August 2012, Stan Hoeppner wrote:
> On 8/5/2012 6:03 AM, Martin Steigerwald wrote:
> > Well, the default was 16 AGs for volumes < 2 TiB AFAIR. And it has
> > been reduced to 4, as I remember, exactly for performance reasons. Too
> > many AGs on a single device can cause too much parallelism. That at
> > least is what I understood back then.
> 
> For striped md/RAID or LVM volumes mkfs.xfs will create 16 AGs by
> default because it reads the configuration and finds a striped volume.
> The theory here is that more AGs offers better performance in the
> average case on a striped volume.
> 
> With hardware RAID or a single drive, or any storage configuration for
> which mkfs.xfs is unable to query the parameters, mkfs.xfs creates 4
> AGs by default.  The 4 AG default has been with us for a very long
> time.  It was never reduced.

That does not match my memory, but I'd have to look it up. Maybe next 
week.

I am pretty sure mkfs.xfs on a single partition on a single hard disk up to 
2 TiB used 16 AGs for quite some time, and has now used 4 AGs for quite some 
time already. I think I noted the exact xfsprogs version where it was 
changed in my training slides.

-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7


* Re: xfs_growfs / planned resize / performance impact
  2012-08-05 11:03     ` Martin Steigerwald
  2012-08-05 12:34       ` Stan Hoeppner
@ 2012-08-05 15:54       ` Stefan Priebe
  2012-08-06 11:42         ` Michael Monnerie
  1 sibling, 1 reply; 16+ messages in thread
From: Stefan Priebe @ 2012-08-05 15:54 UTC (permalink / raw)
  To: Martin Steigerwald; +Cc: Eric Sandeen, xfs

On 05.08.2012 13:03, Martin Steigerwald wrote:
> On Friday, 3 August 2012, Stefan Priebe - Profihost AG wrote:
> But then you didn't describe where those VM disks are located. If that
> location has many spindles it might not matter at all or even improve
> performance.
30 disks in RAID 50, exported via LIO iSCSI, with LVM on top.

> Anyway, I do not see much sense in making them 30 GiB when they can grow to
> 500 GiB – at least provided that you use thin provisioning. Because
> xfs_growfs and resizing the image will likely be a manual step, while thin
> provisioning happens automatically and only needs to be monitored.
I do not use thin provisioning, as dm-thin is not production ready yet, is it?

> That manual step makes sense though if you want to guarantee that all the
> space that's visible in df output is really physically available, without
> provisioning x times 500 GiB initially.
Oh, I can easily automate the whole resizing stuff.
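
For what it's worth, a minimal sketch of such automation (all names
invented):

  #!/bin/sh
  # grow the LV by a fixed step and let xfs grow into it
  LV=/dev/vg0/vm-disk
  MNT=/srv/vm
  lvextend -L +25g "$LV" && xfs_growfs "$MNT"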

Stefan


* Re: xfs_growfs / planned resize / performance impact
  2012-08-05 13:49         ` Martin Steigerwald
@ 2012-08-05 20:26           ` Stan Hoeppner
  0 siblings, 0 replies; 16+ messages in thread
From: Stan Hoeppner @ 2012-08-05 20:26 UTC (permalink / raw)
  To: Martin Steigerwald; +Cc: xfs, Eric Sandeen, Stefan Priebe - Profihost AG

On 8/5/2012 8:49 AM, Martin Steigerwald wrote:
> On Sunday, 5 August 2012, Stan Hoeppner wrote:
>> On 8/5/2012 6:03 AM, Martin Steigerwald wrote:
>>> Well, the default was 16 AGs for volumes < 2 TiB AFAIR. And it has
>>> been reduced to 4, as I remember, exactly for performance reasons. Too
>>> many AGs on a single device can cause too much parallelism. That at
>>> least is what I understood back then.
>>
>> For striped md/RAID or LVM volumes mkfs.xfs will create 16 AGs by
>> default because it reads the configuration and finds a striped volume.
>> The theory here is that more AGs offers better performance in the
>> average case on a striped volume.
>>
>> With hardware RAID or a single drive, or any storage configuration for
>> which mkfs.xfs is unable to query the parameters, mkfs.xfs creates 4
>> AGs by default.  The 4 AG default has been with us for a very long
>> time.  It was never reduced.
> 
> That does not match my memory, but I'd have to look it up. Maybe next 
> week.
> 
> I am pretty sure mkfs.xfs on a single partition on a single hard disk up to 
> 2 TiB used 16 AGs for quite some time, and has now used 4 AGs for quite some 
> time already. I think I noted the exact xfsprogs version where it was 
> changed in my training slides.

From 'man mkfs.xfs' of xfsprogs 3.1.4 (probably not the latest):

"The data section of the filesystem is divided into _value_ allocation
groups (default value is scaled automatically based on the underlying
device size)."

It's not stated in man but the minimum is 4 AGs, unless that has changed
in the last couple of years.  This is what I was referring to previously
when I stated 4 AGs is the default.

What you likely did was format a 2TB device and saw 16 AGs due to the
automatic scaling, then shortly thereafter formatted a much smaller
device and saw the default minimum of 4 AGs.  Believing agcount was
statically defined, you assumed the default value had been decreased.

-- 
Stan


* Re: xfs_growfs / planned resize / performance impact
  2012-08-05  5:46   ` Stefan Priebe
  2012-08-05 11:06     ` Martin Steigerwald
  2012-08-05 11:35     ` Andy Bennett
@ 2012-08-05 20:57     ` Dave Chinner
  2 siblings, 0 replies; 16+ messages in thread
From: Dave Chinner @ 2012-08-05 20:57 UTC (permalink / raw)
  To: Stefan Priebe; +Cc: xfs

On Sun, Aug 05, 2012 at 07:46:33AM +0200, Stefan Priebe wrote:
> Am 05.08.2012 um 00:43 schrieb Dave Chinner <david@fromorbit.com>:
> 
> > On Tue, Jul 31, 2012 at 03:56:54PM +0200, Stefan Priebe - Profihost AG wrote:
> >> Hello list,
> >> 
>> I'm planning to create a couple of VMs with just 30GB of space while
>> using xfs as the main filesystem.
>> 
>> Now I already know that some of the VMs will grow up to 250GB while
>> resizing the block device and using xfs_growfs.
> > 
> Just use thin provisioning and make it 250GB to begin with. Thin
> provisioning makes filesystem grow/shrink pretty much redundant....
> 
> But dm-thin isn't stable yet, is it?

AFAIK, it's mostly stable.

> Does xfs reuse freed parts of the block device
> before allocating from new parts?

Sometimes. Depends on workload, locality of reference,  patterns of
freeing and allocation, etc.

> Otherwise deleting and recreating files will use up the full space pretty fast.

fstrim. dm-thinp supports discard commands just fine.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: xfs_growfs / planned resize / performance impact
  2012-08-05 15:54       ` Stefan Priebe
@ 2012-08-06 11:42         ` Michael Monnerie
  0 siblings, 0 replies; 16+ messages in thread
From: Michael Monnerie @ 2012-08-06 11:42 UTC (permalink / raw)
  To: xfs


On Sunday, 5 August 2012, 17:54:28, Stefan Priebe wrote:
> I do not use thin provisioning, as dm-thin is not production ready
> yet, is it?

So you're using Linux as the virtualization host? Because with VMware your 
argument would be wrong.

-- 
with kind regards,
Michael Monnerie, Ing. BSc

it-management Internet Services: Protéger
http://proteger.at [pronounced: Prot-e-schee]
Tel: +43 660 / 415 6531

