* Premature "No Space left on device" on XFS
@ 2011-10-06 19:55 Bernhard Schmidt
  2011-10-07  0:22 ` Stan Hoeppner
  2011-10-07  1:37 ` Dave Chinner
  0 siblings, 2 replies; 19+ messages in thread
From: Bernhard Schmidt @ 2011-10-06 19:55 UTC (permalink / raw)
  To: xfs

Hi,

this is an XFS-related summary of a problem report I sent to the postfix 
mailing list a few minutes ago after a bulk-mail test system blew up 
during a stress test.

We have a few MTAs running SLES11.1 amd64 (2.6.32.45-0.3-default) with a 
10 GB XFS spool directory using the default block size (4k). It was 
bombarded with mails faster than it could send them on, which eventually 
led to almost 2 million files of ~1.5 kB in one directory. Suddenly, this 
started to happen:

lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # touch a
touch: cannot touch `a': No space left on device
lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # df .
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sdb              10475520   7471160   3004360  72% 
/var/spool/postfix-bulk
lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # df -i .
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/sdb             10485760 1742528 8743232   17% /var/spool/postfix-bulk

So we could not create any file in the spool directory anymore despite 
df claiming to have both free blocks and inodes. This led to a pretty 
spectacular lockup of the mail processing chain.

My theory is that XFS is using a full 4k block for each 1.5 kB file, 
which accounts for some loss. But still, 10 GB / 4 kB makes 2.5 million 
files, which have surely not been reached here. Is the overhead really 
that high? Why does neither df metric report this problem? Is there any 
way to get reasonable readings out of df in this case? The system would 
have stopped accepting mail from outside if the free space had dropped 
below 2 GB, so out-of-space happened way too early for it.

Thanks for your answers,
Bernhard


* Re: Premature "No Space left on device" on XFS
  2011-10-06 19:55 Premature "No Space left on device" on XFS Bernhard Schmidt
@ 2011-10-07  0:22 ` Stan Hoeppner
  2011-10-07  0:47   ` Bernhard Schmidt
  2011-10-07  1:37 ` Dave Chinner
  1 sibling, 1 reply; 19+ messages in thread
From: Stan Hoeppner @ 2011-10-07  0:22 UTC (permalink / raw)
  To: Bernhard Schmidt; +Cc: xfs

On 10/6/2011 2:55 PM, Bernhard Schmidt wrote:
> Hi,
> 
> this is an XFS-related summary of a problem report I sent to the postfix
> mailing list a few minutes ago after a bulk-mail test system blew up
> during a stress test.
> 
> We have a few MTAs running SLES11.1 amd64 (2.6.32.45-0.3-default) with a
> 10 GB XFS spool directory using the default block size (4k). It was
> bombarded with mails faster than it could send them on, which eventually
> led to almost 2 million files of ~1.5 kB in one directory. Suddenly, this
> started to happen:
> 
> lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # touch a
> touch: cannot touch `a': No space left on device
> lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # df .
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/sdb              10475520   7471160   3004360  72%
> /var/spool/postfix-bulk
> lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # df -i .
> Filesystem            Inodes   IUsed   IFree IUse% Mounted on
> /dev/sdb             10485760 1742528 8743232   17% /var/spool/postfix-bulk
> 
> So we could not create any file in the spool directory anymore despite
> df claiming to have both free blocks and inodes. This led to a pretty
> spectacular lockup of the mail processing chain.
> 
> My theory is that XFS is using a full 4k block for each 1.5 kB file,
> which accounts for some loss. But still, 10 GB / 4 kB makes 2.5 million
> files, which have surely not been reached here. Is the overhead really
> that high? Why does neither df metric report this problem? Is there any
> way to get reasonable readings out of df in this case? The system would
> have stopped accepting mail from outside if the free space had dropped
> below 2 GB, so out-of-space happened way too early for it.

Dig deeper so you can get past theory and find facts.  Do you see any
errors in dmesg?
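
(A quick way to check is something like the following; the grep pattern is
only a guess at what an XFS error message would contain.)

dmesg | tail -n 50
grep -i xfs /var/log/messages | tail -n 20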

-- 
Stan


* Re: Premature "No Space left on device" on XFS
  2011-10-07  0:22 ` Stan Hoeppner
@ 2011-10-07  0:47   ` Bernhard Schmidt
  0 siblings, 0 replies; 19+ messages in thread
From: Bernhard Schmidt @ 2011-10-07  0:47 UTC (permalink / raw)
  To: stan; +Cc: xfs

On 07.10.2011 02:22, Stan Hoeppner wrote:

Hi,

> On 10/6/2011 2:55 PM, Bernhard Schmidt wrote:
>> Hi,
>>
>> this is an XFS-related summary of a problem report I sent to the postfix
>> mailing list a few minutes ago after a bulk-mail test system blew up
>> during a stress test.
>>
>> We have a few MTAs running SLES11.1 amd64 (2.6.32.45-0.3-default) with a
>> 10 GB XFS spool directory using the default block size (4k). It was
>> bombarded with mails faster than it could send them on, which eventually
>> led to almost 2 million files of ~1.5 kB in one directory. Suddenly, this
>> started to happen:
>>
>> lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # touch a
>> touch: cannot touch `a': No space left on device
>> lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # df .
>> Filesystem           1K-blocks      Used Available Use% Mounted on
>> /dev/sdb              10475520   7471160   3004360  72%
>> /var/spool/postfix-bulk
>> lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # df -i .
>> Filesystem            Inodes   IUsed   IFree IUse% Mounted on
>> /dev/sdb             10485760 1742528 8743232   17% /var/spool/postfix-bulk
>>
>> So we could not create any file in the spool directory anymore despite
>> df claiming to have both free blocks and inodes. This led to a pretty
>> spectacular lockup of the mail processing chain.
>>
>> My theory is that XFS is using a full 4k block for each 1.5 kB file,
>> which accounts for some loss. But still, 10 GB / 4 kB makes 2.5 million
>> files, which have surely not been reached here. Is the overhead really
>> that high? Why does neither df metric report this problem? Is there any
>> way to get reasonable readings out of df in this case? The system would
>> have stopped accepting mail from outside if the free space had dropped
>> below 2 GB, so out-of-space happened way too early for it.
>
> Dig deeper so you can get past theory and find facts.  Do you see any
> errors in dmesg?

No, nothing in dmesg. As soon as I delete one file, mail processing 
continues. This is more or less the expected outcome in this situation; 
it is a classic 2-MTA-with-queues-with-a-content-filter setup. The 
before-filter instance will connect through the filter to the 
post-filter instance and try to deliver mails. During that period each 
mail occupies two files (active queue in the before-filter instance, 
incoming queue in the post-filter instance). If the second file cannot 
be opened, the mail will never be delivered and the before-filter queue 
is never processed.

Bernhard


* Re: Premature "No Space left on device" on XFS
  2011-10-06 19:55 Premature "No Space left on device" on XFS Bernhard Schmidt
  2011-10-07  0:22 ` Stan Hoeppner
@ 2011-10-07  1:37 ` Dave Chinner
  2011-10-07  8:40   ` Gim Leong Chin
                     ` (3 more replies)
  1 sibling, 4 replies; 19+ messages in thread
From: Dave Chinner @ 2011-10-07  1:37 UTC (permalink / raw)
  To: Bernhard Schmidt; +Cc: xfs

On Thu, Oct 06, 2011 at 09:55:07PM +0200, Bernhard Schmidt wrote:
> Hi,
> 
> this is an XFS-related summary of a problem report I sent to the
> postfix mailing list a few minutes ago after a bulk-mail test system
> blew up during a stress test.
> 
> We have a few MTAs running SLES11.1 amd64 (2.6.32.45-0.3-default) with
> a 10 GB XFS spool directory using the default block size (4k). It was
> bombarded with mails faster than it could send them on, which
> eventually led to almost 2 million files of ~1.5 kB in one directory.
> Suddenly, this started to happen:
> 
> lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # touch a
> touch: cannot touch `a': No space left on device
> lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # df .
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/sdb              10475520   7471160   3004360  72%

So you have a 10GB filesystem, with about 3GB of free space.

> /var/spool/postfix-bulk
> lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # df -i .
> Filesystem            Inodes   IUsed   IFree IUse% Mounted on
> /dev/sdb             10485760 1742528 8743232   17% /var/spool/postfix-bulk

And with 1.7 million inodes in it. That's a lot for a tiny
filesystem, and not really a use case that XFS is well suited to.
XFS will work, but it won't age gracefully under these conditions...

As it is, your problem is most likely fragmented free space (an
aging problem). Inodes are allocated in chunks of 64, so require an
-aligned- contiguous 16k extent for the default 256 byte inode size.
If you have no aligned contiguous 16k extents free then inode
allocation will fail.

Running 'xfs_db -r "-c freesp -s" /dev/sdb' will give you a
histogram of free space extents in the filesystem, which will tell
us if you are hitting this problem.
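
It can also be worth looking at one AG at a time, since an inode chunk has
to be carved out of a single AG and one AG can be in much worse shape than
the overall summary suggests. For example (just a sketch; AG numbers run
from 0 to agcount-1):

xfs_db -r -c "freesp -s -a 0" /dev/sdb
xfs_db -r -c "freesp -s -a 1" /dev/sdb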

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: Premature "No Space left on device" on XFS
  2011-10-07  1:37 ` Dave Chinner
@ 2011-10-07  8:40   ` Gim Leong Chin
  2011-10-07 23:20     ` Dave Chinner
  2011-10-07 11:40   ` Michael Monnerie
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 19+ messages in thread
From: Gim Leong Chin @ 2011-10-07  8:40 UTC (permalink / raw)
  To: Bernhard Schmidt, Dave Chinner; +Cc: xfs

Hi Dave,


> 
> As it is, your problem is most likely fragmented free space (an
> aging problem). Inodes are allocated in chunks of 64, so require an
> -aligned- contiguous 16k extent for the default 256 byte inode size.
> If you have no aligned contiguous 16k extents free then inode
> allocation will fail.
> 

I understand from the mkfs.xfs man page "The  XFS  inode  contains a fixed-size part and a variable-size part."

1) Do you mean inodes are allocated in units of 64 at one go?
2) What is the size of the fixed-size part?
3) Are the fixed-size parts of inodes also allocated in units of 64 at one go?
4) Where are the fixed-size parts located?  On special extents just like the variable-size part?
5) What about the locality of the variable and fixed size parts of the inodes?  Can they be any distance apart?

Thanks!


GL


* Re: Premature "No Space left on device" on XFS
  2011-10-07  1:37 ` Dave Chinner
  2011-10-07  8:40   ` Gim Leong Chin
@ 2011-10-07 11:40   ` Michael Monnerie
  2011-10-07 23:17     ` Dave Chinner
  2011-10-07 13:49   ` Bernhard Schmidt
  2011-10-07 13:58   ` Bryan J Smith
  3 siblings, 1 reply; 19+ messages in thread
From: Michael Monnerie @ 2011-10-07 11:40 UTC (permalink / raw)
  To: xfs



On Friday, 7 October 2011, Dave Chinner wrote:
> And with 1.7 million inodes in it. That's a lot for a tiny
> filesystem, and not really a use case that XFS is well suited to.
> XFS will work, but it won't age gracefully under these conditions...

But which FS would fit better for that? We have similar usages, so I'm 
curious.

> As it is, your problem is most likely fragmented free space (an
> aging problem). Inodes are allocated in chunks of 64, so require an
> -aligned- contiguous 16k extent for the default 256 byte inode size.
> If you have no aligned contiguous 16k extents free then inode
> allocation will fail.

Would this be a use case for the "noikeep" mount option?

When would XFS drop inode clusters that are already empty, if I use 
noikeep on an aged XFS with lots of such unused inode allocations?

-- 
with kind regards,
Michael Monnerie, Ing. BSc

it-management Internet Services: Protéger
http://proteger.at [pronounced: Prot-e-schee]
Tel: +43 660 / 415 6531

// House for sale: http://zmi.at/langegg/


* Re: Premature "No Space left on device" on XFS
  2011-10-07  1:37 ` Dave Chinner
  2011-10-07  8:40   ` Gim Leong Chin
  2011-10-07 11:40   ` Michael Monnerie
@ 2011-10-07 13:49   ` Bernhard Schmidt
  2011-10-07 23:14     ` Dave Chinner
  2011-10-07 13:58   ` Bryan J Smith
  3 siblings, 1 reply; 19+ messages in thread
From: Bernhard Schmidt @ 2011-10-07 13:49 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs

On 07.10.2011 03:37, Dave Chinner wrote:

Hi,

>> this is an XFS-related summary of a problem report I sent to the
>> postfix mailing list a few minutes ago after a bulk-mail test system
>> blew up during a stress test.
>>
>> We have a few MTAs running SLES11.1 amd64 (2.6.32.45-0.3-default) with
>> a 10 GB XFS spool directory using the default block size (4k). It was
>> bombarded with mails faster than it could send them on, which
>> eventually led to almost 2 million files of ~1.5 kB in one directory.
>> Suddenly, this started to happen:
>>
>> lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # touch a
>> touch: cannot touch `a': No space left on device
>> lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # df .
>> Filesystem           1K-blocks      Used Available Use% Mounted on
>> /dev/sdb              10475520   7471160   3004360  72%
> 
> So you have a 10GB filesystem, with about 3GB of free space.
> 
>> /var/spool/postfix-bulk
>> lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # df -i .
>> Filesystem            Inodes   IUsed   IFree IUse% Mounted on
>> /dev/sdb             10485760 1742528 8743232   17% /var/spool/postfix-bulk
> 
> And with 1.7 million inodes in it. That's a lot for a tiny
> filesystem, and not really a use case that XFS is well suited to.
> XFS will work, but it won't age gracefully under these conditions...
> 
> As it is, your problem is most likely fragmented free space (an
> aging problem). Inodes are allocated in chunks of 64, so require an
> -aligned- contiguous 16k extent for the default 256 byte inode size.
> If you have no aligned contiguous 16k extents free then inode
> allocation will fail.
> 
> Running 'xfs_db -r "-c freesp -s" /dev/sdb' will give you a
> histogram of free space extents in the filesystem, which will tell
> us if you are hitting this problem.

I managed to create the situation again. This time the total usage is a
bit higher, but it still failed.

lxmhs45:~ # df /var/spool/postfix-bulk
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sdb              10475520   8071008   2404512  78%
/var/spool/postfix-bulk
lxmhs45:~ # df -i /var/spool/postfix-bulk
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/sdb             11500544 1882496 9618048   17% /var/spool/postfix-bulk

This is the output requested.

lxmhs45:~ # xfs_db -r "-c freesp -s" /dev/sdb
   from      to extents  blocks    pct
      1       1   32230   32230   5.36
      2       3    6874   16476   2.74
      4       7  138151  552604  91.90
total free extents 177255
total free blocks 601310
average free extent size 3.39234
lxmhs45:~ # xfs_info /dev/sdb
meta-data=/dev/sdb               isize=256    agcount=4, agsize=655360 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=2621440, imaxpct=50
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Best Regards,
Bernhard


* Re: Premature "No Space left on device" on XFS
  2011-10-07  1:37 ` Dave Chinner
                     ` (2 preceding siblings ...)
  2011-10-07 13:49   ` Bernhard Schmidt
@ 2011-10-07 13:58   ` Bryan J Smith
  2011-10-07 23:31     ` Dave Chinner
  3 siblings, 1 reply; 19+ messages in thread
From: Bryan J Smith @ 2011-10-07 13:58 UTC (permalink / raw)
  To: Dave Chinner, Bernhard Schmidt; +Cc: xfs

[ Not really adding any technical meat, but just some past experience with XFS, plus Ext3 experience ]

I remember running into this a long time ago when I was first playing with XFS for /tmp and /var (I was still a Linux/XFS noob at the time, not that I'm an expert today).  I ran into the same case where both free blocks and inodes were still available (although similarly well utilized), and the median file size was around 1KiB.  It was also in the case of many small files being written out in a short period.

In my case, I didn't use the XFS debugger to get into the allocation of the extents (would have if I wasn't such a noob, good, discrete command to know, thanx!).

Extents are outstanding for data and similar directories, ordering and placing large and small files to mitigate fragmentation.  But in this case, and correct me if I'm wrong, it's really just a wasteful use for the extents approach, as the files typically fit in a single data block or two.  I mean, I can still see some benefits in how inodes are allocated, but it seems small compared to the overhead.  Then add in the delete aspect, being that the files are not going to be retained in the user's use case here (this is a spool, correct?), and I'm not seeing XFS make sense.  The fact that the services "fell behind" does not surprise me, although that's just a subjective feel (and if anyone knows how to back that up with good tools and metrics, I'm all ears).

I never got around to benchmarking it against Ext3 in such a use case, but I quickly adopted a complementary Ext3+XFS volume approach.

I've used Ext3 with around 8 million files with a median size well under 4KiB (under 32GiB total).  It works "well enough."  I'm curious how Ext4 would do though.  I think Ric Wheeler's team (at Red Hat) has done some benchmarks on 7+ figure file counts on Ext3 and Ext4.  I think I remember a couple of info and related tidbits back and forth when I was doing some Ext3 (with GFS and GFS2) testing, on expectations of performance.

Although I can't say I've had 2 million files in a single directory, so YMMV.  Then again, if it was extent overhead, it may not have reached 2M to begin with.




----- Original Message -----
From: Dave Chinner <david@fromorbit.com>
Sent: Thursday, October 6, 2011 9:37 PM

On Thu, Oct 06, 2011 at 09:55:07PM +0200, Bernhard Schmidt wrote:
> ...
> It was  bombarded with mails faster than it could send them on,
> which eventually led to almost 2 million files of ~1.5kB in one
> directory.  Suddenly, this started to happen
> lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # touch a
> touch: cannot touch `a': No space left on device
> lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # df .
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/sdb              10475520   7471160   3004360  72%
> So you have a 10GB filesystem, with about 3GB of free space.
> /var/spool/postfix-bulk
> lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # df -i .
> Filesystem            Inodes   IUsed   IFree IUse% Mounted on
> /dev/sdb             10485760 1742528 8743232   17% /var/spool/postfix-bulk

And with 1.7 million inodes in it. That's a lot for a tiny
filesystem, and not really a use case that XFS is well suited to.
XFS will work, but it won't age gracefully under these conditions...

As it is, your problem is most likely fragmented free space (an
aging problem). Inodes are allocated in chunks of 64, so require an
-aligned- contiguous 16k extent for the default 256 byte inode size.
If you have no aligned contiguous 16k extents free then inode
allocation will fail.

Running 'xfs_db -r "-c freesp -s" /dev/sdb' will give you a
histogram of free space extents in the filesystem, which will tell
us if you are hitting this problem.


* Re: Premature "No Space left on device" on XFS
  2011-10-07 13:49   ` Bernhard Schmidt
@ 2011-10-07 23:14     ` Dave Chinner
  2011-10-08 12:29       ` Bernhard Schmidt
  0 siblings, 1 reply; 19+ messages in thread
From: Dave Chinner @ 2011-10-07 23:14 UTC (permalink / raw)
  To: Bernhard Schmidt; +Cc: xfs

On Fri, Oct 07, 2011 at 03:49:57PM +0200, Bernhard Schmidt wrote:
> On 07.10.2011 03:37, Dave Chinner wrote:
> 
> Hi,
> 
> >> this is an XFS-related summary of a problem report I sent to the
> >> postfix mailing list a few minutes ago after a bulk-mail test system
> >> blew up during a stress test.
> >>
> >> We have a few MTAs running SLES11.1 amd64 (2.6.32.45-0.3-default)
> >> with a 10 GB XFS spool directory using the default block size (4k).
> >> It was bombarded with mails faster than it could send them on, which
> >> eventually led to almost 2 million files of ~1.5 kB in one directory.
> >> Suddenly, this started to happen:
> >>
> >> lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # touch a
> >> touch: cannot touch `a': No space left on device
> >> lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # df .
> >> Filesystem           1K-blocks      Used Available Use% Mounted on
> >> /dev/sdb              10475520   7471160   3004360  72%
> > 
> > So you have a 10GB filesystem, with about 3GB of free space.
> > 
> >> /var/spool/postfix-bulk
> >> lxmhs45:/var/spool/postfix-bulk/postfix-bulkinhss # df -i .
> >> Filesystem            Inodes   IUsed   IFree IUse% Mounted on
> >> /dev/sdb             10485760 1742528 8743232   17% /var/spool/postfix-bulk
> > 
> > And with 1.7 million inodes in it. That's a lot for a tiny
> > filesystem, and not really a use case that XFS is well suited to.
> > XFS will work, but it won't age gracefully under these conditions...
> > 
> > As it is, your problem is most likely fragmented free space (an
> > aging problem). Inodes are allocated in chunks of 64, so require an
> > -aligned- contiguous 16k extent for the default 256 byte inode size.
> > If you have no aligned contiguous 16k extents free then inode
> > allocation will fail.
> > 
> > Running 'xfs_db -r "-c freesp -s" /dev/sdb' will give you a
> > histogram of free space extents in the filesystem, which will tell
> > us if you are hitting this problem.
> 
> I managed to create the situation again. This time the total usage is a
> bit higher, but it still failed.

No surprise. The way you are using the filesystem predisposes it to
this sort of problem.

> lxmhs45:~ # df /var/spool/postfix-bulk
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/sdb              10475520   8071008   2404512  78%
> /var/spool/postfix-bulk
> lxmhs45:~ # df -i /var/spool/postfix-bulk
> Filesystem            Inodes   IUsed   IFree IUse% Mounted on
> /dev/sdb             11500544 1882496 9618048   17% /var/spool/postfix-bulk
> 
> This is the output requested.
> 
> lxmhs45:~ # xfs_db -r "-c freesp -s" /dev/sdb
>    from      to extents  blocks    pct
>       1       1   32230   32230   5.36
>       2       3    6874   16476   2.74
>       4       7  138151  552604  91.90
> total free extents 177255
> total free blocks 601310
> average free extent size 3.39234

And that shows your free space is indeed badly fragmented and is the
cause of your problem. The majority of the free space is in 4-7
block extents which, if inode allocation is failing, are all
unaligned even though they are large enough for an inode chunk.

> lxmhs45:~ # xfs_info /dev/sdb
> meta-data=/dev/sdb               isize=256    agcount=4, agsize=655360 blks
>          =                       sectsz=512   attr=2
> data     =                       bsize=4096   blocks=2621440, imaxpct=50
                                                                ^^^^^^^^^^

And there lies the reason you are getting the filesystem into this
situation - you're allowing a very large number of inodes to be created
in the filesystem.

I'd suggest that for your workload, you need to allow at least 10GB
of disk space per million inodes. Because of the number of small
files, XFS is going to need a much larger amount of free space
available to prevent aging related freespace fragmentation problems.
The above ratio results in a maximum space usage of about 50%, which
will avoid such issues. If you need to hold 2 million files, use a
20GB filesystem...
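
If the underlying /dev/sdb can simply be enlarged (an assumption about the
storage setup), the existing filesystem can also be grown online, and the
new space arrives as large, unfragmented free extents:

xfs_growfs /var/spool/postfix-bulk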

>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0
> log      =internal               bsize=4096   blocks=2560, version=2
                                               ^^^^^^^^^^^^^

And you'll probably get better performance if you use a larger log
as well.

FWIW, if you have anyone with developer time available, finishing
off this work:

http://xfs.org/index.php/Unfinished_work#Inline_data_in_inodes

and using 2kB inodes (which would fit ~1900 bytes of data inline)
would solve your problem entirely and perform much, much better.
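
For reference, both the log size and the inode size are set at mkfs time;
a rebuilt spool filesystem might look something like this (sizes are only
illustrative, and the inline-data benefit still depends on the unfinished
work above):

# WARNING: re-creates the filesystem on /dev/sdb, destroying its contents
mkfs.xfs -f -l size=64m -i size=2048 /dev/sdb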

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: Premature "No Space left on device" on XFS
  2011-10-07 11:40   ` Michael Monnerie
@ 2011-10-07 23:17     ` Dave Chinner
  0 siblings, 0 replies; 19+ messages in thread
From: Dave Chinner @ 2011-10-07 23:17 UTC (permalink / raw)
  To: Michael Monnerie; +Cc: xfs

On Fri, Oct 07, 2011 at 01:40:24PM +0200, Michael Monnerie wrote:
> On Friday, 7 October 2011, Dave Chinner wrote:
> > And with 1.7 million inodes in it. That's a lot for a tiny
> > filesystem, and not really a use case that XFS is well suited to.
> > XFS will work, but it won't age gracefully under these conditions...
> 
> But which FS would fit better for that? We have similar usages, so I'm 
> curious.
> 
> > As it is, your problem is most likely fragmented free space (an
> > aging problem). Inodes are allocated in chunks of 64, so require an
> > -aligned- contiguous 16k extent for the default 256 byte inode size.
> > If you have no aligned contiguous 16k extents free then inode
> > allocation will fail.
> 
> Would this be a use case for the "noikeep" mount option?

Possibly, but you still end up with the same problems when you have
to grow the inode count and free space is low.

> When would XFS drop inode clusters that are already empty, if I use
> noikeep on an aged XFS with lots of such unused inode allocations?

XFS frees an inode chunk when the last inode in the chunk is
freed. If you have a random create/unlink workload, most of the
inode chunks will be partially populated and so the filesystem will
behave mostly as though it has noikeep specified anyway (i.e. it
allocates inodes from partially populated chunks before allocating
new chunks).

Cheers,

Dave.

-- 
Dave Chinner
david@fromorbit.com


* Re: Premature "No Space left on device" on XFS
  2011-10-07  8:40   ` Gim Leong Chin
@ 2011-10-07 23:20     ` Dave Chinner
  0 siblings, 0 replies; 19+ messages in thread
From: Dave Chinner @ 2011-10-07 23:20 UTC (permalink / raw)
  To: Gim Leong Chin; +Cc: Bernhard Schmidt, xfs

On Fri, Oct 07, 2011 at 04:40:13PM +0800, Gim Leong Chin wrote:
> Hi Dave,
> 
> 
> > 
> > As it is, your problem is most likely fragmented free space (an
> > aging problem). Inodes are allocated in chunks of 64, so require an
> > -aligned- contiguous 16k extent for the default 256 byte inode size.
> > If you have no aligned contiguous 16k extents free then inode
> > allocation will fail.
> > 
> 
> I understand from the mkfs.xfs man page "The  XFS  inode  contains a fixed-size part and a variable-size part."
> 
> 1) Do you mean inodes are allocated in units of 64 at one go?

http://xfs.org/docs/xfsdocs-xml-dev/XFS_Filesystem_Structure//tmp/en-US/html/AG_Inode_Management.html
http://xfs.org/docs/xfsdocs-xml-dev/XFS_Filesystem_Structure//tmp/en-US/html/Inode_Btrees.html

> 2) What is the size of the fixed-size part?
> 3) Are the fixed-size parts of inodes also allocated in units of 64 at one go?
> 4) Where are the fixed-size parts located?  On special extents just like the variable-size part?
> 5) What about the locality of the variable and fixed size parts of the inodes?  Can they be any distance apart?

http://xfs.org/docs/xfsdocs-xml-dev/XFS_Filesystem_Structure//tmp/en-US/html/On-disk_Inode.html

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: Premature "No Space left on device" on XFS
  2011-10-07 13:58   ` Bryan J Smith
@ 2011-10-07 23:31     ` Dave Chinner
  2011-10-08  6:30       ` Bryan J Smith
  0 siblings, 1 reply; 19+ messages in thread
From: Dave Chinner @ 2011-10-07 23:31 UTC (permalink / raw)
  To: Bryan J Smith; +Cc: Bernhard Schmidt, xfs

On Fri, Oct 07, 2011 at 06:58:53AM -0700, Bryan J Smith wrote:
> [ Not really adding any technical meat, but just some past
> experience with XFS, plus Ext3 experience ]
> 
> I remember running into this a long time ago when I was first
> playing with XFS for /tmp and /var (I was still a Linux/XFS noob
> at the time, not that I'm an expert today).  I ran into the same
> case where both free blocks and inodes were still available
> (although similarly well utilized), and the median file size was
> around 1KiB.  It was also in the case of many small files being
> written out in a short period.
> 
> In my case, I didn't use the XFS debugger to get into the
> allocation of the extents (would have if I wasn't such a noob,
> good, discrete command to know, thanx!).
> 
> Extents are outstanding for data and similar directories, ordering
> and placing large and small files to mitigate fragmentation.  But
> in this case, and correct me if I'm wrong, it's really just a
> wasteful use for the extents approach, as the files typically fit
> in a single data block or two. 

And a single block is still an extent, so there's nothing "wasted" by
having a single block extent.

....

> I've used Ext3 with around 8 million files with a median size well
> under 4KiB (under 32GiB total).  It works "well enough."  I'm
> curious how Ext4 would do though.  I think Ric Wheeler's team (at
> Red Hat) has done some benchmarks on 7+ figure file counts on Ext3
> and Ext4.

And against XFS, too. In case you didn't realise, you're talking to
the person who ran a large number of those tests. ;)

The results were that ext4 is good for create/delete workloads up to 2-4
threads and about 100k files per directory on a decent disk
subsystem (4000 iops). It's much better than ext3, and for those
workloads about 2x as fast as XFS at 1-2 threads. This pattern held
true as long as the disk subsystem could handle the number of iops
that ext4 threw at it. XFS performance came at a much, much lower
iops cost (think an order of magnitude), so it should be more consistent
on a wider range of storage hardware than ext4.

However, XFS was about 3x faster on cold cache lookups than ext4, so
if your workload is dominated by lookups, XFS is definitely the
faster filesystem to use even if creates/unlinks on ext4 are
faster.

As soon as you have more parallelism than 2-4 threads or large
directories, XFS create/unlink speed surpasses ext4 by a large
amount - the best I got out of ext4 was ~80k creates a second, while
XFS topped 130k creates/s at 8 threads. And the lookup speed
differential increases in XFS's favour at larger thread counts as
well.

So it really depends on your workload as to which filesystem will
handle your small files best. Mail spools tend to have lots of
parallelism, which is why XFS works pretty well, even though it is
a small file workload.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: Premature "No Space left on device" on XFS
  2011-10-07 23:31     ` Dave Chinner
@ 2011-10-08  6:30       ` Bryan J Smith
  2011-10-08 13:16         ` Christoph Hellwig
  0 siblings, 1 reply; 19+ messages in thread
From: Bryan J Smith @ 2011-10-08  6:30 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Bernhard Schmidt, xfs



I figured someone would fill in the gaps in my experience-assumptions. Excellent info, especially on those fs comparisons.
-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.


* Re: Premature "No Space left on device" on XFS
  2011-10-07 23:14     ` Dave Chinner
@ 2011-10-08 12:29       ` Bernhard Schmidt
  2011-10-08 13:18         ` Christoph Hellwig
  2011-10-08 22:30         ` Dave Chinner
  0 siblings, 2 replies; 19+ messages in thread
From: Bernhard Schmidt @ 2011-10-08 12:29 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs

Hi,

>> lxmhs45:~ # xfs_info /dev/sdb
>> meta-data=/dev/sdb               isize=256    agcount=4, agsize=655360 blks
>>           =                       sectsz=512   attr=2
>> data     =                       bsize=4096   blocks=2621440, imaxpct=50
>                                                                  ^^^^^^^^^^
>
> And there lies the reason you are getting the filesystem into this
> situation - you're allowing a very large number of inodes to be created
> in the filesystem.

Ah, sorry, I changed that to 50% _after_ the first fuckup due to a 
suggestion on the postfix mailing list; it used to be the default 25% before.
>
> I'd suggest that for your workload, you need to allow at least 10GB
> of disk space per million inodes. Because of the number of small
> files, XFS is going to need a much larger amount of free space
> available to prevent aging related freespace fragmentation problems.
> The above ratio results in a maximum space usage of about 50%, which
> will avoid such issues. If you need to hold 2 million files, use a
> 20GB filesystem...

I don't need to hold 2 million files; 1 million might be enough. I have 
to make sure I cannot run out of inodes way before I run out of free space.

Generally speaking I have the following problem:

External nodes are submitting data (mails) to this system as fast as 
they can. The mails can be between 800 bytes and several megabytes. 
There are 50 receiver that write those mails as single files flat in a 
single directory.

There are 4 worker threads that process a _random_ file out of this 
directory. To process it they need to be able to create a temporary file 
on the same filesystem. Together they are slower than the 50 receivers 
(they can process maybe 20% of the incoming rate), which means that this 
incoming directory is going to fill. For the sake of the argument let's 
assume that the amount of mails to be sent is unlimited.

The only knob the software knows of to prevent this from going over is free 
disk space. When free disk space is lower than 2 Gigabyte, the 
acceptance of new mails is blocked gracefully until there is free space 
again.

It has, however, no way to deal with ENOSPC before that. When it cannot 
create new files due to no free inodes (ext4 with default settings) or 
fragmentation in XFS, it breaks quite horribly and cannot recover by itself.

Can I avoid XFS giving ENOSPC due to inode shortage even in worst case 
situations? I would be fine preallocating 1 GB for inode storage if that 
would fix the problem. ext4 with bytes-per-inode = blocksize does this fine.

You mentioned an aging problem with XFS. I guess you mean that an XFS 
filesystem will get slower/more fragmented over time with abuse like this. 
The mail submissions above will happen in bursts; during normal times 
it will go down to << 1000 files on the entire filesystem (empty 
incoming directory). Is this enough for XFS to "fix itself"?

BTW, the software can hash the incoming directory in 16 or 16x16 
subdirectories. Would that help XFS in any way with those filesizes? At 
first glance I would have said yes, but due to the random access in 
those directories it would still have the entire spool as workload.

Bernhard


* Re: Premature "No Space left on device" on XFS
  2011-10-08  6:30       ` Bryan J Smith
@ 2011-10-08 13:16         ` Christoph Hellwig
  0 siblings, 0 replies; 19+ messages in thread
From: Christoph Hellwig @ 2011-10-08 13:16 UTC (permalink / raw)
  To: Bryan J Smith; +Cc: Bernhard Schmidt, xfs

On Sat, Oct 08, 2011 at 02:30:04AM -0400, Bryan J Smith wrote:
> I figured someone would fill in the gaps in my experience-assumptions. Excellent info, especially on those fs comparisons.

If you want to help the project with some relatively simple work,
summarizing posts like these on pages on the xfs.org wiki would be fairly
useful.  If you're interested in that we could even dump raw benchmark
data for graphs, etc.


* Re: Premature "No Space left on device" on XFS
  2011-10-08 12:29       ` Bernhard Schmidt
@ 2011-10-08 13:18         ` Christoph Hellwig
  2011-10-08 22:34           ` Dave Chinner
  2011-10-08 22:30         ` Dave Chinner
  1 sibling, 1 reply; 19+ messages in thread
From: Christoph Hellwig @ 2011-10-08 13:18 UTC (permalink / raw)
  To: Bernhard Schmidt; +Cc: xfs

On Sat, Oct 08, 2011 at 02:29:34PM +0200, Bernhard Schmidt wrote:
> Can I avoid XFS giving ENOSPC due to inode shortage even in worst
> case situations? I would be fine preallocating 1 GB for inode
> storage if that would fix the problem. ext4 with bytes-per-inode =
> blocksize does this fine.

It's a bit of a hack, but you can do that fairly easily by:

 - mounting the filesystem with the ikeep option
 - creating the number of inodes you need (use a worst-case allocation)
   before starting the workload
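
A rough sketch of the idea (the mount point, file count and 16-way
directory split are placeholders, and the spool would have to be idle
while this runs):

mount -o ikeep /dev/sdb /var/spool/postfix-bulk
# pre-create, then delete, a worst-case number of files so the inode
# chunks get allocated now; with ikeep they are not freed again
for d in $(seq 0 15); do
    mkdir -p /var/spool/postfix-bulk/prealloc/$d
    for i in $(seq 1 65536); do
        : > /var/spool/postfix-bulk/prealloc/$d/$i
    done
done
rm -rf /var/spool/postfix-bulk/prealloc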


* Re: Premature "No Space left on device" on XFS
  2011-10-08 12:29       ` Bernhard Schmidt
  2011-10-08 13:18         ` Christoph Hellwig
@ 2011-10-08 22:30         ` Dave Chinner
  1 sibling, 0 replies; 19+ messages in thread
From: Dave Chinner @ 2011-10-08 22:30 UTC (permalink / raw)
  To: Bernhard Schmidt; +Cc: xfs

On Sat, Oct 08, 2011 at 02:29:34PM +0200, Bernhard Schmidt wrote:
> >I'd suggest that for your workload, you need to allow at least 10GB
> >of disk space per million inodes. Because of the number of small
> >files, XFS is going to need a much larger amount of free space
> >available to prevent aging related freespace fragmentation problems.
> >The above ratio results in a maximum space usage of about 50%, which
> >will avoid such issues. If you need to hold 2 million files, use a
> >20GB filesystem...
> 
> I don't need to hold 2 million files; 1 million might be enough. I
> have to make sure I cannot run out of inodes way before I run out of
> free space.
> 
> Generally speaking I have the following problem:
> 
> External nodes are submitting data (mails) to this system as fast as
> they can. The mails can be between 800 bytes and several megabytes.
> There are 50 receivers that write those mails as single files flat in
> a single directory.
> 
> There are 4 worker threads that process a _random_ file out of this
> directory. To process it they need to be able to create a temporary
> file on the same filesystem. Together they are slower than the 50
> receivers (they can process maybe 20% of the incoming rate), which
> means that this incoming directory is going to fill. For the sake of
> the argument let's assume that the amount of mails to be sent is
> unlimited.
> 
> The only knob the software knows of to prevent this from going over is
> free disk space. When free disk space is lower than 2 Gigabyte, the
> acceptance of new mails is blocked gracefully until there is free
> space again.

You could increase this free space limit - that is likely to reduce
the incidence of too-early ENOSPC.
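
If that 2 GB threshold is Postfix's queue_minfree (an assumption about
which knob is actually in play here, as is the config directory path
below), raising it is a one-line change; the value is in bytes, ~4 GB
shown:

postconf -c /etc/postfix-bulk -e 'queue_minfree = 4294967296'
postfix -c /etc/postfix-bulk reload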

> It has, however, no way to deal with ENOSPC before that. When it
> cannot create new files due to no free inodes (ext4 with default
> settings) or fragmentation in XFS, it breaks quite horribly and
> cannot recover by itself.
> 
> Can I avoid XFS giving ENOSPC due to inode shortage even in worst
> case situations? I would be fine preallocating 1 GB for inode
> storage if that would fix the problem. ext4 with bytes-per-inode =
> blocksize does this fine.
> 
> You mentioned an aging problem with XFS. I guess you mean that an
> XFS filesystem will get slower/more fragmented over time with abuse
> like this. The mail submissions above will happen in bursts; during
> normal times it will go down to << 1000 files on the entire
> filesystem (empty incoming directory). Is this enough for XFS to
> "fix itself"?

In most cases, yes.

> BTW, the software can hash the incoming directory in 16 or 16x16
> subdirectories. Would that help XFS in any way with those filesizes?

Directory scalability is not affected by the size of the files
they index. OTOH, concurrency of operations would be improved.

That is, if you have all 2 million files in a single directory, only
one process (incoming or processing) can be modifying the directory
at a time. That will serialise a lot of the work that is being done.
If you have those 2M files hashed across 16 directories, then
modification/access collisions will be less likely hence operations
are more likely to be done in parallel (and therefore faster).
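
If the hashing knob in question is Postfix's queue hashing (an assumption,
as is the config directory path below), the 16 and 16x16 layouts would
correspond to hash_queue_depth 1 and 2 respectively, e.g.:

postconf -c /etc/postfix-bulk -e 'hash_queue_names = incoming, deferred' 'hash_queue_depth = 2'
postfix -c /etc/postfix-bulk reload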

> At first glance I would have said yes, but due to the random access
> in those directories it would still have the entire spool as
> workload.

Random directory lookups on large directories are pretty efficient
on XFS due to the btree-based name hash indexing scheme they use.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: Premature "No Space left on device" on XFS
  2011-10-08 13:18         ` Christoph Hellwig
@ 2011-10-08 22:34           ` Dave Chinner
  2011-10-09 14:46             ` Christoph Hellwig
  0 siblings, 1 reply; 19+ messages in thread
From: Dave Chinner @ 2011-10-08 22:34 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Bernhard Schmidt, xfs

On Sat, Oct 08, 2011 at 09:18:09AM -0400, Christoph Hellwig wrote:
> On Sat, Oct 08, 2011 at 02:29:34PM +0200, Bernhard Schmidt wrote:
> > Can I avoid XFS giving ENOSPC due to inode shortage even in worst
> > case situations? I would be fine preallocating 1 GB for inode
> > storage if that would fix the problem. ext4 with bytes-per-inode =
> > blocksize does this fine.
> 
> It's a bit of a hack, but you can do that fairly easily by:
> 
>  - mounting the filesystem with the ikeep option
>  - creating the number of inodes you need (use a worst-case allocation)
>    before starting the workload

Just bear in mind that this can result in all inodes being allocated
in the same AG, thereby serialising all future file creation and
deletion in the filesystem. If you take this route, using a directory
hashing scheme is, IMO, absolutely necessary.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: Premature "No Space left on device" on XFS
  2011-10-08 22:34           ` Dave Chinner
@ 2011-10-09 14:46             ` Christoph Hellwig
  0 siblings, 0 replies; 19+ messages in thread
From: Christoph Hellwig @ 2011-10-09 14:46 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Bernhard Schmidt, xfs

On Sun, Oct 09, 2011 at 09:34:04AM +1100, Dave Chinner wrote:
> Just bear in mind that this can result in all inodes being allocated
> in the same AG, thereby serialising all future file creation and
> deletion in the filesystem. If you take this route, using a directory
> hashing scheme is, IMO, absolutely necessary.

Only if you preallocate all inodes in the same directory, else you
get them spread around.  But either way the hashing scheme probably
is the much better idea.

