* maxpct option for small xfs filesystems
@ 2015-01-26 16:14 Alexander Tsvetkov
  2015-01-26 22:37 ` Dave Chinner
  0 siblings, 1 reply; 8+ messages in thread
From: Alexander Tsvetkov @ 2015-01-26 16:14 UTC (permalink / raw)
  To: xfs

Hello,

I'm trying to understand the expected behaviour of the "maxpct" option on a
small XFS filesystem by comparing the maximum percentage set by this option
with the percentage of space actually allocated to inodes, but the result of
my test case doesn't match my expectations:

[root@fedora ~]# mkfs.xfs -f -d size=16m -i maxpct=1 /dev/sdb2
[root@fedora ~]# mount /dev/sdb2 /mnt/scratch/
[root@fedora ~]# mount | grep scratch
/dev/sdb2 on /mnt/scratch type xfs 
(rw,relatime,seclabel,attr2,inode64,noquota)
[root@fedora ~]# xfs_info /mnt/scratch/
meta-data=/dev/sdb2              isize=256    agcount=1, agsize=4096 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=4096, imaxpct=1
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=853, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Then I fill the filesystem with a lot of empty files until ENOSPC is hit and
the loop breaks:

[root@fedora ~]# for i in {0..100000}; do str=$(mktemp --tmpdir=/mnt/scratch tmp.XXXXXXXXXX); echo $str; done

The filesystem is now full of the created files:

[root@fedora ~]# df -Th | grep scratch
/dev/sdb2      xfs        13M   13M  148K  99% /mnt/scratch

and from the number of inodes actually created:

[root@fedora ~]# xfs_db -c "blockget -n" -c "ncheck" /dev/sdb2 | wc -l
40512

the space allocated to file inodes works out to 9.9 MB (40512 inodes * 256
bytes each), which is about 77% of the filesystem space instead of the
defined maximum of maxpct=1%, or even the default value of 25%.

Thanks,
Alexander Tsvetkov

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


* Re: maxpct option for small xfs filesystems
  2015-01-26 16:14 maxpct option for small xfs filesystems Alexander Tsvetkov
@ 2015-01-26 22:37 ` Dave Chinner
  2015-01-27 16:23   ` Alexander Tsvetkov
  0 siblings, 1 reply; 8+ messages in thread
From: Dave Chinner @ 2015-01-26 22:37 UTC (permalink / raw)
  To: Alexander Tsvetkov; +Cc: xfs

On Mon, Jan 26, 2015 at 07:14:43PM +0300, Alexander Tsvetkov wrote:
> Hello,
> 
> I'm trying to understand the expected behaviour of "maxpct" option
> in case of small xfs filesystem
> comparing the maximum percentage defined for this option with the
> percentage of actually allocated
> inodes in filesystem, but the result of prepared test case doesn't
> correspond to the expectations:
> 
> [root@fedora ~]#mkfs.xfs -f -d size=16m -i maxpct=1 /dev/sdb2

On 3.19-rc5, immediately after mount:

# df -i /mnt/scratch
Filesystem     Inodes IUsed IFree IUse% Mounted on
/dev/ram1         640     3   637    1% /mnt/scratch

Which indicates that imaxpct=1 is being calculated correctly, before
we even look at whether it is being enforced correctly or not.

So, what kernel version?

http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F

> [root@fedora ~]# for i in {0..100000}; do str=$(mktemp
> --tmpdir=/mnt/scratch tmp.XXXXXXXXXX); echo $str; done

Which is a complex (and very slow!) way of doing:

# for i in {0..100000}; do echo > /mnt/scratch/$i ; done 2> /dev/null

> filesystem is full with created files:
> 
> [root@fedora ~]# df -Th | grep scratch
> /dev/sdb2      xfs        13M   13M  148K  99% /mnt/scratch

# df -Th /mnt/scratch
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/ram1      xfs    13M  1.1M   12M   9% /mnt/scratch
# df -i /mnt/scratch
Filesystem     Inodes IUsed IFree IUse% Mounted on
/dev/ram1         640   640     0  100% /mnt/scratch

> and from the number of actually created inodes:
> 
> [root@fedora ~]# xfs_db -c "blockget -n" -c "ncheck" /dev/sdb2 | wc -l
> 40512

That's a directory structure entry count, equivalent to 'find
/mnt/scratch | wc -l', not an allocated inode count which is what
'df -i' reports. Even so, on 3.19-rc5:

# xfs_db -c "blockget -n" -c "ncheck" /dev/ram1 | wc -l
637

which matches what 'df -i' tells us about allocated inodes and hence
imaxpct is working as expected.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: maxpct option for small xfs filesystems
  2015-01-26 22:37 ` Dave Chinner
@ 2015-01-27 16:23   ` Alexander Tsvetkov
  2015-01-27 16:31     ` Eric Sandeen
  0 siblings, 1 reply; 8+ messages in thread
From: Alexander Tsvetkov @ 2015-01-27 16:23 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs


On 01/27/2015 01:37 AM, Dave Chinner wrote:
> On Mon, Jan 26, 2015 at 07:14:43PM +0300, Alexander Tsvetkov wrote:
>> Hello,
>>
>> I'm trying to understand the expected behaviour of "maxpct" option
>> in case of small xfs filesystem
>> comparing the maximum percentage defined for this option with the
>> percentage of actually allocated
>> inodes in filesystem, but the result of prepared test case doesn't
>> correspond to the expectations:
>>
>> [root@fedora ~]#mkfs.xfs -f -d size=16m -i maxpct=1 /dev/sdb2
> On 3.19-rc5, immediately after mount:
>
> # df -i /mnt/scratch
> Filesystem     Inodes IUsed IFree IUse% Mounted on
> /dev/ram1         640     3   637    1% /mnt/scratch
>
> Which indicates that imaxpct=1 is being calculated correctly, before
> we even look at whether it is being enforced correctly or not.
>
> So, what kernel version?
>
> http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
I use Fedora 20 on a VirtualBox virtual machine, with the latest kernel
available from the Fedora repos: 3.17.8-200.fc20.x86_64, and xfsprogs-3.2.1-1.fc20.x86_64.

The /dev/sdb test storage is a fixed-size VDI image:
[root@fedora ~]# fdisk -l

Disk /dev/sda: 10.3 GiB, 11005845504 bytes, 21495792 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000011de

Device    Boot     Start       End  Blocks  Id System
/dev/sda1 *         2048   1026047  512000  83 Linux
/dev/sda2        1026048  20469759 9721856  83 Linux
/dev/sda3       20469760  21493759  512000  82 Linux swap / Solaris


Disk /dev/sdb: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00006ee7

Device    Boot     Start       End  Blocks  Id System
/dev/sdb1           2048   8390655 4194304  83 Linux
/dev/sdb2        8390656  16777215 4193280  83 Linux


>> [root@fedora ~]# for i in {0..100000}; do str=$(mktemp
>> --tmpdir=/mnt/scratch tmp.XXXXXXXXXX); echo $str; done
> Which is a complex (and very slow!) way of doing:
>
> # for i in {0..100000}; do echo > /mnt/scratch/$i ; done 2> /dev/null
>
>> filesystem is full with created files:
>>
>> [root@fedora ~]# df -Th | grep scratch
>> /dev/sdb2      xfs        13M   13M  148K  99% /mnt/scratch
> # df -Th /mnt/scratch
> Filesystem     Type  Size  Used Avail Use% Mounted on
> /dev/ram1      xfs    13M  1.1M   12M   9% /mnt/scratch
> # df -i /mnt/scratch
> Filesystem     Inodes IUsed IFree IUse% Mounted on
> /dev/ram1         640   640     0  100% /mnt/scratch
>
>> and from the number of actually created inodes:
>>
>> [root@fedora ~]# xfs_db -c "blockget -n" -c "ncheck" /dev/sdb2 | wc -l
>> 40512
> That's a directory structure entry count, equivalent to 'find
> /mnt/scratch | wc -l', not an allocated inode count which is what
> 'df -i' reports.
The xfs_db manual page says ncheck prints inode numbers, not directory
entry counts:

"ncheck [-s] [-i ino] Print name-inode pairs"

> Even so, on 3.19-rc5:
>
> # xfs_db -c "blockget -n" -c "ncheck" /dev/ram1 | wc -l
> 637
>
> which matches what 'df -i' tells us about allocated inodes and hence
> imaxpct is working as expected.
I don't get the same results. I just installed 3.19-rc6 and repeated the test:
df -i reports 640 inodes for the filesystem, but 40512 files were actually created:

[root@fedora ~]# mkfs.xfs -f -d size=16m -i maxpct=1 /dev/sdb2
meta-data=/dev/sdb2              isize=256    agcount=1, agsize=4096 blks
          =                       sectsz=512   attr=2, projid32bit=1
          =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=4096, imaxpct=1
          =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=853, version=2
          =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@fedora ~]# mount /dev/sdb2 /mnt/scratch/
fill with files until enospc...
[root@fedora ~]# df -i /mnt/scratch/
Filesystem     Inodes IUsed IFree IUse% Mounted on
/dev/sdb2         640   640     0  100% /mnt/scratch
[root@fedora ~]# df -Th /mnt/scratch/
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/sdb2      xfs    13M   13M  156K  99% /mnt/scratch
[root@fedora ~]# umount /mnt/scratch
[root@fedora ~]# xfs_db -c "blockget -n" -c "ncheck" /dev/sdb2 | wc -l
40512

Looking at the ncheck output, there are 40512 pairs reported, each with its
own unique inode number. ncheck doesn't report an inode count by definition,
but what do these 40512 reported inode numbers mean if only 640 inodes were
actually allocated? On the other hand, each new file should have its metadata
stored in a corresponding allocated inode, so for 40512 newly created files
I expect the same number of allocated inodes. Is that correct?

> Cheers,
>
> Dave.
Thanks,
Alexander Tsvetkov


* Re: maxpct option for small xfs filesystems
  2015-01-27 16:23   ` Alexander Tsvetkov
@ 2015-01-27 16:31     ` Eric Sandeen
  2015-01-27 19:15       ` Eric Sandeen
  2015-01-28 10:41       ` Alexander Tsvetkov
  0 siblings, 2 replies; 8+ messages in thread
From: Eric Sandeen @ 2015-01-27 16:31 UTC (permalink / raw)
  To: Alexander Tsvetkov, Dave Chinner; +Cc: xfs

On 1/27/15 10:23 AM, Alexander Tsvetkov wrote:

...

> I have not the same results,  just installed 3.19-rc6 and repeated the test.,
> df -i reports 640 inodes for filesystem, but actually created 40512 files:
> 
> [root@fedora ~]# mkfs.xfs -f -d size=16m -i maxpct=1 /dev/sdb2
> meta-data=/dev/sdb2              isize=256    agcount=1, agsize=4096 blks
>          =                       sectsz=512   attr=2, projid32bit=1
>          =                       crc=0        finobt=0
> data     =                       bsize=4096   blocks=4096, imaxpct=1
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
> log      =internal log           bsize=4096   blocks=853, version=2
>          =                       sectsz=512   sunit=0 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
> [root@fedora ~]# mount /dev/sdb2 /mnt/scratch/
> fill with files until enospc...
> [root@fedora ~]# df -i /mnt/scratch/
> Filesystem     Inodes IUsed IFree IUse% Mounted on
> /dev/sdb2         640   640     0  100% /mnt/scratch
> [root@fedora ~]# df -Th /mnt/scratch/
> Filesystem     Type  Size  Used Avail Use% Mounted on
> /dev/sdb2      xfs    13M   13M  156K  99% /mnt/scratch
> [root@fedora ~]# umount /mnt/scratch
> [root@fedora ~]# xfs_db -c "blockget -n" -c "ncheck" /dev/sdb2 | wc -l
> 40512

and what does df -i say after remount?

This is actually a problem with the lazy superblock counters I've run into before,
but haven't yet fixed.  This kind of workload is such that it never trips the
runtime rebalancing.


> Looking into ncheck output there are 40512 pairs reported in the output each with own unique
> inode number. ncheck doesn't report inodes count by definition, but what does these
> 40512 reported inode numbers mean if only actually 640 inodes were allocated? From another hand
> each new file should have associated meta-data in the corresponding allocated inode structure, so for
> 40512 newly created files I expect the same count of allocated inodes, is it correct?

Recheck df -i after remount, I think you will see many more than 640.

-Eric


* Re: maxpct option for small xfs filesystems
  2015-01-27 16:31     ` Eric Sandeen
@ 2015-01-27 19:15       ` Eric Sandeen
  2015-01-28 10:41       ` Alexander Tsvetkov
  1 sibling, 0 replies; 8+ messages in thread
From: Eric Sandeen @ 2015-01-27 19:15 UTC (permalink / raw)
  To: Alexander Tsvetkov, Dave Chinner; +Cc: xfs

On 1/27/15 10:31 AM, Eric Sandeen wrote:

> This is actually a problem with the lazy superblock counters I've run into before,
> but haven't yet fixed.  This kind of workload is such that it never trips the
> runtime rebalancing.

working on a fix now ;)

-Eric


* Re: maxpct option for small xfs filesystems
  2015-01-27 16:31     ` Eric Sandeen
  2015-01-27 19:15       ` Eric Sandeen
@ 2015-01-28 10:41       ` Alexander Tsvetkov
  2015-01-28 15:44         ` Eric Sandeen
  2015-01-28 18:05         ` Eric Sandeen
  1 sibling, 2 replies; 8+ messages in thread
From: Alexander Tsvetkov @ 2015-01-28 10:41 UTC (permalink / raw)
  To: Eric Sandeen, Dave Chinner; +Cc: xfs


On 01/27/2015 07:31 PM, Eric Sandeen wrote:
> On 1/27/15 10:23 AM, Alexander Tsvetkov wrote:
>
> ...
>
>> I have not the same results,  just installed 3.19-rc6 and repeated the test.,
>> df -i reports 640 inodes for filesystem, but actually created 40512 files:
>>
>> [root@fedora ~]# mkfs.xfs -f -d size=16m -i maxpct=1 /dev/sdb2
>> meta-data=/dev/sdb2              isize=256    agcount=1, agsize=4096 blks
>>           =                       sectsz=512   attr=2, projid32bit=1
>>           =                       crc=0        finobt=0
>> data     =                       bsize=4096   blocks=4096, imaxpct=1
>>           =                       sunit=0      swidth=0 blks
>> naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
>> log      =internal log           bsize=4096   blocks=853, version=2
>>           =                       sectsz=512   sunit=0 blks, lazy-count=1
>> realtime =none                   extsz=4096   blocks=0, rtextents=0
>> [root@fedora ~]# mount /dev/sdb2 /mnt/scratch/
>> fill with files until enospc...
>> [root@fedora ~]# df -i /mnt/scratch/
>> Filesystem     Inodes IUsed IFree IUse% Mounted on
>> /dev/sdb2         640   640     0  100% /mnt/scratch
>> [root@fedora ~]# df -Th /mnt/scratch/
>> Filesystem     Type  Size  Used Avail Use% Mounted on
>> /dev/sdb2      xfs    13M   13M  156K  99% /mnt/scratch
>> [root@fedora ~]# umount /mnt/scratch
>> [root@fedora ~]# xfs_db -c "blockget -n" -c "ncheck" /dev/sdb2 | wc -l
>> 40512
> and what does df -i say after remount?
Nothing changed after remount:

[root@fedora ~]# xfs_db -c "blockget -n" -c "ncheck" /dev/sdb2 | wc -l
40512
[root@fedora ~]# mount /dev/sdb2 /mnt/scratch/
[root@fedora ~]# xfs_info /mnt/scratch/
meta-data=/dev/sdb2              isize=256    agcount=1, agsize=4096 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=4096, imaxpct=1
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=853, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@fedora ~]# df -i /mnt/scratch/
Filesystem     Inodes IUsed IFree IUse% Mounted on
/dev/sdb2         640   640     0  100% /mnt/scratch
[root@fedora ~]# umount /mnt/scratch
[root@fedora ~]# xfs_db -c "blockget -n" -c "ncheck" /dev/sdb2 | wc -l
40512
[root@fedora ~]# mount /dev/sdb2 /mnt/scratch/
[root@fedora ~]# df -i /mnt/scratch/
Filesystem     Inodes IUsed IFree IUse% Mounted on
/dev/sdb2         640   640     0  100% /mnt/scratch
[root@fedora ~]# mount -o remount /dev/sdb2 /mnt/scratch/
[root@fedora ~]# df -i /mnt/scratch/
Filesystem     Inodes IUsed IFree IUse% Mounted on
/dev/sdb2         640   640     0  100% /mnt/scratch
[root@fedora ~]# umount /mnt/scratch
[root@fedora ~]# xfs_db -c "blockget -n" -c "ncheck" /dev/sdb2 | wc -l
40512

>
> This is actually a problem with the lazy superblock counters I've run into before,
> but haven't yet fixed.  This kind of workload is such that it never trips the
> runtime rebalancing.
>
>> Looking into ncheck output there are 40512 pairs reported in the output each with own unique
>> inode number. ncheck doesn't report inodes count by definition, but what does these
>> 40512 reported inode numbers mean if only actually 640 inodes were allocated? From another hand
>> each new file should have associated meta-data in the corresponding allocated inode structure, so for
>> 40512 newly created files I expect the same count of allocated inodes, is it correct?
> Recheck df -i after remount, I think you will see many more than 640.
Do you mean that ncheck reports the right number of allocated inodes rather
than df -i? If the counters are incorrect, then the maxpct limit is not
working either, because much more of the filesystem space is allocated to
inodes than the defined 1% limit.

What is the expected behaviour when the filesystem needs to allocate new
inodes beyond the defined maxpct percentage? Which error should be returned
to the user: ENOSPC, or perhaps just a warning?


>
> -Eric
Thanks,
Alexander Tsvetkov


* Re: maxpct option for small xfs filesystems
  2015-01-28 10:41       ` Alexander Tsvetkov
@ 2015-01-28 15:44         ` Eric Sandeen
  2015-01-28 18:05         ` Eric Sandeen
  1 sibling, 0 replies; 8+ messages in thread
From: Eric Sandeen @ 2015-01-28 15:44 UTC (permalink / raw)
  To: Alexander Tsvetkov, Dave Chinner; +Cc: xfs

On 1/28/15 4:41 AM, Alexander Tsvetkov wrote:

...

>> and what does df -i say after remount?
> Nothing changed after remount:

Sorry, I misremembered the problem.  :(
 
...

>>
>> This is actually a problem with the lazy superblock counters I've run into before,
>> but haven't yet fixed.  This kind of workload is such that it never trips the
>> runtime rebalancing.
>>
>>> Looking into ncheck output there are 40512 pairs reported in the output each with own unique
>>> inode number. ncheck doesn't report inodes count by definition, but what does these
>>> 40512 reported inode numbers mean if only actually 640 inodes were allocated? From another hand
>>> each new file should have associated meta-data in the corresponding allocated inode structure, so for
>>> 40512 newly created files I expect the same count of allocated inodes, is it correct?
>> Recheck df -i after remount, I think you will see many more than 640.

> Do you mean that ncheck reports right number of allocated inodes instead of df -i? If counters are incorrect then the
> maxpct limit is not working as well, because there are much more of filesystem space allocated for inodes then defined limit 1%.
> 
> What is expected behaviour in this case when it's required for fs to allocate new inodes starting to exceed defined maxpct
> percentage? Which error is expected to be returned to user, enospc or probably just some warning?

There are a couple of things going on here.

XFS superblock counters are kept in per-cpu variables for scalability; at certain times these per-cpu counts are coalesced into a global counter based on various thresholds.  The thing about the inode counter is that it generally counts up, and it's the high thresholds that we care about (in order to determine if we've hit maxpct), but it's written as one of the low-threshold counters (as with free inodes / free space).

So in your test, you're rapidly running up the counter on a single cpu, and it's not getting coalesced into the main counter, which is where the imaxpct test happens.  So that's why you're able to exceed imaxpct.

The df -i (statfs) output is wrong because of a bug in xfs_fs_statfs.  For a filesystem like XFS which dynamically allocates and deallocates inodes, it's a little tricky to answer the question "how many inodes are available?"  It's limited by free space as well as by imaxpct. There's code in there which does this:

        if (mp->m_maxicount)
                statp->f_files = min_t(typeof(statp->f_files),
                                        statp->f_files,
                                        mp->m_maxicount);

so it will never report more than mp->m_maxicount, which is essentially the imaxpct limit, even if we've actually allocated more than that.

A simple fix for this is to recognize that imaxpct is something of a soft limit (it's not critical if we overrun by a few hundred or even thousand inodes), and if the superblock counter of allocated inodes (sbp->sb_icount) exceeds mp->m_maxicount due to such an overrun, we should report that (true) value instead.

I'm working on a couple of patches which should fix both issues.

-Eric


* Re: maxpct option for small xfs filesystems
  2015-01-28 10:41       ` Alexander Tsvetkov
  2015-01-28 15:44         ` Eric Sandeen
@ 2015-01-28 18:05         ` Eric Sandeen
  1 sibling, 0 replies; 8+ messages in thread
From: Eric Sandeen @ 2015-01-28 18:05 UTC (permalink / raw)
  To: Alexander Tsvetkov, Dave Chinner; +Cc: xfs

On 1/28/15 4:41 AM, Alexander Tsvetkov wrote:
> What is expected behaviour in this case when it's required for fs to allocate new inodes starting to exceed defined maxpct
> percentage? Which error is expected to be returned to user, enospc or probably just some warning?

Sorry, I didn't answer this.  ENOSPC is expected.

There's not a lot of value in enforcing this strictly to the last inode,
because a % is fairly coarse anyway, but we should not blow right past
it as we do today.

-Eric

