* ext4 reserved blocks not enforced?
From: Ian Malone @ 2019-07-18 15:26 UTC
  To: linux-ext4

Hi,

We've got a number of ext4 filesystems on LVM logical volumes. When
they get 'full' (0 available space according to df, so a normal user
can't place any more files), or ideally slightly before, we use LVM's
lvextend command with the -r option, which invokes fsadm, which I
guess in turn uses resize2fs.
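
For reference, the grow step is essentially the following (the VG/LV
names and the size here are placeholders, not our exact invocation):

  # grow the LV by 20G and resize the ext4 filesystem in one step
  lvextend -r -L +20G /dev/vg0/research

  # equivalent two-step form: grow the LV, then resize the fs
  lvextend -L +20G /dev/vg0/research
  resize2fs /dev/vg0/research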

Recently we extended a ~1.9TB filesystem by 20GB; however, afterwards
df still reported 0 available bytes. The LV had been increased, and
running resize2fs reported that the fs was already the full size of
the device. tune2fs showed fewer free blocks than reserved blocks.
Despite this, normal users could create files on the filesystem (via
NFS) and copy several GB of data onto it without trouble. A forced
fsck found no issues, though with fragcheck enabled it reported plenty
of blocks that were not the expected length (and 9.4% non-contiguous).

Further, copying data onto it did not appear to change the free block
count reported by tune2fs, though we didn't attempt to completely fill
the drive. We've since manually lowered the reserved block count below
the free block count, in case the inverted situation was somehow
causing this behaviour.
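
For the record, the adjustment was along these lines (the device path
is a placeholder; 13107200 is the count now shown by tune2fs below):

  # set the reserved block count to an explicit number of blocks
  tune2fs -r 13107200 /dev/vg0/research

  # or set it as a percentage of the filesystem instead
  tune2fs -m 1 /dev/vg0/research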

Is this a bug? Normally, once df reports 0 available, normal users are
not able to add more data, regardless of any remaining reserved space.
If it's expected behaviour, can someone clarify what's happening?

This is the current tune2fs output; the reserved block count was
previously about 24924508, and the free block count was already around
17315627 when we lowered the reserved blocks:
tune2fs 1.42.9 (28-Dec-2013)
Filesystem volume name:   <none>
Last mounted on:          /mnt/research/ftd
Filesystem UUID:          0e4e6c45-a11b-43ff-84c8-ed21fbbde460
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index
filetype needs_recovery extent 64bit flex_bg sparse_super large_file
huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              126623744
Block count:              506472448
Reserved block count:     13107200
Free blocks:              17315627
Free inodes:              124820983
First block:              0
Block size:               4096
Fragment size:            4096
Group descriptor size:    64
Reserved GDT blocks:      795
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Flex block group size:    4096
Filesystem created:       Wed Jun 24 15:05:23 2015
Last mount time:          Thu Jul 18 13:23:59 2019
Last write time:          Thu Jul 18 13:23:59 2019
Mount count:              1
Maximum mount count:      -1
Last checked:             Thu Jul 18 13:21:25 2019
Check interval:           0 (<none>)
Lifetime writes:          12 TB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      a5f445de-e71e-4601-9014-c82b8c8ec89e
Journal backup:           inode blocks


-- 
imalone
http://ibmalone.blogspot.co.uk


* Re: ext4 reserved blocks not enforced?
From: Theodore Y. Ts'o @ 2019-07-19 18:04 UTC
  To: Ian Malone; +Cc: linux-ext4

On Thu, Jul 18, 2019 at 04:26:19PM +0100, Ian Malone wrote:
> Recently we extended a ~1.9TB filesystem by 20GB; however, afterwards
> df still reported 0 available bytes. The LV had been increased, and running
> resize2fs reported that the fs was already the full size of the
> device. tune2fs showed fewer free blocks than reserved blocks. Despite
> this, normal users could create files on the filesystem (via NFS)

It's the "via NFS" which is the issue.  The problem is that model with
NFS is that access checks are done on the client side, and the NFS
client doesn't know about ext4's reserved block policy (nor does the
NFS client have a good way of knowing how blocks are reserved, or,
without constantly requesting the free space via repeated NFS queries,
how many free blocks are availble on the server).
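
Locally, the reservation only shows up as the gap between the "free"
and "available" figures from statfs(); a quick illustration, using the
mount point from the tune2fs output above:

  # %f = free blocks, %a = blocks available to unprivileged users
  stat -f -c 'free=%f avail=%a' /mnt/research/ftd

  # df's "Avail" column is derived from the same f_bavail value
  df -B4k /mnt/research/ftd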

On the server side, the NFS server has no way of knowing whether or
not "root" was issuing the write.  It could know whether or not the
"root squash" flag is set for the export, and pass that to ext4, but
that's not currently being done.
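
The flag in question is the standard export option, e.g. in
/etc/exports on the server (the client name below is a placeholder):

  # root_squash maps uid 0 on the client to the anonymous user;
  # no_root_squash would let client root act as root on the server
  /mnt/research/ftd  client.example.com(rw,sync,root_squash)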

						- Ted
