Subject: quota problems with e2fsck -p?
From: Darrick J. Wong
Date: 2018-02-05 22:46 UTC
To: Theodore Ts'o
Cc: linux-ext4

Hi everyone,

So I was test-driving my e2scrub patches the other night and saw this:

systemd[1]: Starting Online ext4 Metadata Check for /dev/sub3_raid/storage...
e2scrub@-dev-sub3_raid-storage[9332]:   Logical volume "storage.e2scrub" created.
e2scrub@-dev-sub3_raid-storage[9332]: sub3-raid-fs: Clearing orphaned inode 6950133 (uid=1021, gid=1021, mode=040700, size=4096)
e2scrub@-dev-sub3_raid-storage[9332]: sub3-raid-fs: Clearing orphaned inode 6952084 (uid=1021, gid=1021, mode=0100600, size=8388608)
e2scrub@-dev-sub3_raid-storage[9332]: sub3-raid-fs: clean, 6835947/121307136 files, 338587593/485198848 blocks
e2scrub@-dev-sub3_raid-storage[9332]: e2fsck 1.43.9~WIP-2018-02-03 (3-Feb-2018)
e2scrub@-dev-sub3_raid-storage[9332]: Pass 1: Checking inodes, blocks, and sizes
e2scrub@-dev-sub3_raid-storage[9332]: Pass 2: Checking directory structure
e2scrub@-dev-sub3_raid-storage[9332]: Pass 3: Checking directory connectivity
e2scrub@-dev-sub3_raid-storage[9332]: Pass 4: Checking reference counts
e2scrub@-dev-sub3_raid-storage[9332]: Pass 5: Checking group summary information
e2scrub@-dev-sub3_raid-storage[9332]: [QUOTA WARNING] Usage inconsistent for ID 1021:actual (618773123072, 4395080) != expected (618781515776, 4395082)
e2scrub@-dev-sub3_raid-storage[9332]: Update quota info for quota type 0? yes
e2scrub@-dev-sub3_raid-storage[9332]: [QUOTA WARNING] Usage inconsistent for ID 1021:actual (613615316992, 4507364) != expected (613623709696, 4507366)
e2scrub@-dev-sub3_raid-storage[9332]: Update quota info for quota type 1? yes
e2scrub@-dev-sub3_raid-storage[9332]: sub3-raid-fs: ***** FILE SYSTEM WAS MODIFIED *****
e2scrub@-dev-sub3_raid-storage[9332]: sub3-raid-fs: 6835947/121307136 files (0.9% non-contiguous), 338587593/485198848 blocks
e2scrub@-dev-sub3_raid-storage[9332]: Scrub of /dev/sub3_raid/storage FAILED due to invalid snapshot.
e2scrub@-dev-sub3_raid-storage[9332]:   Logical volume "storage.e2scrub" successfully removed
systemd[1]: e2scrub@-dev-sub3_raid-storage.service: Main process exited, code=exited, status=1/FAILURE
systemd[1]: Failed to start Online ext4 Metadata Check for /dev/sub3_raid/storage.
systemd[1]: e2scrub@-dev-sub3_raid-storage.service: Unit entered failed state.
systemd[1]: e2scrub@-dev-sub3_raid-storage.service: Triggering OnFailure= dependencies.
systemd[1]: e2scrub@-dev-sub3_raid-storage.service: Failed with result 'exit-code'.

It looks like all we have to do to trigger the QUOTA WARNING is enable
quota, write a file, unlink the file (without closing it), snapshot the
fs, and then run e2fsck -p followed by e2fsck -fn on the snapshot.
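
For reference, a rough reproduction sketch of that sequence (untested; the
volume group, LV names, mount point, and sizes below are placeholders I made
up, not the devices from the log above):

    # create a small quota-enabled fs on a throwaway LV and mount it
    lvcreate -L 1g -n testfs vg0
    mkfs.ext4 -O quota /dev/vg0/testfs
    mount /dev/vg0/testfs /mnt/testfs

    # write a file, then unlink it while a descriptor is still open so
    # the inode lands on the on-disk orphan list
    dd if=/dev/zero of=/mnt/testfs/victim bs=1M count=8
    exec 3< /mnt/testfs/victim
    rm /mnt/testfs/victim

    # snapshot the fs while the orphan is still live, similar to what
    # e2scrub does, then drop the open descriptor
    lvcreate -s -L 256m -n testfs.e2scrub vg0/testfs
    exec 3<&-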

Note that first we run e2fsck to preen the filesystem, then we run it again to
see if it spots any corruption.  The first run finds two orphaned inodes and
zaps them, but because of -p it's a short run and we don't update the quota
information.  As a result, the second run triggers on the quota information
being wrong and the whole job fails.
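
The two runs against the snapshot from that sketch would then look something
like this (again, the device name is a placeholder):

    # preen: releases the orphaned inodes and exits; since the fs is
    # otherwise clean it never recomputes the quota usage
    e2fsck -p /dev/vg0/testfs.e2scrub

    # forced read-only check: recomputes per-ID usage, finds it no longer
    # matches the on-disk quota files, and exits with a nonzero status
    e2fsck -fn /dev/vg0/testfs.e2scrub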

The orphan inode processing occurs as part of check_super_block ->
release_orphan_inodes prior to pass 1, which means that we've not set up any
quota context nor read the quota data in from disk.  Given that we don't end
up checking the quota accounting at all in a preening run, I'm a little
hesitant to just plumb in code to fetch the quota info, update the info when
we recover orphans, and then write the quota info back out.  But that does
seem to be what this situation requires.

So, I punt to the list instead -- is that crazy?

--Darrick
