From: Dave Chinner <david@fromorbit.com>
To: "Arkadiusz Miśkiewicz" <arekm@maven.pl>
Cc: xfs@oss.sgi.com
Subject: Re: usrquota, prjquota mount success on 3.16rc4 but Failed to initialize disk quotas on 3.10.43
Date: Wed, 9 Jul 2014 10:32:48 +1000
Message-ID: <20140709003248.GE4453@dastard>
In-Reply-To: <201407082349.07906.arekm@maven.pl>

On Tue, Jul 08, 2014 at 11:49:07PM +0200, Arkadiusz Miśkiewicz wrote:
> On Tuesday 08 of July 2014, Arkadiusz Miśkiewicz wrote:
> > On Tuesday 08 of July 2014, Arkadiusz Miśkiewicz wrote:
> > > My broken fs
> > > http://ixion.pld-linux.org/~arekm/p2/x1/web2-home.metadump.gz
> > > thanks to "xfsprogs: fixes for 3.2.1" by Dave is now almost in good
> > > shape.
> > > 
> > > What I found interesting is that repairing it with current git xfs_repair
> > > and then mounting (-o usrquota,prjquota) on 3.16.0-rc4 gives me:
> > > 
> > > [32208.142316] XFS (sdb): Mounting V4 Filesystem
> > > [32208.205449] XFS (sdb): Ending clean mount
> > > [32208.258991] XFS (sdb): Quotacheck needed: Please wait.
> > > [32821.930437] XFS (sdb): Quotacheck: Done.
> > > 
> > > Then I umount it and mount again (same options as before) but on 3.10.43
> > > kernel:
> > > 
> > > [  111.325889] XFS (sdb): Mounting Filesystem
> > > [  111.419331] XFS (sdb): Failed to initialize disk quotas.
> > > [  111.419339] XFS (sdb): Ending clean mount
> > > 
> > > What changed in later kernels that could matter here? (So I
> > > could try to backport that change.)
> > > 
> > > For testing I mounted again on 3.16 and:
> > > [33870.472769] XFS (sdb): Mounting V4 Filesystem
> > > [33870.543539] XFS (sdb): Ending clean mount
> > > [33870.597791] XFS (sdb): Quotacheck needed: Please wait.
> > > [34484.332879] XFS (sdb): Quotacheck: Done
> > > 
> > > and again on 3.10.43:
> > > [ 1649.215390] XFS (sdb): Mounting Filesystem
> > > [ 1649.316894] XFS (sdb): Failed to initialize disk quotas.
> > > [ 1649.316902] XFS (sdb): Ending clean mount
> > 
> > Fails in xfs_iget at xfs_qm_init_quotainos():
> > 
> >                 if (XFS_IS_OQUOTA_ON(mp) &&
> >                     mp->m_sb.sb_gquotino != NULLFSINO) {
> >                         ASSERT(mp->m_sb.sb_gquotino > 0);
> >                         if ((error = xfs_iget(mp, NULL,
> >                                       mp->m_sb.sb_gquotino, 0, 0, &gip))) {
> >                                 if (uip)
> >                                         IRELE(uip);
> >                                 return XFS_ERROR(error);
> >                         }
> >                 }
> > 
> > and in xfs_iget:
> > 
> >         /* reject inode numbers outside existing AGs */
> >         if (!ino || XFS_INO_TO_AGNO(mp, ino) >= mp->m_sb.sb_agcount)
> >                 return EINVAL;
> > 
> > 
> > m_sb.sb_gquotino/ino looks to be 0
> > 
> > # xfs_db /dev/sdb -c "sb 0" -c "print" |grep quot
> > uquotino = 4077961
> > gquotino = 0
> > pquotino = 0
> > 
> > and older kernels (testing on 3.10.46 now) seem to be unable to deal with
> > this.
> > 
> > Newer kernels survive probably due to this:
> > 
> >    xfs_sb_quota_from_disk(struct xfs_sb *sbp)
> > {
> >         /*
> >          * older mkfs doesn't initialize quota inodes to NULLFSINO. This
> >          * leads to in-core values having two different values for a quota
> >          * inode to be invalid: 0 and NULLFSINO. Change it to a single
> >          * value NULLFSINO.
> >          *
> >          * Note that this change affect only the in-core values. These
> >          * values are not written back to disk unless any quota information
> >          * is written to the disk. Even in that case, sb_pquotino field is
> >          * not written to disk unless the superblock supports pquotino.
> >          */
> >         if (sbp->sb_uquotino == 0)
> >                 sbp->sb_uquotino = NULLFSINO;
> >         if (sbp->sb_gquotino == 0)
> >                 sbp->sb_gquotino = NULLFSINO;
> >         if (sbp->sb_pquotino == 0)
> >                 sbp->sb_pquotino = NULLFSINO;
> > 
> > so sb_gquotino = NULLFSINO gets set and it never reaches the error
> > condition in xfs_qm_init_quotainos()
> 
> Small summary from my observation:
> 
> - mkfs.xfs sets all three [ugp]quotino to 0 (contrary to what the comment
> above says: "older mkfs doesn't initialize quota inodes to NULLFSINO").
> New/fresh mkfs.xfs also doesn't do that. Were any intermediate versions
> actually setting NULLFSINO?

Not yet. Mainly because the handling of quota in superblocks appears
to be broadly broken in the xfsprogs code base.

> - freshly created fs; 3.10 kernel; mount image -o usrquota causes: uquotino 
> gets NEW inode; gquotino is set to null; pquotino not used (stays at 0)

That is triggered by XFS_SB_VERSION_QUOTABIT not being set, and by
xfs_qm_qino_alloc() setting the in-core quota inode values to
NULLFSINO.
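
A minimal sketch of that path, with simplified stand-ins rather than
the real kernel structures (only xfs_qm_qino_alloc() is a real name
here; everything below is illustrative):

    #include <stdint.h>

    #define NULLFSINO        ((uint64_t)-1)  /* XFS "no inode" sentinel */
    #define SB_VERSION_QUOTA (1u << 6)       /* stand-in for XFS_SB_VERSION_QUOTABIT */

    struct sb {                              /* simplified in-core superblock */
            uint32_t versionnum;
            uint16_t qflags;                 /* quota on/checked flags */
            uint64_t uquotino, gquotino, pquotino;
    };

    /*
     * First quota mount of a fresh fs: the quota feature bit is not
     * yet set, so all three in-core quota inode fields get reset to
     * NULLFSINO before the requested quota inode is allocated.
     */
    static void qino_alloc_first_time(struct sb *sb)
    {
            if (!(sb->versionnum & SB_VERSION_QUOTA)) {
                    sb->versionnum |= SB_VERSION_QUOTA;
                    sb->uquotino = NULLFSINO;
                    sb->gquotino = NULLFSINO;
                    sb->pquotino = NULLFSINO;
            }
            /* ... then allocate the inode the caller asked for ... */
    }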

> - 3.16 behaviour is different; freshly created fs; mount image -o usrquota 
> causes: uquotino gets NEW inode; gquotino/pquotino stays at 0 (while on 3.10 
> gquotino was being set to null)

Same thing, except we never write sb_gquotino back because group
quota is not enabled. That's a bug in the 3.16 kernel code that
needs fixing.
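
Sketched against the same simplified superblock as above (this is
the behaviour described, not the kernel functions themselves):

    /*
     * Read side (3.16): 0 and NULLFSINO are both normalised to
     * NULLFSINO in core, so 3.16 itself never sees the mixed
     * on-disk state.
     */
    static void quota_from_disk(struct sb *sb)
    {
            if (sb->uquotino == 0)
                    sb->uquotino = NULLFSINO;
            if (sb->gquotino == 0)
                    sb->gquotino = NULLFSINO;
            if (sb->pquotino == 0)
                    sb->pquotino = NULLFSINO;
    }

    /*
     * Write side: only the fields for enabled quota types get
     * updated, so a usrquota-only mount writes a real uquotino but
     * leaves the on-disk gquotino at 0 rather than NULLFSINO -
     * exactly the mixed state the 3.10 xfs_iget() check quoted
     * above rejects.
     */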

> - the 3.10 kernel is not able to handle the case where uquotino == value,
> gquotino == 0. For 3.10 this case is impossible / should never happen. 3.10
> expects only (uquotino == value, gquotino == null) or (uquotino == value,
> gquotino == othervalue) or (uquotino == null, gquotino == value).

Only when the XFS_SB_VERSION_QUOTABIT feature bit is set. If it is
not set, then the code will first set them to NULLFSINO as per the
above "new fs without quota" case.

> - this means: fresh fs; mount under 3.16 with -o usrquota; then try to mount
> the same fs on 3.10 but with -o usrquota,grpquota -> it will fail with
> "Failed to initialize disk quotas."

Right, that's the 3.16 issue, and so the fix will need to go to
-stable kernels.

> - 3.16 handles both cases (including uquotino == val, gquotino == 0) because
> it does the translation when reading from disk (the one cited at the
> beginning of this mail)

Right, which is necessary for the code to handle the optional
project quota correctly.

> The probable solution is to make xfs_repair set [ugp]quotino to null when it
> is 0. IMO old and new kernels should handle null properly (regardless of the
> quota feature bit state). mkfs.xfs should also be updated to set these to
> null initially.

New kernels already handle the mixed state just fine; old kernels
don't. New kernels need to be fixed so they don't leave a mixed
state behind on unmount.

Mkfs can keep doing what it currently does - it's not leaving a
mixed state behind.

Repair, well, it's just broken when it comes to quotas. It doesn't
actually check quotas, or even check the validity of the dquots in
the quota files. It doesn't leave the quota inodes in a consistent
state, nor does it handle the quota version bits appropriately.

What I think I'll do is finish the quota lobotomisation patch I have
for repair - it just nulls all the quota inodes, clears the quota
flags and marks the quota inodes as unused. It then removes all the
knowledge of quota and quota inodes from repair. This will ensure
that quotas are always in a known state after repair, and they will
always be rebuilt on the next mount.
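
In outline, again against the simplified superblock sketched above
(the intended end state, not the eventual repair patch):

    /*
     * "Lobotomised" quota handling for repair: null all the quota
     * inodes and clear the quota flags, including the "checked"
     * flags, so quotas are always rebuilt on the next quota mount.
     */
    static void repair_reset_quota(struct sb *sb)
    {
            sb->uquotino = NULLFSINO;
            sb->gquotino = NULLFSINO;
            sb->pquotino = NULLFSINO;
            sb->qflags = 0;
    }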

Note that repair currently always clears the quota checked flags,
even if there was nothing wrong with the filesystem, so that quotas
are regenerated on the next mount. Hence just trashing everything
quota related is not going to change behaviour at all.

> The last question/issue is: should the mount fail in case of the "Failed to
> initialize disk quotas." problem? Right now the user requests quotas to be
> enabled but gets a fs without quota (and the only failure indication is one
> line in dmesg).

Not according to history. Any quota mount failure results in quotas
being disabled until the problem is corrected...

> Citation:
> 23:24 < dchinner__> arekm:it's just a nasty can of worms

Yup, I have to fix multiple kernel versions as well as xfs_repair.
Not to mention it's clear that the xfsprogs change-over to support
the separate project quota inode was fundamentally broken, and so I
have to audit everything else in xfsprogs that reads and writes
superblocks and work out how to fix the problems that exist.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

