* [RFC PATCH] xfs: make sure our default quota warning limits and grace periods survive quotacheck
@ 2020-02-19  0:34 Darrick J. Wong
  2020-02-19  2:02 ` Eric Sandeen
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Darrick J. Wong @ 2020-02-19  0:34 UTC (permalink / raw)
  To: Eric Sandeen, Eryu Guan; +Cc: xfs, fstests

From: Darrick J. Wong <darrick.wong@oracle.com>

Make sure that the default quota grace period and maximum warning limits
set by the administrator survive quotacheck.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
---
This is the testcase to go with 'xfs: preserve default grace interval
during quotacheck', though Eric and I haven't figured out how we're
going to land that one...
---
 tests/xfs/913     |   69 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 tests/xfs/913.out |   13 ++++++++++
 tests/xfs/group   |    1 +
 3 files changed, 83 insertions(+)
 create mode 100755 tests/xfs/913
 create mode 100644 tests/xfs/913.out

diff --git a/tests/xfs/913 b/tests/xfs/913
new file mode 100755
index 00000000..94681b02
--- /dev/null
+++ b/tests/xfs/913
@@ -0,0 +1,69 @@
+#! /bin/bash
+# SPDX-License-Identifier: GPL-2.0-or-later
+# Copyright (c) 2020, Oracle and/or its affiliates.  All Rights Reserved.
+#
+# FS QA Test No. 913
+#
+# Make sure that the quota default grace period and maximum warning limits
+# survive quotacheck.
+
+seq=`basename $0`
+seqres=$RESULT_DIR/$seq
+echo "QA output created by $seq"
+
+here=`pwd`
+tmp=/tmp/$$
+status=1    # failure is the default!
+trap "_cleanup; exit \$status" 0 1 2 3 15
+
+_cleanup()
+{
+	cd /
+	rm -f $tmp.*
+}
+
+# get standard environment, filters and checks
+. ./common/rc
+. ./common/filter
+. ./common/quota
+
+# real QA test starts here
+_supported_fs xfs
+_supported_os Linux
+_require_quota
+
+rm -f $seqres.full
+
+# Format filesystem and set up quota limits
+_scratch_mkfs > $seqres.full
+_qmount_option "usrquota"
+_scratch_mount >> $seqres.full
+
+$XFS_QUOTA_PROG -x -c 'timer -u 300m' $SCRATCH_MNT
+$XFS_QUOTA_PROG -x -c 'state' $SCRATCH_MNT | grep 'grace time'
+_scratch_unmount
+
+# Remount and check the limits
+_scratch_mount >> $seqres.full
+$XFS_QUOTA_PROG -x -c 'state' $SCRATCH_MNT | grep 'grace time'
+_scratch_unmount
+
+# Run repair to force quota check
+_scratch_xfs_repair >> $seqres.full 2>&1
+
+# Remount (this time to run quotacheck) and check the limits.  There's a bug
+# in quotacheck where we would reset the ondisk default grace period to zero
+# while the incore copy stays at whatever was read in prior to quotacheck.
+# This will show up after the /next/ remount.
+_scratch_mount >> $seqres.full
+$XFS_QUOTA_PROG -x -c 'state' $SCRATCH_MNT | grep 'grace time'
+_scratch_unmount
+
+# Remount and check the limits
+_scratch_mount >> $seqres.full
+$XFS_QUOTA_PROG -x -c 'state' $SCRATCH_MNT | grep 'grace time'
+_scratch_unmount
+
+# success, all done
+status=0
+exit
diff --git a/tests/xfs/913.out b/tests/xfs/913.out
new file mode 100644
index 00000000..ee989388
--- /dev/null
+++ b/tests/xfs/913.out
@@ -0,0 +1,13 @@
+QA output created by 913
+Blocks grace time: [0 days 05:00:00]
+Inodes grace time: [0 days 05:00:00]
+Realtime Blocks grace time: [0 days 05:00:00]
+Blocks grace time: [0 days 05:00:00]
+Inodes grace time: [0 days 05:00:00]
+Realtime Blocks grace time: [0 days 05:00:00]
+Blocks grace time: [0 days 05:00:00]
+Inodes grace time: [0 days 05:00:00]
+Realtime Blocks grace time: [0 days 05:00:00]
+Blocks grace time: [0 days 05:00:00]
+Inodes grace time: [0 days 05:00:00]
+Realtime Blocks grace time: [0 days 05:00:00]
diff --git a/tests/xfs/group b/tests/xfs/group
index 056072fb..87b3c75d 100644
--- a/tests/xfs/group
+++ b/tests/xfs/group
@@ -539,4 +539,5 @@
 910 auto quick inobtcount
 911 auto quick bigtime
 912 auto quick label
+913 auto quick quota
 997 auto quick mount


* Re: [RFC PATCH] xfs: make sure our default quota warning limits and grace periods survive quotacheck
  2020-02-19  0:34 [RFC PATCH] xfs: make sure our default quota warning limits and grace periods survive quotacheck Darrick J. Wong
@ 2020-02-19  2:02 ` Eric Sandeen
  2020-02-20  4:31 ` Zorro Lang
  2020-05-08 21:12 ` Eric Sandeen
  2 siblings, 0 replies; 5+ messages in thread
From: Eric Sandeen @ 2020-02-19  2:02 UTC (permalink / raw)
  To: Darrick J. Wong, Eric Sandeen, Eryu Guan; +Cc: xfs, fstests

On 2/18/20 6:34 PM, Darrick J. Wong wrote:
> From: Darrick J. Wong <darrick.wong@oracle.com>
> 
> Make sure that the default quota grace period and maximum warning limits
> set by the administrator survive quotacheck.
> 
> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> ---
> This is the testcase to go with 'xfs: preserve default grace interval
> during quotacheck', though Eric and I haven't figured out how we're
> going to land that one...

This could probably be a generic test, though I don't know how we're supposed
to handle the test matrix of all the different versions, types, and applications
that can implement or manage quota... I guess since it's an xfs-specific fix,
an xfs-specific test is reasonable.
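FWIW, if someone does want the generic flavor, I think it mostly boils down
to swapping the xfs_quota calls for the generic quota-tools commands, roughly
like this (untested sketch; the setquota/repquota invocations and their output
format are assumptions on my part, not something I've checked against the
helpers in common/quota):

        # sketch: set the default user block/inode grace periods to 5 hours
        # (18000 seconds) with quota-tools instead of xfs_quota
        setquota -t -u 18000 18000 $SCRATCH_MNT

        # sketch: report the default grace times; the output would still need
        # a filter before it could match a golden .out file
        repquota -u $SCRATCH_MNT | grep -i 'grace time'

The xfs_repair step is the awkward part, though, since forcing a quota
re-scan is pretty filesystem-specific.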

> ---
>  tests/xfs/913     |   69 +++++++++++++++++++++++++++++++++++++++++++++++++++++
>  tests/xfs/913.out |   13 ++++++++++
>  tests/xfs/group   |    1 +
>  3 files changed, 83 insertions(+)
>  create mode 100755 tests/xfs/913
>  create mode 100644 tests/xfs/913.out
> 
> diff --git a/tests/xfs/913 b/tests/xfs/913
> new file mode 100755
> index 00000000..94681b02
> --- /dev/null
> +++ b/tests/xfs/913
> @@ -0,0 +1,69 @@
> +#! /bin/bash
> +# SPDX-License-Identifier: GPL-2.0-or-later
> +# Copyright (c) 2020, Oracle and/or its affiliates.  All Rights Reserved.
> +#
> +# FS QA Test No. 913
> +#
> +# Make sure that the quota default grace period and maximum warning limits
> +# survive quotacheck.
> +
> +seq=`basename $0`
> +seqres=$RESULT_DIR/$seq
> +echo "QA output created by $seq"
> +
> +here=`pwd`
> +tmp=/tmp/$$
> +status=1    # failure is the default!
> +trap "_cleanup; exit \$status" 0 1 2 3 15
> +
> +_cleanup()
> +{
> +	cd /
> +	rm -f $tmp.*
> +}
> +
> +# get standard environment, filters and checks
> +. ./common/rc
> +. ./common/filter
> +. ./common/quota
> +
> +# real QA test starts here
> +_supported_fs xfs
> +_supported_os Linux
> +_require_quota
> +
> +rm -f $seqres.full
> +
> +# Format filesystem and set up quota limits
> +_scratch_mkfs > $seqres.full
> +_qmount_option "usrquota"
> +_scratch_mount >> $seqres.full
> +
> +$XFS_QUOTA_PROG -x -c 'timer -u 300m' $SCRATCH_MNT
> +$XFS_QUOTA_PROG -x -c 'state' $SCRATCH_MNT | grep 'grace time'
> +_scratch_unmount

OK, as long as you're only doing one quota type, and it's the user quota
type, this should survive future changes.

Hm, actually I wonder if it'd be best to explicitly call "state -u"
because state_f() does:

        if (!type)
                type = XFS_USER_QUOTA | XFS_GROUP_QUOTA | XFS_PROJ_QUOTA;

and that bitmasked type eventually gets fed to xfsquotactl -> xtype_to_qtype,
which really only handles one quota type, not a bitmask; if more than one bit
is set it gives up and returns zero, which magically happens to be
XFS_USER_QUOTA.  (whee!)

tl;dr: "state -u" might be more deterministic in the long run.

> +
> +# Remount and check the limits
> +_scratch_mount >> $seqres.full
> +$XFS_QUOTA_PROG -x -c 'state' $SCRATCH_MNT | grep 'grace time'
> +_scratch_unmount
> +
> +# Run repair to force quota check
> +_scratch_xfs_repair >> $seqres.full 2>&1
> +
> +# Remount (this time to run quotacheck) and check the limits.  There's a bug
> +# in quotacheck where we would reset the ondisk default grace period to zero
> +# while the incore copy stays at whatever was read in prior to quotacheck.
> +# This will show up after the /next/ remount.
> +_scratch_mount >> $seqres.full
> +$XFS_QUOTA_PROG -x -c 'state' $SCRATCH_MNT | grep 'grace time'
> +_scratch_unmount
> +
> +# Remount and check the limits
> +_scratch_mount >> $seqres.full
> +$XFS_QUOTA_PROG -x -c 'state' $SCRATCH_MNT | grep 'grace time'
> +_scratch_unmount
> +
> +# success, all done
> +status=0
> +exit
> diff --git a/tests/xfs/913.out b/tests/xfs/913.out
> new file mode 100644
> index 00000000..ee989388
> --- /dev/null
> +++ b/tests/xfs/913.out
> @@ -0,0 +1,13 @@
> +QA output created by 913
> +Blocks grace time: [0 days 05:00:00]
> +Inodes grace time: [0 days 05:00:00]
> +Realtime Blocks grace time: [0 days 05:00:00]
> +Blocks grace time: [0 days 05:00:00]
> +Inodes grace time: [0 days 05:00:00]
> +Realtime Blocks grace time: [0 days 05:00:00]
> +Blocks grace time: [0 days 05:00:00]
> +Inodes grace time: [0 days 05:00:00]
> +Realtime Blocks grace time: [0 days 05:00:00]
> +Blocks grace time: [0 days 05:00:00]
> +Inodes grace time: [0 days 05:00:00]
> +Realtime Blocks grace time: [0 days 05:00:00]
> diff --git a/tests/xfs/group b/tests/xfs/group
> index 056072fb..87b3c75d 100644
> --- a/tests/xfs/group
> +++ b/tests/xfs/group
> @@ -539,4 +539,5 @@
>  910 auto quick inobtcount
>  911 auto quick bigtime
>  912 auto quick label
> +913 auto quick quota
>  997 auto quick mount
> 


* Re: [RFC PATCH] xfs: make sure our default quota warning limits and grace periods survive quotacheck
  2020-02-20  4:31 ` Zorro Lang
@ 2020-02-20  4:29   ` Darrick J. Wong
  0 siblings, 0 replies; 5+ messages in thread
From: Darrick J. Wong @ 2020-02-20  4:29 UTC (permalink / raw)
  To: xfs, fstests

On Thu, Feb 20, 2020 at 12:31:44PM +0800, Zorro Lang wrote:
> On Tue, Feb 18, 2020 at 04:34:23PM -0800, Darrick J. Wong wrote:
> > From: Darrick J. Wong <darrick.wong@oracle.com>
> > 
> > Make sure that the default quota grace period and maximum warning limits
> > set by the administrator survive quotacheck.
> > 
> > Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> > ---
> > This is the testcase to go with 'xfs: preserve default grace interval
> > during quotacheck', though Eric and I haven't figured out how we're
> > going to land that one...
> > ---
> >  tests/xfs/913     |   69 +++++++++++++++++++++++++++++++++++++++++++++++++++++
> >  tests/xfs/913.out |   13 ++++++++++
> >  tests/xfs/group   |    1 +
> >  3 files changed, 83 insertions(+)
> >  create mode 100755 tests/xfs/913
> >  create mode 100644 tests/xfs/913.out
> > 
> > diff --git a/tests/xfs/913 b/tests/xfs/913
> 
> Hi,
> 
> Can "_require_xfs_quota_foreign" help this case to be a generic case?
> 
> > new file mode 100755
> > index 00000000..94681b02
> > --- /dev/null
> > +++ b/tests/xfs/913
> > @@ -0,0 +1,69 @@
> > +#! /bin/bash
> > +# SPDX-License-Identifier: GPL-2.0-or-later
> > +# Copyright (c) 2020, Oracle and/or its affiliates.  All Rights Reserved.
> > +#
> > +# FS QA Test No. 913
> > +#
> > +# Make sure that the quota default grace period and maximum warning limits
> > +# survive quotacheck.
> > +
> > +seq=`basename $0`
> > +seqres=$RESULT_DIR/$seq
> > +echo "QA output created by $seq"
> > +
> > +here=`pwd`
> > +tmp=/tmp/$$
> > +status=1    # failure is the default!
> > +trap "_cleanup; exit \$status" 0 1 2 3 15
> > +
> > +_cleanup()
> > +{
> > +	cd /
> > +	rm -f $tmp.*
> > +}
> > +
> > +# get standard environment, filters and checks
> > +. ./common/rc
> > +. ./common/filter
> > +. ./common/quota
> > +
> > +# real QA test starts here
> > +_supported_fs xfs
> > +_supported_os Linux
> > +_require_quota
> > +
> > +rm -f $seqres.full
> > +
> > +# Format filesystem and set up quota limits
> > +_scratch_mkfs > $seqres.full
> > +_qmount_option "usrquota"
> > +_scratch_mount >> $seqres.full
> > +
> > +$XFS_QUOTA_PROG -x -c 'timer -u 300m' $SCRATCH_MNT
> > +$XFS_QUOTA_PROG -x -c 'state' $SCRATCH_MNT | grep 'grace time'
> > +_scratch_unmount
> > +
> > +# Remount and check the limits
> > +_scratch_mount >> $seqres.full
> > +$XFS_QUOTA_PROG -x -c 'state' $SCRATCH_MNT | grep 'grace time'
> > +_scratch_unmount
> > +
> > +# Run repair to force quota check
> > +_scratch_xfs_repair >> $seqres.full 2>&1
> 
> I've sent a case that looks like it does a similar test to this one:
>   [PATCH 1/2] generic: per-type quota timers set/get test
> 
> But it doesn't do an fsck before the cycle mount. And ...[below]
> 
> > +
> > +# Remount (this time to run quotacheck) and check the limits.  There's a bug
> > +# in quotacheck where we would reset the ondisk default grace period to zero
> > +# while the incore copy stays at whatever was read in prior to quotacheck.
> > +# This will show up after the /next/ remount.
> > +_scratch_mount >> $seqres.full
> > +$XFS_QUOTA_PROG -x -c 'state' $SCRATCH_MNT | grep 'grace time'
> > +_scratch_unmount
> > +
> > +# Remount and check the limits
> > +_scratch_mount >> $seqres.full
> > +$XFS_QUOTA_PROG -x -c 'state' $SCRATCH_MNT | grep 'grace time'
> > +_scratch_unmount
> 
> It doesn't do two cycle mounts either. Do you think the fsck is necessary?

This test is looking for a bug in quotacheck, so I use xfs_repair to
force a quotacheck.
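
To spell out the sequence the test relies on (sketch only; my understanding
is that repair clears the quota-checked state on disk, which is what makes
the next mount rerun quotacheck):

        # force quotacheck on the next mount
        _scratch_xfs_repair >> $seqres.full 2>&1

        # quotacheck runs here; the buggy version zeroes the ondisk default
        # grace period while the incore copy keeps the old value
        _scratch_mount >> $seqres.full
        _scratch_unmount

        # only now do we read the zeroed ondisk value back in, so this is
        # where the regression shows up in the output
        _scratch_mount >> $seqres.full
        $XFS_QUOTA_PROG -x -c 'state' $SCRATCH_MNT | grep 'grace time'
        _scratch_unmount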

> And do you think these two cases can be merged into one case?

<shrug> Probably.  I don't see a problem in having one testcase poke at
related problems, and it can always come after the bits that are already
in the growing pile of quota tests (see the one that Eric sent...)

--D

> Thanks,
> Zorro
> 
> > +
> > +# success, all done
> > +status=0
> > +exit
> > diff --git a/tests/xfs/913.out b/tests/xfs/913.out
> > new file mode 100644
> > index 00000000..ee989388
> > --- /dev/null
> > +++ b/tests/xfs/913.out
> > @@ -0,0 +1,13 @@
> > +QA output created by 913
> > +Blocks grace time: [0 days 05:00:00]
> > +Inodes grace time: [0 days 05:00:00]
> > +Realtime Blocks grace time: [0 days 05:00:00]
> > +Blocks grace time: [0 days 05:00:00]
> > +Inodes grace time: [0 days 05:00:00]
> > +Realtime Blocks grace time: [0 days 05:00:00]
> > +Blocks grace time: [0 days 05:00:00]
> > +Inodes grace time: [0 days 05:00:00]
> > +Realtime Blocks grace time: [0 days 05:00:00]
> > +Blocks grace time: [0 days 05:00:00]
> > +Inodes grace time: [0 days 05:00:00]
> > +Realtime Blocks grace time: [0 days 05:00:00]
> > diff --git a/tests/xfs/group b/tests/xfs/group
> > index 056072fb..87b3c75d 100644
> > --- a/tests/xfs/group
> > +++ b/tests/xfs/group
> > @@ -539,4 +539,5 @@
> >  910 auto quick inobtcount
> >  911 auto quick bigtime
> >  912 auto quick label
> > +913 auto quick quota
> >  997 auto quick mount
> > 
> 


* Re: [RFC PATCH] xfs: make sure our default quota warning limits and grace periods survive quotacheck
  2020-02-19  0:34 [RFC PATCH] xfs: make sure our default quota warning limits and grace periods survive quotacheck Darrick J. Wong
  2020-02-19  2:02 ` Eric Sandeen
@ 2020-02-20  4:31 ` Zorro Lang
  2020-02-20  4:29   ` Darrick J. Wong
  2020-05-08 21:12 ` Eric Sandeen
  2 siblings, 1 reply; 5+ messages in thread
From: Zorro Lang @ 2020-02-20  4:31 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: xfs, fstests

On Tue, Feb 18, 2020 at 04:34:23PM -0800, Darrick J. Wong wrote:
> From: Darrick J. Wong <darrick.wong@oracle.com>
> 
> Make sure that the default quota grace period and maximum warning limits
> set by the administrator survive quotacheck.
> 
> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> ---
> This is the testcase to go with 'xfs: preserve default grace interval
> during quotacheck', though Eric and I haven't figured out how we're
> going to land that one...
> ---
>  tests/xfs/913     |   69 +++++++++++++++++++++++++++++++++++++++++++++++++++++
>  tests/xfs/913.out |   13 ++++++++++
>  tests/xfs/group   |    1 +
>  3 files changed, 83 insertions(+)
>  create mode 100755 tests/xfs/913
>  create mode 100644 tests/xfs/913.out
> 
> diff --git a/tests/xfs/913 b/tests/xfs/913

Hi,

Can "_require_xfs_quota_foreign" help this case to be a generic case?

> new file mode 100755
> index 00000000..94681b02
> --- /dev/null
> +++ b/tests/xfs/913
> @@ -0,0 +1,69 @@
> +#! /bin/bash
> +# SPDX-License-Identifier: GPL-2.0-or-later
> +# Copyright (c) 2020, Oracle and/or its affiliates.  All Rights Reserved.
> +#
> +# FS QA Test No. 913
> +#
> +# Make sure that the quota default grace period and maximum warning limits
> +# survive quotacheck.
> +
> +seq=`basename $0`
> +seqres=$RESULT_DIR/$seq
> +echo "QA output created by $seq"
> +
> +here=`pwd`
> +tmp=/tmp/$$
> +status=1    # failure is the default!
> +trap "_cleanup; exit \$status" 0 1 2 3 15
> +
> +_cleanup()
> +{
> +	cd /
> +	rm -f $tmp.*
> +}
> +
> +# get standard environment, filters and checks
> +. ./common/rc
> +. ./common/filter
> +. ./common/quota
> +
> +# real QA test starts here
> +_supported_fs xfs
> +_supported_os Linux
> +_require_quota
> +
> +rm -f $seqres.full
> +
> +# Format filesystem and set up quota limits
> +_scratch_mkfs > $seqres.full
> +_qmount_option "usrquota"
> +_scratch_mount >> $seqres.full
> +
> +$XFS_QUOTA_PROG -x -c 'timer -u 300m' $SCRATCH_MNT
> +$XFS_QUOTA_PROG -x -c 'state' $SCRATCH_MNT | grep 'grace time'
> +_scratch_unmount
> +
> +# Remount and check the limits
> +_scratch_mount >> $seqres.full
> +$XFS_QUOTA_PROG -x -c 'state' $SCRATCH_MNT | grep 'grace time'
> +_scratch_unmount
> +
> +# Run repair to force quota check
> +_scratch_xfs_repair >> $seqres.full 2>&1

I've sent a case that looks like it does a similar test to this one:
  [PATCH 1/2] generic: per-type quota timers set/get test

But it doesn't do an fsck before the cycle mount. And ...[below]

> +
> +# Remount (this time to run quotacheck) and check the limits.  There's a bug
> +# in quotacheck where we would reset the ondisk default grace period to zero
> +# while the incore copy stays at whatever was read in prior to quotacheck.
> +# This will show up after the /next/ remount.
> +_scratch_mount >> $seqres.full
> +$XFS_QUOTA_PROG -x -c 'state' $SCRATCH_MNT | grep 'grace time'
> +_scratch_unmount
> +
> +# Remount and check the limits
> +_scratch_mount >> $seqres.full
> +$XFS_QUOTA_PROG -x -c 'state' $SCRATCH_MNT | grep 'grace time'
> +_scratch_unmount

It doesn't do two cycle mounts either. Do you think the fsck is necessary?
And do you think these two cases can be merged into one case?

Thanks,
Zorro

> +
> +# success, all done
> +status=0
> +exit
> diff --git a/tests/xfs/913.out b/tests/xfs/913.out
> new file mode 100644
> index 00000000..ee989388
> --- /dev/null
> +++ b/tests/xfs/913.out
> @@ -0,0 +1,13 @@
> +QA output created by 913
> +Blocks grace time: [0 days 05:00:00]
> +Inodes grace time: [0 days 05:00:00]
> +Realtime Blocks grace time: [0 days 05:00:00]
> +Blocks grace time: [0 days 05:00:00]
> +Inodes grace time: [0 days 05:00:00]
> +Realtime Blocks grace time: [0 days 05:00:00]
> +Blocks grace time: [0 days 05:00:00]
> +Inodes grace time: [0 days 05:00:00]
> +Realtime Blocks grace time: [0 days 05:00:00]
> +Blocks grace time: [0 days 05:00:00]
> +Inodes grace time: [0 days 05:00:00]
> +Realtime Blocks grace time: [0 days 05:00:00]
> diff --git a/tests/xfs/group b/tests/xfs/group
> index 056072fb..87b3c75d 100644
> --- a/tests/xfs/group
> +++ b/tests/xfs/group
> @@ -539,4 +539,5 @@
>  910 auto quick inobtcount
>  911 auto quick bigtime
>  912 auto quick label
> +913 auto quick quota
>  997 auto quick mount
> 



* Re: [RFC PATCH] xfs: make sure our default quota warning limits and grace periods survive quotacheck
  2020-02-19  0:34 [RFC PATCH] xfs: make sure our default quota warning limits and grace periods survive quotacheck Darrick J. Wong
  2020-02-19  2:02 ` Eric Sandeen
  2020-02-20  4:31 ` Zorro Lang
@ 2020-05-08 21:12 ` Eric Sandeen
  2 siblings, 0 replies; 5+ messages in thread
From: Eric Sandeen @ 2020-05-08 21:12 UTC (permalink / raw)
  To: Darrick J. Wong, Eric Sandeen, Eryu Guan; +Cc: xfs, fstests

On 2/18/20 6:34 PM, Darrick J. Wong wrote:
> From: Darrick J. Wong <darrick.wong@oracle.com>
> 
> Make sure that the default quota grace period and maximum warning limits
> set by the administrator survive quotacheck.
> 
> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> ---
> This is the testcase to go with 'xfs: preserve default grace interval
> during quotacheck', though Eric and I haven't figured out how we're
> going to land that one...

<it landed>

This looks fine to me and it works.  Sorry for derailing it by thinking it
didn't work; that turned out to be a bug in my patch stack.  :(

Reviewed-by: Eric Sandeen <sandeen@redhat.com>

> ---
>  tests/xfs/913     |   69 +++++++++++++++++++++++++++++++++++++++++++++++++++++
>  tests/xfs/913.out |   13 ++++++++++
>  tests/xfs/group   |    1 +
>  3 files changed, 83 insertions(+)
>  create mode 100755 tests/xfs/913
>  create mode 100644 tests/xfs/913.out
> 
> diff --git a/tests/xfs/913 b/tests/xfs/913
> new file mode 100755
> index 00000000..94681b02
> --- /dev/null
> +++ b/tests/xfs/913
> @@ -0,0 +1,69 @@
> +#! /bin/bash
> +# SPDX-License-Identifier: GPL-2.0-or-later
> +# Copyright (c) 2020, Oracle and/or its affiliates.  All Rights Reserved.
> +#
> +# FS QA Test No. 913
> +#
> +# Make sure that the quota default grace period and maximum warning limits
> +# survive quotacheck.
> +
> +seq=`basename $0`
> +seqres=$RESULT_DIR/$seq
> +echo "QA output created by $seq"
> +
> +here=`pwd`
> +tmp=/tmp/$$
> +status=1    # failure is the default!
> +trap "_cleanup; exit \$status" 0 1 2 3 15
> +
> +_cleanup()
> +{
> +	cd /
> +	rm -f $tmp.*
> +}
> +
> +# get standard environment, filters and checks
> +. ./common/rc
> +. ./common/filter
> +. ./common/quota
> +
> +# real QA test starts here
> +_supported_fs xfs
> +_supported_os Linux
> +_require_quota
> +
> +rm -f $seqres.full
> +
> +# Format filesystem and set up quota limits
> +_scratch_mkfs > $seqres.full
> +_qmount_option "usrquota"
> +_scratch_mount >> $seqres.full
> +
> +$XFS_QUOTA_PROG -x -c 'timer -u 300m' $SCRATCH_MNT
> +$XFS_QUOTA_PROG -x -c 'state' $SCRATCH_MNT | grep 'grace time'
> +_scratch_unmount
> +
> +# Remount and check the limits
> +_scratch_mount >> $seqres.full
> +$XFS_QUOTA_PROG -x -c 'state' $SCRATCH_MNT | grep 'grace time'
> +_scratch_unmount
> +
> +# Run repair to force quota check
> +_scratch_xfs_repair >> $seqres.full 2>&1
> +
> +# Remount (this time to run quotacheck) and check the limits.  There's a bug
> +# in quotacheck where we would reset the ondisk default grace period to zero
> +# while the incore copy stays at whatever was read in prior to quotacheck.
> +# This will show up after the /next/ remount.
> +_scratch_mount >> $seqres.full
> +$XFS_QUOTA_PROG -x -c 'state' $SCRATCH_MNT | grep 'grace time'
> +_scratch_unmount
> +
> +# Remount and check the limits
> +_scratch_mount >> $seqres.full
> +$XFS_QUOTA_PROG -x -c 'state' $SCRATCH_MNT | grep 'grace time'
> +_scratch_unmount
> +
> +# success, all done
> +status=0
> +exit
> diff --git a/tests/xfs/913.out b/tests/xfs/913.out
> new file mode 100644
> index 00000000..ee989388
> --- /dev/null
> +++ b/tests/xfs/913.out
> @@ -0,0 +1,13 @@
> +QA output created by 913
> +Blocks grace time: [0 days 05:00:00]
> +Inodes grace time: [0 days 05:00:00]
> +Realtime Blocks grace time: [0 days 05:00:00]
> +Blocks grace time: [0 days 05:00:00]
> +Inodes grace time: [0 days 05:00:00]
> +Realtime Blocks grace time: [0 days 05:00:00]
> +Blocks grace time: [0 days 05:00:00]
> +Inodes grace time: [0 days 05:00:00]
> +Realtime Blocks grace time: [0 days 05:00:00]
> +Blocks grace time: [0 days 05:00:00]
> +Inodes grace time: [0 days 05:00:00]
> +Realtime Blocks grace time: [0 days 05:00:00]
> diff --git a/tests/xfs/group b/tests/xfs/group
> index 056072fb..87b3c75d 100644
> --- a/tests/xfs/group
> +++ b/tests/xfs/group
> @@ -539,4 +539,5 @@
>  910 auto quick inobtcount
>  911 auto quick bigtime
>  912 auto quick label
> +913 auto quick quota
>  997 auto quick mount
> 

