* [PATCH v2] xfs: test xfsdump when an inode < root inode is present
@ 2022-04-08 17:08 Zorro Lang
  2022-04-09  4:00 ` Darrick J. Wong
  0 siblings, 1 reply; 7+ messages in thread
From: Zorro Lang @ 2022-04-08 17:08 UTC (permalink / raw)
  To: fstests

From: Eric Sandeen <sandeen@redhat.com>

This tests a longstanding bug where xfsdumps are not properly
created when an inode is present on the filesystem which has
a lower number than the root inode.

Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: Zorro Lang <zlang@redhat.com>
---

Friendly ping for reviewing.

Thanks,
Zorro

 common/dump       |  1 +
 tests/xfs/543     | 63 +++++++++++++++++++++++++++++++++++++++++++++++
 tests/xfs/543.out | 47 +++++++++++++++++++++++++++++++++++
 3 files changed, 111 insertions(+)
 create mode 100755 tests/xfs/543
 create mode 100644 tests/xfs/543.out

diff --git a/common/dump b/common/dump
index ea16d442..ab48cd13 100644
--- a/common/dump
+++ b/common/dump
@@ -219,6 +219,7 @@ _require_tape()
 
 _wipe_fs()
 {
+    [[ "$WIPE_FS" = "no" ]] && return
     _require_scratch
 
     _scratch_mkfs_xfs >>$seqres.full || _fail "mkfs failed"
diff --git a/tests/xfs/543 b/tests/xfs/543
new file mode 100755
index 00000000..4e3c823a
--- /dev/null
+++ b/tests/xfs/543
@@ -0,0 +1,63 @@
+#! /bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (c) 2022 Red Hat, Inc. All Rights Reserved.
+#
+# FS QA Test 543
+#
+# Create a filesystem which contains an inode with a lower number
+# than the root inode. Ensure that xfsdump/xfsrestore handles this.
+#
+. ./common/preamble
+_begin_fstest auto quick dump
+
+# Import common functions.
+. ./common/dump
+
+_supported_fs xfs
+_require_scratch
+
+# A large stripe unit will put the root inode out quite far
+# due to alignment, leaving free blocks ahead of it.
+_scratch_mkfs_xfs -d sunit=1024,swidth=1024 > $seqres.full 2>&1
+
+# Mounting /without/ a stripe should allow inodes to be allocated
+# in lower free blocks, without the stripe alignment.
+_scratch_mount -o sunit=0,swidth=0
+
+root_inum=$(stat -c %i $SCRATCH_MNT)
+
+# Consume space after the root inode so that the blocks before
+# root look "close" for the next inode chunk allocation
+$XFS_IO_PROG -f -c "falloc 0 16m" $SCRATCH_MNT/fillfile
+
+# And make a bunch of inodes until we (hopefully) get one lower
+# than root, in a new inode chunk.
+echo "root_inum: $root_inum" >> $seqres.full
+for i in $(seq 0 4096) ; do
+	fname=$SCRATCH_MNT/$(printf "FILE_%03d" $i)
+	touch $fname
+	inum=$(stat -c "%i" $fname)
+	[[ $inum -lt $root_inum ]] && break
+done
+
+echo "created: $inum" >> $seqres.full
+
+[[ $inum -lt $root_inum ]] || _notrun "Could not set up test"
+
+# Now try a dump and restore. Cribbed from xfs/068
+WIPE_FS="no"
+_create_dumpdir_stress
+
+echo -n "Before: " >> $seqres.full
+_count_dumpdir_files | tee $tmp.before >> $seqres.full
+
+# filter out the file count, it changes as fsstress adds new operations
+_do_dump_restore | sed -e "/entries processed$/s/[0-9][0-9]*/NUM/g"
+
+echo -n "After: " >> $seqres.full
+_count_restoredir_files | tee $tmp.after >> $seqres.full
+diff -u $tmp.before $tmp.after
+
+# success, all done
+status=0
+exit
diff --git a/tests/xfs/543.out b/tests/xfs/543.out
new file mode 100644
index 00000000..a5224aaf
--- /dev/null
+++ b/tests/xfs/543.out
@@ -0,0 +1,47 @@
+QA output created by 543
+Creating directory system to dump using fsstress.
+
+-----------------------------------------------
+fsstress : -f link=10 -f creat=10 -f mkdir=10 -f truncate=5 -f symlink=10
+-----------------------------------------------
+xfsdump|xfsrestore ...
+xfsdump  -s DUMP_SUBDIR - SCRATCH_MNT | xfsrestore  - RESTORE_DIR
+xfsrestore: using file dump (drive_simple) strategy
+xfsrestore: searching media for dump
+xfsrestore: examining media file 0
+xfsrestore: dump description: 
+xfsrestore: hostname: HOSTNAME
+xfsrestore: mount point: SCRATCH_MNT
+xfsrestore: volume: SCRATCH_DEV
+xfsrestore: session time: TIME
+xfsrestore: level: 0
+xfsrestore: session label: ""
+xfsrestore: media label: ""
+xfsrestore: file system ID: ID
+xfsrestore: session id: ID
+xfsrestore: media ID: ID
+xfsrestore: searching media for directory dump
+xfsrestore: reading directories
+xfsrestore: NUM directories and NUM entries processed
+xfsrestore: directory post-processing
+xfsrestore: restoring non-directory files
+xfsrestore: restore complete: SECS seconds elapsed
+xfsrestore: Restore Status: SUCCESS
+xfsdump: using file dump (drive_simple) strategy
+xfsdump: level 0 dump of HOSTNAME:SCRATCH_MNT
+xfsdump: dump date: DATE
+xfsdump: session id: ID
+xfsdump: session label: ""
+xfsdump: ino map <PHASES>
+xfsdump: ino map construction complete
+xfsdump: estimated dump size: NUM bytes
+xfsdump: /var/xfsdump/inventory created
+xfsdump: creating dump session media file 0 (media 0, file 0)
+xfsdump: dumping ino map
+xfsdump: dumping directories
+xfsdump: dumping non-directory files
+xfsdump: ending media file
+xfsdump: media file size NUM bytes
+xfsdump: dump size (non-dir files) : NUM bytes
+xfsdump: dump complete: SECS seconds elapsed
+xfsdump: Dump Status: SUCCESS
-- 
2.31.1



* Re: [PATCH v2] xfs: test xfsdump when an inode < root inode is present
  2022-04-08 17:08 [PATCH v2] xfs: test xfsdump when an inode < root inode is present Zorro Lang
@ 2022-04-09  4:00 ` Darrick J. Wong
  2022-04-10 12:11   ` Zorro Lang
  0 siblings, 1 reply; 7+ messages in thread
From: Darrick J. Wong @ 2022-04-09  4:00 UTC (permalink / raw)
  To: Zorro Lang; +Cc: fstests

On Sat, Apr 09, 2022 at 01:08:47AM +0800, Zorro Lang wrote:
> From: Eric Sandeen <sandeen@redhat.com>
> 
> This tests a longstanding bug where xfsdumps are not properly
> created when an inode is present on the filesystem which has
> a lower number than the root inode.
> 
> Signed-off-by: Eric Sandeen <sandeen@redhat.com>
> Signed-off-by: Zorro Lang <zlang@redhat.com>
> ---
> 
> Friendly ping for reviewing.
> 
> Thanks,
> Zorro
> 
>  common/dump       |  1 +
>  tests/xfs/543     | 63 +++++++++++++++++++++++++++++++++++++++++++++++
>  tests/xfs/543.out | 47 +++++++++++++++++++++++++++++++++++
>  3 files changed, 111 insertions(+)
>  create mode 100755 tests/xfs/543
>  create mode 100644 tests/xfs/543.out
> 
> diff --git a/common/dump b/common/dump
> index ea16d442..ab48cd13 100644
> --- a/common/dump
> +++ b/common/dump
> @@ -219,6 +219,7 @@ _require_tape()
>  
>  _wipe_fs()
>  {
> +    [[ "$WIPE_FS" = "no" ]] && return

Is WIPE_FS intended as a general means to prevent wipefs -a between
tests?  If so, it ought to be in the README.
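
For example, a hypothetical README entry (wording is just a sketch, not the
actual README text) could read:

 WIPE_FS           -- set to "no" to stop the dump helpers in common/dump
                      from re-mkfs'ing the scratch fs before populating it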

--D

>      _require_scratch
>  
>      _scratch_mkfs_xfs >>$seqres.full || _fail "mkfs failed"
> diff --git a/tests/xfs/543 b/tests/xfs/543
> new file mode 100755
> index 00000000..4e3c823a
> --- /dev/null
> +++ b/tests/xfs/543
> @@ -0,0 +1,63 @@
> +#! /bin/bash
> +# SPDX-License-Identifier: GPL-2.0
> +# Copyright (c) 2022 Red Hat, Inc. All Rights Reserved.
> +#
> +# FS QA Test 543
> +#
> +# Create a filesystem which contains an inode with a lower number
> +# than the root inode. Ensure that xfsdump/xfsrestore handles this.
> +#
> +. ./common/preamble
> +_begin_fstest auto quick dump
> +
> +# Import common functions.
> +. ./common/dump
> +
> +_supported_fs xfs
> +_require_scratch
> +
> +# A large stripe unit will put the root inode out quite far
> +# due to alignment, leaving free blocks ahead of it.
> +_scratch_mkfs_xfs -d sunit=1024,swidth=1024 > $seqres.full 2>&1
> +
> +# Mounting /without/ a stripe should allow inodes to be allocated
> +# in lower free blocks, without the stripe alignment.
> +_scratch_mount -o sunit=0,swidth=0
> +
> +root_inum=$(stat -c %i $SCRATCH_MNT)
> +
> +# Consume space after the root inode so that the blocks before
> +# root look "close" for the next inode chunk allocation
> +$XFS_IO_PROG -f -c "falloc 0 16m" $SCRATCH_MNT/fillfile
> +
> +# And make a bunch of inodes until we (hopefully) get one lower
> +# than root, in a new inode chunk.
> +echo "root_inum: $root_inum" >> $seqres.full
> +for i in $(seq 0 4096) ; do
> +	fname=$SCRATCH_MNT/$(printf "FILE_%03d" $i)
> +	touch $fname
> +	inum=$(stat -c "%i" $fname)
> +	[[ $inum -lt $root_inum ]] && break
> +done
> +
> +echo "created: $inum" >> $seqres.full
> +
> +[[ $inum -lt $root_inum ]] || _notrun "Could not set up test"
> +
> +# Now try a dump and restore. Cribbed from xfs/068
> +WIPE_FS="no"
> +_create_dumpdir_stress
> +
> +echo -n "Before: " >> $seqres.full
> +_count_dumpdir_files | tee $tmp.before >> $seqres.full
> +
> +# filter out the file count, it changes as fsstress adds new operations
> +_do_dump_restore | sed -e "/entries processed$/s/[0-9][0-9]*/NUM/g"
> +
> +echo -n "After: " >> $seqres.full
> +_count_restoredir_files | tee $tmp.after >> $seqres.full
> +diff -u $tmp.before $tmp.after
> +
> +# success, all done
> +status=0
> +exit
> diff --git a/tests/xfs/543.out b/tests/xfs/543.out
> new file mode 100644
> index 00000000..a5224aaf
> --- /dev/null
> +++ b/tests/xfs/543.out
> @@ -0,0 +1,47 @@
> +QA output created by 543
> +Creating directory system to dump using fsstress.
> +
> +-----------------------------------------------
> +fsstress : -f link=10 -f creat=10 -f mkdir=10 -f truncate=5 -f symlink=10
> +-----------------------------------------------
> +xfsdump|xfsrestore ...
> +xfsdump  -s DUMP_SUBDIR - SCRATCH_MNT | xfsrestore  - RESTORE_DIR
> +xfsrestore: using file dump (drive_simple) strategy
> +xfsrestore: searching media for dump
> +xfsrestore: examining media file 0
> +xfsrestore: dump description: 
> +xfsrestore: hostname: HOSTNAME
> +xfsrestore: mount point: SCRATCH_MNT
> +xfsrestore: volume: SCRATCH_DEV
> +xfsrestore: session time: TIME
> +xfsrestore: level: 0
> +xfsrestore: session label: ""
> +xfsrestore: media label: ""
> +xfsrestore: file system ID: ID
> +xfsrestore: session id: ID
> +xfsrestore: media ID: ID
> +xfsrestore: searching media for directory dump
> +xfsrestore: reading directories
> +xfsrestore: NUM directories and NUM entries processed
> +xfsrestore: directory post-processing
> +xfsrestore: restoring non-directory files
> +xfsrestore: restore complete: SECS seconds elapsed
> +xfsrestore: Restore Status: SUCCESS
> +xfsdump: using file dump (drive_simple) strategy
> +xfsdump: level 0 dump of HOSTNAME:SCRATCH_MNT
> +xfsdump: dump date: DATE
> +xfsdump: session id: ID
> +xfsdump: session label: ""
> +xfsdump: ino map <PHASES>
> +xfsdump: ino map construction complete
> +xfsdump: estimated dump size: NUM bytes
> +xfsdump: /var/xfsdump/inventory created
> +xfsdump: creating dump session media file 0 (media 0, file 0)
> +xfsdump: dumping ino map
> +xfsdump: dumping directories
> +xfsdump: dumping non-directory files
> +xfsdump: ending media file
> +xfsdump: media file size NUM bytes
> +xfsdump: dump size (non-dir files) : NUM bytes
> +xfsdump: dump complete: SECS seconds elapsed
> +xfsdump: Dump Status: SUCCESS
> -- 
> 2.31.1
> 


* Re: [PATCH v2] xfs: test xfsdump when an inode < root inode is present
  2022-04-09  4:00 ` Darrick J. Wong
@ 2022-04-10 12:11   ` Zorro Lang
  2022-04-11  5:31     ` Dave Chinner
  0 siblings, 1 reply; 7+ messages in thread
From: Zorro Lang @ 2022-04-10 12:11 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: fstests

On Fri, Apr 08, 2022 at 09:00:56PM -0700, Darrick J. Wong wrote:
> On Sat, Apr 09, 2022 at 01:08:47AM +0800, Zorro Lang wrote:
> > From: Eric Sandeen <sandeen@redhat.com>
> > 
> > This tests a longstanding bug where xfsdumps are not properly
> > created when an inode is present on the filesytsem which has
> > a lower number than the root inode.
> > 
> > Signed-off-by: Eric Sandeen <sandeen@redhat.com>
> > Signed-off-by: Zorro Lang <zlang@redhat.com>
> > ---
> > 
> > Friendly ping for reviewing.
> > 
> > Thanks,
> > Zorro
> > 
> >  common/dump       |  1 +
> >  tests/xfs/543     | 63 +++++++++++++++++++++++++++++++++++++++++++++++
> >  tests/xfs/543.out | 47 +++++++++++++++++++++++++++++++++++
> >  3 files changed, 111 insertions(+)
> >  create mode 100755 tests/xfs/543
> >  create mode 100644 tests/xfs/543.out
> > 
> > diff --git a/common/dump b/common/dump
> > index ea16d442..ab48cd13 100644
> > --- a/common/dump
> > +++ b/common/dump
> > @@ -219,6 +219,7 @@ _require_tape()
> >  
> >  _wipe_fs()
> >  {
> > +    [[ "$WIPE_FS" = "no" ]] && return
> 
> Is WIPE_FS intended as a general means to prevent wipefs -a between
> tests?  If so, it ought to be in the README.

Actually I'm not sure whether I should say WIPE_FS is meant to prevent
wipefs -a between tests, because:

1) We have a _wipe_fs() in common/dump [1]; it's very old, and it only
tries to make a new xfs, although its name says wipe "fs"... And it's only
used by tests/xfs/* cases via the _create_dumpdir_*() functions.

2) Then we have _try_wipe_scratch_devs() in common/rc [2], which calls
_try_wipe_scratch_xfs() in common/xfs [3] (which is newer); it's only
used by the ./check script now.

So I'm wondering if we'd be better off changing _wipe_fs() to:
_wipe_dump()
{
	[[ "$WIPE_DUMP" = "no" ]] && return
	_require_scratch

	_try_wipe_scratch_devs >>$seqres.full
	_scratch_mount >>$seqres.full
}
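
For illustration, a caller that prepares its own scratch fs could then skip
the re-mkfs like this (just a sketch, assuming the proposed WIPE_DUMP knob
above; the setup lines are the ones the new test already uses):

	_require_scratch
	_scratch_mkfs_xfs -d sunit=1024,swidth=1024 >> $seqres.full 2>&1
	_scratch_mount -o sunit=0,swidth=0
	# ... build the special on-disk layout here ...
	WIPE_DUMP="no"		# keep the prepared fs, _wipe_dump() returns early
	_create_dumpdir_stress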

What do you think?

Thanks,
Zorro

[1]
_wipe_fs()
{
    [[ "$WIPE_FS" = "no" ]] && return
    _require_scratch

    _scratch_mkfs_xfs >>$seqres.full || _fail "mkfs failed"
    _scratch_mount >>$seqres.full
}

[2]
_try_wipe_scratch_devs()
{
        test -x "$WIPEFS_PROG" || return 0

        # Do specified filesystem wipe at first
        case "$FSTYP" in
        "xfs")
                _try_wipe_scratch_xfs
                ;;
        esac

        # Then do wipefs on all scratch devices
        for dev in $SCRATCH_DEV_POOL $SCRATCH_DEV $SCRATCH_LOGDEV $SCRATCH_RTDEV; do
                test -b $dev && $WIPEFS_PROG -a $dev
        done
}

[3]
# Wipe the superblock of each XFS AGs
_try_wipe_scratch_xfs()
{
        local num='^[0-9]+$'
        local agcount
        local agsize
        local dbsize

        # Try to wipe each SB if there's an existed XFS
        agcount=`_scratch_xfs_get_sb_field agcount 2>/dev/null`
        agsize=`_scratch_xfs_get_sb_field agblocks 2>/dev/null`
        dbsize=`_scratch_xfs_get_sb_field blocksize 2>/dev/null`
        if [[ $agcount =~ $num && $agsize =~ $num && $dbsize =~ $num ]];then
                for ((i = 0; i < agcount; i++)); do
                        $XFS_IO_PROG -c "pwrite $((i * dbsize * agsize)) $dbsize" \
                                $SCRATCH_DEV >/dev/null;
                done
        fi

        # Try to wipe each SB by default mkfs.xfs geometry
        local tmp=`mktemp -u`
        unset agcount agsize dbsize
        _scratch_mkfs_xfs -N 2>/dev/null | perl -ne '
                if (/^meta-data=.*\s+agcount=(\d+), agsize=(\d+) blks/) {
                        print STDOUT "agcount=$1\nagsize=$2\n";
                }
                if (/^data\s+=\s+bsize=(\d+)\s/) {
                        print STDOUT "dbsize=$1\n";
                }' > $tmp.mkfs

        . $tmp.mkfs
        if [[ $agcount =~ $num && $agsize =~ $num && $dbsize =~ $num ]];then
                for ((i = 0; i < agcount; i++)); do
                        $XFS_IO_PROG -c "pwrite $((i * dbsize * agsize)) $dbsize" \
                                $SCRATCH_DEV >/dev/null;
                done
        fi
        rm -f $tmp.mkfs
}

> 
> --D
> 
> >      _require_scratch
> >  
> >      _scratch_mkfs_xfs >>$seqres.full || _fail "mkfs failed"
> > diff --git a/tests/xfs/543 b/tests/xfs/543
> > new file mode 100755
> > index 00000000..4e3c823a
> > --- /dev/null
> > +++ b/tests/xfs/543
> > @@ -0,0 +1,63 @@
> > +#! /bin/bash
> > +# SPDX-License-Identifier: GPL-2.0
> > +# Copyright (c) 2022 Red Hat, Inc. All Rights Reserved.
> > +#
> > +# FS QA Test 543
> > +#
> > +# Create a filesystem which contains an inode with a lower number
> > +# than the root inode. Ensure that xfsdump/xfsrestore handles this.
> > +#
> > +. ./common/preamble
> > +_begin_fstest auto quick dump
> > +
> > +# Import common functions.
> > +. ./common/dump
> > +
> > +_supported_fs xfs
> > +_require_scratch
> > +
> > +# A large stripe unit will put the root inode out quite far
> > +# due to alignment, leaving free blocks ahead of it.
> > +_scratch_mkfs_xfs -d sunit=1024,swidth=1024 > $seqres.full 2>&1
> > +
> > +# Mounting /without/ a stripe should allow inodes to be allocated
> > +# in lower free blocks, without the stripe alignment.
> > +_scratch_mount -o sunit=0,swidth=0
> > +
> > +root_inum=$(stat -c %i $SCRATCH_MNT)
> > +
> > +# Consume space after the root inode so that the blocks before
> > +# root look "close" for the next inode chunk allocation
> > +$XFS_IO_PROG -f -c "falloc 0 16m" $SCRATCH_MNT/fillfile
> > +
> > +# And make a bunch of inodes until we (hopefully) get one lower
> > +# than root, in a new inode chunk.
> > +echo "root_inum: $root_inum" >> $seqres.full
> > +for i in $(seq 0 4096) ; do
> > +	fname=$SCRATCH_MNT/$(printf "FILE_%03d" $i)
> > +	touch $fname
> > +	inum=$(stat -c "%i" $fname)
> > +	[[ $inum -lt $root_inum ]] && break
> > +done
> > +
> > +echo "created: $inum" >> $seqres.full
> > +
> > +[[ $inum -lt $root_inum ]] || _notrun "Could not set up test"
> > +
> > +# Now try a dump and restore. Cribbed from xfs/068
> > +WIPE_FS="no"
> > +_create_dumpdir_stress
> > +
> > +echo -n "Before: " >> $seqres.full
> > +_count_dumpdir_files | tee $tmp.before >> $seqres.full
> > +
> > +# filter out the file count, it changes as fsstress adds new operations
> > +_do_dump_restore | sed -e "/entries processed$/s/[0-9][0-9]*/NUM/g"
> > +
> > +echo -n "After: " >> $seqres.full
> > +_count_restoredir_files | tee $tmp.after >> $seqres.full
> > +diff -u $tmp.before $tmp.after
> > +
> > +# success, all done
> > +status=0
> > +exit
> > diff --git a/tests/xfs/543.out b/tests/xfs/543.out
> > new file mode 100644
> > index 00000000..a5224aaf
> > --- /dev/null
> > +++ b/tests/xfs/543.out
> > @@ -0,0 +1,47 @@
> > +QA output created by 543
> > +Creating directory system to dump using fsstress.
> > +
> > +-----------------------------------------------
> > +fsstress : -f link=10 -f creat=10 -f mkdir=10 -f truncate=5 -f symlink=10
> > +-----------------------------------------------
> > +xfsdump|xfsrestore ...
> > +xfsdump  -s DUMP_SUBDIR - SCRATCH_MNT | xfsrestore  - RESTORE_DIR
> > +xfsrestore: using file dump (drive_simple) strategy
> > +xfsrestore: searching media for dump
> > +xfsrestore: examining media file 0
> > +xfsrestore: dump description: 
> > +xfsrestore: hostname: HOSTNAME
> > +xfsrestore: mount point: SCRATCH_MNT
> > +xfsrestore: volume: SCRATCH_DEV
> > +xfsrestore: session time: TIME
> > +xfsrestore: level: 0
> > +xfsrestore: session label: ""
> > +xfsrestore: media label: ""
> > +xfsrestore: file system ID: ID
> > +xfsrestore: session id: ID
> > +xfsrestore: media ID: ID
> > +xfsrestore: searching media for directory dump
> > +xfsrestore: reading directories
> > +xfsrestore: NUM directories and NUM entries processed
> > +xfsrestore: directory post-processing
> > +xfsrestore: restoring non-directory files
> > +xfsrestore: restore complete: SECS seconds elapsed
> > +xfsrestore: Restore Status: SUCCESS
> > +xfsdump: using file dump (drive_simple) strategy
> > +xfsdump: level 0 dump of HOSTNAME:SCRATCH_MNT
> > +xfsdump: dump date: DATE
> > +xfsdump: session id: ID
> > +xfsdump: session label: ""
> > +xfsdump: ino map <PHASES>
> > +xfsdump: ino map construction complete
> > +xfsdump: estimated dump size: NUM bytes
> > +xfsdump: /var/xfsdump/inventory created
> > +xfsdump: creating dump session media file 0 (media 0, file 0)
> > +xfsdump: dumping ino map
> > +xfsdump: dumping directories
> > +xfsdump: dumping non-directory files
> > +xfsdump: ending media file
> > +xfsdump: media file size NUM bytes
> > +xfsdump: dump size (non-dir files) : NUM bytes
> > +xfsdump: dump complete: SECS seconds elapsed
> > +xfsdump: Dump Status: SUCCESS
> > -- 
> > 2.31.1
> > 
> 



* Re: [PATCH v2] xfs: test xfsdump when an inode < root inode is present
  2022-04-10 12:11   ` Zorro Lang
@ 2022-04-11  5:31     ` Dave Chinner
  2022-04-11  8:34       ` Zorro Lang
  0 siblings, 1 reply; 7+ messages in thread
From: Dave Chinner @ 2022-04-11  5:31 UTC (permalink / raw)
  To: Darrick J. Wong, fstests

On Sun, Apr 10, 2022 at 08:11:03PM +0800, Zorro Lang wrote:
> On Fri, Apr 08, 2022 at 09:00:56PM -0700, Darrick J. Wong wrote:
> > On Sat, Apr 09, 2022 at 01:08:47AM +0800, Zorro Lang wrote:
> > > From: Eric Sandeen <sandeen@redhat.com>
> > > 
> > > This tests a longstanding bug where xfsdumps are not properly
> > > created when an inode is present on the filesystem which has
> > > a lower number than the root inode.
> > > 
> > > Signed-off-by: Eric Sandeen <sandeen@redhat.com>
> > > Signed-off-by: Zorro Lang <zlang@redhat.com>
> > > ---
> > > 
> > > Friendly ping for reviewing.
> > > 
> > > Thanks,
> > > Zorro
> > > 
> > >  common/dump       |  1 +
> > >  tests/xfs/543     | 63 +++++++++++++++++++++++++++++++++++++++++++++++
> > >  tests/xfs/543.out | 47 +++++++++++++++++++++++++++++++++++
> > >  3 files changed, 111 insertions(+)
> > >  create mode 100755 tests/xfs/543
> > >  create mode 100644 tests/xfs/543.out
> > > 
> > > diff --git a/common/dump b/common/dump
> > > index ea16d442..ab48cd13 100644
> > > --- a/common/dump
> > > +++ b/common/dump
> > > @@ -219,6 +219,7 @@ _require_tape()
> > >  
> > >  _wipe_fs()
> > >  {
> > > +    [[ "$WIPE_FS" = "no" ]] && return
> > 
> > Is WIPE_FS intended as a general means to prevent wipefs -a between
> > tests?  If so, it ought to be in the README.
> 
> Actually I'm not sure whether I should say WIPE_FS is meant to prevent
> wipefs -a between tests, because:
> 
> 1) We have a _wipe_fs() in common/dump [1]; it's very old, and it only
> tries to make a new xfs, although its name says wipe "fs"... And it's only
> used by tests/xfs/* cases via the _create_dumpdir_*() functions.
> 
> 2) Then we have _try_wipe_scratch_devs() in common/rc [2], which calls
> _try_wipe_scratch_xfs() in common/xfs [3] (which is newer); it's only
> used by the ./check script now.
> 
> So I'm wondering if we'd be better off changing _wipe_fs() to:
> _wipe_dump()
> {
> 	[[ "$WIPE_DUMP" = "no" ]] && return
> 	_require_scratch
> 
> 	_try_wipe_scratch_devs >>$seqres.full
> 	_scratch_mount >>$seqres.full
> }
> 
> What do you think?

common/dump::_wipe_fs() is effectively just _scratch_mkfs() followed
by mounting it. It does not need to exist at all.

The test(s) need to have run _require_scratch() in their setup; it
shouldn't be hidden deep inside random functions that fill the
scratch device.

They should also run scratch_mkfs/mount themselves to prep the
scratch device, and then _wipe_fs() can go away entirely.

And the new test simply calls the function to populate the scratch
device at the place where you want to preserve the previous contents
of the scratch device without first running scratch_mkfs....
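
In other words, an existing dump test would do its own prep explicitly,
something like this (untested sketch using the existing helpers), while the
new test would simply leave out the mkfs/mount it has already done itself:

	_require_scratch
	_scratch_mkfs >>$seqres.full 2>&1 || _fail "mkfs failed"
	_scratch_mount >>$seqres.full
	_create_dumpdir_stress		# no implicit _wipe_fs() any more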

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: [PATCH v2] xfs: test xfsdump when an inode < root inode is present
  2022-04-11  5:31     ` Dave Chinner
@ 2022-04-11  8:34       ` Zorro Lang
  2022-04-11 22:08         ` Darrick J. Wong
  0 siblings, 1 reply; 7+ messages in thread
From: Zorro Lang @ 2022-04-11  8:34 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Darrick J. Wong, fstests

On Mon, Apr 11, 2022 at 03:31:35PM +1000, Dave Chinner wrote:
> On Sun, Apr 10, 2022 at 08:11:03PM +0800, Zorro Lang wrote:
> > On Fri, Apr 08, 2022 at 09:00:56PM -0700, Darrick J. Wong wrote:
> > > On Sat, Apr 09, 2022 at 01:08:47AM +0800, Zorro Lang wrote:
> > > > From: Eric Sandeen <sandeen@redhat.com>
> > > > 
> > > > This tests a longstanding bug where xfsdumps are not properly
> > > > created when an inode is present on the filesystem which has
> > > > a lower number than the root inode.
> > > > 
> > > > Signed-off-by: Eric Sandeen <sandeen@redhat.com>
> > > > Signed-off-by: Zorro Lang <zlang@redhat.com>
> > > > ---
> > > > 
> > > > Friendly ping for reviewing.
> > > > 
> > > > Thanks,
> > > > Zorro
> > > > 
> > > >  common/dump       |  1 +
> > > >  tests/xfs/543     | 63 +++++++++++++++++++++++++++++++++++++++++++++++
> > > >  tests/xfs/543.out | 47 +++++++++++++++++++++++++++++++++++
> > > >  3 files changed, 111 insertions(+)
> > > >  create mode 100755 tests/xfs/543
> > > >  create mode 100644 tests/xfs/543.out
> > > > 
> > > > diff --git a/common/dump b/common/dump
> > > > index ea16d442..ab48cd13 100644
> > > > --- a/common/dump
> > > > +++ b/common/dump
> > > > @@ -219,6 +219,7 @@ _require_tape()
> > > >  
> > > >  _wipe_fs()
> > > >  {
> > > > +    [[ "$WIPE_FS" = "no" ]] && return
> > > 
> > > Is WIPE_FS intended as a general means to prevent wipefs -a between
> > > tests?  If so, it ought to be in the README.
> > 
> > Actually I'm not sure whether I should say WIPE_FS is meant to prevent
> > wipefs -a between tests, because:
> > 
> > 1) We have a _wipe_fs() in common/dump [1]; it's very old, and it only
> > tries to make a new xfs, although its name says wipe "fs"... And it's only
> > used by tests/xfs/* cases via the _create_dumpdir_*() functions.
> > 
> > 2) Then we have _try_wipe_scratch_devs() in common/rc [2], which calls
> > _try_wipe_scratch_xfs() in common/xfs [3] (which is newer); it's only
> > used by the ./check script now.
> > 
> > So I'm wondering if we'd be better off changing _wipe_fs() to:
> > _wipe_dump()
> > {
> > 	[[ "$WIPE_DUMP" = "no" ]] && return
> > 	_require_scratch
> > 
> > 	_try_wipe_scratch_devs >>$seqres.full
> > 	_scratch_mount >>$seqres.full
> > }
> > 
> > What do you think?
> 
> common/dump::_wipe_fs() is effectively just _scratch_mkfs() followed
> by mounting it. It does not need to exist at all.

Oh! At first I was really confused about why the dumpdir setup needs to wipe
the fs, so that's the reason. I just checked all cases which call
_create_dumpdir_*, and yes, they just want to omit the common steps below by
running _create_dumpdir_*:
_require_scratch
_scratch_mkfs
_scratch_mount

I'd like to remove common/dump::_wipe_fs() and have all related cases
call _require_scratch && _scratch_mkfs && _scratch_mount themselves, since
that matches the current xfstests case format.

Is that good to you? Any more suggestions or concerns?

Thanks,
Zorro

> 
> The test(s) need to have run _require_scratch() in their setup; it
> shouldn't be hidden deep inside random functions that fill the
> scratch device.
> 
> They should also run scratch_mkfs/mount themselves to prep the
> scratch device, and then _wipe_fs() can go away entirely.
> 
> And the new test simply calls the function to populate the scratch
> device at the place where you want to preserve the previous contents
> of the scratch device without first running scratch_mkfs....
> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
> 



* Re: [PATCH v2] xfs: test xfsdump when an inode < root inode is present
  2022-04-11  8:34       ` Zorro Lang
@ 2022-04-11 22:08         ` Darrick J. Wong
  0 siblings, 0 replies; 7+ messages in thread
From: Darrick J. Wong @ 2022-04-11 22:08 UTC (permalink / raw)
  To: Dave Chinner, fstests

On Mon, Apr 11, 2022 at 04:34:33PM +0800, Zorro Lang wrote:
> On Mon, Apr 11, 2022 at 03:31:35PM +1000, Dave Chinner wrote:
> > On Sun, Apr 10, 2022 at 08:11:03PM +0800, Zorro Lang wrote:
> > > On Fri, Apr 08, 2022 at 09:00:56PM -0700, Darrick J. Wong wrote:
> > > > On Sat, Apr 09, 2022 at 01:08:47AM +0800, Zorro Lang wrote:
> > > > > From: Eric Sandeen <sandeen@redhat.com>
> > > > > 
> > > > > This tests a longstanding bug where xfsdumps are not properly
> > > > > created when an inode is present on the filesytsem which has
> > > > > a lower number than the root inode.
> > > > > 
> > > > > Signed-off-by: Eric Sandeen <sandeen@redhat.com>
> > > > > Signed-off-by: Zorro Lang <zlang@redhat.com>
> > > > > ---
> > > > > 
> > > > > Friendly ping for reviewing.
> > > > > 
> > > > > Thanks,
> > > > > Zorro
> > > > > 
> > > > >  common/dump       |  1 +
> > > > >  tests/xfs/543     | 63 +++++++++++++++++++++++++++++++++++++++++++++++
> > > > >  tests/xfs/543.out | 47 +++++++++++++++++++++++++++++++++++
> > > > >  3 files changed, 111 insertions(+)
> > > > >  create mode 100755 tests/xfs/543
> > > > >  create mode 100644 tests/xfs/543.out
> > > > > 
> > > > > diff --git a/common/dump b/common/dump
> > > > > index ea16d442..ab48cd13 100644
> > > > > --- a/common/dump
> > > > > +++ b/common/dump
> > > > > @@ -219,6 +219,7 @@ _require_tape()
> > > > >  
> > > > >  _wipe_fs()
> > > > >  {
> > > > > +    [[ "$WIPE_FS" = "no" ]] && return
> > > > 
> > > > Is WIPE_FS intended as a general means to prevent wipefs -a between
> > > > tests?  If so, it ought to be in the README.
> > > 
> > > Actually I'm not sure whether I should say WIPE_FS is meant to prevent
> > > wipefs -a between tests, because:
> > > 
> > > 1) We have a _wipe_fs() in common/dump [1]; it's very old, and it only
> > > tries to make a new xfs, although its name says wipe "fs"... And it's only
> > > used by tests/xfs/* cases via the _create_dumpdir_*() functions.
> > > 
> > > 2) Then we have _try_wipe_scratch_devs() in common/rc [2], which calls
> > > _try_wipe_scratch_xfs() in common/xfs [3] (which is newer); it's only
> > > used by the ./check script now.
> > > 
> > > So I'm wondering if we'd be better off changing _wipe_fs() to:
> > > _wipe_dump()
> > > {
> > > 	[[ "$WIPE_DUMP" = "no" ]] && return
> > > 	_require_scratch
> > > 
> > > 	_try_wipe_scratch_devs >>$seqres.full
> > > 	_scratch_mount >>$seqres.full
> > > }
> > > 
> > > What do you think?
> > 
> > common/dump::_wipe_fs() is effectively just _scratch_mkfs() followed
> > by mounting it. It does not need to exist at all.

Aha!

> Oh! At first I was really confused about why the dumpdir setup needs to wipe
> the fs, so that's the reason. I just checked all cases which call
> _create_dumpdir_*, and yes, they just want to omit the common steps below by
> running _create_dumpdir_*:
> _require_scratch
> _scratch_mkfs
> _scratch_mount
> 
> I'd like to remove common/dump::_wipe_fs() and have all related cases
> call _require_scratch && _scratch_mkfs && _scratch_mount themselves, since
> that matches the current xfstests case format.
> 
> Is that good to you? Any more suggestions or concerns?

Sounds good to /me.

--D

> Thanks,
> Zorro
> 
> > 
> > The test(s) need to have run _require_scratch() in their setup; it
> > shouldn't be hidden deep inside random functions that fill the
> > scratch device.
> > 
> > They should also run scratch_mkfs/mount themselves to prep the
> > scratch device, and then _wipe_fs() can go away entirely.
> > 
> > And the new test simply calls the function to populate the scratch
> > device at the place where you want to preserve the previous contents
> > of the scratch device without first running scratch_mkfs....
> > 
> > Cheers,
> > 
> > Dave.
> > -- 
> > Dave Chinner
> > david@fromorbit.com
> > 
> 


* [PATCH v2] xfs: test xfsdump when an inode < root inode is present
@ 2022-03-27 17:39 Zorro Lang
  0 siblings, 0 replies; 7+ messages in thread
From: Zorro Lang @ 2022-03-27 17:39 UTC (permalink / raw)
  To: fstests; +Cc: sandeen

From: Eric Sandeen <sandeen@redhat.com>

This tests a longstanding bug where xfsdumps are not properly
created when an inode is present on the filesystem which has
a lower number than the root inode.

Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: Zorro Lang <zlang@redhat.com>
---

Hi,

The original author of this patch is Eric Sandeen; I just helped to
send this V2 patch (Eric knows about it) since he's on a long PTO and
we hope this case can be merged early.

I changed this patch according to the review points [1] (from myself):
1) filter mkfs error output
2) chmod +x to this case file

Thanks,
Zorro

[1]
https://lore.kernel.org/fstests/20220214052519.xsmxb4d7mjcv2xod@zlang-mailbox/

 common/dump       |  1 +
 tests/xfs/543     | 63 +++++++++++++++++++++++++++++++++++++++++++++++
 tests/xfs/543.out | 47 +++++++++++++++++++++++++++++++++++
 3 files changed, 111 insertions(+)
 create mode 100755 tests/xfs/543
 create mode 100644 tests/xfs/543.out

diff --git a/common/dump b/common/dump
index ea16d442..ab48cd13 100644
--- a/common/dump
+++ b/common/dump
@@ -219,6 +219,7 @@ _require_tape()
 
 _wipe_fs()
 {
+    [[ "$WIPE_FS" = "no" ]] && return
     _require_scratch
 
     _scratch_mkfs_xfs >>$seqres.full || _fail "mkfs failed"
diff --git a/tests/xfs/543 b/tests/xfs/543
new file mode 100755
index 00000000..4e3c823a
--- /dev/null
+++ b/tests/xfs/543
@@ -0,0 +1,63 @@
+#! /bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (c) 2022 Red Hat, Inc. All Rights Reserved.
+#
+# FS QA Test 543
+#
+# Create a filesystem which contains an inode with a lower number
+# than the root inode. Ensure that xfsdump/xfsrestore handles this.
+#
+. ./common/preamble
+_begin_fstest auto quick dump
+
+# Import common functions.
+. ./common/dump
+
+_supported_fs xfs
+_require_scratch
+
+# A large stripe unit will put the root inode out quite far
+# due to alignment, leaving free blocks ahead of it.
+_scratch_mkfs_xfs -d sunit=1024,swidth=1024 > $seqres.full 2>&1
+
+# Mounting /without/ a stripe should allow inodes to be allocated
+# in lower free blocks, without the stripe alignment.
+_scratch_mount -o sunit=0,swidth=0
+
+root_inum=$(stat -c %i $SCRATCH_MNT)
+
+# Consume space after the root inode so that the blocks before
+# root look "close" for the next inode chunk allocation
+$XFS_IO_PROG -f -c "falloc 0 16m" $SCRATCH_MNT/fillfile
+
+# And make a bunch of inodes until we (hopefully) get one lower
+# than root, in a new inode chunk.
+echo "root_inum: $root_inum" >> $seqres.full
+for i in $(seq 0 4096) ; do
+	fname=$SCRATCH_MNT/$(printf "FILE_%03d" $i)
+	touch $fname
+	inum=$(stat -c "%i" $fname)
+	[[ $inum -lt $root_inum ]] && break
+done
+
+echo "created: $inum" >> $seqres.full
+
+[[ $inum -lt $root_inum ]] || _notrun "Could not set up test"
+
+# Now try a dump and restore. Cribbed from xfs/068
+WIPE_FS="no"
+_create_dumpdir_stress
+
+echo -n "Before: " >> $seqres.full
+_count_dumpdir_files | tee $tmp.before >> $seqres.full
+
+# filter out the file count, it changes as fsstress adds new operations
+_do_dump_restore | sed -e "/entries processed$/s/[0-9][0-9]*/NUM/g"
+
+echo -n "After: " >> $seqres.full
+_count_restoredir_files | tee $tmp.after >> $seqres.full
+diff -u $tmp.before $tmp.after
+
+# success, all done
+status=0
+exit
diff --git a/tests/xfs/543.out b/tests/xfs/543.out
new file mode 100644
index 00000000..a5224aaf
--- /dev/null
+++ b/tests/xfs/543.out
@@ -0,0 +1,47 @@
+QA output created by 543
+Creating directory system to dump using fsstress.
+
+-----------------------------------------------
+fsstress : -f link=10 -f creat=10 -f mkdir=10 -f truncate=5 -f symlink=10
+-----------------------------------------------
+xfsdump|xfsrestore ...
+xfsdump  -s DUMP_SUBDIR - SCRATCH_MNT | xfsrestore  - RESTORE_DIR
+xfsrestore: using file dump (drive_simple) strategy
+xfsrestore: searching media for dump
+xfsrestore: examining media file 0
+xfsrestore: dump description: 
+xfsrestore: hostname: HOSTNAME
+xfsrestore: mount point: SCRATCH_MNT
+xfsrestore: volume: SCRATCH_DEV
+xfsrestore: session time: TIME
+xfsrestore: level: 0
+xfsrestore: session label: ""
+xfsrestore: media label: ""
+xfsrestore: file system ID: ID
+xfsrestore: session id: ID
+xfsrestore: media ID: ID
+xfsrestore: searching media for directory dump
+xfsrestore: reading directories
+xfsrestore: NUM directories and NUM entries processed
+xfsrestore: directory post-processing
+xfsrestore: restoring non-directory files
+xfsrestore: restore complete: SECS seconds elapsed
+xfsrestore: Restore Status: SUCCESS
+xfsdump: using file dump (drive_simple) strategy
+xfsdump: level 0 dump of HOSTNAME:SCRATCH_MNT
+xfsdump: dump date: DATE
+xfsdump: session id: ID
+xfsdump: session label: ""
+xfsdump: ino map <PHASES>
+xfsdump: ino map construction complete
+xfsdump: estimated dump size: NUM bytes
+xfsdump: /var/xfsdump/inventory created
+xfsdump: creating dump session media file 0 (media 0, file 0)
+xfsdump: dumping ino map
+xfsdump: dumping directories
+xfsdump: dumping non-directory files
+xfsdump: ending media file
+xfsdump: media file size NUM bytes
+xfsdump: dump size (non-dir files) : NUM bytes
+xfsdump: dump complete: SECS seconds elapsed
+xfsdump: Dump Status: SUCCESS
-- 
2.31.1


