Linux-XFS Archive on lore.kernel.org
* [PATCH 0/6] xfsprogs: port utilities to bulkstat v5
@ 2019-09-06  3:34 Darrick J. Wong
  2019-09-06  3:34 ` [PATCH 1/6] man: add documentation for v5 bulkstat ioctl Darrick J. Wong
                   ` (5 more replies)
  0 siblings, 6 replies; 14+ messages in thread
From: Darrick J. Wong @ 2019-09-06  3:34 UTC (permalink / raw)
  To: sandeen, darrick.wong; +Cc: linux-xfs

Hi all,

Port the libfrog bulkstat/inumbers wrapper functions to use the new v5
interfaces and to enable falling back to the v1 interfaces if need be.
This series ports only the more lightweight users of these interfaces.

If you're going to start using this mess, you probably ought to just
pull from my git trees, which are linked below.

This is an extraordinary way to destroy everything.  Enjoy!
Comments and questions are, as always, welcome.

--D

xfsprogs git tree:
https://git.kernel.org/cgit/linux/kernel/git/djwong/xfsprogs-dev.git/log/?h=bulkstat-v5-porting

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH 1/6] man: add documentation for v5 bulkstat ioctl
  2019-09-06  3:34 [PATCH 0/6] xfsprogs: port utilities to bulkstat v5 Darrick J. Wong
@ 2019-09-06  3:34 ` Darrick J. Wong
  2019-09-06  3:35 ` [PATCH 2/6] man: add documentation for v5 inumbers ioctl Darrick J. Wong
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 14+ messages in thread
From: Darrick J. Wong @ 2019-09-06  3:34 UTC (permalink / raw)
  To: sandeen, darrick.wong; +Cc: linux-xfs

From: Darrick J. Wong <darrick.wong@oracle.com>

Add a new manpage describing the V5 XFS_IOC_BULKSTAT ioctl.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
---
 man/man2/ioctl_xfs_bulkstat.2   |  330 +++++++++++++++++++++++++++++++++++++++
 man/man2/ioctl_xfs_fsbulkstat.2 |    6 +
 2 files changed, 336 insertions(+)
 create mode 100644 man/man2/ioctl_xfs_bulkstat.2


diff --git a/man/man2/ioctl_xfs_bulkstat.2 b/man/man2/ioctl_xfs_bulkstat.2
new file mode 100644
index 00000000..f687cfe8
--- /dev/null
+++ b/man/man2/ioctl_xfs_bulkstat.2
@@ -0,0 +1,330 @@
+.\" Copyright (c) 2019, Oracle.  All rights reserved.
+.\"
+.\" %%%LICENSE_START(GPLv2+_DOC_FULL)
+.\" SPDX-License-Identifier: GPL-2.0+
+.\" %%%LICENSE_END
+.TH IOCTL-XFS-BULKSTAT 2 2019-05-23 "XFS"
+.SH NAME
+ioctl_xfs_bulkstat \- query information for a batch of XFS inodes
+.SH SYNOPSIS
+.br
+.B #include <xfs/xfs_fs.h>
+.PP
+.BI "int ioctl(int " fd ", XFS_IOC_BULKSTAT, struct xfs_bulkstat_req *" arg );
+.SH DESCRIPTION
+Query stat information for a group of XFS inodes.
+This ioctl uses
+.B struct xfs_bulkstat_req
+to set up a bulk transfer with the kernel:
+.PP
+.in +4n
+.nf
+struct xfs_bulkstat_req {
+	struct xfs_bulk_ireq    hdr;
+	struct xfs_bulkstat     bulkstat[];
+};
+
+struct xfs_bulk_ireq {
+	uint64_t                ino;
+	uint32_t                flags;
+	uint32_t                icount;
+	uint32_t                ocount;
+	uint32_t                agno;
+	uint64_t                reserved[5];
+};
+.fi
+.in
+.PP
+.I hdr.ino
+should be set to the number of the first inode for which the caller wants
+information, or zero to start with the first inode in the filesystem.
+Note that this semantic is different from that of
+.B lastip
+in the old
+.B FSBULKSTAT
+ioctl.
+After the call, this value will be set to the number of the next inode for
+which information could be supplied.
+This sets up the next call for an iteration loop.
+.PP
+If the
+.B XFS_BULK_IREQ_SPECIAL
+flag is set, this field is interpreted as follows:
+.RS 0.4i
+.TP
+.B XFS_BULK_IREQ_SPECIAL_ROOT
+Return stat information for the root directory inode.
+.RE
+.PP
+.PP
+.I hdr.flags
+is a bit set of operational flags:
+.RS 0.4i
+.TP
+.B XFS_BULK_IREQ_AGNO
+If this is set, the call will only return results for the allocation group (AG)
+set in
+.BR hdr.agno .
+If
+.B hdr.ino
+is set to zero, results will be returned starting with the first inode in the
+AG.
+This flag may not be set at the same time as the
+.B XFS_BULK_IREQ_SPECIAL
+flag.
+.TP
+.B XFS_BULK_IREQ_SPECIAL
+If this is set, results will be returned for only the special inode
+specified in the
+.B hdr.ino
+field.
+This flag may not be set at the same time as the
+.B XFS_BULK_IREQ_AGNO
+flag.
+.RE
+.PP
+.I hdr.icount
+is the number of inodes to examine.
+.PP
+.I hdr.ocount
+will be set to the number of records returned.
+.PP
+.I hdr.agno
+is the number of the allocation group (AG) for which we want results.
+If the
+.B XFS_BULK_IREQ_AGNO
+flag is not set, this field is ignored.
+.PP
+.I hdr.reserved
+must be set to zero.
+
+.PP
+.I bulkstat
+is an array of
+.B struct xfs_bulkstat
+which is described below.
+The array must have at least
+.I icount
+elements.
+.PP
+.in +4n
+.nf
+struct xfs_bulkstat {
+	uint64_t                bs_ino;
+	uint64_t                bs_size;
+
+	uint64_t                bs_blocks;
+	uint64_t                bs_xflags;
+
+	uint64_t                bs_atime;
+	uint64_t                bs_mtime;
+
+	uint64_t                bs_ctime;
+	uint64_t                bs_btime;
+
+	uint32_t                bs_gen;
+	uint32_t                bs_uid;
+	uint32_t                bs_gid;
+	uint32_t                bs_projectid;
+
+	uint32_t                bs_atime_nsec;
+	uint32_t                bs_mtime_nsec;
+	uint32_t                bs_ctime_nsec;
+	uint32_t                bs_btime_nsec;
+
+	uint32_t                bs_blksize;
+	uint32_t                bs_rdev;
+	uint32_t                bs_cowextsize_blks;
+	uint32_t                bs_extsize_blks;
+
+	uint32_t                bs_nlink;
+	uint32_t                bs_extents;
+	uint32_t                bs_aextents;
+	uint16_t                bs_version;
+	uint16_t                bs_forkoff;
+
+	uint16_t                bs_sick;
+	uint16_t                bs_checked;
+	uint16_t                bs_mode;
+	uint16_t                bs_pad2;
+
+	uint64_t                bs_pad[7];
+};
+.fi
+.in
+.PP
+.I bs_ino
+is the inode number of this record.
+.PP
+.I bs_size
+is the size of the file, in bytes.
+.PP
+.I bs_blocks
+is the number of filesystem blocks allocated to this file, including metadata.
+.PP
+.I bs_xflags
+tells us what extended flags are set on this inode.
+These flags are the same values as those defined in the
+.B XFS INODE FLAGS
+section of the
+.BR ioctl_xfs_fsgetxattr (2)
+manpage.
+.PP
+.I bs_atime
+is the last time this file was accessed, in seconds.
+.PP
+.I bs_mtime
+is the last time the contents of this file were modified, in seconds.
+.PP
+.I bs_ctime
+is the last time this inode record was modified, in seconds.
+.PP
+.I bs_btime
+is the time this inode record was created, in seconds.
+.PP
+.I bs_gen
+is the generation number of the inode record.
+.PP
+.I bs_uid
+is the user id.
+.PP
+.I bs_gid
+is the group id.
+.PP
+.I bs_projectid
+is the project id.
+.PP
+.I bs_atime_nsec
+is the nanoseconds component of the last time this file was accessed.
+.PP
+.I bs_mtime_nsec
+is the nanoseconds component of the last time the contents of this file were
+modified.
+.PP
+.I bs_ctime_nsec
+is the nanoseconds component of the last time this inode record was modified.
+.PP
+.I bs_btime_nsec
+is the nanoseconds component of the time this inode record was created.
+.PP
+.I bs_blksize
+is the size of a data block for this file, in units of bytes.
+.PP
+.I bs_rdev
+is the encoded device id if this is a special file.
+.PP
+.I bs_cowextsize_blks
+is the Copy on Write extent size hint for this file, in units of data blocks.
+.PP
+.I bs_extsize_blks
+is the extent size hint for this file, in units of data blocks.
+.PP
+.I bs_nlink
+is the number of hard links to this inode.
+.PP
+.I bs_extents
+is the number of storage mappings associated with this file's data.
+.PP
+.I bs_aextents
+is the number of storage mappings associated with this file's extended
+attributes.
+.PP
+.I bs_version
+is the version of this data structure.
+Currently, only 1 or 5 are supported.
+.PP
+.I bs_forkoff
+is the offset of the attribute fork in the inode record, in bytes.
+.PP
+The fields
+.IR bs_sick " and " bs_checked
+indicate the relative health of various parts of this inode's metadata.
+Please see the section
+.B XFS INODE METADATA HEALTH REPORTING
+for more information.
+.PP
+.I bs_mode
+is the file type and mode.
+.PP
+.I bs_pad[7]
+is zeroed.
+.SH RETURN VALUE
+On error, \-1 is returned, and
+.I errno
+is set to indicate the error.
+.PP
+.SH XFS INODE METADATA HEALTH REPORTING
+.PP
+The online filesystem checking utility scans inode metadata and records what it
+finds in the kernel incore state.
+The following scheme is used for userspace to read the incore health status of
+an inode:
+.IP \[bu] 2
+If a given sick flag is set in
+.IR bs_sick ,
+then that piece of metadata has been observed to be damaged.
+The same bit should be set in
+.IR bs_checked .
+.IP \[bu]
+If a given sick flag is set in
+.I bs_checked
+but is not set in
+.IR bs_sick ,
+then that piece of metadata has been checked and is not faulty.
+.IP \[bu]
+If a given sick flag is not set in
+.IR bs_checked ,
+then no conclusion can be made.
+.PP
+The following flags apply to these fields:
+.RS 0.4i
+.TP
+.B XFS_BS_SICK_INODE
+The inode's record itself.
+.TP
+.B XFS_BS_SICK_BMBTD
+File data extent mappings.
+.TP
+.B XFS_BS_SICK_BMBTA
+Extended attribute extent mappings.
+.TP
+.B XFS_BS_SICK_BMBTC
+Copy on Write staging extent mappings.
+.TP
+.B XFS_BS_SICK_DIR
+Directory information.
+.TP
+.B XFS_BS_SICK_XATTR
+Extended attribute data.
+.TP
+.B XFS_BS_SICK_SYMLINK
+Symbolic link target.
+.TP
+.B XFS_BS_SICK_PARENT
+Parent pointers.
+.RE
+.SH ERRORS
+Error codes can be one of, but are not limited to, the following:
+.TP
+.B EFAULT
+The kernel was not able to copy into the userspace buffer.
+.TP
+.B EFSBADCRC
+Metadata checksum validation failed while performing the query.
+.TP
+.B EFSCORRUPTED
+Metadata corruption was encountered while performing the query.
+.TP
+.B EINVAL
+One of the arguments was not valid.
+.TP
+.B EIO
+An I/O error was encountered while performing the query.
+.TP
+.B ENOMEM
+There was insufficient memory to perform the query.
+.SH CONFORMING TO
+This API is specific to the XFS filesystem on the Linux kernel.
+.SH SEE ALSO
+.BR ioctl (2),
+.BR ioctl_xfs_fsgetxattr (2)
diff --git a/man/man2/ioctl_xfs_fsbulkstat.2 b/man/man2/ioctl_xfs_fsbulkstat.2
index 3e13cfa8..81f9d9bf 100644
--- a/man/man2/ioctl_xfs_fsbulkstat.2
+++ b/man/man2/ioctl_xfs_fsbulkstat.2
@@ -15,6 +15,12 @@ ioctl_xfs_fsbulkstat \- query information for a batch of XFS inodes
 .BI "int ioctl(int " fd ", XFS_IOC_FSBULKSTAT_SINGLE, struct xfs_fsop_bulkreq *" arg );
 .SH DESCRIPTION
 Query stat information for a group of XFS inodes.
+.PP
+NOTE: This ioctl has been superseded.
+Please see the
+.BR ioctl_xfs_bulkstat (2)
+manpage for information about its replacement.
+.PP
 These ioctls use
 .B struct xfs_fsop_bulkreq
 to set up a bulk transfer with the kernel:



* [PATCH 2/6] man: add documentation for v5 inumbers ioctl
  2019-09-06  3:34 [PATCH 0/6] xfsprogs: port utilities to bulkstat v5 Darrick J. Wong
  2019-09-06  3:34 ` [PATCH 1/6] man: add documentation for v5 bulkstat ioctl Darrick J. Wong
@ 2019-09-06  3:35 ` Darrick J. Wong
  2019-09-06  3:35 ` [PATCH 3/6] misc: convert XFS_IOC_FSBULKSTAT to XFS_IOC_BULKSTAT Darrick J. Wong
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 14+ messages in thread
From: Darrick J. Wong @ 2019-09-06  3:35 UTC (permalink / raw)
  To: sandeen, darrick.wong; +Cc: linux-xfs

From: Darrick J. Wong <darrick.wong@oracle.com>

Add a manpage describing the new v5 XFS_IOC_INUMBERS ioctl.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
---
 man/man2/ioctl_xfs_inumbers.2 |  118 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 118 insertions(+)
 create mode 100644 man/man2/ioctl_xfs_inumbers.2


diff --git a/man/man2/ioctl_xfs_inumbers.2 b/man/man2/ioctl_xfs_inumbers.2
new file mode 100644
index 00000000..b1e854d3
--- /dev/null
+++ b/man/man2/ioctl_xfs_inumbers.2
@@ -0,0 +1,118 @@
+.\" Copyright (c) 2019, Oracle.  All rights reserved.
+.\"
+.\" %%%LICENSE_START(GPLv2+_DOC_FULL)
+.\" SPDX-License-Identifier: GPL-2.0+
+.\" %%%LICENSE_END
+.TH IOCTL-XFS-INUMBERS 2 2019-05-23 "XFS"
+.SH NAME
+ioctl_xfs_inumbers \- query allocation information for groups of XFS inodes
+.SH SYNOPSIS
+.br
+.B #include <xfs/xfs_fs.h>
+.PP
+.BI "int ioctl(int " fd ", XFS_IOC_INUMBERS, struct xfs_inumbers_req *" arg );
+.SH DESCRIPTION
+Query inode allocation information for groups of XFS inodes.
+This ioctl uses
+.B struct xfs_inumbers_req
+to set up a bulk transfer with the kernel:
+.PP
+.in +4n
+.nf
+struct xfs_inumbers_req {
+	struct xfs_bulk_ireq    hdr;
+	struct xfs_inumbers     inumbers[];
+};
+
+struct xfs_bulk_ireq {
+	uint64_t                ino;
+	uint32_t                flags;
+	uint32_t                icount;
+	uint32_t                ocount;
+	uint32_t                agno;
+	uint64_t                reserved[5];
+};
+.fi
+.in
+.PP
+.I hdr
+describes the information to query.
+The layout and behavior are documented in the
+.BR ioctl_xfs_bulkstat (2)
+manpage and will not be discussed further here.
+
+.PP
+.I inumbers
+is an array of
+.B struct xfs_inumbers
+which is described below.
+The array must have at least
+.I icount
+elements.
+.PP
+.in +4n
+.nf
+struct xfs_inumbers {
+	uint64_t                xi_startino;
+	uint64_t                xi_allocmask;
+	uint8_t                 xi_alloccount;
+	uint8_t                 xi_version;
+	uint8_t                 xi_padding[6];
+};
+.fi
+.in
+.PP
+This structure describes inode usage information for a group of 64 consecutive
+inode numbers.
+.PP
+.I xi_startino
+is the first inode number of this group.
+.PP
+.I xi_allocmask
+is a bitmask telling which inodes in this group are allocated.
+To clarify, bit
+.B N
+is set if inode
+.BR xi_startino + N
+is allocated.
+.PP
+.I xi_alloccount
+is the number of inodes in this group that are allocated.
+This should be equal to popcnt(xi_allocmask).
+.PP
+.I xi_version
+is the version of this data structure.
+Currently, only 1 or 5 are supported.
+.PP
+.I xi_padding[6]
+is zeroed.
+.SH RETURN VALUE
+On error, \-1 is returned, and
+.I errno
+is set to indicate the error.
+.PP
+.SH ERRORS
+Error codes can be one of, but are not limited to, the following:
+.TP
+.B EFAULT
+The kernel was not able to copy into the userspace buffer.
+.TP
+.B EFSBADCRC
+Metadata checksum validation failed while performing the query.
+.TP
+.B EFSCORRUPTED
+Metadata corruption was encountered while performing the query.
+.TP
+.B EINVAL
+One of the arguments was not valid.
+.TP
+.B EIO
+An I/O error was encountered while performing the query.
+.TP
+.B ENOMEM
+There was insufficient memory to perform the query.
+.SH CONFORMING TO
+This API is specific to the XFS filesystem on the Linux kernel.
+.SH SEE ALSO
+.BR ioctl (2),
+.BR ioctl_xfs_bulkstat (2).



* [PATCH 3/6] misc: convert XFS_IOC_FSBULKSTAT to XFS_IOC_BULKSTAT
  2019-09-06  3:34 [PATCH 0/6] xfsprogs: port utilities to bulkstat v5 Darrick J. Wong
  2019-09-06  3:34 ` [PATCH 1/6] man: add documentation for v5 bulkstat ioctl Darrick J. Wong
  2019-09-06  3:35 ` [PATCH 2/6] man: add documentation for v5 inumbers ioctl Darrick J. Wong
@ 2019-09-06  3:35 ` Darrick J. Wong
  2019-09-13  0:54   ` Dave Chinner
  2019-09-06  3:35 ` [PATCH 4/6] misc: convert to v5 bulkstat_single call Darrick J. Wong
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 14+ messages in thread
From: Darrick J. Wong @ 2019-09-06  3:35 UTC (permalink / raw)
  To: sandeen, darrick.wong; +Cc: linux-xfs

From: Darrick J. Wong <darrick.wong@oracle.com>

Convert the v1 calls to v5 calls.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
---
 fsr/xfs_fsr.c      |   45 ++++++--
 io/open.c          |   17 ++-
 libfrog/bulkstat.c |  290 +++++++++++++++++++++++++++++++++++++++++++++++++---
 libfrog/bulkstat.h |   10 +-
 libfrog/fsgeom.h   |    9 ++
 quota/quot.c       |   29 ++---
 scrub/inodes.c     |   45 +++++---
 scrub/inodes.h     |    2 
 scrub/phase3.c     |    6 +
 scrub/phase5.c     |    8 +
 scrub/phase6.c     |    2 
 scrub/unicrash.c   |    6 +
 scrub/unicrash.h   |    4 -
 spaceman/health.c  |   28 +++--
 14 files changed, 411 insertions(+), 90 deletions(-)


diff --git a/fsr/xfs_fsr.c b/fsr/xfs_fsr.c
index a53eb924..cc3cc93a 100644
--- a/fsr/xfs_fsr.c
+++ b/fsr/xfs_fsr.c
@@ -466,6 +466,17 @@ fsrallfs(char *mtab, int howlong, char *leftofffile)
 				ptr = strchr(ptr, ' ');
 				if (ptr) {
 					startino = strtoull(++ptr, NULL, 10);
+					/*
+					 * NOTE: The inode number read in from
+					 * the leftoff file is the last inode
+					 * to have been fsr'd.  Since the new
+					 * xfrog_bulkstat function wants to be
+					 * passed the first inode that we want
+					 * to examine, increment the value that
+					 * we read in.  The debug message below
+					 * prints the lastoff value.
+					 */
+					startino++;
 				}
 			}
 			if (startpass < 0)
@@ -484,7 +495,7 @@ fsrallfs(char *mtab, int howlong, char *leftofffile)
 
 	if (vflag) {
 		fsrprintf(_("START: pass=%d ino=%llu %s %s\n"),
-			  fs->npass, (unsigned long long)startino,
+			  fs->npass, (unsigned long long)startino - 1,
 			  fs->dev, fs->mnt);
 	}
 
@@ -576,12 +587,10 @@ fsrfs(char *mntdir, xfs_ino_t startino, int targetrange)
 	int	fd;
 	int	count = 0;
 	int	ret;
-	uint32_t buflenout;
-	struct xfs_bstat buf[GRABSZ];
 	char	fname[64];
 	char	*tname;
 	jdm_fshandle_t	*fshandlep;
-	xfs_ino_t	lastino = startino;
+	struct xfs_bulkstat_req	*breq;
 
 	fsrprintf(_("%s start inode=%llu\n"), mntdir,
 		(unsigned long long)startino);
@@ -604,10 +613,21 @@ fsrfs(char *mntdir, xfs_ino_t startino, int targetrange)
 
 	tmp_init(mntdir);
 
-	while ((ret = xfrog_bulkstat(&fsxfd, &lastino, GRABSZ, &buf[0],
-				&buflenout)) == 0) {
-		struct xfs_bstat *p;
-		struct xfs_bstat *endp;
+	breq = xfrog_bulkstat_alloc_req(GRABSZ, startino);
+	if (!breq) {
+		fsrprintf(_("Skipping %s: not enough memory\n"),
+			  mntdir);
+		xfd_close(&fsxfd);
+		free(fshandlep);
+		return -1;
+	}
+
+	while ((ret = xfrog_bulkstat(&fsxfd, breq)) == 0) {
+		struct xfs_bstat	bs1;
+		struct xfs_bulkstat	*buf = breq->bulkstat;
+		struct xfs_bulkstat	*p;
+		struct xfs_bulkstat	*endp;
+		uint32_t		buflenout = breq->hdr.ocount;
 
 		if (buflenout == 0)
 			goto out0;
@@ -615,7 +635,7 @@ fsrfs(char *mntdir, xfs_ino_t startino, int targetrange)
 		/* Each loop through, defrag targetrange percent of the files */
 		count = (buflenout * targetrange) / 100;
 
-		qsort((char *)buf, buflenout, sizeof(struct xfs_bstat), cmp);
+		qsort((char *)buf, buflenout, sizeof(struct xfs_bulkstat), cmp);
 
 		for (p = buf, endp = (buf + buflenout); p < endp ; p++) {
 			/* Do some obvious checks now */
@@ -623,7 +643,8 @@ fsrfs(char *mntdir, xfs_ino_t startino, int targetrange)
 			     (p->bs_extents < 2))
 				continue;
 
-			fd = jdm_open(fshandlep, p, O_RDWR|O_DIRECT);
+			xfrog_bulkstat_to_bstat(&fsxfd, &bs1, p);
+			fd = jdm_open(fshandlep, &bs1, O_RDWR | O_DIRECT);
 			if (fd < 0) {
 				/* This probably means the file was
 				 * removed while in progress of handling
@@ -641,7 +662,7 @@ fsrfs(char *mntdir, xfs_ino_t startino, int targetrange)
 			/* Get a tmp file name */
 			tname = tmp_next(mntdir);
 
-			ret = fsrfile_common(fname, tname, mntdir, fd, p);
+			ret = fsrfile_common(fname, tname, mntdir, fd, &bs1);
 
 			leftoffino = p->bs_ino;
 
@@ -653,6 +674,7 @@ fsrfs(char *mntdir, xfs_ino_t startino, int targetrange)
 			}
 		}
 		if (endtime && endtime < time(NULL)) {
+			free(breq);
 			tmp_close(mntdir);
 			xfd_close(&fsxfd);
 			fsrall_cleanup(1);
@@ -662,6 +684,7 @@ fsrfs(char *mntdir, xfs_ino_t startino, int targetrange)
 	if (ret)
 		fsrprintf(_("%s: bulkstat: %s\n"), progname, strerror(ret));
 out0:
+	free(breq);
 	tmp_close(mntdir);
 	xfd_close(&fsxfd);
 	free(fshandlep);
diff --git a/io/open.c b/io/open.c
index 99ca0dd3..e1aac7d1 100644
--- a/io/open.c
+++ b/io/open.c
@@ -724,7 +724,6 @@ inode_f(
 	char			**argv)
 {
 	struct xfs_bstat	bstat;
-	uint32_t		count = 0;
 	uint64_t		result_ino = 0;
 	uint64_t		userino = NULLFSINO;
 	char			*p;
@@ -775,21 +774,31 @@ inode_f(
 		}
 	} else if (ret_next) {
 		struct xfs_fd	xfd = XFS_FD_INIT(file->fd);
+		struct xfs_bulkstat_req	*breq;
+
+		breq = xfrog_bulkstat_alloc_req(1, userino + 1);
+		if (!breq) {
+			perror("alloc bulkstat");
+			exitcode = 1;
+			return 0;
+		}
 
 		/* get next inode */
-		ret = xfrog_bulkstat(&xfd, &userino, 1, &bstat, &count);
+		ret = xfrog_bulkstat(&xfd, breq);
 		if (ret) {
 			errno = ret;
 			perror("bulkstat");
+			free(breq);
 			exitcode = 1;
 			return 0;
 		}
 
 		/* The next inode in use, or 0 if none */
-		if (count)
-			result_ino = bstat.bs_ino;
+		if (breq->hdr.ocount)
+			result_ino = breq->bulkstat[0].bs_ino;
 		else
 			result_ino = 0;
+		free(breq);
 	} else {
 		struct xfs_fd	xfd = XFS_FD_INIT(file->fd);
 
diff --git a/libfrog/bulkstat.c b/libfrog/bulkstat.c
index fa10f298..b4468243 100644
--- a/libfrog/bulkstat.c
+++ b/libfrog/bulkstat.c
@@ -3,10 +3,23 @@
  * Copyright (C) 2019 Oracle.  All Rights Reserved.
  * Author: Darrick J. Wong <darrick.wong@oracle.com>
  */
+#include <string.h>
+#include <strings.h>
 #include "xfs.h"
 #include "fsgeom.h"
 #include "bulkstat.h"
 
+/* Grab fs geometry needed to degrade to v1 bulkstat/inumbers ioctls. */
+static inline int
+xfrog_bulkstat_prep_v1_emulation(
+	struct xfs_fd		*xfd)
+{
+	if (xfd->fsgeom.blocksize > 0)
+		return 0;
+
+	return xfd_prepare_geometry(xfd);
+}
+
 /* Bulkstat a single inode.  Returns zero or a positive error code. */
 int
 xfrog_bulkstat_single(
@@ -29,29 +42,278 @@ xfrog_bulkstat_single(
 	return 0;
 }
 
-/* Bulkstat a bunch of inodes.  Returns zero or a positive error code. */
-int
-xfrog_bulkstat(
+/*
+ * Set up emulation of a v5 bulk request ioctl with a v1 bulk request ioctl.
+ * Returns 0 if the emulation should proceed; ECANCELED if there are no
+ * records; or a positive error code.
+ */
+static int
+xfrog_bulk_req_setup(
 	struct xfs_fd		*xfd,
-	uint64_t		*lastino,
-	uint32_t		icount,
-	struct xfs_bstat	*ubuffer,
-	uint32_t		*ocount)
+	struct xfs_bulk_ireq	*hdr,
+	struct xfs_fsop_bulkreq	*bulkreq,
+	size_t			rec_size)
+{
+	void			*buf;
+
+	if (hdr->flags & XFS_BULK_IREQ_AGNO) {
+		uint32_t	agno = cvt_ino_to_agno(xfd, hdr->ino);
+
+		if (hdr->ino == 0)
+			hdr->ino = cvt_agino_to_ino(xfd, hdr->agno, 0);
+		else if (agno < hdr->agno)
+			return EINVAL;
+		else if (agno > hdr->agno)
+			goto no_results;
+	}
+
+	if (cvt_ino_to_agno(xfd, hdr->ino) > xfd->fsgeom.agcount)
+		goto no_results;
+
+	buf = malloc(hdr->icount * rec_size);
+	if (!buf)
+		return errno;
+
+	if (hdr->ino)
+		hdr->ino--;
+	bulkreq->lastip = (__u64 *)&hdr->ino;
+	bulkreq->icount = hdr->icount;
+	bulkreq->ocount = (__s32 *)&hdr->ocount;
+	bulkreq->ubuffer = buf;
+	return 0;
+
+no_results:
+	hdr->ocount = 0;
+	return ECANCELED;
+}
+
+/*
+ * Convert records and free resources used to do a v1 emulation of v5 bulk
+ * request.
+ */
+static int
+xfrog_bulk_req_teardown(
+	struct xfs_fd		*xfd,
+	struct xfs_bulk_ireq	*hdr,
+	struct xfs_fsop_bulkreq	*bulkreq,
+	size_t			v1_rec_size,
+	uint64_t		(*v1_ino)(void *v1_rec),
+	void			*v5_records,
+	size_t			v5_rec_size,
+	void			(*cvt)(struct xfs_fd *xfd, void *v5, void *v1),
+	unsigned int		startino_adj,
+	int			error)
+{
+	void			*v1_rec = bulkreq->ubuffer;
+	void			*v5_rec = v5_records;
+	unsigned int		i;
+
+	if (error == ECANCELED) {
+		error = 0;
+		goto free;
+	}
+	if (error)
+		goto free;
+
+	/*
+	 * Convert each record from v1 to v5 format, keeping the startino
+	 * value up to date and (if desired) stopping at the end of the
+	 * AG.
+	 */
+	for (i = 0;
+	     i < hdr->ocount;
+	     i++, v1_rec += v1_rec_size, v5_rec += v5_rec_size) {
+		uint64_t	ino = v1_ino(v1_rec);
+
+		/* Stop if we hit a different AG. */
+		if ((hdr->flags & XFS_BULK_IREQ_AGNO) &&
+		    cvt_ino_to_agno(xfd, ino) != hdr->agno) {
+			hdr->ocount = i;
+			break;
+		}
+		cvt(xfd, v5_rec, v1_rec);
+		hdr->ino = ino + startino_adj;
+	}
+
+free:
+	free(bulkreq->ubuffer);
+	return error;
+}
+
+static uint64_t xfrog_bstat_ino(void *v1_rec)
+{
+	return ((struct xfs_bstat *)v1_rec)->bs_ino;
+}
+
+static void xfrog_bstat_cvt(struct xfs_fd *xfd, void *v5, void *v1)
+{
+	xfrog_bstat_to_bulkstat(xfd, v5, v1);
+}
+
+/* Bulkstat a bunch of inodes using the v5 interface. */
+static int
+xfrog_bulkstat5(
+	struct xfs_fd		*xfd,
+	struct xfs_bulkstat_req	*req)
 {
-	struct xfs_fsop_bulkreq	bulkreq = {
-		.lastip		= (__u64 *)lastino,
-		.icount		= icount,
-		.ubuffer	= ubuffer,
-		.ocount		= (__s32 *)ocount,
-	};
 	int			ret;
 
-	ret = ioctl(xfd->fd, XFS_IOC_FSBULKSTAT, &bulkreq);
+	ret = ioctl(xfd->fd, XFS_IOC_BULKSTAT, req);
 	if (ret)
 		return errno;
 	return 0;
 }
 
+/* Bulkstat a bunch of inodes using the v1 interface. */
+static int
+xfrog_bulkstat1(
+	struct xfs_fd		*xfd,
+	struct xfs_bulkstat_req	*req)
+{
+	struct xfs_fsop_bulkreq	bulkreq = { 0 };
+	int			error;
+
+	error = xfrog_bulkstat_prep_v1_emulation(xfd);
+	if (error)
+		return error;
+
+	error = xfrog_bulk_req_setup(xfd, &req->hdr, &bulkreq,
+			sizeof(struct xfs_bstat));
+	if (error == ECANCELED)
+		goto out_teardown;
+	if (error)
+		return error;
+
+	error = ioctl(xfd->fd, XFS_IOC_FSBULKSTAT, &bulkreq);
+	if (error)
+		error = errno;
+
+out_teardown:
+	return xfrog_bulk_req_teardown(xfd, &req->hdr, &bulkreq,
+			sizeof(struct xfs_bstat), xfrog_bstat_ino,
+			&req->bulkstat, sizeof(struct xfs_bulkstat),
+			xfrog_bstat_cvt, 1, error);
+}
+
+/* Bulkstat a bunch of inodes.  Returns zero or a positive error code. */
+int
+xfrog_bulkstat(
+	struct xfs_fd		*xfd,
+	struct xfs_bulkstat_req	*req)
+{
+	int			error;
+
+	if (xfd->flags & XFROG_FLAG_BULKSTAT_FORCE_V1)
+		goto try_v1;
+
+	error = xfrog_bulkstat5(xfd, req);
+	if (error == 0 || (xfd->flags & XFROG_FLAG_BULKSTAT_FORCE_V5))
+		return error;
+
+	/* If the v5 ioctl wasn't found, we punt to v1. */
+	switch (error) {
+	case EOPNOTSUPP:
+	case ENOTTY:
+		xfd->flags |= XFROG_FLAG_BULKSTAT_FORCE_V1;
+		break;
+	}
+
+try_v1:
+	return xfrog_bulkstat1(xfd, req);
+}
+
+/* Convert bulkstat (v5) to bstat (v1). */
+void
+xfrog_bulkstat_to_bstat(
+	struct xfs_fd			*xfd,
+	struct xfs_bstat		*bs1,
+	const struct xfs_bulkstat	*bstat)
+{
+	bs1->bs_ino = bstat->bs_ino;
+	bs1->bs_mode = bstat->bs_mode;
+	bs1->bs_nlink = bstat->bs_nlink;
+	bs1->bs_uid = bstat->bs_uid;
+	bs1->bs_gid = bstat->bs_gid;
+	bs1->bs_rdev = bstat->bs_rdev;
+	bs1->bs_blksize = bstat->bs_blksize;
+	bs1->bs_size = bstat->bs_size;
+	bs1->bs_atime.tv_sec = bstat->bs_atime;
+	bs1->bs_mtime.tv_sec = bstat->bs_mtime;
+	bs1->bs_ctime.tv_sec = bstat->bs_ctime;
+	bs1->bs_atime.tv_nsec = bstat->bs_atime_nsec;
+	bs1->bs_mtime.tv_nsec = bstat->bs_mtime_nsec;
+	bs1->bs_ctime.tv_nsec = bstat->bs_ctime_nsec;
+	bs1->bs_blocks = bstat->bs_blocks;
+	bs1->bs_xflags = bstat->bs_xflags;
+	bs1->bs_extsize = cvt_off_fsb_to_b(xfd, bstat->bs_extsize_blks);
+	bs1->bs_extents = bstat->bs_extents;
+	bs1->bs_gen = bstat->bs_gen;
+	bs1->bs_projid_lo = bstat->bs_projectid & 0xFFFF;
+	bs1->bs_forkoff = bstat->bs_forkoff;
+	bs1->bs_projid_hi = bstat->bs_projectid >> 16;
+	bs1->bs_sick = bstat->bs_sick;
+	bs1->bs_checked = bstat->bs_checked;
+	bs1->bs_cowextsize = cvt_off_fsb_to_b(xfd, bstat->bs_cowextsize_blks);
+	bs1->bs_dmevmask = 0;
+	bs1->bs_dmstate = 0;
+	bs1->bs_aextents = bstat->bs_aextents;
+}
+
+/* Convert bstat (v1) to bulkstat (v5). */
+void
+xfrog_bstat_to_bulkstat(
+	struct xfs_fd			*xfd,
+	struct xfs_bulkstat		*bstat,
+	const struct xfs_bstat		*bs1)
+{
+	memset(bstat, 0, sizeof(*bstat));
+	bstat->bs_version = XFS_BULKSTAT_VERSION_V1;
+
+	bstat->bs_ino = bs1->bs_ino;
+	bstat->bs_mode = bs1->bs_mode;
+	bstat->bs_nlink = bs1->bs_nlink;
+	bstat->bs_uid = bs1->bs_uid;
+	bstat->bs_gid = bs1->bs_gid;
+	bstat->bs_rdev = bs1->bs_rdev;
+	bstat->bs_blksize = bs1->bs_blksize;
+	bstat->bs_size = bs1->bs_size;
+	bstat->bs_atime = bs1->bs_atime.tv_sec;
+	bstat->bs_mtime = bs1->bs_mtime.tv_sec;
+	bstat->bs_ctime = bs1->bs_ctime.tv_sec;
+	bstat->bs_atime_nsec = bs1->bs_atime.tv_nsec;
+	bstat->bs_mtime_nsec = bs1->bs_mtime.tv_nsec;
+	bstat->bs_ctime_nsec = bs1->bs_ctime.tv_nsec;
+	bstat->bs_blocks = bs1->bs_blocks;
+	bstat->bs_xflags = bs1->bs_xflags;
+	bstat->bs_extsize_blks = cvt_b_to_off_fsbt(xfd, bs1->bs_extsize);
+	bstat->bs_extents = bs1->bs_extents;
+	bstat->bs_gen = bs1->bs_gen;
+	bstat->bs_projectid = bstat_get_projid(bs1);
+	bstat->bs_forkoff = bs1->bs_forkoff;
+	bstat->bs_sick = bs1->bs_sick;
+	bstat->bs_checked = bs1->bs_checked;
+	bstat->bs_cowextsize_blks = cvt_b_to_off_fsbt(xfd, bs1->bs_cowextsize);
+	bstat->bs_aextents = bs1->bs_aextents;
+}
+
+/* Allocate a bulkstat request.  On error returns NULL and sets errno. */
+struct xfs_bulkstat_req *
+xfrog_bulkstat_alloc_req(
+	uint32_t		nr,
+	uint64_t		startino)
+{
+	struct xfs_bulkstat_req	*breq;
+
+	breq = calloc(1, XFS_BULKSTAT_REQ_SIZE(nr));
+	if (!breq)
+		return NULL;
+
+	breq->hdr.icount = nr;
+	breq->hdr.ino = startino;
+
+	return breq;
+}
+
 /*
  * Query inode allocation bitmask information.  Returns zero or a positive
  * error code.
diff --git a/libfrog/bulkstat.h b/libfrog/bulkstat.h
index 83ac0e37..6f51c167 100644
--- a/libfrog/bulkstat.h
+++ b/libfrog/bulkstat.h
@@ -10,8 +10,14 @@
 struct xfs_bstat;
 int xfrog_bulkstat_single(struct xfs_fd *xfd, uint64_t ino,
 		struct xfs_bstat *ubuffer);
-int xfrog_bulkstat(struct xfs_fd *xfd, uint64_t *lastino, uint32_t icount,
-		struct xfs_bstat *ubuffer, uint32_t *ocount);
+int xfrog_bulkstat(struct xfs_fd *xfd, struct xfs_bulkstat_req *req);
+
+struct xfs_bulkstat_req *xfrog_bulkstat_alloc_req(uint32_t nr,
+		uint64_t startino);
+void xfrog_bulkstat_to_bstat(struct xfs_fd *xfd, struct xfs_bstat *bs1,
+		const struct xfs_bulkstat *bstat);
+void xfrog_bstat_to_bulkstat(struct xfs_fd *xfd, struct xfs_bulkstat *bstat,
+		const struct xfs_bstat *bs1);
 
 struct xfs_inogrp;
 int xfrog_inumbers(struct xfs_fd *xfd, uint64_t *lastino, uint32_t icount,
diff --git a/libfrog/fsgeom.h b/libfrog/fsgeom.h
index 55b14c2b..ca38324e 100644
--- a/libfrog/fsgeom.h
+++ b/libfrog/fsgeom.h
@@ -39,8 +39,17 @@ struct xfs_fd {
 
 	/* log2 of sb_blocksize / sb_sectsize */
 	unsigned int		blkbb_log;
+
+	/* XFROG_FLAG_* state flags */
+	unsigned int		flags;
 };
 
+/* Only use v1 bulkstat/inumbers ioctls. */
+#define XFROG_FLAG_BULKSTAT_FORCE_V1	(1 << 0)
+
+/* Only use v5 bulkstat/inumbers ioctls. */
+#define XFROG_FLAG_BULKSTAT_FORCE_V5	(1 << 1)
+
 /* Static initializers */
 #define XFS_FD_INIT(_fd)	{ .fd = (_fd), }
 #define XFS_FD_INIT_EMPTY	XFS_FD_INIT(-1)
diff --git a/quota/quot.c b/quota/quot.c
index 686b2726..7edfad16 100644
--- a/quota/quot.c
+++ b/quota/quot.c
@@ -69,7 +69,7 @@ quot_help(void)
 
 static void
 quot_bulkstat_add(
-	struct xfs_bstat *p,
+	struct xfs_bulkstat	*p,
 	uint		flags)
 {
 	du_t		*dp;
@@ -93,7 +93,7 @@ quot_bulkstat_add(
 	}
 	for (i = 0; i < 3; i++) {
 		id = (i == 0) ? p->bs_uid : ((i == 1) ?
-			p->bs_gid : bstat_get_projid(p));
+			p->bs_gid : p->bs_projectid);
 		hp = &duhash[i][id % DUHASH];
 		for (dp = *hp; dp; dp = dp->next)
 			if (dp->id == id)
@@ -113,11 +113,11 @@ quot_bulkstat_add(
 		}
 		dp->blocks += size;
 
-		if (now - p->bs_atime.tv_sec > 30 * (60*60*24))
+		if (now - p->bs_atime > 30 * (60*60*24))
 			dp->blocks30 += size;
-		if (now - p->bs_atime.tv_sec > 60 * (60*60*24))
+		if (now - p->bs_atime > 60 * (60*60*24))
 			dp->blocks60 += size;
-		if (now - p->bs_atime.tv_sec > 90 * (60*60*24))
+		if (now - p->bs_atime > 90 * (60*60*24))
 			dp->blocks90 += size;
 		dp->nfiles++;
 	}
@@ -129,9 +129,7 @@ quot_bulkstat_mount(
 	unsigned int		flags)
 {
 	struct xfs_fd		fsxfd = XFS_FD_INIT_EMPTY;
-	struct xfs_bstat	*buf;
-	uint64_t		last = 0;
-	uint32_t		count;
+	struct xfs_bulkstat_req	*breq;
 	int			i, sts, ret;
 	du_t			**dp;
 
@@ -154,25 +152,24 @@ quot_bulkstat_mount(
 		return;
 	}
 
-	buf = (struct xfs_bstat *)calloc(NBSTAT, sizeof(struct xfs_bstat));
-	if (!buf) {
+	breq = xfrog_bulkstat_alloc_req(NBSTAT, 0);
+	if (!breq) {
 		perror("calloc");
 		xfd_close(&fsxfd);
 		return;
 	}
 
-	while ((sts = xfrog_bulkstat(&fsxfd, &last, NBSTAT, buf,
-				&count)) == 0) {
-		if (count == 0)
+	while ((sts = xfrog_bulkstat(&fsxfd, breq)) == 0) {
+		if (breq->hdr.ocount == 0)
 			break;
-		for (i = 0; i < count; i++)
-			quot_bulkstat_add(&buf[i], flags);
+		for (i = 0; i < breq->hdr.ocount; i++)
+			quot_bulkstat_add(&breq->bulkstat[i], flags);
 	}
 	if (sts < 0) {
 		errno = sts;
 		perror("XFS_IOC_FSBULKSTAT");
 	}
-	free(buf);
+	free(breq);
 	xfd_close(&fsxfd);
 }
 
diff --git a/scrub/inodes.c b/scrub/inodes.c
index 580a845e..851c24bd 100644
--- a/scrub/inodes.c
+++ b/scrub/inodes.c
@@ -50,13 +50,15 @@ static void
 xfs_iterate_inodes_range_check(
 	struct scrub_ctx	*ctx,
 	struct xfs_inogrp	*inogrp,
-	struct xfs_bstat	*bstat)
+	struct xfs_bulkstat	*bstat)
 {
-	struct xfs_bstat	*bs;
+	struct xfs_bulkstat	*bs;
 	int			i;
 	int			error;
 
 	for (i = 0, bs = bstat; i < XFS_INODES_PER_CHUNK; i++) {
+		struct xfs_bstat bs1;
+
 		if (!(inogrp->xi_allocmask & (1ULL << i)))
 			continue;
 		if (bs->bs_ino == inogrp->xi_startino + i) {
@@ -66,11 +68,13 @@ xfs_iterate_inodes_range_check(
 
 		/* Load the one inode. */
 		error = xfrog_bulkstat_single(&ctx->mnt,
-				inogrp->xi_startino + i, bs);
-		if (error || bs->bs_ino != inogrp->xi_startino + i) {
-			memset(bs, 0, sizeof(struct xfs_bstat));
+				inogrp->xi_startino + i, &bs1);
+		if (error || bs1.bs_ino != inogrp->xi_startino + i) {
+			memset(bs, 0, sizeof(struct xfs_bulkstat));
 			bs->bs_ino = inogrp->xi_startino + i;
 			bs->bs_blksize = ctx->mnt_sv.f_frsize;
+		} else {
+			xfrog_bstat_to_bulkstat(&ctx->mnt, bs, &bs1);
 		}
 		bs++;
 	}
@@ -93,41 +97,41 @@ xfs_iterate_inodes_range(
 {
 	struct xfs_handle	handle;
 	struct xfs_inogrp	inogrp;
-	struct xfs_bstat	bstat[XFS_INODES_PER_CHUNK];
+	struct xfs_bulkstat_req	*breq;
 	char			idescr[DESCR_BUFSZ];
-	struct xfs_bstat	*bs;
+	struct xfs_bulkstat	*bs;
 	uint64_t		igrp_ino;
-	uint64_t		ino;
-	uint32_t		bulklen = 0;
 	uint32_t		igrplen = 0;
 	bool			moveon = true;
 	int			i;
 	int			error;
 	int			stale_count = 0;
 
-
-	memset(bstat, 0, XFS_INODES_PER_CHUNK * sizeof(struct xfs_bstat));
-
 	memcpy(&handle.ha_fsid, fshandle, sizeof(handle.ha_fsid));
 	handle.ha_fid.fid_len = sizeof(xfs_fid_t) -
 			sizeof(handle.ha_fid.fid_len);
 	handle.ha_fid.fid_pad = 0;
 
+	breq = xfrog_bulkstat_alloc_req(XFS_INODES_PER_CHUNK, 0);
+	if (!breq) {
+		str_info(ctx, descr, _("Insufficient memory; giving up."));
+		return false;
+	}
+
 	/* Find the inode chunk & alloc mask */
 	igrp_ino = first_ino;
 	error = xfrog_inumbers(&ctx->mnt, &igrp_ino, 1, &inogrp, &igrplen);
 	while (!error && igrplen) {
-		/* Load the inodes. */
-		ino = inogrp.xi_startino - 1;
-
 		/*
 		 * We can have totally empty inode chunks on filesystems where
 		 * there are more than 64 inodes per block.  Skip these.
 		 */
 		if (inogrp.xi_alloccount == 0)
 			goto igrp_retry;
-		error = xfrog_bulkstat(&ctx->mnt, &ino, inogrp.xi_alloccount,
-				bstat, &bulklen);
+
+		breq->hdr.ino = inogrp.xi_startino;
+		breq->hdr.icount = inogrp.xi_alloccount;
+		error = xfrog_bulkstat(&ctx->mnt, breq);
 		if (error) {
 			char	errbuf[DESCR_BUFSZ];
 
@@ -135,10 +139,12 @@ xfs_iterate_inodes_range(
 						errbuf, DESCR_BUFSZ));
 		}
 
-		xfs_iterate_inodes_range_check(ctx, &inogrp, bstat);
+		xfs_iterate_inodes_range_check(ctx, &inogrp, breq->bulkstat);
 
 		/* Iterate all the inodes. */
-		for (i = 0, bs = bstat; i < inogrp.xi_alloccount; i++, bs++) {
+		for (i = 0, bs = breq->bulkstat;
+		     i < inogrp.xi_alloccount;
+		     i++, bs++) {
 			if (bs->bs_ino > last_ino)
 				goto out;
 
@@ -184,6 +190,7 @@ _("Changed too many times during scan; giving up."));
 		str_liberror(ctx, error, descr);
 		moveon = false;
 	}
+	free(breq);
 out:
 	return moveon;
 }
diff --git a/scrub/inodes.h b/scrub/inodes.h
index 631848c3..3341c6d9 100644
--- a/scrub/inodes.h
+++ b/scrub/inodes.h
@@ -7,7 +7,7 @@
 #define XFS_SCRUB_INODES_H_
 
 typedef int (*xfs_inode_iter_fn)(struct scrub_ctx *ctx,
-		struct xfs_handle *handle, struct xfs_bstat *bs, void *arg);
+		struct xfs_handle *handle, struct xfs_bulkstat *bs, void *arg);
 
 #define XFS_ITERATE_INODES_ABORT	(-1)
 bool xfs_scan_all_inodes(struct scrub_ctx *ctx, xfs_inode_iter_fn fn,
diff --git a/scrub/phase3.c b/scrub/phase3.c
index 81c64cd1..a32d1ced 100644
--- a/scrub/phase3.c
+++ b/scrub/phase3.c
@@ -30,7 +30,7 @@ xfs_scrub_fd(
 	struct scrub_ctx	*ctx,
 	bool			(*fn)(struct scrub_ctx *ctx, uint64_t ino,
 				      uint32_t gen, struct xfs_action_list *a),
-	struct xfs_bstat	*bs,
+	struct xfs_bulkstat	*bs,
 	struct xfs_action_list	*alist)
 {
 	return fn(ctx, bs->bs_ino, bs->bs_gen, alist);
@@ -45,7 +45,7 @@ struct scrub_inode_ctx {
 static void
 xfs_scrub_inode_vfs_error(
 	struct scrub_ctx	*ctx,
-	struct xfs_bstat	*bstat)
+	struct xfs_bulkstat	*bstat)
 {
 	char			descr[DESCR_BUFSZ];
 	xfs_agnumber_t		agno;
@@ -65,7 +65,7 @@ static int
 xfs_scrub_inode(
 	struct scrub_ctx	*ctx,
 	struct xfs_handle	*handle,
-	struct xfs_bstat	*bstat,
+	struct xfs_bulkstat	*bstat,
 	void			*arg)
 {
 	struct xfs_action_list	alist;
diff --git a/scrub/phase5.c b/scrub/phase5.c
index 3ff34251..99cd51b2 100644
--- a/scrub/phase5.c
+++ b/scrub/phase5.c
@@ -80,7 +80,7 @@ xfs_scrub_scan_dirents(
 	struct scrub_ctx	*ctx,
 	const char		*descr,
 	int			*fd,
-	struct xfs_bstat	*bstat)
+	struct xfs_bulkstat	*bstat)
 {
 	struct unicrash		*uc = NULL;
 	DIR			*dir;
@@ -140,7 +140,7 @@ xfs_scrub_scan_fhandle_namespace_xattrs(
 	struct scrub_ctx		*ctx,
 	const char			*descr,
 	struct xfs_handle		*handle,
-	struct xfs_bstat		*bstat,
+	struct xfs_bulkstat		*bstat,
 	const struct attrns_decode	*attr_ns)
 {
 	struct attrlist_cursor		cur;
@@ -200,7 +200,7 @@ xfs_scrub_scan_fhandle_xattrs(
 	struct scrub_ctx		*ctx,
 	const char			*descr,
 	struct xfs_handle		*handle,
-	struct xfs_bstat		*bstat)
+	struct xfs_bulkstat		*bstat)
 {
 	const struct attrns_decode	*ns;
 	bool				moveon = true;
@@ -228,7 +228,7 @@ static int
 xfs_scrub_connections(
 	struct scrub_ctx	*ctx,
 	struct xfs_handle	*handle,
-	struct xfs_bstat	*bstat,
+	struct xfs_bulkstat	*bstat,
 	void			*arg)
 {
 	bool			*pmoveon = arg;
diff --git a/scrub/phase6.c b/scrub/phase6.c
index 506e75d2..b41f90e0 100644
--- a/scrub/phase6.c
+++ b/scrub/phase6.c
@@ -172,7 +172,7 @@ static int
 xfs_report_verify_inode(
 	struct scrub_ctx		*ctx,
 	struct xfs_handle		*handle,
-	struct xfs_bstat		*bstat,
+	struct xfs_bulkstat		*bstat,
 	void				*arg)
 {
 	char				descr[DESCR_BUFSZ];
diff --git a/scrub/unicrash.c b/scrub/unicrash.c
index 17e8f34f..b02c5658 100644
--- a/scrub/unicrash.c
+++ b/scrub/unicrash.c
@@ -432,7 +432,7 @@ unicrash_init(
  */
 static bool
 is_only_root_writable(
-	struct xfs_bstat	*bstat)
+	struct xfs_bulkstat	*bstat)
 {
 	if (bstat->bs_uid != 0 || bstat->bs_gid != 0)
 		return false;
@@ -444,7 +444,7 @@ bool
 unicrash_dir_init(
 	struct unicrash		**ucp,
 	struct scrub_ctx	*ctx,
-	struct xfs_bstat	*bstat)
+	struct xfs_bulkstat	*bstat)
 {
 	/*
 	 * Assume 64 bytes per dentry, clamp buckets between 16 and 64k.
@@ -459,7 +459,7 @@ bool
 unicrash_xattr_init(
 	struct unicrash		**ucp,
 	struct scrub_ctx	*ctx,
-	struct xfs_bstat	*bstat)
+	struct xfs_bulkstat	*bstat)
 {
 	/* Assume 16 attributes per extent for lack of a better idea. */
 	return unicrash_init(ucp, ctx, false, 16 * (1 + bstat->bs_aextents),
diff --git a/scrub/unicrash.h b/scrub/unicrash.h
index fb8f5f72..feb9cc86 100644
--- a/scrub/unicrash.h
+++ b/scrub/unicrash.h
@@ -14,9 +14,9 @@ struct unicrash;
 struct dirent;
 
 bool unicrash_dir_init(struct unicrash **ucp, struct scrub_ctx *ctx,
-		struct xfs_bstat *bstat);
+		struct xfs_bulkstat *bstat);
 bool unicrash_xattr_init(struct unicrash **ucp, struct scrub_ctx *ctx,
-		struct xfs_bstat *bstat);
+		struct xfs_bulkstat *bstat);
 bool unicrash_fs_label_init(struct unicrash **ucp, struct scrub_ctx *ctx);
 void unicrash_free(struct unicrash *uc);
 bool unicrash_check_dir_name(struct unicrash *uc, const char *descr,
diff --git a/spaceman/health.c b/spaceman/health.c
index c3575b8e..9bed7fdf 100644
--- a/spaceman/health.c
+++ b/spaceman/health.c
@@ -266,11 +266,10 @@ static int
 report_bulkstat_health(
 	xfs_agnumber_t		agno)
 {
-	struct xfs_bstat	bstat[BULKSTAT_NR];
+	struct xfs_bulkstat_req	*breq;
 	char			descr[256];
 	uint64_t		startino = 0;
 	uint64_t		lastino = -1ULL;
-	uint32_t		ocount;
 	uint32_t		i;
 	int			error;
 
@@ -279,15 +278,23 @@ report_bulkstat_health(
 		lastino = cvt_agino_to_ino(&file->xfd, agno + 1, 0) - 1;
 	}
 
-	while ((error = xfrog_bulkstat(&file->xfd, &startino, BULKSTAT_NR,
-			bstat, &ocount) == 0) && ocount > 0) {
-		for (i = 0; i < ocount; i++) {
-			if (bstat[i].bs_ino > lastino)
+	breq = xfrog_bulkstat_alloc_req(BULKSTAT_NR, startino);
+	if (!breq) {
+		perror("bulk alloc req");
+		exitcode = 1;
+		return 1;
+	}
+
+	while ((error = xfrog_bulkstat(&file->xfd, breq) == 0) &&
+			breq->hdr.ocount > 0) {
+		for (i = 0; i < breq->hdr.ocount; i++) {
+			if (breq->bulkstat[i].bs_ino > lastino)
 				goto out;
-			snprintf(descr, sizeof(descr) - 1, _("inode %llu"),
-					bstat[i].bs_ino);
-			report_sick(descr, inode_flags, bstat[i].bs_sick,
-					bstat[i].bs_checked);
+			snprintf(descr, sizeof(descr) - 1, _("inode %"PRIu64),
+					breq->bulkstat[i].bs_ino);
+			report_sick(descr, inode_flags,
+					breq->bulkstat[i].bs_sick,
+					breq->bulkstat[i].bs_checked);
 		}
 	}
 	if (error) {
@@ -295,6 +302,7 @@ report_bulkstat_health(
 		perror("bulkstat");
 	}
 out:
+	free(breq);
 	return error;
 }
 



* [PATCH 4/6] misc: convert to v5 bulkstat_single call
  2019-09-06  3:34 [PATCH 0/6] xfsprogs: port utilities to bulkstat v5 Darrick J. Wong
                   ` (2 preceding siblings ...)
  2019-09-06  3:35 ` [PATCH 3/6] misc: convert XFS_IOC_FSBULKSTAT to XFS_IOC_BULKSTAT Darrick J. Wong
@ 2019-09-06  3:35 ` Darrick J. Wong
  2019-09-13  1:02   ` Dave Chinner
  2019-09-06  3:35 ` [PATCH 5/6] misc: convert from XFS_IOC_FSINUMBERS to XFS_IOC_INUMBERS Darrick J. Wong
  2019-09-06  3:35 ` [PATCH 6/6] libxfs: revert FSGEOMETRY v5 -> v4 hack Darrick J. Wong
  5 siblings, 1 reply; 14+ messages in thread
From: Darrick J. Wong @ 2019-09-06  3:35 UTC (permalink / raw)
  To: sandeen, darrick.wong; +Cc: linux-xfs

From: Darrick J. Wong <darrick.wong@oracle.com>

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
---
 fsr/xfs_fsr.c      |    8 +++-
 io/open.c          |    6 ++-
 io/swapext.c       |    4 ++
 libfrog/bulkstat.c |  103 ++++++++++++++++++++++++++++++++++++++++++++--------
 libfrog/bulkstat.h |    4 +-
 scrub/inodes.c     |    8 +---
 spaceman/health.c  |    4 +-
 7 files changed, 105 insertions(+), 32 deletions(-)


diff --git a/fsr/xfs_fsr.c b/fsr/xfs_fsr.c
index cc3cc93a..e8fa39ab 100644
--- a/fsr/xfs_fsr.c
+++ b/fsr/xfs_fsr.c
@@ -724,6 +724,7 @@ fsrfile(
 	xfs_ino_t		ino)
 {
 	struct xfs_fd		fsxfd = XFS_FD_INIT_EMPTY;
+	struct xfs_bulkstat	bulkstat;
 	struct xfs_bstat	statbuf;
 	jdm_fshandle_t		*fshandlep;
 	int			fd = -1;
@@ -748,12 +749,13 @@ fsrfile(
 		goto out;
 	}
 
-	error = xfrog_bulkstat_single(&fsxfd, ino, &statbuf);
+	error = xfrog_bulkstat_single(&fsxfd, ino, 0, &bulkstat);
 	if (error) {
 		fsrprintf(_("unable to get bstat on %s: %s\n"),
 			fname, strerror(error));
 		goto out;
 	}
+	xfrog_bulkstat_to_bstat(&fsxfd, &statbuf, &bulkstat);
 
 	fd = jdm_open(fshandlep, &statbuf, O_RDWR|O_DIRECT);
 	if (fd < 0) {
@@ -974,7 +976,7 @@ fsr_setup_attr_fork(
 
 	i = 0;
 	do {
-		struct xfs_bstat tbstat;
+		struct xfs_bulkstat	tbstat;
 		char		name[64];
 		int		ret;
 
@@ -983,7 +985,7 @@ fsr_setup_attr_fork(
 		 * this to compare against the target and determine what we
 		 * need to do.
 		 */
-		ret = xfrog_bulkstat_single(&txfd, tstatbuf.st_ino, &tbstat);
+		ret = xfrog_bulkstat_single(&txfd, tstatbuf.st_ino, 0, &tbstat);
 		if (ret) {
 			fsrprintf(_("unable to get bstat on temp file: %s\n"),
 						strerror(ret));
diff --git a/io/open.c b/io/open.c
index e1aac7d1..e1979501 100644
--- a/io/open.c
+++ b/io/open.c
@@ -723,7 +723,7 @@ inode_f(
 	int			argc,
 	char			**argv)
 {
-	struct xfs_bstat	bstat;
+	struct xfs_bulkstat	bulkstat;
 	uint64_t		result_ino = 0;
 	uint64_t		userino = NULLFSINO;
 	char			*p;
@@ -803,7 +803,7 @@ inode_f(
 		struct xfs_fd	xfd = XFS_FD_INIT(file->fd);
 
 		/* get this inode */
-		ret = xfrog_bulkstat_single(&xfd, userino, &bstat);
+		ret = xfrog_bulkstat_single(&xfd, userino, 0, &bulkstat);
 		if (ret == EINVAL) {
 			/* Not in use */
 			result_ino = 0;
@@ -813,7 +813,7 @@ inode_f(
 			exitcode = 1;
 			return 0;
 		} else {
-			result_ino = bstat.bs_ino;
+			result_ino = bulkstat.bs_ino;
 		}
 	}
 
diff --git a/io/swapext.c b/io/swapext.c
index 2b4918f8..ca024b93 100644
--- a/io/swapext.c
+++ b/io/swapext.c
@@ -28,6 +28,7 @@ swapext_f(
 	char			**argv)
 {
 	struct xfs_fd		fxfd = XFS_FD_INIT(file->fd);
+	struct xfs_bulkstat	bulkstat;
 	int			fd;
 	int			error;
 	struct xfs_swapext	sx;
@@ -48,12 +49,13 @@ swapext_f(
 		goto out;
 	}
 
-	error = xfrog_bulkstat_single(&fxfd, stat.st_ino, &sx.sx_stat);
+	error = xfrog_bulkstat_single(&fxfd, stat.st_ino, 0, &bulkstat);
 	if (error) {
 		errno = error;
 		perror("bulkstat");
 		goto out;
 	}
+	xfrog_bulkstat_to_bstat(&fxfd, &sx.sx_stat, &bulkstat);
 	sx.sx_version = XFS_SX_VERSION;
 	sx.sx_fdtarget = file->fd;
 	sx.sx_fdtmp = fd;
diff --git a/libfrog/bulkstat.c b/libfrog/bulkstat.c
index b4468243..2a70824e 100644
--- a/libfrog/bulkstat.c
+++ b/libfrog/bulkstat.c
@@ -20,26 +20,99 @@ xfrog_bulkstat_prep_v1_emulation(
 	return xfd_prepare_geometry(xfd);
 }
 
+/* Bulkstat a single inode using v5 ioctl. */
+static int
+xfrog_bulkstat_single5(
+	struct xfs_fd			*xfd,
+	uint64_t			ino,
+	unsigned int			flags,
+	struct xfs_bulkstat		*bulkstat)
+{
+	struct xfs_bulkstat_req		*req;
+	int				ret;
+
+	if (flags & ~(XFS_BULK_IREQ_SPECIAL))
+		return EINVAL;
+
+	req = xfrog_bulkstat_alloc_req(1, ino);
+	if (!req)
+		return ENOMEM;
+
+	req->hdr.flags = flags;
+	ret = ioctl(xfd->fd, XFS_IOC_BULKSTAT, req);
+	if (ret) {
+		ret = errno;
+		goto free;
+	}
+
+	if (req->hdr.ocount == 0) {
+		ret = ENOENT;
+		goto free;
+	}
+
+	memcpy(bulkstat, req->bulkstat, sizeof(struct xfs_bulkstat));
+free:
+	free(req);
+	return ret;
+}
+
+/* Bulkstat a single inode using v1 ioctl. */
+static int
+xfrog_bulkstat_single1(
+	struct xfs_fd			*xfd,
+	uint64_t			ino,
+	unsigned int			flags,
+	struct xfs_bulkstat		*bulkstat)
+{
+	struct xfs_bstat		bstat;
+	struct xfs_fsop_bulkreq		bulkreq = { 0 };
+	int				error;
+
+	if (flags)
+		return EINVAL;
+
+	error = xfrog_bulkstat_prep_v1_emulation(xfd);
+	if (error)
+		return error;
+
+	bulkreq.lastip = (__u64 *)&ino;
+	bulkreq.icount = 1;
+	bulkreq.ubuffer = &bstat;
+	error = ioctl(xfd->fd, XFS_IOC_FSBULKSTAT_SINGLE, &bulkreq);
+	if (error)
+		return errno;
+
+	xfrog_bstat_to_bulkstat(xfd, bulkstat, &bstat);
+	return 0;
+}
+
 /* Bulkstat a single inode.  Returns zero or a positive error code. */
 int
 xfrog_bulkstat_single(
-	struct xfs_fd		*xfd,
-	uint64_t		ino,
-	struct xfs_bstat	*ubuffer)
+	struct xfs_fd			*xfd,
+	uint64_t			ino,
+	unsigned int			flags,
+	struct xfs_bulkstat		*bulkstat)
 {
-	__u64			i = ino;
-	struct xfs_fsop_bulkreq	bulkreq = {
-		.lastip		= &i,
-		.icount		= 1,
-		.ubuffer	= ubuffer,
-		.ocount		= NULL,
-	};
-	int			ret;
+	int				error;
 
-	ret = ioctl(xfd->fd, XFS_IOC_FSBULKSTAT_SINGLE, &bulkreq);
-	if (ret)
-		return errno;
-	return 0;
+	if (xfd->flags & XFROG_FLAG_BULKSTAT_FORCE_V1)
+		goto try_v1;
+
+	error = xfrog_bulkstat_single5(xfd, ino, flags, bulkstat);
+	if (error == 0 || (xfd->flags & XFROG_FLAG_BULKSTAT_FORCE_V5))
+		return error;
+
+	/* If the v5 ioctl wasn't found, we punt to v1. */
+	switch (error) {
+	case EOPNOTSUPP:
+	case ENOTTY:
+		xfd->flags |= XFROG_FLAG_BULKSTAT_FORCE_V1;
+		break;
+	}
+
+try_v1:
+	return xfrog_bulkstat_single1(xfd, ino, flags, bulkstat);
 }
 
 /*
diff --git a/libfrog/bulkstat.h b/libfrog/bulkstat.h
index 6f51c167..3135e752 100644
--- a/libfrog/bulkstat.h
+++ b/libfrog/bulkstat.h
@@ -8,8 +8,8 @@
 
 /* Bulkstat wrappers */
 struct xfs_bstat;
-int xfrog_bulkstat_single(struct xfs_fd *xfd, uint64_t ino,
-		struct xfs_bstat *ubuffer);
+int xfrog_bulkstat_single(struct xfs_fd *xfd, uint64_t ino, unsigned int flags,
+		struct xfs_bulkstat *bulkstat);
 int xfrog_bulkstat(struct xfs_fd *xfd, struct xfs_bulkstat_req *req);
 
 struct xfs_bulkstat_req *xfrog_bulkstat_alloc_req(uint32_t nr,
diff --git a/scrub/inodes.c b/scrub/inodes.c
index 851c24bd..2112c9d1 100644
--- a/scrub/inodes.c
+++ b/scrub/inodes.c
@@ -57,8 +57,6 @@ xfs_iterate_inodes_range_check(
 	int			error;
 
 	for (i = 0, bs = bstat; i < XFS_INODES_PER_CHUNK; i++) {
-		struct xfs_bstat bs1;
-
 		if (!(inogrp->xi_allocmask & (1ULL << i)))
 			continue;
 		if (bs->bs_ino == inogrp->xi_startino + i) {
@@ -68,13 +66,11 @@ xfs_iterate_inodes_range_check(
 
 		/* Load the one inode. */
 		error = xfrog_bulkstat_single(&ctx->mnt,
-				inogrp->xi_startino + i, &bs1);
-		if (error || bs1.bs_ino != inogrp->xi_startino + i) {
+				inogrp->xi_startino + i, 0, bs);
+		if (error || bs->bs_ino != inogrp->xi_startino + i) {
 			memset(bs, 0, sizeof(struct xfs_bulkstat));
 			bs->bs_ino = inogrp->xi_startino + i;
 			bs->bs_blksize = ctx->mnt_sv.f_frsize;
-		} else {
-			xfrog_bstat_to_bulkstat(&ctx->mnt, bs, &bs1);
 		}
 		bs++;
 	}
diff --git a/spaceman/health.c b/spaceman/health.c
index 9bed7fdf..b6e1fcd9 100644
--- a/spaceman/health.c
+++ b/spaceman/health.c
@@ -208,7 +208,7 @@ report_inode_health(
 	unsigned long long	ino,
 	const char		*descr)
 {
-	struct xfs_bstat	bs;
+	struct xfs_bulkstat	bs;
 	char			d[256];
 	int			ret;
 
@@ -217,7 +217,7 @@ report_inode_health(
 		descr = d;
 	}
 
-	ret = xfrog_bulkstat_single(&file->xfd, ino, &bs);
+	ret = xfrog_bulkstat_single(&file->xfd, ino, 0, &bs);
 	if (ret) {
 		errno = ret;
 		perror(descr);



* [PATCH 5/6] misc: convert from XFS_IOC_FSINUMBERS to XFS_IOC_INUMBERS
  2019-09-06  3:34 [PATCH 0/6] xfsprogs: port utilities to bulkstat v5 Darrick J. Wong
                   ` (3 preceding siblings ...)
  2019-09-06  3:35 ` [PATCH 4/6] misc: convert to v5 bulkstat_single call Darrick J. Wong
@ 2019-09-06  3:35 ` Darrick J. Wong
  2019-09-13  1:10   ` Dave Chinner
  2019-09-06  3:35 ` [PATCH 6/6] libxfs: revert FSGEOMETRY v5 -> v4 hack Darrick J. Wong
  5 siblings, 1 reply; 14+ messages in thread
From: Darrick J. Wong @ 2019-09-06  3:35 UTC (permalink / raw)
  To: sandeen, darrick.wong; +Cc: linux-xfs

From: Darrick J. Wong <darrick.wong@oracle.com>

Convert all programs to use the v5 inumbers ioctl.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
---
 io/imap.c          |   26 +++++-----
 io/open.c          |   27 +++++++----
 libfrog/bulkstat.c |  132 ++++++++++++++++++++++++++++++++++++++++++++++------
 libfrog/bulkstat.h |   10 +++-
 scrub/fscounters.c |   21 +++++---
 scrub/inodes.c     |   36 ++++++++------
 6 files changed, 189 insertions(+), 63 deletions(-)


diff --git a/io/imap.c b/io/imap.c
index 472c1fda..fa69676e 100644
--- a/io/imap.c
+++ b/io/imap.c
@@ -17,9 +17,7 @@ static int
 imap_f(int argc, char **argv)
 {
 	struct xfs_fd		xfd = XFS_FD_INIT(file->fd);
-	struct xfs_inogrp	*t;
-	uint64_t		last = 0;
-	uint32_t		count;
+	struct xfs_inumbers_req	*ireq;
 	uint32_t		nent;
 	int			i;
 	int			error;
@@ -29,17 +27,19 @@ imap_f(int argc, char **argv)
 	else
 		nent = atoi(argv[1]);
 
-	t = malloc(nent * sizeof(*t));
-	if (!t)
+	ireq = xfrog_inumbers_alloc_req(nent, 0);
+	if (!ireq) {
+		perror("alloc req");
 		return 0;
+	}
 
-	while ((error = xfrog_inumbers(&xfd, &last, nent, t, &count)) == 0 &&
-	       count > 0) {
-		for (i = 0; i < count; i++) {
-			printf(_("ino %10llu count %2d mask %016llx\n"),
-				(unsigned long long)t[i].xi_startino,
-				t[i].xi_alloccount,
-				(unsigned long long)t[i].xi_allocmask);
+	while ((error = xfrog_inumbers(&xfd, ireq)) == 0 &&
+	       ireq->hdr.ocount > 0) {
+		for (i = 0; i < ireq->hdr.ocount; i++) {
+			printf(_("ino %10"PRIu64" count %2d mask %016"PRIx64"\n"),
+				ireq->inumbers[i].xi_startino,
+				ireq->inumbers[i].xi_alloccount,
+				ireq->inumbers[i].xi_allocmask);
 		}
 	}
 
@@ -48,7 +48,7 @@ imap_f(int argc, char **argv)
 		perror("xfsctl(XFS_IOC_FSINUMBERS)");
 		exitcode = 1;
 	}
-	free(t);
+	free(ireq);
 	return 0;
 }
 
diff --git a/io/open.c b/io/open.c
index e1979501..e198bcd8 100644
--- a/io/open.c
+++ b/io/open.c
@@ -681,39 +681,46 @@ static __u64
 get_last_inode(void)
 {
 	struct xfs_fd		xfd = XFS_FD_INIT(file->fd);
-	uint64_t		lastip = 0;
+	struct xfs_inumbers_req	*ireq;
 	uint32_t		lastgrp = 0;
-	uint32_t		ocount = 0;
 	__u64			last_ino;
-	struct xfs_inogrp	igroup[IGROUP_NR];
+
+	ireq = xfrog_inumbers_alloc_req(IGROUP_NR, 0);
+	if (!ireq) {
+		perror("alloc req");
+		return 0;
+	}
 
 	for (;;) {
 		int		ret;
 
-		ret = xfrog_inumbers(&xfd, &lastip, IGROUP_NR, igroup,
-				&ocount);
+		ret = xfrog_inumbers(&xfd, ireq);
 		if (ret) {
 			errno = ret;
 			perror("XFS_IOC_FSINUMBERS");
+			free(ireq);
 			return 0;
 		}
 
 		/* Did we reach the last inode? */
-		if (ocount == 0)
+		if (ireq->hdr.ocount == 0)
 			break;
 
 		/* last inode in igroup table */
-		lastgrp = ocount;
+		lastgrp = ireq->hdr.ocount;
 	}
 
-	if (lastgrp == 0)
+	if (lastgrp == 0) {
+		free(ireq);
 		return 0;
+	}
 
 	lastgrp--;
 
 	/* The last inode number in use */
-	last_ino = igroup[lastgrp].xi_startino +
-		  libxfs_highbit64(igroup[lastgrp].xi_allocmask);
+	last_ino = ireq->inumbers[lastgrp].xi_startino +
+		  libxfs_highbit64(ireq->inumbers[lastgrp].xi_allocmask);
+	free(ireq);
 
 	return last_ino;
 }
diff --git a/libfrog/bulkstat.c b/libfrog/bulkstat.c
index 2a70824e..748d0f32 100644
--- a/libfrog/bulkstat.c
+++ b/libfrog/bulkstat.c
@@ -387,6 +387,86 @@ xfrog_bulkstat_alloc_req(
 	return breq;
 }
 
+/* Convert an inumbers (v5) struct to an inogrp (v1) struct. */
+void
+xfrog_inumbers_to_inogrp(
+	struct xfs_inogrp		*ig1,
+	const struct xfs_inumbers	*ig)
+{
+	ig1->xi_startino = ig->xi_startino;
+	ig1->xi_alloccount = ig->xi_alloccount;
+	ig1->xi_allocmask = ig->xi_allocmask;
+}
+
+/* Convert an inogrp (v1) struct to an inumbers (v5) struct. */
+void
+xfrog_inogrp_to_inumbers(
+	struct xfs_inumbers		*ig,
+	const struct xfs_inogrp		*ig1)
+{
+	memset(ig, 0, sizeof(*ig));
+	ig->xi_version = XFS_INUMBERS_VERSION_V1;
+
+	ig->xi_startino = ig1->xi_startino;
+	ig->xi_alloccount = ig1->xi_alloccount;
+	ig->xi_allocmask = ig1->xi_allocmask;
+}
+
+static uint64_t xfrog_inum_ino(void *v1_rec)
+{
+	return ((struct xfs_inogrp *)v1_rec)->xi_startino;
+}
+
+static void xfrog_inum_cvt(struct xfs_fd *xfd, void *v5, void *v1)
+{
+	xfrog_inogrp_to_inumbers(v5, v1);
+}
+
+/* Query inode allocation bitmask information using v5 ioctl. */
+static int
+xfrog_inumbers5(
+	struct xfs_fd		*xfd,
+	struct xfs_inumbers_req	*req)
+{
+	int			ret;
+
+	ret = ioctl(xfd->fd, XFS_IOC_INUMBERS, req);
+	if (ret)
+		return errno;
+	return 0;
+}
+
+/* Query inode allocation bitmask information using v1 ioctl. */
+static int
+xfrog_inumbers1(
+	struct xfs_fd		*xfd,
+	struct xfs_inumbers_req	*req)
+{
+	struct xfs_fsop_bulkreq	bulkreq = { 0 };
+	int			error;
+
+	error = xfrog_bulkstat_prep_v1_emulation(xfd);
+	if (error)
+		return error;
+
+	error = xfrog_bulk_req_setup(xfd, &req->hdr, &bulkreq,
+			sizeof(struct xfs_inogrp));
+	if (error == ECANCELED)
+		goto out_teardown;
+	if (error)
+		return error;
+
+	error = ioctl(xfd->fd, XFS_IOC_FSINUMBERS, &bulkreq);
+	if (error)
+		error = errno;
+
+out_teardown:
+	return xfrog_bulk_req_teardown(xfd, &req->hdr, &bulkreq,
+			sizeof(struct xfs_inogrp), xfrog_inum_ino,
+			&req->inumbers, sizeof(struct xfs_inumbers),
+			xfrog_inum_cvt, 64, error);
+}
+
 /*
  * Query inode allocation bitmask information.  Returns zero or a positive
  * error code.
@@ -394,21 +474,43 @@ xfrog_bulkstat_alloc_req(
 int
 xfrog_inumbers(
 	struct xfs_fd		*xfd,
-	uint64_t		*lastino,
-	uint32_t		icount,
-	struct xfs_inogrp	*ubuffer,
-	uint32_t		*ocount)
+	struct xfs_inumbers_req	*req)
 {
-	struct xfs_fsop_bulkreq	bulkreq = {
-		.lastip		= (__u64 *)lastino,
-		.icount		= icount,
-		.ubuffer	= ubuffer,
-		.ocount		= (__s32 *)ocount,
-	};
-	int			ret;
+	int			error;
 
-	ret = ioctl(xfd->fd, XFS_IOC_FSINUMBERS, &bulkreq);
-	if (ret)
-		return errno;
-	return 0;
+	if (xfd->flags & XFROG_FLAG_BULKSTAT_FORCE_V1)
+		goto try_v1;
+
+	error = xfrog_inumbers5(xfd, req);
+	if (error == 0 || (xfd->flags & XFROG_FLAG_BULKSTAT_FORCE_V5))
+		return error;
+
+	/* If the v5 ioctl wasn't found, we punt to v1. */
+	switch (error) {
+	case EOPNOTSUPP:
+	case ENOTTY:
+		xfd->flags |= XFROG_FLAG_BULKSTAT_FORCE_V1;
+		break;
+	}
+
+try_v1:
+	return xfrog_inumbers1(xfd, req);
+}
+
+/* Allocate an inumbers request.  On error returns NULL and sets errno. */
+struct xfs_inumbers_req *
+xfrog_inumbers_alloc_req(
+	uint32_t		nr,
+	uint64_t		startino)
+{
+	struct xfs_inumbers_req	*ireq;
+
+	ireq = calloc(1, XFS_INUMBERS_REQ_SIZE(nr));
+	if (!ireq)
+		return NULL;
+
+	ireq->hdr.icount = nr;
+	ireq->hdr.ino = startino;
+
+	return ireq;
 }
diff --git a/libfrog/bulkstat.h b/libfrog/bulkstat.h
index 3135e752..5da7d3f5 100644
--- a/libfrog/bulkstat.h
+++ b/libfrog/bulkstat.h
@@ -20,7 +20,13 @@ void xfrog_bstat_to_bulkstat(struct xfs_fd *xfd, struct xfs_bulkstat *bstat,
 		const struct xfs_bstat *bs1);
 
 struct xfs_inogrp;
-int xfrog_inumbers(struct xfs_fd *xfd, uint64_t *lastino, uint32_t icount,
-		struct xfs_inogrp *ubuffer, uint32_t *ocount);
+int xfrog_inumbers(struct xfs_fd *xfd, struct xfs_inumbers_req *req);
+
+struct xfs_inumbers_req *xfrog_inumbers_alloc_req(uint32_t nr,
+		uint64_t startino);
+void xfrog_inumbers_to_inogrp(struct xfs_inogrp *ig1,
+		const struct xfs_inumbers *ig);
+void xfrog_inogrp_to_inumbers(struct xfs_inumbers *ig,
+		const struct xfs_inogrp *ig1);
 
 #endif	/* __LIBFROG_BULKSTAT_H__ */
diff --git a/scrub/fscounters.c b/scrub/fscounters.c
index 8e4b3467..2fdf658a 100644
--- a/scrub/fscounters.c
+++ b/scrub/fscounters.c
@@ -42,23 +42,28 @@ xfs_count_inodes_range(
 	uint64_t		last_ino,
 	uint64_t		*count)
 {
-	struct xfs_inogrp	inogrp;
-	uint64_t		igrp_ino;
+	struct xfs_inumbers_req	*ireq;
 	uint64_t		nr = 0;
-	uint32_t		igrplen = 0;
 	int			error;
 
 	ASSERT(!(first_ino & (XFS_INODES_PER_CHUNK - 1)));
 	ASSERT((last_ino & (XFS_INODES_PER_CHUNK - 1)));
 
-	igrp_ino = first_ino;
-	while (!(error = xfrog_inumbers(&ctx->mnt, &igrp_ino, 1, &inogrp,
-			&igrplen))) {
-		if (igrplen == 0 || inogrp.xi_startino >= last_ino)
+	ireq = xfrog_inumbers_alloc_req(1, first_ino);
+	if (!ireq) {
+		str_info(ctx, descr, _("Insufficient memory; giving up."));
+		return false;
+	}
+
+	while (!(error = xfrog_inumbers(&ctx->mnt, ireq))) {
+		if (ireq->hdr.ocount == 0 ||
+		    ireq->inumbers[0].xi_startino >= last_ino)
 			break;
-		nr += inogrp.xi_alloccount;
+		nr += ireq->inumbers[0].xi_alloccount;
 	}
 
+	free(ireq);
+
 	if (error) {
 		str_liberror(ctx, error, descr);
 		return false;
diff --git a/scrub/inodes.c b/scrub/inodes.c
index 2112c9d1..65c404ab 100644
--- a/scrub/inodes.c
+++ b/scrub/inodes.c
@@ -49,7 +49,7 @@
 static void
 xfs_iterate_inodes_range_check(
 	struct scrub_ctx	*ctx,
-	struct xfs_inogrp	*inogrp,
+	struct xfs_inumbers	*inogrp,
 	struct xfs_bulkstat	*bstat)
 {
 	struct xfs_bulkstat	*bs;
@@ -92,12 +92,11 @@ xfs_iterate_inodes_range(
 	void			*arg)
 {
 	struct xfs_handle	handle;
-	struct xfs_inogrp	inogrp;
+	struct xfs_inumbers_req	*ireq;
 	struct xfs_bulkstat_req	*breq;
 	char			idescr[DESCR_BUFSZ];
 	struct xfs_bulkstat	*bs;
-	uint64_t		igrp_ino;
-	uint32_t		igrplen = 0;
+	struct xfs_inumbers	*inogrp;
 	bool			moveon = true;
 	int			i;
 	int			error;
@@ -114,19 +113,26 @@ xfs_iterate_inodes_range(
 		return false;
 	}
 
+	ireq = xfrog_inumbers_alloc_req(1, first_ino);
+	if (!ireq) {
+		str_info(ctx, descr, _("Insufficient memory; giving up."));
+		free(breq);
+		return false;
+	}
+	inogrp = &ireq->inumbers[0];
+
 	/* Find the inode chunk & alloc mask */
-	igrp_ino = first_ino;
-	error = xfrog_inumbers(&ctx->mnt, &igrp_ino, 1, &inogrp, &igrplen);
-	while (!error && igrplen) {
+	error = xfrog_inumbers(&ctx->mnt, ireq);
+	while (!error && ireq->hdr.ocount > 0) {
 		/*
 		 * We can have totally empty inode chunks on filesystems where
 		 * there are more than 64 inodes per block.  Skip these.
 		 */
-		if (inogrp.xi_alloccount == 0)
+		if (inogrp->xi_alloccount == 0)
 			goto igrp_retry;
 
-		breq->hdr.ino = inogrp.xi_startino;
-		breq->hdr.icount = inogrp.xi_alloccount;
+		breq->hdr.ino = inogrp->xi_startino;
+		breq->hdr.icount = inogrp->xi_alloccount;
 		error = xfrog_bulkstat(&ctx->mnt, breq);
 		if (error) {
 			char	errbuf[DESCR_BUFSZ];
@@ -135,11 +141,11 @@ xfs_iterate_inodes_range(
 						errbuf, DESCR_BUFSZ));
 		}
 
-		xfs_iterate_inodes_range_check(ctx, &inogrp, breq->bulkstat);
+		xfs_iterate_inodes_range_check(ctx, inogrp, breq->bulkstat);
 
 		/* Iterate all the inodes. */
 		for (i = 0, bs = breq->bulkstat;
-		     i < inogrp.xi_alloccount;
+		     i < inogrp->xi_alloccount;
 		     i++, bs++) {
 			if (bs->bs_ino > last_ino)
 				goto out;
@@ -153,7 +159,7 @@ xfs_iterate_inodes_range(
 			case ESTALE:
 				stale_count++;
 				if (stale_count < 30) {
-					igrp_ino = inogrp.xi_startino;
+					ireq->hdr.ino = inogrp->xi_startino;
 					goto igrp_retry;
 				}
 				snprintf(idescr, DESCR_BUFSZ, "inode %"PRIu64,
@@ -177,8 +183,7 @@ _("Changed too many times during scan; giving up."));
 
 		stale_count = 0;
 igrp_retry:
-		error = xfrog_inumbers(&ctx->mnt, &igrp_ino, 1, &inogrp,
-				&igrplen);
+		error = xfrog_inumbers(&ctx->mnt, ireq);
 	}
 
 err:
@@ -186,6 +191,7 @@ _("Changed too many times during scan; giving up."));
 		str_liberror(ctx, error, descr);
 		moveon = false;
 	}
+	free(ireq);
 	free(breq);
 out:
 	return moveon;



* [PATCH 6/6] libxfs: revert FSGEOMETRY v5 -> v4 hack
  2019-09-06  3:34 [PATCH 0/6] xfsprogs: port utilities to bulkstat v5 Darrick J. Wong
                   ` (4 preceding siblings ...)
  2019-09-06  3:35 ` [PATCH 5/6] misc: convert from XFS_IOC_FSINUMBERS to XFS_IOC_INUMBERS Darrick J. Wong
@ 2019-09-06  3:35 ` Darrick J. Wong
  2019-09-10  6:46   ` Christoph Hellwig
  5 siblings, 1 reply; 14+ messages in thread
From: Darrick J. Wong @ 2019-09-06  3:35 UTC (permalink / raw)
  To: sandeen, darrick.wong; +Cc: linux-xfs

From: Darrick J. Wong <darrick.wong@oracle.com>

Revert the #define redirection of XFS_IOC_FSGEOMETRY to the old V4
ioctl.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
---
 libxfs/xfs_fs.h |    4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)


diff --git a/libxfs/xfs_fs.h b/libxfs/xfs_fs.h
index 67fceffc..31ac6323 100644
--- a/libxfs/xfs_fs.h
+++ b/libxfs/xfs_fs.h
@@ -822,9 +822,7 @@ struct xfs_scrub_metadata {
 #define XFS_IOC_ATTRMULTI_BY_HANDLE  _IOW ('X', 123, struct xfs_fsop_attrmulti_handlereq)
 #define XFS_IOC_FSGEOMETRY_V4	     _IOR ('X', 124, struct xfs_fsop_geom_v4)
 #define XFS_IOC_GOINGDOWN	     _IOR ('X', 125, uint32_t)
-/* For compatibility, for now */
-/* #define XFS_IOC_FSGEOMETRY	     _IOR ('X', 126, struct xfs_fsop_geom_v5) */
-#define XFS_IOC_FSGEOMETRY XFS_IOC_FSGEOMETRY_V4
+#define XFS_IOC_FSGEOMETRY	     _IOR ('X', 126, struct xfs_fsop_geom)
 #define XFS_IOC_BULKSTAT	     _IOR ('X', 127, struct xfs_bulkstat_req)
 #define XFS_IOC_INUMBERS	     _IOR ('X', 128, struct xfs_inumbers_req)
 



* Re: [PATCH 6/6] libxfs: revert FSGEOMETRY v5 -> v4 hack
  2019-09-06  3:35 ` [PATCH 6/6] libxfs: revert FSGEOMETRY v5 -> v4 hack Darrick J. Wong
@ 2019-09-10  6:46   ` Christoph Hellwig
  0 siblings, 0 replies; 14+ messages in thread
From: Christoph Hellwig @ 2019-09-10  6:46 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: sandeen, linux-xfs

On Thu, Sep 05, 2019 at 08:35:34PM -0700, Darrick J. Wong wrote:
> From: Darrick J. Wong <darrick.wong@oracle.com>
> 
> Revert the #define redirection of XFS_IOC_FSGEOMETRY to the old V4
> ioctl.
> 
> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>

As said before I think we should keep this as a v5 define and not
reuse a non-versioned name.


* Re: [PATCH 3/6] misc: convert XFS_IOC_FSBULKSTAT to XFS_IOC_BULKSTAT
  2019-09-06  3:35 ` [PATCH 3/6] misc: convert XFS_IOC_FSBULKSTAT to XFS_IOC_BULKSTAT Darrick J. Wong
@ 2019-09-13  0:54   ` Dave Chinner
  2019-09-16 21:58     ` Darrick J. Wong
  0 siblings, 1 reply; 14+ messages in thread
From: Dave Chinner @ 2019-09-13  0:54 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: sandeen, linux-xfs

On Thu, Sep 05, 2019 at 08:35:15PM -0700, Darrick J. Wong wrote:
> From: Darrick J. Wong <darrick.wong@oracle.com>
> 
> Convert the v1 calls to v5 calls.
> 
> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> ---
>  fsr/xfs_fsr.c      |   45 ++++++--
>  io/open.c          |   17 ++-
>  libfrog/bulkstat.c |  290 +++++++++++++++++++++++++++++++++++++++++++++++++---
>  libfrog/bulkstat.h |   10 +-
>  libfrog/fsgeom.h   |    9 ++
>  quota/quot.c       |   29 ++---
>  scrub/inodes.c     |   45 +++++---
>  scrub/inodes.h     |    2 
>  scrub/phase3.c     |    6 +
>  scrub/phase5.c     |    8 +
>  scrub/phase6.c     |    2 
>  scrub/unicrash.c   |    6 +
>  scrub/unicrash.h   |    4 -
>  spaceman/health.c  |   28 +++--
>  14 files changed, 411 insertions(+), 90 deletions(-)
> 
> 
> diff --git a/fsr/xfs_fsr.c b/fsr/xfs_fsr.c
> index a53eb924..cc3cc93a 100644
> --- a/fsr/xfs_fsr.c
> +++ b/fsr/xfs_fsr.c
> @@ -466,6 +466,17 @@ fsrallfs(char *mtab, int howlong, char *leftofffile)
>  				ptr = strchr(ptr, ' ');
>  				if (ptr) {
>  					startino = strtoull(++ptr, NULL, 10);
> +					/*
> +					 * NOTE: The inode number read in from
> +					 * the leftoff file is the last inode
> +					 * to have been fsr'd.  Since the new
> +					 * xfrog_bulkstat function wants to be
> +					 * passed the first inode that we want
> +					 * to examine, increment the value that
> +					 * we read in.  The debug message below
> +					 * prints the lastoff value.
> +					 */
> +					startino++;
>  				}
>  			}
>  			if (startpass < 0)
> @@ -484,7 +495,7 @@ fsrallfs(char *mtab, int howlong, char *leftofffile)
>  
>  	if (vflag) {
>  		fsrprintf(_("START: pass=%d ino=%llu %s %s\n"),
> -			  fs->npass, (unsigned long long)startino,
> +			  fs->npass, (unsigned long long)startino - 1,
>  			  fs->dev, fs->mnt);
>  	}

This could probably go in a separate patch....

> @@ -724,7 +724,6 @@ inode_f(
>  	char			**argv)
>  {
>  	struct xfs_bstat	bstat;
> -	uint32_t		count = 0;
>  	uint64_t		result_ino = 0;
>  	uint64_t		userino = NULLFSINO;
>  	char			*p;
> @@ -775,21 +774,31 @@ inode_f(
>  		}
>  	} else if (ret_next) {
>  		struct xfs_fd	xfd = XFS_FD_INIT(file->fd);
> +		struct xfs_bulkstat_req	*breq;
> +
> +		breq = xfrog_bulkstat_alloc_req(1, userino + 1);
> +		if (!breq) {
> +			perror("alloc bulkstat");
> +			exitcode = 1;
> +			return 0;
> +		}
>  
>  		/* get next inode */
> -		ret = xfrog_bulkstat(&xfd, &userino, 1, &bstat, &count);

Why the "+ 1" on userino setup for the new interface?

> @@ -29,29 +42,278 @@ xfrog_bulkstat_single(
>  	return 0;
>  }
>  
> -/* Bulkstat a bunch of inodes.  Returns zero or a positive error code. */
> -int
> -xfrog_bulkstat(
> +/*
> + * Set up emulation of a v5 bulk request ioctl with a v1 bulk request ioctl.
> + * Returns 0 if the emulation should proceed; ECANCELED if there are no
> + * records; or a positive error code.
> + */
> +static int
> +xfrog_bulk_req_setup(
>  	struct xfs_fd		*xfd,
> -	uint64_t		*lastino,
> -	uint32_t		icount,
> -	struct xfs_bstat	*ubuffer,
> -	uint32_t		*ocount)
> +	struct xfs_bulk_ireq	*hdr,
> +	struct xfs_fsop_bulkreq	*bulkreq,
> +	size_t			rec_size)
> +{
> +	void			*buf;
> +
> +	if (hdr->flags & XFS_BULK_IREQ_AGNO) {
> +		uint32_t	agno = cvt_ino_to_agno(xfd, hdr->ino);
> +
> +		if (hdr->ino == 0)
> +			hdr->ino = cvt_agino_to_ino(xfd, hdr->agno, 0);
> +		else if (agno < hdr->agno)
> +			return EINVAL;
> +		else if (agno > hdr->agno)
> +			goto no_results;
> +	}
> +
> +	if (cvt_ino_to_agno(xfd, hdr->ino) > xfd->fsgeom.agcount)
> +		goto no_results;
> +
> +	buf = malloc(hdr->icount * rec_size);
> +	if (!buf)
> +		return errno;
> +
> +	if (hdr->ino)
> +		hdr->ino--;

This goes with my last question: why?

> +	bulkreq->lastip = (__u64 *)&hdr->ino,
> +	bulkreq->icount = hdr->icount,
> +	bulkreq->ocount = (__s32 *)&hdr->ocount,
> +	bulkreq->ubuffer = buf;
> +	return 0;
> +
> +no_results:
> +	hdr->ocount = 0;
> +	return ECANCELED;

We should be returning negative errors for everything.

> +}
> +
> +/*
> + * Convert records and free resources used to do a v1 emulation of v5 bulk
> + * request.
> + */
> +static int
> +xfrog_bulk_req_teardown(

What's "teardown" got to do with converting results to a v1 format?

Indeed, why is there even emulation of v1 calls in the first place?
why don't callers that need v1 format just use the existing v1
ioctls directly?

>  
> +/* Bulkstat a bunch of inodes using the v1 interface. */
> +static int
> +xfrog_bulkstat1(
> +	struct xfs_fd		*xfd,
> +	struct xfs_bulkstat_req	*req)
> +{
> +	struct xfs_fsop_bulkreq	bulkreq = { 0 };
> +	int			error;
> +
> +	error = xfrog_bulkstat_prep_v1_emulation(xfd);
> +	if (error)
> +		return error;
> +
> +	error = xfrog_bulk_req_setup(xfd, &req->hdr, &bulkreq,
> +			sizeof(struct xfs_bstat));
> +	if (error == ECANCELED)
> +		goto out_teardown;
> +	if (error)
> +		return error;
> +
> +	error = ioctl(xfd->fd, XFS_IOC_FSBULKSTAT, &bulkreq);
> +	if (error)
> +		error = errno;

negative errors, please.

> +
> +out_teardown:
> +	return xfrog_bulk_req_teardown(xfd, &req->hdr, &bulkreq,
> +			sizeof(struct xfs_bstat), xfrog_bstat_ino,
> +			&req->bulkstat, sizeof(struct xfs_bulkstat),
> +			xfrog_bstat_cvt, 1, error);
> +}

What conversion is necessary here given we've done a v1 format
bulkstat?

> +/* Bulkstat a bunch of inodes.  Returns zero or a positive error code. */
> +int
> +xfrog_bulkstat(
> +	struct xfs_fd		*xfd,
> +	struct xfs_bulkstat_req	*req)
> +{
> +	int			error;
> +
> +	if (xfd->flags & XFROG_FLAG_BULKSTAT_FORCE_V1)
> +		goto try_v1;
> +
> +	error = xfrog_bulkstat5(xfd, req);
> +	if (error == 0 || (xfd->flags & XFROG_FLAG_BULKSTAT_FORCE_V5))
> +		return error;
> +
> +	/* If the v5 ioctl wasn't found, we punt to v1. */
> +	switch (error) {
> +	case EOPNOTSUPP:
> +	case ENOTTY:
> +		xfd->flags |= XFROG_FLAG_BULKSTAT_FORCE_V1;
> +		break;
> +	}
> +
> +try_v1:
> +	return xfrog_bulkstat1(xfd, req);
> +}

Oh, wait, "v1 emulation" is supposed to mean "use a v1 call to
return v5 format structures"? That's emulation of the _v5_ ioctl,
which kinda says to me there's some naming problems here...


> +/* Convert bulkstat (v5) to bstat (v1). */
> +void
> +xfrog_bulkstat_to_bstat(
> +	struct xfs_fd			*xfd,
> +	struct xfs_bstat		*bs1,
> +	const struct xfs_bulkstat	*bstat)

Which I read as "convert bulkstat to bulkstat" and it doesn't tell
me what is actually going on.  xfrog_bulkstat_v5_to_v1() indicates
what format conversion is actually taking place...

and um, naming the v5 field bstat, and the struct xfs_bstat field
bs1 is entirely confusing.

void
xfrog_bulkstat_v5_to_v1(
	struct xfs_fd		*xfd
	const struct xfs_bulkstat *from,
	struct xfs_bstat	*to)
{
	to->bs_ino = from->bs_ino;
....

> +{
> +	bs1->bs_ino = bstat->bs_ino;
> +	bs1->bs_mode = bstat->bs_mode;
> +	bs1->bs_nlink = bstat->bs_nlink;
> +	bs1->bs_uid = bstat->bs_uid;
> +	bs1->bs_gid = bstat->bs_gid;
> +	bs1->bs_rdev = bstat->bs_rdev;
> +	bs1->bs_blksize = bstat->bs_blksize;
> +	bs1->bs_size = bstat->bs_size;
> +	bs1->bs_atime.tv_sec = bstat->bs_atime;
> +	bs1->bs_mtime.tv_sec = bstat->bs_mtime;
> +	bs1->bs_ctime.tv_sec = bstat->bs_ctime;

What about 32 bit overflows here?

> +/* Convert bstat (v1) to bulkstat (v5). */
> +void
> +xfrog_bstat_to_bulkstat(
> +	struct xfs_fd			*xfd,
> +	struct xfs_bulkstat		*bstat,
> +	const struct xfs_bstat		*bs1)
> +{

same comments about names here.
>  
> +/* Only use v1 bulkstat/inumbers ioctls. */
> +#define XFROG_FLAG_BULKSTAT_FORCE_V1	(1 << 0)
> +
> +/* Only use v5 bulkstat/inumbers ioctls. */
> +#define XFROG_FLAG_BULKSTAT_FORCE_V5	(1 << 1)

These don't actually define what format the results are presented
in. What happens if the user wants v1 format structures but wants
the V5 ioctl to be used?

> --- a/scrub/inodes.c
> +++ b/scrub/inodes.c
> @@ -50,13 +50,15 @@ static void
>  xfs_iterate_inodes_range_check(
>  	struct scrub_ctx	*ctx,
>  	struct xfs_inogrp	*inogrp,
> -	struct xfs_bstat	*bstat)
> +	struct xfs_bulkstat	*bstat)
>  {
> -	struct xfs_bstat	*bs;
> +	struct xfs_bulkstat	*bs;
>  	int			i;
>  	int			error;
>  
>  	for (i = 0, bs = bstat; i < XFS_INODES_PER_CHUNK; i++) {
> +		struct xfs_bstat bs1;
> +
>  		if (!(inogrp->xi_allocmask & (1ULL << i)))
>  			continue;
>  		if (bs->bs_ino == inogrp->xi_startino + i) {
> @@ -66,11 +68,13 @@ xfs_iterate_inodes_range_check(
>  
>  		/* Load the one inode. */
>  		error = xfrog_bulkstat_single(&ctx->mnt,
> -				inogrp->xi_startino + i, bs);
> -		if (error || bs->bs_ino != inogrp->xi_startino + i) {
> -			memset(bs, 0, sizeof(struct xfs_bstat));
> +				inogrp->xi_startino + i, &bs1);
> +		if (error || bs1.bs_ino != inogrp->xi_startino + i) {
> +			memset(bs, 0, sizeof(struct xfs_bulkstat));
>  			bs->bs_ino = inogrp->xi_startino + i;
>  			bs->bs_blksize = ctx->mnt_sv.f_frsize;
> +		} else {
> +			xfrog_bstat_to_bulkstat(&ctx->mnt, bs, &bs1);

I'm confused - why is xfrog_bulkstat_single() returning a v1 format
structure here and not using v5 format for everything?

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: [PATCH 4/6] misc: convert to v5 bulkstat_single call
  2019-09-06  3:35 ` [PATCH 4/6] misc: convert to v5 bulkstat_single call Darrick J. Wong
@ 2019-09-13  1:02   ` Dave Chinner
  2019-09-16 22:02     ` Darrick J. Wong
  0 siblings, 1 reply; 14+ messages in thread
From: Dave Chinner @ 2019-09-13  1:02 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: sandeen, linux-xfs

On Thu, Sep 05, 2019 at 08:35:22PM -0700, Darrick J. Wong wrote:
> From: Darrick J. Wong <darrick.wong@oracle.com>
> 
> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> ---
>  spaceman/health.c  |    4 +-
>  7 files changed, 105 insertions(+), 32 deletions(-)
> 
> 
> diff --git a/fsr/xfs_fsr.c b/fsr/xfs_fsr.c
> index cc3cc93a..e8fa39ab 100644
> --- a/fsr/xfs_fsr.c
> +++ b/fsr/xfs_fsr.c
> @@ -724,6 +724,7 @@ fsrfile(
>  	xfs_ino_t		ino)
>  {
>  	struct xfs_fd		fsxfd = XFS_FD_INIT_EMPTY;
> +	struct xfs_bulkstat	bulkstat;
>  	struct xfs_bstat	statbuf;
>  	jdm_fshandle_t		*fshandlep;
>  	int			fd = -1;
> @@ -748,12 +749,13 @@ fsrfile(
>  		goto out;
>  	}
>  
> -	error = xfrog_bulkstat_single(&fsxfd, ino, &statbuf);
> +	error = xfrog_bulkstat_single(&fsxfd, ino, 0, &bulkstat);
>  	if (error) {
>  		fsrprintf(_("unable to get bstat on %s: %s\n"),
>  			fname, strerror(error));
>  		goto out;
>  	}
> +	xfrog_bulkstat_to_bstat(&fsxfd, &statbuf, &bulkstat);

So this is so none of the rest of fsr needs to be converted to use
the new structure versions? When will this go away?

>  	do {
> -		struct xfs_bstat tbstat;
> +		struct xfs_bulkstat	tbstat;
>  		char		name[64];
>  		int		ret;
>  
> @@ -983,7 +985,7 @@ fsr_setup_attr_fork(
>  		 * this to compare against the target and determine what we
>  		 * need to do.
>  		 */
> -		ret = xfrog_bulkstat_single(&txfd, tstatbuf.st_ino, &tbstat);
> +		ret = xfrog_bulkstat_single(&txfd, tstatbuf.st_ino, 0, &tbstat);
>  		if (ret) {
>  			fsrprintf(_("unable to get bstat on temp file: %s\n"),
>  						strerror(ret));

Because this looks like we now have a combination of v1 and v5
structures being used...

>  
> diff --git a/io/swapext.c b/io/swapext.c
> index 2b4918f8..ca024b93 100644
> --- a/io/swapext.c
> +++ b/io/swapext.c
> @@ -28,6 +28,7 @@ swapext_f(
>  	char			**argv)
>  {
>  	struct xfs_fd		fxfd = XFS_FD_INIT(file->fd);
> +	struct xfs_bulkstat	bulkstat;
>  	int			fd;
>  	int			error;
>  	struct xfs_swapext	sx;
> @@ -48,12 +49,13 @@ swapext_f(
>  		goto out;
>  	}
>  
> -	error = xfrog_bulkstat_single(&fxfd, stat.st_ino, &sx.sx_stat);
> +	error = xfrog_bulkstat_single(&fxfd, stat.st_ino, 0, &bulkstat);
>  	if (error) {
>  		errno = error;
>  		perror("bulkstat");
>  		goto out;
>  	}
> +	xfrog_bulkstat_to_bstat(&fxfd, &sx.sx_stat, &bulkstat);

and this is required because bstat is part of the swapext ioctl ABI?

>  	sx.sx_version = XFS_SX_VERSION;
>  	sx.sx_fdtarget = file->fd;
>  	sx.sx_fdtmp = fd;
> diff --git a/libfrog/bulkstat.c b/libfrog/bulkstat.c
> index b4468243..2a70824e 100644
> --- a/libfrog/bulkstat.c
> +++ b/libfrog/bulkstat.c
> @@ -20,26 +20,99 @@ xfrog_bulkstat_prep_v1_emulation(
>  	return xfd_prepare_geometry(xfd);
>  }
>  
> +/* Bulkstat a single inode using v5 ioctl. */
> +static int
> +xfrog_bulkstat_single5(
> +	struct xfs_fd			*xfd,
> +	uint64_t			ino,
> +	unsigned int			flags,
> +	struct xfs_bulkstat		*bulkstat)
> +{
> +	struct xfs_bulkstat_req		*req;
> +	int				ret;
> +
> +	if (flags & ~(XFS_BULK_IREQ_SPECIAL))
> +		return EINVAL;

negative error returns, please.

> @@ -57,8 +57,6 @@ xfs_iterate_inodes_range_check(
>  	int			error;
>  
>  	for (i = 0, bs = bstat; i < XFS_INODES_PER_CHUNK; i++) {
> -		struct xfs_bstat bs1;
> -
>  		if (!(inogrp->xi_allocmask & (1ULL << i)))
>  			continue;
>  		if (bs->bs_ino == inogrp->xi_startino + i) {
> @@ -68,13 +66,11 @@ xfs_iterate_inodes_range_check(
>  
>  		/* Load the one inode. */
>  		error = xfrog_bulkstat_single(&ctx->mnt,
> -				inogrp->xi_startino + i, &bs1);
> -		if (error || bs1.bs_ino != inogrp->xi_startino + i) {
> +				inogrp->xi_startino + i, 0, bs);
> +		if (error || bs->bs_ino != inogrp->xi_startino + i) {
>  			memset(bs, 0, sizeof(struct xfs_bulkstat));
>  			bs->bs_ino = inogrp->xi_startino + i;
>  			bs->bs_blksize = ctx->mnt_sv.f_frsize;
> -		} else {
> -			xfrog_bstat_to_bulkstat(&ctx->mnt, bs, &bs1);
>  		}
>  		bs++;
>  	}

So this immediately tears down the confusing stuff that was set up
in the previous patch. Perhaps separate out the scrub changes and do
both bulkstat and bulkstat_single conversions in one patch?

-Dave.

-- 
Dave Chinner
david@fromorbit.com


* Re: [PATCH 5/6] misc: convert from XFS_IOC_FSINUMBERS to XFS_IOC_INUMBERS
  2019-09-06  3:35 ` [PATCH 5/6] misc: convert from XFS_IOC_FSINUMBERS to XFS_IOC_INUMBERS Darrick J. Wong
@ 2019-09-13  1:10   ` Dave Chinner
  2019-09-16 22:05     ` Darrick J. Wong
  0 siblings, 1 reply; 14+ messages in thread
From: Dave Chinner @ 2019-09-13  1:10 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: sandeen, linux-xfs

On Thu, Sep 05, 2019 at 08:35:28PM -0700, Darrick J. Wong wrote:
> From: Darrick J. Wong <darrick.wong@oracle.com>
> 
> Convert all programs to use the v5 inumbers ioctl.
> 
> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> ---
>  io/imap.c          |   26 +++++-----
>  io/open.c          |   27 +++++++----
>  libfrog/bulkstat.c |  132 ++++++++++++++++++++++++++++++++++++++++++++++------
>  libfrog/bulkstat.h |   10 +++-
>  scrub/fscounters.c |   21 +++++---
>  scrub/inodes.c     |   36 ++++++++------
>  6 files changed, 189 insertions(+), 63 deletions(-)

....
> diff --git a/io/open.c b/io/open.c
> index e1979501..e198bcd8 100644
> --- a/io/open.c
> +++ b/io/open.c
> @@ -681,39 +681,46 @@ static __u64
>  get_last_inode(void)
>  {
>  	struct xfs_fd		xfd = XFS_FD_INIT(file->fd);
> -	uint64_t		lastip = 0;
> +	struct xfs_inumbers_req	*ireq;
>  	uint32_t		lastgrp = 0;
> -	uint32_t		ocount = 0;
>  	__u64			last_ino;

	__u64			last_ino = 0;

> -	struct xfs_inogrp	igroup[IGROUP_NR];
> +
> +	ireq = xfrog_inumbers_alloc_req(IGROUP_NR, 0);
> +	if (!ireq) {
> +		perror("alloc req");
> +		return 0;
> +	}
>  
>  	for (;;) {
>  		int		ret;
>  
> -		ret = xfrog_inumbers(&xfd, &lastip, IGROUP_NR, igroup,
> -				&ocount);
> +		ret = xfrog_inumbers(&xfd, ireq);
>  		if (ret) {
>  			errno = ret;
>  			perror("XFS_IOC_FSINUMBERS");
> +			free(ireq);
>  			return 0;

			goto out;
>  		}
>  
>  		/* Did we reach the last inode? */
> -		if (ocount == 0)
> +		if (ireq->hdr.ocount == 0)
>  			break;
>  
>  		/* last inode in igroup table */
> -		lastgrp = ocount;
> +		lastgrp = ireq->hdr.ocount;
>  	}
>  
> -	if (lastgrp == 0)
> +	if (lastgrp == 0) {
> +		free(ireq);
>  		return 0;

		goto out;
> +	}
>  
>  	lastgrp--;
>  
>  	/* The last inode number in use */
> -	last_ino = igroup[lastgrp].xi_startino +
> -		  libxfs_highbit64(igroup[lastgrp].xi_allocmask);
> +	last_ino = ireq->inumbers[lastgrp].xi_startino +
> +		  libxfs_highbit64(ireq->inumbers[lastgrp].xi_allocmask);

out:
> +	free(ireq);
>  
>  	return last_ino;
>  }
> diff --git a/libfrog/bulkstat.c b/libfrog/bulkstat.c
> index 2a70824e..748d0f32 100644
> --- a/libfrog/bulkstat.c
> +++ b/libfrog/bulkstat.c
> @@ -387,6 +387,86 @@ xfrog_bulkstat_alloc_req(
>  	return breq;
>  }
>  
> +/* Convert an inumbers (v5) struct to a inogrp (v1) struct. */
> +void
> +xfrog_inumbers_to_inogrp(
> +	struct xfs_inogrp		*ig1,
> +	const struct xfs_inumbers	*ig)
> +{
> +	ig1->xi_startino = ig->xi_startino;
> +	ig1->xi_alloccount = ig->xi_alloccount;
> +	ig1->xi_allocmask = ig->xi_allocmask;

Same thing - inumbers_v5_to_v1(from, to);

> +}
> +
> +/* Convert an inogrp (v1) struct to a inumbers (v5) struct. */
> +void
> +xfrog_inogrp_to_inumbers(
> +	struct xfs_inumbers		*ig,
> +	const struct xfs_inogrp		*ig1)

ditto.

> +{
> +	memset(ig, 0, sizeof(*ig));
> +	ig->xi_version = XFS_INUMBERS_VERSION_V1;
> +
> +	ig->xi_startino = ig1->xi_startino;
> +	ig->xi_alloccount = ig1->xi_alloccount;
> +	ig->xi_allocmask = ig1->xi_allocmask;
> +}
> +
> +static uint64_t xfrog_inum_ino(void *v1_rec)
> +{
> +	return ((struct xfs_inogrp *)v1_rec)->xi_startino;
> +}
> +
> +static void xfrog_inum_cvt(struct xfs_fd *xfd, void *v5, void *v1)
> +{
> +	xfrog_inogrp_to_inumbers(v5, v1);
> +}

what's the point of this wrapper?

> +
> +/* Query inode allocation bitmask information using v5 ioctl. */
> +static int
> +xfrog_inumbers5(
> +	struct xfs_fd		*xfd,
> +	struct xfs_inumbers_req	*req)
> +{
> +	int			ret;
> +
> +	ret = ioctl(xfd->fd, XFS_IOC_INUMBERS, req);
> +	if (ret)
> +		return errno;
> +	return 0;

negative errors.

> +}
> +
> +/* Query inode allocation bitmask information using v1 ioctl. */
> +static int
> +xfrog_inumbers1(
> +	struct xfs_fd		*xfd,
> +	struct xfs_inumbers_req	*req)
> +{
> +	struct xfs_fsop_bulkreq	bulkreq = { 0 };
> +	int			error;
> +
> +	error = xfrog_bulkstat_prep_v1_emulation(xfd);
> +	if (error)
> +		return error;
> +
> +	error = xfrog_bulk_req_setup(xfd, &req->hdr, &bulkreq,
> +			sizeof(struct xfs_inogrp));
> +	if (error == ECANCELED)
> +		goto out_teardown;
> +	if (error)
> +		return error;
> +
> +	error = ioctl(xfd->fd, XFS_IOC_FSINUMBERS, &bulkreq);
> +	if (error)
> +		error = errno;

negative errors.

> +
> +out_teardown:
> +	return xfrog_bulk_req_teardown(xfd, &req->hdr, &bulkreq,
> +			sizeof(struct xfs_inogrp), xfrog_inum_ino,
> +			&req->inumbers, sizeof(struct xfs_inumbers),
> +			xfrog_inum_cvt, 64, error);
> +}
....

>  	struct xfs_bulkstat	*bs;
> -	uint64_t		igrp_ino;
> -	uint32_t		igrplen = 0;
> +	struct xfs_inumbers	*inogrp;

Isn't that mixing v1 structure names with v5 operations? Aren't we
pulling information out of inode records?

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: [PATCH 3/6] misc: convert XFS_IOC_FSBULKSTAT to XFS_IOC_BULKSTAT
  2019-09-13  0:54   ` Dave Chinner
@ 2019-09-16 21:58     ` Darrick J. Wong
  0 siblings, 0 replies; 14+ messages in thread
From: Darrick J. Wong @ 2019-09-16 21:58 UTC (permalink / raw)
  To: Dave Chinner; +Cc: sandeen, linux-xfs

On Fri, Sep 13, 2019 at 10:54:26AM +1000, Dave Chinner wrote:
> On Thu, Sep 05, 2019 at 08:35:15PM -0700, Darrick J. Wong wrote:
> > From: Darrick J. Wong <darrick.wong@oracle.com>
> > 
> > Convert the v1 calls to v5 calls.

This really should have been more descriptive, sorry...

"Convert xfrog_bulkstat() to take arguments using v5 bulkstat semantics
and return bulkstat information in v5 structures.  If the v5 ioctl is
not available, xfrog_bulkstat() will emulate the v5 ioctl using the v1
ioctl."

> > 
> > Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> > ---
> >  fsr/xfs_fsr.c      |   45 ++++++--
> >  io/open.c          |   17 ++-
> >  libfrog/bulkstat.c |  290 +++++++++++++++++++++++++++++++++++++++++++++++++---
> >  libfrog/bulkstat.h |   10 +-
> >  libfrog/fsgeom.h   |    9 ++
> >  quota/quot.c       |   29 ++---
> >  scrub/inodes.c     |   45 +++++---
> >  scrub/inodes.h     |    2 
> >  scrub/phase3.c     |    6 +
> >  scrub/phase5.c     |    8 +
> >  scrub/phase6.c     |    2 
> >  scrub/unicrash.c   |    6 +
> >  scrub/unicrash.h   |    4 -
> >  spaceman/health.c  |   28 +++--
> >  14 files changed, 411 insertions(+), 90 deletions(-)
> > 
> > 
> > diff --git a/fsr/xfs_fsr.c b/fsr/xfs_fsr.c
> > index a53eb924..cc3cc93a 100644
> > --- a/fsr/xfs_fsr.c
> > +++ b/fsr/xfs_fsr.c
> > @@ -466,6 +466,17 @@ fsrallfs(char *mtab, int howlong, char *leftofffile)
> >  				ptr = strchr(ptr, ' ');
> >  				if (ptr) {
> >  					startino = strtoull(++ptr, NULL, 10);
> > +					/*
> > +					 * NOTE: The inode number read in from
> > +					 * the leftoff file is the last inode
> > +					 * to have been fsr'd.  Since the new
> > +					 * xfrog_bulkstat function wants to be
> > +					 * passed the first inode that we want
> > +					 * to examine, increment the value that
> > +					 * we read in.  The debug message below
> > +					 * prints the lastoff value.
> > +					 */
> > +					startino++;
> >  				}
> >  			}
> >  			if (startpass < 0)
> > @@ -484,7 +495,7 @@ fsrallfs(char *mtab, int howlong, char *leftofffile)
> >  
> >  	if (vflag) {
> >  		fsrprintf(_("START: pass=%d ino=%llu %s %s\n"),
> > -			  fs->npass, (unsigned long long)startino,
> > +			  fs->npass, (unsigned long long)startino - 1,
> >  			  fs->dev, fs->mnt);
> >  	}
> 
> This could probably go in a separate patch....

It can't, because we're changing the meaning of the xfrog_bulkstat
arguments to match the ioctls.

> > @@ -724,7 +724,6 @@ inode_f(
> >  	char			**argv)
> >  {
> >  	struct xfs_bstat	bstat;
> > -	uint32_t		count = 0;
> >  	uint64_t		result_ino = 0;
> >  	uint64_t		userino = NULLFSINO;
> >  	char			*p;
> > @@ -775,21 +774,31 @@ inode_f(
> >  		}
> >  	} else if (ret_next) {
> >  		struct xfs_fd	xfd = XFS_FD_INIT(file->fd);
> > +		struct xfs_bulkstat_req	*breq;
> > +
> > +		breq = xfrog_bulkstat_alloc_req(1, userino + 1);
> > +		if (!breq) {
> > +			perror("alloc bulkstat");
> > +			exitcode = 1;
> > +			return 0;
> > +		}
> >  
> >  		/* get next inode */
> > -		ret = xfrog_bulkstat(&xfd, &userino, 1, &bstat, &count);
> 
> Why the "+ 1" on userino setup for the new interface?

The inode parameter to bulkstat changes between v1 and v5:

FSBULKSTAT (i.e. v1) takes a *lastino pointer that's supposed to point
to the inode number before the one you want.  If you want to bulkstat
starting with inode 100, you set *lastino = 99.

FSBULKSTAT_SINGLE (v1) takes a *lastino pointer that points to the inode
you want.  If you want to B_S inode 100, you set *lastino = 100.

BULKSTAT (v5) takes a startino number in the request header that should
be the inode you want.  If you want to bulkstat starting with inode 100,
you set breq->startino = 100.

In fsr's case, it traditionally used FSBULKSTAT (v1) and records the
last inode number that it defragged, so in order to use BULKSTAT (v5) we
have to increment the startino value here.  If v5 bulkstat is available
then we pass that number straight to the kernel.  If not, the
xfrog_bulk_req_setup function decrements the inode number so that it can
use the v1 ioctl to emulate the v5 behavior.
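The cursor semantics described above can be sketched as follows.  This is an
illustrative example with hypothetical helper names, not the actual libfrog
code; it only demonstrates why fsr increments its saved inode number and why
the v1 emulation path decrements it again:

```c
#include <assert.h>
#include <stdint.h>

/*
 * FSBULKSTAT (v1) takes a lastino cursor pointing at the inode *before*
 * the first one you want; BULKSTAT (v5) takes startino, the first inode
 * you want.  These helpers capture the two conversions described above.
 */

/* fsr's leftoff file records the last inode fsr'd; v5 wants the next one. */
static uint64_t leftoff_to_v5_startino(uint64_t last_fsrd_ino)
{
	return last_fsrd_ino + 1;
}

/* Emulating a v5 request with the v1 ioctl undoes that adjustment. */
static uint64_t v5_startino_to_v1_lastino(uint64_t startino)
{
	return startino ? startino - 1 : 0;
}
```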

> 
> > @@ -29,29 +42,278 @@ xfrog_bulkstat_single(
> >  	return 0;
> >  }
> >  
> > -/* Bulkstat a bunch of inodes.  Returns zero or a positive error code. */
> > -int
> > -xfrog_bulkstat(
> > +/*
> > + * Set up emulation of a v5 bulk request ioctl with a v1 bulk request ioctl.
> > + * Returns 0 if the emulation should proceed; ECANCELED if there are no
> > + * records; or a positive error code.

/*
 * Given a v5 BULKSTAT request, set up a v1 FSBULKSTAT control structure
 * so that we can emulate the v5 request using the old v1 code as best
 * we can.  This enables callers to run on older kernels.
 *
 * Returns 0 if the emulation should succeed, -ECANCELED if there won't
 * be any records; or a negative error code.
 */

How about that?

> > + */
> > +static int
> > +xfrog_bulk_req_setup(
> >  	struct xfs_fd		*xfd,
> > -	uint64_t		*lastino,
> > -	uint32_t		icount,
> > -	struct xfs_bstat	*ubuffer,
> > -	uint32_t		*ocount)
> > +	struct xfs_bulk_ireq	*hdr,
> > +	struct xfs_fsop_bulkreq	*bulkreq,
> > +	size_t			rec_size)
> > +{
> > +	void			*buf;
> > +
> > +	if (hdr->flags & XFS_BULK_IREQ_AGNO) {
> > +		uint32_t	agno = cvt_ino_to_agno(xfd, hdr->ino);
> > +
> > +		if (hdr->ino == 0)
> > +			hdr->ino = cvt_agino_to_ino(xfd, hdr->agno, 0);
> > +		else if (agno < hdr->agno)
> > +			return EINVAL;
> > +		else if (agno > hdr->agno)
> > +			goto no_results;
> > +	}
> > +
> > +	if (cvt_ino_to_agno(xfd, hdr->ino) > xfd->fsgeom.agcount)
> > +		goto no_results;
> > +
> > +	buf = malloc(hdr->icount * rec_size);
> > +	if (!buf)
> > +		return errno;
> > +
> > +	if (hdr->ino)
> > +		hdr->ino--;
> 
> This goes with my last question: why?

(See above)

> > +	bulkreq->lastip = (__u64 *)&hdr->ino,
> > +	bulkreq->icount = hdr->icount,
> > +	bulkreq->ocount = (__s32 *)&hdr->ocount,
> > +	bulkreq->ubuffer = buf;
> > +	return 0;
> > +
> > +no_results:
> > +	hdr->ocount = 0;
> > +	return ECANCELED;
> 
> We should be returning negative errors for everything.

Heh, that's going to be a lengthy overhaul of everything that comes
after this.

> 
> > +}
> > +
> > +/*
> > + * Convert records and free resources used to do a v1 emulation of v5 bulk
> > + * request.
> > + */

/*
 * If we have used the v1 FSBULKSTAT ioctl to emulate the v5 BULKSTAT
 * ioctl for a caller, migrate the v1 bulkstat data into the caller's
 * v5 bulkstat buffer and tear down the emulation control structures.
 */

> > +static int
> > +xfrog_bulk_req_teardown(
> 
> What's "teardown" got to do with converting results to a v1 format?
> 
> Indeed, why is there even emulation of v1 calls in the first place?
> why don't callers that need v1 format just use the existing v1
> ioctls directly?

This code emulates v5 bulkstat using old v1 bulkstat, but we've covered
this above.

> 
> >  
> > +/* Bulkstat a bunch of inodes using the v1 interface. */
> > +static int
> > +xfrog_bulkstat1(
> > +	struct xfs_fd		*xfd,
> > +	struct xfs_bulkstat_req	*req)
> > +{
> > +	struct xfs_fsop_bulkreq	bulkreq = { 0 };
> > +	int			error;
> > +
> > +	error = xfrog_bulkstat_prep_v1_emulation(xfd);
> > +	if (error)
> > +		return error;
> > +
> > +	error = xfrog_bulk_req_setup(xfd, &req->hdr, &bulkreq,
> > +			sizeof(struct xfs_bstat));
> > +	if (error == ECANCELED)
> > +		goto out_teardown;
> > +	if (error)
> > +		return error;
> > +
> > +	error = ioctl(xfd->fd, XFS_IOC_FSBULKSTAT, &bulkreq);
> > +	if (error)
> > +		error = errno;
> 
> negative errors, please.
> 
> > +
> > +out_teardown:
> > +	return xfrog_bulk_req_teardown(xfd, &req->hdr, &bulkreq,
> > +			sizeof(struct xfs_bstat), xfrog_bstat_ino,
> > +			&req->bulkstat, sizeof(struct xfs_bulkstat),
> > +			xfrog_bstat_cvt, 1, error);
> > +}
> 
> What conversion is necessary here given we've done a v1 format
> bulkstat?
> 
> > +/* Bulkstat a bunch of inodes.  Returns zero or a positive error code. */
> > +int
> > +xfrog_bulkstat(
> > +	struct xfs_fd		*xfd,
> > +	struct xfs_bulkstat_req	*req)
> > +{
> > +	int			error;
> > +
> > +	if (xfd->flags & XFROG_FLAG_BULKSTAT_FORCE_V1)
> > +		goto try_v1;
> > +
> > +	error = xfrog_bulkstat5(xfd, req);
> > +	if (error == 0 || (xfd->flags & XFROG_FLAG_BULKSTAT_FORCE_V5))
> > +		return error;
> > +
> > +	/* If the v5 ioctl wasn't found, we punt to v1. */
> > +	switch (error) {
> > +	case EOPNOTSUPP:
> > +	case ENOTTY:
> > +		xfd->flags |= XFROG_FLAG_BULKSTAT_FORCE_V1;
> > +		break;
> > +	}
> > +
> > +try_v1:
> > +	return xfrog_bulkstat1(xfd, req);
> > +}
> 
> Oh, wait, "v1 emulation" is supposed to mean "use a v1 call to
> return v5 format structures"? That's emulation of the _v5_ ioctl,
> which kinda says to me there's some naming problems here...

Yes.

> 
> > +/* Convert bulkstat (v5) to bstat (v1). */
> > +void
> > +xfrog_bulkstat_to_bstat(
> > +	struct xfs_fd			*xfd,
> > +	struct xfs_bstat		*bs1,
> > +	const struct xfs_bulkstat	*bstat)
> 
> Which I read as "convert bulkstat to bulkstat" and it doesn't tell
> me what is actually going on.  xfrog_bulkstat_v5_to_v1() indicates
> what format conversion is actually taking place...
> 
> and um, naming the v5 field bstat, and the struct xfs_bstat field
> bs1 is entirely confusing.

Indeed.  I'll change it as you suggest.

> void
> xfrog_bulkstat_v5_to_v1(
> 	struct xfs_fd		*xfd
> 	const struct xfs_bulkstat *from,
> 	struct xfs_bstat	*to)
> {
> 	to->bs_ino = from->bs_ino;
> ....
> 
> > +{
> > +	bs1->bs_ino = bstat->bs_ino;
> > +	bs1->bs_mode = bstat->bs_mode;
> > +	bs1->bs_nlink = bstat->bs_nlink;
> > +	bs1->bs_uid = bstat->bs_uid;
> > +	bs1->bs_gid = bstat->bs_gid;
> > +	bs1->bs_rdev = bstat->bs_rdev;
> > +	bs1->bs_blksize = bstat->bs_blksize;
> > +	bs1->bs_size = bstat->bs_size;
> > +	bs1->bs_atime.tv_sec = bstat->bs_atime;
> > +	bs1->bs_mtime.tv_sec = bstat->bs_mtime;
> > +	bs1->bs_ctime.tv_sec = bstat->bs_ctime;
> 
> What about 32 bit overflows here?

Oops.  Ok.

> > +/* Convert bstat (v1) to bulkstat (v5). */
> > +void
> > +xfrog_bstat_to_bulkstat(
> > +	struct xfs_fd			*xfd,
> > +	struct xfs_bulkstat		*bstat,
> > +	const struct xfs_bstat		*bs1)
> > +{
> 
> same comments about names here.
> >  
> > +/* Only use v1 bulkstat/inumbers ioctls. */
> > +#define XFROG_FLAG_BULKSTAT_FORCE_V1	(1 << 0)
> > +
> > +/* Only use v5 bulkstat/inumbers ioctls. */
> > +#define XFROG_FLAG_BULKSTAT_FORCE_V5	(1 << 1)
> 
> These don't actually define what format the results are presented
> in. What happens if the user wants v1 format structures but wants
> the V5 ioctl to be used?
> 
> > --- a/scrub/inodes.c
> > +++ b/scrub/inodes.c
> > @@ -50,13 +50,15 @@ static void
> >  xfs_iterate_inodes_range_check(
> >  	struct scrub_ctx	*ctx,
> >  	struct xfs_inogrp	*inogrp,
> > -	struct xfs_bstat	*bstat)
> > +	struct xfs_bulkstat	*bstat)
> >  {
> > -	struct xfs_bstat	*bs;
> > +	struct xfs_bulkstat	*bs;
> >  	int			i;
> >  	int			error;
> >  
> >  	for (i = 0, bs = bstat; i < XFS_INODES_PER_CHUNK; i++) {
> > +		struct xfs_bstat bs1;
> > +
> >  		if (!(inogrp->xi_allocmask & (1ULL << i)))
> >  			continue;
> >  		if (bs->bs_ino == inogrp->xi_startino + i) {
> > @@ -66,11 +68,13 @@ xfs_iterate_inodes_range_check(
> >  
> >  		/* Load the one inode. */
> >  		error = xfrog_bulkstat_single(&ctx->mnt,
> > -				inogrp->xi_startino + i, bs);
> > -		if (error || bs->bs_ino != inogrp->xi_startino + i) {
> > -			memset(bs, 0, sizeof(struct xfs_bstat));
> > +				inogrp->xi_startino + i, &bs1);
> > +		if (error || bs1.bs_ino != inogrp->xi_startino + i) {
> > +			memset(bs, 0, sizeof(struct xfs_bulkstat));
> >  			bs->bs_ino = inogrp->xi_startino + i;
> >  			bs->bs_blksize = ctx->mnt_sv.f_frsize;
> > +		} else {
> > +			xfrog_bstat_to_bulkstat(&ctx->mnt, bs, &bs1);
> 
> I'm confused - why is xfrog_bulkstat_single() returning a v1 format
> structure here and not using v5 format for everything?

Because I probably should have converted both at the same time. :/

--D

> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com


* Re: [PATCH 4/6] misc: convert to v5 bulkstat_single call
  2019-09-13  1:02   ` Dave Chinner
@ 2019-09-16 22:02     ` Darrick J. Wong
  0 siblings, 0 replies; 14+ messages in thread
From: Darrick J. Wong @ 2019-09-16 22:02 UTC (permalink / raw)
  To: Dave Chinner; +Cc: sandeen, linux-xfs

On Fri, Sep 13, 2019 at 11:02:37AM +1000, Dave Chinner wrote:
> On Thu, Sep 05, 2019 at 08:35:22PM -0700, Darrick J. Wong wrote:
> > From: Darrick J. Wong <darrick.wong@oracle.com>
> > 
> > Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> > ---
> >  spaceman/health.c  |    4 +-
> >  7 files changed, 105 insertions(+), 32 deletions(-)
> > 
> > 
> > diff --git a/fsr/xfs_fsr.c b/fsr/xfs_fsr.c
> > index cc3cc93a..e8fa39ab 100644
> > --- a/fsr/xfs_fsr.c
> > +++ b/fsr/xfs_fsr.c
> > @@ -724,6 +724,7 @@ fsrfile(
> >  	xfs_ino_t		ino)
> >  {
> >  	struct xfs_fd		fsxfd = XFS_FD_INIT_EMPTY;
> > +	struct xfs_bulkstat	bulkstat;
> >  	struct xfs_bstat	statbuf;
> >  	jdm_fshandle_t		*fshandlep;
> >  	int			fd = -1;
> > @@ -748,12 +749,13 @@ fsrfile(
> >  		goto out;
> >  	}
> >  
> > -	error = xfrog_bulkstat_single(&fsxfd, ino, &statbuf);
> > +	error = xfrog_bulkstat_single(&fsxfd, ino, 0, &bulkstat);
> >  	if (error) {
> >  		fsrprintf(_("unable to get bstat on %s: %s\n"),
> >  			fname, strerror(error));
> >  		goto out;
> >  	}
> > +	xfrog_bulkstat_to_bstat(&fsxfd, &statbuf, &bulkstat);
> 
> So this is so none of the rest of fsr needs to be converted to use
> the new structure versions? When will this go away?
> 
> >  	do {
> > -		struct xfs_bstat tbstat;
> > +		struct xfs_bulkstat	tbstat;
> >  		char		name[64];
> >  		int		ret;
> >  
> > @@ -983,7 +985,7 @@ fsr_setup_attr_fork(
> >  		 * this to compare against the target and determine what we
> >  		 * need to do.
> >  		 */
> > -		ret = xfrog_bulkstat_single(&txfd, tstatbuf.st_ino, &tbstat);
> > +		ret = xfrog_bulkstat_single(&txfd, tstatbuf.st_ino, 0, &tbstat);
> >  		if (ret) {
> >  			fsrprintf(_("unable to get bstat on temp file: %s\n"),
> >  						strerror(ret));
> 
> Because this looks like we now have a combination of v1 and v5
> structures being used...
> 
> >  
> > diff --git a/io/swapext.c b/io/swapext.c
> > index 2b4918f8..ca024b93 100644
> > --- a/io/swapext.c
> > +++ b/io/swapext.c
> > @@ -28,6 +28,7 @@ swapext_f(
> >  	char			**argv)
> >  {
> >  	struct xfs_fd		fxfd = XFS_FD_INIT(file->fd);
> > +	struct xfs_bulkstat	bulkstat;
> >  	int			fd;
> >  	int			error;
> >  	struct xfs_swapext	sx;
> > @@ -48,12 +49,13 @@ swapext_f(
> >  		goto out;
> >  	}
> >  
> > -	error = xfrog_bulkstat_single(&fxfd, stat.st_ino, &sx.sx_stat);
> > +	error = xfrog_bulkstat_single(&fxfd, stat.st_ino, 0, &bulkstat);
> >  	if (error) {
> >  		errno = error;
> >  		perror("bulkstat");
> >  		goto out;
> >  	}
> > +	xfrog_bulkstat_to_bstat(&fxfd, &sx.sx_stat, &bulkstat);
> 
> and this is required because bstat is part of the swapext ioctl ABI?

Yes.  I think a lot of the retained bulkstat weirdness in fsr could go
away if (a) we maintained an open xfs_fd to the filesystem and (b)
wrapped the swapext ioctl... but there's already too much here so I
stopped short of refactoring fsr.

> >  	sx.sx_version = XFS_SX_VERSION;
> >  	sx.sx_fdtarget = file->fd;
> >  	sx.sx_fdtmp = fd;
> > diff --git a/libfrog/bulkstat.c b/libfrog/bulkstat.c
> > index b4468243..2a70824e 100644
> > --- a/libfrog/bulkstat.c
> > +++ b/libfrog/bulkstat.c
> > @@ -20,26 +20,99 @@ xfrog_bulkstat_prep_v1_emulation(
> >  	return xfd_prepare_geometry(xfd);
> >  }
> >  
> > +/* Bulkstat a single inode using v5 ioctl. */
> > +static int
> > +xfrog_bulkstat_single5(
> > +	struct xfs_fd			*xfd,
> > +	uint64_t			ino,
> > +	unsigned int			flags,
> > +	struct xfs_bulkstat		*bulkstat)
> > +{
> > +	struct xfs_bulkstat_req		*req;
> > +	int				ret;
> > +
> > +	if (flags & ~(XFS_BULK_IREQ_SPECIAL))
> > +		return EINVAL;
> 
> negative error returns, please.
> 
> > @@ -57,8 +57,6 @@ xfs_iterate_inodes_range_check(
> >  	int			error;
> >  
> >  	for (i = 0, bs = bstat; i < XFS_INODES_PER_CHUNK; i++) {
> > -		struct xfs_bstat bs1;
> > -
> >  		if (!(inogrp->xi_allocmask & (1ULL << i)))
> >  			continue;
> >  		if (bs->bs_ino == inogrp->xi_startino + i) {
> > @@ -68,13 +66,11 @@ xfs_iterate_inodes_range_check(
> >  
> >  		/* Load the one inode. */
> >  		error = xfrog_bulkstat_single(&ctx->mnt,
> > -				inogrp->xi_startino + i, &bs1);
> > -		if (error || bs1.bs_ino != inogrp->xi_startino + i) {
> > +				inogrp->xi_startino + i, 0, bs);
> > +		if (error || bs->bs_ino != inogrp->xi_startino + i) {
> >  			memset(bs, 0, sizeof(struct xfs_bulkstat));
> >  			bs->bs_ino = inogrp->xi_startino + i;
> >  			bs->bs_blksize = ctx->mnt_sv.f_frsize;
> > -		} else {
> > -			xfrog_bstat_to_bulkstat(&ctx->mnt, bs, &bs1);
> >  		}
> >  		bs++;
> >  	}
> 
> So this immediately tears down the confusing stuff that was set up
> in the previous patch. Perhaps separate out the scrub changes and do
> both bulkstat and bulkstat_single conversions in one patch?

Ok.

--D

> -Dave.
> 
> -- 
> Dave Chinner
> david@fromorbit.com


* Re: [PATCH 5/6] misc: convert from XFS_IOC_FSINUMBERS to XFS_IOC_INUMBERS
  2019-09-13  1:10   ` Dave Chinner
@ 2019-09-16 22:05     ` Darrick J. Wong
  0 siblings, 0 replies; 14+ messages in thread
From: Darrick J. Wong @ 2019-09-16 22:05 UTC (permalink / raw)
  To: Dave Chinner; +Cc: sandeen, linux-xfs

On Fri, Sep 13, 2019 at 11:10:36AM +1000, Dave Chinner wrote:
> On Thu, Sep 05, 2019 at 08:35:28PM -0700, Darrick J. Wong wrote:
> > From: Darrick J. Wong <darrick.wong@oracle.com>
> > 
> > Convert all programs to use the v5 inumbers ioctl.
> > 
> > Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> > ---
> >  io/imap.c          |   26 +++++-----
> >  io/open.c          |   27 +++++++----
> >  libfrog/bulkstat.c |  132 ++++++++++++++++++++++++++++++++++++++++++++++------
> >  libfrog/bulkstat.h |   10 +++-
> >  scrub/fscounters.c |   21 +++++---
> >  scrub/inodes.c     |   36 ++++++++------
> >  6 files changed, 189 insertions(+), 63 deletions(-)
> 
> ....
> > diff --git a/io/open.c b/io/open.c
> > index e1979501..e198bcd8 100644
> > --- a/io/open.c
> > +++ b/io/open.c
> > @@ -681,39 +681,46 @@ static __u64
> >  get_last_inode(void)
> >  {
> >  	struct xfs_fd		xfd = XFS_FD_INIT(file->fd);
> > -	uint64_t		lastip = 0;
> > +	struct xfs_inumbers_req	*ireq;
> >  	uint32_t		lastgrp = 0;
> > -	uint32_t		ocount = 0;
> >  	__u64			last_ino;
> 
> 	__u64			last_ino = 0;
> 
> > -	struct xfs_inogrp	igroup[IGROUP_NR];
> > +
> > +	ireq = xfrog_inumbers_alloc_req(IGROUP_NR, 0);
> > +	if (!ireq) {
> > +		perror("alloc req");
> > +		return 0;
> > +	}
> >  
> >  	for (;;) {
> >  		int		ret;
> >  
> > -		ret = xfrog_inumbers(&xfd, &lastip, IGROUP_NR, igroup,
> > -				&ocount);
> > +		ret = xfrog_inumbers(&xfd, ireq);
> >  		if (ret) {
> >  			errno = ret;
> >  			perror("XFS_IOC_FSINUMBERS");
> > +			free(ireq);
> >  			return 0;
> 
> 			goto out;
> >  		}
> >  
> >  		/* Did we reach the last inode? */
> > -		if (ocount == 0)
> > +		if (ireq->hdr.ocount == 0)
> >  			break;
> >  
> >  		/* last inode in igroup table */
> > -		lastgrp = ocount;
> > +		lastgrp = ireq->hdr.ocount;
> >  	}
> >  
> > -	if (lastgrp == 0)
> > +	if (lastgrp == 0) {
> > +		free(ireq);
> >  		return 0;
> 
> 		goto out;
> > +	}
> >  
> >  	lastgrp--;
> >  
> >  	/* The last inode number in use */
> > -	last_ino = igroup[lastgrp].xi_startino +
> > -		  libxfs_highbit64(igroup[lastgrp].xi_allocmask);
> > +	last_ino = ireq->inumbers[lastgrp].xi_startino +
> > +		  libxfs_highbit64(ireq->inumbers[lastgrp].xi_allocmask);
> 
> out:

Ok, fixed.

> > +	free(ireq);
> >  
> >  	return last_ino;
> >  }
> > diff --git a/libfrog/bulkstat.c b/libfrog/bulkstat.c
> > index 2a70824e..748d0f32 100644
> > --- a/libfrog/bulkstat.c
> > +++ b/libfrog/bulkstat.c
> > @@ -387,6 +387,86 @@ xfrog_bulkstat_alloc_req(
> >  	return breq;
> >  }
> >  
> > +/* Convert an inumbers (v5) struct to an inogrp (v1) struct. */
> > +void
> > +xfrog_inumbers_to_inogrp(
> > +	struct xfs_inogrp		*ig1,
> > +	const struct xfs_inumbers	*ig)
> > +{
> > +	ig1->xi_startino = ig->xi_startino;
> > +	ig1->xi_alloccount = ig->xi_alloccount;
> > +	ig1->xi_allocmask = ig->xi_allocmask;
> 
> Same thing - inumbers_v5_to_v1(from, to);
> 
> > +}
> > +
> > +/* Convert an inogrp (v1) struct to an inumbers (v5) struct. */
> > +void
> > +xfrog_inogrp_to_inumbers(
> > +	struct xfs_inumbers		*ig,
> > +	const struct xfs_inogrp		*ig1)
> 
> ditto.

Fixed too.

> > +{
> > +	memset(ig, 0, sizeof(*ig));
> > +	ig->xi_version = XFS_INUMBERS_VERSION_V1;
> > +
> > +	ig->xi_startino = ig1->xi_startino;
> > +	ig->xi_alloccount = ig1->xi_alloccount;
> > +	ig->xi_allocmask = ig1->xi_allocmask;
> > +}
> > +
> > +static uint64_t xfrog_inum_ino(void *v1_rec)
> > +{
> > +	return ((struct xfs_inogrp *)v1_rec)->xi_startino;
> > +}
> > +
> > +static void xfrog_inum_cvt(struct xfs_fd *xfd, void *v5, void *v1)
> > +{
> > +	xfrog_inogrp_to_inumbers(v5, v1);
> > +}
> 
> what's the point of this wrapper?

Function adapter so we can use xfrog_bulk_req_teardown as part of using
the V1 inumbers ioctl to emulate the V5 inumbers ioctl.

> > +
> > +/* Query inode allocation bitmask information using v5 ioctl. */
> > +static int
> > +xfrog_inumbers5(
> > +	struct xfs_fd		*xfd,
> > +	struct xfs_inumbers_req	*req)
> > +{
> > +	int			ret;
> > +
> > +	ret = ioctl(xfd->fd, XFS_IOC_INUMBERS, req);
> > +	if (ret)
> > +		return errno;
> > +	return 0;
> 
> negative errors.
> 
> > +}
> > +
> > +/* Query inode allocation bitmask information using v1 ioctl. */
> > +static int
> > +xfrog_inumbers1(
> > +	struct xfs_fd		*xfd,
> > +	struct xfs_inumbers_req	*req)
> > +{
> > +	struct xfs_fsop_bulkreq	bulkreq = { 0 };
> > +	int			error;
> > +
> > +	error = xfrog_bulkstat_prep_v1_emulation(xfd);
> > +	if (error)
> > +		return error;
> > +
> > +	error = xfrog_bulk_req_setup(xfd, &req->hdr, &bulkreq,
> > +			sizeof(struct xfs_inogrp));
> > +	if (error == ECANCELED)
> > +		goto out_teardown;
> > +	if (error)
> > +		return error;
> > +
> > +	error = ioctl(xfd->fd, XFS_IOC_FSINUMBERS, &bulkreq);
> > +	if (error)
> > +		error = errno;
> 
> negative errors.
> 
> > +
> > +out_teardown:
> > +	return xfrog_bulk_req_teardown(xfd, &req->hdr, &bulkreq,
> > +			sizeof(struct xfs_inogrp), xfrog_inum_ino,
> > +			&req->inumbers, sizeof(struct xfs_inumbers),
> > +			xfrog_inum_cvt, 64, error);
> > +}
> ....
> 
> >  	struct xfs_bulkstat	*bs;
> > -	uint64_t		igrp_ino;
> > -	uint32_t		igrplen = 0;
> > +	struct xfs_inumbers	*inogrp;
> 
> Isn't that mixing v1 structure names with v5 operations? Aren't we
> pulling information out of inode records?

Yeah, I'll fix the names too.

--D

> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com


Thread overview: 14+ messages
2019-09-06  3:34 [PATCH 0/6] xfsprogs: port utilities to bulkstat v5 Darrick J. Wong
2019-09-06  3:34 ` [PATCH 1/6] man: add documentation for v5 bulkstat ioctl Darrick J. Wong
2019-09-06  3:35 ` [PATCH 2/6] man: add documentation for v5 inumbers ioctl Darrick J. Wong
2019-09-06  3:35 ` [PATCH 3/6] misc: convert XFS_IOC_FSBULKSTAT to XFS_IOC_BULKSTAT Darrick J. Wong
2019-09-13  0:54   ` Dave Chinner
2019-09-16 21:58     ` Darrick J. Wong
2019-09-06  3:35 ` [PATCH 4/6] misc: convert to v5 bulkstat_single call Darrick J. Wong
2019-09-13  1:02   ` Dave Chinner
2019-09-16 22:02     ` Darrick J. Wong
2019-09-06  3:35 ` [PATCH 5/6] misc: convert from XFS_IOC_FSINUMBERS to XFS_IOC_INUMBERS Darrick J. Wong
2019-09-13  1:10   ` Dave Chinner
2019-09-16 22:05     ` Darrick J. Wong
2019-09-06  3:35 ` [PATCH 6/6] libxfs: revert FSGEOMETRY v5 -> v4 hack Darrick J. Wong
2019-09-10  6:46   ` Christoph Hellwig
