* [PATCH] libfrog: fix the if condition in xfrog_bulk_req_v1_setup
From: Stephen Zhang @ 2022-07-29  7:57 UTC
  To: sandeen, djwong, hch; +Cc: zhangshida, starzhangzsd, linux-xfs

When scanning all inodes in each AG, hdr->ino serves as an iterator that
specifies the inode to start scanning from.

After hdr->ino--, we get the last inode returned from the previous
iteration.

But there are cases where hdr->ino-- is pointless, namely when starting
to scan inodes in each AG.

Hence the condition for skipping the decrement should be
cvt_ino_to_agno(xfd, hdr->ino) == 0, which represents the start of the
scan in each AG, instead of hdr->ino == 0, which represents the start of
the scan in AG 0 only.

Signed-off-by: Stephen Zhang <zhangshida@kylinos.cn>
---
 libfrog/bulkstat.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/libfrog/bulkstat.c b/libfrog/bulkstat.c
index 195f6ea0..77a385bb 100644
--- a/libfrog/bulkstat.c
+++ b/libfrog/bulkstat.c
@@ -172,7 +172,7 @@ xfrog_bulk_req_v1_setup(
 	if (!buf)
 		return -errno;
 
-	if (hdr->ino)
+	if (cvt_ino_to_agno(xfd, hdr->ino))
 		hdr->ino--;
 	bulkreq->lastip = (__u64 *)&hdr->ino,
 	bulkreq->icount = hdr->icount,
-- 
2.25.1



* Re: [PATCH] libfrog: fix the if condition in xfrog_bulk_req_v1_setup
From: Darrick J. Wong @ 2022-07-29 15:44 UTC
  To: Stephen Zhang; +Cc: sandeen, hch, zhangshida, linux-xfs

On Fri, Jul 29, 2022 at 03:57:46PM +0800, Stephen Zhang wrote:
> When scanning all inodes in each AG, hdr->ino serves as an iterator that
> specifies the inode to start scanning from.
> 
> After hdr->ino--, we get the last inode returned from the previous
> iteration.
> 
> But there are cases where hdr->ino-- is pointless, namely when starting
> to scan inodes in each AG.
> 
> Hence the condition for skipping the decrement should be
> cvt_ino_to_agno(xfd, hdr->ino) == 0, which represents the start of the
> scan in each AG,

Er, cvt_ino_to_agno extracts the AG number from an inumber;
cvt_ino_to_agino extracts the inumber within an AG.  Given your
description of the problem (not wanting hdr->ino to go backwards in the
inumber space when it's already at the start of an AG), I think you want
the latter here?
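
For reference, a minimal sketch of how the two conversions split an
inumber (the helper names and the agino_bits width below are made-up
illustrations, not the exact libfrog definitions):

	#include <stdint.h>

	/*
	 * Sketch only: an XFS inumber carries the AG number in its high
	 * bits and the inode number within that AG in its low bits.
	 * agino_bits stands in for whatever width the filesystem
	 * geometry actually provides.
	 */
	static inline uint32_t
	example_ino_to_agno(uint64_t ino, unsigned int agino_bits)
	{
		return (uint32_t)(ino >> agino_bits);
	}

	static inline uint32_t
	example_ino_to_agino(uint64_t ino, unsigned int agino_bits)
	{
		return (uint32_t)(ino & ((1ULL << agino_bits) - 1));
	}

With that split, the agno is zero for every inode in AG 0, whereas the
agino is zero only for the "start of this AG" cursor value, in any AG.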

> instead of hdr->ino == 0, which represents the start of the scan in AG 0
> only.
> 
> Signed-off-by: Stephen Zhang <zhangshida@kylinos.cn>
> ---
>  libfrog/bulkstat.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/libfrog/bulkstat.c b/libfrog/bulkstat.c
> index 195f6ea0..77a385bb 100644
> --- a/libfrog/bulkstat.c
> +++ b/libfrog/bulkstat.c
> @@ -172,7 +172,7 @@ xfrog_bulk_req_v1_setup(
>  	if (!buf)
>  		return -errno;
>  
> -	if (hdr->ino)
> +	if (cvt_ino_to_agno(xfd, hdr->ino))

...because I think this change means that we never go backwards for any
inode in AG 0, and we always go backwards for inodes in every other AG.

--D

>  		hdr->ino--;
>  	bulkreq->lastip = (__u64 *)&hdr->ino,
>  	bulkreq->icount = hdr->icount,
> -- 
> 2.25.1
> 


* Re: [PATCH] libfrog: fix the if condition in xfrog_bulk_req_v1_setup
From: Stephen Zhang @ 2022-07-30  1:25 UTC
  To: Darrick J. Wong; +Cc: sandeen, hch, zhangshida, linux-xfs

Darrick J. Wong <djwong@kernel.org> wrote on Fri, Jul 29, 2022 at 23:44:
>
> On Fri, Jul 29, 2022 at 03:57:46PM +0800, Stephen Zhang wrote:
> > When scanning all inodes in each AG, hdr->ino serves as an iterator that
> > specifies the inode to start scanning from.
> >
> > After hdr->ino--, we get the last inode returned from the previous
> > iteration.
> >
> > But there are cases where hdr->ino-- is pointless, namely when starting
> > to scan inodes in each AG.
> >
> > Hence the condition for skipping the decrement should be
> > cvt_ino_to_agno(xfd, hdr->ino) == 0, which represents the start of the
> > scan in each AG,
>
> Er, cvt_ino_to_agno extracts the AG number from an inumber;
> cvt_ino_to_agino extracts the inumber within an AG.  Given your
> description of the problem (not wanting hdr->ino to go backwards in the
> inumber space when it's already at the start of an AG), I think you want
> the latter here?
>
> > instead of hdr->ino == 0, which represents the start of the scan in
> > AG 0 only.
> >
> > Signed-off-by: Stephen Zhang <zhangshida@kylinos.cn>
> > ---
> >  libfrog/bulkstat.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/libfrog/bulkstat.c b/libfrog/bulkstat.c
> > index 195f6ea0..77a385bb 100644
> > --- a/libfrog/bulkstat.c
> > +++ b/libfrog/bulkstat.c
> > @@ -172,7 +172,7 @@ xfrog_bulk_req_v1_setup(
> >       if (!buf)
> >               return -errno;
> >
> > -     if (hdr->ino)
> > +     if (cvt_ino_to_agno(xfd, hdr->ino))
>
> ...because I think this change means that we never go backwards for any
> inode in AG 0, and we always go backwards for inodes in every other AG.
>
> --D
>
> >               hdr->ino--;
> >       bulkreq->lastip = (__u64 *)&hdr->ino,
> >       bulkreq->icount = hdr->icount,
> > --
> > 2.25.1
> >

Yeah, I meant the latter. Sorry for the mistake.
So the patch would look like:
=====
@@ -172,7 +172,7 @@ xfrog_bulk_req_v1_setup(
        if (!buf)
                return -errno;

-       if (hdr->ino)
+       if (cvt_ino_to_agino(xfd, hdr->ino))
                hdr->ino--;
        bulkreq->lastip = (__u64 *)&hdr->ino,
        bulkreq->icount = hdr->icount,
====
Should I resend the patch later, or do you have any other ideas about
this change?

Thanks,

Stephen.


* Re: [PATCH] libfrog: fix the if condition in xfrog_bulk_req_v1_setup
From: Darrick J. Wong @ 2022-07-30  1:30 UTC
  To: Stephen Zhang; +Cc: sandeen, hch, zhangshida, linux-xfs

On Sat, Jul 30, 2022 at 09:25:25AM +0800, Stephen Zhang wrote:
> Darrick J. Wong <djwong@kernel.org> wrote on Fri, Jul 29, 2022 at 23:44:
> >
> > On Fri, Jul 29, 2022 at 03:57:46PM +0800, Stephen Zhang wrote:
> > > When scanning all inodes in each AG, hdr->ino serves as an iterator
> > > that specifies the inode to start scanning from.
> > >
> > > After hdr->ino--, we get the last inode returned from the previous
> > > iteration.
> > >
> > > But there are cases where hdr->ino-- is pointless, namely when
> > > starting to scan inodes in each AG.
> > >
> > > Hence the condition for skipping the decrement should be
> > > cvt_ino_to_agno(xfd, hdr->ino) == 0, which represents the start of
> > > the scan in each AG,
> >
> > Er, cvt_ino_to_agno extracts the AG number from an inumber;
> > cvt_ino_to_agino extracts the inumber within an AG.  Given your
> > description of the problem (not wanting hdr->ino to go backwards in the
> > inumber space when it's already at the start of an AG), I think you want
> > the latter here?
> >
> > > instead of hdr->ino == 0, which represents the start of the scan in
> > > AG 0 only.
> > >
> > > Signed-off-by: Stephen Zhang <zhangshida@kylinos.cn>
> > > ---
> > >  libfrog/bulkstat.c | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > >
> > > diff --git a/libfrog/bulkstat.c b/libfrog/bulkstat.c
> > > index 195f6ea0..77a385bb 100644
> > > --- a/libfrog/bulkstat.c
> > > +++ b/libfrog/bulkstat.c
> > > @@ -172,7 +172,7 @@ xfrog_bulk_req_v1_setup(
> > >       if (!buf)
> > >               return -errno;
> > >
> > > -     if (hdr->ino)
> > > +     if (cvt_ino_to_agno(xfd, hdr->ino))
> >
> > ...because I think this change means that we never go backwards for any
> > inode in AG 0, and we always go backwards for inodes in every other AG.
> >
> > --D
> >
> > >               hdr->ino--;
> > >       bulkreq->lastip = (__u64 *)&hdr->ino,
> > >       bulkreq->icount = hdr->icount,
> > > --
> > > 2.25.1
> > >
> 
> Yeah, I meant the latter. Sorry for the mistake.
> So the patch would look like:
> =====
> @@ -172,7 +172,7 @@ xfrog_bulk_req_v1_setup(
>         if (!buf)
>                 return -errno;
> 
> -       if (hdr->ino)
> +       if (cvt_ino_to_agino(xfd, hdr->ino))
>                 hdr->ino--;
>         bulkreq->lastip = (__u64 *)&hdr->ino,
>         bulkreq->icount = hdr->icount,
> ====
> Should I resend the patch later, or do you have any other ideas about
> this change?

It's probably ok to resend with that change, but ... what were you doing
to trip over this error, anyway?

--D

> Thanks,
> 
> Stephen.


* Re: [PATCH] libfrog: fix the if condition in xfrog_bulk_req_v1_setup
From: Stephen Zhang @ 2022-07-30  7:51 UTC
  To: Darrick J. Wong; +Cc: sandeen, hch, zhangshida, linux-xfs

Darrick J. Wong <djwong@kernel.org> wrote on Sat, Jul 30, 2022 at 09:30:
>
> It's probably ok to resend with that change, but ... what were you doing
> to trip over this error, anyway?
>
> --D
>

Well, I was running xfs/285 and ran into some other error, which was
already fixed by the latest xfsprogs.
But while examining the code logic in xfs_scrub, I still found what may
be a flaw here, although it hasn't caused any problem so far.
Maybe it's still necessary to submit the fix. Or am I just
misunderstanding the code?

Thanks,

Stephen.


* Re: [PATCH] libfrog: fix the if condition in xfrog_bulk_req_v1_setup
From: Darrick J. Wong @ 2022-07-31 16:44 UTC
  To: Stephen Zhang; +Cc: sandeen, hch, zhangshida, linux-xfs

On Sat, Jul 30, 2022 at 03:51:40PM +0800, Stephen Zhang wrote:
> Darrick J. Wong <djwong@kernel.org> wrote on Sat, Jul 30, 2022 at 09:30:
> >
> > It's probably ok to resend with that change, but ... what were you doing
> > to trip over this error, anyway?
> >
> > --D
> >
> 
> Well, I was running xfs/285 and ran into some other error, which was
> already fixed by the latest xfsprogs.
> But while examining the code logic in xfs_scrub, I still found what may
> be a flaw here, although it hasn't caused any problem so far.
> Maybe it's still necessary to submit the fix. Or am I just
> misunderstanding the code?

FSBULKSTAT was always weird.  Look at the current kernel implementation,
which translates the V1 FSBULKSTAT call into a V5 BULKSTAT call:

	if (cmd == XFS_IOC_FSINUMBERS) {
		breq.startino = lastino ? lastino + 1 : 0;
		error = xfs_inumbers(&breq, xfs_fsinumbers_fmt);
		lastino = breq.startino - 1;
	} else if (cmd == XFS_IOC_FSBULKSTAT_SINGLE) {
		breq.startino = lastino;
		breq.icount = 1;
		error = xfs_bulkstat_one(&breq, xfs_fsbulkstat_one_fmt);
	} else {	/* XFS_IOC_FSBULKSTAT */
		breq.startino = lastino ? lastino + 1 : 0;
		error = xfs_bulkstat(&breq, xfs_fsbulkstat_one_fmt);
		lastino = breq.startino - 1;
	}

We always bump lastino by one, except in the case where it's 0, because
0 is the magic signal to start at the first inode in the filesystem.
This "only bump it if nonzero" behavior works solely because the fs
layout prevents there ever from being an inode 0.
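
To make that concrete, here is a worked trace of the cursor bump with
made-up inode numbers (a sketch of the arithmetic only, not the real
ioctl path):

	uint64_t lastino = 0;				/* first call */
	uint64_t startino = lastino ? lastino + 1 : 0;	/* 0: whole fs */

	/* pretend this batch stat'd inodes 128..131 */
	startino = 132;			/* v5 cursor: next inode to stat */
	lastino = startino - 1;		/* v1 cursor: last inode stat'd */

	/* the second call resumes exactly where the first one stopped */
	startino = lastino ? lastino + 1 : 0;	/* == 132 */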

Now, why does it behave like that?  Before the creation of v5 bulkstat,
which made the cursor work like a standard cursor (i.e. breq->startino
points to the inode that should be stat'd next), the old bulkstat-v1
xfs_bulkstat_grab_chunk did this to mask off all inumbers before and
including the passed in *lastinop:

	idx = agino - irec->ir_startino + 1;
	if (idx < XFS_INODES_PER_CHUNK &&
	    (xfs_inobt_maskn(idx, XFS_INODES_PER_CHUNK - idx) & ~irec->ir_free)) {
		int	i;

		/* We got a right chunk with some left inodes allocated at it.
		 * Grab the chunk record.  Mark all the uninteresting inodes
		 * free -- because they're before our start point.
		 */
		for (i = 0; i < idx; i++) {
			if (XFS_INOBT_MASK(i) & ~irec->ir_free)
				irec->ir_freecount++;
		}

		irec->ir_free |= xfs_inobt_maskn(0, idx);
		*icount = irec->ir_count - irec->ir_freecount;
	}

Notice the "idx = agino - irec->ir_startino + 1".  That means that to go
from bulkstat v5 back to v1, we have to subtract 1 from the inode number
except in the case of zero, which is what libfrog does.  So I don't
think this patch is correct, though the reasons why are ... obscure and
took me several days to remember.
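
In other words, libfrog's v1 setup just inverts that bump.  As a
standalone sketch (the helper name is made up; the real code operates
directly on hdr->ino, as in the diff above):

	#include <stdint.h>

	/*
	 * Sketch only: turn a v5-style "next inode to stat" cursor into
	 * the v1 "last inode already stat'd" cursor that FSBULKSTAT
	 * expects.
	 */
	static uint64_t
	v5_startino_to_v1_lastip(uint64_t startino)
	{
		/*
		 * 0 must pass through unchanged: it is the "start of the
		 * whole filesystem" value, not a real inode, so there is
		 * nothing to step back over.
		 */
		return startino ? startino - 1 : 0;
	}

which is exactly the "if (hdr->ino) hdr->ino--;" that the patch changes.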

--D

> Thanks,
> 
> Stephen.

