* [RFC][PATCH] xfs: adjust size/used/avail information for quota-df
@ 2018-03-09 13:15 Chengguang Xu
  2018-03-15 14:25 ` Brian Foster
  0 siblings, 1 reply; 7+ messages in thread
From: Chengguang Xu @ 2018-03-09 13:15 UTC (permalink / raw)
  To: linux-xfs; +Cc: darrick.wong, Chengguang Xu

In order to more accurately reflect the size/used/avail information
for quota-df, slightly adjust the related accounting logic.

Signed-off-by: Chengguang Xu <cgxu519@gmx.com>
---

Hello folks,

Recently I have been testing project quota for our container users and
found that sometimes the result of the df command does not accurately
represent block/inode usage. So I checked the logic in xfs_qm_statvfs()
and I think it would be better to slightly adjust the accounting logic.
What do you think?

Terms:
# Size(F)  - The size field in the df result for the filesystem
# Size(Q)  - The size field in the df result for the pquota directory
# Used(F)  - The used field in the df result for the filesystem
# Used(Q)  - The used field in the df result for the pquota directory
# Avail(F) - The avail field in the df result for the filesystem
# Avail(Q) - The avail field in the df result for the pquota directory
# Used(A)  - The actual amount used

Problems that I found:
1) Avail(Q) can be higher than Avail(F)
2) Used(A) can be higher than Used(Q)


 fs/xfs/xfs_qm_bhv.c | 40 ++++++++++++++++++++++++++++++----------
 1 file changed, 30 insertions(+), 10 deletions(-)

diff --git a/fs/xfs/xfs_qm_bhv.c b/fs/xfs/xfs_qm_bhv.c
index 2be6d27..cb2e6c9 100644
--- a/fs/xfs/xfs_qm_bhv.c
+++ b/fs/xfs/xfs_qm_bhv.c
@@ -38,21 +38,41 @@
 	limit = dqp->q_core.d_blk_softlimit ?
 		be64_to_cpu(dqp->q_core.d_blk_softlimit) :
 		be64_to_cpu(dqp->q_core.d_blk_hardlimit);
-	if (limit && statp->f_blocks > limit) {
-		statp->f_blocks = limit;
-		statp->f_bfree = statp->f_bavail =
-			(statp->f_blocks > dqp->q_res_bcount) ?
-			 (statp->f_blocks - dqp->q_res_bcount) : 0;
+
+	if (limit) {
+		if (limit > dqp->q_res_bcount + statp->f_bavail)
+			statp->f_blocks = dqp->q_res_bcount + statp->f_bavail;
+		else
+			statp->f_blocks = limit;
+	} else {
+		statp->f_blocks = dqp->q_res_bcount + statp->f_bavail;
+	}
+
+	if (dqp->q_res_bcount >= statp->f_blocks) {
+		statp->f_blocks = dqp->q_res_bcount;
+		statp->f_bfree = statp->f_bavail = 0;
+	} else {
+		statp->f_bfree = statp->f_bavail = statp->f_blocks - dqp->q_res_bcount;
 	}
 
 	limit = dqp->q_core.d_ino_softlimit ?
 		be64_to_cpu(dqp->q_core.d_ino_softlimit) :
 		be64_to_cpu(dqp->q_core.d_ino_hardlimit);
-	if (limit && statp->f_files > limit) {
-		statp->f_files = limit;
-		statp->f_ffree =
-			(statp->f_files > dqp->q_res_icount) ?
-			 (statp->f_ffree - dqp->q_res_icount) : 0;
+
+	if (limit) {
+		if (limit > dqp->q_res_icount + statp->f_ffree)
+			statp->f_files = dqp->q_res_icount + statp->f_ffree;
+		else
+			statp->f_files = limit;
+	} else {
+		statp->f_files = dqp->q_res_icount + statp->f_ffree;
+	}
+
+	if (dqp->q_res_icount >= statp->f_files) {
+		statp->f_files = dqp->q_res_icount;
+		statp->f_ffree = 0;
+	} else {
+		statp->f_ffree = statp->f_files - dqp->q_res_icount;
 	}
 }
 
-- 
1.8.3.1



* Re: [RFC][PATCH] xfs: adjust size/used/avail information for quota-df
  2018-03-09 13:15 [RFC][PATCH] xfs: adjust size/used/avail information for quota-df Chengguang Xu
@ 2018-03-15 14:25 ` Brian Foster
  2018-03-20 14:49   ` cgxu519
  0 siblings, 1 reply; 7+ messages in thread
From: Brian Foster @ 2018-03-15 14:25 UTC (permalink / raw)
  To: Chengguang Xu; +Cc: linux-xfs, darrick.wong

On Fri, Mar 09, 2018 at 09:15:55PM +0800, Chengguang Xu wrote:
> In order to more accurately reflect the size/used/avail information
> for quota-df, slightly adjust the related accounting logic.
> 
> Signed-off-by: Chengguang Xu <cgxu519@gmx.com>
> ---
> 
> Hello folks,
> 
> Recently I have been testing project quota for our container users and
> found that sometimes the result of the df command does not accurately
> represent block/inode usage. So I checked the logic in xfs_qm_statvfs()
> and I think it would be better to slightly adjust the accounting logic.
> What do you think?
> 
> Terms:
> # Size(F)  - The size field in the df result for the filesystem
> # Size(Q)  - The size field in the df result for the pquota directory
> # Used(F)  - The used field in the df result for the filesystem
> # Used(Q)  - The used field in the df result for the pquota directory
> # Avail(F) - The avail field in the df result for the filesystem
> # Avail(Q) - The avail field in the df result for the pquota directory
> # Used(A)  - The actual amount used
> 
> Problems that I found:
> 1) Avail(Q) can be higher than Avail(F)

Does this refer to a quota limit that exceeds the size of the fs? I'm
not sure that's necessarily a problem.

> 2) Used(A) can be higher than Used(Q)
> 

I'm not quite sure what this means.

As it is, the commit log doesn't clearly explain the problem you're
trying to solve. Perhaps you should provide some example commands and
output that demonstrate the problem and solution.

> 
>  fs/xfs/xfs_qm_bhv.c | 40 ++++++++++++++++++++++++++++++----------
>  1 file changed, 30 insertions(+), 10 deletions(-)
> 
> diff --git a/fs/xfs/xfs_qm_bhv.c b/fs/xfs/xfs_qm_bhv.c
> index 2be6d27..cb2e6c9 100644
> --- a/fs/xfs/xfs_qm_bhv.c
> +++ b/fs/xfs/xfs_qm_bhv.c
> @@ -38,21 +38,41 @@
>  	limit = dqp->q_core.d_blk_softlimit ?
>  		be64_to_cpu(dqp->q_core.d_blk_softlimit) :
>  		be64_to_cpu(dqp->q_core.d_blk_hardlimit);
> -	if (limit && statp->f_blocks > limit) {
> -		statp->f_blocks = limit;
> -		statp->f_bfree = statp->f_bavail =
> -			(statp->f_blocks > dqp->q_res_bcount) ?
> -			 (statp->f_blocks - dqp->q_res_bcount) : 0;

So the current logic is that if there's a quota limit and the limit is
more restrictive than the fs, we clamp the stat size/free info to the
pquota limit.

> +
> +	if (limit) {
> +		if (limit > dqp->q_res_bcount + statp->f_bavail)
> +			statp->f_blocks = dqp->q_res_bcount + statp->f_bavail;
> +		else
> +			statp->f_blocks = limit;
> +	} else {
> +		statp->f_blocks = dqp->q_res_bcount + statp->f_bavail;
> +	}
> +

Now it looks like we fix up f_blocks regardless of whether there's a
limit set. IOW:

	if (limit && limit <= dqp->q_res_bcount + statp->f_bavail)
		statp->f_blocks = limit;
	else
		statp->f_blocks = dqp->q_res_bcount + statp->f_bavail;

The purpose of the original code was to make a subtree with a pquota
appear like a smaller fs to a subtenant, for example. What is the
additional purpose of the f_blocks adjustment?

> +	if (dqp->q_res_bcount >= statp->f_blocks) {
> +		statp->f_blocks = dqp->q_res_bcount;
> +		statp->f_bfree = statp->f_bavail = 0;
> +	} else {
> +		statp->f_bfree = statp->f_bavail = statp->f_blocks - dqp->q_res_bcount;
>  	}

q_res_bcount is a component of f_blocks in the else case above, so I
think the first case here turns the logic into something like:

	if (limit && limit <= dqp->q_res_bcount + statp->f_bavail) {
		if (dqp->q_res_bcount > limit)
			limit = dqp->q_res_bcount;
		statp->f_blocks = limit;
	} else
		statp->f_blocks = dqp->q_res_bcount + statp->f_bavail;
	statp->f_bfree = statp->f_bavail = statp->f_blocks - dqp->q_res_bcount;

Is the purpose here to deal with an over limit soft quota or something?

Brian

>  
>  	limit = dqp->q_core.d_ino_softlimit ?
>  		be64_to_cpu(dqp->q_core.d_ino_softlimit) :
>  		be64_to_cpu(dqp->q_core.d_ino_hardlimit);
> -	if (limit && statp->f_files > limit) {
> -		statp->f_files = limit;
> -		statp->f_ffree =
> -			(statp->f_files > dqp->q_res_icount) ?
> -			 (statp->f_ffree - dqp->q_res_icount) : 0;
> +
> +	if (limit) {
> +		if (limit > dqp->q_res_icount + statp->f_ffree)
> +			statp->f_files = dqp->q_res_icount + statp->f_ffree;
> +		else
> +			statp->f_files = limit;
> +	} else {
> +		statp->f_files = dqp->q_res_icount + statp->f_ffree;
> +	}
> +
> +	if (dqp->q_res_icount >= statp->f_files) {
> +		statp->f_files = dqp->q_res_icount;
> +		statp->f_ffree = 0;
> +	} else {
> +		statp->f_ffree = statp->f_files - dqp->q_res_icount;
>  	}
>  }
>  
> -- 
> 1.8.3.1
> 


* Re: [RFC][PATCH] xfs: adjust size/used/avail information for quota-df
  2018-03-15 14:25 ` Brian Foster
@ 2018-03-20 14:49   ` cgxu519
  2018-03-20 17:41     ` Eric Sandeen
  0 siblings, 1 reply; 7+ messages in thread
From: cgxu519 @ 2018-03-20 14:49 UTC (permalink / raw)
  To: Brian Foster; +Cc: cgxu519, linux-xfs, darrick.wong

Hi Brian,

I'm sorry for the delayed reply; the detailed explanation is inline below.



> On Mar 15, 2018, at 10:25 PM, Brian Foster <bfoster@redhat.com> wrote:
> 
> On Fri, Mar 09, 2018 at 09:15:55PM +0800, Chengguang Xu wrote:
>> In order to more accurately reflect the size/used/avail information
>> for quota-df, slightly adjust the related accounting logic.
>> 
>> Signed-off-by: Chengguang Xu <cgxu519@gmx.com>
>> ---
>> 
>> Hello folks,
>> 
>> Recently I have been testing project quota for our container users and
>> found that sometimes the result of the df command does not accurately
>> represent block/inode usage. So I checked the logic in xfs_qm_statvfs()
>> and I think it would be better to slightly adjust the accounting logic.
>> What do you think?
>> 
>> Terms:
>> # Size(F)  - The size field in the df result for the filesystem
>> # Size(Q)  - The size field in the df result for the pquota directory
>> # Used(F)  - The used field in the df result for the filesystem
>> # Used(Q)  - The used field in the df result for the pquota directory
>> # Avail(F) - The avail field in the df result for the filesystem
>> # Avail(Q) - The avail field in the df result for the pquota directory
>> # Used(A)  - The actual amount used
>> 
>> Problems that I found:
>> 1) Avail(Q) can be higher than Avail(F)
> 
> Does this refer to a quota limit that exceeds the size of the fs? I'm
> not sure that's necessarily a problem.

No, not really. Assume we have a 100GB xfs filesystem (/mnt/test2) with
3 directories (pq1, pq2, pq3) inside it, and each directory has a project
quota set (size limit of 10GB).

When only 3.2MB of space is left available in the whole filesystem, df
for pq1, pq2 and pq3 still reports 9.5GB available, which is much more
than the real filesystem has. What do you think?

Detailed output [1] (without this fix patch):

$ df -h /mnt/test2
Filesystem      Size  Used Avail Use% Mounted on
/dev/vdb2       100G  100G  3.2M 100% /mnt/test2

$ df -h /mnt/test2/pq1
Filesystem      Size  Used Avail Use% Mounted on
/dev/vdb2        10G  570M  9.5G   6% /mnt/test2

$ df -h /mnt/test2/pq2
Filesystem      Size  Used Avail Use% Mounted on
/dev/vdb2        10G  570M  9.5G   6% /mnt/test2

$ df -h /mnt/test2/pq3
Filesystem      Size  Used Avail Use% Mounted on
/dev/vdb2        10G  570M  9.5G   6% /mnt/test2


Detailed output [2] (with this fix patch):

$ df -h /mnt/test2
Filesystem      Size  Used Avail Use% Mounted on
/dev/vdb2       100G  100G  3.2M 100% /mnt/test2

$ df -h /mnt/test2/pq1
Filesystem      Size  Used Avail Use% Mounted on
/dev/vdb2       574M  570M  3.2M 100% /mnt/test2

$ df -h /mnt/test2/pq2
Filesystem      Size  Used Avail Use% Mounted on
/dev/vdb2       574M  570M  3.2M 100% /mnt/test2

$ df -h /mnt/test2/pq3
Filesystem      Size  Used Avail Use% Mounted on
/dev/vdb2       574M  570M  3.2M 100% /mnt/test2


> 
>> 2) Used(A) can be higher than Used(Q)
>> 
> 
> I'm not quite sure what this means.
> 
> As it is, the commit log doesn't clearly explain the problem you're
> trying to solve. Perhaps you should provide some example commands and
> output that demonstrate the problem and solution.
> 
>> 
>> fs/xfs/xfs_qm_bhv.c | 40 ++++++++++++++++++++++++++++++----------
>> 1 file changed, 30 insertions(+), 10 deletions(-)
>> 
>> diff --git a/fs/xfs/xfs_qm_bhv.c b/fs/xfs/xfs_qm_bhv.c
>> index 2be6d27..cb2e6c9 100644
>> --- a/fs/xfs/xfs_qm_bhv.c
>> +++ b/fs/xfs/xfs_qm_bhv.c
>> @@ -38,21 +38,41 @@
>> 	limit = dqp->q_core.d_blk_softlimit ?
>> 		be64_to_cpu(dqp->q_core.d_blk_softlimit) :
>> 		be64_to_cpu(dqp->q_core.d_blk_hardlimit);
>> -	if (limit && statp->f_blocks > limit) {
>> -		statp->f_blocks = limit;
>> -		statp->f_bfree = statp->f_bavail =
>> -			(statp->f_blocks > dqp->q_res_bcount) ?
>> -			 (statp->f_blocks - dqp->q_res_bcount) : 0;
> 
> So the current logic is that if there's a quota limit and the limit is
> more restrictive than the fs, we clamp the stat size/free info to the
> pquota limit.

Here, I think this logic has some problems.

1. The limit is set to the soft limit if one exists, but the soft limit
can be exceeded temporarily, so the usage output might not exactly
represent actual usage. Especially when the soft limit and the hard
limit differ significantly, the usage output can be quite confusing.

2. When calculating free space, this logic does not consider the free
space of the real filesystem, so the reported free space is sometimes
much larger than the free space of the real filesystem. I showed the
details in explanation [1] above.

> 
>> +
>> +	if (limit) {
>> +		if (limit > dqp->q_res_bcount + statp->f_bavail)
>> +			statp->f_blocks = dqp->q_res_bcount + statp->f_bavail;
>> +		else
>> +			statp->f_blocks = limit;
>> +	} else {
>> +		statp->f_blocks = dqp->q_res_bcount + statp->f_bavail;
>> +	}
>> +
> 
> Now it looks like we fix up f_blocks regardless of whether there's a
> limit set. IOW:
> 
> 	if (limit && limit <= dqp->q_res_bcount + statp->f_bavail)
> 		statp->f_blocks = limit;
> 	else
> 		statp->f_blocks = dqp->q_res_bcount + statp->f_bavail;
> 
> The purpose of the original code was to make a subtree with a pquota
> appear like a smaller fs to a subtenant, for example. What is the
> additional purpose of the f_blocks adjustment?

The additional purpose of the f_blocks adjustment is to preserve the
accounting rule below (e.g., in output [2] above: 574M size = 570M used
+ 3.2M avail, allowing for rounding):

total = used + free (or avail)
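
In the patch's terms, just to illustrate the invariant the adjustment
maintains (a sketch using the fields as they stand at the end of the
block-count hunk; ASSERT() per the usual XFS helpers):

	/* by construction, after the adjustment: */
	ASSERT(statp->f_blocks ==
	       dqp->q_res_bcount + statp->f_bavail);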


> 
>> +	if (dqp->q_res_bcount >= statp->f_blocks) {
>> +		statp->f_blocks = dqp->q_res_bcount;
>> +		statp->f_bfree = statp->f_bavail = 0;
>> +	} else {
>> +		statp->f_bfree = statp->f_bavail = statp->f_blocks - dqp->q_res_bcount;
>> 	}
> 
> q_res_bcount is a component of f_blocks in the else case above, so I
> think the first case here turns the logic into something like:
> 
> 	if (limit && limit <= dqp->q_res_bcount + statp->f_bavail) {
> 		if (dqp->q_res_bcount > limit)
> 			limit = dqp->q_res_bcount;
> 		statp->f_blocks = limit;
> 	} else
> 		statp->f_blocks = dqp->q_res_bcount + statp->f_bavail;
> 	statp->f_bfree = statp->f_bavail = statp->f_blocks - dqp->q_res_bcount;
> 
> Is the purpose here to deal with an over limit soft quota or something?

Yes, exactly.

Thanks,
Chengguang.


> 
> Brian
> 
>> 
>> 	limit = dqp->q_core.d_ino_softlimit ?
>> 		be64_to_cpu(dqp->q_core.d_ino_softlimit) :
>> 		be64_to_cpu(dqp->q_core.d_ino_hardlimit);
>> -	if (limit && statp->f_files > limit) {
>> -		statp->f_files = limit;
>> -		statp->f_ffree =
>> -			(statp->f_files > dqp->q_res_icount) ?
>> -			 (statp->f_ffree - dqp->q_res_icount) : 0;
>> +
>> +	if (limit) {
>> +		if (limit > dqp->q_res_icount + statp->f_ffree)
>> +			statp->f_files = dqp->q_res_icount + statp->f_ffree;
>> +		else
>> +			statp->f_files = limit;
>> +	} else {
>> +		statp->f_files = dqp->q_res_icount + statp->f_ffree;
>> +	}
>> +
>> +	if (dqp->q_res_icount >= statp->f_files) {
>> +		statp->f_files = dqp->q_res_icount;
>> +		statp->f_ffree = 0;
>> +	} else {
>> +		statp->f_ffree = statp->f_files - dqp->q_res_icount;
>> 	}
>> }
>> 
>> -- 
>> 1.8.3.1
>> 



* Re: [RFC][PATCH] xfs: adjust size/used/avail information for quota-df
  2018-03-20 14:49   ` cgxu519
@ 2018-03-20 17:41     ` Eric Sandeen
  2018-03-20 19:18       ` Brian Foster
  0 siblings, 1 reply; 7+ messages in thread
From: Eric Sandeen @ 2018-03-20 17:41 UTC (permalink / raw)
  To: cgxu519, Brian Foster; +Cc: linux-xfs, darrick.wong

On 3/20/18 9:49 AM, cgxu519@gmx.com wrote:

...

> No, not really. Assume we have a 100GB xfs filesystem (/mnt/test2) with
> 3 directories (pq1, pq2, pq3) inside it, and each directory has a project
> quota set (size limit of 10GB).
> 
> When only 3.2MB of space is left available in the whole filesystem, df
> for pq1, pq2 and pq3 still reports 9.5GB available, which is much more
> than the real filesystem has. What do you think?
> 
> Detailed output [1] (without this fix patch):
> 
> $ df -h /mnt/test2
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/vdb2       100G  100G  3.2M 100% /mnt/test2
> 
> $ df -h /mnt/test2/pq1
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/vdb2        10G  570M  9.5G   6% /mnt/test2
> 
> $ df -h /mnt/test2/pq2
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/vdb2        10G  570M  9.5G   6% /mnt/test2
> 
> $ df -h /mnt/test2/pq3
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/vdb2        10G  570M  9.5G   6% /mnt/test2

I agree that this is a confusing result.
 
> Detailed output [2] (with this fix patch):
> 
> $ df -h /mnt/test2
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/vdb2       100G  100G  3.2M 100% /mnt/test2
> 
> $ df -h /mnt/test2/pq1
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/vdb2       574M  570M  3.2M 100% /mnt/test2
                   ^           ^
                   |           |
                   |           +-- This makes sense 
                   |
                   +-- This is a little bit odd

So you cap the available project space to host filesystem
available space, and also use that to compute the
total size of the "project" by adding used+available.

The slightly strange result is that "size" will shrink
as more filesystem space gets used, but I'm not
sure I have a better suggestion here... would the below
result be too confusing?  It is truthful; the limit is 10G,
570M are used, and only 3.2M is currently available due to
the host filesystem freespace constraint:

$ df -h /mnt/test2/pq1
Filesystem      Size  Used Avail Use% Mounted on
/dev/vdb2       10G   570M  3.2M 100% /mnt/test2

-Eric


* Re: [RFC][PATCH] xfs: adjust size/used/avail information for quota-df
  2018-03-20 17:41     ` Eric Sandeen
@ 2018-03-20 19:18       ` Brian Foster
  2018-03-21  3:36         ` cgxu519
  0 siblings, 1 reply; 7+ messages in thread
From: Brian Foster @ 2018-03-20 19:18 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: cgxu519, linux-xfs, darrick.wong

On Tue, Mar 20, 2018 at 12:41:50PM -0500, Eric Sandeen wrote:
> On 3/20/18 9:49 AM, cgxu519@gmx.com wrote:
> 
> ...
> 
> > No, not really. Assume we have a 100GB xfs filesystem (/mnt/test2) with
> > 3 directories (pq1, pq2, pq3) inside it, and each directory has a project
> > quota set (size limit of 10GB).
> > 
> > When only 3.2MB of space is left available in the whole filesystem, df
> > for pq1, pq2 and pq3 still reports 9.5GB available, which is much more
> > than the real filesystem has. What do you think?
> > 
> > Detailed output [1] (without this fix patch):
> > 
> > $ df -h /mnt/test2
> > Filesystem      Size  Used Avail Use% Mounted on
> > /dev/vdb2       100G  100G  3.2M 100% /mnt/test2
> > 
> > $ df -h /mnt/test2/pq1
> > Filesystem      Size  Used Avail Use% Mounted on
> > /dev/vdb2        10G  570M  9.5G   6% /mnt/test2
> > 
> > $ df -h /mnt/test2/pq2
> > Filesystem      Size  Used Avail Use% Mounted on
> > /dev/vdb2        10G  570M  9.5G   6% /mnt/test2
> > 
> > $ df -h /mnt/test2/pq3
> > Filesystem      Size  Used Avail Use% Mounted on
> > /dev/vdb2        10G  570M  9.5G   6% /mnt/test2
> 
> I agree that this is a confusing result.
>  

Ditto. Thanks for the example, Chengguang.

> > Detailed output [2] (with this fix patch):
> > 
> > $ df -h /mnt/test2
> > Filesystem      Size  Used Avail Use% Mounted on
> > /dev/vdb2       100G  100G  3.2M 100% /mnt/test2
> > 
> > $ df -h /mnt/test2/pq1
> > Filesystem      Size  Used Avail Use% Mounted on
> > /dev/vdb2       574M  570M  3.2M 100% /mnt/test2
>                    ^           ^
>                    |           |
>                    |           +-- This makes sense 
>                    |
>                    +-- This is a little bit odd
> 
> So you cap the available project space to host filesystem
> available space, and also use that to compute the
> total size of the "project" by adding used+available.
> 

I think I agree here too. Personally, I'd expect the fs size to remain
static one way or another (i.e., whether it's the full fs or a sub-fs
via project quota) and see the used/avail numbers change based on the
current state rather than see the size float around due to just wanting
to make the numbers add up. The latter makes it difficult to understand
the (virtual) geometry of the project.

> The slightly strange result is that "size" will shrink
> as more filesystem space gets used, but I'm not
> sure I have a better suggestion here... would the below
> result be too confusing?  It is truthful; the limit is 10G,
> 570M are used, and only 3.2M is currently available due to
> the host filesystem freespace constraint:
> 
> $ df -h /mnt/test2/pq1
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/vdb2       10G   570M  3.2M 100% /mnt/test2
> 

Slightly confusing, but I'd rather have accuracy than guarantee that
size = used + avail. The above at least tells us that something is
missing, even if it's not totally obvious that the missing space is
unavailable due to the broader fs free space limitation. It's probably
the type of thing you'd expect to see if space reporting were truly
accurate on a thin volume, for example.

FWIW, the other option is just to leave the output as above where we
presumably ignore the global free space cap and present 9.5GB available.
I think it's fine to fix/limit that, but I'd prefer an inaccurate
available number to an inaccurate/variable fs size either way.

With regard to a soft limit, it looks like we currently size the fs at
the soft limit and simply call it 100% used if the limit is exceeded.
That seems reasonable to me if only a soft limit is set, but I suppose
that could hide some info if both hard/soft limits are set. Perhaps we
should use the max of the soft/hard limit if both are set (or I guess
prioritize a hard limit iff it's larger than the soft, to avoid
insanity)? I suppose one could also argue that some admins might want to
size an fs with the soft limit, give users a bit of landing room, then
set a hard cap to protect the broader fs. :/
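
E.g., something like the following (a totally untested sketch, just to
illustrate the limit selection; max() per the usual kernel helpers):

	xfs_qcnt_t	soft = be64_to_cpu(dqp->q_core.d_blk_softlimit);
	xfs_qcnt_t	hard = be64_to_cpu(dqp->q_core.d_blk_hardlimit);

	/* prefer the larger limit when both are set, so an over-limit
	 * soft quota doesn't hide the hard cap */
	if (soft && hard)
		limit = max(soft, hard);
	else
		limit = soft ? soft : hard;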

Brian

> -Eric


* Re: [RFC][PATCH] xfs: adjust size/used/avail information for quota-df
  2018-03-20 19:18       ` Brian Foster
@ 2018-03-21  3:36         ` cgxu519
  2018-03-21 11:51           ` Brian Foster
  0 siblings, 1 reply; 7+ messages in thread
From: cgxu519 @ 2018-03-21  3:36 UTC (permalink / raw)
  To: Brian Foster; +Cc: cgxu519, Eric Sandeen, linux-xfs, darrick.wong

On Mar 21, 2018, at 3:18 AM, Brian Foster <bfoster@redhat.com> wrote:
> 
> On Tue, Mar 20, 2018 at 12:41:50PM -0500, Eric Sandeen wrote:
>> On 3/20/18 9:49 AM, cgxu519@gmx.com wrote:
>> 
>> ...
>> 
>>> No, not really. Assume we have a 100GB xfs filesystem (/mnt/test2) with
>>> 3 directories (pq1, pq2, pq3) inside it, and each directory has a project
>>> quota set (size limit of 10GB).
>>> 
>>> When only 3.2MB of space is left available in the whole filesystem, df
>>> for pq1, pq2 and pq3 still reports 9.5GB available, which is much more
>>> than the real filesystem has. What do you think?
>>> 
>>> Detailed output [1] (without this fix patch):
>>> 
>>> $ df -h /mnt/test2
>>> Filesystem      Size  Used Avail Use% Mounted on
>>> /dev/vdb2       100G  100G  3.2M 100% /mnt/test2
>>> 
>>> $ df -h /mnt/test2/pq1
>>> Filesystem      Size  Used Avail Use% Mounted on
>>> /dev/vdb2        10G  570M  9.5G   6% /mnt/test2
>>> 
>>> $ df -h /mnt/test2/pq2
>>> Filesystem      Size  Used Avail Use% Mounted on
>>> /dev/vdb2        10G  570M  9.5G   6% /mnt/test2
>>> 
>>> $ df -h /mnt/test2/pq3
>>> Filesystem      Size  Used Avail Use% Mounted on
>>> /dev/vdb2        10G  570M  9.5G   6% /mnt/test2
>> 
>> I agree that this is a confusing result.
>> 
> 
> Ditto. Thanks for the example Chengguang.
> 
>>> Detailed output [2] (with this fix patch):
>>> 
>>> $ df -h /mnt/test2
>>> Filesystem      Size  Used Avail Use% Mounted on
>>> /dev/vdb2       100G  100G  3.2M 100% /mnt/test2
>>> 
>>> $ df -h /mnt/test2/pq1
>>> Filesystem      Size  Used Avail Use% Mounted on
>>> /dev/vdb2       574M  570M  3.2M 100% /mnt/test2
>>                   ^           ^
>>                   |           |
>>                   |           +-- This makes sense 
>>                   |
>>                   +-- This is a little bit odd
>> 
>> So you cap the available project space to host filesystem
>> available space, and also use that to compute the
>> total size of the "project" by adding used+available.
>> 
> 
> I think I agree here too. Personally, I'd expect the fs size to remain
> static one way or another (i.e., whether it's the full fs or a sub-fs
> via project quota) and see the used/avail numbers change based on the
> current state rather than see the size float around due to just wanting
> to make the numbers add up. The latter makes it difficult to understand
> the (virtual) geometry of the project.
> 
>> The slightly strange result is that "size" will shrink
>> as more filesystem space gets used, but I'm not
>> sure I have a better suggestion here... would the below
>> result be too confusing?  It is truthful; the limit is 10G,
>> 570M are used, and only 3.2M is currently available due to
>> the host filesystem freespace constraint:
>> 
>> $ df -h /mnt/test2/pq1
>> Filesystem      Size  Used Avail Use% Mounted on
>> /dev/vdb2       10G   570M  3.2M 100% /mnt/test2
>> 
> 
> Slightly confusing, but I'd rather have accuracy than guarantee that
> size = used + avail. The above at least tells us that something is
> missing, even if it's not totally obvious that the missing space is
> unavailable due to the broader fs free space limitation. It's probably
> the type of thing you'd expect to see if space reporting were truly
> accurate on a thin volume, for example.


Personally, I agree with your suggestions; I care more about avail/used
than about the size. Unfortunately, statfs only collects f_blocks,
f_bfree and f_bavail, and df calculates used space from those variables,
so there is no way to specify the used space directly. This is the
reason I hope to guarantee 'total = used + avail'.
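
For reference, a small userspace sketch of how df derives its columns
from statvfs(2) (the path here is just an example):

	#include <stdio.h>
	#include <sys/statvfs.h>

	int main(void)
	{
		struct statvfs st;

		if (statvfs("/mnt/test2/pq1", &st) != 0)
			return 1;

		/* df(1) computes, in f_frsize units:
		 *   Size  = f_blocks
		 *   Used  = f_blocks - f_bfree
		 *   Avail = f_bavail
		 * so "Used" is always derived, never reported directly. */
		printf("Size:  %llu\n",
		       (unsigned long long)st.f_blocks * st.f_frsize);
		printf("Used:  %llu\n",
		       (unsigned long long)(st.f_blocks - st.f_bfree) * st.f_frsize);
		printf("Avail: %llu\n",
		       (unsigned long long)st.f_bavail * st.f_frsize);
		return 0;
	}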


If we keep the size static at 10GB and adjust avail to 3.2MB, then the
result looks like below. :(

$ df -h /mnt/test2
Filesystem      Size  Used Avail Use% Mounted on
/dev/vdb2       100G  100G  3.2M 100% /mnt/test2

$ df -h /mnt/test2/pq1
Filesystem      Size  Used Avail Use% Mounted on
/dev/vdb2        10G   10G  3.2M 100% /mnt/test2



> 
> FWIW, the other option is just to leave the output as above where we
> presumably ignore the global free space cap and present 9.5GB available.
> I think it's fine to fix/limit that, but I'd prefer an inaccurate
> available number to an inaccurate/variable fs size either way.
> 
> With regard to a soft limit, it looks like we currently size the fs at
> the soft limit and simply call it 100% used if the limit is exceeded.
> That seems reasonable to me if only a soft limit is set, but I suppose
> that could hide some info if both hard/soft limits are set. Perhaps we
> should use the max of the soft/hard limit if both are set (or I guess
> prioritize a hard limit iff it's larger than the soft, to avoid
> insanity)? I suppose one could also argue that some admins might want to
> size an fs with the soft limit, give users a bit of landing room, then
> set a hard cap to protect the broader fs. :/

If we want to keep the size static then we need to choose either the
soft or the hard limit. I think the hard limit is a little better and
more meaningful, because the soft limit might not directly cause a
write error even after it has been exceeded.



> 
> Brian
> 
>> -Eric



* Re: [RFC][PATCH] xfs: adjust size/used/avail information for quota-df
  2018-03-21  3:36         ` cgxu519
@ 2018-03-21 11:51           ` Brian Foster
  0 siblings, 0 replies; 7+ messages in thread
From: Brian Foster @ 2018-03-21 11:51 UTC (permalink / raw)
  To: cgxu519; +Cc: Eric Sandeen, linux-xfs, darrick.wong

On Wed, Mar 21, 2018 at 11:36:08AM +0800, cgxu519@gmx.com wrote:
> On Mar 21, 2018, at 3:18 AM, Brian Foster <bfoster@redhat.com> wrote:
> > 
> > On Tue, Mar 20, 2018 at 12:41:50PM -0500, Eric Sandeen wrote:
> >> On 3/20/18 9:49 AM, cgxu519@gmx.com wrote:
> >> 
> >> ...
> >> 
> >>> No, not really. Assume we have a 100GB xfs filesystem (/mnt/test2) with
> >>> 3 directories (pq1, pq2, pq3) inside it, and each directory has a project
> >>> quota set (size limit of 10GB).
> >>> 
> >>> When only 3.2MB of space is left available in the whole filesystem, df
> >>> for pq1, pq2 and pq3 still reports 9.5GB available, which is much more
> >>> than the real filesystem has. What do you think?
> >>> 
> >>> Detailed output [1] (without this fix patch):
> >>> 
> >>> $ df -h /mnt/test2
> >>> Filesystem      Size  Used Avail Use% Mounted on
> >>> /dev/vdb2       100G  100G  3.2M 100% /mnt/test2
> >>> 
> >>> $ df -h /mnt/test2/pq1
> >>> Filesystem      Size  Used Avail Use% Mounted on
> >>> /dev/vdb2        10G  570M  9.5G   6% /mnt/test2
> >>> 
> >>> $ df -h /mnt/test2/pq2
> >>> Filesystem      Size  Used Avail Use% Mounted on
> >>> /dev/vdb2        10G  570M  9.5G   6% /mnt/test2
> >>> 
> >>> $ df -h /mnt/test2/pq3
> >>> Filesystem      Size  Used Avail Use% Mounted on
> >>> /dev/vdb2        10G  570M  9.5G   6% /mnt/test2
> >> 
> >> I agree that this is a confusing result.
> >> 
> > 
> > Ditto. Thanks for the example Chengguang.
> > 
> >>> Detailed output [2] (with this fix patch):
> >>> 
> >>> $ df -h /mnt/test2
> >>> Filesystem      Size  Used Avail Use% Mounted on
> >>> /dev/vdb2       100G  100G  3.2M 100% /mnt/test2
> >>> 
> >>> $ df -h /mnt/test2/pq1
> >>> Filesystem      Size  Used Avail Use% Mounted on
> >>> /dev/vdb2       574M  570M  3.2M 100% /mnt/test2
> >>                   ^           ^
> >>                   |           |
> >>                   |           +-- This makes sense 
> >>                   |
> >>                   +-- This is a little bit odd
> >> 
> >> So you cap the available project space to host filesystem
> >> available space, and also use that to compute the
> >> total size of the "project" by adding used+available.
> >> 
> > 
> > I think I agree here too. Personally, I'd expect the fs size to remain
> > static one way or another (i.e., whether it's the full fs or a sub-fs
> > via project quota) and see the used/avail numbers change based on the
> > current state rather than see the size float around due to just wanting
> > to make the numbers add up. The latter makes it difficult to understand
> > the (virtual) geometry of the project.
> > 
> >> The slightly strange result is that "size" will shrink
> >> as more filesystem space gets used, but I'm not
> >> sure I have a better suggestion here... would the below
> >> result be too confusing?  It is truthful; the limit is 10G,
> >> 570M are used, and only 3.2M is currently available due to
> >> the host filesystem freespace constraint:
> >> 
> >> $ df -h /mnt/test2/pq1
> >> Filesystem      Size  Used Avail Use% Mounted on
> >> /dev/vdb2       10G   570M  3.2M 100% /mnt/test2
> >> 
> > 
> > Slightly confusing, but I'd rather have accuracy than guarantee that
> > size = used + avail. The above at least tells us that something is
> > missing, even if it's not totally obvious that the missing space is
> > unavailable due to the broader fs free space limitation. It's probably
> > the type of thing you'd expect to see if space reporting were truly
> > accurate on a thin volume, for example.
> 
> 
> Personally, I agree with your suggestions; I care more about avail/used
> than about the size. Unfortunately, statfs only collects f_blocks,
> f_bfree and f_bavail, and df calculates used space from those variables,
> so there is no way to specify the used space directly. This is the
> reason I hope to guarantee 'total = used + avail'.
> 
> 
> If we keep the size static at 10GB and adjust avail to 3.2MB, then the
> result looks like below. :(
> 
> $ df -h /mnt/test2
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/vdb2       100G  100G  3.2M 100% /mnt/test2
> 
> $ df -h /mnt/test2/pq1
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/vdb2        10G   10G  3.2M 100% /mnt/test2
> 

Ah, I see. So used becomes inaccurate at that point. Hmm, it's starting
to seem to me that maybe leaving this as is is the right approach. The
output above is misleading because that much space has not been used by
the quota. As previously noted, the floating size approach I personally
just find confusing. It's not really clear at all what it's telling me
as a user.

The current approach directly maps the quota state to the stats fields
so it clearly tells me 1.) the limit and 2.) how much of the limit I've
used. If the parent filesystem is more restrictive and operations result
in ENOSPC, then that's something the admin will have to resolve one way
or another.

That's just my .02. Perhaps others feel differently and/or have better
logic.

> 
> 
> > 
> > FWIW, the other option is just to leave the output as above where we
> > presumably ignore the global free space cap and present 9.5GB available.
> > I think it's fine to fix/limit that, but I'd prefer an inaccurate
> > available number to an inaccurate/variable fs size either way.
> > 
> > With regard to a soft limit, it looks like we currently size the fs at
> > the soft limit and simply call it 100% used if the limit is exceeded.
> > That seems reasonable to me if only a soft limit is set, but I suppose
> > that could hide some info if both hard/soft limits are set. Perhaps we
> > should use the max of the soft/hard limit if both are set (or I guess
> > prioritize a hard limit iff it's larger than the soft, to avoid
> > insanity)? I suppose one could also argue that some admins might want to
> > size an fs with the soft limit, give users a bit of landing room, then
> > set a hard cap to protect the broader fs. :/
> 
> If we want to keep the size static then we need to choose either the
> soft or the hard limit. I think the hard limit is a little better and
> more meaningful, because the soft limit might not directly cause a
> write error even after it has been exceeded.
> 

It seems reasonable enough to me to always use the hardlimit when both a
hard and soft limit are set, but I don't really have a strong opinion
either way.

Brian

> 
> 
> > 
> > Brian
> > 
> >> -Eric

