* btrfs quota issues
@ 2016-08-11 17:32 Rakesh Sankeshi
  2016-08-11 19:13 ` Duncan
  2016-08-15  2:11 ` Qu Wenruo
  0 siblings, 2 replies; 13+ messages in thread
From: Rakesh Sankeshi @ 2016-08-11 17:32 UTC (permalink / raw)
  To: linux-btrfs

I set a 200GB limit for one user and a 100GB limit for another user.

As soon as they reached 139GB and 53GB respectively, they started
hitting quota errors. Is there any way to work around the quota
functionality on a btrfs LZO-compressed filesystem?



4.7.0-040700-generic #201608021801 SMP

btrfs-progs v4.7

Label: none  uuid: 66a78faf-2052-4864-8a52-c5aec7a56ab8
Total devices 2 FS bytes used 150.62GiB
devid    1 size 1.00TiB used 78.01GiB path /dev/xvdc
devid    2 size 1.00TiB used 78.01GiB path /dev/xvde

Data, RAID0: total=150.00GiB, used=149.12GiB
System, RAID1: total=8.00MiB, used=16.00KiB
Metadata, RAID1: total=3.00GiB, used=1.49GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

Filesystem      Size  Used Avail Use% Mounted on
/dev/xvdc       2.0T  153G  1.9T   8% /test_lzo


* Re: btrfs quota issues
  2016-08-11 17:32 btrfs quota issues Rakesh Sankeshi
@ 2016-08-11 19:13 ` Duncan
  2016-08-12 15:47   ` Rakesh Sankeshi
  2016-08-15  2:11 ` Qu Wenruo
  1 sibling, 1 reply; 13+ messages in thread
From: Duncan @ 2016-08-11 19:13 UTC (permalink / raw)
  To: linux-btrfs

Rakesh Sankeshi posted on Thu, 11 Aug 2016 10:32:03 -0700 as excerpted:

> I set a 200GB limit for one user and a 100GB limit for another user.
>
> As soon as they reached 139GB and 53GB respectively, they started
> hitting quota errors. Is there any way to work around the quota
> functionality on a btrfs LZO-compressed filesystem?

The btrfs quota subsystem remains somewhat buggy and unstable.  A lot of 
work has gone into it to fix the problems, including rewrites of the 
entire subsystem, and it's much better than it used to be, but it's still 
a feature that I would recommend not using on btrfs.

My general position is this.  Either you need quotas for your use-case or 
you don't.  If you truly need them, you're far better off using a more 
mature filesystem with proven quota subsystem reliability.  If you don't 
really need them, simply keep the feature off for now, and for however 
long it takes to stabilize the feature, which could be some time.

Of course if you're specifically testing quotas in order to report 
issues and test bugfixes, that's a specific case of needing quota 
functionality, and your work is greatly appreciated, as it'll help to 
eventually make that feature stable and workable for all. =:^)



-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



* Re: btrfs quota issues
  2016-08-11 19:13 ` Duncan
@ 2016-08-12 15:47   ` Rakesh Sankeshi
  2016-08-13 23:05     ` Duncan
  0 siblings, 1 reply; 13+ messages in thread
From: Rakesh Sankeshi @ 2016-08-12 15:47 UTC (permalink / raw)
  To: Duncan; +Cc: linux-btrfs

Thanks for your input.

Another question I had: is there any way to check the directory/file
sizes prior to compression, how much compression btrfs achieved, etc.?
Basically, some stats around compression and/or dedupe from btrfs.



* Re: btrfs quota issues
  2016-08-12 15:47   ` Rakesh Sankeshi
@ 2016-08-13 23:05     ` Duncan
  0 siblings, 0 replies; 13+ messages in thread
From: Duncan @ 2016-08-13 23:05 UTC (permalink / raw)
  To: linux-btrfs

Rakesh Sankeshi posted on Fri, 12 Aug 2016 08:47:13 -0700 as excerpted:

> Another question I had: is there any way to check the directory/file
> sizes prior to compression, how much compression btrfs achieved,
> etc.? Basically, some stats around compression and/or dedupe from
> btrfs.

There are some dedupe reporting tools I've seen posted that basically 
show shared vs. unique extents, but that's outside both my area of 
usage /and/ my area of interest, so that's pretty much all I can say 
on that.

Compression...  AFAIK there's no nice neat admin-level command for it 
(yet?), but it's possible to get the raw information via filefrag -v.  
The output is a list of extents for the file in question, with both their 
(uncompressed) size and start/end addresses.  If the size is greater than 
the space they take up based on their start/end addresses, then obviously 
that extent is compressed.  People have posted Python scripts and the 
like that process that information into something higher-level that an 
admin can digest, but I believe you'll have to mine the list archives 
to find them.

In the context of filefrag, it is worth mentioning, however, that btrfs 
compression blocks are 128 KiB in (uncompressed) size, and that filefrag 
considers each such reported block its own extent, even where it's 
contiguous with the next one.  Put another way, filefrag doesn't 
understand btrfs compression, tho it's possible to use its output to 
figure out whether a file is compressed or not, and by how much.

It's also relatively easy to quickly scan the filefrag output to see if 
all the extents are 128 KiB in size and consider that compressed, and/or 
to divide the file size by 128 KiB and see if that matches the number of 
reported extents, considering it compressed if so.  For a few one-off 
files, that's easy enough to do manually, but you'd definitely want to 
automate the process if you wanted that information on more than a few 
individual files.
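
Just to illustrate the idea, here's a rough, untested sketch along the 
lines of those posted scripts (my assumptions: Python 3, a 4 KiB 
filesystem block size so 32 blocks per 128 KiB compression chunk, and 
the usual filefrag -v column layout):

#!/usr/bin/env python3
# Rough heuristic: a file whose extents are all capped at 128 KiB
# (32 blocks of 4 KiB) was most likely compressed by btrfs.
# Caveat: files smaller than 128 KiB will look "compressed" here too.
import re
import subprocess
import sys

EXTENT_RE = re.compile(r"\s*\d+:\s*\d+\.\.\s*\d+:\s*\d+\.\.\s*\d+:\s*(\d+):")

def extent_lengths(path):
    """Return the length (in blocks) of each extent filefrag -v reports."""
    out = subprocess.run(["filefrag", "-v", path],
                         capture_output=True, text=True, check=True).stdout
    return [int(m.group(1))
            for m in map(EXTENT_RE.match, out.splitlines()) if m]

def probably_compressed(path, blocks_per_chunk=32):
    lengths = extent_lengths(path)
    if not lengths:
        return None  # empty or inline file; nothing to judge
    return all(n <= blocks_per_chunk for n in lengths)

for path in sys.argv[1:]:
    verdict = probably_compressed(path)
    if verdict is None:
        print(f"{path}: no extents reported")
    else:
        print(f"{path}: probably {'' if verdict else 'not '}compressed")

The scripts people have actually posted are no doubt more robust; 
treat this as a starting point only.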


-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



* Re: btrfs quota issues
  2016-08-11 17:32 btrfs quota issues Rakesh Sankeshi
  2016-08-11 19:13 ` Duncan
@ 2016-08-15  2:11 ` Qu Wenruo
  2016-08-15 19:11   ` Rakesh Sankeshi
  1 sibling, 1 reply; 13+ messages in thread
From: Qu Wenruo @ 2016-08-15  2:11 UTC (permalink / raw)
  To: Rakesh Sankeshi, linux-btrfs



At 08/12/2016 01:32 AM, Rakesh Sankeshi wrote:
> I set a 200GB limit for one user and a 100GB limit for another user.
>
> As soon as they reached 139GB and 53GB respectively, they started
> hitting quota errors. Is there any way to work around the quota
> functionality on a btrfs LZO-compressed filesystem?
>

Please paste the output of "btrfs qgroup show -prce <mnt>" if you are 
using the btrfs qgroup/quota function.

Also, AFAIK btrfs qgroups apply to subvolumes, not users.

So did you mean you limited one subvolume belonging to each user?

Thanks,
Qu


* Re: btrfs quota issues
  2016-08-15  2:11 ` Qu Wenruo
@ 2016-08-15 19:11   ` Rakesh Sankeshi
  2016-08-16  1:01     ` Qu Wenruo
  0 siblings, 1 reply; 13+ messages in thread
From: Rakesh Sankeshi @ 2016-08-15 19:11 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs

Yes, at the subvolume level.

qgroupid         rfer         excl     max_rfer     max_excl parent  child
--------         ----         ----     --------     -------- ------  -----
0/5          16.00KiB     16.00KiB         none         none ---     ---
0/258       119.48GiB    119.48GiB    200.00GiB         none ---     ---
0/259        92.57GiB     92.57GiB    200.00GiB         none ---     ---


Although I have a 200GB limit on each of the two subvolumes, I'm
running into the issue at about 120GB and 92GB.



* Re: btrfs quota issues
  2016-08-15 19:11   ` Rakesh Sankeshi
@ 2016-08-16  1:01     ` Qu Wenruo
  2016-08-16 16:05       ` Rakesh Sankeshi
  0 siblings, 1 reply; 13+ messages in thread
From: Qu Wenruo @ 2016-08-16  1:01 UTC (permalink / raw)
  To: Rakesh Sankeshi; +Cc: linux-btrfs



At 08/16/2016 03:11 AM, Rakesh Sankeshi wrote:
> Yes, at the subvolume level.
>
> qgroupid         rfer         excl     max_rfer     max_excl parent  child
> --------         ----         ----     --------     -------- ------  -----
> 0/5          16.00KiB     16.00KiB         none         none ---     ---
> 0/258       119.48GiB    119.48GiB    200.00GiB         none ---     ---
> 0/259        92.57GiB     92.57GiB    200.00GiB         none ---     ---
>
> Although I have a 200GB limit on each of the two subvolumes, I'm
> running into the issue at about 120GB and 92GB.

1) About workload
Would you mind describing the write pattern of your workload?

Just dd'ing data with LZO compression?
The compression part is a little complicated, as the reserved data 
size and the on-disk extent size are different.

It's possible that somewhere in the code we leaked some reserved data 
space.


2) Behavior after EDQUOT
After EDQUOT happens, can you still write data into the subvolume?
If you can still write a lot of data (at least several gigabytes), it 
seems to be something related to temporarily reserved space.

If not, and you can't even remove any file due to EDQUOT, then it's 
almost certain we have underflowed the reserved data.
In that case, unmounting and mounting again will be the only 
workaround. (In fact, not a workaround at all.)

3) Behavior without compression

If it's OK for you, would you mind testing it without compression?
Currently we mostly rely on the assumption that the on-disk extent 
size is the same as the in-memory extent size (the non-compressed 
case).

So qgroup + compression hasn't been the main focus before, and it is 
buggy.

If qgroup works sanely without compression, at least we can be sure 
that the cause is qgroup + compression.

Thanks,
Qu


* Re: btrfs quota issues
  2016-08-16  1:01     ` Qu Wenruo
@ 2016-08-16 16:05       ` Rakesh Sankeshi
  2016-08-16 23:33         ` Rakesh Sankeshi
  2016-08-17  0:56         ` Qu Wenruo
  0 siblings, 2 replies; 13+ messages in thread
From: Rakesh Sankeshi @ 2016-08-16 16:05 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs

2) After EDQUOT, I can't write anymore.

I can delete the data, but still can't write further.

3) I tested it without compression and also with LZO and ZLIB; all
behave the same way with qgroups. There's no consistency in when it
hits the quota limit, and I don't understand how it's calculating the
numbers.

In the case of ext4 and xfs, I can see clearly that it's hitting the
quota limit.




* Re: btrfs quota issues
  2016-08-16 16:05       ` Rakesh Sankeshi
@ 2016-08-16 23:33         ` Rakesh Sankeshi
  2016-08-17  0:09           ` Tim Walberg
  2016-08-17  0:56         ` Qu Wenruo
  1 sibling, 1 reply; 13+ messages in thread
From: Rakesh Sankeshi @ 2016-08-16 23:33 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs

Also, is there any timeframe for when the qgroup/quota issues will be
stabilized in btrfs?

Thanks!



* Re: btrfs quota issues
  2016-08-16 23:33         ` Rakesh Sankeshi
@ 2016-08-17  0:09           ` Tim Walberg
  0 siblings, 0 replies; 13+ messages in thread
From: Tim Walberg @ 2016-08-17  0:09 UTC (permalink / raw)
  To: Rakesh Sankeshi; +Cc: Qu Wenruo, linux-btrfs



On 08/16/2016 16:33 -0700, Rakesh Sankeshi wrote:
> Also, is there any timeframe for when the qgroup/quota issues will be
> stabilized in btrfs?
>
> Thanks!

This may or may not be of interest to you, but for the record, since at
least Linux 4.2, I've had pretty good luck with what I'd loosely call
"non-enforcing quotas" on btrfs - i.e. qgroups enabled so that you can
actually track usage, but no limits set, so none of the "deny
allocation" logic ever gets hit. It's understandably not the desired
end state - I still have to use daily reports and manual enforcement to
keep things in balance, but it's better than not being able to use
quotas at all, especially with the other benefits btrfs brings to the
table. If you really need the enforcement, I'm afraid a different FS
is the best option right now...
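
For what it's worth, my daily report boils down to something like the
following (a trimmed, untested sketch; the mount point and soft limits
are made-up examples, and it assumes "btrfs qgroup show --raw" prints
qgroupid and rfer as the first two columns):

#!/usr/bin/env python3
# Sketch of a "non-enforcing quota" report: qgroups track usage,
# soft limits live here in the script, and enforcement stays manual.
import subprocess

MNT = "/mnt"                    # made-up mount point
SOFT_LIMITS = {                 # qgroupid -> soft limit in bytes (examples)
    "0/258": 200 * 2**30,
    "0/259": 100 * 2**30,
}

out = subprocess.run(["btrfs", "qgroup", "show", "--raw", MNT],
                     capture_output=True, text=True, check=True).stdout
for line in out.splitlines():
    fields = line.split()
    # Data rows start with a qgroupid like "0/258"; rfer is column two.
    if fields and fields[0] in SOFT_LIMITS:
        rfer = int(fields[1])
        limit = SOFT_LIMITS[fields[0]]
        if rfer > limit:
            print("%s: %.1f GiB referenced, over the %.0f GiB soft limit"
                  % (fields[0], rfer / 2**30, limit / 2**30))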



-- 
twalberg@gmail.com


* Re: btrfs quota issues
  2016-08-16 16:05       ` Rakesh Sankeshi
  2016-08-16 23:33         ` Rakesh Sankeshi
@ 2016-08-17  0:56         ` Qu Wenruo
  2016-08-23 18:38           ` Rakesh Sankeshi
  1 sibling, 1 reply; 13+ messages in thread
From: Qu Wenruo @ 2016-08-17  0:56 UTC (permalink / raw)
  To: Rakesh Sankeshi; +Cc: linux-btrfs



At 08/17/2016 12:05 AM, Rakesh Sankeshi wrote:
> 2) After EDQUOT, I can't write anymore.
>
> I can delete the data, but still can't write further.

So it's an underflow now.

>
> 3) I tested it without compression and also with LZO and ZLIB; all
> behave the same way with qgroups. There's no consistency in when it
> hits the quota limit, and I don't understand how it's calculating the
> numbers.

Even without compression?!

That's a really big problem then.
Please describe the workload; this is an urgent bug now.

It would be best if you could provide a script to reproduce it.
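
Even something as small as the following would help (a minimal,
untested sketch of what I mean; the mount point, subvolume name and
limit are made up, it assumes quota is already enabled on the
filesystem, and it must run as root):

#!/usr/bin/env python3
# Minimal reproducer sketch: create a subvolume, put a qgroup limit on
# it, then write 1 MiB files until the first EDQUOT and report how
# much data actually fit before the limit kicked in.
import errno
import os
import subprocess

MNT = "/mnt"                              # hypothetical mount point
SUBVOL = os.path.join(MNT, "quota_test")  # hypothetical subvolume
LIMIT = "200G"

subprocess.run(["btrfs", "subvolume", "create", SUBVOL], check=True)
subprocess.run(["btrfs", "qgroup", "limit", LIMIT, SUBVOL], check=True)

chunk = os.urandom(1 << 20)   # incompressible; use b"\0" * (1 << 20)
                              # instead to exercise the compressed path
written = 0
try:
    i = 0
    while True:
        name = os.path.join(SUBVOL, "f%06d" % i)
        with open(name, "wb") as f:
            f.write(chunk)
            f.flush()
            os.fsync(f.fileno())  # force allocation so EDQUOT shows now
        written += len(chunk)
        i += 1
except OSError as e:
    if e.errno != errno.EDQUOT:
        raise
    print("EDQUOT after %.2f GiB (limit was %s)"
          % (written / 2**30, LIMIT))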



As for the meaning of the numbers: rfer (referenced) means the size 
of all extents the subvolume refers to, including both data and 
metadata.

excl (exclusive) means the size of all extents that belong only to 
that subvolume.

And since it's all about the size of on-disk extents, in the 
compression case it's the size after compression.

Also, if a subvolume refers to only part of an extent, the whole 
extent size is still accounted; for example, referring to 4 KiB of a 
1 MiB extent still adds the full 1 MiB to rfer.

Last but not least: considering there have been quite a lot of 
reports of hitting ENOSPC while there is still plenty of unallocated 
space, is it reporting an error message like "No space left on device" 
(ENOSPC) or "Quota exceeded" (EDQUOT)?

Thanks,
Qu


* Re: btrfs quota issues
  2016-08-17  0:56         ` Qu Wenruo
@ 2016-08-23 18:38           ` Rakesh Sankeshi
  2016-08-26  1:52             ` Qu Wenruo
  0 siblings, 1 reply; 13+ messages in thread
From: Rakesh Sankeshi @ 2016-08-23 18:38 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs

Sorry, I was out of town.

There is not much load on the system at all.

As we are hitting many issues in production, I'm just using this
system for testing. I built a few different filesystems: one with LZO
compression, a second with ZLIB, and a third without any compression.
All of them have quota-related issues.

Whenever there is an issue, I get the quota exceeded error (EDQUOT).

Please let me know if you still need more info.




* Re: btrfs quota issues
  2016-08-23 18:38           ` Rakesh Sankeshi
@ 2016-08-26  1:52             ` Qu Wenruo
  0 siblings, 0 replies; 13+ messages in thread
From: Qu Wenruo @ 2016-08-26  1:52 UTC (permalink / raw)
  To: Rakesh Sankeshi; +Cc: linux-btrfs



At 08/24/2016 02:38 AM, Rakesh Sankeshi wrote:
> Sorry, I was out of town.
>
> There is not much load on the system at all.
>
> As we are hitting many issues in production, I'm just using this
> system for testing. I built a few different filesystems: one with LZO
> compression, a second with ZLIB, and a third without any compression.
> All of them have quota-related issues.
>
> Whenever there is an issue, I get the quota exceeded error (EDQUOT).
>
> Please let me know if you still need more info.
>

Would you please try this patch and see if it brings any improvement?
https://patchwork.kernel.org/patch/9201685/

BTW, is balance/relocation involved in your workload?

Also, for the non-compressed case, what's the threshold that triggers 
the bug? Is it always about 100 and 90G?

Or is it related to the sum of the two subvolumes?
(In your initial report, the limit was 200G for each subvolume while 
the sum of the rfer of the two subvolumes seems to be about 200G; 
maybe just a coincidence?)

Thanks,
Qu


end of thread

Thread overview: 13+ messages
2016-08-11 17:32 btrfs quota issues Rakesh Sankeshi
2016-08-11 19:13 ` Duncan
2016-08-12 15:47   ` Rakesh Sankeshi
2016-08-13 23:05     ` Duncan
2016-08-15  2:11 ` Qu Wenruo
2016-08-15 19:11   ` Rakesh Sankeshi
2016-08-16  1:01     ` Qu Wenruo
2016-08-16 16:05       ` Rakesh Sankeshi
2016-08-16 23:33         ` Rakesh Sankeshi
2016-08-17  0:09           ` Tim Walberg
2016-08-17  0:56         ` Qu Wenruo
2016-08-23 18:38           ` Rakesh Sankeshi
2016-08-26  1:52             ` Qu Wenruo
