From: "Austin S. Hemmelgarn" <ahferroin7@gmail.com>
To: Tomasz Pala <gotar@polanet.pl>
Cc: Qu Wenruo <quwenruo.btrfs@gmx.com>, linux-btrfs@vger.kernel.org
Subject: Re: Report correct filesystem usage / limits on BTRFS subvolumes with quota
Date: Fri, 10 Aug 2018 14:48:15 -0400 [thread overview]
Message-ID: <5b00bed1-44cd-8b63-e720-85995e91feae@gmail.com> (raw)
In-Reply-To: <20180810182115.GA922@polanet.pl>
On 2018-08-10 14:21, Tomasz Pala wrote:
> On Fri, Aug 10, 2018 at 07:39:30 -0400, Austin S. Hemmelgarn wrote:
>
>>> I.e.: every shared segment should be accounted within quota (at least once).
>> I think what you mean to say here is that every shared extent should be
>> accounted to quotas for every location it is reflinked from. IOW, that
>> if an extent is shared between two subvolumes each with its own quota,
>> they should both have it accounted against their quota.
>
> Yes.
>
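To make that rule concrete, here is a hypothetical sketch (plain Python, not actual btrfs accounting code) of "charge every referencing subvolume in full":

```python
# Hypothetical model of the proposed accounting: a shared extent is
# charged in full against every subvolume that references it.

def usage_per_subvolume(extents):
    """extents: list of (size_in_bytes, set_of_referencing_subvolumes)."""
    usage = {}
    for size, refs in extents:
        for subvol in refs:
            # Each referencing subvolume is charged the full extent size.
            usage[subvol] = usage.get(subvol, 0) + size
    return usage

# One 1 MiB extent reflinked into two subvolumes: each is charged the
# full 1 MiB, so total accounted usage can exceed the raw space consumed.
print(usage_per_subvolume([(1 << 20, {"subvol_a", "subvol_b"})]))
```

Under this model the sum of per-subvolume usage is an upper bound on raw usage, which is exactly the `du`-like behavior being argued for.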
>>> Moreover - if there were per-subvolume RAID levels someday, the data
>>> should be accounted in relation to the "default" (filesystem) RAID level,
>>> i.e. having a RAID0 subvolume on RAID1 fs should account half of the
>>> data, and twice the data in an opposite scenario (like "dup" profile on
>>> single-drive filesystem).
>>
>> This is irrelevant to your point here. In fact, it goes against it:
>> you're arguing for quotas to report data like `du`, but all of the
>> chunk-profile stuff is invisible to `du` (and everything else in
>> userspace that doesn't look through BTRFS ioctls).
>
> My point is user-point, not some system tool like du. Consider this:
> 1. user wants higher (than default) protection of some data,
> 2. user wants more storage space with less protection.
>
> Ad. 1 - requesting better redundancy is similar to cp --reflink=never
> - there are functional differences, but the cost is similar: trading
> space for security,
>
> Ad. 2 - many would like to have .cache, .ccache, tmp or some build
> system directory with faster writes and no redundancy at all. This
> requires per-file/directory data profile attrs though.
>
> Since we agreed that transparent data compression is user's storage bonus,
> gains from reduced redundancy should also benefit the user.
Do you actually know of any services that do this though? I mean,
Amazon S3 and similar services have the option of reduced redundancy
(and other alternate storage tiers), but they charge
per-unit-data-per-unit-time with no hard limit on how much space you
use, and charge different rates for different storage tiers. In
comparison, what you appear to be talking about is something more
similar to Dropbox or Google Drive, where you pay up front for a fixed
amount of storage for a fixed amount of time and can't use more than
that, and all the services I know of like that offer exactly one option
for storage redundancy.
That aside, you seem to be overthinking this. No sane provider is going
to give their users the ability to create subvolumes themselves (there's
too much opportunity for a tiny bug in your software to cost you a _lot_
of lost revenue, because creating subvolumes can let you escape qgroups).
That means in turn that what you're trying to argue for is no
different from the provider just selling units of storage for different
redundancy levels separately, and charging different rates for each of
them. In fact, that approach is better, because it works independently of
the underlying storage technology (it will work with hardware RAID,
LVM2, MD, ZFS, and even distributed storage platforms like Ceph and
Gluster), _and_ it lets them charge differently than the trivial case of
N copies costing N times as much as one copy (which is not quite
accurate in terms of actual management costs).
Now, if BTRFS were to have the ability to set profiles per-file, then
this might be useful, though ideally with an option to tune how it gets
accounted.
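For what it's worth, the accounting rule proposed above could be sketched like this (hypothetical Python; none of this exists in btrfs today, and the profile names and ratios are just illustrative):

```python
# Hypothetical redundancy-adjusted accounting: charge data relative to
# the filesystem's default profile, so a RAID0 subvolume on a RAID1
# filesystem accounts half, and DUP on a single-device filesystem
# accounts double, as proposed above.

# Raw-space cost per logical byte for a few simple profiles.
RAW_RATIO = {"single": 1.0, "raid0": 1.0, "raid1": 2.0, "dup": 2.0}

def accounted_bytes(logical_size, subvol_profile, fs_default_profile):
    # Scale by the raw-space cost of the subvolume's profile relative
    # to the raw-space cost of the filesystem's default profile.
    return logical_size * RAW_RATIO[subvol_profile] / RAW_RATIO[fs_default_profile]

print(accounted_bytes(100, "raid0", "raid1"))  # 50.0
print(accounted_bytes(100, "dup", "single"))   # 200.0
```

Note that parity profiles (RAID5/6) would need a stripe-width-dependent ratio rather than a constant, which is one reason this kind of accounting is harder than it looks.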
>
> Disclaimer: all the above statements relate to the concept and
> semantics of quotas, not to be confused with qgroups.
>
Thread overview: 26+ messages
2018-07-31 13:49 Report correct filesystem usage / limits on BTRFS subvolumes with quota Thomas Leister
2018-07-31 14:32 ` Qu Wenruo
2018-07-31 16:03 ` Austin S. Hemmelgarn
2018-08-01 1:23 ` Qu Wenruo
2018-08-09 17:48 ` Tomasz Pala
2018-08-09 23:35 ` Qu Wenruo
2018-08-10 7:17 ` Tomasz Pala
2018-08-10 7:55 ` Qu Wenruo
2018-08-10 9:33 ` Tomasz Pala
2018-08-11 6:54 ` Andrei Borzenkov
2018-08-10 11:32 ` Austin S. Hemmelgarn
2018-08-10 18:07 ` Chris Murphy
2018-08-10 19:10 ` Austin S. Hemmelgarn
2018-08-11 3:29 ` Duncan
2018-08-12 3:16 ` Chris Murphy
2018-08-12 7:04 ` Andrei Borzenkov
2018-08-12 17:39 ` Andrei Borzenkov
2018-08-13 11:23 ` Austin S. Hemmelgarn
[not found] ` <f66b8ff3-d7ec-31ad-e9ca-e09c9eb76474@gmail.com>
2018-08-10 7:33 ` Tomasz Pala
2018-08-11 5:46 ` Andrei Borzenkov
2018-08-10 11:39 ` Austin S. Hemmelgarn
2018-08-10 18:21 ` Tomasz Pala
2018-08-10 18:48 ` Austin S. Hemmelgarn [this message]
2018-08-11 6:18 ` Andrei Borzenkov
2018-08-14 2:49 ` Jeff Mahoney
2018-08-15 11:22 ` Austin S. Hemmelgarn