All of lore.kernel.org
 help / color / mirror / Atom feed
* unsolvable technical issues?
@ 2018-06-21 23:13 waxhead
  2018-06-22  2:39 ` Chris Murphy
                   ` (3 more replies)
  0 siblings, 4 replies; 23+ messages in thread
From: waxhead @ 2018-06-21 23:13 UTC (permalink / raw)
  To: linux-btrfs

According to this:

https://stratis-storage.github.io/StratisSoftwareDesign.pdf
Page 4 , section 1.2

It claims that BTRFS still have significant technical issues that may 
never be resolved.
Could someone shed some light on exactly what these technical issues 
might be?! What are BTRFS biggest technical problems?

If you forget about the "RAID"5/6 like features then the only annoyances 
that I have with BTRFS so far is...

1. Lack of per subvolume "RAID" levels
2. Lack of not using the deviceid to re-discover and re-add dropped devices

And that's about it really...

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: unsolvable technical issues?
  2018-06-21 23:13 unsolvable technical issues? waxhead
@ 2018-06-22  2:39 ` Chris Murphy
  2018-06-27 18:50   ` waxhead
  2018-06-22  5:48 ` Nikolay Borisov
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 23+ messages in thread
From: Chris Murphy @ 2018-06-22  2:39 UTC (permalink / raw)
  To: waxhead; +Cc: Btrfs BTRFS

On Thu, Jun 21, 2018 at 5:13 PM, waxhead <waxhead@dirtcellar.net> wrote:
> According to this:
>
> https://stratis-storage.github.io/StratisSoftwareDesign.pdf
> Page 4 , section 1.2
>
> It claims that BTRFS still have significant technical issues that may never
> be resolved.
> Could someone shed some light on exactly what these technical issues might
> be?! What are BTRFS biggest technical problems?


I think it's appropriate to file an issue and ask what they're
referring to. It very well might be use case specific to Red Hat.
https://github.com/stratis-storage/stratis-storage.github.io/issues

I also think it's appropriate to crosslink: include URL for the start
of this thread in the issue, and the issue URL to this thread.



-- 
Chris Murphy

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: unsolvable technical issues?
  2018-06-21 23:13 unsolvable technical issues? waxhead
  2018-06-22  2:39 ` Chris Murphy
@ 2018-06-22  5:48 ` Nikolay Borisov
  2018-06-23 22:01   ` waxhead
  2018-06-23  5:11 ` Duncan
  2018-06-25 13:36 ` David Sterba
  3 siblings, 1 reply; 23+ messages in thread
From: Nikolay Borisov @ 2018-06-22  5:48 UTC (permalink / raw)
  To: waxhead, linux-btrfs



On 22.06.2018 02:13, waxhead wrote:
> According to this:
> 
> https://stratis-storage.github.io/StratisSoftwareDesign.pdf
> Page 4 , section 1.2
> 
> It claims that BTRFS still have significant technical issues that may
> never be resolved.
> Could someone shed some light on exactly what these technical issues
> might be?! What are BTRFS biggest technical problems?

That's a question that needs to be directed at the author of the statement.

> 
> If you forget about the "RAID"5/6 like features then the only annoyances
> that I have with BTRFS so far is...
> 
> 1. Lack of per subvolume "RAID" levels
> 2. Lack of not using the deviceid to re-discover and re-add dropped devices
> 
> And that's about it really...
> -- 
> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: unsolvable technical issues?
  2018-06-21 23:13 unsolvable technical issues? waxhead
  2018-06-22  2:39 ` Chris Murphy
  2018-06-22  5:48 ` Nikolay Borisov
@ 2018-06-23  5:11 ` Duncan
  2018-06-24 20:22   ` Goffredo Baroncelli
  2018-06-25 14:20   ` David Sterba
  2018-06-25 13:36 ` David Sterba
  3 siblings, 2 replies; 23+ messages in thread
From: Duncan @ 2018-06-23  5:11 UTC (permalink / raw)
  To: linux-btrfs

waxhead posted on Fri, 22 Jun 2018 01:13:31 +0200 as excerpted:

> According to this:
> 
> https://stratis-storage.github.io/StratisSoftwareDesign.pdf Page 4 ,
> section 1.2
> 
> It claims that BTRFS still have significant technical issues that may
> never be resolved.
> Could someone shed some light on exactly what these technical issues
> might be?! What are BTRFS biggest technical problems?
> 
> If you forget about the "RAID"5/6 like features then the only annoyances
> that I have with BTRFS so far is...
> 
> 1. Lack of per subvolume "RAID" levels
> 2. Lack of not using the deviceid to re-discover and re-add dropped
> devices
> 
> And that's about it really...

... And those both have solutions on the roadmap, with RFC patches 
already posted for #2 (tho I'm not sure they use devid) altho 
realistically they're likely to take years to appear and be tested to 
stability.  Meanwhile...

While as the others have said you really need to go to the author to get 
what was referred to, and I agree, I can speculate a bit.  While this 
*is* speculation, admittedly somewhat uninformed as I don't claim to be a 
dev, and I'd actually be interested in what others think so don't be 
afraid to tell me I haven't a clue, as long as you say why... based on 
several years reading the list now...

1) When I see btrfs "technical issue that may never be resolved", the #1 
first thing I think of, that AFAIK there are _definitely_ no plans to 
resolve, because it's very deeply woven into the btrfs core by now, is...

Filesystem UUID Identification.  Btrfs takes the UU bit of Universally 
Unique quite literally, assuming they really *are* unique, at least on 
that system, and uses them to identify the possibly multiple devices that 
may be components of the filesystem, a problem most filesystems don't 
have to deal with since they're single-device-only.  Because btrfs uses 
this supposedly unique ID to ID devices that belong to the filesystem, it 
can get *very* mixed up, with results possibly including dataloss, if it 
sees devices that don't actually belong to a filesystem with the same UUID 
as a mounted filesystem.

But technologies such as LVM allow cloning devices and these additional 
devices naturally have the same filesystem metadata, including filesystem 
UUID, as the original.  Making the problem worse is udev with its plug-n-
play style detection, which will normally trigger a btrfs device scan, 
thus making btrfs aware of new devices containing (a component of) a 
btrfs, as soon as udev detects the device.

So people, including users of redhat/fedora which standardizes on lvm and 
systemd/udev, have to be _very_ careful when cloning devices, etc, with 
existing mounted btrfs, not to allow btrfs to see the new clones, lest it 
get mixed up and write data to the wrong device due to it having the same 
UUID as the mounted filesystem, possibly resulting in data loss.

But btrfs made the choice to use UUID as if it were really unique, just 
as it says it is on the label, many years ago, when btrfs was much 
younger, and that choice is now embedded so deeply it's not practical to 
consider changing it to something else (tho there is a utility to allow a 
suitably careful user to change it on a cloned device, should it be 
necessary).

For someone standardized on a solution such as lvm, that could be 
considered an unsolvable technical issue indeed, and indeed, I don't 
believe anyone here will argue that it's going to change.  Tho I'd 
definitely argue the bug is in apps that deliberately make UUIDs non-UUID 
any longer, no longer unique, not in btrfs, which simply takes the claim 
on the label at face value.


While that's the only truly "unsolvable" one I know of, depending on 
one's strictness in defining "unsolvable" and the scope of the time frame 
under consideration, it's quite conceivable (indeed, having read a bit 
about them before, it seems to be the case, certainly the PR case) that 
stratis, et. al., have lost patience at the slow pace of btrfs 
development, and consider various other still missing features as now 
"practically insolvable as in won't be solved to production ready", at 
least in a "reasonable" time frame of under say 3-5 (or 5-7, or whatever) 
years.  These could arguably include:

2) Subvolume and (more technically) reflink-aware defrag.

It was there for a couple kernel versions some time ago, but "impossibly" 
slow, so it was disabled until such time as btrfs could be made to scale 
rather better in this regard.

There's no hint yet as to when that might actually be, if it will _ever_ 
be, so this can arguably be validly added to the "may never be resolved" 
list.

3) N-way-mirroring.

This one was on the roadmap for "right after raid56 support, since it'll 
use some of that code", since at least 3.5, when raid56 was supposed to 
be introduced in 3.6.  I know because this is the one I've been most 
looking forward to personally, tho my original reason, aging but still 
usable devices that I wanted extra redundancy for, has long since itself 
been aged out of rotation.

Of course we know the raid56 story and thus the implied delay here, if 
it's even still roadmapped at all now, and as with reflink-aware-defrag, 
there's no hint yet as to when we'll actually see this at all, let alone 
see it in a reasonably stable form, so at least in the practical sense, 
it's arguably "might never be resolved."

4) (Until relatively recently, and still in terms of scaling) Quotas.

Until relatively recently, quotas could arguably be added to the list.  
They were rewritten multiple times, and until recently, appeared to be 
effectively eternally broken.

While that has happily changed recently and (based on the list, I don't 
use 'em personally) quotas actually seem at least someone usable these 
days (altho less critical bugs are still being fixed), AFAIK quota 
scalability while doing btrfs maintenance remains a serious enough issue 
that the recommendation is to turn them off before doing balances, and 
the same would almost certainly apply to reflink-aware-defrag (turn 
quotas off before defraging) were it available, as well.  That 
scalability alone could arguably be a "technical issue that may never be 
resolved", and while quotas themselves appear to be reasonably functional 
now, that could arguably justify them still being on the list.


And of course that's avoiding the two you mentioned, tho arguably they 
could go on the "may in practice never be resolved, at least not in the 
non-bluesky lifetime" list as well.


As for stratis, supposedly they're deliberately taking existing proven in 
multi-layer-form technology and simply exposing it in unified form.  They 
claim this dramatically lessens the required new code and shortens time-
to-stability to something reasonable, in contrast to the about a decade 
btrfs has taken already, without yet reaching a full feature set and full 
stability.  IMO they may well have a point, tho AFAIK they're still new 
and immature themselves and (I believe) don't have it either, so it's a 
point that AFAIK has yet to be fully demonstrated.

We'll see how they evolve.  I do actually expect them to move faster than 
btrfs, but also expect the interface may not be as smooth and unified as 
they'd like to present as I expect there to remain some hiccups in 
smoothing over the layering issues.  Also, because they've deliberately 
chosen to go with existing technology where possible in ordered to evolve 
to stability faster, by the same token they're deliberately limiting the 
evolution to incremental over existing technology, and I expect there's 
some stuff btrfs will do better as a result... at least until btrfs (or a 
successor) becomes stable enough for them to integrate (parts of?) it as 
existing demonstrated-stable technology.

The other difference, AFAIK, is that stratis is specifically a 
corporation making it a/the main money product, whereas btrfs was always 
something the btrfs devs used at their employers (oracle, facebook), who 
have other things as their main product.  As such, stratis is much more 
likely to prioritize things like raid status monitors, hot-spares, etc, 
that can be part of the product they sell, where they've been lower 
priority for btrfs.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: unsolvable technical issues?
  2018-06-22  5:48 ` Nikolay Borisov
@ 2018-06-23 22:01   ` waxhead
  2018-06-24  3:55     ` Jukka Larja
  0 siblings, 1 reply; 23+ messages in thread
From: waxhead @ 2018-06-23 22:01 UTC (permalink / raw)
  To: Nikolay Borisov, linux-btrfs

Nikolay Borisov wrote:
> 
> 
> On 22.06.2018 02:13, waxhead wrote:
>> According to this:
>>
>> https://stratis-storage.github.io/StratisSoftwareDesign.pdf
>> Page 4 , section 1.2
>>
>> It claims that BTRFS still have significant technical issues that may
>> never be resolved.
>> Could someone shed some light on exactly what these technical issues
>> might be?! What are BTRFS biggest technical problems?
> 
> That's a question that needs to be directed at the author of the statement.
> 
I think not, and here's why: I am asking the BTRFS developers a general 
question , with some basis as to why I became curious. The question is 
simply what (if any) are the biggest technical issues in BTRFS because 
one must expect that if anyone is going to give me a credible answer it 
must be the people that hack on BTRFS and understand what they are 
working on and not the stratis guys. It would surprise me if they knew 
better than the BTRFS devs.

And yes absolutely, I do understand why one would want to direct that to 
the author of the statement as this claim is as far as I can tell 
completely without basis at all, and we all know that extraordinary 
claims require extraordinary evidence right? I do however feel that I 
should educate myself a bit on BTRFS to have some sort of basis to work 
on before confronting the stratis guys and risk ending up as the middle 
man in a potential email flame war.

So again , does BTRFS have any *known* major technical obstacles which 
the devs are having a hard time solving? (Duncan already gave the best 
answer so far).

PS! I have a tendency to sound a bit aggressive / harsh. I assure you 
all that it is not my intent. I am simply trying to get some knowledge 
of a filesystem (that interest me a lot before) trying to validate a 
"third party" claim.






^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: unsolvable technical issues?
  2018-06-23 22:01   ` waxhead
@ 2018-06-24  3:55     ` Jukka Larja
  2018-06-24  8:41       ` waxhead
  0 siblings, 1 reply; 23+ messages in thread
From: Jukka Larja @ 2018-06-24  3:55 UTC (permalink / raw)
  To: waxhead, linux-btrfs

waxhead wrote 24.6.2018 klo 1.01:
> Nikolay Borisov wrote:
>>
>> On 22.06.2018 02:13, waxhead wrote:
>>> According to this:
>>>
>>> https://stratis-storage.github.io/StratisSoftwareDesign.pdf
>>> Page 4 , section 1.2
>>>
>>> It claims that BTRFS still have significant technical issues that may
>>> never be resolved.
>>> Could someone shed some light on exactly what these technical issues
>>> might be?! What are BTRFS biggest technical problems?
>>
>> That's a question that needs to be directed at the author of the statement.
>>
> I think not, and here's why: I am asking the BTRFS developers a general 
> question , with some basis as to why I became curious. The question is 
> simply what (if any) are the biggest technical issues in BTRFS because one 
> must expect that if anyone is going to give me a credible answer it must be 
> the people that hack on BTRFS and understand what they are working on and 
> not the stratis guys. It would surprise me if they knew better than the 
> BTRFS devs.

I think the problem with that question is that it is too general. Duncan's 
post already highlights several things that could be a significant problem 
for some user while being non-issue for most. Without more specific problem 
description, best you can hope for is speculation on things that Btrfs 
currently does badly.

-Jukka Larja

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: unsolvable technical issues?
  2018-06-24  3:55     ` Jukka Larja
@ 2018-06-24  8:41       ` waxhead
  2018-06-24 15:06         ` Ferry Toth
  0 siblings, 1 reply; 23+ messages in thread
From: waxhead @ 2018-06-24  8:41 UTC (permalink / raw)
  To: Jukka Larja, linux-btrfs

Jukka Larja wrote:
> waxhead wrote 24.6.2018 klo 1.01:
>> Nikolay Borisov wrote:
>>>
>>> On 22.06.2018 02:13, waxhead wrote:
>>>> According to this:
>>>>
>>>> https://stratis-storage.github.io/StratisSoftwareDesign.pdf
>>>> Page 4 , section 1.2
>>>>
>>>> It claims that BTRFS still have significant technical issues that may
>>>> never be resolved.
>>>> Could someone shed some light on exactly what these technical issues
>>>> might be?! What are BTRFS biggest technical problems?
>>>
>>> That's a question that needs to be directed at the author of the 
>>> statement.
>>>
>> I think not, and here's why: I am asking the BTRFS developers a 
>> general question , with some basis as to why I became curious. The 
>> question is simply what (if any) are the biggest technical issues in 
>> BTRFS because one must expect that if anyone is going to give me a 
>> credible answer it must be the people that hack on BTRFS and 
>> understand what they are working on and not the stratis guys. It would 
>> surprise me if they knew better than the BTRFS devs.
> 
> I think the problem with that question is that it is too general. 
> Duncan's post already highlights several things that could be a 
> significant problem for some user while being non-issue for most. 
> Without more specific problem description, best you can hope for is 
> speculation on things that Btrfs currently does badly.
> 
> -Jukka Larja

Well, I still don't agree (apparently I am starting to become 
difficult). There is a "roadmap" on the BTRFS wiki that describes 
features implemented and feature planned for example. Naturally people 
are working on improvements to existing features and prep-work for new 
features. If some of this work is not moving ahead due to design issues 
it sounds likely that someone would know about it by now.



^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: unsolvable technical issues?
  2018-06-24  8:41       ` waxhead
@ 2018-06-24 15:06         ` Ferry Toth
  0 siblings, 0 replies; 23+ messages in thread
From: Ferry Toth @ 2018-06-24 15:06 UTC (permalink / raw)
  To: linux-btrfs

waxhead wrote:

> Jukka Larja wrote:
>> waxhead wrote 24.6.2018 klo 1.01:
>>> Nikolay Borisov wrote:
>>>>
>>>> On 22.06.2018 02:13, waxhead wrote:
>>>>> According to this:
>>>>>
>>>>> https://stratis-storage.github.io/StratisSoftwareDesign.pdf
>>>>> Page 4 , section 1.2
>>>>>
>>>>> It claims that BTRFS still have significant technical issues that may
>>>>> never be resolved.
>>>>> Could someone shed some light on exactly what these technical issues
>>>>> might be?! What are BTRFS biggest technical problems?
>>>>
>>>> That's a question that needs to be directed at the author of the
>>>> statement.
>>>>
>>> I think not, and here's why: I am asking the BTRFS developers a
>>> general question , with some basis as to why I became curious. The
>>> question is simply what (if any) are the biggest technical issues in
>>> BTRFS because one must expect that if anyone is going to give me a
>>> credible answer it must be the people that hack on BTRFS and
>>> understand what they are working on and not the stratis guys. It would
>>> surprise me if they knew better than the BTRFS devs.
>> 
>> I think the problem with that question is that it is too general.
>> Duncan's post already highlights several things that could be a
>> significant problem for some user while being non-issue for most.
>> Without more specific problem description, best you can hope for is
>> speculation on things that Btrfs currently does badly.
>> 
>> -Jukka Larja
> 
> Well, I still don't agree (apparently I am starting to become
> difficult). There is a "roadmap" on the BTRFS wiki that describes
> features implemented and feature planned for example. Naturally people
> are working on improvements to existing features and prep-work for new
> features. If some of this work is not moving ahead due to design issues
> it sounds likely that someone would know about it by now.

This one doesn't seem to be moving ahead, while it seems like a very 
promising one: Hot data tracking and moving to faster devices (or provided 
on the generic VFS layer)

It would be really fantastic to just add a ssd to a pool of hdd's and have 
fsync sensitive stuff run normally (dpkg on raid10 with 50 snapshots 
currently can take hours to do a few minute job)

> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html



^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: unsolvable technical issues?
  2018-06-23  5:11 ` Duncan
@ 2018-06-24 20:22   ` Goffredo Baroncelli
  2018-06-25 11:26     ` Austin S. Hemmelgarn
  2018-06-25 14:20   ` David Sterba
  1 sibling, 1 reply; 23+ messages in thread
From: Goffredo Baroncelli @ 2018-06-24 20:22 UTC (permalink / raw)
  To: Duncan, linux-btrfs

On 06/23/2018 07:11 AM, Duncan wrote:
> waxhead posted on Fri, 22 Jun 2018 01:13:31 +0200 as excerpted:
> 
>> According to this:
>>
>> https://stratis-storage.github.io/StratisSoftwareDesign.pdf Page 4 ,
>> section 1.2
>>
>> It claims that BTRFS still have significant technical issues that may
>> never be resolved.
>> Could someone shed some light on exactly what these technical issues
>> might be?! What are BTRFS biggest technical problems?
>>
>> If you forget about the "RAID"5/6 like features then the only annoyances
>> that I have with BTRFS so far is...
>>
>> 1. Lack of per subvolume "RAID" levels
>> 2. Lack of not using the deviceid to re-discover and re-add dropped
>> devices
>>
>> And that's about it really...
> 
> ... And those both have solutions on the roadmap, with RFC patches 
> already posted for #2 (tho I'm not sure they use devid) altho 
> realistically they're likely to take years to appear and be tested to 
> stability.  Meanwhile...
> 
> While as the others have said you really need to go to the author to get 
> what was referred to, and I agree, I can speculate a bit.  While this 
> *is* speculation, admittedly somewhat uninformed as I don't claim to be a 
> dev, and I'd actually be interested in what others think so don't be 
> afraid to tell me I haven't a clue, as long as you say why... based on 
> several years reading the list now...
> 
> 1) When I see btrfs "technical issue that may never be resolved", the #1 
> first thing I think of, that AFAIK there are _definitely_ no plans to 
> resolve, because it's very deeply woven into the btrfs core by now, is...
> 
> Filesystem UUID Identification.  Btrfs takes the UU bit of Universally 
> Unique quite literally, assuming they really *are* unique, at least on 
> that system, and uses them to identify the possibly multiple devices that 
> may be components of the filesystem, a problem most filesystems don't 
> have to deal with since they're single-device-only.  Because btrfs uses 
> this supposedly unique ID to ID devices that belong to the filesystem, it 
> can get *very* mixed up, with results possibly including dataloss, if it 
> sees devices that don't actually belong to a filesystem with the same UUID 
> as a mounted filesystem.

As partial workaround you can disable udev btrfs rules and then do a "btrfs dev scan" manually only for the device which you need. The you can mount the filesystem. Unfortunately you cannot mount two filesystem with the same UUID. However I have to point out that also LVM/dm might have problems if you clone a PV....

[...]
der say 3-5 (or 5-7, or whatever) 
> years.  These could arguably include:
> 
> 2) Subvolume and (more technically) reflink-aware defrag.
> 
> It was there for a couple kernel versions some time ago, but "impossibly" 
> slow, so it was disabled until such time as btrfs could be made to scale 
> rather better in this regard.

Did you try something like that with XFS+DM snapshot ? No you can't, because defrag in XFS cannot traverse snapshot (and I have to suppose that defrag cannot be effective on a dm-snapshot at all)..
What I am trying to point out is that even tough btrfs is not the fastest filesystem (and for some workload is VERY slow), when you compare it when few snapshots were presents LVM/dm is a lot slower.

IMHO most of the complaint which affect BTRFS, are due to the fact that with BTRFS an user can quite easily exploit a lot of features and their combinations. When a the slowness issue appears when some advance features combinations are used (i.e. multiple disks profile and (a lot of ) snapshots), this is reported as a BTRFS failure. But in fact even LVM/dm is very slow when the snapshot is used. 


> 
> There's no hint yet as to when that might actually be, if it will _ever_ 
> be, so this can arguably be validly added to the "may never be resolved" 
> list.
> 
> 3) N-way-mirroring.
> 
[...]
This is not an issue, but a not implemented feature
> 
> 4) (Until relatively recently, and still in terms of scaling) Quotas.
> 
> Until relatively recently, quotas could arguably be added to the list.  
> They were rewritten multiple times, and until recently, appeared to be 
> effectively eternally broken.

Even tough what you are reporting is correct, I have to point out that the quota in BTRFS is more complex than the equivalent one of the other FS. In fact it handles (good or bad) quota of gorup of subvolumes. How this concept could be translated in terms of "stratis"


[...]
> 
> As for stratis, supposedly they're deliberately taking existing proven in 
> multi-layer-form technology and simply exposing it in unified form.  They 
> claim this dramatically lessens the required new code and shortens time-
> to-stability to something reasonable, in contrast to the about a decade 
> btrfs has taken already, without yet reaching a full feature set and full 
> stability.  IMO they may well have a point, tho AFAIK they're still new 
> and immature themselves and (I believe) don't have it either, so it's a 
> point that AFAIK has yet to be fully demonstrated.
> 
> We'll see how they evolve.  I do actually expect them to move faster than 
> btrfs, but also expect the interface may not be as smooth and unified as 
> they'd like to present as I expect there to remain some hiccups in 
> smoothing over the layering issues.  Also, because they've deliberately 
> chosen to go with existing technology where possible in ordered to evolve 
> to stability faster, by the same token they're deliberately limiting the 
> evolution to incremental over existing technology, and I expect there's 
> some stuff btrfs will do better as a result... at least until btrfs (or a 
> successor) becomes stable enough for them to integrate (parts of?) it as 
> existing demonstrated-stable technology.

I fully agree with the above sentences...
> 
> The other difference, AFAIK, is that stratis is specifically a 
> corporation making it a/the main money product, whereas btrfs was always 
> something the btrfs devs used at their employers (oracle, facebook), who 
> have other things as their main product.  As such, stratis is much more 
> likely to prioritize things like raid status monitors, hot-spares, etc, 
> that can be part of the product they sell, where they've been lower 
> priority for btrfs.
> 


-- 
gpg @keyserver.linux.it: Goffredo Baroncelli <kreijackATinwind.it>
Key fingerprint BBF5 1610 0B64 DAC6 5F7D  17B2 0EDA 9B37 8B82 E0B5

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: unsolvable technical issues?
  2018-06-24 20:22   ` Goffredo Baroncelli
@ 2018-06-25 11:26     ` Austin S. Hemmelgarn
  2018-06-30  3:22       ` Duncan
  0 siblings, 1 reply; 23+ messages in thread
From: Austin S. Hemmelgarn @ 2018-06-25 11:26 UTC (permalink / raw)
  To: kreijack, Duncan, linux-btrfs

On 2018-06-24 16:22, Goffredo Baroncelli wrote:
> On 06/23/2018 07:11 AM, Duncan wrote:
>> waxhead posted on Fri, 22 Jun 2018 01:13:31 +0200 as excerpted:
>>
>>> According to this:
>>>
>>> https://stratis-storage.github.io/StratisSoftwareDesign.pdf Page 4 ,
>>> section 1.2
>>>
>>> It claims that BTRFS still have significant technical issues that may
>>> never be resolved.
>>> Could someone shed some light on exactly what these technical issues
>>> might be?! What are BTRFS biggest technical problems?
>>>
>>> If you forget about the "RAID"5/6 like features then the only annoyances
>>> that I have with BTRFS so far is...
>>>
>>> 1. Lack of per subvolume "RAID" levels
>>> 2. Lack of not using the deviceid to re-discover and re-add dropped
>>> devices
>>>
>>> And that's about it really...
>>
>> ... And those both have solutions on the roadmap, with RFC patches
>> already posted for #2 (tho I'm not sure they use devid) altho
>> realistically they're likely to take years to appear and be tested to
>> stability.  Meanwhile...
>>
>> While as the others have said you really need to go to the author to get
>> what was referred to, and I agree, I can speculate a bit.  While this
>> *is* speculation, admittedly somewhat uninformed as I don't claim to be a
>> dev, and I'd actually be interested in what others think so don't be
>> afraid to tell me I haven't a clue, as long as you say why... based on
>> several years reading the list now...
>>
>> 1) When I see btrfs "technical issue that may never be resolved", the #1
>> first thing I think of, that AFAIK there are _definitely_ no plans to
>> resolve, because it's very deeply woven into the btrfs core by now, is...
>>
>> Filesystem UUID Identification.  Btrfs takes the UU bit of Universally
>> Unique quite literally, assuming they really *are* unique, at least on
>> that system, and uses them to identify the possibly multiple devices that
>> may be components of the filesystem, a problem most filesystems don't
>> have to deal with since they're single-device-only.  Because btrfs uses
>> this supposedly unique ID to ID devices that belong to the filesystem, it
>> can get *very* mixed up, with results possibly including dataloss, if it
>> sees devices that don't actually belong to a filesystem with the same UUID
>> as a mounted filesystem.
> 
> As partial workaround you can disable udev btrfs rules and then do a "btrfs dev scan" manually only for the device which you need. The you can mount the filesystem. Unfortunately you cannot mount two filesystem with the same UUID. However I have to point out that also LVM/dm might have problems if you clone a PV....
You don't even need `btrfs dev scan` if you just specify the exact set 
of devices in the mount options.  The `device=` mount option tells the 
kernel to check that device during the mount process.

Also, while LVM does have 'issues' with cloned PV's, it fails safe (by 
refusing to work on VG's that have duplicate PV's), while BTRFS fails 
very unsafely (by randomly corrupting data).
> 
> [...]
> der say 3-5 (or 5-7, or whatever)
>> years.  These could arguably include:
>>
>> 2) Subvolume and (more technically) reflink-aware defrag.
>>
>> It was there for a couple kernel versions some time ago, but "impossibly"
>> slow, so it was disabled until such time as btrfs could be made to scale
>> rather better in this regard.
> 
> Did you try something like that with XFS+DM snapshot ? No you can't, because defrag in XFS cannot traverse snapshot (and I have to suppose that defrag cannot be effective on a dm-snapshot at all)..
> What I am trying to point out is that even tough btrfs is not the fastest filesystem (and for some workload is VERY slow), when you compare it when few snapshots were presents LVM/dm is a lot slower.
> 
> IMHO most of the complaint which affect BTRFS, are due to the fact that with BTRFS an user can quite easily exploit a lot of features and their combinations. When a the slowness issue appears when some advance features combinations are used (i.e. multiple disks profile and (a lot of ) snapshots), this is reported as a BTRFS failure. But in fact even LVM/dm is very slow when the snapshot is used.
I still contend that the biggest issue WRT reflink-aware defrag was that 
it was not optional.  The only way to get the old defrag behavior was to 
boot a kernel that didn't have reflink-aware defrag support.  IOW, 
_everyone_ had to deal with the performance issues, not just the people 
who wanted to use reflink-aware defrag.
> 
> 
>>
>> There's no hint yet as to when that might actually be, if it will _ever_
>> be, so this can arguably be validly added to the "may never be resolved"
>> list.
>>
>> 3) N-way-mirroring.
>>
> [...]
> This is not an issue, but a not implemented feature
If you're looking at feature parity with competitors, it's an issue.
>>
>> 4) (Until relatively recently, and still in terms of scaling) Quotas.
>>
>> Until relatively recently, quotas could arguably be added to the list.
>> They were rewritten multiple times, and until recently, appeared to be
>> effectively eternally broken.
> 
> Even tough what you are reporting is correct, I have to point out that the quota in BTRFS is more complex than the equivalent one of the other FS. In fact it handles (good or bad) quota of gorup of subvolumes. How this concept could be translated in terms of "stratis"
> 
> 
> [...]
>>
>> As for stratis, supposedly they're deliberately taking existing proven in
>> multi-layer-form technology and simply exposing it in unified form.  They
>> claim this dramatically lessens the required new code and shortens time-
>> to-stability to something reasonable, in contrast to the about a decade
>> btrfs has taken already, without yet reaching a full feature set and full
>> stability.  IMO they may well have a point, tho AFAIK they're still new
>> and immature themselves and (I believe) don't have it either, so it's a
>> point that AFAIK has yet to be fully demonstrated.
>>
>> We'll see how they evolve.  I do actually expect them to move faster than
>> btrfs, but also expect the interface may not be as smooth and unified as
>> they'd like to present as I expect there to remain some hiccups in
>> smoothing over the layering issues.  Also, because they've deliberately
>> chosen to go with existing technology where possible in ordered to evolve
>> to stability faster, by the same token they're deliberately limiting the
>> evolution to incremental over existing technology, and I expect there's
>> some stuff btrfs will do better as a result... at least until btrfs (or a
>> successor) becomes stable enough for them to integrate (parts of?) it as
>> existing demonstrated-stable technology.
> 
> I fully agree with the above sentences...
>>
>> The other difference, AFAIK, is that stratis is specifically a
>> corporation making it a/the main money product, whereas btrfs was always
>> something the btrfs devs used at their employers (oracle, facebook), who
>> have other things as their main product.  As such, stratis is much more
>> likely to prioritize things like raid status monitors, hot-spares, etc,
>> that can be part of the product they sell, where they've been lower
>> priority for btrfs.
>>
> 
> 


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: unsolvable technical issues?
  2018-06-21 23:13 unsolvable technical issues? waxhead
                   ` (2 preceding siblings ...)
  2018-06-23  5:11 ` Duncan
@ 2018-06-25 13:36 ` David Sterba
  2018-06-25 16:43   ` waxhead
  3 siblings, 1 reply; 23+ messages in thread
From: David Sterba @ 2018-06-25 13:36 UTC (permalink / raw)
  To: waxhead; +Cc: linux-btrfs

On Fri, Jun 22, 2018 at 01:13:31AM +0200, waxhead wrote:
> According to this:
> 
> https://stratis-storage.github.io/StratisSoftwareDesign.pdf
> Page 4 , section 1.2
> 
> It claims that BTRFS still have significant technical issues that may 
> never be resolved.
> Could someone shed some light on exactly what these technical issues 
> might be?! What are BTRFS biggest technical problems?

The subject you write is 'unsolvable', which I read as 'impossible to
solve', eg. on the design level. I'm not aware of such issues.

If this is about issues that are difficult either to implement or
getting right, there are a few known ones.

> If you forget about the "RAID"5/6 like features then the only annoyances 
> that I have with BTRFS so far is...
> 
> 1. Lack of per subvolume "RAID" levels
> 2. Lack of not using the deviceid to re-discover and re-add dropped devices
> 
> And that's about it really...

This could quickly turn into 'my faviourite bug/feature' list that can
be very long. The most asked for are raid56, and performance of qgroups.

Qu Wenruo improved some of the core problems and Jeff is working on the
performance problem. So there are people working on that.

On the raid56 front, there were some recent updates that fixed some
bugs, but the fix for write hole is still missing so we can't raise the
status yet.  I have some some good news but nobody should get too
excited until the code lands.

I have prototype for the N-copy raid (where N is 3 or 4).  This will
provide the underlying infrastructure for the raid5/6 logging mechanism,
the rest can be taken from Liu Bo's patchset sent some time ago.  In the
end the N-copy can be used for data and metadata too, independently and
flexibly switched via the balance filters. This will cost one
incompatibility bit.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: unsolvable technical issues?
  2018-06-23  5:11 ` Duncan
  2018-06-24 20:22   ` Goffredo Baroncelli
@ 2018-06-25 14:20   ` David Sterba
  1 sibling, 0 replies; 23+ messages in thread
From: David Sterba @ 2018-06-25 14:20 UTC (permalink / raw)
  To: Duncan; +Cc: linux-btrfs

On Sat, Jun 23, 2018 at 05:11:52AM +0000, Duncan wrote:
> > According to this:
> > 
> > https://stratis-storage.github.io/StratisSoftwareDesign.pdf Page 4 ,
> > section 1.2
> > 
> > It claims that BTRFS still have significant technical issues that may
> > never be resolved.
> > Could someone shed some light on exactly what these technical issues
> > might be?! What are BTRFS biggest technical problems?
> > 
> > If you forget about the "RAID"5/6 like features then the only annoyances
> > that I have with BTRFS so far is...
> > 
> > 1. Lack of per subvolume "RAID" levels
> > 2. Lack of not using the deviceid to re-discover and re-add dropped
> > devices
> > 
> > And that's about it really...
> 
> ... And those both have solutions on the roadmap, with RFC patches 
> already posted for #2 (tho I'm not sure they use devid) altho 
> realistically they're likely to take years to appear and be tested to 
> stability.  Meanwhile...
> 
> While as the others have said you really need to go to the author to get 
> what was referred to, and I agree, I can speculate a bit.  While this 
> *is* speculation, admittedly somewhat uninformed as I don't claim to be a 
> dev, and I'd actually be interested in what others think so don't be 
> afraid to tell me I haven't a clue, as long as you say why... based on 
> several years reading the list now...
> 
> 1) When I see btrfs "technical issue that may never be resolved", the #1 
> first thing I think of, that AFAIK there are _definitely_ no plans to 
> resolve, because it's very deeply woven into the btrfs core by now, is...
> 
> Filesystem UUID Identification.

> Btrfs takes the UU bit of Universally 
> Unique quite literally, assuming they really *are* unique, at least on 
> that system, and uses them to identify the possibly multiple devices that 
> may be components of the filesystem, a problem most filesystems don't 
> have to deal with since they're single-device-only.  Because btrfs uses 
> this supposedly unique ID to ID devices that belong to the filesystem, it 
> can get *very* mixed up, with results possibly including dataloss, if it 
> sees devices that don't actually belong to a filesystem with the same UUID 
> as a mounted filesystem.
> 
> But technologies such as LVM allow cloning devices and these additional 
> devices naturally have the same filesystem metadata, including filesystem 
> UUID, as the original.  Making the problem worse is udev with its plug-n-
> play style detection, which will normally trigger a btrfs device scan, 
> thus making btrfs aware of new devices containing (a component of) a 
> btrfs, as soon as udev detects the device.

The automatic scanning is partially making it hard and would require
either extending the scanning mechanim to distinguish automatic and
manual scan, and using that information in kernel.

Right now, a cloned device will not be added to the filesystem UUID set
if the fs is mounted, but otherwise it's up to the administrator. The
misisng bit is possibly a way to tell the kernel module to 'forget' a
device (forget and never auto-scan).

> 2) Subvolume and (more technically) reflink-aware defrag.

There was a discussion in the mailinglist recently, some additions to
the interface were requested. The code to avoid the OOM exists but the
original author is not apparently interested and noone else has that
high enough in the todo list.

> 3) N-way-mirroring.

I have a prototype code for that, 3-copy and 4-copy type of profile.
Doning a fully dynamic N-way would become a messs once thre are mixed
N-way chunks for different N. Adding N=5 would not be too hard, but I'm
not sure if this makes sense.

The raid5 write-hole log will build on top of that, but the code has not
been written yet, other than the separate device logging sent by Liu Bo. 

> 4) (Until relatively recently, and still in terms of scaling) Quotas.

That's ongoing WIP, as qgroups touch the core parts.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: unsolvable technical issues?
  2018-06-25 13:36 ` David Sterba
@ 2018-06-25 16:43   ` waxhead
  2018-06-25 16:54     ` Hugo Mills
  0 siblings, 1 reply; 23+ messages in thread
From: waxhead @ 2018-06-25 16:43 UTC (permalink / raw)
  To: dsterba, linux-btrfs



David Sterba wrote:
> On Fri, Jun 22, 2018 at 01:13:31AM +0200, waxhead wrote:
>> According to this:
>>
>> https://stratis-storage.github.io/StratisSoftwareDesign.pdf
>> Page 4 , section 1.2
>>
>> It claims that BTRFS still have significant technical issues that may
>> never be resolved.
>> Could someone shed some light on exactly what these technical issues
>> might be?! What are BTRFS biggest technical problems?
> 
> The subject you write is 'unsolvable', which I read as 'impossible to
> solve', eg. on the design level. I'm not aware of such issues.
> 
Alright , so I interpret this as there is no showstopper regarding 
implementation of existing and planned features...

> If this is about issues that are difficult either to implement or
> getting right, there are a few known ones.
> 
Alright again, and I interpret this as there might be some code that is 
not flexible enough and changing that might affect working / stable 
parts of the code so therefore other solutions are looked at which is 
not that uncommon for software. Apart from not listing the known issues 
I think I got my questions answered :) and now it is perhaps finally 
appropriate to file a request at the Stratis bugtracker to ask what 
specifically they are referring to.

>> If you forget about the "RAID"5/6 like features then the only annoyances
>> that I have with BTRFS so far is...
>>
>> 1. Lack of per subvolume "RAID" levels
>> 2. Lack of not using the deviceid to re-discover and re-add dropped devices
>>
>> And that's about it really...
> 
> This could quickly turn into 'my faviourite bug/feature' list that can
> be very long. The most asked for are raid56, and performance of qgroups.
> 
> Qu Wenruo improved some of the core problems and Jeff is working on the
> performance problem. So there are people working on that.
> 
> On the raid56 front, there were some recent updates that fixed some
> bugs, but the fix for write hole is still missing so we can't raise the
> status yet.  I have some some good news but nobody should get too
> excited until the code lands.
> 
> I have prototype for the N-copy raid (where N is 3 or 4).  This will
> provide the underlying infrastructure for the raid5/6 logging mechanism,
> the rest can be taken from Liu Bo's patchset sent some time ago.  In the
> end the N-copy can be used for data and metadata too, independently and
> flexibly switched via the balance filters. This will cost one
> incompatibility bit.

I hope I am not asking for too much (but I know I probably am), but I 
suggest that having a small snippet of information on the status page 
showing a little bit about what is either currently the development 
focus , or what people are known for working at would be very valuable 
for users and it may of course work both ways, such as exciting people 
or calming them down. ;)

For example something simple like a "development focus" list...
2018-Q4: (planned) Renaming the grotesque "RAID" terminology
2018-Q3: (planned) Magical feature X
2018-Q2: N-Way mirroring
2018-Q1: Feature work "RAID"5/6

I think it would be good for people living their lives outside as it 
would perhaps spark some attention from developers and perhaps even 
media as well.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: unsolvable technical issues?
  2018-06-25 16:43   ` waxhead
@ 2018-06-25 16:54     ` Hugo Mills
  2018-06-30  3:59       ` Duncan
  0 siblings, 1 reply; 23+ messages in thread
From: Hugo Mills @ 2018-06-25 16:54 UTC (permalink / raw)
  To: waxhead; +Cc: dsterba, linux-btrfs

[-- Attachment #1: Type: text/plain, Size: 1243 bytes --]

On Mon, Jun 25, 2018 at 06:43:38PM +0200, waxhead wrote:
[snip]
> I hope I am not asking for too much (but I know I probably am), but
> I suggest that having a small snippet of information on the status
> page showing a little bit about what is either currently the
> development focus , or what people are known for working at would be
> very valuable for users and it may of course work both ways, such as
> exciting people or calming them down. ;)
> 
> For example something simple like a "development focus" list...
> 2018-Q4: (planned) Renaming the grotesque "RAID" terminology
> 2018-Q3: (planned) Magical feature X
> 2018-Q2: N-Way mirroring
> 2018-Q1: Feature work "RAID"5/6
> 
> I think it would be good for people living their lives outside as it
> would perhaps spark some attention from developers and perhaps even
> media as well.

   I started doing this a couple of years ago, but it turned out to be
impossible to keep even vaguely accurate or up to date, without going
round and bugging the developers individually on a per-release
basis. I don't think it's going to happen.

   Hugo.

-- 
Hugo Mills             | emacs: Emacs Makes A Computer Slow.
hugo@... carfax.org.uk |
http://carfax.org.uk/  |
PGP: E2AB1DE4          |

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 836 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: unsolvable technical issues?
  2018-06-22  2:39 ` Chris Murphy
@ 2018-06-27 18:50   ` waxhead
  2018-06-28 14:46     ` Adam Borowski
  0 siblings, 1 reply; 23+ messages in thread
From: waxhead @ 2018-06-27 18:50 UTC (permalink / raw)
  To: Chris Murphy; +Cc: Btrfs BTRFS



Chris Murphy wrote:
> On Thu, Jun 21, 2018 at 5:13 PM, waxhead <waxhead@dirtcellar.net> wrote:
>> According to this:
>>
>> https://stratis-storage.github.io/StratisSoftwareDesign.pdf
>> Page 4 , section 1.2
>>
>> It claims that BTRFS still have significant technical issues that may never
>> be resolved.
>> Could someone shed some light on exactly what these technical issues might
>> be?! What are BTRFS biggest technical problems?
> 
> 
> I think it's appropriate to file an issue and ask what they're
> referring to. It very well might be use case specific to Red Hat.
> https://github.com/stratis-storage/stratis-storage.github.io/issues
> 
> I also think it's appropriate to crosslink: include URL for the start
> of this thread in the issue, and the issue URL to this thread.
> 
> 
> 
https://github.com/stratis-storage/stratis-storage.github.io/issues/1

Apparently the author have toned down the wording a bit, this confirm 
that the claim was without basis and probably based on "popular myth".
The document the PDF links to is not yet updated.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: unsolvable technical issues?
  2018-06-27 18:50   ` waxhead
@ 2018-06-28 14:46     ` Adam Borowski
  0 siblings, 0 replies; 23+ messages in thread
From: Adam Borowski @ 2018-06-28 14:46 UTC (permalink / raw)
  To: linux-btrfs

On Wed, Jun 27, 2018 at 08:50:11PM +0200, waxhead wrote:
> Chris Murphy wrote:
> > On Thu, Jun 21, 2018 at 5:13 PM, waxhead <waxhead@dirtcellar.net> wrote:
> > > According to this:
> > > 
> > > https://stratis-storage.github.io/StratisSoftwareDesign.pdf
> > > Page 4 , section 1.2
> > > 
> > > It claims that BTRFS still have significant technical issues that may never
> > > be resolved.
> > > Could someone shed some light on exactly what these technical issues might
> > > be?! What are BTRFS biggest technical problems?
> > 
> > 
> > I think it's appropriate to file an issue and ask what they're
> > referring to. It very well might be use case specific to Red Hat.
> > https://github.com/stratis-storage/stratis-storage.github.io/issues

> https://github.com/stratis-storage/stratis-storage.github.io/issues/1
> 
> Apparently the author have toned down the wording a bit, this confirm that
> the claim was without basis and probably based on "popular myth".
> The document the PDF links to is not yet updated.

It's a company whose profits rely on users choosing it over anything that
competes.  Adding propaganda to a public document is a natural thing for
them to do.


Meow!
-- 
⢀⣴⠾⠻⢶⣦⠀ There's an easy way to tell toy operating systems from real ones.
⣾⠁⢰⠒⠀⣿⡁ Just look at how their shipped fonts display U+1F52B, this makes
⢿⡄⠘⠷⠚⠋⠀ the intended audience obvious.  It's also interesting to see OSes
⠈⠳⣄⠀⠀⠀⠀ go back and forth wrt their intended target.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: unsolvable technical issues?
  2018-06-25 11:26     ` Austin S. Hemmelgarn
@ 2018-06-30  3:22       ` Duncan
  2018-06-30  5:32         ` Andrei Borzenkov
  2018-07-02 11:44         ` Austin S. Hemmelgarn
  0 siblings, 2 replies; 23+ messages in thread
From: Duncan @ 2018-06-30  3:22 UTC (permalink / raw)
  To: linux-btrfs

Austin S. Hemmelgarn posted on Mon, 25 Jun 2018 07:26:41 -0400 as
excerpted:

> On 2018-06-24 16:22, Goffredo Baroncelli wrote:
>> On 06/23/2018 07:11 AM, Duncan wrote:
>>> waxhead posted on Fri, 22 Jun 2018 01:13:31 +0200 as excerpted:
>>>
>>>> According to this:
>>>>
>>>> https://stratis-storage.github.io/StratisSoftwareDesign.pdf Page 4 ,
>>>> section 1.2
>>>>
>>>> It claims that BTRFS still have significant technical issues that may
>>>> never be resolved.
>>>
>>> I can speculate a bit.
>>>
>>> 1) When I see btrfs "technical issue that may never be resolved", the
>>> #1 first thing I think of, that AFAIK there are _definitely_ no plans
>>> to resolve, because it's very deeply woven into the btrfs core by now,
>>> is...
>>>
>>> [1)] Filesystem UUID Identification.  Btrfs takes the UU bit of
>>> Universally Unique quite literally, assuming they really *are*
>>> unique, at least on that system[.]  Because
>>> btrfs uses this supposedly unique ID to ID devices that belong to the
>>> filesystem, it can get *very* mixed up, with results possibly
>>> including dataloss, if it sees devices that don't actually belong to a
>>> filesystem with the same UUID as a mounted filesystem.
>> 
>> As partial workaround you can disable udev btrfs rules and then do a
>> "btrfs dev scan" manually only for the device which you need.

> You don't even need `btrfs dev scan` if you just specify the exact set
> of devices in the mount options.  The `device=` mount option tells the
> kernel to check that device during the mount process.

Not that lvm does any better in this regard[1], but has btrfs ever solved 
the bug where only one device= in the kernel commandline's rootflags= 
would take effect, effectively forcing initr* on people (like me) who 
would otherwise not need them and prefer to do without them, if they're 
using a multi-device btrfs as root?

Not to mention the fact that as kernel people will tell you, device 
enumeration isn't guaranteed to be in the same order every boot, so 
device=/dev/* can't be relied upon and shouldn't be used -- but of course 
device=LABEL= and device=UUID= and similar won't work without userspace, 
basically udev (if they work at all, IDK if they actually do).

Tho in practice from what I've seen, device enumeration order tends to be 
dependable /enough/ for at least those without enterprise-level numbers 
of devices to enumerate.  True, it /does/ change from time to time with a 
new kernel, but anybody sane keeps a tested-dependable old kernel around 
to boot to until they know the new one works as expected, and that sort 
of change is seldom enough that users can boot to the old kernel and 
adjust their settings for the new one as necessary when it does happen.  
So as "don't do it that way because it's not reliable" as it might indeed 
be in theory, in practice, just using an ordered /dev/* in kernel 
commandlines does tend to "just work"... provided one is ready for the 
occasion when that device parameter might need a bit of adjustment, of 
course.

> Also, while LVM does have 'issues' with cloned PV's, it fails safe (by
> refusing to work on VG's that have duplicate PV's), while BTRFS fails
> very unsafely (by randomly corrupting data).

And IMO that "failing unsafe" is both serious and common enough that it 
easily justifies adding the point to a list of this sort, thus my putting 
it #1.

>>> 2) Subvolume and (more technically) reflink-aware defrag.
>>>
>>> It was there for a couple kernel versions some time ago, but
>>> "impossibly" slow, so it was disabled until such time as btrfs could
>>> be made to scale rather better in this regard.

> I still contend that the biggest issue WRT reflink-aware defrag was that
> it was not optional.  The only way to get the old defrag behavior was to
> boot a kernel that didn't have reflink-aware defrag support.  IOW,
> _everyone_ had to deal with the performance issues, not just the people
> who wanted to use reflink-aware defrag.

Absolutely.

Which of course suggests making it optional, with a suitable warning as 
to the speed implications with lots of snapshots/reflinks, when it does 
get enabled again (and as David mentions elsewhere, there's apparently 
some work going into the idea once again, which potentially moves it from 
the 3-5 year range, at best, back to a 1/2-2-year range, time will tell).

>>> 3) N-way-mirroring.
>>>
>> [...]
>> This is not an issue, but a not implemented feature
> If you're looking at feature parity with competitors, it's an issue.

Exactly my point.  Thanks. =:^)

>>> 4) (Until relatively recently, and still in terms of scaling) Quotas.
>>>
>>> Until relatively recently, quotas could arguably be added to the list.
>>> They were rewritten multiple times, and until recently, appeared to be
>>> effectively eternally broken.
>> 
>> Even tough what you are reporting is correct, I have to point out that
>> the quota in BTRFS is more complex than the equivalent one of the other
>> FS.

Which, arguably, is exactly Stratis' point.  "More complex" to the point 
it might never, at least in reasonable-planning-horizon-time, actually be 
reliable enough for general production use, and if it /does/ happen to 
meet /that/ qualification, due to all that complexity it could very 
possibly still scale horribly enough that it's /still/ not actually 
practically usable for many planning-horizon use-cases.

And Stratis' answer to that problem they've pointed out with btrfs is to 
use existing and already demonstrated production-stable technologies, 
simply presenting them in a new, now unified-management, whole.

And IMO they have a point, tho AFAIK they've not yet demonstrated that 
they are /the/ solution just yet.  But I hope they do, because zfs, the 
existing all-in-one solution,  has a serious square-zfs-peg-in-round--
linux-hole issue in at least two areas, license-wise and cache-technology-
wise, leaving a serious void that remains to be filled, possibly 
eventually with btrfs, but it's taking its time to get there, and if 
stratis can fill it with more practical, less pie-in-the-sky, until then, 
great!

---
[1] LVM is userspace code on top of the kernelspace devicemapper, and 
therefore requires an initr* if root is on lvm, regardless.  So btrfs 
actually does a bit better here, only requiring it for multi-device btrfs.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: unsolvable technical issues?
  2018-06-25 16:54     ` Hugo Mills
@ 2018-06-30  3:59       ` Duncan
  0 siblings, 0 replies; 23+ messages in thread
From: Duncan @ 2018-06-30  3:59 UTC (permalink / raw)
  To: linux-btrfs

Hugo Mills posted on Mon, 25 Jun 2018 16:54:36 +0000 as excerpted:

> On Mon, Jun 25, 2018 at 06:43:38PM +0200, waxhead wrote:
> [snip]
>> I hope I am not asking for too much (but I know I probably am), but I
>> suggest that having a small snippet of information on the status page
>> showing a little bit about what is either currently the development
>> focus , or what people are known for working at would be very valuable
>> for users and it may of course work both ways, such as exciting people
>> or calming them down. ;)
>> 
>> For example something simple like a "development focus" list...
>> 2018-Q4: (planned) Renaming the grotesque "RAID" terminology
>> 2018-Q3: (planned) Magical feature X
>> 2018-Q2: N-Way mirroring
>> 2018-Q1: Feature work "RAID"5/6
>> 
>> I think it would be good for people living their lives outside as it
>> would perhaps spark some attention from developers and perhaps even
>> media as well.
> 
> I started doing this a couple of years ago, but it turned out to be
> impossible to keep even vaguely accurate or up to date, without going
> round and bugging the developers individually on a per-release basis. I
> don't think it's going to happen.

In addition, anything like quarter, kernel cycle, etc, has been 
repeatedly demonstrated to be entirely broken beyond "current", because 
roadmapped tasks have rather consistently taken longer, sometimes /many/ 
/times/ longer (by a factor of 20+ in the case of raid56), than first 
predicted.

But in theory it might be double, with just a roughly ordered list, no 
dates beyond "current focus", and with suitably big disclaimers about 
other things (generally bugs in otherwise more stable features, but 
occasionally a quick sub-feature that is seen to be easier to introduce 
at the current state than it might be later, etc) possibly getting 
priority and temporarily displacing roadmapped items.

In fact, this last one is the big reason why raid56 has taken so long to 
even somewhat stabilize -- the devs kept finding bugs in already semi-
stable features that took priority... for kernel cycle after kernel 
cycle.  The quotas/qgroups feature, already introduced and intended to be 
at least semi-stable was one such culprit, requiring repeated rewrite and 
kernel cycles worth of bug squashing.  A few critical under the right 
circumstances compression bugs, where compression was supposed to be an 
already reasonably stable feature, were another, tho these took far less 
developer bandwidth than quotas.  Getting a reasonably usable fsck was a 
bunch of little patches.  AFAIK that one wasn't actually an original 
focus and was intended to be back-burnered for some time, but once btrfs 
hit mainline, users started demanding it, so the priority was bumped.  
And of course having it has been good for finding and ultimately fixing 
other bugs as well, so it wasn't a bad thing, but the hard fact is the 
repairing fsck has taken, all told, I'd guess about the same number of 
developer cycles as quotas, and those developer cycles had to come from 
stuff that had been roadmapped for earlier.

As a bit of an optimist I'd be inclined to argue that OK, we've gotten 
btrfs in far better shape general stability-wise now, and going forward, 
the focus can be back on the stuff that was roadmapped for earlier that 
this stuff displaced, so one might hope things will move faster again 
now, but really, who knows?  That's arguably what the devs thought when 
they mainlined btrfs, too, and yet it took all this much longer to mature 
and stabilize since then.  Still, it /has/ to happen at /some/ point, 
right?  And I know for a fact that btrfs is far more stable now than it 
was... because things like ungraceful shutdowns that used to at minimum 
trigger (raid1 mode) scrub fixes on remount and scrub, now... don't -- 
btrfs is now stable enough that the atomic COW is doing its job and 
things "just work", where before, they required scrub repair at best, and 
occasional blow away and restore from backups.  So I can at least /hope/ 
that the worst of the plague of bugs is behind us, and people can work on 
what they intended to do most (say 80%) of the time now, spending say a 
day's worth a week (20%) on bugs, instead of the reverse, 80% (4 days a 
week) on bugs and if they're lucky, a day a week on what they were 
supposed to be focused on, which is what we were seeing for awhile.

Plus the tools to do the debugging, etc, are far more mature now, another 
reason bugs should hopefully take less time now.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: unsolvable technical issues?
  2018-06-30  3:22       ` Duncan
@ 2018-06-30  5:32         ` Andrei Borzenkov
  2018-07-02 11:49           ` Austin S. Hemmelgarn
  2018-07-02 11:44         ` Austin S. Hemmelgarn
  1 sibling, 1 reply; 23+ messages in thread
From: Andrei Borzenkov @ 2018-06-30  5:32 UTC (permalink / raw)
  To: Duncan, linux-btrfs

30.06.2018 06:22, Duncan пишет:
> Austin S. Hemmelgarn posted on Mon, 25 Jun 2018 07:26:41 -0400 as
> excerpted:
> 
>> On 2018-06-24 16:22, Goffredo Baroncelli wrote:
>>> On 06/23/2018 07:11 AM, Duncan wrote:
>>>> waxhead posted on Fri, 22 Jun 2018 01:13:31 +0200 as excerpted:
>>>>
>>>>> According to this:
>>>>>
>>>>> https://stratis-storage.github.io/StratisSoftwareDesign.pdf Page 4 ,
>>>>> section 1.2
>>>>>
>>>>> It claims that BTRFS still have significant technical issues that may
>>>>> never be resolved.
>>>>
>>>> I can speculate a bit.
>>>>
>>>> 1) When I see btrfs "technical issue that may never be resolved", the
>>>> #1 first thing I think of, that AFAIK there are _definitely_ no plans
>>>> to resolve, because it's very deeply woven into the btrfs core by now,
>>>> is...
>>>>
>>>> [1)] Filesystem UUID Identification.  Btrfs takes the UU bit of
>>>> Universally Unique quite literally, assuming they really *are*
>>>> unique, at least on that system[.]  Because
>>>> btrfs uses this supposedly unique ID to ID devices that belong to the
>>>> filesystem, it can get *very* mixed up, with results possibly
>>>> including dataloss, if it sees devices that don't actually belong to a
>>>> filesystem with the same UUID as a mounted filesystem.
>>>
>>> As partial workaround you can disable udev btrfs rules and then do a
>>> "btrfs dev scan" manually only for the device which you need.
> 
>> You don't even need `btrfs dev scan` if you just specify the exact set
>> of devices in the mount options.  The `device=` mount option tells the
>> kernel to check that device during the mount process.
> 
> Not that lvm does any better in this regard[1], but has btrfs ever solved 
> the bug where only one device= in the kernel commandline's rootflags= 
> would take effect, effectively forcing initr* on people (like me) who 
> would otherwise not need them and prefer to do without them, if they're 
> using a multi-device btrfs as root?
> 

This requires in-kernel device scanning; I doubt we will ever see it.

> Not to mention the fact that as kernel people will tell you, device 
> enumeration isn't guaranteed to be in the same order every boot, so 
> device=/dev/* can't be relied upon and shouldn't be used -- but of course 
> device=LABEL= and device=UUID= and similar won't work without userspace, 
> basically udev (if they work at all, IDK if they actually do).
> 
> Tho in practice from what I've seen, device enumeration order tends to be 
> dependable /enough/ for at least those without enterprise-level numbers 
> of devices to enumerate.

Just boot with USB stick/eSATA drive plugged in, there are good chances
it changes device order.

>  True, it /does/ change from time to time with a 
> new kernel, but anybody sane keeps a tested-dependable old kernel around 
> to boot to until they know the new one works as expected, and that sort 
> of change is seldom enough that users can boot to the old kernel and 
> adjust their settings for the new one as necessary when it does happen.  
> So as "don't do it that way because it's not reliable" as it might indeed 
> be in theory, in practice, just using an ordered /dev/* in kernel 
> commandlines does tend to "just work"... provided one is ready for the 
> occasion when that device parameter might need a bit of adjustment, of 
> course.
> 
...
> 
> ---
> [1] LVM is userspace code on top of the kernelspace devicemapper, and 
> therefore requires an initr* if root is on lvm, regardless.  So btrfs 
> actually does a bit better here, only requiring it for multi-device btrfs.
> 


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: unsolvable technical issues?
  2018-06-30  3:22       ` Duncan
  2018-06-30  5:32         ` Andrei Borzenkov
@ 2018-07-02 11:44         ` Austin S. Hemmelgarn
  1 sibling, 0 replies; 23+ messages in thread
From: Austin S. Hemmelgarn @ 2018-07-02 11:44 UTC (permalink / raw)
  To: linux-btrfs

On 2018-06-29 23:22, Duncan wrote:
> Austin S. Hemmelgarn posted on Mon, 25 Jun 2018 07:26:41 -0400 as
> excerpted:
> 
>> On 2018-06-24 16:22, Goffredo Baroncelli wrote:
>>> On 06/23/2018 07:11 AM, Duncan wrote:
>>>> waxhead posted on Fri, 22 Jun 2018 01:13:31 +0200 as excerpted:
>>>>
>>>>> According to this:
>>>>>
>>>>> https://stratis-storage.github.io/StratisSoftwareDesign.pdf Page 4 ,
>>>>> section 1.2
>>>>>
>>>>> It claims that BTRFS still have significant technical issues that may
>>>>> never be resolved.
>>>>
>>>> I can speculate a bit.
>>>>
>>>> 1) When I see btrfs "technical issue that may never be resolved", the
>>>> #1 first thing I think of, that AFAIK there are _definitely_ no plans
>>>> to resolve, because it's very deeply woven into the btrfs core by now,
>>>> is...
>>>>
>>>> [1)] Filesystem UUID Identification.  Btrfs takes the UU bit of
>>>> Universally Unique quite literally, assuming they really *are*
>>>> unique, at least on that system[.]  Because
>>>> btrfs uses this supposedly unique ID to ID devices that belong to the
>>>> filesystem, it can get *very* mixed up, with results possibly
>>>> including dataloss, if it sees devices that don't actually belong to a
>>>> filesystem with the same UUID as a mounted filesystem.
>>>
>>> As partial workaround you can disable udev btrfs rules and then do a
>>> "btrfs dev scan" manually only for the device which you need.
> 
>> You don't even need `btrfs dev scan` if you just specify the exact set
>> of devices in the mount options.  The `device=` mount option tells the
>> kernel to check that device during the mount process.
> 
> Not that lvm does any better in this regard[1], but has btrfs ever solved
> the bug where only one device= in the kernel commandline's rootflags=
> would take effect, effectively forcing initr* on people (like me) who
> would otherwise not need them and prefer to do without them, if they're
> using a multi-device btrfs as root?
I haven't tested this recently myself, so I don't know.
> 
> Not to mention the fact that as kernel people will tell you, device
> enumeration isn't guaranteed to be in the same order every boot, so
> device=/dev/* can't be relied upon and shouldn't be used -- but of course
> device=LABEL= and device=UUID= and similar won't work without userspace,
> basically udev (if they work at all, IDK if they actually do).
They aren't guaranteed to be stable, but they functionally are provided 
you don't modify hardware in any way and your disks can't be enumerated 
asynchronously without some form of ordered identification (IOW, you're 
using just one SATA or SCSI controller for all your disks).

That said, the required component for the LABEL= and UUID= syntax is not 
udev, it's blkid.  blkid can use udev to avoid having to read 
everything, but it's not mandatory.
> 
> Tho in practice from what I've seen, device enumeration order tends to be
> dependable /enough/ for at least those without enterprise-level numbers
> of devices to enumerate.  True, it /does/ change from time to time with a
> new kernel, but anybody sane keeps a tested-dependable old kernel around
> to boot to until they know the new one works as expected, and that sort
> of change is seldom enough that users can boot to the old kernel and
> adjust their settings for the new one as necessary when it does happen.
> So as "don't do it that way because it's not reliable" as it might indeed
> be in theory, in practice, just using an ordered /dev/* in kernel
> commandlines does tend to "just work"... provided one is ready for the
> occasion when that device parameter might need a bit of adjustment, of
> course.
> 
>> Also, while LVM does have 'issues' with cloned PV's, it fails safe (by
>> refusing to work on VG's that have duplicate PV's), while BTRFS fails
>> very unsafely (by randomly corrupting data).
> 
> And IMO that "failing unsafe" is both serious and common enough that it
> easily justifies adding the point to a list of this sort, thus my putting
> it #1.
Agreed.  My point wasn't that BTRFS is doing things correctly, just that 
LVM is not a saint in this respect either (it's just more saintly than 
we are).
> 
>>>> 2) Subvolume and (more technically) reflink-aware defrag.
>>>>
>>>> It was there for a couple kernel versions some time ago, but
>>>> "impossibly" slow, so it was disabled until such time as btrfs could
>>>> be made to scale rather better in this regard.
> 
>> I still contend that the biggest issue WRT reflink-aware defrag was that
>> it was not optional.  The only way to get the old defrag behavior was to
>> boot a kernel that didn't have reflink-aware defrag support.  IOW,
>> _everyone_ had to deal with the performance issues, not just the people
>> who wanted to use reflink-aware defrag.
> 
> Absolutely.
> 
> Which of course suggests making it optional, with a suitable warning as
> to the speed implications with lots of snapshots/reflinks, when it does
> get enabled again (and as David mentions elsewhere, there's apparently
> some work going into the idea once again, which potentially moves it from
> the 3-5 year range, at best, back to a 1/2-2-year range, time will tell).
> 
>>>> 3) N-way-mirroring.
>>>>
>>> [...]
>>> This is not an issue, but a not implemented feature
>> If you're looking at feature parity with competitors, it's an issue.
> 
> Exactly my point.  Thanks. =:^)
> 
>>>> 4) (Until relatively recently, and still in terms of scaling) Quotas.
>>>>
>>>> Until relatively recently, quotas could arguably be added to the list.
>>>> They were rewritten multiple times, and until recently, appeared to be
>>>> effectively eternally broken.
>>>
>>> Even tough what you are reporting is correct, I have to point out that
>>> the quota in BTRFS is more complex than the equivalent one of the other
>>> FS.
> 
> Which, arguably, is exactly Stratis' point.  "More complex" to the point
> it might never, at least in reasonable-planning-horizon-time, actually be
> reliable enough for general production use, and if it /does/ happen to
> meet /that/ qualification, due to all that complexity it could very
> possibly still scale horribly enough that it's /still/ not actually
> practically usable for many planning-horizon use-cases.
The other thing here though is that you can't realistically use classic 
quota semantics with BTRFS, which is actually somewhat of a problem for 
some people
> 
> And Stratis' answer to that problem they've pointed out with btrfs is to
> use existing and already demonstrated production-stable technologies,
> simply presenting them in a new, now unified-management, whole.
> 
> And IMO they have a point, tho AFAIK they've not yet demonstrated that
> they are /the/ solution just yet.  But I hope they do, because zfs, the
> existing all-in-one solution,  has a serious square-zfs-peg-in-round--
> linux-hole issue in at least two areas, license-wise and cache-technology-
> wise, leaving a serious void that remains to be filled, possibly
> eventually with btrfs, but it's taking its time to get there, and if
> stratis can fill it with more practical, less pie-in-the-sky, until then,
> great!
> 
> ---
> [1] LVM is userspace code on top of the kernelspace devicemapper, and
> therefore requires an initr* if root is on lvm, regardless.  So btrfs
> actually does a bit better here, only requiring it for multi-device btrfs.
In theory, LVM might not always need it in the future.  There were some 
patches a while back on LKML to support specifying DM tables directly on 
the kernel command-line, though I don't remember if those got merged or 
not.  With that though, it _might_ be possible to support simple setups 
without needing an initramfs with some help from the bootloader.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: unsolvable technical issues?
  2018-06-30  5:32         ` Andrei Borzenkov
@ 2018-07-02 11:49           ` Austin S. Hemmelgarn
  2018-07-03  7:35             ` Duncan
  0 siblings, 1 reply; 23+ messages in thread
From: Austin S. Hemmelgarn @ 2018-07-02 11:49 UTC (permalink / raw)
  To: Andrei Borzenkov, Duncan, linux-btrfs

On 2018-06-30 01:32, Andrei Borzenkov wrote:
> 30.06.2018 06:22, Duncan пишет:
>> Austin S. Hemmelgarn posted on Mon, 25 Jun 2018 07:26:41 -0400 as
>> excerpted:
>>
>>> On 2018-06-24 16:22, Goffredo Baroncelli wrote:
>>>> On 06/23/2018 07:11 AM, Duncan wrote:
>>>>> waxhead posted on Fri, 22 Jun 2018 01:13:31 +0200 as excerpted:
>>>>>
>>>>>> According to this:
>>>>>>
>>>>>> https://stratis-storage.github.io/StratisSoftwareDesign.pdf Page 4 ,
>>>>>> section 1.2
>>>>>>
>>>>>> It claims that BTRFS still have significant technical issues that may
>>>>>> never be resolved.
>>>>>
>>>>> I can speculate a bit.
>>>>>
>>>>> 1) When I see btrfs "technical issue that may never be resolved", the
>>>>> #1 first thing I think of, that AFAIK there are _definitely_ no plans
>>>>> to resolve, because it's very deeply woven into the btrfs core by now,
>>>>> is...
>>>>>
>>>>> [1)] Filesystem UUID Identification.  Btrfs takes the UU bit of
>>>>> Universally Unique quite literally, assuming they really *are*
>>>>> unique, at least on that system[.]  Because
>>>>> btrfs uses this supposedly unique ID to ID devices that belong to the
>>>>> filesystem, it can get *very* mixed up, with results possibly
>>>>> including dataloss, if it sees devices that don't actually belong to a
>>>>> filesystem with the same UUID as a mounted filesystem.
>>>>
>>>> As partial workaround you can disable udev btrfs rules and then do a
>>>> "btrfs dev scan" manually only for the device which you need.
>>
>>> You don't even need `btrfs dev scan` if you just specify the exact set
>>> of devices in the mount options.  The `device=` mount option tells the
>>> kernel to check that device during the mount process.
>>
>> Not that lvm does any better in this regard[1], but has btrfs ever solved
>> the bug where only one device= in the kernel commandline's rootflags=
>> would take effect, effectively forcing initr* on people (like me) who
>> would otherwise not need them and prefer to do without them, if they're
>> using a multi-device btrfs as root?
>>
> 
> This requires in-kernel device scanning; I doubt we will ever see it.
> 
>> Not to mention the fact that as kernel people will tell you, device
>> enumeration isn't guaranteed to be in the same order every boot, so
>> device=/dev/* can't be relied upon and shouldn't be used -- but of course
>> device=LABEL= and device=UUID= and similar won't work without userspace,
>> basically udev (if they work at all, IDK if they actually do).
>>
>> Tho in practice from what I've seen, device enumeration order tends to be
>> dependable /enough/ for at least those without enterprise-level numbers
>> of devices to enumerate.
> 
> Just boot with USB stick/eSATA drive plugged in, there are good chances
> it changes device order.
It really depends on your particular hardware.  If your USB controllers 
are at lower PCI addresses than your primary disk controllers, then yes, 
this will cause an issue.  Same for whatever SATA controller your eSATA 
port is on (or stupid systems where the eSATA port is port 0 on the main 
SATA controller).

Notably, most Intel systems I've seen have the SATA controllers in the 
chipset enumerate after the USB controllers, and the whole chipset 
enumerates after add-in cards (so they almost always have this issue), 
while most AMD systems I've seen demonstrate the exact opposite 
behavior, they enumerate the SATA controller from the chipset before the 
USB controllers, and then enumerate the chipset before all the add-in 
cards (so they almost never have this issue).

That said, one of the constraints for them remaining consistent is that 
you don't change hardware.
> 
>>   True, it /does/ change from time to time with a
>> new kernel, but anybody sane keeps a tested-dependable old kernel around
>> to boot to until they know the new one works as expected, and that sort
>> of change is seldom enough that users can boot to the old kernel and
>> adjust their settings for the new one as necessary when it does happen.
>> So as "don't do it that way because it's not reliable" as it might indeed
>> be in theory, in practice, just using an ordered /dev/* in kernel
>> commandlines does tend to "just work"... provided one is ready for the
>> occasion when that device parameter might need a bit of adjustment, of
>> course.
>>
> ...
>>
>> ---
>> [1] LVM is userspace code on top of the kernelspace devicemapper, and
>> therefore requires an initr* if root is on lvm, regardless.  So btrfs
>> actually does a bit better here, only requiring it for multi-device btrfs.
>>
> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: unsolvable technical issues?
  2018-07-02 11:49           ` Austin S. Hemmelgarn
@ 2018-07-03  7:35             ` Duncan
  2018-07-03 11:54               ` Austin S. Hemmelgarn
  0 siblings, 1 reply; 23+ messages in thread
From: Duncan @ 2018-07-03  7:35 UTC (permalink / raw)
  To: linux-btrfs

Austin S. Hemmelgarn posted on Mon, 02 Jul 2018 07:49:05 -0400 as
excerpted:

> Notably, most Intel systems I've seen have the SATA controllers in the
> chipset enumerate after the USB controllers, and the whole chipset
> enumerates after add-in cards (so they almost always have this issue),
> while most AMD systems I've seen demonstrate the exact opposite
> behavior,
> they enumerate the SATA controller from the chipset before the USB
> controllers, and then enumerate the chipset before all the add-in cards
> (so they almost never have this issue).

Thanks.  That's a difference I wasn't aware of, and would (because I tend 
to favor amd) explain why I've never seen a change in enumeration order 
unless I've done something like unplug my sata cables for maintenance and 
forget which ones I had plugged in where -- random USB stuff left plugged 
in doesn't seem to matter, even choosing different boot media from the 
bios doesn't seem to matter by the time the kernel runs (I'm less sure 
about grub).

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: unsolvable technical issues?
  2018-07-03  7:35             ` Duncan
@ 2018-07-03 11:54               ` Austin S. Hemmelgarn
  0 siblings, 0 replies; 23+ messages in thread
From: Austin S. Hemmelgarn @ 2018-07-03 11:54 UTC (permalink / raw)
  To: linux-btrfs

On 2018-07-03 03:35, Duncan wrote:
> Austin S. Hemmelgarn posted on Mon, 02 Jul 2018 07:49:05 -0400 as
> excerpted:
> 
>> Notably, most Intel systems I've seen have the SATA controllers in the
>> chipset enumerate after the USB controllers, and the whole chipset
>> enumerates after add-in cards (so they almost always have this issue),
>> while most AMD systems I've seen demonstrate the exact opposite
>> behavior,
>> they enumerate the SATA controller from the chipset before the USB
>> controllers, and then enumerate the chipset before all the add-in cards
>> (so they almost never have this issue).
> 
> Thanks.  That's a difference I wasn't aware of, and would (because I tend
> to favor amd) explain why I've never seen a change in enumeration order
> unless I've done something like unplug my sata cables for maintenance and
> forget which ones I had plugged in where -- random USB stuff left plugged
> in doesn't seem to matter, even choosing different boot media from the
> bios doesn't seem to matter by the time the kernel runs (I'm less sure
> about grub).
> 
Additionally though, if you in some way make sure SATA drivers are 
loaded before USB ones, you will also never see this issue because of 
USB devices (same goes for GRUB).  A lot of laptops that use connections 
other than USB for the keyboard and mouse behave like this if you use a 
properly stripped down initramfs because you won't have USB drivers in 
the initramfs (and therefore the SATA drivers always load first).

^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2018-07-03 11:54 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-06-21 23:13 unsolvable technical issues? waxhead
2018-06-22  2:39 ` Chris Murphy
2018-06-27 18:50   ` waxhead
2018-06-28 14:46     ` Adam Borowski
2018-06-22  5:48 ` Nikolay Borisov
2018-06-23 22:01   ` waxhead
2018-06-24  3:55     ` Jukka Larja
2018-06-24  8:41       ` waxhead
2018-06-24 15:06         ` Ferry Toth
2018-06-23  5:11 ` Duncan
2018-06-24 20:22   ` Goffredo Baroncelli
2018-06-25 11:26     ` Austin S. Hemmelgarn
2018-06-30  3:22       ` Duncan
2018-06-30  5:32         ` Andrei Borzenkov
2018-07-02 11:49           ` Austin S. Hemmelgarn
2018-07-03  7:35             ` Duncan
2018-07-03 11:54               ` Austin S. Hemmelgarn
2018-07-02 11:44         ` Austin S. Hemmelgarn
2018-06-25 14:20   ` David Sterba
2018-06-25 13:36 ` David Sterba
2018-06-25 16:43   ` waxhead
2018-06-25 16:54     ` Hugo Mills
2018-06-30  3:59       ` Duncan

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.