* Can I see what device was used to mount btrfs?
@ 2017-04-30  5:47 Andrei Borzenkov
  2017-05-02  3:26 ` Anand Jain
  2017-05-02 13:58 ` Adam Borowski
  0 siblings, 2 replies; 17+ messages in thread
From: Andrei Borzenkov @ 2017-04-30  5:47 UTC (permalink / raw)
  To: linux-btrfs

I'm chasing an issue with btrfs mounts under systemd
(https://github.com/systemd/systemd/issues/5781) - to summarize, systemd
waits for the final device that makes btrfs complete and mounts it using
this device name. But in /proc/self/mountinfo we actually see another
device name. Due to peculiarities of the systemd implementation, this
device "does not exist" from systemd's PoV.

Looking at the btrfs code I start to suspect that we actually do not
know which device was used to mount it at all. I.e.

static int btrfs_show_devname(struct seq_file *m, struct dentry *root)
{
...
        /* walk all device lists (including seed devices) and pick the
         * present, named device with the smallest devid */
        while (cur_devices) {
                head = &cur_devices->devices;
                list_for_each_entry(dev, head, dev_list) {
                        if (dev->missing)
                                continue;
                        if (!dev->name)
                                continue;
                        if (!first_dev || dev->devid < first_dev->devid)
                                first_dev = dev;
                }
                cur_devices = cur_devices->seed;
        }

        if (first_dev) {
                rcu_read_lock();
                name = rcu_dereference(first_dev->name);
                /* this name is what /proc/self/mountinfo reports */
                seq_escape(m, name->str, " \t\n\\");
                rcu_read_unlock();
...


So we always show the device with the smallest devid, irrespective of
which device was actually used to mount it.

Am I correct? What I have here is

localhost:~ # ll /dev/disk/by-label/
total 0
lrwxrwxrwx 1 root root 10 Apr 30 08:03 Storage -> ../../dm-1
localhost:~ # systemctl --no-pager status thin.mount
● thin.mount - /thin
   Loaded: loaded (/etc/fstab; generated; vendor preset: disabled)
   Active: active (mounted) since Sun 2017-04-30 08:03:07 MSK; 6min ago
    Where: /thin
     What: /dev/dm-0
     Docs: man:fstab(5)
           man:systemd-fstab-generator(8)
  Process: 982 ExecMount=/usr/bin/mount /dev/disk/by-label/Storage /thin
-t btrfs (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 4915)
   CGroup: /system.slice/thin.mount

Apr 30 08:03:07 localhost systemd[1]: Mounting /thin...
Apr 30 08:03:07 localhost systemd[1]: Mounted /thin.


but mountinfo shows

localhost:~ # grep /thin /proc/self/mountinfo
96 59 0:63 / /thin rw,relatime shared:47 - btrfs /dev/dm-0
rw,space_cache,subvolid=5,subvol=/

which matches the above algorithm

localhost:~ # btrfs fi show /thin
Label: 'Storage'  uuid: a6f9dd05-460c-418b-83ab-ebdf81f2931a
	Total devices 2 FS bytes used 640.00KiB
	devid    1 size 20.00GiB used 2.01GiB path /dev/mapper/vg01-storage1
	devid    2 size 10.00GiB used 2.01GiB path /dev/mapper/vg01-storage2

localhost:~ # ll /dev/mapper/
total 0
crw------- 1 root root 10, 236 Apr 30 08:03 control
lrwxrwxrwx 1 root root       7 Apr 30 08:03 vg01-storage1 -> ../dm-0
lrwxrwxrwx 1 root root       7 Apr 30 08:03 vg01-storage2 -> ../dm-1

The original device is presumably stored somewhere in the kernel, but I
do not know how I can query it.
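
As far as I can tell, what *is* queryable is the set of member devices,
not the device that was passed to mount - a minimal sketch, assuming the
BTRFS_IOC_FS_INFO and BTRFS_IOC_DEV_INFO ioctls from linux/btrfs.h:

/* lsdev.c - list the devices the kernel has registered for a mounted
 * btrfs filesystem, via BTRFS_IOC_FS_INFO + BTRFS_IOC_DEV_INFO.
 * Build: cc -o lsdev lsdev.c    Usage: ./lsdev /thin
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/btrfs.h>

int main(int argc, char **argv)
{
        struct btrfs_ioctl_fs_info_args fs;
        int fd;

        if (argc != 2)
                return 1;
        fd = open(argv[1], O_RDONLY);
        if (fd < 0 || ioctl(fd, BTRFS_IOC_FS_INFO, &fs) < 0) {
                perror(argv[1]);
                return 1;
        }
        /* devids can be sparse; probe each one up to the highest in use */
        for (__u64 devid = 0; devid <= fs.max_id; devid++) {
                struct btrfs_ioctl_dev_info_args di;

                memset(&di, 0, sizeof(di));
                di.devid = devid;
                if (ioctl(fd, BTRFS_IOC_DEV_INFO, &di) < 0)
                        continue;       /* no device with this devid */
                printf("devid %llu path %s\n",
                       (unsigned long long)di.devid, (char *)di.path);
        }
        close(fd);
        return 0;
}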


* Re: Can I see what device was used to mount btrfs?
  2017-04-30  5:47 Can I see what device was used to mount btrfs? Andrei Borzenkov
@ 2017-05-02  3:26 ` Anand Jain
  2017-05-02 13:58 ` Adam Borowski
  1 sibling, 0 replies; 17+ messages in thread
From: Anand Jain @ 2017-05-02  3:26 UTC (permalink / raw)
  To: Andrei Borzenkov; +Cc: linux-btrfs





On 04/30/2017 01:47 PM, Andrei Borzenkov wrote:
> I'm chasing an issue with btrfs mounts under systemd
> (https://github.com/systemd/systemd/issues/5781) - to summarize, systemd
> waits for the final device that makes btrfs complete and mounts it using
> this device name.


> But in /proc/self/mountinfo we actually see another
> device name.


> Due to peculiarities of the systemd implementation, this device
> "does not exist" from systemd's PoV.
>
> Looking at the btrfs code I start to suspect that we actually do not
> know which device was used to mount it at all. I.e.

  Actually it does not matter, right?

> static int btrfs_show_devname(struct seq_file *m, struct dentry *root)
> {
> ...
>         while (cur_devices) {
>                 head = &cur_devices->devices;
>                 list_for_each_entry(dev, head, dev_list) {
>                         if (dev->missing)
>                                 continue;
>                         if (!dev->name)
>                                 continue;
>                         if (!first_dev || dev->devid < first_dev->devid)
>                                 first_dev = dev;
>                 }
>                 cur_devices = cur_devices->seed;
>         }
>
>         if (first_dev) {
>                 rcu_read_lock();
>                 name = rcu_dereference(first_dev->name);
>                 seq_escape(m, name->str, " \t\n\\");
>                 rcu_read_unlock();
> ...
>
>
> So we always show the device with the smallest devid, irrespective of
> which device was actually used to mount it.
>
> Am I correct? What I have here is


> localhost:~ # ll /dev/disk/by-label/
> total 0
> lrwxrwxrwx 1 root root 10 Apr 30 08:03 Storage -> ../../dm-1
> localhost:~ # systemctl --no-pager status thin.mount
> ● thin.mount - /thin
>    Loaded: loaded (/etc/fstab; generated; vendor preset: disabled)
>    Active: active (mounted) since Sun 2017-04-30 08:03:07 MSK; 6min ago
>     Where: /thin
>      What: /dev/dm-0
>      Docs: man:fstab(5)
>            man:systemd-fstab-generator(8)
>   Process: 982 ExecMount=/usr/bin/mount /dev/disk/by-label/Storage /thin
> -t btrfs (code=exited, status=0/SUCCESS)
>     Tasks: 0 (limit: 4915)
>    CGroup: /system.slice/thin.mount
>
> Apr 30 08:03:07 localhost systemd[1]: Mounting /thin...
> Apr 30 08:03:07 localhost systemd[1]: Mounted /thin.
>
>
> but mountinfo shows
>
> localhost:~ # grep /thin /proc/self/mountinfo
> 96 59 0:63 / /thin rw,relatime shared:47 - btrfs /dev/dm-0
> rw,space_cache,subvolid=5,subvol=/
>
> which matches the above algorithm
>
> localhost:~ # btrfs fi show /thin
> Label: 'Storage'  uuid: a6f9dd05-460c-418b-83ab-ebdf81f2931a
> 	Total devices 2 FS bytes used 640.00KiB
> 	devid    1 size 20.00GiB used 2.01GiB path /dev/mapper/vg01-storage1
> 	devid    2 size 10.00GiB used 2.01GiB path /dev/mapper/vg01-storage2
>
> localhost:~ # ll /dev/mapper/
> total 0
> crw------- 1 root root 10, 236 Apr 30 08:03 control
> lrwxrwxrwx 1 root root       7 Apr 30 08:03 vg01-storage1 -> ../dm-0
> lrwxrwxrwx 1 root root       7 Apr 30 08:03 vg01-storage2 -> ../dm-1
>
> The original device is presumably stored somewhere in the kernel, but I
> do not know how I can query it.


  Hm, actually we don't know the order in which devices were scanned
  and used for the mount - and it does not matter to btrfs.

  HTH

Thanks,
Anand





* Re: Can I see what device was used to mount btrfs?
  2017-04-30  5:47 Can I see what device was used to mount btrfs? Andrei Borzenkov
  2017-05-02  3:26 ` Anand Jain
@ 2017-05-02 13:58 ` Adam Borowski
  2017-05-02 14:19   ` Andrei Borzenkov
  1 sibling, 1 reply; 17+ messages in thread
From: Adam Borowski @ 2017-05-02 13:58 UTC (permalink / raw)
  To: Andrei Borzenkov; +Cc: linux-btrfs

On Sun, Apr 30, 2017 at 08:47:43AM +0300, Andrei Borzenkov wrote:
> I'm chasing an issue with btrfs mounts under systemd
> (https://github.com/systemd/systemd/issues/5781) - to summarize, systemd
> waits for the final device that makes btrfs complete and mounts it using
> this device name.

Systemd is wrong here -- its approach can possibly work only on clean mounts;
if you need to mount degraded it will hang forever.  Even worse, if it's not
the root filesystem, it will "helpfully" unmount it after you mount it
manually (for the root fs, it will _try_ to unmount but that obviously fails,
resulting in nothing but some CPU wasted on trying to unmount over and over).

> But in /proc/self/mountinfo we actually see another
> device name. Due to peculiarities of the systemd implementation, this
> device "does not exist" from systemd's PoV.
> 
> Looking at the btrfs code I start to suspect that we actually do not
> know which device was used to mount it at all.
> 
> So we always show the device with the smallest devid, irrespective of
> which device was actually used to mount it.

Devices come and go (ok, it's not like you hot-remove disks every day,
but...).  Storing the device that started the mount is pointless: btrfs
can handle removal fine so such a stored device would point nowhere -- or
worse, to some unrelated innocent disk you put in for data recovery (you may
have other plans than re-provisioning that raid).


Meow!
-- 
Don't be racist.  White, amber or black, all beers should be judged based
solely on their merits.  Heck, even if occasionally a cider applies for a
beer's job, why not?
On the other hand, corpo lager is not a race.


* Re: Can I see what device was used to mount btrfs?
  2017-05-02 13:58 ` Adam Borowski
@ 2017-05-02 14:19   ` Andrei Borzenkov
  2017-05-02 18:49     ` Adam Borowski
  0 siblings, 1 reply; 17+ messages in thread
From: Andrei Borzenkov @ 2017-05-02 14:19 UTC (permalink / raw)
  To: Adam Borowski; +Cc: linux-btrfs

On Tue, May 2, 2017 at 4:58 PM, Adam Borowski <kilobyte@angband.pl> wrote:
> On Sun, Apr 30, 2017 at 08:47:43AM +0300, Andrei Borzenkov wrote:
>> I'm chasing an issue with btrfs mounts under systemd
>> (https://github.com/systemd/systemd/issues/5781) - to summarize, systemd
>> waits for the final device that makes btrfs complete and mounts it using
>> this device name.
>
> Systemd is wrong here -- its approach can possibly work only on clean mounts;
> if you need to mount degraded it will hang forever.  Even worse, if it's not
> the root filesystem, it will "helpfully" unmount it after you mount it
> manually (for the root fs, it will _try_ to unmount but that obviously fails,
> resulting in nothing but some CPU wasted on trying to unmount over and over).
>
>> But in /proc/self/mountinfo we actually see another
>> device name. Due to peculiarities of the systemd implementation, this
>> device "does not exist" from systemd's PoV.
>>
>> Looking at the btrfs code I start to suspect that we actually do not
>> know which device was used to mount it at all.
>>
>> So we always show the device with the smallest devid, irrespective of
>> which device was actually used to mount it.
>
> Devices come and go (ok, it's not like you hot-remove disks every day,
> but...).  Storing the device that started the mount is pointless: btrfs
> can handle removal fine so such a stored device would point nowhere -- or
> worse, to some unrelated innocent disk you put in for data recovery (you may
> have other plans than re-provisioning that raid).
>

Yes, I understand all of this, you do not need to convince me. OTOH
the problem is real - we need to have some way to order btrfs mounts
during bootup. In the past it was solved by delays. Systemd tries to
eliminate ad hoc delays ... which is by itself not bad. So what can be
utilized from the btrfs side to implement ordering? We need /something/
to wait for. It could be a virtual device that represents the btrfs RAID
and has an online/offline state (similar to Linux MD). It could be some
daemon that waits for btrfs to become complete. Do we have something?
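
The closest existing hook seems to be the BTRFS_IOC_DEVICES_READY ioctl
on /dev/btrfs-control - the call behind "btrfs device ready" and udev's
64-btrfs.rules. A minimal sketch of a waiter polling it (assuming
linux/btrfs.h; as far as I can tell, the ioctl returns 0 once all member
devices of the filesystem the given device belongs to are registered,
and 1 while some are still missing):

/* waitbtrfs.c - sketch of a waiter built on BTRFS_IOC_DEVICES_READY.
 * Usage: ./waitbtrfs /dev/dm-0
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/btrfs.h>

/* 0 = all devices present, 1 = still incomplete, <0 = error */
static int btrfs_ready(const char *devpath)
{
        struct btrfs_ioctl_vol_args args;
        int fd, ret;

        fd = open("/dev/btrfs-control", O_RDWR);
        if (fd < 0)
                return -1;
        memset(&args, 0, sizeof(args));
        strncpy(args.name, devpath, BTRFS_PATH_NAME_MAX);
        /* also scans/registers devpath as a side effect */
        ret = ioctl(fd, BTRFS_IOC_DEVICES_READY, &args);
        close(fd);
        return ret;
}

int main(int argc, char **argv)
{
        for (int i = 0; i < 90; i++) {  /* crude 90-second timeout */
                if (btrfs_ready(argv[1]) == 0)
                        return 0;
                sleep(1);
        }
        fprintf(stderr, "%s: filesystem still incomplete\n", argv[1]);
        return 1;
}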


* Re: Can I see what device was used to mount btrfs?
  2017-05-02 14:19   ` Andrei Borzenkov
@ 2017-05-02 18:49     ` Adam Borowski
  2017-05-02 19:50       ` Goffredo Baroncelli
  0 siblings, 1 reply; 17+ messages in thread
From: Adam Borowski @ 2017-05-02 18:49 UTC (permalink / raw)
  To: Andrei Borzenkov; +Cc: linux-btrfs

On Tue, May 02, 2017 at 05:19:34PM +0300, Andrei Borzenkov wrote:
> On Tue, May 2, 2017 at 4:58 PM, Adam Borowski <kilobyte@angband.pl> wrote:
> > On Sun, Apr 30, 2017 at 08:47:43AM +0300, Andrei Borzenkov wrote:
> >> systemd waits for the final device that makes btrfs complete and mounts
> >> it using this device name.
> >
> >> But in /proc/self/mountinfo we actually see another
> >> device name. Due to peculiarities of the systemd implementation, this
> >> device "does not exist" from systemd's PoV.
> >>
> >> Looking at the btrfs code I start to suspect that we actually do not
> >> know which device was used to mount it at all.
> >>
> >> So we always show the device with the smallest devid, irrespective of
> >> which device was actually used to mount it.
> >
> > Devices come and go (ok, it's not like you hot-remove disks every day,
> > but...).  Storing the device that started the mount is pointless: btrfs
> > can handle removal fine so such a stored device would point nowhere -- or
> > worse, to some unrelated innocent disk you put in for data recovery (you may
> > have other plans than re-provisioning that raid).
> 
> Yes, I understand all of this, you do not need to convince me. OTOH
> the problem is real - we need to have some way to order btrfs mounts
> during bootup. In the past it was solved by delays. Systemd tries to
> eliminate ad hoc delays ... which is by itself not bad. So what can be
> utilized from the btrfs side to implement ordering? We need /something/
> to wait for. It could be a virtual device that represents the btrfs
> RAID and has an online/offline state (similar to Linux MD).

It's not so simple -- such a btrfs device would have THREE states:

1. not mountable yet (multi-device with not enough disks present)
2. mountable ro / rw-degraded
3. healthy

The distinction between 1 and 2 is important, especially because systemd
for some reason insists on forcing an unmount if it thinks the filesystem
is in a bad state (why?!?).  On distributions that follow the traditional
remount scheme (i.e., you mount ro during boot, run fsck/whatever (a no-op
for btrfs), then remount rw), starting as soon as we're in state 2 would
be faster.  It would also allow automatically going degraded if a timeout
is hit[1].

To distinguish between 1 and 2 you need to halfway mount the filesystem, at
least to read the chunk tree (Qu's "why the heck it wasn't merged 2 years
ago" chunk check patch would help).

Naively thinking, it might be tempting to have only two states, with the
split depending on whether the filesystem is already mounted -- currently
it's 1+2 vs 3; it would be: before mount, 1+2 vs 3; after mount, 1 vs 2+3.

But this would lead to breakage in corner cases.

For example: a box has a 3-way raid1 on sda sdb sdc.  Due to a cable not
being firmly seated, power supply or controller having a hiccup, etc,
suddenly sda goes offline.  Btrfs handles that fine, the admin gets worried
and hot-plugs a fourth disk, adding it to the raid.  Reboot.  sda gets up
first, boot goes fine so far, mountall/systemd starts, wants to mount that
filesystem.  sda appears to be fine, systemd reads it and sees there are _3_
disks (as obviously sda doesn't yet know about the fourth).  As sdd was a
random slow crap disk the admin happened to have on the shelf, it's not yet
up.  So systemd sees sda, sdb and sdc online -- they have all the device
IDs it's looking for, the count is ok, so it assumes all is fine.  It
tries to mount, but btrfs then properly notices that four disks are
needed, and because there was no -odegraded, the mount fails.  Boom.

Thus, there's no real way to know if the mount will succeed beforehand.


> It could be some daemon that waits for btrfs to become complete.  Do we
> have something?

Such a daemon would also have to read the chunk tree.


Meow!

[1]. Not entirely sure if that's a good default -- in one of my boxes, two
disks throw scary errors like UnrecovData BadCRC, then after ninetysomething
seconds all goes well, although md (/ is on a 5GB 5-way raid1 md) first goes
degraded, then starts autorecovery.  As systemd likes timeouts of 90 seconds,
just a few seconds shy of what this box needs to settle, having systemd
auto-degrade there would lead to unpaired blocks, which btrfs doesn't yet
repair without being ordered to by hand.

-- 
Don't be racist.  White, amber or black, all beers should be judged based
solely on their merits.  Heck, even if occasionally a cider applies for a
beer's job, why not?
On the other hand, corpo lager is not a race.


* Re: Can I see what device was used to mount btrfs?
  2017-05-02 18:49     ` Adam Borowski
@ 2017-05-02 19:50       ` Goffredo Baroncelli
  2017-05-02 20:15         ` Kai Krakow
  2017-05-03 11:26         ` Austin S. Hemmelgarn
  0 siblings, 2 replies; 17+ messages in thread
From: Goffredo Baroncelli @ 2017-05-02 19:50 UTC (permalink / raw)
  To: Adam Borowski, Andrei Borzenkov; +Cc: linux-btrfs

On 2017-05-02 20:49, Adam Borowski wrote:
>> It could be some daemon that waits for btrfs to become complete.  Do we
>> have something?
> Such a daemon would also have to read the chunk tree.

I don't think that a daemon is necessary. As a proof of concept, in the past I developed a mount helper [1] which handled the mount of a btrfs filesystem:
this helper first checks whether the filesystem spans multiple devices; if so, it waits until all the devices have appeared, and finally mounts the filesystem.

> It's not so simple -- such a btrfs device would have THREE states:
> 
> 1. not mountable yet (multi-device with not enough disks present)
> 2. mountable ro / rw-degraded
> 3. healthy

My mount.btrfs could be "programmed" to wait for a timeout, then mount the filesystem as degraded if not all devices are present. This is a very simple strategy, but it could be expanded.
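
As an illustration (a hypothetical sketch, not the actual code from the helper in [1] below), the core of that strategy is just a retry loop around mount(2) with a degraded fallback:

/* Sketch of the wait-then-degraded strategy (illustrative only):
 * retry a normal mount until the timeout expires, then fall back
 * to -o degraded.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/mount.h>

static int mount_btrfs(const char *dev, const char *dir, int timeout)
{
        for (int i = 0; i < timeout; i++) {
                /* succeeds once the kernel knows all member devices */
                if (mount(dev, dir, "btrfs", 0, NULL) == 0)
                        return 0;
                sleep(1);
        }
        fprintf(stderr, "%s: incomplete after %ds, trying degraded\n",
                dev, timeout);
        return mount(dev, dir, "btrfs", 0, "degraded");
}

int main(int argc, char **argv)
{
        return mount_btrfs(argv[1], argv[2], 30) ? 1 : 0;
}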

I am inclined to think that the current approach doesn't fit the btrfs requirements well.  The roles and responsibilities are spread across too many layers (udev, systemd, mount)... I hoped that my helper could be adopted in order to concentrate all the responsibility in a single binary; this would reduce the number of interfaces with the other subsystems (e.g. systemd, udev).

For example, it would be possible to implement a sane check that prevents mounting a btrfs filesystem if two devices expose the same UUID...


BR
G.Baroncelli

[1] See "[RFC][PATCH v2] mount.btrfs helper" thread ( https://marc.info/?l=linux-btrfs&m=141736989508243&w=2 )



-- 
gpg @keyserver.linux.it: Goffredo Baroncelli <kreijackATinwind.it>
Key fingerprint BBF5 1610 0B64 DAC6 5F7D  17B2 0EDA 9B37 8B82 E0B5


* Re: Can I see what device was used to mount btrfs?
  2017-05-02 19:50       ` Goffredo Baroncelli
@ 2017-05-02 20:15         ` Kai Krakow
  2017-05-02 20:34           ` Adam Borowski
                             ` (2 more replies)
  2017-05-03 11:26         ` Austin S. Hemmelgarn
  1 sibling, 3 replies; 17+ messages in thread
From: Kai Krakow @ 2017-05-02 20:15 UTC (permalink / raw)
  To: linux-btrfs

On Tue, 2 May 2017 21:50:19 +0200,
Goffredo Baroncelli <kreijack@inwind.it> wrote:

> On 2017-05-02 20:49, Adam Borowski wrote:
> >> It could be some daemon that waits for btrfs to become complete.
> >> Do we have something?  
> > Such a daemon would also have to read the chunk tree.  
> 
> I don't think that a daemon is necessary. As a proof of concept, in
> the past I developed a mount helper [1] which handled the mount of a
> btrfs filesystem: this helper first checks whether the filesystem
> spans multiple devices; if so, it waits until all the devices have
> appeared, and finally mounts the filesystem.
> 
> > It's not so simple -- such a btrfs device would have THREE states:
> > 
> > 1. not mountable yet (multi-device with not enough disks present)
> > 2. mountable ro / rw-degraded
> > 3. healthy  
> 
> My mount.btrfs could be "programmed" to wait for a timeout, then
> mount the filesystem as degraded if not all devices are present.
> This is a very simple strategy, but it could be expanded.
> 
> I am inclined to think that the current approach doesn't fit the
> btrfs requirements well.  The roles and responsibilities are spread
> across too many layers (udev, systemd, mount)... I hoped that my
> helper could be adopted in order to concentrate all the
> responsibility in a single binary; this would reduce the number of
> interfaces with the other subsystems (e.g. systemd, udev).
> 
> For example, it would be possible to implement a sane check that
> prevents mounting a btrfs filesystem if two devices expose the same
> UUID...

Ideally, the btrfs wouldn't even appear in /dev until it was assembled
by udev. But apparently that's not the case, and I think this is where
the problems come from. I wish btrfs would not show up as device nodes
in /dev that the mount command identifies as btrfs. Instead, btrfs
would expose (probably through udev) a device node in
/dev/btrfs/fs_identifier when it is ready.

Apparently, the core problem of how to handle a degraded btrfs still
remains. Maybe it could be solved by adding more stages of btrfs nodes,
like /dev/btrfs-incomplete (for an unusable btrfs), /dev/btrfs-degraded
(for a btrfs still missing devices but with at least one stripe of the
btrfs raid available) and /dev/btrfs as the final stage. That way, a
mount process could wait for a while, and if the device doesn't appear,
try the degraded stage instead. If the fs is opened from the degraded
dev node stage, udev (or other processes) that scan for devices should
stop assembling the fs if they are still doing so.

bcache has a similar approach, hiding an fs within a protective
superblock. Unless bcache is set up, the fs won't show up in /dev, and
that fs won't be visible by other means. Btrfs could do something
similar and only show a single device node once assembled completely.
The component devices would have superblocks ignored by mount, and only
the final node would expose a virtual superblock and the compound device
behind it. Of course, this makes things like compound device resizing
more complicated, maybe even impossible.

If I'm not totally wrong, I think this is also how zfs exposes its
pools. You need user space tools to make the fs pools visible in the
tree. If zfs is incomplete, there's nothing to mount, and thus no race
condition. But I never tried zfs seriously, so I do not know.

-- 
Regards,
Kai

Replies to list-only preferred.



* Re: Can I see what device was used to mount btrfs?
  2017-05-02 20:15         ` Kai Krakow
@ 2017-05-02 20:34           ` Adam Borowski
  2017-05-03 11:32           ` Austin S. Hemmelgarn
  2017-05-03 17:05           ` Goffredo Baroncelli
  2 siblings, 0 replies; 17+ messages in thread
From: Adam Borowski @ 2017-05-02 20:34 UTC (permalink / raw)
  To: linux-btrfs

On Tue, May 02, 2017 at 10:15:06PM +0200, Kai Krakow wrote:
> Ideally, the btrfs wouldn't even appear in /dev until it was assembled
> by udev. But apparently that's not the case, and I think this is where
> the problems come from. I wish btrfs would not show up as device nodes
> in /dev that the mount command identifies as btrfs. Instead, btrfs
> would expose (probably through udev) a device node
> in /dev/btrfs/fs_identifier when it is ready.
> 
> Apparently, the core problem of how to handle a degraded btrfs still
> remains. Maybe it could be solved by adding more stages of btrfs nodes,
> like /dev/btrfs-incomplete (for an unusable btrfs), /dev/btrfs-degraded
> (for a btrfs still missing devices but with at least one stripe of the
> btrfs raid available) and /dev/btrfs as the final stage.

The problem is, we can't tell these states apart other than by doing the
vast majority of mount's work.  As I described earlier in this thread,
even the "fully available" stage is not trivial.


Meow!
-- 
Don't be racist.  White, amber or black, all beers should be judged based
solely on their merits.  Heck, even if occasionally a cider applies for a
beer's job, why not?
On the other hand, corpo lager is not a race.


* Re: Can I see what device was used to mount btrfs?
  2017-05-02 19:50       ` Goffredo Baroncelli
  2017-05-02 20:15         ` Kai Krakow
@ 2017-05-03 11:26         ` Austin S. Hemmelgarn
  2017-05-03 18:12           ` Andrei Borzenkov
  1 sibling, 1 reply; 17+ messages in thread
From: Austin S. Hemmelgarn @ 2017-05-03 11:26 UTC (permalink / raw)
  To: kreijack, Adam Borowski, Andrei Borzenkov; +Cc: linux-btrfs

On 2017-05-02 15:50, Goffredo Baroncelli wrote:
> On 2017-05-02 20:49, Adam Borowski wrote:
>>> It could be some daemon that waits for btrfs to become complete.  Do we
>>> have something?
>> Such a daemon would also have to read the chunk tree.
>
> I don't think that a daemon is necessary. As a proof of concept, in the past I developed a mount helper [1] which handled the mount of a btrfs filesystem:
> this helper first checks whether the filesystem spans multiple devices; if so, it waits until all the devices have appeared, and finally mounts the filesystem.
>
>> It's not so simple -- such a btrfs device would have THREE states:
>>
>> 1. not mountable yet (multi-device with not enough disks present)
>> 2. mountable ro / rw-degraded
>> 3. healthy
>
> My mount.btrfs could be "programmed" to wait for a timeout, then mount the filesystem as degraded if not all devices are present. This is a very simple strategy, but it could be expanded.
>
> I am inclined to think that the current approach doesn't fit the btrfs requirements well.  The roles and responsibilities are spread across too many layers (udev, systemd, mount)... I hoped that my helper could be adopted in order to concentrate all the responsibility in a single binary; this would reduce the number of interfaces with the other subsystems (e.g. systemd, udev).
The primary problem is that systemd treats BTRFS like a block layer
instead of a filesystem (so it assumes all devices need to be present),
and that it doesn't trust the kernel's mount function to work correctly.
As a result, it assumes that the mount operation will fail if it doesn't
see all the devices, instead of just trying it like it should.
>
> For example, it would be possible to implement a sane check that prevents mounting a btrfs filesystem if two devices expose the same UUID...
>
>
> BR
> G.Baroncelli
>
> [1] See "[RFC][PATCH v2] mount.btrfs helper" thread ( https://marc.info/?l=linux-btrfs&m=141736989508243&w=2 )
>



* Re: Can I see what device was used to mount btrfs?
  2017-05-02 20:15         ` Kai Krakow
  2017-05-02 20:34           ` Adam Borowski
@ 2017-05-03 11:32           ` Austin S. Hemmelgarn
  2017-05-03 17:05           ` Goffredo Baroncelli
  2 siblings, 0 replies; 17+ messages in thread
From: Austin S. Hemmelgarn @ 2017-05-03 11:32 UTC (permalink / raw)
  To: linux-btrfs

On 2017-05-02 16:15, Kai Krakow wrote:
> On Tue, 2 May 2017 21:50:19 +0200,
> Goffredo Baroncelli <kreijack@inwind.it> wrote:
>
>> On 2017-05-02 20:49, Adam Borowski wrote:
>>>> It could be some daemon that waits for btrfs to become complete.
>>>> Do we have something?
>>> Such a daemon would also have to read the chunk tree.
>>
>> I don't think that a daemon is necessary. As a proof of concept, in
>> the past I developed a mount helper [1] which handled the mount of a
>> btrfs filesystem: this helper first checks whether the filesystem
>> spans multiple devices; if so, it waits until all the devices have
>> appeared, and finally mounts the filesystem.
>>
>>> It's not so simple -- such a btrfs device would have THREE states:
>>>
>>> 1. not mountable yet (multi-device with not enough disks present)
>>> 2. mountable ro / rw-degraded
>>> 3. healthy
>>
>> My mount.btrfs could be "programmed" to wait for a timeout, then
>> mount the filesystem as degraded if not all devices are present.
>> This is a very simple strategy, but it could be expanded.
>>
>> I am inclined to think that the current approach doesn't fit the
>> btrfs requirements well.  The roles and responsibilities are spread
>> across too many layers (udev, systemd, mount)... I hoped that my
>> helper could be adopted in order to concentrate all the
>> responsibility in a single binary; this would reduce the number of
>> interfaces with the other subsystems (e.g. systemd, udev).
>>
>> For example, it would be possible to implement a sane check that
>> prevents mounting a btrfs filesystem if two devices expose the same
>> UUID...
>
> Ideally, the btrfs wouldn't even appear in /dev until it was assembled
> by udev. But apparently that's not the case, and I think this is where
> the problems come from. I wish btrfs would not show up as device nodes
> in /dev that the mount command identifies as btrfs. Instead, btrfs
> would expose (probably through udev) a device node
> in /dev/btrfs/fs_identifier when it is ready.
>
> Apparently, the core problem of how to handle a degraded btrfs still
> remains. Maybe it could be solved by adding more stages of btrfs nodes,
> like /dev/btrfs-incomplete (for an unusable btrfs), /dev/btrfs-degraded
> (for a btrfs still missing devices but with at least one stripe of the
> btrfs raid available) and /dev/btrfs as the final stage. That way, a
> mount process could wait for a while, and if the device doesn't appear,
> try the degraded stage instead. If the fs is opened from the degraded
> dev node stage, udev (or other processes) that scan for devices should
> stop assembling the fs if they are still doing so.
That won't work though, because BTRFS is a _filesystem_, not a block
layer.  We don't have any way of hiding things.  Even if we did, we
would still need to parse the superblocks and chunk tree, and at that
point, it just makes more sense to try to mount the FS instead.  IOW,
the correct way to determine if a BTRFS volume is mountable is to try to
mount it, not to wait and try to find all the devices.
>
> bcache has a similar approach, hiding an fs within a protective
> superblock. Unless bcache is set up, the fs won't show up in /dev, and
> that fs won't be visible by other means. Btrfs could do something
> similar and only show a single device node once assembled completely.
> The component devices would have superblocks ignored by mount, and only
> the final node would expose a virtual superblock and the compound device
> behind it. Of course, this makes things like compound device resizing
> more complicated, maybe even impossible.
Except there is no 'btrfs' device node for a filesystem.  The only node 
is /dev/btrfs-control, which is used for a small handful of things that 
don't involve the mountability of any filesystem.  To reiterate, we are 
_NOT_ a block layer, so there is _NO_ associated block device for an 
assembled multi-device volume, nor should there be.
>
> If I'm not totally wrong, I think this is also how zfs exposes its
> pools. You need user space tools to make the fs pools visible in the
> tree. If zfs is incomplete, there's nothing to mount, and thus no race
> condition. But I never tried zfs seriously, so I do not know.
For zvols, yes, this is how it works.  For actual filesystem datasets, 
it behaves almost identically to BTRFS AFAIK.



* Re: Can I see what device was used to mount btrfs?
  2017-05-02 20:15         ` Kai Krakow
  2017-05-02 20:34           ` Adam Borowski
  2017-05-03 11:32           ` Austin S. Hemmelgarn
@ 2017-05-03 17:05           ` Goffredo Baroncelli
  2017-05-03 18:43             ` Chris Murphy
  2 siblings, 1 reply; 17+ messages in thread
From: Goffredo Baroncelli @ 2017-05-03 17:05 UTC (permalink / raw)
  To: Kai Krakow, linux-btrfs

On 2017-05-02 22:15, Kai Krakow wrote:
>> For example, it would be possible to implement a sane check that
>> prevents mounting a btrfs filesystem if two devices expose the same
>> UUID...
> Ideally, the btrfs wouldn't even appear in /dev until it was assembled
> by udev. But apparently that's not the case, and I think this is where
> the problems come from. I wish btrfs would not show up as device nodes
> in /dev that the mount command identifies as btrfs. Instead, btrfs
> would expose (probably through udev) a device node
> in /dev/btrfs/fs_identifier when it is ready.


And what if udev fails to assemble the devices (for example because not all the disks are available, or because there are two disks with the same uuid)?
And if the user can't access the disks, how could he solve the issue (e.g. two disks with the same uuid)?

I think that udev should be taken out of the game of assembling the disks, for the following reasons:
1) udev is not developed by the BTRFS community, where the btrfs knowledge is; there are a lot of corner cases which are not clear even to the btrfs developers; how could these cases be any clearer to the udev developers (who indeed are very smart guys)?
2) I don't think that udev is flexible enough to handle all the cases (e.g. two disks with the same uuid, missing devices)
3) udev works quite well at handling device appearance; why should it be involved in filesystem assembly?


BR
G.Baroncelli


-- 
gpg @keyserver.linux.it: Goffredo Baroncelli <kreijackATinwind.it>
Key fingerprint BBF5 1610 0B64 DAC6 5F7D  17B2 0EDA 9B37 8B82 E0B5


* Re: Can I see what device was used to mount btrfs?
  2017-05-03 11:26         ` Austin S. Hemmelgarn
@ 2017-05-03 18:12           ` Andrei Borzenkov
  2017-05-03 18:53             ` Austin S. Hemmelgarn
  0 siblings, 1 reply; 17+ messages in thread
From: Andrei Borzenkov @ 2017-05-03 18:12 UTC (permalink / raw)
  To: Austin S. Hemmelgarn, kreijack, Adam Borowski; +Cc: linux-btrfs

03.05.2017 14:26, Austin S. Hemmelgarn writes:
> On 2017-05-02 15:50, Goffredo Baroncelli wrote:
>> On 2017-05-02 20:49, Adam Borowski wrote:
>>>> It could be some daemon that waits for btrfs to become complete.  Do we
>>>> have something?
>>> Such a daemon would also have to read the chunk tree.
>>
>> I don't think that a daemon is necessary. As a proof of concept, in the
>> past I developed a mount helper [1] which handled the mount of a btrfs
>> filesystem:
>> this helper first checks whether the filesystem spans multiple devices;
>> if so, it waits until all the devices have appeared, and finally mounts
>> the filesystem.
>>
>>> It's not so simple -- such a btrfs device would have THREE states:
>>>
>>> 1. not mountable yet (multi-device with not enough disks present)
>>> 2. mountable ro / rw-degraded
>>> 3. healthy
>>
>> My mount.btrfs could be "programmed" to wait for a timeout, then mount
>> the filesystem as degraded if not all devices are present. This is a
>> very simple strategy, but it could be expanded.
>>
>> I am inclined to think that the current approach doesn't fit the
>> btrfs requirements well.  The roles and responsibilities are spread
>> across too many layers (udev, systemd, mount)... I hoped that my
>> helper could be adopted in order to concentrate all the
>> responsibility in a single binary; this would reduce the number of
>> interfaces with the other subsystems (e.g. systemd, udev).
>> The primary problem is that systemd treats BTRFS like a block layer
>> instead of a filesystem (so it assumes all devices need to be present),
>> and that it doesn't trust the kernel's mount function to work correctly.

My understanding is that before a kernel mount can succeed for a
multi-device btrfs, the kernel must be made aware of the devices that
comprise this filesystem. This is done by using (the equivalent of)
"btrfs device scan" or "btrfs device ready". Am I wrong here?

> As a result, it assumes that the mount operation will fail if it
> doesn't see all the devices, instead of just trying it like it should.

So do you suggest that the mount will succeed even if the kernel is not
made aware of all devices? If not, could you elaborate on how btrfs
should be mounted at boot - we must give the mount command some device,
right? How should we choose this device?



* Re: Can I see what device was used to mount btrfs?
  2017-05-03 17:05           ` Goffredo Baroncelli
@ 2017-05-03 18:43             ` Chris Murphy
  2017-05-03 21:19               ` Duncan
  2017-05-04  3:48               ` Andrei Borzenkov
  0 siblings, 2 replies; 17+ messages in thread
From: Chris Murphy @ 2017-05-03 18:43 UTC (permalink / raw)
  To: Goffredo Baroncelli; +Cc: Kai Krakow, Btrfs BTRFS

If I understand the bug report correctly, the user specifies mounting
by label, which systemd then converts into /dev/dm-0 (because it's a
Btrfs volume on two LUKS devices).

Why not convert the fstab mount-by-label request into a
/dev/disk/by-uuid/ path, and then have systemd call mount with the
-U,--uuid option rather than the block device? I'd think this is better
no matter what the file system is, but certainly with Btrfs and possibly
ZFS. That makes the device discovery problem not systemd's problem to
hassle with.



Chris Murphy




On Wed, May 3, 2017 at 11:05 AM, Goffredo Baroncelli <kreijack@inwind.it> wrote:
> On 2017-05-02 22:15, Kai Krakow wrote:
>>> For example, it would be possible to implement a sane check that
>>> prevents mounting a btrfs filesystem if two devices expose the same
>>> UUID...
>> Ideally, the btrfs wouldn't even appear in /dev until it was assembled
>> by udev. But apparently that's not the case, and I think this is where
>> the problems come from. I wish btrfs would not show up as device nodes
>> in /dev that the mount command identifies as btrfs. Instead, btrfs
>> would expose (probably through udev) a device node
>> in /dev/btrfs/fs_identifier when it is ready.
>
>
> And what if udev fails to assemble the devices (for example because not all the disks are available, or because there are two disks with the same uuid)?
> And if the user can't access the disks, how could he solve the issue (e.g. two disks with the same uuid)?
>
> I think that udev should be taken out of the game of assembling the disks, for the following reasons:
> 1) udev is not developed by the BTRFS community, where the btrfs knowledge is; there are a lot of corner cases which are not clear even to the btrfs developers; how could these cases be any clearer to the udev developers (who indeed are very smart guys)?
> 2) I don't think that udev is flexible enough to handle all the cases (e.g. two disks with the same uuid, missing devices)
> 3) udev works quite well at handling device appearance; why should it be involved in filesystem assembly?
>
>
> BR
> G.Baroncelli
>
>
> --
> gpg @keyserver.linux.it: Goffredo Baroncelli <kreijackATinwind.it>
> Key fingerprint BBF5 1610 0B64 DAC6 5F7D  17B2 0EDA 9B37 8B82 E0B5



-- 
Chris Murphy


* Re: Can I see what device was used to mount btrfs?
  2017-05-03 18:12           ` Andrei Borzenkov
@ 2017-05-03 18:53             ` Austin S. Hemmelgarn
  0 siblings, 0 replies; 17+ messages in thread
From: Austin S. Hemmelgarn @ 2017-05-03 18:53 UTC (permalink / raw)
  To: Andrei Borzenkov, kreijack, Adam Borowski; +Cc: linux-btrfs

On 2017-05-03 14:12, Andrei Borzenkov wrote:
> 03.05.2017 14:26, Austin S. Hemmelgarn writes:
>> On 2017-05-02 15:50, Goffredo Baroncelli wrote:
>>> On 2017-05-02 20:49, Adam Borowski wrote:
>>>>> It could be some daemon that waits for btrfs to become complete.  Do we
>>>>> have something?
>>>> Such a daemon would also have to read the chunk tree.
>>>
>>> I don't think that a daemon is necessary. As a proof of concept, in the
>>> past I developed a mount helper [1] which handled the mount of a btrfs
>>> filesystem:
>>> this helper first checks whether the filesystem spans multiple devices;
>>> if so, it waits until all the devices have appeared, and finally mounts
>>> the filesystem.
>>>
>>>> It's not so simple -- such a btrfs device would have THREE states:
>>>>
>>>> 1. not mountable yet (multi-device with not enough disks present)
>>>> 2. mountable ro / rw-degraded
>>>> 3. healthy
>>>
>>> My mount.btrfs could be "programmed" to wait for a timeout, then mount
>>> the filesystem as degraded if not all devices are present. This is a
>>> very simple strategy, but it could be expanded.
>>>
>>> I am inclined to think that the current approach doesn't fit the
>>> btrfs requirements well.  The roles and responsibilities are spread
>>> across too many layers (udev, systemd, mount)... I hoped that my
>>> helper could be adopted in order to concentrate all the
>>> responsibility in a single binary; this would reduce the number of
>>> interfaces with the other subsystems (e.g. systemd, udev).
>> The primary problem is that systemd treats BTRFS like a block layer
>> instead of a filesystem (so it assumes all devices need to be present),
>> and that it doesn't trust the kernel's mount function to work correctly.
>
> My understanding is that before a kernel mount can succeed for a
> multi-device btrfs, the kernel must be made aware of the devices that
> comprise this filesystem. This is done by using (the equivalent of)
> "btrfs device scan" or "btrfs device ready". Am I wrong here?
That is correct, the kernel needs to be notified about the devices via
'btrfs device scan' (or directly with the ioctl it calls).  Udev calls
this automatically on newly connected block devices though, so currently
there is no reason to run it manually on most systems.  Ideally, this
should be in a mount helper and possibly triggered by 'btrfs filesystem
show'.  Unless you're mounting a BTRFS volume or listing what the kernel
knows about, there is no reason the kernel needs to be tracking the FS,
so there is no point in regularly wasting time in udev processing by
scanning all newly connected devices.

As far as 'btrfs device ready' goes, it only tells you whether the kernel
thinks the filesystem is mountable _and_ not degraded.  It's usually
correct, but watching it has the usual TOCTOU races present in any kind
of status-checking system, and it's useless if you want to mount degraded.
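
For concreteness, the scan itself boils down to one ioctl per device - a
minimal sketch of the 'btrfs device scan' equivalent a mount helper could
issue itself (assuming linux/btrfs.h):

/* btrfs-register.c - sketch of the registration step behind
 * 'btrfs device scan <dev>'.  Usage: ./btrfs-register /dev/vdb
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/btrfs.h>

int main(int argc, char **argv)
{
        struct btrfs_ioctl_vol_args args;
        int fd = open("/dev/btrfs-control", O_RDWR);

        if (fd < 0) {
                perror("/dev/btrfs-control");
                return 1;
        }
        memset(&args, 0, sizeof(args));
        strncpy(args.name, argv[1], BTRFS_PATH_NAME_MAX);
        /* the kernel probes the device and files it under its fsid */
        if (ioctl(fd, BTRFS_IOC_SCAN_DEV, &args) < 0) {
                perror("BTRFS_IOC_SCAN_DEV");
                close(fd);
                return 1;
        }
        close(fd);
        return 0;
}
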
>
>>  As a result, it assumes that the mount operation will fail if it
>> doesn't see all the devices instead of just trying it like it should.
>
> So do you suggest that the mount will succeed even if the kernel is not
> made aware of all devices? If not, could you elaborate on how btrfs
> should be mounted at boot - we must give the mount command some device,
> right? How should we choose this device?
See my above comment on kernel awareness.

If you have 'degraded' in the mount options, the mount can succeed even 
if not all the devices are present.  Systemd refuses to even try the 
mount if it doesn't see all the devices, and then *unmounts* the FS if 
it gets mounted manually and not all devices are present.  Both of these 
are undesired behaviors for many people (the second more than the first).

I think I've outlined my thoughts on all of this somewhere before, but I 
can't find them, so I might as well do so here:

1. Device scanning should be done by a mount helper, not udev.  This 
closes a serious data safety/security issue present in the current 
combined implementation (if you plug in a device that has the same UUID 
as an existing BTRFS volume on the system and both volumes are marked as 
multi-device, you can cause data loss in the existing volume),  allows 
for more concise tracking of devices, and also eliminates the need for 
system-wide scanning in some cases (if you use 'device=' mount options 
that cover all the devices in the filesystem).  It also saves some time 
in processing of uevents for hot-plugged devices.

2. Systemd should not default to unmounting filesystems it thinks aren't 
ready yet when they've been manually mounted.  This behavior is highly 
counter-intuitive for most users ('The mount command didn't complain and 
returned 0 and dmesg has no errors, why the hell is the filesystem I 
just mounted not mounted?'), and more importantly in this context, makes 
it impossible to manually repair a BTRFS filesystem that's listed in a 
mount unit without dropping to emergency mode, which largely defeats the 
purpose of using a multi-device filesystem that can be repaired online.

3. For BTRFS, and possibly under special circumstances with other 
filesystems (partially present ZFS pool, partially assembled LVM or MD 
array that can run degraded, etc), systemd should try to mount the FS 
when it times out waiting for devices, and there should be an option to 
control this behavior.  While I don't advocate mounting filesystems 
degraded then letting the system run, some people do, and I still expect 
it to work, but currently it does not when using systemd. 
Alternatively, it could do a polling loop with a delay to call mount 
instead of using 'btrfs device ready'.


* Re: Can I see what device was used to mount btrfs?
  2017-05-03 18:43             ` Chris Murphy
@ 2017-05-03 21:19               ` Duncan
  2017-05-04  2:15                 ` Chris Murphy
  2017-05-04  3:48               ` Andrei Borzenkov
  1 sibling, 1 reply; 17+ messages in thread
From: Duncan @ 2017-05-03 21:19 UTC (permalink / raw)
  To: linux-btrfs

Chris Murphy posted on Wed, 03 May 2017 12:43:36 -0600 as excerpted:

> If I understand the bug report correctly, the user specifies mounting by
> label, which systemd then converts into /dev/dm-0 (because it's a Btrfs
> volume on two LUKS devices).
> 
> Why not convert the fstab mount-by-label request into a
> /dev/disk/by-uuid/ path, and then have systemd call mount with the
> -U,--uuid option rather than the block device? I'd think this is better
> no matter what the file system is, but certainly with Btrfs and possibly
> ZFS. That makes the device discovery problem not systemd's problem to
> hassle with.

You're forgetting one thing:  mount doesn't handle either label or uuid
(or similar partlabel/partuuid/etc) mounting on its own -- it interprets
all of these as the appropriate /dev/disk/by-* symlink references,
dereferences those to the canonical device name, and actually mounts
using that.

See the mount (8) manpage, relatively near the top under "Indicating the 
device", the relevant quote being "The mount (8) command internally uses 
udev symlinks, so the use of symlinks in /etc/fstab has no advantage over 
[LABEL=, UUID=, etc] tags."

And of course it's systemd's udev component that sets up those /dev/disk/
by-* symlinks in the first place.

So converting mount-by-label requests to mount-by-uuid requests gets you
exactly nowhere, as underneath the covers they're both using the same
udev-generated /dev/disk/by-* symlinks to get the canonical device name,
so not only do you fail to get rid of the systemd/udev involvement, you
fail to even reduce the abstraction to a lower level.
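
The dereferencing is trivial to demonstrate - a sketch that resolves any
/dev/disk/by-* symlink to the canonical node that mount actually uses:

/* resolve.c - show where the udev-generated symlinks really point.
 * Usage: ./resolve /dev/disk/by-label/Storage [more paths...]
 */
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

int main(int argc, char **argv)
{
        char buf[PATH_MAX];

        for (int i = 1; i < argc; i++) {
                /* follows the symlink chain to the canonical device */
                if (realpath(argv[i], buf))
                        printf("%s -> %s\n", argv[i], buf);
                else
                        perror(argv[i]);
        }
        return 0;
}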

Not to mention that newer systemd is apparently using its own mount
alternative now, and doesn't actually call the traditional mount binary
any more, at least for supported filesystem types.  I'm not qualified to
argue the case for or against that, but it's apparently happening now,
and regardless of where one stands on the merits, it's certainly
inserting systemd even further into the mount process for relatively
complex and fuller-featured filesystems such as btrfs.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



* Re: Can I see what device was used to mount btrfs?
  2017-05-03 21:19               ` Duncan
@ 2017-05-04  2:15                 ` Chris Murphy
  0 siblings, 0 replies; 17+ messages in thread
From: Chris Murphy @ 2017-05-04  2:15 UTC (permalink / raw)
  To: Duncan; +Cc: Btrfs BTRFS

[-- Attachment #1: Type: text/plain, Size: 1940 bytes --]

qemu-kvm (Fedora 26 pre-beta guest and host)
systemd-233-3.fc26.x86_64
kernel-4.11.0-0.rc8.git0.1.fc26.x86_64

The guest's installed OS uses ext4, with boot parameters rd.udev.debug
systemd.log_level=debug so we can see the entirety of Btrfs device
discovery and module loading. Using virsh I can hot-plug a virtio
device. There are two devices, vg/test1 and vg/test2, which hold a
single Btrfs volume made with:

mkfs.btrfs -L themirror -draid1 -mraid1 /dev/vg/test1 /dev/vg/test2

When I 'virsh attach-device' to hot-add only /dev/vg/test1, I get the
output that's in test1.log, attached.

When I 'virsh attach-device' to hot-add /dev/vg/test2, I get the output
found in test2.log, also attached.

These appear as /dev/vdb and /dev/vdc respectively in the guest.

And then 'mount /dev/vdb /mnt' followed 10 seconds later by 'umount
/mnt' - that's in mount-umount.log, attached.

Keep an eye on the monotonic times; the entries aren't always in order.

Note in test2.log how there is a replacement of the label and uuid
links. They originally point to /dev/vdb, but upon the 2nd device
appearing...

[  300.676608] f26.localdomain systemd-udevd[1797]: found 'b252:16'
claiming '/run/udev/links/\x2fdisk\x2fby-uuid\x2fe5eb17cb-40d7-48c8-ab84-925f565edf93'
[  300.676681] f26.localdomain systemd-udevd[1797]: creating link
'/dev/disk/by-uuid/e5eb17cb-40d7-48c8-ab84-925f565edf93' to '/dev/vdc'
[  300.676752] f26.localdomain systemd-udevd[1797]: atomically replace
'/dev/disk/by-uuid/e5eb17cb-40d7-48c8-ab84-925f565edf93'

After umounting, I tried another mount, this time mounting /dev/vdc.
The kernel recognizes vdc...

[ 1719.716027] f26.localdomain kernel: BTRFS info (device vdc): disk
space caching is enabled
[ 1719.716030] f26.localdomain kernel: BTRFS info (device vdc): has
skinny extents

But systemd still refers to dev-vdb.device in the mount and umount
process; /dev/vdc is not mentioned. And df, findmnt, and mountinfo all
show /dev/vdb.

[-- Attachment #2: test1.log --]
[-- Type: text/x-log, Size: 7549 bytes --]


[  134.574002] f26.localdomain kernel: pci 0000:00:0a.0: [1af4:1001] type 00 class 0x010000
[  134.576003] f26.localdomain kernel: pci 0000:00:0a.0: reg 0x10: [io  0x0000-0x003f]
[  134.576042] f26.localdomain kernel: pci 0000:00:0a.0: reg 0x14: [mem 0x00000000-0x00000fff]
[  134.576154] f26.localdomain kernel: pci 0000:00:0a.0: reg 0x20: [mem 0x00000000-0x00003fff 64bit pref]
[  134.579091] f26.localdomain kernel: pci 0000:00:0a.0: BAR 4: assigned [mem 0x80000000-0x80003fff 64bit pref]
[  134.579279] f26.localdomain kernel: pci 0000:00:0a.0: BAR 1: assigned [mem 0x80004000-0x80004fff]
[  134.579305] f26.localdomain kernel: pci 0000:00:0a.0: BAR 0: assigned [io  0x1000-0x103f]
[  134.579522] f26.localdomain kernel: virtio-pci 0000:00:0a.0: enabling device (0000 -> 0003)
[  135.122898] f26.localdomain systemd-udevd[548]: seq 2093 queued, 'add' 'pci'
[  135.123173] f26.localdomain systemd-udevd[548]: Validate module index
[  135.123326] f26.localdomain systemd-udevd[548]: Check if link configuration needs reloading.
[  135.123463] f26.localdomain systemd-udevd[548]: seq 2093 forked new worker [1791]
[  135.123604] f26.localdomain systemd-udevd[1791]: seq 2093 running
[  135.123724] f26.localdomain systemd-udevd[1791]: IMPORT builtin 'hwdb' /usr/lib/udev/rules.d/50-udev-default.rules:15
[  135.123845] f26.localdomain systemd-udevd[1791]: RUN 'kmod load $env{MODALIAS}' /usr/lib/udev/rules.d/80-drivers.rules:5
[  135.123969] f26.localdomain systemd-udevd[1791]: created db file '/run/udev/data/+pci:0000:00:0a.0' for '/devices/pci0000:00/0000:00:0a.0'
[  135.124100] f26.localdomain systemd-udevd[1791]: Execute 'load' 'pci:v00001AF4d00001001sv00001AF4sd00000002bc01sc00i00'
[  135.124204] f26.localdomain systemd-udevd[1791]: Inserted 'virtio_pci'
[  135.124313] f26.localdomain systemd-udevd[1791]: passed device to netlink monitor 0x5580e6cf9ac0
[  135.124432] f26.localdomain systemd-udevd[1791]: seq 2093 processed
[  135.124550] f26.localdomain systemd-udevd[548]: cleanup idle workers
[  135.124857] f26.localdomain systemd-udevd[1791]: Unload module index
[  135.124972] f26.localdomain systemd-udevd[1791]: Unloaded link configuration context.
[  135.125096] f26.localdomain systemd-udevd[548]: worker [1791] exited
[  135.150783] f26.localdomain systemd-udevd[548]: seq 2094 queued, 'add' 'virtio'
[  135.151698] f26.localdomain systemd-udevd[548]: seq 2094 forked new worker [1793]
[  135.152200] f26.localdomain systemd-udevd[548]: seq 2095 queued, 'add' 'bdi'
[  135.152856] f26.localdomain systemd-udevd[1793]: seq 2094 running
[  135.154059] f26.localdomain systemd-udevd[1793]: IMPORT builtin 'hwdb' /usr/lib/udev/rules.d/50-udev-default.rules:15
[  135.154896] f26.localdomain systemd-udevd[1793]: IMPORT builtin 'hwdb' returned non-zero
[  135.156140] f26.localdomain systemd-udevd[548]: seq 2095 forked new worker [1794]
[  135.156582] f26.localdomain systemd-udevd[548]: seq 2096 queued, 'add' 'block'
[  135.156865] f26.localdomain systemd-udevd[1793]: RUN 'kmod load $env{MODALIAS}' /usr/lib/udev/rules.d/80-drivers.rules:5
[  135.157044] f26.localdomain systemd-udevd[1793]: Execute 'load' 'virtio:d00000002v00001AF4'
[  135.157196] f26.localdomain systemd-udevd[1793]: Inserted 'virtio_blk'
[  135.157336] f26.localdomain systemd-udevd[1793]: passed device to netlink monitor 0x5580e6cf9590
[  135.157484] f26.localdomain systemd-udevd[1793]: seq 2094 processed
[  135.157630] f26.localdomain systemd-udevd[1794]: seq 2095 running
[  135.157883] f26.localdomain systemd-udevd[1794]: passed device to netlink monitor 0x5580e6d04470
[  135.158052] f26.localdomain systemd-udevd[548]: passed 214 byte device to netlink monitor 0x5580e6d313a0
[  135.158200] f26.localdomain systemd-udevd[1793]: seq 2096 running
[  135.158459] f26.localdomain systemd-udevd[1793]: GROUP 6 /usr/lib/udev/rules.d/50-udev-default.rules:55
[  135.158732] f26.localdomain systemd-udevd[1794]: seq 2095 processed
[  135.158903] f26.localdomain systemd-udevd[1793]: IMPORT builtin 'path_id' /usr/lib/udev/rules.d/60-persistent-storage.rules:66
[  135.159144] f26.localdomain systemd-udevd[1793]: LINK 'disk/by-path/pci-0000:00:0a.0' /usr/lib/udev/rules.d/60-persistent-storage.rules:67
[  135.159383] f26.localdomain systemd-udevd[1793]: LINK 'disk/by-path/virtio-pci-0000:00:0a.0' /usr/lib/udev/rules.d/60-persistent-storage.rules:71
[  135.159574] f26.localdomain systemd-udevd[1793]: IMPORT builtin 'blkid' /usr/lib/udev/rules.d/60-persistent-storage.rules:82
[  135.159790] f26.localdomain systemd-udevd[1793]: probe /dev/vdb raid offset=0
[  134.629044] f26.localdomain kernel: BTRFS: device label themirror devid 1 transid 9 /dev/vdb
[  135.170860] f26.localdomain systemd-udevd[1793]: LINK 'disk/by-uuid/e5eb17cb-40d7-48c8-ab84-925f565edf93' /usr/lib/udev/rules.d/60-persistent-storage.rules:85
[  135.171268] f26.localdomain systemd-udevd[1793]: LINK 'disk/by-label/themirror' /usr/lib/udev/rules.d/60-persistent-storage.rules:86
[  135.171619] f26.localdomain systemd-udevd[1793]: IMPORT builtin 'btrfs' /usr/lib/udev/rules.d/64-btrfs.rules:8
[  135.172159] f26.localdomain systemd-udevd[1793]: handling device node '/dev/vdb', devnum=b252:16, mode=0660, uid=0, gid=6
[  135.172967] f26.localdomain systemd-udevd[1793]: set permissions /dev/vdb, 060660, uid=0, gid=6
[  135.173472] f26.localdomain systemd-udevd[1793]: creating symlink '/dev/block/252:16' to '../vdb'
[  135.173887] f26.localdomain systemd-udevd[1793]: creating link '/dev/disk/by-label/themirror' to '/dev/vdb'
[  135.174273] f26.localdomain systemd-udevd[1793]: creating symlink '/dev/disk/by-label/themirror' to '../../vdb'
[  135.174556] f26.localdomain systemd-udevd[1793]: creating link '/dev/disk/by-path/pci-0000:00:0a.0' to '/dev/vdb'
[  135.174889] f26.localdomain systemd-udevd[1793]: creating symlink '/dev/disk/by-path/pci-0000:00:0a.0' to '../../vdb'
[  135.175065] f26.localdomain systemd-udevd[1793]: creating link '/dev/disk/by-path/virtio-pci-0000:00:0a.0' to '/dev/vdb'
[  135.175163] f26.localdomain systemd-udevd[1793]: creating symlink '/dev/disk/by-path/virtio-pci-0000:00:0a.0' to '../../vdb'
[  135.175241] f26.localdomain systemd-udevd[1793]: creating link '/dev/disk/by-uuid/e5eb17cb-40d7-48c8-ab84-925f565edf93' to '/dev/vdb'
[  135.175323] f26.localdomain systemd-udevd[1793]: creating symlink '/dev/disk/by-uuid/e5eb17cb-40d7-48c8-ab84-925f565edf93' to '../../vdb'
[  135.175409] f26.localdomain systemd-udevd[1793]: created db file '/run/udev/data/b252:16' for '/devices/pci0000:00/0000:00:0a.0/virtio5/block/vdb'
[  135.175520] f26.localdomain systemd-udevd[1793]: adding watch on '/dev/vdb'
[  135.175604] f26.localdomain systemd-udevd[1793]: created db file '/run/udev/data/b252:16' for '/devices/pci0000:00/0000:00:0a.0/virtio5/block/vdb'
[  135.175724] f26.localdomain systemd-udevd[1793]: passed device to netlink monitor 0x5580e6cf9590
[  135.175801] f26.localdomain systemd-udevd[1793]: seq 2096 processed
[  135.175877] f26.localdomain systemd-udevd[548]: cleanup idle workers
[  135.176114] f26.localdomain systemd-udevd[1793]: Unload module index
[  135.176187] f26.localdomain systemd-udevd[1793]: Unloaded link configuration context.
[  135.176261] f26.localdomain systemd-udevd[548]: worker [1793] exited
[  135.176405] f26.localdomain systemd-udevd[548]: cleanup idle workers
[  135.176534] f26.localdomain systemd-udevd[1794]: Unload module index
[  135.176612] f26.localdomain systemd-udevd[1794]: Unloaded link configuration context.
[  135.176690] f26.localdomain systemd-udevd[548]: worker [1794] exited



[-- Attachment #3: mount-umount.log --]
[-- Type: text/x-log, Size: 12730 bytes --]

[  787.753337] f26.localdomain systemd-udevd[548]: seq 2104 queued, 'add' 'bdi'
[  787.754670] f26.localdomain systemd-udevd[548]: Validate module index
[  787.755439] f26.localdomain systemd-udevd[548]: Check if link configuration needs reloading.
[  787.756053] f26.localdomain systemd[1]: systemd-udevd.service: Got notification message from PID 548 (WATCHDOG=1)
[  787.759546] f26.localdomain systemd-udevd[548]: seq 2104 forked new worker [1926]
[  787.760348] f26.localdomain systemd-udevd[1926]: seq 2104 running
[  787.760730] f26.localdomain systemd-udevd[1926]: passed device to netlink monitor 0x5580e6cf9590
[  787.761151] f26.localdomain systemd-udevd[1926]: seq 2104 processed
[  787.761512] f26.localdomain systemd-udevd[548]: cleanup idle workers
[  787.762161] f26.localdomain systemd-udevd[1926]: Unload module index
[  787.762531] f26.localdomain systemd-udevd[1926]: Unloaded link configuration context.
[  787.762827] f26.localdomain systemd-udevd[548]: worker [1926] exited
[  787.225680] f26.localdomain kernel: BTRFS info (device vdc): disk space caching is enabled
[  787.225753] f26.localdomain kernel: BTRFS info (device vdc): has skinny extents
[  787.782509] f26.localdomain systemd[1182]: libmount event [rescan: yes]
[  787.782843] f26.localdomain systemd[1]: libmount event [rescan: yes]
[  787.783107] f26.localdomain systemd[1]: dev-vdb.device: Changed dead -> tentative
[  787.783286] f26.localdomain systemd[1182]: dev-vdb.device: Changed dead -> tentative
[  787.783512] f26.localdomain systemd[1]: mnt.mount: Changed dead -> mounted
[  787.783663] f26.localdomain systemd[1]: Sent message type=signal sender=n/a destination=n/a object=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=UnitNew cookie=507 reply_cookie=0 error=n/a
[  787.783812] f26.localdomain systemd[1]: Sent message type=signal sender=n/a destination=n/a object=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=UnitNew cookie=508 reply_cookie=0 error=n/a
[  787.783947] f26.localdomain systemd[1]: Sent message type=signal sender=n/a destination=n/a object=/org/freedesktop/systemd1/unit/dev_2dvda1_2edevice interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=509 reply_cookie=0 error=n/a
[  787.784101] f26.localdomain systemd[1]: Sent message type=signal sender=n/a destination=n/a object=/org/freedesktop/systemd1/unit/dev_2dvda1_2edevice interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=510 reply_cookie=0 error=n/a
[  787.784229] f26.localdomain systemd[1]: Sent message type=signal sender=n/a destination=n/a object=/org/freedesktop/systemd1/unit/dev_2dmapper_2dfedora_5cx2droot_2edevice interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=511 reply_cookie=0 error=n/a
[  787.784363] f26.localdomain systemd[1182]: home.mount: Failed to load configuration: No such file or directory
[  787.784613] f26.localdomain systemd[1]: Sent message type=signal sender=n/a destination=n/a object=/org/freedesktop/systemd1/unit/dev_2dmapper_2dfedora_5cx2droot_2edevice interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=512 reply_cookie=0 error=n/a
[  787.784751] f26.localdomain systemd[1182]: home-chris.mount: Failed to load configuration: No such file or directory
[  787.784921] f26.localdomain systemd[1182]: mnt.mount: Changed dead -> mounted
[  787.785248] f26.localdomain systemd[1182]: home.mount: Collecting.
[  787.785807] f26.localdomain systemd[1182]: home-chris.mount: Collecting.
[  787.785995] f26.localdomain systemd[836]: libmount event [rescan: yes]
[  787.786230] f26.localdomain systemd[836]: dev-vdb.device: Changed dead -> tentative
[  787.786388] f26.localdomain systemd[836]: var.mount: Failed to load configuration: No such file or directory
[  787.786613] f26.localdomain systemd[836]: var-lib.mount: Failed to load configuration: No such file or directory
[  787.786775] f26.localdomain systemd[836]: var-lib-gdm.mount: Failed to load configuration: No such file or directory
[  787.786918] f26.localdomain systemd[836]: mnt.mount: Changed dead -> mounted
[  787.787076] f26.localdomain systemd[836]: var.mount: Collecting.
[  787.787224] f26.localdomain systemd[836]: var-lib.mount: Collecting.
[  787.787372] f26.localdomain systemd[836]: var-lib-gdm.mount: Collecting.
[  787.787528] f26.localdomain systemd-logind[716]: Got message type=signal sender=:1.0 destination=n/a object=/org/freedesktop/systemd1/unit/dev_2dvda1_2edevice interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=509 reply_cookie=0 error=n/a
[  787.787747] f26.localdomain systemd-logind[716]: Got message type=signal sender=:1.0 destination=n/a object=/org/freedesktop/systemd1/unit/dev_2dvda1_2edevice interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=510 reply_cookie=0 error=n/a
[  787.787882] f26.localdomain systemd-logind[716]: Got message type=signal sender=:1.0 destination=n/a object=/org/freedesktop/systemd1/unit/dev_2dmapper_2dfedora_5cx2droot_2edevice interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=511 reply_cookie=0 error=n/a
[  787.787998] f26.localdomain systemd[1]: systemd-logind.service: Got notification message from PID 716 (WATCHDOG=1)
[  787.788190] f26.localdomain systemd-logind[716]: Got message type=signal sender=:1.0 destination=n/a object=/org/freedesktop/systemd1/unit/dev_2dmapper_2dfedora_5cx2droot_2edevice interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=512 reply_cookie=0 error=n/a
[  797.457962] f26.localdomain systemd[1]: libmount event [rescan: yes]
[  797.461807] f26.localdomain systemd[1182]: libmount event [rescan: yes]
[  797.463969] f26.localdomain systemd[836]: libmount event [rescan: yes]
[  797.465525] f26.localdomain systemd[1]: mnt.mount: Changed mounted -> dead
[  797.466570] f26.localdomain systemd[1]: dev-vdb.device: Changed tentative -> dead
[  797.467511] f26.localdomain systemd[1]: dev-vdb.device: Collecting.
[  797.468391] f26.localdomain systemd[1]: mnt.mount: Collecting.
[  797.469280] f26.localdomain systemd[1]: Sent message type=signal sender=n/a destination=n/a object=/org/freedesktop/systemd1/unit/dev_2dvdb_2edevice interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=513 reply_cookie=0 error=n/a
[  797.470239] f26.localdomain systemd[1]: Sent message type=signal sender=n/a destination=n/a object=/org/freedesktop/systemd1/unit/dev_2dvdb_2edevice interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=514 reply_cookie=0 error=n/a
[  797.471173] f26.localdomain systemd[1]: Sent message type=signal sender=n/a destination=n/a object=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=UnitRemoved cookie=515 reply_cookie=0 error=n/a
[  797.472126] f26.localdomain systemd[1]: Sent message type=signal sender=n/a destination=n/a object=/org/freedesktop/systemd1/unit/mnt_2emount interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=516 reply_cookie=0 error=n/a
[  797.472974] f26.localdomain systemd[1]: Sent message type=signal sender=n/a destination=n/a object=/org/freedesktop/systemd1/unit/mnt_2emount interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=517 reply_cookie=0 error=n/a
[  797.473757] f26.localdomain systemd[1]: Sent message type=signal sender=n/a destination=n/a object=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=UnitRemoved cookie=518 reply_cookie=0 error=n/a
[  797.474531] f26.localdomain systemd[1]: Sent message type=signal sender=n/a destination=n/a object=/org/freedesktop/systemd1/unit/dev_2dvda1_2edevice interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=519 reply_cookie=0 error=n/a
[  797.475292] f26.localdomain systemd[1]: Sent message type=signal sender=n/a destination=n/a object=/org/freedesktop/systemd1/unit/dev_2dvda1_2edevice interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=520 reply_cookie=0 error=n/a
[  797.476047] f26.localdomain systemd[1]: Sent message type=signal sender=n/a destination=n/a object=/org/freedesktop/systemd1/unit/dev_2dmapper_2dfedora_5cx2droot_2edevice interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=521 reply_cookie=0 error=n/a
[  797.476808] f26.localdomain systemd[1]: Sent message type=signal sender=n/a destination=n/a object=/org/freedesktop/systemd1/unit/dev_2dmapper_2dfedora_5cx2droot_2edevice interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=522 reply_cookie=0 error=n/a
[  797.477766] f26.localdomain systemd-logind[716]: Got message type=signal sender=:1.0 destination=n/a object=/org/freedesktop/systemd1/unit/dev_2dvdb_2edevice interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=513 reply_cookie=0 error=n/a
[  797.478762] f26.localdomain systemd-logind[716]: Got message type=signal sender=:1.0 destination=n/a object=/org/freedesktop/systemd1/unit/dev_2dvdb_2edevice interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=514 reply_cookie=0 error=n/a
[  797.479729] f26.localdomain systemd-logind[716]: Got message type=signal sender=:1.0 destination=n/a object=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=UnitRemoved cookie=515 reply_cookie=0 error=n/a
[  797.480741] f26.localdomain systemd-logind[716]: Got message type=signal sender=:1.0 destination=n/a object=/org/freedesktop/systemd1/unit/mnt_2emount interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=516 reply_cookie=0 error=n/a
[  797.481702] f26.localdomain systemd-logind[716]: Got message type=signal sender=:1.0 destination=n/a object=/org/freedesktop/systemd1/unit/mnt_2emount interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=517 reply_cookie=0 error=n/a
[  797.482350] f26.localdomain systemd-logind[716]: Got message type=signal sender=:1.0 destination=n/a object=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=UnitRemoved cookie=518 reply_cookie=0 error=n/a
[  797.483280] f26.localdomain systemd-logind[716]: Got message type=signal sender=:1.0 destination=n/a object=/org/freedesktop/systemd1/unit/dev_2dvda1_2edevice interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=519 reply_cookie=0 error=n/a
[  797.483862] f26.localdomain systemd-logind[716]: Got message type=signal sender=:1.0 destination=n/a object=/org/freedesktop/systemd1/unit/dev_2dvda1_2edevice interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=520 reply_cookie=0 error=n/a
[  797.484765] f26.localdomain systemd-logind[716]: Got message type=signal sender=:1.0 destination=n/a object=/org/freedesktop/systemd1/unit/dev_2dmapper_2dfedora_5cx2droot_2edevice interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=521 reply_cookie=0 error=n/a
[  797.485534] f26.localdomain systemd-logind[716]: Got message type=signal sender=:1.0 destination=n/a object=/org/freedesktop/systemd1/unit/dev_2dmapper_2dfedora_5cx2droot_2edevice interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=522 reply_cookie=0 error=n/a
[  797.486348] f26.localdomain systemd[1182]: mnt.mount: Changed mounted -> dead
[  797.487358] f26.localdomain systemd[1182]: dev-vdb.device: Changed tentative -> dead
[  797.488487] f26.localdomain systemd[1182]: dev-vdb.device: Collecting.
[  797.489477] f26.localdomain systemd[1182]: mnt.mount: Collecting.
[  797.490059] f26.localdomain systemd[836]: mnt.mount: Changed mounted -> dead
[  797.490560] f26.localdomain systemd[836]: dev-vdb.device: Changed tentative -> dead
[  797.491030] f26.localdomain systemd[836]: dev-vdb.device: Collecting.
[  797.491611] f26.localdomain systemd[836]: mnt.mount: Collecting.
[  797.519450] f26.localdomain systemd-udevd[548]: seq 2105 queued, 'remove' 'bdi'
[  797.519798] f26.localdomain systemd-udevd[548]: Validate module index
[  797.519971] f26.localdomain systemd-udevd[548]: Check if link configuration needs reloading.
[  797.520187] f26.localdomain systemd-udevd[548]: seq 2105 forked new worker [1950]
[  797.520340] f26.localdomain systemd-udevd[1950]: seq 2105 running
[  797.520511] f26.localdomain systemd-udevd[1950]: passed device to netlink monitor 0x5580e6cf9590
[  797.520596] f26.localdomain systemd-udevd[1950]: seq 2105 processed
[  797.520677] f26.localdomain systemd-udevd[548]: cleanup idle workers
[  797.520822] f26.localdomain systemd-udevd[1950]: Unload module index
[  797.520904] f26.localdomain systemd-udevd[1950]: Unloaded link configuration context.
[  797.520983] f26.localdomain systemd-udevd[548]: worker [1950] exited



[-- Attachment #4: test2.log --]
[-- Type: text/x-log, Size: 11390 bytes --]

[  300.078978] f26.localdomain kernel: pci 0000:00:0b.0: [1af4:1001] type 00 class 0x010000
[  300.079071] f26.localdomain kernel: pci 0000:00:0b.0: reg 0x10: [io  0x0000-0x003f]
[  300.079101] f26.localdomain kernel: pci 0000:00:0b.0: reg 0x14: [mem 0x00000000-0x00000fff]
[  300.081905] f26.localdomain kernel: pci 0000:00:0b.0: reg 0x20: [mem 0x00000000-0x00003fff 64bit pref]
[  300.620744] f26.localdomain systemd-udevd[548]: seq 2097 queued, 'add' 'pci'
[  300.621232] f26.localdomain systemd[1]: systemd-udevd.service: Got notification message from PID 548 (WATCHDOG=1)
[  300.622532] f26.localdomain systemd-udevd[548]: Validate module index
[  300.622897] f26.localdomain systemd-udevd[548]: Check if link configuration needs reloading.
[  300.623174] f26.localdomain systemd-udevd[548]: seq 2097 forked new worker [1795]
[  300.623426] f26.localdomain systemd-udevd[1795]: seq 2097 running
[  300.623763] f26.localdomain systemd-udevd[1795]: IMPORT builtin 'hwdb' /usr/lib/udev/rules.d/50-udev-default.rules:15
[  300.085160] f26.localdomain kernel: pci 0000:00:0b.0: BAR 4: assigned [mem 0x80008000-0x8000bfff 64bit pref]
[  300.085221] f26.localdomain kernel: pci 0000:00:0b.0: BAR 1: assigned [mem 0x80005000-0x80005fff]
[  300.085238] f26.localdomain kernel: pci 0000:00:0b.0: BAR 0: assigned [io  0x1040-0x107f]
[  300.085385] f26.localdomain kernel: virtio-pci 0000:00:0b.0: enabling device (0000 -> 0003)
[  300.624948] f26.localdomain systemd-udevd[1795]: RUN 'kmod load $env{MODALIAS}' /usr/lib/udev/rules.d/80-drivers.rules:5
[  300.625215] f26.localdomain systemd-udevd[1795]: created db file '/run/udev/data/+pci:0000:00:0b.0' for '/devices/pci0000:00/0000:00:0b.0'
[  300.625443] f26.localdomain systemd-udevd[1795]: Execute 'load' 'pci:v00001AF4d00001001sv00001AF4sd00000002bc01sc00i00'
[  300.625667] f26.localdomain systemd-udevd[1795]: Inserted 'virtio_pci'
[  300.625902] f26.localdomain systemd-udevd[1795]: passed device to netlink monitor 0x5580e6cf9ac0
[  300.626175] f26.localdomain systemd-udevd[1795]: seq 2097 processed
[  300.627128] f26.localdomain systemd-udevd[548]: cleanup idle workers
[  300.627457] f26.localdomain systemd-udevd[1795]: Unload module index
[  300.627600] f26.localdomain systemd-udevd[1795]: Unloaded link configuration context.
[  300.627730] f26.localdomain systemd-udevd[548]: worker [1795] exited
[  300.657139] f26.localdomain systemd-udevd[548]: seq 2098 queued, 'add' 'virtio'
[  300.657822] f26.localdomain systemd-udevd[548]: seq 2098 forked new worker [1797]
[  300.658116] f26.localdomain systemd-udevd[1797]: seq 2098 running
[  300.658345] f26.localdomain systemd-udevd[548]: seq 2099 queued, 'add' 'bdi'
[  300.658561] f26.localdomain systemd-udevd[548]: seq 2099 forked new worker [1798]
[  300.658804] f26.localdomain systemd-udevd[1797]: IMPORT builtin 'hwdb' /usr/lib/udev/rules.d/50-udev-default.rules:15
[  300.659149] f26.localdomain systemd-udevd[1797]: IMPORT builtin 'hwdb' returned non-zero
[  300.659372] f26.localdomain systemd-udevd[1797]: RUN 'kmod load $env{MODALIAS}' /usr/lib/udev/rules.d/80-drivers.rules:5
[  300.659530] f26.localdomain systemd-udevd[1797]: Execute 'load' 'virtio:d00000002v00001AF4'
[  300.659666] f26.localdomain systemd-udevd[1797]: Inserted 'virtio_blk'
[  300.659800] f26.localdomain systemd-udevd[1797]: passed device to netlink monitor 0x5580e6cf9590
[  300.659936] f26.localdomain systemd-udevd[1797]: seq 2098 processed
[  300.660086] f26.localdomain systemd-udevd[548]: seq 2100 queued, 'add' 'block'
[  300.660223] f26.localdomain systemd-udevd[548]: passed 214 byte device to netlink monitor 0x5580e6d313a0
[  300.660408] f26.localdomain systemd-udevd[1797]: seq 2100 running
[  300.660545] f26.localdomain systemd-udevd[1797]: GROUP 6 /usr/lib/udev/rules.d/50-udev-default.rules:55
[  300.660695] f26.localdomain systemd-udevd[1798]: seq 2099 running
[  300.660894] f26.localdomain systemd-udevd[1798]: passed device to netlink monitor 0x5580e6d04470
[  300.661111] f26.localdomain systemd-udevd[1798]: seq 2099 processed
[  300.661314] f26.localdomain systemd-udevd[1797]: IMPORT builtin 'path_id' /usr/lib/udev/rules.d/60-persistent-storage.rules:66
[  300.661558] f26.localdomain systemd-udevd[1797]: LINK 'disk/by-path/pci-0000:00:0b.0' /usr/lib/udev/rules.d/60-persistent-storage.rules:67
[  300.661794] f26.localdomain systemd-udevd[1797]: LINK 'disk/by-path/virtio-pci-0000:00:0b.0' /usr/lib/udev/rules.d/60-persistent-storage.rules:71
[  300.661991] f26.localdomain systemd-udevd[1797]: IMPORT builtin 'blkid' /usr/lib/udev/rules.d/60-persistent-storage.rules:82
[  300.662242] f26.localdomain systemd-udevd[1797]: probe /dev/vdc raid offset=0
[  300.671661] f26.localdomain systemd-udevd[1797]: LINK 'disk/by-uuid/e5eb17cb-40d7-48c8-ab84-925f565edf93' /usr/lib/udev/rules.d/60-persistent-storage.rules:85
[  300.137200] f26.localdomain kernel: BTRFS: device label themirror devid 2 transid 9 /dev/vdc
[  300.672133] f26.localdomain systemd-udevd[1797]: LINK 'disk/by-label/themirror' /usr/lib/udev/rules.d/60-persistent-storage.rules:86
[  300.672379] f26.localdomain systemd-udevd[1797]: IMPORT builtin 'btrfs' /usr/lib/udev/rules.d/64-btrfs.rules:8
[  300.672600] f26.localdomain systemd-udevd[1797]: handling device node '/dev/vdc', devnum=b252:32, mode=0660, uid=0, gid=6
[  300.672812] f26.localdomain systemd-udevd[1797]: set permissions /dev/vdc, 060660, uid=0, gid=6
[  300.673100] f26.localdomain systemd-udevd[1797]: creating symlink '/dev/block/252:32' to '../vdc'
[  300.673334] f26.localdomain systemd-udevd[1797]: found 'b252:16' claiming '/run/udev/links/\x2fdisk\x2fby-label\x2fthemirror'
[  300.675673] f26.localdomain systemd-udevd[1797]: creating link '/dev/disk/by-label/themirror' to '/dev/vdc'
[  300.675967] f26.localdomain systemd-udevd[1797]: atomically replace '/dev/disk/by-label/themirror'
[  300.676193] f26.localdomain systemd-udevd[1797]: creating link '/dev/disk/by-path/pci-0000:00:0b.0' to '/dev/vdc'
[  300.676315] f26.localdomain systemd-udevd[1797]: creating symlink '/dev/disk/by-path/pci-0000:00:0b.0' to '../../vdc'
[  300.676428] f26.localdomain systemd-udevd[1797]: creating link '/dev/disk/by-path/virtio-pci-0000:00:0b.0' to '/dev/vdc'
[  300.676534] f26.localdomain systemd-udevd[1797]: creating symlink '/dev/disk/by-path/virtio-pci-0000:00:0b.0' to '../../vdc'
[  300.676608] f26.localdomain systemd-udevd[1797]: found 'b252:16' claiming '/run/udev/links/\x2fdisk\x2fby-uuid\x2fe5eb17cb-40d7-48c8-ab84-925f565edf93'
[  300.676681] f26.localdomain systemd-udevd[1797]: creating link '/dev/disk/by-uuid/e5eb17cb-40d7-48c8-ab84-925f565edf93' to '/dev/vdc'
[  300.676752] f26.localdomain systemd-udevd[1797]: atomically replace '/dev/disk/by-uuid/e5eb17cb-40d7-48c8-ab84-925f565edf93'
[  300.676820] f26.localdomain systemd-udevd[1797]: created db file '/run/udev/data/b252:32' for '/devices/pci0000:00/0000:00:0b.0/virtio6/block/vdc'
[  300.676895] f26.localdomain systemd-udevd[1797]: adding watch on '/dev/vdc'
[  300.676970] f26.localdomain systemd-udevd[1797]: created db file '/run/udev/data/b252:32' for '/devices/pci0000:00/0000:00:0b.0/virtio6/block/vdc'
[  300.677078] f26.localdomain systemd[1182]: dev-disk-by\x2duuid-e5eb17cb\x2d40d7\x2d48c8\x2dab84\x2d925f565edf93.device: Changed dead -> plugged
[  300.677312] f26.localdomain systemd[1182]: dev-disk-by\x2dpath-virtio\x2dpci\x2d0000:00:0b.0.device: Changed dead -> plugged
[  300.677499] f26.localdomain systemd[1182]: dev-disk-by\x2dpath-pci\x2d0000:00:0b.0.device: Changed dead -> plugged
[  300.677632] f26.localdomain systemd[1182]: dev-disk-by\x2dlabel-themirror.device: Changed dead -> plugged
[  300.677757] f26.localdomain systemd[1182]: dev-vdc.device: Changed dead -> plugged
[  300.677883] f26.localdomain systemd[1182]: sys-devices-pci0000:00-0000:00:0b.0-virtio6-block-vdc.device: Changed dead -> plugged
[  300.678020] f26.localdomain systemd[1]: dev-disk-by\x2duuid-e5eb17cb\x2d40d7\x2d48c8\x2dab84\x2d925f565edf93.device: Changed dead -> plugged
[  300.678298] f26.localdomain systemd[1]: dev-disk-by\x2dpath-virtio\x2dpci\x2d0000:00:0b.0.device: Changed dead -> plugged
[  300.678894] f26.localdomain systemd[1]: dev-disk-by\x2dpath-pci\x2d0000:00:0b.0.device: Changed dead -> plugged
[  300.679512] f26.localdomain systemd[1]: dev-disk-by\x2dlabel-themirror.device: Changed dead -> plugged
[  300.680172] f26.localdomain systemd[1]: dev-vdc.device: Changed dead -> plugged
[  300.680835] f26.localdomain systemd[1]: sys-devices-pci0000:00-0000:00:0b.0-virtio6-block-vdc.device: Changed dead -> plugged
[  300.681767] f26.localdomain systemd[1]: Sent message type=signal sender=n/a destination=n/a object=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=UnitNew cookie=426 reply_cookie=0 error=n/a
[  300.682320] f26.localdomain systemd[1]: Sent message type=signal sender=n/a destination=n/a object=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=UnitNew cookie=427 reply_cookie=0 error=n/a
[  300.682520] f26.localdomain systemd[1]: Sent message type=signal sender=n/a destination=n/a object=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=UnitNew cookie=428 reply_cookie=0 error=n/a
[  300.682636] f26.localdomain systemd[1]: Sent message type=signal sender=n/a destination=n/a object=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=UnitNew cookie=429 reply_cookie=0 error=n/a
[  300.682748] f26.localdomain systemd[1]: Sent message type=signal sender=n/a destination=n/a object=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=UnitNew cookie=430 reply_cookie=0 error=n/a
[  300.682854] f26.localdomain systemd[1]: Sent message type=signal sender=n/a destination=n/a object=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=UnitNew cookie=431 reply_cookie=0 error=n/a
[  300.682964] f26.localdomain systemd-udevd[1797]: passed device to netlink monitor 0x5580e6cf9590
[  300.683066] f26.localdomain systemd-udevd[1797]: seq 2100 processed
[  300.683150] f26.localdomain systemd[836]: dev-disk-by\x2duuid-e5eb17cb\x2d40d7\x2d48c8\x2dab84\x2d925f565edf93.device: Changed dead -> plugged
[  300.683353] f26.localdomain systemd[836]: dev-disk-by\x2dpath-virtio\x2dpci\x2d0000:00:0b.0.device: Changed dead -> plugged
[  300.683502] f26.localdomain systemd[836]: dev-disk-by\x2dpath-pci\x2d0000:00:0b.0.device: Changed dead -> plugged
[  300.683642] f26.localdomain systemd[836]: dev-disk-by\x2dlabel-themirror.device: Changed dead -> plugged
[  300.683778] f26.localdomain systemd-udevd[548]: cleanup idle workers
[  300.683936] f26.localdomain systemd[836]: dev-vdc.device: Changed dead -> plugged
[  300.684092] f26.localdomain systemd[836]: sys-devices-pci0000:00-0000:00:0b.0-virtio6-block-vdc.device: Changed dead -> plugged
[  300.684231] f26.localdomain systemd-udevd[1798]: Unload module index
[  300.684307] f26.localdomain systemd-udevd[1798]: Unloaded link configuration context.
[  300.684401] f26.localdomain systemd-udevd[548]: worker [1798] exited
[  300.684551] f26.localdomain systemd-udevd[548]: cleanup idle workers
[  300.684681] f26.localdomain systemd-udevd[1797]: Unload module index
[  300.684758] f26.localdomain systemd-udevd[1797]: Unloaded link configuration context.
[  300.684826] f26.localdomain systemd-udevd[548]: worker [1797] exited


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Can I see what device was used to mount btrfs?
  2017-05-03 18:43             ` Chris Murphy
  2017-05-03 21:19               ` Duncan
@ 2017-05-04  3:48               ` Andrei Borzenkov
  1 sibling, 0 replies; 17+ messages in thread
From: Andrei Borzenkov @ 2017-05-04  3:48 UTC (permalink / raw)
  To: Chris Murphy, Goffredo Baroncelli; +Cc: Kai Krakow, Btrfs BTRFS

On 03.05.2017 21:43, Chris Murphy wrote:
> If I understand the bug report correctly, the user specifies mounting
> by label which then systemd is converting into /dev/dm-0 (because it's
> a two LUKS devices Btrfs volume).
> 

No, that's not the problem.

The actual reason for the report is that systemd shows the device
which was apparently used to mount /foo (as shown in "systemctl
status foo.mount" and in /proc/self/mountinfo for /foo) as
non-existent. The "tentative" state is what systemd uses to represent
devices that do exist but were never announced to it (or were
announced with SYSTEMD_READY=0).
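
If you want to see what udev recorded for a device, udevadm can show
it. /dev/dm-0 here is just an example, and the output lines are what
I'd expect on the affected system, not copied from it:

  # udevadm info --query=property --name=/dev/dm-0 | grep -E 'BTRFS_READY|SYSTEMD_READY'
  ID_BTRFS_READY=0
  SYSTEMD_READY=0

ID_BTRFS_READY comes from the udev btrfs builtin; SYSTEMD_READY=0 is
what makes systemd treat the device as not plugged.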

To understand why this happens you need to know how systemd handles
btrfs mounts. A udev rule calls "btrfs device ready", and if that
reports the filesystem is not yet ready to be mounted, the rule sets
SYSTEMD_READY=0. When the final device that completes the btrfs
volume appears, it is announced to systemd. Usually this last device
also "wins" when the device aliases are created, so the LABEL/UUID
symlinks resolve to it. Once systemd sees this device it proceeds
with the mount.
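
For reference, this is roughly what /usr/lib/udev/rules.d/64-btrfs.rules
(the rule you can see running in the logs above) looks like - quoted
from memory, so take it as a sketch rather than the exact shipped file:

  SUBSYSTEM!="block", GOTO="btrfs_end"
  ACTION=="remove", GOTO="btrfs_end"
  ENV{ID_FS_TYPE}!="btrfs", GOTO="btrfs_end"

  # let the kernel know about this btrfs filesystem, and check if it is complete
  IMPORT{builtin}="btrfs ready $devnode"

  # mark the device as not ready to be used by the system
  ENV{ID_BTRFS_READY}=="0", ENV{SYSTEMD_READY}="0"

  LABEL="btrfs_end"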

Now, as we have seen, the device shown in /proc/self/mountinfo is
*not* necessarily the device that was actually used to mount the
filesystem. In the case of the bug report, all the symlinks point to
/dev/dm-1 but /proc/self/mountinfo shows /dev/dm-0. And /dev/dm-0
"does not exist" from systemd's point of view (when it was scanned,
the btrfs volume was not ready yet), hence the confusion.
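
You can reproduce the mismatch with the two-device filesystem from the
attached logs by comparing the symlink target with what the kernel
reports for the mount. Output here is illustrative (readlink is
coreutils, findmnt is util-linux):

  # readlink -f /dev/disk/by-label/themirror
  /dev/vdc
  # findmnt -no SOURCE /mnt
  /dev/vdb

vdc was scanned last, so it owns the shared symlinks, while the kernel
reports the lowest-devid device (vdb) in mountinfo.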

A related problem (which was "helpfully" closed as a duplicate
without understanding the real issue) is that the creation of those
symlinks is itself not deterministic. You can see it in test2.log
above: udev, while processing /dev/vdc, finds that b252:16 (/dev/vdb)
already claims the by-label and by-uuid links and atomically replaces
them, so whichever device happens to be processed last owns the
links.

> Why not convert the fstab mount-by-label request into a
> /dev/disk/by-uuid/ path, and then have systemd call mount with the
> -U/--uuid option rather than the block device?

It absolutely does not matter. Internally util-linux will convert it
to /dev/disk/by-xxx anyway. And systemd currently needs a device name
to wait for, so it has to resolve the label or UUID to one either
way.
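
For what it's worth, you can watch util-linux do that resolution
directly (illustrative output; which node it returns depends on which
device claimed the symlink last):

  # findfs LABEL=themirror
  /dev/vdc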

> I'd think this is better no matter what the file system is, but
> certainly with Btrfs and possibly ZFS. That way device discovery is
> not systemd's problem to hassle with.
> 
> 
> 
> Chris Murphy
> 
> 
> 
> 
> On Wed, May 3, 2017 at 11:05 AM, Goffredo Baroncelli <kreijack@inwind.it> wrote:
>> On 2017-05-02 22:15, Kai Krakow wrote:
>>>> For example, it would be possible to implement a sane check that
>>>> prevents mounting a btrfs filesystem if two devices expose the same
>>>> UUID...
>>> Ideally, the btrfs volume wouldn't even appear in /dev until it was
>>> assembled by udev. But apparently that's not the case, and I think
>>> this is where the problems come from. I wish btrfs device nodes that
>>> the mount command identifies as btrfs would not show up in /dev at
>>> all. Instead, btrfs would expose (probably through udev) a device
>>> node in /dev/btrfs/fs_identifier when it is ready.
>>
>>
>> And what if udev fails to assemble the devices (for example because not all the disks are available, or because there are two disks with the same UUID)?
>> And if the user can't access the disks, how could they solve the issue (e.g. two disks with the same UUID)?
>>
>> I think that udev should be taken out of the game of assembling the disks, for the following reasons:
>> 1) udev is not developed by the BTRFS community, where the btrfs knowledge is; there are a lot of corner cases which are not clear even to the btrfs developers; how could these cases be any clearer to the udev developers (who are indeed very smart guys)?
>> 2) I don't think that udev is flexible enough to handle all the cases (e.g. two disks with the same UUID, missing devices)
>> 3) udev works quite well at handling devices appearing; why should it be involved in assembling the filesystem?
>>
>>
>> BR
>> G.Baroncelli
>>
>>
>> --
>> gpg @keyserver.linux.it: Goffredo Baroncelli <kreijackATinwind.it>
>> Key fingerprint BBF5 1610 0B64 DAC6 5F7D  17B2 0EDA 9B37 8B82 E0B5
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 
> 
> 


^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2017-05-04  3:49 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-04-30  5:47 Can I see what device was used to mount btrfs? Andrei Borzenkov
2017-05-02  3:26 ` Anand Jain
2017-05-02 13:58 ` Adam Borowski
2017-05-02 14:19   ` Andrei Borzenkov
2017-05-02 18:49     ` Adam Borowski
2017-05-02 19:50       ` Goffredo Baroncelli
2017-05-02 20:15         ` Kai Krakow
2017-05-02 20:34           ` Adam Borowski
2017-05-03 11:32           ` Austin S. Hemmelgarn
2017-05-03 17:05           ` Goffredo Baroncelli
2017-05-03 18:43             ` Chris Murphy
2017-05-03 21:19               ` Duncan
2017-05-04  2:15                 ` Chris Murphy
2017-05-04  3:48               ` Andrei Borzenkov
2017-05-03 11:26         ` Austin S. Hemmelgarn
2017-05-03 18:12           ` Andrei Borzenkov
2017-05-03 18:53             ` Austin S. Hemmelgarn
