* Subvolumes cannot be mounted after raid1 conversion
@ 2016-05-03  8:52 Hasse Hagen Johansen
  2016-05-03  9:55 ` Hugo Mills
  0 siblings, 1 reply; 12+ messages in thread
From: Hasse Hagen Johansen @ 2016-05-03  8:52 UTC (permalink / raw)
  To: linux-btrfs

Hi

I have created a btrfs filesystem on a single physical disk and made 3 subvolumes, which I manually mounted and copied data to. Yesterday I added another disk to the filesystem and ran btrfs balance start -mconvert=raid1 -dconvert=raid1 /mnt/temp, where /mnt/temp is the top-level filesystem (not the subvolumes). After that finished, I cannot mount the individual subvolumes.

Can anyone explain how this is supposed to work? I can still see the files in the top-level filesystem, but it seems the subvolumes don't work after the raid1 conversion.
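
Roughly, the sequence was (from memory, so exact device names and
ordering may be off):

    # mkfs.btrfs /dev/sdd
    # mount /dev/sdd /mnt/temp
    # btrfs subvolume create /mnt/temp/music    (and two more subvolumes)
    # btrfs device add /dev/sde /mnt/temp
    # btrfs balance start -mconvert=raid1 -dconvert=raid1 /mnt/temp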

Best Regards 
Hasse
-- 
Sent from my phone with K9 Mail. Sorry if I am a bit brief.


* Re: Subvolumes cannot be mounted after raid1 conversion
  2016-05-03  8:52 Subvolumes cannot be mounted after raid1 conversion Hasse Hagen Johansen
@ 2016-05-03  9:55 ` Hugo Mills
  2016-05-03 10:24   ` Hasse Hagen Johansen
  0 siblings, 1 reply; 12+ messages in thread
From: Hugo Mills @ 2016-05-03  9:55 UTC (permalink / raw)
  To: Hasse Hagen Johansen; +Cc: linux-btrfs


On Tue, May 03, 2016 at 10:52:36AM +0200, Hasse Hagen Johansen wrote:
> Hi
> 
> I have created a btrfs filesystem on a single physical disk and made 3 subvolumes, which I manually mounted and copied data to. Yesterday I added another disk to the filesystem and ran btrfs balance start -mconvert=raid1 -dconvert=raid1 /mnt/temp, where /mnt/temp is the top-level filesystem (not the subvolumes). After that finished, I cannot mount the individual subvolumes.
> 
> Can anyone explain how this is supposed to work? I can still see the files in the top-level filesystem, but it seems the subvolumes don't work after the raid1 conversion.

   The balance should have made no difference.

   How are you trying to mount the subvols? (What commands/fstab config?)

   What errors do you get when trying to mount?

   Hugo.

-- 
Hugo Mills             | The Creature from the Black Logon
hugo@... carfax.org.uk |
http://carfax.org.uk/  |
PGP: E2AB1DE4          |



* Re: Subvolumes cannot be mounted after raid1 conversion
  2016-05-03  9:55 ` Hugo Mills
@ 2016-05-03 10:24   ` Hasse Hagen Johansen
  2016-05-03 10:27     ` Hugo Mills
  2016-05-04 18:12     ` Chris Murphy
  0 siblings, 2 replies; 12+ messages in thread
From: Hasse Hagen Johansen @ 2016-05-03 10:24 UTC (permalink / raw)
  To: linux-btrfs

Ok.  I can mount it manually just fine now using this command: sudo mount -t btrfs -o subvol=music /dev/sde /mnt/temp

But somehow I cannot mount it at /music anymore (and I just found out that this is what has been tripping me up).

I have also tried with this in fstab:

/dev/sde  /music          btrfs device=/dev/sdd,device=/dev/sde,subvol=music          0       2

But I don't get any errors either (and nothing in dmesg).

The exact thing I did: I had the subvols mounted, then mounted the top-level volume on /mnt/temp and ran the balance to convert to raid1. When it finished I unmounted /mnt/temp (the top-level), and then my 3 subvolumes were unmounted and I cannot mount them at the same mountpoints again. There are no errors, and it says they are not mounted when I try to umount them. So it seems they don't get mounted at all, without throwing an error.

I will look into it more thoroughly now that I can at least mount the subvols on other mountpoints than the original.

Best regards
Hasse
-- 
Sent from my phone with K9 Mail. Sorry if I am a bit brief.

On 3 May 2016 11.55.15 CEST, Hugo Mills <hugo@carfax.org.uk> wrote:
>On Tue, May 03, 2016 at 10:52:36AM +0200, Hasse Hagen Johansen wrote:
>> Hi
>> 
>> I have made a btrfs on a single physical disk and made 3 subvolumes
>which I manually mounted and copied data to. Yesterday I added another
>disk to the filesystem and ran btrfs balance start -mconvert=raid1
>-dconvert=raid1 /mnt/temp where /mnt/temp is the top-level filesystem (not
>the subvolumes). After that finished I cannot mount the individual
>subvolumes
>> 
>> can anyone explain how it is supposed to work. I can still see the
>files in the top-level filesystem but it seems subvolumes doesn't work
>after raid1 conversion
>
>   The balance should have made no difference.
>
> How are you trying to mount the subvols? (What commands/fstab config?)
>
>   What errors do you get when trying to mount?
>
>   Hugo.
>
>-- 
>Hugo Mills             | The Creature from the Black Logon
>hugo@... carfax.org.uk |
>http://carfax.org.uk/  |
>PGP: E2AB1DE4          |

-- 
Sent from my phone with K9 Mail. Sorry if I am a bit brief.


* Re: Subvolumes cannot be mounted after raid1 conversion
  2016-05-03 10:24   ` Hasse Hagen Johansen
@ 2016-05-03 10:27     ` Hugo Mills
  2016-05-03 11:13       ` Hasse Hagen Johansen
  2016-05-03 16:30       ` Duncan
  2016-05-04 18:12     ` Chris Murphy
  1 sibling, 2 replies; 12+ messages in thread
From: Hugo Mills @ 2016-05-03 10:27 UTC (permalink / raw)
  To: Hasse Hagen Johansen; +Cc: linux-btrfs


On Tue, May 03, 2016 at 12:24:52PM +0200, Hasse Hagen Johansen wrote:
> Ok.  I can mount it manually just fine now using this command: sudo mount -t btrfs -o subvol=music /dev/sde /mnt/temp
> 
> But somehow I cannot mount it at /music anymore (and I just found out that this is what has been tripping me up).
> 
> I have also tried with this in fstab:
> 
> /dev/sde  /music          btrfs device=/dev/sdd,device=/dev/sde,subvol=music          0       2
> 
> But I don't get any errors either (and nothing in dmesg).
> 
> The exact thing I did: I had the subvols mounted, then mounted the top-level volume on /mnt/temp and ran the balance to convert to raid1. When it finished I unmounted /mnt/temp (the top-level), and then my 3 subvolumes were unmounted and I cannot mount them at the same mountpoints again. There are no errors, and it says they are not mounted when I try to umount them. So it seems they don't get mounted at all, without throwing an error.
> 
> I will look into it more thoroughly now that I can at least mount the subvols on other mountpoints than the original.

   Given those symptoms (mount doesn't report errors, but no mount
happens), I would guess that your problem is with systemd. It has a
bug where it sometimes unmounts things immediately after you've
mounted them.
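
   If that's what is happening here, you should be able to see the
mount appear and then get torn down again by asking systemd about the
unit for the mountpoint, something like this (unit name assuming the
/music mountpoint):

      systemctl status music.mount
      journalctl -b -u music.mount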

   Hugo.

-- 
Hugo Mills             | "You know, the British have always been nice to mad
hugo@... carfax.org.uk | people."
http://carfax.org.uk/  |
PGP: E2AB1DE4          |                         Laura Jesson, Brief Encounter



* Re: Subvolumes cannot be mounted after raid1 conversion
  2016-05-03 10:27     ` Hugo Mills
@ 2016-05-03 11:13       ` Hasse Hagen Johansen
  2016-05-03 18:06         ` hasse
  2016-05-03 16:30       ` Duncan
  1 sibling, 1 reply; 12+ messages in thread
From: Hasse Hagen Johansen @ 2016-05-03 11:13 UTC (permalink / raw)
  To: Hugo Mills; +Cc: linux-btrfs

Ok. Thanks for the info. I will look into it some more and report back when I find out what is happening. Thanks for the help so far.


On 3 May 2016 12.27.46 CEST, Hugo Mills <hugo@carfax.org.uk> wrote:
>On Tue, May 03, 2016 at 12:24:52PM +0200, Hasse Hagen Johansen wrote:
>> Ok.  I can mount it manually just fine now using this command : sudo
>mount -t btrfs -o subvol=music /dev/sde /mnt/temp
>> 
>> But somehow I cannot mount it at /music anymore(and I just found out
>that is what has been tricking me) 
>> 
>> I have also tried with this in fstab 
>> 
>> /dev/sde  /music          btrfs
>device=/dev/sdd,device=/dev/sde,subvol=music          0       2
>> 
>> But I don't get any errors anyway (and no errors in dmesg) 
>> 
>> The exact thing I did was having the subvols mounted. Then mounted
>top-level volume on /mnt/temp. And then ran balance to convert to
>raid1.. When finished I umounted /mnt/temp (the top-level) and then my
>3 subvolumes was unmounted and a cannot mount them at the same
>mountpoints again... No errors and it says that it is not mounted when
>trying to umount them. So it seems they don't get mounted at all and
>without throwing an error
>> 
>> I will look into it more thoroughly now that I can at least mount the
>subvols on other mountpoints than the original 
>
>   Given those symptoms (mount doesn't report errors, but no mount
>happens), I would guess that your problem is with systemd. It has a
>bug where it sometimes unmounts things immediately after you've
>mounted them.
>
>   Hugo.
>
>-- 
>Hugo Mills             | "You know, the British have always been nice
>to mad
>hugo@... carfax.org.uk | people."
>http://carfax.org.uk/  |
>PGP: E2AB1DE4          |                         Laura Jesson, Brief
>Encounter

-- 
Sent from my phone with K9 Mail. Sorry if I am a bit brief.


* Re: Subvolumes cannot be mounted after raid1 conversion
  2016-05-03 10:27     ` Hugo Mills
  2016-05-03 11:13       ` Hasse Hagen Johansen
@ 2016-05-03 16:30       ` Duncan
  2016-05-03 18:31         ` hasse
  1 sibling, 1 reply; 12+ messages in thread
From: Duncan @ 2016-05-03 16:30 UTC (permalink / raw)
  To: linux-btrfs

Hugo Mills posted on Tue, 03 May 2016 10:27:46 +0000 as excerpted:

> Given those symptoms (mount doesn't report errors, but no mount
> happens), I would guess that your problem is with systemd. It has a bug
> where it sometimes unmounts things immediately after you've mounted
> them.

FWIW, I have some personal experience with that "bug" myself.  But I 
suspect the systemd devs might call it a "feature", not a bug.  Based on 
my own experience and understanding...

The mount does actually happen.  It's just that, as Hugo says, systemd 
immediately unmounts it, because...

While mount (the command) itself is not a systemd command, and works 
by making the appropriate kernel calls to accomplish the mount with the 
specified options (so systemd can't directly stop the actual mount)...

The systemd perspective looks rather different...

Systemd actually uses its own mount units (*.mount files) to track 
mounts, generating them dynamically from fstab for entries found there.  
As with all systemd unit files, there's documentation; start with 
systemd.mount.  If you're not familiar with systemd units, the 
systemd.unit, systemd-fstab-generator and systemd.device manpages, 
among others listed in systemd.mount's SEE ALSO section, will 
certainly help.

If you look in /run/systemd/generator/ (where /run is a tmpfs mounted by 
systemd itself, so these files are actually in memory only), you will see 
the mount units the systemd generator creates dynamically.  These files 
follow the format documented in the manpages discussed above, and it can 
be quite educational to read a few of these files and compare them to the 
fstab entries they were generated from to get some insights on how it all 
fits together.
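
As a purely illustrative example (not copied from a real system), the 
unit the generator would produce for the /music fstab line quoted 
earlier in this thread would look something like:

  # /run/systemd/generator/music.mount
  [Unit]
  SourcePath=/etc/fstab
  Documentation=man:fstab(5) man:systemd-fstab-generator(8)

  [Mount]
  What=/dev/sde
  Where=/music
  Type=btrfs
  Options=device=/dev/sdd,device=/dev/sde,subvol=music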

So far so good.  But where things get interesting is in the device units 
(*.device files) and how they interact with mounts and with udev.

The problem turns out to be that, between udev and systemd, systemd 
sometimes doesn't see as available the devices it believes a particular 
mount unit requires.  The kernel of course has its own
idea of what devices are available, and will mount or fail to mount the 
filesystem based on that, which is why the mount actually succeeds.  But 
if systemd thinks those devices aren't there, it will immediately unmount 
the filesystem on its own, fast enough that the only way you can 
generally tell it was mounted is by noting that mount didn't return an 
error and by dmesg output such as the skinny extents notation that will 
normally be printed if the filesystem was created with half-current btrfs-
progs on a half-current kernel.  Otherwise, it looks as if the filesystem 
was never mounted at all, because systemd umounts it so fast.

So the trick is to convince systemd (via udev) that all the devices are 
actually there, after which it will leave the filesystem mounted.
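
(If you want to see what systemd and udev currently believe about the 
devices, something along these lines works, using the device names 
mentioned earlier in the thread:

  systemctl list-units --type=device | grep sd
  systemctl status dev-sde.device
  udevadm info --query=all --name=/dev/sde

If a .device unit shows as inactive even though the kernel clearly has 
the device, that's exactly the mismatch I'm describing.)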

Here's where my experience breaks down a bit, as in my case, I was trying 
to boot to systemd rescue mode, to work on these filesystems, and found 
out that in rescue mode (the rescue target) systemd hadn't started some 
of the udev services, etc, and thus thought devices were missing, so it 
wouldn't mount the filesystems.  However, by booting to emergency mode 
(the emergency target), systemd would run the necessary pre-filesystem 
udev, etc services and do the mounts in fstab, and then I could umount 
and mount just fine, because udev was running and tracking the devices, 
so systemd knew they were there and would let manual mounts and umounts 
work properly.
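
(For reference, the two modes are selected either on the kernel command 
line or with systemctl from a running system, just so the target names 
are clear:

  systemd.unit=rescue.target        (kernel command line)
  systemd.unit=emergency.target

  # systemctl isolate rescue.target
  # systemctl isolate emergency.target
)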

What I have /not/ specifically figured out, however, is which of the 
services that emergency mode runs and rescue mode doesn't are the ones 
involved, other than being quite sure that udev is among them; and from 
there I've only a vague idea of how the specific device dependencies 
are worked out.

It's this last bit that will likely need to be tweaked a bit to get systemd 
aware of what device units need to be available for that mount unit, and 
aware that they /are/ actually available.

So yeah, as I said, from systemd's perspective, it's arguably a feature, 
not a bug.  You "just" need to adjust the udev, mount unit and device 
unit configuration, so that systemd via udev knows what devices are 
required and actually knows they're available, and all _should_ be well.
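
As one concrete illustration (not a recipe; whether it's enough in any 
given setup I can't say), the fstab-side knobs for this sort of thing 
are documented in systemd.mount and look like:

  /dev/sde  /music  btrfs  subvol=music,nofail,x-systemd.device-timeout=30s  0  0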

But the trick is in that "just"... which isn't likely to be quite so 
simple, especially if you have little existing knowledge and experience 
of systemd unit basics and how they work at, for example, the service 
and target unit levels.  As I already have some experience at that 
level, once I figured out that the mounts were actually happening and 
systemd was immediately umounting them, I already had a reasonable idea 
as to why, and what sort of thing I needed to do to fix it.  In my 
case, it was simply that I needed to boot to emergency instead of 
rescue mode for my "singleuser mode", and I haven't bothered looking 
into it further, but I'd have a head start on it if I did, as I'm 
already used to doing custom service, target and timer units.  Without 
that information, it'd definitely take a while longer to figure it all 
out, and I expect many people will simply give up and find some other 
solution that works for them, instead of trying to figure out why this 
one isn't working, and fixing it so it does.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



* Re: Subvolumes cannot be mounted after raid1 conversion
  2016-05-03 11:13       ` Hasse Hagen Johansen
@ 2016-05-03 18:06         ` hasse
  0 siblings, 0 replies; 12+ messages in thread
From: hasse @ 2016-05-03 18:06 UTC (permalink / raw)
  To: Hugo Mills; +Cc: linux-btrfs

On 03/05/16 at 13:13, Hasse Hagen Johansen wrote:

>>>
>>> The exact thing I did was having the subvols mounted. Then mounted
>> top-level volume on /mnt/temp. And then ran balance to convert to
>> raid1.. When finished I umounted /mnt/temp (the top-level) and then my
>> 3 subvolumes was unmounted and a cannot mount them at the same
>> mountpoints again... No errors and it says that it is not mounted when
>> trying to umount them. So it seems they don't get mounted at all and
>> without throwing an error
>>>
>>> I will look into it more thoroughly now that I can at least mount the
>> subvols on other mountpoints than the original 
>>
>>   Given those symptoms (mount doesn't report errors, but no mount
>> happens), I would guess that your problem is with systemd. It has a
>> bug where it sometimes unmounts things immediately after you've
>> mounted them.
>>

You were right. Looking more into it (at home at my computer rather
than on my phone), I found this in /var/log/syslog:

May  3 19:57:23 pris systemd[1]: books.mount: Unit is bound to inactive
unit dev-Data\x2dVG-books.device. Stopping, too.
May  3 19:57:23 pris systemd[1]: Unmounting /books...
May  3 19:57:25 pris systemd[1]: Unmounted /books.
May  3 19:57:25 pris systemd[1]: books.mount: Unit entered failed state.

So it is indeed systemd... great. It seems it is confused because the
mountpoints which I have now moved to btrfs were earlier using LVM. I
will try telling systemd to forget the old unit.

Thanks for the help



* Re: Subvolumes cannot be mounted after raid1 conversion
  2016-05-03 16:30       ` Duncan
@ 2016-05-03 18:31         ` hasse
  2016-05-03 18:38           ` hasse
  0 siblings, 1 reply; 12+ messages in thread
From: hasse @ 2016-05-03 18:31 UTC (permalink / raw)
  To: Duncan, linux-btrfs

On 03/05/16 at 18:30, Duncan wrote:
> Hugo Mills posted on Tue, 03 May 2016 10:27:46 +0000 as excerpted:
> 
>> Given those symptoms (mount doesn't report errors, but no mount
>> happens), I would guess that your problem is with systemd. It has a bug
>> where it sometimes unmounts things immediately after you've mounted
>> them.
> 
> FWIW, I have some personal experience with that "bug" myself.  But I 
> suspect the systemd devs might call it a "feature", not a bug.  Based on 
> my own experience and understanding...
> 
> The mount does actually happen.  It's just that, as Hugo says, systemd 
> immediately unmounts it, because...
> 
> While mount (the command) itself is not a systemd command, and it works 
> by making the appropriate kernel calls to accomplish the mount with the 
> specified options, so systemd can't directly stop the actual mount...
> 
> The systemd perspective looks rather different...
> 
> Systemd actually uses its own mount units (*.mount files) to track 
> mounts, generating them dynamically from fstab for entries found there.  
> As with all systemd unit files, there's documentation; start with 
> systemd.mount.  If you're not familiar with systemd units, systemd.unit,  
> systemd-fstab-generator and systemd.device manpages, among others listed 
> in systemd.mount's SEE ALSO section, will certainly help.
> 
> If you look in /run/systemd/generator/ (where /run is a tmpfs mounted by 
> systemd itself, so these files are actually in memory only), you will see 
> the mount units the systemd generator creates dynamically.  These files 
> follow the format documented in the manpages discussed above, and it can 
> be quite educational to read a few of these files and compare them to the 
> fstab entries they were generated from to get some insights on how it all 
> fits together.
> 
> So far so good.  But where things get interesting is in the device units 
> (*.device files) and how they interact with mounts and with udev.
> 
> The problem turns out to be that between udev and systemd, sometimes 
> systemd doesn't see the required devices available that it believes are 
> required by a particular mount unit.  The kernel of course has its own 
> idea of what devices are available, and will mount or fail to mount the 
> filesystem based on that, which is why the mount actually succeeds.  But 
> if systemd thinks those devices aren't there, it will immediately unmount 
> the filesystem on its own, fast enough that the only way you can 
> generally tell it was mounted is by noting that mount didn't return an 
> error and by dmesg output such as the skinny extents notation that will 
> normally be printed if the filesystem was created with half-current btrfs-
> progs on a half-current kernel.  Otherwise, it looks as if the filesystem 
> was never mounted at all, because systemd umounts it so fast.
> 
> So the trick is to convince systemd (via udev) that all the devices are 
> actually there, after which it will leave the filesystem mounted.
> 
> Here's where my experience breaks down a bit, as in my case, I was trying 
> to boot to systemd rescue mode, to work on these filesystems, and found 
> out that in rescue mode (the rescue target) systemd hadn't started some 
> of the udev services, etc, and thus thought devices were missing, so it 
> wouldn't mount the filesystems.  However, by booting to emergency mode 
> (the emergency target), systemd would run the necessary pre-filesystem 
> udev, etc services and do the mounts in fstab, and then I could umount 
> and mount just fine, because udev was running and tracking the devices, 
> so systemd knew they were there and would let manual mounts and umounts 
> work properly.
> 
> What I have /not/ specifically figured out, however, is which specific 
> services that emergency mode runs that rescue mode doesn't, are involved, 
> other than I'm quite sure that udev's involved, and of course from there 
> I've only a vague idea how the specific device dependencies are worked 
> out.
> 
> It's this last bit that will likely need tweaked a bit to get systemd 
> aware of what device units need to be available for that mount unit, and 
> aware that they /are/ actually available.
> 
> So yeah, as I said, from systemd's perspective, it's arguably a feature, 
> not a bug.  You "just" need to adjust the udev, mount unit and device 
> unit configuration, so that systemd via udev knows what devices are 
> required and actually knows they're available, and all _should_ be well.
> 
> But the trick is in that "just"... which isn't likely to be quite so 
> simple, especially if you have little existing knowledge and experience 
> of systemd unit basics and how they work on for example the service and 
> target unit levels.  As I already have some experience at that level, 
> once I figured out that the mounts were actually happening and systemd 
> was immediately umounting, I already had a reasonable idea as to why, and 
> what sort of thing I needed to do to fix it.  In my case, it was simply 
> that I needed to boot to emergency instead of rescue mode for my 
> "singleuser mode", and I haven't bothered looking into it further, but 
> I'd have a head-start on it if I did as I'm already used to doing custom 
> service, target and timer units.  Without that information, it'd 
> definitely take awhile longer to figure it all out, and I expect many 
> people will simply give up and find some other solution that works for 
> them, instead of trying to figure out why this one isn't, and fixing it 
> so it does.
> 

Thanks for the very extensive explanation. I did find out that systemd
did the umount by looking in /var/log/syslog. Now I will try to fix up
the systemd unit files, and I will try not to boot into either rescue
or emergency mode; I believe it is fixable without that. It seems
systemd thinks my old LVM setup (which has now been moved to btrfs)
should still be used, and it immediately unmounts the filesystem again
because it cannot find the "LVM device". But I must say I find it a
little bit stupid that systemd doesn't monitor /etc/fstab, now that it
generates its own unit files behind the user's back (I have actually
seen the *.mount units etc. when looking at running units... but I have
never looked into how it works, so I would now have to study that).
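
I guess a good starting point is to look at what systemd has actually
generated for these mountpoints, with something like:

    # systemctl cat music.mount
    # systemctl list-units --type=mount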

Best Regards
Hasse


* Re: Subvolumes cannot be mounted after raid1 conversion
  2016-05-03 18:31         ` hasse
@ 2016-05-03 18:38           ` hasse
  2016-05-04  9:54             ` Duncan
  0 siblings, 1 reply; 12+ messages in thread
From: hasse @ 2016-05-03 18:38 UTC (permalink / raw)
  To: Duncan, linux-btrfs

On 03/05/16 at 20:31, hasse@hagenjohansen.dk wrote:
> Den 03/05/16 kl. 18:30 skrev Duncan:
>> Hugo Mills posted on Tue, 03 May 2016 10:27:46 +0000 as excerpted:
>>
>>> Given those symptoms (mount doesn't report errors, but no mount
>>> happens), I would guess that your problem is with systemd. It has a bug
>>> where it sometimes unmounts things immediately after you've mounted
>>> them.
>>
>> FWIW, I have some personal experience with that "bug" myself.  But I 
>> suspect the systemd devs might call it a "feature", not a bug.  Based on 
>> my own experience and understanding...
.
.
.

> Thanks for the very extensive explanation. I did find out systemd did
> the umount by looking in /var/log/syslog. Now I wil try to fixup systemd
> unit files and I will try not boot in either rescue or emergency mode. I
> believe it is fixable without it. It seems systemd thinks it is my old
> LVM setup(which now are moved to btrfs) which should be used and
> immediately unmounts the filesystem again because it cannot find the
> "LVM device". But I must say I find it a little bit stupid that systemd
> doesn't monitor /etc/fstab now that it behind the users back are
> generating its own unit files (I have actually seen the mount.xxx units
> etc when looking at running units...but have never looked into how it
> works...so I would now have to study that)
>
I have found the easy solution. From
https://bbs.archlinux.org/viewtopic.php?id=192991

    First:

    # systemctl daemon-reload

    followed by:

    # systemctl restart remote-fs.target

    or

    # systemctl restart local-fs.target

    depending on filesystem type

I still think it is pretty stupid that systemd doesn't monitor
/etc/fstab (when choosing to create its own secret files) :)


* Re: Subvolumes cannot be mounted after raid1 conversion
  2016-05-03 18:38           ` hasse
@ 2016-05-04  9:54             ` Duncan
  0 siblings, 0 replies; 12+ messages in thread
From: Duncan @ 2016-05-04  9:54 UTC (permalink / raw)
  To: linux-btrfs

hasse posted on Tue, 03 May 2016 20:38:14 +0200 as excerpted:

> I have found the easy solution. From
> https://bbs.archlinux.org/viewtopic.php?id=192991
> 
>     First:
> 
>     # systemctl daemon-reload
> 
>     followed by:
> 
>     # systemctl restart remote-fs.target
> 
>     or
> 
>     # systemctl restart local-fs.target
> 
>     depending on filesystem type
> 
> I still think it is pretty stupid that systemd doesn't monitor
> /etc/fstab (when choosing to create its own secret files) :)

Indeed.  And thanks for that hint about what to restart.  It will likely 
save me some time when I look into things in a bit more detail, here. =:^)

FWIW, my pet gripe about systemd's handling of filesystems via fstab 
is...  I have several levels of backup mountable from fstab: for 
example, the first backup root, which is also btrfs raid1 on different 
partitions on the same pair of ssds as my primary btrfs raid1 root, and 
second and third backup roots that are still reiserfs on spinning rust.  
Similarly for home, the media partition, the distro updates tree and 
(since it's gentoo) the build machinery partition, etc.

But while I have different mountpoints for the backups based on whether 
they're of root/home/media/packages, all root backups, for instance, 
share the same root-backup mountpoint, as there's little reason for me to 
mount more than one backup of a particular type at a time (and a big 
reason not to, since in theory that endangers more than one at a time 
should something go wrong).  I mount by label and all the fstab lines 
have different LABEL= first-fields, so it's fine, and has worked well for 
me for years now.
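
Schematically, with labels and mountpoint invented purely for 
illustration, the relevant fstab lines look something like this, with 
several entries sharing one mountpoint:

  LABEL=rootbak1  /mnt/root-backup  btrfs     noauto  0 0
  LABEL=rootbak2  /mnt/root-backup  reiserfs  noauto  0 0
  LABEL=rootbak3  /mnt/root-backup  reiserfs  noauto  0 0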

But at boot and every time I run systemctl daemon-reload or daemon-reexec 
(the latter of which I often run after updates that affected some library 
that systemd uses, so it's not running the old version and I can 
ultimately remount my / readonly again, as it is by default and except 
when updating), systemd complains about it, because its *.mount unit 
files are named after the mountpoint, and the multiple fstab entries for 
the same mountpoint mean it tries to create multiple identically named 
mount unit files.

The systemd devs are very aware of the problem as I've seen discussion of 
it elsewhere, but their policy is simply only one fstab entry per 
mountpoint, and they see the failure to support more as a feature, not a 
bug.

I've always thought that if I got bored I might post a message on the 
systemd discussion list and ask what their recommendation is for people 
who don't want to uselessly track all those extra mountpoints for 
multiple levels of backup, but still want the convenience of having the 
entries in fstab so that feeding mount just the LABEL= works (as does 
the mountpoint, for mounting just the first/default entry), just to see 
what sort of answer they'd give.  But I know better than to think that 
I'd get them to change the policy and call systemd's failure to support 
multiple fstab entries for a single mountpoint a bug, instead of a 
feature.

Oh, well...  As long as it still works in practice, as it has so far, 
systemd complaining every time I reload it isn't going to hurt me.

But if it stops working, get out the torches and pitchforks! =:^)

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



* Re: Subvolumes cannot be mounted after raid1 conversion
  2016-05-03 10:24   ` Hasse Hagen Johansen
  2016-05-03 10:27     ` Hugo Mills
@ 2016-05-04 18:12     ` Chris Murphy
  2016-05-04 19:00       ` Hasse Hagen Johansen
  1 sibling, 1 reply; 12+ messages in thread
From: Chris Murphy @ 2016-05-04 18:12 UTC (permalink / raw)
  To: Hasse Hagen Johansen; +Cc: Btrfs BTRFS

On Tue, May 3, 2016 at 4:24 AM, Hasse Hagen Johansen
<hasse@hagenjohansen.dk> wrote:
> Ok.  I can mount it manually just fine now using this command : sudo mount -t btrfs -o subvol=music /dev/sde /mnt/temp
>
> But somehow I cannot mount it at /music anymore(and I just found out that is what has been tricking me)
>
> I have also tried with this in fstab
>
> /dev/sde  /music          btrfs device=/dev/sdd,device=/dev/sde,subvol=music          0       2

I suggest you use the volume UUID, and not a /dev/ designation, for
anything; /dev/ naming is not reliable, so for all we know you're
forcing systemd to explicitly mount the wrong devices because their
/dev/ letters have changed. Then you can drop device=, and you can also
make fs_passno 0 instead of 2, since it doesn't apply to btrfs and just
makes systemd run fsck.btrfs, which does nothing.
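
Something like this, with the actual UUID (as shown by 'blkid' or
'btrfs filesystem show') filled in; the value below is just a
placeholder:

  UUID=<filesystem-uuid>  /music  btrfs  subvol=music  0  0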

If that doesn't work, then add the boot parameter
systemd.log_level=debug and, after startup, put the output of
'journalctl -b -o short-monotonic > journal.log' up somewhere for
others to look at. The log will be much larger than usual and will make
it easier to find out where things are getting confused.



-- 
Chris Murphy


* Re: Subvolumes cannot be mounted after raid1 conversion
  2016-05-04 18:12     ` Chris Murphy
@ 2016-05-04 19:00       ` Hasse Hagen Johansen
  0 siblings, 0 replies; 12+ messages in thread
From: Hasse Hagen Johansen @ 2016-05-04 19:00 UTC (permalink / raw)
  To: Chris Murphy; +Cc: Btrfs BTRFS

On 04/05/16 at 20:12, Chris Murphy wrote:
> On Tue, May 3, 2016 at 4:24 AM, Hasse Hagen Johansen
> <hasse@hagenjohansen.dk> wrote:
>> Ok.  I can mount it manually just fine now using this command : sudo mount -t btrfs -o subvol=music /dev/sde /mnt/temp
>>
>> But somehow I cannot mount it at /music anymore(and I just found out that is what has been tricking me)
>>
>> I have also tried with this in fstab
>>
>> /dev/sde  /music          btrfs device=/dev/sdd,device=/dev/sde,subvol=music          0       2
> 
> I suggest you use volume UUID, and not /dev/ designation for anything,
> it's not reliable so for all we know you're forcing systemd to
> explicitly mount the wrong devices because their /dev/ letters are
> different. Then you can drop device= and then you can also make
> fs_passno 0 instead of 2, since it doesn't apply and just makes
> systemd run fsck.btrfs which then does nothing.
> 
> If that doesn't work then you should add boot parameter
> systemd.log_level=debug and then after startup put the output from
> 'journalctl -b -o short-monotonic > journal.log' up somewhere for
> others to look at. It will be much larger than usual and will make it
> easier to find out where things are getting confused.
> 
> 
> 
Hi

Actually, I did change it back from using the UUID to /dev/sde when it
didn't work. The problem was systemd unmounting right after I manually
mounted, because it does that when you have changed the devices used
for a mountpoint - it only reads /etc/fstab at boot to recreate its
unit files (e.g. the music.mount unit).

I also removed the device= entries, but this was a configuration I knew
had worked earlier, so I reverted to it when I had problems mounting
(or rather, problems with systemd unmounting it right after I manually
mounted it).

So the problem has been solved, and it is somewhat of a misbehaviour in
systemd. I agree that fs_passno 2 doesn't make sense with btrfs; it is
a leftover from the ext4 filesystem that was mounted earlier on that
mountpoint.

