* systemd : Timed out waiting for device dev-disk-by…
@ 2015-07-24 18:41 Vincent Olivier
  2015-07-24 18:43 ` Vincent Olivier
                   ` (4 more replies)
  0 siblings, 5 replies; 12+ messages in thread
From: Vincent Olivier @ 2015-07-24 18:41 UTC (permalink / raw)
  To: linux-btrfs

Hi,

(Sorry if this gets sent twice: one of my mail relays is misbehaving today)

About 50% of the time when booting, the system goes into safe mode because my 12x 4TB RAID10 btrfs takes too long to mount from fstab.

When I comment it out from fstab and mount it manually, it’s all good.

I don’t like that. Is there a way to increase the timeout or something?

Thanks,

Vincent


^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: systemd : Timed out waiting for device dev-disk-by…
  2015-07-24 18:41 systemd : Timed out waiting for device dev-disk-by… Vincent Olivier
@ 2015-07-24 18:43 ` Vincent Olivier
  2015-07-24 19:26 ` Tomasz Torcz
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 12+ messages in thread
From: Vincent Olivier @ 2015-07-24 18:43 UTC (permalink / raw)
  To: linux-btrfs

I forgot to say: I'm on CentOS 7 with kernel 4.1.3, but it's been doing this since kernel 4.0, which is when I started using btrfs.

thanks

-----Original Message-----
From: "Vincent Olivier" <vincent@up4.com>
Sent: Friday, July 24, 2015 14:41
To: linux-btrfs@vger.kernel.org
Subject: systemd : Timed out waiting for device dev-disk-by…

Hi,

(Sorry if this gets sent twice: one of my mail relays is misbehaving today)

About 50% of the time when booting, the system goes into safe mode because my 12x 4TB RAID10 btrfs takes too long to mount from fstab.

When I comment it out from fstab and mount it manually, it’s all good.

I don’t like that. Is there a way to increase the timeout or something?

Thanks,

Vincent




* Re: systemd : Timed out waiting for device dev-disk-by…
  2015-07-24 18:41 systemd : Timed out waiting for device dev-disk-by… Vincent Olivier
  2015-07-24 18:43 ` Vincent Olivier
@ 2015-07-24 19:26 ` Tomasz Torcz
  2015-07-24 19:27 ` Chris Murphy
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 12+ messages in thread
From: Tomasz Torcz @ 2015-07-24 19:26 UTC (permalink / raw)
  To: linux-btrfs

On Fri, Jul 24, 2015 at 02:41:17PM -0400, Vincent Olivier wrote:
> Hi,
> 
> (Sorry if this gets sent twice : one of my mail relay is misbehaving today)
> 
> 50% of the time when booting, the system go in safe mode because my 12x 4TB RAID10 btrfs is taking too long to mount from fstab.
> 
> When I comment it out from fstab and mount it manually, it’s all good.
> 
> I don’t like that. Is there a way to increase the timer or something ?

  man systemd.mount, search for "x-systemd.device-timeout=".

  But maybe that's also a hint for the developers: long mount times for btrfs
are quite common – it would be cool if they could be reduced.
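
  For reference, the option goes in the fstab mount-options field, something
like this (the label, mountpoint and timeout value here are made-up
placeholders, not from Vincent's setup):

-----------------------------------------
# /etc/fstab -- illustrative entry only
LABEL=bigpool  /mnt/bigpool  btrfs  defaults,x-systemd.device-timeout=300s  0  0
-----------------------------------------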

-- 
Tomasz Torcz            There exists no separation between gods and men:
xmpp: zdzichubg@chrome.pl   one blends softly casual into the other.



* Re: systemd : Timed out waiting for device dev-disk-by…
  2015-07-24 18:41 systemd : Timed out waiting for device dev-disk-by… Vincent Olivier
  2015-07-24 18:43 ` Vincent Olivier
  2015-07-24 19:26 ` Tomasz Torcz
@ 2015-07-24 19:27 ` Chris Murphy
       [not found]   ` <1437766634.808316031@apps.rackspace.com>
  2015-07-26 20:39 ` Philip Seeger
  2016-03-25  7:41 ` Qu Wenruo
  4 siblings, 1 reply; 12+ messages in thread
From: Chris Murphy @ 2015-07-24 19:27 UTC (permalink / raw)
  To: Btrfs BTRFS

I'm not sure if it's a systemd bug, a udev bug, a btrfs device scan
bug, or simply a fast boot combined with slow spin-up of the drives in
the Btrfs volume.

This system doesn't boot from this 12 disk Btrfs, correct?

You could try changing fstab mount options to include:
noauto,x-systemd.automount


I do this for the EFI System Partition at /boot/efi because I don't
want it mounted unless something accesses that path – only at that
time does it get mounted. Your case sounds similar: just don't
mount until accessed, which wouldn't happen until basic.target is
finished anyway (?) if it's not a boot drive.
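
As a sketch, the fstab entry would then look something like this (the
label and mountpoint are placeholders, not Vincent's actual values):

-----------------------------------------
# /etc/fstab -- sketch; LABEL and mountpoint are hypothetical
LABEL=bigpool  /mnt/bigpool  btrfs  noauto,x-systemd.automount  0  0
-----------------------------------------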


http://www.freedesktop.org/software/systemd/man/systemd.mount.html


Chris Murphy


* Re: systemd : Timed out waiting for device dev-disk-by…
       [not found]   ` <1437766634.808316031@apps.rackspace.com>
@ 2015-07-24 19:49     ` Chris Murphy
  0 siblings, 0 replies; 12+ messages in thread
From: Chris Murphy @ 2015-07-24 19:49 UTC (permalink / raw)
  To: Btrfs BTRFS

On Fri, Jul 24, 2015 at 1:37 PM, Vincent Olivier <vincent@up4.com> wrote:
> On Jul 24, 2015, at 3:27 PM, Chris Murphy <lists@colorremedies.com> wrote:
> I'm not sure if its a systemd bug, a udev bug, or a btrfs device scan
> bug, or a fast boot with slow spin up of the drives in the Btrfs.
>
> This system doesn't boot from this 12 disk Btrfs, correct?
>
> Most exact, yes.
>
>
>
>
> You could try changing fstab mount options to include:
> noauto,x-systemd.automount
>
> I did and it worked. Isn’t it a tad ugly though ?

No. The noauto is appropriate because you don't want this volume to
cause basic.target to fail. By using noauto, the volume isn't added to
local-fs.target.

You could experiment with x-systemd.device-timeout=, but that's just
going to delay your boot while it waits for the device, and I'm not
sure it'll actually work. You may have to iterate.

> Mount time is very close to 10 seconds which is quite high I think, no ?

Well, that's a Btrfs thing, and it sounds like right now that's normal
for a volume of this size. Since it has to find all the devices, read
their superblocks, and make sure the fs is minimally consistent, it
doesn't sound too bad.



-- 
Chris Murphy


* Re: systemd : Timed out waiting for device dev-disk-by…
  2015-07-24 18:41 systemd : Timed out waiting for device dev-disk-by… Vincent Olivier
                   ` (2 preceding siblings ...)
  2015-07-24 19:27 ` Chris Murphy
@ 2015-07-26 20:39 ` Philip Seeger
  2015-07-27  5:20   ` Duncan
  2016-03-25  7:41 ` Qu Wenruo
  4 siblings, 1 reply; 12+ messages in thread
From: Philip Seeger @ 2015-07-26 20:39 UTC (permalink / raw)
  To: linux-btrfs

Hi,

> 50% of the time when booting, the system go in safe mode because my 12x 4TB RAID10 btrfs is taking too long to mount from fstab.

This won't help, but I've seen this exact behavior too (some time ago).
Except that it wasn't 50% of the time that it didn't work – more like
almost every time. Commenting out the fstab entry "fixed" it; mounting
using a cronjob (@reboot) worked without a problem.

(As far as I remember, options like x-systemd.device-timeout didn't 
change anything.)

If someone has the answer, I'd be interested too.


Philip


* Re: systemd : Timed out waiting for device dev-disk-by…
  2015-07-26 20:39 ` Philip Seeger
@ 2015-07-27  5:20   ` Duncan
  2015-07-27 19:16     ` Philip Seeger
  2015-07-30 11:22     ` Rich Freeman
  0 siblings, 2 replies; 12+ messages in thread
From: Duncan @ 2015-07-27  5:20 UTC (permalink / raw)
  To: linux-btrfs

Philip Seeger posted on Sun, 26 Jul 2015 22:39:04 +0200 as excerpted:

> Hi,
> 
>> 50% of the time when booting, the system go in safe mode because my 12x
>> 4TB RAID10 btrfs is taking too long to mount from fstab.
> 
> This won't help, but I've seen this exact behavior too (some time ago).
> Except that it wasn't 50% that it didn't work, more like almost
> everytime.
> Commenting out the fstab entry "fixed" it, mounting using a cronjob
> (@reboot) worked without a problem.
> 
> (As far as I remember, options like x-systemd.device-timeout didn't
> change anything.)
> 
> If someone has the answer, I'd be interested too.

You mean something like a custom systemd *.service unit file?  That's 
what I'd do here. =:^)

Were I to have that issue here[1], I'd use the usual fstab entry, but 
I'd set the noauto option in fstab, so it doesn't get mounted with the 
rest of the partitions.  The reason to still have it in fstab is so I can 
easily set and track mount options, without having to feed them to mount 
on the commandline.  FWIW, I've been using entries with noauto in fstab 
that way, for more than a decade now, since I switched from MS when they 
jumped the "we demand the right to authorize you on your own system" 
shark with eXPrivacy, so _waaayyyy_ before systemd, and I still consider 
myself a relative Linux/Unix newbie.  That's the way it has always 
worked, and systemd doesn't change that.

Now, all my noauto filesystems I actually mount only when I'm going to 
use them, then umount them afterward.[2]  So I don't actually want them 
mounted at boot anyway.

But if I had the issue of mounts taking too long for normal processing 
at boot, yet still wanted them mounted more or less at boot so they 
were ready when I wanted to use them, I'd do what I've done for a few 
other custom services: set up my own custom systemd unit for them, with 
systemd options arranged so it started either with or after everything 
else, but the system was considered booted before it actually finished, 
so I didn't have to wait for it to finish to log in and start working.

With the mount options, including noauto, set in fstab, I'd simply set 
up a service unit that runs mount as its action, feeding mount a LABEL=, 
the mountpoint, the devicepath, or whatever else I used to refer to the 
device, but letting it get the options and the other path (mountpoint or 
devicepath, the one I didn't feed it) from fstab, based on the one I did 
feed it.  IOW, the mount command would be basically the same one I'd use 
to mount a filesystem listed in fstab manually, only here it'd be set up 
as the operative command in a custom systemd service file.

Something like this (untested for mount, but based on a non-mount custom 
service unit I have):

Filename: /etc/systemd/system/local.boot.mount.bigpool.service

(Here, I'd name it jed.boot.mount.whatever.service, jed being my initials 
and thus a very common "custom config" file identifier, .service of 
course to identify it as a systems service, and /etc/systemd/system/ is 
the place such custom services normally go.)


File contents (between ---- delimiters):

-----------------------------------------
[Unit]
Description=Local boottime bigpool mounting
After=multi-user.target graphical.target

[Service]
Type=idle
RemainAfterExit=yes
ExecStart=/bin/mount LABEL=whatever
TimeoutStartSec=0

[Install]
WantedBy=multi-user.target
-----------------------------------------

Explanation, Unit section:

Description is what systemd will say it's starting/started.

After= is optional and tells it to run /after/ these targets.  multi-
user.target is the normal CLI target, for CLI login and servers, 
graphical.target is the normal graphical login target (*DM).  Thus, this 
says wait until the system is otherwise up, before running this service.

If you want systemd to wait until the system is otherwise up to run the 
service, use an After= similar to my setting above.  Do this if you want 
as fast a boot as possible and don't really consider this service part of 
boot but just want the filesystem already mounted when you need it, and/
or if you don't care that it takes a bit longer to actually process this 
service.

If, OTOH, running this service won't interfere with starting other 
services (say the filesystem is all separate physical devices, not 
partitions on the same devices already busy starting other services), and 
you want it to start as soon as possible so it runs in parallel with 
the rest of the boot process, omit the After=.

Service section:

Type=idle is optional and should have a similar effect to the after= line 
in the unit section.  It'll wait until all other jobs are dispatched 
before this one runs.  However, I believe this is a fairly new service 
section option (I'm running systemd 222), and I've not used it before, 
while I have used the combination of after=multi-user.target in the unit 
section, along with wantedby=multi-user.target in the install section, to 
get what should be a similar effect.

If type= isn't set, it defaults to type=simple, which is otherwise what 
we want here.  The systemd.service manpage has more info on this and 
other service section options.

RemainAfterExit=yes tells systemd to consider the service still running 
after the executed process (mount in our case) exits.  It'll use exit 
status to track success/fail, and log STDOUT/STDERR to the journal, so 
checking service status works as expected, even after mount has exited.

ExecStart=/bin/mount LABEL=whatever is of course the main command that 
the service runs.  /bin/mount may of course be /sbin/mount or /usr/bin/
mount, etc, depending on where your distro puts it, and you'd replace 
LABEL=whatever with whatever you're using to ID that filesystem: 
mountpoint, label, UUID, devicepath, etc.  Again, with the entry already 
in fstab, but with noauto in the options, the normal mount service won't 
mount the filesystem, but you only have to supply mount with the one bit 
of identifying info (here I used LABEL=) that it needs to look the entry 
up in fstab, where it gets all the other info it needs, including mount 
options.

TimeoutStartSec=0 disables the usual service timeout.  Optionally, set it 
to some value longer than the system-default (see the systemd-system.conf 
manpage under DefaultTimeoutStartSec), that's long enough the mount 
should have normally completed by then.

Many services would have an ExecStop= entry as well, the command to run 
when shutting down the system.  But for mounts, systemd will 
automatically take care of that when it does the normal umounts, so you 
don't need an explicit execstop=umount whatever line.

Install section:

WantedBy=multi-user.target tells systemd where to hook in the service at 
installation so it starts at the desired time.  As explained above, 
multi-user.target is the main system CLI target, which is where most 
services go.  When the service is installed/enabled in systemd, systemd 
will place a symlink to our service in the appropriate systemd/system/
*.target.wants/ subdir, in this case /etc/systemd/system/multi-
user.target.wants/local.boot.mount.bigpool.service, pointing at 
/etc/systemd/system/local.boot.mount.bigpool.service.

Of course since you're creating the custom service unit yourself, you 
could in theory simply place the service file itself directly in the 
*.target.wants location, instead of placing it in /etc/systemd/system.  
That would enable it automatically, without an install section, but then 
you couldn't enable/disable it, except by manually moving the file.  With 
the install section in place, and with the file in the usual system 
location, you can systemctl enable/disable it, and systemd will manage 
the symlinks for it as it does with other services when you enable/
disable them.

Don't forget to enable the new service. =:^)

systemctl enable local.boot.mount.bigpool.service

(If you have systemd shell tab/autocompletion turned on, you can of 
course use it to avoid having to type in the whole thing, as well as 
helping to avoid typos.  And of course, if it's not yet mounted (or you 
can quick-umount it so you can test the service), you can start the 
service right then if desired, standard systemctl start ... .)

For more information and other unit file options available to you, and 
for the details on how the above lines work, see the usual systemd 
manpages, systemd.unit, systemd.service, systemd.exec, systemd.mount, 
etc, plus the mount and fstab manpages, and other systemd and mount 
documentation.

---
[1] I don't have the issue here as I have multiple independent btrfs, all 
on a pair of fast ssds, partitioned identically, with the btrfs raid1 
data/metadata on parallel partitions on each ssd (save for /boot and its 
backup, which are btrfs mixed-bg in dup mode, one to each device).

[2] No, I _don't_ trust this new-fangled automount stuff.  Too much like 
MS for my tastes.  I mount and umount manually, altho I do have helper 
scripts setup so it's only 3-5  keys (mt m to toggle mounting of my 
multimedia partition, for instance, or mt m m/u to specifically mount/
umount it), and some of my other scripts do mount as part of their work, 
too.  (My update script tests and mounts if necessary the filesystem 
containing the package repo, for instance, before updating it.  It also 
remounts / rw, since it's normally ro and attempted system updates would 
thus fail.)

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



* Re: systemd : Timed out waiting for device dev-disk-by…
  2015-07-27  5:20   ` Duncan
@ 2015-07-27 19:16     ` Philip Seeger
  2015-07-30 11:22     ` Rich Freeman
  1 sibling, 0 replies; 12+ messages in thread
From: Philip Seeger @ 2015-07-27 19:16 UTC (permalink / raw)
  To: linux-btrfs

On 07/27/2015 07:20 AM, Duncan wrote:
> Philip Seeger posted on Sun, 26 Jul 2015 22:39:04 +0200 as excerpted:
>
>> Hi,
>>
>>> 50% of the time when booting, the system go in safe mode because my 12x
>>> 4TB RAID10 btrfs is taking too long to mount from fstab.
>>
>> This won't help, but I've seen this exact behavior too (some time ago).
>> Except that it wasn't 50% that it didn't work, more like almost
>> everytime.
>> Commenting out the fstab entry "fixed" it, mounting using a cronjob
>> (@reboot) worked without a problem.
>>
>> (As far as I remember, options like x-systemd.device-timeout didn't
>> change anything.)
>>
>> If someone has the answer, I'd be interested too.
>
> You mean something like a custom systemd *.service unit file?  That's
> what I'd do here. =:^)
> [...]

Thanks for the tip and the detailed explanation. I believe I tried 
something similar back then (involving a custom systemd service), but 
it's too long ago to tell for sure. What I do remember clearly is that 
the timeout error came within seconds of invoking the mount command, 
maybe 5 or 10 seconds – certainly not even close to 30 seconds. Just 
wanted to throw that out there.

Other than that, I believe using the noauto option together with a 
custom systemd service sounds like it should do the trick (and sounds 
like a clean solution if you indeed want that filesystem mounted on 
demand only), but then again, mounting a filesystem without that option 
should work too. In any case, I've saved your email, maybe I'll hit that 
issue again someday - I'll try your suggestion then.


Philip


* Re: systemd : Timed out waiting for device dev-disk-by…
  2015-07-27  5:20   ` Duncan
  2015-07-27 19:16     ` Philip Seeger
@ 2015-07-30 11:22     ` Rich Freeman
  2015-07-31  8:44       ` Duncan
  1 sibling, 1 reply; 12+ messages in thread
From: Rich Freeman @ 2015-07-30 11:22 UTC (permalink / raw)
  To: Duncan; +Cc: Btrfs BTRFS

On Mon, Jul 27, 2015 at 1:20 AM, Duncan <1i5t5.duncan@cox.net> wrote:
> Philip Seeger posted on Sun, 26 Jul 2015 22:39:04 +0200 as excerpted:
>
>> Hi,
>>
>>> 50% of the time when booting, the system go in safe mode because my 12x
>>> 4TB RAID10 btrfs is taking too long to mount from fstab.
>>
>> This won't help, but I've seen this exact behavior too (some time ago).
>> Except that it wasn't 50% that it didn't work, more like almost
>> everytime.
>> Commenting out the fstab entry "fixed" it, mounting using a cronjob
>> (@reboot) worked without a problem.
>>
>> (As far as I remember, options like x-systemd.device-timeout didn't
>> change anything.)
>>
>> If someone has the answer, I'd be interested too.
>
> You mean something like a custom systemd *.service unit file?  That's
> what I'd do here. =:^)

I'd have to play with it to work out the kinks, but I'm pretty sure
you'd be better off with a mount unit instead of basically reinventing
a mount unit using a service unit.

I'd also think that you could use drop-ins to enhance the
auto-generated units created by the fstab generator, if you just
wanted to add a dependency or such to a mount unit.  However, I've
never tried to create a drop-in for a generated unit.

Mount units should take any setting in systemd.unit which includes all
the ordering/dependency/etc controls.
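
An untested sketch of what such a drop-in might look like, assuming the
volume mounts at a hypothetical /mnt/bigpool (so the generated unit
would be mnt-bigpool.mount; both path and timeout value are assumptions):

--
# /etc/systemd/system/mnt-bigpool.mount.d/override.conf  (path assumed)
[Mount]
TimeoutSec=5min
--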

--
Rich


* Re: systemd : Timed out waiting for device dev-disk-by…
  2015-07-30 11:22     ` Rich Freeman
@ 2015-07-31  8:44       ` Duncan
  2015-07-31 12:40         ` Philip Seeger
  0 siblings, 1 reply; 12+ messages in thread
From: Duncan @ 2015-07-31  8:44 UTC (permalink / raw)
  To: linux-btrfs

Rich Freeman posted on Thu, 30 Jul 2015 07:22:12 -0400 as excerpted:

> On Mon, Jul 27, 2015 at 1:20 AM, Duncan <1i5t5.duncan@cox.net> wrote:
>> Philip Seeger posted on Sun, 26 Jul 2015 22:39:04 +0200 as excerpted:
>>
>>>> 50% of the time when booting, the system go in safe mode because my
>>>> 12x 4TB RAID10 btrfs is taking too long to mount from fstab.
>>>
>>> I've seen this exact behavior too (some time ago).
>>> Except that it wasn't 50% that it didn't work, more like almost
>>> everytime. Commenting out the fstab entry "fixed" it, mounting
>>> using a cronjob (@reboot) worked without a problem.
>>>
>>> (As far as I remember, options like x-systemd.device-timeout didn't
>>> change anything.)

Having taken a bit deeper look into systemd's dynamically generated mount 
units (see below), I suspect that x-systemd.device-timeout= wasn't what 
you needed – it's the wrong timeout: it applies to the time it takes the 
"primary" device (on a multi-device btrfs, the one you point mount at) to 
appear, not to the time the mount itself or the entire mount process 
takes, which is what needs to change here.

More below.

>>> If someone has the answer, I'd be interested too.
>>
>> You mean something like a custom systemd *.service unit file?  That's
>> what I'd do here. =:^)
> 
> I'd have to play with it to work out the kinks, but I'm pretty sure
> you'd be better off with a mount unit instead of basically reinventing a
> mount unit using a service unit.

Using a mount unit is indeed possible, and I contemplated including it, 
but (a) I haven't really worked with them, so am less directly familiar 
with them, and (b) my posts tend to be on the longer side already, so 
given (a) I decided to draw the line, somewhat arbitrarily, at excluding 
mount-unit discussion.

But I'm glad you mentioned them, completing the picture, and indeed, 
they'd be the most directly apropos solution. =:^)

And actually, while I still haven't actually used mount units, I did take 
a closer look at them recently in connection with a thread you're likely 
also reading, on a different mailing list.[1]

> I'd also think that you could also use drop-ins to enhance the
> auto-generated units created by the fstab generator, if you just wanted
> to add a dependency or such to a mount unit.  However, I've never tried
> to create a drop-in for a generated unit.

Hmm... Using drop-ins on generated units hadn't really occurred to me.  
Very good point! =:^)

It addresses one of the concerns I had about using a mount unit (and why 
I chose to suggest a normal service unit) as well, since at the time of 
my original reply, I couldn't see how to use a mount unit without 
overriding the entire generated mount unit, and with it, I thought, the 
ability to use the fstab entry for that mount.  I'd used dropins before, 
but only with static service units.

> Mount units should take any setting in systemd.unit which includes all
> the ordering/dependency/etc controls.

Yes.

BTW, in case it's not clear, systemd.unit refers to the general systemd 
unit manpage.  And of course the specific mount unit manpage is 
systemd.mount, which also refers to systemd.exec, for more esoteric 
settings that would affect the environment that mount executes in, for 
instance.  And systemd.kill covers kill mode, for the timeout option 
discussed below.

For the specific case of systemd giving up on many-device btrfs mounts, 
now that I've read a bit more and am thinking in terms of dropins, I'd 
guess the following option, covered in systemd.mount and to be placed in 
the mount section with an appropriate value, should do it:

TimeoutSec=

The default TimeoutSec= is set in /etc/systemd/system.conf 
(systemd-system.conf manpage) as DefaultTimeoutStartSec=, with 90 seconds 
the shipped default if it's not set there.  Without a specific setting in 
the unit file, that system default is used.

So assuming no different setting in system.conf and no additions to the 
dynamically generated mount unit (named after the mountpoint and located 
in /run/systemd/generator/*.mount), probably that 90-second default 
timeout applied, and that simply isn't enough for a many-device btrfs to 
get fully assembled and mounted.

A timeoutsec setting of 0 should disable the timeout, or set it to 
something appropriately higher than the default timeout, say 
TimeoutSec=5m, assuming that's enough for your system.

The nofail option can be added to the mount options in fstab, thus not 
stopping the boot, and/or noauto can be added, so it doesn't mount with 
the normal boot at all.  Or...

So a drop-in in /etc/systemd/system/ – that is, a *.conf file in a 
directory named after the dynamically generated mount unit 
(encoded-mountpoint.mount.d/; the generated unit itself lives in 
/run/systemd/generator/), with the following content, /should/ do it.  
(Note that a complete unit file with the same name as the generated one 
would replace it entirely rather than extend it.)

-----

[Mount]
TimeoutSec=0

-----

In theory at least (neither Rich nor I has tested drop-ins with 
dynamically generated units), that should add the timeout setting to the 
settings already there in the generated unit file, and the timeout should 
be entirely disabled (or set to an appropriate value, if not 0) for that 
unit.


Tho that would still leave the mount happening at boot time "Before=local-
fs.target", which is what the generator puts in any mount units 
dynamically generated from fstab, unless noauto was set in the mount 
options.  To have the mount unit execute later, add the following as well:

-----

[Unit]
Before=
After=multi-user.target graphical.target

-----

The blank Before= resets the generator-default Before=local-fs.target.  
The After=multi-user.target graphical.target starts it after the normal 
boot sequence is finished, since those are the CLI-login and GUI-login 
targets, respectively.

I'm not sure; a Requires=multi-user.target (in the unit section), or 
possibly Wants=multi-user.target, may be required as well.  Normally a 
line in fstab creates a unit that will be required by local-fs.target 
(see /run/systemd/generator/local-fs.target.requires/), and I'm not sure 
if the above is enough to reset that or not.  I'd have to play with it a 
bit.

Which was the other reason I suggested a service unit instead of a mount 
unit.  The required-by-local-fs.target bit isn't in the generated unit 
file, nor covered in the systemd.mount manpage that I could see, so if 
the goal includes not only increasing the timeout but also placing the 
mount later in the boot sequence, so it doesn't interfere with everything 
else including logging in, I wasn't sure how to override that on mount 
units, and am still not sure – while I know exactly how to do it with 
service units, as I'm actually doing that with one of mine (which cats 
all files of an entire rather large directory, redirecting output to 
/dev/null, thereby ensuring all files in that dir are cached).

---
[1] Both Rich and I are gentooers, he a dev I a user, and there's a 
current thread on the gentoo-dev list discussing mount-unit type 
functionality for openrc, the sysv-init-based init system that gentoo 
still uses by default, tho systemd is an option as well.  I looked into 
systemd mount-units for a reply there.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



* Re: systemd : Timed out waiting for device dev-disk-by…
  2015-07-31  8:44       ` Duncan
@ 2015-07-31 12:40         ` Philip Seeger
  0 siblings, 0 replies; 12+ messages in thread
From: Philip Seeger @ 2015-07-31 12:40 UTC (permalink / raw)
  To: linux-btrfs

On Fri, Jul 31, 2015 at 10:44 AM, Duncan <1i5t5.duncan@cox.net> wrote:
> For the specific case of systemd giving up on many-device btrfs mounts,
> now that I've read a bit more and am thinking in terms of dropins, I'd
> guess the following option, covered in systemd.mount and to be placed in
> the mount section with an appropriate value, should do it:
>
> TimeoutSec=
>
> The default timeoutsec setting is set in /etc/systemd/systemd.conf
> (systemd-system.conf manpage) as DefaultTimeoutStartSec=, with 90 seconds
> the shipped default if it's not set there.  Without a specific setting in
> the unit file, that system default would be used.

I'd like to stress that all those timeouts were well beyond the
time it took to mount the btrfs raid manually (when it happened to
me, a long time ago – it might be a different story for Vincent).
Mounting it manually did take a couple of seconds, but less than 10
seconds, which isn't even close to 30, let alone 90 seconds.
In fact, I believe it was less than 5 seconds, and as far as I
remember, the fs would sometimes even be mounted for this very short
time and then (after those few seconds) systemd would unmount it.
This is something that doesn't add up in my mind, because all those
default timeout values look okay to me, but somehow mounts appear to
time out before the time is actually up. In other words, I have never
waited 30 seconds or longer for a btrfs mount to complete, yet I
apparently hit a 90-second timeout, if TimeoutSec was the culprit.

(However, again, this was my case - not sure if Vincent may be hitting
a different issue.)


* Re: systemd : Timed out waiting for device dev-disk-by…
  2015-07-24 18:41 systemd : Timed out waiting for device dev-disk-by… Vincent Olivier
                   ` (3 preceding siblings ...)
  2015-07-26 20:39 ` Philip Seeger
@ 2016-03-25  7:41 ` Qu Wenruo
  4 siblings, 0 replies; 12+ messages in thread
From: Qu Wenruo @ 2016-03-25  7:41 UTC (permalink / raw)
  To: Vincent Olivier, linux-btrfs

Hi,

Although this post is almost a year old, I'm quite interested in the 
long mount time.

Any info about the fs besides it being a 12x 4TB RAID10?
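
For example, the output of something like the following would help
(the mountpoint is a placeholder; run against the mounted filesystem):

  btrfs filesystem show /mnt/bigpool
  btrfs filesystem df /mnt/bigpool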

We're investigating such long mount times, but unfortunately we haven't 
found a good way to reproduce them (although we don't have 12 devices).

Thanks,
Qu

Vincent Olivier wrote on 2015/07/24 14:41 -0400:
> Hi,
>
> (Sorry if this gets sent twice : one of my mail relay is misbehaving today)
>
> 50% of the time when booting, the system go in safe mode because my 12x 4TB RAID10 btrfs is taking too long to mount from fstab.
>
> When I comment it out from fstab and mount it manually, it’s all good.
>
> I don’t like that. Is there a way to increase the timer or something ?
>
> Thanks,
>
> Vincent
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>




end of thread, other threads:[~2016-03-25  7:41 UTC | newest]

Thread overview: 12+ messages
2015-07-24 18:41 systemd : Timed out waiting for device dev-disk-by… Vincent Olivier
2015-07-24 18:43 ` Vincent Olivier
2015-07-24 19:26 ` Tomasz Torcz
2015-07-24 19:27 ` Chris Murphy
     [not found]   ` <1437766634.808316031@apps.rackspace.com>
2015-07-24 19:49     ` Chris Murphy
2015-07-26 20:39 ` Philip Seeger
2015-07-27  5:20   ` Duncan
2015-07-27 19:16     ` Philip Seeger
2015-07-30 11:22     ` Rich Freeman
2015-07-31  8:44       ` Duncan
2015-07-31 12:40         ` Philip Seeger
2016-03-25  7:41 ` Qu Wenruo
