* [linux-lvm] pvscan takes 45-90 minutes booting off ISO with thin pools
@ 2018-05-14 5:02 Patrick Mitchell
2018-05-17 0:03 ` Patrick Mitchell
0 siblings, 1 reply; 3+ messages in thread
From: Patrick Mitchell @ 2018-05-14 5:02 UTC (permalink / raw)
To: linux-lvm
Sometimes when booting off an Arch installation ISO (even a recent one,
with kernel 4.16.8 and lvm2 2.02.177), LVM's pvscan takes 45-90
minutes. This happens with large thin pools, which seem to have caused
such delays for people in the past; the known fix was adding
"--skip-mappings" to thin_check_options.
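For reference, a minimal sketch of what that workaround looks like in
lvm.conf (the exact set of existing options may differ per
distribution; check your defaults before copying this):

```
# /etc/lvm/lvm.conf (excerpt): pass --skip-mappings to thin_check
# so activation does not walk every thin-pool mapping tree
global {
    thin_check_options = [ "-q", "--skip-mappings" ]
}
```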
This used to always happen when booting off an ISO, until I made a
custom one with "--skip-mappings". With that, it's intermittent:
sometimes nearly instant, sometimes 45-90 minutes.
This delay never happens when booting off an installation on a drive.
(I'm guessing there's some on-disk cache that doesn't exist on the
read-only ISO?)
When there's a massive delay:
root@archiso ~ # date && ps ax | grep scan
Mon May 14 03:08:14 UTC 2018
717 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:65
718 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:19
719 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:51
720 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:115
721 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:99
722 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:68
724 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:52
725 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:49
727 ? S<s 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:67
728 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:66
731 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:83
733 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:50
748 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:2
752 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:1
753 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:3
754 ? S<Ls 0:01 /usr/bin/lvm pvscan --cache --activate ay 8:4
755 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:33
756 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:36
757 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:35
759 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:34
768 ? S<Ls 0:01 /usr/bin/lvm pvscan --cache --activate ay 259:1
And iotop shows 0 bytes being read or written for most of it.
Is Arch using pvscan incorrectly? Is it intended that a separate
process be run for each device? Is concurrently running a pvscan for
each device path causing lock contention? Should Arch instead be
running one instance of pvscan without device major:minor numbers?
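(For reference, the single-invocation form I have in mind, with no
major:minor device argument, looks like this. This is just a sketch of
the alternative, not a claim about how the unit should be written:)

```
# One pvscan over all devices, instead of one process per device;
# run as root on the booted system.
lvm pvscan --cache --activate ay
```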
Here is Arch's "lvm2-pvscan@.service"
=====
[Unit]
Description=LVM2 PV scan on device %i
Documentation=man:pvscan(8)
DefaultDependencies=no
StartLimitInterval=0
BindsTo=dev-block-%i.device
Requires=lvm2-lvmetad.socket
After=lvm2-lvmetad.socket lvm2-lvmetad.service
Before=shutdown.target
Conflicts=shutdown.target
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/lvm pvscan --cache --activate ay %i
ExecStop=/usr/bin/lvm pvscan --cache %i
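If lock contention between concurrent per-device pvscans is the
culprit, one hypothetical workaround (untested; the drop-in path and
lock file name here are my invention) would be a drop-in that
serializes the activations behind a single flock(1) lock:

```
# /etc/systemd/system/lvm2-pvscan@.service.d/serialize.conf
# Hypothetical drop-in: run each pvscan under one advisory lock,
# so only one instance activates at a time.
[Service]
ExecStart=
ExecStart=/usr/bin/flock /run/lvm-pvscan.lock /usr/bin/lvm pvscan --cache --activate ay %i
```

(The empty ExecStart= line clears the unit's original command before
the replacement is set.)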
* Re: [linux-lvm] pvscan takes 45-90 minutes booting off ISO with thin pools
2018-05-14 5:02 [linux-lvm] pvscan takes 45-90 minutes booting off ISO with thin pools Patrick Mitchell
@ 2018-05-17 0:03 ` Patrick Mitchell
2018-05-17 0:08 ` Patrick Mitchell
0 siblings, 1 reply; 3+ messages in thread
From: Patrick Mitchell @ 2018-05-17 0:03 UTC (permalink / raw)
To: linux-lvm
Changing the ISO's lvm.conf to set "activation = 0" in the global
section makes it boot very quickly. I can then manually run a single
"pvscan --cache --activate ay" to activate everything, and it takes
just a few seconds. So I'm thinking this has to be a locking problem
with trying to activate so many logical volumes and thin pools
simultaneously.
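Concretely, the change looks like this (lvm.conf excerpt; other
settings omitted):

```
# /etc/lvm/lvm.conf (excerpt): keep LVM from activating anything
# automatically during boot on the ISO
global {
    activation = 0
}
```

After boot, set activation back to 1 and run a single "lvm pvscan
--cache --activate ay" by hand.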
On Mon, May 14, 2018 at 1:02 AM, Patrick Mitchell
<patricklmitchell9@gmail.com> wrote:
> [...]
* Re: [linux-lvm] pvscan takes 45-90 minutes booting off ISO with thin pools
2018-05-17 0:03 ` Patrick Mitchell
@ 2018-05-17 0:08 ` Patrick Mitchell
0 siblings, 0 replies; 3+ messages in thread
From: Patrick Mitchell @ 2018-05-17 0:08 UTC (permalink / raw)
To: linux-lvm
On Wed, May 16, 2018 at 8:03 PM, Patrick Mitchell
<patricklmitchell9@gmail.com> wrote:
> [...]
(Manually running pvscan after modifying lvm.conf to set activation back to 1.)
end of thread, other threads:[~2018-05-17 0:08 UTC | newest]
2018-05-14 5:02 [linux-lvm] pvscan takes 45-90 minutes booting off ISO with thin pools Patrick Mitchell
2018-05-17 0:03 ` Patrick Mitchell
2018-05-17 0:08 ` Patrick Mitchell