linux-lvm.redhat.com archive mirror
From: Oleksandr Natalenko <oleksandr@natalenko.name>
To: Christian Hesse <list@eworm.de>
Cc: Oleksandr Natalenko <oleksandr@redhat.com>,
	Heming Zhao <heming.zhao@suse.com>,
	Martin Wilck <mwilck@suse.com>,
	LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] [PATCH 1/1] pvscan: wait for udevd
Date: Wed, 17 Feb 2021 14:38:26 +0100	[thread overview]
Message-ID: <20210217133826.u4gglfsowqfvxdff@spock.localdomain> (raw)
In-Reply-To: <20210217130329.7de41147@leda>

Hi.

Please check my log excerpts and comments below.

On Wed, Feb 17, 2021 at 01:03:29PM +0100, Christian Hesse wrote:
> > But in general, I think this needs deeper analysis. Looking at
> > https://bugs.archlinux.org/task/69611, the workaround appears to have
> > been found simply by drawing an analogy to a previous similar case.
> > I'd like to understand what happened on the arch system when the error
> > occured, and why this simple ordering directive avoided it.
> 
> As said I can not reproduce it myself... Oleksandr, can you give more details?
> Possibly everything from journal regarding systemd-udevd.service (and
> systemd-udevd.socket) and lvm2-pvscan@*.service could help.

I'm not sure this issue is reproducible with just any LVM layout.
What I have is thin-LVM-on-LUKS-on-LVM:

```
[~]> lsblk
NAME                                   MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
nvme0n1                                259:0    0 238,5G  0 disk
├─nvme0n1p1                            259:1    0     8M  0 part  /boot/EFI
└─nvme0n1p2                            259:2    0 238,5G  0 part
  ├─base-boot                          254:0    0   256M  0 lvm   /boot
  └─base-sys                           254:1    0 238,2G  0 lvm
    └─sys                              254:2    0 238,2G  0 crypt
      ├─sys-swap                       254:3    0     2G  0 lvm   [SWAP]
      ├─sys-pool_tmeta                 254:4    0   120M  0 lvm
      │ └─sys-pool-tpool               254:6    0   236G  0 lvm
      │   ├─sys-pool                   254:7    0   236G  1 lvm
      │   ├─sys-root                   254:8    0   235G  0 lvm   /
      │   ├─sys-home                   254:9    0   235G  0 lvm   /home
      │   ├─sys-containers             254:10   0   235G  0 lvm   /mnt/containers
      │   ├─sys-vms                    254:11   0   235G  0 lvm   /mnt/vms
      │   ├─sys-email                  254:12   0   235G  0 lvm   /mnt/email
      │   ├─sys-vm--01--rhel6--sda     254:13   0     8G  0 lvm
      │   ├─sys-vm--02--rhel7--sda     254:14   0    16G  0 lvm
      │   ├─sys-vm--03--rhel8--sda     254:15   0     8G  0 lvm
      │   ├─sys-vm--04--rhel6--32--sda 254:16   0     8G  0 lvm
      │   ├─sys-vm--05--rhel5--sda     254:17   0     8G  0 lvm
      │   ├─sys-vm--06--fedora--sda    254:18   0     8G  0 lvm
      │   └─sys-vm--02--rhel7--sdb     254:19   0     1G  0 lvm
      └─sys-pool_tdata                 254:5    0   236G  0 lvm
        └─sys-pool-tpool               254:6    0   236G  0 lvm
          ├─sys-pool                   254:7    0   236G  1 lvm
          ├─sys-root                   254:8    0   235G  0 lvm   /
          ├─sys-home                   254:9    0   235G  0 lvm   /home
          ├─sys-containers             254:10   0   235G  0 lvm   /mnt/containers
          ├─sys-vms                    254:11   0   235G  0 lvm   /mnt/vms
          ├─sys-email                  254:12   0   235G  0 lvm   /mnt/email
          ├─sys-vm--01--rhel6--sda     254:13   0     8G  0 lvm
          ├─sys-vm--02--rhel7--sda     254:14   0    16G  0 lvm
          ├─sys-vm--03--rhel8--sda     254:15   0     8G  0 lvm
          ├─sys-vm--04--rhel6--32--sda 254:16   0     8G  0 lvm
          ├─sys-vm--05--rhel5--sda     254:17   0     8G  0 lvm
          ├─sys-vm--06--fedora--sda    254:18   0     8G  0 lvm
          └─sys-vm--02--rhel7--sdb     254:19   0     1G  0 lvm

```

(You may ask why. Full disk encryption is mandated by my
employer, which is, surprise-surprise, Red Hat ☺. I'll Cc my
@rh address as well, just to be sure I don't miss anything in the future.)

Also, this doesn't happen if the number of thin volumes is small (say,
fewer than 5), so in order to reproduce this locally you may need to
throw quite a few LVs into the system.
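For reference, a reproduction attempt might look like the sketch below. The VG name `testvg` and the sizes are hypothetical, and since `lvcreate` needs root and a real PV, the helper only prints the commands unless `RUN=1` is set:

```shell
#!/bin/sh
# Dry-run helper: print the command instead of executing it unless RUN=1.
run() { if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "$@"; fi; }

# Create a thin pool and a dozen thin volumes in a hypothetical VG "testvg",
# enough LVs that the activation race described above has a chance to show up.
run lvcreate --type thin-pool -L 10G -n pool testvg
for i in $(seq 1 12); do
    run lvcreate --type thin -V 1G -n "thin$i" --thinpool pool testvg
done
```

Run it as `RUN=1 sh reproduce.sh` on a throwaway VG, then reboot and watch the lvm2-pvscan@ units.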

The issue is reproducible with this layout as well:

```
[~]> lsblk
NAME                                     MAJ:MIN RM   SIZE RO TYPE   MOUNTPOINT
nvme0n1                                  259:0    0 465,8G  0 disk
├─nvme0n1p1                              259:1    0     8M  0 part   /boot/EFI
└─nvme0n1p2                              259:2    0   465G  0 part
  └─md0                                    9:0    0 464,9G  0 raid10
    ├─base-boot                          253:0    0   384M  0 lvm    /boot
    └─base-sys                           253:1    0 464,5G  0 lvm
      └─sys                              253:2    0 464,5G  0 crypt
        ├─sys-swap                       253:3    0     2G  0 lvm    [SWAP]
        ├─sys-pool_tmeta                 253:4    0   116M  0 lvm
        │ └─sys-pool-tpool               253:6    0 462,3G  0 lvm
        │   ├─sys-pool                   253:7    0 462,3G  1 lvm
        │   ├─sys-root                   253:8    0   462G  0 lvm    /
        │   ├─sys-home                   253:9    0   462G  0 lvm    /home
        │   ├─sys-vms                    253:10   0   462G  0 lvm    /mnt/vms
        │   ├─sys-vm--01--archlinux--sda 253:11   0     8G  0 lvm
        │   ├─sys-vm--02--windows--sda   253:12   0    50G  0 lvm
        │   └─sys-mail                   253:13   0   462G  0 lvm    /mnt/mail
        └─sys-pool_tdata                 253:5    0 462,3G  0 lvm
          └─sys-pool-tpool               253:6    0 462,3G  0 lvm
            ├─sys-pool                   253:7    0 462,3G  1 lvm
            ├─sys-root                   253:8    0   462G  0 lvm    /
            ├─sys-home                   253:9    0   462G  0 lvm    /home
            ├─sys-vms                    253:10   0   462G  0 lvm    /mnt/vms
            ├─sys-vm--01--archlinux--sda 253:11   0     8G  0 lvm
            ├─sys-vm--02--windows--sda   253:12   0    50G  0 lvm
            └─sys-mail                   253:13   0   462G  0 lvm    /mnt/mail
nvme1n1                                  259:3    0 465,8G  0 disk
├─nvme1n1p1                              259:4    0     8M  0 part
└─nvme1n1p2                              259:5    0   465G  0 part
  └─md0                                    9:0    0 464,9G  0 raid10
    ├─base-boot                          253:0    0   384M  0 lvm    /boot
    └─base-sys                           253:1    0 464,5G  0 lvm
      └─sys                              253:2    0 464,5G  0 crypt
        ├─sys-swap                       253:3    0     2G  0 lvm    [SWAP]
        ├─sys-pool_tmeta                 253:4    0   116M  0 lvm
        │ └─sys-pool-tpool               253:6    0 462,3G  0 lvm
        │   ├─sys-pool                   253:7    0 462,3G  1 lvm
        │   ├─sys-root                   253:8    0   462G  0 lvm    /
        │   ├─sys-home                   253:9    0   462G  0 lvm    /home
        │   ├─sys-vms                    253:10   0   462G  0 lvm    /mnt/vms
        │   ├─sys-vm--01--archlinux--sda 253:11   0     8G  0 lvm
        │   ├─sys-vm--02--windows--sda   253:12   0    50G  0 lvm
        │   └─sys-mail                   253:13   0   462G  0 lvm    /mnt/mail
        └─sys-pool_tdata                 253:5    0 462,3G  0 lvm
          └─sys-pool-tpool               253:6    0 462,3G  0 lvm
            ├─sys-pool                   253:7    0 462,3G  1 lvm
            ├─sys-root                   253:8    0   462G  0 lvm    /
            ├─sys-home                   253:9    0   462G  0 lvm    /home
            ├─sys-vms                    253:10   0   462G  0 lvm    /mnt/vms
            ├─sys-vm--01--archlinux--sda 253:11   0     8G  0 lvm
            ├─sys-vm--02--windows--sda   253:12   0    50G  0 lvm
            └─sys-mail                   253:13   0   462G  0 lvm    /mnt/mail
```

(this is the one from the Arch bug report).

As for the journal, here it is (from the same machine as in the
Arch bug report; it matches the second layout above):

```
[~]> LC_TIME=C sudo journalctl -b -10 -u systemd-udevd.service
-- Journal begins at Fri 2020-12-18 16:33:22 CET, ends at Wed 2021-02-17 14:25:57 CET. --
Feb 10 17:24:26 archlinux systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Feb 10 17:24:26 archlinux systemd[1]: systemd-udevd.service: Succeeded.
Feb 10 17:24:26 archlinux systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Feb 10 17:24:27 spock systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 10 17:24:27 spock mtp-probe[897]: checking bus 6, device 5: "/sys/devices/pci0000:00/0000:00:08.1/0000:0d:00.3/usb6/6-4/6-4.4"
Feb 10 17:24:27 spock mtp-probe[896]: checking bus 6, device 2: "/sys/devices/pci0000:00/0000:00:08.1/0000:0d:00.3/usb6/6-1"
Feb 10 17:24:27 spock mtp-probe[899]: checking bus 6, device 4: "/sys/devices/pci0000:00/0000:00:08.1/0000:0d:00.3/usb6/6-4/6-4.3"
Feb 10 17:24:27 spock mtp-probe[898]: checking bus 6, device 6: "/sys/devices/pci0000:00/0000:00:08.1/0000:0d:00.3/usb6/6-4/6-4.5"
Feb 10 17:24:27 spock mtp-probe[897]: bus: 6, device: 5 was not an MTP device
Feb 10 17:24:27 spock mtp-probe[898]: bus: 6, device: 6 was not an MTP device
Feb 10 17:24:27 spock mtp-probe[899]: bus: 6, device: 4 was not an MTP device
Feb 10 17:24:27 spock mtp-probe[896]: bus: 6, device: 2 was not an MTP device
Feb 10 17:24:27 spock systemd-udevd[881]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 10 17:24:27 spock systemd-udevd[887]: Using default interface naming scheme 'v247'.
Feb 10 17:24:27 spock systemd-udevd[887]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 10 17:24:27 spock systemd-udevd[879]: Using default interface naming scheme 'v247'.
Feb 10 17:24:27 spock systemd-udevd[879]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 10 17:24:27 spock systemd-udevd[887]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 10 17:24:27 spock systemd-udevd[887]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 10 17:24:27 spock systemd-udevd[826]: Using default interface naming scheme 'v247'.
Feb 10 17:24:27 spock systemd-udevd[826]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 10 17:24:27 spock systemd-udevd[826]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.

[~]> LC_TIME=C sudo journalctl -b -10 -u systemd-udevd.socket
-- Journal begins at Fri 2020-12-18 16:33:22 CET, ends at Wed 2021-02-17 14:27:58 CET. --
-- No entries --

[~]> LC_TIME=C sudo journalctl -b -10 -u lvm2-pvscan@\*
-- Journal begins at Fri 2020-12-18 16:33:22 CET, ends at Wed 2021-02-17 14:28:05 CET. --
Feb 10 17:24:17 archlinux systemd[1]: Starting LVM event activation on device 9:0...
Feb 10 17:24:17 archlinux lvm[463]:   pvscan[463] PV /dev/md0 online, VG base is complete.
Feb 10 17:24:17 archlinux lvm[463]:   pvscan[463] VG base run autoactivation.
Feb 10 17:24:17 archlinux lvm[463]:   2 logical volume(s) in volume group "base" now active
Feb 10 17:24:17 archlinux systemd[1]: Finished LVM event activation on device 9:0.
Feb 10 17:24:26 archlinux systemd[1]: Starting LVM event activation on device 253:2...
Feb 10 17:24:26 archlinux lvm[643]:   pvscan[643] PV /dev/mapper/sys online, VG sys is complete.
Feb 10 17:24:26 archlinux lvm[643]:   pvscan[643] VG sys run autoactivation.
Feb 10 17:24:26 archlinux lvm[643]:   /usr/bin/dmeventd: stat failed: No such file or directory
Feb 10 17:24:26 archlinux lvm[643]:   WARNING: Failed to monitor sys/pool.
Feb 10 17:24:26 archlinux lvm[643]:   /usr/bin/dmeventd: stat failed: No such file or directory
Feb 10 17:24:26 archlinux lvm[643]:   WARNING: Failed to monitor sys/pool.
Feb 10 17:24:26 archlinux lvm[643]:   /usr/bin/dmeventd: stat failed: No such file or directory
Feb 10 17:24:26 archlinux lvm[643]:   WARNING: Failed to monitor sys/pool.
Feb 10 17:24:26 archlinux lvm[643]:   /usr/bin/dmeventd: stat failed: No such file or directory
Feb 10 17:24:26 archlinux lvm[643]:   WARNING: Failed to monitor sys/pool.
Feb 10 17:24:26 archlinux lvm[643]:   /usr/bin/dmeventd: stat failed: No such file or directory
Feb 10 17:24:26 archlinux lvm[643]:   WARNING: Failed to monitor sys/pool.
Feb 10 17:24:26 archlinux lvm[643]:   /usr/bin/dmeventd: stat failed: No such file or directory
Feb 10 17:24:26 archlinux lvm[643]:   WARNING: Failed to monitor sys/pool.
Feb 10 17:24:26 archlinux systemd[1]: Stopping LVM event activation on device 9:0...
Feb 10 17:24:26 archlinux lvm[720]:   pvscan[720] PV /dev/md0 online.
Feb 10 17:24:26 archlinux lvm[643]:   /usr/bin/dmeventd: stat failed: No such file or directory
Feb 10 17:24:26 archlinux lvm[643]:   WARNING: Failed to monitor sys/pool.
Feb 10 17:24:56 spock systemd[1]: lvm2-pvscan@253:2.service: State 'stop-sigterm' timed out. Killing.
Feb 10 17:24:56 spock systemd[1]: lvm2-pvscan@253:2.service: Killing process 643 (lvm) with signal SIGKILL.
Feb 10 17:24:56 spock systemd[1]: lvm2-pvscan@253:2.service: Main process exited, code=killed, status=9/KILL
Feb 10 17:24:56 spock systemd[1]: lvm2-pvscan@253:2.service: Failed with result 'timeout'.
Feb 10 17:24:56 spock systemd[1]: Stopped LVM event activation on device 253:2.

[~]> LC_TIME=C sudo journalctl -b -10 --grep pvscan
-- Journal begins at Fri 2020-12-18 16:33:22 CET, ends at Wed 2021-02-17 14:31:27 CET. --
Feb 10 17:24:17 archlinux systemd[1]: Created slice system-lvm2\x2dpvscan.slice.
Feb 10 17:24:17 archlinux lvm[463]:   pvscan[463] PV /dev/md0 online, VG base is complete.
Feb 10 17:24:17 archlinux lvm[463]:   pvscan[463] VG base run autoactivation.
Feb 10 17:24:17 archlinux audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=lvm2-pvscan@9:0 comm="systemd" exe="/init" hostname=? addr=? terminal=? res=success'
Feb 10 17:24:17 archlinux kernel: audit: type=1130 audit(1612974257.986:6): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=lvm2-pvscan@9:0 comm="systemd" exe="/init" hostname=? addr=? terminal=? res=success'
Feb 10 17:24:26 archlinux lvm[643]:   pvscan[643] PV /dev/mapper/sys online, VG sys is complete.
Feb 10 17:24:26 archlinux lvm[643]:   pvscan[643] VG sys run autoactivation.
Feb 10 17:24:26 archlinux lvm[720]:   pvscan[720] PV /dev/md0 online.
Feb 10 17:24:27 spock systemd[1]: lvm2-pvscan@9:0.service: Control process exited, code=killed, status=15/TERM
Feb 10 17:24:27 spock systemd[1]: lvm2-pvscan@9:0.service: Failed with result 'signal'.
Feb 10 17:24:26 spock audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=lvm2-pvscan@9:0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Feb 10 17:24:27 spock systemd[1]: Requested transaction contradicts existing jobs: Transaction for lvm2-pvscan@253:2.service/start is destructive (lvm2-pvscan@253:2.service has 'stop' job queued, but 'start' is included in transaction).
Feb 10 17:24:27 spock systemd[1]: sys-devices-virtual-block-dm\x2d2.device: Failed to enqueue SYSTEMD_WANTS= job, ignoring: Transaction for lvm2-pvscan@253:2.service/start is destructive (lvm2-pvscan@253:2.service has 'stop' job queued, but 'start' is included in transaction).
Feb 10 17:24:56 spock systemd[1]: lvm2-pvscan@253:2.service: State 'stop-sigterm' timed out. Killing.
Feb 10 17:24:56 spock systemd[1]: lvm2-pvscan@253:2.service: Killing process 643 (lvm) with signal SIGKILL.
Feb 10 17:24:56 spock systemd[1]: lvm2-pvscan@253:2.service: Main process exited, code=killed, status=9/KILL
Feb 10 17:24:56 spock systemd[1]: lvm2-pvscan@253:2.service: Failed with result 'timeout'.
Feb 10 17:24:56 spock audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=lvm2-pvscan@253:2 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Feb 10 17:24:56 spock kernel: audit: type=1130 audit(1612974296.950:93): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=lvm2-pvscan@253:2 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Feb 10 17:29:57 spock systemd[1]: Removed slice system-lvm2\x2dpvscan.slice.
```

FWIW, here's what it looks like now, after adding the "After=" fix:

```
[~]> LC_TIME=C sudo journalctl -b -u systemd-udevd.service
-- Journal begins at Fri 2020-12-18 16:33:22 CET, ends at Wed 2021-02-17 14:28:47 CET. --
Feb 16 22:13:37 archlinux systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Feb 16 22:13:37 archlinux systemd[1]: systemd-udevd.service: Succeeded.
Feb 16 22:13:37 archlinux systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Feb 16 22:13:38 spock systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb 16 22:13:38 spock systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 16 22:13:38 spock mtp-probe[895]: checking bus 6, device 2: "/sys/devices/pci0000:00/0000:00:08.1/0000:0d:00.3/usb6/6-1"
Feb 16 22:13:38 spock mtp-probe[897]: checking bus 6, device 6: "/sys/devices/pci0000:00/0000:00:08.1/0000:0d:00.3/usb6/6-4/6-4.5"
Feb 16 22:13:38 spock mtp-probe[898]: checking bus 6, device 5: "/sys/devices/pci0000:00/0000:00:08.1/0000:0d:00.3/usb6/6-4/6-4.4"
Feb 16 22:13:38 spock mtp-probe[896]: checking bus 6, device 4: "/sys/devices/pci0000:00/0000:00:08.1/0000:0d:00.3/usb6/6-4/6-4.3"
Feb 16 22:13:38 spock mtp-probe[897]: bus: 6, device: 6 was not an MTP device
Feb 16 22:13:38 spock mtp-probe[898]: bus: 6, device: 5 was not an MTP device
Feb 16 22:13:38 spock mtp-probe[896]: bus: 6, device: 4 was not an MTP device
Feb 16 22:13:38 spock mtp-probe[895]: bus: 6, device: 2 was not an MTP device
Feb 16 22:13:38 spock systemd-udevd[825]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 16 22:13:38 spock systemd-udevd[866]: Using default interface naming scheme 'v247'.
Feb 16 22:13:38 spock systemd-udevd[860]: Using default interface naming scheme 'v247'.
Feb 16 22:13:38 spock systemd-udevd[866]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 16 22:13:38 spock systemd-udevd[860]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 16 22:13:38 spock systemd-udevd[833]: Using default interface naming scheme 'v247'.
Feb 16 22:13:38 spock systemd-udevd[833]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 16 22:13:38 spock systemd-udevd[833]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 16 22:13:39 spock systemd-udevd[892]: Using default interface naming scheme 'v247'.
Feb 16 22:13:39 spock systemd-udevd[892]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Feb 16 22:13:39 spock systemd-udevd[842]: Using default interface naming scheme 'v247'.
Feb 16 22:13:39 spock systemd-udevd[842]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.

[~]> LC_TIME=C sudo journalctl -b -u systemd-udevd.socket
-- Journal begins at Fri 2020-12-18 16:33:22 CET, ends at Wed 2021-02-17 14:29:04 CET. --
-- No entries --

[~]> LC_TIME=C sudo journalctl -b -u lvm2-pvscan@\*
-- Journal begins at Fri 2020-12-18 16:33:22 CET, ends at Wed 2021-02-17 14:29:15 CET. --
Feb 16 22:13:27 archlinux systemd[1]: Starting LVM event activation on device 9:0...
Feb 16 22:13:27 archlinux lvm[495]:   pvscan[495] PV /dev/md0 online, VG base is complete.
Feb 16 22:13:27 archlinux lvm[495]:   pvscan[495] VG base run autoactivation.
Feb 16 22:13:27 archlinux lvm[495]:   2 logical volume(s) in volume group "base" now active
Feb 16 22:13:28 archlinux systemd[1]: Finished LVM event activation on device 9:0.
Feb 16 22:13:37 archlinux systemd[1]: Starting LVM event activation on device 253:2...
Feb 16 22:13:37 archlinux lvm[665]:   pvscan[665] PV /dev/mapper/sys online, VG sys is complete.
Feb 16 22:13:37 archlinux lvm[665]:   pvscan[665] VG sys run autoactivation.
Feb 16 22:13:37 archlinux lvm[665]:   /usr/bin/dmeventd: stat failed: No such file or directory
Feb 16 22:13:37 archlinux lvm[665]:   WARNING: Failed to monitor sys/pool.
Feb 16 22:13:37 archlinux lvm[665]:   /usr/bin/dmeventd: stat failed: No such file or directory
Feb 16 22:13:37 archlinux lvm[665]:   WARNING: Failed to monitor sys/pool.
Feb 16 22:13:37 archlinux lvm[665]:   /usr/bin/dmeventd: stat failed: No such file or directory
Feb 16 22:13:37 archlinux lvm[665]:   WARNING: Failed to monitor sys/pool.
Feb 16 22:13:37 archlinux lvm[665]:   /usr/bin/dmeventd: stat failed: No such file or directory
Feb 16 22:13:37 archlinux lvm[665]:   WARNING: Failed to monitor sys/pool.
Feb 16 22:13:37 archlinux lvm[665]:   /usr/bin/dmeventd: stat failed: No such file or directory
Feb 16 22:13:37 archlinux lvm[665]:   WARNING: Failed to monitor sys/pool.
Feb 16 22:13:37 archlinux lvm[665]:   /usr/bin/dmeventd: stat failed: No such file or directory
Feb 16 22:13:37 archlinux lvm[665]:   WARNING: Failed to monitor sys/pool.
Feb 16 22:13:37 archlinux systemd[1]: Stopping LVM event activation on device 9:0...
Feb 16 22:13:37 archlinux lvm[743]:   pvscan[743] PV /dev/md0 online.
Feb 16 22:13:37 archlinux lvm[665]:   /usr/bin/dmeventd: stat failed: No such file or directory
Feb 16 22:13:37 archlinux lvm[665]:   WARNING: Failed to monitor sys/pool.
Feb 16 22:13:37 archlinux lvm[665]:   8 logical volume(s) in volume group "sys" now active
Feb 16 22:13:37 archlinux systemd[1]: lvm2-pvscan@9:0.service: Succeeded.
Feb 16 22:13:37 archlinux systemd[1]: Stopped LVM event activation on device 9:0.
Feb 16 22:13:37 archlinux systemd[1]: lvm2-pvscan@253:2.service: Main process exited, code=killed, status=15/TERM
Feb 16 22:13:37 archlinux systemd[1]: lvm2-pvscan@253:2.service: Failed with result 'signal'.
Feb 16 22:13:37 archlinux systemd[1]: Stopped LVM event activation on device 253:2.
Feb 16 22:13:38 spock systemd[1]: Starting LVM event activation on device 253:2...
Feb 16 22:13:38 spock lvm[929]:   pvscan[929] PV /dev/mapper/sys online, VG sys is complete.
Feb 16 22:13:38 spock lvm[929]:   pvscan[929] VG sys skip autoactivation.
Feb 16 22:13:38 spock systemd[1]: Finished LVM event activation on device 253:2.

[~]> LC_TIME=C sudo journalctl -b --grep pvscan | /bin/cat -
-- Journal begins at Fri 2020-12-18 16:33:22 CET, ends at Wed 2021-02-17 14:31:58 CET. --
Feb 16 22:13:27 archlinux systemd[1]: Created slice system-lvm2\x2dpvscan.slice.
Feb 16 22:13:27 archlinux lvm[495]:   pvscan[495] PV /dev/md0 online, VG base is complete.
Feb 16 22:13:27 archlinux lvm[495]:   pvscan[495] VG base run autoactivation.
Feb 16 22:13:28 archlinux audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=lvm2-pvscan@9:0 comm="systemd" exe="/init" hostname=? addr=? terminal=? res=success'
Feb 16 22:13:28 archlinux kernel: audit: type=1130 audit(1613510008.000:6): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=lvm2-pvscan@9:0 comm="systemd" exe="/init" hostname=? addr=? terminal=? res=success'
Feb 16 22:13:37 archlinux lvm[665]:   pvscan[665] PV /dev/mapper/sys online, VG sys is complete.
Feb 16 22:13:37 archlinux lvm[665]:   pvscan[665] VG sys run autoactivation.
Feb 16 22:13:37 archlinux lvm[743]:   pvscan[743] PV /dev/md0 online.
Feb 16 22:13:37 archlinux systemd[1]: lvm2-pvscan@9:0.service: Succeeded.
Feb 16 22:13:37 archlinux audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=lvm2-pvscan@9:0 comm="systemd" exe="/init" hostname=? addr=? terminal=? res=success'
Feb 16 22:13:37 archlinux systemd[1]: lvm2-pvscan@253:2.service: Main process exited, code=killed, status=15/TERM
Feb 16 22:13:37 archlinux systemd[1]: lvm2-pvscan@253:2.service: Failed with result 'signal'.
Feb 16 22:13:37 archlinux audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=lvm2-pvscan@253:2 comm="systemd" exe="/init" hostname=? addr=? terminal=? res=failed'
Feb 16 22:13:38 spock lvm[929]:   pvscan[929] PV /dev/mapper/sys online, VG sys is complete.
Feb 16 22:13:38 spock lvm[929]:   pvscan[929] VG sys skip autoactivation.
Feb 16 22:13:38 spock audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=lvm2-pvscan@253:2 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
```

Note that systemd-udevd.socket does not show up there at all. Am I
missing something?

Also, I see the following difference between these runs:

```
Feb 10 17:24:27 spock systemd[1]: Requested transaction contradicts existing jobs: Transaction for lvm2-pvscan@253:2.service/start is destructive (lvm2-pvscan@253:2.service has 'stop' job queued, but 'start' is included in transaction).
Feb 10 17:24:27 spock systemd[1]: sys-devices-virtual-block-dm\x2d2.device: Failed to enqueue SYSTEMD_WANTS= job, ignoring: Transaction for lvm2-pvscan@253:2.service/start is destructive (lvm2-pvscan@253:2.service has 'stop' job queued, but 'start' is included in transaction).
```

I'm not sure I understand what it means, though. Likely, it is just a
consequence of the first pvscan invocation being stuck.

If needed, I can intentionally break one of the machines again, but
before I do that, please let me know what kind of data I should collect.
I'm not sure a plain journal will reveal more; maybe I should enable
some extra debug options?
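In case it helps, these are the knobs I would reach for first (a sketch; the exact log level is an assumption on my side): an lvm.conf fragment to raise LVM's own logging, plus `udevadm control --log-priority=debug` for udevd itself:

```
# /etc/lvm/lvm.conf fragment (sketch): send verbose LVM logging to a file
log {
    level = 7                        # most verbose file log level
    file = "/var/log/lvm2-debug.log"
}
```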

Thanks.

-- 
  Oleksandr Natalenko (post-factum)


_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

Thread overview: 15+ messages
2021-02-11 11:16 [linux-lvm] [PATCH 1/1] pvscan: wait for udevd Christian Hesse
     [not found] ` <f18dff7a-050a-d41d-c643-4616522ba4a0@suse.com>
2021-02-17  8:22   ` Martin Wilck
     [not found]     ` <20210217130329.7de41147@leda>
2021-02-17 13:38       ` Oleksandr Natalenko [this message]
2021-02-18 15:19         ` Martin Wilck
2021-02-18 15:30           ` Oleksandr Natalenko
2021-02-19  9:22             ` Martin Wilck
2021-02-19 16:37           ` David Teigland
2021-02-19 22:47             ` Zdenek Kabelac
2021-02-21 20:23               ` Martin Wilck
2021-02-22  9:57                 ` Zdenek Kabelac
2021-02-22 13:04                   ` Christian Hesse
2021-02-25 16:51                     ` Oleksandr Natalenko
2021-02-21 20:26             ` Martin Wilck
2021-02-17 13:49       ` Martin Wilck
2021-02-17 19:11         ` Oleksandr Natalenko
