From: Zdenek Kabelac
Message-ID: <00ae05c4-5ad8-1137-076d-57a24f093fd3@redhat.com>
Date: Fri, 12 Apr 2019 15:40:56 +0200
Subject: Re: [linux-lvm] lvcreate hangs forever and udev work timeout
To: LVM general discussion and development, Eric Ren, LVM2 development

On 12. 04. 19 at 10:58, Eric Ren wrote:
> Hi,
>
> As the subject says, it seems to be an interaction problem between lvm and
> systemd-udev:
>
> ```
> # lvm version
>   LVM version:     2.02.130(2)-RHEL7 (2015-10-14)
>   Library version: 1.02.107-RHEL7 (2015-10-14)
>   Driver version:  4.35.0
> ```
>
> lvm call trace when it hangs:
>
> ```
> (gdb) bt
> #0  0x00007f7030b876a7 in semop () from /lib64/libc.so.6
> #1  0x00007f70312b708c in _udev_wait (cookie=223161260) at libdm-common.c:2522
> #2  dm_udev_wait (cookie=223161260) at libdm-common.c:2540
> #3  0x000055e23866a60d in fs_unlock () at activate/fs.c:491
> #4  0x000055e238677855 in _file_lock_resource (cmd=<optimized out>, resource=<optimized out>,
>     flags=256, lv=<optimized out>) at locking/file_locking.c:64
> #5  0x000055e23860a6c8 in _lock_vol (cmd=cmd@entry=0x55e239595000, resource=<optimized out>,
>     resource@entry=0x7ffe634b06b0 "#sync_names", flags=flags@entry=256,
>     lv_op=lv_op@entry=LV_NOOP, lv=lv@entry=0x0) at locking/locking.c:275
> #6  0x000055e23860b013 in lock_vol (cmd=cmd@entry=0x55e239595000, vol=<optimized out>,
>     vol@entry=0x55e2386aae91 "#sync_names", flags=flags@entry=256, lv=lv@entry=0x0)
>     at locking/locking.c:355
> #7  0x000055e23860bcf0 in sync_dev_names (cmd=cmd@entry=0x55e239595000) at locking/locking.c:536
> #8  0x000055e2385b4c4b in lvcreate (cmd=0x55e239595000, argc=<optimized out>,
>     argv=<optimized out>) at lvcreate.c:1534
> #9  0x000055e2385bd388 in lvm_run_command (cmd=cmd@entry=0x55e239595000, argc=1, argc@entry=8,
>     argv=0x7ffe634b0df0, argv@entry=0x7ffe634b0db8) at lvmcmdline.c:1655
> #10 0x000055e2385bddf0 in lvm2_main (argc=8, argv=0x7ffe634b0db8) at lvmcmdline.c:2121
> ```
>
> systemd udev logs:
>
> ```
> Apr 12 10:54:41 localhost.localdomain systemd-udevd[1363]: worker [128147] /devices/virtual/block/dm-4 timeout; kill it
> Apr 12 10:54:41 localhost.localdomain systemd-udevd[1363]: seq 2342985 '/devices/virtual/block/dm-4' killed
> ```
>
> Is this a known issue? Please ask if any more information is needed :-)

Hi,

When udev kills its worker on timeout, it means a udev rule did not finish
within the predefined timeout (which unfortunately changes whenever the udev
developers change their minds, so it ranges from 90 to 300 seconds depending
on the release date). You need to find out why the timeout happens:

- Is it because a disk stopped responding to reads?
- Is it because the system is overloaded?
- Is it because of a slow read operation (some RAID arrays are known to wake
  up slowly)?
- Was a udev rule accessing something it should not even read?

Once you know this, you can decide on the best solution: extend the default
timeout to a larger value (300 seconds used to be good enough for such cases),
or fix/replace your failing drive.

lvm2 can be unblocked with 'dmsetup udevcomplete_all'.

I guess until the udev API propagates its runtime timeout, we cannot do much
more at the moment...

You can enable full udev debugging to find out which rule was being executed
when it froze.

Regards

Zdenek
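
For reference, the remedies mentioned in the reply map roughly onto the
following commands. This is only a sketch: 'dmsetup udevcomplete_all' is taken
from the message itself, while the udevadm invocations and the
event_timeout/udev.event_timeout settings are assumptions whose exact names and
availability depend on the systemd/udev release in use.

```
# Unblock an lvm2 command stuck in dm_udev_wait()/semop(), as in the backtrace
# above (marks all outstanding udev cookies as complete).
dmsetup udevcomplete_all

# Turn on verbose udev logging to see which rule was running when the worker
# was killed; watch the journal, then lower the level again when done.
udevadm control --log-priority=debug
journalctl -u systemd-udevd -f
udevadm control --log-priority=info

# Raise the udev event timeout (e.g. to 300 seconds). Depending on the
# systemd/udev release, this is either 'event_timeout=300' in
# /etc/udev/udev.conf or the kernel command-line parameter
# 'udev.event_timeout=300'.
```

Note that 'dmsetup udevcomplete_all' only releases the waiting lvm2 process; it
does not address the underlying reason the udev rule timed out, so the
diagnostic questions listed in the reply still apply.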