Subject: [linux-lvm] lvmpolld causes IO performance issue
From: Heming Zhao @ 2022-08-16  9:28 UTC (permalink / raw)
  To: linux-lvm, martin.wilck; +Cc: teigland, zdenek.kabelac

Hello maintainers & list,

I'd like to report a case we ran into:
A SUSE customer hit an lvmpolld issue which caused a dramatic drop in IO
performance.

How to trigger:
When the machine is connected to a large number of LUNs (eg 80~200) and a pvmove
is running (eg, moving a single disk to a new one, cmd like: pvmove disk1 disk2),
the system suffers high cpu load. With only ~10 LUNs connected, performance is fine.
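
A minimal reproduction sketch (the device names are placeholders, and a multipath
setup with ~100 LUNs is assumed to already exist):

  # assumed names: mpatha is the source PV, mpathb the replacement PV, both in vg0
  pvmove /dev/mapper/mpatha /dev/mapper/mpathb &
  # watch the load and systemd-udevd activity while the pvmove is being polled
  top -b -d 5 | grep -E 'load average|systemd-udevd'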

We found two workarounds:
1. Set lvm.conf 'activation/polling_interval=120' (see the lvm.conf fragment below).
2. Write a special udev rule which makes udev ignore the watch event for mpath devices:
   echo 'ENV{DM_UUID}=="mpath-*", OPTIONS+="nowatch"' >\
    /etc/udev/rules.d/90-dm-watch.rules

Applying either one of the two makes the performance issue disappear.
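
For reference, workaround 1 corresponds to this lvm.conf fragment (a minimal
sketch, only the relevant setting is shown):

  # /etc/lvm/lvm.conf
  activation {
      # poll pvmove progress every 120s instead of the default 15s
      polling_interval = 120
  }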

** the root cause **

lvmpolld polls at a fixed interval to update the pvmove status.

On every polling_interval, lvm2 updates the VG metadata. The update calls
sys_close, which triggers a systemd-udevd IN_CLOSE_WRITE inotify event, eg:
  2022-<time>-xxx <hostname> systemd-udevd[pid]: dm-179: Inotify event: 8 for /dev/dm-179
(8 is IN_CLOSE_WRITE.)
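
These close events can be watched directly with inotify-tools (assuming the
package is installed; dm-179 is just the device taken from the log line above):

  inotifywait -m -e close_write /dev/dm-179
  # prints a line such as "/dev/dm-179 CLOSE_WRITE,CLOSE" every time a
  # writer (here: lvm2 updating the metadata) closes the device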

The underlying devices of these VGs are multipath devices. So when lvm2 updates
metadata, even if pvmove has written only a little data, the sys_close action
trips udev's "watch" mechanism, which gets notified every time a process writes
to the device and closes it. This causes frequent, pointless re-evaluation of the
udev rules for these devices.
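
The repeated rule processing can be observed with udevadm (a rough check; the
exact output format depends on the systemd version):

  udevadm monitor --udev --subsystem-match=block
  # while pvmove runs, 'change' events for the dm-*/mpath devices keep
  # scrolling by, and each one re-runs the whole udev rule set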

My question: do the lvm2 maintainers have any idea how to fix this bug?

In my view, could lvm2 keep the VG devices' fds open (avoiding the repeated close)
until pvmove finishes?

Thanks,
Heming



