linux-lvm.redhat.com archive mirror
From: Martin Wilck <martin.wilck@suse.com>
To: Heming Zhao <heming.zhao@suse.com>,
	"zdenek.kabelac@gmail.com" <zdenek.kabelac@gmail.com>
Cc: "teigland@redhat.com" <teigland@redhat.com>,
	"linux-lvm@redhat.com" <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] lvmpolld causes high cpu load issue
Date: Wed, 17 Aug 2022 13:41:17 +0000	[thread overview]
Message-ID: <4e0551e18a28ff602fae6e419dc746145e5962d3.camel@suse.com> (raw)
In-Reply-To: <727dcd28-99a2-739b-debd-a921e477e0d3@gmail.com>

On Wed, 2022-08-17 at 14:54 +0200, Zdenek Kabelac wrote:
> Dne 17. 08. 22 v 14:39 Martin Wilck napsal(a):
> 
> 
> Let's make clear we are very well aware of all the constraints
> associated with udev rule logic (and we tried quite hard to minimize
> the impact - however the udevd developers kind of 'misunderstood' how
> badly they would impact the system's performance with the existing
> watch rule logic - and the story kind of 'continues' with systemd's
> D-Bus services, unfortunately...)

I dimly remember you dislike udev ;-)

I like the general idea of the udev watch. It is what causes newly
created partitions to appear in the system automatically, which is very
convenient for users and wouldn't work otherwise. I can see that it
might be inappropriate for LVM PVs. We can discuss changing the rules
such that the watch is disabled for LVM devices (both PV and LV). I
don't claim to foresee all possible side effects, but it might be worth
a try. It would mean that newly created LVs, LV size changes etc. would
not be visible in the system immediately. I suppose you could work
around that in the LVM tools by triggering change events after
operations like lvcreate.
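
A rough sketch of what such a rule change could look like - the file
name and rule ordering are assumptions on my part, not the shipped lvm2
rules; the ID_FS_TYPE / DM_UUID match values and OPTIONS+="nowatch" are
standard udev:

```
# /etc/udev/rules.d/69-nowatch-lvm.rules  (hypothetical file name;
# must run after the blkid import in 60-persistent-storage.rules)

# LVM physical volumes: blkid reports ID_FS_TYPE=LVM2_member
SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="LVM2_member", OPTIONS+="nowatch"

# LVM logical volumes: device-mapper devices whose UUID has the LVM- prefix
SUBSYSTEM=="block", KERNEL=="dm-*", ENV{DM_UUID}=="LVM-*", OPTIONS+="nowatch"
```

The lvcreate-side workaround would then be an explicit synthetic event,
e.g. "udevadm trigger --action=change /dev/vg0/lv0" (the device path is
just an example).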

> However let's focus on 'pvmove' as it is a potentially very lengthy
> operation - so it's not feasible to keep the VG locked/blocked across
> an operation which might take even days with slower storage and big
> moved sizes (the write access/lock disables all readers...)

So these close-after-write operations are caused by locking/unlocking
the PVs?

Note: We were observing that watch events were triggered every 30s, for
every PV, simultaneously. (@Heming correct me if I'm wrong here.)
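
For reference, this cadence can be made visible with udevadm monitor;
this is just the diagnostic command we used, nothing specific to the
customer setup:

```
# Show kernel uevents and udev-processed events for block devices,
# with timestamps and properties; the watch-triggered events show up
# as ACTION=change on the PV nodes roughly every 30 seconds.
udevadm monitor --kernel --udev --subsystem-match=block --property
```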

> So lvm2 does try to minimize the locking time. We will re-validate
> whether just the necessary 'vg updating' operations are using 'write'
> access - since occasionally, due to some unrelated code changes, it
> might eventually result in an unwanted 'write' VG open - but we can't
> keep the operation blocking a whole VG because of slow udev rule
> processing.

> In normal circumstances a udev rule should be processed very fast -
> unless there is something mis-designed causing CPU overloading.
> 

IIRC there is no evidence that the udev rules are really processed
"slowly". udev isn't efficient; a run time on the order of 10 ms is
expected for a worker. We tried different tracing approaches, but we
never saw "multipath -U" hanging on a lock or a resource shortage. It
seems to be the sheer amount of events and processes that is causing
trouble. The customer had a very lengthy "multipath.conf" file (~50k
lines), which needs to be parsed by every new multipath instance; that
was slowing things down somewhat. Still, the runtime of "multipath -U"
would be no more than 100 ms, AFAICT.
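
For completeness: the per-device rule processing time can be checked in
isolation with udevadm test, which simulates an event for one device
without actually emitting it (the sysfs path below is just an example):

```
# Dry-run the udev rules for a single block device and time it;
# this measures rule parsing and matching, not the watch logic itself.
time udevadm test --action=change /sys/class/block/sda >/dev/null 2>&1
```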

Martin

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


Thread overview: 23+ messages
2022-08-16  9:28 [linux-lvm] lvmpolld causes IO performance issue Heming Zhao
2022-08-16  9:38 ` Zdenek Kabelac
2022-08-16 10:08   ` [linux-lvm] lvmpolld causes high cpu load issue Heming Zhao
2022-08-16 10:26     ` Zdenek Kabelac
2022-08-17  2:03       ` Heming Zhao
2022-08-17  8:06         ` Zdenek Kabelac
2022-08-17  8:43           ` Heming Zhao
2022-08-17  9:46             ` Zdenek Kabelac
2022-08-17 10:47               ` Heming Zhao
2022-08-17 11:13                 ` Zdenek Kabelac
2022-08-17 12:39                 ` Martin Wilck
2022-08-17 12:54                   ` Zdenek Kabelac
2022-08-17 13:41                     ` Martin Wilck [this message]
2022-08-17 15:11                       ` David Teigland
2022-08-18  8:06                         ` Martin Wilck
2022-08-17 15:26                       ` Zdenek Kabelac
2022-08-17 15:58                         ` Demi Marie Obenour
2022-08-18  7:37                           ` Martin Wilck
2022-08-17 17:35                         ` Gionatan Danti
2022-08-17 18:54                           ` Zdenek Kabelac
2022-08-17 18:54                             ` Zdenek Kabelac
2022-08-17 19:13                             ` Gionatan Danti
2022-08-18 21:13                   ` Martin Wilck
