From: Martin Wilck <martin.wilck@suse.com>
To: "teigland@redhat.com" <teigland@redhat.com>,
"linux-lvm@redhat.com" <linux-lvm@redhat.com>
Cc: "rogerheflin@gmail.com" <rogerheflin@gmail.com>,
Heming Zhao <heming.zhao@suse.com>,
"zkabelac@redhat.com" <zkabelac@redhat.com>
Subject: Re: [linux-lvm] Discussion: performance issue on event activation mode
Date: Tue, 8 Jun 2021 08:26:01 +0000
Message-ID: <1760ea9715bc7a16d4efe10dd95105d663a07228.camel@suse.com>
In-Reply-To: <20210607213003.GA8181@redhat.com>

On Mon, 2021-06-07 at 16:30 -0500, David Teigland wrote:
> On Mon, Jun 07, 2021 at 10:27:20AM +0000, Martin Wilck wrote:
> > Most importantly, this was about LVM2 scanning of physical volumes.
> > The number of udev workers has very little influence on PV scanning,
> > because the udev rules only activate a systemd service. The actual
> > scanning takes place in lvm2-pvscan@.service. And unlike udev,
> > there's no limit on the number of instances of a given systemd
> > service template that can run at any given time.
>
> Excessive device scanning has been the historical problem in this
> area, but Heming mentioned dev_cache_scan() specifically as a problem.
> That was surprising to me, since it doesn't scan/read devices; it just
> creates a list of device names on the system (either readdir in /dev
> or a udev listing). If there are still problems with excessive
> scanning/reading, we'll need some more diagnosis of what's happening;
> there could be some cases we've missed.
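
For context, the mechanism works roughly like this (an illustrative
sketch from my side, not the verbatim rule; the actual lines in lvm2's
69-dm-lvm-metad.rules vary by distribution and lvm2 version):

    # udev rule fragment: once blkid has identified a block device as
    # an LVM2 PV, don't scan it in the udev worker; just pull in the
    # templated pvscan service for that major:minor.
    SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="LVM2_member", \
        ENV{SYSTEMD_WANTS}+="lvm2-pvscan@$major:$minor.service"

udev merely schedules the unit; the actual scanning runs later, in
however many lvm2-pvscan@.service instances systemd starts in parallel.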
Heming didn't include his measurement results in the initial post.
Here's a small summary; Heming will be able to provide more details.
You'll see that the effects are quite drastic: a factor of 3-4 between
each step below, and a factor of more than 60 between best and worst.
I'd say these results are typical of what we also observe on
real-world systems.
Test setup: kvm-qemu, 6 vCPUs, 20 GB memory, 1258 SCSI disks, 1015 VGs/LVs.
Shown below is "systemd-analyze blame" output.
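The per-unit figures can be reproduced roughly as follows (the exact
invocations are illustrative):

    # list the slowest units after boot; the pvscan instances dominate
    systemd-analyze blame | head -n 20
    # or filter for the pvscan template instances specifically
    systemd-analyze blame | grep 'lvm2-pvscan@'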
1) lvm2 2.03.05 (SUSE SLE15-SP2)
obtain_device_list_from_udev=1 & event_activation=1
9min 51.782s lvm2-pvscan@253:2.service
9min 51.626s lvm2-pvscan@65:96.service
(many other lvm2-pvscan@ services follow)
2) lvm2 latest master
obtain_device_list_from_udev=1 & event_activation=1
2min 6.736s lvm2-pvscan@70:384.service
2min 6.628s lvm2-pvscan@70:400.service
3) lvm2 latest master
obtain_device_list_from_udev=0 & event_activation=1
40.589s lvm2-pvscan@131:976.service
40.589s lvm2-pvscan@131:928.service
4) lvm2 latest master
obtain_device_list_from_udev=0 & event_activation=0
21.034s dracut-initqueue.service
8.674s lvm2-activation-early.service
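
For reference, the two knobs toggled above live in lvm.conf. A minimal
sketch of the scenario-4 configuration (section placement as in the
stock lvm.conf; illustrative, not a drop-in file):

    # /etc/lvm/lvm.conf (excerpt)
    devices {
        # build the device list by scanning /dev instead of asking libudev
        obtain_device_list_from_udev = 0
    }
    global {
        # static activation at boot instead of per-uevent activation
        event_activation = 0
    }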
IIUC, 2) is the effect of _pvscan_aa_quick(). 3) is surprising;
apparently libudev's device detection causes a factor-3 slowdown.
While 40s is not bad, you can see that event-based activation still
performs far worse than the "serial" device detection of
lvm2-activation-early.service.
Personally, I'm somewhat wary of obtain_device_list_from_udev=0,
because I'm uncertain whether it might break multipath/MD detection.
Perhaps you can clarify that.
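
For what it's worth, the related lvm.conf options can be inspected with
lvmconfig; whether their detection logic works equally well without
udev information is exactly what I'd like to know:

    # both options default to 1 (enabled)
    lvmconfig devices/multipath_component_detection devices/md_component_detection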
Regards
Martin