linux-lvm.redhat.com archive mirror
From: "heming.zhao@suse.com" <heming.zhao@suse.com>
To: Martin Wilck <martin.wilck@suse.com>,
	"teigland@redhat.com" <teigland@redhat.com>,
	"linux-lvm@redhat.com" <linux-lvm@redhat.com>
Cc: "rogerheflin@gmail.com" <rogerheflin@gmail.com>,
	"zkabelac@redhat.com" <zkabelac@redhat.com>
Subject: Re: [linux-lvm] Discussion: performance issue on event activation mode
Date: Wed, 9 Jun 2021 00:49:36 +0800	[thread overview]
Message-ID: <6ee904a6-d7b8-1457-513c-c31404400e8d@suse.com> (raw)
In-Reply-To: <1760ea9715bc7a16d4efe10dd95105d663a07228.camel@suse.com>

On 6/8/21 4:26 PM, Martin Wilck wrote:
> On Mo, 2021-06-07 at 16:30 -0500, David Teigland wrote:
>> On Mon, Jun 07, 2021 at 10:27:20AM +0000, Martin Wilck wrote:
>>> Most importantly, this was about LVM2 scanning of physical volumes.
>>> The
>>> number of udev workers has very little influence on PV scanning,
>>> because the udev rules only activate systemd service. The actual
>>> scanning takes place in lvm2-pvscan@.service. And unlike udev,
>>> there's
>>> no limit for the number of instances of a given systemd service
>>> template that can run at any given time.
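As a reference for the mechanism described here: the udev rule itself does no
scanning, it only tags the PV and asks systemd to pull in a per-device pvscan
unit. Roughly what lvm2's 69-dm-lvm-metad.rules does (a hedged excerpt; the
exact rule file and contents vary by lvm2 version and distro):

    # sketch of the event-activation hook in 69-dm-lvm-metad.rules:
    # the rule only instantiates lvm2-pvscan@MAJOR:MINOR.service,
    # the actual scan happens later inside that systemd unit.
    SUBSYSTEM!="block", GOTO="lvm_end"
    ACTION=="remove", GOTO="lvm_end"
    ENV{ID_FS_TYPE}=="LVM2_member", ENV{SYSTEMD_ALIAS}="/dev/block/$env{MAJOR}:$env{MINOR}"
    ENV{ID_FS_TYPE}=="LVM2_member", ENV{SYSTEMD_WANTS}+="lvm2-pvscan@$env{MAJOR}:$env{MINOR}.service"
    LABEL="lvm_end"

Since systemd places no concurrency limit on instances of a service template,
all pvscan units can run at once, independent of the udev worker count.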
>>
>> Excessive device scanning has been the historical problem in this area,
>> but Heming mentioned dev_cache_scan() specifically as a problem.  That
>> was
>> surprising to me since it doesn't scan/read devices, it just creates a
>> list of device names on the system (either readdir in /dev or udev
>> listing.)  If there are still problems with excessive scanning/reading,
>> we'll need some more diagnosis of what's happening, there could be some
>> cases we've missed.
> 
> Heming didn't include his measurement results in the initial post.
> Here's a small summary. Heming will be able to provide more details.
> You'll see that the effects are quite drastic, factors 3-4 between
> every step below, factor >60 between best and worst. I'd say these
> results are typical for what we observe also on real-world systems.
> 
> kvm-qemu, 6 vcpu, 20G memory, 1258 scsi disks, 1015 vg/lv
> Shown is "systemd-analyze blame" output.
> 
>   1) lvm2 2.03.05 (SUSE SLE15-SP2),
>      obtain_device_list_from_udev=1 & event_activation=1
>          9min 51.782s lvm2-pvscan@253:2.service
>          9min 51.626s lvm2-pvscan@65:96.service
>      (many other lvm2-pvscan@ services follow)
>   2) lvm2 latest master
>      obtain_device_list_from_udev=1 & event_activation=1
>          2min 6.736s lvm2-pvscan@70:384.service
>          2min 6.628s lvm2-pvscan@70:400.service
>   3) lvm2 latest master
>      obtain_device_list_from_udev=0 & event_activation=1
>              40.589s lvm2-pvscan@131:976.service
>              40.589s lvm2-pvscan@131:928.service
>   4) lvm2 latest master
>      obtain_device_list_from_udev=0 & event_activation=0,
>              21.034s dracut-initqueue.service
>               8.674s lvm2-activation-early.service
> 
> IIUC, 2) is the effect of _pvscan_aa_quick(). 3) is surprising;
> apparently libudev's device detection causes a factor 3 slowdown.
> While 40s is not bad, you can see that event based activation still
> performs far worse than "serial" device detection lvm2-activation-
> early.service.
> 
> Personally, I'm sort of wary about obtain_device_list_from_udev=0
> because I'm uncertain whether it might break multipath/MD detection.
> Perhaps you can clarify that.
> 
> Regards
> Martin
> 
> 

My latest test results follow. They combine three lvm.conf settings:
devices/obtain_device_list_from_udev
global/event_activation
activation/udev_sync

Case 0> was run under lvm2-2.03.05+;
cases 1> through 8> were run under lvm2-2.03.12+.

All results are from "systemd-analyze blame"; for each case I only
post the top services.
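The three settings toggled below all live in /etc/lvm/lvm.conf. As a
minimal sketch, one combination (the values change per test case) looks
like this:

    # /etc/lvm/lvm.conf -- the three knobs varied across cases 0> .. 8>
    devices {
        obtain_device_list_from_udev = 1
    }
    global {
        event_activation = 1
    }
    activation {
        udev_sync = 1
    }
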


0>
with the SUSE SLE15-SP2 lvm2 version, lvm2-2.03.05+,
"systemd-analyze blame" shows the top services:

devices/obtain_device_list_from_udev=1
global/event_activation=1
activation/udev_sync=1

     9min 51.782s lvm2-pvscan@253:2.service <===
     9min 51.626s lvm2-pvscan@65:96.service
     9min 51.625s lvm2-pvscan@65:208.service
     9min 51.624s lvm2-pvscan@65:16.service
     9min 51.622s lvm2-pvscan@8:176.service
     9min 51.614s lvm2-pvscan@65:144.service

1>
devices/obtain_device_list_from_udev=1
global/event_activation=0
activation/udev_sync=0

          18.307s dracut-initqueue.service
           6.168s btrfsmaintenance-refresh.service
           4.327s systemd-udev-settle.service
           3.633s wicked.service
           2.976s lvm2-activation-early.service  <===
           1.560s lvm2-pvscan@135:832.service
           1.559s lvm2-pvscan@135:816.service
           1.558s lvm2-pvscan@135:784.service
           1.558s lvm2-pvscan@134:976.service
           1.557s lvm2-pvscan@134:832.service
           1.556s dev-system-swap.swap
           1.554s lvm2-pvscan@134:992.service
           1.553s lvm2-pvscan@134:1008.service

2>
devices/obtain_device_list_from_udev=0
global/event_activation=0
activation/udev_sync=0

          17.164s dracut-initqueue.service
          10.420s wicked.service
           7.109s btrfsmaintenance-refresh.service
           4.471s systemd-udev-settle.service
           3.415s lvm2-activation-early.service <===
           1.679s lvm2-pvscan@135:816.service
           1.678s lvm2-pvscan@135:832.service
           1.677s lvm2-pvscan@134:992.service
           1.675s lvm2-pvscan@135:784.service
           1.674s lvm2-pvscan@134:928.service
           1.673s lvm2-pvscan@134:896.service
           1.673s dev-system-swap.swap
           1.672s lvm2-pvscan@134:1008.service


3>
devices/obtain_device_list_from_udev=1
global/event_activation=0
activation/udev_sync=1

          17.552s dracut-initqueue.service
           7.401s lvm2-activation-early.service <====
           6.519s btrfsmaintenance-refresh.service
           5.375s systemd-udev-settle.service
           3.588s wicked.service
           1.723s wickedd-nanny.service
           1.686s wickedd.service
           1.655s lvm2-pvscan@129:992.service
           1.654s lvm2-pvscan@129:960.service
           1.653s lvm2-pvscan@129:896.service
           1.652s lvm2-pvscan@130:784.service
           1.651s lvm2-pvscan@130:768.service


4>
devices/obtain_device_list_from_udev=0
global/event_activation=0
activation/udev_sync=1

          17.975s dracut-initqueue.service
          10.162s wicked.service
           8.238s lvm2-activation-early.service  <===
           6.955s btrfsmaintenance-refresh.service
           4.444s systemd-udev-settle.service
           1.800s rsyslog.service
           1.768s wickedd.service
           1.751s kbdsettings.service
           1.751s kdump-early.service
           1.602s lvm2-pvscan@135:832.service
           1.601s lvm2-pvscan@135:816.service
           1.601s lvm2-pvscan@135:784.service
           1.600s lvm2-pvscan@134:1008.service
           1.599s dev-system-swap.swap
           1.598s lvm2-pvscan@134:832.service

5>
devices/obtain_device_list_from_udev=0
global/event_activation=1
activation/udev_sync=1

          34.908s dracut-initqueue.service
          25.440s systemd-udev-settle.service
          23.335s lvm2-pvscan@66:832.service  <===
          23.335s lvm2-pvscan@65:976.service
          23.335s lvm2-pvscan@66:784.service
          23.335s lvm2-pvscan@65:816.service
          23.335s lvm2-pvscan@8:976.service
          23.327s lvm2-pvscan@66:864.service
          23.323s lvm2-pvscan@66:848.service
          23.316s lvm2-pvscan@65:800.service

6>
devices/obtain_device_list_from_udev=0
global/event_activation=1
activation/udev_sync=0

          36.222s lvm2-pvscan@134:912.service <===
          36.222s lvm2-pvscan@134:816.service
          36.222s lvm2-pvscan@134:784.service
          36.221s lvm2-pvscan@133:816.service
          36.221s lvm2-pvscan@133:848.service
          36.220s lvm2-pvscan@133:928.service
          36.220s lvm2-pvscan@133:768.service
          36.219s lvm2-pvscan@133:992.service
          36.218s lvm2-pvscan@133:784.service
          36.218s lvm2-pvscan@134:800.service
          36.218s lvm2-pvscan@133:864.service
          36.217s lvm2-pvscan@133:896.service
          36.209s lvm2-pvscan@133:960.service
          36.197s lvm2-pvscan@134:1008.service


7>
devices/obtain_device_list_from_udev=1
global/event_activation=1
activation/udev_sync=1

      2min 6.736s lvm2-pvscan@70:384.service <===
      2min 6.628s lvm2-pvscan@70:400.service
      2min 6.554s lvm2-pvscan@69:432.service
      2min 6.518s lvm2-pvscan@69:480.service
      2min 6.478s lvm2-pvscan@69:416.service
      2min 6.277s lvm2-pvscan@69:464.service
      2min 5.791s lvm2-pvscan@69:544.service


8>
devices/obtain_device_list_from_udev=1
global/event_activation=1
activation/udev_sync=0

     2min 27.091s lvm2-pvscan@129:944.service <===
     2min 26.952s lvm2-pvscan@129:912.service
     2min 26.950s lvm2-pvscan@129:880.service
     2min 26.947s lvm2-pvscan@129:960.service
     2min 26.947s lvm2-pvscan@129:928.service
     2min 26.947s lvm2-pvscan@129:832.service
     2min 26.938s lvm2-pvscan@129:848.service
     2min 26.733s lvm2-pvscan@129:864.service
     2min 16.241s lvm2-pvscan@66:976.service
     2min 15.166s lvm2-pvscan@66:992.service
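
The per-case tables above are just the head of "systemd-analyze blame"
output. A small hypothetical helper (the function names are mine) that
converts the timing column to seconds and sorts units, assuming the
"[Nmin ]S.SSSs unit" format shown above:

```python
import re

# Matches the blame timing format shown above: an optional "Nmin" part,
# a seconds part, then the unit name,
# e.g. "9min 51.782s lvm2-pvscan@253:2.service".
_BLAME_RE = re.compile(r'\s*(?:(\d+)min\s+)?(\d+(?:\.\d+)?)s\s+(\S+)')

def blame_seconds(line: str) -> tuple[float, str]:
    """Convert one blame line to (total_seconds, unit_name)."""
    m = _BLAME_RE.match(line)
    if m is None:
        raise ValueError(f"unrecognized blame line: {line!r}")
    minutes = int(m.group(1)) if m.group(1) else 0
    return minutes * 60 + float(m.group(2)), m.group(3)

def top_services(blame_output: str, n: int = 5) -> list[tuple[float, str]]:
    """Return the n slowest units from full blame output, slowest first."""
    rows = [blame_seconds(l)
            for l in blame_output.strip().splitlines() if l.strip()]
    return sorted(rows, reverse=True)[:n]
```

This is only a convenience for comparing runs; the numbers in this mail
were read directly from "systemd-analyze blame".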


_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

