From: David Teigland <teigland@redhat.com>
To: Martin Wilck <martin.wilck@suse.com>
Cc: "zkabelac@redhat.com" <zkabelac@redhat.com>,
	"linux-lvm@redhat.com" <linux-lvm@redhat.com>,
	Heming Zhao <heming.zhao@suse.com>
Subject: Re: [linux-lvm] Discussion: performance issue on event activation mode
Date: Fri, 2 Jul 2021 17:02:19 -0500
Message-ID: <20210702220219.GB15057@redhat.com>
In-Reply-To: <831e213cc0c54476d053ab230df29a4cb386d784.camel@suse.com>

On Fri, Jul 02, 2021 at 09:22:03PM +0000, Martin Wilck wrote:
> On Fr, 2021-07-02 at 16:09 -0500, David Teigland wrote:
> > On Sun, Jun 06, 2021 at 02:15:23PM +0800, heming.zhao@suse.com wrote:
> > > dev_cache_scan //order: O(n^2)
> > >  + _insert_dirs //O(n)
> > >  | if obtain_device_list_from_udev() true
> > >  |   _insert_udev_dir //O(n)
> > >  |
> > >  + dev_cache_index_devs //O(n)
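That O(n^2) is the usual shape of an O(n) lookup nested inside an O(n)
device loop; a minimal C sketch of the pattern (hypothetical types, not
the lvm2 source):

  #include <stddef.h>
  #include <string.h>

  /* Hypothetical stand-ins for illustration, not lvm2 types. */
  struct dev { const char *name; };

  /* O(n) linear walk, standing in for a per-device list lookup. */
  static struct dev *lookup(struct dev *devs, size_t n, const char *name)
  {
          for (size_t i = 0; i < n; i++)
                  if (!strcmp(devs[i].name, name))
                          return &devs[i];
          return NULL;
  }

  /* An O(n) lookup called once per device makes the scan O(n^2). */
  static void scan(struct dev *devs, size_t n)
  {
          for (size_t i = 0; i < n; i++)
                  (void)lookup(devs, n, devs[i].name);
  }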
> > 
> > I've been running some experiments and trying some patches to improve
> > this.  By setting obtain_device_list_from_udev=0 and using the attached
> > patch to disable dev_cache_index_devs, pvscan runs much faster.
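For reference, the lvm.conf side of that change is just the existing
option, roughly:

  devices {
          # Scan /dev directly instead of asking udev for the device list.
          obtain_device_list_from_udev = 0
  }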
> > 
> > systemctl status lvm2-pvscan appears to show that the pvscan command
> > itself runs for only 2-4 seconds, while the service as a whole takes
> > around 15 seconds.  See the 16 second gap below between the end of
> > pvscan and the systemd "Started" message.  If that's accurate, the
> > remaining delay would lie outside lvm.
> > 
> > Jul 02 15:27:57 localhost.localdomain systemd[1]: Starting LVM event
> > activation on device 253:1710...
> > Jul 02 15:28:00 localhost.localdomain lvm[65620]:   pvscan[65620] PV
> > /dev/mapper/mpathalz online, VG 1ed02c7d-0019-43c4-91b5-f220f3521ba9
> > is complete.
> > Jul 02 15:28:00 localhost.localdomain lvm[65620]:   pvscan[65620] VG
> > 1ed02c7d-0019-43c4-91b5-f220f3521ba9 run autoactivation.
> > Jul 02 15:28:00 localhost.localdomain lvm[65620]:   1 logical
> > volume(s) in volume group "1ed02c7d-0019-43c4-91b5-f220f3521ba9" now
> > active
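One way to pin that gap down, assuming the unit instance name matches
the 253:1710 device above:

  # Precise journal timestamps for this pvscan instance:
  journalctl -u lvm2-pvscan@253:1710.service -o short-precise

  # When the main process exited vs. when systemd marked the unit active:
  systemctl show lvm2-pvscan@253:1710.service \
      -p ExecMainExitTimestamp -p ActiveEnterTimestamp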
> 
> Printing this message is really the last thing that pvscan does?

I've not seen anything measurable after that message.  However, digging
through the command init/exit paths I did find libudev setup/destroy calls
that can also be skipped when the command is not accessing libudev info.
A quick check seemed to show some further improvement from dropping those.
(That will be part of the larger patch isolating libudev usage.)
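The shape of that change is roughly the following (a sketch of the idea
with made-up names, not the actual patch):

  #include <libudev.h>

  static struct udev *_udev_ctx;

  /* Only pay for libudev setup when the command will read udev info. */
  static void init_udev(int need_udev_info)
  {
          if (need_udev_info)
                  _udev_ctx = udev_new();
  }

  /* udev_unref() drops our reference and frees the context. */
  static void exit_udev(void)
  {
          if (_udev_ctx)
                  _udev_ctx = udev_unref(_udev_ctx);
  }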

I'm still seeing significant delay between the apparent command exit and
the systemd "Started" message, but this will require a little more work to
prove.

Dave
