linux-lvm.redhat.com archive mirror
From: Zdenek Kabelac <zkabelac@redhat.com>
To: Peter Rajnoha <prajnoha@redhat.com>
Cc: "linux-lvm@redhat.com" <linux-lvm@redhat.com>,
	teigland@redhat.com, Heming Zhao <heming.zhao@suse.com>,
	Martin Wilck <martin.wilck@suse.com>
Subject: Re: [linux-lvm] Discussion: performance issue on event activation mode
Date: Tue, 8 Jun 2021 16:23:21 +0200	[thread overview]
Message-ID: <0322710f-fbfe-73ff-b24d-af08aae178fd@redhat.com> (raw)
In-Reply-To: <20210608135648.gr5xfwma2f3jschr@alatyr-rpi.brq.redhat.com>

On 08. 06. 21 at 15:56, Peter Rajnoha wrote:
> On Tue 08 Jun 2021 15:46, Zdenek Kabelac wrote:
>> On 08. 06. 21 at 15:41, Peter Rajnoha wrote:
>>> On Tue 08 Jun 2021 13:23, Martin Wilck wrote:
>>>> On Di, 2021-06-08 at 14:29 +0200, Peter Rajnoha wrote:
>>>>> On Mon 07 Jun 2021 16:48, David Teigland wrote:
>>>>>> If there are say 1000 PVs already present on the system, there
>>>>>> could be
>>>>>> real savings in having one lvm command process all 1000, and then
>>>>>> switch
>>>>>> over to processing uevents for any further devices afterward.  The
>>>>>> switch
>>>>>> over would be delicate because of the obvious races involved with
>>>>>> new devs
>>>>>> appearing, but probably feasible.
>>>>> Maybe to avoid the race, we could possibly write the proposed
>>>>> "/run/lvm2/boot-finished" right before we initiate scanning in
>>>>> "vgchange
>>>>> -aay" that is a part of the lvm2-activation-net.service (the last
>>>>> service to do the direct activation).
>>>>>
>>>>> A few event-based pvscans could fire during the window between the
>>>>> "scan initiated" phase in lvm2-activation-net.service's
>>>>> "ExecStart=vgchange -aay..."
>>>>> and the originally proposed "ExecStartPost=/bin/touch /run/lvm2/boot-
>>>>> finished",
>>>>> but I think that's still better than missing important uevents completely
>>>>> in this window.
>>>> That sounds reasonable. I was thinking along similar lines. Note that
>>>> in the case where we had problems lately, all actual activation (and
>>>> slowness) happened in lvm2-activation-early.service.
>>>>
>>> Yes, I think most of the activations are covered with the first service
>>> where most of the devices are already present, then the rest is covered
>>> by the other two services.
>>>
>>> Anyway, I'd still like to know why exactly
>>> obtain_device_list_from_udev=1 is so slow. The only thing it does
>>> is call libudev's enumeration for "block" subsystem devs. We
>>> don't even check whether the device is initialized in udev in this case,
>>> if I remember correctly, so if there's any udev processing happening in
>>> parallel, it shouldn't be slowing us down. BUT we're waiting for udev
>>> records to get initialized for filtering reasons, like mpath and MD
>>> component detection. We should probably inspect this in detail and see
>>> where the time is really taken underneath before we make any further
>>> changes...
>>
>> This reminds me - did we already fix the annoying problem of a 'repeated'
>> sleep for every 'unfinished' udev initialization?
>>
>> I believe there should be exactly one sleep to wait for udev, and if that
>> doesn't work - go on without it.
>>
>> But I've seen some traces where the sleep was repeated for each device
>> whose udev record was 'uninitialized'.
>>
>> Clearly this doesn't fix the problem of 'uninitialized udev', but it at
>> least avoids an extremely long-sleeping lvm command.
> The sleep + iteration is still there!
>
> The issue is that we're now relying on udev db records that contain
> info about mpath and MD components - without this, the detection (and
> hence filtering) could fail in certain cases. So if we go without checking
> the udev db, that'll be a step back. As an alternative, we'd need to call
> out to mpath and MD directly from LVM2 if we really wanted to avoid
> checking the udev db (but then we'd be checking the same thing that is
> already checked by udev means).
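For concreteness, the marker-file scheme quoted earlier in the thread (the unit name, marker path, and touch command all come from the quoted proposal; writing it as a drop-in file is just one possible way to wire it up, not something the thread specifies) might look like:

```ini
# /etc/systemd/system/lvm2-activation-net.service.d/boot-finished.conf
# Sketch only: after the last direct-activation "vgchange -aay" finishes,
# write the marker so that later event-based pvscans take over.
[Service]
ExecStartPost=/bin/touch /run/lvm2/boot-finished
```

As discussed above, event-based pvscans firing between the start of the scan and the ExecStartPost touch would still race with the marker, but that window is small compared to missing uevents entirely.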


A few things here: I've already seen traces where we've been waiting for udev
basically 'endlessly' - as if the sleep did not help at all.

So either our command holds some lock - preventing the 'udev' rule from
finishing - or some other trouble is blocking it.

My point in waiting 'just once' is that if the 1st sleep didn't help, all
the following sleeps for other devices likely won't help either.

So we may report some 'garbage' if we don't have all the info from udev
that we need - but at least the command won't take many minutes, and in some
cases the device isn't actually needed for successful command completion.

But of course we should figure out why udev isn't initialized in time.


Zdenek


_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

