linux-lvm.redhat.com archive mirror
From: Eric Ren <zren@suse.com>
To: LVM general discussion and development <linux-lvm@redhat.com>,
	Zdenek Kabelac <zkabelac@redhat.com>
Subject: Re: [linux-lvm] The benefits of lvmlockd over clvmd?
Date: Wed, 10 Jan 2018 22:42:08 +0800	[thread overview]
Message-ID: <04ad2444-ad44-3442-42cb-36b1ed18e484@suse.com> (raw)
In-Reply-To: <e644fe30-4a22-225e-9f70-1d8bc48b5d7e@redhat.com>

Zdenek,

Thanks for helping make this clearer to me :)

>
> There are a couple of fuzzy sentences - so let's try to make them clearer.
>
> The default mode for 'clvmd' is to 'share' a resource everywhere - which 
> clearly comes from the original 'gfs' requirement and from 'linear/striped' 
> volumes that can easily be activated on many nodes.
>
> However, over time different use-cases got more priority, so 
> basically every new dm target (except mirror) does NOT support shared 
> storage (maybe raid will one day...).  So targets like snapshot, 
> thin, cache and raid do require so-called exclusive activation.

Good to know the history about clvmd :)
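
For illustration, here is a minimal sketch of what that means in practice
(the VG/LV names `vg`, `thinlv` and `linearlv` are my own assumptions, not
from your mail; this needs a clustered VG and a running clvmd):

```shell
# Targets such as snapshot/thin/cache only work when the LV is active
# on a single node; '-aey' requests exclusive activation cluster-wide.
lvchange -aey vg/thinlv

# A plain linear/striped LV can still be activated in shared mode (the
# old clvmd default) on every node at once:
lvchange -ay vg/linearlv
```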

>
> So here comes the difference - lvmlockd by default goes with 
> 'exclusive/local' activation, and shared activation (the old clvmd 
> default) needs to be requested explicitly.
>
> Another difference is - the 'clvmd' world is 'automating' activation 
> around the whole cluster (so from node A it's possible to activate a 
> volume on node B without ANY other command than 'lvchange').
>
> With the 'lvmlockd' mechanism this was 'dropped', and it's the user's 
> responsibility to initiate e.g. an ssh command with the activation on 
> other node(s) and to resolve error handling.
>
> There are various pros & cons for each solution - both need setup, and 
> while the 'clvmd' world is 'set & done', the lvmlockd world's scripting 
> still needs to be born in some way.

True.
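
For comparison, a rough sketch of the lvmlockd flow as I understand it
(the VG/LV names, device path and the hostname `node2` are assumptions;
this needs use_lvmlockd=1 in lvm.conf and a running sanlock or dlm lock
manager):

```shell
# Create a shared VG managed by lvmlockd:
vgcreate --shared vg /dev/sdb

# Start the VG's lockspace on each node that will use it:
vgchange --lock-start vg

# Plain '-ay' takes an exclusive lock by default under lvmlockd;
# shared activation has to be requested with '-asy':
lvchange -ay  vg/lv    # exclusive (the lvmlockd default)
lvchange -asy vg/lv    # shared, e.g. under a gfs2 filesystem

# Unlike clvmd, activation is per-node: to activate on another node
# the user must run the command there themselves, e.g. over ssh:
ssh node2 lvchange -asy vg/lv
```

So the "scripting needs to be born" part is essentially wrapping that
last step with error handling.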

> Also ATM 'lvmetad' can't be used even with lvmlockd - simply because 
> we are not (yet) capable of handling 'udev' around the cluster (and it's 
> not clear we ever will be).

This sentence surprises me a lot. According to the manpage of lvmlockd, it 
seems clear that lvmlockd can work with lvmetad now.
IIRC, it's not the first time you have mentioned "cluster udev". It 
gives me the impression that the current udev system is not
100% reliable for shared disks in a cluster, no matter whether we use 
lvmetad or not, right? If so, could you please give an example
scenario where lvmetad may not work well with lvmlockd?

>
> On the positive side - we are working hard to enhance 'scanning' speed, 
> so in the majority of use-cases there is no real performance gain from 
> lvmetad usage anyway.

Great! Thanks.

Regards,
Eric


Thread overview: 8+ messages
2018-01-09  3:15 [linux-lvm] The benefits of lvmlockd over clvmd? Eric Ren
2018-01-09 16:06 ` David Teigland
2018-01-10  7:11   ` Eric Ren
2018-01-10  9:36     ` Zdenek Kabelac
2018-01-10 14:42       ` Eric Ren [this message]
2018-01-10 15:35         ` Zdenek Kabelac
2018-01-10 17:25           ` David Teigland
2018-01-10 16:45     ` David Teigland
