From: Zdenek Kabelac <zkabelac@redhat.com>
To: Eric Ren <zren@suse.com>,
LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] The benefits of lvmlockd over clvmd?
Date: Wed, 10 Jan 2018 16:35:56 +0100 [thread overview]
Message-ID: <e1d77d74-2294-fefb-2815-0e0ef38213a6@redhat.com> (raw)
In-Reply-To: <04ad2444-ad44-3442-42cb-36b1ed18e484@suse.com>
On 10.1.2018 at 15:42, Eric Ren wrote:
> Zdenek,
>
> Thanks for helping make this more clear to me :)
>
>>
>> There are a couple of fuzzy sentences - so let's try to make them clearer.
>>
>> The default mode for 'clvmd' is to 'share' a resource everywhere - which
>> clearly comes from the original 'gfs' requirement and from 'linear/striped'
>> volumes that can easily be activated on many nodes.
>>
>> However, over time different use-cases got higher priority, so basically
>> every new dm target (except mirror) does NOT support shared storage (maybe
>> raid will one day...). So targets like snapshot, thin, cache and raid do
>> require so-called exclusive activation.
>
> Good to know the history about clvmd :)
>
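Just to illustrate the point above - a rough sketch, the VG/LV names are
made up:

    # linear/striped LV - can be activated on all nodes at once
    lvchange -ay  vg/lv_linear

    # thin/snapshot/cache/raid LV - needs exclusive activation on one node
    lvchange -aey vg/lv_thin
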
>>
>> So here comes the difference - lvmlockd by default goes with
>> 'exclusive/local' activation, and shared activation (the old clvmd default)
>> needs to be requested explicitly.
>>
>> Another difference is that the 'clvmd' world 'automates' activation around
>> the whole cluster (so from node A it's possible to activate a volume on
>> node B without ANY other command than 'lvchange').
>>
>> With the 'lvmlockd' mechanism this was 'dropped', and it's the user's
>> responsibility to initiate e.g. an ssh command to run the activation on the
>> other node(s) and to handle the errors.
>>
>> There are various pros & cons to each solution - both need setup, and while
>> the 'clvmd' world is 'set & done', in the lvmlockd world the scripting still
>> needs to be born in some way.
>
> True.
>
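To make that concrete - a rough sketch of the lvmlockd side (node and VG/LV
names below are made up, error handling left out):

    # lvmlockd default: plain '-ay' takes the LV lock exclusively/locally
    lvchange -ay  vg/lv

    # shared activation has to be requested explicitly
    lvchange -asy vg/lv

    # there is no cluster-wide automation - activating on another node
    # is the user's job, e.g. over ssh
    ssh nodeB 'lvchange -asy vg/lv'
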
>> Also, ATM 'lvmetad' can't be used even with lvmlockd - simply because we are
>> not (yet) capable of handling 'udev' around the cluster (and it's not clear
>> we ever will be).
>
> This sentence surprises me a lot. According to the lvmlockd manpage, it seems
> clear that lvmlockd can work with lvmetad now.
> IIRC, it's not the first time you have mentioned "cluster udev". It gives me
> the impression that the current udev system is not
> 100% reliable for shared disks in a cluster, no matter whether we use lvmetad
> or not, right? If so, could you please give an example
> scenario where lvmetad may not work well with lvmlockd?
>
Hi,

The world of udevd/systemd is a complicated monster - it has no notion of
handling bad/duplicate/... devices and so on.
The current design of lvmetad is not sufficient to live in this ocean of bugs -
so, as said, ATM it's highly recommended to keep lvmetad off in clusters.
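So for now the safe combination in lvm.conf for such setups looks roughly like
this (a sketch - check the defaults of your distribution):

    global {
        use_lvmlockd = 1
        use_lvmetad = 0    # keep lvmetad off in clusters for now
    }
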
Regards
Zdenek
Thread overview: 8+ messages
2018-01-09 3:15 [linux-lvm] The benefits of lvmlockd over clvmd? Eric Ren
2018-01-09 16:06 ` David Teigland
2018-01-10 7:11 ` Eric Ren
2018-01-10 9:36 ` Zdenek Kabelac
2018-01-10 14:42 ` Eric Ren
2018-01-10 15:35 ` Zdenek Kabelac [this message]
2018-01-10 17:25 ` David Teigland
2018-01-10 16:45 ` David Teigland