From: su liu <liusu8788@gmail.com>
To: David Teigland <teigland@redhat.com>
Cc: linux-lvm <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] lvm2 cluster aware
Date: Fri, 26 Aug 2016 14:28:03 +0800
Message-ID: <CAN2gjWTwXofKjUGO0RStX3=CQJLa5kR5CG945pE9=gEtucdQQA@mail.gmail.com>


Hi David Teigland, thanks for your explanation.

My use case is:

I have 3 nodes and they all have access to a SAN storage. One node acts as
the admin node, which manages LVs (such as creating or deleting LVs). The
other two nodes act as compute nodes which run VMs. I then want to attach
the LVs to the VMs (not multi-attach). As I understand it, the metadata is
stored on the PVs, so when I create an LV on the admin node it can be seen
on the compute nodes without the lvmetad and clvmd daemons running. My
questions are:

1. In this document, http://www.tldp.org/HOWTO/LVM-HOWTO/sharinglvm1.html,
it says that:

"The key thing to remember when sharing volumes is that all the LVM
administration must be done on one node only and that all other nodes must
have LVM shut down before changing anything on the admin node. Then, when
the changes have been made, it is necessary to run vgscan on the other
nodes before reloading the volume groups."

I have not shut down LVM on the compute nodes, and I have not run vgscan on
them after creating the LV on the admin node, but I can still see the LV
there after creating it. Should I really shut down LVM and run vgscan as
the document says?
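
If a manual refresh is needed, I guess something like this on the compute
node would be enough (myvg is the VG from my setup; this is only a rough
sketch, please correct me if it is wrong):

  vgscan        # rescan the devices and rebuild the list of VGs
  lvs myvg      # check that the newly created LV is visible on this node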

2. If I do not use clvmd or lvmlockd, should I run an additional operation
to activate the LV on the compute node before attaching it to a VM? And
should I do something after I run the delete operation on the admin node?
When I tried yesterday, after I ran lvremove on the admin node, there was
still a device file left in the directory /dev/myvg/ on the compute node.
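
For example, is something like the following sequence on the compute node
roughly right (mylv is just an example LV name)?

  lvchange -ay myvg/mylv   # activate the LV on this node before attaching it to the VM
  lvchange -an myvg/mylv   # deactivate it here before the admin node runs lvremove
  vgscan --mknodes         # afterwards, remove the stale entries under /dev/myvg/ here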

Can you explain them to me? Thanks very much.
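
One more thought: of the options you listed in your earlier reply (quoted
below), the system ID approach seems closest to my setup. If I understand
lvmsystemid(7) correctly, something like this on the admin node would mark
the VG as owned by that host (the system ID value is just a placeholder):

  lvm systemid                              # show this host's system ID
  vgchange --systemid <admin-host-id> myvg  # claim myvg for the admin node
  vgs -o+systemid                           # verify which host owns each VG

Though I am not sure whether the compute nodes could still activate LVs in
a VG that carries the admin node's system ID.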



2016-08-25 23:59 GMT+08:00 David Teigland <teigland@redhat.com>:

> On Thu, Aug 25, 2016 at 09:50:24AM +0800, su liu wrote:
> > I have a question about lvm2 cluster. The scenario is that I try to
> > imitate an FC SAN by mapping an rbd volume to two compute nodes, then I
> > use the rbd volume to create a PV and a VG. I stopped the lvmetad daemon
> > on the compute nodes. Then I find that when I operate on the VG on one
> > compute node, the changes are also visible on the other compute node.
> >
> > But this document (http://www.tldp.org/HOWTO/LVM-HOWTO/sharinglvm1.html)
> > says that "LVM is not cluster aware".
> >
> > My question is: can I use this method to achieve the case where I
> > create or delete an LV on one node while the other compute nodes can
> > use the LVs?
> >
> > Can anybody explain this?
>
> It's not safe to use lvm on shared storage without some extra mechanism to
> protect the data or coordinate access among hosts.  There are multiple
> ways, depending on what sort of sharing/coordination you want to use:
>
> - use system ID to protect VGs from other hosts,
>   http://man7.org/linux/man-pages/man7/lvmsystemid.7.html
>
> - use lvmlockd to coordinate sharing with sanlock or dlm (this is new),
>   http://man7.org/linux/man-pages/man8/lvmlockd.8.html
>
> - use clvm to coordinate sharing with dlm (this is old)
>   http://man7.org/linux/man-pages/man8/clvmd.8.html
>
>


